Cataloguing Strategic Innovations and Publications    

Surfing a digital wave, or drowning

Information technology is everywhere. For companies’ IT departments, that is a mixed blessing

Their days of derision are long gone: now geeks are gods. Amazon, Apple, Facebook, Google, and Twitter are reinventing how mere mortals converse, read, play, shop, and live. To thousands of bright young people, nothing is cooler than coding the night away, striving to turn their startup into the next big thing.

A little of this glamour should by rights be lighting up companies’ information-technology departments, too. Corporate IT has been around for decades, growing in importance and expense. Its bosses, styled for 20-odd years as chief information officers, may perch only a rung or two from the top of the corporate ladder.

However, IT departments have, in many non-tech firms, remained hidden away, automating unexciting but essential functions—supply chains, payroll, and so forth. And by now, this digitizing of business processes “has played itself out in a lot of enterprises,” says Lee Congdon, the chief information officer of Red Hat, a provider of open-source software.

There is still plenty going on in the back office: the advent of cloud computing means that software can be continually updated and paid for by subscription, and that fewer companies will need their own data centers. But the truly dramatic change is happening elsewhere. Demands for digitization are coming from every corner of the company. The marketing department would like to run digital campaigns. Sales teams want seamless connections to customers as well as to each other. Everyone wants the latest mobile device and to try out the cleverest new app. And they all want it now.

Rich prizes beckon companies that grasp digital opportunities; ignominy awaits those that fail. Some are seizing their chance. Burberry, a posh British fashion chain, has dressed itself in IT from top to toe. Clever in-store screens show off its clothes. Employees confer on Burberry Chat, an internal social network. This may explain why Apple has poached Angela Ahrendts, Burberry’s chief executive, to run its shops.

In theory, this is a fine opportunity for the IT department to place itself right at the center of corporate strategy. In practice, the rest of the company is not always sure that the IT guys are up to the job—and they are often prepared to buy their IT from outsiders if need be. Worse, it seems that a lot of IT guys doubt their ability to keep up with the pace of the digital age. According to Dave Aron of Gartner, a research firm, in a recent survey of chief information officers around the world just over half agreed that both their businesses and their IT organizations were “in real danger” from a “digital tsunami”. “Some feel excited, some feel threatened,” says Mr. Aron, “but nobody feels like it’s boring and business as usual.”

One reason for worry is that IT bosses are conservative by habit, and with good reason. Above all they must keep essential systems running—and safe. Those systems are under continual attack. If they are breached, the head of IT carries the can. More broadly, IT departments like to know who is up to what. Many of them gave up one battle long ago, by letting staff choose their smartphones (a trend known as "bring your own device"). When the chief executive insists on an iPhone rather than a fogeyish BlackBerry, it is hard to refuse.

That has been no bad thing, given the enormous number of applications being churned out for Apple’s devices and those using Google’s Android operating system, many of which can do wonders for productivity. The trouble lies in keeping tabs on all the apps people like to use for work. With cloud-based file-sharing services or social media, it is easy to share information and switch from a PC in the office to a mobile device. But if people are careless, they may put confidential data at risk. They may run up bills as well. Many applications cost nothing for the first few users but charges kick in once they catch on.

Impatient marketers

The digital world, however, runs faster than the typical IT department’s default speed. Other bits of the business are not always willing to wait. Marketing, desperate to use digital wiles to woo customers and to learn what they are thinking, is especially impatient. Forrester, another research firm, estimates that marketing departments’ spending on IT is rising two to three times as fast as that of companies as a whole. Almost one in three marketers think the IT department hinders success.

The IT crowd worries that haste has hidden costs. The marketers, points out Vijay Gurbaxani of the Centre for Digital Transformation at the University of California, Irvine, will not build in redundancy and disaster recovery so that not all is lost if projects go awry. To the cautious folk in IT departments, this is second nature.

A lack of resources does not help. Corporate budgets everywhere are under strain, and IT is often still seen as a cost rather than as a source of new business models and revenues. A lot of IT heads, indeed, report to the chief financial officer—although opinions differ about how much formal lines of command matter. But even if money is not in short supply, bodies are. When the whole company is looking for new ways to put technology to work, the IT department cannot do it all.

In different ways, a lot of companies have decided that it shouldn’t. Many technology-intensive organizations have long had chief technology officers, who keep products at the cutting edge while leaving chief information officers in charge of the internal plumbing. Lately, a new post has appeared: the chief digital officer, whose task is to seek ways of embedding digital technology into products and business models. Gartner estimates that 5-6% of companies now have one. About half practice some form of “two-speed IT”.

If the chief information and digital officers work nicely together, it’s “fantastic,” says Didier Bonnet of Capgemini, a firm of consultants. He points to Starbucks, where such a pair have operated in tandem since last year. The chief digital officer, Adam Brotman, oversees all the coffee chain’s digital projects, from social media to mobile payments, which used to be spread around different groups. But there are also examples, Mr. Bonnet adds, of conflict, which “can slow you down rather than speed you up”. Whatever the digital team comes up with still needs to fit in with the business’s existing IT systems.

So IT chiefs somehow have to let a thousand digital ideas bloom while keeping a weather eye on the whole field. At Dell, a PC-maker shifting towards services and software, Adriana Karaboutis, the chief information officer, says that she works closely with the marketing department: people there have developed applications that, once screened by the IT team, have ended up in Dell’s internal and external app stores. With such cooperation, says Ms. Karaboutis, “people stop seeing IT as something to go around, but as something to partner with.”

Corporate IT bosses are right to fear being overwhelmed. But cleaving to their old tasks and letting others take on the new unsupervised ones is not an option. Forrester calls this a “titanic mistake”. The IT department is not about to die, even if many functions ascend to the cloud. However, those of its chiefs who cannot adapt may fade away.

The Future of Information Technology

For any business or individual to succeed in today's information-based world, they will need to understand the true nature of information. Business owners will have to be information-literate entrepreneurs, and their employees will have to be information-literate knowledge workers. Being information-literate means you can define what information is needed, know how and where to obtain it, understand its meaning once received, and act appropriately on it to help your business or organization achieve its goals.

As the world develops, new information technologies will keep popping up on the market, and to gain a competitive advantage, businesses will have to learn how to use them. Employees must become literate about the latest information technology so that they can cope with demanding challenges at work.

Information technology as a subject will not change; it is the tools of information technology that will change. More technologies will be developed to simplify the way we use IT at work, at home, and everywhere else in our lives.

Information can be described in four different ways: internal, external, objective, and subjective. Internal information describes specific operational aspects of the organization or business; external information describes the environment surrounding the organization; objective information describes something that is known; and subjective information describes something that is unknown.

Business owners will have to know what their customers want and provide services or products in time. Technically, I can term this "the time dimension".

The time dimension of information involves two major aspects: (1) providing information when your customer wants it, and (2) providing information that describes the period your customer wants.

Information technology tools like computers will still be useful in the future, but their functionality will change, with the main goal of improving the way we do business and transfer information. Institutions like banks, schools, shopping malls, and government agencies will all have to use new information technology tools to serve their users based on those users' needs and expectations.

Future information technology will change the face of business. We have already seen how current information technology has shaped the e-commerce world. With services like Google Wallet and Squareup, buyers can easily turn their mobile phones into payment gateways; the introduction of these new e-commerce payment gateways has shaped our e-commerce world, and more technologies will emerge as consumer demands increase with time. In brief, let's look at some examples of information technology tools that will shape our future and simplify our lives.

(1) Google Wallet: Google Wallet will enable you to use any smartphone to purchase products online. It supports all major credit and debit cards. The good thing about Google Wallet is that it enables you to store all your cards online so that they're with you wherever you go.

(2) Squareup: Square's technology will likewise enable you to make transactions using your mobile phone. As a user, you will only pay 2.75% per swipe, with no additional fees and next-day deposits. This is a great tool for business; it is flexible and affordable. Square works with Android and iOS smartphones.
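To make that per-swipe fee concrete, here is a minimal sketch of the arithmetic, assuming a flat 2.75% rate and no other charges; the function name is just an illustration.

```python
# A minimal sketch of the per-swipe fee arithmetic described above,
# assuming a flat 2.75% rate and no other charges.

def net_after_swipe_fee(amount: float, fee_rate: float = 0.0275) -> float:
    """Return what the merchant receives after the card-swipe fee."""
    return round(amount * (1 - fee_rate), 2)

# Example: a $40.00 sale costs $1.10 in fees, leaving $38.90.
print(net_after_swipe_fee(40.00))  # 38.9
```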

More technologies will emerge as the world develops because our demands will change with time. So it is up to us to be literate and learn how to take advantage of future information technology.

Use of Technology in the Classroom – Students Demand It

The increased use of technology in our daily lives has pushed students to demand the right to use technology in the classroom. Many schools and teachers have been reluctant to integrate technology into their curriculum. Out of school, students interact with technological tools like tablets, computers, and smartphones, which can be used to simplify the way they learn. There is great tension over which technology should be used in schools: students have their needs, but parents and teachers are also debating this particular subject.

For teachers, it will demand technical training so that they get to know how to use technology in the classroom. Some of these educational technologies, like computers or smart whiteboards, are easy to learn; the trouble is mastering how to use them for more than just one function, to meet both the teacher's needs and the students' needs.

As the world develops, new opportunities are coming up, and for our students to get a chance to compete in tomorrow's demanding job market, they will need to know how to use technology in various ways. This has pushed them to demand the use of technology in schools so that they are prepared for tomorrow's challenges.

Technology can simplify the way students learn both in the classroom and outside it. It makes education mobile and flexible, so students can have a chance to study from wherever they want. Technology makes education more personal because students have the ability to do research and read on their own at any time of the day. So let's see how students are using technology for their personal use, and how schools can take advantage of their technical know-how.

They use tablets to read news and watch videos: Tablets are flexible and easy to use. Since they can access the internet from anywhere, it is very easy for students to find the information they need in an instant. So, how can schools and teachers take advantage of this? I think teachers can jump on the same technology and offer their students coursework and other academic information via these tablets. For example, a teacher can create a classroom blog where they post notes and assignments for students; they can even suggest e-books to download to aid students' research on a specific subject. On the same classroom blog, a teacher can integrate video and image illustrations of specific subjects, which can help students learn easily.

They use smartphones to listen to audiobooks: All smartphones can access the internet, and they have enough storage to hold downloaded audiobooks. Many students are downloading these audiobooks for their personal use. You can simply listen to an audiobook as you exercise or do some other activity; for example, a student can download a novel and listen to the author while attending to other things. So, how can schools and teachers take advantage of this technology? I think teachers can create audio notes that students can download and listen to at any time. This can simplify the way students access academic information, and it also improves the way they learn. The private business community has invested money in mobile educational applications that can be downloaded to mobile phones; most of these mobile apps allow students to access digital libraries.

They use the internet to connect with friends: Students spend much of their time connecting and sharing their personal lives via the internet. Social networks like Facebook.com enable students to rediscover old friends and connect with new ones. So, how can we use social network technology in our education system? In my opinion, schools or teachers can use educational social networks like Piazza.com to interact with their students. On Piazza, teachers can manage coursework, monitor students' performance, and assign coursework, and students can use the same platform to ask questions and get answers instantly. Platforms like Facebook can connect teachers with their students in a social manner.

In conclusion, students are not waiting for teachers or their schools to integrate technology into their classrooms; most of these students are already taking online courses that can prepare them for tomorrow's technologically competitive world.

How To Use Technology – 100 Proven Ways To Use Technology

Technology keeps advancing, and it is becoming essential in our lives. Every day, people use technology to improve the way they accomplish specific tasks, and this is making them smarter. Technology is being used in many ways to simplify every aspect of our lives, and across many sectors. For example, we use technology in education to improve the way we learn, in business to gain a competitive advantage and to improve customer care and relationships, and in agriculture to improve outputs and save time.

We use technology in classrooms to improve the way our students learn and to make the teacher's job easier; in health care to reduce the mortality rate; in transportation to save time; in communication to speed the flow of information; for home entertainment; and at the workplace to spend less time working and to increase production. There are so many ways in which humans can use technology, and I have listed over 100 ways technology is being used in our lives.

BUSINESS COMMUNICATION

Communication is a very important factor in a business. Business owners, managers, and employees need good communication technology to enable them to transfer the information needed to make decisions. The flow of information within and outside the business will determine the growth of that business. It does not matter how big or small the business is, communication is necessary: business owners need to communicate with their customers on time, with their suppliers and business partners, and with their employees daily to know about the activities in the firm. This sounds like a lot of responsibility, but with technology, all of it can be done in a single day with less stress on the business owner. Businesses can use communication technology tools like electronic mail (email), mobile videoconferencing, fax, social media networks, mobile phones, and text messaging services to communicate with everyone in one day. Below I have listed some detailed points on the use of technology in business communication.

1. Use SharePoint or Intranet Networks: Both big and small businesses will find a great need for an internal intranet network: a website used only by employees and business owners at work. Intranet websites or portals cannot be accessed from outside the company because they are hosted on a local company server, and this helps the business exchange information with its employees without exposing it to the World Wide Web (www). Many companies have these networks, and employees have intranet email accounts used for communication at work only. Business managers can easily draft a message and send it to all employees via the intranet, and employees can use the same network to share information within the business. This process protects information, and it also facilitates the flow of information within the business. Big companies like Apple, Microsoft, Dell, and IBM still use intranet networks to communicate with their employees.

2. Use Instant Messaging Services: Many small business owners have found instant messaging to be a valuable and affordable tool that makes communication easy. For short exchanges, text messaging is far more effective than electronic mail, though with email you can send big data files, which you cannot do with instant text messaging services. Instant messaging can be used whenever a simple message needs to be passed to any party in the business, whether from the business manager to employees or from employees to the business manager. The good news is that most of these instant messaging services can be used at zero cost, for example, Yahoo Messenger, Google Chat, Skype, and other services.

3. Use Electronic Mail Communication: Electronic mail (email) is the default communication technology for every business and organization. On every business card, you will see an email address in the company's name. For example, if someone owns a web agency, you will see an email from that agency in a format like (sales@thatwebagency.com). Emails are used to communicate with employees, suppliers, customers, and business managers. Unlike text messaging, emails are for professional messages, and they can be used to transfer large files; the size of files that can be transferred via email is determined by your email hosting company, but in most cases attachment limits range from about 10MB to 25MB per file. To look professional, avoid free email hosting services and make sure you have a customized email address in your company's name; this looks professional, and it will also help in the marketing of your website.

4. Use Telephone Communication: Like email, telephones are standard business communication tools. In normal circumstances, businesses have both fixed telephone lines for offices and mobile phones. Fixed telephone lines are used during working hours, and some well-established businesses dedicate a phone assistant simply to answer any business calls that come in. People who call on fixed lines are commonly customers or business suppliers; these fixed telephone lines also have voicemail recorders, which can be replayed during working hours. Mobile phones tend to be personal gadgets, so communication via mobile phones is commonly between business owners and their business partners or employees.

5. Use Social Media: Both companies and consumers use social media to communicate. Well-established businesses use company-based social networks like Yammer.com; as of now, Yammer.com is used by more than 200,000 companies worldwide. For those who are not aware of this network, it is an enterprise social network, basically created for companies and employees to exchange business- and work-related information. You can only use this network if you have a custom company email address, so only people with a verified company email address can join your company network.

Then we also have consumer-based social networks like facebook.com. On these networks, businesses create customer service pages that they use to interact with their customers in real time. Integrating your business with consumer-based social networks will help improve your customer care service, and it will also help you reach more potential customers.

6. Use Fax Machines: Not every business uses a fax machine, but as your business expands and grows bigger, you may find yourself in need of one. Fax machines are used to send and receive files over a telephone line. We also now have e-fax services that receive files via email. Just like text messaging, a fax machine will deliver a printed message in an instant, although you will have to be at the office to receive it. Fax machines are among the first and oldest communication technologies used in business.

7. Use Multimedia Tools: This is recorded content that can be used in a meeting or by human resource managers to train new employees. Messages are transferred in the form of recorded videos or audio, and they can be accessed via computers, smartphones, or smart whiteboards at work. Few businesses use this type of communication, though it too has an impact on business communication. Videos or audio messages can be uploaded to intranet networks so that only staff members get access to the media.

8. Use Voicemail Machines: Voicemail is commonly known as a message bank. It is a centralized system that stores telephone messages, which can be retrieved during working hours. In most cases, these voicemail systems are installed on fixed business telephone lines used by customers and other business associates, so if a business does not operate 24 hours a day, it can leave its telephone line on voicemail. Then in the morning, the person in charge can retrieve the recorded calls and act immediately.

9. Use Teleconferencing / Videoconferencing Tools: Technology has changed the way business owners communicate with their employees and business partners. Communication technologies like mobile videoconferencing, which enable a business owner to hold a meeting using a smartphone or a mini tablet like the iPad while traveling, have made business communication easy and flexible. Teleconferencing software like Skype or Vidyo can be used on any smartphone with a webcam.

BUSINESS

10. Use Technology to Save Time: Both small and big organizations use technology to save time. Time is a very crucial factor in business. Many business managers use technology to hold meetings via videoconferencing tools, employees use technology at work to complete tasks on time, and technology is used to speed up the flow of information within an organization, which helps employees and business managers make decisions. Some businesses or organizations have automated certain departments, and others have equipped their employees with technological tools like computers to help them speed up their tasks while at work.

11. Use Technology To Transfer Information: The rate and speed at which information moves within and outside the organization or business will determine the growth of that business. Well-equipped organizations and businesses have used technology to create centralized data networks; via these networks, information can be stored either remotely or internally, and employees or managers of the organization can retrieve it at any time to help them make analytical business decisions. Decision-making in a business is based on facts and data, so with a centralized database of information, the process of accessing and analyzing data becomes simple.

12. Use Technology To Gain a Competitive Advantage: Business competition is healthy because it results in business growth. In the process of competing for specific markets, customers in those markets tend to get the best services as an incentive to win their loyalty. Technology has helped small businesses compete with big, well-structured businesses. Unlike in the past, when some productive technologies could only be accessed by wealthy companies, today even small businesses can use simple technology to gain a competitive advantage. With technology, it is not about who has the best technological tools, but how you use them to serve your clients. We have seen many wealthy companies with advanced technologies losing markets to small businesses, simply because the smaller firms use the technology they have to serve their customers' demands.

13. Use Green Technology to Save the Environment: Our future depends on the survival of our environment. Many green technologies have been developed for future use and development. For example, we have technologies like green electric cars, which will no longer depend on fuel but can be charged with solar power or wind energy while in motion. The more we exploit Mother Nature for natural resources, the more our environment will be at risk. Many technology companies have started changing their manufacturing processes to reduce air pollution, and they now produce environmentally friendly products like green computers, which use less power.

14. Use Mobile Videoconferencing: Communication is one of the most important factors in development. For our small businesses to survive in the future, we need to use advanced communication tools like mobile videoconferencing. In this case, you do not have to worry about being late for a meeting, which can have a positive impact on your business growth. Smartphones and portable gadgets like the iPad Mini can make mobile conferencing possible in the future. Just imagine yourself being in a meeting while on a plane or a train. This is one of the smartest communication technologies that should be used for future business development; technology companies like Vidyo.com have made it possible.

15. Use Mobile Learning: iBooks have made learning mobile. Text on iPads, Kindle Fires, and other tablets is clear, and these portable gadgets have large storage capacities of about 64GB or more, which is enough space to keep all your books, videos, and audio notes. Unlike computers, which are heavy to carry, smartphones and mini tablets like the iPad Mini will make it easy for students to learn and access academic information in the future. For now, these technologies are expensive, but tomorrow we shall have more students learning via their mobile gadgets. Not only can you read textbooks on these smart tablets, but you can also place orders for electronic books over the internet and simply download a book after placing the order.

TEACHING AND LEARNING PROCESS

16. Use Automated Programs: Many teachers struggle with assessing students' work and grades; you might find that one teacher has to grade over 60 students, analyze their work, comment, and suggest areas of improvement. This sounds like a lot of work for one person, and they have to be accurate, because any mistake made in a student's grade can affect that student's future. We live in an economy where little money is spent on reducing class sizes, so you will find that many teachers are stressed out by big classrooms; the workload is heavy, yet the pay is low. So as a teacher, you can use technology to automate some processes, as sketched below. For example, you can use tools like Piazza.com to manage your students' coursework, track their performance, and assign them work. On the same network, you can create a virtual classroom, where students can post and answer questions.
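Piazza's own interface handles much of this, but even without it, a simple script can take over a repetitive grading chore. Below is a minimal sketch that summarizes scores exported to a CSV file; the file name and column layout are hypothetical, for illustration only.

```python
# A minimal sketch of automating grade summaries. It assumes scores were
# exported to "scores.csv" with a "student" column followed by one column
# per assignment; both the file name and layout are hypothetical.
import csv
from statistics import mean

def summarize_grades(path: str = "scores.csv") -> None:
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            scores = [float(v) for k, v in row.items() if k != "student"]
            avg = mean(scores)
            flag = "  <- needs follow-up" if avg < 50 else ""
            print(f"{row['student']}: average {avg:.1f}{flag}")

summarize_grades()
```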

17. Use Grammar Tutorials and Puzzles to Teach English: Speaking English and writing in English are completely different skills. Some students are very good when it comes to spoken English but poor when it comes to composing a good sentence. These students find it difficult to understand some grammatical principles even when taught by a good teacher, so the best way to help them learn is by giving them grammar tutorials and videos. Technology will help relieve teachers of the burden of attending to each student's special grammar needs; in general, the teacher can use grammar puzzles or games in the classroom, and students will participate in a fun way while still learning. Teachers can also use computer word processing applications to correct students' grammar; every computer has a word processing application like Microsoft Word.

18. Use Tracking Software to Monitor Students' Writing Skills: Teachers can use writing software like Essay Punch to help students learn how to write an informative essay. Essay Punch guides students in writing a short essay that describes, persuades, or informs. The software comes with a menu of topics; students can choose any topic from this menu and start working on their writing skills. When the essay is complete, the student is guided through re-writing, editing, outlining, organizing, and publishing the essay. Teachers can then use a record management system to monitor their students' progress.

19. Use Tablets for Visual Illustrations in the Science Classroom: Teaching a science class requires visual illustrations; it is very difficult to explain every science topic in text alone. So teachers can find great use for tablets in the science classroom. Since tablets are expensive, teachers can group students to share them: students can form groups of 3-4 to share a tablet, and the teacher can then control the content displayed on these tablets using an internal network within the science classroom, sending illustrations and visual data to the tablets.

20. Use the Internet to publish students' work: Teachers can use the Internet to publish the works of bright students; this will inspire other students in the classroom. This work can be published on a classroom blog, slideshare.net, or Google Docs, or it can also be published as an e-book for other students to download. This process will encourage good students to write better essays so that their works can be read by others.

TECHNOLOGY IN THE WORKPLACE

21. Use Technology to Improve Efficiency at Work: Technology can change the way we do most of our work, and it can also reduce the stress caused by the many tasks we have to do in one day. Employees can perform more than one task using technology: for example, a secretary can compose an email to be sent to all employees while at the same time making a call to a supplier or a customer. This employee is using three types of technology: the internet and a computer to compose an electronic mail, and a telephone to contact the supplier or the customer.

22. Use Communication Technology to Improve Information Flow at the Workplace: Communication technology improves the way we interact with each other at work. Tools like the internet, text messaging services, telephones, enterprise social networks like Yammer.com, e-fax machines, and many more facilitate the flow of information at the workplace. Decision-makers at the workplace depend on the speed of data flow to make quick decisions that might benefit the company's growth.

CUSTOMER CARE SERVICE:

23. Respond to Customer Needs on Time: Every business survives on its customers; the more clients you have, the more successful your business will become. So it is very important to serve your clients on time and to tailor products and services to their needs. Use internet technology to gather responses on what your customers need: create a company website to collect data from your customers, and make sure that your customers can contact you directly via your website. Respond to your customers' requests on time; some companies have full-time online assistants who handle orders, complaints, and suggestions from customers. Build your business with your customers and you will be the winner every time.

24. Improve Payment Systems: The mode of payment is also part of your customer service. Use technology to improve the way people pay for your services or products. Today we have various methods of payment, such as PayPal, smart card payments, and mobile phone payments (Google Wallet and Square). If a client likes your service or product, they will want to place an order, so this process has to be very simple. The more payment options you give your clients, the more money you will make, and your clients will also be pleased by the good customer service.

WAYS TO USE TECHNOLOGY IN MARKETING

For any business to succeed, it has to market its services or products. However, some advertising media can be too expensive for small businesses. With technology, though, even small businesses can reach targeted markets and compete. In recent years, I have seen small businesses like Instagram.com gaining a competitive advantage over big, well-established tech companies like Facebook and Twitter. This success shows that if you market your product or service well, consumers will come, and they will not mind whether you're small or big, as long as you provide what you promise. So let's see how you can use technology to market your small business and spend less.

25. Use Electronic Mail Marketing: Marketers have often debated the effectiveness of email marketing. Some think it will die very soon, but I think nothing will ever replace private life, and email communication is private. The best way to engage with your clients and keep your messages out of the spam filter is to create a website and then persuade your readers or customers to subscribe for updates or shopping deals. Once users have subscribed to your emails, they will receive all notifications in their inbox; you can use this opportunity to increase sales by offering special discounts and shopping coupons to your email subscribers.

26. Use Social Media Marketing: The field of marketing has been changed by the wave of social media. Top social networks like Facebook, Twitter, LinkedIn, Google Plus, and Pinterest drive massive traffic to both small and big businesses. The trick is to define your target market and to know which content attracts it. Recent studies show that Pinterest and Facebook drive lots of traffic to e-commerce sites. When you look at a site like Pinterest, most of the users are women who share things like clothes, shoes, bags, cakes, and wedding ideas, so to promote your business on Pinterest or Facebook, you will need to know which content to use. On Facebook, you can run a targeted advertisement for as little as $100 and reach the people you need.

EDUCATION:

27. Online Education: E-learning has changed the face of education worldwide. Unlike in the past, when students and educators were bound by physical boundaries, internet technology now plays a big role in making education effective. Many colleges and universities provide online professional courses like ACCA, and this has helped many students from developing countries gain access to internationally recognized courses, which also increases their chances of competing for jobs internationally. Adults who want to go back to school have also used online education facilities to study from their homes after work; some lessons can be downloaded as podcasts or videos, so students can learn at any time, anywhere.

28. Use of Computers in Education: To a certain extent, computers help students learn better, and they also simplify the teacher's job. Computers are used to write classroom notes, create classroom blogs, play educational video games and puzzles, access the internet, store academic information, and much more. Many schools have set up computer labs where students are taught basic computer skills, and some private schools have equipped their students with computers in the classroom. Teachers use computerized smart whiteboards, which can help them explain subjects using visual illustrations; these smart whiteboards can also save teachers' work for later use.

29. Use the Internet for Educational Research: Both students and teachers use the internet for research purposes. In most cases, textbooks have limited information on specific subjects, so students and teachers use the internet to do extensive research. Search engines like Google.com and Bing.com are used to find great educational content online. Community-edited portals like Wikipedia.org also hold vast amounts of educational content that both students and teachers can use for study or reference. Video streaming sites like Youtube.com are used in the classroom to provide real-time visual illustrations and examples of specific subjects.

HUMAN RESOURCE MANAGEMENT:

30. Use Online Recruitment Services: Many companies are using the internet to recruit professionals. Well-known social networks like Linkedin.com and Facebook.com have helped many companies discover talented employees. Job search engines like indeed.com also make recruitment easier, because talented candidates use these search engines to find jobs listed on various job portals across the internet. Some job portals also require applicants to post videos about themselves and submit their academic papers and recommendation letters from past employers, which helps the human resource manager quickly get a sense of each applicant. This process saves time, and human resource managers get a chance to meet talented employees.

31. Use Electronic Surveillance to Supervise Employees: When a business gets big, you end up with many employees, and it becomes difficult to keep track of them. So some companies have decided to install electronic surveillance devices that can monitor the performance of all employees. It is quite funny, but humans perform better under supervision. It is very difficult to find self-motivated employees; most come to the workplace to pass the time and wait for a paycheck at the end of the month, so an electronic surveillance system can help ensure that all employees complete their tasks.

HOW TO USE TECHNOLOGY IN HEALTH CARE

32. Use Technology for Research Purposes: Many healthcare professionals use the internet to search for information. Since the internet can be accessed from anywhere, doctors and nurses can do their research work at any time. Even though some of the information published online is not accurate, the available approved information can help nurses, doctors, and other healthcare professionals dig deeper into certain cases. Popular healthcare information websites like www.webmd.com and www.mayoclinic.com have played a big role by publishing relevant healthcare information online. Content published on these two portals is written by experienced doctors, which makes it reliable.

33. Use Technology to Improve Treatment and Reduce Pain: The use of technology in healthcare facilities has changed the way patients are treated; it speeds up the process of treating a patient, and it also helps reduce any pain that might cause discomfort. Machines are being used in surgical rooms, and this has reduced human risk while performing surgery.

34. Use Technology to Improve Patient Care: Technology is being used to manage patient information effectively; nurses and doctors can easily record patient data using portable devices like tablets. This data can be stored in an internal database, making it easier to mine data about each patient. Doctors always depend on a patient's history, so information stored in an internal database within the hospital makes it simple for doctors to make quick decisions.

DECISION MAKING PROCESS:

35. Use Technology to Mine Data: Once information is captured and processed, many people in an organization will need to analyze it to perform various decision-making tasks. This data can be stored in a database, making it simple for users to retrieve it onto their computers to make quick decisions. With the help of a data manipulation subsystem, users can add, change, and delete information in a database and mine it for valuable information. Data mining can help you make business decisions by giving you the ability to slice and dice your way through massive amounts of information.
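To make "slice and dice" concrete, here is a minimal sketch using pandas; the table and its column names (region, product, revenue) are hypothetical, for illustration only.

```python
# A minimal sketch of slicing and dicing sales records with pandas.
# The columns (region, product, revenue) are hypothetical examples.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "product": ["A", "B", "A", "B"],
    "revenue": [1200.0, 800.0, 950.0, 400.0],
})

# Slice: restrict the data to one region of interest.
north = sales[sales["region"] == "North"]

# Dice: regroup the full data along a different dimension.
by_product = sales.groupby("product")["revenue"].sum()

print(north)
print(by_product)
```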

36. Use Technology to Support Group Decision-Making: Information technology brings speed, vast amounts of information, and sophisticated processing capabilities to help groups use this information in the process of making decisions. Information technology will provide your group with power, but as a group, you must know what kinds of questions to ask of the information system and how to process the information to get those questions answered. To make all processes simple, you can use a group decision support system (GDSS), which facilitates the formulation of and solution to problems by a team. A GDSS facilitates team decision-making by integrating things like groupware, DSS capabilities, and telecommunications.

HOW TO USE TECHNOLOGY IN YOUR CLASSROOM

37. Visual Illustrations: Teachers can use technology in the classroom by integrating visual illustrations while teaching. Students often get bored with the usual text-based learning process; it is much easier to lose interest in text than in images or videos. Teachers can use advanced smart whiteboards and projectors to display live visual 3D images and videos. These smart boards can also access the internet, so teachers can use websites like YouTube, Google Images, and Pinterest to find visual examples of any subject. Students enjoy learning in this form, and they can easily remember each point explained with visual images. Teachers can also have students use these smart whiteboards to explain points to their fellow students; some students learn better when taught by a peer.

38. Create a Classroom Blog: This might sound advanced to some teachers or students, but it is very simple to own a classroom blog. You can use free blog hosting services like Blogger.com and wordpress.com. With these free blog hosting services, you will not need to worry about domain renewals or website hosting; all services are free because your classroom blog is hosted under a free subdomain, for example (myclassroom.wordpress.com). Teachers can post coursework and assignments on these blogs, or create debate topics which students can argue using commenting systems like Disqus.com.

39. Video Games to Solve Puzzles: Students can learn through educational video games and puzzles, and subjects like English and math can be taught this way. Teachers can create game challenges among students and award points to winning students or groups. If the classroom has computers and internet access, the teacher can have students form small groups of 3-4 per group and assign each group a challenge. This can make learning fun, and students will learn better.

40. Use Computers to Improve Writing Skills: Teachers can have their students write sentences or classroom articles which can be shared with the class. Computers have advanced word processing applications that can be used to write articles; these applications have built-in dictionaries that auto-correct spelling mistakes and suggest correct English terms. In this process of writing an article on a computer, students learn how to spell, how to type, and how to compose an article. Maybe this is the reason we have so many bloggers nowadays.

41. Encourage Email Exchange: Teachers can encourage their students to exchange email addresses with friends in the classroom and at other schools. This helps students build relationships with students who take the same classes, and they can exchange academic information like past exam papers or homework assignments, which helps them learn and socialize with relevant friends. Teachers can also communicate with their students using email, which in turn creates a strong bond between teachers and students.

42. Create Podcast Lessons: Some teachers might find this difficult because it takes time to record podcast lessons, but once a lesson is recorded, the teacher will have more time for other educational activities. A recorded podcast lesson can be uploaded to a classroom blog, where students can download it and store it on their smartphones or tablets. Podcast lessons are convenient because a student can listen to one while doing housework.

43. Use Text Message Reminders: Students often get overwhelmed by the amount of classroom work they have to complete and the endless tests and exams they have to take. Sometimes they even forget to attend lessons, or they submit coursework too late, which affects their end-of-term grades. Teachers can use mobile phone applications like remind101.com to create text messages that remind their students to submit coursework or to prepare for a test.
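The source does not describe remind101's interface, so as a generic illustration, here is a minimal sketch of the reminder logic, with a hypothetical send_sms() standing in for whatever SMS gateway a school actually uses.

```python
# A minimal sketch of an automated coursework reminder. send_sms() is a
# hypothetical stand-in for a real SMS gateway; the numbers are made up.
from datetime import date, timedelta

def send_sms(number: str, text: str) -> None:
    print(f"to {number}: {text}")  # replace with a real gateway call

def remind(students: dict, assignment: str, due: date) -> None:
    # Message everyone the day before the deadline.
    if date.today() == due - timedelta(days=1):
        for name, number in students.items():
            send_sms(number, f"Hi {name}, '{assignment}' is due tomorrow.")

remind({"Ada": "+15550100"}, "Essay 2", due=date.today() + timedelta(days=1))
```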

IN THE BANK:

44. Use of Plastic Money Cards: Technology has played a big role in changing the face of the banking industry. Unlike in the past, when carrying big sums of money exposed you to the risks of moving around with lots of cash, today plastic money cards are used to make transactions of any kind. Many banks issue Visa cards or credit cards which can be used worldwide to purchase products or make payments of any sort. The same smart money cards can be used online, because e-commerce websites accept payments with Visa or debit cards. When the card owner makes a purchase, the money is transferred from their bank account to the merchant's account, so the whole process of exchanging money is electronic.

45. Use of Mobile Banking Services: This service has helped many developing countries in Asia and Africa. Many banks and information technology companies have enabled people in third-world countries to use mobile phones as banking tools. In most rural areas there are no banks, simply because of poor infrastructure. So banks and telecom companies have invested money in mobile phone banking services, which enable people to transact business using mobile phones. Users of these mobile money services can save money or withdraw cash using a mobile phone via a mobile money agent in their area. Top mobile phone service providers like Airtel and MTN have played a big role in facilitating mobile banking in Africa and Asia.

46. Use of Electronic Banking: Many banks have simplified the way their customers access personal account information and transfer money from one account to another. Banks use internet technology to enable their customers to request bank statements or transfer money. This has made the banking industry flexible, and it also saves time and money. Since most of this work is done by web technologies and other banking systems, banks cut the cost of human labor, which increases their profits.

BUSINESS ORGANIZATION:

Technology can be used in various ways to facilitate business organization. For example, it can be used to organize information, to aid data transfer and information flow within an organization, and to process, track, and organize business records. Without technology, most businesses would be a mess; just imagine going through the trouble of writing data on paper and keeping large piles of files. So, in my view, technology helps businesses operate effectively. Below I have listed summarized points on the use of technology in business organizations.

47. Use Technology to Speed up the Transfer of Information and Data: The rate at which information flows within a business determines how fast things get done. If the flow of information from one level to another is slow, the productivity of the business will be slow and inefficient, customers will not be served on time, and this can harm the business or even give its competitors a chance to gain strength in the market. But if information moves easily and fast, business managers and employees will find it easy to make decisions, customers will be served on time, and the business will gain a competitive advantage. So how does technology facilitate information flow within a business? A business can use technological tools like intranet networks to aid the internal flow of information; it can also use external channels, such as a public website and email, to facilitate the flow of information into and out of the business, and customers can use email or website contact forms to make inquiries or orders. Businesses can also use centralized data systems to improve the storage of data and grant remote or in-house access to it. Banks use centralized data systems to spread information to all customers via their local branches, ATMs, the internet, and mobile phones.

48. Use Technology to Simplify Communication in an Organization: For any organization to be organized and efficient, it has to use communication technology tools like email, e-fax machines, videoconferencing tools, telephones, text messaging services, the internet, social media, and much more. Communication in a business is a process, and it helps in the transfer of information from one level to another. For a business to stay organized and serve its customers well, it has to use effective communication tools: the customer service department must be in a position to solve customers' problems on time, and orders are supposed to be fulfilled on time. Business managers can use technology to easily allocate work to specific employees on time.

49. Use Technology to Support Decision Making: Since technology makes the transfer of information fast and also simplifies communication, employees and business managers will always find it easier to make quick decisions. To make decisions, employees need approved facts about the subject or customer in question. For example, if an accountant wants to know how much money customer X owes the company, they can retrieve data on that specific customer from a centralized database within the organization; this data will show the spending and purchasing patterns of customer X, and if it was stored using accounting software, the system will clearly show the appropriate figures. This saves the accountant and customer X time, and it also helps the accountant make a quick decision based on facts.
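As a minimal sketch of that "customer X" lookup, here is what the query could look like against a small SQLite database; the table and column names are hypothetical, for illustration only.

```python
# A minimal sketch of the "how much does customer X owe?" lookup,
# using an in-memory SQLite database with hypothetical tables/columns.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoices (customer TEXT, billed REAL, paid REAL)")
con.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                [("X", 500.0, 200.0), ("X", 300.0, 300.0), ("Y", 100.0, 0.0)])

# Amount owed = everything billed minus everything paid, per customer.
owed, = con.execute(
    "SELECT SUM(billed) - SUM(paid) FROM invoices WHERE customer = ?",
    ("X",)).fetchone()
print(f"Customer X owes {owed:.2f}")  # 300.00
```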

50. Use Technology to Secure and Store Business Data: Just imagine a business where you have to record everything on paper and then file each sheet; that would be a waste of time and resources, and the data would not be safe, because anyone could access those hard-copy files. With technology, every process is simplified. I remember when I was still working for a computer service company: every engineer had an account on the company's internal database, where they could access electronic job cards to fill in whenever they finished a job. This data could be accessed by the workshop manager, who could follow up with customers to ensure that the job was done right. Once a job card was submitted for review, engineers had no permission to re-edit the data; that permission was left to the workshop manager, so the process was organized and secure. Many businesses use internal databases and networks to simplify data transfer and ensure that their data is well secured and stored.

AGRICULTURE:

53. Use Technology to Speed up the Planting and Harvesting Process: Preparing farmland using human labor can take a lot of time, so many large-scale farmers have resorted to technological tools like tractors to cultivate and prepare farmland. Once the farmland is prepared, farmers can use planting technology, such as mechanical seed drills or aerial seed bombing, to sow the seeds. Then, when the crops have grown and are ready to be harvested, the farmer can use another technological tool, such as a combine harvester. As you can see, the whole process of preparing the field, planting, and harvesting is done by machines. This can be expensive for small-scale farmers, but large-scale farmers will save time and money across the whole process.

54. Irrigate Crops: Farmers in dry areas that receive little rain use technology to irrigate their crops. Water is a very essential factor in plant growth; even if the soils are fertile, or the plants are genetically engineered to survive desert conditions, water will still be needed. Farmers can use automated water sprinklers, which can be programmed to irrigate the farm during specific periods of the day. Water pipes can be laid across the farmland, with sprinklers scattered all over the farm drawing water from the pipes. Farmers can even add nutrients to this water so that as plants are irrigated, they also receive important nutrients that enhance their growth.
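As a minimal sketch of how such programmed watering periods could work, here is a simple time-based controller; start_sprinklers() is a hypothetical stand-in for whatever actually opens the valves, and the watering hours are arbitrary examples.

```python
# A minimal sketch of a time-based irrigation controller. The watering
# hours are arbitrary, and start_sprinklers() is a hypothetical stand-in
# for the hardware interface that opens the valves.
import time
from datetime import datetime

WATERING_HOURS = {6, 18}  # water at 06:00 and 18:00

def start_sprinklers(minutes: int) -> None:
    print(f"sprinklers on for {minutes} minutes")  # stand-in

while True:
    now = datetime.now()
    if now.hour in WATERING_HOURS and now.minute == 0:
        start_sprinklers(20)
        time.sleep(60)  # move past this minute so we don't re-trigger
    time.sleep(30)
```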

55. Create Disease- and Pest-Resistant Crops: Genetic engineering has enabled scientists to create crops that are resistant to diseases and pests. They have also succeeded in engineering crops that can survive desert conditions, which has helped many farmers in drought-prone countries like Egypt grow various cash and food crops, boosting their incomes. However, some farmers have been reluctant to use genetically engineered crops, because they fear these crops can damage their farm soils and because many engineered crops do not produce seeds that can be harvested and planted again. To some extent this is true: in my country, I have seen engineered oranges that look nice and grow big, but they have no seeds, so if you want to replant, you have to go and buy a full-grown plant, which is sometimes expensive, and the whole process does not sound logical when it comes to farming.

AT YOUR HOME:

56. Home Entertainment: Technology has completely changed the way we entertain ourselves at home. Many advanced home entertainment technologies have been invented, and they have improved our lifestyles. Home entertainment gadgets like 3D HD televisions show clear images and have improved the way we enjoy movies. Video games keep our kids entertained at home, and some are educational, so our kids solve puzzles while having fun. Advanced home theater systems play clear music straight from the iTunes music store, fast broadband internet lets us stream YouTube videos on our tablets, and electric guitars and pianos let us play our own music.

57. Use it to improve Home Security: Technology is being used to improve our home security. Home monitoring technology lets you keep track of what is going on in your home while you are at work or on holiday: an app installed on your smartphone or tablet connects over the internet to a webcam at home. You can also use hardware alarm systems that trigger when something goes wrong, for example, an alarm wired to report any forced entry into your home or set to report a fire outbreak in the house.

58. Save Energy: With the rising cost of living, it has become essential to use technology to save energy. Since we use power for almost everything at home, it is advisable to opt for energy-efficient appliances; they can cut your bills substantially. For example, you can replace your electric cooker with a gas cooker (gas is often cheaper than electricity), use energy-saving bulbs, install a solar water heater, or put all your home lights on solar power.

IN ACCOUNTING:

59. Data Security and Storage: Accounting as a process deals with analyzing the financial data of a business or organization. Technology helps keep this data safe while ensuring it can be retrieved at any time by employees in the finance department. Financial information is very sensitive, so it cannot be open to everyone in the organization; only qualified accounting staff should have access, which makes securing this data on an encrypted server very important. Many accountants undergo special training to learn how to use accounting technologies like computers and accounting software.

When it comes to data storage, technology companies like Dell, Microsoft, and Apple have devised high-end data servers that store sensitive data for their clients; these servers are heavily protected against experienced hackers who might otherwise take advantage of your financial data. The sketch below shows the basic idea of encrypting a record before it is stored.
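
As a minimal sketch of that idea, assuming the Python "cryptography" package (pip install cryptography), a record can be encrypted before it ever reaches the storage server; the invoice text is invented.

    # Sketch: encrypt an accounting record before storage using Fernet.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # in practice, keep this key separate from the data
    fernet = Fernet(key)

    record = b"Invoice 4411: ACME Ltd, $12,500, net 30"
    token = fernet.encrypt(record)    # ciphertext is safe to write to a shared server

    print(fernet.decrypt(token))      # only holders of the key can read the record back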

60. Use Technology to be Accurate: Unlike humans, accounting software and computers are highly accurate as long as they are used in the right way. Accounting deals with detailed data and record keeping; the data processed during accounting is used to project business growth and to support decision-making in an organization. Technology has so far proved very effective for data processing and storage, and simple tools like calculators and computers are used in almost every business. Technology also reduces the number of errors made while analyzing data: humans cannot work with figures for long periods without making mistakes.

IN AFRICAN SCHOOLS

61. Use Computers in African classrooms: Africa was left behind for years, but technology is now spreading all over the world, and African schools have started integrating it into their curricula, which has excited African students. Some schools have created computer labs where students are taught basic skills like typing, using the internet for research, and playing educational video games that pose academic puzzles. This has improved the way students learn in African schools. Big organizations such as UNICEF have supported initiatives like the One Laptop per Child program, which works to give African children access to computers and teach them how to use them.

62. Replace blackboards with smart boards: Not all schools have managed this, because of poor infrastructure in Africa; most schools have no electricity, yet smart boards need power. However, some urban schools have the opportunity to use smart boards in the classroom. Students learn easily from them because teachers can pull visual illustrations directly from the internet, and some African teachers use YouTube videos in real time to illustrate specific subjects, which helps students learn easily and exposes them to more information. Read more about the use of ICT in schools without electricity at worldbank.org.

63. Use the Internet in African schools: The internet is a crucial information technology tool for finding information. Most urban African schools have free wireless internet, and some have internet in their computer labs. It helps students do education-related research; some students have also found great use in social networks like Facebook.com to connect with former schoolmates, and in sites like ePals.com to exchange educational information with students from other schools. Internet technology has also helped students in Africa study online: for example, accounting qualifications like ACCA can be pursued online, giving African students access to advanced courses.

IN A BAKERY:

64. Use Temperature Sensors to Monitor Room Temperature in a Bakery: Bakeries can use technology to monitor the temperature of their baking rooms. The quality of products like bread or cakes is determined by time and temperature: if the temperature is too high or too low, the product suffers. Most advanced bakeries therefore employ automatic temperature sensors that report any drop or rise, sending the information to the bakery operator, who can act immediately on any temperature change in the baking room. Every baked product has a specific temperature at which it must be baked; the heat used to bake bread, for example, is not the same as that used for a cake. Measuring temperature continuously is very difficult for humans, so the task is best managed by technology, as in the simple sketch below.
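
The alerting rule such a sensor applies can be sketched in a few lines. Everything here is illustrative: read_sensor() and notify_operator() stand in for a real probe driver and the bakery's messaging gateway, and the temperature band is invented.

    # Sketch of a baking-room temperature monitor with threshold alerts.
    import time, random

    LOW, HIGH = 24.0, 30.0                        # acceptable room range in Celsius (illustrative)

    def read_sensor():
        return 27.0 + random.uniform(-5, 5)       # simulated probe reading

    def notify_operator(message):
        print("ALERT:", message)                  # would send an SMS or page in practice

    while True:
        temp = read_sensor()
        if temp < LOW or temp > HIGH:
            notify_operator(f"Room temperature out of range: {temp:.1f} C")
        time.sleep(60)                            # poll once a minute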

65. Use Technology to Produce Quality Bakery Products: Technology can be used in a bakery to produce high-quality products. As noted above, electronic sensors can monitor the temperature of baking rooms and determine how long products should remain in the baking machine; because the process is automated and consistent, bakeries that use this technology tend to have the best products. There is also equipment such as the Hefele turbo flour-sifting machine, which cleans flour: the flour is fed into a rotating sifting wall without being exposed to any grinding effect, so it is cleaned without losing its texture or heating up. This automation saves time and increases production.

IN A CLOTHING BUSINESS:

66. Inventory: A clothing company needs to keep track of its inventory because running out of a hot-selling fashion trend can mean a big loss. Sales managers therefore use inventory-tracking software to learn which trends sell best and how many people are demanding them. With this information, the store manager stocks only the top-selling trends in demand, which increases sales; the software saves the business owner both time and money. In simple business economics, supply must match demand. A sketch of the basic reorder logic follows.
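
A minimal sketch of that reorder logic, with invented stock figures and sales:

    # Sketch: count sales per item, surface the top trend, flag low stock.
    from collections import Counter

    stock = {"denim jacket": 12, "floral dress": 25, "wool scarf": 8}
    REORDER_POINT = 10

    sales = Counter()
    for item in ["denim jacket", "floral dress", "denim jacket", "denim jacket"]:
        sales[item] += 1
        stock[item] -= 1

    print("Top trend this period:", sales.most_common(1)[0][0])

    for item, qty in stock.items():
        if qty <= REORDER_POINT:
            print(f"Reorder {item}: only {qty} left")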

67. Point of Sale: Since clothing stores transact sales all day, they need technology to help with cash registration and order tracking. Many advanced stores use cash-register software that totals sales per day, calculates tax, processes coupon codes, and scans item barcodes; it also updates inventory records after each purchase. Stores also use technology to simplify how customers pay: most clothing stores accept debit and other smart-card payments, and many operate online stores that accept PayPal. This improves customer service and increases sales. The arithmetic inside such a register is sketched below.
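
A sketch of the checkout arithmetic; the tax rate and coupon code are invented for illustration.

    # Sketch: item totals, a percentage coupon, and sales tax at checkout.
    from decimal import Decimal

    TAX_RATE = Decimal("0.08")                    # 8% sales tax (illustrative)
    COUPONS = {"SAVE10": Decimal("0.10")}         # 10% off

    def checkout(prices, coupon=None):
        subtotal = sum(prices, Decimal("0"))
        if coupon in COUPONS:
            subtotal -= subtotal * COUPONS[coupon]
        total = subtotal + subtotal * TAX_RATE
        return total.quantize(Decimal("0.01"))

    print(checkout([Decimal("29.99"), Decimal("14.50")], coupon="SAVE10"))  # 43.24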

68. Promotion: The clothing business has a very low barrier to entry, which means competition is very high. For any clothing store to break through, it has to use technology to gain a competitive advantage. For example, a store can create an online shop and promote it on top social networks like Facebook, giving away free shopping coupons to its Facebook fans. Fashion-oriented networks like Pinterest can likewise help a small store promote itself and offer shopping gifts to Pinterest users. This need not cost a small store more than $1,000, yet it can win a large audience of fashion lovers and increase sales. Look at small fashion e-commerce sites like nastygal.com, which has carved out a competitive niche against Amazon.com in the fashion world.

IN ARCHITECTURE

69. Use Web Technologies: Many architects use web technologies like email and the internet to perform various tasks. Web technologies are used for transferring, storing, filtering, and securing data, which lets architects organize and easily access architectural information. Architects can use screen-sharing software like Skype to discuss drawings with other parties or with customers, and video-conferencing tools to discuss a project with clients, which simplifies their work. A website can also be created to showcase past projects and attract new customers.

70. Use Computers to Make Drawings: Most architects make great use of computers when sketching their drawings. Computer-aided design (CAD) software is used during sketching; the architect can then use a projector and a smart board to share a sketched plan with colleagues and gather suggestions for improvement. Computers also serve as storage for all architectural work; it is always advisable to keep a soft copy of the final drawing and use a printed copy for on-site development.

71. Use Large Format Printers to Print Out Drawings: After sketching a drawing on a computer, the architect needs a large-format printer to print a hard copy, in most cases on 24 x 36-inch sheets. Many copies can be made, some sent to the customer and others shared with fellow architects for suggestions.

72. Use Digital Cameras: Before an architect starts sketching a new plan for any construction, they need to photograph the building or ground to be worked on; this helps during the planning of a new design. Photographs help architects remember important site characteristics to refer back to when creating the design.

73. Use Laser Measuring Tools: Every architect needs good measuring tools to ensure the accuracy of their work. Laser measuring tools are more accurate than an ordinary ruler, though some architects may still prefer a ruler when sketching on paper. These measurements matter because construction engineers rely on them to build to standard.

IN ART AND DESIGN:

74. Use technological tools to create sculptures: Artists use technological tools to create sculptures and other art pieces. For example, a Flexcut mallet tool can be used to shape large amounts of wood; these mallets come in different sizes, and each has its role during wood carving. Each mallet tool has a high-carbon steel blade attached to an ash handle. Other tools include shaping and patterning thumbs, loop and ribbon tools, and many more. You can buy sculpting tools at dickblick.com.

75. Use the Internet to Market Art Work: Before the internet, it was very difficult for good artists to market their creative works; many artists died before selling their masterpieces, and museums took on the role of hunting down and rediscovering great artworks. Today, internet technology lets artists showcase their work online. Community networks like www.500px.com allow artists to display their work as photography and to sell it through the network. However, some artists are reluctant to expose their work online for fear that someone might copy their pieces.

76. Use technology to get Inspiration: Art is aided by inspiration; once an artist is exposed to various experiences, their mind turns those experiences into works of art. The internet helps with this process of generating ideas: a lot of information is published online as videos and pictures, and artists can use it to create meaningful pieces. Young artists also use the internet to study the work of professionals beyond their reach; some museums have published their collections as pictures online, so any artist can access them from anywhere and learn the basics from great artists.

77. Use laser sensors to secure art pieces in museums: Art theft is on the rise because of the extraordinary prices paid for art. According to The New York Times, Pablo Picasso's Nude, Green Leaves, and Bust sold for $106.5 million, and the value of a single piece like that will attract thieves to any museum holding it. This has pushed museums to deploy electronic laser sensors that detect movement and sound and trigger alarms to alert armed guards if anyone gets too close to the works. To some extent, this has helped secure some great artworks.

IN BADMINTON

78. Use Advanced Badminton Rackets: Badminton is a popular game with many fans, and technology has helped advance it too. Advanced rackets help players enjoy the game and play for longer. The newest rackets have a lighter frame, which makes them more flexible during play; the frames are made of elastic materials mixed with carbon fibers, enabling faster swings while putting less pressure on the player's hands. These rackets also have shock-resistant grommets that will not tear or pop out under the pressure of the shuttlecock on the strings, which improves the game.

IN BASKETBALL:

79. Use Basketball Shoe Technology: Sports shoes are not the first place most people think of technology, yet basketball is a motion sport that requires players to jump constantly, so their shoes must support those movements. Basketball shoes are designed to be light and breathable so that they support players well and prevent the injuries that jumping can cause: they provide adequate ankle support and lace up to the top for a snug fit. Although these technologically advanced shoes come at a high cost, NBA players are advised to wear them and to keep more than one pair, because playing in a single pair all the time wears the sneakers out; worn shoes put pressure on the player's ankles and feet and expose them to injury during a match.

80. Use broadcasting technology: Many basketball fans can watch live NBA games at home on their televisions. Well-known broadcasters like CBS Interactive have managed to carry NBA games live using advanced broadcasting technology, including 3D. As a basketball fan, I have benefited from this technology because I never have to worry about missing my favorite game.

BEAUTY SALONS:

81. Marketing: Unlike in the past, when your next-door beauty salon was known only to its own neighborhood, things have changed: beauty salons have caught the social-marketing fever, and many use networks like Facebook, Yelp, and Foursquare. The most important for these local businesses are Yelp.com and Foursquare.com. On Yelp, customers post reviews of any beauty salon in their area, and other users rely on those reviews when searching for recommended salons nearby. Foursquare, meanwhile, can suggest salons your Facebook friends like: if a salon runs a Facebook page and promotes itself well, satisfied customers will click the Like button, and the next time your friends use Foursquare, it can draw on that Facebook data to suggest salons liked by people you know in that location. Foursquare thus acts as a localized recommendation service powered by your friends' "Likes".

82. Use SMS Reminders: Given the stiff competition in the beauty-salon business, it pays to stay in touch with your clients at an affordable cost. To gain a competitive advantage and increase your customer numbers, integrate SMS technology into your business: launch SMS campaigns with offers to your existing customers and promote incentives to new ones through SMS advertisements. Bulk SMS is very cheap to send; all you need are the contacts of your clients and other targeted customers. Satisfy customers with good service and keep them engaged, since customer service is a crucial factor in business growth. Services you can offer via SMS include appointment reminders, holiday promotions, discount coupons, style suggestions for your clients, and much more. A sketch of a simple reminder campaign follows.
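
A minimal sketch of such a campaign, assuming Twilio's Python helper library (pip install twilio); the credentials, phone numbers, and message text are placeholders, not real values.

    # Sketch: send the same appointment reminder to a list of clients.
    from twilio.rest import Client

    client = Client("ACCOUNT_SID", "AUTH_TOKEN")   # from your Twilio console

    recipients = ["+15551230001", "+15551230002"]  # placeholder client contacts
    reminder = "Reminder: your appointment at Bella Salon is tomorrow at 2 PM."

    for number in recipients:
        client.messages.create(body=reminder, from_="+15559990000", to=number)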

83. Use Salon Management Software: Salon management software will help you run your salon effectively: it can manage appointments, financial records, inventory, and payroll. With it you can build a list of clients and track their spending patterns whenever they visit, which lets you customize packages for them and improve your customer care. The cost varies with the functions a client needs, so you have to specify your requirements, but in most cases a standard package with basic features can cost around $100. If you want to try such software, visit melissamsc.com.

BIOLOGY CLASSROOM

84. Use Visual Illustrations: Biology teachers have found great use for technology in their classrooms, using smart boards and projectors to present live visual examples. Biology is largely a practical subject: for students to understand some concepts, images and video illustrations are needed. Suppose students are learning about the human heart and how it works; text or verbal explanation alone will not carry a lesson like that, so the teacher uses a smart whiteboard to show visual illustrations of how the heart functions.

HOW TO USE TECHNOLOGY IN CAREER COUNSELING

85. Use Computers in Career Counseling: Choosing the right career for your future is a big decision, and in many cases we make decisions based on facts; computers are good tools for storing and analyzing facts. Many career counselors use computers to show clients facts about specific careers, so both counselor and client can review the data: how many people are competing for a particular career in the market, how many companies can accommodate it, how much it pays, and its challenges and opportunities. To simplify their job, counselors also use computer guidance systems such as the System for Assessment and Group Evaluation (SAGE) and the Computerized Career Assessment and Planning Program (CCAPP).

86. Use the Internet in Career Counseling: Sometimes you will have no access to a career counselor, or no money to pay for one; this is where the internet comes in. You can use it to research different careers: use top search engines like Google.com, Bing.com, Yahoo.com, or Ask.com to get specific information about any career, including which companies hire for it, how much they pay based on average salaries, and the risks and opportunities involved in pursuing it. All this information is available online. You can also use job-listing portals like indeed.com to search for salaries and jobs within your location.

IN CHURCH:

87. Use the Internet to deliver church Sermons: As the world develops, more and more people are busy and deeply attached to their careers, and with the cost of living rising, many Christians find themselves working on Sundays. To keep up with their congregations, churches have turned to the internet to reach Christians across the globe. Well-known pastors like Joyce Meyer use the internet to reach millions worldwide (Joycemeyer.org), so even while at work, Christians can use their smartphones, computers, or tablets to access spiritual content as video, audio, or text.

88. Use Social networks to preach the gospel: Many Christian groups have found social networks a great way to spread the gospel and stay in touch with fellow believers across the globe. On networks like facebook.com, followers of a particular faith can create a religious group page and invite other believers to join; whenever the page posts an update, all followers are notified in their Facebook feed, and they can also use the page to ask questions about anything.

89. Use text messages as reminders of the gospel: Christians face challenges of all sorts, and sometimes they are not in a position to seek spiritual advice from church leaders. Forward-looking churches have embraced text messaging to stay in touch: biblical and self-empowerment messages sent by SMS help Christians stay on track.

IN COLLEGE:

90. Use the Internet for Research: Many college students know how to use the internet, and most have access both at school and at home. Top technology-oriented colleges provide free wireless internet on campus, which helps students do educational research online. Teachers sometimes assign research work that requires the internet; information technology students, for example, must have internet access to understand certain concepts. Consider a college student in America who has to complete a research report on "The impact of technology in Africa": that student will certainly need internet access to learn about Africa and how technology has changed people's social and economic lives there.

IN CONSTRUCTION

Construction produces two types of structures: buildings and heavy engineering structures. Technology is used both in planning these structures and during the building process. Buildings are enclosures that protect people, their products, and their equipment from damage by external elements; they include residential homes and warehouses. Heavy engineering structures are large commercial works such as skyscrapers, shopping malls, and sports stadiums. Below is a summarized list of how technology is used in construction.

91. Use Technology to Prepare a Construction Site: The location of a building has to be well planned so that it meets the needs of the owner and of the people in the surrounding area. This process involves site inspection with technological tools like gas detectors, which protect against dangerous concentrations of VOC solvents such as methyl ethyl ketone; ketones are toxic and explosive, so engineers always inspect the site to make sure it is safe for construction. Once the site is cleared as safe, all obstacles are removed and the location of the new building is marked.

SOLVING CRIME

92. Use Lie Detectors: Police use lie-detection machines to help determine whether a suspect is telling the truth. To some extent these machines have helped police solve criminal cases, though criminals have also found ways to trick them; technology is not perfect when it comes to humans, but it helps solve some problems. A lie detector works from your physiological reactions when you are asked a question: if you are lying, the machine detects changes in blood flow, heartbeat, brain activity, and so on, and those stress responses are what register as signs of deception.

93. Use surveillance cameras: Law enforcement agencies use street surveillance cameras to track down lawbreakers and criminals. Financial institutions, government agencies, and big organizations also use surveillance cameras to monitor activity 24/7. Criminals fear exposing their identities, so they avoid areas covered by cameras, and most cameras can be hidden so that no one notices them. A surveillance camera streams live video to law enforcement, making it easier to track criminals. Advanced security organizations like the Pentagon use satellites to watch activity in suspicious areas, which helps them spot terrorist activity in any country. Surveillance cameras are now being linked to the internet as well, so law enforcement teams can access footage from anywhere.

DATA COLLECTION:

94. Use Data-Collecting Tools to Speed up Data Collection: Data collection is the process of gathering facts, and technology can aid it; the type of data you need will determine the technology you use to collect it. Advanced technology provides many tools for the job, including portable computers, flash disks, mobile phones, graphing calculators, pH meters, portable microscopes, and electronic sensors for environmental data, among others. A computer is needed in virtually any data-collection effort, whether to analyze the data or to store and organize it.

DATA STORAGE:

95. Use Cloud servers: After collecting your data, you need to store it safely so that unauthorized parties cannot reach it. Cloud hosting services let you store data securely on a remote server. Many cloud storage companies operate online, and most offer a few gigabytes of free storage; services like Dropbox.com and Box.com both have phone and tablet apps you can use to upload data directly to the cloud over the internet. These hosting services use strong encryption to keep your data safe. A sketch of a scripted upload appears below.
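
A minimal sketch of a scripted upload, assuming the official Dropbox SDK (pip install dropbox); the access token and file paths are placeholders.

    # Sketch: push a local file to cloud storage over the Dropbox API.
    import dropbox

    dbx = dropbox.Dropbox("ACCESS_TOKEN")          # generated in the Dropbox app console

    with open("survey_results.csv", "rb") as f:
        dbx.files_upload(f.read(), "/backups/survey_results.csv")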

96. Use External Hard drives and Flash disks: You can also store data on an external hard drive or a flash disk. External drives come in various types, with capacities ranging from tens of gigabytes into the terabytes. To transfer data, connect the drive to your computer with a USB cable and copy the files across. The advantages of external drives are portability, security, and control: a drive that is disconnected from the internet cannot be hacked remotely, and you face no service limits on what you store. A scripted backup can automate the copy, as sketched below.
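
A minimal backup sketch using only the Python standard library; the source folder and the drive mount point are placeholders.

    # Sketch: mirror a documents folder onto an external drive.
    # dirs_exist_ok (Python 3.8+) lets the same backup be re-run safely.
    import shutil

    shutil.copytree("C:/Users/me/Documents", "E:/backup/Documents", dirs_exist_ok=True)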

DAYCARE CENTERS:

97. Use hidden cameras to monitor children: Many daycare centers use technology to make sure the children in their care stay safe. Since hidden cameras can be connected to the internet, parents can also get real-time access to what is happening at the daycare. Even though many parents have no time to use these remote monitoring facilities, those who do have found them valuable. If you are a parent, just imagine logging into a remote feed and seeing what your child is doing and the environment around them; to me, the whole idea sounds reassuring.

These are just a few uses of technology; it can be applied in many other sectors. For example, you can use technology in restaurants, in manufacturing, in governing society, in quality management, to enable remote working, to promote healthy eating through web and broadcasting technologies, in political campaigns, to increase the human life span, in the hotel and travel business, and in transportation. The truth is that the uses of technology are unlimited. You can build on this list and make it more valuable to your friends by using the comment box below.


Impact of Information Technology

Unknown

Technology has come to stay with us, with accompanying ramifications and mixed blessings. The influence of current and future information technology and its applications is beyond human imagination. How pervasive or beneficial are information systems and technology? That is a question worth discussing. Throughout my secondary education in my beloved country, Ghana, I never had the chance to see, touch, or work with computers until my first year in college; the good thing was that I was literate enough to read and understand some basic computer concepts. My first experience with computers came during my first year as an electrical engineering student at the School of Engineering, Kwame Nkrumah University of Science & Technology (KNUST), back in 1996. Theoretically I understood some computer concepts, but practically it was a nightmare. What do we see today, just a little over a decade later? The answer seems obvious. For instance, my five-year-old now uses a computer as a learning "tool" to play all of his computer games, watch movies, and study kindergarten-level math and reading online.

Some of us must change our mindset to embrace the proliferation of technology and, if possible, embed it in our DNA; we must, however, be aware of the negative consequences associated with technology. In a nutshell, staying current and immersed in information technology concepts and applications is the catalyst that lets us siphon off the good part of technology for our daily needs. Some of these state-of-the-art technologies include digital and multimedia convergence; intelligent applications; embedded systems; mobile, satellite, and wireless communications; and distributed computing, among others. Today, technology finds useful applications in medicine, legal systems, and almost every facet of our lives, as shown in Figure 1 in the appendix. We need to be inquisitive about information technology and its vibrant applications in shaping our future together. The next sections delineate a few contributions of IT in our world today.


Software Applications and Social Media: A computer without software or relevant applications is like a car without fuel; it loses its real value. Software planning and development is ever changing the face of businesses, organizations, and human-computer interaction. Here are a few examples. Most would agree that software applications keep changing in ever more innovative ways without our explicit awareness of those changes; of course, applications must align with current and future changes and adjust accordingly. Typical examples are the embedded operating systems in smartphones and PDAs, and the online calendars, word processors, and spreadsheets we have at our disposal today. Web 2.0 (and Web 3.0) applications such as blogs, wikis, and social networks, as well as web-based email clients like Yahoo and Gmail, can be delivered via cloud computing or similar means. It is fascinating to see e-government applied in global politics today, and prospects are high for Web 2.0 and its successor technologies to become a mainstream force for transparency globally. Additionally, social networking establishments like Twitter, Facebook, LinkedIn, Myspace, Orkut, and others continue to galvanize global politics. A typical example is what we are witnessing now in some parts of the world, where social media is perceived as a powerful political tool. Fortunately, it helps maintain transparency while exposing tyrants characterized by all sorts of negative behavior. Greedy and autocratic leaders in certain parts of the world are being exposed and torn down day after day; at the time of writing, some tyrants are struggling to cling to power while others have already "crashed downhill". Let us hope others follow, as seems to be happening now. I think social media tools deserve most of the credit because, without them, the United Nations and the world would have little or no evidence, and sanctions and other forms of punishment would have been difficult to implement. Does this sound familiar? What a high-tech world it is! I leave this part for readers to evaluate.



World Peace and Security: Who does not want to see peace and security permeate our society? If individuals or certain groups distance themselves from peace and security, then they probably do not belong with us. How can technology help maintain international peace and security? Recently, the United States Ambassador to the United Nations, Susan Rice, hosted a session of the United Nations Security Council titled "Voices of a New Generation," which sought to open new doors of opportunity to the world's youth (ages 13 to 21) on matters of international peace and security. I watched the video, and it genuinely touched my heart; unfortunately, I have yet to learn whether such a program can be extended to older folks. The point is that technology can help "fuel and energize" such excellent initiatives. As the speaker noted, "They poured in by e-mail, on YouTube, and through Facebook. And some were even written by hand" (para. 7). Again, as illustrated earlier, this demonstrates the power of social media. In the next year or so, I hope to embark on behavioral science research that thoroughly investigates the effects of human behavior on advances in technology and society in general.



Medical Applications: As medical practitioners continue to face complex medical systems, emerging technologies continue to relieve them of such complexities. Advances in wireless devices, with their accompanying processors, memory chips, and operating system implementations, are proving useful in medicine; typical examples are Bluetooth and smartphone technologies. Bluetooth wireless technology has helped in remote patient monitoring, wireless biometric data collection, and medicine dispensers. The smartphone has also become an important technology in healthcare establishments: with advanced capabilities and functionality much like a laptop computer's, it lets doctors remotely monitor their patients' lifestyle changes, such as daily exercise (Marshall, Medvedev & Antonov, 2008). This technology can help patients manage chronic diseases from home, regardless of their location.



Business Opportunities: The aviation and transportation industries have come a long way. Today's airline industry is deeply rooted in internet technology, relying on e-services, e-communications, and e-business for day-to-day operations. Bluetooth technology has also paved the way for electronic boarding passes, eliminating the need for the traditional paper pass. Over the past decades, technology has influenced many organizations' internal and external IT operations in alignment with strategic business goals, and there is a mounting business imperative to make strategic decisions about investing in information technology. We have already seen the interplay of business and technology on the horizon: Walmart uses radio-frequency identification (RFID) tags to automatically track its inventory systems. RFID, as an emerging technology, has generated enormous interest in supply chain initiatives and beyond. Walmart is not alone; according to Potgantwar and Wadhai (2009), "RFID technology has been used in many organizations and agencies such as the U.S. Department of Defense (DoD), the Food and Drug Administration (FDA)" (p. 154). RFID also has useful applications in wireless systems; research notes that "There are several approaches in location-sensing systems that use RFID technology" (Potgantwar & Wadhai, 2009, p. 154). Moreover, technology has become a huge driver of e-business through implementations such as data integration, network storage systems, and database systems. E-commerce, for example, is a striking technology for businesses and their customers thanks to the growth and proliferation of the internet: consumers now buy all sorts of items, such as books, music, videos, toys, and games, online rather than in a traditional setting. In a nutshell, all these together tend to "ignite" business initiatives.



Judiciary Systems: There may be contention among governments, citizens, and judicial systems, but the contribution of science and technology to our legal systems cannot be overemphasized. DNA, for instance, has served well in this direction: it is used to solve crimes, protect the innocent, and identify missing persons. According to the Department of Justice, "since the creation in 2000 of the Department of Justice's (DOJ's) Convicted Offender DNA Backlog Reduction Program, more than 493,600 offender samples from 24 states have been analyzed" (p. 4), an effort aimed at solving criminal cases. One can imagine what impact technology may have on judicial systems in the coming decades; it is imperative to understand how these advances influence the courts' interaction with the public now and in the future. The future technological innovations awaiting the decision-making process in our legal systems are quite beyond imagination.



Sports and Entertainment: There is no doubt that the world of digital entertainment and high-tech sports is here to stay. Technology has shaped the way we plan, organize, analyze, and do business with sports and entertainment; technically speaking, technology assists, and may even replace, human involvement in sporting events. The use of lasers, for instance, is believed to limit sporting controversies, as became evident during the FIFA World Cup 2010 in South Africa; such advances can relieve referees and linesmen of attacks and humiliation. The future of home entertainment is likely to be more software-driven than hardware-driven, as already seen in Microsoft's Media Center and the Apple TV, though the "soft" part will certainly need the support of its "hard" counterpart, the underlying hardware. It is amazing to see how digital entertainment has moved from CD to DVD technologies and now to Blu-ray, a high-definition media format designed to supplant the familiar DVD format. Fortunately, Blu-ray's designers were smart enough to make the disc drives backward compatible, allowing a smooth transition from traditional DVD and CD technologies. This is just a piece of how technology is driving innovation in the world of entertainment and sports.



Future Trend: The discussion so far is just the tip of the iceberg, viewed through both current and future lenses. The future of technology looks more promising now than ever before. Practically speaking, our world may very well come to be governed by computers, given advances in information systems and technology and the accompanying human-computer interaction. In the future, information systems and technology are expected to keep evolving, converging, and becoming ubiquitous across all facets of human need. We can hardly imagine the innovations that space exploration, military technologies, online education, and the world of sports and entertainment could bring. For instance, plans for space exploration have recently been announced by both government agencies and the private sector. According to El-Rayis, Arslan, and Erdogan's (2008) study of space technology, computational challenges are projected to confront future space missions; the evidence notes that "rapid developments in semiconductor technologies have led to progression in sensors technology and an enormous increase in their capability and accuracy addresses the architectural requirements and processing demands for future space missions" (El-Rayis et al., 2008, p. 199). In short, IT is expected to play a vital role in the convergence of computing and its applications in science, engineering, business, aviation, entertainment, politics, culture, and medicine, among other disciplines. Intuitively, the computer might come to be viewed as a central home store holding a full stream of digital information, where many of these demands could be met at home. Thinking of a virtual world, we do not know for sure whether physical office buildings and the traditional brick-and-mortar classrooms of today's educational systems will continue to exist in the future.



Conclusion: In summary, it is a fact that advances in information technology are dynamic and continue at an ever-increasing rate. We all need to be conscious of this and continue to explore innovative ways to advance and transform our neighborhoods in particular and the world as a whole. In this discussion, the author has outlined a few ways these advances can create opportunities for us to succeed in our society now and in the future. These innovations may also place us in a metaphorical dilemma beyond human imagination; therefore, to siphon off the good part of technology for our needs, we must stay inquisitive about it, with a drive to shape our future together. In any case, the advantages technology brings outweigh the disadvantages. It is vitally important to promote state-of-the-art research in future IT disciplines, and in the final analysis, we need to stay current and immersed in information technology concepts and applications going forward.


ERP Implementing Methodology

Unknown

Why, when, and how?

Of the many reasons to implement an ERP solution, the chief one is the need for a common IT platform. Other reasons include a desire for process improvement, data visibility, operating cost reductions, increased responsiveness to customers, and improvement in strategic decision-making.

ERP certainly acts as an impetus for replacing a mix of aging legacy systems with a common platform. This replacement has become imperative for several reasons, chiefly because a mix of aging legacy systems leads to high support costs, and because firms expect business benefits such as process improvements and data visibility to yield cost reductions.

Implementation methodology

Assuming a decision on an ERP has been taken, the implementation normally consists of five stages:

1. Design

2. Implementation

3. Stabilization

4. Continuous improvement

5. Transformation

A structured implementation program can speed system deployment and return on investment. This can be achieved by attending to the following:

•   Conducting an effective gap assessment

•   Business and technical processes

•   Organizational measures

•   Data conversion and data clean-up

•   Agreeing on the implementation boundaries

•   Project sponsorship and governance

The implementation strategy is ultimately built on a foundation of people, processes, and product.

ERP implementation: when, why, and how?

Methodical implementation of an ERP system starts with a preliminary analysis: an ERP readiness audit of the client. This is the key issue in the implementation methodology discussed here. The analysis should cover the client's current business performance, forthcoming strategic plans, investment potential, culture, general human-resource characteristics, and the expectations and objectives set for ERP. Once the study report based on this analysis is submitted, the client has to accept the recommendations it emphasizes.

Given current trends, it is very risky for a client to abandon ERP because of poor readiness-audit results: to sustain competitiveness, ERP cannot be substituted with fragmented legacy systems. The only way forward is a proper reengineering program to prepare for ERP adoption. Though this may seem a harsh judgment, it is the main focus of the methodology.

A blueprint should be drawn up detailing the migration, and the analysis of the 'to-be' processes and their phased execution should be discussed thoroughly. The activities of the overall implementation program should be planned in units of time. Consulting firms often overlook the time required for data collection and for the policy decisions embedded in master settings, yet the ERP master setting can be regarded as the heart of ERP implementation.

Sufficient time must be allocated to identify, clarify, validate, and simulate exact information for all master-level entries. In India, this exercise is often a company's only opportunity to set itself up for better performance over its lifetime. It would not be false to say that an organization with revenues above 1,000 crores should take at least three months to set up its Item Master; yet no consulting company or client will agree to such figures in today's jet age, where the idea is to sell ERP quickly and start using it.

Another important element is parallel user training and knowledge upgrading. A best practice that experienced resources recommend is to take users to sites where ERP is already implemented and show them the process flow: seeing it work is the best training you could ask for, especially when you have no prior idea of the concepts.

What must be kept in mind when going live is that the timing should be planned properly and the organization should be equipped to face whatever shortcomings it may encounter. There are proven implementation methods that not only keep the customer satisfied but also boost users' confidence; one such method is described here for readers' reference.

The method was applied in a manufacturing company. It was recommended that ten transactions be carried out in the ERP every day, with no entries in the legacy system; the ERP then generated all the documents as supporting papers for the manual entries and reports, while the rest of the day's entries continued to be made manually in the legacy system. Users soon understood that ERP was better on all grounds. This formula worked wonderfully for generating confidence and reducing resistance in users' minds.

Another question is whether ERP implementation should be carried out module by module or all modules in one go. The answer is case-specific and hard to generalize; the decision mostly depends on the readiness-audit result, the available consulting manpower (both internal and external), the relevance of ERP at the various sites, the timeframe, and the budget. It is, however, highly recommended that the organization implement all modules at one site first and then follow a roll-out strategy for the remaining sites.

Finally, there is a misconception about BPR and ERP. The doubt that persists in many decision-makers' minds is whether the organization should first carry out business process reengineering (BPR) and then implement ERP, or vice versa. One has to understand that the ERP system is the physical embodiment of the BPR concept. In some organizations where BPR was carried out first and ERP implemented afterwards, the BPR document was thrown into the dustbin and a new 'to-be' document was prepared to suit the ERP.

Such cases clearly show that BPR benefits do not really exist without reference to ERP, and ERP simply cannot be implemented without proper BPR. Our view is that BPR should be the outgrowth of ERP implementation and therefore carried out during implementation itself. However, the major element of BPR is a change in mindset, which can be initiated before ERP adoption, perhaps during the preliminary analysis by way of user training. The best approach in these situations is 'noiseless BPR', which lets the implementation proceed smoothly.


How Cybercriminals Make Money With Your Email

Unknown

Cybercriminals make enormous amounts of money by exploiting weaknesses in corporate and personal email defenses, deficiencies in the corporate policies meant to protect email users, and user ignorance. Criminals are aided in their efforts by three key trends that are becoming increasingly prevalent:

Criminals can develop highly sophisticated malware because they are well-funded, and often supported directly by organized criminal groups. 

Many users share large amounts of information through social media and other venues that enable criminals to obtain useful information about their potential victims that can be used to develop sophisticated spear phishing attacks. 

There is a growing number of devices and access points from which users access email, making it more difficult for organizations to defend against email-borne threats and easier for criminals to exploit weak defenses on several levels.

KEY TAKEAWAYS 

Email-delivered malware, as well as the total volume of new malware, are increasing at a rapid pace. 

Cybercriminals use a variety of techniques, including spearphishing, shortened URLs, advanced persistent threats, traditional phishing, man-in-the-middle attacks, spam, botnets, ransomware, and scareware, to defeat corporate defenses. Scareware is often delivered as a pop-up message but is sometimes delivered via spam email.

The financial and auxiliary consequences of cybercrime can be enormous and multi-faceted: the direct costs of remediating the cybercriminal activity, lost business opportunities, a damaged corporate reputation, and the like.

Cybercrime is a business – albeit a nefarious one – that is driven by fairly traditional business decision-making. The goal of any email defense solution, therefore, is to make continued attacks against an organization unprofitable so that cybercrime activity is reduced. 

To minimize the impact and effectiveness of cybercriminal activity, an organization should undertake an ongoing program of user education, as well as deploy appropriate technologies designed to address new cybercriminal techniques. 

This paper focuses on key issues that organizations should address in the context of cybercrime delivered through email, and it offers some practical advice on what organizations should do to protect themselves. 

WHAT DO CYBERCRIMINALS DO? 

THE PROBLEM IS GETTING WORSE 

Cybercriminals use several methods to deliver email-based threats to their victims, and they do so quite successfully, as evidenced by the following figure, which shows the large proportion of mid-sized and large organizations in North America that were victims of email- and web-based threats during the previous 12 months. Illustrating the seriousness of the malware problem itself, the next figure shows the rapid increase in new malware over the past few years.

[Figure: Percentage of Organizations Infiltrated by Email-Based Malware, 2007-2012]

[Figure: New Malware Detected (millions of malware programs detected), 2005-2012]

It’s important to note that while we saw something of a hiatus in the infection growth rate from email-based malware during 2011, as well as a flattening in the amount of new malware detected, this may have been due to the March 2011 takedown of the Rustock botnet – a key delivery path for spam and malware – that had infected more than 800,000 Windows-based computers. 

METHODS USED BY CYBERCRIMINALS 

Among the many methods used by cybercriminals are: 

Spearphishing is a more focused variant of phishing in which a single individual or a small group of individuals within a firm is targeted by cybercriminals. Quite often a company's CFO or CEO will be targeted because they are likely to have access to the company's financial accounts. A common method for gaining access is a highly targeted email containing an attachment or a link; clicking on either infects the victim's PC with a Trojan that can then be used to harvest login credentials to a bank account. Smaller companies, churches, school districts, and similar small to mid-sized organizations are among the more common targets of spearphishing attacks because they often lack the sophisticated defenses that can protect against them.

Spearphishing has been aided to a great extent by social media, since cybercriminals can use content posted to Facebook, Twitter, or other social media sites to improve the likelihood that their content will be opened. For example, a CFO who posts to Facebook about a recent online purchase of a new Lytro camera will be very likely to open a malicious email with the subject line "Problem with your Lytro camera order" and to click on any links contained therein.

One spearphishing attack may have derailed Coca-Cola’s $2.4 billion acquisition of China Huiyuan Juice Group. Coca-Cola’s Pacific Group deputy president received an email from what he thought was the company’s CEO, but in reality, the email was from a (probably) Chinese firm known as the Comment Group. The email contained malware that allowed the perpetrator to access sensitive content for more than 30 days. Shortly thereafter, the Chinese government blocked the acquisition because of concerns over competition in the beverage industry.

Short URLs Shortened URLs, which might appear in emails, tweets, and so on, are commonly used to bring unsuspecting victims to malicious sites in the hope of infecting their devices with malware. The attraction of a short URL for potential victims is that it fits nicely into character-limited tools like Twitter and condenses very long links in non-HTML emails. More importantly for cybercriminals, it masks the identity of the malicious site, hiding it both from individuals who might grow suspicious on reviewing the URL and from automated systems. One defensive step is sketched below.
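
A minimal defensive sketch, assuming the Python "requests" library (pip install requests) and a placeholder short link: the redirect chain can be followed without rendering the page, revealing the true destination before anyone clicks it.

    # Sketch: expand a shortened URL to inspect its final destination.
    import requests

    def expand(short_url):
        # A HEAD request follows redirects but downloads no page content.
        response = requests.head(short_url, allow_redirects=True, timeout=10)
        return response.url

    print(expand("https://bit.ly/example"))   # placeholder; prints the landing-page URL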

Advanced Persistent Threats Advanced persistent threats (APTs) are protracted attacks against a government, company, or other entity by cybercriminals. Underscoring their seriousness is the fact that these threats are generally directed by human agents (as opposed to botnets) who are intent on penetrating corporate or other defenses; they are not simply random or automated threats looking for targets of opportunity. As a result, those behind APTs change tactics as they encounter resistance from their targets, such as the deployment of new defense mechanisms.

Phishing A phishing attack is a campaign by a cybercriminal designed to penetrate anti-spam and/or anti-malware defenses. The goals of such an attack can include infecting users’ PCs to steal login credentials, gaining access to corporate financial accounts, stealing intellectual property, searching through an organization’s content, or simply gaining access for a purpose to be determined at a later date. Email is a useful threat vector for phishing attacks and can be quite successful for cybercriminals. For example, a common phishing scheme is to send an email citing UPS’s inability to deliver a package and asking the user to click on a link to print an invoice. 

THE EASE OF GATHERING INFORMATION THROUGH SOCIAL MEDIA 

To see how much information we could gather on a senior executive, in late February 2013 our researcher chose a company at random after a quick Google search for companies in the area. 

Our researcher then visited this company’s Web site, found an owner listed, and then searched for his name on Facebook. Although we have no relationship with this individual, a quick look at his wall revealed his former employers, where he went to high school, the fact that he is also a realtor, where he had lunch last Friday, his phone number, information about his ferry ride the previous Tuesday, information about an upcoming company event in early March 2013, the names of two people who gave him gifts in late January 2013, and what he had for dessert on January 13, 2013. 

A cybercriminal could have used any of this information to craft a spearphishing email with a subject line that would likely have attracted his attention and made it more likely for him to click on a link to a malware site that might have infected his PC. 

Man-in-the-Middle Attacks A man-in-the-middle attack is one in which a third party intercepts messages between two parties while they are attempting to exchange public keys. In essence, the third party impersonates both the recipient and the sender, so that the two legitimate parties think they are communicating with each other when in fact each is communicating directly with the unauthorized third party. The result of a man-in-the-middle attack can be relatively innocuous, with the third party simply listening in on a conversation, or it can be more malicious and result in the loss of network credentials or sensitive information. 
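To make the key-substitution mechanics concrete, here is a toy sketch in Python of an unauthenticated Diffie-Hellman exchange being intercepted. The tiny numbers are purely illustrative; real deployments use large parameters and, crucially, authenticated key exchange, which is exactly what defeats this attack:

# Toy Diffie-Hellman with deliberately tiny, insecure parameters.
P, G = 23, 5  # public modulus and generator

alice_secret, bob_secret, mallory_secret = 6, 15, 13

alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)
mallory_public = pow(G, mallory_secret, P)

# Mallory intercepts the exchanged public keys and substitutes her own,
# so each victim unknowingly agrees on a key with Mallory, not the other party.
alice_key = pow(mallory_public, alice_secret, P)  # Alice thinks this is Bob's key
bob_key = pow(mallory_public, bob_secret, P)      # Bob thinks this is Alice's key

# Mallory derives both session keys and can decrypt, read, and re-encrypt
# every message while relaying it, staying invisible to both parties.
assert alice_key == pow(alice_public, mallory_secret, P)
assert bob_key == pow(bob_public, mallory_secret, P)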

Spam While in some ways spam is less of a problem today than it was before the successful takedown of various botnets at the end of 2010 and in early 2011, it remains a serious and vexing problem for organizations of all sizes. Spam consumes storage and bandwidth on corporate servers; users must scan spam quarantines to ensure that valid messages have not been misidentified and quarantined; and malicious content can mistakenly be withdrawn from a spam quarantine, increasing the potential for infecting one or more PCs on the corporate network. 

Spam filters can often be defeated by simple text obfuscation such as the misspelling of particular words; by Bayesian poisoning, the introduction of valid text into spam messages to make them look legitimate; by various HTML techniques that trick spam filters; by the use of multiple languages; and so forth. Filters that rely on less sophisticated filtering techniques and Bayesian approaches are the ones most easily fooled by these tactics. 
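The effect of Bayesian poisoning is easy to see with a toy naive Bayes filter. In the sketch below the per-word probabilities are invented for illustration; padding an obvious spam message with ordinary business words drags its computed spam probability below a typical block threshold:

import math

# Hypothetical per-word probabilities learned from training mail:
# each entry is (P(word | spam), P(word | ham)).
word_probs = {
    "viagra":  (0.20, 0.001),
    "winner":  (0.10, 0.002),
    "meeting": (0.001, 0.05),
    "agenda":  (0.001, 0.04),
    "quarter": (0.002, 0.03),
}

def spam_score(words, prior_spam=0.5):
    # Return P(spam | words) under a naive Bayes model, using log
    # probabilities to avoid numeric underflow.
    log_spam = math.log(prior_spam)
    log_ham = math.log(1 - prior_spam)
    for w in words:
        if w in word_probs:
            p_spam, p_ham = word_probs[w]
            log_spam += math.log(p_spam)
            log_ham += math.log(p_ham)
    return 1 / (1 + math.exp(log_ham - log_spam))

print(spam_score(["viagra", "winner"]))  # ~0.9999: blocked
# The same spam padded with ordinary business words scores ~0.25: delivered.
print(spam_score(["viagra", "winner", "meeting", "agenda", "quarter"]))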

Spam that contains attachments used to be quite common as a means of delivering malware. While not as common today, spam with malicious attachments still finds its way into many organizations. PDF files, images, calendar invitations, spreadsheets, and zip files are all used as payloads to carry malicious content. 

Botnets Cybercriminals often use botnets consisting of tens of thousands of ‘zombie’ devices – personal and workplace devices infected with a virus, worm, or Trojan that permits them to be controlled by a remote entity. Spammers can rent botnets for the distribution of their content, typically at relatively modest rates. By using botnets, cybercriminals can send a small number of messages from each of thousands of computers, effectively hiding each sending source from detection by ISPs or network administrators using traditional detection tools. Botnets are a serious problem not only because they are responsible for a large proportion of spam sent today, but also because they are used for a range of purposes beyond simple spam delivery: perpetrating distributed denial-of-service attacks, click fraud, and credit card fraud. Botnets are successful because they can be difficult to detect and take down. 

Ransomware is a type of cybercriminal attack, most often introduced to a PC by an email-delivered or other worm, in which a user’s PC is locked or its files encrypted until a “ransom” is paid to a cybercriminal. For example, one ransomware variant, Reveton, is drive-by malware that displays a message informing victims that they have downloaded child pornography or pirated material and demanding payment of a fine to restore access to their PC. During two days in May 2012, victims paid a total of more than $88,000 to cybercriminals to restore access to their PCs. 

Scareware is a less invasive form of ransomware in that it warns users that their PC is infected with malware, often reporting the discovery of thousands of different instances of malware. It then offers to disinfect the computer by offering anti-virus software for a nominal fee. While the fee is typically on the order of $40 – albeit for software that does nothing – the real damage often results from providing cybercriminals with a valid credit card number and CVV code. Scareware is often delivered as a pop-up message but sometimes is delivered via spam messages in email. 

State-sponsored malware One example of state-sponsored malware is Stuxnet. This malware was designed to target a particular type of Siemens controller used in Iran’s uranium enrichment plant at Natanz and was set to expire in June 2012 (although it propagated globally before its expiration date). While the malware was not designed to attack companies or consumers, it is a good example of how malware can be built to go after a specific type of target and remain undetected by its victim. 

BENEFITS REALIZED BY CYBERCRIMINALS 

First and foremost, it is essential to understand that cybercrime is a business – an illegitimate one, to be sure, but one guided by fundamental business principles: the benefits to be gained from a particular activity, return-on-investment considerations, investments in research and development, and the like. 

The benefits to cybercriminals from their activities are substantial. For example, cybercriminals using phishing, spearphishing, or other techniques can steal enormous amounts of money in a short period, as discussed below. They can also gain access to confidential information, intellectual property, Protected Health Information, or other information that might prove valuable now or at a future date. 

THE CONSEQUENCES TO BUSINESS AND GOVERNMENT 

The flip side of the benefit to cybercriminals is the pain experienced by their victims. Aside from the direct financial losses that can result, an organization that falls victim to email-based or other types of cybercrime can suffer a loss of reputation as news of the problem spreads in the press or among its customer base. Some customers may cancel orders or switch to a different supplier if they determine they can no longer trust the victim of cybercrime to safeguard their data and, by extension, the data provided to them by their own customers or business partners. The negative publicity alone can be worse than the loss of funds. 

DATA BREACHES 

Among the more serious and expensive consequences of email-based or other cybercrime is the breach of customer data. Because 46 of the 50 US states, one Canadian province, and many countries around the world have data breach notification laws in place, organizations that suffer a data breach as a result of cybercrime are liable for notifying the affected parties. Beyond the direct cost of notification lie the potentially much higher costs of losing customers who are upset about the loss of their data, paying for credit reporting services to ameliorate customers’ concerns, and the negative publicity that can result. 

Underscoring the seriousness of data breaches is the sheer magnitude of the problem. The Privacy Rights Clearinghouse, for example, maintains a database of data breaches dating back to 2005. Since it began keeping records, 3,680 data breaches had been made public as of mid-April 2013, resulting in the breach of 607.5 million records. The following two examples from that database illustrate just how serious the problem has become. 

As reported in March 2013, Uniontown Hospital (Uniontown, PA) was the victim of one or more hackers who accessed patient information, including encrypted passwords, contact names, email addresses, and usernames. 

Between May and November 2012, a computer used by an employee of St. Mark’s Medical Center (La Grange, TX) was infected by malware, resulting in the potential exposure of sensitive content, including patient billing information that was stored on the device. 

DRAINING OF FINANCIAL ACCOUNTS 

A variety of organizations have been targeted with keystroke loggers like Zeus that allow criminals to transfer funds out of corporate financial accounts. There have been several cases of this type of theft – many targeting small and mid-sized organizations, as noted earlier – resulting in major financial losses, as in the examples below: 

1. Hillary Machinery: $800,000 (its bank was able to recover only $600,000)

2. The Catholic Diocese of Des Moines: $600,000

3. Patco: $588,000

4. Western Beaver County School District: $700,000

5. Experi-Metal, Inc.: $560,000

6. Village View Escrow: $465,000

7. An unidentified construction company in California: $447,000

8. Choice Escrow: $440,000

9. The Government of Bullitt County, Kentucky: $415,000

10. The Town of Poughkeepsie, New York: $378,000

11. An unidentified solid waste management company in New York: $150,000

12. An unidentified law firm in South Carolina: $78,421

13. Slack Auto Parts: $75,000

BEST PRACTICES TO ADDRESS THE PROBLEM 

To protect against email-borne threats, organizations should undertake a two-pronged course of action: 

Train users Most will agree that, despite the enormous amounts spent on email security solutions, users are still the weak link in the security chain. The primary reason is that users are increasingly the targets, often supplying cybercriminals with the information they need by posting detailed personal information on social networks and other sites. Moreover, criminals can often harvest many corporate email addresses and use them to launch a phishing or spearphishing attack against a company’s employees. Smaller organizations are typically the most vulnerable because they often lack the budget or expertise to thwart sophisticated attacks. 

While users cannot prevent all attacks, they should be considered the first line of defense in any email-based defense system. Consequently, users should be trained to take a common-sense approach to managing email. Although the following recommendations seem obvious, many users are guilty of violating these basic provisions, often because they are rushed in their work or simply are not sufficiently cautious when dealing with email: 

  • Do not click on links in emails from unknown sources. 
  • Do not reuse passwords, and change them frequently. 
  • Do not connect to unsecured Wi-Fi hotspots, such as might be found in a coffee shop, at an airport, etc. 
  • Double-check the URL of links that seem legitimate before clicking on them. The URL displayed may not match the URL behind the link, but many email clients will display the actual URL on mouseover. (A simple automated version of this check is sketched after this list.) 
  • If an email is trapped in the spam quarantine, assume that the spam-filtering system trapped it accurately – do not assume it is a false positive unless you are certain that it is. 
  • Do not send sensitive content via email without encrypting either the content or the message. 
  • Be careful to ensure that sensitive content is not openly posted on social media sites, particularly those that are used for corporate purposes. 
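As an illustration of the URL double-check above, the following sketch (Python standard library only) flags links in an HTML email whose visible text claims one site while the underlying href points to another – the classic phishing mismatch. The example message is invented:

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the visible link text itself looks like a URL.
        if self.current_href and text.startswith(("http://", "https://", "www.")):
            shown = urlparse(text if "//" in text else "http://" + text).hostname
            actual = urlparse(self.current_href).hostname
            if shown and actual and shown != actual:
                self.suspicious.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example/login">https://www.mybank.com</a>')
print(checker.suspicious)  # [('https://www.mybank.com', 'http://evil.example/login')]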

While initial training is important, ongoing training designed to remind employees of new cyber threats, new spam and malware techniques, and so forth is essential to maintaining a robust defense posture. This might include sending simulated phishing emails to employees to determine the effectiveness of training and how carefully employees apply it. The goal is to provide a feedback loop of testing, training, retesting, and remediation. Employees who fall prey to simulated phishing attempts or other simulated threats can receive additional training or other remediation designed to help them become more careful when inspecting their email. 

Implement the appropriate technologies The next and more important step is to implement the appropriate technologies that will thwart cybercriminal activity. This should include a layered defense system designed to: 

  • Filter spam with a high degree of accuracy and a minimum of false positives. 
  • Detect incoming malware, denial-of-service attacks, zero-day threats, phishing and spearphishing attempts, blended threats, bounceback attacks, and other threats. 
  • Detect threats that are presented in short URLs. 
  • Evaluate solutions that offer protection not just at the time the message is scanned but at the time the message is clicked – in other words, protect the user from the click. Criminals often get past defenses with URLs that are unknown or have a good reputation, then switch the URL’s destination once it has passed the initial defenses. (A sketch of this time-of-click rewriting follows this list.) 
  • Integrate with other systems, including DLP, encryption, and other capabilities to provide an integrated solution that can be managed from a single pane of glass. 
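As an illustration of the time-of-click protection mentioned above, many gateways rewrite every URL at delivery time so it passes through a scanning redirector, letting the destination be re-evaluated at the moment of the click. A minimal sketch follows; the redirector host and service are hypothetical:

import re
from urllib.parse import quote

# Hypothetical redirector that re-scans the target when the user clicks.
REWRITE_PREFIX = "https://urlcheck.example.com/redirect?target="

def rewrite_urls(message_body: str) -> str:
    # Replace each URL in the message with a redirector link, so the
    # destination is judged at click time rather than only at scan time.
    return re.sub(
        r"https?://\S+",
        lambda m: REWRITE_PREFIX + quote(m.group(0), safe=""),
        message_body,
    )

body = "Your invoice is ready: http://example.com/invoice.pdf"
print(rewrite_urls(body))
# -> Your invoice is ready:
#    https://urlcheck.example.com/redirect?target=http%3A%2F%2Fexample.com%2Finvoice.pdf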


How To Make ERP Implementation A Success

Unknown

An enterprise resource planning (ERP) system will change the way your company does business. By helping to increase profit through improved efficiencies, enabling decision-making, and facilitating the identification of problem areas that are hindering your business' growth and success, an ERP system might well be the single most important purchase you make for your business.

Too often, companies are not completely satisfied with their ERP implementation or the system they have chosen, says Johani Marais, Channel Manager Africa, Epicor Software Corporation. This is very distressing considering the positive change that can be initiated with a proven ERP system that is skillfully implemented.

Here are some ways to address the most common problems that arise, which can threaten to sink an ERP implementation before it even takes off:

* Ensure you understand your pain areas so that your ERP provider can tailor the implementation to suit your unique requirements. ERP providers are experts in their field, who can help you to delve into the nuts and bolts of your business. They are also objective third parties who can shed new light on business challenges you may not have realized even existed.

* The right ERP package then needs to be selected according to your business's unique requirements and your future growth plans, as well as your budget. Ensure your new ERP system is scalable so it can grow with your business and your budget. Many ERP providers lack the expertise and financial backing to update their products regularly with meaningful improvements that incorporate the latest technology.

* Proven ERP providers will have a long history of successful implementations across a variety of sectors. They will assist you in automating processes and outlining every facet of your operations, from start to finish. And they will not deem the project to be completed until the system is working to specification.

* Ensure your ERP provider has enough resources to carry out your ERP implementation. The provider must have an experienced team that is well-supported to carry out a successful implementation in the promised timeframes. After all is said and done, downtime costs you money and means you won't meet your customers' needs.

* Make sure everyone has a good understanding of what you have undertaken to do and what they need to do to support the project. Too often, businesses don't understand the full depth of the product offering or the jargon that goes with it. At the end of the day, decision-makers have different expertise and come from different areas of a business. It is advisable to employ the services of business process engineers who understand both ERP and business processes to assist in the gap analysis between the various software packages and the business.

* Commitment can mean the difference between success and failure. Top management must be committed to the success of the implementation and clear goals must be agreed to in terms of their expectations and the expected return on investment. This will ensure the outcome is favorable and as expected.

* There must be clear project objectives and milestones agreed upon from both sides in terms of what is required to make the project a success. This will set the stage for a successful implementation with the least amount of hiccups.

* Organisational buy-in is imperative to a smooth ERP implementation. Change management must be the top priority and should be carefully planned and executed to prevent disgruntled employees from sabotaging the project. Considering that your people are among your most valuable resources, it is extremely important that the system is well received and there is buy-in across the organization.

* Be clear about the benefits that you expect, and take a careful look at the costs. This will put you in a better position to assess the benefits of the project and ensure you have realistic expectations.

* Ensure users attend training sessions and practice regularly on the test system before going live so that users are empowered to reap the full benefits of the implementation.


How to Write an Information Assurance Policy


An Information Assurance Policy is the cornerstone of an Information Assurance Program. It should reflect the organization's objectives for security and the agreed-upon management strategy for securing information.

To be useful in providing authority to execute the remainder of the Information Assurance Program, it must also be formally agreed upon by executive management. This means that to compose an Information Assurance policy document, an organization has to have well-defined objectives for security and an agreed-upon management strategy for securing information. If there is debate over the content of the policy, then the debate will continue throughout subsequent attempts to enforce it, with the consequence that the Information Assurance Program itself will be dysfunctional.

There is a plethora of security-policy-in-a-box products on the market, but few of them will be formally agreed upon by executive management without being explained in detail by a CSO (Chief Security Officer). This is not likely to happen due to the time constraints inherent in executive management. Even if it were possible to have management immediately endorse an off-the-shelf policy, it is not the right approach to attempt to teach management how to think about security. Rather, the first step in composing a security policy is to find out how management views security. As a security policy is, by definition, a set of management mandates concerning information security, these mandates provide the marching orders for the CSO. If the CSO instead provides mandates to executive management to sign off on, management requirements are likely to be overlooked.

A CSO whose job it is to compose security policy must therefore assume the role of sponge and scribe for executive management. A sponge is a good listener who can easily absorb the content of each person's conversation regardless of the group's diversity concerning communication skills and culture. A scribe documents that content faithfully without embellishment or annotation. A good sponge and scribe will be able to capture common themes from management interviews and prepare a positive statement about how the organization as a whole wants its information protected. The time and effort spent to gain executive consensus on the policy will pay off in the authority it lends to the policy enforcement process.

Good interview questions that elicit management's opinions on Information Assurance are:

•  How would you describe the different types of information you work with?

•  Which types of information do you rely on to make decisions?

•  Are there any information types that are more of a concern to keep private than others?

From these questions, an information classification system can be developed (e.g. customer info, financial info, marketing info, etc.), and appropriate handling procedures for each can be described at the business process level.
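Once categories and handling procedures are agreed, it can help to capture them in a machine-readable form that checklists and tooling can share. The sketch below is one hypothetical way to do that in Python; the categories and rules are invented examples, not prescriptions:

# Hypothetical classification scheme derived from management interviews,
# mapping each category to its agreed handling rules.
handling_rules = {
    "customer info": {
        "storage": "encrypted at rest",
        "transmission": "encrypted in transit only",
        "access": "customer representatives only",
    },
    "financial info": {
        "storage": "encrypted at rest",
        "transmission": "internal systems only",
        "access": "finance and audit roles",
    },
    "marketing info": {
        "storage": "standard",
        "transmission": "no restriction",
        "access": "all employees",
    },
}

def handling_for(classification: str) -> dict:
    # Unknown categories fall back to the most restrictive default.
    return handling_rules.get(classification, {"access": "deny by default"})

print(handling_for("customer info"))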

Of course, a seasoned CSO will also have advice on how to mold management opinions concerning security into a comprehensive organizational strategy.

Once it is clear that the CSO completely understands management's opinions, it should be possible to introduce a security framework that is consistent with it. The framework will be the foundation of the organization's Information Assurance Program and thus will serve as a guide for creating an outline of the Information Assurance policy.

Often, a security industry standards document is used as the baseline framework – for example, the Security Forum's Standard of Good Practice (www.securityforum.org), the International Standards Organization's security management series (ISO 27001, 27002, and 27005; www.iso.org), or the Information Systems Audit and Control Association's Control Objectives for Information Technology (CoBIT, www.isaca.org). This is a reasonable approach, as it helps to ensure that the policy will be accepted as adequate not only by company management but also by external auditors and others who may have a stake in the organization's Information Assurance Program.

However, these documents are inherently generic and do not state specific management objectives for security. So they must be combined with management input to produce the policy outline. Moreover, it is not reasonable to expect the management of an organization to change the way the organization is managed to comply with a standards document. Rather, the Information Assurance professional may learn about good security management practices from these documents, and see if it is possible to incorporate them into the current structure of the target organization.

Security policy must always reflect actual practice. Otherwise, the moment the policy is published, the organization is not compliant. It is better to keep the policy as a very small set of mandates to which everyone agrees and can comply than to have a very far-reaching policy that few in the organization observe. The Information Assurance Program can then function to enforce policy compliance while the controversial issues are simultaneously addressed.

Another reason that it is better to keep the policy as a very small set of mandates to which everyone agrees is that, where people are aware that there are no exceptions to policy, they will generally be more willing to assist in getting it right up front to ensure that they will be able to comply going forward.

Once a phrase such as "exceptions to this policy may be made by contacting the executive in charge of...." slips into the policy itself or the program in which it is used, the document becomes completely meaningless. It no longer represents management commitment to an Information Assurance Program but instead communicates suspicion that the policy will not be workable. A CSO should consider that if such language were to make its way into a Human Resources or Accounting policy, people could thus be excused from sexual harassment or expense report fraud. A CSO should strive to ensure that the Information Assurance policy is observed at the same level as other policies enforced within the organization. Policy language should be crafted in such a way that guarantees complete consensus among executive management.

For example, suppose there is a debate about whether users should have access to removable media such as USB storage devices. A CSO may believe that such access should never be required while a technology executive may believe that technology operations departments responsible for data manipulation must have the ability to move data around on any type of media. At the policy level, the consensus-driven approach would produce a general statement that "all access to removable media devices is approved via a process supported by an accountable executive." The details of the approval processes used by the technology executive can be further negotiated as discussions continue. The general policy statement still prohibits anyone without an accountable executive supporting an approval process from using removable media devices.

In very large organizations, details on policy compliance alternatives may differ considerably. In these cases, it may be appropriate to segregate policies by the intended audience. The organization-wide policy then becomes a global policy, including only the least common denominator of security mandates. Different sub-organizations may then publish their policies. Such distributed policies are most effective where the audience of sub-policy documents is a well-defined subset of the organization. In this case, the same high level of management commitment need not be sought to update these documents.

For example, an information technology operations policy should require only the approval of the information technology department head, as long as it is consistent with the global security policy; such a sub-policy only increases management's commitment to a consistent overall security strategy. It would presumably include such directives as "only authorized administrators should be provided access capable of implementing operating system configuration changes" and "passwords for generic IDs should be accessed only in the context of authorized change control processes."

Another type of sub-policy may cover people in different departments engaged in some unusual activity that is nevertheless subject to similar security controls, such as outsourcing information processing or encrypting email communications. On the other hand, subject-specific policies that apply to all users should not prompt the drafting of new policy documents; they should instead be added as sections in the global policy. Multiple policies containing organization-wide mandates should be discouraged, because multiple policy sources make it more difficult to accomplish a consistent level of security awareness for any given individual user. It is hard enough to establish policy-awareness programs that reach everyone in the intended community without also having to clarify why multiple policy documents were created when one would do. For example, new organization-wide restrictions on Internet access need not give rise to a new "Internet Access" policy. Rather, an "Internet Access" section can be added to the global security policy.

Another caveat for the CSO using the sub-policy approach is to make sure sub-policies do not repeat what is in the global policy, and at the same time are consistent with it. Repetition must be prohibited, as it would allow policy documents to get out of sync as they individually evolve. Rather, the sub-documents should refer back to the global document and the two documents should be linked in a manner convenient for the reader. Even while giving sub-policies due respect, wherever there is an Information Assurance directive that can be interpreted in multiple ways without jeopardizing the organization's commitment to Information Assurance goals, a CSO should hesitate to include it in any policy.

The policy should be reserved for mandates. Alternative implementation strategies can be stated as a responsibility, standard, process, procedure, or guideline. This allows for innovation and flexibility at the department level while still maintaining firm security objectives at the policy level. This does not mean that the associated information protection goals should be removed from the Information Assurance Program. It just means that not all security strategies can be documented at the policy level of executive mandate. As the Information Assurance Program matures, the policy can be updated, but policy updates should not be necessary to gain incremental security improvements. Additional consensus may be continuously improved using other types of Information Assurance Program documents.

Supplementary documents to consider are:

Roles and responsibilities — Descriptions of security responsibilities executed by departments other than the security group.

For example, technology development departments may be tasked with testing for security vulnerabilities before deploying code, and human resources departments may be tasked with keeping accurate lists of current employees and contractors.

Technology standards — Descriptions of technical configuration parameters and associated values that have been determined to ensure that management can control access to electronic information assets.

Process — Workflows demonstrating how security functions performed by different departments combine to ensure secure information handling.

Procedures — Step-by-step instructions for untrained staff to perform routine security tasks in ways that ensure that the associated preventive, detective, and/or response mechanisms work as planned.

Guidelines — Advice on the easiest way to comply with security policy, usually written for non-technical users who have multiple options for secure information handling processes.

This leaves the question:

What is the minimum information required to be included in an Information Assurance Policy?

It must be at least enough to communicate management aims and direction concerning security.

It should include:

1.    Scope — should address all information, systems, facilities, programs, data, networks, and all users of technology in the organization, without exception.

2.    Information classification — should provide content-specific definitions rather than generic "confidential" or "restricted".

3.    Management goals for secure handling of information in each classification category (e.g., legal, regulatory, and contractual obligations for security may be combined and phrased as generic objectives such as "customer privacy entails no authorized clear-text access to customer data for anyone but customer representatives, and only for purposes of communicating with the customer," "information integrity entails no write access outside accountable job functions," and "prevent loss of assets")

4.    Placement of the policy in the context of other management directives and supplementary documents (e.g., it is agreed at the executive level that all other information handling documents must be consistent with it)

5.    References to supporting documents (e.g. roles and responsibilities, process, technology standards, procedures, guidelines)

6.    Specific instruction on well-established organization-wide security mandates (e.g. all access to any computer system requires identity verification and authentication, no sharing of individual authentication mechanisms)

7.    Specific designation of well-established responsibilities (e.g. the technology department is the sole provider of telecommunications lines)

8.    Consequences for non-compliance (e.g. up to and including dismissal or termination of contract)

This list of items will suffice for Information Assurance policy completeness with respect to current industry best practices, as long as accountability for prescribing specific security measures is established within the "supplementary documents" and "responsibilities" sections. While items 6 and 7 may contain a large variety of other agreed-upon details concerning security measures, it is acceptable to keep them to a minimum to maintain policy readability, relying on sub-policies or supporting documents to carry the detailed requirements. Again, it is more important to have complete compliance at the policy level than to have the policy include a lot of detail.

Note: The policy production process itself necessarily exists outside of the policy document. Documentation concerning policy approvals, updates, and version control should be carefully preserved and made available in case the policy production process itself is audited.

 

Software Release Management

Unknown

To come to fruition, software projects take investment, support, nurturing, and a lot of hard work and dedication. Good release management practices ensure that when your software is built, it will be successfully deployed to the people who want to use it. You have the opportunity to satisfy existing customers and hopefully to win new ones.

A major U.K. telecommunications provider had a problem. It needed to implement a business-critical supplier switch, which required it to re-engineer its billing and account management systems. These systems had to be in place within three months; otherwise, the organization risked losing hundreds of millions of pounds and a decline in its stock value. However, the telecom's development processes were poor, and its release management was extremely problematic and inconsistent.

The company brought us in to help deliver the software within the time constraints and to turn around a failing release management process. Within three months, we'd released both the pending releases and two scheduled releases of the re-engineered applications. Most importantly, we established a straightforward and lightweight release management process to ensure that future releases would happen on time and to the required quality. Follow along as we show you how we did it—including the mistakes we made.

1. Understand the current state of release management.

You can't begin to fix something without understanding what it is, and how and where it is broken. Our first step in improving our client's release management system was to form a detailed picture of the current release process. We began with several walk-through sessions with key individuals involved in the software process.

From these sessions, we determined that our starting point was pretty bad. When we joined the project, there was software still waiting to be released two months after being completed.

Test environments were limited and not managed, so they were regularly out of date and could not be used. Worse still, it took a relatively long time to turn around new environments and refresh existing ones.

When we arrived on the scene, regression testing was taking up to three months to manually execute. It was usually dropped, significantly reducing the quality of any software that made it to release.

Overall, morale and commitment were very low. These people had never been helped to deliver great software regularly, and it had worn them down.

2. Establish a regular release cycle.

Once we got a picture of the current state of the process, we set about establishing a regular release cycle.

If the engineering team is the heart of the project, the release cycle is its heartbeat. In determining how often to release into production, we had to understand how much nonfunctional testing was needed and how long it would take. This project required regression, performance, and integration testing.

Establishing a release cycle is vital because:

It creates an opportunity to meaningfully discuss non-functional testing that the software may need.

It announces a timetable for when stakeholders can expect to get some functionality. If they know that functionality will be regularly released, they can get on with agreeing on what that functionality will be.

It creates a routine with which all teams can align (including marketing and engineering).

It gives customers confidence that they can order something and it will be delivered.

Your release cycle must be as accurate as you can make it, not some pie-in-the-sky number that you made up during lunch. Before you announce it, test it out. There is nothing worse for a failing release process than more unrealistic dates!

We started by suggesting a weekly cycle. That plan proved unfeasible; the client's database environment could not be refreshed quickly enough. Then we tried two-week cycles. There were no immediate objections from the participants, but it failed the first two times! In the end, two weeks was an achievable cycle, once we overcame some environment turnaround bottlenecks and automated some of the tests.

Finally, we established a cycle whereby, every two weeks, production-ready code from the engineering team was put into the system test. Then two weeks later, we released that code into production.

Remember: your release cycle is not about when your customer wants the release. It's about when you can deliver it to the desired level of quality. Our customers supported our release cycle because we engaged them in determining it. Your ability to deliver at the desired quality is the only consideration in determining release regularity.

3. Get lightweight processes in place. Test them early and review them regularly.

If there is one single guiding principle in engineering (or reengineering) a process, it is to do a little bit, review your results, and then do some more. Repeat this cyclic approach until you get the results you want.

Lightweight processes are those that do not require lengthy bureaucratic approvals or endless meetings to get agreement. They usually require only the minimum acceptable level of inputs and outputs. What they lack in bulk and bureaucracy, they make up for in response to change and popular adoption!

Underpinning this approach is the thorny issue of documentation. You need to record what you did and how you did it. Otherwise, what do you review and how do you improve?

We don't mean the kind of documentation that endangers rainforests and puts its readers to sleep. We mean documentation that people (technical and otherwise) can read and act on.

The engineering team chose Confluence—a commercial tool—to collaboratively document their work. They used the software to create minimal but effective documentation of what they were agreeing to build in every cycle of work. They recorded what they built, how they built it, and what was required to make it work. We saw the value in this approach and rolled it out (both the approach and the tool) to everyone else involved in the process.

Initially, we suggested a sequence of tasks to release the software we got from the engineering teams. It covered how we took delivery from the source control management system; what packages would be called and how each element (executable code, database scripts, etc.) would be run and on which platforms. Then we did a dry run, using dummy code for each element. We tested our sequence, documenting what we did as we did it. This formed the basis of the installation instructions.

The next step was to get the people who would be deploying the real release to walk through another dry run, using only our documentation. They extended, amended, and improved our instructions as they went through. The process became a more inclusive one where everyone contributed to the documentation; since they'd been part of its definition, the process became more widely adopted with better quality.

After each release, we reviewed the process. We examined the documentation and identified changes made during the release. Every time, we looked at how the documentation could be improved and fed the enhancements back into the process.

4. Establish a release infrastructure early.

Your release infrastructure is anything that needs to be in place to deploy the software and enable users to use it. Your obligation to the customer is not just that you build great software; it is that it's available for them to access and use.

Crucial to getting a good release process is figuring out what you need to have in place to make it available to the customer—before the engineering team is done building the software.

The release infrastructure covers the hardware, storage, network connections, bandwidth, software licenses, user profiles, and access permissions. Human services and skills are part of the release infrastructure, too. For example, if you require specialist software installed and configured, it's not smart to exclude the availability or cost of getting such skills into your infrastructure plan.

You must discover, as early as you can, hidden bottlenecks in procuring the required hardware or the missing skills (say, to configure secure networks). You need to resolve them before they hold up your delivery.

This isn't trivial. We strove to get our release infrastructure in place as soon as we started on the project. Even after six weeks' lead time, we were still waiting on specialist memory and hard drives for the test servers!

5. Automate and standardize as much as you can.

Automation enables you to do repetitive tasks without tying up valuable human resources. Standardizing ensures that your automation's inputs and outputs are consistent every time.

Before our involvement with the project, the engineering teams manually crafted a deployable package. A new package was not guaranteed to be the same as the last one; in fact, it was not even guaranteed to be the software they had been building, much less guaranteed to work! It often took the tech staff days to create a package with the features they were delivering in a structure that could be deployed.

We immediately drew up a structure and acceptance criteria for the deployable package the team was delivering to us and helped them standardize its packaging. This triggered the implementation of automated processes to build the software in a consistent structure for every release point.

Suddenly, the packaging of the software for release was not even an issue. Because we had automated the verification of the acceptance criteria—for example, that code must be unit tested before delivery and test deployed to ensure that it could be deployed—we had guaranteed its executability. As a result, we were able to package, version, test, and deploy finished code with a single command in a very short time.
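That single command amounted to a fixed, scripted sequence of verified steps. The sketch below shows the general shape of such a pipeline in Python; the step scripts named here are hypothetical stand-ins, not the actual tooling we used:

import subprocess
import sys

# Each step must succeed before the next runs; any failure aborts the release.
STEPS = [
    ["pytest", "tests/unit"],               # acceptance criterion: unit tests pass
    ["python", "build_package.py"],         # produce the standard package layout
    ["python", "tag_version.py"],           # stamp the release version
    ["python", "deploy.py", "--env=test"],  # test-deploy to prove deployability
]

def release() -> None:
    for step in STEPS:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            sys.exit("release aborted: step failed (" + " ".join(step) + ")")
    print("package verified, versioned, and deployed")

if __name__ == "__main__":
    release()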

But automation did not stop there. With each development cycle, we had even more regression tests to do. The existing regression tests would have taken three months to manually execute; as a result, the releases were never properly tested.

Our newly established release cycle meant that a release had to be regression, performance, and integration tested within two weeks for us to release it into production. We could handle the different types of testing (integration and performance) by having separate environments for each type. But how would we fit three months of regression tests into a two-week window?

First, we initiated a prioritization exercise. The customer identified the highest-priority regression tests: the minimum they would accept as proof that the old functionality still worked. Then we set about automating this set. Subsequent acceptance tests also became automated, ensuring that we could regression test every release in hours rather than days.
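One common way to implement this kind of prioritization today is with test markers. The sketch below uses pytest markers; the marker names, tests, and billing helper are all hypothetical. The customer-agreed high-priority subset runs on every release, while the full suite runs on a slower schedule:

import pytest

def generate_bill(account_id: int) -> float:
    # Hypothetical stand-in for a real billing call.
    return 100.0

@pytest.mark.high_priority
def test_billing_still_produces_a_charge():
    # Customer-agreed proof that core functionality still works.
    assert generate_bill(account_id=42) > 0

@pytest.mark.low_priority
def test_bill_is_a_number():
    assert isinstance(generate_bill(account_id=42), float)

# Per release:  pytest -m high_priority
# Nightly:      pytest
# (Register the markers in pytest.ini to avoid unknown-marker warnings.)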

6. Establish positive expectations.

If getting software released is important to you, don't keep it a secret. Our teams improved their commitment to deliver the software release when they knew it was important.

We backed up this importance by establishing that the designated release manager would expect the software to be ready when the teams agreed it would be ready. We got the program manager (who effectively was our customer) to explain to the teams why the release was important. (Ultimately it boiled down to losing millions of pounds!)

We requested that the software delivered by the engineering teams conform to a standard (versioned, tested, documented, and packaged); we established that we would request this standard package for every release cycle. We needed to explain why we wanted the software in this way (it made our automated process easier and more consistent) and we integrated the team's feedback into the process.

Establishing positive expectations is a really good way to empower everyone involved in the process. We were not given any executive authority, so there was no fear of sanction or sacking. Instead, we tapped into the power of positive expectation to get people on board to help us improve the release process. We had individuals making key decisions (which they never felt able to make before) because "Mike and Tym need this software by Thursday and we said we would deliver it."

7. Invest in people.

No matter how much you spend on hardware, software, and fancy processes, without the commitment of team members you will not enjoy sustainable success in releasing your software. Heck, you may not even end up with any software to release!

You probably thought we were going to talk about getting the right people and rewarding them well or that we would harp on the tools and skills teams need to do their jobs. The truth is that you know you should get the right people for your teams (the definition of "right" is different from business to business), you should reward them adequately for the value they deliver and yes, you should ensure they have the tools and skills they need.

Our basic assumption is that people are inherently interested in doing good work. If you want the people in your teams to care about your product and about doing a good job, you have to first demonstrate that you care about what is important to them. From the outset of the project, we formed an excellent rapport with everyone on the teams, based on mutual respect and understanding. We demonstrated that we were flexible about personal challenges and we did whatever we could to help. Whether this was buying lunch, fetching drinks, organizing training and advice, listening to problems, or playing devil's advocate, we did whatever was needed to make each person feel valued as a critical part of the process.

When we came to the project we found a general sense of apathy. Some longer-term permanent employees were simply waiting for the redundancy package; others were never asked to do anything because they had never done anything right. It took a lot of relationship-building investment of time and positive affirmation to get many people back to a point where they cared about delivering personal value to the process.

Release management is a really important part of any software project and is not often given the attention it deserves. There are lots of other great hints, tips, and observations we can share about our experience of straightening out the release process of this medium-size telecom enterprise. But these are the seven most important for us in this particular case, though we suspect that they are pretty good ideas for any case.

Good release management takes hard work, resolve, and great communication; however, the greatest skill is the ability to review, learn, and adapt improvements.


IT Boom For Hospitality Industry


Information is the key to decision-making in any business; therefore, getting the right information at the right time, at the right place, and faster makes a lot of difference in any business – especially in the hospitality business, where decisions at some levels are taken instantly. In this context, the age-old phrase “Garbage In and Garbage Out” is valid even today, as incorrect information may lead to problems. It is not uncommon that one may have a good computer system yet may not be successfully getting the right information for the business. 

It is a combination of the Right People and the Right System that makes a business successful. Although many developments have taken place in computers and their applications, making full use of them rests entirely with the person using them. Computers cannot replace men! With this in mind, I would like to offer a few thoughts that may be useful to the hotel fraternity.

There was a time when everything from simple expense statements and general ledgers to balance sheets was written manually. The most difficult front office operations used to be monitored manually on long sheets of paper. Computer pundits then came out with solutions specific to the hotel industry, making life easier for front-end staff, who need to spend most of their time attending to guests rather than looking at paper. Information and good service are the keys to success in the hospitality industry. 

Today, computers do magic for front-end staff, enabling them to devote more time to attending to guests’ requirements without compromising the Standard Operating Procedures (SOP). From the time of reservation until the customer checks out of the hotel, everything is recorded and the data is available. The computer system tracks the guest's requirements, likes and dislikes, wants, and satisfaction levels in a readable way that helps the hotel enhance its future services.

To achieve good results through a computer system, one should first understand one’s requirements in terms of speed of information, kinds of reports, formats, etc.; second, the flow of work that takes place in each activity; third, the procedures; and last, their effective implementation. All of this becomes the system component. Once this is understood, the next step is to write or customize a program accordingly through the software. The software cannot work without hardware. A good Information Technology system comprises three components: Systems, Software, and Hardware.

Selecting the right type of system is most important for any hotel operation. Most hotels use special software made for hotels, generally called a Property Management System (PMS). A PMS comprises both front-end and back-end solutions. Various other solutions are not part of the PMS but get interfaced with it. Very few property management systems have integrated solutions built in with both front and back ends, like IDS Fortune Enterprise. Choosing the right system requires expertise and knowledge about hotel operations. 

Normally, all systems come with a Rooms Division (Front Office System and Housekeeping Module), a Food and Beverage Division (Point of Sale), and a Back End system (Accounting, Inventory, and Human Resources). 

System requirements differ for each facility, and the solution should preferably be cost-effective. Not all hotels require the most expensive computer systems. Small hotels need a simpler system than big operations, where complicated services, standards, data assimilation, and decision-making tools are required. Many big operations require various interfaces, such as telephones, Internet, interactive television, door locking systems, yield management, global distribution systems, Visa/MasterCard, etc. 

All of these can be automated through the Property Management System efficiently and effectively. This not only reduces manpower but also reduces the mistakes that happen when these tasks are handled manually.

While choosing the system, a proper evaluation of the PMS has to be made. The evaluation should consider user friendliness, menu-driven navigation, key-defined access, a smaller number of keystrokes, easy access to required data, visual impact, meaningful reports, various levels of security access, the possibility of customization at the user level, etc.

Some systems are stronger in the front end and weaker in the back end. Some properties may require a strong front end while the back end need only be reasonable. Properties such as hotel apartments generate more room-oriented business, and hence the Front Office system must be stronger. Similarly, a property with various food and beverage outlets must use the right Point of Sale (POS) system. 

One has to choose a system that is sufficient for the property, depending on the number of keys, food and beverage outlets, other minor operating departments, and the facilities and services offered. Often, a small operation pays huge sums of money for a very powerful system, or vice versa: many of the system's features may go unused or under-utilized, while a big operation may end up with a system that does not offer sufficient features.

Good IT personnel should have knowledge of all three components, i.e., System (flow of each activity), Software (that translates the activity in measurable terms both quantitatively and qualitatively), and Hardware (Media through which we can see these activities). All this should reflect primarily guest satisfaction, staff satisfaction, management satisfaction, and owner satisfaction. 

Most PMS software gives generic solutions. Customization is important to achieve what one wants from the chosen system. Some PMS software is customization friendly, making customization easier; there is also rigid software that does not allow customization at all. It all depends on the architecture used to build the software. User-friendly software will allow customization without structural changes.

Many times, I have come across hotels not using their software to its full extent. Although the system is capable of delivering various reports and uses, they are not fully utilized due to a lack of proper training and induction in using the software. This happens when properties are opened in a hurry without allowing sufficient time for training. Due to the vastness of a PMS, a person has to undergo training for a minimum of two months to understand the complete system; of those two months, one month must be on-the-job training. Most users will come across various problems on the real job. A good PMS supplier will be able to give solutions to all problems that have logical answers. Proper training is the key to effective implementation.

While choosing PMS software for a property, one has to derive the guests' needs at the front end on one side and the needs of the management and the owner at the back end on the other. Staff should be able to use the system comfortably in achieving both ends. One has to set the right parameters in the system so it can produce meaningful reports that make decision-making easier and faster at all levels. This, as said before, requires expertise and knowledge of all three components of IT.

Today, computers and software can solve the most complicated logical problems in any operation. Computers make life easier by producing the right reports at the right time, which makes the decision-making process easier and faster. A computer system can flag mistakes, point out opportunities, and prompt proactive corrective action. Yield Management in the rooms division and Menu Engineering in the food and beverage division are examples of such functions. The computer system can be used to enable both planning and control functions to achieve the objectives of the organization.

Yet man made the computer, and not the other way around. One has to understand that man is both intelligent and intellectual. The computer is made with the intellect of man and hence works only with the logic of the mind. The logic of the mind cannot go beyond mathematics; intelligence is beyond mathematics. 

Man cannot be replaced at any level. Despite having the most advanced system, one has to rely on human supervision to gauge the level of satisfaction guests experience in a hotel. An expression of satisfaction from guests can be seen only when you meet them eye to eye.

 

 

Marcus Aurelius’ 10 Rules For Being An Exceptional Leader



The Roman emperor Marcus Aurelius ruled from 161 to 180 A.D. and maintained a reputation as the ideal wise leader Plato called the "philosopher king."

His book "Meditations" has inspired leaders for centuries because of its timeless wisdom about human behavior.

It's a collection of personal writings from the chaotic last decade of his life. This turmoil inspired him to develop his interpretation of Stoic philosophy, which focused on accepting things out of one's control and maintaining mastery over one's emotions.

We've taken a look at a section from Book 11 in which Marcus writes reminders to himself on how to be a great leader. Using Gregory Hays' accessible translation from the ancient Greek (Marcus wrote in the language of his philosophical heroes), we've broken his 10 points down into further simplified language, contextualized by the rest of Marcus' ideology.

Here are 10 things every great leader should know:

1. Understand that people exist to help one another.

Marcus believed that even though there will always be people who live selfishly and those who want to destroy others, mankind was meant to live in harmony. "That we came into the world for the sake of one another," he writes.

Within society, leaders such as himself emerge, and they must be the guardians of their followers.

2. Be mindful of others' humanity.

Remember that every one of your followers, every one of your superiors, and every one of your enemies is a human being who eats and sleeps and so forth. It sounds obvious, but it is easy to belittle or magnify the importance of others when you are deciding between them.

Remember that every person has dignity and pride.

3. Realize that many mistakes, even egregious ones, are the result of ignorance.

When a person makes a decision that offends you, Marcus writes, first consider whether they were "right to do this" in the sense that they are acting in a morally acceptable way, even if it is against your self-interest. In that case, do not spend energy complaining about it.

If, however, they are behaving in a reprehensible way, consider their actions to be based on ignorance. It's for this reason that many of these offenders "resent being called unjust, or arrogant, or greedy," Marcus writes. When dealing with your followers, punishment or chastisement should thus be done educationally.

4. Do not overly exalt yourself.

Leaders should indeed take their leadership roles seriously, but not in a way that makes them feel godlike in some way.

Remember, "you've made enough mistakes yourself," Marcus writes. "You're just like them." And if you've managed to avoid some of the mistakes your followers make, then recognize that you have the potential to falter and do even worse.

5. Avoid quick judgments of others' actions.

Sometimes what you initially perceive as your followers' or your competition's mistakes are wiser and more deliberate than you think.

"A lot of things are means to some other end. You have to know an awful lot before you can judge other people's actions with real understanding," Marcus says.

6. Maintain self-control.

While it is natural to react to an offense by losing your temper or even becoming irritated, it is in no way constructive. To maintain control over your emotions, Marcus writes, remember that life is short.

You can choose to spend your time and energy languishing over things that have already happened, or you can choose to be calm and address any problems that arise.

7. Recognize that others can hurt you only if you let them.

Think about a time when someone insulted you, for example. You decided to let their words hurt you when you could have instead pitied them for being ignorant or rude.

The only actions that should truly hurt you, Marcus writes, are things you do that are shameful, since you are in control of your self-worth and values.

8. Know that pessimism can easily overtake you.

It is common to have strong emotional reactions to disasters, but behaving in this way only keeps you from addressing the challenges that arise and fills you with powerful negative thoughts.

"How much more damage anger and grief do than the things that cause them," Marcus says.

9. Practice kindness.

Sincere kindness is "invincible," Marcus writes, and more powerful than any negative transgression. It takes a strong leader to set aside ego-based emotions and behave with compassion.

"What can even the most vicious person do if you keep treating him with kindness and gently set him straight - if you get the chance - correcting him cheerfully at the exact moment that he's trying to do you harm," Marcus says.

10. Do not expect bad people to exempt you from their destructive ways.

While great leaders can do everything possible to behave constructively and compassionately, they must also understand that some find meaning in destroying others. It is not only foolish, Marcus writes, but "the act of a tyrant" to think that you can try to change these kinds of people or persuade them to treat you differently.


Mastering The Release Management Process

Unknown

Release management helps you preserve customers' production systems when new software and hardware are deployed.

The release management process is a demanding operation. Its goal is to preserve the integrity and availability of production systems while new software and hardware are deployed.

 

Several processes are included under the umbrella of release management:

·     Planning software and hardware releases

·     Testing releases

·     Developing software distribution procedures

·     Coordinating communications and training about releases

The release management process is the bridge that moves assets from development into production.

 

Planning Releases

Planning releases is often the most time-consuming area of the release management process because there are so many factors that must be taken into consideration. For example, when deploying a new sales support system, the release managers must address:

·     How to distribute client software to all current users

·     How to migrate data from the current application's database to the new database with minimal disruption to database access

·     How to verify the correct migration of data

·     How to uninstall and decommission the applications replaced by the new system

·     How to verify that all change control approvals are secured

Each of these issues breaks down into a series of more granular tasks. Consider distributing client software: release managers must account for variations in the OSs and patch levels of client devices, the need for administrative rights to update the registry during installation, and the possibility of conflicting or missing software on clients.
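As a sketch of what such pre-flight checks might look like, the snippet below tests operating system support, free disk space, and administrative rights before an installation proceeds. The supported platforms and the 500 MB space floor are assumptions made for illustration, not figures from any real deployment tool.

```python
# Minimal pre-install prerequisite check for a client rollout.
# Supported platforms and the free-space floor are illustrative
# assumptions, not taken from any specific deployment tool.

import ctypes
import os
import platform
import shutil
import sys

MIN_FREE_BYTES = 500 * 1024 ** 2  # assume the installer needs ~500 MB

def is_admin() -> bool:
    """True if we can write machine-wide settings (registry, /etc)."""
    if platform.system() == "Windows":
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    return os.geteuid() == 0

def checks_pass() -> bool:
    if platform.system() not in ("Windows", "Linux"):
        print("unsupported operating system")
        return False
    if shutil.disk_usage(os.path.abspath(os.sep)).free < MIN_FREE_BYTES:
        print("not enough free disk space")
        return False
    if not is_admin():
        print("administrative rights required for installation")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if checks_pass() else 1)
```

Running a check like this on every target, and logging the failures, turns "the install failed on some machines" into a list of specific, fixable causes.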

 

Testing and Verifying Releases

Release managers can play an important part in the testing phase of the software development life cycle; the key areas for testing and verification are:

 

·     Software testing

·     Data migration testing

·     Integration testing

In each case, the testing process should constitute the final test and primary verification that the newly deployed applications operate as expected in the production environment.

Software Testing

Software should be thoroughly tested before it is released. In an ideal world, software developers work in the development environment and deploy their code to a testing and quality assurance environment that is identical to the production environment. It is in the test environment that integrated module testing and client acceptance testing are performed. This is not always possible: large production environments may not be duplicated in test environments because of cost and other practical limitations. In such cases, it is especially important that release managers work closely with software developers.

With responsibility for deploying software, release managers can provide valuable implementation details about the production environment that developers should test. For example, release managers will have information about the types of client devices and the types of network connectivity supported as well as other applications that may coexist with the system under development. Release managers may need to address data migration issues as well.

Data Migration Testing

In addition to supporting software developers on application testing and quality assurance processes, release managers may also have to support database administrators who are responsible for migrating data between applications. When the release of a new application entails shutting down one system, exporting data, transforming it to a new data model, and importing it into the new system, release managers will share responsibility for ensuring the data is extracted, transformed, and loaded correctly. Again, this process should be thoroughly tested before release, but realistic data loads are not always possible in test environments.

Integration Testing

Integration testing is the process of testing the flow of processing across the different applications that support an operation. For example, an order processing system may send data to a business partner's order fulfillment system, which then sends data to a billing system and an inventory management system. Certainly, these flows will have been tested before deployment, but real-world conditions vary and uncommon events can cause problems. For example, spikes in network traffic can increase the number of server response timeouts, forcing an unacceptable number of transactions to roll back. In this case, it is not that the systems have a bug that is disrupting operations, but that the expected quality-of-service (QoS) levels are not being maintained. Testing and verifying software functions, data migration, and integrated services can easily be overlooked as "someone else's job," but release managers have to share some of this responsibility.
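The timeout scenario deserves a sketch. One common coping pattern is to retry a slow downstream call with exponential backoff and roll the transaction back only when the retries are exhausted; integration tests should exercise exactly this path. The call_order_service function below is a hypothetical stand-in, and its failure rate is simulated.

```python
# Sketch of retrying a downstream call that times out under load.
# The downstream service and its 30% failure rate are simulated.

import random
import time

class Timeout(Exception):
    pass

def call_order_service() -> str:
    """Hypothetical stand-in for a partner's order fulfillment API."""
    if random.random() < 0.3:  # simulate a traffic spike
        raise Timeout("no response within deadline")
    return "ok"

def submit_with_retries(max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        try:
            return call_order_service()
        except Timeout:
            if attempt == max_attempts:
                raise  # caller rolls the transaction back
            time.sleep(0.1 * 2 ** attempt)  # exponential backoff

try:
    print(submit_with_retries())
except Timeout:
    print("retries exhausted; rolling the transaction back")
```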

Software Distributions

Software distribution entails several planning steps. At the aggregate level, release managers must determine whether a phased release is warranted, and if so, which users will be included in each phase. Phases can be based on several characteristics, including:

·     Organizational unit

·     Geographic distribution

·     Role within the organization

·     Target device

When deploying new software or major upgrades, a pilot group often receives the software first. This deployment method limits the risks associated with the release. (Even with extensive testing and evaluation, unexpected complications can occur—especially with end users' response to a new application).
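To illustrate how the phasing characteristics above can be applied in practice, here is a minimal sketch that assigns users to rollout phases. The user records, field names, and phase rules are invented for illustration and do not come from any particular deployment system.

```python
# Sketch of assigning users to rollout phases by organizational unit,
# geography, and role. All records and rules are illustrative.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    org_unit: str
    region: str
    role: str

def phase_for(user: User) -> int:
    """Pilot (1), early adopters (2), or general release (3)."""
    if user.org_unit == "sales" and user.role == "power_user":
        return 1  # pilot: the group best able to spot and report problems
    if user.region == "EMEA":
        return 2  # early adopters limited to one geography
    return 3      # everyone else waits for the general release

users = [
    User("alice", "sales", "EMEA", "power_user"),
    User("bob", "sales", "APAC", "manager"),
    User("carol", "finance", "EMEA", "power_user"),
]

for u in users:
    print(f"{u.name}: phase {phase_for(u)}")
```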

When distributing software, several factors must be considered:

·     Will all clients receive the same version of the application? Slightly different versions may be required for Windows 2000 (Win2K) clients and Windows XP clients.

·     Will all clients receive the same set of modules? If a new business intelligence application is to be deployed, power users may need the full functionality of an ad hoc query tool and analysis application, while managers and executives may require only summary reports and basic drill-down capability.

·     How will the installation recover from errors or failure? Downloads can fail and need to be restarted. There may be a power failure during the installation process. Disk drives can run out of space. In some cases, the process can restart without administrator intervention (for example, when the power is restored) but not in other cases (such as when disk space must be freed).

·     How will the installation be verified? Depending on the regulations and policies governing IT operations, differing levels of verification may be required. At the very least, the configuration management database (CMDB) must be updated with basic information about the changes (a sketch follows this list).
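As a sketch of those last two points, the snippet below verifies an installed binary against a hash published with the release and builds the change record a CMDB might receive. The file path, version strings, and record fields are illustrative assumptions; a real tool would submit the record to an actual CMDB service.

```python
# Sketch of post-install verification plus a CMDB change record.
# Paths, versions, and the record layout are illustrative assumptions.

import hashlib
import json
import pathlib
from datetime import datetime, timezone

EXPECTED_SHA256 = "..."  # hash published with the release (placeholder)
INSTALLED_BINARY = pathlib.Path("/opt/salesapp/bin/salesapp")  # assumed path

def verify_install(path: pathlib.Path, expected: str) -> bool:
    """Compare the installed binary's hash against the release manifest."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected

def cmdb_record(host: str, app: str, version: str, ok: bool) -> str:
    """Build the change record the CMDB should receive after rollout."""
    return json.dumps({
        "host": host,
        "application": app,
        "version": version,
        "verified": ok,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

if __name__ == "__main__":
    ok = INSTALLED_BINARY.exists() and verify_install(
        INSTALLED_BINARY, EXPECTED_SHA256)
    print(cmdb_record("client-042", "salesapp", "2.1.0", ok))
```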

Software distribution is the heart of the release management process, but the ancillary process of communication and training is also important.

Communications and Training

The goal of communication in release management is to make clear to all stakeholders the schedule and impact of releases. This is the responsibility of release managers.

Training users and support personnel on the released system is not the responsibility of release managers, but training managers and release managers should coordinate their activities. When training occurs too far in advance of, or too long after, the release of the software, it may be of little use; users may forget what they were taught, or may have already learned the basics by the time training occurs.

The release management process is a bridge from project-oriented software development and application selection to production operations. Although testing can be a well-defined and well-executed part of the development life cycle, release management still maintains a level of testing and verification responsibilities. In addition, the core operations of planning, software distribution, and communications constitute the bulk of what is generally considered release management.


Maybe you have heard about Cloud Computing


Maybe you have heard about Cloud Computing, maybe not. One thing is for sure: it is all the buzz, and companies like Google and Microsoft are investing in it, along with many others. Here is a quick primer on Cloud Computing: what it is, how it can be used, and more.

As the song says, “Let’s start at the very beginning because it’s a very good place to start." For me, the beginning is a definition.  Cloud Computing is Internet computing, where "cloud" is a metaphor for the Internet.  Using SaaS (software as a service), Web 2.0, and other virtual technologies, applications are provided to users via the net with the data stored on the provider’s servers.  In other words, it is the Internet version of remote computing, just way more virtual.

Cloud Computing has its roots in the service bureau concepts of the 1960s. The "cloud" part dates back to the 1990s, when the term was used to refer to ATM (Asynchronous Transfer Mode) networks, and network diagrams still use a cloud symbol to represent the Internet. So don't let the term "cloud" mess you up--it simply represents the Internet. Cloud Computing provides an alternative to investing in one's own infrastructure and software. Instead, companies can subscribe to an online service using a per-use model, thus reducing capital investment and making computing a variable rather than a fixed cost.

Cloud Computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is Cloud Computing? I've heard it called a service, a platform, and even an operating system. Some even link it to such concepts as grid computing -- which is a way of taking many different computers and linking them together to form one very big computer.

A basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.

Cloud Computing is a Service

The simplest thing that a computer does is allow us to store and retrieve information. We can store our family photographs, our favorite songs, or even save movies on it. This is also the most basic service offered by cloud computing.

Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.

First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.

Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.

Third, Flickr provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you provide yourself with data security by creating a backup on the web. And while it is always best to keep a local copy as well -- on your computer, a compact disc, or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is to lose your images.

This is also where grid computing comes into play. Beyond just being used as a place to store and share information, cloud computing can be used to manipulate information. For example, instead of using a local database, businesses could rent CPU time on a web-based database.

The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them if you are cut off from the Web.

Cloud Computing is a Platform

"The web is the operating system of the future." While not exactly true -- we'll always need a local operating system -- this popular saying captures the idea that the web is the next great platform.

What's a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system. Java is a platform even though it is not an operating system.

Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small offices.

But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes from web mashups to Facebook applications to web-based massively multiplayer online role-playing games. With new technologies that help web applications store some information locally -- which allows an online word processor to be used offline as well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.

Cloud Computing and Interoperability

A major barrier to cloud computing is the interoperability of applications. While it is possible to insert an Adobe Acrobat file into a Microsoft Word document, things get a little bit stickier when we talk about web-based applications.

This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.

Ignoring for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into its spreadsheet, this kind of sharing creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, but we would also need a system to maintain a certain level of security for this type of data sharing.

Possible? Certainly, but it isn't anything that will happen overnight.

What is Cloud Computing?

This brings us back to the initial question. What is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.

What does this mean to us?

With the "cloud" doing most of the work, this frees us up to access the "cloud" however we choose. It could be a super-charged desktop PC designed for high-end gaming, or a "thin client" laptop running the Linux operating system with an 8 gig flash drive instead of a conventional hard drive, or even an iPhone or a Blackberry.

We can also get the same information and perform the same tasks whether we are at work, at home, or even at a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.


Prevalent Myths surrounding the Cloud

Unknown

See how experts and industry analysts take on prevalent myths surrounding the cloud.

While the Cloud eliminates numerous challenges in environment management, particularly in the area of deployment, it also creates new ones in areas such as change monitoring and configuration management.

The myth has been propagated that monitoring applications in the cloud is only slightly different from monitoring traditional internal enterprise applications. That assumption is far from the truth.

To bring more clarity to managing the data center in a cloud platform, we illustrate and explore some of the prevalent and persistent myths surrounding cloud-based operations, revealing the truth.

Myth: Cloud Configuration Management is Less Complicated for IT Operations

Take the management of the deployment automation infrastructure. This is yet another software system that must be installed and maintained.

There is also a mistaken assumption about script-driven deployment: that you don't need to check the scripts. However, like any other software, deployment scripts can be quite sophisticated, executing different tasks based on their parameters.

The actual results of script execution, i.e., the resulting configuration, need to be analyzed and controlled together with the scripts themselves. And when errors occur in an automated scenario, they happen on a much larger scale, so additional processes and tools are required to recover from them.
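A sketch of what controlling "the actual configuration" can mean in practice: compare the configuration the scripts were supposed to produce with what the environment actually reports, and flag the drift. The keys, values, and the actual_config stand-in below are invented for illustration.

```python
# Drift detection sketch: desired configuration vs. what a deployed
# environment reports. All keys and values are illustrative.

DESIRED = {"app_version": "2.1.0", "workers": 8, "tls": True}

def actual_config() -> dict:
    """Hypothetical stand-in for querying the deployed environment."""
    return {"app_version": "2.1.0", "workers": 4, "tls": True}

def drift(desired: dict, actual: dict) -> dict:
    """Map each mismatched key to (wanted, found)."""
    return {key: (want, actual.get(key))
            for key, want in desired.items()
            if actual.get(key) != want}

mismatches = drift(DESIRED, actual_config())
if mismatches:
    for key, (want, got) in mismatches.items():
        print(f"drift in {key}: wanted {want!r}, found {got!r}")
else:
    print("environment matches the scripts' intent")
```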

Cloud configuration management is also affected as there is no control of artifacts or monitoring across environments, making partial deployment rollback extremely difficult. When deployment errors occur, lacking the ability to easily investigate the errors and recover can lead to failed launches, with costly downtime delays.

Let's Trade Myths For Reality: What Are You Doing In The Cloud?

In my oodles of conversations with enterprise clients about the cloud, there isn't a day that goes by without some myth about cloud use coming up. It's time we, together, did something about this.

If it's not "devs are only using it for test & dev," then it's "all those cloud apps are noncritical experiments." The other common ones are "it's all startups and web businesses" and "nothing critical is going there because of security." I use our ForrSights survey data, aggregated learnings from client inquiries, and evidence gathered through case study analysis to refute these rumors - all of which are untrue - but some beliefs just won't die. This week, we published an update to our definitive report from 2008, "Is Cloud Computing Ready for the Enterprise?", which shows that, yes indeed, the leading clouds have matured to the point that there are few legitimate excuses left for not using them. Who isn't ready are the enterprises - IT ops teams in particular. And frankly, many enterprise IT ops teams aren't moving quickly enough to get ready for the cloud because they don't see the sense of urgency - they believe the myths above.

5 Big Myths Of Cloud Computing

We are living through a wave of cloud computing hype. It seems like there is a new cloud feature or product being launched by big technology companies almost every day - and sometimes concurrently - and the cloud is at the center of all the important tech industry discussions today, from job creation and destruction to the growth and decline of companies. This hype generates several false expectations and concerns that may lead companies to make bad decisions about the technology.

The simple possibility of helping people avoid bad decisions would be reason enough to look into these “myths” that surround the cloud, but other advantages may also come from this exploration: a better understanding of fundamental concepts that can help in the dialogue between vendors, early adopters and those who are still holding back.

The Myth of the Green Cloud

For a few years before the 2008 crisis hit the world's economy, being green was even more fashionable for tech companies than being in the cloud is today. Green IT movements were in full force, and some cloud vendors have once again been raising this banner, claiming that moving to the cloud is the greenest decision a company can make. The logic behind this myth is that cloud data centers can optimize the use of computing resources, making them more efficient than any privately owned data center around.

This, however, is only partly true. What most companies forget is to look at the source of the energy for their data centers. If you operate your servers in a country where most energy comes from renewable sources (such as Brazil, with its large share of hydroelectric power), and you move them to a cloud based in a country whose energy matrix is dominated by thermoelectric power (coal and oil), the net effect may be an increase in your company's carbon footprint. Any cloud is only as green as its power sources.
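A rough calculation shows how wide the gap can be. The annual energy figure and the grid carbon intensities below are illustrative assumptions, not measured data.

```python
# Rough carbon-footprint comparison for the same workload on two
# grids. Energy use and carbon intensities are assumed, not measured.

ANNUAL_KWH = 50_000  # assumed data-center energy use per year

GRID_KG_CO2_PER_KWH = {
    "mostly hydroelectric": 0.10,
    "mostly coal and oil": 0.70,
}

for grid, intensity in GRID_KG_CO2_PER_KWH.items():
    tonnes = ANNUAL_KWH * intensity / 1000
    print(f"{grid}: about {tonnes:.0f} tonnes of CO2 per year")
```

Same servers, same workload, a sevenfold difference in footprint, purely from the power source.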

The Myth of “No More IT Worries”

This is one that almost always causes trouble for first-time cloud users. People move their servers and applications to a cloud environment (IaaS or PaaS) and think that everything will be magically backed up and updated regularly, and that multiple copies will be hosted on redundant servers, without them ever having to worry about it again. This is not true. Rackspace has been sending out emails to its customers warning them that cloud servers don't come with automated backups enabled and that they must set this up themselves. The same goes for contingency servers: you must set them up manually.

The Myth of 100% Uptime

This is perhaps the source of the "no worries" myth above. Service providers, especially at the IaaS level of the cloud stack, have for some time been offering 100% uptime guarantees. What they don't seem to understand is that no technology is foolproof. I haven't heard of a single service provider that has been able to deliver on this 100% uptime promise for all customers, so this is simply a misleading promise that may make newcomers feel more at ease than they should. As I've said before, your cloud servers will eventually suffer downtime, and you had better be ready for it.

The Myth of Security

The second most discussed issue about the cloud is security, and saying that the cloud is less secure than a private setup has become something of a knee-jerk reaction for many companies. The truth of the matter is that the cloud by itself is no more (nor less) secure than anything else. On one hand, having services and data concentrated in just a few data centers makes those places much better targets; on the other hand, the concentration increases the likelihood that security patches and updates get properly applied to servers. Who is more likely to maintain updated servers with security monitoring: Rackspace, or the thousands of small businesses with Windows XP servers out there?

That is not to say that security shouldn’t be a concern. Cloud vendors tend to downplay security to such an extent that it only makes companies more worried about what is going on. What they need to understand is that to keep the cloud secure, they need to work together with their customers to establish the proper processes. And, in working together, they need to share the responsibility for the security of the environment as a whole.

The Myth of Cost Savings

Saving the best for last, we come to the greatest enduring myth about the cloud: that cloud computing will result in great cost savings for companies. It won't. Cloud computing is about the optimization of computing resources, not their reduction. It allows savings only in the sense that you no longer have to provision servers for your peak demand; instead, you can dynamically grow and shrink your capacity as necessary, paying only for what is in use. If your computing needs are fairly steady, there isn't any real gain.
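A back-of-the-envelope comparison makes the point. Every number below (prices, server counts, demand profile) is invented purely for illustration.

```python
# Fixed peak provisioning vs. elastic capacity, per month.
# All prices and server counts are invented for illustration.

HOURLY_RATE = 0.50    # assumed price per server-hour
PEAK_SERVERS = 100    # capacity needed in the busiest hours
BASE_SERVERS = 20     # capacity needed the rest of the time
PEAK_FRACTION = 0.10  # fraction of hours spent at peak
HOURS_PER_MONTH = 730

fixed_cost = PEAK_SERVERS * HOURS_PER_MONTH * HOURLY_RATE
elastic_hours = HOURS_PER_MONTH * (PEAK_FRACTION * PEAK_SERVERS
                                   + (1 - PEAK_FRACTION) * BASE_SERVERS)
elastic_cost = elastic_hours * HOURLY_RATE

print(f"provisioned for peak: ${fixed_cost:,.0f} per month")
print(f"elastic capacity:     ${elastic_cost:,.0f} per month")
# If demand were steady at PEAK_SERVERS around the clock, the two
# figures would be identical: no spikes to shave, no savings.
```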

One possible origin for this myth is the fact that, by using the cloud, startups can avoid spending large amounts of money upfront on infrastructure or software licenses. They perceive this lack of upfront investment as cost savings, even if in the long run they may spend more.

Myths are a natural part of any hype cycle. Some come from vendors who are overeager to please their customers, others from early adopters who desperately want to defend their positions. By looking at them in a detached manner, we can improve the dialogue surrounding the cloud. While I tried to cover the greatest cloud myths, this list is far from complete.

2 More Cloud Myths Busted: Lock-In And Locked-Up

The world of cloud computing grows like a weed in summer, and many assumptions are being made that just aren't correct. I've previously exposed four cloud myths you shouldn't believe. Now it's time for me to climb up on my soapbox and correct a few more.

Myth 1: Cloud computing is bringing back vendor lock-in.

The notion that using cloud computing features (such as APIs) created by one provider or another causes dreaded lock-in seems to be a common mantra. The reality is that using any technology, except the most primitive, causes some degree of dependency on that technology or its service provider. Cloud providers are no exception.

Here's the truth about technology, past, present, and future: Companies that create technology have no incentive to fly in close enough formation to let you move data and code willy-nilly among their offerings and those provided by their competitors. The cloud is no different in that respect.

We can talk about open-source distributions and emerging standards until we're blue in the face, but you'll find that not much changes in terms of true portability. As long as technology vendors' and service providers' profitability and intellectual property value trump data and code portability, this issue will remain. It's not a new situation.

Myth 2: Cloud computing use will put you in jail.

Yes, you need to consider compliance issues when moving to any new platform, including private, public, and hybrid clouds. However, stating in meetings that moving data and processes to cloud-based platforms somehow puts you at risk for arrest is a tad bit dramatic, don't you think? Yet I hear that attention-getting claim frequently.

We've been placing data, applications, and processes outside of our enterprises for years, and most rules and regulations you find in vertical markets (such as health care and finance) already take this into account. Cloud computing is just another instance of using computing resources outside your span of control, which is nothing new, and typically both perfectly legal and not at all risky. Cut out the false drama as an excuse to say no.

7 Stubborn Myths About Cloud Computing

Early adopters of the cloud had to iron out a lot of teething troubles. Because of their pioneering work, building and deploying a cloud is now much easier, making the benefits available to all kinds of businesses, says Gerry Carr, Director of Communications at Canonical.

The cloud has been the preserve of technology-focused companies until recently - but not anymore. IT departments across all industries are getting interested in what the cloud can offer - from increased service flexibility to lower capital expenditure and infrastructure costs.

Although many companies are turned on to the cloud, there are still barriers to adoption. A common one, for example, is the lingering perception that cloud environments are too complex, difficult to deploy, and onerous to manage.

Here, we show how the cloud has come of age, and de-bunk some common cloud myths that are holding back adoption.

Myth 1) Building a cloud can take months

Not anymore. The whole operation now takes just a few days. This is largely thanks to the global open-source community, which has developed great tools to speed up cloud deployment - from installing and configuring physical servers in the cloud to creating, deploying, and scaling cloud-based services dynamically.

Myth 2) You need new hardware to deploy a cloud

Not necessarily. The best cloud solutions can be deployed on your existing x86 hardware - increasing server utilization and performance. What's more, you can convert old servers into additional compute nodes - increasing your processing capacity at no extra cost.

Myth 3) There aren't many applications available for the cloud

Not true. There are now a large number of excellent open-source applications designed specifically for the cloud - from databases and web-based applications to big data applications.

Myth 4) Developers need to learn new languages to deploy services in the cloud

No. While working with newer languages and runtimes such as Go or Node.js is useful if you want your cloud-based apps to look like desktop apps, they're far from essential.

Myth 5) It's complex to move from public to private clouds (and vice versa)

Only if you choose proprietary platforms, which use their own APIs. By choosing the best open-source clouds instead, you can ensure compatibility with all the major public and private clouds, and move workloads around easily in the future.

Myth 6) The cloud isn't secure

It is if you do things right. If you take services from a public cloud provider, you should review the security service level agreement (SLA) thoroughly. If you're building a private cloud, you should standardize the way software is deployed in the cloud and regularly review your firewall rules.

Myth 7) It's hard to support a cloud environment

Not at all. There are now great support packages specifically designed for open-source clouds - both public and private.

As we've seen here, the old barriers to cloud adoption are falling away. This is largely thanks to the immense effort of the global open-source community, which is building cloud products and standards that simplify cloud deployment, management, and support.

4 Cloud Myths That Won't Go Away

You would think that rank-and-file IT staffers and leaders would understand the advantages and disadvantages of cloud computing by now. However, the misconceptions continue to show up, some of which are disconcerting. Here are a few of the most common:

If I use public clouds, I give up security.

This one is tossed at me about once a day, and I've addressed it in this blog many times. The fact is, when you use public clouds, you do not necessarily put data and processes at a security risk. The degree of risk comes down to your planning and the use of the right technologies -- just as it does in an on-premises deployment.

Cloud computing will put my job at risk.

Chances are, if you're worried about the use of some technology taking your job, you're already at risk. In reality, cloud computing won't displace many jobs in enterprise IT, but IT roles and responsibilities will change over time.

Cloud computing is an all-or-nothing proposition.

Not really. You can move to cloud-based systems, such as storage and compute services, as needed, both intersystem and intrasystem. Moreover, you can move in a fine-grained manner, shifting only certain system components, such as user interface processing or storage, and leaving the remainder on the premises. You do have to consider the colocation of data for data-process-intensive system components.

Cloud computing requires a complete replacement of the enterprise network.

This is true only if your existing network is awful and needs replacement anyway or if you plan to keep most of the data in the cloud, with the data processing occurring within the firewall (a bad architectural call). Other than that, bandwidth is typically not an issue. However, bandwidth does need to be considered and monitored, as it is a core component of the overall business systems that use cloud platforms.

3 Myths Clouding CIO Judgment

For today's CIO, the perceived barriers to cloud computing remain security, regulation, and compliance. The danger that data loss poses to brand equity, customer trust, and share price is just the same whether data is stored in a cloud computing or traditional infrastructure model.

The severity of the issue is reflected in legislation like the recent Criminal Justice and Immigration Bill, which states that the Information Commissioner's Office (ICO) now has the authority to levy fines of up to £500,000 on organizations that recklessly lose confidential or personal information.

Security quite rightly should be at the top of every CIO's agenda but several myths lead to over-simplification or indeed dangerous assumptions about cloud computing. In light of this, we explore three myths that we have encountered recently and why they may be distracting CIOs from the real questions that need to be asked.

Security and compliance are "external issues"

Whether you choose to place your data "in the cloud" or create a hosting platform from dedicated servers, security must remain your concern. Security cannot be handed over wholesale to a cloud service provider because the very real question of security policies and procedures concerns your users as well. Firewalls and the rules that govern them still stand irrespective of whether the infrastructure is virtual or physical. Likewise, the usual security processes such as changing passwords and enforcing permission levels need to be observed within your organization.

These are simple examples, but they serve to illustrate the point. Robust data protection is critical to preserving the brand value and reputation of any company. Every week there seems to be another high-profile example of a security breach undermining customers' trust in a brand, whether that is an online gaming site, a web retailer, or even a government department. Regulations regarding the security, control, and privacy of data are complex. CIOs need to be certain that their service providers can help them navigate these rules, and must clearly understand where the responsibility for applying each part of the security policy sits.

Better SLAs will give sufficient protection

To some degree, the question of SLAs reinforces the same point. If you are using a traditional managed hosting service to host your data, you will ask for a robust SLA that leaves you confident you can deliver your own SLA to the business. Businesses adopting cloud computing need to take the same approach.

However, relying on the SLA alone does not guarantee performance. It may mean there are penalties in the event of downtime, but that is cold comfort to an e-commerce organization at the height of its busiest season faced with a website that has been offline for hours. Uptime availability figures aren't enough. 99.99% uptime may sound impressive until you work out the cost of 0.01% downtime.
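The arithmetic behind that remark is worth spelling out. A quick conversion of common uptime guarantees into allowed downtime per year:

```python
# Convert an uptime percentage into permitted downtime per year.
HOURS_PER_YEAR = 365 * 24

for uptime in (99.0, 99.9, 99.99, 99.999):
    downtime_minutes = HOURS_PER_YEAR * 60 * (100 - uptime) / 100
    print(f"{uptime}% uptime allows {downtime_minutes:,.0f} minutes/year down")
```

Even at 99.99%, you have agreed to nearly an hour of downtime a year, and the SLA says nothing about whether it falls at 3 a.m. on a Sunday or at the peak of your busiest trading day.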

CIOs should be asking the same questions about cloud services as they would about any other IT service they use. What is your organization's tolerance for downtime? What are the disaster recovery and backup services available? What will happen in the event of a failure at any point in the service? This does not point to the lowest cost, best endeavors service. The economy of scale should mean that your chosen service provider can invest to minimize these failures.

CIOs have to be confident their service provider can respond and support their business, especially in the face of a "disaster". Furthermore, this should form a key part of your organization's business continuity plan.

The private cloud is inherently more secure than public cloud services

Cloud services have moved on since the first definition from NIST in 2009. The background of early public cloud services has contributed to the perception that this type of cloud has lower levels of security. A private cloud should not be seen as a guarantee of security. The private cloud is dedicated to your organization. By definition, this can reduce the risk of using a platform shared by many customers, but again it is only as secure as the policies and procedures that you enforce. Firewalls still need rules. Data centers still need physical security. A private cloud can be more secure than a public cloud, but like any other system, it is at risk of poor housekeeping and human error. Assumptions should not be made.

The decision criteria for private or public cloud implementation should be far wider than which is perceived to be more secure. As a CIO you will be asking what your organization wants to achieve. Is it cost savings, speed to market, flexibility to scale up or down, or more likely, a combination of all three?

As one of the most significant changes in IT in a generation, cloud computing can deliver real benefits in the way organizations consume IT. However, like any significant business change, careful consideration needs to be given to what the organization is trying to achieve and why. Our own annual CIO cloud research demonstrates that the majority of businesses are using or piloting cloud computing services across parts of the enterprise, but very few businesses are deploying cloud services 'in full'.

The deployment of cloud services across the entire enterprise was only 16 percent, while the deployment of cloud services 'in part' averages out at 35 percent. This demonstrates that companies are engaging in cloud computing, but very few are making or will ever make the shift to cloud computing outright. Cloud computing is not simply about buying CPU cycles at the cheapest rate, it represents a fundamental change in how we consume and take advantage of IT. The consumerization of IT is increasing this rate of change and old methods just won't hack it. While going on this journey from old methods to new is daunting, choosing who you take with you on the journey is perhaps the most important decision any CIO can make at this stage.

6 Biggest Cloud Computing Myths

It’s insecure.

“People are afraid of losing control,” says Leandro Balbinot, CIO of Brazilian retailer Lojas Renner. But “just because your data is somewhere else, doesn’t mean it’s less—or more—secure,” says Accenture CIO Frank Morrison. Test, monitor, and review. That’s the only way to mitigate risk in or out of the cloud.

It’s simple.

“Vendors will always tell you it’s a turnkey implementation,” says Carmen Malangone, global director of information management for Coty. “But moving customized systems to the cloud takes time—eight months or more to standardize and test in the new environment.” And modify cloud systems with care. “Configuration can quickly become customization and each upgrade will be a major headache,” says Malangone.

CFOs love it.

Here’s the pitch: The cloud turns sunk capital expenditure (capex) into flexible operational expenditure (OPEX). But your company may not want that. “The assumption is that there’s an economic preference for opex over capex,” says Mark White, CTO of Deloitte Consulting’s technology practice. “But not every business wants opex; some want capex.” The years of friendly capex tax depreciation left on a data center may be most important.


Only the business benefits.

Most CIOs funnel cloud savings to the business. But there’s no law against reinvesting in IT. “I take some of my cost savings and put it into team building,” says David Riley, senior director of information systems for Synaptics. “We need to keep morale high.”

It can’t be used for core systems.

The fact that it isn’t done much doesn’t mean that it can’t be done at all. Balbinot, for example, is running a billion-dollar business’ core retail systems in the cloud.

It’s always cheaper.

Malangone was looking at a cloud-based single-sign-on tool, but with each additional application and user, the bill rose. “[The tool] was a great idea,” he says. “But you have to negotiate the right price based on your expected growth.”

Dispelling the Cloud's Myths

The pace of cloud computing will only accelerate in 2012. The increasing development of information technology, and the intense focus on cost reduction, are highlighting the benefits of moving IT administration off-site. One cloud computing expert wants CFOs to be aware of the short-term challenges and long-term benefits to organizations.

"Large enterprises must realize cloud computing will not(in all cases) provide immediate cost benefits to their organizations," Sadagopan (Sada) Singam, global vice president, of cloud computing, of HCL Technologies, said in a recent interview. CEOs and CFOs "must understand how cloud computing technologies will drive long-term benefits two to three years following the initial implementation."

Many organizations look for cloud computing technologies to provide immediate and sustainable cost benefits to drive bottom-line improvements. However, there are several implementation and security issues that CFOs must address with their IT departments before a successful cloud migration takes place.

Finance chiefs and their controllers must look at information security measures beyond the minimum standards required by legal and regulatory requirements. "We consider SAS 70 (Statement on Auditing Standards) as a basic standard for cloud computing security efforts," Singam says. "Companies must recognize the value and importance of their data and ensure they have multiple backups to protect their data and information."

Cloud computing allows companies of all sizes to quickly scale their IT operations, and minimize the fixed costs traditionally associated with major implementation efforts.

However, moving to the cloud does present a unique set of challenges for organizations of all sizes. With many IT departments now reporting to the CFO, cost often becomes the key consideration during cloud implementations. When looking for the right cloud technologies, CFOs must focus on long-term productivity and value to their companies. Cloud computing is more than a fad, and companies must make sure they are focused on more than the next quarter's financial results when making their decisions on the future of their IT operations.

"Hard is soft and soft is hard when considering major changes," Singam asserts when advising CFOs on the key factors for cloud computing decisions. Organizations must focus on so much more than the hard, short-term costs when identifying the right opportunities for their organizational IT.

Coming in part two of this blog: CFOs and their IT directors must understand the importance of hard costs, but they should not forget their employees who will ultimately be impacted by these changes.

There is Only 1 Cloud-Computing Myth

The only cloud-computing myth is that there are cloud-computing myths.

Instead, there are many articles about cloud myths, an endless parade of strawman arguments put out by writers, analysts, and marketers who lecture us on why we're so stupid to believe the "myths" that the cloud is inexpensive, that it is easy to deploy and maintain, that it automatically reduces costs, and so on.

Anyone who's ever written a line of code or approved an enterprise IT contract knows there are no simple solutions and no universal solvents in their world. Never have been and never will be.

However, there are many powerful arguments in favor of enterprises migrating some of their apps and processes to the cloud, and there is a separate consumer-cloud industry that allowed me to listen to Igor Presnyakov rip through AC/DC's "All Night Long" and Dire Straits' "Sultans of Swing" on my Android phone last night.

I thank Google for the latter opportunity, even as the company remains as enigmatic as Mona Lisa about what's going on behind the scenes.

It's too bad Google is not one of our great sharers, because the enterprise IT shops of the world could no doubt learn a lot more about cloud computing from watching Google at work than they can from using Google Apps.

Find Your Cloud

But enough whining. Each organization needs to find its cloud, and this should be a rigorous, perhaps time-consuming process. Discussion of particular cloud strategies and vendors should come at the end of this process. First, figure out what you want to do and why.

A nice cost analysis is helpful, of course, but my brain starts to seize up when the term "ROI" is put into play. At this point, it becomes a contest to game the system and produce an ROI forecast that falsely advertises the direct impact of the technology on the company's business. When used to justify technology, ROI and its sinister cousin, TCO, are the enemies of business success.

A nice thing about the cloud is that the heated political and religious debates over Open Source have been (mostly) replaced by practical arguments over which specific product, framework, or architecture provides the best option for a particular initiative. If discussion of the cloud should come at the end of the overall decision-making process, discussion of Open Source should come at the end of that discussion.

Don't try to transform the organization overnight. This will happen on its own as more and more clouds float into the enterprise. And don't believe the myth that there are cloud myths. There aren't; there is only more wondrous technology that needs to be examined carefully as you continue the eternal quest to keep things as unscrewed up as possible in your organization.

     

Strategy & Technology

Unknown

Managers are confused, and for good reason. Management theorists, consultants, and practitioners often vehemently disagree on how firms should craft tech-enabled strategies, and many widely read articles contradict one another. Headlines such as "Move First or Die" compete with "The First-Mover Disadvantage." A leading former CEO advises "destroy your business," while others suggest firms focus on their "core competency" and "return to basics." The pages of the Harvard Business Review declared "IT Doesn't Matter," while a New York Times bestseller hails technology as the "steroids" of modern business.

Theorists claiming to have mastered the secrets of strategic management are contentious and confusing. But as a manager, the ability to size up a firm's strategic position and understand its likelihood of sustainability is one of the most valuable, yet difficult skills to master. Layer on thinking about technology – a key enabler to nearly every modern business strategy, but also a function often thought of as easily ‘outsourced’ – and it's no wonder that so many firms struggle at the intersection where strategy and technology meet. The business landscape is littered with the corpses of firms killed by managers who guessed wrong. Developing strong strategic thinking skills is a career-long pursuit – a subject that can occupy tomes of text, a roster of courses, and a lifetime of seminars. While this chapter can't address the breadth of strategic thought, it is meant as a primer on developing the skills for strategic thinking about technology.

A manager who understands the issues presented in this article should be able to see more clearly through seemingly conflicting assertions about best practices, be better prepared to recognize opportunities and risks, and be more adept at successfully brainstorming new, tech-centric approaches to markets.

THE DANGER OF RELYING ON TECHNOLOGY

Firms strive for sustainable competitive advantage and financial performance that consistently outperforms their industry peers. The goal is easy to state but hard to achieve. The world is so dynamic, with new products and new competitors rising seemingly overnight, that truly sustainable advantage might seem like an impossibility. New competitors and copycat products create a race to cut costs, cut prices, and increase features that may benefit consumers but erode profits industry-wide. Nowhere is this more difficult than when competition involves technology. The fundamental strategic question in the Internet era is “How can I possibly compete when everyone can copy my technology and the competition is just a click away?” Put that way, it seems like a lost cause.

But there are winners – big, consistent winners – in the world of tech. How do they do it? To think about how to achieve sustainable advantage, it's useful to start with two concepts defined by Michael Porter. A professor at the Harvard Business School, and father of the Value Chain and Five Forces concepts (see the sections at the end of this chapter), Porter is rightly considered one of the leading strategic thinkers of the last quarter century.

According to Porter, the reason so many firms suffer aggressive, margin-eroding competition is that they've defined themselves according to operational effectiveness rather than strategic positioning. Operational effectiveness refers to performing the same tasks better than rivals perform them. Everyone wants to be better, but the danger in operational effectiveness is in "sameness." This risk is particularly acute in firms that rely on technology for competitiveness.

After all, technology can be easily acquired. Buy the same stuff as your rivals, hire students from the same schools, copy the look and feel of competitor websites, reverse engineer their products, and you can match them. The fast-follower problem exists when savvy rivals watch a pioneer's efforts, learn from their successes and missteps, and then enter the market quickly with a comparable or superior product at a lower cost.

Since tech can be copied so quickly, followers can be fast indeed. Several years ago, while studying the web portal industry (Yahoo and its competitors), a colleague and I found that when a firm introduced an innovative feature, at least one of its three major rivals would match that feature in, on average, only one and a half months. When technology can be matched so quickly, it is rarely a source of competitive advantage. The phenomenon is not limited to the Web.

Tech giant EMC saw its stock price appreciate more than any other firm's during the decade of the '90s. However, when IBM and Hitachi entered the high-end storage market with products comparable to EMC's Symmetrix unit, prices plunged 60 percent the first year and another 35 percent the next. Needless to say, EMC's stock price took a comparable beating. TiVo is another example. At first blush, it looks like this first mover should be a winner, since it seems to have established a leading brand; TiVo is now a verb for all digital recording. But despite this, TiVo is a money loser, having gone years without posting an annual profit. Rival digital video recorders offered by cable and satellite companies appear the same to consumers and are bundled with pay television subscriptions, a critical distribution channel for reaching customers that TiVo doesn't control.

Operational effectiveness is critical. Firms must invest in techniques to improve quality, lower costs, and design efficient customer experiences. But for the most part, these efforts can be matched. Because of this, operational effectiveness is usually not sufficient to yield sustainable dominance over the competition.

Different is Good

In contrast to operational effectiveness, strategic positioning refers to performing different activities than rivals, or the same activities in a different way. While the technology itself is often very easy to replicate, technology is essential to creating and enabling novel approaches to business that are defensibly different than rivals and can be quite difficult for others to copy.

For an example of the relationship between technology and strategic positioning, consider FreshDirect. The New York City-based grocery firm focused on the two most pressing problems for Big Apple shoppers: selection is limited and costs are high. Both of these problems are a function of the high cost of NYC real estate. The solution? Use technology to craft an ultra-efficient model that makes an end-run around stores.

The firm’s ‘storefront’ is a website offering one-click menus, semi-prepared specials like ‘meals in four minutes’, and the ability to pull up prior grocery lists for fast re-orders – all features that appeal to the time-strapped Manhattanites who were the firm’s first customers. Next-day deliveries are from a vast warehouse the size of five football fields located in a lower-rent industrial area of Queens. At that size, the firm can offer a fresh goods selection that’s over five times larger than local supermarkets. The service is now so popular that NYC apartment buildings have begun to redesign common areas to include secure freezers that can accept FreshDirect deliveries, even when customers aren’t there.

[Figure: The FreshDirect website and images of the firm's tech-enabled warehouse operation]

The FreshDirect model crushes costs that plague traditional grocers. Worker shifts are highly efficient, avoiding the downtime lulls and busy rush-hour spikes of storefronts. The result? Labor costs are 60% lower than at traditional grocers. As for freshness, consider that while the average grocer may have seven to nine days of seafood inventory, FreshDirect's seafood stock turns each day. Stock is typically purchased directly from the docks the morning of delivery to fulfill orders placed the prior night. The firm buys what it sells, and shoplifting can't happen through a website, so loss from waste and theft plummets.

Artificial intelligence software, coupled with some seven miles of fiber optic cables linking systems and sensors, supports everything from baking the perfect baguette to verifying orders with 99.9 percent accuracy. Since FreshDirect avoids the money-sucking open-air refrigerators found in a traditional grocery store, the firm even saves big on energy (instead, staff bundle up for shifts in climate-controlled cold rooms tailored to the specific needs of dairy, deli, and produce). And a new initiative uses recycled biodiesel fuel to cut delivery costs.

Buying direct from suppliers, paying them in days rather than weeks, carrying a greater product selection, and avoiding the ‘slotting fees’ (payments by suppliers for prime shelf space) common in traditional retail all help FreshDirect negotiate highly favorable terms with suppliers. Add all these advantages together and the firm’s big, fresh selection is offered at prices that can undercut the competition by as much as 35 percent. And FreshDirect does it all with margins in the range of twenty percent, easily dwarfing the razor-thin one-percent margins earned by traditional grocers.

Technology is critical to the FreshDirect model, but it's the collective impact of the firm's differences, this tech-enabled strategic positioning, that delivers success. Operating for more than half a decade, the firm has built up a set of strategic assets that address the specific needs of the NYC grocery consumer and that are extremely difficult for any upstart to compete against. Traditional grocers can't fully copy the firm's delivery business because this would leave them straddling two markets (low-margin storefront and high-margin delivery), unable to gain optimal benefits from either. Competing against a firm with such a strong and tough-to-match strategic position can be brutal. Today there are one-third fewer supermarkets in New York City than when FreshDirect first opened for business.

But What Kinds of Differences?

The principles of operational effectiveness and strategic positioning are deceptively simple. But while Porter claims strategy is “fundamentally about being different”, how can you recognize whether your firm's differences are special enough to yield a sustainable competitive advantage?

An approach known as the resource-based view of competitive advantage can help. The idea here is that if a firm is to maintain a sustainable competitive advantage, it must control a set of exploitable resources that have four critical characteristics. These resources must be:

1) valuable,

2) rare,

3) imperfectly imitable (tough to imitate), and

4) non-substitutable.

Having all four characteristics is key. Miss value and no one cares what you've got. Without rareness, you don't have something unique. If others can copy what you have, or others can replace it with a substitute, then any seemingly advantageous differences will be undercut.
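For readers who like to make the checklist mechanical, here is a minimal sketch of the four-part screen. The Resource type and the example values are invented illustrations, not data from the text:

```python
# A minimal sketch of the resource-based view's four-part screen.
# The Resource type and the example below are invented illustrations.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    valuable: bool           # customers or the cost structure care about it
    rare: bool               # rivals don't already hold it
    hard_to_imitate: bool    # copying would force painful tradeoffs
    non_substitutable: bool  # no alternative delivers the same benefit

    def sustainable_advantage(self) -> bool:
        # All four criteria must hold; missing any one undercuts the rest.
        return all([self.valuable, self.rare,
                    self.hard_to_imitate, self.non_substitutable])

# Hypothetical example: a feature rivals can clone in weeks is valuable
# but neither rare nor tough to imitate, so it fails the screen.
portal_feature = Resource("new portal feature", valuable=True, rare=False,
                          hard_to_imitate=False, non_substitutable=False)
print(portal_feature.sustainable_advantage())  # False
```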

Strategy isn't just about recognizing opportunity and meeting demand. Resource-based thinking can help you avoid the trap of carelessly entering markets simply because growth is spotted. The telecommunications industry learned this lesson in a very hard and painful way. With the explosion of the Internet, it was easy to see that demand to transport web pages, e-mails, MP3s, video, and everything else you can turn into ones and zeros was skyrocketing. Most of what travels over the Internet is transferred over long-haul fiber-optic cables, so telecom firms began digging up the ground and laying webs of fiberglass to meet the growing demand. Problems resulted because firms laying long-haul fiber didn't fully appreciate that their rivals and new upstart firms were doing the same thing. By one estimate there was enough fiber laid to stretch from the Earth to the moon over 280 times! On top of that, a technology called dense wavelength division multiplexing (DWDM) enabled existing fiber to carry more transmissions than ever before.

The result: these new assets weren't rare, and each day they seemed to be less valuable.

For some firms, the transmission prices they charged on newly laid cable collapsed by over 90 percent. Established firms struggled, upstarts went under, and WorldCom became the biggest bankruptcy in US history. The impact was felt throughout all industries that supplied the telecom industry. Firms like Sun, Lucent, and Nortel, whose sales growth relied on big sales to telecom carriers, saw their values tumble as orders dried up. Estimates suggest that the telecommunications industry lost nearly four trillion dollars in value in just three years, much of it due to executives who placed big bets on resources that weren't strategic.

POWERFUL RESOURCES

Management has no magic bullets. There is no exhaustive list of key resources that firms can draw on to build a sustainable business. Recognizing a resource doesn't mean a firm will be able to acquire or exploit it forever. But being aware of major sources of competitive advantage can help managers recognize an organization's opportunities and vulnerabilities, and can help them brainstorm winning strategies.

Imitation-Resistant Value Chains

While many of the resources below are considered in isolation, the strength of any advantage can be far more significant if firms can leverage several of these resources in a way that makes each one stronger and makes the firm's way of doing business more difficult for rivals to match.

Firms that craft an imitation-resistant value chain have developed a way of doing business that others will struggle to replicate, and in nearly every successful effort of this kind, technology plays a key enabling role. The value chain is the set of interrelated activities that bring products or services to market. When we compare FreshDirect's value chain to traditional rivals', there are differences across every element. But most importantly, the elements in FreshDirect's value chain work together to create and reinforce competitive advantages that others cannot easily copy. Incumbents trying to copy the firm would straddle two business models, unable to reap the full advantages of either. And late-moving pure-play rivals will struggle, as FreshDirect's lead time allows the firm to develop brand, scale, data, and other advantages that newcomers lack.

Dell's Struggles: Nothing Lasts Forever

Michael Dell enjoyed an extended run that took him from assembling PCs in his dorm room as an undergraduate at the University of Texas at Austin to heading the largest PC firm on the planet. Dell's super-efficient, vertically integrated manufacturing and direct-to-consumer model combined to help the firm earn seven times more profit on comparably configured PCs than its rivals. And since Dell PCs were usually cheaper, too, the firm could often start a price war and still have better overall margins than rivals.

It was a brilliant model that for years proved resistant to imitation. While Dell sold directly to consumers, rivals had to share a cut of sales with the less efficient retail chains responsible for the majority of their sales. Dell's rivals struggled to move toward direct sales because any retailer sensing its suppliers were competing with it through a direct-sales effort could easily choose another supplier that sold a nearly identical product. It wasn't that HP, IBM, Sony, and so many others didn't see the advantage of Dell's model – these firms were wedded to models that made it difficult for them to imitate their rivals.

But then Dell's killer model, one that had become a staple case study in business schools, began to lose steam.

Nearly two decades of observing Dell had allowed the contract manufacturers serving Dell's rivals to improve manufacturing efficiency. Component suppliers located themselves near contract manufacturers, and assembly times fell dramatically. And as the cost of computing fell, the price advantage Dell enjoyed over rivals shrank as well. On top of that, the direct-to-consumer model also suffered when sales of notebook PCs outpaced the more commoditized desktop market. Notebook customers often want to compare products in person – lift them, type on keyboards, and view screens – before making a purchase decision. You simply can't do that through a website.

Dell's struggles as costs, customers, and the product mix changed all underscore the importance of continually assessing a firm's strategic position amid changing market conditions. There is no guarantee that today's winning strategy will dominate forever.

Brand

A firm's brand is the symbolic embodiment of all the information connected with a product or service, and a strong brand can be an exceptionally powerful resource for competitive advantage. Consumers use brands to lower search costs, so having a strong brand is particularly vital for firms hoping to be the first online stop for consumers. Want to buy a book online? Auction a product? Search for information? Which firm would you visit first? Almost certainly Amazon, eBay, and Google. But how do you build a strong brand? It's not just about advertising and promotion. First and foremost, customer experience counts: a strong brand proxies quality and inspires trust, so if consumers can't rely on a firm to deliver as promised, they'll go elsewhere. As an upside, tech can play a critical role in rapidly and cost-effectively strengthening a brand. If a firm performs well, consumers can often be enlisted to promote a product or service (so-called viral marketing). Consider that while scores of dot-coms burned through money on Super Bowl ads and other costly promotional efforts, Google, Hotmail, Skype, eBay, MySpace, Facebook, YouTube, and so many other dominant online properties built multi-million-member followings before committing any significant spending to advertising.

[Figure: Promotions at the end of each Hotmail message, and the 'e-mail' and 'share' links at the New York Times, enlist customers to spread the word about products and services, user to user, like a virus]

Early customer accolades for a novel service often mean that positive press (read: free advertising) will likely follow. But show up late and you may end up paying much more to counter an incumbent's place in the consumer psyche. In recent years, Amazon has spent no money on television advertising, while rivals Buy.com and Overstock spent millions. MSN's budget for promoting its search product was twenty-two times greater than Google's spend.

Also, if done well, even complex tech products can establish themselves as killer brands.

Consider that Intel has taken an ingredient product that most people don't understand, the microprocessor, and built a quality-conveying name recognized by much of the developed world.

Scale

Many firms gain advantages as they grow in size. Advantages related to a firm's size are referred to as scale benefits. Businesses benefit from economies of scale when the cost of an investment can be spread across increasing units of production or in serving a growing customer base.

Firms that benefit from scale economies as they grow are sometimes referred to as being scalable. Many Internet and tech-leveraging businesses are highly scalable since, as firms grow to serve more customers with their existing infrastructure investment, profit margins improve dramatically.

Consider that in just one year, the Internet firm BlueNile sold as many diamond rings with just 115 employees and one website as a traditional jewelry retailer would sell through 116 stores.

With lower operating costs, BlueNile can sell at prices that brick-and-mortar stores can't match, attracting more customers and further fueling its scale advantages. Profit margins improve as the cost to run the firm's single website and operate its one warehouse is spread across increasing diamond sales.
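To see why such a model is scalable, consider a toy calculation in which one fixed infrastructure cost is spread over growing unit sales. All numbers below are invented for illustration, not BlueNile's actual figures:

```python
# Toy illustration of economies of scale: one fixed infrastructure cost
# (a single website and warehouse) spread across growing unit sales.
# All numbers are invented for illustration.
fixed_cost = 10_000_000            # annual cost of site + warehouse, dollars
price, variable_cost = 1_000, 800  # per ring sold, dollars

for units in (20_000, 50_000, 200_000):
    profit = units * (price - variable_cost) - fixed_cost
    margin = profit / (units * price)
    print(f"{units:>7} units -> margin {margin:6.1%}")

# Margins climb from -30.0% to 0.0% to 15.0% as the same fixed cost
# is spread across more and more sales.
```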

A growing firm may also gain bargaining power with its suppliers or buyers. As Dell grew large, the firm forced suppliers wanting in on Dell’s growing business to make concessions such as locating close to Dell plants. Similarly, eBay can raise auction fees because of its market dominance. Sellers who leave eBay lose pricing power since fewer bidders on smaller, rival services mean lower prices.

The scale of technology investment required to run a business can also act as a barrier to entry, discouraging new, smaller competitors. Intel's size allows the firm to pioneer cutting-edge manufacturing techniques and invest three billion-plus dollars in next-generation plants. Although Google was started by two Stanford students in a trailer, the firm today runs on an estimated 450,000 to 1 million servers. The investments being made by Intel and Google would be cost-prohibitive for almost any newcomer.

Switching Costs and Data

Switching costs exist when consumers incur an expense to move from one product or service to another. Tech firms often benefit from strong switching costs that cement customers to their firms. Users invest their time learning a product, entering data into a system, creating files, and buying supporting programs or manuals – and these investments may make them reluctant to switch to a rival's effort.

Similarly, firms that seem dominant but that don't have high switching costs can be rapidly trumped by strong rivals. Netscape once held an eighty-plus percent market share in web browsers, but when Microsoft began bundling Internet Explorer with the Windows operating system and (through an alliance) with America Online, Netscape's market share plummeted.

Customers migrated with a mouse click as part of an upgrade or installation. Learning a new browser was a breeze, and with the web's open standards, most customers noticed no difference when visiting their favorite websites with their new browser.

Sources of Switching Costs

Challengers must realize that to win customers away from a rival, a new entrant must not only demonstrate to consumers that an offering provides more value than the incumbent's, but must also ensure that its added value exceeds the incumbent's value plus any perceived customer switching costs. If switching is going to cost you and be inconvenient, there's no way you're going to leave unless the benefits are overwhelming.

Data can be a particularly strong switching cost for firms leveraging technology. A customer who enters her profile into Facebook, movie preferences into NetFlix, or grocery list into FreshDirect may be unwilling to try rivals – even if these firms are cheaper – if moving to the new firm means she'll lose information feeds, recommendations, and time savings provided by the firms that already know her well. Fueled by scale over time, firms that have more customers and have been in business longer can gather more data, and many can use this data to improve their value chain by offering more accurate demand forecasting or product recommendations.

To win customers from an established incumbent, a late-entering rival must offer a product or service that not only exceeds the value offered by the incumbent but exceeds that value plus any customer switching costs.
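That condition can be written as a one-line rule. The sketch below uses invented numbers; "value" here stands for any consistent measure of perceived benefit:

```python
# The switching condition from the text, as a one-line rule.
# All numbers are invented for illustration.
def customer_switches(value_new: float, value_incumbent: float,
                      switching_cost: float) -> bool:
    # A rational customer moves only if the challenger's value exceeds
    # the incumbent's value PLUS the perceived cost of switching.
    return value_new > value_incumbent + switching_cost

# A 10% better offering loses to a modest switching cost...
print(customer_switches(value_new=110, value_incumbent=100, switching_cost=25))  # False
# ...so the challenger's edge must be overwhelming to win the customer.
print(customer_switches(value_new=140, value_incumbent=100, switching_cost=25))  # True
```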

Competing on Tech Alone Is Tough: Gmail vs. Rivals

Switching e-mail services can be a real pain. You have to convince your contacts to update their address books, hope that any message-forwarding from your old service to your new one remains active and works properly, and regularly check the old service to be sure nothing is caught in junk folder purgatory. Not fun. So when Google entered the market for free e-mail, challenging established rivals Yahoo and Microsoft Hotmail, it knew it needed to offer an overwhelming advantage to lure away customers who had used these other services for years.

Google's offering? A mailbox with vastly more storage than its competitors'. With 250 to 500 times the capacity of rivals, Gmail users were liberated from the infamous 'mailbox full' error and could send photos, songs, slideshows, and other rich media files as attachments.

A neat innovation, but one based on technology that incumbents could easily copy. Once Yahoo and Microsoft saw that customers valued the increased capacity, they quickly increased their mailbox size, holding on to customers who might otherwise have fled to Google. Four years after Gmail was introduced, the service still had less than half the users of each of its two biggest rivals.

Differentiation

Commodities are products or services that are offered nearly identically by multiple vendors.

Consumers buying commodities are highly price-focused since they have so many similar choices. To break the commodity trap, many firms leverage technology to differentiate their goods and services. Dell gained attention from customers not just due to its low prices, but also because it was one of the first PC vendors to build computers based on customer choice.

Want a bigger hard drive? Don't need the fast graphics card? Dell will oblige.

Technology has allowed Lands’ End to take this concept to clothing. Now forty percent of the firm's chino and jeans orders are for custom products, and consumers pay a price markup of one-third or more for the tailored duds. This kind of tech-led differentiation creates and reinforces other assets. While rivals also offer custom products, Lands' End has established a switching cost with its customers, since moving to rivals would require twenty minutes to re-enter measurements and preferences versus two minutes to reorder from LandsEnd.com. The firm's reorder rates are forty to sixty percent on custom clothes, and Lands' End also gains valuable information on more accurate sizing – critical since current clothes sizes provided across the U.S. apparel industry comfortably fit only about one-third of the population.

Data is not only a switching cost; it also plays a critical role in differentiation. Each time a visitor returns to Amazon, the firm uses browsing records, purchase patterns, and product ratings to present a custom home page featuring products that the firm hopes you'll like. Customers value the experience they receive at Amazon so much that the firm received the highest score ever recorded on the University of Michigan's American Customer Satisfaction Index (ACSI).

The score was not just the highest performance of any online firm, it was the highest ranking that any service firm in any industry had ever received.

Capital One has also used data to differentiate its offerings. The firm mines data and runs experiments to create risk models for potential customers. Because of this, the credit card firm was able to aggressively pursue a set of customers that other lenders considered too risky based on simplistic credit scoring. Technology determined that these underserved customers, not properly identified by conventional techniques, were good bets. Finding profitable new markets that others ignored allowed Capital One to grow EPS (earnings per share) twenty percent a year for seven years, a feat matched by less than one percent of public firms.

Network Effects

AIM has the majority of instant messaging users in the United States. Microsoft Windows has a ninety percent market share in operating systems. eBay has an eighty percent share of online auctions. Why are these firms so dominant? Largely due to network effects. Network effects (sometimes called network externalities or Metcalfe's Law) exist when a product or service becomes more valuable as more people use it. If you're the first person with an AIM account, then AIM isn't very valuable. But with each additional user, there's one more person to chat with. A firm with a big network of users might also see value added by third parties. Sony's PlayStation 2 was the dominant video game console in part because it had more games than its rivals, and most of these games were provided by firms other than Sony. Third-party add-on products, books, magazines, and even skilled labor are all attracted to the networks with the largest number of users, making dominant products more valuable.
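Metcalfe's Law is usually glossed as value growing with roughly the square of the user count, since n users can form n(n-1)/2 distinct pairs. A quick sketch (treating possible connections as a proxy for value is a simplification, not a claim from the text):

```python
# Metcalfe's Law sketch: possible pairwise connections among n users.
# Equating "connections" with "value" is the usual simplification.
def connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (1, 10, 100, 1_000):
    print(f"{n:>5} users -> {connections(n):>8,} possible chat pairs")

# 10x the users yields roughly 100x the possible connections, which is
# why each additional AIM user makes the service more valuable to all.
```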

Switching costs also play a role in determining the strength of network effects. Tech user investments often go way beyond simply the cost of acquiring technology. Users spend time learning a product, they buy add-ons, create files, and enter preferences. Because no one wants to be stranded with an abandoned product and lose this additional investment, users may choose a technically inferior product, simply because the product has a larger user base and is perceived as having a greater chance of being offered in the future. The virtuous cycle of network effects doesn't apply to all tech products, and it is strongest when a firm controls a standard (think AIM with its closed system versus Netscape, which used open standards), but in some cases where network effects are significant, they can create winners so dominant that firms with these advantages enjoy a near monopoly hold on a market.

Distribution Channels

If no one sees your product, then it won't even be considered by consumers. So distribution channels – the path through which products or services get to customers – can be critical to a firm's success. Again, technology opens up opportunities for new ways to reach customers. Users can be recruited to create new distribution channels for your products and services (usually for a cut of the take). You may have visited websites that promote books sold on Amazon.com.

Website operators do this because Amazon gives them a percentage of all purchases that come in through these links. Amazon now has over one million of these Associates (the term the firm uses for its affiliate program), yet it only pays them if a promotion gains a sale. Google similarly receives over forty percent of its ad revenue not from search ads, but from advertisements distributed within third-party sites ranging from lowly blogs to the New York Times.

In another move by Google to get its ads served from more places, the firm paid Dell an estimated one billion dollars for the privilege of pre-installing (distributing) the Google Toolbar and Google Desktop Search software on all the PCs Dell sells. The price tag for access to Dell desktop real estate may seem excessive, but Google feels it needs to secure these distribution channels for its search service, since Microsoft can bundle its own search as the default in the Windows Vista operating system, in Internet Explorer, within MSN, and through its other offerings. The ability to distribute products by bundling them with existing offerings is a key Microsoft advantage. But beware – sometimes these distribution channels can provide firms with such an edge that international regulators have stepped in to try to provide a more level playing field.

Microsoft was forced by European regulators to unbundle the Windows Media Player, for fear that it provided the firm with too great an advantage when competing with the likes of RealPlayer and Apple's QuickTime (see Network Effects chapter).

What about Patents?

In the United States, technology and (more controversially) even business models can be patented. Firms that receive patents have some degree of protection from copycats that try to identically mimic their products and methods. But even if an innovation is patentable, that doesn't mean that a firm has bulletproof protection. Some patents have been nullified by the courts upon later review (usually because of a successful challenge to the uniqueness of the innovation). Software patents are also widely granted, but notoriously difficult to defend. In many cases, coders at competing firms can write substitute algorithms that aren't the same, but accomplish similar tasks. For example, although Google's PageRank algorithms are fast and efficient, Microsoft, Ask, and Yahoo now offer their own, non-infringing search that presents results with an accuracy that many would consider on par with PageRank. Patents do protect tech-enabled operations innovations at firms like NetFlix and Harrah's (casino hotels), and design innovations like the iPod click wheel. However, in a study of the factors that were critical in enabling firms to profit from their innovations, Carnegie Mellon professor Wes Cohen found that patents were only the fifth most important factor. Secrecy, lead time, sales skills, and manufacturing all ranked higher.

BARRIERS TO ENTRY, TECHNOLOGY, AND TIMING

Some have correctly argued that the barriers to entry for many tech-centric businesses are low.

This is particularly true for the Internet where rivals can put up a competing website seemingly overnight. But it’s critical to understand that market entry is not the same as building a sustainable business, and just showing up doesn't guarantee survival.

iWon.com entered the portal market with amazing speed. The founders went from discussing the idea over cheeseburgers to launching a Yahoo look-alike in less than nine months. Entry barriers were low because so much of the technology and services that the firm needed were available through third parties: Inktomi (a firm that at the time also handled search for Yahoo) provided search, and DoubleClick handled ad sales. The firm's partnership with CBS (an early investor) allowed iWon to showcase giveaways in a television program that reached a national primetime audience. But even with its rapid entry and heavy media exposure, latecomer iWon never came close to challenging Yahoo's brand power.

If barriers to entry appear to be low, rivals may initially flood the market. However, as the difficulty in competing with incumbents becomes apparent, the intensity of competition from new entrants will taper off.

Platitudes like "follow, don't lead" can put firms dangerously at risk, and statements about low entry barriers ignore the difficulty many firms will have in matching the competitive advantages of successful tech pioneers. Should Blockbuster have waited while Netflix pioneered? In a year when Netflix profits were up seven-fold, Blockbuster lost more than one billion dollars.

Should Sotheby's have dismissed the seemingly inferior eBay? Sotheby's earned $69 million in profit in 2005; eBay earned $1.3 billion. Barnes & Noble waited seventeen months to respond to Amazon.com. Amazon now has over three and a half times the profits of its offline rival, and its market cap is twenty-five times greater. Today's Net giants are winners because in most cases they were the first to move with a profitable model and they were able to quickly establish resources for competitive advantage. With few exceptions, established offline firms have failed to catch up to today's Internet leaders.

Timing and technology alone will not yield a sustainable competitive advantage. Yet both of these can be enablers for competitive advantage. Put simply, it's not the time lead or the technology, it's what a firm does with its time lead and technology. True strategic positioning means that a firm has created differences that cannot be easily matched by rivals.

Moving first pays off when the time lead is used to create critical resources that are valuable, rare, tough to imitate, and lack substitutes. Anything less risks the arms race of operational effectiveness. Build resources like brand, scale, network effects, switching costs, or other key assets and your firm may have a shot.

But guess wrong about the market or screw up execution, and failure or direct competition awaits. Most tech can indeed be copied – there's little magic in eBay's servers, Intel's processors, Oracle's databases, or Microsoft's operating systems that past rivals have not at one point improved upon. But the lead that each of these tech-enabled firms had was leveraged to create network effects, switching costs, and data assets, and helped build solid and well-respected brands.

But Google Arrived Late! Why Incumbents Must Constantly Consider Rivals

Yahoo was able to maintain its lead in e-mail because the firm quickly matched and nullified Gmail's most significant tech-based innovations before Google could inflict real damage. Perhaps Yahoo had learned from prior errors. The firm's earlier failure to respond to Google's emergence as a credible threat in search advertising gave Sergey Brin and Larry Page the time they needed to build the planet's most profitable Internet firm.

Yahoo (and many Wall Street analysts) saw search as a commodity – a service the firm had subcontracted out to other firms, including Alta Vista and Inktomi. Yahoo saw no conflict in providing startup funding for Google and in using the firm for its search results as well. But Yahoo failed to pay attention to Google's advance. Over time, Google's unmatched technical lead allowed the firm to build up an advertising network (distribution channel), brand, and scale – all competitive resources that rivals have never been able to match.

Google's ability to succeed after being late to the search party isn't a sign of the power of the late mover; it's a story about the failure of incumbents to monitor their competitive landscape, recognize new rivals, and react to challenging offerings. That doesn't mean that incumbents need to respond to every potential threat. Indeed, figuring out which threats are worthy of response is the real skill here. Video rental chain Hollywood Video wasted over $300 million on an Internet streaming business years before high-speed broadband to the home was available to make the effort work. But while Blockbuster avoided the balance-sheet-cratering gaffes of Hollywood Video, the firm also failed to respond to Netflix – a new threat that had timed market entry perfectly.

Firms that quickly get to market with the right model can dominate, but it is equally critical for leading firms to pay close attention to the competition. Take your eye off the ball and rivals may use time and technology to create strategic resources. Just ask Friendster!

KEY FRAMEWORK: THE FIVE FORCES OF INDUSTRY COMPETITIVE ADVANTAGE

Professor and strategy consultant Gary Hamel wrote in a Fortune cover story that "the dirty little secret of the strategy industry is that it doesn't have any theory of strategy creation." While there is no silver bullet for strategy creation, strategic frameworks help managers describe the competitive environment a firm is facing. Frameworks can also be used as a brainstorming tool to generate new ideas for responding to industry competition. If you have a model for thinking about competition, it's easier to understand what's happening and to think creatively about possible solutions. One of the most popular frameworks for examining a firm's competitive environment is Porter's Five Forces, also known as Industry and Competitive Analysis. As Porter puts it, "Analyzing these forces illuminates an industry's fundamental attractiveness, exposes the underlying drivers of average industry profitability, and provides insight into how profitability will evolve in the future." The five forces this framework considers are:

1) the intensity of rivalry among existing competitors,

2) the threat of new entrants,

3) the threat of substitute goods or services,

4) the bargaining power of buyers, and

5) the bargaining power of suppliers.

New technologies can create jarring shocks in an industry. Consider how the rise of the Internet has impacted the five forces for music retailers. Traditional music retailers like Tower and Virgin find that customers are seeking music online and are scrambling to invest in the new channel out of what is perceived to be a necessity. Their intensity of rivalry increases because they not only compete based on the geography of where brick-and-mortar stores are physically located, but they now compete online as well. Investments online are expensive and uncertain, prompting some firms to partner with new entrants such as Amazon.

Free from brick-and-mortar stores, Amazon, the dominant new entrant, has a highly scalable cost structure. And in many ways, the online buying experience is superior to what customers see in stores. Customers can hear samples of almost all tracks, the selection is seemingly limitless (the "long tail" phenomenon – see this concept illuminated in the Netflix case), and data is leveraged using collaborative filtering software to make product recommendations and assist in music discovery. Tough competition, but it gets worse, because CD sales aren't the only way to consume music. The process of buying a plastic disc now faces substitutes as digital music files become available on commercial music sites. Who needs the physical atoms of a CD filled with ones and zeros when you can buy the bits one song at a time? Or don't buy anything at all: subscribe to a limitless library.
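On the collaborative filtering point above: at its simplest, the technique recommends items that frequently co-occur in customers' purchase histories. Below is a minimal item-based sketch with invented album data; it illustrates the general idea, not Amazon's actual system:

```python
# Minimal item-based collaborative filtering sketch. Toy data; this
# illustrates the general technique, not Amazon's actual system.
from collections import Counter
from itertools import combinations

purchases = [                      # each inner list: one customer's albums
    ["Kind of Blue", "A Love Supreme", "Mingus Ah Um"],
    ["Kind of Blue", "A Love Supreme"],
    ["Kind of Blue", "Abbey Road"],
]

co_bought = Counter()              # how often each album pair co-occurs
for basket in purchases:
    for a, b in combinations(sorted(set(basket)), 2):
        co_bought[(a, b)] += 1

def recommend(album: str) -> list[str]:
    # Rank other albums by how often they were bought alongside `album`.
    scores = Counter()
    for (a, b), n in co_bought.items():
        if a == album:
            scores[b] += n
        elif b == album:
            scores[a] += n
    return [title for title, _ in scores.most_common()]

print(recommend("Kind of Blue"))
# ['A Love Supreme', 'Mingus Ah Um', 'Abbey Road']
```

The more purchase data a firm accumulates, the better these co-occurrence counts get, which is one way scale and data reinforce each other.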

From a sound-quality perspective, the substitute good of digital tracks purchased online is almost always inferior to their CD counterparts. To transfer songs quickly and hold more songs on an MP3 player, tracks are encoded at lower bitrates than CD audio, yielding smaller files with lower playback fidelity. But the additional tech-based market shock brought on by MP3 players (particularly the iPod) has changed listening habits. The convenience of carrying thousands of songs trumps what most consider just a slight quality degradation. iTunes now sells more music than any other firm, online or off. Most alarming to the industry is the other widely adopted substitute for CD purchases – theft. Music is available illegally, but free. And while exact figures on real losses from online piracy are in dispute, the music industry has seen sales drop by roughly one-third since 2000. All this choice gives consumers (buyers) bargaining power. They demand cheaper prices and greater convenience. The bargaining power of suppliers – the music labels – also increases. At the start of the Internet revolution, retailers could pressure labels to limit sales through competing channels. Now, with many of the major music retail chains in bankruptcy, labels have a freer hand to experiment.
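Returning to the fidelity point above: the size gap is simple bitrate arithmetic. Assuming standard figures (CD audio streams at roughly 1,411 kbps; 128 kbps is a common MP3 rate), a quick sketch shows why compressed tracks transfer faster and pack far more songs per player:

```python
# Back-of-the-envelope file sizes for a 4-minute track.
# CD audio: 44,100 samples/sec x 16 bits x 2 channels ~= 1,411 kbps.
# 128 kbps is a common MP3 encoding rate; both are standard figures.
def size_mb(kbps: float, minutes: float) -> float:
    return kbps * 1000 / 8 * minutes * 60 / 1_000_000  # bits -> bytes -> MB

print(f"CD quality: {size_mb(1411, 4):5.1f} MB")  # ~42.3 MB
print(f"128k MP3:   {size_mb(128, 4):5.1f} MB")   # ~ 3.8 MB
# The MP3 is roughly 11x smaller, at the cost of some playback fidelity.
```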

While it can be useful to look at changes in one industry as a model for potential change in another, it's important to realize that the changes that impact one industry do not necessarily impact other industries in the same way. For example, it is often suggested that the Internet increases the bargaining power of buyers and lowers the bargaining power of suppliers. This is true for some industries, like auto sales and jewelry, where the products are commodities and the price transparency of the Net counteracts a previous information asymmetry in which customers often didn't know enough about a product to bargain effectively. But it's not true across the board.

In cases where network effects are strong or a seller's goods are highly differentiated, the Internet can strengthen supplier bargaining power. The customer base of an antique dealer used to be limited by how many likely purchasers lived within driving distance of a store. Now, with eBay, the dealer can take a rare good to a global audience and have a much larger customer base bid up the price. Switching costs also weaken buyer bargaining power. Wells Fargo has found that customers who use online bill pay (where switching costs are high) are 70 percent less likely to leave the bank than those who don't, suggesting that these switching costs help cement customers to Wells Fargo even when rivals offer more compelling rates or services.

Tech plays a significant role in shaping and reshaping these five forces, but it's not the only significant force that can create an industry shock. Government deregulation or intervention, political shock, and social and demographic changes can all play a role in altering the competitive landscape. Because we live in an age of constant and relentless change, managers need to continually revisit strategic frameworks to consider any market-impacting shifts.

Predicting the future is difficult, but ignoring change can be catastrophic.

KEY FRAMEWORK: THE VALUE CHAIN

The value chain is the "set of activities through which a product or service is created and delivered to customers". By examining the activities in a firm's value chain, managers can gain a greater understanding of how these factors influence a firm's cost structure and value delivery. There are five primary components of the value chain and four supporting components.

The primary components are:

  • Inbound logistics – getting needed materials and other inputs into the firm from suppliers
  • Operations – turning inputs into products or services
  • Outbound logistics – delivering products or services to consumers, distribution centers, retailers, or other partners
  • Marketing and sales – customer engagement, pricing, promotion, and transaction
  • Support – service, maintenance, and customer support

The secondary components are:

  • Firm infrastructure – functions that support the whole firm, including general management, planning, IS, and finance
  • Human resource management – recruiting, hiring, training, and development
  • Technology / research & development – new product and process design
  • Procurement – sourcing and purchasing functions


While the value chain is typically depicted as a linear sequence, goods and information don't necessarily flow in a line from one function to another. For example, an order taken by the marketing function can trigger an inbound logistics function to get components from a supplier, operations functions (to build a product if it's not available), and/or outbound logistics functions (to ship a product when it's available). Similarly, information from service support can be fed back to advise R&D in the design of future products.

An analysis of a firm's value chain can reveal operational weaknesses, and technology is often of great benefit to improving the speed and quality of execution. Software tools such as supply chain management (SCM: linking inbound and outbound logistics with operations), customer relationship management (CRM: supporting sales, marketing, and in some cases R&D), and enterprise resource planning software (ERP: software implemented in modules to automate the entire value chain), can have a big impact on more efficiently integrating the activities within the firm, as well as with its suppliers and customers. But remember, these software tools can be purchased by all competitors. Although they can cut costs and increase efficiency, if others can buy the same or comparable products then these technologies, while valuable, may not yield lasting competitive advantage.

Even more important to consider, if a firm adopts software that changes a unique process into a generic one, it may have co-opted a key source of competitive advantage. SCM, CRM, and ERP software typically require adopting a very specific way of doing things. Dell stopped the deployment of the logistics and manufacturing modules of its ERP implementation when it realized that the software would require the firm to make changes to its unique and highly successful operating model. By contrast, Apple had no problem adopting ERP because the firm competes on product uniqueness rather than operational differences.

From a strategic perspective, managers can also consider the firm's differences and distinctiveness compared to rivals. If a firm's value chain cannot be copied by competitors without engaging in painful tradeoffs, or if the firm's value chain helps to create and strengthen other strategic assets over time, it can be a key source of competitive advantage. Many of the cases covered in this book, including FreshDirect, Amazon, Zara, NetFlix, and eBay, illustrate this point.

 

The Future of ERP

Unknown


The Enterprise Resource Planning (ERP) industry is continuously evolving to fit the needs of its users. Let’s take a look at four of the top ERP trends for 2014 and beyond.

1. Mobility. The use of smartphones, tablets, and other mobile devices is on the rise and shows no signs of slowing. Today's workforce is increasingly mobile, driving demand for greater flexibility and the ability to access information from all sorts of remote locations. ERP offerings are evolving to accommodate a more mobile environment – giving business professionals remote access to critical information such as key performance indicators (KPIs), finances, inventory levels, sales orders, and customer information. We expect ERP mobile business intelligence solutions to remain a priority in 2014 and beyond.

2. Cloud-based ERP. These days it seems everything is moving towards an "in the cloud" model, and ERP is not immune to that trend. Demand has been growing for cloud-based ERP solutions, which tend to be less expensive and quicker to implement than their on-premise counterparts. Many ERP vendors have jumped on the "in the cloud" bandwagon, and we expect this to continue to gather steam in 2014.

3. More informed buyers. An abundance of ERP information, case studies, white papers, and so forth is making it easier for businesses to do their due diligence and become better-informed buyers of ERP software.

4. Tough competition. ERP vendors are loosely divided into tiers or types. Tier 1 vendors generally serve large global businesses, while Tier 2 vendors serve mainly small to mid-market businesses. Even though the larger and better-known Tier 1 vendors have done a pretty good job retaining market share, Tier 2 vendors continue to gain momentum. It is often said that Tier 1 vendors "buy" innovations while Tier 2 vendors "create" them. This boost in innovation has led Tier 2 to gain market share and introduce some fierce competition into the mix.

The ERP Evolution

Fundamentally reshaping business processes using in-memory technologies

Data is exploding in size—with incredible volumes of varying forms and structures—and coming from inside and outside of your company's walls. No matter what the application—on-premise or cloud, packaged or custom, transactional or analytical—data is at its core. Any foundational change in how data is stored, processed, and put to use is a big deal. Welcome to the in-memory revolution.

With in-memory technology, companies can crunch massive amounts of data in real time to improve relationships with their customers. As in-memory technologies move from analytical to transactional systems, the potential to fundamentally reshape business processes grows.
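To make the idea concrete: many databases can run entirely in RAM, letting transactional writes and analytical reads hit the same in-memory store with no disk round-trip. A minimal sketch using SQLite's ":memory:" mode follows; it illustrates the general concept, not SAP HANA or any particular vendor's engine:

```python
# Sketch of in-memory data processing using SQLite's ":memory:" mode.
# An illustration of the general concept, not any vendor's engine.
import sqlite3

conn = sqlite3.connect(":memory:")     # the whole database lives in RAM
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 75.5), ("EMEA", 60.0)])

# Transactional inserts and an analytical aggregate run against the
# same in-memory store, with no disk round-trip in between.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM orders "
        "GROUP BY region ORDER BY region"):
    print(region, total)               # APAC 75.5 / EMEA 180.0
```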

Technical upgrades of analytics and ERP engines may offer total cost of ownership improvements, but potential also lies in using in-memory technologies to solve tough business problems. CIOs can help the business identify new opportunities and provide the platform for the resulting process transformation.

Trends in Enterprise Resource Planning Systems

With a new year comes both reflection on the past and contemplation of the future. It is with these thoughts in mind that many manufacturing companies are now looking at the possibility of a new enterprise resource planning (ERP) system. Regardless of whether the ERP evaluation is to replace an existing system or to purchase a first system, knowing the trends in ERP can be a critical factor in decision-making.

ERP has evolved through the years. What began as a material requirements planning (MRP) system has grown to include most aspects of the enterprise, such as estimating, sales and distribution, quality, maintenance, and accounting. With so many ERP companies in business today, it is important to know what sets each package apart and the common trends among the ERP players.

One significant and recent trend is increased interest in Software as a Service (SaaS) applications as a way to reduce the workload of network administrators. Whether it is an online option (data hosted offsite) or a managed service (an on-premise solution where the data resides locally), the ability to have an ERP system that doesn't require any system administration has quickly gained momentum in the manufacturing ERP segment. Each option may have both advantages and disadvantages over the other, and those may be framed very differently depending on the salesperson or user company. For instance, SaaS and managed service solutions can be costlier in the long run because you pay "forever", but appear cheaper up front because of lower start-up costs. Additionally, some might find the automatic updates of SaaS versions a plus, but others, like medical device manufacturers that require validation before software updates, may want to control their update frequency and therefore have a vastly different take.

Another trend that has gained momentum is mobility, because people need to stay connected to their companies. As computers become smaller, manufacturing company employees are becoming more mobile, and ERP systems are pushing to keep up. From laptops to smartphones, BlackBerrys to iPhones, people have never been more in touch, technologically speaking. With mobility on the rise, many ERP companies have found the need to create applications that focus solely on delivering information on the go, even while away from a facility. A smartphone, BlackBerry, or iPhone application will be a must for an ERP company to stay viable and relevant.

Shop floor integration and automation continue to gain traction in a growing number of ERP systems. From true, real-time shop floor-to-ERP compatibility to extremely basic batch data transfers, ERP system companies understand that packages that can reach down into the plant floor are more valuable than packages that do not place as much emphasis on this. The challenge for the manufacturing company is to understand the vital difference between a simple claim of shop floor automation and integration and the actual reality of execution. Some ERP systems today are capable of updating schedules in real-time based on machine output, providing two-way communication with shop floor equipment, automating shop floor equipment based on preset indicators (such as box count, etc.), and providing alerts via facility PA systems. It is important to keep in mind that while some ERP companies can do this, others just make claims. Due diligence will be key for manufacturing companies to weed out the “who can” and the “who cannot” when it comes to shop floor integration and automation.

Trends in manufacturing are like trends in everything else – everything old is new again. Some might say this idea can be attributed to Service Oriented Architecture (SOA). While it has lost some steam over the last few months, this catchphrase for how disparate systems will communicate is reminiscent of the old best-of-breed promise that 'interfaces' between systems would make transitions seamless. Many ERP companies claim their SOA system can create better communication between totally unrelated software packages – Quality from ERP Company X can talk to Payroll from ERP Company K. But, like the old best-of-breed interface talk, what happens when upgrade paths are not the same? No one can pinpoint or explain exactly how it works. For every ERP company that claims SOA, there is an article, blog, or comment from a reporter or analyst that counters with an "SOA what?"

In the last few years, the consolidation of ERP companies has subsided. Gone are the times when a new buy-out was announced almost every week. The ERP graveyard websites (e.g., http://www.erpgraveyard.com/tombs.html) are updated less regularly, and it is easier to remember who owns which package and who bought what. With that in mind, ERP systems are now being thought of as more of a commodity. Because there are still many ERP companies in the manufacturing segments, manufacturers often aren't cognizant of the variances among the ERP systems and companies themselves. Yes, all ERP systems indeed have bills of material, inventory management, scheduling, and shipping functions, and even AR and AP capability. But is having that module or function enough to say there is no variance between the ERP packages? Not really. It is still an "apples to oranges" comparison for some. On the plus side, at least for the manufacturer, ERP systems are becoming less expensive due to commoditization. The key point to remember here is that, regardless of how low the entry cost is, if the ERP can't reach beyond the core features inherent in all packages and show true value or developmental growth potential for the future, then it really shouldn't be compared to a system that can.

The last trend is the need for scalability. Perhaps it is a by-product of one of the toughest manufacturing periods in history, but people want to manage cash flow like never before. Purchasing an ERP system is no different. It used to be that people would buy everything they needed, regardless of when they planned to implement it. The thought was, "Get the best deal I can now and implement when the company is ready." This is not the case anymore. If a manufacturing company leaps into a new ERP system, the change that could doom a project is best handled in smaller doses. Therefore, the realization that companies should only pay for what they plan to use now and in the near future is a strong negotiation point. Why put out money for a quality system that will not begin implementation for another four or five months? An ERP system that is scalable and modular thus has a big advantage over a system that isn't. Cash flow is handled more smartly, change is more manageable, and the process changes that always come with a new system are adopted and better valued. Think of scalability as the small, continuous improvement steps in the ERP journey. A steadier pace is realized and is more likely to achieve a successful implementation.

There are certainly more trends in the ERP industry, including hot-button topics like industry specificity and the shift in database popularity. The key point is that the manufacturing industry itself is poised for recovery. An ERP system can be a make-or-break factor in the success of a company. Knowing the trends, evaluating an ERP system based on all the facts, and separating the hype from the real value are essential in choosing and sustaining an ERP vendor relationship.

Daniele Fresca is the director of marketing for IQMS. Since 1989, IQMS has been designing and developing ERP software for the repetitive, process, and discrete manufacturing industries. Today, IQMS provides leading real-time manufacturing, accounting, production monitoring, quality control, supply chain, CRM, and eBusiness solutions to the automotive, medical, packaging, consumer goods, and other manufacturing markets. The innovative, single-source enterprise software solution, EnterpriseIQ, offers complete functionality and scalable solutions, all in a single database. With offices across North America, Europe, and Asia, IQMS serves manufacturers around the world.

Three major trends are impacting Enterprise Resource Planning (ERP) systems today.

1. The emergence of new business models that require ERP support.

2. The broadening interest in cloud computing for ERP systems.

3. The growing need to push ERP functionality out to mobile devices.

 

Keeping Pace with the New Business Models

Throughout much of our economic history, competition between companies was driven by who produced the best products and services. Today, success in dominating a market is influenced more by what happens behind the scenes (back offices, cubicles, factory floors, warehouses, etc.) than by who builds the better product or service.

Companies compete far more aggressively in trying to understand their customers and prospects as individuals rather than selling to cold buyers.  Getting close to customers by developing better relationships has become the primary aim of most companies, even more than convincing buyers how excellent their products or services are.  Those companies that possess a solid understanding of their customer base, and apply that knowledge to build brand loyalty, often dominate their respective markets.

The closeness a company can get to its customer base, relative to its competitors, is determined by how well its business model matches the needs of the market. The closer a company is to its target audience, the more competitive it is in the marketplace. Ultimately, this comes down to business models competing for market share, not, as one might think, products.

Taking this idea one step further, in complex organizations the business models need ERP systems to manage the information flow throughout the entire company and its supply chain.  Without ERP systems, most large and mid-size companies simply could not compete.

ERP systems are now one of the most powerful weapons in the battle to win customers and keep them coming back. By supporting business models and enabling companies to adapt to changing market conditions, ERP has emerged as the central player in information technology ecosystems. Whether the ERP product is from Oracle, SAP, Microsoft, Epicor, Plex Online, Sage ERP X3, QAD Enterprise, or a host of other vendors, they all have one thing in common: ERP systems must constantly be transformed to satisfy evolving business models.

However, most ERP solutions in use today were designed years ago and, although they have been enhanced and updated, the logic behind much of the source code still reflects the business mindset of the mid and late 20th century.  As a result, one of the biggest challenges for ERP has been to keep pace with a manufacturing sector that has been rapidly moving from a product-centric focus to a customer-centric focus.  This change in attitude required most ERP vendors to add a variety of functions and modules on top of their core systems, while the basic design of most ERP systems remained product-centric.

ERP in the Cloud

The ERP vendors are gradually re-designing their systems, in some cases completely rebuilding them, to accommodate the change in business perspectives supporting different industries.  This redesign effort is also being influenced by new infrastructure outsourcing trends for ERP systems.  Many buyers of ERP software want to maintain their systems in either public or private clouds.   They are also more likely to look for third-party hosting providers that can offer technical assistance and ongoing support, in addition to a stable physical infrastructure.

Since updating old ERP designs to meet the needs of the new business models requires a considerable amount of programming, building in the additional functionality to support cloud computing at the same time makes sense.   The variety of trends that drive organizations to change are the same trends that drive ERP systems and vendors to consistently innovate and improve their products and services.

Modern programming technologies and new access models like cloud computing have made ERP more accessible, with broader appeal than it had just five years ago. The cloud delivery model has reduced IT costs, since implementation is fast and data storage and management are handled by third-party outsourcing providers. The cloud has also made ERP affordable to many more businesses, which enables companies to replace old accounting packages with more robust financial tools.

One example of how ERP vendors are innovating to stay current is SAP’s Business One.  This product, which is a smaller version of the full SAP ERP system, is aimed at meeting the needs of small businesses. It is also available on-premises or hosted by a third-party enterprise outsourcing firm.

Chasing Mobility

Another significant trend related to ERP is mobility. Most ERP vendors offer their solutions on mobile devices, but some have simply made their web interface accessible from mobile browsers. Others have created mobile applications, or applets, that provide the same functionality as their core ERP product. No matter how an ERP vendor chooses to provide mobile computing options to its users, it is difficult to deliver all the functionality of an ERP system via mobile devices with the technology available today. The mobility trend is still in its infancy, but ERP vendors large and small are providing apps that plug into their core ERP systems, causing it to rapidly evolve and mature.

Key Takeaways

ERP has been and still is the cornerstone of the corporate computing environment, but its full potential can only be realized through integration with 21st-century business models.  Contemporary enterprises now depend on customer relationship management (CRM), product lifecycle management (PLM), human capital management (HCM), supply chain management (SCM), and many other modules that take them beyond the core ERP functionality that was sufficient just a few years ago.

ERP vendors are scrambling to bring their products and services in line with the new needs of their customers and prospective customers.  Of course, the product development struggle never ends because the world is constantly changing.  However, the trends mentioned here are particularly challenging for traditional ERP vendors because meeting these new needs requires a deeper commitment and a significant amount of time to make adjustments to their software.  These trends also open the door for niche competitors to enter the fray.  These new competitors have the advantage of building applications from the ground up and in perfect alignment with current trends; so there is no need to retrofit and redesign anything.


Skills Professionals Need to Master

Unknown


All working professionals need significant competency and skill in each of the following eight areas to be successful, yet most are sorely lacking in several of them. (My anecdotal research suggests that most people are weak in at least three of these skills at the same time.)

·      Communication Skill

To be successful in your job and career, you must communicate powerfully and effectively with confidence and clarity. There's been much written about introverts as leaders and managers, and how they can use their innate skills and gifts to succeed as leaders. Your personality type and level of introversion/extroversion aside, if you can't communicate your ideas in an empowered, clear, and engaging way, you simply won't perform or progress as well as your counterparts who can communicate with ease and strength.

·      Building Relationships

So many professionals don't get this one basic point until it's too late: you cannot do what you want in your career, and advance successfully, if you're an island. And you certainly can't achieve what you long for if you've alienated all your colleagues, peers, and managers. One terrible boss taught me something very smart many years ago. As horrible as he was at leading and managing, he did know one core principle: no matter how talented and gifted you are at your job, if you don't have supportive relationships at work, you won't succeed. Another way to say this is that if you hate who you work with and for, they'll end up hating you back. (To get started building your online relationships and community, download my free LinkedIn Primer.)

·      Decision-Making

Professionals must make scores of decisions every day - from whom they sit with at lunch, to what raise to ask for, to new assignments they'll accept. Do you understand HOW to decide so that it 1) aligns with what you want, 2) adds to your skill base and experience, and 3) creates new opportunities for you that will be beneficial? Further, do you know how to make business decisions that will generate the outcomes that are most desired for the enterprise? Most individuals have never learned how to evaluate with discernment what's in front of them, or how to calculate the risks and benefits of each decision they face.

·      Leadership

I don't know about you, but I never received one scrap of training in my 18 years of corporate life about how to be an inspiring leader and manager. I had no clue about the traits, behaviors, and actions that true leaders demonstrate, and what sets them apart from the rest. Key to a professional's success is learning how to empower, inspire, and motivate others, build a compelling vision, and engender trust, loyalty, and support from others to strive toward that vision. In my corporate life, I didn't understand the importance of being other-focused vs. self-focused, or see how my every action either built up or eroded my leadership and managerial ability and impact.

·      Advocating and Negotiating for Yourself and Your Causes

In business, you have to advocate and negotiate continually – for yourself, for your staff, for your business concerns, for your budget, and more. How many professionals today can say they know how to speak up for their causes and support their advancement in effective, productive ways? And how many know how to negotiate powerfully for what they want? In working with women, I've seen that they struggle far more in this arena than men. But whatever your gender, if you can't advocate powerfully on your own behalf, it's a rare thing that anyone else will.

·      Career Planning and Management

I'm sure you've noticed: your career doesn't tend to grow in the right direction unless you proactively manage it. In doing so, can you answer this question: When you're 90 years old looking back, what do you want to have stood for, given, contributed, taught, created, and left behind? What do you want people to say about you? In your professional life, do you know what you want, and why you want it? Until you can answer these questions (and more), you'll struggle to create a career path that will lead you to the ultimate destination you want. You'll end up floating in an aimless sea of missed opportunities.

·      Work-Life Balance

While the struggles of balancing life and work continue to hit working moms with young children the hardest, the need and desire for work-life balance is an issue that everyone faces. Do you know exactly how to balance (or integrate) your life and work? Do you understand that it requires fierce prioritization and a deep, unwavering knowledge of what matters most to you, so that you can act from that knowledge with confidence and power? Have you received training on how to negotiate the conflicting demands of your home and family life with what your employer wants from you? Most would answer "Heck no, and I need it!" to that question.

·      Boundary Enforcement

From my training as a marriage and family therapist, I learned that "boundaries" are the invisible barrier between you and your outside systems (work, school, church, family, friends, etc.). Your boundaries regulate the flow of information and input to and from you and your outside systems. If you are unable to 1) understand yourself and your own needs and wants, and 2) create an appropriate, protective boundary around these non-negotiables, then success as a professional will be extremely challenging. Developing sufficient boundaries and enforcing them every day in your professional life is an essential behavior, and how you defend your boundaries can make or break your career. Do you know where you end and others (including your employer) begin?

 

The Evolution Of Corporate Cyberthreats

Unknown


Protecting Your Organization Today, Tomorrow, and Beyond

Most established organizations have large IT departments with staff exclusively devoted to IT security. As your business grows, your IT security team is, one hopes, thriving too, and getting the intelligence and resources needed to stay abreast of the latest threats to your organization.

Unfortunately, the bad guys are keeping pace, and in some cases, they’re taking the lead. To keep your organization safe, it’s imperative to stay at least a few steps ahead of the cybercriminals. Education is a key component of this defensive strategy in today’s cybercriminal ecosystem. If you don’t know it’s there, you can’t defend against it.

Threats are increasing in frequency and sophistication. According to the recently released Verizon Data Breach Investigations Report, there were 1,367 confirmed data breaches and 63,437 security incidents in 2013. The severity and cause of these incidents vary with the goals of the cybercriminals and, sometimes, the size of the potential victim. Larger organizations may be better equipped to fight cybercrime, but they are also exposed to a wider array of attacks, including advanced persistent threats (APTs), cyber espionage, and more sophisticated malware.

Advanced Persistent Threats (APTs)

Every corporation, regardless of its size or industry, is at risk of becoming the victim of a targeted attack by a variety of threat actors including APT groups, politically-driven “hacktivists,” and more advanced cybercriminals, who offer their services for hire. These adversaries will target any organization that has valuable information or data relevant to their objectives.

Depending on the adversaries’ operational motives and objectives, the information identified as valuable will vary. However, it’s important to note that regardless of the motive, attackers are targeting very specific information from a specific set of victims, and they will relentlessly customize and optimize their techniques until they successfully realize their objective.

 

All APTs are vehicles for cybercrime but not all cybercrimes involve APTs. Although both are based on monetary gain, APTs specifically target more sensitive data including passwords, competitive intelligence, schematics, blueprints, and digital certificates, and are paid for by third-party clients or resold in the underground. General cybercrime operations are direct “for profit” attacks and target customers’ personal and financial information which can be quickly monetized and laundered underground for ID theft and fraud.

Cybercriminals will either provide the hijacked information to the third party who hired them to steal it, or they will repackage and resell the data underground to interested parties, such as nation-states or competing organizations. Earned through years of hard work and investment, stolen intellectual property enables third parties to accelerate their technological and commercial developments while weakening corporations’ intellectual and competitive advantages in the global economy.

There are many different types of targeted attacks, including:

  • Economic Espionage. Targeted information: intellectual property; proprietary information; geopolitical, competitive, or strategic intelligence
  • Insider Trading Theft. Targeted information: pending M&A deals or contracts; upcoming financial earnings; future IPO dates
  • Financial & Identity Theft. Targeted information: employee and customer personally identifiable information; payment transactions; account numbers; financial credentials
  • Technical Espionage. Targeted information: password or account credentials; source code; digital certificates; network and security configurations; cryptographic keys; authentication or access codes
  • Reconnaissance and Surveillance. Targeted information: system and workstation configurations; keystrokes; audio recordings; emails; IRC communications; screenshots; additional infection vectors; logs; cryptographic keys

One of the biggest challenges in defending against targeted attacks is being able to correlate data and identify attack patterns amidst the high volume of incidents coming from disparate sources at various times. However, with careful observation, research, and proper analysis, concrete information can show similarities in targeted attack campaigns.
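
To make that concrete, here is a minimal Python sketch of indicator-based correlation across sources: incidents are indexed by the indicators they contain, and any indicator reported by more than one source surfaces as a possible campaign link. The incident records, source names, and indicator values are all hypothetical.

    from collections import defaultdict

    # Hypothetical incident records from different monitoring sources; each
    # carries the indicators (IPs, domains, file hashes) observed in it.
    incidents = [
        {"source": "firewall", "id": 1, "indicators": {"203.0.113.7", "evil.example.com"}},
        {"source": "endpoint", "id": 2, "indicators": {"evil.example.com", "a1b2c3-malware-hash"}},
        {"source": "mailgw",   "id": 3, "indicators": {"a1b2c3-malware-hash"}},
        {"source": "proxy",    "id": 4, "indicators": {"198.51.100.9"}},
    ]

    # Index incidents by each indicator they contain.
    by_indicator = defaultdict(list)
    for inc in incidents:
        for ioc in inc["indicators"]:
            by_indicator[ioc].append((inc["source"], inc["id"]))

    # An indicator seen by more than one source hints at a coordinated campaign.
    for ioc, hits in sorted(by_indicator.items()):
        if len(hits) > 1:
            print(f"{ioc}: seen by {hits}")

Real correlation engines work over millions of events and fuzzier matches, but the principle is the same: shared indicators across disparate sources are what turn isolated incidents into a recognizable campaign.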

Icefog

Most APT campaigns are sustained over months or years, continuously stealing data from their victims. By contrast, the attackers behind Icefog, an APT discovered by the Kaspersky Security Network in September 2013, focused on their victims one at a time, in short-lived, precise hit-and-run attacks designed to steal specific data. Operational since at least 2011, Icefog involved the use of a series of different versions of the malware, including one aimed at Mac OS.

The Mask

In February 2014, the Kaspersky Lab security research team published a report on a complex cyberespionage campaign called The Mask, or Careto (Spanish slang for ‘ugly face’ or ‘mask’). This campaign was designed to steal sensitive data from various types of targets. The victims, located in 31 countries around the world, included government agencies, embassies, energy companies, research institutions, private equity firms, and activists.

The Mask attacks start with a spear-phishing message containing a link to a malicious website rigged with several exploits. Once victims are infected, they are then redirected to the legitimate site described in the e-mail they received (e.g. a news portal, or video). The Mask includes a sophisticated backdoor Trojan capable of intercepting multiple communication channels and harvesting all kinds of data from the infected computer. Like Red October and other targeted attacks before it, the code is highly modular, allowing the attackers to add new functionality at will. The Mask also casts its net wide - there are versions of the backdoor for Windows and Mac OS X and there are references that suggest there may also be versions for Linux, iOS, and Android. The Trojan also uses very sophisticated stealth techniques to hide its activities.

The key motivation of The Mask attackers is to steal data from their victims. The malware collects a range of data from the infected system, including encryption keys, VPN configurations, SSH keys, RDP files, and some unknown file types that could be related to bespoke military/government-level encryption tools. Security researchers don’t know who’s behind the campaign. Some traces suggest the use of the Spanish language but that fact doesn’t help pin it down, since this language is spoken in many parts of the world. It’s also possible that this could have been used as a false clue, to divert attention from whoever wrote it. The very high degree of professionalism of the group behind this attack is unusual for cybercriminal groups – one indicator that The Mask could be a state-sponsored campaign.

SecureList

This campaign underlines the fact that there are highly professional attackers who have the resources and the skills to develop complex malware – in this case, to steal sensitive information. It also highlights the fact that targeted attacks, because they generate little or no activity beyond their specific victims, can ‘fly under the radar’.

The entry point of The Mask involves tricking individuals into doing something that undermines the security of the organization they work for – in this case, by clicking on a link or an attachment. Currently, all known C&C (Command-and-Control) servers used to manage infections are offline. However, researchers believe that the danger hasn’t been eradicated and that the attackers can renew the campaign in the future.

Bitcoin

Bitcoin is a digital crypto-currency. It operates on a peer-to-peer model, where the money takes the form of a chain of digital signatures that represent portions of a Bitcoin. There is no central controlling authority and there are no international transaction charges – both of which have contributed to making it attractive as a means of payment.
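
To illustrate the “chain of digital signatures” idea, here is a toy Python sketch of successive signed transfers. Real Bitcoin signs full transaction data with ECDSA; the HMAC “signatures”, keys, and owner names below are stand-ins chosen only so the sketch runs with the standard library.

    import hashlib
    import hmac

    def sign(private_key: bytes, message: bytes) -> str:
        # Stand-in "signature": real Bitcoin uses ECDSA over transaction data.
        return hmac.new(private_key, message, hashlib.sha256).hexdigest()

    # Hypothetical private keys for two successive owners of a coin.
    alice_key, bob_key = b"alice-private-key", b"bob-private-key"

    # Each transfer commits to a hash of the previous transfer and names the
    # next owner; the current owner then signs it, extending the chain.
    genesis = hashlib.sha256(b"coinbase: mint 1 coin to alice").hexdigest()

    transfer1 = f"prev={genesis};to=bob"
    sig1 = sign(alice_key, transfer1.encode())    # Alice signs the coin over to Bob

    prev_hash = hashlib.sha256((transfer1 + sig1).encode()).hexdigest()
    transfer2 = f"prev={prev_hash};to=carol"
    sig2 = sign(bob_key, transfer2.encode())      # Bob signs the coin over to Carol

    print(transfer1, "sig:", sig1[:16])
    print(transfer2, "sig:", sig2[:16])

Because each transfer commits to the hash of the one before it, tampering with any link invalidates everything downstream, which is what lets the network verify ownership without a central authority.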

As the use of Bitcoin has increased, it has become a more attractive target for cybercriminals. In end-of-year forecasts, security researchers anticipated attacks on Bitcoin. “Attacks on Bitcoin pools, exchanges, and Bitcoin users will become one of the most high-profile topics of the year. Such attacks will be especially popular with fraudsters as their cost-to-income ratio is very favorable.”

MtGox, one of the biggest Bitcoin exchanges, was taken offline in February 2014. This followed a turbulent month in which the exchange was beset by problems – problems that saw the trading price of Bitcoins on the site fall dramatically. There have been reports that the exchange’s insolvency followed a hack that led to the loss of 744,408 bitcoins.

Spammers are also quick to make use of social engineering techniques to draw people into a scam. They took advantage of the climb in the price of Bitcoins in the first part of this quarter (before the MtGox collapse) to try to cash in on people’s desire to get rich quickly. There were several Bitcoin-related topics used by spammers. They included offers to share secrets from a millionaire on how to get rich by investing in Bitcoins; and offers to join a Bitcoin lottery.

 

Tor

Tor (short for The Onion Router) is software designed to allow someone to remain anonymous when accessing the Internet. It has been around for some time, but for many years was used mainly by experts and enthusiasts. However, the use of the Tor network has spiked in recent months, largely because of growing concerns about privacy. Tor has become a helpful solution for those who, for any reason, fear surveillance and the leakage of confidential information.

Tor’s hidden services and anonymous browsing enable cybercriminals to cover their operations and provide a hosting platform to sell stolen information using bitcoins as the currency. Since Bitcoin’s architecture is decentralized and more difficult to trace than traditional financial institutions, it provides a more efficient way for cybercriminals to launder their ill-gotten gains.

In 2013, security experts began to see cybercriminals actively using Tor to host malicious infrastructure, and Kaspersky Lab experts have found various malicious programs that specifically use Tor. Investigation of Tor network resources reveals many dedicated to malware, including Command-and-Control servers, administration panels, and more. By hosting their servers in the Tor network, cybercriminals make them harder to identify, blacklist, and eliminate.

Cybercriminal forums and marketplaces have become familiar on the ‘normal’ Internet. But recently a Tor-based underground marketplace has also emerged. It all started with the notorious Silk Road market and has evolved into dozens of specialist markets — for drugs, arms, and, of course, malware. Carding shops are firmly established in the Darknet, where stolen personal information is for sale, with a wide variety of search attributes like country, bank, etc. The goods on offer are not limited to credit cards: dumps, skimmers, and carding equipment are for sale too.

A simple registration procedure, trader ratings, guaranteed service, and a user-friendly interface are standard features of a Tor underground marketplace. Some stores require sellers to deposit a pledge – a fixed sum of money – before starting to trade. This is to ensure that a trader is genuine and his services are not a scam or of poor quality.

The development of Tor has coincided with the emergence of the anonymous crypto-currency, Bitcoin. Nearly everything on the Tor network is bought and sold using Bitcoins. It’s almost impossible to link a Bitcoin wallet and a real person, so conducting transactions in the dark net using Bitcoin means that cybercriminals can remain virtually untraceable. Kaspersky Lab’s expert blog, Securelist, discusses bitcoins extensively.

It seems likely that Tor and other anonymous networks will become a mainstream feature of the Internet as increasing numbers of ordinary people using the Internet seek a way to safeguard their personal information. But it’s also an attractive mechanism for cybercriminals – a way for them to conceal the functions of the malware they create, trade in cybercrime services, and launder their illegal profits. Researchers believe that the use of these networks for cybercrime will only continue.

Like technology, the specifics of cybercrime are constantly changing. To keep your organization safe today and into the future, partnering with a cybersecurity expert is critical.

 

Technology Trends

Unknown-1

Among the most anticipated aspects of the annual gathering are the ruminations of Gartner's pontificators regarding IT trends. Among the trends shared were the Top 10 Strategic Technology Trends for 2014. Here is a summary of those trends:

Mobile Device Diversity and Management

Gartner suggests that from now through 2018, the variety of devices, user contexts, and interaction paradigms will make “everything everywhere” strategies unachievable. An unintended consequence of bring-your-own-device (BYOD) programs has been to double or even triple the size of the mobile workforce that IT must support (by Gartner's estimate), straining both the information-technology and finance organizations. Gartner recommends that companies better define expectations for employee-owned hardware, balancing flexibility with confidentiality and privacy requirements.

Mobile Apps and Applications

Gartner predicts that through 2014, improved JavaScript performance will begin to push HTML5 and the browser toward becoming a mainstream enterprise application-development environment. As a consequence, developers should focus on expanding user-interface models, including richer voice and video that can connect people in new and different ways. Apps will grow and applications will shrink, continuing a trend that has been documented for a while now. The market for creating apps remains highly fragmented (Gartner estimates there are over 100 potential tool vendors), and consolidation is not likely to happen in earnest for a while. Gartner suggests that “the next evolution in user experience will be to leverage intent, inferred from emotion and actions, to motivate changes in end-user behavior.”

The Internet of Everything

The Internet is expanding into enterprise assets and consumer items such as cars and televisions. The problem is that most enterprises and technology vendors have yet to explore the possibilities of an expanded Internet and are not operationally or organizationally ready. Gartner identifies four basic usage models that are emerging:

   Manage

   Monetize

   Operate

   Extend

These can be applied to people, things, information, and places, and therefore the so-called “Internet of Things” will be succeeded by the “Internet of Everything.”

Hybrid Cloud and IT as Service Broker

Gartner suggests that bringing together personal clouds and external private cloud services is essential. Enterprises should design private cloud services with a hybrid future in mind and make sure future integration/interoperability is possible. Early hybrid cloud services will likely be more static, engineered compositions, and Gartner suggests that more deployment compositions will emerge as cloud service brokerages evolve.

Cloud/Client Architecture

As the power and capability of many mobile devices increases, the increased demand on networks, the cost of networks, and the need to manage bandwidth use “creates incentives, in some cases, to minimize the cloud application computing and storage footprint, and to exploit the intelligence and storage of the client device.” Gartner also notes that as mobile users continue to demand more complex uses of their mobile technologies, it will drive a need for higher levels of server-side computing and storage capacity.

The Era of Personal Cloud

The push for more personal cloud technologies will lead to a shift toward services and away from devices. The type of device one has will become less important as the personal cloud takes over some of the roles the device has traditionally played, with multiple devices accessing the same personal cloud.

Software Defined Anything

Software-defined anything (SDx) is defined as “improved standards for infrastructure programmability and data center interoperability driven by automation inherent to cloud computing, DevOps, and fast infrastructure provisioning.” Dominant vendors in a given sector of an infrastructure type may elect not to follow standards that increase competition and lower margins, but end-customers will benefit from simplicity, cost reduction opportunities, and the possibility for consolidation.

Web-Scale IT

Large cloud service providers such as Amazon, Google, Salesforce.com, and the like are re-inventing how IT services can be delivered. Gartner points out that these companies go beyond scale in terms of sheer size to include scale as it pertains to speed and agility. The suggestion is that IT organizations should align with, and emulate, the processes, architectures, and practices of these leading cloud providers. That combination of size, speed, and agility, among other factors, is how Gartner defines “Web-scale IT.”

Smart Machines

Gartner suggests that “the smart machine era will be the most disruptive in the history of IT.” These will include the proliferation of

   contextually aware, intelligent personal assistants

   smart advisors (e.g., IBM's Watson)

   advanced global industrial systems

   autonomous vehicles

The company also projects that smart machines will strengthen the forces of consumerization after enterprise buying commences in earnest.

3-D Printing

Shipments of 3-D printers are projected to grow 75 percent in the coming year, and a further 200 percent in 2015. Gartner suggests that “the consumer market hype has made organizations aware of the fact 3-D printing is a real, viable and cost-effective means to reduce costs through improved designs, streamlined prototyping, and short-run manufacturing.”


Six Technology Trends That Will Sweep Through The Enterprise

Unknown

Digital technology and always-on connectivity have created new and impressive opportunities for the enterprise. But behind the façade of browser interfaces, mobile apps, and cloud-based data that's accessible anywhere and anytime across the organization, there's one simple truth: IT is becoming far more complex.

Businesses, educational institutions, and government agencies are discovering that they must approach IT in new and radically different ways. Here are six information technology trends that will sweep through the enterprise over the next year:

The app-centric enterprise emerges.

Consumerization, bring your own device (BYOD), personal clouds, and mobility have created an entirely different IT and computing environment. Today, the focus is less on a traditional client-server computing model that requires monolithic enterprise applications and more on narrow, highly targeted functionality delivered through apps, which are increasingly distributed via an enterprise app store.

 

"The explosion of mobile devices has increased the number of tech-savvy employees over the past five years, all of whom are pushing to consumerize the way that IT departments operate," observes James Gordon, vice president of information technology at Needham Bank in Massachusetts. "Employees want to be able to download the apps they need, and they don't want to have to ask for download permissions or access rights to get their job done."

This translates into a greater need to monitor how apps and data are used on the network and to block unwanted software that poses a security risk.
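
As a rough sketch of what such monitoring implies in practice, consider a toy allowlist-style policy check; the app names and the policy itself are hypothetical, and real enforcement would live in mobile-device-management or network-monitoring tooling.

    # Hypothetical allowlist of vetted apps; in practice this policy would be
    # enforced by mobile-device-management or network-monitoring tools.
    APPROVED_APPS = {"mail-client", "crm-mobile", "expense-tracker"}

    def check_app(app_name: str) -> str:
        if app_name in APPROVED_APPS:
            return f"{app_name}: allowed"
        return f"{app_name}: blocked pending security review"

    for app in ["crm-mobile", "file-share-unvetted"]:
        print(check_app(app))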

It also requires business and IT leaders to "fundamentally rethink how they deliver applications and services," says Tiffani Bova, a Gartner vice president and distinguished analyst. "It's not about devices, but, rather, what you can do on the devices."

PwC technology industry sector lead Tom Archer adds that business and IT decision-makers must examine how best to adopt an app-centric framework, but they also must understand how the enterprise can fully tap the environment. "It changes engagement models, economic models, and operating models," Archer explains. "It alters workflows and creates different cost and pricing paradigms."  

The shift from personal devices to personal clouds accelerates.

Cloud computing is changing the face of the enterprise in profound ways. One of the most significant but overlooked areas involves personal clouds. Employees are turning to applications such as Salesforce, Dropbox, and Evernote in droves – in some cases leading organizations down the path of shadow IT. Personal clouds are also ushering in a more mobile-centric approach that allows users to rely on a spate of devices, including smartphones, tablets, laptops, and desktop computers.

Gartner predicts that in 2014 the personal cloud will replace the PC at the center of users' digital lives. "Personal clouds offer a much more flexible and productive way to manage applications and data," says Bova.

She notes that the trend is fueling further consumerization of IT and creating a more application-centric computing environment. It's also leading to a more OS-agnostic approach to IT and creating "new and different delivery models, pricing structures, usage patterns, and application design requirements," Bova adds.

Big data and analytics get real.

There has been no shortage of hype about big data and analytics. However, the technology is now advancing at a rapid pace and, thanks in part to clouds, better ways to extract data, and next-gen analytics tools such as IBM's Watson, organizations can transform a growing mountain of data (including unstructured data) into knowledge.

"With 80 to 90 percent of data today existing in an unstructured state, big data tools are essential for distinguishing the 'signal from the noise,'" says Menka Uttamchandani, vice president for business intelligence at Denihan Hospitality Group, which operates 14 boutique hotels in the United States. Although BI has been around for years, she says that organizations are now learning how to plug in the right tools and build better partnerships, cultivate the necessary internal skill sets and create an analytics-friendly culture that takes appropriate risks.

Joshua Greenbaum, principal at Enterprise Applications Consulting and an IEEE blogger, says that real-time capabilities are emerging. "There is now the opportunity to look at vast amounts of data in real-time and use the data to understand the supply chain, logistics, customer behavior, patient outcomes, and many other things in a way that wasn't possible in the recent past," he says. "Big data is creating a new lease on life for many traditional …"

The Internet of Things connects to business.

The growing number of connected devices and machines is radically changing the business and IT landscape. Cisco Systems' Internet Business Solutions Group predicts that the number of Internet-connected devices will hit 25 billion by 2015 and reach 50 billion by 2020. The firm also forecasts that 99 percent of physical objects will eventually become part of a network.

"Any sensor—physical or virtual—can be transformed into the source of data," says Dejan Milojicic, 2014 IEEE Computer Society president and senior research manager at HP Labs. "And all that data, once collected, can be analyzed, so the opportunities are infinite."

John Devlin, a practice director at ABI Research, says that businesses must begin to understand market opportunities for the Internet of Things, a.k.a. the Internet of Everything. Big data and the cloud are integral components.

"The underlying technologies for the Internet of Things already exist," Devlin says."A large part of the puzzle is understanding how to fit all the pieces together in an appropriate manner." That includes understanding which systems and tools work best, and building in secure access and authentication, he points out.

Séverin Kezeu, CEO of SK Solutions, a Dubai-based manufacturer of anti-collision and safety systems for aerospace, construction, and oil and gas drilling, says that the Internet of Things provides a way to dive deeper into big data and analytics, including historic, real-time and predictive systems. SK Solutions has already built connected capabilities into its ERP system through an SAP Internet of Things solution. The system is used to deliver relevant and actionable insights, as well as better decisions.

"This can be an iterative process for many companies as they uncover unexpected insights and connections from new streams of data," Kezeu says.

Emerging standards take hold in software-defined everything.

The virtualization of networking, storage solutions, and data centers has revolutionized the way businesses operate, manage content and connect with third parties. However, "The explosion of software-defined solutions has grown so rapidly that there are few standards that mandate how data is stored, shared, and managed between vendors and businesses," states Needham Bank's Gordon. Software-defined everything (SDx) takes direct aim at that challenge.

The term, as defined by Gartner, strives for "improved standards for infrastructure programmability and data center interoperability driven by automation inherent to cloud computing." But decoupling the hardware that executes the data transactions from the software layer that orchestrates them isn't an easy task—and not only for technical reasons.

Currently, SDx incorporates various initiatives, such as OpenStack, OpenFlow, the Open Compute Project, and Open Rack. However, Gartner notes that other sticking points exist: "Vendors that dominate a sector of the infrastructure may only reluctantly want to abide by standards that have the potential to lower margins and open broader competitive opportunities—even when the consumer will benefit by simplicity, cost reduction, and consolidation efficiency."

There are also the realities of operating a business. For example, Gordon says that public clouds are not compliant with the regulations he faces at Needham Bank.

Nevertheless, "Software-defined everything is moving beyond technology as organizations apply the concept to business models, including people, structure, and data," PwC's Archer says. "We are likely to see quite a bit of movement in the space in 2014."

Enterprise social collaboration grows and becomes more holistic.

Although it's nearly impossible to find an organization that hasn't been touched by social media, business and IT leaders continue to underutilize these tools internally. A recent McKinsey & Co. survey found that 80 percent of executives believe collaboration is critical to growth, but only 25 percent describe their organization as "effective" at collaboration.

"The value of social tools extends far beyond marketing and listening," Archer points out. "Employee collaboration is a natural next step as organizations look to increase the speed at which data moves."

Emerging social collaboration tools largely deliver on the lost promise of 1990s-era knowledge management systems. Mobility, clouds, unified messaging, and the always-on Internet have made real-time communication possible, as well as making it easy to find experts within an enterprise. Archer says that these systems, when used effectively, serve as platforms for a wide array of interactions.


Top 5 technology trends that will 'reset' IT

Unknown

With a disruptive 2013 coming to a close, Progress says IT departments are looking forward to 2014, when the enterprise will 'hit the reset button'

Cloud, mobile, and social technologies are forcing the information-technology industry to effectively 'hit the reset button' to meet the pace of change that businesses and organizations are facing. As these new technologies mesh with new demands for greater information access and force industry leaders to continually reinvent themselves, applications development and deployment solution provider Progress predicts five major technology trends that will shape the year ahead for the industry.

1.     The Internet of Things does not make for one big happy IT family. 

While developers and IT decision-makers already have their hands full with changes complicated by the sheer number of smartphones and tablets and bring-your-own-device (BYOD) policies, they are in for a surprise. The Internet of Things – composed of wearable personal technology, smart consumer and medical devices, and connected machines and sensors located all around the planet – is about to make the challenge even greater.

The nearly unlimited addresses provided by the adoption of IPv6 will ignite an explosion of new data that must be harnessed, meaning scalability and complexity will take on new meaning. Furthermore, ever-improving “smarts” will mean device-to-device “conversations” will start to become more important than user-to-user “conversations.”
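
The arithmetic behind “nearly unlimited” is easy to check: IPv4 offers 2^32 addresses, while IPv6 offers 2^128. A few lines of Python make the scale of the jump plain.

    ipv4 = 2 ** 32     # total IPv4 addresses: about 4.3 billion
    ipv6 = 2 ** 128    # total IPv6 addresses: about 3.4 x 10^38

    print(f"IPv4: {ipv4:,}")
    print(f"IPv6: {ipv6:.3e}")
    print(f"IPv6 addresses per IPv4 address: {ipv6 // ipv4:.3e}")

Every IPv4 address could be replaced by roughly 7.9 x 10^28 IPv6 addresses, which is why IPv6 removes addressing as a constraint on how many devices and sensors can come online.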

2.     Analytics moves to the forefront. 

Analytics will finally stop being an afterthought in 2014. For decades, job one was connectivity, data movement, and immediate application functionality. Except where analytics was the application itself, analytics was an add-on, a 'nice to have.' The accelerating data tsunami – powered by the Internet of Things and the growing recognition of the potential value of all data – means that developers must build in analytics from the start, making it an inherent aspect of information-technology delivery and making context-sensitive and location-aware capabilities ubiquitous.

3.     When it comes to apps, everyone’s paying attention

While the performance and usability of large, publicly visible projects, especially in healthcare, have drawn growing scrutiny, increased adoption of model-driven, democratized, and user-based development will also drive high expectations for application delivery.

This will lead to increased adoption of new rapid development tools and practices to speed delivery, increase predictability and reliability, meet stringent service-level requirements, and control costs.

4.     The IT budget shift. 

The prevalence of the cloud and democratized development trends present many new options, especially for individual lines of business – which will increasingly seek to control their own destiny by funding their own projects or wresting money from IT.

As a result, businesses – and CIOs – will need to find ways to adopt and adapt without losing control of information, encouraging security risks, or taking new directions that could lead to technical dead-ends or expensive rework in the future. In an era of very rapid transformation, they must stay ahead of this curve and take the lead.

5.     PaaS goes mainstream. 

Platform-as-a-Service finally goes mainstream in 2014. This cloud layer will become the choice for many businesses and IT decision-makers because it supports better and faster development, agility, analytics, cloud-based cost advantages, and vast scalability. 

Providing structure and control that meets the near-term and strategic needs of management will further accelerate adoption. The capabilities available through a PaaS will drive further organizational changes – putting powerful data integration tools into the hands of line-of-business specialists and making data integration ubiquitous.

'Whether it’s the Internet of Things, big data, cloud or mobility, businesses are in for accelerating change in 2014,' Karen Tegan Padir, chief technology officer at Progress, commented. 'The successful organization will take a good look at the needs of the end-user first – creating an empowerment profile for each group – and providing them with the tools they need to be productive and efficient.'

'One size will no longer fit all. For some, that will mean deploying a PaaS for rapid application development; for others, it will mean launching an enterprise app store to help employees gain access to the apps they want in a secure environment. In any case, it's about the information and being able to access it whenever, wherever, and however it's needed.'


Usability in Open Source Software



Open-source software developers have created an array of amazing programs that provide a great working environment with rich functionality. At work and at home, I routinely run Fedora Linux on my desktop, using Firefox and LibreOffice for most of my daily tasks. I'm sure many of you do, too. But as great as open source can be, we're all aware of a few programs that just seem hard to use. Maybe a program is confusing, has awkward menus, or has a steep learning curve. These are hallmarks of poor usability.

But what do we mean when we talk about usability? Usability is just a measure of how well people can use a piece of software. You may think that usability is an academic concept, not something that most open-source software developers need to worry about, but I think that’s wrong. We all have a sense of usability – we can recognize when a program has poor usability, although we don’t often recognize when a program has good usability.

So how can you find good usability? When a program has good usability, it just works. It is really easy to use. Things are obvious or seem intuitive.

Getting there – making sure your program has good usability – may take a little work, but not much. All it requires is taking a step back to do a quick usability test.

A usability test doesn't require a lot of time. Usability consultant Jakob Nielsen says that as few as five testers are enough to find most of a program's usability problems (a rough model of why appears after the walkthrough below). Here's how to do it:

1. Figure out who your program's target users are.

You probably already know this. Are you writing a program for general users with average computer knowledge? Or are you writing a specialized tool that will be used by experts? Take an honest look. For example, one open-source software project I work on is the FreeDOS Project, and we figured out long ago that it wasn’t just DOS experts who wanted to use FreeDOS. We determined there were three different types of users: people who want to use FreeDOS to play DOS games, people who need to use FreeDOS to run work applications, and developers who need FreeDOS to create embedded systems.

2. Identify the typical tasks for your users.

What will these people use your program for? How will your users try to use your program? Come up with a list of typical activities that people would do. Don’t try to think of the steps they will use in the program, just the types of activities. For example, if you were working on a web browser, you’d want to list things like visiting a website, bookmarking a website, or printing a web page.

3. Use these tasks to build a usability test scenario.

Write up each task in plain language that everyone can understand, with each task on its own page. Put a typical scenario behind the tasks, so that each step seems natural. You don't have to build on each step – each task can stand alone with its own scenario. But here's the tricky part: be careful not to use terminology from your program in the scenario. Also avoid abbreviations, even if they seem common to you – they might not be common to someone else. For example, if you were writing a scenario for a simple text editor, and the editor has a "Font" menu, try to write your scenario without using the word "Font." Instead of this: "To see the text more clearly, you decide to change the font size. Increase the font size to 14pt," write your scenario like this: "To see the text more clearly, you decide to make the text bigger. Increase the size of the text to 14 points."

Once you have your scenarios, it’s just a matter of sitting down with a few people to run through a usability test. Many developers find the usability test to be extremely valuable – there’s nothing quite like watching someone else try to use your program. You may know where to find all the options and functionality in your program, but will an average user with typical knowledge be able to?

A usability test is simply a matter of asking someone to do each of the scenarios that you wrote. The purpose of a usability test is not to determine if the program is working correctly, but to see how well real people can use the program. In this way, a usability test is not judging the user; it’s an evaluation of the program. So start your usability test by explaining that to each of your testers. The usability test isn’t about them; it’s about the program. It’s okay if they get frustrated during the test. If they hit a scenario that they just can’t figure out, give them some time to work it through, then move on to the next scenario.

Your job as the person running the usability test is the hardest of all. You are there to observe, not to make comments. There’s nothing tougher than watching a user struggle through your program, but that’s the point of the usability test. Resist the temptation to point out “It’s right there! That option there!”

Once all your usability testers have had their chance with the program, you’ll have lots of information to go on. You’ll be surprised how much you’ll learn just by watching people using the program.
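
Nielsen's five-tester claim rests on a simple problem-discovery model: the share of problems found by n testers is 1 - (1 - L)^n, where L is the probability that a single tester uncovers a given problem. Assuming Nielsen's oft-cited average of L = 0.31, a few lines of Python show why five testers go a long way.

    # Share of usability problems found by n testers, assuming each tester
    # independently finds a given problem with probability L (Nielsen's
    # published average across projects is roughly 0.31).
    L = 0.31
    for n in range(1, 9):
        found = 1 - (1 - L) ** n
        print(f"{n} testers: ~{found:.0%} of problems found")

Under those assumptions, five testers already surface roughly 85 percent of the problems, which is why additional testers yield rapidly diminishing returns.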

But what if you don’t have time to do a usability test? Or maybe you aren’t able to get people together. What’s the shortcut?

While it's best to run your own usability test, I find there are a few general guidelines that point toward good usability even without one:

In general, I see Familiarity as a theme. When I did a usability test of several programs under the GNOME desktop, testers indicated that the programs seemed to operate more or less like their counterparts in Windows or Mac. For example, Gedit wasn’t too different from Windows Notepad, or even Word. Firefox is like other browsers. Nautilus is very similar to Windows Explorer. To some extent, these testers had been trained under Windows or Mac, so having functionality – and paths to that functionality – that was approximately equivalent to the Windows experience was an important part of their success.

Consistency was a recurring theme in the feedback. For example, right-click menus worked in all programs, and the programs looked and acted the same.

Menus were also important. While some testers said that icons (such as those in the toolbar) helped them during the test, most did not use the quick-access toolbar in Gedit – except for the Save button. Instead, they used the program's drop-down menus: File, Edit, View, Tools, etc. Testers experienced problems when the menus did not present possible actions.

And finally, Obviousness. When an action produced an obvious result or indicated success – such as saving a file, creating a folder, or opening a new tab – the testers were able to quickly move through the scenarios. When an action did not produce obvious feedback, the testers became confused. These problems were especially evident when trying to create a bookmark or shortcut in the Nautilus file manager, where the program did not provide feedback and did not indicate whether or not the bookmark had been created.

Usability is important in all software – but especially open-source software. People shouldn’t have to figure out how to use a program, aside from specialized tools. Typical users with average knowledge should be able to operate most programs. If a program is too hard to use, the problem is more likely with the program than with the user. Run your usability test, apply the four themes for good usability, and make your open-source project even better!

Jim Hall is a long-time open-source software developer, best known for his work on the FreeDOS Project, including many of the utilities and libraries. Jim also wrote GNU Robots and has previously contributed to other open-source software programs including CuteMouse, GTKPod, Atomic Tanks, GNU Emacs, and Freemacs. At work, Jim is the Director of Information Technology at the University of Minnesota Morris. Jim is also working on his M.S. in Scientific & Technical Communication (Spring 2014); his thesis is “Usability in Open Source Software.”

Open-source communities have successfully developed a great deal of software. Most of this software is used by technically sophisticated users, in software development or as part of the larger computing infrastructure. Although the use of open-source software is growing, the average computer user still interacts directly only with proprietary software. There are many reasons for this situation, one of which is the perception that open-source software is less usable. This paper examines how the open-source development process influences usability and suggests usability improvement methods that are appropriate for community-based software development on the Internet.

One interpretation of this topic can be presented as the meeting of two different paradigms:

  • the open-source developer-user, who both uses the software and contributes to its development
  • the user-centered design movement, which attempts to bridge the gap between programmers and users through specific techniques (usability engineering, participatory design, ethnography, etc.)


Indeed the whole rationale behind the user-centered design approach within human-computer interaction (HCI) emphasizes that software developers cannot easily design for typical users. At first glance, this suggests that open-source developer communities will not easily live up to the goal of replacing proprietary software on the desktop of most users (Raymond, 1998). However, as we discuss in this paper, the situation is more complex and there are a variety of potential approaches: attitudinal, practical, and technological.

In this paper, we first review the existing evidence of the usability of open-source software (OSS). We then outline how the characteristics of open-source development influence the software. Finally, we describe how existing HCI techniques can be used to leverage distributed networked communities to address issues of usability.

Is there an open-source usability problem?

Open-source software has gained a reputation for reliability, efficiency, and functionality that has surprised many people in the software engineering world. The Internet has facilitated the coordination of volunteer developers around the world to produce open-source solutions that are market leaders in their sector (e.g. the Apache Web server). However, most of the users of these applications are relatively technically sophisticated, and the average desktop user is using standard commercial proprietary software (Lerner and Tirole, 2002). There are several explanations for this situation: inertia, interoperability, interacting with existing data, user support, organizational purchasing decisions, etc. In this paper, we are concerned with one possible explanation: that (for most potential users) open-source software has poorer usability.

Usability is typically described in terms of five characteristics: ease of learning, efficiency of use, memorability, error frequency and severity, and subjective satisfaction (Nielsen, 1993). Usability is separate from the utility of software (whether it can perform some function) and from other characteristics such as reliability and cost. Software, such as compilers and source code editors, which is used by developers does not appear to represent a significant usability problem for OSS. In the following discussion, we concentrate on software (such as word processors, e-mail clients, and Web browsers) that is aimed predominantly at the average user.

That there are usability problems with open source software is not significant by itself; all interactive software has problems. The issue is: how does software produced by an open-source development process compare with other approaches? Unfortunately, it is not easy to arrange a controlled experiment to compare the alternative engineering approaches; however, it is possible to compare similar tasks on existing software programs produced in different development environments. The only study we are aware of that does such a comparison is Eklund et al. (2002), using Microsoft Excel and StarOffice (this particular comparison is made more problematic by StarOffice's proprietary past).

There are many differences between the two programs that may influence such comparisons, e.g. development time, development resources, maturity of the software, the prior existence of similar software, etc. Some of these factors are characteristic of the differences between open-source and commercial development, but the large number of differences makes it difficult to determine what a 'fair comparison' should be. Ultimately user testing of the software, as with Eklund et al. (2002), must be the acid test. However, as has been shown by the Mozilla Project (Mozilla, 2002), it may take several years for an open-source project to reach comparability, and premature negative comparisons should not be taken as indicative of the whole approach. Additionally, the public nature of open-source development means that the early versions are visible, whereas the distribution of embryonic commercial software is usually restricted.

There is a scarcity of published usability studies of open-source software; in addition to Eklund et al. (2002), we are aware only of studies on GNOME (Smith et al., 2001), Athena (Athena, 2001), and Greenstone (Nichols et al., 2001). The characteristics of open-source projects emphasize continual incremental development, which does not lend itself to traditional formal experimental studies (although culture may play a part, as we discuss in the next section).

Although there are few formal studies of open-source usability there are several suggestions that open-source software usability is a significant issue (Behlendorf, 1999; Raymond, 1999; Manes, 2002; Nichols et al., 2001; Thomas, 2002; Frishberg et al., 2002):

"If this [desktop and application design] were primarily a technical problem, the outcome would hardly be in doubt. But it isn't; it's a problem in ergonomic design and interface psychology, and hackers have historically been poor at it. That is, while hackers can be very good at designing interfaces for other hackers, they tend to be poor at modeling the thought processes of the other 95% of the population well enough to write interfaces that J. Random End-User and his Aunt Tillie will pay to buy." (Raymond, 1999)

"Traditionally the users of OSS have been experts, early adopters, and nearly synonymous with the development pool. As OSS enters the commercial mainstream, a new emphasis is being placed on usability and interface design, with Caldera and Corel explicitly targeting the general desktop with their Linux distributions. Non-expert users are unlikely to be attracted by the available source code and more likely to choose OSS products based on cost, quality, brand, and support." (Feller and Fitzgerald, 2000)

Raymond is stating the central message of user-centered design (Norman and Draper, 1986): developers need specific external help to cater to the average user. The HCI community has developed several tools and techniques for this purpose: usability inspection methods, interface guidelines, testing methods, participatory design, interdisciplinary teams, etc. (Nielsen, 1993). The increasing attention being paid to usability in open-source circles (Frishberg et al., 2002) suggests that it may be passing through a similar phase to that of proprietary software in the 1980s.

As the users of software became more heterogeneous and less technically experienced, software producers started to adopt user-centered methods to ensure that their products were successfully adopted by their new users. Whilst many users continue to have problems with software applications, the HCI specialists employed by companies have greatly improved users' experiences.

As the user base of OSS widens to include many non-developers, projects will need to apply HCI techniques if they wish their software to be used on the desktop of the average user. There is recent evidence (Benson et al., 2002; Biot, 2002) that some open-source projects are adopting techniques from previous proprietary work, such as explicit user interface guidelines for application developers (Benson et al., 2002).

It is difficult to give a definitive answer to the question: is there an open-source usability problem? The existence of a problem does not necessarily mean that all OSS interfaces are bad or that OSS is doomed to have hard-to-use interfaces, just a recognition that the interfaces ought to be and can be made better. The opinions of several commentators and the actions of companies, such as Sun's involvement with GNOME, are strongly suggestive that there is a problem, although the academic literature (e.g. Feller and Fitzgerald, 2002) is largely silent on the issue (Frishberg et al. (2002) and Nichols et al. (2001) are the main exceptions). However, to suggest HCI approaches that mesh with the practical and social characteristics of open-source developers (and users) it is necessary to examine the aspects of the development process that may hurt usability.

"They just don't like to do the boring stuff for the stupid people!" (Sterling, 2002)

To understand the usability of current OSS we need to examine the current software development process. It is a truism of user-centered design that development activities are reflected in the developed system. Drawing extensively from two main sources (Nichols et al., 2001; Thomas, 2002), we present here a set of features of the OSS development process that appear to contribute to the problem of poor usability. Some of these features are shared with the commercial sector, which helps to explain why OSS usability is no worse than that of proprietary systems – though it is no better, either.

This list of features is not intended to be complete but to serve as a starting point in addressing these issues. We note that there would seem to be significant difficulties in 'proving' whether several of these hypotheses are correct.

Developers are not typical end-users

This is a key point of Nielsen (1993) and one shared with commercial systems developers. Teaching computer science students about usability issues is, in our experience, chiefly about helping them to see the use of their systems through the eyes of other people unlike themselves and their peers. In fact, for many more advanced OSS products, developers are indeed users, and these esoteric products, with interfaces that would be unusable by a less technically skilled group of users, are perfectly adequate for their intended elite audience. Indeed, there may be a certain pride in the creation of a sophisticated product with a powerful but challenging-to-learn interface. Mastery of such a product is difficult and so legitimates membership of an elite, whose members can then distinguish themselves from so-called 'lusers' [1]. Trudelle (2002) comments that "the product [a Web browser] should target people whom they [OSS contributors] consider to be clueless newbies."

However, when designing products for less technical users, all the traditional usability problems arise. In the Greenstone study (Nichols et al., 2001) common command line conventions, such as a successful command giving no feedback, confused users. The use of the terms 'man' (from the Unix command line), when referring to the help system, and 'regexp' (regular expression) in the GNOME interface are typical examples of developer terminology presented to end-users (Smith et al., 2001).

The OSS approach fails for end user usability because there are 'the wrong kind of eyeballs' looking at, but failing to see, usability issues. In some ways, the relatively new problem with OSS usability reflects the earlier problem with commercial systems development: initially, the bulk of applications were designed by computing experts for other computing experts, but over time an increasing proportion of systems development was aimed at non-experts, and usability problems became more prominent. The transition to non-expert applications in OSS products is following a similar trajectory, just a few years later.

The key difference between the two approaches is this: commercial software development has recognized these problems and can employ specific HCI experts to 're-balance' historic team compositions and consequent development priorities in favor of users (Frishberg et al., 2002). However, volunteer-led software development cannot simply hire people with the missing skill sets to ensure that user-centered design expertise is present in the development team. Additionally, in commercial development, it is easier to ensure that HCI experts are given sufficient authority to promote the interests of users.

Usability experts do not get involved in OSS projects

Anecdotal evidence suggests that few people with usability experience are involved in OSS projects; one of the 'lessons learned' in the Mozilla project (Mozilla, 2002) is to "ensure that UI [user interface] designers engage the Open Source community" (Trudelle, 2002). Open source draws its origins and strength from a hacker culture (O'Reilly, 1999). This culture can be extremely welcoming to other hackers, comfortably spanning nations, organizations, and time zones via the Internet. However, it may be less welcoming to non-hackers.

Good usability design draws from a variety of different intellectual cultures including but not limited to psychology, sociology, graphic design, and even theatre studies. Multidisciplinary design teams can be very effective but require particular skills to initiate and sustain. As a result, existing OSS teams may just lack the skills to solve usability problems and even the skills to bring in 'outsiders' to help. The stereotypes of low hacker social skills are not to be taken as gospel, but the sustaining of distributed multidisciplinary design teams is not trivial.

Furthermore, the skills and attitudes necessary to be a successful and productive member of an OSS project may be relatively rare. With a large candidate set of hacker programmers interested in getting involved, OSS projects have various methods for winnowing out those with the best skill sets and giving them progressively more control and responsibility. It may be that the same applies to potential usability participants, implying that a substantial number of potential usability recruits are needed to proceed with the winnowing process. If true, this adds to the usability expertise shortage problem.

There are several possible explanations for the minimal or non-participation of HCI and usability people in OSS projects:

  • There are far fewer usability experts than hackers, so there are just not enough to go around.
  • Usability experts are not interested in, or incentivized by, the OSS approach in the way that many hackers are.
  • Usability experts do not feel welcomed into OSS projects.
  • Inertia: traditionally projects haven't needed usability experts. The current situation of many technically adept programmers and few usability experts in OSS projects is just a historical artifact.
  • There is not a critical mass of usability experts involved for the incentives of peer acclaim and recruitment opportunities to operate.

The incentives in OSS work better for the improvement of functionality than usability

Are OSS developers just not interested in designing better interfaces? As most work on open-source projects is voluntary, developers work on the topics that interest them, and these may well not include features for novice users. The importance of incentives in OSS participation is well recognized (Feller and Fitzgerald, 2002; Hars and Ou, 2001). These include gaining respect from peers and the intrinsic challenge of tackling a hard problem. Adding functionality or optimizing code provides opportunities for showing off one's talents as a hacker to other hackers. If OSS participants perceive improvements to usability as lower status, less challenging, or just less interesting, then they are less likely to choose to work on this area. The voluntary nature of participation has two aspects: choosing to participate at all, and choosing which of the usually large number of problems within a project to work on. With many competing challenges, usability problems may get crowded out.

An even more extreme version of this case is that the choice of the remit of an entire OSS project may be more biased towards the systems side than the applications side [2]. "Almost all of the most widely-known and successful OSS projects seem to have been initiated by someone who had a technical need that was not being addressed by available proprietary (or OSS) technology" [3]. Raymond refers to the motivation of "scratching a personal itch" (Raymond, 1998). The technically adept initiators of OSS projects are more likely to have a personal need for very advanced applications, development toolkits, or systems infrastructure improvements than an application that also happens to meet the needs of a less technically sophisticated user.

"From a developer's perspective, solving a usability problem might not be a rewarding experience since the solution might not involve a programming challenge, new technology, or algorithms. Also, the benefit of solving the usability problem might be a slight change to the behavior of the software (even though it might cause a dramatic improvement from the user's perspective). This behavior change might be subtle, and not fit into the typical types of contributions developers make such as adding features, or bug fixes." (Eklund et al., 2002)

The 'personal itch' motivation creates a significant difference between open source and commercial software development. Commercial systems development is usually about solving the needs of another group of users. The incentive is to make money by selling software to customers, often customers who are prepared to pay precisely because they do not have the development skills themselves. Capturing the requirements of software for such customers is acknowledged as a difficult problem in software engineering, and consequently techniques have been developed to attempt to address it. By contrast, many OSS projects lack formal requirements capture processes and even formal specifications (Scacchi, 2002). Instead, they rely on the understood requirements of initial individuals or tight-knit communities, supported by 'informalisms' and illustrated by the evolving project code, which embodies the requirements even where it does not articulate them.

The relation to usability is that this implies that OSS is in certain ironic ways more egotistical than closed-source software (CSS). A personal itch implies designing software for one's own needs. Explicit requirements are consequently less necessary. Within OSS this is then shared with a like-minded community and the individual tool is refined and improved for the benefit of all — within that community. By contrast, a CSS project may be designed for use by a community with different characteristics, and where there is a strong incentive to devote resources to certain aspects of usability, particularly initial learnability, to maximize sales (Varian, 1993).

Usability problems are harder to specify and distribute than functionality problems

Functionality problems are easier to specify, evaluate, and modularize than certain usability problems. These are all attributes that simplify decentralized problem-solving. Some (but not all) usability problems are much harder to describe and may pervade an entire screen, interaction, or user experience. Incremental patches to interface bugs may be far less effective than incremental patches to functionality bugs. Fixing the problem may require a major overhaul of the entire interface — not a small contribution to the ongoing design work. Involving more than one designer in interface design, particularly if they work autonomously, will lead to design inconsistency and hence lower the overall usability. Similarly, improving an interface aspect of one part of the application may require careful consideration of the consequences of that change for overall design consistency. This can be contrasted with the incremental fixing of the functionality of a high-quality modularised application. The whole point of modularisation is that the effects are local. Substantial (and highly desirable) refactoring can occur throughout the ongoing project while remaining invisible to the users. However, many interface changes are global in scope because of their consistency effects.

The modularity of OSS projects contributes to the effectiveness of the approach (O'Reilly, 1999), enabling them to side-step Brooks' Law. Different parts can be swapped out and replaced by superior modules that are then incorporated into the next version. However, a major success criterion for usability is the consistency of design. Slight variations in the interface between modules and different versions of modules can irritate and confuse, marring the overall user experience. Their inevitably public nature means that interfaces are not amenable to the black-boxing that permits certain kinds of incremental and distributed improvement.

We must note that OSS projects do successfully address certain categories of usability problems. One popular approach to OSS interface design is the creation of 'skins': alternate interface layouts that dramatically affect the overall appearance of the application but do little to change the nature of the underlying interaction dynamics. A related approach is software internationalization, where the language of the interface (and any culture-specific icons) is translated. Both approaches are amenable to the modular OSS approach, whereas an attempt to address deeper interaction problems by redesigning sets of interaction sequences does not break down so easily into a manageable project. The reason for the difference is that addressing the deeper interaction problems can have implications not only across the whole interface but can also lead to requirements for changes to different elements of functionality.
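Internationalization illustrates why such work modularizes so well: the application asks for messages by key, and swapping in a different catalogue changes the language without touching any program logic. Here is a minimal sketch using Python's standard gettext module; the 'myapp' domain name and directory layout are illustrative assumptions, not from any particular project.

```python
# Minimal sketch: translation as a swappable module. The catalogue files
# (e.g. locale/de/LC_MESSAGES/myapp.mo) are maintained separately from the
# code, so translators never need to modify the application itself.
import gettext

# fallback=True returns the untranslated string if no catalogue is found,
# so this sketch runs even without any compiled .mo files.
trans = gettext.translation("myapp", localedir="locale",
                            languages=["de"], fallback=True)
_ = trans.gettext

print(_("File not found"))  # German if a catalogue exists, English otherwise
```

A skin works in the same way: the presentation is swapped out behind a stable interface to the rest of the code, which is exactly the black-boxing that distributed OSS development handles well.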

Another major category of OSS usability success is in software (chiefly GNU/Linux) installation. Even the technically adept had difficulties in installing the early versions of GNU/Linux. The Debian project (Debian, 2002) was initiated as a way to create a better distribution that made installation easier, and other projects and companies have continued this trend. Such projects solve a usability problem, but in a manner that is compatible with traditional OSS development. Effectively a complex set of manual operations is automated, creating a black box for the end user with no wish to explore further. Of course, since it is an open-source project, the black box is openable, examinable, and changeable for those with the will and the skill to investigate.

Design for usability really ought to take place in advance of any coding

In some ways it is surprising that OSS development is so successful, given that it breaks many established rules of conventional software engineering. Well-run projects are meant to plan carefully in advance, capturing requirements and specifying what should be done before ever beginning coding. By contrast, OSS often appears to involve coding as early as possible, relying on constant review to refine and improve the overall, emergent design: "Your nascent developer community needs to have something runnable and testable to play with" (Raymond, 1998). Similarly, Scacchi's (2002) study didn't find "examples of formal requirements elicitation, analysis and specification activity of the kind suggested by software engineering textbooks." Trudelle (2002) notes that skipping much of the design stage with Mozilla resulted in design and requirements work occurring in bug reports, after the distribution of early versions.

This approach does seem to work for certain kinds of applications, and in others there may be a clear plan or shared vision between the project coordinator and the main participants. However, good interface design works best when it is involved before coding occurs. If there is no collective planning even for the coding, there is no opportunity to factor interface issues into the early design. OSS planning is usually done by the project initiator before the larger group is involved. We speculate that while an OSS project's members may share a strong vision of the intended functionality (which is what allows the bypassing of traditional software engineering planning), they often have a much weaker shared vision of the intended interface. Unless the initiator happens to possess significant interaction design skills, important aspects of usability will get overlooked until it is too late. As with many of the issues we raise, that is not to say that CSS always, or even frequently, gets it right. Rather, we want to consider potential barriers within existing OSS practice that might then be addressed.

Open-source projects lack the resources to undertake high-quality usability work

OSS projects are run by volunteers and so work on small budgets. Employing outside experts such as technical authors and graphic designers is not possible. As noted earlier, there may currently be barriers to bringing such skills into the volunteer-based OSS development team. Usability laboratories and detailed large-scale experiments are just not economically viable for most OSS projects. Discussion on the K Desktop Environment (KDE) Usability mailing list (KDE Usability, 2002) has considered asking usability laboratories for donations of time in which to run studies with state-of-the-art equipment.

Recent usability activity in several open source projects has been associated with the involvement of companies, e.g. Benson et al. (2002), although it seems likely that they are investing less than large proprietary software developers. Unless OSS usability resources are increased, or alternative approaches are investigated (see below), then open-source usability will continue to be constrained by resource limitations.

Commercial software establishes the state of the art, so OSS can only play catch-up

Regardless of whether commercial software provides good usability, its overwhelming prominence in end-user applications creates a distinct inertia concerning innovative interface design. To compete for adoption, OSS applications appear to follow the interface ideas of the brand leaders. Thus the StarOffice spreadsheet component, Calc, tested against Microsoft Excel (Eklund et al., 2002), was deliberately developed to provide a similar interface so as to make transfer of learning easier. As a result, it had to follow the interface design ideas of Excel regardless of whether or not they could have been improved upon.

There does not seem to be any overriding reason why this conservatism should be the case, other than the perceived need to compete by enticing existing CSS users to switch to open-source direct equivalents. Another possibility is that current typical OSS developers, who may be extremely supportive of functionality innovation, just lack interest in interface design innovation. Finally, the underlying code of a commercial system is proprietary and hidden, requiring any OSS rival to do a form of reverse engineering to develop. This activity can inspire significant innovation and extension. By contrast, the system's interface is a very visible pre-existing solution that might dampen innovation — why not just copy it, subject to minor modifications due to concerns of copyright? One might expect in the absence of other factors that open-source projects would be much more creative and risk-taking in their development of radically new combinations of functionality and interface, since they do not suffer short-term financial pressures.

OSS has an even greater tendency towards software bloat than commercial software

Many kinds of commercial software have been criticized for bloated code, consuming ever greater amounts of memory and numbers of processor cycles with successive software version releases. There is a commercial pressure to increase functionality and so to entice existing owners to purchase the latest upgrade. Naturally, the growth of functionality can seriously degrade usability as the increasing number of options becomes ever more bewildering, serving to obscure the tiny subset of features that a given user wishes to employ.

There are similar pressures in open source development, but from different causes. Given the interests and incentives of developers, there is a strong incentive to add functionality and almost no incentive to delete functionality, especially as deletion can irritate the person who developed the functionality in question. Worse, given that peer esteem is a crucial incentive for participation, deletion of functionality in the interest of benefiting the end user creates a strong disincentive to future participation, perhaps considered worse than having one's code replaced by code that one's peers have deemed superior. The project maintainer, to keep volunteer participants happy, is likely to keep functionality even if it is confusing and, on receipt of two similar additional functionalities, keep both, creating options for the user of the software to configure the application to use the one that best fits their needs. In this way as many contributors as possible can gain clear credit for directly contributing to the application. This suggested tendency towards 'pork barrel' design compromise needs further study.

The process of 'release early and release often' can lead to an acceptance of certain clumsy features. People invest time and effort in learning them and creating their workarounds to cope with them. When a new, improved version is released with a better interface, there is a temptation for those early adopters of the application to refuse to adapt to the new interface. Even if it is easier to learn and use than the old one, their learning of the old version is now a sunk investment and understandably they may be unwilling to re-learn and modify their workarounds. The temptation for the project maintainer is to keep multiple legacy interfaces coordinated with the latest version. This pleases the older users, creates more development opportunities, keeps the contributions of the older interfaces in the latest version, and adds to the complexity of the final product.

OSS development is inclined to promote power over simplicity

'Software bloat' is widely agreed to be a negative attribute. However, the decision to add multiple alternative options to a system may be seen as a positive good rather than an invidious compromise. We speculate that freedom of choice may be considered a desirable attribute (even a design aesthetic) by many OSS developers. The result is an application that has many configuration options, allowing very sophisticated tailoring by expert users, but which can be bewildering to a novice. The provision of five different desktop clocks in GNOME (Smith et al., 2001) is one manifestation of this tendency; another is the growth of preference interfaces in many OSS programs (Thomas, 2002).

Thus there is a tendency for OSS applications to grow in complexity, reducing their usability for novices, but with that tendency to remain invisible to the developers who are not novices and relish the power of sophisticated applications. Expert developers will also rarely encounter the default settings of a multiplicity of options and so are unlikely to give much attention to their careful selection, whereas novices will often live with those defaults. Of course, commercial applications also grow in complexity, but at least there are some factors to moderate that growth, including the cost of developing the extra features and some pressures from a growing awareness of usability issues.

Potential approaches to improving OSS usability

The above factors aim to account for the current relatively poor state of the usability of many open-source products. However, some factors should contribute to better usability, although they may currently be outweighed by the negative factors in many current projects.

A key positive factor is that some end users are involved in OSS projects. This involvement can be in elaborating requirements, testing, writing documentation, reporting bugs, requesting new features, etc. This is clearly in accord with the advocacy of HCI experts, e.g. Shneiderman (2002), and also has features in common with participatory design (Kyng and Mathiassen, 1997). The challenge is how to enable and encourage far greater participation of non-technical end users and HCI experts who do not conform to the traditional OSS-hacker stereotype.

We describe several areas where we see potential for improving usability processes in OSS development.

Commercial approaches

One method is to take a successful OSS project with powerful functionality and involve companies in the development of a better interface. It is noticeable that several of the positive (from the HCI point of view) recent developments (Smith et al., 2001; Benson et al., 2002; Trudelle, 2002) in OSS development parallel the involvement of large companies with both design experience and considerably more resources than the typical volunteer-led open source project. However, the HCI methods used are the same as for proprietary software and do not leverage the distributed community that gives open-source software its perceived edge in other aspects of development. Does this imply that the only way to achieve a high level of end-user usability is to 'wrap' an open-source project with a commercially developed interface? Certainly, that is one approach, and Apple's OS X serves as a prime example, as to a lesser extent do commercial releases of GNU/Linux (since they are aimed at a (slightly) less technologically sophisticated market). The Netscape/Mozilla model of mutually informed development offers another model. However, as Trudelle (2002) notes, there can be conflicts of interest and mutual misunderstandings between a commercial partner and OSS developers over whether the direction of interface development aligns with their respective interests.

Technological approaches

One approach to dealing with a lack of human HCI expertise is to automate the evaluation of interfaces. Ivory and Hearst (2001) present a comprehensive review of automated usability evaluation techniques and note several advantages of automation, including cost reduction, increased consistency, and a reduced need for human evaluators. For example, the Sherlock tool (Mahajan and Shneiderman, 1997) automated the checking of visual and textual consistency across an application, using simple methods such as a concordance of all text in the application interface and metrics such as widget density. Applications with interfaces that can be easily separated from the rest of the code, such as Mozilla, are good candidates for such approaches.
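To make the flavor of such automation concrete, here is a minimal Python sketch, assuming nothing about Sherlock's actual implementation, of a concordance-style check: it gathers every interface string and flags words whose capitalization varies between labels, one of the simple textual checks such tools can perform automatically.

```python
# Minimal sketch of a concordance-style consistency check: gather every
# interface string and flag words whose capitalization varies between
# labels. Real tools add metrics such as widget density on top of this.
from collections import defaultdict

def find_inconsistent_terms(labels):
    """Group words by lower-cased form and report capitalization variants."""
    variants = defaultdict(set)
    for label in labels:
        for word in label.split():
            variants[word.lower()].add(word)
    return {w: sorted(v) for w, v in variants.items() if len(v) > 1}

ui_labels = ["Open File", "Save file", "Close FILE"]   # toy interface text
print(find_inconsistent_terms(ui_labels))
# -> {'file': ['FILE', 'File', 'file']}: candidates for a consistency fix
```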

An interesting approach to understanding user behavior is the use of 'expectation agents' (Hilbert and Redmiles, 2001), which allow developers to explicitly embed their design expectations in an application. When a user does something unexpected (and triggers the expectation agent), program state information is collected and sent back to the developers. This is an extension of the instrumentation of applications, but one focused on user activity (such as the order in which a user fills in a form) rather than the values of program variables. Extensive instrumentation has been used by closed-source developers as a key element of program improvement (Cusumano and Selby, 1995).
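A rough sketch of the idea, using a hypothetical form-filling flow rather than Hilbert and Redmiles's actual system: the developer declares the expected order in which fields are completed, and a deviation triggers a report of the relevant interaction state.

```python
# Minimal sketch of an expectation agent for a hypothetical form: the
# developer declares the expected order of field completion; a deviation
# triggers a report containing the expected and actual interaction state.
class ExpectationAgent:
    def __init__(self, expected_order, report):
        self.expected = expected_order
        self.seen = []
        self.report = report          # callback that ships data back

    def field_filled(self, name):
        self.seen.append(name)
        expected_so_far = self.expected[:len(self.seen)]
        if self.seen != expected_so_far:
            # Unexpected user behaviour: capture interaction state, not
            # just program variables, and send it to the developers.
            self.report({"expected": expected_so_far,
                         "actual": list(self.seen)})

agent = ExpectationAgent(["name", "address", "card"], report=print)
agent.field_filled("name")
agent.field_filled("card")   # out of order -- triggers the report
```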

Academic involvement

It is noticeable that some of the work described earlier has emerged from higher education (Athena, 2001; Eklund et al., 2002; Nichols et al., 2001). In these cases, classes of students studying HCI have participated in or organized studies of OSS. This type of activity is effectively a gift to the software developers, although the main aim is pedagogical. The desirability of practicing skills and testing conceptual understanding on authentic problems rather than made-up exercises is obvious.

The model proposed is that an individual, group, or class would volunteer support following the OSS model, but involving aspects of any combination of usability analysis and design: user studies, workplace studies, design requirements, controlled experiments, formal analysis, design sketches, prototypes, or actual code suggestions. To support these kinds of participation, certain changes may be needed to the OSS support software, as noted below.

Involving the end users

The Mozilla bug database, Bugzilla, has received more than 150,000 bug reports at the time of writing. Overwhelmingly these bug reports concern functionality (rather than usability) and have been contributed by technically sophisticated users and developers:

"Reports from lots of users are unusual too; my usual rule of thumb is that only 10% of users have any idea what newsgroups are (and most of them lurk 90% of the time), and that much less than 1% of even mozilla users ever file a bug. That would mean we don't ever hear from 90% of users unless we make some effort to reach them."

"Generally speaking most members [of an open-source community] are Passive Members. For example, about 99 percent of people who use Apache are Passive Users." (Nakakoji, 2002)

One reason for users' non-participation is that the act of contributing is perceived as too costly compared to any benefits. The time and effort to register with Bugzilla (a pre-requisite for bug reporting) and understand its Web interface are considerable. The language and culture embodied in the tool are themselves barriers to participation for many users. In contrast, the crash reporting tools in both Mozilla and Microsoft Windows XP are simple to use and require no registration. Furthermore, these tools are part of the application and do not require a user to separately enter information on a Web site.

We suggest that integrated user-reported usability incidents are a strong candidate for addressing usability issues in OSS projects. That is, users report occasions when they have problems whilst they are using an application. Existing HCI research (Hartson and Castillo, 1998; Castillo et al., 1998; Thompson and Williges, 2000) has shown, on a small scale, that user reporting is effective at identifying usability problems. These reporting tools are part of an application, easy to use, free of technical vocabulary, and can return objective program state information in addition to user comments (Hilbert and Redmiles, 2000). This combination of objective and subjective data is necessary to make causal inferences about users' interactions in remote usability techniques (Kaasgaard et al., 1999). In addition to these user-initiated reports, applications can prompt users to contribute based on their behavior (Ivory and Hearst, 2001). Kaasgaard et al. (1999) note that it is hard to predict how these additional functionalities affect the main usage of the system.
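As a rough illustration of how lightweight such a tool could be, here is a minimal Python sketch of an in-application incident reporter. The endpoint URL and the choice of state fields are hypothetical; a real project would decide what state is appropriate (and safe) to collect.

```python
# Minimal sketch of an in-application usability incident reporter: one
# action captures the user's comment plus objective program state and
# posts both to the project. Endpoint and state fields are hypothetical.
import json
import platform
import time
import urllib.request

def report_incident(comment, app_state):
    payload = {
        "comment": comment,              # subjective: the user's own words
        "state": app_state,              # objective: what the app knows
        "platform": platform.platform(),
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        "https://example.org/usability/report",   # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)    # no registration required
    except OSError as err:
        print("Report not sent:", err)            # a real app would queue it

report_incident("I couldn't find the print option",
                {"open_dialogs": ["preferences"], "last_menu": "Edit"})
```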

Another method to involve users is to create packaged remote usability tests that can be performed by anyone at any time. The results are collated on the user's computer and sent back to the developers. Tullis et al. (2002) and Winckler et al. (1999) both describe this approach for usability testing of Web sites; a separate browser window is used to guide a user through a sequence of tasks in the main window. Scholtz (1999) describes a similar method for Web sites within the main browser window — effectively as part of the application. Comparisons of laboratory-based and remote studies of Web sites indicate that users' task completion rates are similar and that the larger number of remote participants compensates for the lack of direct user observation (Tullis et al., 2002; Jacques and Savastano, 2001).
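A minimal sketch of such a packaged test, here for a desktop application rather than a Web site: the task descriptions are made up, success is self-reported rather than detected automatically, and results are collated in a local file that the participant can choose to send back.

```python
# Minimal sketch of a packaged remote usability test: scripted tasks,
# self-reported success, and timings collated into a local file that the
# participant can send back to the developers. Task texts are made up.
import json
import time

TASKS = ["Open a document", "Change its title", "Export it as PDF"]

def run_packaged_test(outfile="usability_results.json"):
    results = []
    for task in TASKS:
        start = time.monotonic()
        input(f"Task: {task} -- press Enter when finished...")
        elapsed = time.monotonic() - start
        ok = input("Did you succeed? [y/n] ").strip().lower() == "y"
        results.append({"task": task, "seconds": round(elapsed, 1),
                        "success": ok})
    with open(outfile, "w") as f:
        json.dump(results, f, indent=2)
    print(f"Results saved to {outfile}; please send this file back.")

run_packaged_test()
```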

Both of these approaches allow users to contribute to usability activities without learning technical vocabulary. They also map well onto the OSS approach: they allow participation according to the contributor's expertise and leverage the strengths of a community in a distributed networked environment. Although these techniques lose the control of laboratory-based usability studies they gain authenticity in that they are firmly grounded in the user's environment (Thomas and Kellogg, 1989; Jacques and Savastano, 2001; Thomas and Macredie, 2002).

To further promote user involvement, it should be possible to easily track the consequences of a report or test result that a user has contributed. The public nature of Bugzilla bug discussions achieves this for developers, but a simpler version would be needed for average users so that they are not overwhelmed by low-level detail. Shneiderman (2002) suggests that users might be financially rewarded for such contributions, for example with discounts on future software purchases. However, in an open-source context, the user could expect information such as: "Your four recent reports have contributed to the fixing of bug X, which is reflected in the new version 1.2.1 of the software."

Creating a usability discussion infrastructure

For functional bugs, a tool such as Bugzilla works well in supporting developers but presents complex interfaces to other potential contributors. If we wish such tools to be used by HCI people then they may need an alternative lightweight interface that abstracts away from some low-level details. In particular, systems that are built on top of code management systems can easily become overly focused on textual elements.

As user reports and usability test results are received they need to be structured, analyzed, discussed, and acted on. Much usability discussion is graphical and might be better supported through sketching and annotation functionality; it is noticeable that some Mozilla bug discussions include textual representations ('ASCII art') of proposed interface elements. Hartson and Castillo (1998) review various graphical approaches to bug reporting including video and screenshots which can supplement the predominant text-based methods. For example, an application could optionally include a screenshot with a bug report; the resulting image could then be annotated as part of an online discussion. Although these may seem like minor changes, a key lesson of usability research is that details matter and that a small amount of extra effort is enough to deter users from participating (Nielsen, 1993).

Fragmenting usability analysis and design

We can envisage various new kinds of lightweight usability participation that can be contrasted with the more substantial experimental and analysis contributions outlined above for academic or commercial involvement. An end user can volunteer a description of their own, perhaps idiosyncratic, experiences with the software. A person with some experience of usability can submit their analysis. Furthermore, such a contributor could run a user study with a sample size of one, and then report it. It is often surprising how much usability information can be extracted from a small number of studies (Nielsen, 1993).

In the same way that OSS development work succeeds by fragmenting the development task into manageable sub-units, usability testing can be fragmented by involving many people worldwide, each doing a single user study, and then combining the overall results for analysis. Coordinating many parallel small studies would require tailored software support, but it opens up a new way of undertaking usability work that maps well onto the distributed nature of OSS development. Work on remote usability (Hartson et al., 1996; Scholtz, 2001) strongly suggests that the necessary distribution of work is feasible; further work is needed in coordinating and interpreting the results.
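The combining step is itself small and mechanical, which is part of the appeal. A minimal sketch, assuming each remote participant produces a results file in the JSON format of the earlier packaged-test sketch, merges them into per-task success rates and times:

```python
# Minimal sketch of merging many single-user study reports (in the JSON
# format of the previous sketch) into per-task success rates and times.
import json
import statistics
from collections import defaultdict
from pathlib import Path

def aggregate(report_dir="reports"):
    per_task = defaultdict(list)
    reports = Path(report_dir)
    if not reports.is_dir():
        print("No reports directory found.")
        return
    for path in reports.glob("*.json"):        # one file per participant
        for result in json.loads(path.read_text()):
            per_task[result["task"]].append(result)
    for task, runs in per_task.items():
        rate = sum(r["success"] for r in runs) / len(runs)
        median_s = statistics.median(r["seconds"] for r in runs)
        print(f"{task}: {len(runs)} runs, {rate:.0%} success, "
              f"median {median_s:.1f}s")

aggregate()
```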

Involving the experts

A key point for involving HCI experts will be a consideration of the incentives of participation. We have noted the issues of a critical mass of peers, and a legitimization of the importance of usability issues within the OSS community so that design trade-offs can be productively discussed. One relatively minor issue is the lowering of the costs of participation caused by problems with articulating usability in a predominantly textual medium, and various solutions have been proposed. We speculate that for some usability experts, their participation in an OSS project will be problematic in cases where their proposed designs and improvements clash with the work of traditional functionality-centric development. How can this be resolved? Clear articulation of the underlying usability analysis, a kind of design rationale, may help. In the absence of such explanations, the danger is that a lone usability expert will be marginalized.

Another kind of role for a usability expert is as the advocate of the end user. This can involve analyzing end-user contributions and creating a condensed version, perhaps filtered by the expert's theoretical understanding, to address concerns of developers that the reports are biased or unrepresentative. The expert then engages in the design debate on behalf of the end users, attempting to rectify the problem of traditional OSS development scratching only the personal itches of the developers, not those of the intended users. As with creating incentives to promote the involvement of end users, the consequences of usability experts' interactions for the evolving design should be recorded and easily traceable.

Education and evangelism

In the same way that commercial software development organizations had to learn that usability was an important issue that they should consider, and which could have a significant impact on the sales of their product, so open-source projects will need to be convinced that it is worth the effort of learning about and putting into practice good usability development techniques. The incentive of greater sales will not usually be relevant, and so other approaches to making the usability case will need to be made. Nickell (2001) suggests that developers prefer that their programs are used and that "most hackers find gaining a userbase to be a motivating factor in developing applications."

Creating a technological infrastructure to make it easier for usability experts and end users to participate will be insufficient without an equivalent social infrastructure. These new entrants to OSS projects will need to feel welcomed and valued, even if they lack the technical skills of traditional hackers. References to 'clueless newbies' and 'lusers', and some of the more vituperative online arguments, will need to be curtailed and, if not eliminated, at least moved to designated technology-specific areas. Beyond merely tolerating a greater diversification of the development team, it would be interesting to explore the consequences of certain OSS projects actively soliciting help from these groups. As with various multidisciplinary endeavors, including integrating psychologists into commercial interface design and ethnographers into computer-supported cooperative work projects, care needs to be taken in enabling the participants to talk productively to each other (Crabtree et al., 2000).

Discussion and future work

We do not want to imply that OSS development has completely ignored the importance of good usability. Recent activities (Frishberg et al., 2002; Nickell, 2001; Smith et al., 2001) suggest that the open-source community is increasing its awareness of usability issues. This paper has identified certain barriers to usability and explored how these are being, and can be, addressed. Several of the approaches outlined above directly mirror the problems identified earlier: automated evaluation where there is a shortage of human expertise, and various kinds of end user and usability expert participation to re-balance the development community in favor of average users. If traditional OSS development is about scratching a personal itch, usability is about being aware of, and concerned about, the itches of others.

A deeper investigation of the issues outlined in this paper could take various forms. One of the great advantages of OSS development is that its process is to a large extent visible and recorded. A study of the archives of projects (particularly those with a strong interface design component such as GNOME, KDE, and Mozilla) will enable verification of the claims and hypotheses ventured here, as well as the uncovering of a richer understanding of the nature of current usability discussions and development work. Example questions include: 'How do HCI experts successfully participate in OSS projects?', 'Do certain types of usability issues figure disproportionately in discussions and development efforts?' and 'What distinguishes OSS projects that are especially innovative in their functionality and interface designs?'

The approaches outlined in the previous section need further investigation and indeed experimentation to see if they can be feasibly used in OSS projects, without disrupting the factors that make traditional functionality-centric OSS development so effective. These approaches are not necessarily restricted to OSS; several can be applied to proprietary software. Indeed the ideas derived from discount usability engineering and participatory design originated in developing better proprietary software. However, they may be even more appropriate for open-source development in that they map well onto the strengths of a volunteer developer community with open discussion.

Most HCI research has concentrated on pre-release activities that inform design and relatively little on post-release techniques (Hartson and Castillo, 1998; Smilowitz et al., 1994). It is noteworthy that participatory design is a field in its own right whereas participative usage is usually quickly passed over by HCI textbooks. Thus OSS development in this case need not merely play catch-up with the greater end-user focus of the commercial world but potentially can innovate in exploring how to involve end users in subsequent redesign. There have been several calls in the literature (Shneiderman 2002; Lieberman and Fry, 2001; Fischer, 1998) for users to become actively involved in software development beyond standard user-centered design activities (such as usability testing, prototype evaluation, and fieldwork observation). It is noticeable that these comments seem to ignore that this involvement is already happening in OSS projects.

Raymond (1998) comments that "debugging is parallelizable"; we can add that usability reporting, analysis, and testing are also parallelizable. However, certain aspects of usability design do not appear to be so easily parallelizable. We believe that these issues should be the focus of future research and development: understanding how they have operated in successful projects, and designing and testing technological and organizational mechanisms to enhance future parallelization. In particular, future work should seek to examine whether the issues identified in this paper are historical (i.e. they flow from the particular ancestry of open-source development) or are necessarily connected to the OSS development model.

Improvements in the usability of open-source software do not necessarily mean that such software will displace proprietary software from the desktop; there are many other factors involved, e.g. inertia, support, legislation, legacy systems, etc. However, improved usability is a necessary condition for such a spread. We believe this paper is the first detailed discussion of these issues in the literature.

Lieberman and Fry (2001) foresee that "interacting with buggy software will be a cooperative problem-solving activity of the end user, the system, and the developer." For some open-source developers this is already true; expanding this situation to (potentially) include all of the end users of the system would mark a significant change in software development practices.

Many techniques from HCI can be easily and cheaply adopted by open-source developers. Additionally, several approaches seem to provide a particularly good fit with a distributed networked community of users and developers. If open source projects can provide a simple framework for users to contribute non-technical information about software to the developers then they can leverage and promote the participatory ethos amongst their users.

Raymond (1998) proposed that "given enough eyeballs all bugs are shallow." For seeing usability bugs, the traditional open-source community may comprise the wrong kind of eyeballs. However, encouraging greater involvement of usability experts and end users may support a parallel claim: given enough user experience reports, all usability issues are shallow. By further engaging typical users in the development process, OSS projects can create a networked development community that can do for usability what it has already done for functionality and reliability.


Augmented Reality




Augmented reality has its origins as early as the 1950s and has progressed with virtual reality since then, but its most significant advances have been since the mid-1990s.

The technology has been around for many years, used in CAD programs for aircraft assembly and architecture, and in simulation, navigation, military, and medical applications. Complex tasks such as assembly and maintenance can be simplified to assist in training, and product prototypes can be mocked up without manufacturing.

Augmented reality has proven very useful on a day-to-day basis when tied to location-based technology. Several apps will show consumers their nearest food outlets or subway stations when they hold up their phone and view their surroundings through the camera.

Its use in marketing is particularly appealing, as not only can additional, detailed content be put within a traditional 2D advert, but the results are interactive, cool, engaging and - due to the initial novelty - have high viral potential. Consumers react positively to fun, clever marketing, and brands become memorable.

The potential audience varies depending on the application of AR. Through smartphones, it is limited to an audience with suitable handsets who are willing to download an app. Printing a marker for use with a webcam is limited to those willing to follow these steps, though it often opens up a wide demographic, including children (printing an AR code on a cereal box to play a game, for instance).

What is certain is that the smartphone population is rising, and with it the available processing power. More and more consumers are carrying phones capable of displaying augmented reality, and once an app is downloaded and they have scanned their first code, they are far more receptive to future appearances of a code - driven by curiosity. As long as the resulting augmented content remains engaging and innovative, consumers will adopt augmented reality as a new and fun twist on conventional marketing and services.

What is AR?

Superimposing digitally rendered images onto our real-world surroundings gives the sense of an illusion or of virtual reality. Recent developments have made this technology accessible using a smartphone.

Augmented reality (AR) is the integration of digital information with live video or the user's environment in real time. AR takes an existing picture and blends new information into it. One of the first commercial applications of AR technology was the yellow first-down line in televised football games.

The key to augmented reality is the software. Augmented reality applications are written with special 3D authoring tools such as D'Fusion, Unifeye Viewer, or FLARToolKit. These tools allow the developer to tie animation or contextual digital information in the computer program to an augmented reality "marker" in the real world.

The end user must download a software application (app) or browser plug-in to experience augmented reality. Most AR applications are built in Flash or Shockwave and require a webcam program to deliver the information in the marker to the computer. The marker, which is sometimes called a target, might be a barcode or a simple series of geometric shapes. When the computer's AR app or browser plug-in receives the digital information contained in the marker, it begins to execute the code for the augmented reality program. 

AR applications for smartphones use the phone's global positioning system (GPS) to pinpoint the user's location and its compass to detect device orientation. Sophisticated AR programs used by the military for training may include machine vision, object recognition, and gesture recognition technologies.

Some of the many actual or potential uses of augmented reality:

  • The changing maps behind weather reporters.
  • A navigational display embedded in the windshield of a car.
  • Visual displays and audio guidance for complex tasks.
  • Images of historical recreations integrated with the current environment.
  • A display in a pilot's helmet that allows the pilot to, in effect, see through the aircraft.
  • Mobile marketing, with product information displayed over the product or its location.
  • Video games with digital elements blended into the user's environment.
  • Virtually trying on clothes through a webcam while online shopping.
  • Displaying information about a tourist attraction by pointing a phone at it.

Boeing researcher Thomas Caudell coined the term augmented reality in 1990, in reference to a head-mounted display that Boeing used to guide workers as they put together electrical wiring harnesses for aircraft equipment.

How is it used?

Augmented reality is hidden content, most commonly hidden behind marker images, that can be included in printed and film media, provided the marker is displayed for a suitable length of time, and in a steady position, for an application to identify and analyze it. Depending on the content, the marker may have to remain visible.

More recently it has been used by advertisers, among whom it is popular to create a 3D render of a product, such as a car or a football boot, and trigger it as an overlay on a marker. This allows the consumer to see a 360-degree view of the product (more or less; the base of the item can sometimes be tricky to view). Depending on the quality of the augmentation, this can go as far as indicating the approximate size of the item and allowing the consumer to 'wear' the item, as viewed through their phone.

Alternative setups include printing out a marker and holding it before a webcam attached to a computer. The image of the marker and the background as seen by the webcam is shown on screen, enabling the consumer to place the marker on places such as the forehead (to create a mask) or move the marker to control a character in a game.

In some cases, a marker is not required at all to display augmented reality.

How does it work?

Using a mobile application, the phone's camera identifies and interprets a marker, often a black-and-white barcode image. The software analyses the marker and creates a virtual image overlay on the phone's screen, tied to the position of the camera. To do this, the app works with the camera to interpret the angles and the distance between the phone and the marker.
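As a rough modern illustration of the detection step (an assumption on our part: the apps described here predate it and used other toolkits), OpenCV's ArUco module can find a square marker in each camera frame. The detected corner positions are what let the software estimate the camera's angle and distance before drawing the overlay. The sketch assumes opencv-python 4.7 or later, with a webcam standing in for the phone camera.

```python
# Minimal sketch: detect an ArUco marker in live video and highlight it.
# A real AR app would use the corner geometry to estimate pose and render
# a 3D model on top of the marker; here we only draw the detection.
import cv2

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

cap = cv2.VideoCapture(0)          # webcam stands in for the phone camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        # The four corner points per marker encode its angle and apparent
        # size, i.e. the camera's orientation and distance.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("AR marker demo", frame)
    if cv2.waitKey(1) == 27:       # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```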

Due to the number of calculations a phone must do to render the image or model over the marker, often only smartphones are capable of supporting augmented reality with any success. Phones need a camera and, if the data for the AR is not stored within the app, a good 3G Internet connection.


© Sanjay K Mohindroo 2024