Towards a New World of Client Computing
Although I have known Doug for quite some time, this is my first column for DABCC. In this article I would like to discuss the high-level challenges in client computing and what you can expect from me in future articles.
For me, the underlying drive in client computing is towards far greater standardization and automation. This is not an uncommon move in IT – you could say a lot of the changes we make are firmly rooted in standardization and automation: first standardize, and then, having standardized, you are in a position to automate. Between the two we expect better service delivery and lower costs. But to illustrate my thoughts here, I am going to use an example outside of IT – the introduction of the Ford Model T in 1908. This is an interesting example because it neatly covers the issues, and because it gives me a chance to strip away some of the layers of misinformation that surround the Model T.
When the Model T was introduced in 1908 it cost $850, around a third of the price of typical cars of the time. It is tempting to put this saving down to the assembly line, and hence automation, but that would be wrong – the assembly line was not introduced for another couple of years. What allowed Ford to slash the price of the Model T was standardization. Ford standardized in two very distinct ways.
First, Ford standardized the product, so there were far fewer options for the customer and hence fewer complications in the build process. This spanned everything from the number of different chassis layouts to not even providing an option for a driver’s door. Incidentally, the ‘any color as long as it is black’ story is only partly true: early and late Model Ts could be bought in a wide range of colors. Over time the number of options increased as customers became more sophisticated, and in many ways this is the stage we have now reached with users. In earlier times, and with more constrained users, it was fine to deliver a restricted client computing experience, and terminal services excels at this – efficiently delivering an identical and limited set of applications to a large number of users at low cost. The reality of the more general user population is that they now expect a lot more control over their computing environment.
The second way Ford standardized was in the components that made up the Model T. At the time, the normal way to build a car was for each of the components to be made individually, almost as one-offs. A group of craftsmen would take castings and sheet metal, fabricate each of the parts of the car, and assemble it. This was inefficient, and it also meant there was no possibility of a replacement part being a simple swap. What Ford did was standardize each of the components so they could be made efficiently and then simply bolted together. This drove a considerable proportion of the cost savings that allowed Ford to offer the car to the public at a third of the going market price. The Model T caused huge changes in both society and the auto industry: a car that ordinary people could afford, a spare-parts business providing bolt-on replacement parts, and a car accessory market that allowed people to personalize the standardized product. Over the next few years the introduction of the assembly line and the increasing scale of manufacturing drove the price down further, reaching $440 in 1915 – almost halved again from the introductory price. Of course, the savings automation brought would not have been possible if Ford had not first standardized the components that made up his cars.
Where client computing is today is close to where auto makers were pre-1908. We do have standardization in the form of gold builds that are installed on machines when they are delivered to users, but from that point on, the vast majority of machines rapidly become non-standard and hence difficult to manage and fix. We effectively use an economic model for deciding how deeply to investigate a problem – we tend to spend more time fixing problems where the fix will be relevant to larger numbers of users. It can be hard to justify time spent on fixing individual desktop problems because many will be down to a peculiarity of that machine and so not relevant to anyone else. This tends to produce an approach of ‘spend some time, then reimage’, which leaves users effectively maintaining their own machines with no real knowledge, at great expense to the business.
Increased standardization offers the possibility of delivering a better service, because machines stay in a known-good state, and of making the effort put into fixing problems more valuable, because solutions will apply to all users. As a result we will save money on supporting users, and users will spend less time maintaining their machines and losing productivity on machines that are on the edge of being unusable.
Server Based Computing (SBC) was a good first step towards standardization – it is still the cheapest and most reliable way of delivering a small common set of applications to a large number of users. The limitation of SBC is that it is difficult to deliver the larger number of applications in use across the broader business, and SBC’s extreme standardization is unacceptable to the majority of business users.
Effectively, client computing is moving to the state Ford reached later in the life of the Model T: many options, including a wide range of body styles, but all based on standardized components and hence cheap to build. Fundamentally the components are not all that different from things we have dealt with in the past – an operating system, some applications, and the user’s environment. What changes is the way they come together. Rather than the existing situation, where an operating system is installed and then applications are installed into it, all the components remain separate:
• An operating system isolated from the underlying hardware by a hypervisor
• Applications isolated from the operating system by application virtualization or application publishing
• The user’s environment isolated from both the operating system and the applications
By keeping the components separate and assembling them as required, IT can achieve a degree of standardization that would have been impossible before, while keeping users happy. Ultimately IT gets great flexibility in choosing how applications are delivered, provided that it can also manage and deliver the user’s personality. This enables you to deliver hosted virtual desktops now, and it will be central to delivering future client computing architectures. As we implement current and near-term technologies such as SBC and hosted virtual desktops, we are already on the road towards this world of componentized, standardized client computing. By recognizing that this is where we are going, we can make the right decisions now and use our resources more wisely.
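To make the component separation concrete, here is a minimal sketch in Python of the idea of assembling a desktop on demand from independently managed layers. All the names here (OSImage, VirtualApp, UserEnvironment, assemble) are hypothetical, chosen only to illustrate the architecture; no real product works exactly this way.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical model: each layer is versioned and managed on its own,
# and a desktop is only composed at delivery time.

@dataclass(frozen=True)
class OSImage:
    name: str
    version: str   # a standardized gold build, isolated from hardware by a hypervisor

@dataclass(frozen=True)
class VirtualApp:
    name: str      # delivered via application virtualization or publishing

@dataclass(frozen=True)
class UserEnvironment:
    user: str
    settings: Dict[str, str]  # the user's personality, kept apart from OS and apps

@dataclass
class Desktop:
    os: OSImage
    apps: List[VirtualApp]
    env: UserEnvironment

def assemble(os_image: OSImage, apps: List[VirtualApp],
             env: UserEnvironment) -> Desktop:
    """Compose a desktop from separate components rather than
    installing applications into the operating system."""
    return Desktop(os=os_image, apps=list(apps), env=env)

# The same gold build serves every user; only the app set and
# personality differ per user.
gold = OSImage("corporate-build", "2024.1")
desk = assemble(gold,
                [VirtualApp("office"), VirtualApp("erp-client")],
                UserEnvironment("alice", {"wallpaper": "blue"}))
```

The point of the sketch is that fixing a problem in `gold` fixes it for every assembled desktop at once, which is exactly the standardization payoff described above.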
In this column I will be highlighting events, ideas, and technologies that contribute to the rapidly changing world of client computing – and possibly debunking a few myths along the way.