
Utility computing and the drive towards a more sustainable future

I don’t think there can be any doubt that mankind is damaging the environment in numerous ways and consuming resources at a rate that is simply not sustainable. Whatever your views on global warming, there’s no denying that the looming energy crisis and dwindling supplies of natural resources will affect us all in the years to come – if only in terms of rising operating costs and the associated hit to your bottom line.

What might come as a surprise is the key role that cloud computing – and the move towards computing resources being provided as a utility service (utility computing) – is going to play in enabling a more sustainable future across the technology spectrum. This may be particularly surprising if you’ve read one of the recent flurry of articles expressing concern at the growing size and power consumption of the internet’s data centres (with Google often cited as an example, tied to some guesstimated environmental cost of each Google search).

In this article I’m going to explain how the existing consumer culture of cheap disposable products is set to change, and how cloud computing is an important part of the technology that will allow us to reduce unnecessary wastage of natural resources.

Let’s start with a description of the problem. Since the industrial revolution and the advent of mass production in the early 1900s, there has been a steady drive towards increasing production volumes, increasing product homogeneity and decreasing manufacturing costs – and ultimately, as a result of those reduced manufacturing costs, a deterioration in product quality and expected lifetime. This is particularly true in the technology market, where Moore’s Law (the exponential growth in the number of transistors on a silicon chip) has made a computer more than a couple of years old out of date, and a computer more than 10 years old essentially landfill.

It goes deeper than this though – the drive to reduce prices in a market where the cheapest product is often the most popular (and therefore the most profitable) has diminished the perceived value of the resources and materials that go into making that product. As mass production and industrialisation made it cheaper to extract and refine these resources from our planet, so we lost sight of their true value as finite and precious gifts of nature.

It’s important to point out that we’re not just talking about the petroleum fuels refined from oil – we’re talking about the many products that come from rare and finite natural resources. This includes plastics and various synthetic materials, but importantly for the technology sector it also includes metals – particularly Gallium, Germanium, Indium and Cadmium – all of which are vital for the production of the semiconductors that go into pretty much every electronic device you own. Worryingly, these metals are in similarly short supply to oil – in fact we may even run out of Gallium and Indium within 20 years, before we run out of oil.

Of course, this problem is a small subset of a much larger global issue – simply put, we are draining resources from our planet many orders of magnitude faster than natural processes will replenish them. What’s more worrying is that we’re not even getting anything of lasting value out of these resources – they’re simply extracted from the ground and processed into products that last a person a few years (or perhaps only months), after which the product is discarded and these precious raw materials are returned to the environment in a form from which they are much more difficult to recover – either into landfill sites or, in the case of incineration or the combustion of fossil fuels, into the atmosphere. It raises the question: when scarcity drives up the cost of these resources, will future generations turn to mining our landfill sites to recover the discarded precious metals in hundreds of thousands of throw-away netbooks?

Let’s be clear about this – supplies of these precious natural resources are already beginning to dwindle, and as they do, the cost of extraction increases; as demand begins to outstrip supply, market forces will inevitably drive up prices. This is a simple fact of economics. The day will soon be upon us when it is simply not economically viable to produce disposable consumer electronics designed to be used for a couple of years and then thrown away – this is likely to happen within our lifetimes, and it will happen whether or not there is a conscious and concerted effort to save the environment.

Simple economics, not altruism, will force a sea change upon consumer culture – the emphasis will shift from the lowest possible unit price to the longest possible product lifetime. We’re going to see a shift back to an older way of thinking: when we buy a product we’re going to expect it to last a lifetime, maybe more – we’re going to want to hand things down to our kids, and we’re going to have to be prepared to pay a premium for that, but that premium will pale into insignificance compared with the cost of the natural resources required to produce the product. Products will be built to last, because that is the only sustainable way to continue living with the conveniences of modern society that we’ve grown so accustomed to.

So how does this fit into the technology market, where the pace of development has historically moved so rapidly that the maximum feasible lifetime of a technology product like a computer is measured in years, not decades? The rapid obsolescence of IT hardware has mainly been driven by Moore’s law, which states that the number of transistors on a computer chip will double every two years – what this really means is that a new computer will do roughly twice as many operations in the same space of time as a two-year-old one, or in other words, every two years computers double in speed.
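To put some rough numbers on that compounding effect, here’s a small back-of-the-envelope sketch (the two-year doubling period is the usual rule-of-thumb figure, not a measured one, and treating transistor count as a direct proxy for speed is a simplification):

```python
# Rough sketch of the obsolescence curve implied by Moore's law:
# performance (approximated by transistor count) doubling every two years.
# The doubling period and the speed/transistor equivalence are assumptions
# for illustration only.

DOUBLING_PERIOD_YEARS = 2

def relative_speed(age_years: float) -> float:
    """Approximate speed of an age_years-old machine relative to a new one."""
    return 0.5 ** (age_years / DOUBLING_PERIOD_YEARS)

for age in (2, 4, 6, 10):
    print(f"A {age}-year-old computer runs at roughly "
          f"{relative_speed(age):.1%} of the speed of a new one")
```

On this crude model a ten-year-old machine comes out at around 3% of current performance, which is why a decade-old computer ends up treated as essentially landfill.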

With such rapidly expanding computer capabilities, software developers have been quick to take advantage of the extra computing resource available to them, and the pace of software development has closely mirrored the pace of hardware development. The problem with this is that old computers become obsolete because they’re simply too slow to run the latest software – and everyone wants to run the latest software, both to take advantage of new features and to maintain compatibility with everyone else who is constantly upgrading.

This creates a market in which computers are not expected to physically last longer than a few years, because there is simply no value in a computer once it’s no longer able to run the latest generation of software. Computers have effectively become a disposable product: you buy one and use it for a couple of years until it either stops working because it has been cheaply manufactured, or stops being useful because it can no longer run all of the software you require – often the two will neatly coincide. At that point you throw the device away and buy a new one; they’re cheap enough for most people to be able to afford to do that. The current ‘netbook’ fad is the latest in a long line of consumer devices with lower and lower manufacturing costs and shorter and shorter expected lifetimes.

It’s clear this approach isn’t sustainable – we’re using precious natural resources at an alarming rate, resources that took many hundreds of thousands of years to form, for the sake of products that are gone in the blink of an eye on geological timescales. So what are the changes afoot that will signal an end to this wasteful culture of cheap disposable IT products?

Well, over the last 5-10 years an important change in the nature of software has occurred. That change is of course the internet, or more specifically the web. In its early days the web was merely a method for posting and viewing information, but what we’re seeing now is that an increasing amount of software is being written to run directly in the web browser. Web pages aren’t really just web pages anymore; they’re web applications – and we’re not far away from a time when all software will be written to run inside a web browser in this way.

Of course you can hardly have failed to notice the growth of web applications, but you might have missed a more subtle implication of this shift in software development practice – an increasing amount of the computing requirement is being moved away from the client PC and onto the servers. We’re actually seeing a return to an earlier approach to computing – the ‘thin client’ approach – in which a central server handles the vast majority of the workload, while the client machines themselves are kept relatively minimal and defer to the server whenever they need any work done or data stored.
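For the curious, here’s a minimal sketch of that pattern. The endpoint URL, payload shape and the idea of a “render-report” job are all hypothetical; the point is simply that the client ships the request off and displays the answer rather than doing the work itself:

```python
# Minimal sketch of the thin-client pattern: the client does no heavy
# lifting itself, it sends the job to a central server and shows the result.
# The URL and payload shape below are hypothetical, for illustration only.

import json
import urllib.request

SERVER_URL = "https://example.com/api/render-report"  # hypothetical endpoint

def run_on_server(document_id: str) -> dict:
    """Ask the server to do the work and return only the finished result."""
    payload = json.dumps({"document": document_id}).encode("utf-8")
    request = urllib.request.Request(
        SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# The client machine only needs enough power to send the request and draw
# the response; all the computation and storage lives on the server side.
result = run_on_server("quarterly-report")
print(result)
```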

The dark secret the chip manufacturers would rather you didn’t know is this: You do not need a fast computer to make a perfectly usable web terminal. Hardware that’s a few years old will still run a modern web browser. This approach effectively extends the useful lifetime of a client device without breaking Moore’s law.

The thin client approach has a number of very attractive qualities, and it’s something that has been tried with a limited amount of success throughout the history of computing. However, we’re now at the stage where some important enabling technologies have become available which make it a much more compelling scenario for the average home user. The web is an important one, but there are lots of other forces conspiring to push us in this direction as well.
