
Introduction to Utility Computing: How It Can Improve TCO

Article Description

Regrettably, the IT world is confused as to the purpose, structure, and goals of utility computing. Richard Murch clarifies those issues.

From the author of

Autonomic Computing


What Is Utility Computing?

Utility computing is one of a number of developing technologies, services, and products emerging in the IT world. Along with other approaches such as autonomic computing, grid computing, and on-demand or adaptive-enterprise computing, utility computing gives IT management a new way to handle future workloads and applications.

Utility computing amounts to buying only the amount of computing you need, much like plugging into the electrical grid. Traditionally, every layer of a computing environment has been static or fixed, manually set up to support a single computing solution. All components are treated as products, installed and configured for specific computers. For example, hardware is assigned for specific uses such as web server or database; the OS is tied to the hardware (one box runs Windows, another a UNIX OS); and networks provide access to only specific locations. On top of all this are the applications, which are installed to run inside this hard-coded, static environment.

In a utility computing environment, on the other hand, hardware and software are no longer bound to each other. Each layer is virtualized—designed so that it doesn't need to be configured for specific systems—and assigned, in real time, to whatever task most needs the resource.

Let's define utility computing this way: Utility computing consists of a virtualized pool of IT resources that can be dynamically provisioned and continually reallocated to address the organization's changing business and service needs. These resources can be located anywhere and managed by anyone, and their usage can be tracked and billed down to the level of an individual user or group.
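The definition above has three moving parts: a shared pool, dynamic provisioning, and per-user metering. The following Python sketch is purely illustrative—utility computing is an architectural model, not a specific API, and every class and method name here is hypothetical—but it shows how those three parts fit together.

```python
# Illustrative sketch only. Utility computing is an architectural model,
# not a concrete API; all names below (UtilityPool, provision, invoice,
# the rates and capacities) are hypothetical.
from collections import defaultdict

class UtilityPool:
    """A virtualized pool of compute units: dynamically provisioned,
    continually reallocated, and metered per user."""

    def __init__(self, capacity_units):
        self.free = capacity_units          # unallocated compute units
        self.allocations = {}               # user -> units currently held
        self.usage = defaultdict(int)       # user -> unit-hours consumed

    def provision(self, user, units):
        """Dynamically assign units from the shared pool to a user."""
        if units > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= units
        self.allocations[user] = self.allocations.get(user, 0) + units

    def release(self, user, units):
        """Return units to the pool so they can be reallocated elsewhere."""
        self.allocations[user] -= units
        self.free += units

    def tick(self, hours=1):
        """Meter usage: record unit-hours for each current allocation."""
        for user, units in self.allocations.items():
            self.usage[user] += units * hours

    def invoice(self, user, rate_per_unit_hour):
        """Usage tracked and billed down to an individual user."""
        return self.usage[user] * rate_per_unit_hour

pool = UtilityPool(capacity_units=100)
pool.provision("web-team", 30)
pool.tick(hours=2)            # web-team holds 30 units for 2 hours
pool.release("web-team", 30)  # capacity flows back into the shared pool
print(pool.invoice("web-team", rate_per_unit_hour=0.05))  # 30 * 2 * 0.05 = 3.0
```

The key point the sketch captures is that no resource is permanently tied to any consumer: capacity returns to the pool the moment it is released, and billing follows actual usage rather than ownership.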

Utility computing has suddenly become one of the hot topics in the IT analyst community and increasingly in larger enterprises that are looking for ways to reduce the fixed costs and complexity of IT. Gartner and Dataquest believe that the advent of utility as a business model will "fundamentally challenge the established role of channels for suppliers of all types" (Gartner, "IT Utility Standards Efforts Take Shape," 10/22/03).

There are three major reasons why utility computing will become significant in IT:

  • It promises to address pressing business needs: making the business more agile, adaptive, and flexible and, more importantly, able to treat IT as an increasingly variable cost. The overall aim of utility computing is to reduce IT costs.

  • It can be supplied in small, incremental bites that deliver a fast, demonstrable, significant return on investment, so companies don't have to wait for a full implementation to achieve payoffs; time to market is much shorter.

  • It provides total flexibility in implementation, from in-house and self-managed to fully outsourced, with everything in between—including a hybrid deployment model in which in-house capacity is supplemented by third-party resources to handle peak needs.

Our consumer utilities such as gas, water, and electricity all arrive on demand, independent of the uses to which they are put. This makes for a relatively simple billing structure—a consistent infrastructure (pipe, wire) whose capital and maintenance costs are embedded in the usage rate. The exchange is simple: product in via the infrastructure, invoice and payment on separate channels. Computing can be bought the same way. This is the basic premise of utility computing, which promises processing power when you need it, where you need it, billed for only what you use.
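The TCO argument behind this premise is easy to quantify. The back-of-the-envelope calculation below uses entirely hypothetical numbers (capacities, hours, and rates are assumptions, not figures from this article), but it shows why metered pricing can win even when the per-unit rate is higher: fixed infrastructure must be sized for peak demand, while utility pricing follows average demand.

```python
# Hypothetical numbers for illustration only -- none of these figures
# come from the article; they merely demonstrate the arithmetic.
peak_units = 100                    # capacity needed at the busiest moment
avg_units = 25                      # average actual demand
hours = 720                         # one month of operation
fixed_rate = 0.04                   # per unit-hour, owned hardware (amortized)
utility_rate = 0.06                 # per unit-hour, metered (higher per unit)

# Fixed infrastructure: you pay for peak capacity around the clock.
fixed_total = peak_units * hours * fixed_rate

# Utility model: you pay only for the capacity actually consumed.
utility_total = avg_units * hours * utility_rate

print(fixed_total, utility_total)   # 2880.0 1080.0
```

Under these assumed numbers the metered model costs less than half as much, despite a 50% higher unit rate, because the peak-to-average ratio (4:1 here) is what the fixed model forces you to pay for.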

