
Wednesday, August 6, 2008

REST Anti-Patterns

When people start trying out REST, they usually start looking around for examples – and not only find a lot of examples that claim to be “RESTful”, or are labeled as a “REST API”, but also dig up a lot of discussions about why a specific service that claims to do REST actually fails to do so.

Why does this happen? HTTP is nothing new, but it has been applied in a wide variety of ways. Some of them were in line with the ideas the Web’s designers had in mind, but many were not. Applying REST principles to your HTTP applications, whether you build them for human consumption, for use by another program, or both, means that you do the exact opposite: You try to use the Web “correctly”, or if you object to the idea that one is “right” and one is “wrong”: in a RESTful way. For many, this is indeed a very new approach.

The usual standard disclaimer applies: REST, the Web, and HTTP are not the same thing; REST could be implemented with many different technologies, and HTTP is just one concrete architecture that happens to follow the REST architectural style. So I should actually be careful to distinguish “REST” from “RESTful HTTP”. I’m not, so let’s just assume the two are the same for the remainder of this article.

As with any new approach, it helps to be aware of some common patterns. In the first two articles of this series, I’ve tried to outline some basic ones – such as the concept of collection resources, the mapping of calculation results to resources in their own right, or the use of syndication to model events. A future article will expand on these and other patterns. For this one, though, I want to focus on anti-patterns – typical examples of attempted RESTful HTTP usage that create problems and show that someone has attempted, but failed, to adopt REST ideas.

Let’s start with a quick list of anti-patterns I’ve managed to come up with:

  1. Tunneling everything through GET
  2. Tunneling everything through POST
  3. Ignoring caching
  4. Ignoring status codes
  5. Misusing cookies
  6. Forgetting hypermedia
  7. Ignoring MIME types
  8. Breaking self-descriptiveness

Let’s go through each of them in detail.

Tunneling everything through GET

To many people, REST simply means using HTTP to expose some application functionality. The fundamental and most important operation (strictly speaking, “verb” or “method” would be a better term) is an HTTP GET. A GET should retrieve a representation of a resource identified by a URI, but many, if not all existing HTTP libraries and server programming APIs make it extremely easy to view the URI not as a resource identifier, but as a convenient means to encode parameters. This leads to URIs like the following:

http://example.com/some-api?method=deleteCustomer&id=1234

The characters that make up a URI do not, in fact, tell you anything about the “RESTfulness” of a given system, but in this particular case, we can guess the GET will not be “safe”: The caller will likely be held responsible for the outcome (the deletion of a customer), although the spec says that GET is the wrong method to use for such cases.

The only thing in favor of this approach is that it’s very easy to program, and trivial to test from a browser – after all, you just need to paste a URI into your address bar, tweak some “parameters”, and off you go. The main problems with this anti-pattern are:

  1. Resources are not identified by URIs; rather, URIs are used to encode operations and their parameters
  2. The HTTP method does not necessarily match the semantics
  3. Such links are usually not intended to be bookmarked
  4. There is a risk that “crawlers” (e.g. from search engines such as Google) cause unintended side effects

Note that APIs that follow this anti-pattern might actually end up being accidentally RESTful. Here is an example:

http://example.com/some-api?method=findCustomer&id=1234

Is this a URI that identifies an operation and its parameters, or does it identify a resource? You could argue both cases: This might be a perfectly valid, bookmarkable URI; doing a GET on it might be “safe”; it might respond with different formats according to the Accept header, and support sophisticated caching. In many cases, this will be unintentional. Often, APIs start this way, exposing a “read” interface, but when developers start adding “write” functionality, you find out that the illusion breaks (it’s unlikely an update to a customer would occur via a PUT to this URI – the developer would probably create a new one).
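
To make the contrast concrete, here is a minimal sketch (in Python, using the third-party requests library; the URIs are hypothetical) of the tunneled call next to a resource-oriented alternative – the operation moves out of the query string and into the HTTP method:

    import requests

    # Anti-pattern: the URI encodes an operation, and a "safe" GET deletes data
    requests.get("http://example.com/some-api?method=deleteCustomer&id=1234")

    # Resource-oriented alternative: the URI identifies the customer,
    # and the standard DELETE method carries the semantics
    requests.delete("http://example.com/customers/1234")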

Tunneling everything through POST

This anti-pattern is very similar to the first one, except that this time, the HTTP POST method is used. POST carries an entity body, not just a URI. A typical scenario uses a single URI to POST to, and varying messages to express differing intents. This is actually what SOAP 1.1 web services do when HTTP is used as a “transport protocol”: It’s the SOAP message, possibly including some WS-Addressing SOAP headers, that determines what happens.

One could argue that tunneling everything through POST shares all of the problems of the GET variant; it’s just a little harder to use, and it can exploit neither caching (not even accidentally) nor bookmarking. It doesn’t so much violate any REST principles as simply ignore them.

Ignoring caching

Even if you use the verbs as they are intended to be used, you can still easily ruin caching opportunities. The easiest way to do so is by simply including a header such as this one in your HTTP response:

Cache-Control: no-cache

Doing so will simply prevent caches from caching anything. Of course this may be what you intend to do, but more often than not it’s just a default setting that’s specified in your web framework. However, supporting efficient caching and re-validation is one of the key benefits of using RESTful HTTP. Sam Ruby suggests that a key question to ask when assessing something’s RESTfulness is “do you support ETags?” (ETags are a mechanism introduced in HTTP 1.1 that allows a client to validate whether a cached representation is still valid, typically by means of a checksum or hash of the content). The easiest way to generate correct headers is to delegate this task to a piece of infrastructure that “knows” how to do this correctly – for example, by generating a file in a directory served by a Web server such as Apache HTTPD.
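
To illustrate the server side, here is a minimal sketch using only Python’s standard library – the resource, port, and representation are made up. It derives an ETag from the representation and answers a matching If-None-Match with a 304 instead of re-sending the body:

    import hashlib
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BODY = b'{"id": 1234, "name": "Some Customer"}'

    class CustomerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Derive a validator from the representation itself; any stable
            # fingerprint will do, a hash is simply a common choice
            etag = '"%s"' % hashlib.sha1(BODY).hexdigest()
            if self.headers.get("If-None-Match") == etag:
                # The client's cached copy is still valid - no body needed
                self.send_response(304)
                self.send_header("ETag", etag)
                self.end_headers()
                return
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("ETag", etag)
            self.end_headers()
            self.wfile.write(BODY)

    HTTPServer(("localhost", 8000), CustomerHandler).serve_forever()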

Of course there’s a client side to this, too: when you implement a programmatic client for a RESTful service, you should actually exploit the caching capabilities that are available, and not unnecessarily retrieve a representation again. For example, the server might have sent the information that the representation is to be considered “fresh” for 600 seconds after a first retrieval (e.g. because a back-end system is polled only every 30 minutes). There is absolutely no point in repeatedly requesting the same information in a shorter period. Similarly to the server side of things, going with a proxy cache such as Squid on the client side might be a better option than building this logic yourself.
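
The client-side counterpart is the conditional GET: re-validate with If-None-Match instead of blindly re-fetching. A sketch, assuming the requests library and a server like the one above:

    import requests

    url = "http://localhost:8000/customers/1234"  # hypothetical resource
    first = requests.get(url)
    etag = first.headers.get("ETag")

    # Re-validate instead of transferring the full representation again
    second = requests.get(url, headers={"If-None-Match": etag})
    if second.status_code == 304:
        print("cached copy is still valid - reuse it")
    else:
        print("representation changed:", second.text)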

Caching in HTTP is powerful and complex; for a very good guide, turn to Mark Nottingham’s Cache Tutorial.

Ignoring status codes

Unknown to many Web developers, HTTP has a very rich set of application-level status codes for dealing with different scenarios. Most of us are familiar with 200 (“OK”), 404 (“Not found”), and 500 (“Internal server error”). But there are many more, and using them correctly means that clients and servers can communicate on a semantically richer level.

For example, a 201 (“Created”) response code signals that a new resource has been created, the URI of which can be found in a Location header in the response. A 409 (“Conflict”) informs the client that there is a conflict, e.g. when a PUT is used with data based on an older version of a resource. A 412 (“Precondition Failed”) says that the server couldn’t meet the client’s expectations.

Another aspect of using status codes correctly affects the client: The status codes in different classes (e.g. all in the 2xx range, all in the 5xx range) are supposed to be treated according to a common overall approach – e.g. a client should treat all 2xx codes as success indicators, even if it hasn’t been coded to handle the specific code that has been returned.
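
A client-side sketch of this class-first treatment, again with the requests library and a hypothetical endpoint:

    import requests

    response = requests.post("http://example.com/customers",
                             json={"name": "New Customer"})

    # Treat the code's class first, the specific code second
    if 200 <= response.status_code < 300:
        if response.status_code == 201:
            # 201 carries the new resource's URI in the Location header
            print("created at", response.headers.get("Location"))
        else:
            print("success:", response.status_code)
    elif 400 <= response.status_code < 500:
        print("client-side problem:", response.status_code)
    else:
        print("server-side problem:", response.status_code)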

Many applications that claim to be RESTful return only 200 or 500, or even 200 only (with a failure text contained in the response body – again, see SOAP). If you want, you can call this “tunneling errors through status code 200”, but whatever you consider to be the right term: if you don’t exploit the rich application semantics of HTTP’s status codes, you’re missing an opportunity for increased re-use, better interoperability, and looser coupling.

Misusing cookies

Using cookies to propagate a key to some server-side session state is another REST anti-pattern.

Cookies are a sure sign that something is not RESTful. Right? No; not necessarily. One of the key ideas of REST is statelessness – not in the sense that a server cannot store any data: it’s fine if there is resource state, or client state. It’s session state that is disallowed, for scalability, reliability, and coupling reasons. The most typical use of cookies is to store a key that links to some server-side data structure that is kept in memory. This means that the cookie, which the browser passes along with each request, is used to establish conversational, or session, state.

If a cookie is used to store some information, such as an authentication token, that the server can validate without reliance on session state, cookies are perfectly RESTful – with one caveat: They shouldn’t be used to encode information that can be transferred by other, more standardized means (e.g. in the URI, some standard header or – in rare cases – in the message body). For example, it’s preferable to use HTTP authentication from a RESTful HTTP point of view.
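
As a sketch of the alternative: the credential travels in a standard header with every request, so the server can validate it without any session state. HTTP Basic stands in here for whatever scheme you actually use, and the URI is hypothetical:

    import requests

    # The Authorization header is self-contained and re-sent on each request;
    # no server-side session needs to be established or looked up
    response = requests.get("http://example.com/customers/1234",
                            auth=("alice", "secret"))
    print(response.status_code)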

Forgetting hypermedia

The first REST idea that’s hard to accept is the standard set of methods. REST theory doesn’t specify which methods make up the standard set, it just says there should be a limited set that is applicable to all resources. HTTP fixes them at GET, PUT, POST and DELETE (primarily, at least), and casting all of your application semantics into just these four verbs takes some getting used to. But once you’ve done that, there is a risk of stopping there and using only a subset of what actually makes up REST – a sort of Web-based CRUD (Create, Read, Update, Delete) architecture. Applications that expose this anti-pattern are not really “unRESTful” (if there even is such a thing), they just fail to exploit another of REST’s core concepts: hypermedia as the engine of application state.

Hypermedia, the concept of linking things together, is what makes the Web a web – a connected set of resources, where applications move from one state to the next by following links. That might sound a little esoteric, but in fact there are some valid reasons for following this principle.

The first indicator of the “Forgetting hypermedia” anti-pattern is the absence of links in representations. There is often a recipe for constructing URIs on the client side, but the client never follows links because the server simply doesn’t send any. A slightly better variant uses a mixture of URI construction and link following, where links typically represent relations in the underlying data model. But ideally, a client should have to know a single URI only; everything else – individual URIs, as well as recipes for constructing them e.g. in case of queries – should be communicated via hypermedia, as links within resource representations. A good example is the Atom Publishing Protocol with its notion of service documents, which offer named elements for each collection within the domain that it describes. Finally, the possible state transitions the application can go through should be communicated dynamically, and the client should be able to follow them with as little before-hand knowledge of them as possible. A good example of this is HTML, which contains enough information for the browser to offer a fully dynamic interface to the user.
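
To illustrate, here is a sketch of a link-following client; the entry URI and the link structure inside the JSON representations are assumptions for the example, not a standard:

    import requests

    def follow(representation, rel):
        # Pick the link with the given relation out of the representation,
        # e.g. {"links": [{"rel": "customers", "href": "http://..."}]}
        for link in representation.get("links", []):
            if link["rel"] == rel:
                return requests.get(link["href"]).json()
        raise LookupError("no link with rel=%r" % rel)

    # The single URI the client needs to know; everything else is discovered
    service = requests.get("http://example.com/",
                           headers={"Accept": "application/json"}).json()
    customers = follow(service, "customers")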

I considered adding “human-readable URIs” as another anti-pattern. I did not, because I like readable and “hackable” URIs as much as anybody. But when someone starts with REST, they often waste endless hours in discussions about the “correct” URI design, and totally forget the hypermedia aspect. So my advice would be to limit the time you spend on finding the perfect URI design (after all, they’re just strings), and invest some of that energy into finding good places to provide links within your representations.

Ignoring MIME types

HTTP’s notion of content negotiation allows a client to retrieve different representations of resources based on its needs. For example, a resource might have a representation in different formats such as XML, JSON, or YAML, for consumption by consumers implemented in Java, JavaScript, and Ruby respectively. Or there might be a “machine-readable” format such as XML in addition to a PDF or JPEG version for humans. Or it might support both the v1.1 and the v1.2 versions of some custom representation format. In any case, while there may be good reasons for having one representation format only, it’s often an indication of another missed opportunity.
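
A quick client-side sketch of content negotiation, against a hypothetical resource: the same URI yields different representations depending on the Accept header.

    import requests

    url = "http://example.com/customers/1234"  # hypothetical resource

    # Ask for the representation format this client prefers
    as_json = requests.get(url, headers={"Accept": "application/json"})
    as_xml = requests.get(url, headers={"Accept": "application/xml"})

    print(as_json.headers.get("Content-Type"))  # e.g. application/json
    print(as_xml.headers.get("Content-Type"))   # e.g. application/xml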

It’s probably obvious that the more unforeseen clients are able to (re-)use a service, the better. For this reason, it’s much better to rely on existing, pre-defined, widely-known formats than to invent proprietary ones – an argument that leads to the last anti-pattern addressed in this article.

Breaking self-descriptiveness

This anti-pattern is so common that it’s visible in almost every REST application, even in those created by people who call themselves “RESTafarians” – myself included: breaking the constraint of self-descriptiveness (an ideal that has less to do with AI science fiction than one might think at first glance). Ideally, a message – an HTTP request or HTTP response, including headers and the body – should contain enough information for any generic client, server or intermediary to be able to process it. For example, when your browser retrieves some protected resource’s PDF representation, you can see how all of the existing agreements in terms of standards kick in: some HTTP authentication exchange takes place, there might be some caching and/or revalidation, the Content-Type header sent by the server (“application/pdf”) triggers the startup of the PDF viewer registered on your system, and finally you can read the PDF on your screen. Any other user in the world could use his or her own infrastructure to perform the same request. If the server developer adds another content type, any of the server’s clients (or service’s consumers) just need to make sure they have the appropriate viewer installed.

Every time you invent your own headers, formats, or protocols you break the self-descriptiveness constraint to a certain degree. If you want to take an extreme position, anything not standardized by an official standards body breaks this constraint, and can be considered a case of this anti-pattern. In practice, you should strive to follow standards as much as possible, and accept that some conventions might only apply in a smaller domain (e.g. your service and the clients specifically developed against it).

Summary

Ever since the “Gang of Four” published their book, which kick-started the patterns movement, many people have misunderstood it and tried to apply as many patterns as possible – a practice that has been ridiculed for just as long. Patterns should be applied if, and only if, they match the context. Similarly, one could religiously try to avoid all of the anti-patterns in any given domain. In many cases, there are good reasons for violating any rule – or, in REST terminology, for relaxing any particular constraint. It’s fine to do so, but it’s useful to be aware of the fact, and then make a more informed decision.

Hopefully, this article helps you to avoid some of the most common pitfalls when starting your first REST projects.

Many thanks to Javier Botana and Burkhard Neppert for feedback on a draft of this article.




An Introduction to Virtualization


The IT industry makes heavy use of buzzwords and ever-changing terms to define itself. Sometimes the latest nomenclature refers to a particular technology such as x86; sometimes it is a concept such as green computing. Terms rise and fall out of favor as the industry evolves. In recent years the term virtualization has become the industry’s newest buzzword. This raises the question: just what is virtualization? The first concept that comes to the mind of the average industry professional is running one or more guest operating systems on a host. Digging a little deeper, however, reveals that this definition is too narrow. A wide range of services, hardware, and software can be “virtualized”. This article will take a look at these different types of virtualization along with the pros and cons of each.

What is virtualization?

Before discussing the different categories of virtualization in detail, it is useful to define the term in the abstract sense. Wikipedia uses the following definition: “In computing, virtualization is a broad term that refers to the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users, be they applications, or end users. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple virtual resources; it can also include making multiple physical resources (such as storage devices or servers) appear as a single virtual resource...”

In layman’s terms virtualization is often:

  1. The creation of many virtual resources from one physical resource.
  2. The creation of one virtual resource from one or more physical resources.

The term is frequently used to convey one of these concepts in a variety of areas such as networking, storage, and hardware.

History

Virtualization is not a new concept. One of the early works in the field was a paper by Christopher Strachey entitled "Time Sharing in Large Fast Computers". IBM began exploring virtualization with its CP-40 and M44/44X research systems. These in turn led to the commercial CP-67/CMS. The virtual machine concept kept users separated while simulating a full stand-alone computer for each.

In the ’80s and early ’90s the industry moved from leveraging singular mainframes to running collections of smaller and cheaper x86 servers. As a result the concept of virtualization became less prominent. That changed in 1999 with VMware’s introduction of VMware Workstation. This was followed by VMware’s ESX Server, which runs on bare metal and does not require a host operating system.

Types of Virtualization

Today the term virtualization is widely applied to a number of concepts including:

  • Server Virtualization
  • Client / Desktop / Application Virtualization
  • Network Virtualization
  • Storage Virtualization
  • Service / Application Infrastructure Virtualization

In most of these cases, what is occurring is either the virtualization of one physical resource into many virtual resources, or the aggregation of many physical resources into one virtual resource.

Server Virtualization

Server virtualization is the most active segment of the virtualization industry, featuring established companies such as VMware, Microsoft, and Citrix. With server virtualization one physical machine is divided into many virtual servers. At the core of such virtualization is the concept of a hypervisor (virtual machine monitor). A hypervisor is a thin software layer that intercepts operating system calls to hardware. Hypervisors typically provide a virtualized CPU and memory for the guests running on top of them. The term was first used in conjunction with the IBM CP-370.

Hypervisors are classified as one of two types:

  • Type 1 hypervisors run directly on the host hardware (“bare metal”), as VMware’s ESX Server does.
  • Type 2 hypervisors run as a layer on top of a host operating system, as products like VMware Workstation do.

Related to type 1 hypervisors is the concept of paravirtualization. Paravirtualization is a technique in which a software interface that is similar but not identical to the underlying hardware is presented. Operating systems must be ported to run on top of a paravirtualized hypervisor. Modified operating systems use the "hypercalls" supported by the paravirtualized hypervisor to interface directly with the hardware. The popular Xen project makes use of this type of virtualization. Starting with version 3.0 however Xen is also able to make use of the hardware assisted virtualization technologies of Intel (VT-x) and AMD (AMD-V). These extensions allow Xen to run unmodified operating systems such as Microsoft Windows.

Server virtualization has a large number of benefits for the companies making use of the technology. Among those frequently listed:

  • Increased Hardware Utilization – This results in hardware savings, reduced administration overhead, and energy savings.
  • Security – Clean images can be used to restore compromised systems. Virtual machines can also provide sandboxing and isolation to limit attacks.
  • Development – Debugging and performance monitoring scenarios can easily be set up in a repeatable fashion. Developers also have easy access to operating systems they might not otherwise be able to install on their desktops.

Correspondingly there are a number of potential downsides that must be considered:

  • Security – There are now more entry points such as the hypervisor and virtual networking layer to monitor. A compromised image can also be propagated easily with virtualization technology.
  • Administration – While there are fewer physical machines to maintain, there may be more machines in aggregate. Such maintenance may require new skills and familiarity with software that administrators would not otherwise need.
  • Licensing/Cost Accounting – Many software-licensing schemes do not take virtualization into account. For example running 4 copies of Windows on one box may require 4 separate licenses.
  • Performance – Virtualization effectively partitions resources such as RAM and CPU on a physical machine. This, combined with hypervisor overhead, does not result in an environment that maximizes performance.

Application/Desktop Virtualization

Virtualization is not only a server domain technology. It is being put to a number of uses on the client side at both the desktop and application level. Such virtualization can be broken out into four categories:

  • Local Application Virtualization/Streaming
  • Hosted Application Virtualization
  • Hosted Desktop Virtualization
  • Local Desktop Virtualization

Wikipedia defines application virtualization as follows:

Application virtualization is an umbrella term that describes software technologies that improve manageability and compatibility of legacy applications by encapsulating applications from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it is. Application virtualization differs from operating system virtualization in that in the latter case, the whole operating system is virtualized rather than only specific applications.

With streamed and local application virtualization an application can be installed on demand as needed. If streaming is enabled then the portions of the application needed for startup are sent first, optimizing startup time. Locally virtualized applications also frequently make use of virtual registries and file systems to maintain separation from the user’s physical machine and keep it clean. Examples of local application virtualization solutions include Citrix Presentation Server and Microsoft SoftGrid. One could also include virtual appliances in this category, such as those frequently distributed via VMware Player.

Hosted application virtualization allows the user to access applications from their local computer that are physically running on a server somewhere else on the network. Technologies such as Microsoft’s RemoteApp allow the user experience to be relatively seamless, including the ability for the remote application to act as a file handler for local file types.

Benefits of application virtualization include:

  • Security – Virtual applications often run in user mode isolating them from OS level functions.
  • Management – Virtual applications can be managed and patched from a central location.
  • Legacy Support – Through virtualization technologies legacy applications can be run on modern operating systems they were not originally designed for.
  • Access – Virtual applications can be installed on demand from central locations that provide failover and replication.

Disadvantages include:

  • Packaging – Applications must first be packaged before they can be used.
  • Resources – Virtual applications may require more resources in terms of storage and CPU.
  • Compatibility – Not all applications can be virtualized easily.

Wikipedia defines desktop virtualization as:

Desktop virtualization (or Virtual Desktop Infrastructure) is a server-centric computing model that borrows from the traditional thin-client model but is designed to give administrators and end users the best of both worlds: the ability to host and centrally manage desktop virtual machines in the data center while giving end users a full PC desktop experience.

Hosted desktop virtualization is similar to hosted application virtualization, expanding the user experience to be the entire desktop. Commercial products include Microsoft’s Terminal Services, Citrix’s XenDesktop, and VMware’s VDI.

Benefits of desktop virtualization include most of those with application virtualization as well as:

  • High Availability – Downtime can be minimized with replication and fault tolerant hosted configurations.
  • Extended Refresh Cycles – Larger capacity servers as well as limited demands on the client PCs can extend their lifespan.
  • Multiple Desktops – Users can access multiple desktops suited for various tasks from the same client PC.

Disadvantages of desktop virtualization are similar to those of server virtualization. There is also the added disadvantage that clients must have network connectivity to access their virtual desktops. This is problematic for offline work and also increases network demands at the office.

The final segment of client virtualization is local desktop virtualization. It could be said that this is where the recent resurgence of virtualization began, with VMware’s introduction of VMware Workstation in the late ’90s. Today the market includes competitors such as Microsoft Virtual PC and Parallels Desktop. Local desktop virtualization has also played a key part in the increasing success of Apple’s move to Intel processors, since products like VMware Fusion and Parallels allow easy access to Windows applications. Some of the benefits of local desktop virtualization include:

  • Security – With local virtualization organizations can lock down and encrypt just the valuable contents of the virtual machine/disk. This can perform better than encrypting a user’s entire disk or operating system.
  • Isolation – Related to security is isolation. Virtual machines allow corporations to isolate corporate assets from third party machines they do not control. This allows employees to use personal computers for corporate use in some instances.
  • Development/Legacy Support – Local virtualization allows a user’s computer to support many configurations and environments it would otherwise not be able to support without different hardware or a different host operating system. Examples include running Windows in a virtualized environment on OS X, and testing legacy Windows 98 support on a machine whose primary OS is Vista.

Network Virtualization

Up to this point the types of virtualization covered have centered on applications or entire machines. These are not the only granularity levels that can be virtualized, however. Other computing concepts lend themselves to being virtualized in software as well. Network virtualization is one such concept. Wikipedia defines network virtualization as:

In computing, network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to the software containers on a single system…

Using the internal definition of the term, desktop and server virtualization solutions provide networking access between the host and guest as well as between many guests. On the server side, virtual switches are gaining acceptance as a part of the virtualization stack. The external definition of network virtualization is probably the more common use of the term, however. Virtual Private Networks (VPNs) have been a common component of the network administrator’s toolbox for years, with most companies allowing VPN use. Virtual LANs (VLANs) are another commonly used network virtualization concept. With network advances such as 10 gigabit Ethernet, networks no longer need to be structured purely along geographical lines. Companies with products in the space include Cisco and 3Leaf.

In general benefits of network virtualization include:

  • Customization of Access – Administrators can quickly customize access and network options such as bandwidth throttling and quality of service.
  • Consolidation – Physical networks can be combined into one virtual network for overall simplification of management.

Similar to server virtualization, network virtualization can bring increased complexity, some performance overhead, and the need for administrators to have a larger skill set.

Storage Virtualization

Another computing concept that is frequently virtualized is storage. Unlike the sometimes complex definitions we have seen up to this point, Wikipedia defines storage virtualization simply as:

Storage virtualization refers to the process of abstracting logical storage from physical storage.

While RAID at the basic level provides this functionality, the term storage virtualization typically includes additional concepts such as data migration and caching. Storage virtualization is hard to define in a fixed manner due to the variety of ways that the functionality can be provided. Typically, it is provided as a feature of:

  • Host Based with Special Device Drivers
  • Array Controllers
  • Network Switches
  • Stand Alone Network Appliances

Each vendor has a different approach in this regard. Another primary way that storage virtualization is classified is whether it is in-band or out-of-band. In-band (often called symmetric) virtualization sits between the host and the storage device, allowing caching. Out-of-band (often called asymmetric) virtualization makes use of special host-based device drivers that first look up the metadata (indicating where a file resides) and then allow the host to retrieve the file directly from the storage location. Caching at the virtualization level is not possible with this approach.

General benefits of storage virtualization include:

  • Migration – Data can be easily migrated between storage locations without interrupting live access to the virtual partition with most technologies.
  • Utilization – Similar to server virtualization, utilization of storage devices can be balanced to address over- and under-utilization.
  • Management – Many hosts can leverage storage on one physical device that can be centrally managed.

Some of the disadvantages include:

  • Lack of Standards and Interoperability – Storage virtualization is a concept and not a standard. As a result vendors frequently do not easily interoperate.
  • Metadata – Since there is a mapping between logical and physical location, the storage metadata and its management becomes key to a working reliable system.
  • Backout – The mapping between logical and physical locations also makes backing virtualization technology out of a system a less than trivial process.

Service / Application Infrastructure Virtualization

Enterprise application providers have also taken note of the benefits of virtualization and begun offering solutions that allow the virtualization of commonly used applications such as Apache, as well as application fabric platforms that allow software to be developed easily with virtualization capabilities from the ground up.

Application infrastructure virtualization (sometimes referred to as application fabrics) unbundles an application from a physical OS and hardware. Application developers can then write to a virtualization layer, and the fabric handles features such as deployment and scaling. In essence this process is the evolution of grid computing into a fabric form that provides virtualization-level features. Companies such as Appistry and DataSynapse provide features including:

  • Virtualized Distribution
  • Virtualized Processing
  • Dynamic Resource Discovery

IBM has also embraced the virtualization concept at the application infrastructure level with the rebranding and continued enhancement of WebSphere XD as WebSphere Virtual Enterprise. The product provides features such as service level management, performance monitoring, and fault tolerance. The software runs on a variety of Windows, Unix, and Linux based operating systems and works with popular application servers such as WebSphere, Apache, BEA, JBoss, and PHP application servers. This lets administrators deploy and move application servers at a virtualization layer level instead of at the physical machine level.

Final Thoughts

In summary it should now be apparent that virtualization is not just a server-based concept. The technique can be applied across a broad range of computing including the virtualization of:

  • Entire Machines on Both the Server and Desktop
  • Applications/Desktops
  • Storage
  • Networking
  • Application Infrastructure

The technology is evolving in a number of different ways but the central themes revolve around increased stability in existing areas and accelerating adoption by segments of the industry that have yet to embrace virtualization. The recent entry of Microsoft into the bare-metal hypervisor space with Hyper-V is a sign of the technology’s maturity in the industry.

Beyond these core elements the future of virtualization is still being written. A central dividing line is feature versus product. For some companies, such as Red Hat and many of the storage vendors, virtualization is being pushed as a feature to complement their existing offerings. Other companies, such as VMware, have built entire businesses with virtualization as a product. InfoQ will continue to cover the technology and companies involved as the space evolves.


