
Architecting For The Cloud: Some Thoughts

Introduction

A key advantage of cloud computing is scalability: in theory, it should be easy to provision new machines for an existing application, or decommission older ones, and scale accordingly. In reality, things are not so simple. The application has to be suitably structured from the ground up in order to best leverage this feature. Merely adding more CPUs or storage will not deliver linear performance improvements unless the application was explicitly designed with that goal. Most legacy systems were not, and consequently, as traffic and usage grow, they must be continually monitored and patched to keep performing at an acceptable level. This is not optimal. Extracting maximum utility from the cloud therefore requires that applications follow a set of architectural guidelines. Some thoughts on what those should be:

Stateless, immutable components

An important guideline for linear scalability is to build on relatively lightweight, independent, stateless processes which can execute anywhere and run on newly deployed resources (threads/nodes/CPUs) as needed to serve an increasing number of requests. These services share nothing with each other, merely processing asynchronous messages. At Delve, we make extensive use of this technique for multimedia operations such as thumbnail extraction, transcoding and transcription, all of which fit well into this paradigm. Scaling these services involves spinning up, automatically configuring and deploying additional dedicated instances which can be put to work immediately and taken down once they are no longer needed. Legacy applications that were not planned around this kind of scenario find it difficult to take advantage of such elasticity.
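The sketch below illustrates the general shape of such a worker in Python. It is a minimal sketch, assuming the boto library and an SQS queue named thumbnail-jobs; the queue name and the extract_thumbnail() helper are hypothetical stand-ins for the real media operations.

```python
# A minimal sketch of a stateless worker: it keeps no local state between
# jobs, so any number of copies can be started on freshly provisioned nodes.
import time
import boto

def extract_thumbnail(video_url):
    # Placeholder for the actual media operation (e.g. an ffmpeg invocation).
    pass

def run_worker():
    queue = boto.connect_sqs().get_queue('thumbnail-jobs')
    while True:
        msg = queue.read(visibility_timeout=300)
        if msg is None:
            time.sleep(5)                      # queue empty; poll again shortly
            continue
        extract_thumbnail(msg.get_body())      # all inputs arrive in the message
        queue.delete_message(msg)              # acknowledge only after success

if __name__ == '__main__':
    run_worker()
```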

Reduced reliance on relational databases

Relational databases are primarily designed for managing updates and transactions on a single instance. They scale well, but usually only on a single node. When the capacity of that node is reached, it becomes necessary to scale out and distribute the load across multiple machines. While there are established practices such as clustering, replication and sharding to allow this type of functionality, they have to be incorporated into the system design from the beginning for the application to benefit. Moving into the cloud does not make this problem go away.

Furthermore, even if these techniques are utilized by the application, their complexity makes it very difficult to scale to hundreds or thousands of nodes, drastically reducing their viability for large distributed systems. Legacy applications are more likely to be reliant on relational databases, and moving the actual database system to the cloud does not eliminate any of these issues.
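To make the point concrete, the fragment below sketches the sort of application-level sharding that has to be designed in from the start: each record key is hashed to decide which database host owns it. The host names are hypothetical, and a production scheme would need something like consistent hashing so that adding a shard does not remap most existing keys.

```python
import hashlib

# Hypothetical pool of MySQL shard hosts.
SHARDS = ['db1.example.com', 'db2.example.com',
          'db3.example.com', 'db4.example.com']

def shard_for(key):
    """Deterministically map a record key to the shard host that owns it."""
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for this record must be routed to the same host.
host = shard_for('video-1234')
```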

Alternatively, applications designed for the cloud have the opportunity to leverage a number of cloud based storage systems and reduce their dependence on RDBMS systems. For example, we use Amazon’s SimpleDB as the core of a persistent key/value store instead of MySQL. Our use case does not require relational database features such as joins across multiple tables; scalability, however, is essential, and SimpleDB gives us a quick and uncomplicated way to get it. Similarly, we use Amazon’s Simple Storage Service (S3) to store write-once-read-many data such as very large video file backups and our analytics reports. Were we to use MySQL for both of these requirements, as many legacy applications providing similar functionality do, we would need a heavy initial outlay of nodes and management infrastructure. By using SimpleDB and S3, we are able to provide functionality comparable to or better than legacy systems at lower cost.
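A minimal sketch of the two patterns, using the boto library; the domain, bucket and key names below are hypothetical.

```python
import boto
from boto.s3.key import Key

# Persistent key/value storage backed by SimpleDB.
sdb = boto.connect_sdb()
domain = sdb.create_domain('video-metadata')    # no-op if the domain already exists
domain.put_attributes('video-1234', {'title': 'Demo', 'duration': '120'})
item = domain.get_attributes('video-1234')

# Write-once-read-many blobs backed by S3.
s3 = boto.connect_s3()
bucket = s3.create_bucket('example-video-backups')
key = Key(bucket)
key.key = 'backups/video-1234.mp4'
key.set_contents_from_filename('/tmp/video-1234.mp4')
```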

There are caveats, however, with using NoSQL-style systems. They have their own constraints, and using them effectively requires understanding those limitations. For example, S3 works under a version of the eventual consistency model which does not provide the same guarantees as a standard file system, and treating it as one will lead to problems. Similarly, SimpleDB provides limited database functionality – treating it as a MySQL equivalent would be a mistake.
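One practical consequence of eventual consistency is that a reader may briefly fail to see an object that was just written. A hedged sketch of one way to cope, again assuming boto; the bucket and key names are hypothetical.

```python
import time
import boto

def read_with_retry(bucket_name, key_name, attempts=5, delay=1.0):
    """Read an S3 object, retrying with backoff if it is not yet visible."""
    bucket = boto.connect_s3().get_bucket(bucket_name)
    for _ in range(attempts):
        key = bucket.get_key(key_name)
        if key is not None:
            return key.get_contents_as_string()
        time.sleep(delay)        # not visible yet under eventual consistency
        delay *= 2
    raise IOError('%s not visible after %d attempts' % (key_name, attempts))
```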

Integration with other cloud based applications

A related advantage of designing for the cloud is the ability to leverage systems offered by the cloud computing provider. In our case, we make extensive use of Amazon’s Elastic MapReduce (EMR) service for our analytics. EMR, like Amazon’s other cloud offerings, is a pay-as-you-go system. It is also tightly coupled with the rest of Amazon’s cloud infrastructure, so transferring data within Amazon is free. At periodic intervals, our system spins up a number of nodes within EMR, transfers data from S3, performs computations, saves the results and tears down the instances. The functionality we achieve is comparable to constantly maintaining the large dedicated map-reduce cluster a legacy application would require, but at a fraction of the cost.
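The sketch below shows roughly what such a periodic run looks like through boto's EMR support; the bucket paths, script names and instance count are hypothetical.

```python
import boto
from boto.emr.step import StreamingStep

conn = boto.connect_emr()

# One streaming step: read logs from S3, write the report back to S3.
step = StreamingStep(
    name='daily-analytics',
    mapper='s3n://example-analytics/scripts/mapper.py',
    reducer='s3n://example-analytics/scripts/reducer.py',
    input='s3n://example-analytics/logs/2009-10-01/',
    output='s3n://example-analytics/reports/2009-10-01/',
)

jobflow_id = conn.run_jobflow(
    name='delve-analytics',
    steps=[step],
    num_instances=10,
    keep_alive=False,        # let the cluster tear itself down after the step finishes
)
```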

Deployment Issues

Deploying an application to the cloud demands special preparation. Cloud machines are typically commodity hardware, and preparing an environment able to run the different types of services an application requires is time consuming. In addition, deployment is not a one time operation: the service may need additional capacity later, and may need it fast. Consequently, it is important to be able to quickly commission, customize and deploy a new set of boxes as necessary. Existing tools do not provide the functionality required. As cloud computing is relatively new, tools to deploy and administer in the cloud are similarly nascent and must be developed alongside the actual application. Furthermore, developing, using and maintaining such tools requires skills not typically found in the average sysadmin. The combination of tools and the personnel required to develop and run them poses yet another hurdle for moving existing applications to the cloud. For new applications, these considerations must be part of any resource planning.
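As an illustration of what "quickly commission, customize and deploy" can mean in practice, the sketch below launches instances from a base image and passes a user-data script that configures each box on first boot, using boto. The AMI id, key pair and package name are hypothetical.

```python
import boto

# Hypothetical first-boot script: install and start the service.
BOOTSTRAP = """#!/bin/bash
apt-get update -y
apt-get install -y example-transcoder
/etc/init.d/example-transcoder start
"""

conn = boto.connect_ec2()
reservation = conn.run_instances(
    'ami-12345678',              # hypothetical base image
    min_count=2,
    max_count=2,
    key_name='deploy-key',
    instance_type='m1.large',
    user_data=BOOTSTRAP,
)
instance_ids = [i.id for i in reservation.instances]
```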

Fault Tolerance

Typically, cloud infrastructure providers do not guarantee uptime for an individual node, which means a box can go down at any time. Providers such as Amazon will, at best, offer a certain percentage uptime commitment at the data center level. While in practice nodes are usually stable, an application designed for the cloud has to have redundancy built in such that a) backups are running and b) they are running in separate data centers. These backup systems must also meet the application's other requirements, such as scalability, and their data must be kept in sync with that of the primary stores. Deploying and coordinating such a system imposes additional overhead in terms of design, implementation, deployment and maintenance, particularly when relational databases are involved. Consequently, applications designed from the ground up with these constraints in mind are much more likely to have an easy transition to the cloud.
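On Amazon, the closest analogue to "separate data centers" is separate availability zones. A minimal sketch of spreading redundant copies of a service across them, with a hypothetical AMI id and key pair, might look like this.

```python
import boto

conn = boto.connect_ec2()

# Launch one copy of the service in every availability zone so that the
# loss of a single zone still leaves backups running elsewhere.
for zone in conn.get_all_zones():
    conn.run_instances(
        'ami-12345678',          # hypothetical service image
        key_name='deploy-key',
        instance_type='m1.large',
        placement=zone.name,     # pin this copy to the zone
    )
```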
