It seems so straightforward: Combine the security and reliability of private clouds with the scalability and efficiency of public clouds to create an infinitely elastic infrastructure for your apps and data. But like so many seemingly simple concepts, cloud bursting turns out to be thornier than a rose garden once you actually try to put a plan in place.
In hindsight, the cloud’s initial vision of the pay-as-you-go model was overly simplistic: dynamically source and broker cloud services based on real-time changes in cost and performance. As Forrester principal analyst Lauren Nelson writes in a September 2016 article on ComputerWeekly, such an ideal system “remains a vision.”
Nelson states that the primary limitation of the first generation of cloud-bursting tools is that public cloud costs don’t vary sufficiently to generate much demand for cloud brokering. The real-time information the tools provide is useful only for “right-sourcing” initial deployments to ensure they run in an optimal cloud environment. However, they don’t help in porting workloads that have already been provisioned, according to Nelson.
Addressing the cloud ‘interoperability challenge’
It turns out that running your average workload on in-house servers and tapping on-demand public cloud capacity when usage spikes puts a tremendous strain on your internal network. It also incurs high data-out charges, introduces latency in applications, and requires that you operate two identical clouds with matching templates. An alternative is to host private clouds in the same data center as a public cloud that uses the same templates and platforms. However, few enterprises to date have implemented such a multi-data center bursting architecture.
An overly simplistic design of a cloud-bursting configuration links app servers on a private cloud to a mirror environment in the public cloud as demand for resources spikes. Source: RightScale, via Slideshare
There’s nothing simple about connecting cloud and non-cloud environments for such functions as authentication, usage tracking, performance monitoring, process mapping, and cost optimization. Suppliers may pitch their products’ interoperability features, but in practice, few of the products are capable of cutting across infrastructure, hypervisor, and cloud platforms.
A service that makes application mapping a breeze
Hybrid cloud management requires an understanding of the economics of various cloud services, and a comprehensive map of all your applications’ dependencies. You also need to know exactly what data the cloud service collects, and how to maximize your providers’ integration options. For example, the Happy Apps uptime monitoring service provides dependency maps that show at a glance the relationships between individual IT systems as they interact with your apps.
With Happy Apps, you can group and monitor databases, web servers, app servers, and message queues as a single application. In a single clear, intuitive dashboard you see the overall status of your systems as well as the status of each group member. The range of databases, servers, message queues, and apps supported by Happy Apps is unmatched in the industry: MongoDB, Riak, MySQL, MS SQL Server, Redis, Elasticsearch, RabbitMQ, and in the near future, Oracle. Last but not least, Happy Apps' reporting functions facilitate analysis of stored data to identify patterns in performance, outages, and other parameters.
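The idea behind grouped monitoring is simple enough to sketch: a group's status rolls up from the worst status among its members. The snippet below is not Happy Apps' actual API; all names are hypothetical, and it only illustrates how a dashboard might compute a group's overall status.

```python
# Sketch of grouped status roll-up, as a monitoring dashboard might
# compute it. All names here are hypothetical, not the Happy Apps API.

STATUS_RANK = {"up": 0, "degraded": 1, "down": 2}

def group_status(member_statuses):
    """A group's status is the worst status among its members."""
    return max(member_statuses, key=lambda s: STATUS_RANK[s])

# One "application" grouping a web server, database, and message queue.
app = {
    "web-server": "up",
    "mysql": "degraded",
    "rabbitmq": "up",
}

print(group_status(app.values()))  # "degraded" -- one slow database
                                   # drags the whole group's status down
```

The same roll-up rule nests naturally: a dashboard can treat each group's result as a member of a larger group.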
API management is the key to easy connectivity
The future of IT belongs to application-to-application messaging, usually based on RESTful APIs. Managing APIs becomes the key to quick, universal access to all available cloud resources. Unfortunately, the APIs of many cloud services are problematic: reports include a single API call returning inconsistent results, daily changes made without notifying customers, and libraries of exposed APIs that lack core functions.
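One defensive tactic against an API that returns inconsistent results for the same call is to issue the request more than once and accept only an answer that repeats. This is a generic sketch around a hypothetical fetch function, not tied to any particular provider's API.

```python
import collections

def stable_result(fetch, attempts=3, quorum=2):
    """Call fetch() several times and return a result only if it
    repeats at least `quorum` times; otherwise raise an error."""
    counts = collections.Counter()
    for _ in range(attempts):
        counts[fetch()] += 1
    value, seen = counts.most_common(1)[0]
    if seen >= quorum:
        return value
    raise RuntimeError("API returned no consistent answer")

# Example: a flaky endpoint that occasionally returns a stale value.
responses = iter(["v2", "v1", "v2"])
print(stable_result(lambda: next(responses)))  # "v2" wins 2 of 3 calls
```

The obvious trade-off is extra calls (and extra cost) per request, so a wrapper like this belongs only on endpoints known to misbehave.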
For their part, customers often exacerbate the problem by treating each API as a one-off rather than applying consistent policies to all APIs. A well-documented API on the customer side makes it more likely the cloud service provider can connect to your databases, applications, and other systems without latency or other performance issues.
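Applying consistent policies to every API, rather than treating each as a one-off, can be as simple as routing all outbound calls through one wrapper that enforces the same retry rules. A minimal sketch, with hypothetical policy values:

```python
import time

def with_api_policy(call, retries=3, backoff=0.5):
    """Wrap any API call in one company-wide policy:
    bounded retries with exponential backoff."""
    def wrapped(*args, **kwargs):
        for attempt in range(retries):
            try:
                return call(*args, **kwargs)
            except ConnectionError:
                if attempt == retries - 1:
                    raise
                time.sleep(backoff * 2 ** attempt)
    return wrapped

# Every client goes through the same wrapper, so a policy change
# happens in one place instead of once per API.
failures = [ConnectionError(), ConnectionError()]
def flaky_call():
    if failures:
        raise failures.pop()
    return "ok"

print(with_api_policy(flaky_call, backoff=0)())  # "ok" on the third try
```

A real deployment would add uniform timeouts, authentication, and logging to the same wrapper; the point is that the policy lives in one place.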
Even hyperconverged boxes need TLC to work with hybrid clouds
Some companies view hyperconverged storage as a private cloud in a box, as the Register's Danny Bradbury writes in a September 22, 2016, article. When cloud bursting involves hyperconverged boxes, your local resources are likely to become overburdened as compute and storage are offloaded from the on-premises kit to the public cloud. The only way to coordinate security, charging, and budget control is via orchestration.
In a July 29, 2016, article on Business2Community, Tyler Keenan identifies the technical challenges facing IT in implementing cloud bursting in a hybrid setup. The most common problem area is limited bandwidth: the burst of data you need to move between the data center and the public cloud overwhelms your network connection at just the time you need the bandwidth the most. Even if your storage and compute capacities are scalable, your data-transfer pipes may not be so flexible.
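The bandwidth constraint is easy to quantify with a back-of-the-envelope calculation of how long a burst-sized transfer takes over a given link. The figures below are illustrative assumptions, not from the article.

```python
def transfer_hours(data_tb, link_gbps, utilization=0.7):
    """Hours to move `data_tb` terabytes over a `link_gbps` link,
    assuming only a fraction of the link is usable for the burst."""
    bits = data_tb * 8e12                      # terabytes -> bits
    usable_bps = link_gbps * 1e9 * utilization # bits per second
    return bits / usable_bps / 3600

# Moving 10 TB to the public cloud over a 1 Gbps link at 70% utilization:
print(round(transfer_hours(10, 1), 1))  # 31.7 hours -- far too slow
                                        # for a traffic spike already underway
```

Numbers like these explain why bursting plans often pre-stage data in the public cloud rather than moving it when the spike hits.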
The importance of infrastructure and resource orchestration is highlighted in this diagram of a typical cloud-bursting scenario. Source: Inside Big Data
Cloud bursting requires that your software be configured to run multiple instances simultaneously, which is particularly troublesome when you have to retrofit existing apps to accommodate multiple instances. In many organizations, compliance with HIPAA or PCI DSS may also be a factor when shifting data between your in-house private cloud and a public cloud service.
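Retrofitting an app for multiple simultaneous instances usually means moving per-user state out of process memory into a shared store, so that any instance can serve any request. A sketch of the idea, with a plain dict standing in for a shared store such as a distributed cache:

```python
# BEFORE: session state lives in one process's memory, so only that
# instance can serve the user's later requests -- bursting breaks sessions.

# AFTER: state lives in a shared store reachable from every instance.
# A dict stands in here for a distributed cache or database.
shared_store = {}

def handle_request(store, session_id, item):
    """Any instance can handle any request: it reads and writes session
    state through the shared store, keeping itself stateless."""
    cart = store.setdefault(session_id, [])
    cart.append(item)
    return len(cart)

# Two different "instances" sharing the same store: the session
# survives being bounced between them during a burst.
print(handle_request(shared_store, "user-1", "widget"))  # 1
print(handle_request(shared_store, "user-1", "gadget"))  # 2
```

This is also why step 5 below recommends starting with a stateless app: there is no store to retrofit at all.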
What channel partners can do to help make cloud bursting work
Despite the difficulties cloud bursting presents to organizations, the technology still offers the promise of unmatched efficiency when accommodating peaks in demand, whether they’re anticipated (such as a retailer’s traffic bumps at holiday season) or a surprise. In a June 16, 2016, post on the Channel Partners blog, Bernard Sanders presents a five-step plan for cloud service providers who want to assist enterprises in implementing their cloud-bursting strategy.
1. Automate the server build process to the custom needs of the company, including IP and hostname allocation, VM creation, and installation of the base OS and various agents.
2. Automate the entire app stack to ensure builds for common apps are standardized across the company, and best practices are implemented at the automation layer.
3. Set up and test auto-scaling, starting with a single app and a trigger action for scaling resources up and down, such as high or low CPU capacity.
4. Enable the complete stack deployment process inside the public cloud provider’s infrastructure, preferably using a configuration management system such as Morpheus to automate the deployment process.
5. Implement your cloud-bursting plan, starting with a stateless app, such as a web app with no dynamic content or database connection as a proof of concept (and confidence booster).
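The trigger logic in step 3 can be sketched in a few lines: scale out when CPU crosses a high-water mark, scale in below a low-water mark, within fixed bounds. The thresholds and names below are illustrative assumptions, not from Sanders' post.

```python
def scaling_decision(cpu_pct, instances, high=80, low=20,
                     min_instances=1, max_instances=10):
    """One auto-scaling check: add an instance on high CPU,
    remove one on low CPU, stay within fixed bounds."""
    if cpu_pct > high and instances < max_instances:
        return instances + 1
    if cpu_pct < low and instances > min_instances:
        return instances - 1
    return instances

print(scaling_decision(90, 2))  # 3: burst out to another instance
print(scaling_decision(10, 3))  # 2: scale back in as demand falls
print(scaling_decision(50, 2))  # 2: inside the band, no change
```

Production auto-scalers add a cooldown period between decisions so a noisy CPU metric doesn't cause instances to flap up and down.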
Cloud bursting is more than some never-to-be-attained data-management ideal. But when it comes to actually implementing the technology, IT departments may need to adopt the old motto of the U.S. Marines: "The difficult we do right away. The impossible takes a little longer."