Managing Application Performance in Hybrid Clouds — DevOps.com

Keeping pace with changes to your networks, systems and applications can feel like a full-time job. As applications support more data types in more distributed environments—often in real time—it becomes imperative to have a clear picture of your end-to-end computing environment.

Just as important is making sure your apps take advantage of the adaptability and efficiency of microservices, document data models and other cloud-focused technologies.

All those much-touted benefits of cloud computing—efficiency, agility, scalability—come to nothing if your applications don’t perform as expected. The growing prevalence of hybrid cloud infrastructures makes managing app performance trickier than ever. According to a recent survey conducted by Riverbed and reported in a Jan. 29 article in Data Center Journal by Riverbed Marketing Director Steve Brar, 89 percent of IT pros claim poor application performance hinders their operations, with 58 percent reporting application performance glitches each week and 36 percent experiencing application performance problems each day.

Even worse, 71 percent of the IT managers surveyed are “in the dark” about why the apps are running so poorly, and 37 percent have turned to unsupported apps to fill the breach when such problems occur, which exacerbates the growing problem of shadow IT. On the plus side, the managers believe that making application performance more visible improves productivity (58 percent), customer service (54 percent), product quality (49 percent), employee engagement (46 percent) and revenue generation (43 percent).

Brar identifies five requirements to ensure end-to-end application performance visibility:

  1. Network- and app-aware path selection directs traffic onto one of three branch uplinks: an MPLS link, an IPSec-protected Internet link or a link that exits directly to the Internet.
  2. Dynamic tunneling via a central control plane allows backhauling of branch data to the data center over an IPSec link.
  3. Use secure web gateways in conjunction with advanced threat detection.
  4. Use a QoS function that manages traffic from the source (inbound) rather than from the destination (outbound) to avoid slowing business-critical inbound data flows unnecessarily.
  5. Implement a unified management plane that has an intuitive interface and is based on such high-level abstractions as applications, sites, uplinks or networks, depending on the unique characteristics of your IT infrastructure.
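The first requirement above can be sketched in a few lines. This is an illustrative model only, assuming a simple health-plus-policy scheme; the uplink names, traffic classes and policy table are all hypothetical, not taken from Riverbed's products.

```python
# Hypothetical sketch of network- and app-aware path selection across the
# three branch uplinks described above. All names and policies are illustrative.

UPLINKS = {
    "mpls": {"healthy": True},
    "ipsec_internet": {"healthy": True},
    "direct_internet": {"healthy": True},
}

# Illustrative policy: business-critical traffic prefers MPLS, trusted SaaS
# traffic may break out directly, and everything else rides the IPSec tunnel.
POLICY = {
    "business_critical": ["mpls", "ipsec_internet"],
    "trusted_saas": ["direct_internet", "ipsec_internet"],
    "default": ["ipsec_internet", "mpls"],
}

def select_path(app_class: str) -> str:
    """Return the first healthy uplink permitted for this traffic class."""
    for uplink in POLICY.get(app_class, POLICY["default"]):
        if UPLINKS[uplink]["healthy"]:
            return uplink
    raise RuntimeError("no healthy uplink available")
```

Because the policy is ordered, a failed MPLS link automatically shifts business-critical traffic onto the IPSec tunnel rather than dropping it.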

Cloud Application Management: The IT Basics Still Apply

It isn’t unusual for IT departments to feel they’re losing control of their network infrastructure when they migrate applications to the cloud. Cahit Akin writes in an April 22, 2015, article on Network Computing that capacity management, change control and other IT fundamentals are just as important for managing your cloud infrastructure—if not more so, considering the likely increase in network traffic that results.

The key is to consider the cloud part of your internal network architecture. Three WAN-optimization techniques transfer easily to cloud infrastructure management:

  1. Caching images and other common elements on the local device helps ensure network bandwidth is reserved for the most important data.
  2. Deduplication minimizes your backups and other secondary network traffic by ensuring only changed data is updated.
  3. Compression removes unnecessary data from files to make them smaller and reduce overall bandwidth use.
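Techniques 2 and 3 above can be illustrated in miniature. This is a toy sketch, assuming content-hash deduplication over byte chunks and standard-library compression; real WAN optimizers work at the packet or block level with far more sophistication.

```python
import hashlib
import zlib

def dedup_chunks(chunks, seen_hashes):
    """Return only the chunks whose content hash is not already known,
    mimicking how deduplication sends just the changed data."""
    new = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            new.append(chunk)
    return new

def compress(chunk: bytes) -> bytes:
    """Shrink a chunk before it crosses the WAN."""
    return zlib.compress(chunk)
```

A second backup run that resends mostly unchanged chunks transfers only the new ones, and each surviving chunk is compressed before it leaves the site.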

The same features of a typical WAN optimization tool apply to improving the performance of cloud-based applications, notes TechTarget.

The challenge for IT is to keep pace with the tremendous changes in the nature of applications themselves. Data is collected, aggregated and analyzed from diverse sources, including IoT sensors, clickstreams, logs and social media data, as Ravi Dharnikota writes in a Feb. 5 article on DataInformed. Dharnikota states that IT has to support “anything, anytime, anywhere.”

Rather than a point-to-point approach to integration, you have to rearchitect your systems based on microservices that communicate via lightweight REST APIs. This way you distribute execution of integrations, which lowers costs and increases efficiency. If you aren’t already, you need to break out of the row-and-column rut and adopt document data models that support semi-structured and unstructured data.
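The document-model shift described above can be seen in a minimal sketch, assuming JSON as the document format; the record fields are hypothetical. Unlike a fixed row-and-column schema, each document carries only the fields its source produces.

```python
import json

# Hypothetical record: the same pipeline can ingest events whose payload
# shape varies by source (IoT sensor, clickstream, log, social media).
event = {
    "customer_id": 42,
    "source": "iot_sensor",
    "payload": {"temp_c": 21.5},  # a clickstream event would carry a URL here
}

def to_document(record: dict) -> str:
    """Serialize a semi-structured record for a document store."""
    return json.dumps(record, sort_keys=True)

def from_document(doc: str) -> dict:
    """Deserialize a stored document back into a record."""
    return json.loads(doc)
```

Microservices exchanging such documents over REST need agree only on the JSON envelope, not on a rigid shared schema.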

Design Apps to Leverage Batch and Real-Time Processing

The only way to deliver an infrastructure that meets the anytime, anywhere nature of modern apps is to be adaptable enough to support both batch processing and real-time processing. Along with this comes the ability to determine which apps require real-time attention, and which can be shifted into batch-processing mode. Dharnikota cites the Lambda architecture as an example of one that accommodates batch and real-time processing by balancing latency and throughput.
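The Lambda architecture's balancing act can be reduced to its query-time merge. This is a minimal sketch, assuming a page-count workload; the view contents are illustrative.

```python
# Hypothetical sketch of the Lambda architecture's serving layer: a batch
# view computed over the full history, plus a real-time view covering only
# the events the last batch run has not yet seen.

batch_view = {"page_a": 100, "page_b": 40}   # high-throughput nightly job
realtime_view = {"page_a": 3, "page_c": 1}   # low-latency streaming layer

def query(key: str) -> int:
    """Serve a count by merging the batch result with the real-time delta."""
    return batch_view.get(key, 0) + realtime_view.get(key, 0)
```

The batch layer supplies throughput over the full data set, while the streaming layer supplies latency for the most recent events; the merge hides the split from the application.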

However, the Lambda architecture’s shortcomings are explained by LinkedIn’s Jay Kreps in a July 2, 2014, article on O’Reilly Radar. Ultimately, you have to maintain code that produces the same result in two complex, distributed systems. According to Kreps, it’s much simpler and more efficient to improve the stream processing system to support both batch and real-time modes. Doing so would entail increasing the system’s inherent parallelism by “replaying history very, very fast.”


An alternative to the dual-processing-mode Lambda architecture enhances the stream processing of a single-mode system to reprocess the full data log to mimic both batch and real-time modes.

In this model, reprocessing is required only when the code changes. Rather than two full duplicates of your code, you simply update the single set. The recomputation is done by a job that is merely an improved version of the same code: it runs on the same framework and takes the same input data, although the parallelism needs to be increased to ensure fast performance.
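The reprocessing model can be sketched as follows. This is a toy illustration of the idea, assuming a retained event log and a simple counting job; the log contents and transform are hypothetical, not from Kreps's article.

```python
# Hypothetical sketch of reprocessing by replay: when the code changes, run a
# second instance of the same stream job over the full retained log, writing
# to a new output table, instead of maintaining a separate batch codebase.

log = [("u1", 1), ("u2", 5), ("u1", 2)]   # the retained event log

def stream_job(events, transform):
    """One stream-processing job; reprocessing just replays all of history
    through it (in production, with much higher parallelism)."""
    table = {}
    for key, value in events:
        table[key] = table.get(key, 0) + transform(value)
    return table

table_v1 = stream_job(log, lambda v: v)       # current output table
table_v2 = stream_job(log, lambda v: v * 2)   # new code, replayed from offset zero
```

The same framework and the same input produce both tables; only the code version differs, which is exactly what makes the recomputation cheap to reason about.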

To optimize performance further, you could combine two output tables. However, having two separate tables, at least for a brief period, lets you revert to the old logic in an instant via a button that redirects the application to the older output table. To prevent the new version of the code from degrading performance in comparison to the old version, add an automatic A/B test or bandit algorithm that lets you control the cutover.

This post originally appeared on DevOps.com