6 ways to control your public-cloud spend

Slight changes in cloud usage or over-provisioning of resources can slip under the radar—until the fees start to hit your bottom line. Managing public cloud costs is even harder when your organization has a multi-cloud strategy.

Don't keep paying for services you no longer need. Instead, use analytics to predict what you'll be spending on cloud vendors. Here are some basic scenarios for predicting the monthly expenses of your public cloud implementation, and ways you can reduce those costs.


Understand the basics

The simplest use case involves monitoring daily and hourly costs based on data from your cloud providers. Set alert thresholds for critical budget levels, and project what your end-of-month spend is likely to be. Take your time to understand every line on the monthly bill, because you might find hidden costs, such as fees charged for a service that was not on last month's bill.
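The threshold-and-projection idea above can be sketched in a few lines. This is a minimal illustration, not a provider API: the budget figure, the linear projection, and the alert wording are all assumptions for the example.

```python
import calendar
from datetime import date

def projected_month_spend(spend_to_date: float, today: date) -> float:
    """Linearly project end-of-month spend from month-to-date cost."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spend_to_date / today.day * days_in_month

def check_budget(spend_to_date: float, budget: float, today: date) -> str:
    """Compare actual and projected spend against a monthly budget."""
    projection = projected_month_spend(spend_to_date, today)
    if spend_to_date >= budget:
        return "CRITICAL: budget already exceeded"
    if projection > budget:
        return f"WARNING: projected spend ${projection:.2f} exceeds budget ${budget:.2f}"
    return "OK"
```

A real setup would pull `spend_to_date` from your provider's billing export and wire the result into your alerting system; a linear projection is crude, but it is enough to catch a runaway month early.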

Your cloud provider's charges are primarily based on time used, while data-volume charges apply to storage (data at rest) and networking (data transferred).

Figure out the spikes and anomalies

A CIO might want to know if there were spikes or other anomalies at certain points in the month, especially if costs have been running higher than usual. Root-cause analysis can help determine whether these spikes are avoidable or whether you have something misconfigured.

Overprovisioning is a typical form of misconfiguration. Network misconfiguration (e.g., an open proxy) and the lack of distributed denial-of-service (DDoS) protection can lead to high network costs. 

The key question is whether a given spike in cost is normal, or even expected. You may have had increased compute consumption on the cloud for several weeks while your World Cup ad was running, for instance. And you should be able to absorb the cost of small spikes due to research usage, for example.
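One simple way to separate genuine anomalies from ordinary variation is to flag days whose cost sits far outside the distribution of the rest of the month. The sketch below uses a z-score test; the three-standard-deviation threshold is an assumption you would tune to your own data.

```python
from statistics import mean, stdev

def find_spikes(daily_costs: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of days whose cost exceeds the mean by more than
    z_threshold standard deviations."""
    mu = mean(daily_costs)
    sigma = stdev(daily_costs)
    if sigma == 0:
        return []  # perfectly flat spend: nothing to flag
    return [i for i, cost in enumerate(daily_costs)
            if (cost - mu) / sigma > z_threshold]
```

Each flagged day is then a candidate for root-cause analysis: check whether a campaign, a batch job, or a misconfiguration explains it.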

Look at your network traffic

Compute costs, depending on your cloud provider, are assessed based on hourly use of machines. Smaller machines are cheaper than larger ones, so right-size your machine use so that you neither overprovision nor create bottlenecks. If possible, use a scale-out architecture that uses multiple, smaller machines.
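A back-of-the-envelope comparison makes the right-sizing argument concrete. The hourly rates below are hypothetical, and the example assumes a simple model where small machines can be started and stopped hour by hour to match demand.

```python
# Hypothetical hourly rates for illustration only.
LARGE_RATE = 0.40   # $/hour for one 16-vCPU machine (assumed price)
SMALL_RATE = 0.10   # $/hour per 4-vCPU machine (assumed price)

def scale_up_cost(hours: int) -> float:
    """Run one large machine around the clock, regardless of load."""
    return hours * LARGE_RATE

def scale_out_cost(vcpus_needed_per_hour: list[int]) -> float:
    """Run only as many small 4-vCPU machines as each hour's load requires."""
    machines_per_hour = (-(-v // 4) for v in vcpus_needed_per_hour)  # ceil(v / 4)
    return sum(m * SMALL_RATE for m in machines_per_hour)
```

With demand of 4, 8, 16, and 4 vCPUs over four hours, scale-out costs $0.80 versus $1.60 for keeping the large machine running, since you pay for capacity only when the load actually needs it.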

Also consider your networking costs, especially those for traffic to and from the Internet. It's important to keep an eye on where network traffic is coming from and where it's going. Traffic that stays within a single local network usually incurs lower costs, or none at all.

In the worst case, high traffic outside your local network might be the result of a security breach, if your cloud resources are being abused for a DDoS or other form of attack.

Look at the usage patterns. Setting alerts for high network traffic is a simple approach, but you can also baseline normal usage and forecast trends. In any case, hardening and securing your cloud resources should be your No. 1 priority.
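Baselining can be as simple as comparing each day's egress against a trailing average. The sketch below flags any day that exceeds twice the previous week's average; the window and multiplier are assumptions to adjust for your own traffic profile.

```python
def traffic_alerts(daily_egress_gb: list[float],
                   window: int = 7,
                   factor: float = 2.0) -> list[tuple[int, float, float]]:
    """Flag days where egress exceeds `factor` times the trailing
    `window`-day average. Returns (day_index, egress, baseline) tuples."""
    alerts = []
    for i in range(window, len(daily_egress_gb)):
        baseline = sum(daily_egress_gb[i - window:i]) / window
        if daily_egress_gb[i] > factor * baseline:
            alerts.append((i, daily_egress_gb[i], baseline))
    return alerts
```

A sudden alert on outbound traffic is worth immediate investigation, since it could be the first sign that your resources are being abused for an attack.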

Examine storage costs

Often, storage use is calculated based on volume over time—i.e., the number of gigabytes multiplied by time stored. It should be possible to monitor whether stored data is accessed (if ever) to determine whether the data is actually needed. Was there a project that once required the data but that has now ended without terminating the related data storage? Do you store the same data in multiple places with no real business need for the duplication?
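The volume-over-time pricing model and the stale-data check above can be sketched as follows. The per-GB-month price and the 90-day idle cutoff are hypothetical values for illustration.

```python
from datetime import date, timedelta

def gb_month_cost(size_gb: float, days_stored: int,
                  price_per_gb_month: float = 0.023) -> float:
    """Cost as gigabytes multiplied by months stored (assumed rate)."""
    return size_gb * (days_stored / 30) * price_per_gb_month

def stale_data(objects: list[tuple[str, float, date]],
               today: date,
               max_idle_days: int = 90) -> list[tuple[str, float]]:
    """Given (name, size_gb, last_accessed) tuples, return objects
    not accessed within the idle window -- candidates for deletion
    or a cheaper storage tier."""
    cutoff = today - timedelta(days=max_idle_days)
    return [(name, gb) for name, gb, last in objects if last < cutoff]
```

Running a check like this monthly answers the questions above: data left over from ended projects and unneeded duplicates show up as large, never-accessed objects.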

If the data is seldom accessed but considered valuable, it could be moved from block storage on a storage-area network to a facility that is more object-oriented, such as Amazon Web Services' S3, which is much less costly. And there are even less expensive backup and archival options that you can use for files that are rarely or never accessed.

Analyzing the usage profile of the data you're storing in the cloud is a best practice that's often overlooked in cloud cost management.

Consider cost variances by geo-location

Most providers charge different rates for the same service in different regions. The reasons can be complex. Factors include labor costs and market-value fluctuations from one region to the next.

In Europe, where data locality is more of an issue than in other parts of the world, you'll pay more if you want data from a premium location. And if the requested service demands a compute-intensive machine, that will be more expensive as well—but perhaps the price for the same service or machine would be less in another region that’s accessible to you.
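Comparing regional rates is straightforward once you encode your data-locality constraints. The region names and per-hour rates below are hypothetical; substitute your provider's actual price list.

```python
# Hypothetical per-hour rates for the same machine class in different regions.
REGION_RATES = {
    "us-east": 0.096,
    "eu-west": 0.107,
    "eu-central": 0.115,
}

def cheapest_region(rates: dict[str, float],
                    allowed_regions: set[str]) -> str:
    """Pick the lowest-cost region among those that satisfy
    data-locality requirements."""
    candidates = {r: p for r, p in rates.items() if r in allowed_regions}
    return min(candidates, key=candidates.get)
```

If your data must stay in Europe, the same lookup restricted to EU regions still finds you the cheaper of the eligible options.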

Also, as long as you don't rely on special services unique to a specific cloud provider, it's worth comparing similar offerings from multiple providers. Switching workloads can also save money.

Beware pre-provisioning costs

In our lab, a worker ordered a machine without reading the fine print. The machine came with pre-provisioned, commercial software on it, which resulted in a high up-front cost on top of the normal usage-based costs.

Whoops. This mistake required renegotiation with the service provider, and getting that resolved was time-consuming and expensive.

So be sure to read the fine print when prospecting for services. Natural-language processing and machine learning could even be applied to scan service terms and warn users about high up-front or incremental costs, especially for pre-provisioned software that you don't need.

Now take back control

Many workers across the IT spectrum, from development to IT operations management to network administration, are still unfamiliar with cloud-based technologies and the associated costs. But the cost implications of implementing poor application architecture or ignoring network traffic patterns and data storage routines can hurt your bottom line.

That's why it's worth spending some time to set up basic usage and cost monitoring, including alerting. Whether you use a tool, a service, or your own homegrown solution to baseline and forecast the spending in a multi-cloud environment, data analytics can help you spot unusual patterns and potentially gain back full control.