
Modern applications require modern infrastructure to run. In this day and age, most new systems are built on microservices-based architectures, and monoliths are increasingly a thing of the past. Containers and container management have become a huge part of delivering your applications to a large audience without compromising overall performance or user experience.

The one technology synonymous with scalability and container orchestration is Kubernetes. Being well versed in running and managing your K8s workloads is therefore a big requirement, and the right way to go is to optimize the cloud costs of these resources. In this blog, we will take a look at some AWS cost reduction strategies that you can implement in your EKS environment to prevent it from burning a hole in your pocket.

What is EKS?

According to the official AWS documentation, Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.

But what does a managed Kubernetes service mean? Does it mean that AWS will manage your EKS cluster on your behalf, or does it mean that AWS will have full control over your deployed application? If you are thinking of the former, you might be wrong.

A managed Kubernetes service means that AWS takes care of setting up your Kubernetes control plane and installs all the components required for your Kubernetes infrastructure to run smoothly. If you have ever set up a Kubernetes cluster from scratch, you know the pain of getting it working; AWS sets up the cluster automatically, leaving you to deploy your workloads on that infrastructure. EKS can help you manage your Kubernetes workloads on the AWS cloud as well as in on-premises data centers.
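Because EKS provisions the control plane for you, creating a cluster boils down to a single API call. Here is a minimal sketch using boto3; the cluster name, IAM role ARN, and subnet IDs are placeholders for resources you would create beforehand.

```python
import boto3

# Ask EKS to provision a managed control plane (API server, etcd, and so on).
# The role ARN and subnet IDs are placeholders for resources you would
# normally create first.
eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
)

print(response["cluster"]["status"])  # usually "CREATING" right after the call
```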

Components of an EKS cluster:

An EKS cluster consists of two primary components:

  • The EKS control plane, which runs the core Kubernetes components such as etcd and the Kubernetes API server.

  • EKS nodes that are registered with the control plane.


EKS pricing:

To realize cloud cost savings on EKS resources, you first need to understand how you are billed for using EKS. Let’s start with creating your EKS cluster: every EKS cluster you create incurs a fixed flat fee of $0.10 per hour.
You have multiple options for running EKS: you can run it on Amazon EC2 instances, on AWS Fargate, or on-premises using AWS Outposts. We will get into the details of running your EKS cluster on EC2 and Fargate in later sections of this blog.

If you run your EKS workloads on EC2 instances, you pay for what you use. The cost depends on the type of EC2 instances and the resources, such as EBS volumes, that you use to run them. For a more detailed estimate of EC2 costs, you can refer to the AWS pricing calculator.

If you run your cluster on AWS Fargate, you are billed for vCPU and memory consumption from the moment your container image download starts until your pod terminates.
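To make these numbers concrete, here is a rough back-of-the-envelope estimate in Python. The $0.10 hourly control-plane fee is the published flat rate mentioned above; the Fargate per-vCPU and per-GB rates are left as placeholders that you should fill in from the AWS pricing page for your region.

```python
# Rough monthly EKS cost estimate. The $0.10/hour control-plane fee is the
# published flat rate; the Fargate rates are placeholders -- fill them in
# from the AWS pricing page for your region.
HOURS_PER_MONTH = 730

CONTROL_PLANE_PER_HOUR = 0.10   # flat fee per EKS cluster
FARGATE_PER_VCPU_HOUR = 0.0     # placeholder: regional per-vCPU-hour rate
FARGATE_PER_GB_HOUR = 0.0       # placeholder: regional per-GB-hour rate


def estimate_monthly_cost(clusters: int, avg_vcpu: float, avg_gb: float) -> float:
    """Estimate a month of control-plane fees plus Fargate vCPU/memory charges."""
    control_plane = clusters * CONTROL_PLANE_PER_HOUR * HOURS_PER_MONTH
    fargate = (avg_vcpu * FARGATE_PER_VCPU_HOUR + avg_gb * FARGATE_PER_GB_HOUR) * HOURS_PER_MONTH
    return control_plane + fargate


# A single cluster with no Fargate usage still costs roughly $73/month
# for the control plane alone.
print(f"${estimate_monthly_cost(clusters=1, avg_vcpu=0, avg_gb=0):.2f}")
```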

Understanding your EKS costs:

Now that you have a basic understanding of EKS pricing, let's take a deeper dive into how you can better manage your existing EKS infrastructure. There are several tools available to help you understand and visualize your EKS cluster costs.

You can enable the cluster-level cost allocation tagging that AWS provides. If you run your EKS workloads on EC2, AWS automatically applies these tags to the EC2 instances attached to your EKS cluster, and you can use them to monitor how much those instances are costing. With these tags, you can attribute EC2 expenses to specific EKS clusters in AWS Billing and Cost Management, which makes it straightforward to fold EKS costs into financial workflows already built on these tools.
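Once the tag is activated as a cost allocation tag in the Billing console, you can also query it programmatically. The sketch below assumes the AWS-generated aws:eks:cluster-name tag is active and uses the Cost Explorer API to break down a month of EC2 spend per cluster; the dates are just an example window.

```python
import boto3

# Break down one month of EC2 spend by EKS cluster, using the AWS-generated
# "aws:eks:cluster-name" cost allocation tag. The tag must be activated in
# the Billing console for Cost Explorer to group by it.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example window
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
    GroupBy=[{"Type": "TAG", "Key": "aws:eks:cluster-name"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    cluster_tag = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{cluster_tag}: ${amount:.2f}")
```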

Another way to optimize your cloud costs is to use an open-source tool called Kubecost. By configuring Kubecost with your EKS cluster, you can access a visualization dashboard that provides you with a better understanding of your EKS cluster's overall cost.

Understanding compute purchase options:

Now that we know how you are billed for your EKS cluster, let’s take a look at the compute options available to you. We have already touched on them, but here we will try to get a better understanding of how each one can be useful while you are optimizing your EKS costs. You already know you are charged a flat $0.10 hourly fee for every EKS cluster you create; the right cloud cost management strategy here is selecting the right compute option for your EKS workloads. There are three options available to you:

  • EC2 Instances - Let’s first talk about the simplest method available for deploying your EKS workloads: EC2 instances. If you are just beginning to set up your EKS infrastructure, EC2 is a good place to start.

  • AWS Fargate - AWS Fargate is a service offered by AWS that enables you to run your containers without provisioning or worrying about servers. This service is useful if you are running large-scale infrastructure and don’t want the additional overhead of managing servers.

  • AWS Outposts - If you are running your EKS workloads on your on-premises infrastructure, you can leverage AWS Outposts. Outposts enables you to run AWS services, APIs, and tools in your own in-house data centers.
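Before choosing, it can help to see which of these options an existing cluster already uses. Here is a small sketch with boto3, assuming a cluster named demo-cluster, that lists its managed node groups (On-Demand or Spot EC2) and any Fargate profiles.

```python
import boto3

# Inspect which compute options an existing cluster is actually using:
# managed node groups (On-Demand or Spot EC2) and Fargate profiles.
# "demo-cluster" is a placeholder name.
eks = boto3.client("eks", region_name="us-east-1")
cluster = "demo-cluster"

for ng_name in eks.list_nodegroups(clusterName=cluster)["nodegroups"]:
    ng = eks.describe_nodegroup(clusterName=cluster, nodegroupName=ng_name)["nodegroup"]
    print(f"node group {ng_name}: {ng['capacityType']} on {ng['instanceTypes']}")

for fp_name in eks.list_fargate_profiles(clusterName=cluster)["fargateProfileNames"]:
    print(f"fargate profile: {fp_name}")
```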

Understanding your cluster requirements:

It’s important to first understand the ins and outs of your requirements. If you are starting out with EKS, or you are hosting a new application that will not have many users, it would be overkill to use a huge instance fleet or compute-optimized instances. Similarly, understanding the usage behavior and resource consumption of your application is an important part of your AWS cost reduction strategies.

For example, if you are running an application for your team’s internal use only, with perhaps 10-15 users, and you pick compute-optimized EC2 instances as your preferred capacity, it will be difficult to achieve cloud cost savings.
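One way to ground this decision is to compare the capacity you are paying for with what your workloads actually request. The sketch below is a rough starting point using the kubernetes Python client; it assumes you already have a kubeconfig for the cluster and simply sums the CPU requests of every pod.

```python
from kubernetes import client, config

# Sum the CPU requests of every pod in the cluster to see how much capacity
# your workloads actually ask for before committing to instance sizes.
# Assumes a kubeconfig for the cluster is already set up
# (e.g. via `aws eks update-kubeconfig`).
config.load_kube_config()
v1 = client.CoreV1Api()


def cpu_to_millicores(value: str) -> int:
    """Convert Kubernetes CPU quantities like '250m' or '2' to millicores."""
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)


total_millicores = 0
for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        requests = (container.resources.requests or {}) if container.resources else {}
        if "cpu" in requests:
            total_millicores += cpu_to_millicores(requests["cpu"])

print(f"Total requested CPU across all pods: {total_millicores / 1000:.1f} vCPU")
```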

Enable Cluster Autoscaling:

Autoscaling in EKS means that the service can automatically adjust the number of resources allocated to your application based on its current demand. This includes adding more worker nodes, which are like individual computers that run your application, or scaling up the capacity of your existing nodes to handle more workloads.

For example, if your application suddenly experiences a spike in traffic, EKS can automatically add more worker nodes to handle the increased demand. And when the traffic subsides, EKS can also automatically remove the excess nodes to save resources and reduce costs.

EKS autoscaling can help ensure that your application is always available and responsive to users, without you having to manually adjust the resources or worry about capacity planning. It can also help optimize cloud costs by automatically scaling up or down based on actual demand.

Autoscaling can help optimize EKS costs in several ways:

  • Right-sizing resources: Autoscaling can help you allocate just the right amount of resources needed to handle your application's workload. This means that you're not wasting resources on over-provisioned infrastructure that sits idle and incurs unnecessary costs. With autoscaling, you can add more resources when needed and remove them when they're no longer required.

  • Avoiding over-provisioning: Over-provisioning means having more resources than necessary to handle the workload, which can lead to higher costs. Autoscaling can help you avoid over-provisioning by scaling resources up or down based on actual demand. For example, if your application experiences a sudden spike in traffic, autoscaling can add more resources to handle the increased load, and then remove them when the traffic subsides.

  • Saving on compute costs: Autoscaling can help you achieve cloud cost savings by automatically selecting the most cost-effective instance types and sizes for your workload requirements. This means that you’re only paying for what you use.

  • Reducing manual intervention: Autoscaling can help reduce the need for manual intervention in scaling resources, which can be time-consuming and error-prone. With autoscaling, you can set up policies and rules to automatically scale resources based on predefined thresholds and parameters, which reduces the likelihood of human error and helps ensure consistent performance and availability.

Overall, autoscaling boosts your AWS cost reduction strategies by dynamically adjusting the resources allocated to your application based on actual demand, avoiding over-provisioning, and reducing the need for manual intervention. (Read more about the right practices in implementing Autoscaling.)
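The Cluster Autoscaler itself is usually installed inside the cluster (for example, via its Helm chart), but the scaling bounds it works within for a managed node group can be set through the EKS API. Here is a hedged sketch with boto3; the cluster and node group names are placeholders, and the min/max values are only illustrative.

```python
import boto3

# Set the scaling bounds that node group auto scaling (and Cluster Autoscaler)
# works within. Cluster and node group names are placeholders.
eks = boto3.client("eks", region_name="us-east-1")

eks.update_nodegroup_config(
    clusterName="demo-cluster",
    nodegroupName="demo-nodegroup",
    scalingConfig={
        "minSize": 1,      # scale in aggressively during quiet periods
        "maxSize": 10,     # cap spend during traffic spikes
        "desiredSize": 2,
    },
)
```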

Using Spot Instances:

A Spot Instance is a purchasing option for Amazon Elastic Compute Cloud (EC2) instances that lets you use spare EC2 capacity at a significantly lower price than On-Demand instances. It allows you to take advantage of Amazon’s unused computing capacity, which is offered at a discount when demand for instances is low.

When you launch a Spot Instance, you can specify the maximum hourly price you’re willing to pay for the instance. The actual price you pay is the current Spot price, which fluctuates based on supply and demand. If the Spot price rises above your maximum price, or AWS needs the capacity back, your instance can be interrupted.
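To get a feel for how the Spot price moves, and therefore how big the discount and the volatility are, you can pull recent price history. A small sketch with boto3 follows; the instance type and region are just examples.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Look at how the Spot price for an instance type has moved over the last
# day, to judge the discount and volatility before relying on it.
ec2 = boto3.client("ec2", region_name="us-east-1")

history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],           # example instance type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    MaxResults=10,
)

for point in history["SpotPriceHistory"]:
    print(point["AvailabilityZone"], point["SpotPrice"], point["Timestamp"])
```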

In EKS, you can use Spot Instances to run your Kubernetes worker nodes, which are the instances that run your containerized applications. By using Spot Instances, you can optimize cloud costs, with discounts up to 90% compared to On-Demand instances. However, Spot Instances come with the risk of being interrupted or terminated at any time, which can affect your application's availability and performance.

To mitigate this risk, you can use strategies such as instance diversification, which spreads your worker nodes across different availability zones and instance types to reduce the impact of interruptions. You can also use a combination of Spot Instances and On-Demand or Reserved Instances to ensure that you have enough capacity to meet your application's requirements.
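In EKS, one straightforward way to combine Spot capacity with diversification is a managed node group with capacityType set to SPOT and several similarly sized instance types. A hedged sketch with boto3 follows; the cluster name, node role ARN, and subnet IDs are placeholders.

```python
import boto3

# A managed node group that runs on Spot capacity, diversified across several
# similarly sized instance types so that a single interruption pool does not
# take out all of your nodes. Role ARN and subnets are placeholders.
eks = boto3.client("eks", region_name="us-east-1")

eks.create_nodegroup(
    clusterName="demo-cluster",
    nodegroupName="spot-workers",
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],
    scalingConfig={"minSize": 1, "maxSize": 10, "desiredSize": 3},
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",
)
```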

In summary, using Spot Instances in EKS can help in cloud cost management by taking advantage of discounted EC2 capacity, but it requires careful planning and risk management to ensure that your application remains available and performant.

Using AWS Fargate:

Fargate can help optimize EKS costs in several ways:

  • No need to manage EC2 instances: With Fargate, you don't need to provision, manage, or scale EC2 instances for running your containers. This means that you don't have to worry about the cost and complexity of managing EC2 instances and you can focus on your application instead.

  • Pay for what you use: Fargate pricing is based on the actual vCPU and memory resources used by your containers, billed per second with a one-minute minimum. This means that you only pay for the resources your application needs, and you can optimize cloud costs by right-sizing your containers based on their actual usage.

  • Higher resource utilization: With Fargate, you pay only for the vCPU and memory that each pod requests, rather than for EC2 instances that may sit partially idle. Removing the unused headroom you would otherwise carry on worker nodes means better effective utilization and lower overall costs.

  • Reduced operational overhead: Fargate can help reduce operational overhead by abstracting away the complexity of managing EC2 instances. This means that you don't have to worry about patching, scaling, or monitoring EC2 instances, which can reduce the amount of time and resources required to manage your infrastructure.

Overall, Fargate can help achieve cloud cost savings by eliminating the need to manage EC2 instances, paying for what you use, achieving higher resource utilization, and reducing operational overhead.
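To move part of a workload onto Fargate, you create a Fargate profile that tells EKS which pods, selected by namespace and optionally labels, should be scheduled onto Fargate. A minimal sketch with boto3; the namespace, pod execution role ARN, and subnet IDs are placeholders.

```python
import boto3

# Schedule everything in the "batch" namespace onto Fargate instead of EC2
# worker nodes. The pod execution role and subnets are placeholders.
eks = boto3.client("eks", region_name="us-east-1")

eks.create_fargate_profile(
    fargateProfileName="batch-on-fargate",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-role",
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    selectors=[{"namespace": "batch"}],
)
```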

Minimizing Inter-AZ data transfer charges in EKS:

In Amazon Elastic Kubernetes Service (EKS), data transfer charges refer to the fees associated with transferring data between different AWS services or different Availability Zones (AZs) within the same region.

For example, if your EKS cluster has worker nodes in one Availability Zone (AZ) and your application's database is in another AZ, any data transfer between the worker nodes and the database will incur data transfer charges. Similarly, if your EKS cluster is using an Elastic Load Balancer (ELB) to distribute traffic to your application, any data transfer between the ELB and the worker nodes will also incur data transfer charges.

AWS charges for data transfer based on the amount of data transferred and the destination region or service. The charges vary based on whether the data transfer is between different AWS services or different AZs within the same region.

You can practice cloud cost management by reducing inter-Availability Zone (inter-AZ) data transfer charges with the following best practices:

  • Keep worker nodes and the components they talk to in the same zone: By placing your worker nodes in the same availability zone as the components they communicate with most heavily, you can avoid inter-AZ data transfer charges for that traffic.

  • Use Cluster Autoscaler to minimize the number of worker nodes: Cluster Autoscaler can automatically scale up or down the number of worker nodes based on the application's demand. By minimizing the number of worker nodes, you can reduce the amount of inter-AZ data transfer required for communication between the nodes.

  • Use multi-AZ deployment only when necessary: Multi-AZ deployment provides high availability and fault tolerance by replicating your application across multiple AZs. However, this also increases inter-AZ data transfer charges since data needs to be replicated across multiple AZs. Only use multi-AZ deployment when it's necessary for your application's requirements.

  • Use Amazon S3 or Amazon EFS for data storage: Amazon S3 and Amazon EFS are highly scalable and durable storage services that can be used to store data outside of your EKS cluster. By using these services for data storage, you can introduce cloud cost savings by reducing inter-AZ data transfer charges for accessing and replicating data within your cluster.

  • Use Amazon CloudFront for content delivery: Amazon CloudFront is a content delivery network (CDN) that can distribute your content globally with low latency and high data transfer speeds. By using CloudFront, you can reduce inter-AZ data transfer charges for serving content to your users.

Overall, minimizing inter-AZ data transfer charges in EKS requires careful planning and optimization of your infrastructure. By following these best practices, you can reduce your inter-AZ data transfer charges and optimize your cloud costs.
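Before optimizing, it helps to measure how much inter-AZ traffic is actually costing you. The sketch below groups a month of spend by usage type with the Cost Explorer API and surfaces the regional data transfer line items; the date window is just an example.

```python
import boto3

# Break out one month of spend by usage type and surface the regional
# (inter-AZ) data transfer line items, whose usage type names typically
# contain "DataTransfer-Regional".
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example window
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    if "DataTransfer-Regional" in usage_type:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{usage_type}: ${amount:.2f}")
```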

In conclusion, optimizing EKS costs requires careful planning and a strategic approach. By leveraging best practices such as using spot instances, scaling clusters based on demand, and minimizing data transfer charges, businesses can significantly reduce their EKS costs while maintaining high availability and performance for their applications.

Additionally, using tools like AWS Cost Explorer, Cluster Autoscaler, and third-party tools like Kubecost can help in cloud cost management by monitoring and optimizing their EKS costs in real-time.

Ultimately, the key to optimizing EKS costs is to right-size resources based on actual usage, implement automation wherever possible, and continuously monitor and analyze costs to identify potential cost savings opportunities. By adopting these AWS cost reduction strategies and best practices, businesses can successfully optimize their EKS costs, improve their bottom line, and focus on driving innovation and growth.

When it comes to Cloud Cost Optimization, having an expert by your side is the right way to go about it.
Know how CloudKeeper can help you accelerate your FinOps journey.
Talk to our AWS Certified Cloud Experts.
