Running Apache Kafka® on Spot Instances

Apache Kafka is an open-source distributed event-streaming platform. At adjoe we deploy our Kafka cluster on Kubernetes and use it for event streaming – but also in some cases as an event bus. 

Some of the applications that process requests publish messages to Kafka topics. This means the Kafka brokers have to be reliable, and running a reliable Kafka deployment can be costly. The high cost comes from the way Kafka achieves resiliency: to avoid unplanned downtime, data has to be replicated across brokers.

Here at adjoe we always consider the financial impact of our solutions without sacrificing the reliability of our product. Our solutions need to be scalable, reliable, and cost-effective. An easy way to decrease costs is to use AWS spot instances instead of on-demand instances; spot instances can be up to 90 percent cheaper than on-demand ones.

In this article, I will showcase how we managed to run self-managed Apache Kafka on AWS spot instances to cut costs by around 60 percent.

The Setup before the Switch

  • We usually use a replication factor of 3 for our topics, with minimum in-sync replicas set to 2 (a topic-creation sketch follows this list).
  • We use segmentio/kafka-go as our Go Kafka client, indirectly through justtrackio/gosoline, a framework for building Go applications developed by our sister company justtrack.
  • We publish messages in async mode.
  • Our Kafka and ZooKeeper clusters run on Kubernetes.
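
To give a concrete picture of those first two settings, here is a rough sketch of how a topic with a replication factor of 3 and min.insync.replicas=2 could be created with kafka-go. It is only an illustration: the broker address and topic name are placeholders, the snippet lives inside an error-returning function, and it assumes the net, strconv, and github.com/segmentio/kafka-go imports.

conn, err := kafka.Dial("tcp", "kafka:9092")
if err != nil {
   return err
}
defer conn.Close()

// Topic creation has to be sent to the controller broker.
controller, err := conn.Controller()
if err != nil {
   return err
}
ctrlConn, err := kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
if err != nil {
   return err
}
defer ctrlConn.Close()

return ctrlConn.CreateTopics(kafka.TopicConfig{
   Topic:             "events",
   NumPartitions:     5,
   ReplicationFactor: 3, // three copies of every partition
   ConfigEntries: []kafka.ConfigEntry{
      {ConfigName: "min.insync.replicas", ConfigValue: "2"}, // a write needs 2 in-sync replicas
   },
})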

Which Problems Did We Try to Solve?

When switching from an on-demand deployment to a spot instance deployment, you should expect nodes to go down at any time. When a node that runs a Kafka broker goes down, all the partitions this broker was the leader for become unavailable until a new leader is elected, and this election can sometimes be slow. Some in-flight requests may also exceed the timeout, and some of the error responses are not retryable.

There are use cases where it is acceptable for the request to return an error and then be retried. But in some cases we don’t want to propagate the error back to the user, so we have to guarantee that the messages will eventually be produced. In theory, that means keeping the messages in memory until we can write them to Kafka, but if the leader election takes too long, we risk losing those messages to an OOM kill.

The Idea

When a broker goes down, all the partitions for which that broker is the leader become unavailable until a new leader is elected. Kafka uses a message key to decide which partition a message goes to. There are multiple partitioning strategies, but usually the default partitioner is used. The default partitioner guarantees that all messages with the same partition key are assigned to the same partition.

In our use case, this guarantee is not important, so we asked ourselves: “What would happen if we were to change that behavior, so that when we detect a partition is offline, we try to send the message to a different partition?” And that is what we implemented as an experiment.
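
To illustrate the guarantee we were giving up: with a key-hashing balancer such as kafka-go’s kafka.Hash, the same key always maps to the same partition. A small sketch, assuming a topic with five partitions:

h := &kafka.Hash{}
msg := kafka.Message{Key: []byte("user-42"), Value: []byte("event")}

p1 := h.Balance(msg, 0, 1, 2, 3, 4)
p2 := h.Balance(msg, 0, 1, 2, 3, 4)
// p1 == p2: as long as the partition set stays the same, this key sticks to
// one partition. If that partition's leader is down, every write for the key
// fails until a new leader is elected.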

Without Active Partition Balancer

diagram showing adjoe running Apache Kafka cluster without active partition balancer

With Active Partition Balancer

diagram showing adjoe running Apache Kafka cluster with active partition balancer

How Does It Work?

First we had to get rid of async writing, because we want to detect whether a message we try to write has failed. This async writing functionality is provided by the kafka-go client.
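
As a rough sketch of what synchronous writing looks like with a plain kafka-go Writer (the broker address and topic are placeholders, the context and github.com/segmentio/kafka-go imports are assumed, and the real configuration is of course more involved):

w := &kafka.Writer{
   Addr:  kafka.TCP("kafka:9092"),
   Topic: "events",
   Async: false, // block until the broker responds, so errors surface right here
}

if err := w.WriteMessages(context.Background(), kafka.Message{Value: []byte("payload")}); err != nil {
   // with async writing disabled, we actually see the failure and can react to it
}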

Next we had to implement our own partitioner, one that is aware of errors when we publish a message. kafka-go calls this partitioner a Balancer and provides an interface for it.

type Balancer interface {
   Balance(msg Message, partitions ...int) (partition int)
}

As you can see, this interface takes the message to be produced and the list of available partitions. For example, if your topic has five partitions, the call would look like this:

p := Balance(msg, 0, 1, 2, 3, 4)

If we want to introduce a mechanism that can react when a write request fails, the Balancer should be aware of that. We created a new interface to do this.

type KafkaBalancer interface {
   kafka.Balancer

   OnSuccess(kafka.Message)
   OnError(kafka.Message, error)
}

Now we can notify the Balancer when an error happens. 
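
In practice, this means wrapping the write call and reporting every outcome back to the Balancer. A minimal sketch of what that wiring could look like; the producer struct and its field names are illustrative, not our actual code, and for brevity it ignores the fact that kafka-go can report per-message errors via kafka.WriteErrors:

// Hypothetical wrapper around the kafka-go Writer.
type producer struct {
   writer   *kafka.Writer
   balancer KafkaBalancer
}

func (p *producer) write(ctx context.Context, msgs ...kafka.Message) error {
   err := p.writer.WriteMessages(ctx, msgs...)
   for _, msg := range msgs {
      if err != nil {
         p.balancer.OnError(msg, err) // register the failed attempt
      } else {
         p.balancer.OnSuccess(msg) // reset the circuit breaker for this partition
      }
   }
   return err
}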

Next we created a new Balancer, which we call activePartitionBalancer and which implements the KafkaBalancer interface. This new Balancer maintains a set of circuit breakers, one per topic and partition.

How Does activePartitionBalancer Work?

When a new message is about to be balanced, this is how it works.

diagram showing how activePartitionBalancer works
  • When the write operation fails, the error is passed to the OnError function of the Balancer, which registers the failed attempt.
  • When the write operation succeeds, the message is passed to the OnSuccess function of the Balancer, which resets the partition’s circuit breaker (a simplified sketch follows).
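
Here is a simplified sketch of such a balancer. It is not our production implementation (that is linked below); the circuit-breaker logic, thresholds, and field names are illustrative assumptions, and the snippet assumes the sync, time, and github.com/segmentio/kafka-go imports.

type partitionBreaker struct {
   failures int       // consecutive failed writes to this partition
   openedAt time.Time // when the breaker was opened
}

type activePartitionBalancer struct {
   lck         sync.Mutex
   fallback    kafka.Balancer                       // e.g. &kafka.Hash{}, applied to the healthy partitions
   breakers    map[string]map[int]*partitionBreaker // topic -> partition -> breaker
   maxFailures int                                  // failed writes before a breaker opens
   cooldown    time.Duration                        // how long an open breaker excludes a partition
}

func (b *activePartitionBalancer) Balance(msg kafka.Message, partitions ...int) int {
   b.lck.Lock()
   defer b.lck.Unlock()

   // Keep only partitions whose breaker is closed or whose cooldown has expired.
   healthy := make([]int, 0, len(partitions))
   for _, p := range partitions {
      br := b.breakers[msg.Topic][p]
      if br == nil || br.failures < b.maxFailures || time.Since(br.openedAt) > b.cooldown {
         healthy = append(healthy, p)
      }
   }
   if len(healthy) == 0 {
      healthy = partitions // everything looks broken: fall back to the full set
   }

   return b.fallback.Balance(msg, healthy...)
}

// OnError assumes msg.Partition was set to the partition the write was attempted on.
func (b *activePartitionBalancer) OnError(msg kafka.Message, _ error) {
   b.lck.Lock()
   defer b.lck.Unlock()

   if b.breakers[msg.Topic] == nil {
      b.breakers[msg.Topic] = map[int]*partitionBreaker{}
   }
   br := b.breakers[msg.Topic][msg.Partition]
   if br == nil {
      br = &partitionBreaker{}
      b.breakers[msg.Topic][msg.Partition] = br
   }

   br.failures++
   if br.failures >= b.maxFailures {
      br.openedAt = time.Now() // open the breaker: Balance will now avoid this partition
   }
}

func (b *activePartitionBalancer) OnSuccess(msg kafka.Message) {
   b.lck.Lock()
   defer b.lck.Unlock()

   if br, ok := b.breakers[msg.Topic][msg.Partition]; ok {
      br.failures = 0 // a successful write closes the breaker again
   }
}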

You can find all the implementation details here.

Things We Consider When We Write Cost-Effective Code

  • Try to take advantage of spot instances whenever possible.
  • If you doubt that a service can run on spot instances, you can always run an experiment and evaluate your ideas.
  • Do not settle – keep re-evaluating your solutions.
  • Design your code so that it can withstand unexpected disruptions. See chaos engineering.
