Meet Farhad: Taking the Lead on Kafka and Druid Integrations

At adjoe, the sky is not the limit – there is no limit. If you have an idea that could improve the business or grow the technologies we use to make our lives easier, adjoe will back the idea and assign the resources.

An IT engineering graduate, Cloud Engineering Tech Lead Farhad Farahi has been part of adjoe’s ever-expanding Tech teams almost from the very beginning, joining in 2018. Having previously dabbled in backend development, Farhad now runs his projects by the DevOps standard of “out with the old, in with the new.”

The cloud technologies specialist has spent the last few months taking complete ownership of designing, automating the deployment of, and monitoring both Kafka and Druid on Kubernetes to replace adjoe’s current technology. Substantially cutting infrastructure costs and scaling the technology to handle growing traffic is just one end goal. Switching to the new systems also means Farhad will very soon speed up the dashboards for end users by at least 300 percent.

This or That: Seeking the Right System

Based on adjoe’s data size – streams with millions of items per day and tables with billions of rows – it was up to Farhad to choose the right database and combination of products to replace both Kinesis and Redshift. adjoe has four billion rows in one of its tables, which could grow to between 40 and 400 billion rows when the company’s new rewarded video project goes live in the near future.

After working with data-streaming platforms in production – ClickHouse, Druid, Redpanda, and Kafka – and with the input of adjoe’s Tech team, the DevOps team opted for Druid on Kubernetes, based on its ease of management and deployment and its high performance. Kafka was then the go-to option for handling the massive amount of data adjoe processes every day from millions of users distributed around the globe: the daily clicks, views, installs, and other in-app events occurring in apps with adjoe’s SDK.
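The article doesn’t include adjoe’s actual deployment code, but a rough sketch of what “Druid and Kafka on Kubernetes, managed from Terraform” can look like is shown below. The chart sources, versions, and values are illustrative assumptions – Bitnami’s public Kafka chart and a community Druid operator chart stand in for whatever adjoe really runs.

resource "kubernetes_namespace" "kafka" {
  metadata {
    name = "kafka"
  }
}

# Kafka via a public Helm chart (Bitnami as a stand-in here).
resource "helm_release" "kafka" {
  name       = "kafka"
  namespace  = kubernetes_namespace.kafka.metadata[0].name
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "kafka"

  set {
    name  = "replicaCount" # illustrative broker count
    value = "3"
  }
}

# A Druid operator then reconciles Druid clusters described as custom resources.
resource "helm_release" "druid_operator" {
  name             = "druid-operator"
  namespace        = "druid"
  create_namespace = true
  repository       = "https://charts.datainfra.io" # community chart (assumption)
  chart            = "druid-operator"
}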

Automation and Optimizations on Kubernetes

After Farhad decided to run Druid and Kafka on Kubernetes, he started on the automation and on streaming data to Druid and from Kafka to the data lake – signaling adjoe’s great move away from the more expensive Kinesis.
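To make “streaming data to Druid” concrete: Druid consumes from Kafka through an ingestion supervisor, a JSON spec submitted to its overlord API. A simplified, hypothetical spec – the topic, datasource, and column names here are made up for illustration – could be kept in Terraform so it stays versioned with the rest of the infrastructure:

locals {
  # Hypothetical Druid Kafka-ingestion supervisor spec (not adjoe's schema).
  kafka_supervisor_spec = jsonencode({
    type = "kafka"
    spec = {
      ioConfig = {
        type  = "kafka"
        topic = "sdk-events" # hypothetical topic name
        consumerProperties = {
          "bootstrap.servers" = "kafka.kafka.svc.cluster.local:9092"
        }
      }
      dataSchema = {
        dataSource     = "sdk_events"
        timestampSpec  = { column = "timestamp", format = "iso" }
        dimensionsSpec = { dimensions = ["country", "app_id", "event_type"] }
        granularitySpec = {
          segmentGranularity = "HOUR"
          queryGranularity   = "MINUTE"
        }
      }
      tuningConfig = { type = "kafka" }
    }
  })
}

# The spec is then POSTed to Druid's /druid/indexer/v1/supervisor endpoint.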

Now that the automation process is mostly done, it’s up to Farhad’s DevOps team to make optimizations, such as

  • on Druid – scaling different components of the Druid cluster, changing the heap size of various processes, changing supervisor configurations, etc.
  • on Kafka – tuning different components of the system, including brokers, producers, and consumers.

Whenever the team finds such an optimization, they make it in Terraform and then apply the changes, so everything stays in sync and nobody needs to make manual changes in various dashboards.
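As a hypothetical example of what one of these optimizations looks like in code, a heap-size change becomes a reviewed Terraform diff rather than a manual edit on a live pod (the chart name and values structure below are placeholders):

# Hypothetical: JVM heap tuning for Druid historicals expressed as Helm values
# managed by Terraform; the chart and value keys are illustrative.
resource "helm_release" "druid" {
  name       = "druid"
  namespace  = "druid"
  repository = "https://charts.datainfra.io" # assumption, as in the sketch above
  chart      = "druid"

  values = [yamlencode({
    historical = {
      jvmOptions = "-Xms8g -Xmx8g" # the tuned heap size
    }
  })]
}

Running terraform plan then shows exactly what will change before terraform apply rolls it out – which is what keeps the clusters and the code in sync.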

Creating Kafka- and Zookeeper-Monitoring Dashboards
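Monitoring follows the same everything-as-code approach. Each Grafana dashboard ships as a Kubernetes ConfigMap holding the dashboard JSON; the grafana_dashboard label is the convention that Grafana’s dashboard sidecar (as deployed by the standard Grafana Helm chart) watches for, so new dashboards show up without anyone clicking through the Grafana UI.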

resource "kubernetes_config_map" "kafka_exporter_dashboard" {
  metadata {
    namespace = var.prometheus_namespace
    name      = "kafka-exporter-dashboard"
    labels    = { grafana_dashboard = "1" }
  }

  data = {
    "kafka_exporter_dashboard.json" = templatefile("${path.module}/templates/kafka_exporter_dashboard.json", {
      DS_PROMETHEUS = var.prometheus_datasource_name
    })
  }
  depends_on = [kubernetes_namespace.kafka]
}

resource "kubernetes_config_map" "zookeeper_dashboard" {
  metadata {
    namespace = var.prometheus_namespace
    name      = "zookeeper-dashboard"
    labels    = { grafana_dashboard = "1" }
  }

  data = {
    "zookeeper_dashboard.json" = templatefile("${path.module}/templates/zookeeper_dashboard.json", {
      DS_PROMETHEUS = var.prometheus_datasource_name
    })
  }
  depends_on = [kubernetes_namespace.kafka]
}
[Grafana dashboard screenshots: memory usage, CPU usage, JVM memory used, JVM GC time; available disk space, open file descriptors, JVM GC count, and JVM thread count]

Looking beyond the frontend deployment, Farhad’s DevOps team is now considering connecting the Kafka cluster to the stream-processing tool Apache Flink® to enrich streaming data with data from various database backends before populating the data lake. The Flink cluster would be deployed on the Kubernetes cluster using the official Kubernetes operator – the last piece of the puzzle for Farhad’s project.
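The operator model is declarative: you describe a FlinkDeployment custom resource and the operator manages the job’s lifecycle. Since this piece isn’t built yet, the following is only a minimal sketch of what such a resource could look like from Terraform – the image, jar path, and sizing are placeholders:

# Hypothetical FlinkDeployment (official Flink Kubernetes operator CRD),
# declared via Terraform; job name, jar, and resources are placeholders.
resource "kubernetes_manifest" "flink_enrichment" {
  manifest = {
    apiVersion = "flink.apache.org/v1beta1"
    kind       = "FlinkDeployment"
    metadata = {
      name      = "stream-enrichment" # hypothetical job name
      namespace = "flink"
    }
    spec = {
      image          = "flink:1.17"
      flinkVersion   = "v1_17"
      serviceAccount = "flink"
      jobManager     = { resource = { memory = "2048m", cpu = 1 } }
      taskManager    = { resource = { memory = "4096m", cpu = 2 } }
      job = {
        jarURI      = "local:///opt/flink/usrlib/enrichment.jar" # placeholder
        parallelism = 4
        upgradeMode = "stateless"
      }
    }
  }
}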

Querying Data 3–15× Faster

With Farhad’s integration of the new technologies, query times have improved by a factor of three to fifteen. And now that the backend is live, Farhad is working with adjoe’s frontend developers to deploy the frontend counterpart.

By choosing to integrate Druid and Kafka on Kubernetes – and with this drastically improved query performance – end users will soon have access to vastly richer analysis, as adjoe’s backend developers can use more complex queries: queries that might have taken minutes on the old system will take milliseconds to seconds on the new one. Publishers and partners might soon have access to dashboard graphs covering longer time frames – dashboard users will, for example, be able to see data such as app user distribution per country (in percent) over months or even years.

Overall, the end users of adjoe’s dashboards – including adjoe’s own account managers in the Business teams – will be able to see significantly more quickly how campaigns are performing, how much revenue is being generated, and how an SDK is performing: key data that can ultimately drive business.

Calling the Shots for DevOps at adjoe

With little previous production experience in Kafka or Druid, Farhad readily became the director, driver, and decision-maker – not only for introducing new technologies and improving adjoe’s existing environment through cross-team collaboration with frontend and backend developers, but also for his own career growth.

By diving into risk management, data loss and downtime, and the future of a product – testing other databases, reading relevant documentation, finding workarounds in open source projects on GitHub, and studying source code – Farhad became a self-made specialist in various technologies. This has enabled him, and will continue to enable him, to select the right technologies for adjoe, especially as the DevOps team plans to expand over the next months.

Senior DevOps/DataOps Engineer (f/m/d)

  • adjoe
  • Cloud Engineering
  • Full-time
adjoe is a leading mobile ad platform developing cutting-edge advertising and monetization solutions that take its app partners’ business to the next level. Part of the applike group ecosystem, adjoe is home to an advanced tech stack, powerful financial backing from Bertelsmann, and a highly motivated workforce to be reckoned with.

Meet Your Team: Cloud Engineering

The Cloud Engineering team is the core of adjoe’s tech department. It is responsible for the underlying infrastructure that helps adjoe’s developers to run their software – and the company to grow its business. 

From various AWS services to Kubernetes and Apache foundation open source projects, the team continuously validates new cloud and architecture services to efficiently handle a huge amount of data. Cloud Engineering tackles the challenge of choosing between self-hosted and managed services to reduce its $300K monthly hosting costs, while still ensuring convenience and data security for adjoe’s developers.

Because adjoe needs to ensure high-quality service and minimal downtime to grow its business, Cloud Engineering invests heavily in monitoring and alerting technologies for insights into system health (networking, application logs, cloud service information, hardware, etc.). The cloud engineers also provide working solutions, knowledge, and documentation to the entire community of adjoe developers, giving them the autonomy to work on the infrastructure themselves and ensure the smooth sailing of adjoe’s systems.
What You Will Do
  • You will work together with three other experienced DevOps engineers to reinvent our cloud infrastructure by introducing new technologies and improving the existing environment.
  • You will help transfer our current managed AWS cloud infrastructure to self-hosted and open source technologies: We believe a hybrid combination between managed and self-hosted offers the best cost/efficiency ratio.
  • You will support our developers in building a high-performance backend with Go and help our data scientists achieve better design and performance on data lakes, data pipelines, and OLAP databases.
  • You will collaborate with experts from different technological backgrounds and countries, learn from highly experienced colleagues, and share your knowledge.
  • You will work with our current tech stack: Go, DruidDB, Kafka, DynamoDB, ScyllaDB, RDS, Kubernetes, Terraform, Gitlab, ECS, EMR, Lambda, and complex CI/CD pipelines.
  • You will introduce new technologies, including migrating part of the architecture to our new Kubernetes and Kafka clusters and introducing Apache Spark/Flink and our own hosted object storage.
Who You Are
  • You want to improve scalability and cloud infrastructure, but you are still interested in coding (Golang, Java, Scala)
  • You have a deep understanding of Kafka clusters and Kafka tuning
  • You have a profound understanding of data lakes, OLAP databases (e.g. Druid, ClickHouse), and designing data for performance and productivity
  • You have a thorough understanding of computations/transformations over data streams, such as Flink
  • You have a good understanding of Spark clusters and data pipelines
  • You have a good understanding of Kubernetes
  • You have a good understanding of Terraform
Heard of Our Perks?
  • Tech Package: Create game-changing technologies and work with the newest technologies out there.
  • Work–Life Package: Work remotely for 2 days per week, enjoy flexible working hours and 30 vacation days, work remotely for 3 weeks per year, modern office in the city center, dog-friendly.
  • Relocation Package: Receive visa and legal support, a generous relocation subsidy, and free German classes in the office.
  • Never-Go-Hungry Package: Graze on regular company and team lunches, free breakfasts, and a selection of free snacks and drinks.
  • Health Package: Free in-house gym and biweekly yoga classes.
  • Activity Package: Enjoy a host of team events and hackathons.
  • Career Growth Package: Dedicated growth budget to attend relevant conferences and online seminars of your choosing.
  • Wealth Building: Virtual stock options.


  • Skip writing cover letters. Tell us about your most passionate personal project, your desired salary, and your earliest possible start date. We look forward to your application!

    We welcome applications from people who will contribute to the diversity of our company.

    Conquer Cloud Technologies at adjoe
