Leveraging Code Generation with KSP for Efficient JSON Serialization

In adjoe’s WAVE team, our mission is to empower app publishers with a seamless and efficient ad mediation solution and maximize their monetization potential. A critical aspect of this mission involves ensuring that our SDKs, which affect millions of users around the world, are robust and performant. 

We identified JSON serialization as one of the key areas where we could optimize performance. Instead of relying on third-party libraries in our Android SDK, we decided to develop our own JSON serialization library tailored to our team’s needs.

We’ve integrated Kotlin Symbol Processing (KSP) to streamline and automate the generation of serialization code at compile time.

Why Develop Our Own Library?

As SDK developers, we need fine-grained control over every aspect of our codebase. Relying on third-party libraries can introduce unnecessary complexity and constraints.

By developing our own JSON serialization library, we are able to:

  1. Optimize performance: Tailor the serialization logic to our specific data models, eliminating generic overhead and improving speed.
  2. Reduce dependencies: Avoid potential compatibility issues and security vulnerabilities associated with third-party dependencies.
  3. Customize behavior: Implement custom serialization and deserialization behaviors that are uniquely suited to our SDK’s constantly evolving requirements.

What is Compile-Time Code Generation?

Compile-time code generation involves creating source code during the compilation process, usually through annotation processing. This approach offers several advantages over traditional reflection-based techniques:

  1. Performance: You eliminate the need for runtime reflection, which can be slow and resource-intensive, as generated code is compiled into the final application.
  2. Type safety: You enhance the reliability of the code, since you can catch errors at compile time rather than runtime.
  3. Maintainability: The codebase is cleaner and easier to maintain, as this approach reduces boilerplate code.

Several well-known libraries, such as Room (Android Jetpack), Dagger, and Moshi, use compile-time code generation to provide their functionality.

What is Kotlin Symbol Processing?

Kotlin Symbol Processing (KSP) is a powerful tool developed by Google that allows developers to build lightweight compiler plugins.

KSP provides a simplified API for processing Kotlin programs and generating code. Compared to traditional annotation processors, KSP offers better performance and compatibility with Kotlin’s features. 

Annotation processors that use KSP can, for example, run up to two times faster than with kapt. Additionally, by hiding compiler changes, KSP minimizes maintenance efforts for processors. This makes it a great choice for us. You can find a more detailed comparison with other tools and approaches in KSP’s official documentation.

How Did We Leverage Code Generation?

In our library, Joson, the fundamental operations are JSON serialization and deserialization of structured data types. To avoid runtime reflection, our approach revolves around adapters that handle the conversion between Kotlin objects and their JSON representation. This means that each data type that needs to be serialized or deserialized requires a corresponding adapter class implemented in the source code.

The following code snippet shows a simplified version of the abstraction for our JsonAdapter classes, in which JWriter and JReader are helper classes for writing and reading a JSON-encoded value as a stream of tokens.

abstract class JsonAdapter<T> {
    /** Encodes the given [value] with the given [writer]. */
    abstract fun toJson(writer: JWriter, value: T?)

    /** Decodes a nullable instance of type [T] from the given [reader]. */
    abstract fun fromJson(reader: JReader): T?
}
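
To make the adapter contract more concrete, a hand-written adapter for a hypothetical wrapper type could look like the following sketch. The type and its adapter are illustrative only; the joson.adapter() lookup mirrors what the generated adapters (shown later in this post) do.

// Hand-written adapter sketch for a hypothetical wrapper type (illustrative only).
data class UserId(val raw: String)

class UserIdJsonAdapter(joson: Joson) : JsonAdapter<UserId>() {
    private val stringAdapter: JsonAdapter<String> = joson.adapter(String::class.java)

    override fun toJson(writer: JWriter, value: UserId?) {
        // Delegate to the built-in String adapter for the underlying raw value.
        stringAdapter.toJson(writer, value?.raw)
    }

    override fun fromJson(reader: JReader): UserId? {
        // Read the raw string and wrap it; a null token yields a null UserId.
        return stringAdapter.fromJson(reader)?.let(::UserId)
    }
}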

The library’s core module provides adapters for common Kotlin types and parameterized types – for example, primitives and collections. But what about custom data types?

Instead of writing an adapter class for each and every data model in our main codebase, we use the magical power of code generation and utilize KSP to create adapters.

Therefore, alongside the library’s main module, we have a code generator module that implements com.google.devtools.ksp.processing.SymbolProcessor and com.google.devtools.ksp.processing.SymbolProcessorProvider. The provider is registered as a service and is responsible for operating on predefined annotations and generating the JsonAdapter classes.

With the ksp Gradle plugin added to the Android project, the final usage of the library modules and the dependency graph is as follows:

// Wave `build.gradle` file 
dependencies {
    implementation(project(":joson"))
    ksp(project(":joson-codegen"))
}
[Figure: dependency graph showing how adjoe’s WAVE project uses the Joson library and its KSP code generator]
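
For reference, the ksp Gradle plugin itself is applied in the same module’s plugins block. A minimal sketch, assuming an Android library module, with plugin versions managed in the root build script:

// Wave `build.gradle` file – plugins block (versions omitted; managed at the root project)
plugins {
    id("com.android.library")
    id("org.jetbrains.kotlin.android")
    id("com.google.devtools.ksp")
}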

The following annotation defined in the library’s main module demonstrates how the adapter generation works:

@Target(AnnotationTarget.CLASS)
annotation class JsonSerializable

We then implement the KSP processor for the annotation, which is the entry point of our code generator, like this:

class JsonSerializableSymbolProcessorProvider : SymbolProcessorProvider {
    override fun create(environment: SymbolProcessorEnvironment): SymbolProcessor {
        return JsonSerializableSymbolProcessor(environment.codeGenerator, environment.logger)
    }
}

private class JsonSerializableSymbolProcessor(
    private val codeGenerator: CodeGenerator,
    private val logger: KSPLogger
) : SymbolProcessor {
    override fun process(resolver: Resolver): List<KSAnnotated> {
        for (type in resolver.getSymbolsWithAnnotation(JSON_SERIALIZABLE_NAME)) {
            type as? KSDeclaration ?: continue

            try {
                // Generate adapter class and write it to the [codeGenerator]
            } catch (e: Exception) {
                logger.error("Error preparing ${type.simpleName.asString()}")
            }
        }
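        // Returning an empty list means we do not defer any symbols to a later processing round.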
        return emptyList()
    }

    companion object {
        private val JSON_SERIALIZABLE_NAME = JsonSerializable::class.qualifiedName!!
    }
}

The implementation of the SymbolProcessorProvider class is loaded as a service and is responsible for instantiating the SymbolProcessor implementation. This happens in the SymbolProcessorProvider.create() method, which receives the necessary dependencies – in this case, CodeGenerator and KSPLogger – and passes them to the SymbolProcessor.
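
For this service loading to work, the provider’s fully qualified name has to be listed in a ServiceLoader registration file inside the code generator module, located at src/main/resources/META-INF/services/com.google.devtools.ksp.processing.SymbolProcessorProvider. A sketch of its content (the package prefix is illustrative, not our actual package):

# The single entry is the fully qualified name of the provider (package prefix is illustrative)
io.adjoe.joson.codegen.JsonSerializableSymbolProcessorProvider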

The main logic of each processor happens inside the SymbolProcessor.process() method, which is called for each registered processor at compile time. Inside this method, resolver.getSymbolsWithAnnotation() can be used to get the symbols the processor wants to process, given the fully qualified name of an annotation.

In our processor, for example, we iterate over the classes annotated with @JsonSerializable and generate an adapter for each of them. For more details on implementing a SymbolProcessor, you can read the official KSP documentation.

We implemented the actual code generation using a powerful library called KotlinPoet by Square, which is a Kotlin and Java API for generating .kt source files.
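
The full generator is too long to show here, but the following sketch (not our production code – the helper name and structure are illustrative) outlines the typical KotlinPoet flow: build a TypeSpec for the adapter, wrap it in a FileSpec, and write it through KSP’s CodeGenerator using the kotlinpoet-ksp interop.

import com.google.devtools.ksp.processing.CodeGenerator
import com.google.devtools.ksp.processing.Dependencies
import com.google.devtools.ksp.symbol.KSClassDeclaration
import com.squareup.kotlinpoet.FileSpec
import com.squareup.kotlinpoet.TypeSpec
import com.squareup.kotlinpoet.ksp.toClassName
import com.squareup.kotlinpoet.ksp.writeTo

// Sketch of the KotlinPoet flow; the real generator also emits the constructor,
// the delegate adapter properties, and the full toJson/fromJson bodies.
fun generateAdapterSkeleton(codeGenerator: CodeGenerator, type: KSClassDeclaration) {
    val className = type.toClassName()
    val adapterName = "${className.simpleName}JsonAdapter"

    val adapterType = TypeSpec.classBuilder(adapterName)
        .addKdoc("Generated JSON adapter for [%T].", className)
        .build()

    FileSpec.builder(className.packageName, adapterName)
        .addType(adapterType)
        .build()
        // Passing the originating file (non-aggregating) lets KSP track the output for incremental builds.
        .writeTo(codeGenerator, Dependencies(false, type.containingFile!!))
}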

For example, let’s suppose we have a data class like the following, which requires serialization.

@JsonSerializable
data class AdRequest(
    val id: String,
    val publisherId: String,
    val adType: String,
    val timestamp: Long
)

Simply annotating it with @JsonSerializable will result in our code generator module creating the following adapter for it. This adapter will be accessible through the main module like any other custom adapter:

public class AdRequestJsonAdapter(
    joson: Joson,
) : JsonAdapter<AdRequest>() {
  private val keys: JReader.Keys = JReader.Keys.of("id", "publisherId", "adType", "timestamp")

  private val stringAdapter: JsonAdapter<String> = joson.adapter(String::class.java)

  private val longAdapter: JsonAdapter<Long> = joson.adapter(Long::class.java)

  override fun fromJson(reader: JReader): AdRequest {
    var id: String? = null
    var publisherId: String? = null
    var adType: String? = null
    var timestamp: Long? = null
    reader.beginObject()
    while (reader.hasNext()) {
      when (reader.resolveKey(keys)) {
        0 -> id = stringAdapter.fromJson(reader) ?: throw illegalNullProperty("id", reader)
        1 -> publisherId = stringAdapter.fromJson(reader) ?: throw illegalNullProperty("publisherId", reader)
        2 -> adType = stringAdapter.fromJson(reader) ?: throw illegalNullProperty("adType", reader)
        3 -> timestamp = longAdapter.fromJson(reader) ?: throw illegalNullProperty("timestamp", reader)
      }
    }
    reader.endObject()
    return AdRequest(
        id = id ?: throw illegalNullProperty("id", reader),
        publisherId = publisherId ?: throw illegalNullProperty("publisherId", reader),
        adType = adType ?: throw illegalNullProperty("adType", reader),
        timestamp = timestamp ?: throw illegalNullProperty("timestamp", reader),
    )
  }

  override fun toJson(writer: JWriter, value: AdRequest?) {
    if (value == null) {
      throw NullPointerException("provided value is null on serialization")
    }
    writer.beginObject()
    writer.name("id")
    stringAdapter.toJson(writer, value.id)
    writer.name("publisherId")
    stringAdapter.toJson(writer, value.publisherId)
    writer.name("adType")
    stringAdapter.toJson(writer, value.adType)
    writer.name("timestamp")
    longAdapter.toJson(writer, value.timestamp)
    writer.endObject()
  }
}
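
As a quick usage sketch – how a Joson instance is constructed is not shown in this post, so the first line below is an assumption – the generated adapter is obtained through the same adapter() lookup the generated code itself uses:

// Hypothetical usage sketch: constructing Joson like this is an assumption.
val joson = Joson()

// The lookup mirrors joson.adapter(String::class.java) in the generated adapter above.
val adRequestAdapter: JsonAdapter<AdRequest> = joson.adapter(AdRequest::class.java)

// Serialization and deserialization then delegate to the generated code,
// given a JWriter/JReader backed by the target stream:
// adRequestAdapter.toJson(writer, adRequest)
// val parsed = adRequestAdapter.fromJson(reader)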

What Did We Achieve Through Code Generation?

Integrating Kotlin Symbol Processing (KSP) for compile-time code generation has significantly enhanced the efficiency, reliability, and performance of JSON serialization in WAVE’s SDK.

We have not only ensured type safety but also reduced build and runtime conflicts and security vulnerabilities by minimizing our dependence on third-party libraries.

Crucially, we’ve also minimized runtime overhead and eliminated potential reflection-related issues. For our team, this translates to fewer bugs and a cleaner, more streamlined codebase that’s easier to maintain. Our end users benefit from a more responsive and stable SDK. This enhances the overall user experience and increases trust in our solution.

Extending code generation to other areas – such as generating type-safe APIs, creating database schemas, and streamlining dependency injection – would allow us to eliminate manual boilerplate, enhance modularity, and improve maintainability across our development processes.

Crucially, it would also ensure our solutions remain robust and cutting-edge for our partners.
