
Mobile App Testing: Automation Testing Frameworks for Our WAVE SDK

We’ve built our test automation framework at adjoe on our test app, which tests our WAVE product’s SDK on both platforms – Android and iOS. The framework contains test cases that cover basic scenarios, like testing different ad formats for different adapters. 

Our future goal is to constantly expand our test suite and include even more specific scenarios – if we find them helpful. 

We developed our test automation framework with this concept in mind: to be easily maintainable and adjustable. It’s written in Python using Appium. The design pattern we used is the Page Object Model (POM), one of the most widely recommended architectures for E2E testing frameworks.

Why Have an Automation Testing Framework?

An automation framework for testing is a critical component for any software application, including mobile apps. 

It goes without saying that automated tests run faster and more reliably than manual tests, and they save considerable resources. The benefits they offer in the software development process are numerous. Here are a few:

  • Improved efficiency due to higher coverage. Besides being able to run automation testing as flexibly and frequently as you’d like, you can execute a wide range of test scenarios. These include edge cases and negative testing. Automated tests also make regression testing much faster and more efficient. You can quickly and thoroughly test the entire application after making changes or adding new features to ensure that you haven’t inadvertently broken existing functionality.
  • Better consistency and product stability. Automated tests perform the same actions and checks consistently each time they run. This ensures product stability, even across multiple environments.
  • Reporting and analysis tools to provide detailed information. When we look at test results, we find it easier to identify and diagnose issues.

What is POM?

The design pattern on which our test automation framework is built is called Page Object Model (POM). This pattern is commonly used in test automation. You can use it for both web application testing and mobile app testing.

The core idea of POM is to encapsulate the elements and interactions on a user interface into separate classes or objects. 

Each application page or component is represented by a corresponding Page Object (in our case, practically a Python class). This contains methods to interact with the elements and retrieve information from that page.
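As a minimal sketch of this idea – with a hypothetical page class, a made-up locator ID, and a stubbed driver so the example runs without a device – a Page Object might look like this:

```python
# Hypothetical Page Object sketch; names and locators are illustrative,
# not taken from the real project. The driver is stubbed for this example.
class FakeDriver:
    """Stand-in for an Appium driver, for illustration only."""
    def find_element(self, strategy, locator):
        return f"<element found by {strategy}='{locator}'>"

class AdScreenPage:
    """Page Object for a hypothetical ad screen."""
    SHOW_AD_BUTTON_ID = "show_ad_button"  # assumed locator, not the real one

    def __init__(self, driver):
        self.driver = driver

    def tap_show_ad(self):
        # Interact with an element on this page through the driver.
        return self.driver.find_element("id", self.SHOW_AD_BUTTON_ID)
```

A test case never touches the driver directly; it only calls methods like `tap_show_ad()` on the Page Object.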

screenshot demonstrating how each interface is represented by its own python class

Besides the Page Objects, the other part of our project consists of test cases (each written in one Python class). Test cases contain all the steps for a defined scenario and are the executable files. The test cases use the methods written in the Page Objects to interact with the application and perform all the actions that construct a test. All the tests form the test suite, which is easily expandable.


To make it easier to write test cases, we have a parent class in our project with the necessary steps to choose the device, start the application, execute the test, and then close the application. All the test cases, which are written in their separate classes, must extend this class and provide the code for the method that will carry out all the testing steps and interactions with the app.
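The relationship between the parent class and the test cases follows the template-method idea, which can be sketched like this (method and class names are illustrative, not the real API):

```python
# Sketch of the template-method structure behind a test base class.
class TestBase:
    def run(self):
        self.set_up()            # choose the device and start the application
        try:
            self.execute_test()  # each concrete test case provides this
        finally:
            self.tear_down()     # close the application

    def set_up(self):
        pass  # device selection and app start would go here

    def execute_test(self):
        raise NotImplementedError("test cases must implement execute_test")

    def tear_down(self):
        pass  # app shutdown would go here

class InterstitialAdTest(TestBase):
    """A hypothetical test case extending the base class."""
    def execute_test(self):
        self.result = "interstitial shown and closed"
```

Each new test case only needs to implement `execute_test`; setup and teardown come for free from the parent.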

So, essentially, the project consists of two main parts that interact closely with each other.

In our project, the Page Object part is further divided into Android and iOS sections. This means that for every interface of the application, there is not just one but two Page Objects. One represents the Android activity, and the other the iOS view controller.

Why Use POM?

POM is a highly recommended approach in test automation for various reasons. 

  • More modular and organized structure. Your test automation code represents each application page or component with a separate Page Object class. This modularity makes your code more manageable and efficient.

  • Enhanced reusability. Page Objects can be reused across multiple test scripts, reducing code duplication and saving time. Additionally, POM promotes readability in test scripts as interactions with UI elements are abstracted into meaningful method calls. This makes the tests more understandable and self-explanatory.

  • Greater maintainability. When changes occur in the application’s user interface or functionality, you only need to update the corresponding Page Object class. This minimizes the impact on other test scripts. POM also reduces code redundancy by centralizing UI element locators.

Duck Typing for Easy Integration of Both Platforms

Python does not have interfaces as a language construct, so the classical interface-based polymorphism structure is not directly possible.

To achieve polymorphism, you use a concept called “duck typing.” Duck typing emphasizes the principle: “If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.” It prioritizes code that works based on an object’s behavior.
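As a standalone illustration of duck typing (with made-up classes, not our real Page Objects): two unrelated classes expose the same method name, and the caller neither knows nor cares which one it holds.

```python
# Two unrelated classes with the same method name; no shared interface needed.
class AndroidBackButton:
    def press(self):
        return "pressed via Android back navigation"

class IOSBackButton:
    def press(self):
        return "pressed via iOS swipe-back gesture"

def go_back(button):
    # Works with any object that "quacks" like a back button.
    return button.press()
```

The function `go_back` never checks the platform; the object it receives determines which implementation runs.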

Since we support both platforms in our project, we have many methods that execute the same actions and are, in principle, identical. However, they differ in the elements/locators they use and sometimes even in the steps. 

As a result, we need to implement automation for each app interface twice, once for each platform. Instead of including everything in one Python class with a lot of “if-else” conditions to verify the platform in every action and element location, we decided to use polymorphism to handle the duality. With this structure, we need unification, and since Python does not support interfaces, the duck typing concept comes into play.

With this implementation, we have added a twist to the POM architecture by combining it with polymorphism: instead of one Page Object per interface, as in classic POM, we have two.

Unlike the Page Objects, we still have a single set of test cases that run on both platforms. This means they have the same behavior but use different code underneath. Each test case is still only one Python class (the original structure of POM) and can be executed on one chosen platform at a time. What changes from one run to the other is which Page Object classes are used – the Android ones or the iOS ones.

According to the duck typing principle, the code implementation for a specific action can differ between the two cases, but the methods containing that code have the same name. When that method is called from the test case, either the Android or the iOS version is executed, depending on the type of the object it is called on. When it’s time to switch to another interface, the method performing the switch also initializes the new object for that interface, matching the current platform.

The following example is from the Page Object of the Google Play Store/App Store. This is the method that returns the user to the ad. The first code snippet is from PlayStoreActivity.py, the Page Object for the Google Play Store, and the second from AppStoreViewController.py, the Page Object for the App Store. As you can see, the names of the methods are the same, but the implementation is different.

# PlayStoreActivity.py (Android)
def return_to_ad(self):
   from pageobjects.android.InstallPromptActivity import InstallPromptActivity
   return self.press_back_button(InstallPromptActivity)

# AppStoreViewController.py (iOS)
def return_to_ad(self):
   from pageobjects.ios.InstallPromptViewController import InstallPromptViewController
   return InstallPromptViewController(self.driver)

They both return the corresponding Page Object for the install prompt interface. In the test case, they are called with the same line of code; which method is executed depends on the type of the object it is called on.

install_prompt = play_app_store.return_to_ad()

The first initialization, which also guides the whole test case on the platform, will be executed in the TestBase class, where the platform is verified. The correct object corresponding to the platform is initialized. It’s a chain reaction from there.

if TestCaseContext.is_android():
   self.homescreen = HomeScreenActivity(self.driver)
else:
   self.homescreen = HomeScreenViewController(self.driver)

Introducing the UIElement Class

To be able to execute test cases more smoothly and reliably, we have created a class that represents the objects on the interface (buttons, text views, containers, switches, etc).

When an element is initialized in the framework, a UIElement object is used – instead of directly using the Appium object.

At its core, this class serves as a data holder and performs actions when needed. When an object of this class is initialized, it needs some information about the element on the interface. Unlike Appium’s methods and classes, this class does not search for the element immediately, so it can be initialized even before the element has appeared in the interface.

def __init__(self, page, strategy: AppiumBy, locator: str, element: WebElement = None):
   self.driver = page.driver
   self.page = page
   self.strategy = strategy
   self.locator = locator
   self.element = element

It will search for the element only when it is interacted with. For example, it will only search for a button when that button needs to be clicked.

def tap(self):
   if self.is_enabled():
       Logger.action_step(self.page, "Element with locator " + str(self.strategy) + " : "
                          + self.locator + " tapped.")
       self.__get_element().click()  # perform the actual tap
   else:
       Logger.failed_step(self.page, "Element with locator " + str(self.strategy) + " : "
                          + self.locator + " was not enabled", "Element should be enabled and clickable.")
       raise InvalidElementStateException("Element was not enabled.")

The search and capture are, of course, done via Appium, using the data provided when the object was initialized.

def __get_element(self):
   if self.element is not None:
       return self.element
   return self.driver.find_element(self.strategy, self.locator)

This class also holds methods to perform different checks and actions with the elements. These include checking availability, visibility, existence, clicking the element, entering text, waiting for it to appear or disappear, etc. All these actions can be performed on an element using the UIElement class and not Appium directly.
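Our framework relies on Appium’s own waiting facilities for the waiting methods, but the general polling idea behind "wait for it to appear or disappear" can be sketched in a few self-contained lines:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` (a zero-argument callable) until it returns True
    or the timeout elapses. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

A `wait_for_visible` on a UIElement would then poll a visibility check in exactly this fashion.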

This class also ensures reliability: its action methods usually verify the element’s existence and availability first. If those checks fail, the UIElement class generates logs that describe the state of the app at that specific moment.

Generating Test Results and Reports

After running the test via the pipeline, you receive a generated report written in an .html file. That file contains all the test case results (passed/failed) and the generated logs.

For better and more efficient log generation during the execution of a test case, we have created a specific class in the project. This class generates four types of logs: a passed step, failed step, action performed, or info provided. 

You can use these anywhere in the framework, whether in the PageObjects or test cases. The logs contain information regarding the app’s interface at the moment they were generated, as well as further information regarding how things look or should look.

Every log is also accompanied by a screenshot. This is also saved in a specific folder that you can check at the end of the execution.


Pitfalls and Lessons Learned

One of the main challenges we faced when creating the test automation framework was ensuring that frontend elements have clear, unique, and meaningful locators. To capture an element, Appium uses its ID, class name, XPath, CSS selector, tag, or other attributes. 

The value of the chosen attribute should be included in the code (hardcoded), so you need to inspect and study the frontend structure beforehand.

A setback we faced in our project was that the XPaths were all invalid and could not be used to capture elements, so we resorted to using only IDs. This solution was not without its problems either, as we then needed to make sure that every element had an ID – and a unique one. This became possible after some fixes and updates in the Android and iOS projects.

Another setback was that it was not possible to access all the elements in the interface, especially those inside the ad video or playable. As we could not find a workaround, we decided not to check the contents of these elements – only their presence and visibility.

The last challenge I want to mention revolves around capturing the logs in order to access ad data. We thought that it would be useful to have the ad information – such as session ID, targeting group UUID, bidder, auction ID, etc. – in the test report. 

This was not straightforward, as these logs are not part of the app frontend. However, Appium does allow access to the device’s Logcat, so collecting this data in the background while the test case continues was possible. 
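On Android, that background collection can be sketched as follows. The SDK log tag and the filtering helper are assumptions for illustration; `get_log("logcat")` is part of the Appium Python client, and the stub driver below only makes the example self-contained:

```python
# Sketch of collecting ad data from device logs. The tag "WaveSDK" is a
# hypothetical example; on a real run, `driver` is the Appium driver, whose
# get_log("logcat") returns entries as dicts with a "message" key.
class FakeLogcatDriver:
    """Stand-in driver so the sketch runs without a device."""
    def get_log(self, log_type):
        assert log_type == "logcat"
        return [
            {"message": "WaveSDK: session_id=abc123"},
            {"message": "ActivityManager: unrelated noise"},
            {"message": "WaveSDK: auction_id=xyz789"},
        ]

def collect_sdk_logs(driver, tag="WaveSDK"):
    """Return only the log lines emitted by the SDK."""
    return [entry["message"] for entry in driver.get_log("logcat")
            if tag in entry["message"]]
```

The filtered lines can then be attached to the HTML report alongside the test results.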

This was implemented for Android, but we soon realized that the same approach wouldn’t work for iOS, so we needed to think of a different one. We settled on adding an extra interface to the iOS app that can be opened with the click of a button and displays all the necessary logs.

Advice for Your Automation Testing Framework

With everything in place, including the test automation pipeline of this project, the test automation framework provides a robust solution to enhance the efficiency, consistency, and reliability of our software development process. 

  • The POM design pattern ensures we benefit from a modular and organized structure that promotes code reusability and readability. 
  • The implementation of duck typing ensures easy integration across both platforms. This allows us to seamlessly execute the same test cases with different code implementations.
  • The introduction of the UIElement class adds reliability to our test cases. It provides smoother execution of actions on interface elements. 

As we continue to expand our test suite, the focus remains on maintaining an easily adaptable and maintainable automation framework.
