How Freshworks built its test automation framework

Test automation frameworks provide the essential functionality and templates required to support test scripting. A framework helps eliminate code duplication and keeps the project structure and test scripts maintainable. Typical automation frameworks enforce an implementation pattern such as the Page Object Model (POM), provide libraries (such as Selenium WebDriver) for browser interactions, and enable test developers to build application interaction libraries and test scripts, generate test reports, take screenshots, and write test logs. Organizing all of these fundamental components within the framework makes test scripting, execution, and maintenance easy.

A standardized quality assurance practice eliminates duplication of effort, enables engineers to switch between products easily, and provides a consistent bar for hiring across products. This centralizes hiring, and any engineer hired can be assigned to any product based on demand.

In this blog post, we describe the test engineering strategy at Freshworks and how our constantly evolving automation framework (the foundation) supports that strategy.

What Freshworks does differently, and why:

The Freshworks suite of products includes Freshdesk, Freshservice, Freshsales, Freshteam, Freshchat, Freshcaller, Freshmarketer, and Freshconnect. The challenge for the engineering team was to build a test automation framework that scales our quality engineering practices and teams across all of these products simultaneously. To cater to such a wide range of products, we had to adopt a test automation strategy and framework that differs from industry-standard generic frameworks.

Historically, each product team built its own automation project with different test frameworks and tools for automating system tests. Most teams used Selenium + Ruby for test scripting and RSpec as the test framework. Many teams were building the same test utilities and libraries for browser interactions, test logs, screen capture, file input-output, test data input-output, integration with test-case management tools, and so on. This duplication added overhead for every QA team, because the time a team spends building utilities adds to the total test engineering time. We wanted to reduce that overhead by reusing the libraries built by one team across teams. As a first step toward unifying and standardizing the test strategy across all products, we constituted a central test engineering team.

The team aimed to build a framework that encompasses a foundation and an extension framework. The foundation abstracts all essential test libraries and the extension framework enables a test developer to extend the foundation with application-specific interaction libraries and test scripts.

The foundation 

The core entities and capabilities of the test automation foundation framework (WebFrame) are:

  • Page Object Model (POM) template
  • Capabilities for test-case management integration
  • Scaffolds for test scripts generation
  • Libraries for browser interaction
  • Libraries for web services interaction
  • Capabilities for abstraction of test configuration, test report and test log generation, screen capture, and parallel test execution 

The foundation is built as a Ruby gem that, when installed, provides a set of functionalities accessible through the CLI. The test execution platform is built in line with the foundation so that test runs are also standardized and the execution environment is stable. Standardizing the test environment minimizes flakiness in test execution caused by a volatile environment.
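
For example, assuming the gem is published to an internal gem server (the source URL below is a placeholder, not a real endpoint), pulling the foundation into an automation project is a short Gemfile entry, after which the webframe CLI becomes available:

# Gemfile (illustrative)
source 'https://rubygems.org'

source 'https://gems.freshworks.internal' do   # hypothetical internal gem server
  gem 'webframe'                               # the foundation gem exposing the webframe CLI
end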

Page Object Model template 

The automation project structure and file hierarchy differed across the automation projects of different products. We wanted to standardize and unify the project structure across all automation projects to minimize the learning curve and context-switching time for engineers who work on multiple automation projects. After the WebFrame gem is installed, the init functionality available through the CLI can be used to create the automation project template with sample files for a page, an action library, a test script, and a sample test configuration. This helps new teams bootstrap a new project without having to spend time designing the project structure and library hierarchies.

 

$ webframe init TestProject 
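
The generated layout looks roughly like the following (an illustrative sketch; the directory and file names in the actual template may differ):

TestProject/
  config/
    default.yml         # framework defaults
    staging.yml         # environment-specific overrides
  pages/
    sample_page.rb      # sample page object
    sample_locators.rb  # locators for the sample page
  actions/
    sample_actions.rb   # sample page action library
  spec/
    sample_spec.rb      # sample test script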

Convention over Configuration 

The primary problem we want to solve by enforcing conventions through the framework is minimizing the number of decisions and compatibility checks a test developer has to make when adding dependencies (Ruby gems).

Project dependencies such as Ruby, Selenium, and other utility gems are all configured within WebFrame, which helps us keep the dependencies and their versions under control. The conventions eliminate differences between the test development and test execution environments of different products. However, the configurations are extensible, and an automation project can override the default configuration if there are product-specific requirements.

WebFrame also enforces conventions for defining page object files, test scripts, and test suites by providing scaffolds (code generators). We cover these later in the post.

Capabilities for test-case management tool integration

We use TestRail for test authoring. A test developer first writes the test procedures (prerequisites, steps, and expected behavior) for a feature in TestRail and groups them under a 'Run'. WebFrame includes capabilities for integrating with the test-case management system, which can be enabled through configuration when required. The integration enables the framework to write test results back to the test-case management system and to read test descriptions to generate test script stubs through the scaffolds.
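
For illustration, enabling the integration might look like the following configuration entry (the key names here are hypothetical, not WebFrame's exact schema; credentials would come from the environment rather than the file):

# default.yml (illustrative)
testrail:
  enabled: true
  url: https://example.testrail.io   # placeholder instance URL
  project_id: 12                     # placeholder project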

Scaffolds for script generation

The scaffolds provided in WebFrame help create files and make the necessary associations as part of boilerplate code generation.

For example, to create a page object file and a page action library, you can run the $ webframe add page command and enter the page name (for example, sample) and location when prompted. This creates a page file and a locator file under the specified location. The content of the sample files provides the necessary associations between the two files.
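
A sketch of what the generated pair might look like (the class, module, and locator names here are illustrative, not the actual template content; @driver is assumed to be a Selenium WebDriver instance supplied by the framework):

# sample_locators.rb (illustrative)
module SampleLocators
  EMAIL_FIELD  = { css: 'input#email' }.freeze
  LOGIN_BUTTON = { css: 'button#login' }.freeze
end

# sample_page.rb (illustrative)
class SamplePage
  include SampleLocators   # the association the scaffold wires up

  def initialize(driver)
    @driver = driver
  end

  def login(email)
    @driver.find_element(EMAIL_FIELD).send_keys(email)
    @driver.find_element(LOGIN_BUTTON).click
  end
end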

To create a test suite using the scaffolder, you can run the $ webframe add spec command and, when prompted, provide the specification details such as the feature name, the location, and optionally the Test IDs or Run ID from the test-case management tool.

This creates a sample test specification in the specified location for the specified feature. If you do not enter a value for the Test IDs or Run ID, a bare skeleton spec is created, along the lines of the sketch below.
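
A minimal RSpec skeleton of the kind the scaffolder might produce (illustrative; the actual generated content differs):

# sample_feature_spec.rb (illustrative skeleton)
require 'spec_helper'   # project-level RSpec setup provided by the framework

describe 'SampleFeature' do
  before(:all) do
    # browser/session setup hooks provided by the framework
  end

  # An example without a body is reported by RSpec as pending,
  # ready to be filled in with page action library calls.
  it 'verifies the first scenario'

  after(:all) do
    # teardown and reporting hooks
  end
end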

If you enter a value for the Test IDs or Run ID, the scaffolder fetches the details of the specified test cases from the test-case management tool and generates stubs for every test case in the specified run, under the spec folder. A test developer can then fill the stubs with methods from the application interaction library to complete the spec.

Libraries for browser interaction

WebFrame provides a wrapper around the native interaction library of Selenium WebDriver to enhance its behavior and safely handle user interface interactions in the browser.

For example, if a click operation from the native Selenium library encounters an exception, the execution halts. The wrapper handles a click safely: it verifies the presence of the element to be clicked, handles the case where the element is not found, and handles any exception that arises while performing the action on the user interface.

A simplified sketch of such a safe click wrapper is shown below (illustrative only; the actual WebFrame implementation is more elaborate):
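
In this sketch, the method name, wait strategy, and logger are assumptions rather than WebFrame's exact API; @driver is assumed to be a Selenium WebDriver instance held by the including class.

# Illustrative safe click wrapper around Selenium WebDriver
require 'logger'
require 'selenium-webdriver'

module SafeInteractions
  LOGGER = Logger.new($stdout)   # stand-in for the framework's real-time logger

  def safe_click(locator, timeout: 10)
    wait = Selenium::WebDriver::Wait.new(timeout: timeout)
    element = wait.until { @driver.find_element(locator) }   # verify the element is present
    wait.until { element.displayed? && element.enabled? }    # verify it can be clicked
    element.click
    LOGGER.info("Clicked #{locator}")
    true
  rescue Selenium::WebDriver::Error::NoSuchElementError,
         Selenium::WebDriver::Error::ElementClickInterceptedError,
         Selenium::WebDriver::Error::TimeoutError => e
    LOGGER.error("Click failed for #{locator}: #{e.class} - #{e.message}")
    false
  end
end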

Test configurations

WebFrame provides capabilities to configure tests through a framework configuration file (default.yml) and an execution specification file (for example, staging.yml) that extends the default configuration. This keeps the test configuration mechanism within the foundation, so the same mechanism is used across products.
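
A sketch of how the two files might relate (the key names, and the extends mechanism shown here, are hypothetical rather than WebFrame's actual schema):

# default.yml (framework defaults, illustrative)
browser: chrome
timeout: 10
report: allure

# staging.yml (execution specification, illustrative)
extends: default.yml
base_url: https://staging.example.com
browser: firefox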

To run the test suite based on the specified configuration, you can use the $ webframe staging.yml command.

Parallel test executions

Parallel test executions are achieved using the parallel_tests gem. The WebFrame CLI abstracts the native functionalities provided by parallel_tests. 

For example, to run a test suite in two processes, you can run the $ webframe staging.yml -n 2 command.

Real-time logs

The logger within WebFrame logs the details of every action as and when it is performed. The logs are captured in a log file and fed to a log analyzer (Kibana) in real time; this enables a test engineer to act on failing test cases without waiting for the entire run to complete.

Test reports

WebFrame provides the capability to generate test execution reports with detailed insights about the test runs. We use Allure Report to visualize the reports. We also use the test-case management integration to push test results to the test-case management tool, where the results are used to analyze high-level details such as the number of test cases passing, failing, or blocked, and the runs that are complete.

The test execution platform

The test environment, comprising Ruby, WebFrame, and browser drivers, is packaged inside a Docker container and instantiated when a test run is requested. The environment runs on AWS with auto-scaling enabled; the containers required to run a test suite are spawned on demand and discarded after the run is complete.
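
A minimal sketch of what such a container image might look like (the base image, browser packages, and entrypoint are assumptions, not the exact production setup):

# Dockerfile (illustrative)
FROM ruby:3.1

# Browser and driver for headless UI tests
RUN apt-get update && apt-get install -y chromium chromium-driver

WORKDIR /suite
COPY Gemfile Gemfile.lock ./
RUN bundle install          # installs webframe and the other pinned gems

COPY . .
ENTRYPOINT ["bundle", "exec", "webframe"]
CMD ["staging.yml"]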

Synopsis

The reusable foundation framework has helped us minimize redundant effort, and engineers across all products are collectively involved in its evolution. WebFrame is developed and sustained on an open-source model, to which the different in-house product teams contribute whenever they encounter a limitation. The application-specific library (the UI action library) is also thriving; at some point we will have a library covering all product functionality, and the product teams will be able to enhance the scaffolders to generate not just the stubs for scripts but the scripts themselves, by mapping a keyword in the test procedure to the corresponding method in the action library.