GitLab CI/CD Parallel Test Automation Jobs

By Dennis Gurock · 15 min read

Configuring your GitLab CI/CD pipeline to run your automated tests in parallel jobs is a great way to improve your build times and provide faster feedback to your dev & QA teams. It can be frustrating to wait a long time for your tests to complete when new code is committed to your project repository. Not only does running your test jobs in parallel reduce wait times, it also enables you to release new updates to production faster without reducing the number of tests you run.

If you haven't set up GitLab to run your automated tests yet, you can also read our introduction article on GitLab CI/CD test automation first.

Fortunately GitLab makes it very easy to set up parallel execution of your automated tests. You just need to understand a few basic concepts and learn a couple of configuration options to implement this. In this guide we will go through all these details so you can speed up your test automation runs, as well as report your test results to test management. Let's get started!

GitLab CI/CD Parallel Testing Workflow

To run our tests in parallel as part of our GitLab CI/CD pipeline, we need to tell GitLab to start multiple test jobs at the same time. In each of the test jobs we then run a different subset of our test suite.

For our example project we start by adding separate build, test and deploy jobs. For our purposes we wouldn't really need the build and deploy steps, as we will focus on the testing tasks here; but we want to create a full example so it's easier for you to implement a complete workflow based on this article.

Our initial pipeline will start executing the build job, followed by multiple parallel test jobs and then finally run deploy if all tests passed (GitLab will stop the pipeline if any of the test jobs fail):

[Image: testing workflow with a build job, parallel test jobs and a deploy job]

To configure the execution order and dependencies of our jobs, we start by defining the stages in our GitLab pipeline configuration, namely build, test and deploy. We also define jobs with the same names and link them to the relevant stages.

This tells GitLab to execute all the jobs in our build stage first, then follow with our test stage jobs and end with any jobs in the deploy stage. If any of the jobs fail, GitLab will stop executing the pipeline and will not run any subsequent jobs.

GitLab also makes it easy to run multiple instances of the same job in parallel. We can simply specify the parallel option for our test job and GitLab will start four instances of this job at the same time.

# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building .."

test:
  stage: test
  parallel: 4
  script:
    - echo "Testing .."

deploy:
  stage: deploy
  script:
    - echo "Deploying .."

Test Automation Suite Example Project

The test automation suite we will use as an example for this article will be based on our previous example. We will just extend this example to use multiple separate test files as this makes it easier to run just a subset of our tests in each parallel job instance. Here is the basic structure of this project's main directory:

.gitlab-ci.yml

package-lock.json
package.json
split.js

tests/test1.js
tests/test2.js
tests/test3.js
tests/test4.js
tests/test5.js
tests/test6.js
tests/test7.js
tests/test8.js

In GitLab we create a new project called example-gitlab-parallel and commit our initial files. Whenever we push new code to the repository, GitLab CI/CD automatically picks up the new code and CI config and runs our pipeline. You can also visit the article's full repository in GitLab to review all project files.

We are using the JavaScript Mocha/Chai testing framework for this example project, as it's easy to set up and use. Our pipeline will execute with the official node (Node.js JavaScript runtime) container. But any other platform or framework will also work with the examples we provide in this article. You would just use the relevant Docker container for your platform and use your favorite testing framework in your pipeline.

For our example project, each of our test files looks similar to the following code example, consisting of a list of test cases that all pass:

// tests/test1.js
const chai = require('chai');
const assert = chai.assert;

describe('files', function () {
    describe('export', function () {
        it('should export pdf', function () {
            assert.isTrue(true);
        });

        it('should export html', function () {
            assert.isTrue(true);
        });

        it('should export yml', function () {
            assert.isTrue(true);
        });

        it('should export text', function () {
            assert.isTrue(true);
        });
    });
});

Parallel Test Execution With GitLab CI/CD

Let's now look at a complete example of how to run our tests in parallel with GitLab. We already told GitLab to start multiple parallel instances of our test job by specifying the parallel option.

But if we now just executed our Mocha test suite as in our previous GitLab test automation article, it would always run the full test suite in each parallel job instance. Instead we need to run a different subset of our tests in each instance and also make sure that all tests are executed across the instances.

So how do we know which subset to run inside each test job? GitLab makes two environment variables available during our job runs to help with this, namely CI_NODE_TOTAL (the total number of parallel jobs) and CI_NODE_INDEX (the 1-based index of the current job). We can use these variables to find all test files and then select just the subset of tests to run in the current job (based on the current index). We just need to make sure that all tests are always executed across the jobs, regardless of the number of jobs we spawn.

We wrote a little script (split.js) for this example project to implement this. The script uses a simple approach: it just selects the same number of files (but different files) in each instance. We could also adjust the script to select files based on file size, the number of test cases or even previous test times if we store these. But for our example project this simple approach works well.
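The following is a minimal sketch of what such a split script could look like, assuming the test files live in the tests/ directory shown above (the actual split.js in the example repository may differ in details):

// split.js (sketch) - select this job's share of the test files
const fs = require('fs');

// GitLab sets these variables in parallel jobs; CI_NODE_INDEX is 1-based.
// Default to a single job when running outside of CI.
const total = parseInt(process.env.CI_NODE_TOTAL || '1', 10);
const index = parseInt(process.env.CI_NODE_INDEX || '1', 10);

// Collect all test files in a stable order so every job sees the same list
const files = fs.readdirSync('tests')
    .filter((name) => name.endsWith('.js'))
    .sort()
    .map((name) => 'tests/' + name);

// Each job takes every total-th file, offset by its own index
const subset = files.filter((_, i) => i % total === index - 1);

// Print the selected files so they can be passed to the test runner,
// e.g. node split.js | xargs npm run mocha
console.log(subset.join(' '));

With the split script in place, here's the full pipeline configuration to run our test suite in parallel: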

# .gitlab-ci.yml
default:
  image: node:19
  cache:
    - key:
        files:
          - package-lock.json
      paths:
        - .npm/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building .."

test:
  stage: test
  parallel: 4
  script:
    - npm ci --cache .npm --prefer-offline
    - node split.js | xargs npm run mocha

deploy:
  stage: deploy
  script:
    - echo "Deploying .."

Once we commit our new configuration to GitLab, it will start a new pipeline run and execute our test job four times in parallel, each executing a subset of our test suite with our split.js script:


Reporting Results to Test Management

Running our automated tests with GitLab is a great start, but we also want to make it easy to collect, review, report and share our test results. For this we are going to submit our test results to a test management tool; Testmo in our case. Submitting and reporting our results enables us to track and manage our tests, link new issues found during testing (e.g. to GitLab Issues or Jira) and make all results accessible to the entire team.

As we are running our tests in parallel with separate jobs in GitLab, it would be good to capture the tests in our testing tool separately as well. Fortunately Testmo has full support for this, as we can submit the results for each job as a separate thread inside a test run. This way we can either see the entire run results at once (e.g. to just focus on the failed tests across all jobs), or we can review a single test job & thread. This also allows us to capture the console output, exit code and measure test times for each job separately, as they are stored as separate threads.

This screenshot shows what a test automation run with multiple testing jobs & threads looks like in Testmo (see the section with the separate threads at the bottom of the screenshot):

[Screenshot: automation run view in Testmo with separate threads for each parallel job]

To submit our test results to Testmo from our CI pipeline, we can use the testmo command line tool. We also looked at a more basic example (without multiple threads) in our previous GitLab CI/CD test automation article. The testmo command line tool is distributed as an NPM package, so it only requires a single command to install (even if you don't use a Node-based Docker container, NPM is usually already installed or very easy to add).

We could install the testmo tool at the beginning of each of our jobs. But because we will use it in multiple jobs, including our parallel test jobs, it's easier to add the package to our package.json config, so it's automatically installed with our npm ci calls. We add the package by running the following command inside the local development container and then commit the updated files to the repository:

# Run this inside our dev container and then 
# commit package.json & package-lock.json to the repository
$ npm install --save-dev @testmo/testmo-cli

In our previous article we just needed a single call to the testmo automation:run:submit command to create a new test run, submit all test results and then mark the test run as completed, all in one step. This time though we want to submit the results of multiple parallel test jobs as separate threads, and then mark the run as completed after all tests have been executed and reported.

So we are going to extend our GitLab CI/CD pipeline by adding additional test-setup and test-complete jobs. In the setup job we create a new Testmo run; we then pass the new test run ID to each parallel test instance so it can submit its results; and we finally mark the run as completed in test-complete.


When we create the new test automation run in Testmo in our test-setup job (submitting basic information about the run such as its name and source), we receive the ID of the newly created run. We need to pass this ID to the following jobs, as we need to reference it when creating the threads and submitting the results. GitLab CI/CD supports this by writing variables to an .env file and passing it to subsequent jobs via the artifacts & dotenv settings. It's just important that any subsequent jobs that need access to the test run ID variable reference the test-setup job as a dependency.

Also remember how we called our split.js script above, followed by the call to our Mocha test suite? To make things a bit simpler and cleaner, we are moving this line to a script alias in our package.json file. This alias will also generate and output our test results as JUnit XML files, which we need so we can report the results to Testmo. You can see the full package.json config in this article's GitLab repository.
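For illustration, the relevant scripts section could look something like this sketch (the reporter and its options here are assumptions; mocha-junit-reporter is a commonly used package for writing Mocha results as JUnit XML files):

// package.json (sketch of the scripts section, not the full file)
{
  "scripts": {
    "mocha": "mocha",
    "mocha-junit-parallel": "node split.js | xargs mocha --reporter mocha-junit-reporter --reporter-options mochaFile=results/results.xml"
  }
}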

All we need to do now is change our test job to call testmo automation:run:submit-thread and pass our new mocha-junit-parallel command to it as the last parameter. The testmo command will then launch our (split) Mocha tests, enabling it to capture the console output, measure the test times and record the exit code. All this is automatically submitted to the thread in Testmo together with our test results.

# .gitlab-ci.yml

# [..]

test-setup:
  stage: test-setup
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:create
        --instance "$TESTMO_URL"
        --project-id 1
        --name "Parallel mocha test run"
        --source "unit-tests" > testmo-run-id.txt
    - echo "TESTMO_RUN_ID=$(cat testmo-run-id.txt)" > testmo.env
  artifacts:
    reports:
      dotenv: testmo.env

test:
  parallel: 4
  stage: test
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:submit-thread
        --instance "$TESTMO_URL"
        --run-id "$TESTMO_RUN_ID"
        --results results/*.xml
        -- npm run mocha-junit-parallel # Note space after --
  dependencies:
    - test-setup

Finally we will mark the test run as completed in our test-complete job after all our testing jobs have completed. We will again use our previously created ID to reference the run in Testmo. Testmo will then mark the run as completed, and flag it as successful or failed based on the test results.

In case there's a problem with the pipeline execution and we didn't mark the run as completed, Testmo will still mark the run as completed eventually by itself.

test-complete:
  stage: test-complete
  script:
    - npm ci --cache .npm --prefer-offline
    - npx testmo automation:run:complete
        --instance "$TESTMO_URL"
        --run-id "$TESTMO_RUN_ID"
  dependencies:
    - test-setup
  when: always

What about failing tests?

Until now, all our tests just passed, so we didn't have to worry about failing tests. But what happens if a test fails? Will this still get reported to Testmo? And what about our deploy job? You can easily test this by simply failing one of our tests, e.g. by throwing an error with throw new Error('This test failed');.
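For example, temporarily changing one of the test cases in tests/test1.js like this will make its job fail (remember to revert the change afterwards):

// tests/test1.js - temporarily fail one test to observe the pipeline behavior
it('should export pdf', function () {
    throw new Error('This test failed');
});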

The way we configured our pipeline, everything works as expected. The call to testmo to run our tests passes through the exit code of our Mocha run. So if the Mocha run failed, we pass this through to GitLab and fail this test job instance.

We also tell GitLab to always run our test-complete job after our tests, even if one of the test jobs failed. We do this by specifying the when: always option for our test-complete job. So we can mark the test run as completed even after test failures.

For our deploy job, we changed its configuration to depend on both the test and test-complete jobs, but without specifying the always condition. It will therefore run after these jobs have completed, but only if they were successful. So if a test failed, or if we couldn't mark the run as completed for some reason, the deployment job is skipped, just as it should be.
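One way to express this dependency is with GitLab's needs keyword, as in the following sketch (the example repository might instead rely on plain stage ordering, which behaves the same way here):

# .gitlab-ci.yml - possible deploy job configuration (sketch)
deploy:
  stage: deploy
  script:
    - echo "Deploying .."
  needs:
    - test
    - test-complete

Here's what the test results of a test run look like after they are submitted to Testmo: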

[Screenshot: test automation run results in Testmo]

Tracking Test Suites, Threads & Runs

Now that we submit the test run and its results, we can easily track the tests, review failures, document issues, identify flaky and slow tests and use these details to improve our test suites over time. This also allows us to make the test results available to the entire team and link the runs to our projects and milestones.

The way we have set up our CI pipeline, we can also scale the test execution as our test suite grows. We just increase the number of parallel testing jobs by adjusting the parallel setting in our pipeline configuration; nothing else has to change.
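For example, scaling from four to eight parallel job instances is a one-line change (the value here is just illustrative):

# .gitlab-ci.yml - scale the test job by adjusting the parallel setting
test:
  stage: test
  parallel: 8 # previously 4; script and dependencies stay the same
  # [..]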


Another advantage of submitting and storing our test automation results in our testing tool is that we can link and manage our automated tests together with other testing efforts, such as manual test case management and exploratory testing. In our experience, making the test automation results available to the entire team also helps increase awareness of test performance and build times, giving the dev team a chance to improve these metrics.

By following the approach outlined in this article to implement parallel testing jobs with GitLab, you can build a CI pipeline that is easy to maintain and easy to scale, so make sure to try it for your projects!

PS: We regularly publish original software testing & QA research, including free guides, reports and news. To receive our next postings, you can subscribe to updates. You can also follow us on Twitter and LinkedIn.
