Running your automated tests as part of your GitHub Actions CI workflow is a great way to automatically test new versions of your code as updates get pushed to your repository.
Whether you include your tests directly in your main build/deploy workflow, or have your QA/testing team maintain a separate repository just for testing, using GitHub Actions is a great investment to improve your test execution.
In this guide we will look at all the details of setting up a new test automation workflow with GitHub Actions. We will also look at submitting and reporting our test results to a testing tool. You just need some basic understanding of how to use Git and GitHub, that's it, so let's go!
Initial Repository and GitHub Actions
We will start by creating a new empty GitHub repository. For this article we are creating a repository called example-github-automation, but you can name it anything you like. You can also find all files for this project in this article's repository on GitHub. If you are new to Git or GitHub, simply follow these instructions on creating a new repository. It's usually best to get familiar with and start using the git command line tool from the beginning.
GitHub Actions is automatically enabled for new repositories, and all you need to do to start your first CI run is add a GitHub Actions workflow configuration file. GitHub Actions looks for any YAML file in the .github/workflows directory inside your repository. We will start by adding a basic workflow file:
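# .github/workflows/test.yml
name: Test
on: [push, workflow_dispatch]
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - run: echo "Hello world"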
There are a couple of configuration options inside this workflow file that are common to most GitHub Actions workflows. We start by naming the workflow so it's easier to identify in GitHub's web app. We then tell GitHub when to run this workflow with the on option. In our case we tell GitHub to run this workflow whenever we push new code to the repository. The workflow_dispatch event also allows us to easily trigger this workflow via GitHub's UI or API if we want to use this in the future.
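As a side note, the on option also accepts a more detailed form. For example, if you only wanted to run the workflow for pushes to a specific branch, a sketch of the trigger section could look like this (we stick with the simple list form in this article):

# Optional variation: only trigger for pushes to the main branch
on:
  push:
    branches: [main]
  workflow_dispatch: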
Under the jobs section we can list one or more jobs. For our article we will use a single test job that is automatically run as part of our workflow. We then tell GitHub that we would like to run the steps of our testing job in an Ubuntu Linux virtual machine. GitHub Actions supports different operating systems and versions for the runner. We will just run our steps directly in the main virtual machine, but we could alternatively run all or some of our steps in a Docker container.
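If you wanted to go the container route instead, a minimal sketch of a containerized job could look like the following; we won't use this in this article, and the node:19 image is just an example:

# Optional sketch: run the job's steps inside a Docker container
jobs:
  test:
    runs-on: ubuntu-latest
    container: node:19
    steps:
      - run: node --version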
Once you add and push the new configuration to your repository, GitHub Actions will start the workflow and execute our test job, resulting in our Hello world step being executed:
Local Development Environment
Your workflow will run on GitHub Actions' servers inside a virtual machine. But we usually still want to execute commands locally, e.g. to use the NPM package manager to install packages, or to run our tests during development to see if they pass. You could do all this on your computer directly, but it can be useful to use a virtual machine or container as well. This way you can easily try new packages, install dependencies or re-use containers without having to install anything directly on your machine.
We could use a full virtual machine similar to what GitHub Actions uses, but we like to use Docker here to make it easier to use and run existing containers. The generic virtual machine GitHub Actions provides has many packages and apps pre-installed. For Docker containers you usually want to use smaller and more targeted images.
As our article's example project will use Node.js to run JavaScript tests, we will just use the official node Docker image with the same version we plan to use with GitHub Actions. We will prepare a Docker Compose file in our project directory like this:
# dev/docker-compose.yml
version: '3'
services:
  node:
    image: node:19
    volumes:
      - ./../:/project
    working_dir: /project
We can easily start and use the node Docker container on our machine by calling docker compose run (from the dev directory that contains the Compose file) and telling it to launch a new shell inside the container. We are then able to run any commands and execute our tests inside the container without affecting our main machine:
$ docker compose run node bash
Creating dev_node_run ... done
root@d433d79213d7:/project$ # in the container
Setting Up Test Automation
Let's take a look at our example test automation project next. You can use pretty much any test automation tool, framework and platform with GitHub Actions using the approach we describe here. You can even run code on Windows Server or macOS with GitHub Actions. For our example project we want to use something simple that is easy to set up, so we will use a JavaScript (Node.js) based framework, in our case Mocha/Chai. But if your team uses a different platform, such as Java, Ruby, Python, PHP or .NET, you would just use a different framework (and development container).
Node.js is readily available in our GitHub Actions virtual machine and it's already set up in our local development Docker container, so we will just use npm to install our remaining package dependencies (i.e. the testing framework we plan to use). Inside our development container we install the required packages:
# Run this inside your container
$ npm install --save-dev mocha chai mocha-junit-reporter
This creates the package.json and package-lock.json files in our project directory with our dependencies (make sure to also commit these files to your repository). When we check out the code of our repository on another machine or in GitHub Actions, we can easily install all packages based on these configuration files.
We also want to make it a bit easier to run our tests. For this we are adding a few script aliases to our package.json file under a new scripts section:
// package.json
{
  "scripts": {
    "mocha": "npx mocha",
    "mocha-junit": "npx mocha --reporter node_modules/mocha-junit-reporter --reporter-options jenkinsMode=1,outputs=1,mochaFile=results/mocha-test-results.xml"
  },
  "devDependencies": {
    "chai": "^4.3.7",
    "mocha": "^10.2.0",
    "mocha-junit-reporter": "^2.2.0"
  }
}
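With these aliases in place, npm run mocha runs the test suite with regular console output, while npm run mocha-junit is the variant we will use later for reporting; based on the reporter options above it should write an XML report to results/mocha-test-results.xml:

# Run these inside your container
$ npm run mocha          # human-readable console output
$ npm run mocha-junit    # writes results/mocha-test-results.xml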
We also need a test suite with tests to run. Mocha/Chai makes it easy to create a basic test suite. Simply create a new test.js file and add some tests like below. You can find the full file in this article's GitHub repository.
Note that we are adding one test that fails with a random chance of 50%. This way you can try both full passes and failures in GitHub Actions to see what both scenarios look like.
// test.js
const chai = require('chai');
const assert = chai.assert;

describe('files', function () {
  describe('export', function () {
    it('should export pdf', function () {
      assert.isTrue(true);
    });
    it('should export html', function () {
      assert.isTrue(true);
    });
    it('should export yml', function () {
      assert.isTrue(true);
    });
    it('should export text', function () {
      // Fail in 50% of cases
      if (Math.random() < 0.5) {
        throw new Error('An exception occurred');
      } else {
        assert.isTrue(true);
      }
    });
  });
  // [..]
});
We can try our tests by running them in our container. You can use our new script alias and run the tests locally with npm run mocha in our project directory. The output should look like this (with the flaky test sometimes passing and sometimes failing if you run the command multiple times):
$ npm run mocha

  files
    export
      ✔ should export pdf
      ✔ should export html
      ✔ should export yml
      1) should export text
    import
      ✔ should import pdf
      ✔ should import html
      ✔ should import yml
      ✔ should import text

  7 passing (17ms)
  1 failing
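Mocha also signals failures through its exit code, and npm run passes that code through, so the command exits non-zero whenever a test fails. This is exactly what GitHub Actions will later use to decide whether a workflow step succeeded. You can check it in the container, for example:

# Run inside the container; prints 0 after a passing run, non-zero after a failure
$ npm run mocha; echo "exit code: $?"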
Running Automated Tests with GitHub Actions
Pushing the new tests to GitHub doesn't do much yet, as we haven't told GitHub Actions how to run them. We will look at doing exactly this next by adjusting our workflow file. Edit the test.yml workflow file and add the following steps:
# .github/workflows/test.yml
name: Test
on: [push, workflow_dispatch]
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 19
          cache: npm
      - run: npm ci
      - run: npm run mocha
There are a couple of interesting bits here, so let's look at each step separately:
- actions/checkout@v3: By default there aren't any project files in the new virtual machine GitHub Actions starts. With this pre-defined action we tell GitHub to check out our repository files inside the virtual machine.
- actions/setup-node@v3: We want to use Node.js to run our tests and GitHub Actions provides this useful action to set up a specific version of Node.js for us. We will use the same version we use in our local dev container, so both environments are as similar as possible. The action also caches the packages for us, so installing packages is much faster.
- run: npm ci: The run step executes one or more commands inside the virtual machine within a shell. So we can just tell it to run any command we want, in this case npm ci to install all our dependencies based on our package.json and package-lock.json files (ci stands for clean install).
- run: npm run mocha: Finally we tell GitHub Actions to run our tests. Depending on whether our tests pass or fail, GitHub Actions would continue running additional steps or stop executing further steps if we had any.
When you push the new configuration to GitHub, GitHub Actions will automatically start a new run of our workflow. The workflow will check out our files, set up Node.js, install our dependencies and finally run our tests. The run will look something like this in GitHub Actions (if the flaky test fails with its 50% chance, you will see a failed run instead):
Automation Reporting to Test Management
Now that we can run our tests as part of our GitHub Actions workflow, we can look at submitting and reporting the test results to a test management tool such as our own Testmo. Reporting the results allows us to track and view all our test runs, make all tests accessible to the entire team, identify slow, flaky or failing tests and link our results to project milestones. Here's what an automated test run looks like in Testmo:
Submitting the test results to Testmo as part of our GitHub Actions workflow is very easy. First, instead of using our npm run mocha script alias, we change our call to use the mocha-junit script. Instead of outputting the results to the console, Mocha will then write the test results to a file using the JUnit XML format. Because pretty much any test automation tool can output or convert test results to JUnit XML files, this format is used universally to exchange results between tools.
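For reference, a report in this format looks roughly like the following; this is just an illustrative sketch, as the exact attributes depend on the reporter and its options:

<!-- results/mocha-test-results.xml (illustrative) -->
<testsuites name="Mocha Tests" tests="8" failures="1" time="0.017">
  <testsuite name="files.export" tests="4" failures="1" time="0.012">
    <testcase classname="files.export" name="should export pdf" time="0.001"/>
    <testcase classname="files.export" name="should export text" time="0.002">
      <failure message="An exception occurred">Error: An exception occurred</failure>
    </testcase>
  </testsuite>
</testsuites>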
In order to send the results to Testmo, we use the testmo command line tool. This tool is distributed as an NPM package, so we can just use NPM to install it, even if you don't otherwise use JavaScript for your projects (Node.js is very easy to install or is already installed on most systems, making it a great option for deployment). So we just add the call to npm install to our test.yml workflow.
Now, we could just run our Mocha tests to generate the JUnit XML file and then call testmo afterwards to submit the results. But there's a better way to do this.
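For comparison, that two-step approach would look roughly like this sketch, using the same options we introduce below; note that if the tests fail, the first command exits non-zero and a CI step would stop before the results are ever submitted, which is one reason the wrapped form below is preferable:

# Two-step alternative (sketch): generate the XML first, then submit it
$ npm run mocha-junit
$ npx testmo automation:run:submit \
    --instance "$TESTMO_URL" \
    --project-id 1 \
    --name "Mocha test run" \
    --source "unit-tests" \
    --results results/*.xml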
Instead, we call testmo (with its automation:run:submit command) and pass the Mocha command as the last parameter. This way our testmo command can also measure the time, capture any console output and record the exit code of Mocha. We also pass a few basic additional details such as the project ID, the new test run name, the source name and the location of our XML file(s) to testmo:
# .github/workflows/test.yml
name: Test
on: [push, workflow_dispatch]
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 19
          cache: npm
      - run: npm ci
      - run: npm install --no-save @testmo/testmo-cli
      - run: |
          npx testmo automation:run:submit \
            --instance "$TESTMO_URL" \
            --project-id 1 \
            --name "Mocha test run" \
            --source "unit-tests" \
            --results results/*.xml \
            -- npm run mocha-junit # Note space after --
        env:
          TESTMO_URL: ${{ secrets.TESTMO_URL }}
          TESTMO_TOKEN: ${{ secrets.TESTMO_TOKEN }}
You might have noticed that we use a variable called TESTMO_URL as a placeholder for our Testmo web address here. The testmo command also expects to have an API key available in the TESTMO_TOKEN environment variable (which you can generate in Testmo from your user profile).
Why not just add the URL and API key directly to our workflow? This would be a very bad idea, as everyone with access to your Git repository would then be able to see these secrets in your checked-in code. You should never check in API keys and passwords directly with your code. Instead, we will use GitHub's secret management for this.
Just go to your repository settings (or organization settings) and add two secrets called TESTMO_URL and TESTMO_TOKEN with the relevant values. You can then tell GitHub Actions to make these secrets available as environment variables (see the updated workflow above).
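If you prefer the command line and have the GitHub CLI installed, you can also add the secrets with gh; the values shown here are just placeholders:

# Add the secrets via the GitHub CLI (placeholder values)
$ gh secret set TESTMO_URL --body "https://example.testmo.net"
$ gh secret set TESTMO_TOKEN --body "<your-api-token>"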
Now when we push our new configuration to GitHub, our testmo command will start the Mocha test run and submit all test results to Testmo. By default it also passes through the exit code of the Mocha run, so if Mocha returns an error, testmo also reports the error to GitHub Actions to stop the workflow.
Every time our tests run, a new test run with its results is now submitted to the testing tool, so you can easily track all results over time and identify problematic test cases in your test suite:
Having your test results in a test management tool also makes the test runs easily accessible to the rest of your team, including the entire development team and QA/testing team. You can also track your automated tests together with manual test case management and exploratory testing for unified QA management.
Running your automated tests as part of your GitHub Actions CI workflow is very straightforward by following the basic steps outlined in this article. If you are interested in more advanced workflows such as executing tests in parallel for better performance or how to use Selenium with GitHub Actions, make sure to subscribe to notifications about our upcoming articles.
PS: We regularly publish original software testing & QA research, including free guides, reports and news. To receive our next postings, you can subscribe to updates. You can also follow us on Twitter and LinkedIn.