For many QA and development teams, managing the test cases you’ve written to cover your product’s functionality can be a significant challenge. As your test suites scale, it becomes harder to organize everything efficiently, so that important tests don’t fall through the cracks and you can be confident you’re actually testing everything that needs to be tested.
Testmo provides powerful features for creating and organising your tests, but we often get questions from users who aren’t sure whether they’re managing their cases as efficiently as possible.
In this article, we will walk through some best practices and recommendations for organising your tests so they can easily be found and reused by your team in subsequent test cycles.
Principles for Writing Test Cases
The basis for efficient test case organization starts with how you write the test cases themselves. We won’t go into great detail about that in this article, since we covered the topic in depth in a separate guide: Writing Test Cases: Examples, Best Practices & Test Case Templates. Nevertheless, there are some useful principles to consider when setting out (we’ll pull them together in a short example after the list). A good test case has the following characteristics:
- It has clear, unambiguous intent and a clear title. If the test case has steps or a description, those should be specific and actionable as well.
- It is atomic, covering a single requirement, scenario or feature at a time, ideally mapped to a single user story or acceptance criterion. (Taking this approach will also help with clearly identifying coverage in Testmo’s Requirements Coverage & Traceability report!)
- If it is intended to be executed manually, it includes instructions for satisfying any necessary dependencies or preconditions. Make sure you list any data, setup or state requirements up-front, so they’re not discovered mid-execution!
- It clearly states the expected output or behaviour, so the tester can quickly and easily distinguish between a passing and a failing result. Note: for non-deterministic outcomes, such as when testing an AI model, different principles and test case characteristics apply. Watch out for more from us on this soon!
- It is repeatable, producing consistent results when run under the same conditions (assuming the preconditions from point 3 are satisfied).
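Pulling these principles together, here’s a short illustrative sketch of a manual test case (the title reuses a naming example from later in this article; the preconditions, steps and messages are hypothetical):

```
Title: Missing card details during billing blocks submission

Preconditions:
  - A test user account exists and can log in
  - The cart contains at least one in-stock item

Steps:
  1. Log in as the test user and proceed to checkout
  2. Leave the card details empty and submit the billing form

Expected result:
  Submission is blocked, an inline validation error is shown
  for the missing card details, and no order is created
```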
Now that we know what a good test case should look like, let’s focus on how to organise your test cases in a consistent and scalable way in Testmo.
Test Case Folders & Folder Structure
What we’re looking for is for test cases to be organised in a sensible, logical way that enables people on your team and across your organisation to find, use and maintain them over time.
In our experience, the best way for teams to accomplish this is by organising tests by product domain or feature. By doing so, you’ll end up with something like this:
```
Project: [Product Name or Domain]
├── Folder: Core Features
│   ├── Sub-Folder: Authentication
│   ├── Sub-Folder: User Profiles
│   └── Sub-Folder: Notifications
├── Folder: E-commerce
│   ├── Sub-Folder: Product Search
│   ├── Sub-Folder: Cart & Checkout
│   └── Sub-Folder: Orders & Payments
├── Folder: Admin Tools
├── Folder: Mobile App
└── Folder: Shared Tests (e.g. Login, API Auth)
```
Additionally, we’ve found that it’s helpful to apply the following principles:
- For large teams or multiple apps, create separate projects for each product or platform.
- Structure projects or major folders to reflect team ownership.
- Mirror the product architecture (modules, services, or features) with your folder/test case hierarchy.
- Use dedicated tests in a separate folder for reusable flows (e.g. login, setup).
Manual Test Case Naming
Test names should act as clear summaries of what is being tested, even when they’re exported to a separate artifact (e.g. a CSV, XML or TXT file). Keeping your test names consistent will make for easier searching, reviews and traceability.
When writing automated tests, it’s common to use a format like the following:
[Feature]_[Scenario/Condition]_[ExpectedOutcome]
It’s a good idea to follow a similar approach for your manual test cases. You don’t necessarily have to use underscores or CamelCase as you would for automated tests though. So, whereas for automated tests you may have something like the following:
- File: CartTest.js
- Cart_AddOutOfStockItem_ShowsError
- File: LoginTest.js
- Login_ValidUser_SuccessfulRedirect
- File: BillingTest.js
- Billing_MissingCardDetails_SubmitBlocked
For your manual cases, you’d have this instead:
- Folder: Cart & Checkout
- Adding out of stock item to cart shows error
- Folder: Authentication
- Logging in as valid user successfully redirects
- Folder: Orders & Payments
- Missing card details during billing blocks submission
You might be tempted to skip the feature part if you’ve organised your tests into folders by feature, as we recommended above. Don’t do that! When you’re reporting, exporting or otherwise referring to tests outside of that folder structure, if the feature isn’t in the test name, that context is lost.
The most important thing here is that your test case names should be easy to understand, indicate what feature the test case is for, and what it’s intended to do.
Clear naming makes browsing and filtering test cases faster, especially for individuals or teams unfamiliar with the intent.
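To make the automated side of this convention concrete, here’s a minimal Jest-style sketch in TypeScript. It reuses the Billing_MissingCardDetails_SubmitBlocked example from above; the submitBilling() helper and its behaviour are hypothetical stand-ins for your own code:

```typescript
// BillingTest.ts — test names follow [Feature]_[Scenario/Condition]_[ExpectedOutcome]

// Hypothetical stand-in for real billing logic, inlined so the sketch is self-contained.
interface BillingForm { cardNumber: string; expiry: string; cvc: string; }
interface BillingResult { submitted: boolean; errors: string[]; }

function submitBilling(form: BillingForm): BillingResult {
  const errors: string[] = [];
  if (!form.cardNumber) errors.push("Card number is required");
  return { submitted: errors.length === 0, errors };
}

describe("Billing", () => {
  test("Billing_MissingCardDetails_SubmitBlocked", () => {
    // Act: submit a form with no card details filled in
    const result = submitBilling({ cardNumber: "", expiry: "", cvc: "" });

    // Assert: submission is blocked and a validation error is reported
    expect(result.submitted).toBe(false);
    expect(result.errors).toContain("Card number is required");
  });
});
```

The test name alone tells the reader the feature, the condition and the expected outcome, which is exactly the property we want to carry over to manual test case titles.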
Tagging Strategy
Tags enable you to dynamically group, filter and report on test cases across different folders if you need to.
In our experience, two good uses for tags are identifying which versions a test applies to, and which components it covers. Both of these could also be mechanisms for organising your tests at the folder level.
Test Type is another common scenario, along with Automation Status, though again, these could equally well be handled through folders, custom fields, or Automation Linking, which in Testmo demonstrates both coverage and traceability between manual test cases and automation by showing which cases are covered and what the current result status of those tests is.
Some examples of tagging conventions we’ve seen used are below:
| Tag Category | Examples | Purpose |
| --- | --- | --- |
| Component | login, cart, checkout | Product modules |
| Risk | high, medium, low | Test risk level |
| Test Type | regression, smoke, negative, integration | Testing intent |
| Automation Status | automated, manualOnly, needsAutomation | Trace to automation |
However, most test case management tools (Testmo included) have dedicated fields for some, if not all, of these examples. And if they don’t, you can usually add custom fields with defaults and presets to suit your needs. Which raises the question: why would you need tags at all?
A good rule of thumb is to use tags for more ephemeral organisation. If you need to apply a tag to a group of tests for a period of time, after which it can be removed (a specific version of the code, for example), that’s a good use of a tag. Once that version has been released, you can remove the tags. Or, if you find it’s something you need to refer back to, you can create a milestone for the version and add the runs to it.
Here’s an example of how a well-tagged test case may appear:
Test Case: Missing card details during billing blocks submission
Tags: checkout, high-risk, regression, v2.5, needs-automation
A good tagging strategy should supplement the project and test naming conventions we’ve talked about above, ultimately making your tests easier to filter and report on.
Tagging mistakes to avoid
- Tag Duplication: Avoid creating tags that essentially capture the same information
- Inconsistent Formatting: Stick to your naming convention (e.g., don't mix @LoginPage with @login-page)
- Outdated Tags: Regularly clean up version-specific tags after releases are completed
- Overly Complex Tags: If you need complex multi-part tags, consider if a custom field would be better
- Tag Sprawl: Resist the temptation to create a new tag for every situation
Requirements Traceability
If you have requirements you wish to cover with your tests, usually in the form of epics, stories, tasks, bugs and so on (issues, in other words) in Jira, GitHub or GitLab, now’s a good time to think about defining the relationships between them and your test cases.
In Testmo, that’s as simple as just adding the issue ID(s) to the test case Issues field. Once done, you can see which cases cover what requirements in your test case repository view by enabling the Issues column. You’ll also be able to see coverage by running our Requirements Coverage & Traceability report.
Additionally, if you add bug IDs to your test results when you run them, you’ll end up with an additional layer of traceability: from bugs to tests, to test cases, to requirements.
Organizing Automated Tests
If you have a suite of automated tests for your product, you will most likely organize and store them in source code repositories on platforms like GitHub or GitLab. Many teams design automated tests to replace cases they previously ran manually, or to supplement certain scenarios they still run manually.
Even though automated test scripts are stored outside of Testmo, many teams automatically upload their test automation results to Testmo using the Testmo CLI, so they get a unified view of all their test activities from a single dashboard.
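As a rough sketch, a CI submission might look like the following, based on the Testmo CLI at the time of writing (the instance URL, project ID, names and results path are placeholders to replace with your own, and the CLI expects your API token in the TESTMO_TOKEN environment variable; check the Testmo docs for current flags):

```shell
# Install the Testmo CLI, typically as a dev dependency of your project
npm install --save-dev @testmo/testmo-cli

# Submit JUnit XML results from your test runner as a new automation run
npx testmo automation:run:submit \
  --instance "https://your-team.testmo.net" \
  --project-id 1 \
  --name "Unit tests" \
  --source "unit-tests" \
  --results "results/*.xml"
```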
When you upload automated test results, Testmo will automatically organize your test results by automation source, run, and thread. You can then review test results, filter for failed tests, and view error messages and other important information in the automation runs area of your Testmo project. Check out our documentation on test automation to learn more.
If some of your test cases are also covered by automated tests, you can capture that in your Testmo repository, and include those tests in traceability, by adding an automation link. In a similar fashion to requirements coverage, you’ll be able to see test automation coverage on the repository view by enabling the Automation column. Check out our video on reporting on requirements and automation coverage on YouTube to see what that all looks like: How to View Test Case Coverage for Manual & Automated Tests in Testmo
Maintenance & Review Workflow
Testmo comes with built-in workflow settings for test cases, runs & sessions. If you want to implement a design, review & approval process against the test cases for your team, this is the place to do it.
Out of the box, Testmo provides statuses of Draft, Under review, Rejected, Active and Retired, which should work well for most use cases. If you want to modify the workflow, you can do that in the Admin area. Assuming you stick with the default settings, you can:
- Ensure your test cases go through the appropriate design, review & approval steps before use
- Create test runs with only test cases in approved (default: Active) states
- Implement a review process for your test cases after releases, to clean them up and move test cases to a retired state where appropriate
- Assign ownership of test case related actions, using the built-in assignment capabilities
Share a Test Case Playbook Across Teams (Use This Article!)
If you’ve been following along, by this point you should have a good start on organising your test cases in a scalable and consistent fashion. If you’d like a recap, check out our video running through the process step by step:
To make sure your team is applying the same concepts and practices, share this article and the video with them! And if you have some suggestions for how we can support you with more educational content, please let us know!
PS: We regularly publish original software testing & QA research, including free guides, reports and news. To receive our next postings, you can subscribe to updates. You can also follow us on Twitter and LinkedIn.