May 31, 2020 · ⏱ 6 minutes

🌲 Bootstrapping a Portable Cypress Setup for Blazing-Fast Browser Tests

Cypress has become one of the most popular tools for running reliable browser-based tests across multiple environments, making sure that your web application will behave as expected in production. This surge in adoption over the past years rests largely on high-quality developer documentation, a straightforward API for writing tests quickly, and a set of features that help developers debug their applications and avoid regressions later on.

Being able to test your site from front to back gives you strong guarantees, but it also means that you need to decide carefully which types of tests to run for which parts of your codebase, especially since every new test adds to the total run time. You'll probably end up with a blend of unit, integration, and browser tests, each covering different parts.

Cypress is extremely helpful for transactional flows, such as the typical shopping cart example, which require multiple actions in a specific order. It also offers extended control over the browser, letting you tweak network requests, though its architecture naturally comes with a set of trade-offs.

What I want to focus on in this guide is setting up a fast and portable Cypress configuration that runs in CI environments such as CircleCI. While Cypress already offers first-class integrations and examples for most environments, I'd like to decouple this configuration slightly so it's easy to switch providers later on, or even run the same setup on my local machine, because that makes my life easier.

Because I already spent considerable time figuring out how to manage Docker containers and Docker Compose services in Circle CI, I'll pick Docker as my layer of abstraction. This will allow us to run our tests on every platform that supports Docker (and Docker Compose, in our case).

I uploaded the complete source for the following example to my blog code repository, available here.

✏️ Setting up some tests

Our first task is to set up Cypress in a freshly-created directory and open up the test runner interface. At this point, Cypress will create some example tests automatically, which I'll continue to use for this guide.

Let's start by installing Cypress as a dev-dependency, and opening up the test runner:

# If you haven't set up a package.json yet, we'll do it now
yarn init -y
# Install cypress as devDependency
yarn add -D cypress
# Open the test runner
yarn cypress open

The first run might take some time, but once everything's done, you'll be greeted by the following screen.


From here on, you can already run your Cypress tests by clicking on a test suite (Cypress organizes suites based on spec files), or by running all specs with the button in the top-right corner. You can also configure the browser used for running the tests, which proves helpful if you want to make sure all deployment targets work. While running tests locally is already amazing, the real value of Cypress lies in the recording feature, visible in the Runs tab.


This view displays a list of all your previous and current test runs, and lets you retrieve further details like screen recordings and logs by heading over to the Cypress Dashboard. As we haven't created our Cypress project (the organizational unit containing our test runs) yet, we can click Set up project to record, which prompts us to sign in to the Dashboard. You can sign in using GitHub, for example, after which you can return to the test runner.


Now you can supply some final information for your project, including its name, the organization it belongs to, and its visibility.

Create project

Once you confirm, you'll notice Cypress has created a cypress.json file in the background containing your project ID, which tells the test runner which project your repository is connected to. You'll also get a record key, which we'll use later when running our tests. The command outlined in the second step will run and record all test suites, creating a report available in the Runs tab and in your Dashboard.
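For reference, the generated cypress.json might look something like this — the projectId below is just a placeholder, as yours will be generated by the Dashboard:

```json
{
  "projectId": "<your-project-id>"
}
```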


At this point, we've successfully set up our first tests and configured our Cypress project that will contain all future test runs, whether manual or from CI. In the next step, we're going to dockerize our tests, making them run to completion in parallel.

🐳 Dockerizing our tests

To make our tests run wherever we desire, we'll run them in Docker containers. Luckily, Cypress provides well-maintained Docker images out of the box, which we can build on. Let's create our Dockerfile:

FROM cypress/base:latest
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY cypress.json ./
COPY cypress ./cypress
# This is completely optional and will
# be used by Cypress to fetch information
# about the current commit
COPY .git ./.git
CMD yarn cypress run --record

We're installing all dependencies, then copying over the files needed to run the tests, including cypress.json and the complete cypress directory. To run, the container simply executes run and record, as the prompt in the previous step outlined.
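While not strictly required, a .dockerignore file keeps the build context small so image builds stay fast. A minimal sketch could exclude dependencies and Cypress output artifacts:

```
node_modules
cypress/videos
cypress/screenshots
```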

Let's also create a Compose file to describe the services we want to run:

version: '3.8'
services:
  tests:
    build: .
    env_file: .env

For now, we simply tell Docker Compose that our tests service should build from the current directory (it will automatically find the Dockerfile we just created) and use the .env file we'll create next. It should contain the variables shared by all runs, while temporary variables that differ per run will be supplied differently later on.

# Your Cypress record key, displayed in
# the test runner in the "Runs" tab
CYPRESS_RECORD_KEY=<secret record key>

And that's all we need to run our tests in Docker! Let's try it out:

docker-compose up --build

After building our image, the test container starts up and you'll see the test logs flashing over the terminal. You can already head over to the Cypress Dashboard, which shows the current run in real time.

Current run

If we click on the run, we can see all the details, from which tests have already passed to how many machines are running and which environment they're in:

Run details

After some time, the tests complete and we can check out everything from screenshots, to logs, to videos of each of our tests that got recorded.

You'll also notice a CTA at the top of your list of tests, outlining how running the tests on multiple machines will speed up your tests significantly!

Parallelization CTA

This is where it gets even better: I'll show you how we can run our tests in parallel by adding a few small changes to our existing configuration.

🚀 Running tests in parallel

Running our tests in parallel hugely improves performance. Luckily, it's straightforward to adapt our setup to run multiple instances of the test container, each of which connects to the Dashboard to retrieve the test suites (based on the spec files) it should run. The Dashboard acts as the orchestrator here; we only have to start more containers with some minor changes so the Dashboard knows they belong to the same run.

As a first step, we'll update the command in our Dockerfile to run the tests in parallel, as well as pass in a CI build ID that identifies all containers we spin up for the same run:

# Add the --parallel and --ci-build-id flags to the existing command
CMD yarn cypress run --record --parallel --ci-build-id $BUILD_ID

To make the CI build identifier available in our environment, we'll also add it to the Compose file using variable substitution, allowing us to pass the build identifier from our parent environment (so the CI job) straight to the containers:

services:
  tests:
    build: .
    env_file: .env
    # Substituted from the parent environment at startup
    environment:
      BUILD_ID: ${BUILD_ID}

Every time we want to run our tests now, we have to expose the BUILD_ID variable in our local environment, either choosing an identifier our CI provider exposes or supplying our own, although it helps to be able to link test runs back to CI executions.

export BUILD_ID=<an identifier unique to every test run>
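On CircleCI, for example, you could reuse the CIRCLE_BUILD_NUM variable the platform already exposes; the timestamp fallback below is just a sketch for local runs:

```shell
# Reuse the CI build number when available, otherwise
# generate a unique identifier for local test runs
export BUILD_ID=${CIRCLE_BUILD_NUM:-local-$(date +%s)}
echo "BUILD_ID=$BUILD_ID"
```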

Now, let's start up our tests once more, this time with four instances to split the tests between those machines:

docker-compose up --build --scale tests=4

This starts four instances of our tests service, all supplied with exactly the same configuration, awaiting instructions from the Cypress Dashboard. After Cypress verifies that it can run, your tests start to execute.

Running in parallel

After all tests are complete, Docker Compose exits with code 0, notifying us that everything finished successfully.

To recap, we changed a few details to add parallel capabilities to our test configuration, enabling us to scale up to as many instances as the underlying compute resources can handle, greatly improving performance.

All tests complete

📡 Automating test runs in CI

One last thing I'd love to share is how to run everything we built in CircleCI, to make it easy to set this up for yourselves.

First, we'll go ahead and create our CircleCI configuration file, located at .circleci/config.yml:

# Use the latest 2.1 version of the CircleCI pipeline process engine.
version: 2.1
jobs:
  tests:
    docker:
      - image: circleci/golang:1.14-node-browsers
    steps:
      - checkout
      - setup_remote_docker
      - run: |
          export BUILD_ID=$CIRCLE_BUILD_NUM
          docker-compose up --build --scale tests=4
workflows:
  test:
    jobs:
      - tests:
          filters:
            branches:
              only:
                - master

On every Git push to the master branch, CircleCI checks out the repository, sets up everything required to run Docker and Docker Compose, defines the build identifier based on the current build number, and finally starts our tests scaled to four instances (a number that can be freely configured and should be based on the machine that runs underneath).

Once this is done, we can copy our secret record key out of the .env file, delete the file, and remove env_file from our Compose file, as we don't need it anymore. Instead, we configure the Compose file to pass through the CYPRESS_RECORD_KEY environment variable we'll set up shortly:

version: '3.8'
services:
  tests:
    build: .
    # Both variables are passed through from the CI environment
    environment:
      CYPRESS_RECORD_KEY: ${CYPRESS_RECORD_KEY}
      BUILD_ID: ${BUILD_ID}

After pushing, we can set up our CircleCI project, start building, and then visit the project settings to add the CYPRESS_RECORD_KEY environment variable with the value we copied earlier. If it got lost in the meantime, you can easily retrieve it again from your project settings in the Cypress Dashboard, where the generated record key is listed.

At last, we've finished creating a fully-functional testing configuration, allowing us to write browser tests and run them on every push, or on a schedule, for example once a day.
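For the scheduled variant, CircleCI supports cron-based workflow triggers. A sketch, assuming the job is named tests as in our configuration, could look like this:

```yaml
workflows:
  nightly:
    triggers:
      - schedule:
          # Every day at 06:00 UTC
          cron: "0 6 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - tests
```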

Some additional resources on Cypress parallelization can be found over at the amazing Cypress documentation, for example on this page, while Continuous Integration setup steps are described here.

I hope you enjoyed this post and could learn something from it! If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on Twitter or by mail.

πŸ„ The latest posts, delivered to your inbox.Subscribe