QA at Spotahome part 1: Testing our frontend platform

David Zambrana
Published in Spotahome Product · 9 min read · Jun 11, 2019


This post is the first of a series where we will try to explain how we address Quality and test automation at Spotahome.

In this first one, I'm going to summarize the test automation we do on our frontend platforms, including the tools, technologies and strategies we use.

Our front-end architecture is built on the Backends for Frontends (BFF) approach using Node + React, which combines the speed of JavaScript with modern ways of rendering websites, making them highly dynamic and responsive for users. On top of that, we use GraphQL to request data from the backend. GraphQL was developed to give developers the flexibility and efficiency they often miss when interacting with REST APIs: the client asks for exactly the fields it needs, which makes the workflow much more efficient.
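To give a flavor of what that looks like, here is a minimal, hypothetical query a BFF could send to the backend (the endpoint, field names and id are made up for illustration):

// Hypothetical GraphQL request from a BFF; only the fields the page needs are requested.
const query = `
  query Listing($id: ID!) {
    listing(id: $id) {
      title
      pricePerMonth
      photos { url }
    }
  }
`;

fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: { id: '123' } }),
})
  .then((res) => res.json())
  .then(({ data }) => console.log(data.listing.title));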

The test tool

A little more than a year ago we decided to start using Cypress, a tool that aims to overtake Selenium (or to make you forget about it right away) by removing its main pain points, such as setup and debugging. If you want to dive deeper into this comparison, you can read further here.

In their own words, Cypress is a testing tool built for the modern web. One can see lots of similarities between the two tools, but the benefits and constraints you will face are quite different depending on which one you use. If what you're after is ease of setup and debugging, Cypress can be a pretty good match.

One big drawback of Cypress as of this writing is cross-browser testing, since it can only run tests in Chrome and Electron. Other browsers such as Firefox and Safari are on the roadmap, but there's no specific release date for them yet, so if you cannot go without this feature, you'll need to wait for Cypress to include them in future versions.

Cypress is an open source test runner that lets you write tests in JavaScript. You can check out their repository and contribute to their mission if you feel like it.

Despite its short life, you can find lots of references to it, be it talks on YouTube, tech articles, Q&As on Stack Overflow or a Gitter chat. It even appeared in the ThoughtWorks Technology Radar, in the Adopt ring.

In our case, we made the shift from Selenium to Cypress for several reasons. The most important one: it seemed easy to set up and work with, which let us in QA scale our testing efforts by spreading its use among QAs and devs. Since it uses JavaScript, our dev team welcomed it and adopted it almost frictionlessly. Even with the weaknesses mentioned above, the strong points were enough to outweigh them.

The technology underneath

Cypress is written in JavaScript and lets you write tests in JavaScript. One thing to keep in mind is that Cypress is built on top of Mocha, so if you are already used to writing test scripts with Mocha as your test framework, you are likely to find some old friends here :)

From Mocha, Cypress adopts its BDD syntax, which fits nicely with both integration and unit testing. Some of the statements you will find in this group are listed below, followed by a short sketch of how they fit together:

  • describe
  • context
  • it
  • before
  • beforeEach
  • after
  • afterEach
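A minimal, made-up spec skeleton showing how those statements shape a Cypress test (the page and selector are invented for illustration):

describe('search page', () => {
  before(() => {
    // Runs once before all tests in this block.
  });

  beforeEach(() => {
    // Runs before every test; a typical place to visit the page under test.
    cy.visit('/search/madrid');
  });

  context('with no filters applied', () => {
    it('shows at least one listing', () => {
      cy.get('[data-cy="listing-card"]').should('have.length.greaterThan', 0);
    });
  });

  afterEach(() => {
    // Runs after every test, e.g. to clean up state.
  });
});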

Other technologies that Cypress comes bundled with are Chai, Chai-jQuery, Sinon, and Sinon-Chai.
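In practice, Chai and Chai-jQuery power the assertions you chain onto commands, while Sinon and Sinon-Chai power spies and stubs (more on stubbing later). Two illustrative assertion flavors (the selector is made up):

it('illustrates the bundled assertion styles', () => {
  // Chai-jQuery style, asserting on a DOM subject:
  cy.get('[data-cy="formEmail"]').should('be.visible').and('have.attr', 'value');

  // Plain Chai style, asserting on a JavaScript value:
  expect(['chrome', 'electron']).to.have.length(2);
});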

What a Cypress test looks like

As mentioned before, Cypress takes its syntax from Mocha to give shape to your test specs. The following is part of a test script that belongs to one of our suites:

it('user should be able to see their profile info filled after saving their data in a booking request', () => {
  // Alias the backend calls so we can wait on them later.
  cy.server();
  cy.route('PUT', '**/api/**/booking-request').as('bookingRequest');
  cy.route('POST', '**/api/formData').as('personalData');

  cy.visit(`/booking/${city}/${listingIdMadrid}/${checkInDate}/${checkOutDate}`);
  cy.contains('.booking-form', 'Personal details').should('be.visible');

  // Name and email come pre-filled from the user's profile.
  cy.get('[data-cy="formName"]').should('have.attr', 'value').and('eq', name);
  cy.get('[data-cy="formEmail"]').should('have.attr', 'value').and('eq', email);

  // Fill in the rest of the personal details.
  cy.get('[data-cy="formPhone"]').type('666777333');
  cy.get('[data-cy="formBirthDay"]').select('12');
  cy.get('[data-cy="formBirthMonth"]').select('3');
  cy.get('[data-cy="formBirthYear"]').select('1976');
  cy.get('[data-cy="formGender"]').select('male');
  cy.get('[data-cy="formNationality"]').select('AQ');
  cy.get('.booking-next-step').should('be.enabled').click();

  cy.get('[data-cy="formCouple"]').select('1');
  cy.get('[data-cy="message"]').type('hello there, I would like to reserve with Spotahome');
  cy.get('[data-cy="formOccupation"]').select('professional');
  cy.get('.booking-additional-info-next-step').click();

  // Wait for the personal data to be saved, then move on to payment.
  cy.wait('@personalData');
  cy.url().should('contain', 'payment');

  cy.contains('.braintree-option__label', 'Card').click();
  cardPayment(cardInfo.number, cardInfo.expiration, cardInfo.cvv);
  cy.get('button[type="submit"]').click({ force: true });

  // Wait for the booking request to complete and assert we reach the success page.
  cy.wait('@bookingRequest');
  cy.url().should('contain', 'success');
});

When running the test in GUI mode, we see the following:

What Cypress does is open a browser; on the left you see the 'live log', which reflects everything the browser does during the test, and on the right you have the application under test.

It's worth mentioning here that Cypress is not an external tool that communicates with the browser through an API; it runs directly in the browser. This brings it so close to the web application under test that you can do things you can't do in Selenium, such as stubbing DOM APIs.
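As a small, hypothetical illustration of that last point (the page and coordinates are made up), Cypress exposes Sinon through cy.stub(), which lets you replace a browser API before the page loads:

it('shows results for the stubbed location', () => {
  cy.visit('/search', {
    onBeforeLoad(win) {
      // Replace the real geolocation API with a stub that reports Madrid.
      cy.stub(win.navigator.geolocation, 'getCurrentPosition')
        .callsFake((success) => {
          success({ coords: { latitude: 40.4168, longitude: -3.7038 } });
        });
    },
  });

  cy.contains('Madrid').should('be.visible');
});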

One of the key features here is debuggability. When a test finishes or fails, the state is not reset, meaning you can hover over the 'live log' to see what Cypress was actually doing at each moment in time (a screenshot is displayed for every action). You can even open the browser console and get a full description of the steps you select on the log. This feature has boosted the whole process of test development so much that we can't go without it now.

Let's show it with an example: we change an expected value in the test code to make it fail on purpose, and then run it:

Cypress also lets you run the tests in headless mode. This way, what you get is a command line log, plus screenshots and video recordings of your tests.
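For example, this is the minimal form of a headless run (the spec path is illustrative); video recording to the cypress/videos folder is on by default:

$ cypress run --spec "cypress/integration/booking_spec.js"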

Setting up the CI

For our CI system we use Brigade, an event-based tool for running automated tasks in the cloud. Our colleagues from the DevOps team wrote a series of posts about how this works, so in case you are interested you can take a deep dive here.

As mentioned at the beginning of the post, we split our application into different BFFs, which lets us build and deploy them independently, making the release process swifter and easier to scale.

Each of these BFFs has at least one e2e test repository attached, which is triggered as a separate event every time a pull request is merged.

The case above illustrates the typical series of events that happen when merging a new pull request into our staging environment (the BFF pipeline). First comes the push event (including unit tests 😍), then the deployment to the staging environment, and finally a post_deploy_hook triggers the configured e2e tests. We've got Slack integrated into this loop to notify us of every step and warn us if anything goes wrong.
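A rough sketch of how such a pipeline can be wired in a Brigade script (our real scripts differ; the images, tasks and exact event names here are illustrative):

const { events, Job } = require('brigadier');

// Push event: build the BFF and run its unit tests.
events.on('push', (e, project) => {
  const build = new Job('build-and-unit-test', 'node:10');
  build.tasks = ['cd /src', 'npm ci', 'npm test'];
  return build.run();
});

// After the staging deploy, the post_deploy_hook event runs the e2e image.
events.on('post_deploy_hook', (e, project) => {
  const e2e = new Job('e2e-tests', 'our-registry/booking-e2e:latest');
  e2e.tasks = [
    'cd /e2e',
    `cypress run --record --group 4x-electron --parallel --ci-build-id ${e.buildID}`,
  ];
  return e2e.run();
});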

The e2e tests are stored in repositories of their own (more than 15 of them by now). Every change we commit to e2e test code creates a new Docker image, which is pulled and executed for the corresponding BFF whenever the event arrives.
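A minimal sketch of what one of those images could look like, based on one of Cypress's official Docker base images (the tag, paths and default command are assumptions):

FROM cypress/base:10

WORKDIR /e2e

# Install dependencies first to take advantage of Docker layer caching.
COPY package.json package-lock.json ./
RUN npm ci

# Copy the spec files and supporting code.
COPY . .

CMD ["npx", "cypress", "run"]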

Focusing on the e2e test event, the command we use to run the tests is:

$ cypress run --record --group 4x-electron --parallel --ci-build-id ${buildId}

Briefly analyzing the params (see the note after this list):

  • record: Sends the video recordings to Cypress Dashboard to keep track of the runs
  • group <name>: Groups recorded tests together under a single run
  • parallel: Enables parallelization of the tests
  • ci-build-id <id>: Necessary to define a unique build or run
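For --record to work, the project's record key has to be available to the run, typically via an environment variable (the key below is a placeholder). Each parallel container runs the same command with the same build id, so the Dashboard can balance the specs between them:

$ CYPRESS_RECORD_KEY=<secret-key> cypress run --record --group 4x-electron --parallel --ci-build-id ${buildId}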

Wait, what? Sends the video recordings to a dashboard?

Cypress Dashboard

The last part of the journey. Cypress Dashboard is a service that gives us access to recorded tests, which is pretty useful when we run them on CI. It gives us information about every test run triggered, including logs, screenshots and videos.

In case we want to see what happened in our last failed test run, we can just open it and dive into the data.

Pictured above is an unusual test run where 18 ⚠️ tests failed, but it is in fact a good example of how the dashboard displays information. Here, each row represents a different spec file, which might contain from 1 to n tests. All of them are displayed in either green or red depending on their final state.

In the right column you can choose among different options to inspect the run and dive into the errors, with logs, screenshots and video recordings available for that.

Even though we could go without the Dashboard, we found it pretty helpful and use it widely: we are on a paid plan that gives several of our engineers access to it so they can work more efficiently. The big win here is that we become aware of the state of the application very fast, and we can address any issue right away when something fails. Once the failure is revealed and its potential impact assessed, we either fix the issue and test it again ASAP, or we snooze the failing assertion until it gets fixed, so that we don't introduce noise into the pipeline.
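Since Cypress inherits Mocha's syntax, one simple way to snooze a failure is to mark the test as skipped until the underlying issue is resolved (a sketch, not our exact convention):

// it.skip keeps the test in the suite and visible on the Dashboard, but doesn't run it.
it.skip('user should be able to see their profile info filled after saving their data', () => {
  // Re-enable once the underlying bug is fixed.
});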

A few words on strategy

Last but not least, I want to briefly share how we decide whether or not to automate any upcoming feature. I consider this an important topic, and as such we try to get every member of the squad involved in the decision. It also depends on the squad's sense of the application and the available resources, so there's no single correct answer.

In general terms, if we consider the new feature or change a critical one that can potentially impact the business, we create one or more scenarios for it, depending on the acceptance criteria defined in the previous steps. This way we can be sure that such critical functionality is safe, and protect ourselves against any code change that could affect it.

Otherwise, if we don't consider that the new feature or change can have an impact on the business flow, we discuss whether we want to create a new scenario for it, depending on the workload and the team's availability. It can happen that we consider it would be nice to have it tested but not that important, so we create the task and move it to the backlog, to pick it up later when there is availability.

At Spotahome, besides the people mastering automated tests (devs included), there is the figure of the functional tester. They act basically as manual testers, providing an extra effort on defined scenarios for critical paths and checking other UI-related aspects of the application for which we don't have automated tests. We do that because we are true believers that automated tests have a limit and cannot (and should not) cover everything. Embracing that human factor in quality has proved to be quite useful, and we intend to keep pursuing that practice.

In the end, it is the main user story that holds all the acceptance criteria defined among PMs, tech leads and QA. From there, scenarios are created in different subtasks, and the story is not completed until all the scenarios are created and running in our CI.

This is, in summary, the whole journey we've created to ensure that our frontend platforms work as designed and meet the acceptance criteria set beforehand. To get here, we tried several approaches to decide which ones best fit our needs.

Even now, facing a huge increase in the number of engineers and a reshaping of the squads, we are still looking at how we can take it to the next level, making every step as automated as possible.

The actual figures in play are:

  • 10+ BFFs and increasing
  • 150+ tests and increasing
  • 130+ weekly test runs

As a farewell, I want to say thank you to all the colleagues at Spotahome who collaborated on this post, and to encourage readers and QA guardians to give feedback and share their stories.

Cheers!
