






How Helios integrates with Cypress to provide backend visibility into your UI testing

Written by Natasha Chernyavsky

A look into the challenges distributed architectures present to UI testing tools and how Helios solves them with E2E visibility using context propagation.


Testing web applications from the user interface (UI) is a must for every customer-facing product, from e-commerce portals to cyber security dashboards. A broken or inefficient UI experience can often determine whether end users adopt a product quickly and without friction. This is why developers have embraced UI testing as a critical part of their development process.

Cypress, an open-source UI testing tool for web-based applications, has grown in popularity thanks to its bundling of end-to-end (E2E) testing capabilities into one package that can be installed and deployed simply in your environment. Cypress runs E2E tests the same way your users interact with your app in a real browser, making it faster and more reliable than other tools, since tests execute directly within the browser rather than being driven through remote commands.

However, when it comes to E2E testing in distributed, cloud-native architectures, browser-based tools like Cypress can only go so far. For one, troubleshooting a failed test can be difficult because the UI provides almost no visibility into what happens in the backend. Another challenge is that validation capabilities in the UI are limited to what appears in the HTML.

Helios, a developer platform that helps increase dev velocity when building cloud-native applications, integrates with Cypress to solve these challenges, helping developers see, understand, and fix E2E test failures quickly.

In this blog post, I’ll take you through a deeper look into the main challenges distributed architectures present to UI testing tools for web-based apps, and how Helios integrates with Cypress to address these challenges.


E2E testing in distributed architectures

One challenge in troubleshooting failed tests in browser-based tools is that the UI provides no insight into what happens behind the scenes. When running tests from the UI, developers see only what is reflected in the browser. As the backend grows more complex, the potential for failures increases. When Cypress tests fail, we can't tell whether the failure is caused by a real issue or simply by a temporary headless-browser resource issue that affected the HTML rendering. Identifying the reason behind these failures can take a great deal of time, and developers often have no data about what really happened in the backend.

One of the components in the Helios product is a web app, so we use Cypress to test it all the time. As an example, let's look at one of our own Cypress runs, which failed with the not-so-helpful error:
AssertionError: Timed out retrying after 4000ms: Expected to find element: `div[class*="TraceVisualizationPageView_side-pane"] div[class*="TraceTriggerConfig_field"] :nth-child(4) > textarea`, but never found it.

Our very own Cypress test runs; we need to troubleshoot failed test runs without a lot of context into what went wrong

It can be pretty time-consuming to figure out the root cause of this failure, as we have no real insight into what went wrong in the backend.

Another challenge is that, in many E2E scenarios, validating what happens in the UI is not enough, because some asynchronous operations (e.g., third-party API calls, DB operations, etc.) are usually not reflected in the frontend. Things may seem to work properly based on the UI while the tested scenario is actually broken. For example, in an e-commerce scenario, a customer makes a purchase on a website and the frontend shows that everything worked. However, the customer never receives a confirmation email after the purchase, which could point to a problem. This isn't something shown in the UI, and therefore developers may not be aware of it.

Gaining visibility into your backend during UI testing

Helios integrates with Cypress to give you insight into your backend and asynchronous processes during UI testing by connecting what happens in the browser with distributed traces from the backend. You can trace a flow from your browser to your backend by instrumenting it with OpenTelemetry (OTel), the emerging standard for collecting observability data. Tracing a flow from the browser means that all downstream operations will be included in the same trace, giving developers a complete view of what happens behind the scenes like never before.

Back to the example from our own Cypress test runs: once a test is instrumented, the full E2E trace in Helios is one click away:

Each instrumented test run has a direct link to its trace visualization in Helios so that it’s quick and easy to understand the full context

In this case, the UI data provides very little indication of the source of the problem, which lies deep within the system: a DB query in a microservice that the frontend app interacts with. In the Helios trace, it's easy to see immediately that the root cause was a missing DB column:

Trace visualization in Helios of the failed test run – you can immediately see where in the flow the error occurred and what it is

Connecting different operations and requests to a coherent trace requires propagating the trace’s context between two separate parts of the application – in this case between the browser and the backend. Therefore, in addition to initiating a trace (i.e., creating an initial “root” span for the Cypress test) on the browser’s side, we need each HTTP request that Cypress makes from the browser to carry the trace’s context, so that the receiving backend services will associate their operations with the same trace. This can be tricky in certain scenarios, because some custom or non-standard workflows may not always support distributed tracing mechanisms.
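To make the mechanism concrete: in OpenTelemetry, the trace context typically travels between services as a W3C Trace Context `traceparent` HTTP header. The following plain-JavaScript sketch (SDK-free; the helper names are mine for illustration, not part of the Helios or OTel API) shows roughly what a propagator writes on the sending side and reads on the receiving side:

```javascript
// Sketch of the W3C Trace Context "traceparent" header format that
// OpenTelemetry propagators read and write. Helper names are hypothetical.
const VERSION = '00';

// Build a traceparent header from a span's identifiers
function buildTraceparent(traceId, spanId, sampled) {
  const flags = sampled ? '01' : '00';
  return `${VERSION}-${traceId}-${spanId}-${flags}`;
}

// Parse a traceparent header back into its parts (returns null if malformed)
function parseTraceparent(header) {
  const match = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header || '');
  if (!match) return null;
  return {
    version: match[1],
    traceId: match[2],
    parentSpanId: match[3],
    sampled: match[4] === '01',
  };
}

const header = buildTraceparent('4bf92f3577b34da6a3ce929d0e0e4736', '00f067aa0ba902b7', true);
// header is '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01'
```

Any backend service that receives this header can parse the same trace ID back out and attach its own spans to it, which is exactly what ties the browser-side test span to the backend operations.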

I will now walk through how we connected the requests into a coherent trace using context propagation. We did this in our own application because we saw that troubleshooting UI tests was costing us too much time.

Instrumenting a Cypress test

The purpose of instrumenting a Cypress test is to represent each test execution as a single trace, encompassing all the different operations (spans) carried out as part of the test run. To do that, we need to create a trace that represents the test itself and then make sure its OpenTelemetry context propagates downstream.

The plan involves:

  1. Creating a root span that contains the basic information about the test result (full name, duration, success / failure, failure reason if applicable)
  2. Making sure all the operations that are triggered by the test are associated with this test span

Creating the span is done by leveraging Cypress's test:before:run event, which fires before each test and has access to the test's hierarchy and name. Similarly, we can use the test:after:run event, which signals that the test has ended and allows us to collect the test duration and determine success or failure. By applying these hooks, we get an OpenTelemetry representation of each Cypress test, meaning we'll have spans for every Cypress test in our system.
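As a rough sketch of how the two events map onto a span's lifecycle (plain JavaScript with hypothetical names and result shapes; the real implementation creates OpenTelemetry spans rather than plain objects):

```javascript
// Sketch: mapping Cypress's test:before:run / test:after:run events onto
// a root "test span". The function names and the shape of the `results`
// object are illustrative, not the actual Helios SDK or Cypress internals.
const activeSpans = new Map();

// Called on test:before:run - start a root span with the test's name
function onTestBeforeRun(test) {
  activeSpans.set(test.id, { name: test.title, startTime: Date.now() });
}

// Called on test:after:run - close the span with duration, status, and
// failure reason if applicable
function onTestAfterRun(test, results) {
  const span = activeSpans.get(test.id);
  activeSpans.delete(test.id);
  span.durationMs = results.duration;
  span.status = results.state === 'passed' ? 'ok' : 'error';
  span.error = results.error || null;
  return span;
}

// In the Cypress support file, these would be registered roughly as:
// Cypress.on('test:before:run', onTestBeforeRun);
// Cypress.on('test:after:run', onTestAfterRun);
```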

However, that's not all we're looking for; to achieve the second step, we need to somehow propagate the context from the test spans we've just created to the backend.

Usually, we'd do that by instrumenting the HTTP requests sent by the browser, attaching a traceparent header to each request by hooking into the HTTP client code. But in our case this isn't possible: Cypress test code has no "classic" programmatic access to patch the HTTP client dynamically, as in regular instrumentation cases. How can we solve this when standard instrumentation isn't possible? Luckily, the Cypress API provides a solution. Using the Cypress intercept API, which is intended to spy on, stub, and test network calls made to your application, we can inject the context propagation headers into each outgoing request:

cy.intercept('/**', (req) => {
    // propagation and context come from @opentelemetry/api;
    // this injects the active trace context (the traceparent header)
    // into the outgoing request's headers
    propagation.inject(context.active(), req.headers);
});
The context we inject is, of course, taken from the test span we've created. Each request sent to the backend will now be associated with the Cypress test's trace, providing beautiful visibility into the E2E flow:

An E2E Cypress test run, pulling together all the downstream operations into the same trace leveraging OTel context propagation
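On the receiving side, an OTel-instrumented backend service does the inverse: its propagator extracts the traceparent header from the incoming request so that its own spans join the same trace. A framework-agnostic sketch (the helper names are mine, assuming an Express-style request shape; in practice the OpenTelemetry HTTP instrumentation handles this automatically):

```javascript
// Sketch of the backend half of context propagation: parse the incoming
// traceparent header so backend spans can join the Cypress test's trace.
function extractTraceContext(headers) {
  const match = /^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(
    headers['traceparent'] || ''
  );
  if (!match) return null; // no incoming context: a new trace would start here
  return { traceId: match[1], parentSpanId: match[2] };
}

// Example: Express-style middleware attaching the context to the request
function tracingMiddleware(req, res, next) {
  req.traceContext = extractTraceContext(req.headers);
  next();
}
```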

The result displayed above is a trace that starts from a Cypress node (the test browser), which sends 3 requests to the backend-for-frontend service, which in turn queries Elasticsearch and Redis and sends multiple REST and GraphQL calls to downstream microservices. In a single test there are over 30 interactions, any of which could fail, and figuring out which one failed often takes hours of precious dev effort.

Helios and Cypress – the perfect E2E testing combo

The Helios SDK offers Cypress auto-instrumentation for each UI browser test, collecting test-related information like the test's name, the run status (i.e., failed or passed), and any assertion errors. This information provides much-needed insight into the backend during UI testing in distributed architectures. Context propagation is the key mechanism here, as it unifies the backend and frontend into the same trace, making test troubleshooting much faster and smoother for developers. At least, that's what we saw within our own team at Helios, where our dev velocity increased because we had the right insight when we needed it. With the right backend data at the right time, developers can save time while also ensuring their end users have a good product experience on the frontend.


About Helios

Helios is an applied observability platform that produces actionable security and monitoring insights. We apply our deep runtime data collection capabilities to help Sec, Dev, and Ops teams understand the actual application risk posture, prioritize vulnerabilities, shorten troubleshooting time, and reduce MTTR.

The Author

Natasha Chernyavsky

Natasha is a senior software engineer at Helios, where she was one of the first employees. Previously, Natasha was an R&D team leader and senior software architect at Oribi, acquired by LinkedIn in 2022. Natasha has over a decade of development and management experience in the industry and in the IDF, and she holds a B.Sc. from Tel Aviv University and an M.Sc. from Reichman University, both in Computer Science.
