Debugging tests in cloud-native distributed architectures is challenging because testing frameworks often don't provide good transparency into failures and errors. Just as application flows in microservices architectures are handled by multiple services and cloud entities, so too are test flows, making it hard to get a complete picture of what happens behind the scenes. Even if engineers know where to look, the logs are usually not accessible to them, and often the only indication of what went wrong is a failed assertion, which doesn't tell the whole story. It can take hours just to figure out what to investigate when a test fails, often blocking the CI/CD process.
How can we know what happens to a test when it's executed, what the test touches, why validations didn't pass, where the test fails, and how to troubleshoot it?
How Helios can help your team
Helios is a developer platform that provides actionable insight into your end-to-end flows – and tests – by leveraging OpenTelemetry distributed tracing data. Helios instruments your existing tests, providing end-to-end visibility into the execution of complex test scenarios and thereby shortening the process of investigating issues. Distributed tracing telemetry represents a test run as a trace; this allows Helios to seamlessly apply the capabilities we built for troubleshooting application flows to troubleshooting test runs as well.
Helios integrates with common testing frameworks (Cypress, Jest, Mocha, pytest, and more), connecting a test run to the trace generated from it. When a test runs, all the steps carried out throughout the test are aggregated in the same trace, so it's easy to visualize what's happening and quickly pinpoint where things fail.
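To make the idea concrete, here is a minimal, stdlib-only sketch of the concept behind "a test run is a trace": every step executed during the test records a span that shares the same trace ID. The `Trace`/`Span` classes and span names below are illustrative, not the Helios SDK or the OpenTelemetry API.

```python
import uuid
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    trace_id: str
    status: str = "ok"
    attributes: dict = field(default_factory=dict)

@dataclass
class Trace:
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    spans: list = field(default_factory=list)

    @contextmanager
    def span(self, name, **attributes):
        # Each step recorded through this trace carries the same trace_id.
        s = Span(name=name, trace_id=self.trace_id, attributes=attributes)
        self.spans.append(s)
        try:
            yield s
        except Exception as exc:
            s.status = "error"
            s.attributes["error.message"] = str(exc)
            raise

def run_checkout_test():
    # One test run -> one trace; every step lands in that trace.
    trace = Trace()
    with trace.span("test: checkout flow"):
        with trace.span("HTTP GET /products"):
            pass  # a real test would call the service here
        with trace.span("DB: insert order"):
            pass
    return trace

trace = run_checkout_test()
print([s.name for s in trace.spans])
# → ['test: checkout flow', 'HTTP GET /products', 'DB: insert order']
```

Because every step shares one trace ID, a viewer can render the whole test run as a single timeline, which is the property Helios builds its visualization on.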
With Helios you can:
- Capture test runs as traces, enabling you to visualize and monitor them
- Get a comprehensive end-to-end view of complex test scenarios executed, especially in your CI environment
- Surface assertion results so it's easy to understand why a test failed; you can also link to a trace visualization from the console where you're running your tests
- Access an organized overview of your test runs, including which tests passed and which failed
You’re running a UI test in a browser that performs actions mimicking a real user in a web application: a user goes to an e-commerce website, searches for a product, adds it to their cart, and then checks out. All of these UI actions can be simulated with UI testing frameworks like Cypress, Playwright, Puppeteer, and Selenium.

On the e-commerce site, users are logging in, clicking, and typing, which puts a lot of microservices at play behind the scenes – HTTP requests, DB transactions, third-party API calls, published messages, and so on. The UI test checks that all these actions work. But let’s say one of the services doesn’t perform as expected. For example, even though a “purchase completed” message appeared, the user never received an email confirming the purchase. Several things could have led to this scenario: maybe something in the backend failed, or there was a communication failure between services. None of this is visible in the UI.

With Helios test instrumentation, however, you can capture complete E2E test runs as traces; all the actions and steps from the test are aggregated in the same trace, so you can easily visualize and monitor what’s happening in a test and quickly identify where things fail.
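Given the spans of that test's trace, pinpointing the failure becomes a lookup rather than a hunt. The sketch below uses a hand-built list of spans standing in for real trace data; the span names and the error message are invented for illustration.

```python
# Simulated spans from one E2E checkout test trace. The UI-facing steps
# succeeded, but a downstream service silently failed.
spans = [
    {"name": "UI: click 'Purchase'",             "status": "ok"},
    {"name": "HTTP POST /checkout",              "status": "ok"},
    {"name": "DB: insert order",                 "status": "ok"},
    {"name": "queue: publish order-confirmed",   "status": "ok"},
    {"name": "email-service: send confirmation", "status": "error",
     "error": "SMTP connection refused"},
]

# With the whole flow in one trace, the failing step is a simple filter away.
failures = [s for s in spans if s["status"] == "error"]
for s in failures:
    print(f"{s['name']}: {s['error']}")
# → email-service: send confirmation: SMTP connection refused
```

This is exactly the situation described above: the UI reported “purchase completed”, but the trace reveals that the email step downstream of the message queue is where the flow broke.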
See the live trace visualization in the Helios Sandbox.