
Playwright testing observability: navigation timelines, network analytics, and CI metadata

· 5 min read
TestRelic Team
TestRelic Maintainers

If you run Playwright in CI, you already have pass/fail signals. Testing observability means capturing the execution story around each failure: which URLs you touched, how the SPA navigated, what the network layer did, and which build or branch produced the run. The @testrelic/playwright-analytics reporter is designed for that workflow—it runs locally, writes structured JSON and HTML, and can feed the TestRelic cloud platform when you add an API key. This guide consolidates the official docs into one narrative for teams standardizing on Playwright.

Why Playwright teams adopt execution-level observability

Playwright excels at deterministic automation, but flaky timing, third-party dependencies, and routing bugs often show up only as a single failed assertion. The Introduction describes how TestRelic enriches runs with navigation timelines, network stats, console output, failure diagnostics, and CI metadata so reviewers can answer: what did the browser actually do before the expect() failed? That is distinct from screenshot-on-failure or trace zip alone—it is a structured analytics model aligned to navigations and tests, which you can diff across runs once data reaches the cloud.

Requirements (from the intro): Node.js ≥ 18 and Playwright ≥ 1.40.

End-to-end browser testing: automatic navigation tracking

The E2E Testing (Browser) page is the canonical reference for browser suites. When tests import the page fixture from @testrelic/playwright-analytics/fixture, the reporter tracks:

  • Navigation timeline — timing around page load, DOM content loaded, network idle-style signals where applicable.
  • Network statistics — request counts, byte transfers, and response-time style signals per navigation.
  • Navigation types — goto, link clicks, back/forward, SPA route changes, and hash changes.

That combination helps teams distinguish “the app never loaded” from “the app loaded but the wrong route rendered” without re-running the suite locally.
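To make the fixture swap concrete, here is a minimal spec sketch. The import path comes from the E2E Testing (Browser) docs; the URL, link name, and assertion are placeholders, not examples from TestRelic's documentation.

```typescript
// Hypothetical spec — only the fixture import path is from the docs;
// the app URL and selectors below are illustrative placeholders.
import { test, expect } from '@testrelic/playwright-analytics/fixture';

test('checkout route renders after SPA navigation', async ({ page }) => {
  await page.goto('https://example.com');                      // recorded as a "goto" navigation
  await page.getByRole('link', { name: 'Checkout' }).click();  // recorded as a link click / SPA route change
  await expect(page).toHaveURL(/checkout/);                    // a failure here now carries the navigation timeline
});
```

Because the fixture re-exports `test` and `expect`, existing specs usually only need the import line changed.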

API-only tests, unified runs, and combined JSON

Not every suite drives a browser for every test. The Get Started documentation includes an API testing setup path: configure the reporter with options such as trackApiCalls, captureRequestBody, and captureResponseBody, and compose tests with testRelicApiFixture from @testrelic/playwright-analytics/api-fixture as documented there.
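A reporter configuration for the API path might look like the sketch below. The option names (`trackApiCalls`, `captureRequestBody`, `captureResponseBody`) are the ones named in the Get Started docs; the surrounding structure is standard Playwright reporter wiring, and the values are placeholders.

```typescript
// playwright.config.ts — API-testing sketch. Only the three capture options
// are taken from the docs; defaults and values here are assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'],
    ['@testrelic/playwright-analytics', {
      trackApiCalls: true,
      captureRequestBody: true,   // review redaction rules before enabling in CI
      captureResponseBody: false, // keeps report size down for chatty APIs
    }],
  ],
});
```

Specs on this path compose with `testRelicApiFixture` from `@testrelic/playwright-analytics/api-fixture`, as described in the Get Started documentation.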

When you mix browser and API coverage, the Unified Test Reports (Browser + API) guide explains how a single JSON report can contain:

  • Navigation timeline entries (URLs, navigation types, durations, network statistics).
  • Pass/fail/flaky/skipped results for every test, including tests that used only the request fixture.
  • Failure diagnostics and CI metadata in the same schema family as other TestRelic reports.
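The bullets above can be pictured as a report shape. The interfaces below are a hypothetical sketch inferred from those bullets; the actual schema is defined by the Unified Test Reports guide, not by this code.

```typescript
// Hypothetical shape only — field names are assumptions mirroring the
// bullets above, not the documented TestRelic schema.
interface NavigationEntry {
  url: string;
  navigationType: 'goto' | 'link' | 'back-forward' | 'spa-route' | 'hash';
  durationMs: number;
  network: { requestCount: number; bytesTransferred: number };
}

interface UnifiedReport {
  tests: Array<{
    title: string;
    status: 'passed' | 'failed' | 'flaky' | 'skipped';
    navigations: NavigationEntry[];  // empty for request-only API tests
  }>;
  ci?: Record<string, string>;       // branch, commit, build id, etc.
}

// A request-only API test simply contributes no navigation entries:
const sample: UnifiedReport = {
  tests: [{ title: 'health check', status: 'passed', navigations: [] }],
};
```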

Bookmark that page for the next time your team needs combined E2E and API Playwright analytics in one artifact.

Install paths: local-only vs cloud-backed

The Introduction outlines two onboarding tracks:

  1. Local reporting — install the package, add the reporter to playwright.config.ts, run tests; artifacts land under paths like ./test-results/ as described in the installation guide.
  2. With the cloud platform — sign up at testrelic.ai, complete onboarding, add your API key to reporter configuration, and uploads happen automatically on subsequent runs (see intro for the ordered checklist).
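The two tracks differ only in configuration. The sketch below assumes the reporter accepts `outputPath` (documented on the Configuration page) and an API-key field; the key's exact option name and the artifact filename are placeholders, so check the installation guide for the real fields.

```typescript
// playwright.config.ts — minimal sketch of the two onboarding tracks.
// outputPath is documented; the filename and apiKey field are assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['@testrelic/playwright-analytics', {
      // Track 1 (local only): artifacts land under ./test-results/
      outputPath: './test-results/analytics.json',
      // Track 2 (cloud): set a key and uploads happen on subsequent runs.
      // Leaving it undefined keeps the reporter in local-only mode.
      apiKey: process.env.TESTRELIC_API_KEY,
    }],
  ],
});
```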

The Cloud Platform Quickstart expands the cloud path for teams that want dashboards, AI features, and integrations—not just local files.

Configuration as the single source of truth

Every tuning knob for the Playwright reporter lives on the Configuration page: output location, stack traces, code snippets around failures, network capture toggles, streaming vs batched reporting, redaction rules, custom testRunId, custom metadata, and more. The page groups minimal, development, CI, performance-optimized, and redaction patterns so platform engineers can copy a baseline and tighten it for regulated environments.

Common questions—changing outputPath, disabling network tracking, filtering navigation types, attaching metadata—are answered in the “What does each option do?” section on that same page, plus a Troubleshooting section for when reports look empty or oversized.
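As an illustration of how those knobs compose, here is a hypothetical CI baseline. It uses only options the Configuration page names (output location, body capture, custom testRunId, custom metadata), but the exact option spellings are assumptions; copy the real baseline from that page rather than this sketch.

```typescript
// Hypothetical CI reporter entry — option names are assumptions based on
// the knobs listed on the Configuration page, not the documented schema.
const testRelicCiReporter = ['@testrelic/playwright-analytics', {
  outputPath: './test-results/analytics.json',            // filename is a placeholder
  testRunId: process.env.GITHUB_RUN_ID,                   // custom run id from CI
  metadata: { branch: process.env.GITHUB_REF_NAME ?? 'local' },
  captureResponseBody: false,                             // keep CI reports small
}] as const;
```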

From JSON to dashboards and AI

Once runs reach the cloud, the Cloud Platform Overview describes how the same underlying data powers the test runs dashboard, session workspace (timeline, console, network, video where available), AI & intelligence features, test navigation maps, monitoring (smoke, regression, nightly), and integrations (GitHub Actions, Jira, Grafana Loki, Amplitude, TestMu AI, BrowserStack). Your Playwright reporter remains the producer; the platform is the consumer for org-wide analytics.

FAQ: Where do I set the analytics JSON output path?

Use the reporter options object in playwright.config.ts. The Configuration page documents outputPath and related fields under “How do I change where the report is saved?”

FAQ: How do I enable automatic navigation tracking in specs?

Switch imports to @testrelic/playwright-analytics/fixture as shown in E2E Testing (Browser) and the copy-paste prompts on Get Started.

FAQ: Can I run Playwright analytics without sending data to the cloud?

Yes. The Introduction “Local reporting only” path installs the reporter and generates local reports without requiring cloud signup.

FAQ: What belongs in unified vs browser-only reports?

See Unified Test Reports (Browser + API) for when combined page + request fixtures produce one JSON artifact versus separate browser or API reports.