# AI Insights & Artifacts
TestRelic's AI produces two categories of structured output: session insights (automatically generated analysis of individual test sessions) and artifacts (structured outputs generated through AI chat interactions).
## Session AI Insights
Session insights appear in the Insights tab of the Session Workspace. They are automatically generated when you open a session on the Growth plan.
> **Note:** AI Insights require the Growth plan. See Plans & Billing.
### What insights include
| Insight type | Description |
|---|---|
| Defects | AI-identified defects in the test session — categorized by severity, likely cause, and whether the issue is a test problem or an application bug |
| User impact | Assessment of the user-visible impact if the failing behavior reached production |
| Jira linkage | Matches the detected defect against open Jira issues (when the Jira integration is connected) |
| MTTR panel | Mean Time to Resolution estimate based on similar historical failures in your repository |
### Insights panel components
The InsightsPanel in the Session Workspace renders:
- A list of detected defects with severity labels and expandable detail sections.
- User impact summary — a brief description of what end users would experience.
- Linked Jira issues — existing tickets that match the failure pattern.
- MTTR estimate with a confidence indicator.
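To make the panel's contents concrete, here is a minimal sketch of what a session insight payload could look like. All type and field names below are illustrative assumptions for this doc, not TestRelic's published API schema.

```typescript
// Hypothetical shape of a session insight payload.
// Field names are assumptions, not TestRelic's actual API.
type Severity = "critical" | "major" | "minor";

interface Defect {
  title: string;
  severity: Severity;
  likelyCause: string;
  isTestProblem: boolean; // true = test problem, false = application bug
}

interface SessionInsights {
  defects: Defect[];
  userImpact: string;           // user-visible impact summary
  linkedJiraIssues: string[];   // keys of matching open Jira issues
  mttrEstimateMinutes: number;  // MTTR estimate from similar failures
  mttrConfidence: "low" | "medium" | "high";
}

// Tally defects by severity, as the panel's severity labels would.
function countBySeverity(insights: SessionInsights): Record<Severity, number> {
  const counts: Record<Severity, number> = { critical: 0, major: 0, minor: 0 };
  for (const d of insights.defects) counts[d.severity] += 1;
  return counts;
}
```

The grouping helper mirrors how the defect list is summarized in the UI; the actual rendering logic is internal to TestRelic.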
## AI Artifacts
Artifacts are structured outputs produced by the AI during a conversation in Ask AI or the AI Assistant. Each artifact appears in a dedicated rendering panel alongside the chat response.
### Artifact types

#### Dashboard

A rendered analytics dashboard with multiple metrics panels and charts. Useful for generating executive-facing quality summaries.

Typical trigger: "Create a dashboard showing pass rates, flaky tests, and failure trends for this week."

#### Report

A formatted test quality report with sections, tables, and analysis. Exported as readable markdown.

Typical trigger: "Generate a weekly test quality report for the payments team."

#### Test Plan

A structured test plan document with test scenarios, steps, expected results, and coverage notes.

Typical trigger: "Write a test plan for the user registration flow covering happy path, validation errors, and duplicate email scenarios."

#### Code

A code snippet rendered with syntax highlighting. Commonly used for new test code, reporter configurations, or CI workflow snippets.

Typical trigger: "Write a Playwright test for the login flow."

#### Data Table

A tabular breakdown of test data — useful for comparing test metrics across repositories, branches, or time periods.

Typical trigger: "Show me a table of failure rates by repository for the last 30 days."

#### Chart

An inline data visualization (bar, line, or pie chart) rendered in the artifact panel.

Typical trigger: "Plot pass rate trends for the last 4 weeks."

#### Navigation Paths

A visual map of test navigation paths, similar to the Test Navigation view, but generated as part of an AI response.

Typical trigger: "Show me the navigation coverage gaps in the checkout flow tests."
### Artifact presentation
Artifacts render in the right-hand panel of the Ask AI page, separate from the chat messages. Multiple artifacts from the same conversation are stacked and navigable. Each artifact includes:
- A title (set by the AI based on the request)
- The artifact type badge
- The rendered content
- A creation timestamp
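The four elements above can be modeled as a simple record. This is an illustrative sketch only; the type and function names are assumptions, not TestRelic's published schema, and the stacking order shown is one plausible choice.

```typescript
// Illustrative model of an artifact as shown in the panel.
// Names are assumptions for this doc, not a documented API.
type ArtifactType =
  | "dashboard" | "report" | "test-plan" | "code"
  | "data-table" | "chart" | "navigation-paths";

interface Artifact {
  title: string;      // set by the AI based on the request
  type: ArtifactType; // rendered as the type badge
  content: string;    // the rendered content
  createdAt: Date;    // creation timestamp
}

// Order a conversation's artifacts newest-first for panel navigation.
function stackArtifacts(artifacts: Artifact[]): Artifact[] {
  return [...artifacts].sort(
    (a, b) => b.createdAt.getTime() - a.createdAt.getTime()
  );
}
```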
## Feedback on AI outputs
Every AI message and artifact can be rated with thumbs up or thumbs down feedback. This feedback is stored and used to improve model quality over time.
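As a rough sketch, a stored feedback record might carry the rated target, the rating, and a timestamp. The shape below is hypothetical; TestRelic's actual storage format is not documented here.

```typescript
// Hypothetical feedback record; the real storage format is internal.
type Rating = "up" | "down";
type TargetKind = "message" | "artifact";

interface AiFeedback {
  targetId: string;     // id of the rated message or artifact (assumed)
  targetKind: TargetKind;
  rating: Rating;       // thumbs up or thumbs down
  submittedAt: string;  // ISO 8601 timestamp
}

function buildFeedback(
  targetId: string,
  targetKind: TargetKind,
  rating: Rating
): AiFeedback {
  return { targetId, targetKind, rating, submittedAt: new Date().toISOString() };
}
```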