# Ask AI in TestRelic: conversational analysis, @ context, artifacts, and streaming
Ask AI is TestRelic’s primary full-page conversational interface for turning natural language into answers about your organization’s tests, runs, and quality signals—and, when appropriate, into structured artifacts you can share or iterate on. The authoritative specification lives in the Ask AI documentation; this article expands that material with additional cross-links for teams evaluating Ask AI against IDE-only tooling such as the TestRelic MCP server.
## Access, URLs, and plan gating
Per the Ask AI doc:
- Open Ask AI from the left sidebar in the product.
- Routes follow `/ai` for a new conversation and `/ai/:conversationId` when resuming history (see the link-builder sketch just below).
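If internal tooling (runbooks, dashboards, chat bots) links into Ask AI, a small helper keeps those two routes consistent. A minimal sketch in TypeScript; only the `/ai` and `/ai/:conversationId` shapes come from the Ask AI doc, while the host and function names are illustrative:

```typescript
// Builds Ask AI deep links. Only the /ai and /ai/:conversationId route
// shapes come from the Ask AI doc; the host and names are placeholders.
const ASK_AI_BASE = "https://app.testrelic.example"; // hypothetical host

// New conversation: /ai
function newAskAiUrl(): string {
  return `${ASK_AI_BASE}/ai`;
}

// Resume an existing thread: /ai/:conversationId
function resumeAskAiUrl(conversationId: string): string {
  return `${ASK_AI_BASE}/ai/${encodeURIComponent(conversationId)}`;
}
```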
Plan requirement: Ask AI requires the Growth plan. Pricing and limits are documented on Plans & Billing. The Introduction includes a high-level plan comparison table (Starter vs Growth) for SDK, cloud, AI, storage, and retention—use that table when stakeholders ask what unlocks Ask AI versus reporter-only usage.
## Starting conversations and example prompts
Ask AI streams token-by-token responses after you submit a prompt. The documentation ships copy-ready examples:
- Analyze recent failures — e.g. common failure patterns over seven days scoped to a repository.
- Generate a test plan — structured scenarios such as checkout happy path, payment failure, inventory edge cases.
- Summarize a specific run — distinguish infrastructure noise from product defects using @ context (see below).
Teams onboarding many engineers should paste these patterns into internal playbooks so prompts stay consistent and auditable.
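One way to capture that playbook is as a small prompt catalog checked into an internal repo. The sketch below is our own convention, with prompt wording that paraphrases the doc's copy-ready examples; nothing about this shape is a TestRelic format:

```typescript
// Hypothetical internal prompt catalog; the wording paraphrases the
// doc's copy-ready examples, and the structure is our own convention.
interface AskAiPlaybookEntry {
  name: string;
  prompt: string;
}

const askAiPlaybook: AskAiPlaybookEntry[] = [
  {
    name: "Analyze recent failures",
    prompt: "What are the most common failure patterns in @repo over the last 7 days?",
  },
  {
    name: "Generate a test plan",
    prompt: "Draft a test plan for checkout: happy path, payment failure, inventory edge cases.",
  },
  {
    name: "Summarize a specific run",
    prompt: "Summarize @test_run and separate infrastructure noise from product defects.",
  },
];
```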
## @ mentions: precision without leaking unrelated data
The @ context picker (documented in a table on the Ask AI page) attaches typed entities so the model grounds answers in your org:
| Context | Role |
|---|---|
| `@repo` | Repository scope |
| `@test_run` | Run-level diagnostics |
| `@test_case` | Single test case focus |
| `@branch` | Branch-scoped analysis |
| `@suite` | Suite grouping |
| `@tag` | Tag-filtered views |
| `@environment` | Environment isolation |
| `@integration` | Connected integration context |
Type @ in the composer to search and attach. This pattern is what makes Ask AI materially different from a generic LLM chat: the platform can invoke tools against those attachments (see “Streaming and tool use” below).
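As a concrete illustration, a scoped question might read as follows; the repository and run identifiers are placeholders:

```
Using @repo checkout-service and @test_run 8123: summarize the failures
and tell me whether the spike is infrastructure noise or a product defect.
```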
## Conversation history, privacy, and feedback
- History: Conversations save automatically. The Chat Sidebar lists prior threads; you can resume, rename, or delete them. Conversations are private to your user within the organization—as documented on the Ask AI page.
- Feedback: Thumbs up / down on each assistant message feeds product improvement loops.
Document these policies in your internal AI governance memo so security reviewers understand data residency at the conversation level.
## Structured artifacts beyond plain text
Ask AI can emit artifacts rendered beside the chat. Supported types (from the official table) include:
| Artifact type | Typical use |
|---|---|
| `dashboard` | Executive-facing quality charts |
| `report` | Narrative test quality write-ups |
| `test_plan` | Scenario/step structured plans |
| `presentation` | Slide-style summaries |
| `code` | Tests, configs, or snippets |
| `data_table` | Tabular breakdowns |
| `chart` | Inline visualizations |
| `navigation_paths` | Navigation path maps |
Deeper styling and behavior notes live under AI Insights & Artifacts, which also covers session insights (automatic analysis in the Session Workspace Insights tab on Growth—see that page for defect, user-impact, Jira linkage, and MTTR panels).
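If you post-process conversations or artifacts programmatically, the identifiers in the table above map naturally onto a union type. A sketch; the `Artifact` wrapper fields are assumptions for illustration, not an official TestRelic schema:

```typescript
// The union values come from the artifact table above; the wrapper
// interface is an assumed shape for illustration, not an official schema.
type ArtifactType =
  | "dashboard"
  | "report"
  | "test_plan"
  | "presentation"
  | "code"
  | "data_table"
  | "chart"
  | "navigation_paths";

interface Artifact {
  type: ArtifactType;
  title: string;    // assumed field
  payload: unknown; // rendering payload varies by type
}
```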
## Streaming, tool calling, and resilience
Ask AI’s doc explains the runtime model:
- The assistant issues tool calls to fetch repository, run, and test-case data; the UI surfaces an indicator while tools run, then merges results into the streamed answer.
- Streaming uses Server-Sent Events (SSE); the platform retries and resumes if the connection drops mid-generation.
For reliability engineers, that means timeouts should be tuned at the proxy layer with SSE-friendly settings—not by disabling streaming outright without product guidance.
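For intuition, the sketch below shows how a browser-side SSE consumer behaves; the endpoint path and query parameter are hypothetical, but the reconnect behavior is standard `EventSource` semantics, which is exactly what overly aggressive proxy buffering or read timeouts disrupt:

```typescript
// Illustrative SSE consumer; /ai/stream and its query parameter are
// hypothetical. Native EventSource reconnects on its own after a drop.
const source = new EventSource("/ai/stream?conversationId=abc123");

let answer = "";
source.onmessage = (event: MessageEvent) => {
  answer += event.data; // append each streamed token chunk
};

source.onerror = () => {
  // Fires on disconnects; EventSource schedules its own retry, and the
  // server can resume from the Last-Event-ID header it receives back.
  console.warn("SSE stream interrupted; reconnect pending");
};
```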
## Platform context: where Ask AI sits
The Cloud Platform Overview lists AI & Intelligence as a first-class pillar alongside runs, repositories, test navigation, monitoring, integrations, and administration. Practically, Ask AI consumes the same org-wide ingestion path described there: SDK reporters produce JSON; the cloud stores and indexes it; Ask AI and other AI surfaces read it through governed tools.
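To make that pipeline concrete, a reporter payload might look roughly like the sketch below; every field name here is an assumption for illustration, and the real schema is defined by the SDK documentation:

```typescript
// Assumed reporter payload shape, for illustration only; consult the SDK
// docs for the actual JSON that TestRelic reporters emit.
interface ReportedTestRun {
  repo: string;
  branch: string;
  suite: string;
  environment: string;
  cases: Array<{
    name: string;
    status: "passed" | "failed" | "skipped";
    durationMs: number;
  }>;
}
```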
For run-level drill-down before or after an Ask AI session, send engineers to the Dashboard & Test Runs and Session Workspace documentation.
## Ask AI vs TestRelic MCP
| Dimension | Ask AI | MCP |
|---|---|---|
| Surface | Web app full-page chat | IDE / agent host tools |
| Primary user | Human + optional sharing | Automation-first workflows |
| Credential | Product login + plan | `tr_mcp_` token + capabilities |
| Strength | Artifacts + UI affordances | Inline code + repo tasks |
Both can leverage the same underlying TestRelic data; they are complementary, not interchangeable.
## Related documentation
- Ask AI
- Plans & Billing
- Introduction (plan table + onboarding links)
- Cloud Platform Overview
- AI Insights & Artifacts
- Session Workspace
- Dashboard & Test Runs
- TestRelic MCP overview
### FAQ: Which plan unlocks Ask AI?
Growth, per Ask AI and Plans & Billing.
### FAQ: How do I scope a question to one repository?
Attach @repo (and other @ types as needed) using the context picker documented in Ask AI.
### FAQ: What happens if SSE disconnects mid-answer?
The platform automatically retries and resumes streaming, per Ask AI.
### FAQ: Where do session-level AI insights appear?
In the Session Workspace Insights tab; details in AI Insights & Artifacts (Growth plan).
