
Ask AI in TestRelic: conversational analysis, @ context, artifacts, and streaming

TestRelic Team (TestRelic Maintainers) · 5 min read

Ask AI is TestRelic’s primary full-page conversational interface for turning natural language into answers about your organization’s tests, runs, and quality signals—and, when appropriate, into structured artifacts you can share or iterate on. The authoritative specification lives in the Ask AI documentation; this article expands that material with additional cross-links for teams evaluating Ask AI against IDE-only tooling such as the TestRelic MCP server.

Access, URLs, and plan gating

Per the Ask AI doc:

  • Open Ask AI from the left sidebar in the product.
  • Routes follow /ai for a new conversation and /ai/:conversationId when resuming history.

Plan requirement: Ask AI requires the Growth plan. Pricing and limits are documented on Plans & Billing. The Introduction includes a high-level plan comparison table (Starter vs Growth) for SDK, cloud, AI, storage, and retention—use that table when stakeholders ask what unlocks Ask AI versus reporter-only usage.

Starting conversations and example prompts

Ask AI streams token-by-token responses after you submit a prompt. The documentation ships copy-ready examples:

  1. Analyze recent failures — e.g. common failure patterns over seven days scoped to a repository.
  2. Generate a test plan — structured scenarios such as checkout happy path, payment failure, inventory edge cases.
  3. Summarize a specific run — distinguish infrastructure noise from product defects using @ context (below).

Teams onboarding many engineers should paste these patterns into internal playbooks so prompts stay consistent and auditable.

@ mentions: precision without leaking unrelated data

The @ context picker (documented in a table on the Ask AI page) attaches typed entities so the model grounds answers in your org:

  • @repo: Repository scope
  • @test_run: Run-level diagnostics
  • @test_case: Single test case focus
  • @branch: Branch-scoped analysis
  • @suite: Suite grouping
  • @tag: Tag-filtered views
  • @environment: Environment isolation
  • @integration: Connected integration context

Type @ in the composer to search and attach. This pattern is what makes Ask AI materially different from a generic LLM chat: the platform can invoke tools against those attachments (see “Streaming and tool use” below).
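
As an illustration of how attachments ground a question (the repository, run id, and wording below are invented for this example, not taken from the docs), a scoped prompt might look like:

```text
Summarize @test_run nightly-2024-11-02 in @repo checkout-service:
separate infrastructure flakes from likely product defects, and list
the top three failing @test_case entries with suspected causes.
```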

Conversation history, privacy, and feedback

  • History: Conversations save automatically. The Chat Sidebar lists prior threads; you can resume, rename, or delete them. Conversations are private to your user within the organization—as documented on the Ask AI page.
  • Feedback: Thumbs up / down on each assistant message feeds product improvement loops.

Document these policies in your internal AI governance memo so security reviewers understand data residency at the conversation level.

Structured artifacts beyond plain text

Ask AI can emit artifacts rendered beside the chat. Supported types (from the official table) include:

  • dashboard: Executive-facing quality charts
  • report: Narrative test quality write-ups
  • test_plan: Scenario/step structured plans
  • presentation: Slide-style summaries
  • code: Tests, configs, or snippets
  • data_table: Tabular breakdowns
  • chart: Inline visualizations
  • navigation_paths: Navigation path maps

Deeper styling and behavior notes live under AI Insights & Artifacts. That page also covers session insights: automatic analysis in the Session Workspace Insights tab (Growth plan), with defect, user-impact, Jira linkage, and MTTR panels.

Streaming, tool calling, and resilience

Ask AI’s doc explains the runtime model:

  • The assistant issues tool calls to fetch repository, run, and test-case data; the UI surfaces an indicator while tools run, then merges results into the streamed answer.
  • Streaming uses Server-Sent Events (SSE); the platform retries and resumes if the connection drops mid-generation.
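
The SSE wire format itself is simple: each event is a block of `data:` lines terminated by a blank line. A rough sketch of the parsing a client performs (this is generic SSE handling, not TestRelic's actual client code; reconnection and resume via Last-Event-ID are omitted):

```python
def parse_sse(stream_text: str) -> list[str]:
    """Parse raw SSE text into a list of event payloads.

    Per the SSE format, an event is one or more 'data:' lines followed
    by a blank line; multi-line data within one event is joined with
    newlines. Retry/resume logic is intentionally left out here.
    """
    events, data_lines = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # flush a trailing event that lacked a final blank line
        events.append("\n".join(data_lines))
    return events
```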

For reliability engineers, that means proxy-layer timeouts should be tuned with SSE-friendly settings; do not simply disable streaming outright without product guidance.
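
As an example, in an nginx reverse proxy the commonly recommended SSE-friendly settings look roughly like this; the upstream name, location path, and timeout values are illustrative assumptions, not TestRelic guidance:

```nginx
location /ai {
    proxy_pass http://testrelic_upstream;  # hypothetical upstream name
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;         # never buffer the event stream
    proxy_cache off;
    proxy_read_timeout 300s;     # allow long generations; tune as needed
    chunked_transfer_encoding on;
}
```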

Platform context: where Ask AI sits

The Cloud Platform Overview lists AI & Intelligence as a first-class pillar alongside runs, repositories, test navigation, monitoring, integrations, and administration. Practically, Ask AI consumes the same org-wide ingestion path described there: SDK reporters produce JSON; the cloud stores and indexes it; Ask AI and other AI surfaces read it through governed tools.
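
To make that ingestion path concrete, a reporter payload might be shaped roughly like the JSON below. Every field name here is a guess for illustration only; the actual schema lives in the SDK documentation:

```json
{
  "run_id": "example-run-001",
  "repository": "checkout-service",
  "branch": "main",
  "cases": [
    { "name": "payment declines gracefully", "status": "failed", "duration_ms": 812 }
  ]
}
```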

For run-level drill-down before or after an Ask AI session, send engineers to the Dashboard & Test Runs and Session Workspace documentation.

Ask AI vs TestRelic MCP

  • Surface: Ask AI is a full-page chat in the web app; MCP exposes tools to IDE / agent hosts.
  • Primary user: Ask AI serves humans, with optional sharing; MCP targets automation-first workflows.
  • Credential: Ask AI uses product login plus plan entitlement; MCP uses a tr_mcp_ token plus caps.
  • Strength: Ask AI offers artifacts and UI affordances; MCP offers inline code and repo tasks.

Both can leverage the same underlying TestRelic data; they are complementary, not interchangeable.

FAQ: Which plan unlocks Ask AI?

Growth, per Ask AI and Plans & Billing.

FAQ: How do I scope a question to one repository?

Attach @repo (and other @ types as needed) using the context picker documented in Ask AI.

FAQ: What happens if SSE disconnects mid-answer?

The platform automatically retries and resumes streaming, per Ask AI.

FAQ: Where do session-level AI insights appear?

In the Session Workspace Insights tab; details in AI Insights & Artifacts (Growth plan).