
Quality Metrics

The Intelligence dashboard calculates a set of quality metrics from your test execution history. This page explains how each metric is computed and how to interpret the results.

Success Trend Chart

The primary visualization is a line chart plotting pass rate over time. Each data point represents one Runner execution.

X-axis: Run index (most recent on the right)
Y-axis: Pass rate (0-100%)
Green zone: >= 95%
Yellow zone: 80-94%
Red zone: < 80%
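The zone thresholds above can be expressed as a small classifier. This is a sketch of the banding logic only; the function name is illustrative, not part of xyva's API:

```python
def zone(pass_rate: float) -> str:
    """Map a pass rate (0-100) to the trend chart's color zone.

    Thresholds follow the chart legend: >= 95 green, 80-94 yellow, < 80 red.
    """
    if pass_rate >= 95:
        return "green"
    if pass_rate >= 80:
        return "yellow"
    return "red"
```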

Run Windows

You can view the chart at three granularity levels:

| Window | Runs Shown | Best For |
| --- | --- | --- |
| 15 | Last 15 executions | Active development, immediate feedback |
| 50 | Last 50 executions | Sprint-level trend analysis |
| 100 | Last 100 executions | Release readiness, long-term health |
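A window is simply the tail of the run history. Assuming each run is stored as a (passed, total) pair, which this page does not specify, the per-run pass rates for a window can be computed like this:

```python
def windowed_pass_rates(runs, window=15):
    """Pass rate (0-100) per run over the most recent `window` executions.

    runs: chronological list of (passed, total) tuples (hypothetical shape).
    """
    return [passed / total * 100.0 for passed, total in runs[-window:]]
```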

Quality Drift Detection

When the pass rate drops by more than 3 percentage points within a window, xyva flags a "Quality Drift" and marks the exact run where the decline started. Cross-reference that run with your git commit log to identify the likely cause.
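The drift check can be sketched as a scan for the first run that falls more than the threshold below the window's running peak. How xyva actually pinpoints "where the decline started" is not documented here, so treat this as one plausible interpretation:

```python
def detect_drift(rates, threshold=3.0):
    """Return the index of the run where a > threshold-point decline began,
    or None if the window never drops that far below its running peak.

    rates: pass rates (0-100) in chronological order within one window.
    """
    peak, peak_idx = rates[0], 0
    for i, r in enumerate(rates):
        if r > peak:
            peak, peak_idx = r, i
        elif peak - r > threshold:
            return peak_idx + 1  # the first run after the peak began the decline
    return None
```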

Score Calculation

The Quality Score is a composite number displayed in the dashboard header:

Quality Score = (Pass Rate x 0.5) + (Stability x 0.3) + (Coverage x 0.2)

Where:
  Pass Rate  = passed tests / total tests (latest run)
  Stability  = % of tests classified as Stable over the window
  Coverage   = % of source files with at least one associated test

| Score Range | Label | Color |
| --- | --- | --- |
| 90-100 | Excellent | Green |
| 75-89 | Good | Blue |
| 50-74 | Needs Attention | Yellow |
| 0-49 | Critical | Red |
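A minimal sketch of the composite, assuming all three inputs are percentages on a 0-100 scale as the definitions above imply (function names are illustrative, not xyva's API):

```python
def quality_score(pass_rate, stability, coverage):
    """Weighted composite per the dashboard formula: 0.5 / 0.3 / 0.2."""
    return pass_rate * 0.5 + stability * 0.3 + coverage * 0.2

def score_label(score):
    """Map a score to its dashboard label band."""
    if score >= 90:
        return "Excellent"
    if score >= 75:
        return "Good"
    if score >= 50:
        return "Needs Attention"
    return "Critical"
```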

Metrics Table

Below the chart, a table lists per-test metrics:

| Column | Description |
| --- | --- |
| Test Name | Full test title from the spec file |
| Pass Rate | Percentage of passes over the window |
| Avg Duration | Mean execution time in seconds |
| Flaky Score | 0-1 scale, higher = more erratic |
| Last Result | Pass, Fail, or Skip from the most recent run |
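This page does not define how the Flaky Score is computed. Purely for intuition, one plausible flip-based measure counts how often a test changes outcome between consecutive runs; this is an assumption, not xyva's actual formula:

```python
def flaky_score(results):
    """Fraction of consecutive run pairs where the outcome flipped (0-1).

    results: chronological list of booleans (True = pass).
    NOTE: illustrative only; xyva's real Flaky Score may be defined differently.
    """
    if len(results) < 2:
        return 0.0
    flips = sum(1 for a, b in zip(results, results[1:]) if a != b)
    return flips / (len(results) - 1)
```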

Sorting

Click any column header to sort. Click again to reverse. This makes it easy to find the slowest, flakiest, or most recently broken tests.
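Sorting happens in the dashboard UI, but the same ranking can be reproduced from exported data. The rows below are made-up examples, not real xyva output:

```python
# Hypothetical exported rows mirroring the metrics table columns.
rows = [
    {"test": "login renders form", "avg_duration": 2.1, "flaky": 0.4},
    {"test": "checkout completes", "avg_duration": 5.7, "flaky": 0.1},
]

# Slowest first (click "Avg Duration", then click again to reverse).
slowest_first = sorted(rows, key=lambda r: r["avg_duration"], reverse=True)

# Flakiest first (click "Flaky Score", descending).
flakiest_first = sorted(rows, key=lambda r: r["flaky"], reverse=True)
```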

Data Retention

xyva retains the last 500 run results in .xyva/run-history/. Older runs are archived to .xyva/run-archive/ and excluded from active metric calculation but remain available for export.
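The retention policy can be sketched as a prune-and-archive step. This assumes one file per run and modification-time ordering, neither of which this page confirms, and the file pattern is hypothetical:

```python
from pathlib import Path
import shutil

def archive_old_runs(history_dir=".xyva/run-history",
                     archive_dir=".xyva/run-archive", keep=500):
    """Keep the newest `keep` run files; move the rest to the archive.

    Sketch only: assumes one JSON file per run, ordered by mtime.
    """
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    runs = sorted(Path(history_dir).glob("*.json"),
                  key=lambda p: p.stat().st_mtime)
    for old in runs[:-keep]:  # empty slice when there are <= keep runs
        shutil.move(str(old), str(archive / old.name))
```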

Minimum Data

At least 5 completed runs are required before metrics are displayed. Before that threshold, the dashboard shows a prompt to run more tests.
