Scan report format

Hardproof writes a scan report to scan.json (plus an event stream to scan.events.jsonl) under your chosen --out directory.

Schema

Top-level shape

{
  "schema_version": "x07.mcp.scan.report@0.4.0",
  "tool": "hardproof",
  "tool_version": "0.4.0-beta.4",
  "report_kind": "scan",
  "target": { "kind": "mcp_server", "transport": "streamable_http", "ref": "…", "meta": {} },
  "status": "warn",
  "score_available": true,
  "score_mode": "partial",
  "score_truth_status": "partial",
  "overall_score": null,
  "partial_score": 94,
  "dimension_coverage": {
    "conformance": true,
    "security": true,
    "performance": true,
    "reliability": true,
    "trust": false
  },
  "unknown_dimensions": ["trust"],
  "partial_reasons": ["TRUST-NOT-EVALUABLE", "SERVER-JSON-MISSING", "WEIGHT-COVERAGE-BELOW-FULL", "UNKNOWN-DIMENSIONS"],
  "gating_reasons": ["TRUST-NOT-EVALUABLE", "SERVER-JSON-MISSING", "WEIGHT-COVERAGE-BELOW-FULL", "UNKNOWN-DIMENSIONS"],
  "dimensions": [ /* conformance, reliability, performance, security, trust */ ],
  "usage_metrics": { /* token/context metrics + usage_mode truth class (+ estimator/tokenizer/trace metadata) */ },
  "findings": [ /* codes + evidence + suggested_fix */ ],
  "artifacts": [ /* referenced files (JSON/JUnit/HTML/SARIF, etc) */ ],
  "report_digest": "…",
  "run_id": "…"
}
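
A consumer can key off score_mode to decide which score field to surface. The sketch below is a minimal illustration of that rule using a hand-built sample that mirrors the shape above; public_score is a hypothetical helper, not part of Hardproof.

```python
import json

# Illustrative sample mirroring a subset of the report shape above.
sample = json.loads("""
{
  "schema_version": "x07.mcp.scan.report@0.4.0",
  "status": "warn",
  "score_mode": "partial",
  "overall_score": null,
  "partial_score": 94,
  "unknown_dimensions": ["trust"]
}
""")

def public_score(report: dict):
    # Hypothetical consumer logic: trust overall_score only in "full" mode.
    # In "partial" mode overall_score stays null, so fall back to the
    # machine-readable partial_score (a comparison aid, not the public score).
    if report.get("score_mode") == "full":
        return report.get("overall_score")
    return report.get("partial_score")

print(public_score(sample))  # 94 for this partial-mode sample
```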

Notes

  • dimensions[] contains the five dimension results; usage_metrics is an overlay focused on token/context footprint.
  • score_mode is the authoritative public truth field. Partial scans keep overall_score at null; partial_score remains a machine-readable comparison aid rather than the primary public score.
  • Sample viewer: /hardproof/report-viewer
  • findings[] is the stable place to look for actionable problems (codes, evidence, fixes).
  • artifacts[] provides a review-friendly index of emitted files referenced by findings and dimensions.
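
As a sketch of how findings[] might be consumed, the snippet below filters for entries that ship a concrete fix. The field names code, evidence, and suggested_fix are assumed from the schema comment above; the entries themselves are invented for illustration.

```python
# Hypothetical findings entries; field names ("code", "evidence",
# "suggested_fix") are assumed from the schema comment, not confirmed.
findings = [
    {"code": "SERVER-JSON-MISSING",
     "evidence": "no server.json was found for the target",
     "suggested_fix": "publish a server.json for the server"},
    {"code": "TRUST-NOT-EVALUABLE",
     "evidence": "trust dimension had no evaluable signals",
     "suggested_fix": None},
]

def actionable(findings: list[dict]) -> list[str]:
    # Keep only findings that carry a concrete suggested fix.
    return [f["code"] for f in findings if f.get("suggested_fix")]

print(actionable(findings))  # ['SERVER-JSON-MISSING']
```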
