Public standard · v0.1

The SheetBrain Spreadsheet Trust Standard

This page defines what SheetBrain means by a business-ready spreadsheet: a workbook evaluated against published integrity rules, with explicit scope, a reproducible verdict, and disclosed limitations.

Version 0.1 · Published 2026-04-28 · Maintained by SheetBrain

Definition

The standard exists so that a SheetBrain audit report can be referenced by an author, recipient, manager, or procurement officer as evidence that a specific evaluation occurred against a specific set of rules at a specific point in time.

A spreadsheet is business-ready when it satisfies both integrity criteria and assessment criteria.

Integrity criteria

  1. No critical formula integrity risks.
  2. No unresolved structural data issues within the scanned range.
  3. No unresolved schema drift between the spreadsheet and prior known-good states, when prior state is recorded.

Assessment criteria

  1. The scan scope is explicit: sheets, ranges, and cells.
  2. The verdict is tied to a published, named, version-pinned detector set.
  3. The limitations of the evaluation are disclosed.
  4. The evaluation is re-runnable under the same detector set against the same source.
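
Taken together, the two criteria groups reduce to a conjunction: every integrity criterion and every assessment criterion must hold. A minimal sketch of that check, using hypothetical boolean field names that are not part of the standard:

```python
def is_business_ready(evaluation: dict) -> bool:
    """Return True only when all integrity and assessment criteria hold.

    The field names below are illustrative assumptions, one per
    criterion in the standard; a real evaluation record may differ.
    """
    integrity = [
        evaluation["no_critical_formula_risks"],
        evaluation["no_structural_data_issues"],
        evaluation["no_unresolved_schema_drift"],
    ]
    assessment = [
        evaluation["scope_explicit"],
        evaluation["detector_set_pinned"],
        evaluation["limitations_disclosed"],
        evaluation["rerunnable"],
    ]
    # Business-ready requires every criterion in both groups.
    return all(integrity) and all(assessment)
```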

Verdicts

Trusted | All integrity and assessment criteria are satisfied for the scanned snapshot.
Has open issues | Non-critical risks remain and should be reviewed before external sharing.
Critical | Material correctness risks exist; the spreadsheet should not be forwarded until fixed.

The verdict applies to the snapshot evaluated. A spreadsheet that was Trusted on Apr 27 may have unresolved issues on May 1 if it has been modified.

Severity classes

Class | Meaning | Verdict consequence
Critical | An issue that materially compromises the correctness of the data or its derived values, such as broken references in active formulas or formulas pointing at the wrong column due to schema drift. | Critical — do not forward until fixed
Open | An issue that does not necessarily produce wrong values today but represents a risk to future correctness or trust, such as mixed data types, duplicate lookup keys, or untracked cross-sheet dependencies. | Has open issues — not recommended for external sharing
Informational | An observation that does not affect verdict but may be useful to the author, such as unused named ranges, sparse columns, or hardcoded constants. | Verdict unaffected
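
The severity-to-verdict mapping above can be sketched as a small reduction over a report's findings. The verdict strings come from the standard; the function and enum names are hypothetical:

```python
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    OPEN = "Open"
    INFORMATIONAL = "Informational"

def verdict(finding_severities) -> str:
    """Reduce the severities of a report's findings to one verdict.

    Illustrative only: the standard defines the classes and verdicts,
    but this exact function is an assumption about how they compose.
    """
    found = set(finding_severities)
    if Severity.CRITICAL in found:
        return "Critical"
    if Severity.OPEN in found:
        return "Has open issues"
    # Informational findings never change the verdict.
    return "Trusted"
```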

The exact list of detectors that map to each class is published as part of the detector set definition.

Detector sets

Every SheetBrain audit report cites the detector set used to produce its verdict.

A detector set is a named, version-pinned, immutable bundle of:

  • The list of detectors active in the set.
  • The severity class assigned to each detector.
  • The verdict thresholds for Critical, Has open issues, and Trusted.
  • A content hash that uniquely identifies the set.
  • A release date.
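
One way to make such a bundle immutable and self-identifying is to freeze the record and derive the content hash from a canonical serialization, so the same bundle always yields the same identifier. A sketch under those assumptions; the field names and the choice of SHA-256 over sorted JSON are illustrative, not specified by the standard:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectorSet:
    """Illustrative detector-set bundle (field names are assumptions)."""
    name: str
    version: str
    release_date: str
    detectors: tuple   # (detector_id, severity_class) pairs
    thresholds: tuple  # verdict threshold rules, left opaque here

    def content_hash(self) -> str:
        # Canonical JSON (sorted keys) makes the hash independent of
        # field ordering, so identical bundles hash identically.
        canonical = json.dumps(
            {
                "name": self.name,
                "version": self.version,
                "release_date": self.release_date,
                "detectors": self.detectors,
                "thresholds": self.thresholds,
            },
            sort_keys=True,
        )
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the dataclass is frozen, publishing a change means constructing a new instance with a new version, which is exactly the release discipline the standard describes.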

Once a detector set version is published, it is immutable. Detectors can be added, modified, or retired only by releasing a new version. This is the mechanism that makes historical reports verifiable.

A SheetBrain report generated under detector set v1.4 can be re-run against v1.4 years later and produce the same verdict, provided the underlying spreadsheet has not changed. If the same spreadsheet is re-evaluated under detector set v2.0, the verdict may differ. That is expected.

The current detector set is cited directly in SheetBrain audit reports. A dedicated detector changelog will be published once multiple detector-set versions exist.

Scope and limitations

SheetBrain checks spreadsheet structure, formulas, type consistency, and selected data-quality risks.

SheetBrain does not:

  • Verify that source data is factually correct, complete, or authorized.
  • Certify business logic, modeling assumptions, or the suitability of the spreadsheet for a particular purpose.
  • Validate external data sources, API integrations, named-range references that resolve outside the workbook, or third-party add-on output.
  • Replace the judgment of a qualified reviewer for material business decisions.

A Trusted verdict means the spreadsheet has passed a defined, reproducible technical evaluation. It does not certify that the conclusions drawn from the spreadsheet are correct.

This boundary is intentional. The standard covers what can be tested deterministically. Everything else remains the responsibility of the author and reviewer.

Report contents

Every SheetBrain audit report contains the following fields so the report can be cited, attached to a decision, or referenced in a contract without ambiguity about what was evaluated.

Field | Description
Verdict | One of: Trusted, Has open issues, Critical.
Verdict timestamp | The moment the evaluation was performed.
Workbook identifier | Workbook name and report-scoped metadata. Stable content fingerprints are intentionally deferred until the privacy model is designed.
Scope | Sheets and ranges evaluated, plus a count of cells scanned.
Detector set | Version, content hash, and detector-set label cited by the report.
Findings | Each finding listed by severity class, with location and explanation.
Limitations | The standard limitations text above, plus any scan-specific limitations.
Report ID | A unique, immutable identifier for the report.
Expiry | The date after which the report should be regenerated to reflect current state.
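
The report fields above could be carried as a simple immutable record; one natural consequence of the Expiry field is a staleness check. A sketch under assumed field names and types, not a schema the standard defines:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AuditReport:
    """Illustrative report record mirroring the field table above."""
    report_id: str
    verdict: str            # "Trusted", "Has open issues", or "Critical"
    verdict_timestamp: str  # moment the evaluation was performed
    workbook_identifier: str
    scope: dict             # sheets, ranges, and cells scanned
    detector_set: str       # e.g. a label plus version and content hash
    findings: tuple
    limitations: str
    expiry: date

    def is_current(self, today: date) -> bool:
        # After the expiry date the report should be regenerated.
        return today <= self.expiry
```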

Versioning

This standard follows semantic versioning.

  • Major versions introduce changes that affect the meaning of the verdict or the structure of reports.
  • Minor versions add detectors, refine severity classifications, or clarify scope without breaking compatibility with prior reports.
  • Patch versions are not used; corrections to a published version are issued as the next minor version.

The current standard version is v0.1. Versions below v1.0 are subject to revision based on feedback from real-world use. Every revision will be published; prior versions will remain accessible.

Reports generated under v0.x will state their version, and recipients should treat them as evaluations under an evolving standard.

How this standard evolves

The SheetBrain Spreadsheet Trust Standard is revised based on:

  • Findings encountered in real spreadsheet evaluations that the current detector set misses.
  • Feedback from authors, recipients, managers, and procurement officers who reference reports.
  • Categories of spreadsheet error that emerge as common across user populations.
  • Coordination with adjacent standards in the data-quality and audit space, where relevant.

Updates are published with a changelog. Material changes to verdict semantics are versioned as a major release.

This standard is maintained by SheetBrain. It is not currently endorsed by any external standards body. Recipients should evaluate the standard's fitness for their purpose before relying on its verdicts in formal procurement, audit, or compliance contexts.

A note on intent

This standard exists because spreadsheets are routinely used to make decisions worth far more than the spreadsheets themselves, and the question of whether a given spreadsheet can be trusted has historically been answered by intuition, seniority, or hope.

A standard does not eliminate that judgment, but it gives recipients a defined surface to evaluate before applying it. "Trusted under SheetBrain Detector Set v1.4 with no critical issues" is a different statement from "looks fine to me." Both involve human review. Only one of them is referenceable.

That is what this standard is for.