White Papers

    Tavri Research

    Technical papers on AI safety infrastructure, auditability, and critical-infrastructure workflows. Built for engineers, operators, and regulated environments.

    January 2026

    Inference Authority: A Conceptual Framework for Governance of Safety-Critical Artificial Intelligence Systems

A conceptual framework introducing inference authority as a governance consideration in safety-critical AI systems, distinguishing authoritative from non-authoritative inference pathways and describing controlled transitions between inference sources.

Inference Governance · Safety-Critical AI · Architecture
    January 2026

    AI-Assisted Asset Intelligence Supporting FAC-008 Facility Rating Workflows

    How AI-assisted nameplate extraction can support traceable data inputs for facility rating processes, while responsibility for rating determination and compliance remains with the asset owner.

FAC-008 · Facility Ratings · Nameplate Data · Auditability
    January 2026

    Tavri Asset Intelligence Schema (TAIS) v0.1

    Open specification

    Open, implementation-agnostic schema for structuring asset evidence, traceability, and recorded engineering artifacts in regulated infrastructure workflows (including FAC-008 packaging patterns).
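As a rough sketch of what an evidence-linked record in this spirit could look like: the field names below are invented for illustration (the actual TAIS v0.1 field set is defined in the specification itself), but the pattern of linking an artifact to its source evidence and giving it a content hash for traceability matches the goals described above.

```python
import hashlib
import json

# Hypothetical evidence-linked record, loosely in the spirit of TAIS.
# Field names are illustrative only; consult the TAIS v0.1 spec for the
# real schema.
evidence = {
    "artifact_type": "facility_rating_input",
    "derived_from": ["doc/nameplate-001.jpg", "doc/test-report-114.pdf"],
    "recorded_by": "engineer@example.com",
    "recorded_at": "2026-01-15T00:00:00Z",
}

# A content hash over the canonicalized record gives it a stable identity,
# so downstream artifacts can reference exactly this version of the evidence.
evidence["content_hash"] = hashlib.sha256(
    json.dumps(evidence, sort_keys=True).encode()
).hexdigest()

print(evidence["content_hash"])
```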

Open Specification · JSON Schema · Evidence-Linked · FAC-008
    Federal Submission

    Standards & Policy Engagement

    Tavri contributes to emerging standards and federal discussions on governance of AI systems, including authoritative output control and model transition safety.

    March 2026

    Response to NIST Request for Information on AI Agent Security

    Tavri founder Joshua A. Wright submitted formal recommendations to the National Institute of Standards and Technology regarding governance of model transitions and authoritative output in safety-critical AI systems, including non-authoritative parallel evaluation, controlled authority transfer, authority revocation, deterministic enforcement layers, and auditability.

    The submission describes inference authority as the system-level designation of which model or compute pathway is permitted to produce authoritative output during live operation.
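A minimal sketch of that idea (not Tavri's implementation, and class and method names are invented for illustration): exactly one pathway is designated authoritative at any time, other models run in non-authoritative parallel evaluation, and authority transfer and revocation are explicit, logged operations.

```python
class AuthorityRegistry:
    """Toy model of inference authority: one pathway holds authority;
    transfers and revocations are explicit and recorded for audit."""

    def __init__(self, initial: str):
        self._authoritative = initial
        self.log: list[str] = []

    def is_authoritative(self, pathway: str) -> bool:
        return pathway == self._authoritative

    def transfer(self, new: str) -> None:
        """Controlled transfer of authority to a new pathway."""
        self.log.append(f"transfer {self._authoritative} -> {new}")
        self._authoritative = new

    def revoke(self, fallback: str) -> None:
        """Revoke current authority and fall back to a known pathway."""
        self.log.append(f"revoke {self._authoritative}, fallback {fallback}")
        self._authoritative = fallback

registry = AuthorityRegistry(initial="model-v1")

# model-v2 can run in parallel, but its output is non-authoritative.
assert not registry.is_authoritative("model-v2")

registry.transfer("model-v2")          # controlled authority transfer
registry.revoke(fallback="model-v1")   # revocation back to the prior model

assert registry.is_authoritative("model-v1")
```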

    View submission

    Want a paper on a specific workflow or regulatory context?

    Contact Tavri →