Tavri Research
Technical papers on AI safety infrastructure, auditability, and critical-infrastructure workflows. Built for engineers, operators, and regulated environments.
Inference Authority: A Conceptual Framework for Governance of Safety-Critical Artificial Intelligence Systems
A conceptual framework introducing inference authority as a governance consideration in safety-critical AI systems, distinguishing authoritative from non-authoritative inference pathways and describing controlled transitions between inference sources.
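As a minimal sketch only, not the paper's formulation: the Python below illustrates the distinction between an authoritative and a non-authoritative inference pathway and a controlled transition between them. The names AuthorityState, InferencePathway, and AuthorityRouter are hypothetical and are not drawn from the framework.

```python
from dataclasses import dataclass
from enum import Enum


class AuthorityState(Enum):
    AUTHORITATIVE = "authoritative"          # output may drive live operation
    NON_AUTHORITATIVE = "non_authoritative"  # output is evaluated but never acted on


@dataclass
class InferencePathway:
    """A model or compute pathway with an explicit authority designation."""
    name: str
    state: AuthorityState = AuthorityState.NON_AUTHORITATIVE


class AuthorityRouter:
    """Keeps exactly one pathway authoritative at any time."""

    def __init__(self, initial: InferencePathway):
        initial.state = AuthorityState.AUTHORITATIVE
        self.active = initial
        self.candidates: list[InferencePathway] = []

    def register_candidate(self, pathway: InferencePathway) -> None:
        # Candidate pathways run alongside the active one but stay non-authoritative.
        pathway.state = AuthorityState.NON_AUTHORITATIVE
        self.candidates.append(pathway)

    def promote(self, candidate: InferencePathway) -> None:
        """Controlled transition: demote the active source, then promote the candidate."""
        if candidate not in self.candidates:
            raise ValueError("candidate was never registered for evaluation")
        self.candidates.remove(candidate)
        self.active.state = AuthorityState.NON_AUTHORITATIVE
        self.candidates.append(self.active)
        candidate.state = AuthorityState.AUTHORITATIVE
        self.active = candidate
```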
AI-Assisted Asset Intelligence Supporting FAC-008 Facility Rating Workflows
How AI-assisted nameplate extraction can support traceable data inputs for facility rating processes, while responsibility for rating determination and compliance remains with the asset owner.
Tavri Asset Intelligence Schema (TAIS) v0.1
Open specification. An implementation-agnostic schema for structuring asset evidence, traceability, and recorded engineering artifacts in regulated infrastructure workflows (including FAC-008 packaging patterns).
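For illustration only: the sketch below shows one possible shape for a traceable asset-evidence entry. The field names are placeholders assumed for this example and do not reflect the actual TAIS v0.1 schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class EvidenceRecord:
    """Illustrative (non-TAIS) shape for a traceable piece of asset evidence."""
    asset_id: str                 # identifier of the physical asset
    source_document: str          # e.g. a nameplate photo or test report reference
    extracted_value: str          # the value taken from the source
    extraction_method: str        # manual entry, OCR, AI-assisted, etc.
    reviewed_by: Optional[str]    # engineer who verified the value, if any
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Example: a nameplate rating captured with its provenance attached.
record = EvidenceRecord(
    asset_id="XFMR-104",
    source_document="nameplate_photo_2024-05.jpg",
    extracted_value="138 kV / 69 kV, 50 MVA",
    extraction_method="ai_assisted_ocr",
    reviewed_by="J. Engineer",
)
```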
Standards & Policy Engagement
Tavri contributes to emerging standards and federal discussions on governance of AI systems, including authoritative output control and model transition safety.
Response to NIST Request for Information on AI Agent Security
Tavri founder Joshua A. Wright submitted formal recommendations to the National Institute of Standards and Technology on the governance of model transitions and authoritative output in safety-critical AI systems. The recommendations cover non-authoritative parallel evaluation, controlled authority transfer, authority revocation, deterministic enforcement layers, and auditability.
The submission describes inference authority as the system-level designation of which model or compute pathway is permitted to produce authoritative output during live operation.
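A minimal sketch of the concepts named above, not the submission's mechanism: the example below gates which pathway may emit authoritative output through a deterministic check and appends authority transfers and revocations to an audit log. The class, method, and pathway names are hypothetical.

```python
import json
from datetime import datetime, timezone
from typing import Optional


class EnforcementLayer:
    """Deterministic gate: only the designated pathway's output is released as
    authoritative; every transfer or revocation is appended to an audit log."""

    def __init__(self, authoritative: str):
        self.authoritative = authoritative
        self.audit_log: list[str] = []

    def _record(self, event: str, **details: str) -> None:
        entry = {"time": datetime.now(timezone.utc).isoformat(), "event": event, **details}
        self.audit_log.append(json.dumps(entry))

    def emit(self, pathway: str, output: str) -> Optional[str]:
        # Non-authoritative pathways may be evaluated in parallel, but their
        # output is suppressed rather than released to live operation.
        if pathway != self.authoritative:
            self._record("suppressed_output", pathway=pathway)
            return None
        return output

    def transfer_authority(self, new_pathway: str) -> None:
        self._record("authority_transfer", previous=self.authoritative, new=new_pathway)
        self.authoritative = new_pathway

    def revoke_authority(self, fallback: str) -> None:
        self._record("authority_revocation", revoked=self.authoritative, fallback=fallback)
        self.authoritative = fallback


# Usage: model B runs in parallel until authority is deliberately transferred.
layer = EnforcementLayer(authoritative="model_a")
layer.emit("model_b", "candidate answer")   # suppressed and logged
layer.transfer_authority("model_b")
layer.emit("model_b", "now authoritative")  # released
```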
View submission
Want a paper on a specific workflow or regulatory context?
Contact Tavri →