How teams use Provara

From EU AI Act compliance to SOC 2 audits to AI agent accountability.

EU AI Act

Article 12 logging for high-risk AI systems

Who

Teams building high-risk AI systems — credit scoring, hiring tools, medical decision support. If your system is in Annex III, you need this.

The requirement

EU AI Act Article 12 requires automatic logging for high-risk AI systems. Logs must enable post-hoc monitoring, be available to competent authorities, and cover the full operational lifecycle. "We have logs" isn't enough — the logs have to be trustworthy.

How Provara helps

Every inference is appended as a signed event. The chain is independently verifiable. When auditors ask for evidence, you don't hand them a CSV; you hand them a cryptographically verifiable chain they can check themselves with a single command.

The signature proves the log wasn't edited after the fact. The chain proves no events were deleted or reordered. That combination is what Article 12 is actually asking for.
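The mechanics behind that claim can be sketched in a few lines of plain Python. This is an illustration of the hash-chain idea only, not Provara's actual event format or signature scheme: each event commits to the hash of the event before it, so deleting, reordering, or editing any event breaks verification from that point forward.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first event

def append(chain, data):
    # Each event's hash covers its payload AND the previous event's
    # hash, which is what links the chain together
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev, "data": data}, sort_keys=True)
    chain.append({
        "prev": prev,
        "data": data,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify(chain):
    # Recompute every link; a deletion, reorder, or edit makes some
    # recomputed hash disagree with the stored one
    prev = GENESIS
    for event in chain:
        body = json.dumps({"prev": prev, "data": event["data"]}, sort_keys=True)
        if event["prev"] != prev:
            return False
        if event["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = event["hash"]
    return True
```

A real implementation adds a signature over each event so only the key holder could have produced it; the hash links alone are what make deletion and reordering detectable.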

eu_ai_act_logging.py
import psmc

vault = psmc.Vault("~/.provara/inference-log")

# Log every high-risk inference
def log_inference(model, input_hash, output, score):
    vault.append(
        type="inference",
        data={
            "model": model,
            "input_hash": input_hash,  # not raw input
            "output": output,
            "risk_score": score,
            "human_oversight": score > 0.8,
        },
        tags=["eu-ai-act", "annex-iii", "article-12"],
    )

Auditor verification requires no Provara access:

$ psmc --vault inference-log verify
Chain integrity: CLEAN (8,441 events, 0 gaps)

SOC 2

Tamper-evident audit trails for Type II compliance

Who

SaaS companies seeking SOC 2 Type II certification. Typically required by enterprise customers before signing.

The requirement

SOC 2 auditors want evidence of controls operating over time. Logs work — but only if they're tamper-evident. A log you could have edited before the audit is not evidence; it's a document with no provenance.

How Provara helps

Every security event, access log, and administrative action is appended as a signed event. The audit trail is verifiable by the auditor directly. You're not asking them to trust your screenshots — you're handing them a chain they can verify with open-source tooling.

What goes in the vault

Authentication events. Permission changes. Data access by admins. Configuration changes. Deployment events. Incident response actions. Anything that would make a SOC 2 auditor ask, "can you prove that happened?"

The key word in SOC 2 Type II is "over time." The timestamp in each event is part of the signed content — it can't be backdated without invalidating the signature.
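Why backdating fails can be shown with a standard-library sketch. An HMAC stands in for Provara's actual signature scheme here, and the key and field names are illustrative: the timestamp sits inside the signed payload, so changing it after the fact produces a signature mismatch.

```python
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # stand-in for the vault's real signing key

def sign_event(data, timestamp):
    # The timestamp is part of the signed payload, not metadata
    # bolted on afterwards
    payload = json.dumps({"ts": timestamp, "data": data}, sort_keys=True)
    sig = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"ts": timestamp, "data": data, "sig": sig}

def verify_event(event):
    # Recompute the signature from the stored fields; a backdated
    # timestamp yields a different payload, hence a different signature
    payload = json.dumps({"ts": event["ts"], "data": event["data"]}, sort_keys=True)
    expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["sig"], expected)
```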

access_log.py
import psmc

vault = psmc.Vault("~/.provara/soc2-audit")  # example path

# Inside a request handler: user_id, resource_id, and
# request come from the framework's request context
vault.append(
    type="access",
    data={
        "actor": user_id,
        "resource": resource_id,
        "action": "read",
        "ip": request.remote_addr,
        "granted": True,
    },
    tags=["soc2", "cc6"],
)

AI agents

Exact replay of what your agents did, and when

Who

Teams running multi-agent workflows, LLM pipelines, or autonomous AI systems that take actions in the real world.

The problem

When an AI agent calls a tool, makes a decision, or updates state, there's often no receipt. Something goes wrong, and you can't reconstruct what happened. You have the outcome. You don't have the chain of reasoning that produced it.

How Provara helps

Instrument your agents to append an event at every decision point. The chain gives you exact replay of what happened, in order, with signatures proving the log wasn't altered after the incident.

agent_loop.py
import hashlib, psmc

def sha256(value):
    # stable digest of any stringable value
    return hashlib.sha256(str(value).encode()).hexdigest()

class AccountableAgent:
    def __init__(self, vault_path, model):
        self.vault = psmc.Vault(vault_path)
        self.model = model

    def decide(self, context):
        decision = self.model.run(context)

        # Append before acting — not after
        self.vault.append(
            type="decision",
            data={
                "context_hash": sha256(context),
                "model": self.model.id,
                "output": decision.action,
                "confidence": decision.score,
                "tool_calls": decision.tools,
            },
            tags=["agent", "production"],
        )

        return decision

    def tool_call(self, tool, args, result):
        self.vault.append(
            type="tool_call",
            data={
                "tool": tool,
                "args_hash": sha256(str(args)),
                "result_hash": sha256(str(result)),
                "success": result.ok,
            },
        )
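Downstream, those events support exact replay. As a sketch, assume the vault's events have been exported as JSON lines (a hypothetical export format; the type and data fields follow the examples above): walking the log in append order reconstructs each decision together with the tool calls that followed it.

```python
import json

def replay(log_lines):
    # Walk an exported JSON-lines log in append order, grouping
    # each decision with the tool calls that came after it
    runs, current = [], None
    for line in log_lines:
        event = json.loads(line)
        if event["type"] == "decision":
            current = {"decision": event["data"], "tool_calls": []}
            runs.append(current)
        elif event["type"] == "tool_call" and current is not None:
            current["tool_calls"].append(event["data"])
    return runs
```

Because the chain fixes the order of events, this reconstruction is the order things actually happened in, not a best guess from timestamps.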
Healthcare AI

HIPAA-friendly audit trails via input hashing

Who

Teams building AI for healthcare — diagnosis support, patient triage, clinical decision support — where recommendations need a verifiable paper trail.

The requirement

Healthcare AI decisions need audit trails for HIPAA compliance and liability. Who made this recommendation? When? What data was used? The answers have to be provable, not reconstructed from memory.

How Provara helps

Log the hash of the input, not the input itself. This gives you a cryptographically provable link between the input and the decision — without storing patient data in the event log. GDPR-compatible, HIPAA-friendly, and still tamper-evident.

If you later need to prove that decision X was made on input Y, present the original input and show that sha256(Y) matches the input_hash in the signed event. The signature proves it wasn't changed after the fact.

clinical_log.py
import hashlib, psmc

vault = psmc.Vault("~/.provara/clinical-ai")

def log_clinical_decision(
    patient_data,  # never stored
    model_id,
    recommendation,
    clinician_id,
):
    # Hash the input — prove the link without storing PHI
    input_hash = hashlib.sha256(
        patient_data.encode()
    ).hexdigest()

    vault.append(
        type="clinical_decision",
        data={
            "input_hash": input_hash,   # not raw data
            "model": model_id,
            "recommendation": recommendation,
            "reviewed_by": clinician_id,
            "overridden": False,
        },
        tags=["hipaa", "clinical", "gdpr-compatible"],
    )
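The verification step described above is a one-line check. The sketch below runs it against a hypothetical event dict; in practice the event would come out of the signed chain, with the signature proving that the input_hash field itself is unmodified.

```python
import hashlib

def proves_input(event, original_input):
    # The log never stored the input; presenting the original and
    # matching its hash proves this is the input the decision saw
    digest = hashlib.sha256(original_input.encode()).hexdigest()
    return digest == event["input_hash"]
```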

Ready to instrument your system?

Set up a vault and start appending tamper-evident events in minutes.