AI security self-audit
15 questions, 5 domains · data, model, prompt, access, response. Runs 100% in your browser · nothing leaves your machine until you choose to contact us.
Data
Have you classified the data that enters the AI system (public, internal, confidential, GDPR personal)?
Is there PII masking before the request reaches the LLM?
How long do you retain prompt + response logs?
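The PII-masking question above can be made concrete with a minimal sketch. This is illustrative only: the regexes and labels are assumptions, and a real deployment would use NER-based detection rather than two patterns.

```python
import re

# Illustrative PII masking applied before a prompt reaches the LLM.
# The patterns below are examples, not a complete PII taxonomy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d \-()]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A masking pass like this also helps the retention question: logs that store only the masked prompt carry far less GDPR risk.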
Model
Where does the model run, and does the contract include a no-training clause?
Is the model version pinned, or are you using `latest`?
Is there a fallback model (router) for when the primary is down or slow?
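Pinning and fallback routing can be sketched in a few lines. The model names and the `call_model` helper here are hypothetical placeholders, not a real SDK; the point is the pattern: a pinned version string and a second model behind an exception handler.

```python
# Pinned version identifiers (assumed examples) - never "latest".
PRIMARY = "primary-model-2024-08-06"
FALLBACK = "fallback-model-2024-10-22"

def route(prompt, call_model, timeout_s=10.0):
    """Try the pinned primary; fall back when it is down or slow.

    call_model(model, prompt, timeout=...) is a hypothetical client
    function supplied by the caller.
    """
    try:
        return call_model(PRIMARY, prompt, timeout=timeout_s)
    except Exception:
        return call_model(FALLBACK, prompt, timeout=timeout_s)
```

In production the router would also distinguish timeouts from auth errors and emit metrics, but the shape stays the same.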
Prompt
Have you tested the system against known prompt injection patterns (OWASP LLM Top 10)?
Is user-submitted content (docs, URLs, emails) handled as untrusted input?
Does the system prompt contain a secret (API key, internal URL, business rule)?
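The untrusted-input question can be sketched as follows. The injection markers and the `<untrusted>` delimiter convention are assumptions for illustration; real screening against the OWASP LLM Top 10 needs far broader pattern and semantic checks.

```python
# A few example phrases associated with prompt injection (OWASP LLM01).
# Illustrative only - not a complete detection list.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def flag_injection(user_text: str) -> bool:
    """Cheap first-pass screen of user-submitted content."""
    lowered = user_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    # User content stays in its own role, inside explicit delimiters,
    # and is never concatenated into the system prompt.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"<untrusted>\n{user_text}\n</untrusted>"},
    ]
```

Keeping secrets out of the system prompt matters here too: a successful injection often starts by asking the model to repeat its instructions.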
Access
Do all LLM calls go through your server (never directly from the client)?
Is there a per-user rate-limit and token cap?
Does the LLM only access data the current user is authorized to see?
Response
Do you validate the response (hallucination, disallowed content, data leakage)?
Is there reverse PII checking on the response (it must not accidentally contain personal data)?
Is there logging + alerting when the response contains personal data or a disallowed term?
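The response-side checks can be combined into one guard, sketched below. The email regex and the disallowed-term list are example assumptions; a real guard would reuse the same PII detector applied on the input side.

```python
import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("llm-response-guard")

# Example patterns only - swap in your actual PII detector and term list.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
DISALLOWED = {"internal-only", "api_key"}

def check_response(text: str) -> bool:
    """Return True if the response is safe to return to the user.

    Unsafe responses are logged so they can feed an alerting pipeline.
    """
    if EMAIL_RE.search(text):
        log.warning("response contains an email address")
        return False
    lowered = text.lower()
    for term in DISALLOWED:
        if term in lowered:
            log.warning("response contains disallowed term: %s", term)
            return False
    return True
```

The warnings here are where the alerting hook attaches: route the logger to your incident channel and every leaked email or disallowed term becomes a ticket rather than a silent failure.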
Want a full audit?
This checklist covers the first 10% · a full hands-on audit adds threat modelling, a data-specific evaluation harness, and a prioritized remediation plan. Budapest-based team, report within 2 weeks.