How the AI Security Risk Assessment Works

A practical guide to the methodology, scoring, and standards behind AIRiskAssess.com

What Is This Tool?

AIRiskAssess.com is a free, browser-based tool for security analysts, IT managers, and AI governance teams to systematically evaluate their AI system security posture. No account required, no data stored. The assessment is aligned to 2025–2026 industry standards including NIST AI RMF, NIST AI 100-4 (red teaming), the EU AI Act (including August 2026 high-risk enforcement), OWASP Top 10 for LLMs v2, and NIST AI 100-5 (agentic AI) — giving you a rigorous, up-to-date baseline for any AI deployment.

9 Security Domains · 61 Control Areas · 13 AI Sectors

9 Security Domains Explained

The assessment covers all critical dimensions of AI system security, from foundational data protection to emerging agentic AI risks.

Domain 1 · 8 controls

Data Security

Evaluates how securely AI systems handle, process, and store data.

  • Data collection practices
  • Training data integrity
  • PII handling
  • Data lineage tracking
Domain 2 · 8 controls

Model Security

Assesses robustness of AI models against attack vectors.

  • Model poisoning
  • Adversarial attacks
  • Prompt injection & jailbreak defense
  • AI red teaming (NIST AI 100-4)
  • System prompt leakage prevention (OWASP LLM07)
Domain 3 · 6 controls

Infrastructure Security

Reviews security of environments where AI systems are deployed.

  • Deployment environment hardening
  • API endpoint protection
  • Cloud security
  • Container security
  • Cross-tenant AI data isolation
Domain 4 · 5 controls

Access Controls

Analyzes identity and access management for AI systems.

  • Authentication mechanisms
  • Authorization matrix
  • Privileged access
  • User activity monitoring
  • Shadow AI detection & governance
Domain 5 · 7 controls

Operational Resilience

Examines continuity, monitoring, and recovery capabilities.

  • System monitoring
  • Incident response
  • Model drift detection
  • LLM DoS & resource exhaustion controls (OWASP LLM10)
  • AI emergency stop & human override
Domain 6 · 7 controls

Supply Chain Security

Evaluates external components, vendors, and dependencies.

  • Third-party model vetting
  • Vendor security
  • Open-source risk
  • AI Bill of Materials (AI-BOM)
  • Fine-tuning & PEFT adapter security
Domain 7 · 8 controls

Compliance & Governance

Assesses regulatory adherence and internal governance.

  • EU AI Act classification & Aug 2026 enforcement
  • Mandatory AI incident reporting (Article 73)
  • US state AI law compliance mapping
  • NIST AI RMF mapping
  • Audit trails
  • Explainability
Domain 8 · 5 controls

Ethical Considerations

Evaluates fairness, oversight, and responsible AI practices.

  • Bias detection
  • Human oversight
  • Misuse prevention
  • Responsible AI policy
  • AI-generated content disclosure (EU AI Act Art. 50)
Domain 9 · 7 controls

Agentic & Generative AI Security

Covers LLM- and agentic-AI-specific threats, updated for 2026.

  • RAG pipeline security (OWASP LLM08)
  • Agent tool authorization
  • LLM output guardrails
  • Multi-agent trust
  • Excessive agency controls (OWASP LLM06 / NIST AI 100-5)

5-Level Maturity Scoring Scale

Each control is rated on a 0–4 maturity scale. The combined scores across all domains determine your overall security posture and surface the highest-priority gaps.

  • 0 · Not Implemented · Critical Risk
  • 1 · Initial / Ad Hoc · High Risk
  • 2 · Defined · Medium Risk
  • 3 · Managed · Moderate Risk
  • 4 · Optimized · Low Risk
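The aggregation logic can be sketched as follows. This is a hypothetical illustration, not the tool's actual implementation: the label/risk pairs come from the scale above, but the averaging and rounding rules are assumptions.

```python
# Hypothetical sketch of domain scoring on the 0-4 maturity scale.
# The labels mirror the scale above; averaging/rounding is an assumption.
RISK_LEVELS = {
    0: ("Not Implemented", "Critical Risk"),
    1: ("Initial / Ad Hoc", "High Risk"),
    2: ("Defined", "Medium Risk"),
    3: ("Managed", "Moderate Risk"),
    4: ("Optimized", "Low Risk"),
}

def domain_risk(control_ratings: list[int]) -> tuple[float, str]:
    """Average a domain's control ratings, then map the rounded mean to a risk label."""
    if not control_ratings or not all(0 <= r <= 4 for r in control_ratings):
        raise ValueError("each rating must be on the 0-4 maturity scale")
    mean = sum(control_ratings) / len(control_ratings)
    return mean, RISK_LEVELS[round(mean)][1]
```

For example, a domain rated [2, 3, 1, 2] averages 2.0, which would land in the "Medium Risk" band under this assumed rounding rule.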

Tip

Be honest in your ratings. The goal is to identify real gaps — not achieve a high score. Accurate ratings lead to more actionable recommendations.

Tailored Risk Weighting by AI Sector

Different industries face different threat priorities. When you select a sector, certain domains receive a risk multiplier (1.1x–1.5x), ensuring your results reflect what matters most in your operational context.

Healthcare AI

Patient data protection and regulatory compliance are paramount.

  • Data Security · 1.5x weight
  • Compliance & Governance · 1.3x weight
  • Ethical Considerations · 1.2x weight

Reflects HIPAA obligations and sensitive patient data risks.

Generative AI / LLM

Agentic and model-level threats are the primary concern.

  • Agentic & Generative AI Security · 1.5x weight
  • Model Security · 1.4x weight
  • Ethical Considerations · 1.3x weight

Reflects OWASP LLM Top 10 and prompt injection attack surface.
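A sector-weighted overall score can be sketched like this. The multipliers are taken from the Healthcare AI example above; the weighted-mean normalization is an assumption, not the tool's documented formula.

```python
# Illustrative sketch of sector risk weighting. Multipliers are from the
# Healthcare AI example; the weighted-mean normalization is an assumption.
HEALTHCARE_WEIGHTS = {
    "Data Security": 1.5,
    "Compliance & Governance": 1.3,
    "Ethical Considerations": 1.2,
}

def weighted_posture(domain_scores: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted mean of 0-4 domain scores; domains without a multiplier weigh 1.0."""
    total = sum(weights.get(d, 1.0) * s for d, s in domain_scores.items())
    denom = sum(weights.get(d, 1.0) for d in domain_scores)
    return total / denom
```

Under this sketch, a weak Data Security score drags a healthcare deployment's overall posture down harder than the same score would in an unweighted sector, which is the stated intent of the multipliers.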

All 13 Supported Sectors

  • Healthcare AI
  • Financial Services / FinTech AI
  • Generative AI / LLM
  • Autonomous Systems / Robotics
  • Government & Defense
  • Retail & E-Commerce AI
  • Manufacturing & Industrial AI
  • Education AI
  • Legal & Compliance AI
  • HR & Recruitment AI
  • Critical Infrastructure AI
  • Consumer Applications
  • Other / General Purpose AI

Aligned to 2025–2026 Industry Standards

Every domain and control in this assessment maps to one or more recognized frameworks, ensuring your results are meaningful to auditors, regulators, and executive stakeholders.

NIST AI RMF 1.0

The foundational AI risk framework from the National Institute of Standards and Technology

NIST AI 600-1

Generative AI Profile covering hallucination, data privacy, misuse, and more

NIST AI 100-4

Formal AI red teaming guidance — adversarial testing methodologies and scope for AI systems

EU AI Act (2024/1689)

The EU's risk-based AI regulation — prohibited uses, high-risk requirements (fully in force Aug 2026), transparency obligations, and mandatory incident reporting

OWASP Top 10 for LLMs v2

Updated LLM vulnerability reference including system prompt leakage (LLM07), vector/embedding weaknesses (LLM08), and resource exhaustion (LLM10)

MITRE ATLAS™

Knowledge base of adversary tactics and techniques targeting AI/ML systems

ISO/IEC 42001

International standard for AI management systems

NIST AI 100-5

NIST guidance on agentic AI security — excessive agency, autonomous action boundaries, and human oversight

Turning Results Into Action

A completed assessment is the starting point, not the finish line. Here is how to make your results count.

Share with Stakeholders

Export your PDF report and use the domain scores to build consensus around security investment priorities.

Build a Remediation Roadmap

Focus first on Critical and High Risk domains. Convert recommendations into time-bound tasks with clear ownership.

Reassess Quarterly

AI systems evolve rapidly. Schedule quarterly reassessments to track progress and catch new risks.

Frequently Asked Questions

Is my data stored or sent anywhere?

No. All assessment responses are stored only in your browser's memory for the duration of the session. Nothing is transmitted to any server. When you close the tab, your responses are gone.

How long does the assessment take?

Most users complete the full assessment in 15–20 minutes. You can work through one domain at a time and return to others before viewing results.

Do I need to complete all 61 controls to see results?

Yes — the "View Results" button activates only when all controls have been rated. This ensures a complete picture of your security posture.

Who is this tool designed for?

Security analysts, IT managers, CISOs, AI governance officers, compliance teams, and anyone responsible for evaluating or improving AI system security.

How often should I reassess?

We recommend quarterly for active AI systems, or after any significant model update, new deployment, or regulatory change.

Is this a replacement for a professional security audit?

No. This tool provides structured self-assessment and guidance but does not replace professional security consulting or formal auditing. Results should be validated by qualified security professionals.

Ready to Assess Your AI System?

Free. No account. No data stored. Takes 15–20 minutes.

Start the Assessment →