The SME assessment tool for AI requirements, developed by SBS and DIGITAL SME in collaboration with EIT Digital, guides users through a structured series of questions that categorise their AI system into one of the AI Act's four risk levels.
Each question addresses the AI system's functionality, application domain, and potential impact. On completing the assessment, users receive a report containing their system's risk level, relevant resources, and advice on how to improve their tools.
The EU AI Act (“AI Act”) aims to ensure that Europeans can trust the benefits AI offers. While many AI systems present little to no risk and have the potential to address significant societal challenges, some systems pose risks that need to be managed to prevent undesirable outcomes.
The AI Act categorises AI systems into four risk levels:
- Unacceptable risk: Systems banned outright, such as government social scoring or toys promoting dangerous behaviour.
- High risk: Systems in critical areas (e.g., healthcare, transport, justice) subject to strict requirements, including risk assessment, high-quality data, human oversight, and transparency.
- Limited risk: Systems with transparency obligations to inform users when interacting with AI (e.g., chatbots, deepfakes).
- Minimal or no risk: Systems freely usable, such as video games or spam filters.
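The tiered classification above can be sketched as a simple decision cascade: prohibited uses are checked first, then high-risk domains, then transparency-triggering uses, with everything else falling into the minimal tier. The sketch below is illustrative only; the question names and ordering are assumptions for this example, not the actual logic of the SME assessment tool or the legal test in the AI Act.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(answers: dict) -> RiskLevel:
    """Map yes/no questionnaire answers to an AI Act risk tier.

    The answer keys below are hypothetical examples, not the
    assessment tool's real questions. Checks run from the most
    to the least restrictive tier, so the first match wins.
    """
    # Banned outright, e.g. government social scoring (Art. 5-style prohibitions)
    if answers.get("social_scoring") or answers.get("exploits_vulnerabilities"):
        return RiskLevel.UNACCEPTABLE
    # Critical areas such as healthcare, transport, or justice
    if answers.get("critical_domain"):
        return RiskLevel.HIGH
    # Transparency obligations, e.g. chatbots or deepfake generators
    if answers.get("interacts_with_users") or answers.get("generates_synthetic_media"):
        return RiskLevel.LIMITED
    # Everything else, e.g. video games or spam filters
    return RiskLevel.MINIMAL
```

Ordering matters here: a chatbot deployed for government social scoring must land in the unacceptable tier, which is why the prohibitions are tested before the transparency checks.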