AI Act 2026: what SMEs need to know
Complete guide to the EU AI Act for SMEs. Obligations, deadlines, penalties, Italian Law 132/2025 and how to prepare with compliant AI solutions like ORCA.
The AI Act in brief
The AI Act (EU Regulation 2024/1689) is the world’s first legislation that comprehensively regulates artificial intelligence. Approved by the European Parliament in March 2024, it establishes clear rules for the development, distribution and use of AI systems in the European Union.
Key dates:
| Deadline | Obligation |
|---|---|
| February 2025 | Ban on unacceptable-risk AI systems |
| August 2025 | Obligations for general-purpose AI models (GPAI) |
| August 2026 | Obligations for high-risk AI systems |
| August 2027 | Full application for all systems |
What changes for SMEs
The AI Act isn’t just about big tech. Every company that uses, develops or distributes AI systems in Europe must assess its position.
Risk classification
The AI Act classifies AI systems into four levels:
- Unacceptable risk (banned): social scoring, subliminal manipulation, mass biometric surveillance
- High risk (regulated): AI in healthcare, finance, HR, education, critical infrastructure
- Limited risk (transparency obligations): chatbots must declare they are AI
- Minimal risk (no specific obligations): spam filters, AI in video games
What SMEs must do
If your company uses an AI chatbot to interact with customers or analyse sensitive data, you may fall into the limited-risk or high-risk categories. In that case, you must:
- Document the AI system and how it works
- Ensure transparency: users must know they are interacting with an AI
- Implement human oversight: an operator must be able to intervene
- Maintain a log of decisions made by the AI
- Assess impact on fundamental rights
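Two of the obligations above, transparency and decision logging, translate naturally into code. A minimal sketch, assuming a JSON-lines audit file and a hypothetical `log_ai_decision` helper (the names and record fields are illustrative, not taken from any specific framework or from the Regulation itself):

```python
import json
from datetime import datetime, timezone

# Transparency: shown to the user before the first AI response
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "A human operator can take over at any time."
)

def log_ai_decision(log_path, user_id, prompt, response,
                    model_version, human_reviewed=False):
    """Append one AI interaction to a JSON-lines audit trail.

    Each record captures who asked what, which model version answered,
    and whether a human reviewed the output -- the minimum needed to
    reconstruct a decision later.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        "model_version": model_version,
        "human_reviewed": human_reviewed,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record
```

An append-only log of this kind is also what makes the "human oversight" point auditable: an operator reviewing an answer flips `human_reviewed` to `True` in the record for that interaction.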
The problem with American cloud solutions
Using ChatGPT, Claude or Gemini through cloud APIs raises specific concerns under the AI Act:
- Lack of transparency: you don’t know how the model works internally
- No control: you can’t intervene in the AI’s behaviour
- Data outside the EU: data passes through non-European servers
- Unilateral updates: the provider can change the model without notice
The solution: compliant on-premise AI
An on-premise AI platform like ORCA solves these issues:
- Open-source models: total transparency on how they work
- On-premise: data stays within the company infrastructure
- Audit trail: complete traceability of every interaction
- Version control: the company decides when and how to update models
- Human oversight: integrated into the system architecture
HT-X supports businesses on their AI Act compliance journey, from risk assessment to implementing compliant solutions.
The Italian law: Law 132/2025
In addition to the EU AI Act, Italy has passed its own national AI law: Law no. 132 of 23 September 2025 (“Provisions and delegations to the Government on artificial intelligence”), in force since October 2025.
The Italian law does not introduce additional obligations beyond EU Regulation 2024/1689 (art. 3, para. 5), but transposes its principles and regulates aspects specific to the national context.
What it means for businesses
General principles (art. 3): AI systems must be developed and used in compliance with transparency, proportionality, security, personal data protection, non-discrimination and sustainability. Human oversight and intervention must always be ensured.
Cybersecurity (art. 3, para. 6): cybersecurity is an essential precondition throughout the entire lifecycle of AI systems, with a proportionate, risk-based approach.
Labour (art. 11): AI in the workplace must be safe, reliable and transparent. Employers must inform workers about AI use. Any form of discrimination is prohibited.
Intellectual professions (art. 13): AI is permitted only as support to professional activity, with the professional’s intellectual work remaining predominant. Professionals must disclose to clients which AI systems they use.
Public administration (art. 14): public bodies may use AI to improve efficiency and services, but the human decision-maker remains solely responsible. Traceability of AI use must be guaranteed.
National authorities (art. 20)
The law designates two national authorities for AI:
- AgID (Agency for Digital Italy): promotes AI innovation and development, manages conformity and accreditation procedures
- ACN (National Cybersecurity Agency): supervises AI systems, including inspections and sanctions, and ensures cybersecurity
The two agencies jointly manage regulatory sandboxes for developing compliant AI systems.
Criminal penalties (art. 26)
The law introduces an aggravating circumstance for crimes committed using AI systems when they constitute an insidious means or hinder defence. It also creates the offence of unlawful dissemination of AI-generated or altered content (deepfakes), punishable by 1 to 5 years of imprisonment.
What this means for SMEs
Law 132/2025 confirms that Italy’s regulatory framework is aligned with the EU AI Act. For SMEs, the key points are:
- Disclosure obligation: if you use AI in dealings with clients, employees or patients, you must clearly communicate it
- Mandatory human oversight: the final decision always rests with a person
- Cybersecurity: AI systems must be secure throughout their entire lifecycle
- Sandboxes available: AgID and ACN offer regulatory sandboxes for developing compliant solutions
An on-premise solution like ORCA natively meets these requirements: transparency, traceability, human oversight and data security are built into the architecture.
Prepare now: 5 steps
- AI inventory: list all AI systems your company uses or develops
- Risk classification: determine the risk level for each system
- Gap analysis: identify what’s missing for compliance
- Migration: adopt transparent, controllable solutions (like ORCA)
- Training: prepare staff on obligations and best practices
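The first three steps can start as something as simple as a structured list. A minimal sketch of an AI inventory with risk classification and a gap report (the `AISystem` type, risk labels and example systems are illustrative assumptions, not prescribed by the Regulation):

```python
from dataclasses import dataclass

# The four AI Act risk levels, from most to least regulated
RISK_LEVELS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_level: str  # one of RISK_LEVELS

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

def gap_report(inventory):
    """Return the systems that carry AI Act obligations.

    High-risk systems need full compliance work; limited-risk
    systems need at least transparency measures.
    """
    return [s for s in inventory if s.risk_level in ("high", "limited")]

inventory = [
    AISystem("customer chatbot", "first-line support", "limited"),
    AISystem("CV screening", "rank job applicants", "high"),
    AISystem("spam filter", "mailbox filtering", "minimal"),
]
```

Even a spreadsheet works for step 1; the point is that the gap analysis in step 3 only flags the systems whose risk level actually triggers obligations.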
Frequently asked questions
What is the AI Act and when does it come into force?
The AI Act (EU Regulation 2024/1689) is the world's first law comprehensively regulating artificial intelligence. It was approved in 2024 and the main obligations for businesses come into force progressively between 2025 and 2027. From 2026, companies using high-risk AI systems must be compliant.
Does the AI Act also apply to SMEs?
Yes. The AI Act applies to all companies that develop, distribute or use AI systems in the European Union, regardless of size. SMEs have some accommodations (regulatory sandboxes, reduced compliance costs), but are not exempt from fundamental obligations.
What are the penalties for non-compliance?
Penalties can reach up to 35 million euros or 7% of annual global turnover (whichever is higher). For SMEs, penalties are proportionate but remain significant. Non-compliance can also result in a ban on marketing the AI system.
How can an SME prepare for the AI Act?
The key steps are: 1) Inventory all AI systems in use, 2) Classify the risk level of each system, 3) Implement required technical documentation, 4) Adopt transparent and traceable AI solutions like ORCA, 5) Train staff on obligations.
Is ORCA AI Act compliant?
Yes. ORCA is designed for AI Act compliance: it offers total transparency (open-source models), complete traceability (audit trail), human oversight (integrated supervision), and technical documentation. Being on-premise, the company maintains full control of the system.
Is there an Italian law on artificial intelligence?
Yes. Law no. 132 of 23 September 2025 is Italy's national AI law. It applies in conformity with EU Regulation 2024/1689 (AI Act) and covers principles, sector-specific obligations (healthcare, labour, public administration, justice, professions) and designates AgID and ACN as national AI authorities.
Looking for a private ChatGPT for your business?
ORCA is the on-premise AI platform by HT-X (Human Technology eXcellence): your data stays yours, GDPR and AI Act compliant.
Discover ORCA