EU Artificial Intelligence Act (AIA) and Future Risk-Based Regulations and Their Effects - Long Guide
- Halil İbrahim Ordulu

- Oct 18
- 19 min read
Since its conceptual inception in the 1950s, artificial intelligence (AI) technologies have rapidly evolved, particularly in recent years, to permeate nearly every aspect of life. However, this momentum has also brought with it numerous ethical, societal, and security issues.
In response to these problems, the European Union took a global lead by implementing the world's first comprehensive and risk-based artificial intelligence regulation with the EU Artificial Intelligence Act (AIA), which came into force in 2024.
In this article, we will discuss in detail the AIA's scope, development, risk classification, main roles, relevant institutions, sectoral impacts, and its implications for entrepreneurs and the startup ecosystem.

1. Why was the EU Artificial Intelligence Act (AIA) needed?
Digital Risks, Legal Gaps and Their Relationship with the Legal Framework
The rapid rise of artificial intelligence (AI) has driven a transformation that, while accelerating technological progress, is also reshaping ethical, legal, and societal boundaries. Today, AI systems directly shape people's lives, from credit assessments and hiring processes to health diagnoses and judicial decision-support systems.
However, the decision-making mechanisms within these systems often lack transparency, are open to data-driven bias, and offer limited accountability. This has naturally created new risk areas that threaten individuals' fundamental rights and society's sense of trust.
The European Union read these developments as a sign that existing regulatory instruments—such as the GDPR (General Data Protection Regulation) or consumer safety directives—did not provide sufficient protection against the decision-making autonomy of AI systems.
This gap became particularly evident in three key areas:
Algorithmic Bias: Biases in training data could systematically disadvantage certain groups in hiring, credit, or benefits decisions.
Lack of Transparency and Explainability: The incomprehensibility of the internal workings of “black-box” models made it difficult to question flawed decisions.
Accountability Issue: It was not clearly defined who would be responsible for the harmful consequences of an AI system—the provider, the user, or the developer.
These shortcomings led to the conclusion that the concept of “trustworthy artificial intelligence” should become widespread at the legal and societal level.
That's precisely why the AIA was born: to move AI out of the realm of free innovation and into a risk-based regulatory system.
AIA's Risk-Based Approach
What distinguishes the AIA is that it classifies all AI systems not “by type of technology,” but rather by their societal impact and level of potential harm.
This approach established, for the first time, an “ex ante” paradigm in technology regulation.
In other words, instead of waiting for a system to become harmful, the law requires technical, ethical and administrative measures to be taken before the risk arises.
This model considers AI at four main risk levels:
Unacceptable Risk: fully banned (e.g., social scoring, subliminal manipulation).
High Risk: strict pre-market compliance, including an RMS and CE marking (e.g., credit scoring, hiring algorithms).
Limited Risk: transparency obligations only (e.g., chatbots, deepfake labeling).
Minimal Risk: no binding duties; voluntary ethics codes encouraged (e.g., spam filters, in-game AI).
This overview embodies the AIA's effort to regulate innovation with balance: intense control for high-risk systems, freedom for low-risk systems.
Relationship Between GDPR, KVKK and AIA
The AIA does not replace the GDPR; on the contrary, it is positioned as a complement to it.
While GDPR focuses on the protection of personal data, AIA focuses on the behavior and decision-making logic of AI.
In other words, “data” is protected in one, and “system” in the other.
Common intersection areas:
Data governance, data quality and fairness principles
The right to accountability and human oversight
Impact assessments (GDPR: DPIA ↔ AIA: FRIA/AIIA)
Differences:
GDPR is privacy-based, AIA is security and robustness-based.
GDPR limits the processing of data, AIA regulates the entire life cycle of the system.
In Turkey, this integration process requires the KVKK (Turkey's Personal Data Protection Law) to evolve toward a risk-based artificial intelligence framework.
This transformation has become a fundamental condition not only for integration but also for access to the European market – an effect referred to in the literature as the “Brussels Effect”.
2. Historical Process and Implementation Schedule (Transitional Periods)

The European Union Artificial Intelligence Act (AIA) is the world's first legal framework to systematically regulate AI. However, its arrival was not a sudden decision, but the result of nearly three years of intensive policy, negotiation, and technical consultation.
A. Outlines of the Development Process
The foundations of the draft law were laid with the initial proposal published by the European Commission in April 2021. During this period, the EU's aim was to establish a legal framework that would support digital transformation while ensuring the ethical, safe, and human-centered development of artificial intelligence.
Since its 2021 proposal, the law has passed three key milestones:
European Commission Phase (2021):
The Commission described the AIA as “the single market framework for trustworthy AI.” The draft was framed in the logic of product safety legislation, meaning AI systems are subject to a “product-like” compliance process.
European Parliament Phase (2022–2023):
The rise of generative AI (especially models like ChatGPT) led to radical revisions of the legal text. During this period, the concept of "General Purpose Artificial Intelligence (GPAI)" was added, and the law's scope was no longer limited to high-risk systems.
Parliament also requested more sensitive provisions on the balance of fundamental rights, explainability and innovation.
Tripartite Negotiations (Trilogue) and Final Acceptance (2024):
The trilogue process between the European Commission, the European Parliament and the Council of the EU was the most critical turning point of the AIA.
The text, adopted by the Parliament on 13 March 2024 and by the Council on 21 May 2024, was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024.
B. AIA's Phased Implementation Schedule
Due to its comprehensive nature, the AIA has adopted a phased implementation schedule to allow companies time for compliance:
1 August 2024: Entry into force.
2 February 2025: Prohibitions on unacceptable-risk practices and AI literacy obligations apply.
2 August 2025: Obligations for General Purpose AI (GPAI) models and governance rules apply.
2 August 2026: Most remaining provisions, including high-risk (Annex III) obligations, apply.
2 August 2027: End of the extended transition for high-risk AI embedded in regulated products.
This transition process reflects the pragmatic and risk-based nature of the AIA: the core prohibitions come into play first, followed by obligations related to high-risk systems.
Thanks to this structure, the AIA allows businesses to progressively develop both their technical and managerial capacities.
For example, the first half of 2025 focuses on identifying prohibited practices and preparing staff with AI ethics and awareness training, while from 2026 onward the emphasis shifts to risk management systems (RMS) and FRIA/AIIA impact assessments.
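For teams that want to track these deadlines programmatically, here is a rough Python sketch (our own illustration, not anything prescribed by the Act; the labels are shorthand for the milestones listed above):

```python
from datetime import date

# AIA milestones (dates as published in the Official Journal of the EU).
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibitions + AI literacy obligations",
    date(2025, 8, 2): "GPAI obligations + governance rules",
    date(2026, 8, 2): "Most provisions, incl. high-risk (Annex III)",
    date(2027, 8, 2): "Extended transition for AI in regulated products ends",
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones that already apply on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

for label in obligations_in_force(date(2026, 1, 1)):
    print("-", label)  # prohibitions and GPAI rules apply; Annex III not yet
```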
3. Classification of Artificial Intelligence Systems
Four-Stage Risk-Based Approach
At the heart of the European Union Artificial Intelligence Act (AIA) is a risk-focused approach that regulates the impact a system creates, not the technology itself.
Rather than “policing every technology equally,” this approach categorizes AI systems according to their potential for societal harm.
The goal is to ensure security without stifling innovation—that is, to establish governance commensurate with risk.
Accordingly, the AIA defines four risk levels:
A. Systems with Unacceptable Risk (Full Ban Category)
This category encompasses systems that pose a clear threat to fundamental rights or human dignity.
The AIA strictly prohibits such practices. The aim is to draw a line where technology can undermine public trust and democratic values.
Main prohibited practices:
Social scoring systems: Systems that classify people based on their behavior or socioeconomic status.
Cognitive/psychological manipulation systems: Applications that aim to influence individuals' behavior through subliminal techniques.
Emotion recognition in workplace or education settings: Systems that analyze employee or student emotional responses to inform decision-making.
Real-time remote biometric identification: Uses by law enforcement for continuous surveillance in public areas (except in exceptional circumstances).
These bans represent the point where Europe is answering the question “what should technology not do?” rather than “what can technology do?”
B. High Risk Systems (Preventive Compliance Obligation)
The broadest and most complex category of AIA is high-risk systems.
These are applications of AI that could have serious consequences for people's lives, security, economic rights, or democratic participation.
High-risk systems are listed under eight main headings in Annex III:
Critical infrastructures: Traffic control, energy management, water network systems.
Education and vocational training: Automated exam evaluation, student performance analysis.
Employment, business management and human resources: CV ranking algorithms, recruitment assessments.
Access to essential public and private services: Credit scoring, welfare eligibility assessments.
Judiciary and justice processes: Prediction systems that support court decisions.
Immigration, asylum and border control: Identity verification or risk profiling systems.
Law enforcement: Crime prediction algorithms, monitoring systems.
Democratic processes: AI-based tools affecting election security.
Obligations:
Establishing a Risk Management System (RMS)
Data governance and quality controls
Technical documentation and automatic event logging (logs retained for at least six months)
Implementation of human oversight mechanisms
CE marking and conformity assessment process
These requirements are especially critical in sectors that directly touch human decision-making, such as LegalTech, HealthTech, FinTech.
C. Limited Risk Systems (Transparency Obligation)
The limited risk category covers systems that interact directly with the user but have limited impact on fundamental rights.
For these systems, the AIA imposes obligations based solely on transparency.
Examples:
Chatbots: The necessity to clearly inform the user that they are interacting with artificial intelligence.
Deepfake content: Labeling generated or manipulated audio/videos with the phrase “AI generated.”
The goal is to give the user the right to know that their decisions are being guided by a machine and to prevent perceptual manipulation.
D. Minimal Risk Systems (Ethical Focus Maintaining Area)
The AIA places the majority of existing AI systems in the “minimal risk” category.
There is no legal obligation for these systems, but voluntary ethical frameworks (e.g., codes of ethics, algorithmic explainability principles) are encouraged.
Examples:
Email spam filters
In-game artificial intelligence systems
Non-personalized recommendation systems
This approach also demonstrates the EU's intention to support innovation: it encourages creativity and growth in low-risk areas while only policing risky systems.
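To make the four-tier taxonomy concrete, here is a minimal Python sketch (our own illustration; assigning a real system to a tier is a legal judgment, not a dictionary lookup):

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (Art. 5 practices)
    HIGH = "high"                  # preventive compliance (Annex III)
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # voluntary ethics codes

# Obligations per tier, summarized from the sections above.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskLevel.HIGH: [
        "risk management system (RMS)",
        "data governance and quality controls",
        "technical documentation and event logging",
        "human oversight mechanisms",
        "conformity assessment and CE marking",
    ],
    RiskLevel.LIMITED: [
        "inform users they are interacting with AI",
        "label generated or manipulated content",
    ],
    RiskLevel.MINIMAL: ["no binding duties; voluntary codes encouraged"],
}

print(OBLIGATIONS[RiskLevel.LIMITED])
```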
4. Roles, Responsibilities and Supply Chain Dynamics
Artificial intelligence systems have a multi-layered life cycle from the development phase to reaching the end user.
The European Union Artificial Intelligence Act (AIA) creates the chain of accountability by clearly defining the responsibilities of all actors in this chain.
The aim is to prevent a “liability gap” in the event of an error or violation of rights.
A. Basic Roles: Provider and Deployer
At the heart of AIA are two main roles: Provider and Deployer.
These two actors are the parties directly responsible for the launch and use of the system.
The provider is responsible for the technical conformity of the system;
the deployer is responsible for its proper use.
This distinction ensures that regulation encompasses both engineering and governance dimensions.
B. Other Actors in the Supply Chain
1. Importer
The actor that brings an AI system from a country outside the EU (for example, Türkiye or the USA) onto the EU market.
The importer's task is to check the provider's conformity documents (CE marking, technical documentation, risk analysis).
In this process, it assumes the role of a kind of “regulatory mediator”.
2. Distributor
The distributor is the intermediary that makes the AI system available in the supply chain but does not develop or modify it.
The distributor is responsible for the safe handling and proper labeling of the system.
If the distributor substantially modifies the system, it takes on the provider role.
C. Determining Roles in SaaS and API Usage Scenarios
Taking into account the modern software economy, AIA has clarified role transitions in SaaS, API and integration-based uses.
If a company uses a third-party AI tool as an API or SaaS service in its professional processes, then it is in the role of a Deployer.
→ Example: A law firm uses a third-party AI system for candidate screening during the hiring process.
However, if the same company takes this AI system, integrates it into its own solution, and offers it to the market as a new product (meaning substantial technical work, such as retraining or fine-tuning the model), it moves into the Provider role.
→ Example: A startup customizes an existing LLM model and launches its own “legal document analysis tool.”
This distinction is of critical importance for businesses in terms of both technical approaches and legal strategy.
Because each role requires different levels of responsibility, documentation and compliance assessment.
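The role logic can be expressed as a toy heuristic (illustrative only; the flag names are our assumptions, and real role assignment under the AIA turns on the facts of each case):

```python
def aia_role(develops_or_retrains: bool, places_on_eu_market: bool,
             uses_in_own_processes: bool) -> str:
    """Rough heuristic for the SaaS/API scenarios described above."""
    if develops_or_retrains and places_on_eu_market:
        return "provider"  # e.g., a startup fine-tunes an LLM and sells it
    if uses_in_own_processes:
        return "deployer"  # e.g., a law firm screens candidates via a 3rd-party API
    return "other actor (importer or distributor, depending on the chain)"

print(aia_role(develops_or_retrains=False, places_on_eu_market=False,
               uses_in_own_processes=True))  # -> deployer
```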
D. Provider Obligations: The Basis for Technical Compliance
In high-risk systems, providers' compliance process is built on technical excellence.
The aim of this process is to prove that the system is fully tested, secure and explainable before it is released to the market.
Basic obligations:
Risk Management System (RMS):
A structure that identifies, measures and mitigates risks that may arise throughout the system life cycle.
Data Governance:
Ensuring training and test data are accurate, representative, and free from bias.
Technical Documentation and Logging:
Recording all system-related data, decision chains and test results in detail.
System Security and Integrity:
Ensuring the model's error rates and cybersecurity resilience meet defined standards.
Conformity Assessment and CE Marking:
Obtaining a “conformity verification” and affixing the CE marking before selling or providing services in the EU market.
These elements are the prerequisites for AI systems to gain “trusted product” status.
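As one small, concrete illustration of the logging duty, here is a sketch using Python's standard library (an assumption-laden example: daily rotation with roughly 180 retained files approximates the six-month retention floor; real deployments derive formats and retention from the Act and sector rules):

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate the audit log daily and keep ~180 files, i.e. roughly six months.
handler = TimedRotatingFileHandler("ai_decisions.log", when="D",
                                   interval=1, backupCount=180)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

audit_log = logging.getLogger("aia.audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(handler)

# Record each automated decision with its inputs and outcome.
audit_log.info("decision=credit_score applicant=anon-42 score=0.73 model=v1.4")
```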
E. Deployer Obligations: The Foundation of Operational Compliance
Deployers, as the professional organizations using the system, are responsible for maintaining compliance in operation.
This responsibility is less technical than managerial and ethical.
Basic obligations:
Human Oversight:
Ensuring that decisions made by AI can be interpreted by humans and overridden when necessary.
Fundamental Rights Impact Assessment (FRIA / AIIA):
If the system affects public services, recruitment, credit or justice, the potential impact on individuals' fundamental rights must be analysed.
Record Keeping and Reporting:
Monitoring of data, logs and results generated during system use.
Transparency:
Informing users about the role of AI elements in decision-making processes.
These obligations strengthen not only companies' legal compliance but also their corporate culture of trust.
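A minimal sketch of a human oversight gate (our illustration; the 0.8 confidence threshold and the high_impact flag are assumed names, not AIA terms):

```python
def decide_with_oversight(ai_decision: dict, reviewer) -> dict:
    """Route risky or low-confidence AI outputs to a human before they take effect."""
    needs_human = ai_decision["confidence"] < 0.8 or ai_decision["high_impact"]
    return reviewer(ai_decision) if needs_human else ai_decision

# Usage: a low-confidence loan refusal is referred to a human case handler.
final = decide_with_oversight(
    {"outcome": "refuse_loan", "confidence": 0.62, "high_impact": True},
    reviewer=lambda d: {**d, "outcome": "refer_to_manual_review"},
)
print(final["outcome"])  # -> refer_to_manual_review
```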
5. Sectoral Applications and Areas of Impact
Tangible Impact of AIA in High-Risk Areas

The AIA's risk-based classification does not place the entire technology sector in the same framework.
Instead, it imposes stricter rules in areas that directly affect fundamental rights and security.
Therefore, the AIA's strongest impact is seen in high-impact areas such as law, healthcare, education, finance, and public services.
A. LegalTech (Legal Technologies)
LegalTech solutions are artificial intelligence applications that interact directly with the justice system.
For this reason, they are generally considered High-Risk Systems (Annex III) under the AIA.
High-risk LegalTech examples:
Systems that attempt to predict or influence court decisions.
Tools that rank the parties “by likelihood of success in the case.”
Analysis systems that generate reliability profiles of parties in judicial processes.
Such systems are under strict control as they may affect the impartiality of justice and fundamental rights.
Providers of such systems must:
Check training data for bias,
Implement human oversight mechanisms,
Conduct a Fundamental Rights Impact Assessment (FRIA/AIIA).
In contrast, LegalTech solutions that perform only document review, legal research, or content generation (e.g., contract analysis, legal summarization) fall into the limited risk category.
This prevents every innovative initiative in the LegalTech space from being labeled high risk by default.
💡 Strategic Result:
LegalTech startups can position themselves as both ethically and legally secure by designing AI models as “pre-decision recommendation” rather than “decision support.”
B. HealthTech (Medical and Health Technologies)
Artificial intelligence systems used in healthcare are one of the AIA's most sensitive regulatory areas.
Because these systems directly affect human life.
Examples of high risk:
AI-based diagnostic systems
Decision support systems in medical image analysis
Algorithms that automatically optimize clinical processes or drug doses
These systems are already regulated by harmonized product safety regulations such as the Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR).
The AIA adds AI-specific requirements to these controls:
Ensuring data sets represent patient populations adequately
Documentation of bias analyses
Performing accuracy, robustness and cyber security tests of the model
Thus, a medical device must now become “ethical and explainable” in addition to being safe.
💡 Strategic Result:
HealthTech startups benefit from “dual compliance” with the MDR + AIA integration — a seal of trust for access to the EU market.
C. Education Sector
The use of artificial intelligence in educational technologies carries opportunities but also serious ethical risks.
The AIA draws a key distinction in this area between permitted and strictly prohibited systems.
High Risk Applications:
Algorithms that automatically score student performance
Systems that rank applicants in university admissions processes
Prohibited Practices:
Systems that infer emotions in educational environments (e.g., AI that analyzes the student's attention level from the camera)
While AIA encourages systems that support the learning process, it explicitly rejects systems that interfere with individuals' psychological or cognitive domains.
💡 Strategic Result:
AIA compliance in educational technology requires carefully drawing the line between a system that “analyzes student data” and one that “manipulates the student.”
D. Finance and Credit Systems
According to the AIA, financial algorithms that directly affect individuals' economic lives are high risk.
In this context, the following come under scrutiny:
Credit scoring and risk assessment algorithms
Insurance premium calculation systems
Robo-advisors that offer automated investment advice
The main requirements in these systems are:
Decisions must be explainable (one common pattern, reason codes, is sketched after this list)
Training data must be fair, representative, and up to date
Human intervention mechanisms must not be disabled
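One widely used explainability pattern is returning "reason codes" alongside each automated decision. Here is a minimal sketch with made-up feature names and weights (illustrative only; production credit models are far richer):

```python
def credit_decision(features: dict[str, float], weights: dict[str, float],
                    threshold: float = 0.5) -> dict:
    """Score an applicant and report the top factors behind the outcome."""
    contributions = {k: features[k] * weights[k] for k in weights}
    score = sum(contributions.values())
    # Reason codes: the factors that pulled the score down the most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return {"approved": score >= threshold, "score": round(score, 2),
            "reason_codes": reasons}

print(credit_decision(
    features={"income": 0.9, "debt_ratio": -0.7, "history": 0.4},
    weights={"income": 0.5, "debt_ratio": 0.3, "history": 0.2},
))  # -> {'approved': False, 'score': 0.32, 'reason_codes': ['debt_ratio', 'history']}
```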
💡 Strategic Result:
For FinTech startups, AIA compliance also creates a new competitive environment in terms of customer trust and brand reputation.
E. Public Services and Immigration Management
The AIA sets strict rules for state-sponsored systems, particularly regarding transparency and fundamental rights.
Therefore, AI systems used by public institutions (e.g., algorithms for welfare eligibility, immigrant risk profiling, or border control) are also high risk.
Public institutions must:
Conduct a FRIA (Fundamental Rights Impact Assessment),
Ensure citizens' right to challenge AI-powered decisions.
These requirements symbolically represent the EU's human rights-centered understanding of digital governance.
F. General Purpose Artificial Intelligence (GPAI) Models
In the post-2023 period, General Purpose Artificial Intelligence (GPAI) models such as ChatGPT have been included within the scope of the AIA under a separate heading.
These models, while not directly high risk, are subject to additional responsibilities due to their “systemic risk potential.”
Obligations of GPAI providers (from August 2, 2025):
Copyright compliance: Building protection mechanisms against copyrighted content in training data.
Data transparency: Publicly sharing a summary of the sources and types of data used in training.
Labeling of generated content: Clearly identifying deepfakes or artificial content (a minimal labeling sketch follows below).
This provision establishes the concept of “explainable and traceable AI” as the standard for generative models.
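What labeling means in practice varies by medium; as a toy illustration (not a compliance recipe), here is one way to attach a human- and machine-readable disclosure to generated text:

```python
import json
from datetime import datetime, timezone

def label_generated(content: str, model_name: str) -> dict:
    """Wrap generated text with an 'AI generated' disclosure and provenance data."""
    return {
        "content": content,
        "disclosure": "AI generated",  # human-readable label
        "provenance": {                # machine-readable metadata
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(label_generated("Sample summary...", "example-llm-v1"), indent=2))
```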
6. Adaptation Strategies for SMEs and Startups
Minimum Viable Compliance (MVC) Approach

The European Union Artificial Intelligence Act (AIA) affects organizations of all sizes, but has the biggest impact on small and medium-sized enterprises (SMEs) and startups.
Because these businesses generally focus on speed and innovation, heavy documentation and compliance processes can hinder their competitiveness.
At this point, the principle of “proportionality” introduced by the AIA comes into play:
The law stipulates that obligations be applied in proportion to the size, resource capacity, and risk level of the enterprise.
The strategic framework that can be used to make this flexibility practical is the Minimum Viable Compliance (MVC) approach.
This concept, just like the “Minimum Viable Product (MVP)”, aims to establish the most basic yet effective form of compliance.
A. What is MVC?
Minimum Viable Compliance is the rapid implementation, ahead of a full-scale compliance system, of the essential components that control the most critical risks.
The aim is to position the company as “safe, not risky” within the AIA framework, even with limited resources.
MVC has three main pillars:
Know your systems: inventory every AI system and classify it by risk level.
Control the critical risks: run a lightweight risk management process focused on the highest-impact issues.
Document and train: keep summary documentation and build baseline AI literacy in the team.
These three pillars form the basis of a fast, cost-effective and sustainable adaptation strategy for startups.
B. Flexibility and Support Mechanisms Provided by AIA
AIA is a rare regulation that includes special provisions for SMEs and startups to support innovation.
In this way, small-scale enterprises do not have to carry the same responsibilities as large corporate companies.
Main support mechanisms:
Regulatory Sandboxes:
They are safe areas created by member states to test innovative AI solutions in a controlled environment.
Startups can interact directly with regulatory bodies here.
AI Pact (Artificial Intelligence Pact):
A European Commission initiative through which organisations voluntarily commit to compliance at an early stage, without waiting for the law's obligations to become binding.
SME Guidance Programs:
Guides created by AI Office and national authorities provide documentation and assessment templates specific to small businesses.
Open Source Exemption:
Open-source AI components are exempt from most AIA obligations, provided they are not commercialised and do not fall into the high-risk or prohibited categories.
This creates a strong innovation space for early-stage startups.
💡 Note:
As an “SME-friendly” law, the AIA aims to guide innovation without hindering it.
Startups that prepare early are likely to gain access to the EU market with AIA-certified products in the future.
C. Applicable MVC Roadmap for Startups
For startups looking to prepare for AIA, the following 5-step practical roadmap breaks down the MVC approach into actionable steps:
Take AI Inventory:
List all AI systems used or developed at your company. Classify each one according to its risk level.
Simplify Risk Management System (RMS):
Drawing on ISO or NIST risk frameworks (e.g., ISO/IEC 23894 or the NIST AI RMF), create a simple risk management table: risk definition, probability, impact, mitigation strategy.
Establish Data Quality Protocols:
Ensure your training data is current, accurate, and unbiased. Document your data sources.
Start Documentation:
Prepare a summary version of the AIA technical documentation requirements (model description, data source, test results, accuracy rate).
Launch AI Literacy Trainings in the Team:
Increase the team's baseline awareness in line with the AIA's AI literacy mandate, applicable from 2 February 2025.
These simple steps allow even small teams to be structurally prepared for the AIA. They also create a significant credibility advantage with investors and business partners by signaling a culture of compliance. The sketch below shows what a first-pass AI inventory and risk table might look like.
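Here is a sketch of such an inventory in Python (illustrative names and values only; a real register reflects your actual systems and risks):

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_level: str  # "unacceptable" | "high" | "limited" | "minimal"
    risks: list[dict] = field(default_factory=list)

    def add_risk(self, description: str, probability: str,
                 impact: str, mitigation: str) -> None:
        """One row of the simple risk management table described above."""
        self.risks.append({"risk": description, "probability": probability,
                           "impact": impact, "mitigation": mitigation})

inventory = [
    AISystem("cv-screener", "rank job applicants", "high"),
    AISystem("support-chatbot", "answer customer questions", "limited"),
]
inventory[0].add_risk("bias against certain groups", "medium", "high",
                      "bias audit of training data + human review of rankings")

for system in inventory:
    print(f"{system.name}: {system.risk_level} ({len(system.risks)} risks logged)")
```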
D. Long-Term Strategic Value of MVC
The Minimum Viable Compliance approach is not a “workaround”;
it is a forward-looking framework that fosters a sustainable governance culture as enterprises grow.
In the long run, this approach:
Moves the company's AI products into the "trusted" class,
Saves time and cost in future audits,
Establishes a culture of ethical awareness and transparent decision-making within the organization.
Most importantly, businesses that adopt early will have a “competitive advantage” when the AIA reaches full implementation by 2027.
7. Advantages, Criticisms and Strategic Consequences of Regulation
Balance of Trust, Innovation and a Human-Centered Future
As we mentioned at the beginning of the article, the European Union Artificial Intelligence Act (AIA) is the world's first comprehensive artificial intelligence regulation.
The law aims to demonstrate that safe innovation is possible and that ethical, transparent, and human-centered AI systems can support both societal and economic growth. However, this vision also raises some debates.
A. AIA's Strengths (Advantages)
Legal certainty: the first comprehensive, risk-based framework gives providers and deployers clear rules of the game.
Trust: transparency, human oversight, and fundamental rights safeguards strengthen public confidence in AI.
Proportionality: obligations scale with risk, and SMEs benefit from sandboxes, the AI Pact, and guidance programs.
Global influence: through the "Brussels Effect", AIA standards radiate beyond the EU market.
B. Criticisms (Challenges) of the AIA
Compliance burden: documentation and conformity assessment costs weigh most heavily on SMEs and startups.
Definitional ambiguity: broad definitions of AI and of the risk categories create legal uncertainty at the margins.
Innovation concerns: overly cautious design choices may slow experimentation in high-risk domains.
Enforcement capacity: national authorities and the AI Office must build technical expertise quickly for the rules to bite.
C. Strategic Implications of AIA and Lessons for Business
The essence of AIA is to make risk management culture part of innovation.
This perspective provides businesses not only with compliance but also with sustainable competitive advantage.
1. Transition from Compliance to Strategy:
Companies that approach AIA not as a “legal obligation” but as a strategy for product trust and brand reputation make a difference.
2. Compliance by Design:
Regulatory obligations should be integrated into the design process during product development, not later.
This both reduces costs and speeds up the audit process.
3. Continuous Adaptation and Measurability:
The AIA is not a one-time certification; it requires a culture of compliance sustained throughout the system's lifecycle.
Risk management systems (RMS), performance measurement tools and FRIA/AIIA assessments are the building blocks of this culture.
4. Bridge Between Turkish and EU Markets:
The AIA's extraterritorial structure also directly impacts technology companies in Türkiye.
Every Turkish startup that wants to open up to the EU market must adopt AIA standards.
Therefore, it is strategically important for Türkiye to develop a risk-based harmonization between the KVKK and the AIA.
Conclusion: From Risk to Value, From Compliance to Trust
The AIA aims to frame the rapidly advancing technology of the digital age within specific ethical principles and standards; in this respect, it redefines trust and responsibility within the AI ecosystem. Evaluating technology solely in terms of speed or efficiency can lead to the neglect of ethical and human values in favor of short-term benefits. Therefore, the law offers an approach that also enables measuring the impact of technology on humans.
From this perspective, AIA lays the foundations for a human-centered digital civilization rather than a “technology-centered future.”
This European Union regulation is a crucial moment for reflection at a time when technology is advancing at a dizzying pace: it encourages us to redesign our products and processes to put people at their heart. Those who adapt will shape the future; those who don't will simply observe.
Live-Cell Agency Note:
In the age of digital transformation, the AIA answers not only "How can we be faster?" but also "How can we move forward more consciously?"
This article is also a reflection of the philosophy of human-centered technology.
The path to secure innovation is through conscious design. For strategic guidance and consulting support in line with the AIA framework during your LegalTech initiative's regulatory compliance process, please contact us or explore our Live-Cell Agency consulting program.
Links:
🚀 12-Week Strategic Clarity & Transformation Program (For Founders) Break the founder lock-in, establish systems, and become ready to scale. → Program Details
🧠 LegalTech Mastermind Community (For Founders and Entrepreneurs) Share experiences with founders who are walking the same path as you and grow together.
⚖️ LegalTech Turkey (For Lawyers and Entrepreneurs) Join current discussions at the intersection of law and technology. Learn about digitalization and access materials.
💡 EasyBusy (For Everyone) Explore entrepreneurship and digital transformation with startup stories, practical materials, and system building tips.
🔗 ALL USEFUL RESOURCES specific to the ecosystem (Single Link) To access the above-mentioned communities, our LegalTech Atlas Turkey newsletter, and social media accounts, visit this single link: Ecosystem Link
(You can only access the ecosystem fully through this link.)
☕ Let's Clarify Your Strategy Together Schedule a 1:1 meeting and we'll create a personalized roadmap for you.
