
Legal Framework for Artificial Intelligence in Bulgaria

This post offers a non-exhaustive overview of the legal framework governing the development and deployment of artificial intelligence (AI) in Bulgaria. It specifically examines aspects of regulatory compliance, intellectual property, contractual agreements, and liability within the context of AI. As a member state of the European Union, Bulgaria is actively integrating EU regulations into its national legal system while also formulating its own strategic policies for AI advancement.

1. Regulatory Compliance

The regulatory compliance landscape for AI in Bulgaria is significantly shaped by European Union legislation, primarily the Artificial Intelligence Act (EU AI Act) and the General Data Protection Regulation (GDPR).

1.1. EU Artificial Intelligence Act (EU AI Act)

Adopted in 2024, the EU AI Act is the world’s first comprehensive legal framework regulating AI. Its provisions apply to all EU member states, including Bulgaria, with the overarching goal of ensuring that AI systems are safe, transparent, and respect fundamental rights.

Prohibited AI Practices: The Act explicitly bans certain AI practices deemed overtly harmful. These include systems that manipulate human behavior, use biometric data for social scoring, or perform real-time remote biometric identification (facial recognition) in publicly accessible spaces; the latter is permitted only under strictly defined and justified circumstances, such as preventing terrorist threats. Such practices are considered infringements upon privacy and democratic principles.

Four Levels of AI Risk: The EU AI Act categorizes AI systems into four distinct risk levels: unacceptable, high, limited, and minimal. High-risk AI systems, which are deployed in critical sectors like healthcare, recruitment, law enforcement, or essential infrastructure, are mandated to adhere to stringent legal requirements. These include, but are not limited to, transparency, robust data governance, and human oversight.

Business Obligations: The implications for businesses vary based on their specific role within the AI lifecycle. Whether an entity functions as an AI provider, authorized representative, importer, distributor, deployer, or operator, distinct compliance obligations apply. Failure to comply can result in substantial penalties of up to €35 million or 7% of global annual turnover, whichever is higher. While this intensifies regulatory scrutiny, it simultaneously establishes a clear framework for ethical innovation and fosters consumer trust.

Citizen Protection: For citizens, the Act significantly enhances protection against discrimination, surveillance, and opaque decision-making processes. It guarantees individuals the right to be informed when interacting with AI systems and to challenge outcomes that are deemed harmful.

1.2. Bulgaria’s National AI Strategy (2020–2030)

In December 2020, Bulgaria formally adopted the Concept for the Development of Artificial Intelligence in Bulgaria until 2030. This strategic document articulates the nation’s ambition to emerge as a leader in AI by actively promoting innovation, research, and ethical deployment.

Key Objectives: The strategy outlines several core objectives: cultivating a strong knowledge and skills base in AI; developing robust research capabilities; supporting innovative AI implementation; establishing a reliable infrastructure for AI development; securing sustainable funding; increasing public awareness and building trust in AI; and creating a regulatory framework for trustworthy AI consistent with international standards.

Compliance with EU Regulations: A central tenet of Bulgaria’s strategy is its commitment to compliance with EU regulations, including the GDPR, with the ultimate aim of establishing a “trust ecosystem” for AI systems.

2. Intellectual Property

Intellectual property (IP) considerations within the realm of AI are inherently complex and remain in a state of continuous evolution across both Bulgaria and the broader European Union. This section explores two primary facets: AI’s role as a consumer of protected content and its capacity as a creator of content.

2.1. AI as a User of Protected Content

The EU Copyright Directive (Directive (EU) 2019/790), and the Bulgarian legislation transposing it, allows AI developers to use protected content without prior authorization or payment in specific scenarios, thereby encouraging scientific advancement. This facilitation is particularly evident in the context of text and data mining (TDM).

Exceptions for TDM: Two significant exceptions are stipulated:

  • If a rights holder wishes to prohibit such use, the reservation must be expressed through machine-readable means (e.g., signals or metadata embedded in the website’s code); a hedged technical sketch of how such an opt-out might be detected follows this list.
  • For scientific research conducted by public benefit organizations (such as the Bulgarian Academy of Sciences or universities), AI is permitted to use protected materials even if the rights holder has explicitly forbidden it.
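One machine-readable convention sometimes used in practice for the opt-out described above is a "tdm-reservation" signal, sent either as an HTTP response header or as an HTML meta tag (the W3C TDM Reservation Protocol). The post itself does not prescribe any particular mechanism, so the header and tag names below, and the example URL, are assumptions for illustration only. The following minimal Python sketch checks a page for such a signal:

# Illustrative sketch: detect a machine-readable text-and-data-mining (TDM)
# opt-out on a web page. The "tdm-reservation" header / meta tag name follows
# the W3C TDM Reservation Protocol, which is an assumption here -- the post
# above does not name a specific mechanism.
from html.parser import HTMLParser
from urllib.request import urlopen


class TDMMetaParser(HTMLParser):
    """Collects the content of a <meta name="tdm-reservation"> tag, if any."""

    def __init__(self):
        super().__init__()
        self.reservation = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = {name: value for name, value in attrs}
        if (attr_map.get("name") or "").lower() == "tdm-reservation":
            self.reservation = attr_map.get("content")


def tdm_opt_out(url: str) -> bool:
    """Return True if the page signals a TDM rights reservation."""
    with urlopen(url, timeout=10) as response:
        # HTTP-level signal: a "tdm-reservation: 1" response header.
        if (response.headers.get("tdm-reservation") or "").strip() == "1":
            return True
        body = response.read().decode("utf-8", errors="replace")

    # Document-level signal: <meta name="tdm-reservation" content="1">.
    parser = TDMMetaParser()
    parser.feed(body)
    return parser.reservation == "1"


if __name__ == "__main__":
    # Hypothetical example URL; replace with the site you need to check.
    print(tdm_opt_out("https://example.com"))

In practice, a rights holder’s reservation may also appear in other forms (for example, robots.txt rules addressed to specific crawlers), so a production check would need to cover whichever mechanisms the applicable legislation and case law ultimately recognize.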

2.2. AI-Generated Content and Copyright

This area is subject to considerable debate, primarily due to the absence of human authorship. In the United States, courts have held that an AI system cannot be named as an inventor on a patent, and the Copyright Office has confirmed that works created exclusively by AI, without human intervention, are ineligible for copyright protection.

Human Intervention: By contrast, where a person selects and arranges AI-generated images into a comic, the resulting compilation may qualify for copyright protection with respect to that human contribution.

Stance of Bulgaria and the EU: Neither Bulgaria nor the EU has definitively resolved the complexities surrounding copyright for AI-generated creations. Legislative efforts are currently focused on finding a balanced approach.

Proposed Solutions: Various ideas are under consideration, including classifying AI creations as folklore works (which lack copyright but allow for related rights for producers/publishers) or mandating collective rights management organizations to oversee AI-generated content. The latter approach aims to provide compensation to human creators whose works were utilized for AI training, thereby addressing potential economic displacement.

3. Contracts

As of now, Bulgaria lacks specific legislation dedicated to AI-related contracts. Consequently, the general principles of contract law, as enshrined in the Obligations and Contracts Act, are applied to agreements involving AI. Nevertheless, the distinctive characteristics of AI systems introduce novel challenges that necessitate careful consideration in contractual relationships:

  • Scope of Services and Outcomes: Contracts must precisely delineate the scope of AI services, anticipated outcomes, and the criteria for success. This includes detailed specifications regarding the AI system’s functionality, performance metrics, and inherent limitations.
  • Intellectual Property Rights: It is paramount to clearly establish the ownership and usage rights pertaining to AI models, training data, and the outputs generated by AI. This clarity is especially crucial given the ongoing ambiguity surrounding copyright for AI-generated content.
  • Data Protection and Confidentiality: Contractual agreements should incorporate clauses that ensure strict adherence to GDPR and other pertinent data protection laws, particularly when AI systems process personal data.
  • Liability and Indemnification: A precise allocation of responsibility among the AI provider, developer, and end user is indispensable. This encompasses liability for any errors, omissions, or damages directly attributable to the AI system.
  • Ethical Considerations: While not directly legally binding, the integration of ethical principles and standards into contracts can significantly contribute to fostering trust and promoting the responsible development of AI.

4. Liability

Similar to contract law, Bulgarian legislation currently lacks specific provisions addressing liability stemming from AI systems. The general rules governing tort liability under the Obligations and Contracts Act therefore apply. However, determining liability for damages caused by AI presents a complex challenge, primarily due to the autonomous nature of some AI systems and the inherent difficulty in establishing clear causality.

Possible Liability Scenarios:

  • Manufacturer/Provider Liability: The manufacturer or provider of an AI system may be held accountable for defects in the system’s design, manufacturing, or usage instructions. This liability could also extend to inadequate testing or validation of the AI system.
  • Operator/User Liability: The individual or organization that deploys or operates an AI system may incur liability for improper usage, insufficient oversight, or non-compliance with stipulated operating conditions.
  • Data Liability: Accountability for the quality, accuracy, and legality of the data utilized for AI training is fundamental. Inaccurate or biased data can lead to discriminatory or detrimental outcomes, for which liability may be pursued.

Upcoming Changes: With the impending enforcement of the EU AI Act, the introduction of more specific liability rules is anticipated, particularly for high-risk AI systems. These forthcoming regulations are likely to include requirements for conformity assessments, robust risk management frameworks, and comprehensive documentation obligations, all of which will influence the distribution of liability.

Conclusion

Bulgaria, in alignment with the European Union, is proactively developing a comprehensive legal framework for artificial intelligence. The EU AI Act is poised to become a pivotal instrument in regulating AI systems, while Bulgaria’s national strategy will continue to guide the strategic development and deployment of AI within the country. Although issues concerning intellectual property, contractual relationships, and liability are still undergoing clarification, future legislation is expected to provide enhanced clarity and specific guidelines. For businesses, it is imperative to closely monitor these evolving developments and ensure full compliance with all applicable regulations to foster the ethical and responsible utilization of AI.

AI and the Law in Bulgaria: What Businesses Need to Know

As Bulgaria aligns with the European Union’s AI regulatory framework, understanding your legal obligations is critical. The EU AI Act and Bulgaria’s National AI Strategy 2020–2030 mark a turning point for ethical and responsible AI innovation. Here, the experts at Belcheva & Karadjova answer key questions about the changing legal landscape.

Q1: How is Bulgaria adapting to the new EU AI Act?
Bulgaria is integrating the Act into national law while advancing its National AI Strategy, emphasizing innovation, ethics, and trust in artificial intelligence.

Q2: What are key compliance challenges for companies using AI?
Businesses must manage risk classification, transparency, data governance, and human oversight to comply with the EU’s risk-based regulatory model.

Q3: Are there specific rules for AI-related contracts?
Not yet — Bulgaria currently applies general contract law. However, precise contractual clauses addressing performance standards, data protection, and liability are essential in any AI project.

Q4: Who is liable if an AI system causes harm?
Responsibility can rest with the developer, provider, or user, depending on the source of the issue — such as defective design, misuse, or biased data. Forthcoming EU legislation is expected to refine these rules.

Q5: What should companies do now?
Conduct a compliance audit, align internal policies with the EU AI Act, and consult experienced AI legal advisors to future-proof your operations.

💼 For strategic guidance on AI law, regulatory compliance, and contractual structuring in Bulgaria, reach out to Belcheva & Karadjova.
We help businesses innovate confidently — within the law.
