ADL #67: General-Purpose AI Under the EU AI Act: What Tech Startups Need to Know
3 Areas to Review in Your AI-Driven Operations
Intro
Picture this: You're running a fast-growing startup, and your team has just started using an AI tool to streamline operations. It's saving hours of work: scheduling meetings, screening job candidates, and even generating marketing copy. But then comes the unexpected question from a client: “How does your AI make decisions? Is it compliant with EU regulations?”
Suddenly, the tool that was supposed to make your life easier feels like an unknown risk waiting to surface. Are you unknowingly violating the new EU AI Act? Could you face fines or lose client trust?
Today, we’re diving into the practical side of AI compliance:
✅ what general-purpose AI usage means for your business
✅ how to ensure you use AI ethically and legally
✅ how to remain innovative under the new regulation
We will also tackle three quick areas to review in your AI-driven operations. Whether you’re using ChatGPT for content creation or more complex hiring or customer service tools, this issue is for you if you want to stay ahead of the game.
🎁 Plus, there is an exclusive surprise for you at the end!
Quick Tips - 3 Areas to Review in Your AI-Driven Operations
As AI becomes a core part of your business, ensuring it operates responsibly and effectively is more critical than ever. With regulations like the EU AI Act setting new standards, now is the time to review your systems and processes. Here are three quick tips to help you stay compliant, transparent, and ahead of the innovation curve.
Transparency & Accountability
Clearly disclose when AI is being used and how decisions are made, and allow users to request human oversight. Don’t forget to document this in policies and guidelines for your team, or even for yourself.
Cybersecurity Measures
Protect sensitive data processed by your AI systems from breaches or misuse, especially in high-risk applications like HR or healthcare. Verify these measures with your GDPR procedures and make sure they align and complement each other.
Human Oversight Protocols
Implement procedures to ensure that humans review and validate critical AI-driven decisions (such as hiring or loan approvals) before they are finalized. And don’t rely on unwritten rules alone: write them down and deliver training for your team.
General-Purpose AI Under the EU AI Act: What Tech Startups Need to Know
As a tech lawyer working with startups, I’ve seen firsthand how general-purpose AI (GPAI) is reshaping industries. Tools like ChatGPT, Midjourney, and GitHub Copilot are becoming essential for startups looking to scale faster, automate tasks, and innovate. However, with the EU AI Act now regulating GPAI, it’s critical that startups understand what this means for their operations and compliance obligations. Let’s break it down.
What is General-Purpose AI?
The definition of general-purpose AI was introduced in the final compromise text of the EU AI Act, acknowledging the growing importance of these systems in the AI ecosystem.
The EU AI Act defines general-purpose AI as:
AI systems that can perform a wide range of tasks and are not limited to a specific application or purpose.
These systems are designed to be versatile and adaptable, meaning they can be used across various domains and for multiple purposes, often serving as foundational models for more specialized AI applications.
General-purpose AI models may be placed on the market in various ways, including through libraries, APIs or direct downloads.
For example, the Regulation indicates that a typical example of a general-purpose AI model is a large generative AI model, due to its flexible content generation capabilities. This content can take the form of text, audio, images, or video and can easily be adapted to various tasks. Think of tools like ChatGPT, which can write emails or even hold a conversation, or DaVinci, which can create stunning visuals from just a few words.
Or, more practically:
a healthcare company might use a general-purpose AI model as the backbone for an app that helps doctors diagnose illnesses.
an AI-powered chatbot for e-commerce platforms that can handle customer inquiries, recommend products, and process returns.
a social media assistant that generates captions, hashtags, and image ideas for influencers or small businesses.
a personal finance app that analyzes spending patterns and provides tailored budgeting advice or savings plans.
What Does it Mean for Tech Startups?
The Act recognizes that general-purpose AI is incredibly versatile but also carries risks—like bias in decision-making, misuse of sensitive data, or unintended consequences when applied to high-risk areas like hiring or healthcare.
The regulation aims to ensure that these systems are safe, transparent, and ethical.
For startups, this means two things:
If you’re building GPAI systems, you’ll need to comply with strict rules around transparency, risk management, and documentation, depending on the model’s risk level.
If you’re using GPAI tools in your business - whether, for example, for recruitment, marketing, or customer service - you’ll need to assess how these tools impact your operations and whether they fall into high-risk categories under the Act.
What Happens if You’re Using AI Tools?
Let’s say your team uses AI tools to generate marketing copy or to create visuals for your brand. While these uses are generally low-risk under the EU AI Act, you still need to ensure transparency and proper oversight.
For example, you would need to:
Label content generated by AI as “AI-assisted” so customers know it wasn’t created by a human.
Prohibit employees from inputting sensitive company data into these AI platforms unless those tools meet compliance standards under the GDPR and AI Act (data governance requirements).
Now, remember that if you’re using an AI tool for more sensitive tasks, like screening job applications or analyzing employee performance, these are considered high-risk applications under Annex III of the Act. In this case, you’ll need to implement additional safeguards, such as:
Ensuring human oversight of all critical decisions. For instance, if an AI system recommends rejecting a job candidate or terminating an employee contract, a human must review and validate that decision before it’s finalized.
Conducting regular audits of the system to check for biases or errors.
What Happens If You Don’t Comply?
The penalties for non-compliance with the EU AI Act are steep:
Fines can reach up to €35 million or 7% of global revenue for serious violations, such as deploying banned systems or failing to manage risks effectively.
Even minor compliance breaches, such as incomplete documentation or insufficient transparency measures, can result in penalties of up to €7.5 million or 1% of global revenue.
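To get a rough sense of scale, both penalty tiers work as a “greater of” calculation: the fixed amount or the percentage of worldwide revenue, whichever is higher. A minimal illustrative sketch in Python, using only the figures quoted above (actual fines are set case by case by regulators):

```python
# Illustrative only: the EU AI Act caps fines at the GREATER of a fixed
# amount or a percentage of global annual revenue. The two tiers below
# use the figures quoted in this issue; real penalties are decided
# case by case by the authorities.

TIERS = {
    "serious": (35_000_000, 0.07),  # e.g. deploying banned AI systems
    "minor": (7_500_000, 0.01),     # e.g. incomplete documentation
}

def max_fine(global_revenue_eur: float, tier: str) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed_cap, revenue_share = TIERS[tier]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# A startup with €100M global revenue: 7% is €7M, so the €35M floor applies.
print(max_fine(100_000_000, "serious"))    # 35000000
# A large company with €1B revenue: 7% (€70M) exceeds the €35M floor.
print(max_fine(1_000_000_000, "serious"))  # 70000000.0
```

The takeaway for startups: even when the percentage of your revenue is small, the fixed floor still applies, so the exposure is substantial at any company size.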
How Do You Stay Compliant?
Compliance isn’t just about avoiding fines: it’s about positioning your startup as a leader in responsible innovation. Customers and investors are increasingly looking for companies that prioritize ethical practices when it comes to technology use.
Here are some simple steps to help you achieve that:
Audit Your Tools:
Map out all the ways your startup uses AI tools and classify them based on the risk levels outlined in the Act. High-risk uses will require additional safeguards.
Set Clear Policies:
Create internal policies outlining acceptable uses of AI at work. For example, prohibit employees from using AI platforms for sensitive tasks or with personal data unless those platforms meet compliance standards under the GDPR and the EU AI Act.
Train Your Team:
Ensure your employees understand how to use these tools responsibly, including fact-checking outputs and reporting errors or biases when they occur. Consider pairing AI training with GDPR training; the two often go hand in hand.
Keep Records:
Documentation is key under the EU AI Act. Maintain detailed records of how these tools are used in your business.
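The “Audit Your Tools” and “Keep Records” steps above lend themselves to a simple internal AI-tool register. A minimal sketch in Python (the tool names, fields, and risk labels are hypothetical examples for illustration, not the Act’s official taxonomy):

```python
from dataclasses import dataclass, field

# Hypothetical internal register for the "Audit Your Tools" step.
# The high_risk flag is a simplification: real classification must
# follow the EU AI Act's own categories (e.g. Annex III for hiring
# or employee-evaluation uses).

@dataclass
class AIToolRecord:
    name: str                  # e.g. "ChatGPT"
    use_case: str              # what your team actually uses it for
    high_risk: bool            # True for Annex III-type uses
    safeguards: list = field(default_factory=list)

def needs_human_oversight(register: list) -> list:
    """Flag high-risk uses that don't yet document human review."""
    return [r.name for r in register
            if r.high_risk and "human review" not in r.safeguards]

register = [
    AIToolRecord("ChatGPT", "marketing copy", high_risk=False),
    AIToolRecord("CV-Screener-X", "job application screening",
                 high_risk=True, safeguards=["bias audit"]),
]
print(needs_human_oversight(register))  # ['CV-Screener-X']
```

Even a lightweight register like this doubles as the documentation trail the Act expects: it shows which tools you use, for what, and which safeguards are in place.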
Looking Ahead
General-purpose AI is here to stay, and its applications will only continue to grow across industries and sectors. The EU AI Act provides a framework for using this technology responsibly while safeguarding individuals from potential harm.
For startups willing to embrace these regulations proactively, compliance can become a competitive advantage, not just a legal obligation. By implementing transparent policies, training your team on the ethical use of AI tools, and maintaining proper oversight, you can unlock the full potential of AI usage in your company while staying on the right side of the law.
How Can I Support You?
Legally Remote started as my personal law office: Ana-Maria Draganuta-Briard - Law Office.
Over time, it has evolved into something much bigger: a hub of legal advice and resources for digital entrepreneurs who want to grow their businesses and protect their dreams. Today, Legally Remote is a team of dedicated experts, each contributing to help bring this vision to life.
Here’s how we can work together:
Legal consultancy 1:1. Get clarity on your business’s current standing, the compliance rules you need to follow, or whether a large platform is treating your company fairly. We can also guide you on tax optimization, digital nomad laws, consumer rights, and building secure client relationships. Moreover, if you’re considering legal action or want to ensure you're on the right track, a consultation can give you the clarity and direction you need.
Legal Packages for Freelancers. Includes a professionally tailored contract template designed for service providers and a 90-minute, one-on-one session with a legal expert who understands your challenges.
Legal Retainer Service. Receive consistent, reliable, tech-savvy legal support for your business throughout the year.
End-to-End Legal Services for Digital Businesses. From corporate guidance to e-commerce compliance, trademark protection, and strategies for global expansion, we create personalized solutions for digital entrepreneurs to help your business succeed in the digital landscape.
Not sure what you need? Book a free 20-minute intro call, and let’s figure it out together!
🎉 Exciting Announcement: Legal Learning Club for Digital Entrepreneurs 🚀
I'm thrilled to announce the upcoming launch of an exclusive Legal Learning Club tailored specifically for digital entrepreneurs like you! 🌟 As a token of appreciation for your support, I'm offering a limited number of free membership applications – and you're among the first to have this opportunity! 🏆
What's in it for you? 💼
📚 Expert-led legal workshops (both in-person and online)
❓ Compliance Q&A group sessions
🤝 Networking opportunities with fellow entrepreneurs
🎁 Exclusive member benefits for Legally Remote and partner services
Limited Time Offer ⏳
We're giving away just 10 free 6-month memberships, after which the membership can be purchased at a fee of €70 per month. Don't miss this chance to be part of our founding group! 💯
Act Fast! 🏃♀️💨
More details are coming soon, but don’t wait too long! The free application window is limited, and selection is first-come, first-served (whoever applies first and fulfills the criteria wins a spot). We’ll be selecting the lucky 10 applicants for a free 6-month spot in our community.
Those selected will receive an email with all the details and will get a spot in our WhatsApp group.
After that, membership will still be available, but for a fee.
Stay tuned for further information, and get ready to take your digital business to the next level with expert legal guidance and a supportive community. 📈🌐
Legal updates
EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence. At the AI Action Summit in Paris, Commission President Ursula von der Leyen launched InvestAI to raise €200 billion for AI, including a €20 billion fund for AI gigafactories. Read more.
Axel Voss, a German member of the European Parliament who played a key role in writing the EU’s 2019 copyright directive, said a “legal gap” had opened up after the conclusion of the EU’s AI Act, which meant copyright was not enforceable in this area, accusing the EU of supporting big tech instead of protecting European creative ideas and content. Read more.
South Korea removed DeepSeek from app stores pending a privacy review. According to South Korea's Personal Information Protection Commission, DeepSeek's apps were taken down from the Apple App Store and Google Play in South Korea. The company has agreed to enhance privacy measures in collaboration with the agency before the apps are reinstated. Read more.
Europeans overwhelmingly view robots and AI in the workplace positively, with over 60% approving of their use and more than 70% believing they boost productivity. While most Europeans support using robots and AI for workplace decision-making, 84% stress the need for careful management of AI to safeguard privacy and guarantee transparency at work. Read more.
The European Commission announced its plan to withdraw the AI Liability Directive from its 2025 work program, citing "no foreseeable agreement" on the proposal. However, the Internal Market and Consumer Protection Committee (IMCO) of the European Parliament voted to continue working on liability rules for artificial intelligence products, defying the Commission's intention. Read more.
Have some thoughts?
Reply to this email and let’s chat! I’m open to constructive criticism and new and brave ideas.
See you soon! The next issue is on March 13.