AI and Copyright in Sweden 2026: What is Legal?
When a Swedish marketing department used ChatGPT to write a product description and accidentally reproduced large portions of a competitor's text verbatim – was it copyright infringement? When a designer generates a logo with Midjourney that resembles an existing design – who owns the rights? And when your company trains an AI model on data from the internet – are you obliged to pay for copyrighted material?
Welcome to 2026's most complex legal gray area.
AI and copyright are no longer an academic debate – they are a daily challenge for every Swedish company using AI tools. With over 50% of Swedish office workers now using generative AI daily, and with the EU's AI Act coming into full force in August 2026, the question of what is legal has become urgent.
In this comprehensive guide, we walk through exactly what applies under Swedish and EU legislation in 2026, which legal precedents have shaped the rules of the game, and – most importantly – how your company can navigate this legal minefield safely.
Quick facts: Copyright and AI in Sweden 2026
Before we dive deep, here are the absolutely most important points to know:
📋 For AI-generated content:
AI CANNOT own copyright in Sweden (requires human creation)
Pure AI-generated works have NO copyright protection
But: If humans sufficiently direct AI = copyright possible
The burden of proof is on you to show human creativity
📋 For AI training on copyrighted material:
Legally unclear in Sweden (no court cases yet)
The EU court is expected to rule in the fall of 2026 (case C-250/25)
Text and data mining exceptions exist but are disputed
Copyright holders can opt out
📋 EU AI Act (effective from August 2, 2026):
High-risk AI systems require transparency
General AI models must comply with copyright directive
Public summary of training data mandatory
Administrative fines for violations
📋 Risks for Swedish companies:
Using AI output without review = risk of copyright infringement
Inputting sensitive material into AI = GDPR + copyright issues
Selling AI-generated material = unclear ownership
Ignoring opt-out = potential infringement
Now to the details.
Part 1: Can AI own copyright? Swedish and EU rules
Fundamental principle: Human creation required
According to the Swedish Copyright Act (upphovsrättslagen, URL), the starting point is crystal clear:
Copyright requires a work to be created by a human.
This means that:
✅ A human writing a book = copyright
✅ A human painting a picture = copyright
❌ An AI generating text entirely autonomously = NO copyright
❌ An AI creating an image without human guidance = NO copyright
As Thomas Riesler, a lawyer at the Patent and Registration Office (PRV), put it:
"Generally speaking, when it comes to AI images, there will be no copyright. It requires a human to have created something original."
This applies globally. In the USA, the EU, and China alike, the principle is the same: copyright presupposes human creation.
But what happens when humans AND AI collaborate?
Here it becomes significantly more complicated.
The spectrum of human involvement:
🔴 Minimal input (NO copyright):
"Create an image of a marshmallow bear" → Simple prompt to Midjourney
"Write an article about AI" → Basic prompt to ChatGPT
Result: No copyright protection
🟡 Moderate input (UNCLEAR):
Detailed prompt with specific instructions
Several iterations and adjustments
Some manual editing
Result: Gray area – no clarity in Swedish law yet
🟢 Significant human guidance (POSSIBLE copyright protection):
AI used as an "assistant" or "tool"
Extensive manual editing and creative choices
Human directs and controls the process
Clear human "personality stamp"
Result: May qualify for copyright
First court trial – Chinese ruling 2024
In November 2024, a Chinese court made a groundbreaking decision granting copyright protection to an AI-generated image. The reason? The person who created the image had:
Used over 150 detailed prompts
Made extensive parameter adjustments
Chosen from thousands of generated versions
Edited and refined the final result
This was the first time ever a court said "yes" to copyright for an AI-assisted work.
But: As PRV's Thomas Riesler points out:
"What the ruling means for Swedish conditions is still unclear. Here, the question of copyright for AI images has never been tested – nor within the EU."
The US Copyright Office's clarification (January 2025)
In January 2025, the US Copyright Office issued guidance:
"AI-generated work can be copyright-protected when it contains meaningful human authorship."
Key phrase: "Meaningful human authorship"
This means:
A simple prompt is NOT enough
You must be able to show significant creative decisions
The burden of proof is on you
Case-by-case assessment
Important example – Beatles "Now and Then"
In February 2025, the Beatles' song "Now and Then" won a Grammy – despite AI being part of its production. Why did it get copyright protection?
Original demo by John Lennon (human)
AI was used only for voice restoration (tool)
Paul McCartney and Ringo Starr added new music (human)
George Harrison's guitar from the 90s (human)
= Massive human involvement = copyright.
Practical consequence for Swedish companies
If you create AI content:
Document your process
Save all prompts
Show iterations and choices
Preserve evidence of manual editing
This documentation is your evidence if a dispute arises (see the sketch below)
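One simple way to meet this documentation need is to keep a running log of every AI interaction. Below is a minimal sketch in Python; the file name, fields, and function name are illustrative assumptions, not an established standard or a legal requirement.

```python
# Minimal sketch of a prompt log for documenting human involvement.
# File name and fields are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_prompt_log.jsonl")  # hypothetical location

def log_ai_step(tool: str, prompt: str, output: str, human_edits: str) -> None:
    """Append one AI interaction plus a note on human editing to a JSONL log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                     # e.g. "ChatGPT", "Midjourney"
        "prompt": prompt,                 # the exact prompt used
        "output_excerpt": output[:500],   # enough to identify this version
        "human_edits": human_edits,       # what you changed and why
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    log_ai_step(
        tool="ChatGPT",
        prompt="Write a first draft about our recycling service, max 200 words",
        output="(AI draft here)",
        human_edits="Rewrote the intro, added a customer example, changed the tone",
    )
```

The point is not the format but the habit: every prompt, every iteration, and every manual edit should be traceable afterwards.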
Expect that pure AI output lacks protection
Can be freely copied by others
You cannot prohibit use
No exclusive rights
Want copyright? Add human work
Edit extensively
Make creative choices
Use AI as a tool, not the end product
Part 2: Can AI be trained on copyrighted material?
This is by far the most debated question – and it is NOT resolved in Swedish or EU law.
What happens when AI is trained
To understand the law, we must understand the technology:
AI training involves:
Collecting massive amounts of data from the internet
"Reading" and "copying" texts, images, music
Analyzing patterns and relationships
Storing compressed information in the model
The question: Is step 2 (copying) copyright infringement?
Three different legal perspectives
Perspective 1: AI companies' argument - Text and Data Mining (TDM)
AI companies like OpenAI, Google, and Meta argue that their training is covered by the EU's exception for "text and data mining" (TDM).
EU Directive (implemented in Sweden January 1, 2023): Articles 3 and 4 of the copyright directive (DSM directive) allow copying copyrighted material for:
Research purposes (Article 3)
General text and data mining (Article 4)
But: This exception came in 2019, before ChatGPT existed. The legislators had academic research in mind, not commercial AI training.
Article 4 says:
Copyright holders can "opt out" and prohibit use by "expressly reserving" their rights "in an appropriate manner".
The problem:
How should opt-out work in practice?
Does a "robots.txt" file count? (see the example after this list)
What is "appropriate way"?
Retrospective opt-out – does it apply to data already scraped?
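One opt-out signal that many rights holders already use in practice is to block known AI crawlers in robots.txt. Whether this counts as reserving rights "in an appropriate manner" under Article 4 is exactly the open question, so treat the example below as an illustration, not a guaranteed legal safeguard. The crawler names are examples of publicly documented AI-related bots.

```
# Example robots.txt signalling "do not crawl this site for AI training".
# GPTBot (OpenAI), Google-Extended (Google AI training) and CCBot (Common Crawl)
# are examples of publicly documented AI-related crawlers.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```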
Perspective 2: Rights holders' argument - Copyright infringement
Authors, artists, musicians, and publishers argue:
AI companies copy MILLIONS of works without permission
This is massive copyright infringement
The TDM exception was never meant for this
Commercial use ≠ research
Concrete examples:
📰 New York Times vs OpenAI & Microsoft (December 2023) NYT sued because ChatGPT can reproduce their articles almost verbatim. The case is still ongoing.
🎵 GEMA vs OpenAI (May 2025) The German music rights organization claims that ChatGPT can reproduce song lyrics without a license. GEMA is not demanding that data be deleted – it wants a licensing system.
📚 Authors vs OpenAI (multiple lawsuits 2023-2025) Sarah Silverman, John Grisham, among others, claim their books were used in training without permission.
Perspective 3: EU Court's upcoming decision
The BIG question will be answered by the EU Court in the fall of 2026.
Case C-250/25: Like Company vs Google Ireland
Background:
Hungarian news company Like Company published an article
Users asked Gemini (Google's AI) to summarize the article
Gemini reproduced large portions of the text
Like Company: This is copyright infringement
Questions the EU Court will answer:
Is AI training on copyrighted material allowed under the TDM exception?
Does opt-out apply retroactively?
Does AI output resembling original works constitute "communication to the public"?
What rights do newspaper publishers specifically have?
Expected ruling: Fall 2026 (12-18 months from April 2025)
As law firm Lindahl puts it:
"How the court will assess Google's challenge in light of existing legislation will have a commercial impact on rights holders and AI service providers, either in the form of license and compensation requirements, or that rights holders must accept that AI services fall outside the copyright framework."
USA: Fair Use Doctrine
In the USA, courts have started to decide similar cases using the "fair use" doctrine.
Bartz vs Anthropic (2025): The court found that parts of the AI training qualify as "fair use", but the ruling is nuanced and far from a blanket approval.
Thomson Reuters vs Ross Intelligence (2025): Ross Intelligence (an AI legal research tool) was found liable for copyright infringement after scraping data from Westlaw.
Lesson: Even in the USA, the legal situation is unclear and varies from case to case.
What applies in Sweden right now (January 2026)?
The answer: It is unclear.
As Digimyndigheten puts it in its guidelines:
"Exactly how the training of AI should be considered legally is not clarified."
Practical consequences:
❓ If you train your own AI model:
There is a legal risk
Consider licensing solutions
Document thoroughly
Follow opt-outs
❓ If you use others' AI models (ChatGPT, Midjourney, etc):
You are not primarily responsible for their training
BUT: See Part 3 on output responsibility
Part 3: Who owns what AI creates?
Three scenarios
Scenario 1: You use ChatGPT, Midjourney, etc.
Most AI services' terms say something like:
"You get the commercial rights to the output, but we (AI company) can also use your output for improvement and training."
This means:
✅ You can use the output commercially
✅ You can sell, publish, etc.
❌ You do not own copyright (if it is pure AI output)
❌ The AI company also has rights
Always read the Terms of Service carefully!
Scenario 2: Your company develops its own AI model
If you build your own AI model internally:
The company owns the model
Output rights according to your internal regulations
But: Copyright protection only if human involvement
Scenario 3: You hire a consultant/agency that uses AI
IMPORTANT – Write clear contracts!
The contract must specify:
Who owns the output?
May the consultant use AI tools?
Which tools are allowed?
What happens if output infringes copyright?
How is responsibility distributed in a dispute?
Example of good practice:
"The contractor may use AI tools as an assistant, but guarantees that the final product contains significant human creative involvement and does not infringe third-party copyright. In case of copyright infringement, the Contractor bears full responsibility."
Part 4: EU AI Act and Copyright – What applies from August 2026?
The EU AI Regulation (AI Act) entered into force on August 1, 2024, but takes full effect on August 2, 2026.
What AI Act requires regarding copyright
For providers of general-purpose AI models:
📋 Transparency requirements (Article 53):
Must publish a summary of training data
Including copyrighted material
Accessible to the public
📋 Copyright compliance:
Must comply with the EU copyright directive
Respect opt-outs
Document data sources
📋 Systemic risk obligations (for the largest models):
Model evaluations
Adversarial testing
Incident reporting
Cybersecurity measures
Sanctions for violations
AI Act has teeth:
Administrative fines
Based on annual turnover
Can become VERY costly
The amounts:
Up to EUR 35 million or 7% of global annual turnover (whichever is higher)
Depending on the type of infringement
What does this mean for Swedish companies?
If you DEVELOP AI systems:
Must comply with AI Act from August 2026
Document training data
Respect copyright
Risk of large fines
If you USE AI tools:
Less direct impact
But: Providers must comply with the rules
You should choose compliant tools
Part 5: Practical risks and how to avoid them
Risk 1: AI reproduces copyrighted material
Scenario: You use ChatGPT to write marketing text. Without knowing it, the AI reproduces large parts of a competitor's text or a well-known article.
Legal risk:
YOU (not OpenAI) risk being sued for copyright infringement
May be required to remove material
May be required to pay damages
Reputation damage
How to avoid this:
✅ Always review AI output
Run texts through plagiarism tools
Google unique phrases
Ensure originality
✅ Ask AI to be original
Prompt: "Write original content, never reproduce existing texts"
Use rewriting and paraphrasing
✅ Have a contingency plan
Clear action plan if infringement is detected
Quick removal
Legal support on standby
Tools for plagiarism check:
Copyscape
Grammarly Plagiarism Checker
Turnitin
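As a complement to these tools, a simple in-house spot check can catch obvious verbatim reuse before publication. Below is a minimal sketch in Python; the file names, n-gram length, and choice of reference texts are assumptions you would adapt to your own material, and it is no substitute for a proper plagiarism tool or legal review.

```python
# Minimal sketch: flag long word sequences that an AI draft shares with
# reference texts (e.g. competitor pages you monitor). A rough spot check only.
import re

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Collect all word sequences of length n from a text."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(ai_output: str, reference: str, n: int = 8) -> list[str]:
    """Return word sequences of length n that appear in both texts."""
    overlap = ngrams(ai_output, n) & ngrams(reference, n)
    return [" ".join(g) for g in overlap]

if __name__ == "__main__":
    # Hypothetical local files with the AI draft and a reference text.
    draft = open("ai_draft.txt", encoding="utf-8").read()
    competitor = open("competitor_page.txt", encoding="utf-8").read()
    for phrase in shared_phrases(draft, competitor):
        print("Possible verbatim reuse:", phrase)
```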
Risk 2: Input copyrighted material into AI
Scenario: Employees paste customer documents, internal reports, or third-party material into ChatGPT.
Legal risk:
Copyright infringement already in the input phase
GDPR breach if personal data
Confidentiality breach
Material can be used in AI training
How to avoid this:
✅ Clear AI policy (see Part 6)
Specify what CAN be inputted
What CANNOT be inputted
Mandatory training
✅ Technical barriers (a minimal sketch follows at the end of this section)
DLP (Data Loss Prevention) systems
Block public AI services for sensitive data
Use enterprise versions with data protection
✅ Use AI services with data protection
ChatGPT Enterprise/Team (does not train on your data by default)
Claude for Work
Other tools with data protection agreements
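To illustrate the "technical barriers" point above: even a lightweight pre-check before a prompt leaves the company can stop the most obvious mistakes. The sketch below is a minimal illustration in Python; the patterns are example assumptions and cover only a fraction of what a real DLP system handles.

```python
# Minimal sketch of a pre-check that blocks obviously sensitive strings
# before a prompt is sent to an external AI service. Patterns are
# illustrative examples only – a real DLP solution covers far more.
import re

SENSITIVE_PATTERNS = {
    "Swedish personal identity number": re.compile(r"\b\d{6}([-+]|\d{2}[-+]?)?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return a list of reasons the prompt should NOT be sent as-is."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    text = "Summarize the complaint from Anna, personnummer 850101-1234, anna@example.com"
    problems = check_prompt(text)
    if problems:
        print("Blocked – remove the following before using an AI tool:", ", ".join(problems))
    else:
        print("No obvious sensitive data found (manual judgment still required).")
```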
Risk 3: Sell or publish AI-generated material
Scenario: Your company sells AI-generated images, texts, or products as creative works.
Legal risk:
Buyers get no copyright protection (if pure AI)
May demand a refund
Breach of contract if you promised copyright
May violate marketing regulations
How to avoid this:
✅ Be transparent
Inform that the material is AI-generated/AI-assisted
Specify the degree of human involvement
Clarify copyright status
✅ Add human value
Edit and improve
Curation and selection
Concept and idea
✅ Contracts that protect you
"Sold as is"
"No guarantee of copyright protection"
Limitations of liability
Risk 4: AI and trademark infringement
Copyright is not the only protection!
Scenario: You ask Midjourney to generate a logo. It resembles the Nike swoosh or another registered trademark.
Legal risk:
Trademark infringement (a separate protection from copyright)
Can be stopped immediately
Damages possible
How to avoid this:
✅ Search existing trademarks
PRV's database (Sweden)
EUIPO (EU)
WIPO (globally)
✅ Be careful with known brands
The prompt "in the style of [known brand]" = dangerous
Avoid copying recognizable styles
Risk 5: AI and personal protection/right of publicity
Scenario: You generate an image or voice resembling a famous person (living or dead).
Legal example - Astrid Lindgren's voice:
Sveriges Tidskrifter writes about this:
"That Astrid Lindgren's voice is identifiable is beyond doubt, but she is deceased. The question is whether her voice can be protected? If not, new legislation is needed."
Legal risk:
Violation of personal privacy
Trademark infringement (if the person's name or likeness is a registered trademark)
Defamation or damage
Claims from the estates of deceased persons
How to avoid this:
✅ Avoid reproducing real persons
Generic, unidentifiable persons are okay
Specific celebrities = risky
✅ Ask for permission
From living persons
From estates for deceased persons
Part 6: Create an AI copyright policy for your company
Every Swedish company using AI needs a clear policy. Here is a template:
AI Copyright Policy – Template Structure
1. PURPOSE
Ensure legal AI use
Protect the company from copyright infringement
Clarify rights and responsibilities
2. SCOPE
Applies to: All employees, consultants, partners
Areas: Text, image, code, sound, video
3. ALLOWED USE
✅ AI may be used for:
Inspiration and brainstorming
First drafts that are human-edited
Translation and summarization of internal material
Code generation that is reviewed and tested
With requirements:
Always review output
Never publish directly
Document human editing
Run plagiarism check
4. PROHIBITED USE
❌ AI may NOT be used for:
Copying competitors' material
Reproducing famous art/design
Generating content that is passed off as entirely human-made
Infringing others' trademarks or personal protection
❌ The following may NEVER be inputted:
Customer data or personal information
Confidential documents
Third-party copyrighted material without permission
Sensitive company information
5. PUBLICATION AND SALE
Before AI-generated material is published or sold:
Inform about AI use
Ensure sufficient human involvement for copyright
Conduct plagiarism/trademark check
Get approval from [responsible person/department]
6. CONTRACTUAL ISSUES
When hiring external suppliers:
Include clause on AI use
Specify responsibility distribution
Demand a guarantee against copyright infringement
Clarify ownership of output
7. TRAINING
Mandatory AI copyright training for all
Annual update
News of legislative changes communicated immediately
8. RESPONSIBILITY AND CONSEQUENCES
In case of policy violation:
[Specify consequences]
In case of copyright infringement: The individual bears responsibility
9. CONTACT PERSON
[Legal responsible / IT responsible / compliance officer]
10. UPDATE
The policy is updated annually or upon legislative changes.
Part 7: International perspectives – USA, China, UK
USA: "Meaningful human authorship"
Status 2026:
Copyright Office: AI works can be protected with "meaningful human authorship"
Fair use doctrine applied to training (but unclear)
Many lawsuits ongoing (50+)
Divergent rulings from different district courts
Difference from EU: The USA is more liberal with fair use, but the burden of proving human creation is just as clear.
China: First with AI copyright?
Beijing Internet Court Nov 2024:
FIRST ruling granting copyright to AI-assisted work
Required extensive human guidance (150+ prompts)
Can influence global development
UK: Follows the EU but with nuances
United Kingdom:
No longer an EU member but similar legislation
Discussing a special "AI-generated works" category
May develop its own path
The lesson for Swedish companies
Global business = follow the strictest jurisdiction:
If you sell in the USA: Follow US rules
If you sell in the EU: Follow EU rules
In case of uncertainty: Follow BOTH
Part 8: The future – What happens 2026-2027?
Fall 2026: EU Court's decision
Case C-250/25 will shape the entire European AI landscape.
Possible outcomes:
Scenario A: AI companies win
TDM exception covers AI training
Rights holders have limited options
AI development continues smoothly in the EU
RISK: Brain drain of creators leaving the EU
Scenario B: Rights holders win
AI training requires licenses
Copyright infringement determined
AI companies must pay or delete data
RISK: EU loses ground to USA/China in AI race
Scenario C: Compromise
License models mandatory
Opt-out strengthened
Transparency requirements increase
Both innovation and protection balanced
2027: Legislation adapts
Regardless of the ruling, the EU will need to update legislation:
Clearer TDM rules
Practical opt-out mechanisms
Standardized license agreements
Sweden will follow EU developments.
Industry solutions: GEMA model
GEMA (German music organization) has proposed a two-part license model:
Part 1: Training license
AI companies pay to use material in training
Based on volume and commerciality
Part 2: Output license
End users pay for AI-generated material based on protected content
Revenue sharing between AI companies and creators
This could become a model for the future.
AI detection improves
Tools to identify AI-generated content are improving:
OpenAI's text classifier
Biometric markers in images
Watermarking in AI output
EU AI Act requires: AI-generated content must be marked and detectable for certain types of systems.
Part 9: Checklist – Is your company compliant?
Go through this checklist TODAY:
Basic compliance
☐ Do we have an AI copyright policy?
☐ Has every employee received training?
☐ Do we always review AI output before publishing?
☐ Do we regularly use plagiarism tools?
☐ Do we have data classification (what can be inputted into AI)?
Legal security
☐ Have we consulted a lawyer about AI use?
☐ Do our supplier agreements have AI clauses?
☐ Do we know who is responsible in case of copyright infringement?
☐ Do we have processes for opting out our own data?
☐ Are we following developments in the EU Court?
Tools choice
☐ Are we using AI tools with clear terms of use?
☐ Do we have data processing agreements (DPA) with suppliers?
☐ Do we know where our AI tools store data?
☐ Are we using enterprise versions for sensitive use?
Documentation
☐ Do we document how AI is used in production?
☐ Do we save proof of human creative involvement?
☐ Do we have an incident response plan for copyright infringement?
☐ Do we log what data has been inputted into AI?
If you answered NO to more than 3 points: You need to act NOW.
Summary: 10 golden rules for AI and copyright 2026
1. AI cannot own copyright – Only human creation is protected
2. Document human involvement – This is your burden of proof
3. Always review AI output – Never assume it is original
4. Have a clear AI policy – Everyone must know the rules
5. Respect opt-outs – Even if the legal situation is unclear
6. Read terms of use – You do not always own the AI output
7. Never input sensitive material – Into public AI tools
8. Inform about AI use – Transparency is key
9. Prepare for the EU ruling in fall 2026 – It will change everything
10. Stay updated – Legislation is evolving rapidly
Resources and expert help
Compliance tools
Plagiarism tools:
Copyscape
Grammarly
Turnitin
AI detection:
GPTZero
Originality.ai
OpenAI Classifier
Conclusion: Safely navigate the legal gray zone
2026 is the year when AI and copyright truly collide. With the EU AI Act coming into full force in August and the EU Court's ruling expected in the fall, the rules will either be clarified – or further complicated.
For Swedish companies, this means:
Urgent need for clear policies
Investments in training and tools
Legal preparedness
Continuous updates
But it's not all bad news. With the right preparations, you can:
Use AI powerfully AND legally
Protect your company from risks
Build competitive advantages
Stay ahead of competitors
The key is to act NOW, not wait for the law to be "clear" – because it will continue to evolve for years.
Next steps:
Implement our checklist
Create your AI copyright policy
Educate the team
The future belongs to companies that use AI smartly AND responsibly.