Spark Intelligence—the AI news for creatives and marketers #019

The AI news you need to know to grow your business and your career

Greetings earthlings,

Welcome to Spark Intelligence – your AI navigator in the creative and marketing world. Emma here - and today we’re tackling one of the most awkward (but essential) conversations in the AI world: copyright.

Back in March, we hosted a LinkedIn Live session (catch up here) with the brilliant Alexandra Ralph, founder of the Creative Rights Institute, on AI, copyright and intellectual property.

This isn't a theoretical topic - it's happening now. The tricky thing is, the legal frameworks haven't caught up. But your clients still expect clarity.

This edition breaks down what's legally murky, what's practical, and how to talk about AI and copyright without spiralling.

🔍 In this edition:

  1. The current state of copyright law and AI

  2. What you can and can’t claim (yet)

  3. Talking to clients without panic

  4. How to protect your team, your work, and your brand

  5. Download: The AI + Copyright Guide for Agencies

1. What’s the law?

While nearly all of the model-making companies (OpenAI, Google and Meta included) have knowingly used copyrighted content to train their models - with lawsuits ongoing (e.g., Silverman v. OpenAI/Meta) - you are not breaking copyright by using them unless you deliberately create something that mimics an existing copyrighted work. This is the same as briefing an illustrator or photographer: if you ask them to deliberately mimic the style of another work, you will be infringing someone else's copyright. If not, you're fine.

However, for most brands, the main issue will not be worries about infringement - it will in fact be the opposite: what you create with these models may have no copyright protection at all.

The US Copyright Office issued guidance earlier this year clarifying that:

  • Works must have clear human intent to be copyrightable.

  • Prompts are not copyrightable and do not count as human intent. Instead, you must intentionally modify the generated work (copy or imagery) in some way to provide human authorship and make it copyrightable.

The UK Intellectual Property Office closed its consultation on Feb 25th and will issue guidance later this year; it is likely to follow a similar interpretation.

This means:

  • If you use AI to create images, copy or code, it may not be protected by copyright unless there’s been meaningful human input. 

  • The training data behind tools like ChatGPT and Midjourney includes lots of copyrighted material — there are court cases ongoing but no definitive rulings yet. 

  • Some platforms are shifting liability for IP risks to you via contracts - though others (like Adobe Firefly or Canva) now offer indemnification for properly used content generated through their licensed datasets.

For agencies: You may not own the output - or be able to license it to clients - unless human contributions are clearly documented.

For CMOs: Brand-safe content starts with understanding how it was made and whether liability protections apply.

2. 🤔 So what can you actually say?

Here’s what Alexandra helped clarify:

  • Be honest about when AI was used - transparency builds trust.

  • Make sure your contracts reflect co-creation or editing by humans rather than sole authorship claims for purely machine-generated outputs.

  • Flag high-risk content (e.g., likenesses, voice synthesis, training mimicry).

  • Ensure you have internal AI use policies in place.

  • Undertake regular audits of AI tools for brand-safety compliance.

This approach not only protects your agency but also reassures clients that you’re managing risks responsibly.

3. Talking to clients without causing panic

This is a trust conversation, not a tech one. Clients - whether internal stakeholders or external clients - want to know:

  • Is this legal?

  • Is it safe for our brand?

  • What happens if someone challenges it?

Frame the conversation around risk, responsibility, and transparency. Make sure your teams aren’t making blanket promises or vague reassurances - especially since laws vary across jurisdictions (e.g., Japan has a more permissive stance on AI-generated works than the UK or US). A clear position beats a fast one every time.

For agencies: Share your guidelines with your clients - it positions you as a partner, not just a provider.

For CMOs: Ask better questions of your agencies. “Did you use AI?” isn’t enough anymore. Try: “How was this produced? Are liability protections in place? How are you managing rights and reputation?”

4. Protecting your team, your work, your brand

Get your ducks in a row now - don’t wait for legislation to catch up. Here’s how to start:

  1. Create internal policies on AI use - What tools are approved? What workflows are safe? What risks get escalated?

  2. Educate your team - not just creatives but account managers and legal too. And don’t forget your freelancers. Everyone needs to understand the stakes.

  3. Include AI disclosure and IP clauses in your contracts now - whether you’re creating content or commissioning it from others.

5. Download: The AI + Copyright Guide for Agencies

We turned our live session with Alexandra into a free, practical guide.

It includes:

  • What’s known and what’s changing across jurisdictions

  • Recent legal cases

  • What to look out for in 2025

Share it with your teams. Send it to a client who wants to understand the legal status quo. We hope it’s helpful - let us know!


That’s all for today!

Cheers,
Emma 
Co-Founder of Spark AI

AI isn’t just tools - it’s a business shift. Spark AI helps you lead the AI transition strategically, not reactively, with confident teams and a strategy for the future.
