
‘Made with AI’… but what if it’s not?

In February 2024, Meta announced that it would be rolling out a suite of AI detection tools. As Meta explained in its newsroom post, AI-powered content generation tools are becoming so sophisticated that it is increasingly hard for consumers to tell the difference between what has been generated by AI and what has not. By clearly and correctly labelling AI-generated content, Meta hopes to build user trust.

Since AI tools became cheap, user-friendly and easily accessible, there has been a well-documented rise in public scepticism and mistrust, with fake news and deepfakes making their way into public consciousness. Research published in August 2023 suggested that 30% of the global population was aware of the concept of deepfakes; the same research conducted a year earlier found that only 13% knew what the word meant. As this thought-piece from the European Parliament puts it: “Simply knowing that deepfakes exist can be enough to undermine our confidence in all media representations, and make us doubt the authenticity of everything we see and hear online.”

Undermined Confidence

It’s this ‘undermined confidence’ in platforms which Meta is trying to address with its AI detection labels. But getting the labelling right is harder than Meta imagined. It’s easy for Meta to detect AI-generated content created using its own software, but there are now thousands of different AI tools out there, used either to create images and videos from scratch or to manipulate existing visuals. In the last few weeks, content creators on Instagram and Facebook have noticed that many of their photographs and videos have been mislabelled by Meta as ‘Made with AI’ when they weren’t. Predictably, there’s been an uproar, with some influencers going so far as to boycott Instagram.

Here’s why Meta is probably struggling to get the label right. To properly detect whether an image has been created by AI, detection programs can’t rely on the “look” of an image; they rely on being able to read metadata or invisible markers embedded within the file. The hope is that software like DALL-E, Midjourney and others all embed some metadata into AI-generated content to mark it as such. However, there is no government legislation or industry standard – thus far – which makes the embedding of this metadata or these AI markers mandatory.
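
To make that concrete, here is a minimal sketch of what metadata-based detection can look like – not Meta’s actual pipeline. It simply scans a file’s raw bytes for the ‘digital source type’ values the IPTC photo-metadata standard defines for AI imagery; the file name is hypothetical, and real detectors parse XMP/C2PA manifests properly and also check invisible watermarks, which need vendor-specific decoders.

```python
# Illustrative sketch of metadata-based AI detection (not Meta's
# actual system). It scans a file's raw bytes for the IPTC
# "digital source type" marker strings defined for AI imagery.

AI_MARKERS = (
    b"trainedAlgorithmicMedia",               # fully AI-generated image
    b"compositeWithTrainedAlgorithmicMedia",  # AI-assisted edit/composite
)

def has_ai_marker(path: str) -> bytes | None:
    """Return the first embedded AI-provenance marker found, else None."""
    with open(path, "rb") as f:
        data = f.read()
    for marker in AI_MARKERS:
        if marker in data:
            return marker
    return None

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    print(has_ai_marker("wedding_photo.jpg"))
```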

Dangerous Assumptions

Regulation is being discussed. The EU AI Act, for example, prescribes stringent record-keeping and logging of materials produced by high-risk AI applications. But how these regulations will be adopted and enforced remains to be seen. For now, tech companies seem to be moving faster than regulators and doing their own thing.

Meta claims to be developing its AI detection standards alongside other industry players like Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock. What Meta assumes, and needs, for its detection programme to be successful is that other industry players will comply. It also assumes that tech companies will act in good faith and all agree on the ethics of disclosure – which is a big, and dangerous, assumption to make.

Meta’s mislabelling of content seems to have exposed the problem. As a Meta spokesperson explained: “We rely on industry-standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent.”

Blurring Lines

Meta’s mislabelling also exposed another problem with AI – the blurring of lines between what is and isn’t AI. Many photographers noticed that Meta was applying the ‘Made with AI’ label to images edited using Adobe tools like Photoshop. Photoshop can be used to remove a bit of garbage on the lawn in an otherwise flawless frame of a bride and groom, or to make a sunset look more dramatic than it was. But it can also be used to make waistlines look slimmer, lips fuller and skin smoother – which brings a range of other ethical issues and mental health concerns.

But increasingly, tools like Photoshop have AI integrations. Here’s an example: instead of manually scrubbing out the garbage, you can use a text prompt to “tell” Photoshop how you want the image edited, and it will interpret your prompt and remove the garbage for you. You might end up with the same output as if you had scrubbed out the garbage yourself, but your image now carries a tiny piece of metadata telling Meta’s AI detector that it has been ‘Made with AI’.
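
Continuing the earlier hypothetical sketch (the has_ai_marker helper is illustrative, not Meta’s real logic): the IPTC vocabulary does distinguish a fully generated image from a composite that merely contains an AI-assisted edit, so a labeller that collapses the two produces exactly the kind of mislabelling photographers complained about.

```python
# Sketch of the "blurring lines" problem, reusing the hypothetical
# has_ai_marker() helper from the earlier example.

def coarse_label(path: str) -> str:
    """One-size-fits-all labelling: any marker means 'Made with AI'."""
    return "Made with AI" if has_ai_marker(path) else ""

def finer_label(path: str) -> str:
    """Distinguish fully generated images from AI-assisted edits."""
    marker = has_ai_marker(path)
    if marker == b"trainedAlgorithmicMedia":
        return "Made with AI"      # generated from scratch
    if marker == b"compositeWithTrainedAlgorithmicMedia":
        return "AI-assisted edit"  # e.g. generative fill on a real photo
    return ""

# A real photo retouched with generative fill gets the blanket label
# under coarse_label(), even though only a tiny region was synthesised.
```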

Tiny AI integrations are making their way into the tools we use day to day. This is not new. Microsoft Word, in which I am currently penning my thoughts, launched its ‘Editor’ function back in 2016: an AI-powered service which performs spell checks and recommends grammar corrections. Even though this article is a product of my very human brain, I’ve relied on Word to correct a few typos for me. So, is it ‘Made with AI’? It’s all in the eye of the beholder – or in the eye of the AI detector.

Some notes:

  • Meta has acknowledged it is trying to resolve the mislabelling of images.
  • The ‘Made with AI’ label is currently only visible on the mobile app, not on desktop.
