Blurred Reality
Meta rebrands AI Labels + Phone with built-in AI-Detection
DeepMind Study: Deepfakes most common GenAI use case
Hey, it’s Philip.
Happy Monday vibes!
I've been busy, so this issue wraps up the last two weeks 🙂 I'll be back on a weekly schedule next week.
For new readers, sign up here
NEWS RUNDOWN
META released a bunch of new AI models and a watermarking solution (AudioSeal) to pinpoint AI-generated speech within a longer audio file (Venture Beat)
Popular generative AI music companies Udio and Suno get sued by Sony, Warner & Universal over copyright infringement (Reuters)
YouTube now has a request-removal process for victims of AI deepfakes and voice cloning (Decoder)
Synthesia’s AI-Avatars will soon become more realistic with a full-body depiction (MIT)
TikTok’s AI Tools for advertisers launched with missing safety features, allowing people to make AI avatars say almost anything (The Verge)
Google’s DeepMind showcased a new model to create audio for videos (video-to-audio) (Google)
TOP STORIES
Meta’s “Made by AI” label leads to frustration among photographers as authentic pictures get mislabelled & Meta rebrands to “AI info” as a quick fix

Credit: Meta
Back in February, Meta announced that it would start labelling AI content across Instagram, Threads and Facebook with a “Made with AI” label, using IPTC and C2PA metadata as well as signals from imperceptible watermarks embedded in content created with Meta AI.
Now in June, when Meta started rolling out its “Made with AI” labels on Instagram, a shitstorm erupted.
Many photographers reported that their genuine pictures were wrongly labelled as “Made with AI”, while many obvious GenAI pictures were not tagged. The problem is the reliance on available metadata: even small edits or tweaks in editing software often add or change metadata, leading to miscategorisation by Meta.
Since there is no quick fix, Meta simply toned down its user-facing wording, marking possible AI content with a vaguer “AI info” label.
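To make the failure mode concrete, here is a toy sketch of metadata-based flagging and how an innocent retouch can trip it. This is an illustrative assumption, not Meta's actual pipeline: the `AI_MARKERS` list and the `looks_ai_generated` helper are invented for this example, though `trainedAlgorithmicMedia` is a real IPTC DigitalSourceType value that editing tools can write.

```python
# Toy illustration of metadata-based AI labelling (NOT Meta's actual
# pipeline): flag content whose metadata mentions a known AI marker.

# Hypothetical marker strings; real systems parse IPTC/C2PA fields.
AI_MARKERS = ("c2pa", "trainedAlgorithmicMedia", "Adobe Firefly")

def looks_ai_generated(metadata: dict) -> bool:
    """Return True if any metadata value contains a known AI marker."""
    joined = " ".join(str(v) for v in metadata.values())
    return any(marker in joined for marker in AI_MARKERS)

# A genuine camera photo...
camera_photo = {"Make": "Canon", "Software": "Camera Firmware 1.2"}
# ...and the same photo after a retouch in an editor that stamps the
# IPTC digital-source type, even though the scene itself is real.
retouched = {**camera_photo, "DigitalSourceType": "trainedAlgorithmicMedia"}

print(looks_ai_generated(camera_photo))  # False
print(looks_ai_generated(retouched))     # True -> mislabelled "Made with AI"
```

The point of the sketch: the checker never looks at pixels, only at what software happened to write into the file, which is exactly why small edits can flip an authentic photo into the “Made with AI” bucket.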
Opinion: Since there are no broadly adopted technical standards yet (C2PA is getting there) and AI detection is no longer really an option (AI outputs are too good), the AI-labelling effort forced upon platforms by regulators via the White House Executive Order or the EU AI Act will remain a challenge and likely an impossible mission. Especially in a world where AI will be ubiquitous and a feature in every product, the line between what is “Made with AI” and what was merely enhanced or edited (e.g. cut) with AI is already blurred.
I believe regulation as proposed in California is more sensible, demanding provenance for both AI-generated and authentic content.
In the end, we will probably live in a world where the majority of media is AI-generated or manipulated, and we will have adapted to trusting only “authenticated AI” and “authenticated real” content or trusted sources/outlets, accepting that everything else could be real or GenAI.
Honor Phone’s built-in deepfake detection for videos

Credit: techradar
Honor has introduced an AI-powered Deepfake Detection feature for its smartphones to combat deepfake video calls. This tool analyses video calls and videos frame by frame, examining elements like eye contact and lighting to identify potential deepfakes and alert users with a popup warning. (techradar)
Opinion: It’s an interesting idea to integrate AI detection at the consumer interface layer or on-device. However, AI detection is a tricky and continuously evolving field, so this is likely (now and in the future) more a gimmick than a dependable solution. It is very likely, though, that we will get authentication systems for video calls to make sure all participants are who they claim to be.
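For intuition on what "frame by frame" analysis can mean, here is a deliberately crude sketch of one such per-frame signal: a lighting-consistency check that flags abrupt brightness jumps between consecutive frames. This is a toy assumption for illustration, not Honor's actual algorithm; the function names and the threshold are invented, and real detectors use far richer features than mean brightness.

```python
# Toy frame-by-frame "lighting consistency" check -- an illustration of
# the kind of per-frame signal a deepfake detector might use, NOT
# Honor's actual method. A frame is a flat list of 0-255 pixel values.

def mean_brightness(frame):
    """Average pixel value of one frame."""
    return sum(frame) / len(frame)

def suspicious_jumps(frames, threshold=30.0):
    """Indices where brightness jumps more than `threshold`
    between consecutive frames."""
    levels = [mean_brightness(f) for f in frames]
    return [i for i in range(1, len(levels))
            if abs(levels[i] - levels[i - 1]) > threshold]

# Synthetic clip: steady lighting, then one frame that is suddenly much
# brighter -- the kind of inconsistency a face swap can introduce.
clip = [[100] * 16, [100] * 16, [200] * 16, [100] * 16, [100] * 16]

print(suspicious_jumps(clip))  # [2, 3]: the glitch frame and the drop after it
```

Even this toy shows why such heuristics are fragile: a real scene change or camera exposure shift produces the same brightness jump as a manipulation, which is part of why on-device detection tends toward false alarms.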
VIDEO OF THE WEEK
Toys“R”Us released the first commercial produced with OpenAI’s Sora. More
Featured:
DeepMind Research on use of GenAI: “The most common goal of actors misusing generative AI was to shape or influence public opinion, the analysis, conducted with the search group’s research and development unit Jigsaw, found. That accounted for 27 percent of uses, feeding into fears over how deepfakes might influence elections globally this year.” (Financial Times & Study)
AI Safety & Deepfakes:
The dangers of voice fraud: We can’t detect what we can’t see (Venture Beat)
Ted Cruz wants AI platforms to be liable for deepfakes (404media)
California’s AI safety bill is getting massive blow back from Silicon Valley (Financial Times)
Germany introduces a bill to punish creation and distribution of deepfakes violating personal rights with up to five years in prison (Decoder)
Crypto investors are estimated to be defrauded of $25B USD by deepfake crypto scams in 2024 (Coinspeaker)
Most deepfake scams are targeting business leaders (Venture Beat)
Interesting pieces:
Research published in Nature suggests that humans are still fairly good at identifying AI-cloned and AI-generated voices, and that AI voices stimulate slightly different parts of the brain. (Nature)
Interview: Geoffrey Hinton on the risk of losing control to a superintelligence, regulation and countermeasures. (Bloomberg)
A really interesting read on the dynamic between Microsoft AI and OpenAI, with Mustafa Suleyman’s and Sam Altman’s interests diverging (Semafor)
Reuters Survey: Most people (52% in the US and 63% in the UK) would be uncomfortable with news produced or curated by AI (Reuters)
NYT vs OpenAI lawsuit: OpenAI argues that NYT’s published work also draws on other copyrighted work, and that the way an LLM learns is not so different from the editorial process. (Opinion piece on TorrentFreak)