SAN FRANCISCO: From a baby fleeing a T-Rex to cats dancing in streetwear, Big Tech’s race to give social media an AI makeover is underway—and so far, the results are chaotic, raising massive concerns over safety, copyright, and consumer acceptance.
The influx of new AI tools is part of a high-stakes competition among tech giants to dominate the next era of the internet and secure AI-driven revenue amid fears of an economic bubble.
Several major companies are pushing AI directly into social feeds and messaging apps, often mimicking successful formats like TikTok:
OpenAI’s Sora App: This latest entrant is a video generation tool that quickly produced bizarre, high-quality scenes like those described above, though it’s designed primarily to encourage content creation.
Meta’s AI App: Meta has integrated a TikTok-like video feed called Vibes into its AI app, alongside the ability to chat with AI personas directly within Instagram DMs.
TikTok’s AI Alive: This tool converts static images into short videos based on simple commands.
These platforms are vying to become the go-to home for the next generation of influencers, positioning themselves as both a creative forum and a viewing destination.
The push to democratize sophisticated AI video creation has amplified long-standing worries about misinformation, fake content, and intellectual property rights:
Copyright Infringement: Motion Picture Association CEO Charles Rivkin immediately raised alarms after Sora’s debut, citing the proliferation of videos infringing on major films and characters. OpenAI has responded by exploring a revenue-sharing model and implementing guardrails that now result in an error message for copyrighted prompts like “Pikachu.”
Misinformation and Deepfakes: Tools like Sora can create incredibly lifelike footage, taking the fear of misinformation to a new level. Although Sora-generated videos include C2PA metadata (an industry standard signature denoting origin) and invisible watermarks, tech publications have already demonstrated that it is relatively easy to remove the visual watermarks.
Beyond content manipulation, the integration of AI assistants poses specific threats to users, particularly teenagers.
Mental Health Risks: The new focus on AI personas follows a string of lawsuits against apps like Character.AI, alleging that AI chats have contributed to suicide and mental health issues among young people. OpenAI stated that Sora includes “stronger protections for young users,” such as restricting mature content and preventing adults from initiating messages with teens.
Consumer Backlash: There is a growing question about whether consumers even want the resulting deluge of random, AI-generated content, dubbed “AI slop,” flooding their feeds.
Privacy Confusion: Earlier this year, some Meta AI users were reportedly unaware that their personal chats—which included sensitive medical or legal questions—could be shared to a public “Discover” feed, underscoring the potential for confusion when merging private messaging with public AI tools. Meta maintains that chats are private by default unless users take multi-step actions to share them.
As tech giants continue their rapid-fire innovation, they are simultaneously racing to figure out how to manage the sweeping ethical and legal fallout of integrating artificial intelligence into the fabric of social media.