In a recent discussion on Threads, Instagram head Adam Mosseri warned users about the reliability of online content. He underscored how artificial intelligence can now generate images that are difficult to distinguish from real ones.

Mosseri stressed the importance of assessing content sources and argued that social networks should play a key role in that validation. ‘Identifying AI-generated content is crucial,’ Mosseri remarked, while acknowledging that some content may still be mislabeled.

He contended that platforms should not only label AI-created content but also disclose information about who created it, enabling users to gauge its credibility. This mirrors the advice to treat chatbots with skepticism before relying on AI-driven search results, since they can mislead with confident-sounding answers.

Despite Mosseri’s suggestions, Meta’s platforms currently lack such context tools, although there have been indications that changes to its content moderation policies are imminent. The features he describes hint at user-driven moderation similar to X’s Community Notes or Bluesky’s customizable filters, though it’s uncertain whether Meta will follow this path. Historically, the company has adapted successful features from peers like Bluesky.