The rapid rise of generative AI in 2022 heralded a new era, but by 2023 it had sparked a wave of concerns. OpenAI’s ChatGPT, which set records as the fastest-growing consumer product, inadvertently paved the way for governmental scrutiny. The US Federal Election Commission is probing deceptive campaign ads, Congress is demanding oversight of AI companies’ data labeling practices, and the European Union has hastily modified its AI Act to address generative AI.
Yet amid the novelty, generative AI confronts all-too-familiar challenges reminiscent of social media platforms’ struggles over the past two decades. OpenAI and its counterparts, racing to unveil new AI models, are grappling with the same woes that plagued earlier tech giants like Meta. Problems with content moderation, labor practices, and misinformation are resurfacing, now compounded by an AI twist.
“These problems were entirely foreseeable,” says Hany Farid, a professor at the UC Berkeley School of Information, underscoring how preventable these hurdles were.
Familiar Paths

Generative AI companies, in some instances, inherit problematic infrastructures established by their social media predecessors. The reliance on low-paid, outsourced content moderation workers, a practice long prevalent at companies like Facebook, is mirrored in the training of generative AI models. These workers, often laboring under harsh conditions for meager wages, remain pivotal in shaping the AI landscape.
Outsourcing not only keeps crucial operational functions at arm’s length but also blurs the line between human and AI contributions. The true intelligence behind content removal or customer service interactions becomes a murky blend of algorithms and human effort, complicating oversight and governance.
Similarly, responses to criticism echo the playbook of social media platforms. Assertions about “safeguards” and “acceptable use” policies parallel social networks’ terms of service. However, these measures prove porous and easily circumvented, as demonstrated by flaws uncovered in Google’s Bard chatbot that allowed it to propagate Covid-19 misinformation and falsehoods about the war in Ukraine.
Beyond Authenticity

Generative AI amplifies the potential for disinformation, enabling rapid production of fabricated content and undermining the credibility of genuine information. A surge in fake videos depicting individuals, including politicians and CEOs, saying things they never said is raising alarms.
While platforms attempt to counter this with labeling policies for AI-generated political ads, those measures fall short of addressing the full breadth of fake media dissemination.
Moreover, even as concerns about misleading content grow, major tech companies have scaled back the resources dedicated to detecting it, exacerbating an already volatile landscape.
Reckless Progress

The echoes of social media’s unintended consequences, epitomized by the mantra “Move fast and break things,” resonate in generative AI. Companies rush to release new models with little consideration of the consequences and scant transparency about how those systems are developed, trained, and deployed.
Despite legislative efforts to take a more proactive stance toward regulating generative AI, Farid points to an alarming lag between regulation and AI advancement. With regulators trailing the breakneck speed of AI development, there is little incentive for these companies to slow down.
The crux lies not in the technology itself but in its societal ramifications. Farid stresses that tech giants privatize profits while externalizing costs; the fear, he argues, is not of technology but of unchecked capitalism.