The Hidden Layers: AI’s Growing Opacity Dilemma

The unveiling of GPT-4, the AI language model behind ChatGPT, stirred significant buzz in the tech world earlier this year. Yet while the 100-page technical report covered many aspects of the model, it conspicuously omitted key details about how the system was built and how it works.

Such an omission isn’t accidental. Many leading tech companies are taking a guarded approach to the specifics of their state-of-the-art algorithms. This guardedness stems from concerns about potential misuse, coupled with the competitive edge they believe these details provide.

However, a recent Stanford University study is now drawing attention to the secrecy surrounding GPT-4 and other sophisticated AI systems. According to numerous AI experts, this trend marks a significant shift in the field, one that may not bode well for scientific progress, accountability, reliability, or safety.

In their analysis, the Stanford team examined 10 AI systems, predominantly large language models. These include GPT-4, Google’s PaLM 2, Amazon’s Titan Text, and offerings from startups such as AI21 Labs’ Jurassic-2 and Anthropic’s Claude 2, as well as ‘open source’ models like Meta’s Llama 2.

Using 13 criteria, the team gauged the transparency of these models, including openness about training data, hardware specifics, software frameworks, and energy consumption. Their findings? No model scored above 54% on this transparency scale. Amazon’s Titan Text ranked as the least transparent, while Llama 2 was the most open, albeit still deemed largely non-transparent.
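For illustration, here is a minimal sketch in Python of how checklist-style transparency scoring might work. The criteria names, the equal weighting of each criterion, and the example disclosures are assumptions made here for clarity; they are not the Stanford team’s actual indicators, methodology, or results.

```python
# Minimal sketch of checklist-style transparency scoring.
# The criteria and example below are illustrative placeholders only.

CRITERIA = [
    "training data disclosed",
    "hardware disclosed",
    "software framework disclosed",
    "energy consumption disclosed",
]

def transparency_score(disclosures: dict[str, bool]) -> float:
    """Return the percentage of criteria a model satisfies."""
    met = sum(1 for criterion in CRITERIA if disclosures.get(criterion, False))
    return 100.0 * met / len(CRITERIA)

# A hypothetical model disclosing 2 of the 4 listed criteria scores 50%.
example = {
    "training data disclosed": False,
    "hardware disclosed": True,
    "software framework disclosed": True,
    "energy consumption disclosed": False,
}
print(f"{transparency_score(example):.0f}% transparent")
```

Under a simple scheme like this, a 54% score would mean a model satisfies just over half of the criteria the researchers checked, which makes the headline figures easy to interpret.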

Nathan Strauss, an Amazon spokesperson, noted that it might be premature to assess Titan Text’s transparency while the model is still in private preview. Meta and OpenAI, meanwhile, did not comment on the study.

According to Rishi Bommasani, a Stanford PhD student and contributor to the study, AI is paradoxically becoming more opaque even as its impact grows. He recalls the late 2010s, when tech companies were more forthcoming about their research, an openness that helped fuel the deep learning boom.

Moreover, the study suggests that this secrecy might be unnecessary from a competition standpoint. Kevin Klyman, a Stanford policy researcher, believes that these models could maintain their competitive edge even if more transparent.

Jesse Dodge, a scientist at the Allen Institute for AI, describes this as a watershed moment in AI’s history: the organizations shaping the field are becoming increasingly insular, withholding vital details.

To counter this, the Allen Institute for AI is building OLMo, an AI language model designed for transparency. Its training data, a diverse collection drawn from the web, academic works, code, books, and encyclopedias, has been released under the institute’s ImpACT license as Dolma. Once OLMo is complete, the institute intends to release both the model and its code so the community can build on them.
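The article doesn’t describe the institute’s actual data pipeline, but the underlying idea of provenance-tagged training data can be illustrated with a small, self-contained sketch. Everything below, from the source categories to the record fields and file name, is hypothetical and chosen only to show how each training document might carry metadata tracing it back to its source.

```python
# Toy illustration (not the Allen Institute's pipeline) of assembling a
# provenance-tagged corpus sample, so every document can be traced to a source.
import json
import random

# Hypothetical source categories, mirroring those the article lists for Dolma.
SOURCES = {
    "web": ["Example crawled page text ..."],
    "academic": ["Example paper abstract ..."],
    "code": ["def hello():\n    return 'world'"],
    "books": ["Example public-domain passage ..."],
    "encyclopedia": ["Example encyclopedia entry ..."],
}

def build_sample(n_docs: int, seed: int = 0) -> list[dict]:
    """Draw documents at random and attach source metadata to each record."""
    rng = random.Random(seed)
    sample = []
    for i in range(n_docs):
        source = rng.choice(list(SOURCES))
        sample.append({"id": i, "source": source, "text": rng.choice(SOURCES[source])})
    return sample

# Write the sample as JSON Lines, a format commonly used for LM training corpora.
with open("corpus_sample.jsonl", "w", encoding="utf-8") as f:
    for record in build_sample(5):
        f.write(json.dumps(record) + "\n")
```

Keeping source metadata alongside every record is one concrete way a released dataset like Dolma lets outside researchers audit what a model was trained on.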

In Dodge’s view, access to the data underlying powerful AI models is essential. Science thrives on reproducibility, he argues, and without access to this foundational data and code, the field risks stagnating.

As AI’s influence and potential risks grow, fostering a culture of openness and transparency may be more urgent than ever.