
AI should not be a black box

Spats at OpenAI highlight the need for companies to become more transparent

Sam Altman, chief executive of OpenAI. Researchers once released papers on their work, but the rush for market share has ended such disclosures

Proponents and detractors of AI tend to agree that the technology will change the world. The likes of OpenAI’s Sam Altman see a future where humanity will flourish; critics prophesy societal disruption and excessive corporate power. Which prediction comes true depends in part on foundations laid today. Yet the recent disputes at OpenAI — including the departure of its co-founder and chief scientist — suggest key AI players have become too opaque for society to set the right course.

An index developed at Stanford University finds transparency at AI leaders Google, Amazon, Meta and OpenAI falls short of what is needed. Though AI emerged through collaboration by researchers and experts across platforms, the companies have clammed up since OpenAI’s ChatGPT ushered in a commercial AI boom. Given the potential dangers of AI, these companies need to revert to their more open past.

Transparency in AI falls into two main areas: the inputs and the models. Large language models, the foundation for generative AI such as OpenAI’s ChatGPT or Google’s Gemini, are trained by trawling the internet to analyse and learn from “data sets” that range from Reddit forums to Picasso paintings. In AI’s early days, researchers often disclosed their training data in scientific journals, allowing others to diagnose flaws by weighing the quality of inputs.

Today, key players tend to withhold the details of their data to protect against copyright infringement suits and eke out a competitive advantage. This makes it difficult to assess the veracity of responses generated by AI. It also leaves writers, actors and other creatives without insight into whether their privacy or intellectual property has been knowingly violated.

The models themselves lack transparency too. How a model interprets its inputs and generates language depends on its design. AI firms tend to see the architecture of their model as their "secret sauce": the ingenuity of OpenAI's GPT-4 or Meta's Llama pivots on the quality of its computation. AI researchers once released papers on their designs, but the rush for market share has ended such disclosures. Yet without an understanding of how a model functions, it is difficult to rate an AI's outputs, limits and biases.

All this opacity makes it hard for the public and regulators to assess AI safety and guard against potential harms. That is all the more concerning as Jan Leike, who helped lead OpenAI’s efforts to steer super-powerful AI tools, claimed after leaving the company this month that its leaders had prioritised “shiny products” over safety. The company has insisted it can regulate its own product, but its new security committee will report to the very same leaders.

Governments have started to lay the foundation for AI regulation through a conference last year at Bletchley Park, President Joe Biden’s executive order on AI and the EU’s AI Act. Though welcome, these measures focus on guardrails and “safety tests”, rather than full transparency. The reality is that most AI experts are working for the companies themselves, and the technologies are developing too quickly for periodic safety tests to be sufficient. Regulators should call for model and input transparency, and experts at these companies need to collaborate with regulators.

AI has the potential to transform the world for the better — perhaps with even more potency and speed than the internet revolution. Companies may argue that transparency requirements will slow innovation and dull their competitive edge, but the recent history of AI suggests otherwise. These technologies have advanced on the back of collaboration and shared research. Reverting to those norms would only serve to increase public trust, and allow for more rapid, but safer, innovation.
