AI should not be a black box

Spats at OpenAI highlight the need for companies to become more transparent
Sam Altman, chief executive of OpenAI. Researchers once released papers on their work, but the rush for market share has ended such disclosures
Proponents and detractors of AI tend to agree that the technology will change the world. The likes of OpenAI’s Sam Altman see a future where humanity will flourish; critics prophesy societal disruption and excessive corporate power. Which prediction comes true depends in part on foundations laid today. Yet the recent disputes at OpenAI — including the departure of its co-founder and chief scientist — suggest key AI players have become too opaque for society to set the right course.
An index developed at Stanford University finds transparency at AI leaders Google, Amazon, Meta and OpenAI falls short of what is needed. Though AI emerged through collaboration by researchers and experts across platforms, the companies have clammed up since OpenAI’s ChatGPT ushered in a commercial AI boom. Given the potential dangers of AI, these companies need to revert to their more open past.
Transparency in AI falls into two main areas: the inputs and the models. Large language models, the foundation for generative AI such as OpenAI’s ChatGPT or Google’s Gemini, are trained by trawling the internet to analyse and learn from “data sets” that range from Reddit forums to Picasso paintings. In AI’s early days, researchers often disclosed their training data in scientific journals, allowing others to diagnose flaws by weighing the quality of inputs.
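To make the idea of input transparency concrete, the sketch below shows what a minimal, machine-readable training-data disclosure might look like. It is illustrative only: the DataSource structure, the source names and the token counts are invented for this example and do not describe any real model's training corpus.

```python
from dataclasses import dataclass, field

@dataclass
class DataSource:
    """One entry in a hypothetical training-data disclosure."""
    name: str            # e.g. a web-crawl snapshot or a licensed book corpus
    license: str         # terms under which the data was used
    size_tokens: int     # approximate contribution to the training corpus
    known_issues: list[str] = field(default_factory=list)

# Placeholder entries -- not a description of any real model's training data.
disclosure = [
    DataSource("Web crawl (filtered)", "mixed / unverified", 1_200_000_000_000,
               ["possible copyrighted text", "duplicated pages"]),
    DataSource("Public-domain books", "public domain", 30_000_000_000),
]

for src in disclosure:
    print(f"{src.name}: {src.size_tokens:,} tokens, license = {src.license}")
```

A disclosure at roughly this level of detail would let outside researchers weigh the quality of the inputs, much as journal papers once did, without the company necessarily publishing the underlying data itself.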
Today, key players tend to withhold the details of their data to protect against copyright infringement suits and eke out a competitive advantage. This makes it difficult to assess the veracity of responses generated by AI. It also leaves writers, actors and other creatives without insight into whether their privacy or intellectual property has been knowingly violated.
The models themselves lack transparency too. How a model interprets its inputs and generates language depends upon its design. AI firms tend to see the architecture of their model as their “secret sauce”: the ingenuity of OpenAI’s GPT-4 or Meta’s Llama pivots on the quality of its computation. AI researchers once released papers on their designs, but the rush for market share has ended such disclosures. Yet without the understanding of how a model functions, it is difficult to rate an AI’s outputs, limits and biases.
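The same logic applies to the models themselves. Below is a hedged sketch of a minimal "model card"; every field and figure is a placeholder rather than the actual specification of GPT-4, Llama or any other system, but disclosures of this kind are the sort of information that would let outsiders rate an AI's outputs, limits and biases.

```python
import json

# An illustrative "model card". Every value is a placeholder, not the
# specification of GPT-4, Llama or any other released model.
model_card = {
    "model_name": "example-lm-7b",
    "architecture": {
        "type": "decoder-only transformer",
        "parameters": 7_000_000_000,
        "context_window": 8_192,
    },
    "training_data": "see accompanying data disclosure",
    "evaluations": {
        "toxicity": "benchmark results not yet published",
        "factual_accuracy": "benchmark results not yet published",
    },
    "known_limitations": [
        "may produce plausible but false statements",
        "behaviour outside English is under-tested",
    ],
}

print(json.dumps(model_card, indent=2))
```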
All this opacity makes it hard for the public and regulators to assess AI safety and guard against potential harms. That is all the more concerning as Jan Leike, who helped lead OpenAI’s efforts to steer super-powerful AI tools, claimed after leaving the company this month that its leaders had prioritised “shiny products” over safety. The company has insisted it can regulate its own product, but its new security committee will report to the very same leaders.
Governments have started to lay the foundation for AI regulation through a conference last year at Bletchley Park, President Joe Biden’s executive order on AI and the EU’s AI Act. Though welcome, these measures focus on guardrails and “safety tests”, rather than full transparency. The reality is that most AI experts are working for the companies themselves, and the technologies are developing too quickly for periodic safety tests to be sufficient. Regulators should call for model and input transparency, and experts at these companies need to collaborate with regulators.
AI has the potential to transform the world for the better — perhaps with even more potency and speed than the internet revolution. Companies may argue that transparency requirements will slow innovation and dull their competitive edge, but the recent history of AI suggests otherwise. These technologies have advanced on the back of collaboration and shared research. Reverting to those norms would only serve to increase public trust, and allow for more rapid, but safer, innovation.