Generative AI is sowing the seeds of doubt in serious science

Researchers have already developed a bot that could help tell the difference between synthetic and human-generated text

The writer is a science commentator

Large language models like ChatGPT are purveyors of plausibility. The chatbots, many based on so-called generative AI, are trained on vast quantities of text scraped from the internet and learn to assemble coherent answers to user questions, churning out convincing student essays, authoritative-sounding legal documents and believable news stories.

But, because publicly available data contains misinformation and disinformation, some machine-generated texts might not be accurate or true. That has triggered a scramble to develop tools to identify whether text has been drafted by human or machine. Science is also struggling to adjust to this new era, with live discussions over whether chatbots should be allowed to write scientific papers or even generate new hypotheses.

The importance of distinguishing artificial from human intelligence is growing by the day. This month, UBS analysts revealed ChatGPT was the fastest-growing web app in history, garnering 100mn monthly active users in January. Some sectors have decided there is no point bolting the stable door: on Monday, the International Baccalaureate said pupils would be allowed to use ChatGPT to write essays, provided they referenced it.  

In fairness, the tech’s creator is upfront about its limitations. Sam Altman, OpenAI’s chief executive, warned in December that ChatGPT was “good enough at some things to create a misleading impression of greatness . . . we have lots of work to do on robustness and truthfulness.” The company is developing a cryptographic watermark for its output, a secret machine-readable sequence of punctuation, spellings and word order; and is honing a “classifier” to tell the difference between synthetic and human-generated text, using examples of both to train it.
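To make the classifier idea concrete, here is a minimal sketch in Python of how such a detector could be built: a generic supervised pipeline fitted on labelled examples of both kinds of text. OpenAI's actual classifier and its training data are not public, so the scikit-learn pipeline, the toy examples and the labels below are illustrative stand-ins, not the company's method.

```python
# Illustrative stand-in for a human-vs-machine text classifier.
# NOT OpenAI's classifier: just a generic supervised pipeline trained
# on labelled examples of both kinds of text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; a usable detector would need a large,
# varied corpus of labelled passages.
texts = [
    "I think the experiment kind of worked, though the third run was odd.",
    "In conclusion, the results demonstrate a robust and significant effect.",
]
labels = [0, 1]  # 0 = human-written, 1 = machine-generated

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram frequencies
    LogisticRegression(),
)
detector.fit(texts, labels)

# Estimated probability that a new passage is machine-generated.
print(detector.predict_proba(["The findings conclusively demonstrate..."])[0, 1])
```

The weakness of this approach is visible in the sketch: the detector is only as good as the volume and variety of labelled examples it is fitted on, which is the problem the next paragraph picks up.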

Eric Mitchell, a graduate student at Stanford University, figured a classifier would take a lot of training data. Along with colleagues, he came up with DetectGPT, a “zero-shot” approach to spotting the difference, meaning the method requires no prior learning. Instead, the method turns a chatbot on itself, to sniff out its own output.

It works like this: DetectGPT asks a chatbot how much it “likes” a sample text, with the “liking” a shorthand for how similar the sample is to its own creations. DetectGPT then goes one step further — it “perturbs” the text, slightly altering the wording. The assumption is that a chatbot is more variable in its “likes” of altered human-generated text than altered machine text. In early tests, the researchers claim, the method correctly distinguished between human and machine authorship 95 per cent of the time.
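In code, the logic looks roughly like the sketch below, which scores a passage with an openly available model (GPT-2) and compares the score before and after perturbation. The published method perturbs text with a mask-filling model such as T5; the random word dropout here is a crude stand-in, and the function names are invented for illustration, so this shows the shape of the idea rather than the authors' implementation.

```python
# Illustrative sketch of the DetectGPT idea: compare how much a model
# "likes" a passage before and after small perturbations.
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`
    (the numerical form of how much the model "likes" it)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

def perturb(text: str, drop_rate: float = 0.15) -> str:
    """Crude perturbation: randomly drop ~15% of words. DetectGPT proper
    uses a mask-filling model (T5) to rewrite spans instead."""
    words = text.split()
    kept = [w for w in words if random.random() > drop_rate]
    return " ".join(kept) if kept else text

def perturbation_discrepancy(text: str, n: int = 10) -> float:
    """Machine-generated text tends to sit near a peak of the model's own
    probability, so perturbing it lowers the score consistently; human text
    shifts less predictably. A large positive gap suggests machine authorship."""
    original = log_likelihood(text)
    perturbed = [log_likelihood(perturb(text)) for _ in range(n)]
    return original - sum(perturbed) / len(perturbed)

print(perturbation_discrepancy("The results of the study were conclusive."))
```

The appeal of the zero-shot framing is visible here: nothing is trained, so there is no labelled dataset to collect; the chatbot's own probability estimates do all the work.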

There are caveats: the results are not yet peer-reviewed; the method, while better than random guessing, did not work equally reliably across all generative AI models; and DetectGPT could be fooled by making human tweaks to synthetic text.

What does all this mean for science? Scientific publishing is the lifeblood of research, injecting ideas, hypotheses, arguments and evidence into the global scientific canon. Some have been quick to alight on ChatGPT as a research assistant, with a handful of papers controversially listing the AI as a co-author.

Meta even launched a science-specific text generator called Galactica. It was withdrawn three days later. Among the howlers it produced was a fictitious history of bears travelling in space.

Professor Michael Black of the Max Planck Institute for Intelligent Systems in Tübingen tweeted at the time that he was “troubled” by Galactica’s answers to multiple inquiries about his own research field, including attributing bogus papers to real researchers. “In all cases, [Galactica] was wrong or biased but sounded right and authoritative. I think it’s dangerous.” 

The peril comes from plausible text slipping into real scientific submissions, peppering the literature with fake citations and forever distorting the canon. The journal Science now bans generated text outright; Nature permits its use if declared but forbids crediting it as co-author.  

Then again, most people don’t consult high-end journals to guide their scientific thinking. Should the devious be so inclined, these chatbots can spew an on-demand stream of citation-heavy pseudoscience on why vaccination doesn’t work, or why global warming is a hoax. That misleading material, posted online, can then be swallowed by future generative AI to produce a new iteration of falsehoods that further pollutes public discourse.

The merchants of doubt must be rubbing their hands.
