Sundar Pichai: AI can strengthen cyber defences, not just break them down

Private and public institutions must work together to harness the technology’s potential
The writer is chief executive of Google and Alphabet 
Last year saw rapid and significant technological change powered by progress in artificial intelligence. Millions of people are now using AI tools to learn new things, and to be more productive and creative. As progress continues, society will need to decide how best to harness AI’s enormous potential while addressing its risks. 
At Google, our approach is to be bold in our ambition for AI to benefit people, drive economic progress, advance science and address the most pressing societal challenges. And we’re committed to developing and deploying AI responsibly: the Gemini models we launched in December, which are our most capable yet, went through the most robust safety evaluations we’ve ever done. 
On Thursday, I visited the Institut Curie in Paris to discuss how our AI tools could help with their pioneering work on some of the most serious forms of cancer. On Friday, at the Munich Security Conference, I’ll join discussions about another important priority: AI’s impact on global and regional security. 
Leaders in Europe and elsewhere have expressed worries about the potential of AI to worsen cyber attacks. Those concerns are justified, but with the right foundations, AI has the potential over time to strengthen rather than weaken the world’s cyber defences.
Harnessing AI could reverse the so-called defender’s dilemma in cyber security, according to which defenders need to get it right 100 per cent of the time, while attackers need to succeed only once. With cyber attacks now a tool of choice for actors seeking to destabilise economies and democracies, the stakes are higher than ever. Fundamentally, we need to guard against a future where attackers can innovate using AI and defenders can’t.
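To see why that asymmetry is so punishing, consider a rough, purely illustrative calculation (the 99 per cent block rate and the attempt counts below are hypothetical, not figures from Google): a defence that stops almost every individual attack is still likely to be breached eventually if attackers can keep trying.

```python
# Purely illustrative numbers: hypothetical, not Google figures.
# If a defence blocks each independent attack attempt with probability p,
# the chance that at least one of n attempts gets through is 1 - p**n.

def breach_probability(block_rate: float, attempts: int) -> float:
    """Probability that at least one attack slips past the defence."""
    return 1 - block_rate ** attempts

for attempts in (10, 100, 300):
    p = breach_probability(0.99, attempts)
    print(f"{attempts:>3} attempts vs a 99% block rate -> "
          f"{p:.0%} chance of at least one successful attack")
```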
To empower defenders, we began embedding researchers and AI approaches in Google cyber security teams more than a decade ago. More recently, we’ve developed a specialised large language model fine-tuned for security and threat intelligence. 
We’re seeing the ways AI can bolster cyber defences. Some of our tools are already up to 70 per cent better at detecting malicious scripts and up to 300 per cent more effective at identifying files that exploit vulnerabilities. And AI learns quickly, helping defenders adapt to financial crime, espionage or phishing attacks like the ones that recently hit the US, France and other places. 
That speed is helping our own detection and response teams, which have seen time savings of 51 per cent and have achieved higher-quality results using generative AI. Our Chrome browser examines billions of URLs against millions of known malicious web resources, and sends more than 3mn warnings per day, protecting billions of users. 
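To give a flavour of what that kind of URL screening involves, here is a toy sketch. It is not Google Safe Browsing’s actual protocol: the hash-prefix matching and the KNOWN_BAD_PREFIXES set are hypothetical stand-ins, and a real system would canonicalise each URL and confirm any partial match with a server before warning the user.

```python
# Toy sketch of blocklist-style URL screening; NOT the real Safe Browsing
# protocol. The blocklist and the 4-byte prefix scheme are hypothetical.
import hashlib

# Hypothetical local set of SHA-256 prefixes of known-malicious URLs.
KNOWN_BAD_PREFIXES = {
    hashlib.sha256(b"http://malicious.example/payload").digest()[:4],
}

def looks_malicious(url: str) -> bool:
    """Flag a URL whose hash prefix matches the local blocklist."""
    prefix = hashlib.sha256(url.encode("utf-8")).digest()[:4]
    return prefix in KNOWN_BAD_PREFIXES

print(looks_malicious("http://malicious.example/payload"))  # True
print(looks_malicious("https://www.ft.com/"))               # False
```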
Empowering defenders also means making sure AI systems are secure by default, with privacy protections built in. This technical progress will continue. But capturing the full opportunity of AI-powered security goes beyond the technology itself. I see three key areas where private and public institutions can work together.
First, regulation and policy. I said last year that AI is too important not to regulate well. Europe’s AI Act is an important development in balancing innovation and risk. As others debate this question, it’s critical that the governance decisions we make today don’t tip the balance in the wrong direction. 
Policy initiatives can bolster our collective security — for example, by encouraging the pooling of data sets to improve models, or exploring ways to bring AI defences into critical infrastructure sectors. Diversifying public sector technology procurement could help institutions avoid the risks of relying on a single legacy supplier.
Second, AI and skills training, to ensure people have the digital literacy needed to defend against cyber threats. To help, we’ve launched an AI Opportunity Initiative for Europe to provide a range of foundational and advanced AI training. We’re also supporting innovative start-ups, like the Ukrainian-led company LetsData, which provides a real-time “AI radar” against disinformation in more than 50 countries. 
Third, we need deeper partnership among businesses, governments, and academic and security experts. Our Málaga safety engineering centre is focused on cross collaboration that raises security standards for everyone. At the same time, global forums and systems — like the Frontier Model Forum and our Secure AI Framework — will play an important role in sharing new approaches that work. 
Protecting people on an open, global web is an urgent example of why we need a bold and responsible approach to AI. It’s not the only one. Helping researchers identify new medicines for diseases, improving alerts in times of natural disasters, or opening up new opportunities for economic growth are all just as urgent, and will benefit from AI being developed responsibly. Progress in all of these areas will benefit Europe, and the world.