Sundar Pichai: AI can strengthen cyber defences, not just break them down - FT中文網
Private and public institutions must work together to harness the technology’s potential

The writer is chief executive of Google and Alphabet 

Last year saw rapid and significant technological change powered by progress in artificial intelligence. Millions of people are now using AI tools to learn new things, and to be more productive and creative. As progress continues, society will need to decide how best to harness AI’s enormous potential while addressing its risks. 

At Google, our approach is to be bold in our ambition for AI to benefit people, drive economic progress, advance science and address the most pressing societal challenges. And we’re committed to developing and deploying AI responsibly: the Gemini models we launched in December, which are our most capable yet, went through the most robust safety evaluations we’ve ever done. 

On Thursday, I visited the Institut Curie in Paris to discuss how our AI tools could help with their pioneering work on some of the most serious forms of cancer. On Friday, at the Munich Security Conference, I’ll join discussions about another important priority: AI’s impact on global and regional security. 

Leaders in Europe and elsewhere have expressed worries about the potential of AI to worsen cyber attacks. Those concerns are justified, but with the right foundations, AI has the potential over time to strengthen rather than weaken the world’s cyber defences.

Harnessing AI could reverse the so-called defender’s dilemma in cyber security, according to which defenders need to get it right 100 per cent of the time, while attackers need to succeed only once. With cyber attacks now a tool of choice for actors seeking to destabilise economies and democracies, the stakes are higher than ever. Fundamentally, we need to guard against a future where attackers can innovate using AI and defenders can’t.

To empower defenders, we began embedding researchers and AI approaches in Google cyber security teams more than a decade ago. More recently, we’ve developed a specialised large language model fine-tuned for security and threat intelligence. 

We’re seeing the ways AI can bolster cyber defences. Some of our tools are already up to 70 per cent better at detecting malicious scripts and up to 300 per cent more effective at identifying files that exploit vulnerabilities. And AI learns quickly, helping defenders adapt to financial crime, espionage or phishing attacks like the ones that recently hit the US, France and other places. 

That speed is helping our own detection and response teams, which have seen time savings of 51 per cent and have achieved higher-quality results using generative AI. Our Chrome browser examines billions of URLs against millions of known malicious web resources, and sends more than 3mn warnings per day, protecting billions of users. 

Empowering defenders also means making sure AI systems are secure by default, with privacy protections built in. This technical progress will continue. But capturing the full opportunity of AI-powered security goes beyond the technology itself. I see three key areas where private and public institutions can work together.

First, regulation and policy. I said last year that AI is too important not to regulate well. Europe’s AI Act is an important development in balancing innovation and risk. As others debate this question, it’s critical that the governance decisions we make today don’t tip the balance in the wrong direction. 

Policy initiatives can bolster our collective security — for example, by encouraging the pooling of data sets to improve models, or exploring ways to bring AI defences into critical infrastructure sectors. Diversifying public sector technology procurement could help institutions avoid the risks of relying on a single legacy supplier.

Second, AI and skills training, to ensure people have the digital literacy needed to defend against cyber threats. To help, we’ve launched an AI Opportunity Initiative for Europe to provide a range of foundational and advanced AI training. We’re also supporting innovative start-ups, like the Ukrainian-led company LetsData, which provides a real-time “AI radar” against disinformation in more than 50 countries. 

Third, we need deeper partnership among businesses, governments, and academic and security experts. Our Málaga safety engineering centre is focused on cross collaboration that raises security standards for everyone. At the same time, global forums and systems — like the Frontier Model Forum and our Secure AI Framework — will play an important role in sharing new approaches that work. 

Protecting people on an open, global web is an urgent example of why we need a bold and responsible approach to AI. It’s not the only one. Helping researchers identify new medicines for diseases, improving alerts in times of natural disasters, or opening up new opportunities for economic growth are all just as urgent, and will benefit from AI being developed responsibly. Progress in all of these areas will benefit Europe, and the world.

Copyright notice: The copyright of this article belongs to FT中文網. Without permission, no organisation or individual may reproduce, copy, or otherwise use all or part of this article; infringement will be pursued.
