23/02/2024

Google CEO Sundar Pichai: AI Can "Disproportionately" Assist In Defending Against Cybersecurity Attacks
Google CEO Sundar Pichai suggests that rapid advances in artificial intelligence could bolster cyber defences against security attacks.
 
Amid mounting concern about the potentially sinister uses of AI, Pichai said the technology's capabilities could help governments and businesses speed up the identification of, and response to, threats from hostile actors.
 
“We are right to be worried about the impact on cybersecurity. But AI, I think actually, counterintuitively, strengthens our defense on cybersecurity,” Pichai told delegates at the Munich Security Conference at the end of last week.
 
Cyberattacks have grown more frequent and sophisticated as hostile actors increasingly use them to extort money and exert power.
 
Cybersecurity Ventures, a cyber research firm, estimates that the cost of cyberattacks to the global economy will rise to $10.5 trillion by 2025, from an expected $8 trillion in 2023.
 
According to a January assessment by Britain's National Cyber Security Centre, part of the GCHQ intelligence agency, AI is set to heighten those risks further by lowering the barriers to entry for cybercriminals and enabling an increase in malicious cyber activity, including ransomware attacks.
 
Pichai noted, though, that AI was also shortening the time defenders need to detect threats and respond. He said this would ease the so-called defender's dilemma, whereby hackers need to breach a system successfully only once, while defenders must succeed every time to keep it safe.
 
“AI disproportionately helps the people defending because you’re getting a tool which can impact it at scale versus the people who are trying to exploit,” he said.
 
“So, in some ways, we are winning the race,” he added.
 
Google last week announced a new initiative offering AI tools and infrastructure investments designed to improve online security. The company said in a statement that an accompanying white paper outlines research and protective measures and sets guardrails around artificial intelligence, while a free, open-source tool called Magika aims to help users identify malware, or malicious software.
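 
Magika is distributed through PyPI and offers both a command-line interface and a Python API. The snippet below is a minimal sketch of calling it from Python, assuming the 0.5-series API; attribute names may differ in newer releases, so treat it as illustrative rather than definitive:

    # Minimal sketch of Magika's Python API (pip install magika).
    # Attribute names follow the 0.5-series releases; newer versions may differ.
    from magika import Magika

    magika = Magika()  # loads the bundled content-type detection model

    # Classify an in-memory buffer by its content type.
    result = magika.identify_bytes(b"#!/bin/bash\necho hello\n")
    print(result.output.ct_label)  # e.g. "shell"
    print(result.output.score)     # model confidence, between 0 and 1

Magika's output is a content-type classification rather than a malware verdict; in a scanning pipeline, a mismatch between a file's claimed extension and its detected type is the kind of signal that feeds into malware triage.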
 
According to Pichai, the tools are already deployed in the company's products and internal systems, such as Gmail and Google Chrome.
 
“AI is at a definitive crossroads — one where policymakers, security professionals and civil society have the chance to finally tilt the cybersecurity balance from attackers to cyber defenders,” the company said.
 
The announcement coincided with the signing at the MSC of an agreement by major companies to take “reasonable precautions” to prevent AI tools from being used to disrupt elections in 2024 and beyond.
 
Signatories to the new agreement include Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X, formerly Twitter. The accord also provides a framework for how companies should respond to AI-generated “deepfakes” designed to deceive voters.
 
It comes at a time when the internet is an increasingly important sphere of influence for both private individuals and state-backed bad actors.
 
On Saturday, former US Secretary of State Hillary Clinton referred to the internet as "a new battlefield."
 
“The technology arms race has just gone up another notch with generative AI,” she said in Munich.
 
According to Microsoft research released last week, state-sponsored hackers from China, Iran and Russia have been using OpenAI's large language models (LLMs) to hone their ability to deceive targets.
 
The Chinese and North Korean governments, the Iranian Revolutionary Guard, and Russian military intelligence were all reported to have used the tools.
 
Bad actors are increasingly using WormGPT, a hacking tool modelled on ChatGPT, for tasks such as reverse-engineering code, according to Mark Hughes, head of security at DXC, an IT services and consulting firm, as reported by CNBC.
 
But he added that he was also seeing “significant gains” from similar tools that let engineers rapidly detect and reverse-engineer attacks.
 
“It gives us the ability to speed up,” Hughes said last week. “Most of the time in cyber, what you have is the time that the attackers have in advantage against you. That’s often the case in any conflict situation.
 
“If you can run a little bit faster than your adversary, you’re going to do better. That’s what AI is really giving us defensively at the moment,” he added.
 
(Source: www.cnbc.com)

Christopher J. Mitchell
