Enterprises see both benefits and risks in using AI for cybersecurity


Research sponsored by Google Cloud and the Cloud Security Alliance uncovered mixed feelings about AI and security: most see AI as promising, but many worry about data quality and the potential for misleading output.

Artificial Intelligence (AI) dominates tech headlines as organisations look to tap into the technology to improve efficiencies and support key areas like customer service and cybersecurity.

For years, enterprises have been applying machine learning (ML) to establish a baseline of a normal operating environment and then distinguish malicious activity from harmless anomalies. Now, with the growing prominence of generative AI, there is more discussion of how to use the technology to improve threat intelligence, workflows, and incident response. At the same time, there is some nervousness about the integrity of AI output, as well as questions about bad actors using AI as an offensive weapon.
AI and cybersecurity survey

The Cloud Security Alliance and Google Cloud surveyed 2,486 IT professionals to gauge sentiment about using AI in cybersecurity. Most (63%) think AI will help them improve threat detection and mitigation. That said, there is a degree of caution and concern centred on AI technology falling into the wrong hands. Some 31% see AI as helping enterprises and the cybercriminals looking to breach them equally, and another 25% think bad actors will benefit more from AI than the organisations they attack.

Top AI concerns include data quality challenges (38%), which could undermine the accuracy of AI output. Nearly one-third (30%) said they expect AI to enrich their cybersecurity skill set, around 28% look forward to AI providing additional support in their role, and 24% see AI as a way to automate some of their job functions. Only 12% worry that AI will replace workers. Half are concerned about becoming overly dependent on AI, stressing the need to balance technology with human expertise.
Lack of balance in knowledge

AI knowledge is unevenly distributed across the workforce. While 52% of C-suite executives claim deep familiarity with AI, only 11% of general staff can say the same. Even so, most (74%) trust their leadership to make constructive AI decisions.

AI mandates clearly originate at the top: 82% say AI projects are driven at the executive level.

Source: verdict.co.uk
