Businesses using AI without proper controls in place, coupled with a rising number of cyberattacks, means Australian organisations are facing a confluence of cybersecurity challenges.
The fifth edition of HLB’s Cybersecurity Report provides a snapshot of the current cyberthreat landscape and highlights actions leaders have taken since 2020 to become more cyber-resilient. HLB International surveyed over 600 senior IT professionals globally in September 2024 via an online questionnaire about the main cybersecurity threats of today, their progress in implementing cyber strategies and the dual role of AI.
Ninety-two per cent of businesses surveyed have observed ongoing cyberattacks. Despite these threats, some organisations still overlook basic security measures, leaving themselves vulnerable to breaches that can compromise their business operations.
Another concern raised in the report is the increasing use of AI without adequate defence systems. More than 28 per cent of organisations surveyed were either using or planning to use AI without adequate security controls, a critical gap in cybersecurity governance.
The consequences of neglecting AI governance could be severe. One key concern is the potential for AI to be weaponised, a risk heightened by its scalability and autonomous operation, and one that poses a significant threat to data security.
This not only exposes businesses to potential vulnerabilities but also highlights the urgent need for comprehensive AI governance frameworks. Companies should prioritise putting robust security measures in place alongside their AI initiatives to safeguard against emerging cybersecurity threats.
The increasing incidence and sophistication of cyberthreats are evident in the latest HLB report, with 39 per cent of companies reporting a rise in attacks and 86 per cent of surveyed professionals expressing heightened concerns over cybersecurity threats.
The survey also revealed that 29 per cent of respondents reported more severe consequences from cyberattacks in the last 12 months. It is therefore important for businesses to establish controls and oversight mechanisms to ensure AI technology is used ethically and securely.
Businesses should invest in regular audits and risk assessments, identifying potential vulnerabilities before cybercriminals can exploit them. Organisations must also focus on integrating AI with existing cybersecurity measures to detect and prevent AI-driven attacks more effectively.
Additionally, businesses should develop robust recovery strategies to manage a potential cyberattack. As part of the overall security posture, it is important to understand third-party vendor risks, such as email exploitation, which organisations often overlook and which can have critical impacts on business operations and reputation. In particular, smaller vendors may lack robust security measures, making them attractive targets for cybercriminals.
In the wake of a series of major nationwide outages throughout 2024, bolstering cybersecurity defences remains a strategic imperative for businesses as they face an increasing number of sophisticated attacks.
Other key findings from the report include:
- Despite rapid AI advancements, only 29 per cent of companies have implemented additional security and governance controls related to AI
- 64 per cent of businesses consider cybersecurity a major strategic priority
- 24 per cent of organisations now run ongoing cybersecurity awareness training programs, up from 20 per cent in 2023
- 47 per cent of respondents identified email exploitation as a major threat
- 30 per cent of businesses only engage in staff cybersecurity audits annually or post-incident
- 40 per cent of organisations conduct cybersecurity training quarterly or bi-annually
- 80 per cent of companies have incident response plans in place.