
AI Adoption Highlights the Need for Businesses to Invest in Proactive Cybersecurity Measures

By C. Perera, JadeTimes News

 
Image Source: Jawwad Rizwi

The rapid adoption of artificial intelligence (AI) has fueled business growth but has also created new opportunities for cybercriminals to exploit the technology for more sophisticated attacks, according to Kaspersky. The global cybersecurity and digital privacy firm emphasized that businesses must invest in proactive cybersecurity measures to address these emerging threats.


Kaspersky has integrated AI across its products and is leveraging AI models to combat threats, thereby making its technologies more resistant to evolving cyberattacks. The company highlighted various ways in which cybercriminals are misusing AI, such as using tools like ChatGPT to write malicious software, automate attacks, and track smartphone inputs, potentially compromising sensitive information like messages, passwords, and bank codes.


In 2023, Kaspersky's solutions protected 220,000 businesses worldwide and prevented approximately 6.1 billion attacks. The company also reported protecting 325,000 unique users from financial theft attempted via banking trojans. In 2024, Kaspersky has been detecting an average of more than 411,000 malicious samples per day, up from roughly 403,000 the previous year.


Vitaly Kamluk, a cybersecurity expert on Kaspersky's Global Research & Analysis Team, pointed out that the volume of cyberattacks today cannot be managed by human analysts alone, as attackers increasingly rely on automation and AI.


A recent Kaspersky study on AI's role in password cracking found that while most passwords are stored as cryptographic hashes rather than plain text, many can still be recovered using brute-force algorithms and modern GPUs. The study revealed that 32% of user passwords are vulnerable to such attacks within an hour.
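
As a rough illustration of why the weakest passwords fall so quickly, the minimal Python sketch below (the password, character set, and function names are hypothetical, chosen only for this example) hashes every short lowercase candidate with SHA-256 and compares it against a leaked hash. Real attackers run the same loop on GPU clusters at billions of guesses per second, which is what puts a large share of real-world passwords within an hour's reach.

# Illustrative sketch only: a dictionary-free brute-force attack against a
# weakly chosen password, given only its hash. Real attacks use GPUs and
# far larger character sets; values here are toy examples.
import hashlib
import itertools
import string

def sha256_hex(candidate: str) -> str:
    # Hash a candidate the way a leaked credential database might store it.
    return hashlib.sha256(candidate.encode("utf-8")).hexdigest()

# The attacker only has this hash of a weak 4-character lowercase password.
leaked_hash = sha256_hex("cats")

def brute_force(target_hash: str, max_len: int = 4):
    # Try every lowercase combination up to max_len and compare hashes.
    alphabet = string.ascii_lowercase
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if sha256_hex(guess) == target_hash:
                return guess
    return None

print(brute_force(leaked_hash))  # -> "cats", recovered in about a second of CPU time

The search space grows exponentially with password length and character variety, which is why longer and more varied passwords resist this kind of exhaustive guessing far better than short ones.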


Threat actors are also utilizing large language models such as GPT-4 to generate sophisticated phishing messages that cross language barriers, personalize emails, and mimic specific writing styles, making these attacks harder to detect. The aggressive adoption of generative AI by organizations has expanded the attack surface, and cybercriminals are developing increasingly sophisticated methods to exploit it.


The rise of AI has also introduced challenges such as deepfakes, which are increasingly used in celebrity impersonation scams and other fraudulent activities. Detecting deepfakes remains a significant technical challenge, though experts believe that future research may lead to solutions.


In response to the growing cyber threats, organizations are advised to pursue cyber resilience: keeping systems operational around the clock while responding effectively when security breaches occur. At the same time, attackers are using AI to create new malware variations that are harder to detect, underscoring the need for stronger defensive AI.


Kaspersky reported that more than 99% of the malware it detected in 2023 was caught by automated systems. As the contest between malware detection and evasion continues, cybersecurity experts stress the importance of swiftly establishing regulations and ethical standards for AI and generative AI.


Ethan Seow, co-founder of C4AIL, noted that regulations are struggling to keep up with the rapid advancements in AI. Vitaly Kamluk added that ethical education is crucial, particularly for the younger generation, as AI continues to evolve and expand its potential.

