As AI continues to expand, especially in the MSME segment, where users lack the resources to counter threats effectively, concerns about privacy and data protection are being raised ever more loudly.
Some important aspects of concern are:
- Sensitive data breaches
- Bias in training data
- Surveillance: AI tools such as facial recognition and location tracking can be used to monitor individuals, and cybercriminals can misuse this information to target them.
- Misuse of AI models by criminals
Adding to this, Pinkesh Kotecha, CMD, Ishan Technologies, stated, “India has emerged as one of the top three most attacked countries by nation-state actors in the Asia-Pacific region, accounting for a staggering 13% of all cyberattacks. The landscape of cybersecurity is rapidly evolving and the threat has taken a worrying turn. Cybercriminals are now leveraging AI tools to launch more sophisticated and targeted attacks.”
Addressing the Concerns
Kotecha added, “To address the increasing attacks, regular security audits are crucial, allowing companies to proactively identify and address vulnerabilities in AI models and algorithms. Leveraging AI for threat detection and prevention, while also monitoring for phishing and business email compromise, enables us to stay ahead of emerging threats and safeguard our operations and data. Additionally, collaborating with ICT service providers can further enhance cybersecurity, especially for organisations with limited expertise or resources. These providers offer advanced resources and tools, including SASE, EDR, SIEM and IAM, to bolster cybersecurity defences against sophisticated threats, allowing organisations to focus on critical business decisions without compromising on IT security.”
“One of the first steps to prepare for AI-driven cyberattacks is to understand the unique risks and vulnerabilities associated with AI technologies. Implementing AI security best practices is crucial for protecting against AI-driven cyberattacks. This includes regular security assessments to identify and address vulnerabilities in AI systems, and regularly training security teams with simulated attack scenarios and tabletop exercises to ensure readiness in the event of a real attack. Monitoring AI systems for unusual or suspicious behaviour is critical for detecting and mitigating AI-driven cyberattacks. Implementing monitoring tools and processes can help organisations identify potential threats early, allowing for a timely response. At the same time, developing and testing incident response plans specifically tailored to AI-driven cyberattacks is essential. The plan should outline procedures for containing and mitigating the impact of such attacks, as well as for communicating with stakeholders and coordinating with external security experts if necessary,” commented Amit Singh, MD, Asia-Pacific and Japan, Terraeagle.
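Singh’s point about monitoring AI systems for unusual behaviour can be made concrete with a short sketch. The example below is a minimal illustration, not anything from Terraeagle: it uses scikit-learn’s IsolationForest to flag anomalous traffic hitting an AI-facing API, and the feature names, traffic values and thresholds are all assumptions invented for this example.

```python
# Minimal sketch of behavioural monitoring for an AI-facing service.
# All feature names and numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-client request features:
# [requests per minute, average payload size (KB), fraction of malformed prompts]
normal_traffic = rng.normal(loc=[60, 4.0, 0.01],
                            scale=[10, 1.0, 0.005],
                            size=(500, 3))

# Fit an unsupervised anomaly detector on traffic assumed to be clean.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of high-volume, malformed requests (e.g. automated prompt probing)
# should be flagged for investigation.
suspicious = np.array([[900.0, 1.0, 0.40]])
print(detector.predict(suspicious))        # -1 means flagged as anomalous
print(detector.score_samples(suspicious))  # lower score = more anomalous
```

In practice such a detector would feed an alerting pipeline and be retrained as legitimate traffic patterns drift, which is where the regular assessments Singh mentions come in.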
“Organisations, first and foremost, should establish a comprehensive set of policies, guidelines and best practices that govern the development and deployment of AI systems. AI security compliance programmes should be created to significantly reduce the risk of attacks on AI systems and to mitigate the impact of security incidents. Highly diverse and representative datasets can be leveraged to establish the integrity of training data and mitigate bias. Human oversight in decision-making processes can effectively stop the exploitation of AI systems. It is extremely important to build a multi-layered security approach, from intrusion-detection systems to user training, to protect the organisation’s infrastructure, operations and services. Collective defence, where industry cooperation and information sharing play key roles, helps establish a collaborative defence ecosystem. This also includes sharing threat intelligence with peers as well as partners from the industry. AI models should be trained using adversarial techniques to defend against potential attacks,” emphasised Bhattacharyya.
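Bhattacharyya’s last point, training models with adversarial techniques, has a well-known textbook form. The sketch below is a minimal PyTorch illustration of fast gradient sign method (FGSM) adversarial training, one common instance of the approach he describes; the model, data and perturbation budget are placeholders invented for this example, not his or any vendor’s recipe.

```python
# Minimal sketch of FGSM-style adversarial training.
# Model, data and epsilon are placeholder assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # assumed perturbation budget

# Dummy batch standing in for real training data.
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))

for _ in range(5):
    # 1. Craft adversarial examples: perturb inputs in the direction
    #    that most increases the loss (fast gradient sign method).
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 2. Train on clean and perturbed inputs together, so the model
    #    learns to resist small, deliberately crafted manipulations.
    optimizer.zero_grad()
    (loss_fn(model(x), y) + loss_fn(model(x_adv), y)).backward()
    optimizer.step()
```

The same idea scales to production models; the trade-off is extra training cost in exchange for robustness against the evasion attacks the article warns about.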
Conclusion
By understanding the unique risks of AI and implementing best practices, organisations can build resilience against AI-driven cybercrime. This includes regular security assessments, training security teams, monitoring AI systems for suspicious behaviour and developing incident response plans. Ultimately, a comprehensive approach encompassing policies, diverse training data, human oversight and collective defence strategies is necessary to harness the power of AI for a secure digital future.
--Swaminathan B is a guest columnist with DQ Channels