As part of the annual Internet Governance Forum (IGF) 2024, held in Riyadh from December 15 to 19, Kaspersky presented its guidelines for the secure development and deployment of artificial intelligence (AI) systems. The document aims to help organizations avoid risks associated with the adoption of AI technologies by providing cybersecurity requirements that should be considered when implementing these systems. The guidelines address the pressing need for robust security frameworks as AI becomes integral to industries worldwide.
The "Guidelines for Secure Development and Deployment of AI Systems" were presented on December 18th at the IGF workshop titled “Cybersecurity in AI: Balancing Innovation and Risks”. Kaspersky representatives arranged a panel of experts to explore how innovation in AI can be harmonized with effective risk and cybersecurity management. The document was developed in collaboration with leading academic experts, to address the increasing complexity of cybersecurity challenges associated with AI-enabled systems.
The document is a critical resource for developers, administrators, and AI DevOps teams, and provides detailed, practical advice to address technical gaps and operational risks. The guidelines are particularly crucial for organizations relying on third-party AI models and cloud-based systems, where vulnerabilities can lead to significant data breaches and reputational damage.
By embedding security-by-design principles, the guidelines help organizations align AI deployment with frameworks such as ESG and with international compliance requirements. The paper addresses key aspects of developing, deploying, and operating AI systems, including design, security best practices, and integration, without focusing on foundational model development.
Kaspersky’s guidelines emphasize the following principles to enhance the security of AI systems:
1. Cybersecurity Awareness and Training:
- Kaspersky highlights the importance of leadership support and specialized employee training. Employees must be aware of the methods used by malicious actors to exploit AI services. Regular updates to training programs ensure alignment with evolving threats.
2. Threat Modeling and Risk Assessment:
- The guidelines stress the need to proactively identify and mitigate risks through threat modeling, which helps pinpoint vulnerabilities early in AI development. Kaspersky suggests using established risk assessment methodologies (e.g., STRIDE, OWASP) to evaluate AI-specific threats, including model misuse, data poisoning, and system weaknesses.
3. Infrastructure Security (Cloud):
- AI systems, often deployed in cloud environments, require stringent protections such as encryption, network segmentation, and two-factor authentication. Kaspersky emphasizes zero-trust principles, secure communication channels, and regular infrastructure patching to guard against breaches (see the access-control sketch after this list).
4. Supply Chain and Data Security:
- Kaspersky underscores the risks posed by third-party AI components and models, including data leaks and the resale of harvested information. In this regard, the privacy policies and security practices of third-party services, together with safeguards such as the safetensors format and security audits, must be strictly applied (see the safetensors sketch after this list).
5. Testing and Validation:
- Continuous validation of AI models ensures reliability. Kaspersky promotes performance monitoring and vulnerability reporting to detect issues caused by input data drift or adversarial attacks. Properly partitioning datasets and assessing model decision-making logic are essential to mitigate risks (see the drift-check sketch after this list).
6. Defense from ML-Specific Attacks:
- The guidelines stress the need to protect AI components against ML-specific attacks, including adversarial inputs, data poisoning, and prompt injection. Measures such as incorporating adversarial examples into the training dataset, deploying anomaly detection systems, and applying distillation techniques improve model robustness against manipulation (see the adversarial-training sketch after this list).
7. Regular Security Updates and Maintenance:
- Kaspersky emphasizes frequent patching of AI libraries and frameworks to address emerging vulnerabilities. Participation in bug bounty programs and lifecycle management for cloud-based AI models further enhance system resilience.
8. Compliance with International Standards:
- Adherence to global regulations (e.g., GDPR, the EU AI Act) and best practices, along with regular audits of AI systems for legal compliance, helps organizations meet ethical and data privacy requirements, fostering trust and transparency.
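To illustrate the access-control side of point 3, here is a minimal sketch of an authenticated inference endpoint. It assumes a FastAPI service running behind a TLS-terminating proxy; the /predict route, X-API-Token header, and MODEL_API_TOKEN variable are illustrative assumptions, not part of Kaspersky's document.

```python
# Minimal sketch: a model-serving endpoint that rejects unauthenticated calls.
# The route, header name, and environment variable are hypothetical; a real
# deployment would also enforce TLS, network segmentation, and stronger auth.
import hmac
import os

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
API_TOKEN = os.environ.get("MODEL_API_TOKEN", "")

@app.post("/predict")
def predict(payload: dict, x_api_token: str = Header(default="")):
    # Constant-time comparison avoids leaking token contents via timing.
    if not API_TOKEN or not hmac.compare_digest(x_api_token, API_TOKEN):
        raise HTTPException(status_code=401, detail="unauthorized")
    # A real service would run the model here; this stub only echoes keys.
    return {"prediction": None, "received_keys": sorted(payload)}
```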
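Point 4 names safetensors explicitly; the sketch below shows why the format is preferred for third-party weights. Unlike pickle-based checkpoints, a .safetensors file holds only raw tensor data and cannot execute code on load. The model class and file name are hypothetical.

```python
# Minimal sketch: exchanging model weights via the safetensors format.
# SmallClassifier and "model.safetensors" are hypothetical examples.
import torch
from safetensors.torch import load_file, save_file

class SmallClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc(x)

model = SmallClassifier()

# Save weights as plain tensor data, with no pickled Python objects.
save_file(model.state_dict(), "model.safetensors")

# Loading only deserializes tensors; it cannot run arbitrary code the way
# torch.load() on an untrusted pickle-based checkpoint can.
model.load_state_dict(load_file("model.safetensors"))
```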
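For point 5, one common way to partition datasets and watch for input drift is a held-out validation split plus a per-feature statistical test. The sketch below uses scikit-learn and a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 threshold are assumptions for illustration.

```python
# Minimal sketch: dataset partitioning and a simple input-drift check.
# The synthetic data and the 0.05 significance threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))            # training-time feature matrix
y = (X[:, 0] > 0).astype(int)

# Hold out a validation split so quality is measured on unseen data.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Simulate production traffic where one feature's distribution has shifted.
X_prod = rng.normal(size=(500, 4))
X_prod[:, 0] += 0.8

# A small p-value for a feature suggests production inputs no longer match
# the training distribution and the model should be re-validated.
for i in range(X_train.shape[1]):
    stat, p_value = ks_2samp(X_train[:, i], X_prod[:, i])
    if p_value < 0.05:
        print(f"feature {i}: possible drift (KS={stat:.3f}, p={p_value:.4f})")
```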
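Point 6 recommends folding adversarial examples into training. A minimal PyTorch sketch of that idea using the fast gradient sign method (FGSM) follows; the toy network, data, and epsilon value are illustrative assumptions, not Kaspersky's reference implementation.

```python
# Minimal sketch: augmenting training with FGSM adversarial examples.
# The toy network, synthetic data, and epsilon=0.1 are assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, epsilon=0.1):
    """Perturb inputs in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(64, 4)                    # toy batch
y = (x[:, 0] > 0).long()

for _ in range(100):
    x_adv = fgsm(x, y)                    # craft adversarial variants
    optimizer.zero_grad()
    # Train on clean and adversarial inputs so the model learns to resist
    # small, worst-case perturbations of its inputs.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
```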
The guidelines underline the importance of implementing AI systems responsibly to avoid significant cybersecurity risks, making them a valuable reference for businesses and governments alike.
Yuliya Shlychkova, Vice President of Public Affairs at Kaspersky, shares: “With the growing adoption of AI, ensuring its security is not optional but essential. At IGF 2024, we’re contributing to a multi-stakeholder dialogue to define standards that will safeguard innovation and help fight emerging cyberthreats”.
As AI continues to be integrated into critical sectors such as healthcare, finance, and government, Kaspersky's "Guidelines for Secure Development and Deployment of AI Systems" provide the foundation for safe and ethical AI usage.
You can learn more about Kaspersky’s guidelines for secure AI development on the company’s website.