The stakes for things going wrong with AI are incredibly high. Only 29% of organizations feel fully equipped to detect and prevent tampering with AI (1). Emerging AI risks arise at different stages of the AI lifecycle, and responsibility lies with different owners, including developers, end users, and vendors.
As AI becomes ubiquitous, businesses will use and develop hundreds, if not thousands, of AI applications. Developers need AI security and safety guardrails that work for every application. At the same time, deployers and end users are rushing to adopt AI to increase productivity, which can expose their organizations to data breaches or proprietary data poisoning. This adds to the growing risks as organizations move from public data to training models on their proprietary data.
So how can we ensure the safety of AI systems? How do we protect AI from unauthorized access and misuse, or prevent data leakage? Ensuring the safe and ethical use of artificial intelligence has become a critical priority. The European Union has taken significant steps in this direction by introducing the EU AI Act.
This blog looks at how the AI Act addresses the security of AI systems and models, the importance of AI literacy among employees, and Cisco’s approach to protecting AI through a holistic AI Defense vision.
The EU AI Act: A Framework for Safe AI
The EU AI Act is a landmark EU effort to create a structured approach to AI governance. One of its components is an emphasis on cybersecurity requirements for high-risk AI systems, including mandating strong security protocols to prevent unauthorized access and misuse and to ensure that AI systems operate safely and predictably.
The law supports human oversight and recognizes that while artificial intelligence can increase efficiency, human judgment remains indispensable in preventing and mitigating risk. It also recognizes the important role of all employees in ensuring security and requires both providers and deployers to take steps to ensure their employees have sufficient levels of AI literacy.
Identifying and clarifying roles and responsibilities in securing AI systems is complex. The AI Act primarily targets developers of AI systems and certain providers of general-purpose AI models, although it rightly recognizes the shared responsibility between developers and deployers, underscoring the complex nature of the AI value chain.
Cisco’s vision for AI security
In response to the growing need for AI security, Cisco envisions a comprehensive approach to protecting the development, deployment, and use of AI applications. This vision builds on five key aspects of AI security, spanning secure access to AI applications, detection of risks such as data leakage and sophisticated adversarial threats, and employee training.
“When implementing AI, organizations should not have to choose between speed and security. In a dynamic, fiercely competitive environment, Cisco is uncompromisingly reimagining security for the age of AI, protecting AI applications throughout their life cycle.”
- Automated vulnerability assessment: Using AI-driven techniques, organizations can automatically and continuously assess vulnerabilities in AI models and applications. This helps identify hundreds of potential security and safety risks and enables security teams to proactively address them.
- Security at runtime: Implementing safeguards during the operation of AI systems helps defend against evolving threats such as denial of service and leakage of sensitive data, and ensures the secure operation of these systems.
- User protection and data loss prevention: Organizations need tools to prevent data loss and monitor unsafe behavior. Companies must ensure that AI applications are used in accordance with internal policies and regulatory requirements.
- Managing Shadow AI: It is important to monitor and control unauthorized AI applications, known as shadow AI. Identifying third-party applications used by employees helps companies enforce policies that limit access to unauthorized tools, protect confidential information, and ensure compliance.
- Training of citizens and employees: Beyond the right technological solutions, employee AI literacy is also crucial for the safe and effective use of AI. Increasing AI literacy helps build a workforce capable of responsibly managing AI tools, understanding their limitations, and recognizing potential risks. This in turn helps organizations comply with regulatory requirements and fosters a culture of AI safety and ethical awareness.
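To make the data loss prevention idea above concrete, here is a minimal illustrative sketch (not Cisco AI Defense or any real product API) of scanning prompts sent to an AI application for patterns that suggest sensitive data. The pattern names and regular expressions are hypothetical examples, not an exhaustive or production-grade rule set.

```python
import re

# Hypothetical example patterns for sensitive data in prompts.
# A real DLP system would use far more robust detection.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Flag a prompt before it leaves the organization.
findings = scan_prompt(
    "Summarize: contact jane@example.com, card 4111 1111 1111 1111"
)
print(findings)  # ['credit_card', 'email']
```

In practice such a check would sit in a gateway or proxy between users and AI applications, so that flagged prompts can be blocked or redacted in line with internal policies.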
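Similarly, the shadow AI point above can be sketched as a simple check of outbound traffic against an approved list of AI services. This is an illustrative assumption of how such monitoring might work, not a description of any actual product; the domain lists and log format are made up for the example.

```python
# Hypothetical list of AI services the organization has approved.
APPROVED_AI_DOMAINS = {"api.openai.com"}

# Hypothetical catalog of domains known to belong to AI services.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_entries: list[dict]) -> list[str]:
    """Return AI-service domains seen in traffic logs but not approved."""
    flagged = set()
    for entry in log_entries:
        domain = entry["domain"]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.add(domain)
    return sorted(flagged)

logs = [
    {"user": "alice", "domain": "api.openai.com"},
    {"user": "bob", "domain": "api.anthropic.com"},
]
print(flag_shadow_ai(logs))  # ['api.anthropic.com']
```

Flagged domains would then feed policy enforcement: blocking the tool, notifying the user, or starting an approval review.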
“The EU AI Act emphasizes the importance of equipping employees with more than just technical skills. It is about implementing a holistic approach to AI literacy that also includes safety and ethical aspects. This helps ensure that users are better prepared to safely handle artificial intelligence and harness the potential of this revolutionary technology.”
This vision is embedded in the new Cisco technology solution “AI Defense”. Regulations such as the EU AI Act, training for citizens and employees, and innovations such as Cisco’s AI Defense all play an important role in the multifaceted effort to secure AI technologies.
As AI continues to transform every industry, these efforts are essential to ensure the safe, ethical and responsible use of AI, ultimately protecting both organizations and users in the digital age.
(1) Cisco’s 2024 AI Readiness Index