Introduction
For the last several decades, fictional depictions of machine uprisings and nefarious artificial intelligence systems have painted a scary picture of the future of AI. Terminators running around stealing leather jackets and attacking people? Talk about a total lack of AI security and regulation.
Today we’re taking a more proactive approach to AI governance and management, swapping the T-800 for ISO 42001 — the world’s first AI management system standard.
While we’re still a long way from sentient AI and time-traveling cyborg assassins (hopefully), the rise of Siri and Alexa, self-driving cars, and generative AI models such as ChatGPT have brought new ethical and safety concerns to the surface.
As AI capabilities continue to grow, organizations must ensure responsible, sustainable implementation of technologies that are now deeply ingrained in our everyday lives. But walking the line between innovation and governance requires careful balance. How do we account for privacy, security, and other risks associated with AI systems — without stifling progress?
In a recent conversation with Apptega, Matt Malone, director at Vistrada, and Sean Austin, CEO of generative AI company Markets EQ, discussed why emerging standards such as ISO 42001 are essential for AI companies aiming to establish long-term growth and credibility.
Here’s what we learned about the evolving regulatory landscape, the importance of proactive data protection in AI development, and the business case for AI compliance.
The Evolving AI Regulatory Landscape
Recent allegations of data leaks at OpenAI, maker of ChatGPT, have raised questions about whether AI companies have the right security measures and standards in place to safeguard private information.

Security researchers recently uncovered a simple way to expose potentially sensitive user files and system prompts from the company’s custom GPT offerings. In another widely reported study, researchers at Google DeepMind showed they could trick ChatGPT into disclosing sensitive training data, some of which included phone numbers, addresses, and other private user information.
These are just a sample of recent ChatGPT headlines, which have also included claims of compromised accounts, leaked conversations, and copyright infringement. And OpenAI isn’t the only company facing questions and lawsuits over its use of AI technologies.

“Organizations are trying to figure out how to do AI ethically, securely, and gain user trust,” Malone said. “They’re trying to find guidance and uniformity. I think that’s where something like ISO 42001 helps bring an understanding of the risks.”
Published in 2023, ISO 42001 provides global guidance for establishing, implementing, maintaining, and continually improving an AI management system within an organization. It’s expected to play a critical role in building confidence and trust in AI, ensuring responsible development and ethical use of the technology without imposing restrictive barriers to growth.
ISO 42001 was created in response to new AI laws and regulations being enacted worldwide. The European Union is leading the charge with the AI Act, the first comprehensive law establishing clear requirements for organizations using or developing AI technologies.
The AI Act places restrictions on AI systems based on the level of risk and impact they present, particularly to privacy and personal data. Unacceptable-risk technologies include those used for emotion recognition, biometric identification, social scoring, and behavioral manipulation. These are either banned outright under the new rules or reserved for law enforcement use in limited circumstances.
Several countries around the world are following the EU’s lead. In the United States, 16 states had already enacted some form of AI legislation as of January 2024, with another 14 and the District of Columbia proposing new legislation. At the federal level, the proposed American Data Privacy and Protection Act, if passed, would set rules for the development and use of AI technologies.
Privacy by Design: Data Protection in AI Development
Let’s define AI within the context of this conversation, as it can mean a lot of different things. At its most basic, AI is technology that enables machines and computer programs to simulate human intelligence, performing tasks that typically require some level of cognitive function. In short, it’s machines that think like humans.
As more specifically defined by ISO/IEC TR 24030:2021, AI is the “capability to acquire, process, create, and apply knowledge, held in the form of a model, to conduct one or more given tasks.”
These AI models raise several security concerns around the collection and retention of data. Where does the data flow? How is it being processed? Is it being used to train the AI models? What’s the retention policy?
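One practical way to keep those answers on hand is a simple data-flow inventory. Here’s a minimal sketch in Python; the `DataFlow` record, its fields, and the retention threshold are our own illustrative choices, not requirements from ISO 42001:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One record per place data enters, moves through, or leaves an AI system."""
    source: str              # where the data originates
    destination: str         # where it flows next (internal service, vendor API, etc.)
    processing: str          # what happens to it along the way
    used_for_training: bool  # is it fed back into model training?
    retention_days: int      # how long it is kept before deletion

# Example entries for a hypothetical customer-support chatbot
flows = [
    DataFlow("web chat widget", "hosted LLM API", "prompt completion", False, 30),
    DataFlow("chat transcripts", "analytics warehouse", "usage reporting", True, 365),
]

# A simple audit pass: flag flows that exceed policy or feed training pipelines
MAX_RETENTION_DAYS = 90
for flow in flows:
    if flow.retention_days > MAX_RETENTION_DAYS or flow.used_for_training:
        print(f"Review needed: {flow.source} -> {flow.destination}")
```

Even a toy inventory like this makes the retention and training questions answerable at a glance, and it can grow into a proper data map as your program matures.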
These questions apply not only to your internal data processes but also to those of your partners. Like eating a six-foot party sub, AI security is an end-to-end group effort. You must understand the partners through which your data flows and how that data is used as it traverses their systems. You’re responsible for vetting the security of their systems as well as securing your own.
Data protection is best accomplished at the beginning of technology development — a process known as Privacy by Design. Whether putting policies in place, securing buy-in from senior leadership, or performing risk assessments, AI privacy and security best practices are easier to implement at an earlier stage.
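As one concrete illustration of Privacy by Design, consider scrubbing obvious personal data before it ever reaches a model or a log file. The sketch below is deliberately simplistic; the regex patterns and the `redact` helper are hypothetical examples, not vetted PII detection:

```python
import re

# Naive patterns for demonstration only; real systems should use vetted PII detectors
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable personal data with placeholders before storage or model calls."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Call me at 555-123-4567 or email jane@example.com"))
# -> Call me at [PHONE REDACTED] or email [EMAIL REDACTED]
```

Building a step like this into the pipeline from day one is far easier than retrofitting it after user data has already accumulated.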
“Every AI project needs to be scoped, whether it’s a chatbot you’re bringing in or a service you’re providing,” Malone explained. “It needs to be scoped before you go off and implement it because something very benign can later become problematic. People tend to rush into AI. They want to implement things quickly, and that could be dangerous. You need to first make sure it’s secure and monitored.”
It takes time and resources to map out the potential security frameworks, investments, and partner risks that make up a robust cybersecurity and compliance program. The overall scope of work can be overwhelming for many organizations, especially smaller startups that must also balance priorities such as shipping and innovation.
If you take on too much and allocate all your resources to security, you run the risk of throwing off that balance. It’s a delicate tightrope to walk. Following ISO 42001 standards can ensure you're taking the right steps to protect your data and business.
A Security-First Approach to AI: The Business Case for ISO 42001
ISO 42001 provides a systematic, security-first approach to risk management that balances innovation and governance. It does this by offering practical guidance for effectively managing AI risks while also identifying opportunities for innovation within a framework built around existing organizational structures.
AI is a new industry with a lot of unknowns, and many businesses and consumers are cautious about its implementation. There’s heightened awareness of data transmission, retention, and privacy, and everyone is wary of leaks. ISO 42001 helps ease some of these concerns by prioritizing human well-being, safety, and user experience in AI design and development.
This security-first approach gives organizations the confidence to go to market with an AI solution (or to embed AI components in their other applications) while providing consumers peace of mind that legal and regulatory standards are met.
By helping organizations remain compliant with these standards, ISO 42001 builds greater confidence in AI management, enhancing organizational reputation and fostering trust with stakeholders.
“We’re very thoughtful about what security means in the generative AI world,” Austin said about his company’s AI compliance practices. “We’re investing resources into certifications such as ISO and SOC 2, and we have policies and standards in place for our cloud infrastructure. In terms of documents and collateral, we have technical overviews, internal policies, and anything else we can produce around our infrastructure.”
According to Malone, having these certifications, standards, and documents at the ready can help you not only save money but also win more business.
Early ISO 42001 adoption also shows you understand the space and are getting ahead of problems that haven’t yet fully materialized. It provides a market head start and proactively positions your organization to guide conversations with customers and investors, helping answer questions and reduce skepticism.
“People are still figuring out a lot of things in generative AI,” Austin said. “It’s a rapidly evolving space, which makes it a perfect opportunity to relay to others what we’re doing and the investments we’ve made. We have a clear mission and believe a lot of people can benefit from what we build. Using security to get in front of more people is part of our strategy.”
You don’t have to become ISO 42001 certified to realize the benefits. You can start building toward the certification and using it as a guide to map your controls.
The first step is securing buy-in across your organization, starting with senior management and growing into a team-wide effort. From there, ISO 42001 can create a deeper understanding of the technologies you need, the associated risks, how your data flows, and other concerns.
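A lightweight way to begin mapping your controls is a simple gap analysis that pairs the standard’s control areas with what you already have in place. The sketch below is purely illustrative; the control-area names are paraphrased and the statuses invented:

```python
# Hypothetical gap analysis: paraphrased ISO 42001 control areas -> existing controls.
# The area names and statuses below are illustrative, not quoted from the standard.
control_map = {
    "AI policy": {"existing": "Acceptable-use policy", "status": "partial"},
    "AI risk assessment": {"existing": None, "status": "missing"},
    "Data management for AI": {"existing": "Data retention policy", "status": "in place"},
    "Third-party AI suppliers": {"existing": "Vendor review process", "status": "partial"},
}

# Surface the gaps so you know where to invest first
for area, entry in control_map.items():
    if entry["status"] != "in place":
        print(f"{area}: {entry['status']} (current control: {entry['existing'] or 'none'})")
```

Even at this level of detail, a gap map turns “get ready for ISO 42001” from an abstract goal into a prioritized to-do list.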
Conclusion
AI compliance isn’t a one-and-done project. It’s an ongoing process of milestones and continuous improvement. The earlier you start thinking about privacy and security, the easier it will be to integrate protections.
The goal is to ensure ethical and responsible AI development. ISO 42001 provides the guidance you need to meet security standards while growing your business, setting your organization on a balanced path to compliance and future success.
--
For more information on ISO 42001 and how to balance AI innovation and governance, watch the full webinar with Matt Malone and Sean Austin.