Is your use of AI secure?

SCMagazine.com reported: “Security faced significant challenges in the past year. On the threat side, exploits and numerous zero-day attacks sent security teams scrambling. On the regulation side, the U.S. SEC instituted stringent incident reporting requirements, while in the EU, government bodies proposed new frameworks like the Artificial Intelligence Act in response to rising AI threats.” The August 12, 2024 article, titled “Three ways to start owning AI security” (https://tinyurl.com/38s5zjtx), included these three ways to take action:

#1 Understand vendor usage of AI: Identify the vendors that are leveraging AI in their software, and ask specific questions to understand how AI is applied to the company’s data. Determine whether vendors are training models on the data the company provides and what that means for further protecting company data. The Cloud Security Alliance (CSA) offers excellent resources, such as its AI Safety Initiative, which includes valuable research and education on AI safety and security.

#2 Demand transparency and control: Insist on transparency about how AI is used in the products the company buys. For example, at our company, we are very transparent about our use of AI and even let customers turn it off if they are not comfortable with the technology. These are the choices we should demand from our products (a minimal sketch of such an opt-out appears below). Find out which vendors are moving to a model where they train the AI on the company’s sensitive data; that practice is risky, and security teams need to decide on their own level of comfort.
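To make the “turn it off” point concrete, here is a minimal sketch of what a per-customer AI opt-out could look like. Everything in it (CustomerSettings, ai_enabled, summarize_with_ai) is a hypothetical illustration, not any particular vendor’s API; the point is simply that the AI-backed code path is gated on an explicit customer choice and falls back to non-AI behavior.

```python
from dataclasses import dataclass


@dataclass
class CustomerSettings:
    """Per-customer preferences, including an explicit AI opt-out.

    Hypothetical structure for illustration only.
    """
    customer_id: str
    ai_enabled: bool = False  # default off: the customer must opt in


def summarize(document: str, settings: CustomerSettings) -> str:
    """Use the AI feature only when this customer has enabled it."""
    if settings.ai_enabled:
        return summarize_with_ai(document)  # AI-backed path
    return document[:200]  # non-AI fallback: plain truncation


def summarize_with_ai(document: str) -> str:
    # Stand-in for a real model call, included only so the sketch runs.
    return f"[AI summary of a {len(document)}-character document]"


# Example: this customer has not opted in, so no data reaches the AI path.
settings = CustomerSettings(customer_id="acme")
print(summarize("Quarterly report draft ...", settings))
```

The design choice worth noting is the default: ai_enabled starts as False, so a customer’s data never touches an AI feature unless someone deliberately turns it on.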

#3 Follow evolving community frameworks: There are many frameworks in development, but two I recommend looking at now are the NIST AI RMF and ISO 42001. Other resources, such as the OWASP AI Security and Privacy Guide and MITRE ATLAS, will help you stay up to date on the latest.

Good ideas. What do you think?
