I’ve done a series of “Asking AI” posts here recently, where I have queried a number of generative AI (GenAI) apps on cybersecurity topics. I plan to continue doing similar posts because I am fully on board with the idea that using these GenAI tools can lead to faster and, more importantly, better results. I’m a believer in AI in copilot mode, augmenting our skills.
Today is a slightly different spin on this. This morning, instead of asking AI for help, I gave it a task. Specifically, I asked for help in drafting a Cyber Supply Chain Risk questionnaire that vendors would be required to complete when an enterprise is assessing them. I also asked it to consider emerging risks associated with the use of GenAI by SaaS (software as a service) providers.
I gave the same task, the same prompt, to Gemini Pro, Pi, ChatGPT (GPT-4), Claude-3-Opus-200k, and Copilot. Here is my prompt to them:
You are part of the cybersecurity team at a large enterprise. Please draft a questionnaire to be completed by a third party vendor of a cloud-based service, to assess their security capabilities and due care. This should take into account standard risks considered in third party risk management, as well as new risks associated with the use of generative AI by SaaS providers.
As usual, each of the GenAI apps came up with a good response, with Gemini Pro, ChatGPT, and Claude-3-Opus-200k standing out as the strongest. Gemini Pro was the best on Data Management and Security:
Data Classification: Describe your data classification procedures. How is sensitive data (personally identifiable, financial, confidential company information) identified?
Data Encryption: Detail encryption protocols (in-transit, at-rest, key management). Are customer-specific encryption keys available?
Data Retention: What is your data retention policy? How is data securely deleted once beyond the retention period?
Data Locality: Where is data stored geographically? Are there options for customers to specify storage locations?
Data Masking/Anonymization: Do you use data masking or anonymization techniques for development, testing, or analytics purposes? If yes, please describe these methods in detail.
Claude offered the best response overall though. I like that it offered up a very appropriate Introduction section:
Introduction:
As part of our third-party risk management process, we kindly request that you complete the following questionnaire to help us assess your organization's security capabilities and due care practices. This questionnaire covers standard risks associated with third-party vendors, as well as emerging risks related to the use of generative AI in SaaS offerings. Please provide detailed responses to each question and attach relevant supporting documentation where applicable.
And its section on generative AI is excellent:
Generative AI-Specific Questions:
21. Do you utilize generative AI technologies in your cloud-based service(s)?
22. If yes, please describe the specific generative AI technologies used and their purpose
23. How do you ensure the security and integrity of the AI models and training data?
24. What measures are in place to prevent unauthorized access to or manipulation of AI systems?
25. How do you address risks associated with AI-generated content, such as biased or misleading outputs?
26. Do you have policies and procedures for the ethical use and deployment of generative AI?
27. How do you monitor and audit the performance and outputs of your generative AI systems?
28. Are there any human oversight or intervention mechanisms in place for your AI systems?
Please provide any additional information or comments that you feel are relevant to your organization's security posture and the use of generative AI in your cloud-based service(s).
The standard qualifier for all things delivered up by these GenAI tools applies here: their output can include inaccuracies, mistakes, offensive content, etc., and must always be reviewed before thinking about using it. Having said that, the responses from Gemini Pro, ChatGPT, and Claude here are good, solid drafts. Each of these could easily be worked with, modified or polished as needed, and put to good use when carrying out vendor cyber risk assessments.
As I mentioned at the top of this post, I plan to continue with this theme of asking, and also tasking, AI for assistance in looking at various cybersecurity topics. If you have a particular topic you’d like to see covered, please leave a comment.