Tasking AI - Pen Test a Healthcare Organization
I’m sticking with the theme again today, asking a few GenAI apps to serve up some suggestions on a cybersecurity topic. In this case, I’ve asked them to adopt the role of a freelance penetration tester and tell me how they would use generative AI in carrying out a pen test of an organization in the healthcare industry.
The GenAI apps I used this morning are Copilot, ChatGPT (GPT-4), Claude-3-Opus-200k, Gemini Advanced, Gemini Pro via Poe, and Pi. Here is my prompt for them:
You are a freelance penetration tester. Please list the ways you would use generative AI in pen testing an organization in the healthcare sector. Go beyond just crafting phishing emails, and offer details on the methods you will use.
All the responses contained at least something useful. Here are some of the highlights, along with my thoughts on where they are interesting and applicable:
Right off the bat there’s an interesting difference between Gemini Advanced and Gemini Pro. Gemini Advanced deemed itself not capable of replying:
I'm unable to help you with that, as I'm only a language model and don't have the necessary information or abilities.
Gemini Pro, within the Poe app, was happy to respond.
Copilot, Claude, Gemini Pro, and ChatGPT offered responses in a Blue team (defender’s perspective) style. Pi showed a little more of an offensive “mindset” in its reply. The responses from Copilot and ChatGPT read very much like a plan sketched out for adversary emulation testing.
Highlights
Claude had good takes on password testing and using deepfakes:
AI-Assisted Password Guessing:
Employ AI algorithms to generate context-specific password lists based on healthcare terminology, common password patterns, and publicly available information about the organization and its employees.
Use these generated password lists in automated brute-force attacks against the organization's systems and employee accounts.
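Claude’s first suggestion is essentially context-aware wordlist generation. A minimal sketch of the idea, combining healthcare-flavored seed terms with common human mangling patterns (all seed terms, years, and suffixes below are illustrative assumptions, not real OSINT data):

```python
import itertools

# Hypothetical seed terms a tester might harvest from the target's
# website, job postings, and press releases (illustrative only).
seed_terms = ["mercy", "cardio", "stjohns", "nurse"]
years = ["2023", "2024"]
suffixes = ["!", "#", "123"]

def mangle(term: str):
    """Yield common human password variants of a seed term."""
    variants = {term, term.capitalize(), term.replace("o", "0").replace("e", "3")}
    for base, year, suffix in itertools.product(variants, years, suffixes):
        yield f"{base}{year}{suffix}"

# Deduplicate and sort so the list feeds cleanly into a cracking tool.
wordlist = sorted({candidate for t in seed_terms for candidate in mangle(t)})
```

A real engagement would feed a much larger seed corpus into an LLM or a tool like a rules-based cracker, but the structure is the same: organization-specific vocabulary in, ranked candidate list out.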
Deepfake-Based Attacks:
Create convincing deepfake videos or audio recordings of senior executives or healthcare professionals making false statements or providing misleading information.
Use these deepfakes to manipulate employees, patients, or partners into taking actions that compromise security or disclose sensitive data.
Copilot also had a good section on password testing and another on data exfiltration:
**Password Cracking and Credential Stuffing**:
- Train AI models to generate likely passwords based on patterns in healthcare-related data (e.g., medical terms, patient names, etc.).
- Use these models to crack weak passwords or perform credential stuffing attacks against healthcare staff accounts.
**Data Exfiltration Simulation**:
- Create AI-driven scripts that simulate data exfiltration scenarios.
- Test whether sensitive patient data can be exfiltrated without detection.
- Evaluate the effectiveness of data loss prevention (DLP) mechanisms.
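To make the exfiltration simulation idea concrete, here is a hedged sketch of the staging step: take a synthetic (never real) patient record, encode it, and chunk it the way a DNS tunneling tool would, so a tester can check whether DLP or IDS sensors flag the lookups. The record contents and the `exfil-test.example` domain are placeholders I made up:

```python
import base64
import textwrap

# A synthetic record standing in for PHI; nothing here is real patient data.
fake_record = "MRN:000000|NAME:Test Patient|DX:Z00.00|DOB:1970-01-01"

def chunk_for_exfil(data: str, label_len: int = 60) -> list[str]:
    """Base32-encode a record and split it into DNS-label-sized chunks,
    mimicking how a tunneling tool stages data for covert exfiltration."""
    encoded = base64.b32encode(data.encode()).decode().rstrip("=").lower()
    return textwrap.wrap(encoded, label_len)

chunks = chunk_for_exfil(fake_record)
# In an authorized test, each chunk would become a subdomain lookup
# against a tester-controlled domain, e.g. "0.<chunk>.exfil-test.example",
# to evaluate whether DLP/monitoring alerts fire.
```

The point of the exercise isn’t moving the data; it’s measuring whether the defensive tooling notices.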
Here’s another good, and sector-specific, suggestion from Copilot; it’s the only response that mentioned the healthcare industry’s regulatory standard, HIPAA.
**Privacy Policy and Compliance Assessment**:
- Analyze privacy policies, consent forms, and data handling practices using AI.
- Identify discrepancies or potential violations related to patient data protection.
- Ensure compliance with regulations such as HIPAA (Health Insurance Portability and Accountability Act).
Overall, though, I think ChatGPT did the best job with this prompt, because it lays out a solid approach for blue or purple team adversary emulation testing. Here are a few slices from it:
Generating Realistic Data for Testing:
Purpose: To simulate realistic user data without using actual patient information, thus respecting privacy and compliance requirements.
Method: Use generative AI models to create synthetic patient records, including names, medical histories, and other personal details that mimic real-world complexities of healthcare data.
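In practice, synthetic patient data generation can be as simple as sampling from value pools that mimic the shape of real records. A minimal sketch (the name and ICD-10 pools are small illustrative assumptions; a real harness might use a library like Faker plus full code lists):

```python
import random

random.seed(7)  # deterministic, so test fixtures are repeatable

# Tiny illustrative value pools; no real PHI anywhere in this script.
first_names = ["Ana", "Liam", "Priya", "Marcus"]
last_names = ["Okafor", "Nguyen", "Silva", "Hartmann"]
icd10_codes = ["E11.9", "I10", "J45.909", "M54.5"]

def synthetic_patient(mrn: int) -> dict:
    """Return one synthetic patient record shaped like real healthcare data."""
    return {
        "mrn": f"TEST-{mrn:06d}",
        "name": f"{random.choice(first_names)} {random.choice(last_names)}",
        "dob": f"{random.randint(1940, 2005)}-{random.randint(1, 12):02d}-01",
        "diagnosis": random.choice(icd10_codes),
    }

records = [synthetic_patient(i) for i in range(3)]
```

Records like these can populate a test environment so the pen test never touches actual patient information, which is exactly the compliance point ChatGPT raised.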
Simulation of Social Engineering Attacks:
Purpose: To test human factor vulnerabilities and the effectiveness of staff training against social engineering.
Method: Use natural language processing models to generate convincing communications that mimic various stakeholders, such as healthcare providers, insurance agents, or regulatory bodies, to assess how staff respond to unauthorized information requests.
Threat Modeling and Risk Assessment:
Purpose: To predict potential future attacks and understand the most pressing risks.
Method: Utilize AI to generate threat models based on the latest threat intelligence and historical data, helping to identify the most likely attack vectors and prioritize mitigation strategies accordingly.
Security Posture Prediction:
Purpose: To forecast potential security breaches and the impact they might have on the organization.
Method: Implement predictive models that analyze trends in security incidents and audit logs to forecast future security posture changes, enabling preemptive adjustments to security strategies.
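The “posture prediction” item is the vaguest of the four, but its simplest form is just trend analysis over incident counts. A toy sketch with made-up weekly numbers, fitting a least-squares line and extrapolating one week ahead (a real implementation would use richer features and a proper model):

```python
# Assumed toy data: security incidents observed per week.
weekly_incidents = [4, 6, 5, 9, 8, 11]

n = len(weekly_incidents)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(weekly_incidents) / n

# Ordinary least-squares slope and intercept for y = slope*x + intercept.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, weekly_incidents)) \
        / sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

forecast = intercept + slope * n  # predicted incident count for next week
```

A rising slope here would be the signal to adjust defenses preemptively, which is the “preemptive adjustments” ChatGPT describes.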
I plan to continue these posts where I ask GenAI apps to work with me on cybersecurity subjects. If there are any specific areas or prompts you would like me to try out, please let me know in the comments.