A Quick Tech & Nonsense Update - Paid Subscriptions Are Gone: I recently turned on paid subscriptions here. The truth is I never felt good about that, from several angles. So I'm refunding my two paid subscribers (I started that yesterday), and I'm very happy that I no longer have to think about how to reward paid subscribers without rewarding free subscribers (I hate all of that). Instead, I've added "Buy Me a Coffee" - which requires no ongoing commitment, and puts T&N right back where all subscribers are free subscribers.
Now, on to today’s post:
Ever since ChatGPT was released (unleashed?) we've seen a tidal wave of reporting on the new cyber threats that come along with the powers of generative AI tools. One of the most talked-about threats involves social engineering - specifically, AI's ability to create far more effective phishing emails. Bye-bye, easy-to-spot spelling and grammar mistakes; hello, large language models working 24/7 to craft phishing emails that are much harder to identify.
I follow this topic, among many other cyber threat areas, as part of a daily routine that I hope keeps me somewhat sharp in my cybersecurity work. I have some thoughts on this new level of phishing threat, but today I'm more interested in hearing what my collection of AI apps has to say about it. So I asked them this morning. Below are the AI apps I prompted on this topic, the prompts I used, excerpts from some of the best and most interesting responses, and the common responses - the areas of agreement among the AI apps.
Which apps did I ask about these threats?
The AI apps I asked for insight are Google’s Bard, Anthropic’s Claude 2-100K (as the selected model in the Poe app developed by Quora), OpenAI’s latest GPT-4 (also via Poe), Mistral Medium (a popular and well-regarded model from the French lab Mistral AI, which is best known for its open-source models), Microsoft Copilot (which uses GPT-4), Google’s beta Gemini Pro (as the selected model in the Perplexity app), and Inflection’s Pi.
First question/prompt: What will some of the most successful AI generated phishing themes be in 2024?
Common responses: Deepfakes were a unanimous pick. Increased personalization and more effective spearphishing were also mentioned often.
Mistral Medium on deepfakes:
Deepfakes are already a concern, and as AI technology advances, it's likely that phishing attacks will incorporate convincing deepfake audio and video recordings. These deepfakes may impersonate trusted individuals, such as coworkers, managers, or even family members, to trick targets into divulging sensitive information or transferring funds.
GPT-4 offered the strongest response:
COVID-19 and Public Health Scams: Though the pandemic may have subsided, cybercriminals could still exploit lingering public health concerns, such as new variants or vaccination reminders, to trick individuals into revealing personal information.
Financial Relief and Government Grants: Scammers might continue to capitalize on economic hardships or new government programs purporting to offer financial support or tax breaks, enticing victims to provide sensitive details to receive aid.
Cryptocurrency Investments: With the growing interest in cryptocurrencies, phishing schemes promising lucrative investment opportunities or assistance with digital wallets could become more sophisticated.
Tech Support Scams: With increasing dependency on technology, fake alerts about security breaches or issues with personal accounts can prompt users to inadvertently give away login credentials or install malware.
Natural Disaster and Charity Fraud: After any significant natural disaster, fraudulent charity campaigns often emerge. AI could generate more believable and emotionally compelling narratives to solicit donations that never reach the intended recipients.
Work-from-Home and Remote Job Offers: As remote work becomes more normalized, phishing emails that offer too-good-to-be-true job opportunities might be used to harvest personal data or require payment for "training" or "equipment."
AI and Machine Learning Platforms: As AI becomes more central to business operations, phishing schemes could target users by impersonating popular AI platforms or services, asking for credentials or payment to access new features or avoid service disruption.
Utility Scams: With fluctuating energy prices and changes in utility services, phishing messages could claim to offer rebates, discounts, or warnings about service terminations to gather payment information.
Social Media Impersonation: Attackers might use AI to craft more convincing fake messages from social media platforms, claiming to need information to verify accounts or to alert users to policy violations.
Current Event Scams: Phishing attacks often leverage real-time events, such as political turmoil, celebrity incidents, or high-profile court cases, to create a sense of urgency or curiosity that leads to hasty clicks on malicious links.
Bard started its response with a very responsible and cautious preface:
Gemini Pro gave the fastest and most concise response:
Personalized Phishing Attacks: AI will be used to create highly personalized phishing emails and messages tailored to individual targets, making them harder to identify as malicious.
Deepfake Attacks: AI-generated deepfake technology will be used to create convincing and manipulative videos or audio recordings that can be used to impersonate trusted individuals or organizations.
AI-Powered Malware: Malware will become more sophisticated and evasive, using AI to learn from its environment, bypass security measures, and adapt to new defenses.
Automated Social Engineering Attacks: AI will be used to automate social engineering attacks, such as creating fake social media profiles or sending targeted messages, to manipulate and deceive victims.
AI-Enabled Botnets: Botnets will be powered by AI to become more intelligent and adaptive, making them harder to detect and disrupt.
Second question/prompt: What are some of the best methods and/or tools to defend against these AI generated phishing attacks?
Common responses: Multi-factor authentication (MFA) and security awareness training were the unanimous choices here. Other common responses offered guidance for both home users and organizations, and included staying on top of security patches and updates; the use of password managers; having a suite of email, web, and network filtering tools; and the use of anti-phishing tools powered by AI and ML (machine learning).
Pi mentioned DNS filtering; GPT-4 did as well, in greater detail:
Domain-Based Message Authentication, Reporting, and Conformance (DMARC): Adopt DMARC to help prevent email spoofing by verifying that the sender's domain is legitimate.
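For reference, DMARC is deployed as a DNS TXT record on the sending domain, working alongside SPF and DKIM. A minimal example (the domain and report address are placeholders) looks like this:

```dns
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here `p=` sets the policy for mail that fails authentication (none, quarantine, or reject), and `rua=` is where receiving servers send aggregate reports.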
Mistral Medium showed a smart approach to user training:
Encourage reporting: Foster a culture where employees feel comfortable reporting potential phishing attempts without fear of repercussions. This can help organizations identify and respond to phishing attacks more quickly.
Bard had good advice for outside of the workplace:
Talk to your friends, family, and colleagues about phishing. The more people are aware of the dangers, the better equipped they will be to protect themselves.
Bard and MS Copilot both offered good links for further learning, Bard’s were a little better:
The Anti-Phishing Working Group: https://apwg.org/trendsreports/
The Federal Trade Commission's website on phishing: https://www.ftc.gov/office-inspector-general/ftc-imposter-scams
The National Cyber Security Alliance: https://staysafeonline.org/
The AI apps’ responses here are not full of revelations or brand-new insights. Having said that, as with many things we do with chatbots, they delivered very fast and very useful information. Whether you are researching these types of attacks for the first time or are a veteran at tracking them, the speed and quality of the responses are a huge help.
It’s also notable that across the seven AI apps I used and the two questions I posed, I got back zero responses anywhere near the “hallucinations” (inaccurate or ridiculous responses) category. I’m confident that would not have been the case even six months ago; that's yet more evidence of how fast these tools are improving.