Today, a couple of AI-in-cyber headlines caught my eye. One article outlines realistic and worrisome ways generative AI could be used to craft far more effective attacks at various stages of the cyber kill chain. The other is promising-sounding news that the popular VirusTotal site is incorporating AI to improve its malware analysis and detection capabilities.
On the offensive side first, an article at Dark Reading offers up this rather scary scenario:
Consider the untapped AI training potential of vulnerability scanning tools such as Acunetix or exploitation frameworks such as Metasploit. These tools automate the reconnaissance and exploitation stages of the cyber kill chain. Today, these tools require human guidance and direction. Advanced persistent threats (APTs) using them to target organizations are focused on the environment of a single victim …
Imagine an AI that trains on security exploitation as deeply as LLMs train on language. Picture an AI training on all known CVEs, the NIST Cybersecurity Framework, and the OWASP Top 10 as part of its core data set. To truly make this AI dangerous, it should also train on data lakes generated by popular hacking tools. For example, use Nmap for a few million networks and train the AI to recognize correlations between open ports, OS versions, and domains. Run Nessus vulnerability scans in thousands of environments and feed the results to the AI to "learn" patterns of enterprise security flaws.
I’m light-years away from being an AI expert, but that all sounds quite feasible. Soaking up that particular type of data at enormous scale doesn’t seem like a stretch, given the genuinely exponential growth of generative AI over just the last five or six months.
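To make the scenario a bit more concrete, here is a minimal sketch of what "feeding scan results to an AI" could look like at the data-preparation stage. Everything here is my own invention for illustration - the hosts, field names, and port list are hypothetical, and a real pipeline would parse actual Nmap XML output rather than hand-built dicts.

```python
# Hypothetical sketch: turning Nmap-style scan results into simple
# fixed-length feature vectors that an ML model could train on.
# All data and field names below are invented for illustration.

COMMON_PORTS = [22, 80, 443, 445, 3389]  # SSH, HTTP, HTTPS, SMB, RDP

def host_to_features(host):
    """Encode one scanned host as a numeric vector:
    one 0/1 flag per common port, plus a crude OS indicator."""
    open_ports = set(host.get("open_ports", []))
    features = [1 if p in open_ports else 0 for p in COMMON_PORTS]
    features.append(1 if "windows" in host.get("os", "").lower() else 0)
    return features

# Two invented example hosts standing in for scan output
scan_results = [
    {"ip": "10.0.0.5", "open_ports": [80, 443], "os": "Linux 5.x"},
    {"ip": "10.0.0.9", "open_ports": [445, 3389], "os": "Windows Server 2019"},
]

matrix = [host_to_features(h) for h in scan_results]
print(matrix)  # [[0, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]
```

Scale that up to the "few million networks" the article imagines, and the correlations between open ports, OS versions, and known flaws become exactly the kind of pattern a model can learn.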
On the more cheerful defensive side of things, there is news via Bleeping Computer that this week VirusTotal launched “a new artificial intelligence-based code analysis feature named Code Insight.” Google owns VirusTotal, so it’s not shocking that the new feature is powered by Google’s own Cloud Security AI Workbench, which features a large language model fine-tuned for security use cases. The Google Cloud Security team’s video on Code Insight says it quickly identifies and describes malicious code in support of its goal of “catching the bad actors far sooner”. It also incorporates threat-intelligence-driven analysis from another Google company, Mandiant.
Right now, Code Insight analyzes only a specific subset of PowerShell files - including .PS1 PowerShell script files - with expanded file-type support and broader capabilities promised in the coming days and weeks. Starting with PowerShell scripts is nothing to scoff at: they are heavily used in cyber attacks and can target just about any organization with Windows machines in its environment, since PowerShell is a core component built deeply into Windows.
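For a sense of why PowerShell scripts are such fertile ground for automated analysis, here is a toy triage heuristic - my own sketch, vastly cruder than anything an LLM-based tool like Code Insight does - that flags two patterns defenders commonly watch for in .PS1 files: Base64-encoded commands and in-memory download-and-execute.

```python
import re

# Toy signature-style PowerShell triage. These two patterns are
# well-known red flags; real analysis goes far beyond simple regexes.
SUSPICIOUS_PATTERNS = {
    # powershell -EncodedCommand <base64> hides the real command line
    "encoded_command": re.compile(r"-enc(odedcommand)?\b", re.IGNORECASE),
    # DownloadString piped to Invoke-Expression (IEX) runs remote code in memory
    "download_and_invoke": re.compile(
        r"downloadstring|invoke-expression|\biex\b", re.IGNORECASE
    ),
}

def triage_script(text):
    """Return the names of suspicious patterns found in a script."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items()
            if pat.search(text)]

# Invented example inputs: one routine admin one-liner, one sketchy launcher
benign = "Get-ChildItem C:\\Logs | Sort-Object LastWriteTime"
sketchy = "powershell -EncodedCommand SQBFAFgA..."

print(triage_script(benign))   # []
print(triage_script(sketchy))  # ['encoded_command']
```

The gap between a brittle regex list like this and a model that can read obfuscated code and explain what it actually does is precisely what makes the Code Insight approach interesting.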
As a blue teamer - working on the defensive side of cybersecurity - my hope, of course, is that our defensive security tools, along with our still-useful human efforts, will keep pace with and thwart enough of the potential onslaught of AI-driven attack methods to keep our organizations and ourselves safe.