One of the questions I regularly see about PS5 hacking progress is whether tools like ChatGPT could be leveraged to help reverse engineer the console or some of its defenses, such as its now infamous Hypervisor.
Here’s an opinion that will probably age like fine milk, so take it for what it is, a mid-week rant: in my opinion, the answer is no. Or at least, not as directly as people might think.
How AI is used in computer attacks today
There have been a lot of articles lately about ChatGPT and its use cases in programming in general, and in cybersecurity in particular.
Techopedia describes multiple ways that cybercriminals could leverage AI to attack networks, some of which are allegedly already being used:
- Creating deepfake data.
- Building better malware.
- Stealth attacks.
- AI-supported password-guessing and CAPTCHA-cracking.
- Generative Adversarial Networks (GANs).
- Human impersonation on social networking platforms.
- Weaponizing AI frameworks for hacking vulnerable hosts.
- Deep exploits.
- ML-enabled penetration testing tools.
A lot of these possible “attacks” rely on human flaws on the other end (e.g. human impersonation), on computer systems that are not up to date (e.g. hacking vulnerable hosts through AI frameworks), or are simply irrelevant in the context of hacking a “black box” such as the PS5 (e.g. CAPTCHA cracking).
One item on this list, however, looks like it could vaguely apply to hacking a console: improving penetration testing tools through ML. In the case of a device like the PS5, this could apply to fuzzing tools, decompilers, and the like. In other words, as many developers have already pointed out, AI can (currently) be of great use as an assistant that helps improve existing tools or data.
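To make the fuzzing angle concrete, here is a minimal mutational fuzzing sketch in Python. The toy target parser, its “MAGI” magic bytes, and the planted division-by-zero bug are all invented for illustration; real tools like AFL (and their ML-assisted descendants) apply the same mutate-run-observe loop to actual binaries.

```python
import random

def mutate(data: bytes, n_flips: int = 2) -> bytes:
    """Return a copy of the input with a few bytes randomized."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parse_header(blob: bytes) -> int:
    """Toy stand-in for a real target (e.g. a firmware image loader)."""
    if len(blob) < 6 or not blob.startswith(b"MAGI"):
        raise ValueError("bad magic")
    # Planted bug: crashes when the "block size" byte is zero.
    return 0x10000 // blob[5]

def fuzz(seed: bytes, iterations: int = 10000) -> list[bytes]:
    """Feed mutated inputs to the target and collect the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parse_header(sample)
        except ValueError:
            pass  # cleanly rejected input: not interesting
        except Exception:
            crashes.append(sample)  # unexpected failure: a candidate bug
    return crashes
```

Each crashing sample pinpoints an input the parser mishandles; an ML-guided fuzzer essentially replaces the blind `mutate` step with mutations learned to reach deeper code paths.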
In the case of reverse engineering a console such as the PS5, this means AI could have very practical uses today in decompilation tools such as Ghidra or IDA Pro, removing a lot of the actual “reverse engineering” work for hackers by providing human-readable code almost out of the box. I’m convinced that the companies making disassemblers are hard at work to make this happen sooner rather than later. As a matter of fact, plugins that use ChatGPT in Ghidra already exist.
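In spirit, those plugins are thin wrappers: they grab the decompiler’s pseudocode, wrap it in a prompt, send it to the model, and write the explanation (or suggested names) back into the listing. A minimal sketch of the idea follows; the prompt wording, function name, and the Ghidra-style pseudocode are my own invented examples, not code from any actual plugin.

```python
def build_prompt(pseudocode: str) -> str:
    """Wrap decompiler output in an instruction for a language model.

    A real plugin would send this string to the model's API and apply
    the suggested names back to the Ghidra listing.
    """
    return (
        "Explain what this decompiled C function does, then suggest a "
        "descriptive name for it and for each variable:\n\n" + pseudocode
    )

# Typical Ghidra-style output for an unnamed function (invented example):
pseudocode = """undefined8 FUN_00401000(char *param_1) {
    int iVar1 = 0;
    while (param_1[iVar1] != '\\0') iVar1++;
    return iVar1;
}"""

print(build_prompt(pseudocode))
```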
There’s also the ongoing idea that AI could look at some code (compiled or human-readable) and find bugs in it. A lot of folks in the programming and infosec communities have shared pretty impressive examples of ChatGPT detecting code issues.
ChatGPT exploits a buffer overflow 😳 pic.twitter.com/mjnFaP233h
— Brendan Dolan-Gavitt (@moyix) November 30, 2022
Very cool! I’ve asked it about some different codebases and it surprised me with some of its responses. Even if it’s not aware of the code in question it can often be a second set of eyes to understand a code snippet.
— nedwill (@NedWilliamson) January 25, 2023
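For a sense of what “detecting code issues” looks like in practice, here is the kind of planted bug such demonstrations involve (a hypothetical example, not taken from either tweet): a checksum routine with an off-by-one that silently ignores the last byte, exactly the sort of thing an AI assistant acting as a second set of eyes tends to spot.

```python
def checksum_buggy(packet: bytes) -> int:
    """Sum the packet bytes modulo 256 -- but the loop misses the last byte."""
    total = 0
    for i in range(len(packet) - 1):  # BUG: should be range(len(packet))
        total += packet[i]
    return total & 0xFF

def checksum_fixed(packet: bytes) -> int:
    """Corrected version: every byte contributes to the checksum."""
    return sum(packet) & 0xFF
```

The two versions disagree on any packet whose last byte is nonzero, which is precisely the discrepancy a reviewer, human or AI, has to notice.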
Could AI be used to hack the PS5?
Based on these early findings, some people in the PS5 hacking scene are salivating at the prospect of using ChatGPT to find bugs “easily” in WebKit or PlayStation firmware files, but they’re missing two critical points:
First, AI bases its findings on existing data. There’s no doubt that a sufficiently advanced AI program will be able to find flaws in existing open source code such as WebKit, and it’s probably already happening somewhere. But most of the code on the PS5 is closed source, and some of its most interesting parts, such as the Hypervisor, are not only proprietary, they also have a very small footprint (and therefore fewer software bugs), and they’re entirely new/unknown. AI doesn’t have any data on what a “PS5 Hypervisor” is, let alone what its code looks like. AI could possibly look at other Hypervisor/virtual machine implementations and suggest possible attack vectors for a somewhat “equivalent” system (the PS5), but it stops there.
Granted, it’s likely the PS5 shares some of its code with the PS4, which, even if it’s not “officially” public, has been decrypted for some time now. So existing data could help understand the inner workings of the PS5 better. But all of this would be missing the second point:
More importantly, such AI-based vulnerability detection mechanisms will be in the hands of big corporations before they reach hackers. This is probably a good thing overall (we do want our bank websites and our phones to be harder targets, after all), but in the case of a PS5 jailbreak, it most likely means Sony will have found the bugs (in part thanks to AI) way before hackers do. A free, ultra-efficient replacement for their bounty program, if you will. Such tools might already exist, and Sony is likely eyeing them, if not already using them.
Conclusion: PS5 Hardware hacks might be the only way
So, in my humble opinion, not only will it be difficult for hackers to leverage AI to help hack a black-box system such as the PS5, I think AI will actually help secure future consoles and consumer devices even more, which will negate any benefits it could bring to jailbreak creators down the road.
The one aspect that companies like Sony cannot easily fix with AI, however, is hardware. Any hardware hack moving forward will be extremely valuable because it’s impossible to patch (without a new hardware revision, that is). This is why a hack such as Fusée Gelée on the Nintendo Switch is so great: a console vulnerable to this exploit cannot be patched.
So there you have it: my opinion is that although AI can probably help (even today) with some reverse engineering tasks, I don’t see it coming up with a clever hack for a black-box system. Furthermore, I think that in the long run, AI will benefit consumer device security and walled gardens more than its “opponent”, jailbreaks.