
AI Tackles Smart Contract Vulnerabilities: New Research Shows


• Artificial intelligence (AI) is being tested to see if it can replace human smart contract auditors.
• OpenZeppelin conducted an experiment using the AI model ChatGPT-4 to identify vulnerabilities in 28 Ethernaut challenges.
• GPT-4 solved 19 of the 23 challenges released before its training cutoff but failed 4 of the 5 newer levels (24-28), suggesting that AI cannot completely replace human auditors.

Exploring the Use of AI for Smart Contract Auditing

AI is an increasingly prominent topic of discussion and concern as its capabilities evolve and its potential to replace human jobs is explored. One area where the technology could make a difference is smart contract auditing. To evaluate this possibility, OpenZeppelin recently ran ChatGPT-4, an AI model developed by OpenAI, against 28 Ethernaut challenges designed to test the detection of smart contract vulnerabilities.

What are Ethernaut Challenges?

Ethernaut challenges are puzzles created by OpenZeppelin for testing and improving one’s understanding of Ethereum (ETH) smart contract security vulnerabilities. The levels range from easy to difficult and cover issues such as re-entrancy attacks, integer underflows, and overflows. Each challenge presents a unique smart contract with a specific vulnerability that players must identify and exploit in order to complete the challenge.
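To make one of these vulnerability classes concrete, the underflow behavior can be sketched in Python by emulating the wrap-around arithmetic of Solidity’s 256-bit unsigned integers (pre-0.8.0 Solidity, where arithmetic was unchecked by default). This is a hypothetical illustration, not code from any Ethernaut level:

```python
# Emulate uint256 arithmetic in old Solidity (< 0.8.0), which wraps
# on overflow/underflow instead of raising an error.
UINT256_MAX = 2**256 - 1

def uint256_sub(a: int, b: int) -> int:
    """Subtract b from a with uint256 wrap-around semantics."""
    return (a - b) % (2**256)

# A vulnerable withdrawal: subtracting more than the balance does not
# fail, it silently wraps to an enormous value.
balance = 10
balance = uint256_sub(balance, 11)  # underflows
assert balance == UINT256_MAX       # balance is now 2**256 - 1
```

An attacker who can trigger such a subtraction effectively mints themselves a near-infinite balance, which is why underflow checks (or Solidity 0.8.0’s built-in checked arithmetic) matter.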

GPT-4 Performance on Ethernaut Challenges

The experiment involved presenting GPT-4 with the code from each level and asking it to find any security flaws or bugs present in it. The model solved 19 of the 23 challenges released before its September 2021 training-data cutoff, but it failed 4 of the 5 newest levels (24-28), which were released afterwards. This suggests that while AI may be useful for finding some security bugs in smart contracts, it cannot yet fully replace human auditors, because it has so far been unable to identify new vulnerabilities quickly enough as they emerge in real-world situations.

Factors Behind GPT-4’s Success

One significant factor that might have contributed to GPT-4’s success on Levels 1-23 is that its training data may have contained published solutions for these levels from before the September 2021 cutoff. Levels 24-28, by contrast, were released after that date, so the model had never seen them and was unable to solve them, which reinforces the likelihood that published solutions in the training data explain the earlier success. Furthermore, according to OpenZeppelin’s research team, ChatGPT’s “temperature” setting, which determines how random or deterministic the model’s responses are, also affects its performance on these tasks.
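The temperature setting the researchers mention can be illustrated with the standard softmax-with-temperature formulation used in language-model sampling. This is a generic sketch of the mechanism, not OpenZeppelin’s actual setup: dividing the model’s raw scores by a low temperature sharpens the distribution toward the single most likely answer, while a high temperature flattens it toward random choice.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.
    Scaling by 1/temperature controls randomness: T -> 0 makes the top
    token near-certain; large T pushes the distribution toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                        # hypothetical token scores
cold = softmax_with_temperature(logits, 0.1)    # near-deterministic
hot = softmax_with_temperature(logits, 10.0)    # near-uniform
```

For a bug-finding task, a lower temperature favors the model’s most confident diagnosis, whereas a higher one makes its answers more varied from run to run, which is why the setting can swing results on the same challenge.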

Conclusion

In conclusion, while artificial intelligence can be a useful tool for locating certain types of security weaknesses, it cannot fully substitute for human auditors, given its inability to recognize new threats quickly enough as they appear in real-world scenarios. More research in this field should therefore be conducted to refine the algorithms used, increase their efficacy, and better protect our digital infrastructure from malicious actors.
