The impact of ChatGPT on Web3, Web2 and online security

Last week, ChatGPT, the dialogue-based AI chatbot capable of understanding natural human language, took the world by storm. Gaining over 1 million registered users in just 5 days, it became the fastest-growing tech platform ever. ChatGPT generates impressively detailed, human-like written text and thoughtful prose in response to a text prompt. It also writes code.

Now that ChatGPT can write, scan and hack smart contracts, where do we go next?

The ChatGPT AI code writer is a game changer for Web3, and it can go two ways:

  • Near-instant security audits of smart contract code to find vulnerabilities and exploits, both in deployed contracts and in code prior to deployment (see the sketch after this list).
  • On the flip side, bad actors can direct AI to find and exploit vulnerabilities in smart contract code; thousands of existing smart contracts could suddenly find themselves exposed.
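
To make the first point concrete, here is a minimal Python sketch of an LLM-assisted audit, assuming the `openai` Python client (v1.x) and an API key in the environment. ChatGPT itself launched without a public API, so the client call reflects the API as it later became available; the model name and prompt wording are placeholders, not recommendations. The embedded contract contains a classic re-entrancy bug for the model to find.

```python
# Hypothetical sketch: asking an LLM to audit a smart contract.
# Assumes the `openai` Python client (v1.x) and OPENAI_API_KEY set
# in the environment; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTRACT = """
pragma solidity ^0.8.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        require(balances[msg.sender] >= amount, "insufficient");
        // External call before the state update: re-entrancy risk.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] -= amount;
    }
}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a smart contract security auditor. "
                       "List vulnerabilities with severity and a fix.",
        },
        {"role": "user", "content": f"Audit this contract:\n{CONTRACT}"},
    ],
)

print(response.choices[0].message.content)
```

The flip side needs no extra machinery: the same loop pointed at thousands of verified contracts scraped from a block explorer turns the auditor into an attacker's vulnerability scanner.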


The Naoris Protocol POV:

  • In the long term, this will be a net positive for the future of Web3 security.
  • In the short term, AI will expose vulnerabilities that will need to be addressed, and we could see a spike in breaches.
  • AI will illuminate where humans need to improve.
  • AI is not a human being. It will miss basic preconceptions, knowledge and subtleties that only humans see. It is a tool that will catch vulnerabilities coded in error by humans, and it will seriously improve the quality of smart contract code. But we can never trust its output 100%.

ChatGPT, Web2 and the Enterprise

The Naoris Protocol POV:

  • Artificial intelligence that writes and hacks code could spell trouble for enterprises, systems and networks. Current cybersecurity is already failing, with hacks rising sharply across every sector in recent years; 2022 is reportedly already 50% up on 2021.
  • ChatGPT can be used positively within an enterprise's security and development workflow, raising defence capabilities above current security standards (see the sketch after this list). However, bad actors can widen the attack surface, working smarter and much faster by instructing AI to look for exploits in well-established code and systems. Heavily regulated enterprises, such as those in the financial services industry (FSI), would not be able to react or recover in time, given how current cybersecurity and regulation are configured.
  • For example, the average breach detection time, as measured in IBM's 2020 data security report, is 280 days. With AI as part of an enterprise's defence-in-depth posture, breach detection time could be reduced to less than one second, which changes the game.
  • The advent of AI platforms like ChatGPT will require enterprises to up their game: they will have to implement and use AI services within their security and QA workflows before launching any new code or programmes.
  • Once the genie is out of the bottle, whichever side isn't using the latest technology will be in a losing position. So if there's offensive AI out there, enterprises will need the best defensive AI to counter it. It's an arms race over who has the best tool.
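
As a rough sketch of what AI inside a security QA workflow could look like, the snippet below gates a CI pipeline on an AI review of a branch's diff. Everything here is an assumption for illustration: the `openai` client, the placeholder model name, the prompt and the PASS/FAIL convention. And because, as noted above, the model's output can never be 100% trusted, a real pipeline would keep a human reviewer behind the gate.

```python
# Hypothetical sketch of an AI review gate in CI: send the branch
# diff to a model and fail the build if the review flags an issue.
# Model name, prompt and PASS/FAIL convention are illustrative.
import subprocess
import sys

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Diff of the branch under review against main.
diff = subprocess.run(
    ["git", "diff", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

review = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Review this diff for security vulnerabilities. "
                       "Reply 'PASS' or 'FAIL: <reason>' on the first line.",
        },
        {"role": "user", "content": diff},
    ],
).choices[0].message.content

print(review)
# Non-zero exit blocks the merge until a human looks at the finding.
sys.exit(0 if review.strip().startswith("PASS") else 1)
```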