The AI Revolution: Skynet or Siri?
Like the internet and smartphone revolutions before it, the AI revolution is upon us and it is here to stay. While these tools are new and still (for the most part) untested, they present incredible opportunities and should be explored to the fullest extent possible. Starting with the release of OpenAI’s ChatGPT in November 2022, major tech companies such as Google, Microsoft and X have begun rolling out their newest chatbots powered by large language models (LLMs). This rollout is creating competition for market dominance that is driving the rapid development and improvement of these tools.
Simply put, LLMs can be thought of as the brains of the chatbots; the parameters within the LLMs act as the neurons. The more parameters an LLM has, the more capable it is of performing everyday tasks, which can range from telling a creative story to writing software code. Some AI tools, such as Midjourney, already generate images, and Google’s upcoming Gemini LLM, expected in late 2023 or early 2024, is slated to have image generation capabilities as well.
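To make the "parameters as neurons" idea concrete, here is a minimal sketch that counts the learnable parameters in a toy feed-forward network. The layer sizes are purely illustrative (not from any real model): each pair of adjacent layers contributes a weight matrix plus one bias per output neuron. Real LLMs work the same way at vastly larger scale, with parameter counts in the billions.

```python
def count_parameters(layer_sizes):
    """Count learnable parameters in a fully connected network.

    Each adjacent layer pair (n_in -> n_out) contributes an
    n_in x n_out weight matrix plus n_out bias terms.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# Toy network: 8 inputs -> 16 hidden neurons -> 4 outputs
tiny_net = [8, 16, 4]
print(count_parameters(tiny_net))  # (8*16 + 16) + (16*4 + 4) = 212
```

A network this size has a couple hundred parameters; scaling the same arithmetic up to thousands of wide layers is what produces the billion-parameter counts quoted for modern LLMs.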
With all of this going on, many people are wondering: Are we on the cusp of seeing Skynet jump from our TV screens into reality? Simply put, as AI currently stands, the answer is no, not yet. While none of us know what tech companies or governments might be doing behind closed doors, there is very little proof right now of AI achieving sentience or real-world physical capabilities that might be dangerous to humanity. Sure, companies such as Boston Dynamics have successfully built robots that can run, jump and dance, but that is a far cry from a robot that can make conscious decisions and act on them.
There have been many fascinating interviews and articles on the topic of defining sentience and what it means for AI, including an interview from the Shawn Ryan Show with AI thought leader Sean Webb. Webb points out that, through the learning these models are doing, they are learning to make decisions and take actions based on those decisions to achieve goals. In a way, as Webb notes, this learning can be considered a form of self-preservation.
While this self-preservation is not something to lose sleep over, it is important to remember that self-preservation needs empathy; this is true for humans and AI alike. In the same way you teach a child not to hit a friend they are unhappy with, AI needs to learn both the value of preserving itself and the value of human life. That way, when these chatbots become more powerful and interconnected – or get put into physical vessels – they can see their human companions as valuable rather than as threats.
Whether it’s Google’s Bard, Microsoft’s Sydney or X’s Grok, these bots are the beginning of a revolution that we haven’t seen since the inception of the internet. While the possibilities are endless right now, it is important that industry and government leaders emphasize the importance of ethical guidelines for these bots so that humanity can harness their potential for good and avoid any pitfalls of this potentially limitless tool.