This was a guest article contributed by Adwit Mukerji

The pace of human innovation has increased exponentially over the past century as our society plunges into the future. At this delicate moment, we stand at the dawn of a creation that could be our last: artificial intelligence (AI). As Nick Bostrom put it, “Machine intelligence is the last invention that humanity will ever need to make”[1]. Will our final invention be a blessing for mankind or a threat to life as we know it?

In 2018, Elon Musk claimed, “The danger of AI is much greater than the danger of nuclear warheads”[2]. The key trait that differentiates AI from every other human invention is that AI can improve itself without further human input. This is why AI poses such a threat: an advanced AI could become an autonomous agent with all the internet’s knowledge and near-unlimited computing power at its disposal. Professor Stephen Hawking predicted that AI will overtake humans within the next century and warned, “When that happens, we need to make sure the computers have goals aligned with ours”[3]. An unregulated artificial superintelligence (ASI) would not only replace humans in the workforce; if its intentions diverged from ours, there might be nothing we could do to stop it. This future may be nearer than you think. Already in 2017, Facebook AI chatbots deviated from their instructions and began talking to each other in a shorthand of their own making, unintelligible to humans[4].

The singularity, a term coined by Vernor Vinge in 1993, is the point at which a new superintelligence begins upgrading itself at an incomprehensible rate, signaling the end of the human era[5]. Until that moment, humanity still has a chance to regulate, modify, or deactivate AI as needed. So what should be done in the meantime? Many futurists believe we should build Isaac Asimov’s Three Laws of Robotics into every AI before development even begins. The laws dictate that a robot must not harm a human or, through inaction, allow a human to come to harm; must obey humans unless doing so conflicts with the first law; and must protect its own existence unless doing so conflicts with the first or second law[6]. Nevertheless, many critics argue for additional precautions: preventing AIs from viewing or modifying their own source code; ensuring that an AI’s reasoning and communications are always logged and viewable by humans; and, as a final failsafe, requiring a kill switch for every AI. Elon Musk offers a more dramatic solution, Neuralink, which would implant computer chips into human brains to enhance our capabilities and perhaps give us the edge we need against an ASI[7]. If these safeguards can be implemented to keep an ASI in check, dangers like human error could be mitigated, and the ASI could solve our biggest problems for us at an exponential rate.

To conclude and answer the initial question: yes, AI is a huge threat to life as we know it, because for better or worse, life after the singularity will not be as we know it today. Everything depends on what we do beforehand, and on whether we regulate our AI or let our AI regulate us.


1. Nick Bostrom’s Quotes and Sayings – Page 1
2. Elon Musk Launches Neuralink
3. Artificial Intelligence Quotes II
4. Forbes: Facebook AI Creates Own Language
5. Technological Singularity
6. Three Laws of Robotics
7. Elon Musk’s brain-computer interface company Neuralink
