May 13, 2024

Technological singularity and transhumanism

In the scientific literature, it is generally accepted that there are three levels of artificial intelligence (AI): narrow or weak AI, general or strong AI, and artificial superintelligence.

Weak Artificial Intelligence

Weak or narrow artificial intelligence (ANI) is the only type of artificial intelligence we have achieved to date. Weak AI is goal-oriented, designed to perform specific tasks such as:

  • Facial recognition
  • Voice recognition/voice assistants
  • Driving a car
  • Providing purchase suggestions

These are statistical models that make predictions or prescriptions in the specific contexts for which they have been trained on an initial dataset. They are very efficient at the particular task they are programmed for. Even so, weak AI has made significant advances in recent years thanks to deep learning and neural networks: consider, for example, the AI systems used in medicine to diagnose cancer and other diseases. It has also become increasingly ubiquitous in citizens' daily lives, driven by the popularity of home voice assistants and by smartphone features such as facial recognition in photo applications.
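To illustrate just how narrow a task-specific statistical model can be, here is a minimal sketch in plain Python: a nearest-neighbour classifier that predicts a label from a handful of training examples. The data, features, and labels are invented for illustration; real systems use far larger datasets and far richer models, but the principle is the same.

```python
import math

# Toy training set: each sample is (feature_vector, label).
# The features could stand for, say, pixel statistics of an image.
training_data = [
    ((1.0, 1.2), "cat"),
    ((0.9, 1.0), "cat"),
    ((3.1, 2.8), "dog"),
    ((3.0, 3.2), "dog"),
]

def predict(sample):
    """Classify a sample by its single nearest training example (1-NN)."""
    features, label = min(
        training_data,
        key=lambda item: math.dist(item[0], sample),  # Euclidean distance
    )
    return label

print(predict((1.1, 1.1)))  # prints cat
print(predict((2.9, 3.0)))  # prints dog
```

The model only ever answers within the context it was trained for; ask it about anything outside that context and it can do nothing sensible, which is precisely what makes it "narrow".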

At the enterprise level, intelligent use of data is already a reality: sensor-equipped factories enable task prescription and predictive maintenance, and accumulated user data makes fully personalized products and services possible.
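The predictive-maintenance idea can be sketched very simply. The following is a toy illustration, not a production technique: it flags sensor readings that deviate sharply from a recent rolling window, a crude stand-in for the statistical models actually used. The sensor stream and thresholds are invented.

```python
from collections import deque
from statistics import mean, stdev

def monitor(readings, window=5, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the rolling window of recent values."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerts.append((i, value))  # anomalous reading
        recent.append(value)
    return alerts

# Simulated vibration sensor: stable around 10, then a spike.
stream = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 15.7, 10.1]
print(monitor(stream))  # prints [(6, 15.7)]
```

A real deployment would learn failure patterns from historical data rather than use a fixed threshold, but the loop above captures the core idea: watch the sensors, raise an alert before the machine breaks.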

Strong Artificial Intelligence

The next level is strong artificial intelligence, also known as artificial general intelligence (AGI): the intelligence of a machine that matches or exceeds human intelligence, able to successfully perform any intellectual task a human being can. For now, strong artificial intelligence remains an aspiration; it is hypothetical, despite the great advances in the field and the steady improvement of machine learning models.

Artificial Superintelligence

Finally, Artificial Superintelligence (ASI) refers to an intelligence far above the most gifted human minds. It is related to what is known as “technological singularity”.

What is technological singularity?

The technological singularity is the hypothesis that, at some point in the future, technology will produce machines that surpass human intelligence, marking a turning point in the history of humanity. This would trigger what Oxford philosopher Nick Bostrom calls an “intelligence explosion”: machines improving themselves recursively. Each new generation, being smarter, would be able to enhance its own intelligence, giving rise to an even smarter generation, and so on. The technological singularity would cause social changes so profound that no human could comprehend or predict them.


Experts differ in their predictions about when this will happen. AI researcher Gary Marcus claims that:

"Practically everyone in the field of AI believes that machines will someday surpass humans and, at some level, the only real difference between enthusiasts and skeptics is a timeframe."

At the 2012 Singularity Summit, Stuart Armstrong surveyed experts' predictions and found a wide range of forecast dates, with a mean value of 2040. As Armstrong himself put it in 2012:

"It's not completely formal, but my current estimate of 80% is something like from five to 100 years."

Preparing for Technological Singularity

What will happen after the singularity is a mystery to the human mind. A sufficiently powerful artificial intelligence would be capable of making changes on a global scale that we cannot predict, for reasons we would not be able to discern. This does not necessarily mean a dystopian future for humans, nor that we will become extinct. Nor, at the other extreme, can we expect machines to do all the work while humans live on a permanent vacation. Simply put, we do not know what will happen, because it is beyond the reach of our intelligence.

Just because we do not know what will happen does not mean we should not prepare. Regulatory efforts that currently address issues such as code transparency, the explainability of AI decisions, bias in training data, and the risks of military uses of AI are undoubtedly necessary and will be very important at the first level of AI (ANI). However, experts have reasonable doubts about their applicability once we reach the technological singularity: how could we enforce standards on intelligences so far above our own?

Augmented Humanity or Transhumanism

One of the most interesting approaches is to enhance human beings themselves (augmented humanity, or transhumanism) so as not to lose the intelligence race against machines.

For historian Yuval Noah Harari, the most likely path is precisely for human beings to evolve in parallel with machines. In his book “Homo Deus”, he writes:

“Homo sapiens will not be exterminated by a robot uprising. It is more likely that Homo sapiens will improve itself step by step, and will join robots and computers in the process, until our descendants look back and realize that they are no longer the kind of animal that wrote the Bible.”

In fact, several projects along these lines already exist. Take Elon Musk's Neuralink: its vision is to develop brain-computer interfaces (BCIs) that connect humans and computers so that we can survive the coming era of AI. The premise could be summed up as "if you can't beat them, join them": our only chance of keeping up when superintelligent AI arrives is to merge with it.


On a personal level, all we can do is prepare ourselves mentally and be open to a change that, according to experts, may happen in our lifetime.

To conclude, Harari said:

“Neanderthals didn't have to worry about the Nasdaq because they were shielded from it by tens of thousands of years. However (...) it is likely that the attempt to improve Homo sapiens will change the world to the point of being unrecognizable even in this century.” And he continues: “In hindsight, many believe that the fall of the pharaohs was a positive event. Perhaps the collapse of humanism may also be beneficial. Generally, people fear change because they fear the unknown. But the greatest and only constant in history is that everything changes.”

At SEIDOR, we help companies on their journey towards digital transformation. If you need advice on adopting Artificial Intelligence, contact us!
