The core assumptions that the concept of the singularity or "superintelligence" rests upon are the following:
- Technology is always improving
- There is a tendency to automate labor over time
- Artificial intelligence will eventually become intelligent enough to do not just one task better than humans, but all tasks
There are a number of problems with this:
- Automation is about reducing necessary labor-time. Framed this way, the argument still holds, but it falls apart when we look out at the world: as of this writing, no firms are actively trying to create a general artificial intelligence
- Intelligence is situational. It comes from one's environment and the problems that arise in that environment
  - As the article linked below notes, you could not simply put a human brain in an octopus's body and expect it to survive in that environment. Much of what makes a human human is hard-coded (but not everything!)
- There is no such thing as "general" intelligence
- This puts far too much faith in software developers