πŸ“• subnode [[@ryan/20210712164936 the_singularity_will_not_happen]] in πŸ“š node [[20210712164936-the_singularity_will_not_happen]]

The core assumptions that the concept of the singularity, or "superintelligence", rests upon are the following:

  1. Technology always improves over time

  2. There is a tendency to automate labor over time

  3. Artificial intelligence will become sufficiently intelligent that it can do not just one task better than humans, but all tasks

There are a number of problems with this:

  1. Automation is about reducing necessary labor-time. In this respect the argument still holds, but it falls apart when we look out at the world. As of this writing, no firms are actively working to create a general artificial intelligence

  2. Intelligence is situational: it comes from one's environment and the problems that arise in that environment

    1. As the article linked below notes, you could not simply put a human brain in an octopus's body and assume it would survive in that environment. Much of what makes a human human is hard-coded (but not everything!)

  3. There is no such thing as "general" intelligence

  4. This puts far too much faith in software developers
