#+title: the singularity will not happen

The concept of the [[file:20210309214442-singularity.org][singularity]] or "superintelligence" rests on the following core assumptions:

1. Technology is always improving
2. There is a tendency to automate labor over time
3. [[file:20210306160859-artificial_intelligence.org][Artificial intelligence]] will become sufficiently intelligent that it can do not only one task better than humans, but /all/ tasks

There are a number of problems with this:

1. [[file:20210211094543-automation.org][Automation]] is about reducing [[file:20200716142405-necessary_labor_time.org][necessary labor-time]]. Framed this way, the argument still holds, but it falls apart when we look out at the world: as of writing this, no firms are actively trying to create a general artificial intelligence
2. Intelligence is situational. Intelligence comes from one's environment and the problems that arise in that environment
   a. As the article linked below argues, you could not simply put a human brain in an octopus's body and assume it would be able to survive in that environment. Much of what makes a human human is hard-coded (but not everything!)
3. There is no such thing as "general" intelligence
4. This puts far too much faith in software developers

* Links
- [[https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec][The impossibility of intelligence explosion | by François Chollet | Medium]]