TESCREAL

No, that’s not my head hitting the keyboard and forming a random sequence of letters. TESCREAL is a new acronym for a set of futuristic philosophies that will save the world, sell us into slavery, or win a game of Scrabble.

TESCREAL stands for seven ideologies: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. They all focus on using technology to improve people’s lives. And they are deeply influential among people working on artificial general intelligence (AGI).

An AGI is a system that could perform any intellectual task a human can. No such system exists yet, but its potential benefits and risks are hotly debated – and these ideologies form the basis for many of the arguments. I expect the terms will become more mainstream over time, so let’s familiarise ourselves by looking at each in turn.

TRANSHUMANISM advocates for the technological and biological enhancement of humans. Transhumanists believe that technologies like genetic engineering, cybernetics and artificial intelligence can help us live longer, healthier, and more fulfilling lives. Notable supporters include Ray Kurzweil, Director of Engineering at Google, and Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford.

EXTROPIANISM is the belief that advances in science and technology will, someday, let people live indefinitely. Extropians believe this will lead to a world with greater happiness, prosperity, and freedom. Science fiction and fantasy author Diane Duane first used the term extropy in this sense in 1983. Advocates Tom Bell and Max More launched the journal Extropy in 1988 and co-founded the Extropy Institute in 1992. They closed the institute in 2006, stating its mission was ‘essentially completed’. Still, the philosophy continues.

SINGULARITARIANISM holds that artificial general intelligence will eventually surpass human intelligence. This could lead to a technological singularity: a theoretical point in time when technological progress accelerates beyond human comprehension. Singularitarians expect this to happen in the medium term, and believe we need to act now to ensure the singularity benefits humans. Everyone I’ve mentioned so far supports this growing movement.

COSMISM is a manifesto for spreading outwards into the cosmos and curing death. Russian philosopher Nikolai Fyodorov developed the idea in the late 19th century. He believed humans had an ethical obligation to use technology to resurrect the dead and unite humanity and all of nature. He also proposed we colonise space and create a new utopian society. Supporters of Cosmism include the usual suspects, as well as Hiroshi Ishiguro, Director of the Intelligent Robotics Laboratory at Osaka University, which makes some amazingly human-like robots.

RATIONALISM is an established philosophy which holds that reason should be the guiding principle for humanity. Rationalist TESCREALs believe knowledge can be acquired through reason alone, without relying on our senses or other people. Some believe that’s the only way to acquire knowledge, while others think it’s just one way. Transhumanist, Futurist, and two-time US presidential candidate Zoltan Istvan regularly mentions Rationalism in interviews.

EFFECTIVE ALTRUISM is a social movement that aims to do as much good as possible. In AI, that means using technology to make the world a better place while minimising any potential negative consequences. Effective altruism isn’t about making perfect decisions. Rather, it’s about making the best decisions with the information we have. Stuart Russell, a computer science professor at UC Berkeley and founder of the Center for Human-Compatible AI, is a supporter. ChatGPT maker OpenAI and Sam Bankman-Fried of FTX should also get a mention.

LONGTERMISM is a radical view that emphasises the long-term future of humanity and the world. It argues we should prioritise actions that have the greatest potential for creating a better future in the long run – even if it’s at the expense of people living today. Elon Musk supported Longtermism when he endorsed a book titled What We Owe The Future by Oxford University philosopher William MacAskill.

The seven philosophies are closer together than they appear at first glance. For example, William MacAskill’s book What We Owe The Future argues for both Effective Altruism and Longtermism. After all, no one specified that ‘doing as much good as possible’ applies only to those living today. MacAskill believes that ‘positively influencing the long-term future is a key moral priority of our time’. He also co-founded the Centre for Effective Altruism in 2011.

Leading Cosmist philosopher Nikolai Fyodorov wanted to cure death – a dream shared by Extropians. Extropians and Transhumanists differ mainly on whether death is inevitable. And Transhumanist Zoltan Istvan is also a Rationalist. The beliefs and communities overlap. It is for this reason that Timnit Gebru and Émile P. Torres coined the term TESCREAL, in an academic paper that has not yet been submitted for publication.

The debate about AGI is heated because the technology that could save us might also destroy us. Implicit in this notion is that humanity needs saving. Some of the philosophies hold that view, but all of them believe in building a better future. The main difference between the schools of thought is how much we should sacrifice to achieve that goal.

For more new developments in AI, click here.

