Summary of SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through AGI

This is an AI generated summary. There may be inaccuracies.

00:00:00 - 00:45:00

Timnit Gebru explores the history and evolution of eugenics and transhumanism, and their influence on the pursuit of artificial general intelligence (AGI). She argues that AGI is being pursued by billionaires in the transhumanist movement who believe it is necessary to fulfill humanity's collective destiny. Gebru identifies several problems with this pursuit, such as perpetuating biases and entrenching societal inequalities. She suggests that we should stop trying to build AGI altogether and instead focus on preventing the harms of AI while promoting diverse perspectives for more beneficial applications of AI.

  • 00:00:00 In this section, the speaker introduces Timnit Gebru and her background in the field of AI. She discusses the impact of Gebru's work, including her research on using deep learning to analyze street view imagery and estimate neighborhood demographics, as well as her project Gender Shades, which revealed the limitations of facial recognition technology. The speaker also highlights Gebru's recent paper on the limitations of large language models and mentions her role as founder of DAIR (the Distributed AI Research Institute), which aims to prevent the harms of AI and promote diverse perspectives for more beneficial applications of AI. Gebru's talk will focus on AGI (Artificial General Intelligence) and why it has become such a central topic in the field, exploring different definitions of AGI and its implications for society.
  • 00:05:00 In this section, Timnit Gebru provides an introduction to eugenics as a progressive movement that was popular among scientists at universities such as Harvard and Stanford in the 20th century. She highlights the fact that eugenics aimed to apply scientific ideals actively and was often supported by progressives and liberals who believed in the idea that the human race could be improved through restricting the reproduction of inferior populations, such as disabled people and people of color. Gebru also notes that eugenics did not end after World War II but persisted, with California's sterilization program continuing until 1979 and the British Eugenics Society only changing their name to the Galton Institute in 1989.
  • 00:10:00 In this section, Timnit Gebru, a computer scientist and researcher, delves into the history of eugenics and how it has evolved over time. The first wave of eugenics, popular in the US and Europe, aimed to improve human stock through positive and negative eugenics, advocating for selective breeding of people with desirable traits and removing those with less desirable ones. The second wave then evolved to focus on improving human stock through personal choices like genetic engineering or transhumanism, allowing individuals to improve themselves and their offspring without population-level policies or coercion. However, Gebru argues that this version of eugenics is still problematic and underlies what she calls the TESCREAL bundle of ideologies, which includes transhumanism, effective altruism, cosmism, rationalism, and longtermism.
  • 00:15:00 In this section, Timnit Gebru discusses transhumanism, which combines the vision of transcendence with the new methodology of second-wave eugenics. This methodology depends on technological advances like artificial intelligence, nanotechnology, and genetic engineering. The goal of transhumanism is to create a new, superior species through radical self-enhancement, which could lead to post-human capabilities such as an indefinitely long health span and augmented cognitive capacities, among others. Singularitarianism is another variant of transhumanism, which emphasizes the coming technological singularity, a point at which the rate of technological progress becomes so fast that it causes a fundamental rupture in human history. Ray Kurzweil predicted this will happen in 2045, while Eliezer Yudkowsky predicted it will happen in two years.
  • 00:20:00 In this section, Timnit Gebru outlines several different movements within transhumanism that have emerged over the years. These include extropianism, singularitarianism, cosmism, and rationalism. Gebru explains that while rationalism is not necessarily transhumanist, it was founded by transhumanists and heavily focuses on improving human reasoning and decision-making. Gebru also mentions the movement of effective altruism, which emerged around the same time as rationalism and applies the principle of rationality to the ethical domain. The focus is on how to do the most good possible with finite resources, with particular attention paid to the very long-term future of humanity. Long-termism is a central component of this movement, which emphasizes ignoring short-term problems and focusing on accomplishing astronomical amounts of good in the future, especially through the creation of posthumans and advanced AI.
  • 00:25:00 In this section, Timnit Gebru discusses the properties of the so-called TESCREAL bundle. These include historical roots and contemporary communities, a common lineage with first-wave eugenics, and an intimate connection with transhumanism. The bundle's ideologies aim to radically modify the human organism in various ways, with eschatological convictions centered on utopia and apocalypse. The utopia involves the transcendence of human limitations and the abolition of suffering, while the apocalypse warns of unprecedented dangers brought about by superintelligence. Both attaining utopia and avoiding the apocalypse are viewed as moral obligations, even as these movements hold highly discriminatory views, such as placing great importance on intelligence.
  • 00:30:00 In this section, Timnit Gebru discusses the history of eugenics and its influence on the development of AGI. Gebru provides details on Nick Bostrom's 2002 paper, in which he argued that dysgenic pressures pose an existential risk to humanity, and discusses how views on intelligence and eugenics have influenced the development of AGI. Gebru notes that there are many billionaires involved in the transhumanist movement who are funding AGI research and believe that AGI is necessary to fulfill humanity's collective destiny. She also provides a brief history of AGI and notes that many researchers did not associate themselves with the larger goal of developing AGI due to the AI Winters that occurred in the past.
  • 00:35:00 In this section, Gebru provides an overview of AGI and its prominent players. She outlines how the concept of AGI rose in prominence, with Ben Goertzel envisioning a post-human future, DeepMind being purchased by Google, and OpenAI being founded in response to make AGI "safe." Gebru then examines the two pathways towards AGI utopia: one based on having an AI that figures out what to do in every situation, and the other based on transhumanism. Finally, Gebru highlights how the race to create ever-larger language models perpetuates bias and presents significant risks.
  • 00:40:00 In this section, Timnit Gebru discusses some of the issues that arise from using internet texts to train AI models, particularly search engines based on these models. She mentions how these models are trained on data that represents societal biases and can perpetuate harmful stereotypes. Additionally, the idea of "unlimited intelligence and energy" leading to a utopia is not the reality. Instead, worker exploitation exists for those who filter toxic texts for companies like OpenAI for only $1 an hour in Kenya. There is also the issue of centralized power, with resources going to one company or country that claims to have one model for everything while ignoring the differences in languages and cultures. The centralization of power goes against the idea of wealth and power for everyone.
  • 00:45:00 In this section, Timnit Gebru discusses the concepts of utopia and apocalypse in relation to artificial general intelligence (AGI). She argues that while it is important to create safety protocols to prevent an AGI apocalypse, focusing solely on this speculative risk allows the people building these systems to evade accountability for the harms their development is causing now. Gebru also highlights the difficulty of defining and testing AGI systems, which makes pursuing them an inherently unsafe practice. As such, she suggests that we should stop trying to build AGI altogether.
