Summary of Gillian Hadfield | Why the science of AI needs a science of normativity

00:00:00 - 01:00:00

This video discusses why a science of normativity is needed to understand and manage the complex, emergent phenomena of artificial intelligence. The normative systems such a science would study rest on common knowledge, clear rules, stable enforcement, and impersonal reasoning, and their defining feature is the structure and coordination of decentralized enforcement.

  • 00:00:00 The speaker discusses the need for a science of normativity to ensure that artificial intelligence (AI) does not have negative consequences for humanity. She notes that this concern motivates her work at the Schwartz Reisman Institute for Technology and Society, a research institute focused on ensuring that powerful technologies such as AI benefit society.
  • 00:05:00 Gillian Hadfield introduces the alignment problem, the question of how to ensure that artificial intelligence continues to align with human interests, and argues that normativity is a critical way to think about it. Social science approaches to understanding normativity, she suggests, can inform how we align AI with human interests.
  • 00:10:00 Gillian Hadfield discusses the need for social science research on normativity, arguing that values are the equilibria of normative systems, and that value alignment should therefore be understood not as embedding values in machines but as aligning with the equilibria of human normative systems. Denis Walsh, who brings an evolutionary perspective on normativity, is also on the panel.
  • 00:15:00 The video discusses how the concept of normativity is important for understanding human behavior, and how it can be thought of as a shared binary classification of actions that drives adaptation.
  • 00:20:00 This video discusses the need for a science of normativity to understand how social order is generated through social norms. It describes how decentralized collective punishment (punishing an individual for an act the group classifies as wrong) sustains social order in such settings, and how increasing complexity and ambiguity in that classification makes coordination harder, generating demand for institutions that resolve the ambiguity.
  • 00:25:00 Hadfield lays out the attributes of normative systems that are needed to manage complex, emergent phenomena like AI: common knowledge, clear rules, stable enforcement, and impersonal reasoning. The emphasis is on how these features structure and coordinate decentralized enforcement.
  • 00:30:00 Gillian Hadfield discusses the concept of silly rules and how they can contribute to the stability and robustness of a group. She goes on to discuss the example of masks and how wearing one in public can be seen as silly by some, but can have a significant impact on the spread of a virus.
  • 00:35:00 The paper on silly rules and the robustness of groups, co-authored with graduate students at Berkeley, including McKane Andrus, argues that silly rules contribute to the robustness of groups. Its running ethnographic example is a set of rules about arrow making, including requirements about how arrows must be made, the religious significance attached to the practice, and the rule that only men may make arrows.
  • 00:40:00 Gillian Hadfield discusses why a science of normativity is needed to study AI properly, and presents a model: a group of mathematical agents repeatedly engaged in interactions governed by randomly selected rules. The agents face uncertainty about whether the group is still enforcing its rules, and each must decide whether to stay or leave. If a high proportion of the rules are silly rules, the group may collapse faster in response to shocks to belief; if the rules are mostly important ones, the group may be more resilient to shocks and last longer (a toy version of this kind of simulation is sketched after this list).
  • 00:45:00 Gillian Hadfield discusses the importance of normative systems in the context of AI, arguing that the stability of rules is essential for agents to make informed decisions. She provides a theoretical example of how a group of agents can benefit from low-cost, predictive silly rules, which help maintain stability in the group.
  • 00:50:00 The talk turns to how silly rules can be used in multi-agent reinforcement learning to help agents learn norms, using the prevalence of food taboos across cultures as a motivating example of real-world silly rules (a simplified sketch of this kind of experiment appears after this list).
  • 00:55:00 The video presents results on how agents learn when silly rules are added alongside important ones. In the green condition, with no rules, agents quickly and efficiently learn to avoid poisonous berries; in the red condition, with the important rule only, agents learn more slowly and eventually stop avoiding the berry altogether. The comparison is offered as evidence that silly rules can help, rather than hinder, agents learning to comply with and enforce the important norm.
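
The group-robustness argument at 00:40:00 and 00:45:00 is based on a simple agent-based model. The sketch below is a minimal illustration of that kind of simulation, not the model from the talk: the parameters, the belief-update rule, and the stay-or-leave threshold are all invented for this example, and it captures only the information channel described at 00:45:00 (silly rules generate frequent, cheap evidence about whether enforcement is still alive), not the costs that a high proportion of silly rules can impose.

```python
import random

# Toy parameters; these are illustrative values, not numbers from the talk.
N_AGENTS = 100
N_ROUNDS = 200
SHOCK_ROUND = 100       # round at which beliefs about enforcement are shocked
SHOCK_SIZE = 0.3        # how much the shock lowers each agent's belief
STAY_THRESHOLD = 0.5    # agents leave once belief in enforcement drops below this
SIGNAL_STRENGTH = 0.05  # how much one observed enforcement event restores belief

def run_group(frac_silly, enforcement_rate=0.9, seed=0):
    """Simulate one group with a given fraction of silly rules.

    Each round a randomly selected rule governs the interaction. Silly rules
    are cheap to follow and easy to observe, so they generate frequent,
    low-cost evidence about whether the group is still enforcing its rules;
    important rules produce observable enforcement events less often. Agents
    track a belief that enforcement is alive and leave when it falls too low.
    """
    rng = random.Random(seed)
    beliefs = [0.9] * N_AGENTS     # initial confidence that rules are enforced
    members = list(range(N_AGENTS))
    sizes = []

    for t in range(N_ROUNDS):
        if t == SHOCK_ROUND:       # exogenous shock to beliefs about enforcement
            beliefs = [max(0.0, b - SHOCK_SIZE) for b in beliefs]

        rule_is_silly = rng.random() < frac_silly
        # Silly-rule interactions are observed often; important-rule
        # enforcement events are rarer.
        observable = rng.random() < (0.8 if rule_is_silly else 0.3)
        enforced = observable and rng.random() < enforcement_rate

        for i in members:
            if enforced:           # seeing enforcement restores confidence
                beliefs[i] = min(1.0, beliefs[i] + SIGNAL_STRENGTH)
            elif observable:       # a visible violation went unpunished
                beliefs[i] = max(0.0, beliefs[i] - 2 * SIGNAL_STRENGTH)

        # Agents who no longer believe the rules are enforced leave the group.
        members = [i for i in members if beliefs[i] >= STAY_THRESHOLD]
        sizes.append(len(members))

    return sizes

if __name__ == "__main__":
    for frac in (0.0, 0.3, 0.6):
        print(f"fraction of silly rules {frac:.1f}: "
              f"{run_group(frac)[-1]} agents remain after the shock")
```

In this toy version, a larger frac_silly gives agents more evidence that enforcement is alive and so makes the group more likely to survive the shock; reproducing the collapse dynamics described at 00:40:00 would also require modeling the cost of complying with many silly rules.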

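The berry experiment described at 00:50:00 and 00:55:00 can be caricatured in a few dozen lines. The sketch below is a heavily simplified, hypothetical stand-in for that kind of multi-agent reinforcement learning study, not the environment Hadfield describes: it replaces the full RL setup with per-agent bandit-style learning, and every constant (SICK_DELAY, PUNISH_PENALTY, and so on) is invented for illustration. Its only purpose is to show one mechanism from this line of work: adding a silly taboo creates more enforcement events, which can speed up learning around the important taboo.

```python
import random
from collections import deque

BERRIES = 4                  # berry 0 is poisonous; every berry gives +1 when eaten
POISON, SICK_DELAY, SICK_PENALTY = 0, 5, -4.0
PUNISH_PENALTY = -2.0        # immediate social punishment for eating a taboo berry
N_AGENTS, N_STEPS = 20, 3000
ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate

def run(condition, seed=0):
    """Toy norm-learning run; condition is 'none', 'important', or 'silly'."""
    rng = random.Random(seed)
    taboo = {'none': set(), 'important': {POISON}, 'silly': {POISON, 1}}[condition]

    q = [[0.0] * BERRIES for _ in range(N_AGENTS)]  # per-agent value of each berry
    pending = deque()                               # delayed sickness events
    enforcement_practice = 0                        # how often the group has punished
    poison_rate = []

    for t in range(N_STEPS):
        # Enforcement becomes more reliable the more practice the group has had.
        p_punish = min(0.9, 0.05 + 0.001 * enforcement_practice)

        choices = []
        for a in range(N_AGENTS):
            if rng.random() < EPSILON:
                b = rng.randrange(BERRIES)
            else:
                b = max(range(BERRIES), key=lambda i: q[a][i])
            choices.append(b)

            reward = 1.0
            if b in taboo and rng.random() < p_punish:
                reward += PUNISH_PENALTY             # immediate, correctly attributed
                enforcement_practice += 1
            if b == POISON:
                pending.append((t + SICK_DELAY, a))  # sickness only shows up later
            q[a][b] += ALPHA * (reward - q[a][b])

        # Delayed sickness is misattributed to whatever the agent just chose,
        # which is what makes the poison hard to learn about without a norm.
        while pending and pending[0][0] <= t:
            _, a = pending.popleft()
            q[a][choices[a]] += ALPHA * (SICK_PENALTY - q[a][choices[a]])

        poison_rate.append(sum(c == POISON for c in choices) / N_AGENTS)

    return sum(poison_rate[-500:]) / 500             # late-stage poison consumption

if __name__ == "__main__":
    for cond in ('none', 'important', 'silly'):
        print(f"{cond:>9}: average poison-berry rate {run(cond):.3f}")
```

Because the silly condition has two taboo berries, punishment events accumulate faster, p_punish rises earlier, and agents receive correctly attributed negative feedback about the poisonous berry sooner; whether this reproduces the particular green and red comparisons reported at 00:55:00 depends entirely on these invented parameters.
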
01:00:00 - 01:25:00

Gillian Hadfield discusses the importance of a science of normativity in AI: we need to understand the complicated rule systems that define human groups in order to build machines that can integrate into those systems. She illustrates the point with the Awa people, who live in a society governed by rules that are not always enforced or fully understood.

  • 01:00:00 Gillian Hadfield discusses the importance of normativity in human intelligence and why the science of AI needs a science of normativity in order to achieve our goals for AI. She argues that values are not something we set a priori, but an emergent property of society.
  • 01:05:00 Gillian Hadfield discusses the need to understand the characteristics of normative systems in order to create machines that are able to integrate into these systems. She argues that humans have evolved to solve this problem, and that we need values and principles to create a robust normative system.
  • 01:10:00 Gillian Hadfield discusses the need for a science of normativity for building systems that can work with concepts like reasonable and honorable. She argues that robust, open-ended normative systems, which are always subject to interpretation, depend on rich concepts like reasonable, honorable, and right, and she discusses the role of legal institutions in coordinating the community and enforcing norms.
  • 01:15:00 Gillian Hadfield discusses the need for a science of normativity in AI, as the rules that define groups can be complicated and difficult to change. She likens the situation to that of the Awa people, who live in a society with rules that are not always enforced or understood.
  • 01:20:00 Gillian Hadfield discusses the idea that human language is itself a system of silly rules that serves as a means of intelligibility. She argues that an AI that learns human language may gain access to this normative information, but that this alone is not sufficient for it to be ethical. Silly rules, she suggests, are informative: they carry information about the normative state of the system.
  • 01:25:00 This video introduces the upcoming speaker, Marzyeh Ghassemi, who will be discussing the importance of having a science of normativity in order to properly address the challenges posed by artificial intelligence.
