Summary of Our Extended Bodies #3

This is an AI-generated summary. There may be inaccuracies.

00:00:00 - 01:00:00

This video discusses the "Our Extended Bodies" project, which aims to explore what a "professional level" of photography looks like today. The project includes a preview of some of the pictures found on a website called "Landscape Photography Today." These pictures are often based on artificiality and construction, and raise questions about the definition of "nature." The video also discusses the importance of deep learning in the field of photography, and how it will help to reveal the "before" and "formalized" concepts of the world.

  • 00:00:00 Pauline is a PhD candidate in media and communication studies at the University of Sarajevo. She is currently working on her dissertation, which focuses on Google's digital landscapes and how they impact perceptions of self and coexistence. She has also been a guest lecturer at Paris College of Art and been involved in transdisciplinary new media projects.
  • 00:05:00 Pauline presents her research on Google's "Creatism" project, which uses artificial intelligence (AI) to create stunning landscape photographs from Google Street View imagery. She discusses the complex theoretical concepts at the heart of the project, and proposes a way of meeting halfway between the language of the machine's deep learning and the language of the human sciences.
  • 00:10:00 The paper explores the concept of aesthetics in photography, and suggests that there is a "universal metric" for the highest aesthetic standard that humans can define. It argues that deep learning systems can create professional-level photographs without the need for additional labels or pre-existing example images, by combining discriminative and generative processes.
  • 00:15:00 The video discusses a project aimed at training a machine to produce high-quality photos, based on the concept of an "aesthetic metric." Because professional-quality photographs are hard to obtain as training data, the project generates "before" and "after" examples by changing parameters in existing photographs (a minimal sketch of this idea follows this list). One of the project's specificities is its use of a "dramatic mask" to enhance dramatic lighting in photographs.
  • 00:20:00 This video discusses the idea of "professionalism" in photography and how it can be defined in terms of experience and education, relates the concept of the "aesthetic" to professionalism, and closes with the controversies around professionalism in photography and how they are changing the landscape of the industry.
  • 00:25:00 The speaker introduces the "Our Extended Bodies" project itself, asking what a "professional level" of photography looks like today, and previews some of the pictures found on a website called "Landscape Photography Today." These pictures are often based on artificiality and construction, and raise questions about the definition of "nature." The segment also returns to the importance of deep learning in photography and how it will help to reveal the "before" and "formalized" concepts of the world.
  • 00:30:00 Paul We Neck, a software engineer, presents on the topic of "Serving Humans." He points out that the complexity of our world has led to an ever-growing need for services, which in turn has led to an ever-growing demand for human expertise. He argues that, because we are living in a time of ever-expanding knowledge, it is important for us to think about who is being served by this increasing complexity.
  • 00:35:00 The video discusses how humans are limited in the way that they can communicate and relate to other people due to the size of their brains and bodies. It goes on to say that in order to cope with the increasing complexity of the world, humans need to employ psychological tricks to help them remember things and stay connected to others. The concept of superstimuli is introduced, which refers to stimuli that are designed to be more appealing than the average stimulus. These stimuli can be things like delicious foods or addictive substances. The video ends with a discussion of how human interests can be complex and sometimes in opposition to each other, and how this can lead to problems such as overeating or procrastination.
  • 00:40:00 In this video, Eliezer Yudkowsky, the writer behind the LessWrong rationality project, is cited on how video games and other technological products can become addictive. He points to variable rewards as a key reason why users keep coming back to these products, a technique pioneered by the gambling machines of the last century and since applied to many technological products.
  • 00:45:00 The video discusses two machines, one with a positive expected return and one with a negative expected return. Sir Tim Berners-Lee, the inventor of the World Wide Web, says that its original goals, to connect humanity and transcend limits, have not been met. He suggests that the main problems are the intentional and malicious behavior of bad actors and the increasing centralization of the web, and that we need to be critical of, and worried about, the applications people build on top of it.
  • 00:50:00 The video discusses the systemic failure of current online systems, which stems from the economic and sociological incentives that shape interactions, and argues that these incentives need to be reformed if the system's problems are to be solved. One example discussed is using economic incentives to change the way platforms treat their users.
  • 00:55:00 In this third video in the "Our Extended Bodies" series, Google Maps and Netflix are used as examples of services proposing different ways of using one's extended body: Netflix suggests the late hours as a good time for autoplaying, while Google Maps suggests taking the conversation offline. Both can be understood through Spinoza's concepts of conatus and laetitia. Conatus is the tendency to maintain and affirm one's existence, while laetitia is the tendency to maintain and maximize one's ability to be affected and to find sources of happiness. These concepts help us see why it is important to critically assess the alignment of a service before jumping into using it.
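
The "before/after" idea summarized at 00:15:00 can be made concrete with a short sketch. The following Python snippet is a hypothetical illustration rather than the pipeline from the paper: it assumes Pillow is available and treats weakening brightness, contrast, and saturation as a stand-in for the project's parameter changes.

```python
# Hypothetical sketch of generating ("before", "after") training pairs by
# changing image parameters, as described in the 00:15:00 segment.
# The factor range and choice of enhancers are illustrative assumptions.
import random
from PIL import Image, ImageEnhance

def make_before_after_pair(path):
    """Return a (before, after) pair built from a single source photograph."""
    after = Image.open(path).convert("RGB")   # the original plays the "after" role
    before = after.copy()

    # Weaken brightness, contrast and saturation to simulate a flat,
    # unedited "before" capture.
    for enhancer_cls in (ImageEnhance.Brightness,
                         ImageEnhance.Contrast,
                         ImageEnhance.Color):
        factor = random.uniform(0.5, 0.9)     # < 1.0 moves away from the original
        before = enhancer_cls(before).enhance(factor)

    return before, after
```

A model trained on many such pairs can learn to map "before" images back to their "after" versions, one way of obtaining supervision without collecting extra human labels; the project's "dramatic mask" for lighting would be a further step on top of this.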

01:00:00 - 02:00:00

This part of the video discusses the concept of "Our Extended Bodies" and the idea that humans need to adapt to their changing environment. The speaker proposes cognitive augmentation to help us overcome obstacles such as deliberate malicious intent and economic incentives to deceive humans.

  • 01:00:00 The speaker discusses the need for humans to adapt to their changing environment, including the need to address the first and second sources of dysfunction: deliberate malicious intent and economic incentives to deceive humans. They propose the use of cognitive augmentation to help us overcome these obstacles.
  • 01:05:00 The speaker discusses the difference between artificial intelligence (AI) and machine learning, and why supervised learning is currently the most active and successful paradigm in AI. He then turns to image recognition as a task that supervised learning handles well.
  • 01:10:00 In this video, the presenter walks through classification, a powerful paradigm for machine learning. Step one is to create a data set of examples with both inputs and outputs. Step two is to decide where to look for the mapping function f, that is, to choose a family of candidate functions. Step three is to define a notion of distance and construct a loss function that measures how well a candidate function performs on the data set. Step four is to search for the best approximation to f and evaluate whether it performs well enough to be used; a minimal sketch of these steps appears after this list.
  • 01:15:00 Supervised learning is a technique where a machine is taught how to do something by using examples of that thing. This can be used to perform tasks such as recognizing different types of animals, or doing complex calculations.
  • 01:20:00 This video discusses the differences between human and machine intelligence, and how machines have been getting better at specific tasks for a long time. The presenter says that this is not a paradigm shift, and that machines have always been superhuman in some ways.
  • 01:25:00 The video discusses the importance of humans in machine learning, highlighting the shortcomings of machine learning models that lack human understanding. It goes on to explain that generalization, transfer, and understanding remain important open problems for machine learning, even though humans do them without thinking about it.
  • 01:30:00 In this video, the presenter discusses the limitations of artificial intelligence (AI), and how humans will still be needed to guide and interpret the results of machine learning models. He goes on to say that the purpose of AI is to solve problems that humans cannot, and that 2020 has been a bad year so far. He then suggests that we need to work together to create a more advanced AI and that humanity's intelligence comes with some limitations.
  • 01:35:00 The speaker discusses the intersection of technological advances and the forces that will determine how widespread these advances will be. They note that while some people may be eager to use these technologies, there are still strong economic incentives that will keep most people from benefiting.
  • 01:40:00 In her presentation, Pauline Maison discusses the ethics of machine learning and its impact on society. She notes that while the problem of bias in machine learning systems has only recently become widely known, it has been present for some time. She explains that she teaches about digital mapping and optical devices, among other topics, to help students understand the behind-the-scenes concepts driving these technologies.
  • 01:45:00 The speaker discusses the potential ethical complications arising from the use of technology to create convincing fake texts and videos. They point out that this problem has been around for a long time, and that new technological solutions are necessary in order to combat it.
  • 01:50:00 In this video, Pauline Provencal discusses the idea of universal aesthetics, and how it's difficult to build up a metric for it. She also suggests that we should focus on the subjectivities of people when dealing with technologies.
  • 01:55:00 The video discusses the paper by Tour and his team of researchers and its metric of aesthetics. The researchers attempted to measure beauty by asking humans, and found that about 60% of respondents agreed that the images were reasonably beautiful. However, the paper's real purpose was to define an algorithm that can trick humans, not to judge beauty. The concept is still relevant, though, because it shows how humans can be tricked by an intelligent system.
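
The four-step classification recipe summarized at 01:10:00 can be illustrated with a minimal, self-contained sketch. The toy dataset, the choice of logistic regression as the function family, and the metrics below are assumptions made for illustration, not the examples used in the talk.

```python
# Minimal sketch of the four-step classification recipe (dataset, function
# family, loss, search-and-evaluate), using toy data and scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import train_test_split

# Step 1: a data set of examples with both inputs X and outputs y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 2: decide where to look for the mapping function f
# (here: the family of logistic-regression classifiers).
model = LogisticRegression(max_iter=1000)

# Step 3: a loss function measures how far a candidate f is from the data;
# fitting searches the chosen family for a candidate that minimizes it.
model.fit(X_train, y_train)
train_loss = log_loss(y_train, model.predict_proba(X_train))

# Step 4: evaluate the best approximation to f on data it has not seen,
# to decide whether it is good enough to use.
test_accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"train loss={train_loss:.3f}, test accuracy={test_accuracy:.3f}")
```

The same skeleton carries over to image recognition: only the data set, the function family (for example, a convolutional network), and the loss change.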

02:00:00 - 03:00:00

In the video "Our Extended Bodies #3", the speaker discusses the motivation behind artificial intelligence, and how it can be used for good or evil purposes. He also introduces the idea that machines can be used to generate creativity, and that the humanities can help contribute to the understanding of human intelligence.

  • 02:00:00 The video discusses the motivations behind artificial intelligence research, and how the results of that research can be interpreted. It also discusses the importance of words in communication, and how the use of certain words can convey a certain idea about the video's topic.
  • 02:05:00 In the video, speaker Sayed discusses the idea that we can't solve everything through technology and that technology should be used to help solve problems, not to replace human interaction. He also discusses the idea that there are technologies that help us grow and others that make us shrink.
  • 02:10:00 The video discusses how human beings lose agency through technological innovations such as smartphones, computers, and the printing press. Deborah discusses how a humanist would maximize the diversity of human destinies in order to allow each individual to choose what they want their lives to be. Tristan Harris discusses how designers are working on ethical technologies to help people maximize their own desires. An engineer in the audience asks how a foolproof dimension could be built into technology so that it is not used in a nefarious way.
  • 02:15:00 In this video, Shabby discusses the consequences of societal level decisions made around the use of artificial intelligence (AI). He points out that, while individual researchers may have ethical values, the institution as a whole may not. He suggests that political decisions need to be made around the technology, and that more than just observing the situation is necessary - it is time to be intentional about shaping the incentives so that AI is used for good rather than evil. Jeremy introduces himself and explains that he has been involved with AI for two years, and that he has seen first-hand the danger that it could be misused.
  • 02:20:00 The speaker tells the audience that they will be taking a short break, and then will be able to continue the conference with a question-and-answer session. The speaker then goes down the stairs to get a glass of wine. If anyone would like to make the conference a bit more of a party, they are free to do so.
  • 02:25:00 The video discusses a fight that occurred among the participants, and one individual suggests that they can wait for a bit before getting involved.
  • 02:30:00 The speaker welcomes two senior lecturers from a top French university and discusses their research in areas such as computer science and artificial intelligence.
  • 02:35:00 One of the aims of the roundtable is to ask what articulations, if any, might be found between the critical work being done in the humanities and social sciences and the advances in technology and STEM disciplines. The audience is also reminded that science is not just about acquiring knowledge about objects but also about studying the self, which raises some concerns because artificial intelligence is increasingly capable of understanding and translating human language.
  • 02:40:00 In the video, Albino Luciano discusses the motivation behind artificial intelligence, which is to reproduce or understand human intelligence, and mentions the Turing test as a way to measure intelligence. He argues that we still lack a language for talking about intelligence itself, which makes this a deep experiment. He concludes that artificial intelligence is still in its infancy and that science is still trying to define it.
  • 02:45:00 The video introduces the idea that creativity can be generated by machines, and that the humanities can play a role in exploring this field. It raises the question of what the humanities can do to help contribute to the understanding of human intelligence.
  • 02:50:00 The speaker offers a perspective on the role of machines in the world, discussing the potential for machines to do everything that humans can do and more. He also discusses the potential for machines to be used for evil purposes, and suggests that we should regulate machine learning.
  • 02:55:00 The video discusses the idea that there are things machines cannot do, and some ways in which humans and machines can collaborate to make sure that humans are safe. It also touches on the idea of risk, and on how humans should make decisions about technology without being influenced by economic interests. The speakers feel that they are broadly on the same page when it comes to understanding technology, and that the main challenge is getting everyone to make the same choices. They are optimistic about the future because machines can replicate intelligence and have a huge power of computation, but they also expect humans to face similar challenges when making choices about technology in the future.

03:00:00 - 03:40:00

The video discusses the dangers of AI and how it separates scientists from their responsibility to society. The speaker suggests that we work together to create a system in which scientists are responsible for the use of their technology.

  • 03:00:00 The video discusses how the use of artificial intelligence (AI) endangers the planet because of the harm it causes to the environment, and endangers society because it separates scientists from their responsibility to society. The speaker suggests that we work together to create a system in which scientists are responsible for the use of their technology.
  • 03:05:00 The video discusses how different disciplines within the sciences are working together to form a more holistic view of knowledge. It also discusses the importance of plasticity and creativity in the brain, and the need for scientists from different disciplines to communicate with each other.
  • 03:10:00 The speaker discusses how being open to other disciplines can help keep the humanities relevant in the age of technology. He mentions how sharing ideas across different fields can help to create a more symmetrical relationship between the humanities and other sciences.
  • 03:15:00 The video discusses how engineers and scientists can work together to address larger questions in society. The hosts ask why humanities people continue to think about these larger questions and why they are not always successful in doing so, and put the question to two engineers working in the private and public sectors; one engineer responds that it is because humanities people are not interested in making money.
  • 03:20:00 The video discusses the difficulties faced by researchers in artificial intelligence when attempting to gain recognition from the community of computer scientists. The main barrier is that many of these researchers are not motivated by the money that is available, but by their passion for the field. Another barrier is that open science is essential for exchanging ideas and improving research. However, there are also limitations to this approach, as some people fear that tinkering with nature may have unintended consequences.
  • 03:25:00 The speaker talks about the need for humanities to help with the understanding of the world, and the different incentives that are at play in the public and private space. They suggest that computer science should be looked at in the same way, and that the general public may not believe experts when it comes to A.I. concerns.
  • 03:30:00 The speaker discusses the benefits of reading science fiction and the dangers of relying on details to make a story seem more realistic. The speaker also points out that science fiction is not a prediction, but a way to ask questions and explore possibilities.
  • 03:35:00 This video discusses how science fiction raises questions about the future, and how humanities can help us better understand how people will react to new technologies.
  • 03:40:00 The speaker thanks everyone for attending the afternoon workshop, and expresses her enjoyment of the discussion. She formally closes the event.
