This video covers the Stanford Graph Learning Workshop 2022, which showcased advances in graph-based machine learning. Researchers announced new tools and partnerships that power these technologies, including pyg-lib, a new low-level engine for accelerating graph neural network routines. The video also covers the Open Graph Benchmark (OGB), a collection of data sets for evaluating graph-based machine learning models.
00:00:00 In this workshop, researchers are showing progress in graph machine learning, as well as industrial applications of AI. They are also announcing new tools and partnerships that power these technologies.
00:05:00 Graph neural networks (GNNs) are a class of deep learning models that learn from graph-structured data and generalize patterns across large data sets. They are seen as a more general framework than other deep learning architectures because they adapt to data of varying shape. GNNs are used in a variety of fields, such as financial networks and customer 360 journey analysis.
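The neighbor-aggregation idea behind GNNs can be sketched in a few lines of plain Python. This is an illustrative toy, not PyG's implementation: one round of message passing in which each node averages its neighbors' features and blends them with its own (a simplified GraphSAGE-style update).

```python
# Minimal sketch of one round of GNN message passing on a toy graph,
# using plain Python lists instead of tensors. Each node's new feature
# is a blend of its own feature and the mean of its neighbors' features.

def message_passing_step(features, edges):
    """features: {node: [f0, f1, ...]}, edges: list of (src, dst) pairs."""
    # Collect incoming messages per destination node.
    inbox = {n: [] for n in features}
    for src, dst in edges:
        inbox[dst].append(features[src])
    updated = {}
    for node, feat in features.items():
        msgs = inbox[node]
        if msgs:
            mean = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        else:
            mean = [0.0] * len(feat)
        # Combine self features with aggregated neighbor features.
        updated[node] = [0.5 * a + 0.5 * b for a, b in zip(feat, mean)]
    return updated

features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
edges = [(0, 1), (2, 1), (1, 0)]
out = message_passing_step(features, edges)
```

Stacking several such steps lets information propagate over multiple hops, which is what gives GNNs their flexibility across data shapes.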
00:10:00 The Stanford Graph Learning Workshop 2022 released PyG 2.1 together with pyg-lib, a low-level GNN engine for scaling and acceleration. Additionally, GNN design has been improved to support principled aggregations. A set of graph machine learning case studies and use cases has been released freely online, and training materials for the Stanford course CS224W: Machine Learning with Graphs are also available.
00:15:00 In this workshop, attendees learn about various advancements in graph-based machine learning, including accelerating GNN execution on GPUs and improving performance on Intel processors. Additionally, the video discusses knowledge graphs, an increasingly popular data structure for representing human knowledge, and the Open Graph Benchmark, a suite of data sets for evaluating graph-based machine learning models.
00:20:00 In his presentation, Matthias Fey discusses recent developments in the PyG ecosystem, a framework popular for deep learning on graphs, as well as the scalability and performance of graph machine learning systems.
00:25:00 Graph neural networks provide a unified computational framework that can generalize different architectures into a new paradigm of how we think about neural networks. PyG (PyTorch Geometric) provides state-of-the-art GNN architectures and training procedures that are easy to use.
00:30:00 The presenter introduces PyG, a tool for efficiently training graph neural networks. Over the past year, the team has improved examples and tutorials, added support for more sampling techniques, and released the Stanford GraphML tutorials. They then discuss how PyG works, focusing on its modularity and flexibility, before covering recent progress in training graph neural networks and what is in store for the end of the year.
00:35:00 The first announcement is a new GNN engine called pyg-lib, a collaboration between Kumo AI, NVIDIA, and Intel. The second is a set of new optimizations to PyG that improve the software's performance. The third is that pyg-lib will accelerate all of PyG's graph-specific routines.
00:40:00 This workshop discusses how to transform a homogeneous graph into a heterogeneous one, how to apply GNN layers to achieve different goals, and how to improve graph samplers using pyg-lib.
00:45:00 In this workshop, attendees learn about various aggregation techniques that can be used in PyG and how to implement them. Additionally, sparse matrix multiplication support has been added to PyG, allowing for more efficient GNN implementations.
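The aggregation step these techniques build on can be illustrated with a small, framework-free sketch: a permutation-invariant function reduces a variable-size set of neighbor messages to a single vector. Sum, mean, and max are the standard choices; the helper below is illustrative, not PyG's API.

```python
# Sketch of the aggregation step in a GNN layer: reduce a variable-size
# list of neighbor messages (equal-length vectors) into one vector with
# a permutation-invariant function.

def aggregate(messages, how="sum"):
    """messages: list of equal-length vectors; returns one vector."""
    if not messages:
        return []
    cols = list(zip(*messages))  # transpose: one tuple per feature dim
    if how == "sum":
        return [sum(c) for c in cols]
    if how == "mean":
        return [sum(c) / len(c) for c in cols]
    if how == "max":
        return [max(c) for c in cols]
    raise ValueError(f"unknown aggregation: {how}")

msgs = [[1.0, 4.0], [3.0, 2.0]]
```

Permutation invariance matters because a node's neighbors have no inherent order: reordering `msgs` must not change the result.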
00:50:00 In his talk, Darrell Stein introduced several new features in the upcoming PyG 2.2 release, including improved scalability and pluggable graph backend support, and announced plans to improve the explainability of graph model predictions across different data sets and tasks.
00:55:00 The video discusses the state of the art in graph learning and representation, and how PyG is being used in a variety of ways in production today. The presenter goes on to talk about how the community can contribute to the project's development, and how easy the framework is to use.
01:00:00 The video provides a brief overview of the many ways the Stanford Graph Learning Workshop can help you build better graph models and networks, including support for new data types and structures. The first community workshop is scheduled for October 10th, and additional workshops will be announced on the PyG Slack channel.
01:05:00 The talk covers the feature store and graph store that Matthias mentioned earlier, and how to scale them to very large data sets and massive graphs. The speakers from Kumo discuss how they think about scaling graph neural networks to massive large-scale data warehouses.
01:10:00 The Stanford Graph Learning Workshop 2022 discusses how to scale graph-based learning using PyG, with the aim of overcoming the in-memory constraints of traditional graph-processing frameworks. PyG provides an abstract key-value store interface that lets users store graph features and data in an optimized format, while the data loader manages feature fetching and subgraph generation.
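The key-value feature store idea can be sketched with a hypothetical in-memory implementation. The class and method names below are illustrative assumptions, not PyG's actual interface: the point is that features are read and written behind get/put calls keyed by group, attribute, and index, so the backing storage can be swapped for a database or remote store without touching model code.

```python
# Hypothetical minimal key-value feature store. Features live behind a
# narrow get/put interface keyed by (group, attribute, index), so the
# in-memory dict used here could be replaced by a database or a remote
# store without changing calling code.

class InMemoryFeatureStore:
    def __init__(self):
        self._data = {}

    def put_tensor(self, group, attr, index, value):
        """Store the feature `value` for one entity."""
        self._data[(group, attr, index)] = value

    def get_tensor(self, group, attr, index):
        """Fetch the feature for one entity; raises KeyError if absent."""
        return self._data[(group, attr, index)]

store = InMemoryFeatureStore()
store.put_tensor("paper", "x", 0, [0.1, 0.2])
```

The data loader would sit on top of such a store, batching `get_tensor` calls for the nodes selected by the sampler.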
01:15:00 The Stanford Graph Learning Workshop 2022 discusses the key abstractions used in PyG: the graph sampler and the feature store. These abstractions allow for efficient graph sampling and feature storage, even for large graphs, and the data loader lets users interface with them without knowing the underlying details.
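Fixed-fanout neighbor sampling, the core of the sampler abstraction, can be sketched as follows. This is an illustrative pure-Python version (the function name and signature are assumptions, not the real API): starting from seed nodes, sample at most `fanout` neighbors per node per hop, which bounds the size of the subgraph fed to the model.

```python
import random

# Sketch of fixed-fanout neighbor sampling: starting from seed nodes,
# sample at most `fanout` neighbors per node per hop, as a mini-batch
# subgraph generator would. An explicit RNG keeps runs reproducible.

def sample_neighbors(adj, seeds, fanout, num_hops, rng):
    """adj: {node: [neighbors]}; returns the set of sampled nodes."""
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(num_hops):
        next_frontier = []
        for node in frontier:
            neighbors = adj.get(node, [])
            if len(neighbors) <= fanout:
                chosen = neighbors
            else:
                chosen = rng.sample(neighbors, fanout)
            for nb in chosen:
                if nb not in sampled:
                    sampled.add(nb)
                    next_frontier.append(nb)
        frontier = next_frontier
    return sampled

adj = {0: [1, 2, 3], 1: [4], 2: [], 3: [], 4: []}
```

Only the sampled node IDs then need feature fetches from the feature store, which is what decouples training from the full graph's memory footprint.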
01:20:00 The Stanford Graph Learning Workshop 2022 discusses how to build a feature store and graph store for large-scale graph neural networks. The workshop provides an overview of the various abstractions provided by PyG, and shows how these can ease the development and optimization of graph-based models.
01:25:00 The presenter discusses how feature representation can be optimized for storage and performance. They also discuss how to scale up machine learning algorithms.
01:30:00 In this video, a Stanford Graph Learning Workshop participant introduces himself and talks about his work on a decision problem. He demonstrates how a decision tree can be used to find the best solution, and then extends the tree to include multiple variables.
01:35:00 The Stanford Graph Learning Workshop 2022 discusses current trends in graph learning and showcases some of the latest advancements in the field.
01:40:00 This video is a tutorial on graph learning, featuring a Stanford professor who discusses the different approach that his university takes. The professor also gives a brief overview of the research that is currently being done in this area.
01:45:00 In this workshop, participants learn how to use graph learning algorithms to recognize patterns in data.
01:50:00 In this video, Stanford researchers discuss how graph learning can be applied to a variety of problems. They also mention that the technology is being used by various businesses and organizations to improve their performance.
01:55:00 This segment is from the Stanford Graph Learning Workshop, where participants discuss the latest research in graph learning before a coffee break.
02:00:00 Rishi Puri from NVIDIA discusses their contributions to the PyG library and their efforts to scale it to larger graphs. CUTLASS, cuGraph, and cuGraph-ops are discussed in detail, as is pyg-lib, which provides low-level optimizations for workflows accelerated with NVIDIA CUDA libraries. The RAPIDS cuGraph PyG ecosystem will support multi-node multi-GPU execution using cuGraph or the cuGraph service, as well as massive-scale graphs. Graph stores will be provided without any code changes needed by the user, all integrated with cuML and cuDF for faster pre-processing, training, and inference.
02:05:00 In this workshop, attendees learn about two ways to manage and use graph data with cuGraph: using cuGraph directly to manage node and edge data, or using the cuGraph service, which provides additional flexibility with data stores and compute segmentation. Both options support multiple GPUs for homogeneous and heterogeneous graphs, rely on cuGraph's core algorithms, and offer the option of using CUTLASS to accelerate heterogeneous GNNs.
02:10:00 This video workshop discusses the benefits of parallelizing GNN training. It covers the basics of NVIDIA's GNN product stack and the new, performance-tuned PyG container, and then discusses how Intel is helping to speed up GNN training on CPUs.
02:15:00 Intel's vision for AI software is to provide a unified AI software stack that is optimized for performance and productivity. This stack includes the oneDNN library, which is designed to optimize the performance of dense compute. Additionally, the company is developing new features at the top of the stack, such as workflows and containers.
02:20:00 The Stanford Graph Learning Workshop 2022 introduces the use of convolution in graph learning and summarizes its performance implications. The workshop also introduces SpMM (sparse matrix multiplication) with reduce, a commonly used optimization on Intel CPU platforms, and compares convolution against SpMM/reduce on the Reddit data set.
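The sparse multiply-and-reduce pattern can be made concrete with a small pure-Python kernel: multiply a sparse adjacency matrix stored in CSR form by a dense feature matrix, reducing over each row's non-zero entries. This is the kernel shape a GNN layer maps onto and what the CPU optimizations target; the code below is a didactic sketch, not the optimized implementation.

```python
# Didactic SpMM-reduce kernel: sparse adjacency matrix in CSR form
# (indptr, indices, values) times a dense feature matrix, summing over
# each row's non-zero entries.

def spmm_csr(indptr, indices, values, dense):
    """Return (CSR matrix) @ (dense matrix, given as a list of rows)."""
    n_rows = len(indptr) - 1
    n_cols = len(dense[0])
    out = [[0.0] * n_cols for _ in range(n_rows)]
    for row in range(n_rows):
        # Iterate only over this row's non-zeros (the edges of the node).
        for k in range(indptr[row], indptr[row + 1]):
            col, val = indices[k], values[k]
            for j in range(n_cols):
                out[row][j] += val * dense[col][j]
    return out

# A 2-node graph where each node's only neighbor is the other node.
indptr, indices, values = [0, 1, 2], [1, 0], [1.0, 1.0]
dense = [[1.0, 2.0], [3.0, 4.0]]
result = spmm_csr(indptr, indices, values, dense)
```

Vectorizing the inner loop over non-zero edges is exactly the kind of optimization the Intel talk describes.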
02:25:00 The video showcases how the Stanford Graph Learning Workshop 2022 optimized a large graph workload for speed using vectorization over the non-zero edges.
02:30:00 This workshop discusses the scalability and performance of graph machine learning algorithms, as well as the unified graph platform that the company is working on.
02:35:00 The Stanford Graph Learning Workshop 2022 will focus on building a library that serves as a community effort for data scientists and researchers.
02:40:00 In this workshop, attendees learn about the different graph structures that are useful for machine learning and recommendation tasks. They also learn about the different stages involved in the recommendation process and how to use graph-based tools to help make those decisions.
02:45:00 This workshop discusses how to use graph learning to improve ranking of suggestions for users and podcasts. Graph learning algorithms are used to learn about the relationships between users, podcasts, and other data.
02:50:00 In this workshop, attendees discuss how to use graph networks to improve the quality of search results, including how to generate and use graph embeddings for this purpose.
02:55:00 The speaker discusses some of the challenges involved in designing effective Recommendation Systems, and highlights some of the tools and techniques that have been successful thus far. He also provides a list of reference materials for further reading.
03:00:00 Kumo AI is a startup that enables enterprises to query the future quickly and easily. Hema Raghavan, a co-founder of Kumo AI, speaks about how the company uses GNNs to do this; "Query the Future" is the tagline of the talk. GNNs are key to powering this future, and by building a platform that allows many contributions, Kumo AI is helping the community innovate together.
03:05:00 Stanford's Graph Learning Workshop 2022 covers the basics of how to build a simple ranking and recommendation model, and how to apply these models to practical problems in enterprises. This workshop is a valuable resource for data scientists looking to learn more about the latest in graph-based prediction models.
03:10:00 In this workshop, Stanford researchers discuss how to use deep learning to make predictions over data warehouses. They walk through a demonstration of connecting tables into a graph and generating predictions.
03:15:00 The video discusses how to use the Kumo platform to make predictions about customer churn. Kumo provides a simple query builder so that you can easily generate predictions. The video also provides a baseline comparison of predictions made using a simple marketing analyst and an XGBoost-based model for the LTV problem. The predictions generated by Kumo are generally accurate.
03:20:00 The Stanford Graph Learning Workshop 2022 discusses Kumo, a platform that enables graph-based solutions with table-level and column-level explainability. Kumo also offers a secure, cloud-first solution that isolates data and compute, enabling enterprise-grade SaaS offerings built on open-source technologies.
03:25:00 The Stanford Graph Learning Workshop 2022 will discuss how to invest in research at Stanford, how to recruit students from Stanford, and how the Lynx program works to connect students from underrepresented backgrounds with faculty and industry.
03:30:00 Stanford Graph Learning Workshop 2022 will explore advances in graph learning algorithms and applications.
03:35:00 The video presents a workshop on graph learning at Stanford University, covering a variety of topics including algorithms and data structures for graphs.
03:40:00 The video discusses a session of the 2022 Stanford Graph Learning Workshop focused on improving the accuracy of machine learning models using graphs.
03:45:00 In this video, Stanford researchers discuss graph learning methods and their potential future applications.
03:50:00 The video discusses how to use Graph Learning techniques to learn new information.
03:55:00 In this video, a Stanford Graph Learning Workshop presenter describes some of the recent research they've been doing on graph learning. They mention that they've improved their approach, and that they would like to share this with others.
04:00:00 The Stanford Graph Learning Workshop 2022 discusses the development of graph-based applications. The workshop discusses the importance of data collection, data pre-processing, and data analysis.
04:05:00 In this workshop, participants learn about the latest advances in graph learning technology. They are then given a task to perform using this technology.
04:10:00 This video is a workshop on graph learning, which is a type of machine learning used to identify relationships between objects. The workshop is led by Professor Savery and covers a variety of topics, including different types of graphs, algorithms for graph learning, and applications of graph learning.
04:15:00 This video workshop covers the latest developments in graph learning algorithms. The presenter discusses how to improve the performance of Graph Convolutional Neural Networks using various optimization techniques.
04:20:00 In this workshop, attendees learn how to apply graph learning techniques to solve problems in various domains.
04:25:00 This workshop will discuss graph learning methods and applications.
04:30:00 The video explains how graph AI is useful for diagnosis, and shows how it can help reduce the time it takes to get a diagnosis for a rare disease.
04:35:00 SHEPHERD is a few-shot learning method designed for diagnosing rare diseases. It works as follows: first, a base model is created by training a self-supervised graph representation learning approach on a large biomedical knowledge graph. Patient-level information is then overlaid on this base: individual patient information is represented so as to optimize the resulting embedding space, such that similar patients are represented by nearby points. Finally, SHEPHERD can be implemented in a clinical workflow; evaluation on two external patient cohorts shows that it is able to accurately and quickly diagnose patients with rare diseases.
04:40:00 The Stanford Graph Learning Workshop 2022 provides a comprehensive overview of how to use graphs to improve clinical diagnosis. The workshop discusses how to embed patient data into a knowledge graph, how to use self-supervised pre-training of GNNs to improve a model's accuracy, and how to use machine learning to support the entire life cycle of drug discovery.
04:45:00 The workshop discusses how to use machine learning to prioritize drugs for diseases. The workshop provides an example of how this was used to identify promising drugs for a virus.
04:50:00 The speaker introduces Bryan Perozzi, a research scientist at Google, and discusses some of the successes of applying graph neural networks to various tasks.
04:55:00 The talk discusses some of the challenges that Google faces when working with large graphs, including heterogeneity and scale, and how Grale helps to overcome these issues.
05:00:00 The video discusses the Stanford Graph Learning Workshop 2022, which will focus on learning graph models with similarity. The presenter discusses the challenges of evaluating academic graph models and provides an example of how to improve their accuracy by using a separate model for each data set.
05:05:00 The speaker discusses how powerful machine learning models can overfit when given biased training data, and suggests applying a regularization term to the latent representation to balance out the bias. Different regularization functions can be used, including an adversarial model and a function based on the moments of the distribution. The speaker finishes by noting that different data sets will require different regularization measures.
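A moment-based regularizer of the kind described can be sketched in a few lines: penalize the difference between the first two moments (mean and variance) of latent representations drawn from two groups, pushing the model toward group-invariant embeddings. One-dimensional latents are used for clarity; this is an illustrative sketch, not the speaker's exact formulation.

```python
# Sketch of a moment-matching regularizer: compare the mean and
# variance of latent representations from two groups, and return a
# penalty that is zero when the first two moments agree.

def moments(xs):
    """Return (mean, population variance) of a list of floats."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, var

def moment_matching_penalty(latents_a, latents_b):
    """Squared difference of the first two moments between two groups."""
    ma, va = moments(latents_a)
    mb, vb = moments(latents_b)
    return (ma - mb) ** 2 + (va - vb) ** 2
```

During training this penalty would be added to the task loss, with its weight tuned per data set, which matches the observation that different data sets need different regularization strengths.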
05:10:00 Srijan Kumar from Georgia Tech is presenting two machine learning methods for detecting fraudulent activity on websites. The first method is a time-varying interaction network, and the second is a recurrent neural network that learns to predict future embedding trajectories for users and items.
05:15:00 JODIE is a machine learning algorithm that updates user and item embeddings in a scalable manner, outperforming existing models by 12%. Facebook released the TIES model, which uses the dynamics of user and item embeddings for platform-integrity tasks.
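The mutually recursive update at the heart of such temporal models can be sketched simply: when a user interacts with an item, each embedding is updated as a function of the other. The linear blend below is a toy stand-in for the learned RNN updates the actual model uses.

```python
# Toy version of a mutually recursive user/item embedding update:
# after an interaction, each embedding moves toward the other by a
# blend factor alpha (a stand-in for learned RNN update functions).

def interact(user_emb, item_emb, alpha=0.3):
    """Return updated (user, item) embeddings after one interaction."""
    new_user = [(1 - alpha) * u + alpha * i
                for u, i in zip(user_emb, item_emb)]
    new_item = [(1 - alpha) * i + alpha * u
                for u, i in zip(user_emb, item_emb)]
    return new_user, new_item

user, item = interact([1.0, 0.0], [0.0, 1.0])
```

Applying this update at every timestamped interaction yields embedding trajectories over time, which is what the model learns to extrapolate.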
05:20:00 In this workshop, Stanford researchers discuss the robustness of graph neural networks models against adversarial manipulation. They present findings from a study in which they designed a new adversarial attack model to manipulate detection systems. The results show that graph and sequence-based models can be easily manipulated, with failure rates as high as 15%.
05:25:00 In this workshop, attendees learned about the stability of graph neural networks, how to measure it, and how to improve it. They also discussed how imperceptible changes to the training data can impact the stability of a recommendation system.
05:30:00 Xin Luna Dong from Meta Reality Labs discusses the role of graph learning in building a smart virtual assistant for AR and VR, the technologies used to support it, and the benefits it provides.
05:35:00 The video discusses the challenges and initial solutions for developing an assistant that can be used in AR and VR. It discusses how the assistant needs to evolve to be able to understand the context and be multimodal.
05:40:00 The video showcases how, using a personal knowledge graph, the various aspects of a user's life – such as their reading history, preferences, and routines – can be used to generate recommendations.
05:45:00 The speaker discusses how graph learning is important for intelligent assistants, explaining that the content in graphs is not as important as the path between nodes. He also discusses how Federated learning can preserve privacy, and how inference and on-device learning are possible.
05:50:00 This video introduces the use of graphs in language modeling, and discusses two techniques, LinkBERT and DRAGON, that make this easier. LinkBERT uses document links as additional input when training a language model, while DRAGON incorporates knowledge graphs into the model training process.
05:55:00 This video introduces two machine learning techniques, LinkBERT and DRAGON, which take advantage of the interconnectedness of documents and knowledge graphs. LinkBERT is effective for multi-hop reasoning, and DRAGON for low-resource question answering. Both techniques are available as open-source code on the Hugging Face website.
06:00:00 The presentation introduces DRAGON, a technique that trains language models from text and knowledge graphs more effectively. DRAGON first prepares an informative pair of a text segment and a knowledge subgraph, then uses it to ground the text in the knowledge subgraph. DRAGON improves performance across multiple tasks and domains, and shows promising benefits for complex reasoning.
06:05:00 This workshop discusses how graphs can be used to improve language modeling. Results show that using linked documents and knowledge graphs can improve performance.
06:10:00 This video provides a workshop on graph learning at Stanford University. The workshop is open to interested participants and covers a variety of topics related to graph learning.
06:15:00 In this video, a Stanford Graph Learning Workshop is discussed. Various challenges and examples of how to solve them are given. Finally, a one-month plan for keeping track of monthly progress is outlined.
06:20:00 This workshop discusses how to apply graph analytics to business problems, featuring presentations on applications such as impact analysis and product recommendations.
06:25:00 This video provides a workshop on graph learning, which is a subfield of machine learning that focuses on the analysis and understanding of graphs.
06:30:00 The speaker will discuss the Open Graph Benchmark, which is a collection of realistic and diverse Benchmark data sets for machine learning. The Open Graph Benchmark is available online and includes data sets from different domains and tasks.
06:35:00 The Stanford Graph Learning Workshop 2022 video covers large-scale graph machine learning, providing an overview of the challenges faced when training machine learning models on large graphs, as well as the variety of data sets available for use. The workshop also discusses the Large-Scale Challenge (LSC), which has attracted over 500 participants from around the world.
06:40:00 The second OGB Large-Scale Challenge (OGB-LSC) provides three large-scale data sets for training machine learning models, including the prediction of important quantum chemistry properties. The competition is designed to keep the field advancing, and the winners will be announced in late November.
06:45:00 The final talk of the day was by Hongyu Ren, a fifth-year PhD student at Stanford, presenting joint work with several colleagues. Hongyu is interested in knowledge graphs and works on reasoning methods that allow complex predictive queries over these types of graphs.
06:50:00 The Stanford Graph Learning Workshop 2022 will discuss Knowledge Graphs and their applications. The talk will discuss how to complete a Knowledge Graph, and how to find answers to queries.
06:55:00 The Stanford Graph Learning Workshop 2022 video presents a new approach to modeling queries, using a hyperrectangle representation. This representation is more expressive and efficient than traditional query representations, and can be used to model any query. The video also introduces a framework for scaling up reasoning algorithms to large datasets, using a distributed training paradigm and a highly optimized pipeline.
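The hyperrectangle (box) idea can be sketched without any ML machinery: a query is a center plus per-dimension offsets, an entity answers the query if its embedding falls inside the box, and intersecting two queries shrinks to their overlapping box, which is why the representation composes well for conjunctive queries. The helpers below are an illustrative sketch, not the actual trained model.

```python
# Sketch of box (hyperrectangle) query embeddings: a box is
# (center, offset); an entity answers the query if its point lies
# inside, and conjunction of queries is box intersection.

def in_box(point, center, offset):
    """True if `point` lies inside the box (center +/- offset)."""
    return all(abs(p - c) <= o for p, c, o in zip(point, center, offset))

def intersect(box_a, box_b):
    """Intersect two (center, offset) boxes; None if they are disjoint."""
    lows = [max(ca - oa, cb - ob) for ca, oa, cb, ob in
            zip(box_a[0], box_a[1], box_b[0], box_b[1])]
    highs = [min(ca + oa, cb + ob) for ca, oa, cb, ob in
             zip(box_a[0], box_a[1], box_b[0], box_b[1])]
    if any(lo > hi for lo, hi in zip(lows, highs)):
        return None
    center = [(lo + hi) / 2 for lo, hi in zip(lows, highs)]
    offset = [(hi - lo) / 2 for lo, hi in zip(lows, highs)]
    return center, offset

box_a = ([0.0, 0.0], [1.0, 1.0])
box_b = ([1.0, 0.0], [1.0, 1.0])
inter = intersect(box_a, box_b)
```

In the learned setting, centers and offsets are trained vectors and containment is softened into a distance, but the geometry of composition is the same.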
07:00:00 The speaker discusses various downstream applications of query embedding methods, including natural language question answering and fact ranking. They also present a scalable framework that combines query embedding and reasoning methods to extremely large knowledge graphs.
07:05:00 This segment introduces the audience to a panel on graph learning, presenting the five distinguished panelists, each of whom has an impressive background in research and industry.
07:10:00 The speaker discusses their work with graphs, highlighting how they are used in various industries. He shares that they are currently hiring for a position in the field of graphs and machine learning.
07:15:00 This video introduces the use of graphs in various application areas at Amazon, including relevance and quality assessment in search, fake reviews, and boosting product rank. The presenter notes that graph problems are fundamentally relational, and that at Snapchat, the team is primarily focused on recommending items to users.
07:20:00 This video discusses how industry is well positioned to help academics with pragmatic research, as well as the various collaborations between industry and universities. It also mentions how companies often have internship programs and University collaborations.
07:25:00 The panelists discuss how interns are used at Amazon, and how smoothly the IP process is currently run. They mention that while interns are usually students from top universities, there is no limit to who can apply. The panel also mentions that interns are a great opportunity for those looking to gain hands-on experience in the field, and to connect with faculty members.
07:30:00 The speaker discussed the advantages and disadvantages of the academic and startup environments, highlighting the contrasting incentives and values of the two. He recommended students consider research in areas such as system scaling and inference, as well as machine learning and optimization.
07:35:00 The speaker discusses the advantages and disadvantages of using graph-based models, and advises students to do an internship at a company or lab.
07:40:00 This video discusses the future of the graph learning field, focusing on the role of recent advances in equivariant neural networks and 2D architectures. The panelists suggest that the field is still in its early stages, with many opportunities for innovation still ahead.
07:45:00 This workshop discusses the current state of machine learning and deep learning, and how students can find opportunities to use these technologies in the real world.
07:50:00 The speaker discusses how deep learning has the potential to surpass traditional methods in drug discovery, and how students should think about what they are doing before doing it. He also recommends that students think about how their work fits into a larger picture, and how it can be improved.
07:55:00 The presenter thanks attendees for coming and mentions that the event would not have been possible without the help of the Stanford data science initiative. Several speakers present on different aspects of graph learning, and the closing remarks mention that drinks and food are available outside the venue.