Summary of English Google Webmaster Central office-hours hangout on best practices


00:00:00 - 01:00:00

In this English Google Webmaster Central office-hours hangout, Google's John Mueller shares a variety of best practices and tackles myths and misconceptions related to web development and SEO. He stresses the importance of using fast, reusable technologies to present content, using sitemaps to notify Google of new and updated URLs, avoiding robots.txt as a way to eliminate duplicate content, handling duplicate URLs with redirects, rel canonical, and the parameter handling tool, and keeping up with website design best practices such as not blocking CSS and JavaScript resources. He also emphasizes the value of creating unique content, improving mobile access, and gathering user feedback. Mueller explains why Fetch as Google submissions are limited, advises on how to make RSS feeds available to Google, notes that the priority field in sitemaps isn't used for web search, and closes with advice on ranking better by creating useful and timely content.

  • 00:00:00 In this section, John Mueller, a webmaster trends analyst at Google Switzerland, shares best practices and debunks myths to help webmasters build fantastic websites. He recommends using fast and reusable technologies to present content in ways that work across all devices, using Responsive Web Design to build one website that can be reused across multiple devices, and avoiding Flash and other rich media for textual content. He also discusses how to optimize website architecture using sitemaps, RSS, and Atom feeds to let Google know about new and updated URLs, and how to make sure the dates in these files match when the primary content last changed (see the sitemap sketch after this list). Finally, he emphasizes the importance of double-checking indexed URLs to confirm that the URLs submitted match those found during crawling.
  • 00:05:00 In this section, John Mueller discusses rel canonical and best practices for canonical settings. He recommends using rel canonical to tell Google which URL you prefer to have indexed. It's important to set it up correctly and make sure the pages really are equivalent, to avoid common mistakes such as pointing every page's canonical at the homepage (a quick self-check sketch appears after this list). He also explains that duplicate content on the same website won't penalize it; Google is quite good at recognizing duplicates and ignoring them. However, it's still worth cleaning up the duplicates for crawling purposes.
  • 00:10:00 In this section, John Mueller from Google Webmaster Central discusses how duplicate content can affect crawling and indexing of a website, particularly if there are a lot of duplicates. Google's indexing system will recognize duplicates and filter them out of the search results. However, if the content is not entirely duplicate, such as identical articles posted on a US English and a UK English website, Google will index all of the pages and choose the most relevant one for the search results. While this is not treated as a penalty, having many duplicates can slow down crawling and make it harder for Google to find the actual content.
  • 00:15:00 In this section, John Mueller explains that, technically, having duplicate content on different sites is not necessarily a sign of lower-quality content, and there can be legitimate reasons for doing so. However, users might notice it if it is done excessively or without adding unique value, and algorithms could then see it as a sign of lower-quality content. Regarding how many words count as duplicate content, there is no set number or rule; it varies depending on different factors. Additionally, citing back to other websites is not something the algorithms specifically take into account, but it's a good practice to adopt. Finally, Mueller warns against using robots.txt to eliminate duplicate content, as it can cause more problems than it solves.
  • 00:20:00 In this section, John Mueller discusses how to handle duplicate content, stating that robots.txt is a bad way of dealing with it because Google might still see the blocked URL as relevant and show it in the search results. Instead, a clean URL structure, 301 redirects or rel canonical, and the parameter handling tool can help (a URL clean-up sketch follows this list). Either way, Mueller emphasizes that it's always good practice to clean up duplicate content. Faceted navigation is trickier to handle, and it depends on the website: some content makes sense to index separately, so it's best to make a judgment call on how relevant and useful the specific pages are. Canonicalizing search results pages into the main category is sometimes useful, but Mueller says that it depends and suggests it as a good topic for another Hangout.
  • 00:25:00 In this section of the video, the discussion turns to robots.txt and blocking pages on a website. While it's fine to block pages that Googlebot doesn't need to crawl, such as resource-expensive content and tools that take a long time to run, using robots.txt won't prevent pages from being indexed. To keep pages out of the index, webmasters should use noindex or server-side authentication (see the indexability-check sketch after this list). Additionally, it's important to stay current with website design best practices and not block JavaScript, CSS, and other embedded resources, as these are important for mobile-friendly pages. Lastly, the web is constantly changing, and it's important to keep up with new trends to stay relevant to users.
  • 00:30:00 In this section, John Mueller discusses various myths and misconceptions related to web development and SEO. Shared IP addresses and too many 404 pages won't affect a website's ranking, but having your own unique content instead of just copying affiliate content is more beneficial. It's worth using a disavow file if you find problematic links that you don't want to be associated with (a disavow-file sketch follows this list), and there's no need to worry about keyword density or the order of text in an HTML file. He also mentions that having a primary content image is fine, but it should match the specific content of the page, and that the keywords meta tag is not a ranking factor.
  • 00:35:00 In this section, John Mueller of Google Webmaster Central explains that some meta tags are almost completely irrelevant, including the keywords meta tag, the revisit-after tag, and robots meta tags that just repeat the defaults of index and follow. He also suggests not wasting time tweaking keywords in URLs or in meta tags, as Google focuses primarily on the content itself. He stresses the importance of mobile access, since many people use their smartphones to access the internet, and recommends trying to complete a task, such as ordering something, to make sure the website is easy to navigate. Finally, he reminds us to always measure, always ask for feedback, and always think of ways to improve your website.
  • 00:40:00 In this section, Google's John Mueller explains why Fetch as Google submissions in Webmaster Tools are limited, since these URLs need to be reprocessed in a special way. He recommends using sitemaps and feeds instead of Fetch as Google to automate content indexing, adding that these methods let Google pick up site changes quickly, often within seconds or minutes (a sitemap-ping sketch follows this list). Mueller also discusses whether Google uses the priority field in sitemaps, stating that it isn't used for web search, though it might be used for custom search engines. Finally, he advises on where to put RSS feeds, suggesting that rather than uploading them as a sitemap file, they should be hosted on the server where Google can easily access them.
  • 00:45:00 In this section, John Mueller recommends using PubSubHubbub in addition to submitting a sitemap in Webmaster Tools for fast-changing websites, so that Google picks up new changes immediately (a PubSubHubbub ping sketch follows this list). He also advises users whose content is being copied or scraped to try the DMCA removal tools or to contact their legal team, and to check that their own website is doing everything properly. In addition, Mueller confirms that Google is in the final stages of releasing a new version of Penguin within the next few weeks, and gives advice for rebranding and transferring links to avoid redirects.
  • 00:50:00 In this section, John Mueller from Google Webmaster Central talks about why there might be performance drops after making significant changes to a website, the frequency of crawling a website's pages by search engine bots, and the relationship between usability and SEO. He mentions that changes such as layout and internal linking can cause fluctuations, but small text pieces should not. The crawling frequency varies depending on the number of changes, the type of pages, and the value of the website to search engine bots. Lastly, the fundamental tactic for good SEO remains having good content.
  • 00:55:00 In this section, John Mueller discusses tactics for ranking better in Google search results, which he says are more about creating useful content, staying timely, and providing value to users than about technical tricks. He also clarifies that the disavow file is processed continuously, with every change taken into account on Google's side. Additionally, he addresses a question on hreflang and canonicals when there are multiple versions of a site, and points out that the canonical URLs should be used in the hreflang pairs.
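
Picking up the sitemap advice from the 00:00:00 section, here is a minimal sketch, in Python, of generating a small XML sitemap whose lastmod dates reflect when the primary content of each page last changed; the URLs and dates are hypothetical placeholders, not examples from the hangout.

```python
# Minimal sketch: generate a small XML sitemap whose <lastmod> dates reflect
# when the primary content of each page actually changed.
# The URLs and dates below are hypothetical placeholders.
from xml.sax.saxutils import escape

pages = [
    # (URL, date of last significant content change)
    ("https://www.example.com/", "2015-03-01"),
    ("https://www.example.com/blog/new-article", "2015-03-10"),
]

entries = "\n".join(
    "  <url>\n"
    f"    <loc>{escape(url)}</loc>\n"
    f"    <lastmod>{lastmod}</lastmod>\n"
    "  </url>"
    for url, lastmod in pages
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```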
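
For the rel canonical discussion at 00:05:00, one common mistake is every page declaring the homepage as its canonical. The following is a rough self-check sketch, assuming hypothetical URLs and a simple regex rather than a full HTML parser.

```python
# Minimal sketch: report the rel="canonical" target of a few pages so you can
# spot mistakes such as every page pointing its canonical at the homepage.
# URLs are hypothetical placeholders; real pages may need more robust parsing.
import re
import urllib.request

PAGES = [
    "https://www.example.com/",
    "https://www.example.com/products/widget?sessionid=123",
]

CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in PAGES:
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    match = CANONICAL_RE.search(html)
    canonical = match.group(1) if match else "(no rel=canonical found)"
    print(f"{url} -> {canonical}")
```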
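
For the clean URL structure and parameter handling discussion at 00:20:00, here is a minimal sketch of normalizing parameter-laden URLs into one clean form, the kind of URL you would then 301 to or declare as canonical; the ignored parameter names are hypothetical examples.

```python
# Minimal sketch: normalize URLs with extra query parameters into one clean
# form, suitable as the target of a 301 redirect or a rel=canonical.
# The parameter names treated as irrelevant are hypothetical examples.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

IGNORED_PARAMS = {"sessionid", "utm_source", "utm_medium", "sort"}

def clean_url(url: str) -> str:
    parts = urlsplit(url)
    kept = sorted(
        (k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if k not in IGNORED_PARAMS
    )
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(clean_url("https://www.example.com/shoes?sessionid=abc&color=blue&utm_source=news"))
# -> https://www.example.com/shoes?color=blue
```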
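
For the 00:25:00 discussion, blocking a URL in robots.txt and keeping it out of the index are two different things. Here is a minimal indexability-check sketch, with a hypothetical URL, that looks at both crawlability and the presence of a noindex directive.

```python
# Minimal sketch: check whether a URL is blocked by robots.txt and whether it
# carries a noindex directive. Blocking in robots.txt stops crawling, but only
# noindex (or authentication) keeps a page out of the index.
# The URL is a hypothetical placeholder and the noindex check is deliberately rough.
import urllib.request
import urllib.robotparser
from urllib.parse import urlsplit

URL = "https://www.example.com/internal-search?q=shoes"

parts = urlsplit(URL)
rp = urllib.robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
rp.read()
print("Crawlable by Googlebot:", rp.can_fetch("Googlebot", URL))

response = urllib.request.urlopen(URL)
header = response.headers.get("X-Robots-Tag", "")
body = response.read().decode("utf-8", errors="replace").lower()
has_noindex = "noindex" in header.lower() or 'content="noindex' in body
print("Carries a noindex directive:", has_noindex)
```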
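
For the disavow file mentioned at 00:30:00, here is a minimal sketch of writing one in the accepted format: "#" comment lines, "domain:" lines for whole domains, and plain URLs for individual pages. The domains and URLs are hypothetical examples.

```python
# Minimal sketch: write a disavow file in the format the disavow links tool
# accepts: "#" comments, "domain:" lines for whole domains, and plain URLs
# for individual pages. The domains and URLs are hypothetical examples.
bad_domains = ["spammy-links.example", "paid-directory.example"]
bad_urls = ["http://blog.example.org/comment-spam-page.html"]

lines = ["# Links we could not get removed and do not want to be associated with"]
lines += [f"domain:{d}" for d in bad_domains]
lines += bad_urls

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```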
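
For the 00:40:00 advice to rely on sitemaps and feeds rather than Fetch as Google, sites at the time could also ping Google whenever a sitemap changed. Here is a minimal sketch of that ping with a hypothetical sitemap URL; note that Google has since deprecated this endpoint, so treat it as historical illustration.

```python
# Minimal sketch: notify Google that a sitemap has been updated via the
# sitemap ping endpoint that existed at the time of this hangout (Google has
# since deprecated it). The sitemap URL is a hypothetical placeholder.
import urllib.parse
import urllib.request

SITEMAP_URL = "https://www.example.com/sitemap.xml"

ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(SITEMAP_URL, safe="")
with urllib.request.urlopen(ping) as response:
    print("Ping status:", response.status)
```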
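
For the PubSubHubbub recommendation at 00:45:00, publishing works by POSTing a notification to a hub whenever the feed changes. Here is a minimal sketch against Google's public hub, with a hypothetical feed URL.

```python
# Minimal sketch: tell a PubSubHubbub hub that a feed has new content by
# POSTing a "publish" notification. The hub shown is Google's public hub and
# the feed URL is a hypothetical placeholder.
import urllib.parse
import urllib.request

HUB = "https://pubsubhubbub.appspot.com/"
FEED_URL = "https://www.example.com/feed.xml"

data = urllib.parse.urlencode({"hub.mode": "publish", "hub.url": FEED_URL}).encode()
request = urllib.request.Request(HUB, data=data)
with urllib.request.urlopen(request) as response:
    # The hub replies with 204 No Content when the notification is accepted.
    print("Hub response status:", response.status)
```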

01:00:00 - 01:00:00

In a recent Google Webmaster Central office-hours hangout, John Mueller shared his advice on properly combining hreflang with canonical URLs. He suggests connecting the canonical URLs with hreflang and, if there are multiple versions of a site, using the version that Google is picking up for indexing in the hreflang annotations. He also offers to review sites to help resolve any hreflang or canonical issues, and suggests rolling back a move to HTTPS if there are any concerns or issues.

  • 01:00:00 In this section, John Mueller discusses how to properly use hreflang together with canonical URLs. He advises that the hreflang annotations should connect the canonical URLs; if an annotation points to a URL that is not being indexed, the hreflang tag is ignored. If both HTTP and HTTPS versions of a site exist, Mueller suggests using the version that is being picked up for search and indexing by Google in the hreflang annotations (a small consistency-check sketch follows). Additionally, he offers to review a specific site to help the user resolve any hreflang or canonical issues. Lastly, Mueller suggests rolling back a move to HTTPS if there are any concerns or issues, and reconsidering the move after hearing about other positive experiences.
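
To follow the advice that hreflang annotations should connect the canonical, indexed URLs, here is a minimal consistency-check sketch; the language/URL pairs are hypothetical placeholders and the parsing is deliberately simple.

```python
# Minimal sketch: check that the URLs used in hreflang annotations are also
# the canonical URLs, since an hreflang entry pointing at a non-indexed URL
# is ignored. The language/URL pairs are hypothetical placeholders.
import re
import urllib.request

HREFLANG_PAIRS = {
    "en-us": "https://www.example.com/en-us/",
    "en-gb": "https://www.example.com/en-gb/",
}

CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for lang, url in HREFLANG_PAIRS.items():
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    match = CANONICAL_RE.search(html)
    canonical = match.group(1) if match else None
    status = "OK" if canonical == url else f"mismatch (canonical is {canonical})"
    print(f"{lang}: {url} -> {status}")
```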
