Summary of English Google Webmaster Central office-hours hangout

This is an AI-generated summary. There may be inaccuracies.

00:00:00 - 00:55:00

In this YouTube video, John Mueller of Google answers Webmaster Central office-hours questions on a variety of topics, from canonical tags and domain age to crawl rates and switching URL structures. On canonical tags, Mueller states that rel=canonical is not equivalent to a 301 redirect; on domain age, he suggests it is not a primary factor and not something to worry about artificially. He also emphasizes the importance of allowing Google to crawl both CSS and JavaScript files so it can determine whether pages are mobile-friendly. Ultimately, Mueller stresses focusing on technical soundness and quality content to help sites rank higher, rather than relying solely on external signals.

  • 00:00:00 In this section, Google's John Mueller answers various questions on canonical tags, Freebase, domain age, and linking from gate elements. On canonical tags, Mueller explains that rel=canonical is not equivalent to a 301 redirect, as it takes extra work to forward signals with a rel=canonical even after indexing (a sketch contrasting the two follows this list). On domain age, Mueller suggests that while it may indirectly affect trust and credibility among long-time users who are familiar with the website, it is not something to worry about artificially, and a website's age is not a primary factor. Finally, on linking from gate elements, Mueller sees it as a technical problem that Google should solve to better understand the links' source.
  • 00:05:00 In this section, John Mueller, a Google Webmaster Trends Analyst, advises users to do their due diligence when considering buying an expired domain name, as it may have a problematic link profile or history attached to it. He also discusses the impact of user behavior on search engine ranking, noting that if users have a terrible experience on a site and bounce out, it can stop them from recommending the site, which can indirectly affect ranking. Mueller states that if you are using 301 redirects to clean up, you should redirect directly to the final page to make it easier for Google to process (a redirect sketch follows this list). Finally, he explains that Google recrawls pages naturally, and as it sees fewer issues, the crawl-errors graph in Webmaster Tools will go down.
  • 00:10:00 In this section, a user asks what the average crawl rate in Webmaster Tools should be for a new domain with 5,000 URLs after two or four weeks, and John Mueller states that there is no set number, as it depends on numerous factors such as website type and content, server speed, and potential errors. Mueller then addresses an issue where desktop searches for keywords lead to mobile pages appearing in desktop search results, suggesting that this could be a technical problem with the implementation of the canonical and alternate markup tags (a markup sketch follows this list). Finally, Mueller answers a question about the process when a website experiences a malware attack, stating that there is no single procedure, as there are different types of attacks; users should go to the Security Issues section of Webmaster Tools, where they can request a review.
  • 00:15:00 In this section, John Mueller explains the process of removing malware warnings from a website. The process is fully automated: Google crawls the pages to confirm there is no malware before removing the warning from search results and the Safe Browsing API list. However, if a site has been hacked with SEO spam or hidden pages, the Web Spam team manually reviews the pages before removing the warning. As for Fetch as Google, if a website returns lots of 500 errors or timeouts, crawling is reduced to a minimal amount, which in turn affects the Fetch as Google function.
  • 00:20:00 In this section, John Mueller, a Google Webmaster Trends Analyst, explains how soft 404s, pages that redirect to a category or homepage after being removed, are treated the same as 404s by Google. He explains that such redirecting can be disadvantageous since it wastes Google's crawling time, and calls it a grey-hat or black-hat SEO technique. He advises using a 410 for those pages instead, which is the proper solution, and providing value to users by showing related items or categories. Furthermore, he encourages using JavaScript to redirect users on to related pages so that both Google and users are satisfied (a sketch follows this list).
  • 00:25:00 In this section of the video, John Mueller from Google emphasizes that just because a big website does something, it does not mean that it is necessarily a good thing to do. He also discusses the role of social media in Google's search algorithms, highlighting that social signals from public content are not currently used in the search algorithm, but that the content itself can be crawled and indexed. Additionally, he addresses a question about what webmasters should do in Google Webmaster Tools after changing their site to HTTPS, explaining that the main thing they need to do is to add the HTTPS version of their site to Webmaster Tools. Finally, a question is raised about how Google extracts the article date for the OneBox in universal search results, and Mueller assures the audience that this question will be checked by the Google team.
  • 00:30:00 In this section, John Mueller of Google answers questions about switching to a new URL structure and HTTPS and seeing a drop in traffic due to missing images and a high number of error pages. He says it is not a problem to have two 301 redirects in a row, but suggests redirecting directly to the final URL for a better user experience and to avoid the extra redirect latency (a sketch follows this list). He also explains that it may take some time for things to settle down, and that the fluctuations are the general changes you would see with any site move. He mentions that the missing images would probably be recrawled after a few cycles, and states that the error pages responding with 200 status codes are not a problem.
  • 00:35:00 In this section, John Mueller discusses the importance of allowing Google to crawl both CSS and JavaScript files, as this enables the search engine to determine whether pages are mobile-friendly (a robots.txt sketch follows this list). While it may not be essential for desktop websites, it is crucial for mobile websites. John also mentions how video content can be surfaced and highlighted in search results, and how Google identifies whether a video on a website is the primary content or not.
  • 00:40:00 In this section, John Mueller, Google's Senior Webmaster Trends Analyst, explains that Google takes into account all signals, including good-quality links and sound technical aspects of websites, but that relying solely on external signals is not recommended. In another question, he advises the audience to focus on actionable steps, such as creating a sitemap file with the important URLs you want to have indexed (a sketch follows this list). John also explains that although content is king, in some cases a website with no content can still show up in search results if it has external signals; however, he stresses that it is easier to resolve technical issues on a site than to compensate with other means. Finally, an attendee asks whether Penguin updates can be raised with Google's webmaster team.
  • 00:45:00 In this section, John Mueller explains that Google's algorithms are automated and are not manually adjusted by Google personnel; they run globally and on a continuous basis. Consequently, shifts and fluctuations occur even though Google tries to be cautious so as not to cause upheavals. Mueller denies that there was a rollback of any search elements, but acknowledges that such changes may affect traffic to websites, especially those in highly competitive industries.
  • 00:50:00 In this section, John Mueller explains that webmasters cannot request the removal of content that mentions their business on another site and shows up in search results, as this is handled algorithmically by Google. However, if a site is copying a webmaster's content, a DMCA complaint or other legal means can be used to resolve the issue. Mueller also explains that under the new European regulation, content removal from search results is not a simple webmaster request; it is a legal process that must be followed. If a site is not ranking well for competitive search terms, it may be due to a lack of signals, which can be addressed by focusing on a smaller niche, marketing campaigns, ads, and social media marketing.
  • 00:55:00 In this section, John Mueller discusses the difficulties of ranking in competitive areas and suggests being unconventional to help stand out. He cautions against assuming that there is one trick to ranking and highlights the importance of building up an alternate stream of users that return to the site even if search traffic fluctuates. Regarding a question about the number of URLs indexed dropping in Webmaster Tools, he indicates that it wouldn't necessarily mean a drop in traffic unless there's a technical problem. Finally, he suggests that having many links from a company with a Singaporean domain shouldn't be a cause for concern and can sometimes happen.
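
Sketch for the 00:00:00 entry: the difference Mueller describes can be seen in where each mechanism lives. rel=canonical is a hint inside the duplicate page, which Google still has to fetch and reconcile; a 301 is a server response that sends clients straight to the target. All URLs and paths below are placeholders.

```html
<!-- On the duplicate page: rel=canonical is a hint; the duplicate still gets
     crawled, and Google does extra work to consolidate signals onto the
     preferred URL. -->
<link rel="canonical" href="https://example.com/preferred-page/">
```

```nginx
# A 301, by contrast, is a server-level redirect: the duplicate URL is never
# served at all. (nginx syntax; host and paths are placeholders.)
location = /duplicate-page/ {
    return 301 https://example.com/preferred-page/;
}
```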
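
Sketch for the 00:05:00 entry: when cleaning up with 301s, point every old URL straight at its final destination rather than chaining hops. A minimal nginx sketch with placeholder paths:

```nginx
# Chained:  /old-page/ -> /interim-page/ -> /final-page/  (two fetches for Google)
# Direct:   both old URLs point straight at the final page (one fetch each)
location = /old-page/     { return 301 /final-page/; }
location = /interim-page/ { return 301 /final-page/; }
```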
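
Sketch for the 00:10:00 entry: with separate mobile URLs, Google's documented convention is a bidirectional annotation, rel=alternate on the desktop page and rel=canonical on the mobile page. Hostnames here are placeholders; getting either half of this pairing wrong is the kind of implementation problem that can surface mobile pages in desktop results.

```html
<!-- On the desktop page (www.example.com/page): point to the mobile equivalent -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">

<!-- On the mobile page (m.example.com/page): point back to the desktop version -->
<link rel="canonical" href="https://www.example.com/page">
```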
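
Sketch for the 00:20:00 entry: serve a 410 so crawlers record the removal, while the page body helps users move on to related items. The paths and the nginx error_page wiring are assumptions for illustration, not the exact setup discussed in the hangout.

```nginx
# Removed product URLs return 410 Gone, with a custom body for users.
error_page 410 /gone.html;
location ~ ^/products/discontinued- {
    return 410;
}
```

```html
<!-- /gone.html: crawlers see the 410 status; users are offered related items
     and then moved along with a JavaScript redirect. -->
<p>This product is no longer available. Here are some related categories…</p>
<script>
  setTimeout(function () {
    window.location.href = "/category/related-items/";
  }, 3000);
</script>
```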
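
Sketch for the 00:30:00 entry: when a URL-structure change and an HTTPS move happen together, the lazy setup is two hops (old path to new path, then HTTP to HTTPS). Collapsing them into one direct 301, as Mueller suggests, looks roughly like this (host and path patterns assumed):

```nginx
server {
    listen 80;
    server_name example.com;

    # One hop: old HTTP URL straight to the final HTTPS URL,
    # instead of old -> new (HTTP) -> new (HTTPS).
    rewrite ^/old-structure/(.*)$ https://example.com/new-structure/$1 permanent;
}
```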
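
Sketch for the 00:35:00 entry: the usual culprit is a robots.txt that disallows asset directories, which prevents Google from rendering pages and judging mobile-friendliness. Directory names are placeholders.

```text
# robots.txt — keep CSS and JavaScript crawlable so Google can render pages.
User-agent: *
Disallow: /admin/
# Problematic lines to avoid (blocking assets breaks rendering checks):
# Disallow: /css/
# Disallow: /js/
```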
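
Sketch for the 00:40:00 entry: a minimal sitemap file listing the important URLs you want indexed, using the standard sitemaps.org schema (the URLs and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/important-page/</loc>
    <lastmod>2015-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/another-important-page/</loc>
  </url>
</urlset>
```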
