Summary of English Google Webmaster Central office-hours hangout

This is an AI-generated summary. There may be inaccuracies.

00:00:00 - 01:00:00

John Mueller from Google advises website owners to maintain high-quality content by taking a broad view of website design, navigation, and credibility, rather than relying solely on technical tools. He also explains how Google's algorithms handle soft 404 pages and suggests webmasters improve crawling of their sites by fixing these technical issues. Mueller discusses webmasters' suggestions for filtering out spammy links, notes the challenges in doing so, and advises webmasters to disavow any domain generating junk activity. He also covers the importance of reciprocal hreflang tags between language versions of a website and of double-checking sitemap files. Throughout the video, Mueller offers practical tips for website owners who want to improve how their sites rank in search engines.

  • 00:00:00 In this section, John Mueller of Google discusses how to maintain a website’s high-quality content by going beyond textual content and taking a broader approach to website design, navigation, and credibility. While technical tools can help you understand why users find certain content on your website, they should not be the primary method for identifying high-quality or low-quality content. Ultimately, he believes it’s best to assess the quality of your content manually, taking a step back and viewing your site objectively to ensure it provides the best experience to users, with the aim of building a sense of trustworthiness.
  • 00:05:00 In this section, John Mueller discusses soft 404s and how Google's algorithms deal with them. Soft 404s are pages that return a normal response but have no real content, such as empty "no results" search pages. Google's algorithms detect these pages and treat them as 404 pages, which means they fall out of the index and are crawled less frequently. Webmasters can improve how their site is crawled by fixing these technical issues and focusing on pages that actually have content. In addition, Mueller advises webmasters to disavow any domain generating junk activity, even if it is otherwise reputable.
  • 00:10:00 In this section, John Mueller discusses suggestions from webmasters to filter out spammy links, which are irrelevant to Google's indexing anyway. The Webmaster Tools team has been reluctant to filter links by weight for fear of revealing which links count and which do not, but may consider recognizing and filtering out spammy links, though that would be a difficult task. In response to a question about HTML5, Mueller states that Google understands and supports the individual HTML5 tags but does not process them specially. Concerning hacked sites, he advises webmasters to use the "Fetch as Google" tool and to double-check their servers for content that may not be indexed but still points toward hacked content. Finally, on recovering from Google's current algorithms, Mueller suggests it may be faster to start over with a new site, a new design, and a viral content strategy if the business doesn't care about the domain at all.
  • 00:15:00 In this section, John Mueller discusses whether it is a good decision to move to a new domain, stating that it can be a tough call: it is not something that can be done easily, and there is no general answer that applies to everyone. He then explains that a penalty affecting a website that is being redirected to another website is not something their systems recognize. Lastly, Mueller advises a webmaster who is struggling to attract organic search traffic to aim for best-in-class content, to clean up any artificially built links, and to seek advice from peers who may be able to spot issues from a technical or quality point of view.
  • 00:20:00 In this section, Mueller explains that unexpected keywords associated with a site could mean a hack is creating or injecting content, but if the words in question are merely irrelevant or insignificant, that's fairly normal and not something to focus on.
  • 00:25:00 In this section, John Mueller reassures a user that if insignificant words are found on their site that seemingly suggest a hack, such as "dub, dub, dub" (i.e., "www") or "HTTP", it doesn't mean the site is hacked, as these are just strings picked up while crawling the site. He also talks about a mismatch between the number of URLs submitted in sitemaps and the number of URLs actually indexed. Mueller suggests this is usually due to URLs in the sitemap file not exactly matching the ones that are indexed, and advises opening the sitemap file in a text editor to identify any discrepancies; consistency is key to making it easier for Google to focus on a site's URLs. Finally, he shares a chart showing how hreflang markup works across HTTP, HTTPS, and different language versions of a site.
  • 00:30:00 In this section, John Mueller of Google explains the importance of placing hreflang tags between the versions of a website that Google actually picks up for indexing. If a redirect points to the HTTPS version, you should use that version in the hreflang tag. If it is set up differently, such as a redirect pointing in the other direction, or if hreflang tags connect pages that aren't canonical, Google basically ignores them. Mueller also discusses how to use hreflang tags with pagination and rel=canonical pages.
  • 00:35:00 In this section, John Mueller advises website owners to double-check their sitemap files and ensure they only include URLs that should be indexed. He recommends sampling individual pages from the sitemap file to see whether they're indexed and technically OK, with no noindex directives, 404 errors, redirects, or broken canonical tags. Discrepancies between the sitemap file, the canonicals, or the indexed versions may indicate that the website owner is submitting URLs that don't match what Google picks up for indexing. He also advises against artificially making a page look fresh by putting a new date on it when nothing has changed, and suggests only updating the date when the page has actually been updated.
  • 00:40:00 In this section, John Mueller from Google says that if your website's content is scraped and copied by other sites, it shouldn't affect your search rankings when it comes to quality algorithms. Scraping happens all the time, and Google focuses on the original source of the content. Although Google might occasionally rank a scraper above the original site, they work hard on recognizing original content and treating it appropriately, and have generally been catching these cases; they are not seeing specific complaints about it recently. John also suggests filing a DMCA request if your content is copied and you don't want it used that way.
  • 00:45:00 In this section, an audience member shares their website URL for review by John Mueller, who states that he is still seeing issues and recommends digging further to clean up any problematic links. Mueller also mentions that mobile usability could become a ranking factor in the future as more people browse on smartphones, though he cautions that taking it into account directly as a ranking factor would be a different ball game. Lastly, there is a brief discussion of how adjustments to hreflang can impact rankings.
  • 00:50:00 In this section, the Google team discusses the potential consequences of frequently changing items such as hreflang annotations on websites. They also discuss copyright and how Google identifies original content, noting that Google generally recognizes the original source and that there is no need to constantly tweak a website's content to avoid scrapers. One may take legal action through the DMCA process against scrapers or proxy servers, but they would not recommend modifying website content simply to deter scraping. Finally, they discuss how to handle guest posts that point keyword-rich anchor text at the website, and how often such cases occur.
  • 00:55:00 In this section, John Mueller, a Google Webmaster Trends Analyst, discusses how to handle questionable links and when to submit disavow files. He advises that if a site owner is uncertain about a link's quality, they should add it to their disavow file, which is processed automatically and reduces the need to constantly track links. Mueller also mentions that sitemaps don't necessarily have to contain an entire website, and that adding customer testimonials to a website isn't a duplicate-content issue. Finally, he explains that hreflang changes might take some time to settle, and that it is hard to say why some sites swap positions with others in search results.
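The soft-404 behavior described at 00:05:00 can be sketched as a simple heuristic: flag pages that return HTTP 200 but look like empty "no results" pages. This is a hypothetical check, not Google's actual algorithm; the phrase list and length threshold are illustrative assumptions:

```python
# Hypothetical soft-404 detector: a 200 response whose body looks like an
# empty/no-results page. Google treats such pages like real 404s, so fixing
# them (return a real 404/410, or add content) helps crawl efficiency.

EMPTY_RESULT_PHRASES = ("no results found", "0 results", "nothing matched")
MIN_CONTENT_CHARS = 200  # assumed threshold for "has real content"

def looks_like_soft_404(status_code: int, body_text: str) -> bool:
    """Return True if a 200 response looks like an empty/no-results page."""
    if status_code != 200:
        return False  # real 404s/410s are already handled correctly
    text = body_text.strip().lower()
    if any(phrase in text for phrase in EMPTY_RESULT_PHRASES):
        return True
    return len(text) < MIN_CONTENT_CHARS
```

Pages flagged this way are candidates for either returning a proper 404 status or being filled with real content, per Mueller's advice.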
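The sitemap advice at 00:25:00 and 00:35:00 boils down to an exact comparison between submitted and indexed URLs. A minimal sketch of that check, assuming you already have the set of indexed URLs from some other report:

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace from sitemaps.org
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract the <loc> entries from a sitemap file."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def unmatched_urls(submitted: list[str], indexed: set[str]) -> list[str]:
    """Submitted URLs that don't exactly match an indexed URL.

    The comparison is deliberately exact: http vs https or a missing
    trailing slash counts as a different URL, which is the usual cause
    of a sitemap/indexed-count mismatch.
    """
    return [url for url in submitted if url not in indexed]
```

Opening the sitemap in a text editor, as Mueller suggests, is the manual version of the same exact-match comparison.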
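The hreflang rule from 00:30:00, that annotations only count between the canonical versions that actually get indexed and must point at each other, can be sketched as a reciprocity check. The data shape here is a hypothetical crawl result (page URL mapped to its language-to-URL annotations), not any real API:

```python
# Minimal sketch of an hreflang return-link check, assuming annotations
# have already been collected per canonical page URL. Per Mueller, hreflang
# between non-canonical pages is ignored, and pairs without a return link
# don't work, so both directions must be present.

def missing_return_links(
    annotations: dict[str, dict[str, str]],
) -> list[tuple[str, str]]:
    """Return (page, target) pairs where the target doesn't link back."""
    problems = []
    for page, langs in annotations.items():
        for target in langs.values():
            if target == page:
                continue  # a self-referencing hreflang entry is fine
            back = annotations.get(target, {})
            if page not in back.values():
                problems.append((page, target))
    return problems
```

An empty result means every hreflang pair is reciprocal; anything returned points at a page whose alternate never links back.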
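The disavow advice at 00:55:00 relies on the plain-text file format Google's disavow tool accepts: one entry per line, a `domain:` prefix to disavow a whole domain, and `#` for comments. A small helper to assemble such a file, with hypothetical example data:

```python
def build_disavow_file(domains: list[str], urls: list[str]) -> str:
    """Assemble a disavow file in the format Google's disavow tool accepts:
    one entry per line, 'domain:' prefix for whole domains, '#' comments."""
    lines = ["# Disavow file - generated example (hypothetical data)"]
    lines += [f"domain:{d}" for d in sorted(set(domains))]  # whole domains
    lines += sorted(set(urls))                              # individual URLs
    return "\n".join(lines) + "\n"
```

Following Mueller's advice to disavow whole domains when a site generates junk activity, the `domain:` form is usually the safer default over listing individual URLs.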

01:00:00 - 01:10:00

During a Google Webmaster Central office-hours hangout, John Mueller addresses various topics. He explains that Google doesn't have a sandbox per se to hold back new websites, although various algorithms can affect a new site's visibility in ways that look similar. The talk also covers how to improve the chances of an image being used in image search by giving it a unique landing page with context that helps search engines understand what the picture shows. Finally, Mueller discusses the trend of websites moving towards HTTPS and how the benefits depend on the website and type of content.

  • 01:00:00 In this section, John Mueller from Google addresses questions from users about handling hreflang errors and whether or not Google has a sandbox. Mueller confirms that there isn’t a Google sandbox algorithm in place to hold back new websites, but various effects may make it seem that way. He also notes that Google tries to recognize original images and treats them similarly to other web content when ranking images.
  • 01:05:00 In this section, John Mueller suggests that having a unique and valuable landing page for each image, with text that provides context, can improve the chances of the image being used in image search. A gallery page of images with no information can be problematic for image search, whereas an image landing page makes it easier for search engines to associate the image with the website's content. Original images should likewise have context available to help search engines understand what they are about. Finally, Mueller comments that moving a website to HTTPS shouldn't have any negative effect on search results, and that in the medium term most sites will move towards HTTPS.
  • 01:10:00 In this section, John Mueller talks about the trend of websites moving towards HTTPS and explains that it's not just a phase that will go away. He expects more and more websites to move to HTTPS, or to start on it directly. While some site owners may prefer to be at the forefront, he notes that ultimately it's up to them and depends on their website and content. For websites handling user content and performing personalized tasks for users, he believes it's critical to move to HTTPS as quickly as possible, while other websites may not need it as urgently.

Copyright © 2024 Summarize, LLC. All rights reserved.