Summary of English Google Webmaster Central office-hours hangout


00:00:00 - 01:00:00

In a recent Google Webmaster Central office-hours hangout, John Mueller advised a webmaster who had discovered 90 indexed subdomains for a site that didn't exist to remove the wildcard DNS entry, explaining that duplicate content resulting from this kind of issue "shouldn't cause problems." Mueller also discussed how Google identifies high-quality content through a variety of algorithm updates and techniques, including rewriting titles to improve search results, and responded to questions about the impact of frequently changing metadata and the slow recovery of rankings following a site migration to HTTPS. He also clarified that keyword stuffing can cause a drop in rankings but is not necessarily critical, and addressed how Google deals with email spam and negative sentiment surrounding links.

  • 00:00:00 In this section, John Mueller, a Webmaster Trends Analyst at Google in Switzerland, advises a webmaster who had discovered 90 subdomains indexed for a site that didn't exist. Mueller suggests that removing the wildcard DNS entry is the most important part of fixing the problem (a small detection sketch follows this list), and states that this type of duplicate content should not cause problems for websites as it is "just a technical problem." Mueller then answers another question about a site that had a manual link penalty applied to subdomains, explaining that both site-wide and subdomain-specific penalties are possible, but that penalties are usually applied at the more granular subdomain level.
  • 00:05:00 In this section, John Mueller discusses the ways a website can accidentally block the Google crawler besides robots.txt, such as blocking its IP address or having a firewall that identifies Googlebot as a malicious script and blocks it. He also addresses the issue of serving a server error, such as a 500, for the robots.txt file, which would cause Google to assume that it can't crawl any content and stop crawling completely (a small robots.txt check is sketched after this list). Furthermore, he highlights the importance of recognizing higher-quality content and promoting it in search through the various algorithms that work with the content to rank it better.
  • 00:10:00 In this section, John Mueller and Joshua Berg discuss how Google's algorithms identify high-quality content and how this applies to algorithm updates like Panda. They clarify that Google's algorithms look at both the positive and negative aspects of a site's content and its signals so they can show more relevant, higher-quality results, and are not just concerned with detecting bad content or penalizing websites. They also recommend that webmasters provide feedback to Google and update the disavow file when new URLs are found, to help improve search results. Furthermore, Mueller explains that Google rewrites titles in the search results to improve their quality when needed, such as when a title is too long or contains keyword stuffing that isn't useful to the audience.
  • 00:15:00 In this section, John Mueller from Google explains how Google sometimes takes into account site titles from DMOZ, the Open Directory Project, and may use them in search results. Webmasters can block the ODP title with the noodp meta tag, but the other title rewrites are something Google's algorithms do automatically. Titles can change depending on the query; from Google's point of view, there is a small collection of titles that are swapped in and out depending on what the user is searching for. Additionally, Mueller explains that Google's newly launched description snippets feature shows up in a lot of results, but these are still experimental features and sometimes hard to do algorithmically. Mueller recommends that webmasters structure their information as much as possible and use schema.org markup to provide more information about the specific entities being talked about (a minimal JSON-LD example follows this list). He also mentions that Google picks up schema markup automatically and there is no need to submit it anymore.
  • 00:20:00 In this section, the attendees discuss the possibility of a search query report that displays a list of titles and click-through rates (CTR) for specific queries, which John Mueller finds interesting and mentions is being worked on. The group also debates the impact of frequently changing metadata on keyword rankings and whether switching from HTTP to HTTPS has any drawbacks or advantages. John reassures the group that modifying metadata every 15 days is fine, and that moving to HTTPS should not cause visible changes in rankings: sites may see fluctuations in the short term, but rankings usually settle back to about the same place, and there is no visible SEO advantage. Finally, the group discusses the slow recovery of previous rankings following a recent 301 move to a new domain. John explains that in practice a 301 move should settle down fairly quickly; if there are still issues, they could be either technical or unrelated to the change (a redirect spot-check is sketched after this list).
  • 00:25:00 In this section, Mueller adds that such issues should be manageable by doing a proper site move and waiting a month or two for the site to regain stability. As for the Penguin refresh cycle, the team is working on ways to improve it, but there is no specific detail to share yet. Uploading a disavow file effectively treats the listed links as nofollow and has an effect on algorithms and manual actions. A site owner experiencing a loss of traffic after a migration to HTTPS should have the site checked to see what's happening, as it may be due to fluctuations or other issues. The Panda update is global and affects different languages and countries; to maintain high-quality rankings, site owners should ensure that what they provide on their website is of the highest quality possible.
  • 00:30:00 In this section, John Mueller of Google Webmaster Central responds to a question regarding the trustworthiness of sites. He explains that Penguin is an algorithm aimed at demoting sites that engage in webspam techniques, such as having unnatural outbound links, but it is not specifically designed to promote trustworthy sites. However, during algorithm changes, Google looks at the first few pages of search results for many queries to check both for web spam and for high-quality sites that are suitable for users, to make sure the search results remain reliable.
  • 00:35:00 In this section, John Mueller discusses how to remove a sitelink from search results and mentions the option to demote a sitelink in Webmaster Tools. However, he notes that sitelinks are based on the query and may not always appear in search results. Regarding click-through rates, he advises webmasters to focus on making sure their content matches user needs and encourages them to visit the site. In addition, he explains that the fetch and render option in Webmaster Tools is useful for double-checking whether Googlebot can see the content and how it would render the page.
  • 00:40:00 In this section, John Mueller discusses the importance of CSS for mobile websites and how it can affect their ranking in search. If Google can't see a page's CSS, it can't determine whether the page is optimized for mobile devices, which poses a problem for smartphone search rankings (the robots.txt sketch after this list also checks that a stylesheet is crawlable). He advises webmasters to separate queries into different categories, such as branded or navigational queries, and assess click-through rates separately for each category. He also answers questions about disavow files and sitelinks.
  • 00:45:00 In this section, Mueller explains that webmasters are unlikely to notice a specific algorithm targeting keyword spamming, because it's just one aspect that's taken into account among many others. However, he notes that when spam is recognized on pages, the algorithms try to figure out how best to handle it. From his point of view, it's preferable for algorithms to focus on the good parts of the website and treat it appropriately while ignoring any spam that may have been unintentional or unnoticed by the webmaster.
  • 00:50:00 In this section, John Mueller discusses keyword stuffing and its potential impact on rankings. He notes that although keyword stuffing can cause a drop in rankings, it is not necessarily a critical issue that has to be resolved immediately; it simply won't produce a rise in rankings. However, maintaining a website with keyword stuffing requires constant attention and can cause more problems than benefits, so he recommends cleaning it up to avoid future worries. Additionally, Mueller states that using a company name multiple times on a page will not necessarily trigger the keyword stuffing algorithms and should not be a cause for concern as long as it is used in a reasonable way. Finally, Mueller clarifies that email spam is not taken into account by Google when it comes to ranking websites.
  • 00:55:00 In this section, John Mueller discusses how Google deals with email spam and negative sentiment surrounding links. Google is unlikely to take email spam into account, as it's handled by a different part of the company. Negative sentiment is something Google tries to recognize, but it's difficult to do accurately; extreme cases are taken into account, but positive or negative mentions are generally trusted. For optimizing a site for a different country, Mueller suggests using geotargeting or hreflang markup (an hreflang example follows this list). As for blogs, category pages showing up in search instead of the actual blog posts can be due to technical issues or a lack of trust in the website; Mueller advises first checking for technical issues and, if everything is in order, working on improving the quality of the website.
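
To go with the wildcard-DNS answer at 00:00:00, here is a minimal sketch, not from the hangout itself, of one way to check whether a domain still answers for arbitrary subdomains; example.com is a placeholder and only the Python standard library is assumed.

```python
# Minimal sketch: a wildcard DNS entry (*.example.com) makes any random
# subdomain resolve, so resolving a made-up label is a quick way to spot one.
import socket
import uuid

def has_wildcard_dns(domain: str) -> bool:
    """Return True if a random, almost certainly nonexistent subdomain resolves."""
    random_label = uuid.uuid4().hex  # e.g. 'f3a1c2...' - not a real subdomain
    try:
        socket.gethostbyname(f"{random_label}.{domain}")
        return True   # it resolved, which suggests a wildcard record is still in place
    except socket.gaierror:
        return False  # no record for the random label, so no wildcard

if __name__ == "__main__":
    print(has_wildcard_dns("example.com"))  # placeholder domain
```

If this still returns True after the wildcard record has supposedly been removed, the change may simply not have propagated yet.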
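
For the robots.txt and blocked-CSS points at 00:05:00 and 00:40:00, this is a small sketch, assuming a placeholder site and stylesheet path, that checks the status code of robots.txt (a 5xx response can make Googlebot stop crawling) and whether Googlebot is allowed to fetch a stylesheet.

```python
# Minimal sketch: check robots.txt health and CSS crawlability.
# SITE and the stylesheet path are hypothetical placeholders.
import urllib.error
import urllib.request
import urllib.robotparser

SITE = "https://example.com"

# 1. robots.txt should return 200 (or 404 if the file simply doesn't exist);
#    a 5xx response can cause Googlebot to assume nothing may be crawled.
try:
    with urllib.request.urlopen(f"{SITE}/robots.txt") as resp:
        print("robots.txt status:", resp.status)
except urllib.error.HTTPError as err:
    print("robots.txt returned an error status:", err.code)

# 2. Verify that Googlebot may fetch a stylesheet, so mobile pages can be rendered.
rp = urllib.robotparser.RobotFileParser(f"{SITE}/robots.txt")
rp.read()
print("CSS crawlable:", rp.can_fetch("Googlebot", f"{SITE}/assets/styles.css"))
```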
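
For the schema.org recommendation at 00:15:00, here is a minimal sketch of a JSON-LD block for an organization page; all field values are hypothetical placeholders, and the output would go inside a script tag of type application/ld+json on the page.

```python
# Minimal sketch: build a schema.org JSON-LD description of an organization.
# Every value below is a placeholder; adjust @type and properties to the
# entity the page actually describes.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Widgets Ltd",            # placeholder name
    "url": "https://example.com",             # placeholder URL
    "logo": "https://example.com/logo.png",   # placeholder logo
}

print(json.dumps(organization, indent=2))
```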
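
To go with the site-move discussion at 00:20:00 and 00:25:00, this sketch spot-checks that old URLs answer with a single 301 pointing at the expected new URL; the URL pairs are hypothetical placeholders and only the standard library is used.

```python
# Minimal sketch: verify first-hop 301 redirects after a site move
# (HTTP -> HTTPS or old domain -> new domain). URL pairs are placeholders.
import http.client
from urllib.parse import urlsplit

REDIRECT_PAIRS = [
    ("http://example.com/", "https://example.com/"),
    ("http://example.com/about", "https://example.com/about"),
]

def first_hop(url):
    """Return (status, Location header) of the first response, without following redirects."""
    parts = urlsplit(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(parts.netloc)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

for old_url, expected in REDIRECT_PAIRS:
    status, location = first_hop(old_url)
    ok = status == 301 and location == expected
    print(f"{old_url} -> {status} {location} {'OK' if ok else 'CHECK'}")
```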
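
Finally, for the country-targeting advice at 00:55:00, this sketch prints rel="alternate" hreflang link tags, one common way to implement hreflang markup; the locale codes and URLs are hypothetical placeholders.

```python
# Minimal sketch: emit hreflang alternate tags for per-country versions of a page.
# Locale codes and URLs are placeholders; "x-default" marks the fallback version.
ALTERNATES = {
    "en-us": "https://example.com/us/",
    "en-gb": "https://example.com/uk/",
    "de-de": "https://example.com/de/",
    "x-default": "https://example.com/",
}

for lang, url in ALTERNATES.items():
    print(f'<link rel="alternate" hreflang="{lang}" href="{url}" />')
```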

01:00:00 - 01:10:00

During the "English Google Webmaster Central office-hours hangout," John Mueller from Google provides advice for website owners. He suggests implementing CAPTCHAs to stop scripts from creating pages, using no-follow links and disavowing domains that cause harm. Mueller advises submitting the URL for him to check that it is clean. Mueller also suggests that a website built entirely using AJAX, Campus, and WebGeo shouldn't be treated as a bad site, but rendering options need to be used. Having separate URLs is essential to allow Google to crawl individual pages, index them and bring more visitors to a Google+ page. Mueller also explains that Google attempts to take the rel canonical signal into account but cannot trust it when used in a negative manner. Lastly, Mueller describes how disavow files can help identify legitimate market sites that have high-quality content and good links, but are negatively impacted by spam lists, ultimately affecting the other content on the website.

  • 01:00:00 In this section, a website owner complains that SEO companies have spammed their website, creating over 20,000 profiles and links to it and causing traffic to drop from 50,000 visits per day to 200. John Mueller recommends using CAPTCHAs to block scripts from creating pages automatically, using nofollow links, and disavowing domains that cause harm (a sketch of the disavow file format follows this list). He notes that after significantly changing a website, one should expect to see changes in rankings within two to three months. Finally, he advises the website owner to submit their URL so he can take a look and double-check that it's really clean.
  • 01:05:00 In this section, it is explained that a website built completely with AJAX, Canvas, and WebGL will not be treated as a bad site, but it is harder for Google to get to the content, so the rendering option must be used. The video also highlights that it is essential to have separate URLs on the website so that Google can crawl and index individual pages. When asked how to bring more visitors to Google+ pages, the advice was to treat them like any other page on the web: recommend them to users and encourage visitors to recommend them to their friends. Finally, it is explained that Google tries to take the rel canonical signal into account but cannot trust it when it is used in a misleading way.
  • 01:10:00 In this section, Google's John Mueller discusses the use of rel canonical and disavow files. Mueller suggests that while rel canonical is helpful for confirming a move and indicating that something is not broken on the website, it may be ignored in certain situations. As for disavow files, they can help legitimate sites that have high-quality content and good links but are stuck on spam lists, which ultimately affects the other content on the website. Although Google sees the overall use of disavow files, it cannot confirm that they specifically feed into Panda or other search algorithms. Additionally, Mueller states that nofollow tags on comment links do not stop spam entirely, but Google does its best to filter such links out without it hurting the website in its algorithms.
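
As a companion to the disavow advice at 01:00:00, here is a minimal sketch of the plain-text disavow file format (one "domain:" entry or full URL per line, "#" for comments); the domains and URLs listed are hypothetical placeholders.

```python
# Minimal sketch: write a disavow file in the format Google's disavow tool accepts.
# All domains and URLs below are hypothetical placeholders.
BAD_DOMAINS = ["spammy-directory.example", "link-farm.example"]
BAD_URLS = ["http://forum.example/profile/12345"]

lines = ["# disavow file for example.com"]
lines += [f"domain:{d}" for d in BAD_DOMAINS]
lines += BAD_URLS

with open("disavow.txt", "w", encoding="utf-8") as fh:
    fh.write("\n".join(lines) + "\n")

print("\n".join(lines))
```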
