
The Knowledge Panel Course: The Three Google Knowledge Algorithms


Author: Jason Barnard
Instructor: Jason Barnard
Platform: Kalicube Academy
Producer: Kalicube SAS
Publisher: Kalicube SAS
Year Released: 2022
Has Parts:
The Knowledge Panel Course: Building Google’s Confidence in Your Entity
The Knowledge Panel Course: How Google Chooses What Photos and Logos to Show
The Knowledge Panel Course: Getting Your Knowledge Panel to Show on Your Brand SERP
The Knowledge Panel Course: Managing People Also Search For and Related Searches
The Knowledge Panel Course: Getting Your Entity Into Google’s Knowledge Vault
The Knowledge Panel Course: How a Knowledge Panel Is Built
The Knowledge Panel Course: The Google Knowledge Extraction Algorithm
The Knowledge Panel Course: What Information Does Google Show in Knowledge Panels?
The Knowledge Panel Course: The Three Google Knowledge Algorithms
The Knowledge Panel Course: How to Change Information in a Knowledge Panel
The Knowledge Panel Course: How to Claim a Knowledge Panel
The Knowledge Panel Course: Six Knowledge Verticals that Trigger a Knowledge Panel
The Knowledge Panel Course: How Google’s Knowledge Graph Works
The Knowledge Panel Course: The Powerful Geeky Way to Join the Dots
The Knowledge Panel Course: The Non-Geeky Way to Join the Dots
The Knowledge Panel Course: Identifying the Relevant Corroborative Sources
The Knowledge Panel Course: Writing Your Entity Description
The Knowledge Panel Course: Building Your Entity Home
The Knowledge Panel Course: Getting a Knowledge Panel in Three Easy Steps
The Knowledge Panel Course: Educating the Child That Is Google
Introduction to the Knowledge Panel Course

Jason Barnard speaking: Hi and welcome. You now know how a Knowledge Panel is built, which vertical Knowledge Graphs are available to you, and how to get into the Knowledge Vault.

Jason Barnard speaking: Of course, every aspect of the Knowledge Vault, Knowledge Panels, and Brand SERPs is managed by machine learning algorithms. In this lesson, I will focus on the three Knowledge Algorithms. I’ll explain how they work and what you can do to help them understand your entity.

Jason Barnard speaking: I’d like to make one very obvious but incredibly important point here. Machine learning algorithms evolve constantly. They are learning to learn, which means that they change every time they run and they improve exponentially. Add to that the fact that Google engineers regularly tweak the algorithms and feed them additional data, and you can see that Knowledge Panels are a constantly moving target, and that managing them is necessarily an ongoing monthly task, year after year after year.

Jason Barnard speaking: So, back to Google’s Knowledge Algorithms. What are they? Where do they intervene? How do they work? And how can we feed them effectively?

Jason Barnard speaking: Let’s start with a simple definition. Google has many algorithms that help it understand the world and display that understanding on the SERP to its users in a helpful manner. The Knowledge Algorithms operate on three main levels: one is responsible for curating information for the Knowledge Vault and the Web Index Knowledge Graph, one builds Knowledge Panels, and the third assimilates information from the world wide web so that the other two can access it.

Jason Barnard speaking: Let’s start with the foundational algorithm, and I would suggest the most important: the Knowledge Extraction Algorithm. This Knowledge Algorithm is part of Googlebot, and it creates structured data from mostly unstructured online content. When Googlebot and Bingbot crawl a web page, they extract the information from that page, attempt to give it structure, and annotate it before putting it in the index. The algorithm also attributes a confidence score to each annotation, indicating its level of confidence that the annotation is accurate.

Jason Barnard speaking: Always remember that the bot is looking at this page in isolation, so it needs as many helpful clues as possible on the page itself in order to annotate accurately. There is an entire lesson about that in this course.
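One practical way to give the crawler those on-page clues is explicit structured data markup on the Entity Home. The sketch below is not taken from the course itself; it is a minimal, hypothetical example using schema.org JSON-LD, with placeholder names and URLs, to show what machine-readable clues on a page can look like.

```python
import json

# Hypothetical example only: machine-readable clues embedded on an Entity Home page.
# The entity name, description, and URLs are placeholders, not real data.
entity_home_markup = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Example Person",
    "description": "One clear, factual sentence describing the entity.",
    "url": "https://example.com/about/",  # the Entity Home itself
    "sameAs": [
        # corroborative profiles the crawler can cross-reference
        "https://www.linkedin.com/in/example-person/",
        "https://twitter.com/example_person",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(entity_home_markup, indent=2))
```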

Jason Barnard speaking: The Knowledge Panel Algorithm builds Knowledge Panels on Google SERPs. If you haven’t already done so, watch the lesson about how a Knowledge Panel is built.

Jason Barnard speaking: The Knowledge Panel Algorithm determines what can be shown in the Knowledge Panel of an entity. The role of the Knowledge Panel in a SERP is to provide the user with a summary of factual information about an entity without having to visit multiple web pages. As such, much of the information displayed is found in the pages behind the blue link results.

Jason Barnard speaking: In the lesson about the Knowledge Panel Algorithm and how a Knowledge Panel is built, I use a chest of drawers analogy to explain this process. And I also present techniques you can use to manage the contents of Knowledge Panels.

Jason Barnard speaking: The Knowledge Panel Algorithm populates Knowledge Panels with information from the Web Index that it considers to be reliable fact. And bear in mind the role the Knowledge Extraction Algorithm plays here. If it has accurately annotated a piece of content, then that information is much more likely to be found by the Knowledge Panel Algorithm. And if the annotation has been attributed a high confidence score, then it is much more likely to be considered positively by the Knowledge Panel Algorithm.

Jason Barnard speaking: As you can now see, the annotation is absolutely vital since without it, the information won’t even be considered by the other algorithms. And a high confidence score will move that information upwards towards the front of the queue.
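To picture how annotation and confidence scores interact, here is a purely conceptual Python sketch. It is not Google’s code; the data structure, scores, and ordering are invented to illustrate the two points above: unannotated information cannot be considered at all, and a higher confidence score moves a claim towards the front of the queue.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    source_url: str
    claim: str          # e.g. "founder = Example Person"
    confidence: float   # 0.0 to 1.0, assigned at extraction time

# Hypothetical annotations extracted from pages about one entity.
annotations = [
    Annotation("https://example.com/about/", "founder = Example Person", 0.95),
    Annotation("https://example.org/interview", "founder = Example Person", 0.60),
    Annotation("https://example.net/blog", "year founded = 2015", 0.40),
]

# Content that was never annotated simply never appears in this list,
# so it cannot be considered. Higher confidence rises to the front.
for a in sorted(annotations, key=lambda a: a.confidence, reverse=True):
    print(f"{a.confidence:.2f}  {a.claim}  ({a.source_url})")
```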

Jason Barnard speaking: Now, the Knowledge Vault Algorithm adds information, hopefully facts, to the Main Knowledge Graph, which I also call the Knowledge Vault. The ultimate aim for Google is to understand the whole world by filling this Knowledge Vault with all the facts about everything. Obviously impossible, but a nice target to aim for.

Jason Barnard speaking: Since Google owns multiple human curated vertical Knowledge Graphs, one technique it uses is to move entities from the verticals into the Knowledge Vault. As such, this Knowledge Algorithm is responsible for selecting the entities or information it can move from the different vertical Knowledge Graphs (Podcasts, Books, Scholars, et cetera) to the Knowledge Vault.

Jason Barnard speaking: This process is pure machine learning. That means that the algorithm decides, and humans don’t get a say. Before moving an entity into the Knowledge Vault, the Knowledge Vault Algorithm must first assess multiple resources from the web that corroborate the information in the vertical Knowledge Graph to a level where it is absolutely sure the information is true.
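As a rough mental model of that corroboration requirement, here is a toy sketch. The claim, sources, and threshold are invented, and Google’s actual checks are far more sophisticated; the principle is simply that the claim from the vertical Knowledge Graph must be confirmed by enough independent web sources before the entity is promoted.

```python
# Toy illustration only, not Google's algorithm. The claim, sources,
# and threshold are placeholders invented for this sketch.
vertical_graph_claim = "Example Person is the founder of Example Brand"

# Does each independent web source corroborate the claim?
web_sources = {
    "https://example.com/about/": True,
    "https://example.org/profile": True,
    "https://example.net/article": True,
    "https://example.info/bio": False,
}

corroborating = sum(web_sources.values())
required = 3  # arbitrary threshold for the example

if corroborating >= required:
    print("Sufficient corroboration: candidate for the Knowledge Vault.")
else:
    print("Not enough corroboration: stays in the vertical Knowledge Graph.")
```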

Jason Barnard speaking: Now, a quick aside about the web vertical Knowledge Graph I talked about in the lesson about the vertical Knowledge Graphs. Because it is not human curated like the others, it has a special role. In order to understand the whole world, Google cannot rely on human curated sources. It needs to create knowledge from the open web. As Google expands the Knowledge Vault, the web vertical Knowledge Graph will increasingly serve as the best first step for an entity to be added to the Knowledge Vault.

Jason Barnard speaking: Some sources can partially skip this algorithmic validation process. Wikipedia is one example. However, as of late 2022, that is becoming less and less the case. At the end of the day, just remember every entity and every piece of information that is in the Main Knowledge Graph has been thoroughly checked and confirmed by machine learning algorithms beforehand.

Jason Barnard speaking: Your job is to give Google the information that allows it to understand your entity and to check the veracity of that information. You do that by presenting the information on multiple trusted, authoritative sources in a format that the Knowledge Extraction Algorithm can confidently annotate, starting, of course, with the Entity Home itself.

Jason Barnard speaking: Now, I’ll answer a question that should save you from some stressful waits and difficult conversations with your clients. How often do Google’s Knowledge Algorithms update?

Jason Barnard speaking: Like the core algorithm, the three Knowledge Algorithms get manual updates by Google engineers, both algorithmic tweaks and injection of corrective and reinforcement training data. We cannot predict when these will happen. And when they do happen, Google doesn’t tell us. Kalicube Pro tracks activity in all three Knowledge Algorithms and provides a sensor that alerts us and helps us to understand what is happening when major changes occur across multiple entities.

Jason Barnard speaking: All Google’s algorithms use machine learning to a large degree. That means that they are improving all the time. I like to say that they are in a constant state of learning to learn. That means any of them can suddenly have a lightbulb moment about any piece of information about any entity at any moment. This is one reason why any individual Knowledge Panel, Knowledge Vault entry, or confidence score in the Knowledge Vault can change from one day to the next without any updates to the algorithms or changes to the information online.

Jason Barnard speaking: Since the Knowledge Extraction Algorithm is attached to web crawling and the Web Index, it is categorising and tagging all the information as Googlebot crawls the web. That means the Knowledge Extraction Algorithm updates its assessment of knowledge in real time. Bear in mind that it updates each source individually and in isolation.

Jason Barnard speaking: The algorithm that builds Knowledge Panels uses the Web Index and is therefore in constant evolution on an entity by entity basis. However, it is very reticent about including new or updated information, since this information is shown very prominently on the SERP, which means Google’s reputation for reliability is on the line. Here, it is helpful to consider that it requires multiple corroborative sources at the same time: it brings together multiple annotated sources when it changes information in the Knowledge Panel chest of drawers that I talked about in the lesson on how a Knowledge Panel is built.

Jason Barnard speaking: Now, the Knowledge Vault Algorithm is by far the slowest moving of the three. Google is still trying to get the foundations right. They need to be careful that the Knowledge Vault doesn’t fill up with junk as they move away from the human curated data provided by their foundational sources such as Wikipedia, Freebase, Wikidata, MusicBrainz, and IMDb. This means that updates are less frequent and, as of late 2022, appear to be dominated by manual intervention by Google engineers. That makes these updates more unpredictable than the other two.

Jason Barnard speaking: When the Knowledge Algorithm that feeds the Knowledge Vault is updated manually by engineers at Google, the update is for one or more of the following three reasons: number one, algorithm tweaks, number two, an injection of curated training data, number three, a re-evaluation of the contents of the Knowledge Vault using data from the Web Index.

Jason Barnard speaking: That said, the Knowledge Vault Algorithm assesses the contents of the Knowledge Vault constantly. Any entity can be added, removed, or changed in the Knowledge Vault at any time. Tracking on Kalicube Pro suggests that this affects well under 0.1% of entities in any one day.

Jason Barnard speaking: All three Knowledge Algorithms will affect your Knowledge Panel short, medium, and long term. The Knowledge Extraction Algorithm is constantly analysing pages by or about you on first, second, and third party websites to create structured data from unstructured online content. You can usefully consider this to be a daily update.

Jason Barnard speaking: The Knowledge Panel Algorithm is constantly cross-checking facts extracted by the Knowledge Extraction Algorithm to ensure that the contents of your Knowledge Panel are correct and up to date. You can usefully consider this to be a weekly update.

Jason Barnard speaking: The Knowledge Vault Algorithm updates sporadically and in unpredictable ways, since it is still closely and manually managed by Google engineers with a data lake approach. You can usefully consider this to be a monthly update. You’ll find an article about data lakes in the additional material for this lesson.

Jason Barnard speaking: The final point of this lesson is that the trick for all three algorithms is to present information in a structured format and ensure that all first, second, and third party web pages dedicated to the entity provide consistent corroboration.
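As a simple illustration of what consistent corroboration means in practice, here is a toy audit script. The sources and descriptions are placeholders, and a real audit would allow looser matching than exact string equality; the point is only that every first, second, and third party page should state the same core facts about the entity.

```python
# Hypothetical consistency audit across first, second, and third party pages.
# Descriptions are placeholders; exact-match comparison keeps the sketch simple.
descriptions = {
    "entity_home":    "Example Person is a digital marketer and founder of Example Brand.",
    "linkedin":       "Example Person is a digital marketer and founder of Example Brand.",
    "guest_post_bio": "Example Person is a consultant at Example Brand.",
}

reference = descriptions["entity_home"]  # the Entity Home is the point of reference
for source, text in descriptions.items():
    status = "consistent" if text == reference else "INCONSISTENT - update this source"
    print(f"{source:15} {status}")
```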

Jason Barnard speaking: To give you a timeline, here is what we see when we or an agency uses Kalicube Pro effectively. Getting a presence in the Web Index vertical Knowledge Graph or the Knowledge Vault takes a few weeks to three months. Updating a piece of information in the Knowledge Panel will take a few days to a few weeks. That time frame depends immensely on the amount of mess that is out there, on whether or not a Wikidata or a Wikipedia page has been created and deleted, and on how fast and effectively you can update all the corroborative sources.

Jason Barnard speaking: Now, Knowledge Panel management can feel like a slow process, and it can be frustrating when nothing changes for weeks or even months. But now that you understand how all three Knowledge Algorithms work and have an idea of their timelines, you can see why, and you know what you need to do to convince each of these algorithms: structure, consistency, and corroboration.

Jason Barnard speaking: Thank you very much, and I’ll see you soon.
