InLinks has just added the concept of “Search Engine Understanding” (SEU) to its content optimization audits, so it seems a good time to explain the concept.
What is SEU?
SEU, or “Search Engine Understanding”, is an analysis of how well a search engine’s natural language processing (NLP) API recognizes the entities or topics on a page of content. Modern search engines (in particular, Google) have moved towards connecting a page’s underlying concepts to their knowledge graphs. However, parsing text to extract underlying concepts is not an exact science. InLinks publishes regular case studies tracking Google’s understanding of concepts in different sectors, measured against its own natural language API.
How do you Calculate SEU?
Calculating SEU is done by comparing two different NLP (Natural Language Processing) algorithms. The first is Google’s public Natural Language API and the second is InLinks’ own proprietary NLP API, which is not currently public. We then compare the defined entities in both data sets to produce a percentage score: the proportion of entities that Google registers in its API.
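The comparison described above can be sketched in a few lines. This is a minimal illustration only: the entity names and the simple case-insensitive matching are assumptions, since InLinks’ actual matching logic is not public.

```python
def seu_score(inlinks_entities, google_entities):
    """Return the percentage of InLinks-detected entities that
    Google's NLP API also registered for the same page."""
    if not inlinks_entities:
        return 0.0
    # Normalize to lowercase so "Knowledge Graph" matches "knowledge graph"
    inlinks = {e.lower() for e in inlinks_entities}
    google = {e.lower() for e in google_entities}
    recognized = inlinks & google
    return 100.0 * len(recognized) / len(inlinks)

# Hypothetical page: InLinks finds five topics, Google registers two of them
inlinks = ["knowledge graph", "SEO", "entity", "schema", "NLP"]
google = ["Knowledge Graph", "SEO"]
print(seu_score(inlinks, google))  # 40.0
```

A score of 40% here would mean Google’s public API confirmed two of the five topics InLinks believes the page is about.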
How Good is the InLinks NLP API?
The InLinks API is deliberately designed to be aggressive at extracting entities from a corpus of text. This will occasionally mean that it finds topics a human would say are incorrect (for example, it might see text talking about an “engine” and incorrectly associate it with a combustion engine). This aggressive approach is important for SEOs, however, because it is the job of the content optimizer to match genuine topics to the about schema and to help ensure the CORRECT entities are communicated to Google. (InLinks also helps automate this through about schema.)
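As a concrete illustration of what communicating entities via about schema looks like, here is a sketch that builds schema.org markup stating explicitly which entities a page is about. The page URL and entity are example values, and the structure is a hand-written sketch, not InLinks’ actual output.

```python
import json

# Example schema.org "about" markup for a hypothetical page that
# discusses jet engines (not combustion engines). The "sameAs" link
# disambiguates the entity for search engines.
page_schema = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "url": "https://example.com/jet-engines-guide",  # hypothetical URL
    "about": [
        {
            "@type": "Thing",
            "name": "Jet engine",
            "sameAs": "https://en.wikipedia.org/wiki/Jet_engine",
        }
    ],
}

# Serialize as the JSON-LD that would go in a <script> tag on the page
print(json.dumps(page_schema, indent=2))
```

Markup like this removes the ambiguity that an aggressive extractor (or Google itself) might otherwise stumble on.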
Does Google use the same API in its search engine as the one it offers publicly?
We only know that the API we use is Google’s official public Natural Language API, part of Google’s larger Cloud Machine Learning API family.
Why is SEU helpful for SEOs and how do I use it?
If you can phrase your content so that a search engine can easily extract the correct topics as relevant and meaningful, then the search engine can store those topics very efficiently in its Knowledge Graph. When a user enters a particular search query, the search engine can look at the pattern of topics likely to be relevant to answering that query, and then display results whose topical footprint closely matches the one the searcher needs.
Should I be aiming for 100%?
No, not really. Our Natural Language Understanding case studies show that best-of-breed sites only score around 18% on average at the time of writing, although this varies quite significantly by sector. In education, for example, Genie Jones at Warwick University spotted that machine understanding is significantly better, at 34%, and might even be doing its part to help widen participation in higher education. On the other hand, dumbing down your text just to help a “dumb search engine” might also be introducing a negative bias for humanity. It is a complex topic that I will enjoy philosophizing over for years to come. (Keynote opportunity, anyone?)