Google could be messing up the Web and it’s our fault

A textbook case of how trying to solve one problem creates a new one. Now imagine trying to solve two at once.


How Google’s search algorithm actually works is once again under intense discussion following the company’s latest core update. This time, though, its implications might be larger than ever. (Image: Ekaterina Bolovtsova, Pexels)


Anyone following the creator economy, digital marketing and search engine optimization markets already knows that yet another Google search algorithm update is currently being rolled out. It is called the “helpful content update”, it was officially deployed at the end of August and it will soon reach every corner of the Web. A predictable pattern emerges whenever these updates happen: bloggers, marketers, YouTubers and virtually anyone whose work depends on Google search results get a bit jumpy, many try to make sense of the algorithm’s changes and how they will affect their content’s ranking, and some even make predictions about the after-effects. Most just cross their fingers and wait for the dust to settle.

This is normal: it happened in May, it happened last December, and it will happen again (and again). What’s unusual about this particular Google search algorithm update is that it serves a specific purpose the company has made clear in no uncertain terms: to boost the ranking of websites containing articles the algorithm considers “helpful” and, at the same time, to downgrade websites featuring a lot of articles the same algorithm considers “unhelpful”. Yes, it’s somewhat complicated. We’ll get to that.

In the meantime journalists, online writers and content creators all prepare to be deemed worthy or found unworthy by the newest version of Google’s almighty algorithm: thousands and thousands of puny Thors judged by a virtual Mjolnir and its little spider friends. The hammer will fall and — before the year is out — its effects on many websites’ rankings will be plain for all to see.

We all type keywords or questions into Google’s search bar, on every topic imaginable, every single day. How the company selects which results to offer first is almost a science in itself. (Image: Lucia Macedo, Unsplash)


After the dust settles it will be our turn to decide how helpful or unhelpful this new Google core update has been. This time, though, there may be other implications to consider regardless of the algorithm’s effectiveness. The way Google itself now seems to think about content, in general, could actually be a bigger problem than “unhelpful content” is. It’s high time the company recognized that possibility, as well as the responsibility it ended up bearing for the content of the modern Web as a whole.

The company is now in a position to either save the Web from itself or seriously mess it up. Here’s why.

It’s about the road that’s paved with good intentions… again

According to Google, this particular search algorithm update is focused on “people-first content” and anyone who’s been following the blogging business during the last few years knows why the company chose that expression. It’s all about two things: what Google calls “search intent” and what’s been happening with various artificial intelligence programs and services that strive to emulate human writing.

“Search intent” is a simple enough concept: Google knows that people look for specific information on the Web either by using whole sentences formed as questions or by entering the keywords they deem most relevant to that information. The company’s algorithm has been trying — for a while now, actually — to offer results that either work directly as answers to those questions or appear to include the sought-after information in long-form content. Past algorithmic updates were already heading in that direction, but now Google openly states that websites that “help consumers achieve their goal” — that is, find the information they are looking for — will have their rankings boosted.

There are other variables in that mix too, but that’s the idea in a nutshell. It is a sensible approach, granted, but it has already created a major problem, one that will only get more serious because of the Helpful Content update. More on this in a bit.

Everything Google-related is data, and so is what people search for when using it, down to the exact words they type in. That data is available to anyone through free or paid online services. (Image: Pexels, Pixabay)


As for AI writing, it’s obvious what Google means to do, and the company is absolutely right about this issue. We’ve all come across search results ranked high on Google’s pages — on the first page even! — only to find out, upon actually visiting those websites, that they offer very little useful information or don’t make an awful lot of sense.

That’s because their content was written using AI tools. Someone entered a number of keywords, an artificial intelligence program generated some text on the fly, the text got some light editing, and… voila: an “article” about something gets published in under an hour. It contains many of the searched keywords, plus similar ones relevant to the same broad subject, so Google’s algorithm assumes this must be quality long-form content. But when visitors actually read it, they realize it offers precious little value. If any at all.

In theory, the Helpful Content update helps Google’s algorithm “understand” whether a piece of text was primarily and/or originally written by an AI tool or by an actual human, and strives to boost the rankings of Web pages that contain the latter’s work, not the former’s. Again, Google’s intentions are good: we absolutely need less fake, machine-generated, pointless text and more human-crafted content. But this creates a second problem that the Helpful Content update will not solve for either writers or Web users. Let’s take a look at both.

Is query reverse-engineering just system manipulation?

Regarding search intent, the problem was already serious enough: many online writers have essentially been manipulating Google’s algorithm and approach by, well, working backward. They are “gaming the system”, in a sense, by publishing lots and lots of articles explicitly written as direct answers to specific questions. These articles, as anyone can confirm by making a few highly targeted searches, tend to rank much, much higher than articles written in free form about the same subject.

Many online writers “reverse-engineer” natural-language queries in order to publish articles that answer them using the same specific words. It’s a hack that works, but a questionable one. (Image: PhotoMix, Pixabay)


Online writers achieve this by using tools that list the actual words people use when searching for something, then writing posts that include those exact words, in that specific order or close to it, in the title, headings and body text. The result: Google’s algorithm is led to believe that these posts are the best match for those queries, regardless of the quality of information they may or may not provide. It’s the kind of smart hack that every SEO expert will suggest at some point or another, especially to new writers trying to make money online by attracting organic traffic.

This problem will become even more serious now that Google has practically legitimized the tactic. The company will be boosting the rankings of websites that provide direct answers to such questions because “they help consumers achieve their goal” (whatever that means), in essence giving “written-as-answers” articles priority among its search results.

Writing, though, is not just about directly answering highly specific questions, nor about covering topics that search engine tools mark as interesting to a lot of people. It is a form of expression, the art of examining concepts and ideas through words, a way to discuss anything and everything. So there are thousands, millions, maybe tens of millions of articles, essays, studies and reports out there that are not SEO-optimized in the way Google seems to prefer, because they were never written with that in mind. Among them there will definitely be many an article that is simply a better read on a subject, more interesting and more informative, than most of those the search algorithm would suggest. But those will be buried on the fifth or sixth page of Google’s results, where nobody ends up looking.

Online writers who don’t look into how to make their work “Google-friendly” were already at a disadvantage. Google’s Helpful Content update has made things even worse for them. (Image: Edho Pratama, Unsplash)


When I search for articles about the Black Lives Matter movement, for instance, by entering just those three words, Google’s algorithm already suggests all the basics, all the obvious links about it, on the first and second pages of its results (including articles that use questions as titles…). But if I already know what the Black Lives Matter movement is, I am much more interested in, say, the thoughts of people who have been part of it and have published pieces about it from the heart. Or first-hand accounts of events. Or something else of that nature.

Well, then… tough luck. Unless those pieces were published on hugely popular online outlets with high domain authority, I’d have to dig deep to find them. I have come across some of them, and they never showed up in the first four or five pages of search results. Hardly anyone keeps looking past that point.

It may be a simplistic example, but it is indicative of the scale of the problem. In its official Helpful Content update post Google seems to deny that this will be an issue, but… let’s be real here. The company’s algorithm is effective, but not yet intelligent enough to understand that at least a few of the results on its first couple of pages should be, say, quality pieces of writing about the Black Lives Matter movement.

Quality is not an easily quantifiable thing, so it’s understandable why this is happening. But direct top-level domain links, “written-as-answers” articles or newsroom reports cannot and should not be the dominant results Google offers on any given topic. The company must find a way to make its algorithm not just more effective at recognizing useful content, but more intelligent… and more selective besides.

Algorithm-friendly guidelines threaten creativity in writing

So this is how we arrive at a point where Google, by trying to (a) offer more useful results and (b) solve the problem of low-quality, AI-created content, is actually creating two new problems.

First, the inevitable one: if online writers want their articles, stories, posts, whatever one wants to call them, to rank high enough in search results to be easily found, they now have to follow Google’s rules as dictated in that Helpful Content update guide. It’s not optional. If these articles are never going to be found, what’s the point in writing them anyway? The pure expression of ideas can always be a valid motive, sure, but we write things and publish them online in order for them to be read. If we wanted to keep them to ourselves, they would go in a journal. You know, a paper-based one.

Correctly structuring an online article so that it’s easier to read is one thing. Doing specific things on the page to satisfy Google’s algorithm is quite another. (Image: Kenny Eliason, Unsplash)


So online writers of all kinds must now make sure that their articles, stories, posts and so on are not just SEO-optimized to a certain degree (good luck convincing many of those writers to spend much time learning how to do that) but also in line with Google’s “helpful content” guidelines. Anyone used to writing in a freeform manner will attest to this: it’s actually quite difficult to structure content, incorporate keywords and use headings in exactly the way Google’s algorithm wants without ending up with text that sounds like… you guessed it, copy created by an AI-guided tool.

It can be done, of course, and — given time — it should become easier to routinely produce text that’s Google-friendly but still reads like original work, not AI-based garbage. It’s quite certain, though, that this will put a lot of strain on a lot of people before they get to the point where writing does not feel like filling in the blanks of a template.

What’s worse, it’s also quite possible that this will seriously hurt originality and creativity in writing: in their care to follow these guidelines, writers can easily lose their distinct voice and style, or at least part of it. One can already see this happening, as a matter of fact, since Google’s December 2021 update: articles on the same topics published by different people in different publications now tend to “sound alike” more than they ever did. People who read a lot of stories on a variety of topics as part of their work can attest to that too.

Google will have to somehow make sure that “write-by-the-numbers” content is not promoted at the expense of freeform writing. It’s not going to be easy. (Image: Super Snapper, Unsplash)


The equally obvious second problem is that, now encouraged by Google, online writers who primarily publish “written-as-answers” articles have even more reason to continue doing so, with or without the help of AI tools. This is not necessarily a bad thing in and of itself: all of us regularly need to find answers to specific questions quickly, and these articles, if well-researched and well-written, meet that need. So they are most welcome. But it also means that even more freeform, non-SEO-optimized articles about the same topics will be regularly displaced by Google’s algorithm, exiled to the fifth or sixth or tenth page of results, which most readers never visit.

That’s not good. One might even say that freeform stories hardly ever surfacing in searches is… unhelpful in the greater scheme of things.

The solution to these problems, ironically, leads everyone back to Google. Until the company’s algorithm reaches a point of almost human-like intelligence — where it can decide for itself, not just about the helpfulness, but the overall value of any piece of text published on the modern Web — issues like these will be cause for concern. Then again, will Google’s algorithm really care about what we puny humans think of its effectiveness if it ever reaches human-like intelligence? Now that is a question worth answering with carefully crafted text, no?

ABOUT THE AUTHOR


Kostas Farkonas

Veteran reporter and business consultant with over 30 years of industry experience in various media and roles, focusing on consumer tech and services, modern entertainment and digital culture.
