It’s hard to imagine the internet without search engines. We would, I guess, be limited to typing in the addresses of the popular websites we already know about and hoping that the information they contain is accurate and up to date.
Imagine, for instance, visiting YouTube (the world’s second largest search engine after Google… kind of) and having no search bar. You’d be limited to trawling a directory for interesting channels, or just clicking on whatever rubbish the popularity algorithm throws up.
And yet search engines, as we know them today, haven’t changed all that much since they first appeared around 30 years ago. The range of features built on top of what are essentially giant databases has grown, and the accuracy of the results has improved, but the basic functionality remains the same.
The only thing that has really changed is the litany of problems that have developed around the technology. From privacy concerns to censorship debates and even filter bubbles (more on this below), search engines, it seems, are having to paper over more and more cracks in their services.
Little wonder, then, that the recent release of the AI marvel ChatGPT was quickly touted as a “Google Killer”, despite the fact that ChatGPT doesn’t crawl the web and doesn’t know anything that happened after 2021.
People are hungry for something totally new in the search engine world, and here are five reasons why it’s going to take a sizeable reboot to create a truly worthy successor.
1) Search Engines track your activity

Google makes 80% of its money through advertising. This means it needs to know as much as possible about its users so it can accurately match customer and advertiser. It accomplishes this by tracking what you search for and what you click on.
Even if you’re using a browser’s “incognito” or “private” mode, the chances are good that, although your activity isn’t being recorded locally, it’s still being tracked and recorded elsewhere.
Many people couldn’t care less about being followed around by their preferred search engine, but enough are concerned about it to create space for search engines that promise anonymity. And yet, even these can be less private than people realise.
It seems like the only way to truly remain anonymous online is to use a VPN service.
Except that… even that might not be completely bulletproof.
2) Search Engines actively censor content
As previously discussed, we’re not against censorship. It’s a necessary, if blunt, tool for limiting access to illegal content. What should concern everyone, however, is when search engines censor legal information because an interested third party persuades them to do so.
Google’s own transparency report describes the process like this:

“Courts and government agencies around the world regularly request that we remove information from Google products. We review these requests closely to determine if content should be removed because it violates a law or our product policies. In this report, we disclose the number of requests we receive in six-month periods.”
This is a perfectly reasonable stance; however, it’s common knowledge that some governments don’t always act in the best interests of their citizens. This can result in information that is useful to the general populace, but dangerous or embarrassing to the government, being removed.
In some cases, a government can have such a stranglehold on internet activity that entire portions of the web are excluded. Looking at you, China.
3) Search Engines have filter bubbles
It’s well known that social media platforms have a tendency to show you content that matches your world view, and to “protect” you from content that might challenge your existing point of view. It’s less well known that these so-called “filter bubbles” exist on search engines as well.
It is, of course, useful to receive personalised search results that fit your location and interests. What is less useful is when your ability to research properly is hampered because the search engine is making algorithm-based decisions about what to show you.
We’re not suggesting anything nefarious is going on with this search quirk, but a WSJ investigation found that, in 2012, a particular search engine customised the search results for people who had recently searched for “Obama”, but not for those who had recently searched for “Romney”. Make of that what you will.
As search engine problems go, this isn’t necessarily the worst. Most of the time, it’s probably more helpful than harmful. But there’s no denying that it’s a hindrance to objective research.
4) Search Engines have an English bias
Unless you live in a part of the world where the main language is something other than English, searching for a foreign-language phrase is more likely to turn up a bunch of English websites offering to translate it than an actual foreign-language site.
Even if you go to an international version of a search engine, personalisation will still kick in and serve you English-language results.
This is a feature rather than a bug, since most English speakers are monolingual, but it means that non-English speakers and writers are automatically less likely to attract visitors to their content. It also means that, unless you’re prepared to jump through some VPN hoops, researching foreign-language content is a headache.
This is not an inconsequential problem. The views and opinions of non-English speakers are every bit as valid and important as anybody else’s, but they are far less likely to be included in the conversation.
5) Search Engines are slower than you think
With trillions of web pages to crawl, search engines have to prioritise where they focus their resources. Naturally, this means the most popular sites are going to be indexed more frequently than their smaller, less-visited counterparts.
It can take weeks or even months for new content from a lesser-known site to make it into the search results, which means that, if the content is time-sensitive, it may already be out of date by the time it appears. Search engines may let you filter for content posted within a particular time frame, but this will still only return web pages that have been crawled since then.
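To see why this lag hits smaller sites hardest, here is a toy model of interval-based recrawling. The site names and recrawl intervals below are invented for illustration; real crawl schedulers are far more sophisticated, but the basic effect is the same: the longer a site’s recrawl interval, the longer its new content waits to be indexed.

```python
# Toy model: each site is recrawled on a fixed interval (in days),
# with popular sites recrawled far more often. All values are invented.
sites = {
    "bignews.example": 1,    # recrawled every day
    "midblog.example": 7,    # recrawled weekly
    "tinysite.example": 30,  # recrawled monthly
}

def index_lag(recrawl_interval_days: int, published_day: int) -> int:
    """Days until content published on `published_day` is picked up,
    assuming crawls happen on multiples of the recrawl interval."""
    next_crawl = ((published_day // recrawl_interval_days) + 1) * recrawl_interval_days
    return next_crawl - published_day

# Content published on day 3 of the cycle:
for site, interval in sites.items():
    print(site, "indexed after", index_lag(interval, 3), "day(s)")
# tinysite.example's new page waits 27 days before it can appear in results.
```

Any date-range filter applied to a search can only ever see pages the crawler has already fetched, which is why “past 24 hours” results skew heavily towards the frequently crawled sites.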
It’s easy to perform a search and assume that what you’re getting is the most up-to-date set of results, when in fact what you’re actually getting is the most up-to-date results from a relatively small number of websites that have been deemed the most important.
Exorde is NOT a search engine… but it’s a good start!
It’s hard to imagine a new search engine being launched that solves all of the above problems AND reaches enough people to overtake the Googles and Bings of the world. What’s more likely is that tools will be developed that aim to fill in the gaps: tools, in other words, that allow internet content to be crawled and assessed in ways that search engines can’t.
Exorde is one such example, offering a new way to crawl and assess internet content.
Exorde works by providing access to a large network of crawlers and validators, based all over the world. Any keyword or keyphrase can be quickly researched and assessed using natural language processing to provide insights into what people are saying and how they feel about different subjects.
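As a rough illustration of the kind of natural-language insight described above, here is a minimal lexicon-based sentiment scorer. To be clear, this is NOT Exorde’s actual pipeline (the article doesn’t specify one); the word lists and example posts are invented, and real systems use far richer models.

```python
# Minimal lexicon-based sentiment sketch (illustrative only; not Exorde's
# real NLP). Scores a text by the balance of positive vs negative words.
POSITIVE = {"great", "love", "good", "excellent", "useful"}
NEGATIVE = {"bad", "hate", "terrible", "broken", "useless"}

def sentiment(text: str) -> float:
    """Return a score in [-1, 1]: share of positive minus negative words
    among the opinion words found; 0.0 if no opinion words appear."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "I love this tool, it is really useful",
    "Terrible update, search is broken now",
]
for p in posts:
    print(round(sentiment(p), 2), p)
```

Aggregating scores like this over many crawled posts mentioning a keyword is the general shape of “what people are saying and how they feel”, even though production systems replace the toy lexicon with trained language models.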
And because it runs on the blockchain, anyone can utilise this tool without restriction, and with total anonymity.
Again, it’s worth repeating: Exorde is NOT intended to operate as a search engine. But what it does provide is a way to review and study internet content, virtually in real time, without any public or private organisation being able to limit or restrict the results.
Every deep search carried out by Exorde is performed publicly, for maximum transparency, and the results are stored in a decentralised database so they can be reviewed by any interested party.
The Internet would be impossibly difficult to navigate without search engines, and it can’t be denied that, for all their problems, they perform an invaluable function.
However, it also can’t be denied that, in their current guise, they cannot be relied upon to provide full and transparent access to information.
Other tools are going to be needed for this purpose. And Exorde is a strong move in this direction.