Las Vegas exposed the limits of Google’s algorithms. But is there still hope?

Google’s dominance in the diffusion of information is nearly an undisputed fact. But what happens when Google’s algorithms fail to provide quality, reliable information when it is needed most?

This was the case with the Las Vegas tragedy, when Google’s Top Stories featured a 4chan forum post that wrongly identified Geary Danley as the perpetrator of the shooting. The post spread across the Internet, and Danley’s good name was smeared. It is common knowledge that 4chan is not a reliable source: the forum is notorious for trolling, racist views, and the willful dissemination of inaccurate information. Yet Google’s algorithms did not filter out the 4chan post. After receiving widespread criticism for circulating it, Google issued the following response:

Unfortunately…we were briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries. Within hours, the 4chan story was algorithmically replaced by relevant results. This should not have appeared for any queries, and we’ll continue to make algorithmic improvements to prevent this from happening in the future.

In the past, Google acted simply as a search engine, a resource for locating information. With the Top Stories feature, which highlights trending stories, the company now also bears the responsibility of curating news. While useful in ordinary situations, Google’s algorithms have fallen short when it comes to filtering for reliable sources during breaking news events like Las Vegas. In simple terms, Google’s Top Stories algorithm measures stories and posts on two variables: “freshness”, how new and trending a topic is, and “authoritativeness”, the credibility of the source. The algorithms allowed the 4chan post to surface into the mainstream because their calculations weighed freshness more heavily than authoritativeness. In response to Las Vegas and other blunders, many critics have denounced the algorithms, deploring them as “rogue” and a “failure”.
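To make the weighting problem concrete, here is a minimal sketch, not Google’s actual code, of how a score that favors freshness can push a low-credibility post above an established outlet. The Story fields, the weights, and the example numbers are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Story:
    source: str
    freshness: float          # 0-1: how new and trending the story is (hypothetical scale)
    authoritativeness: float  # 0-1: how credible the source is (hypothetical scale)

def rank(stories, w_fresh=0.8, w_auth=0.2):
    """Rank stories by a weighted score; the weights here are illustrative only."""
    return sorted(
        stories,
        key=lambda s: w_fresh * s.freshness + w_auth * s.authoritativeness,
        reverse=True,
    )

stories = [
    Story("established-newspaper.com", freshness=0.6,  authoritativeness=0.9),
    Story("4chan.org",                 freshness=0.95, authoritativeness=0.1),
]

for s in rank(stories):
    print(s.source)
# Because freshness dominates the score, the low-credibility post ranks first:
# 4chan.org
# established-newspaper.com
```

With these (assumed) numbers, the 4chan post scores 0.78 against the newspaper’s 0.66, so it surfaces at the top even though its credibility is far lower.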

Technically speaking, however, the algorithms did not “fail”; they did exactly what they were programmed to do. Perhaps there is hope for improving them. As the field of artificial intelligence has shown, machines take time to learn and require many examples before they perform effectively. What is needed, therefore, is a better way for these algorithms to filter information and measure accuracy, for instance by weighting source credibility more heavily, as sketched below.
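Continuing the hypothetical sketch above, one simple illustration of such a fix is to require a minimum level of authoritativeness before a story is eligible at all, and to weight credibility at least as heavily as freshness. The threshold and weights below are assumptions for illustration, not anything Google has disclosed.

```python
def rank_with_authority_floor(stories, min_auth=0.5, w_fresh=0.5, w_auth=0.5):
    """Drop stories from low-credibility sources before scoring, then rank the rest
    with credibility weighted as heavily as freshness. Values are hypothetical."""
    vetted = [s for s in stories if s.authoritativeness >= min_auth]
    return sorted(
        vetted,
        key=lambda s: w_fresh * s.freshness + w_auth * s.authoritativeness,
        reverse=True,
    )

for s in rank_with_authority_floor(stories):
    print(s.source)
# established-newspaper.com  (the 4chan post no longer surfaces)
```

Under this assumed filter, the unreliable post is excluded before ranking ever happens, which is one crude way an algorithm could trade a little “freshness” for a lot of reliability during breaking news.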

Are you a computer scientist or software engineer developing improved algorithms that would prevent further blunders like the Google 4chan debacle? You may be eligible for the R&D tax credit. If you would like to find out how your company could benefit from R&D Tax Credits, please contact a Swanson Reed R&D Specialist today.

Swanson Reed regularly hosts free webinars and provides free IRS CE credits as well as CPE credits for CPAs. For more information, please visit us at www.swansonreed.com/webinars or contact your usual Swanson Reed representative.
