Good Technology Gone Bad

Henry Dobson & Babett Kürschner

Driven by a vision to change the world for the better, most entrepreneurs set out with not just good but the very best of intentions, wanting only the best for themselves as well as for others. But as the old saying goes: the road to hell is paved with good intentions.

The moral of this proverb isn’t that we are doomed to create chaos no matter how good our intentions are. The moral is that despite our best efforts to solve pressing social issues and environmental problems, we sometimes find ourselves doing more harm than good.

Cautionary tales

This is very much the case with modern technology and most of the digital applications we use today. Many applications are built with the greater good in mind. Yet over time, as they scale and grow, some change and morph in ways that leave them capable of inflicting varying degrees of harm on their stakeholders, eventually impacting and changing society in ways the founders never intended or anticipated.

The reason for this is that some elements of technology are inherently problematic. Algorithms, for example, are biased by default, simply because they make decisions and choices based on the data being fed into them; if that data reflects existing prejudices, so will the outcomes.
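As a minimal, hypothetical illustration of that mechanism (all data below is invented, and no real system is referenced): even the simplest "model" trained on historical decisions that favoured one group will faithfully reproduce that preference in its predictions.

```python
# A deliberately tiny, hypothetical example of how bias in historical data
# is inherited by whatever learns from it. All numbers below are invented.

from collections import defaultdict

# Past hiring decisions: (applicant_group, was_hired). In this made-up history,
# equally qualified applicants from group "B" were hired half as often.
historical_decisions = (
    [("A", True)] * 70 + [("A", False)] * 30 +
    [("B", True)] * 35 + [("B", False)] * 65
)

# "Training": estimate the historical hiring rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in historical_decisions:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_probability(group: str) -> float:
    """The 'model' simply reflects the rates it observed in the past."""
    hired, total = counts[group]
    return hired / total

for group in ("A", "B"):
    print(f"Group {group}: predicted probability of being hired = "
          f"{predicted_hire_probability(group):.0%}")
# Group A comes out around 70%, Group B around 35% –
# the historical disparity has quietly become a "prediction".
```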

To address the possible risks and harms associated with technology, we’ve identified four major areas of ethical concern that we believe founders should be aware of when developing their product and running their business. After all, many mistakes have already been made with modern technology products, and if we fail to learn from them, there’s a good chance we’ll make the same mistakes again in the future. Below are four cautionary tales of where good technology has gone bad.

Bias & discrimination

In the U.S., courts in states like New York and California use risk assessment algorithms to predict the likelihood of a defendant reoffending once they are released from jail. This likelihood is also referred to as a defendant’s recidivism risk.

One such tool is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). The software was developed by a for-profit company and uses a set of scores derived from 137 questions to classify criminal defendants as either low or medium/high risk. Courts use the algorithm’s results to inform their decisions on whether defendants awaiting trial should be released on bail.

An independent investigation uncovered some worrying results. Trained on data drawn from a structurally racist system, the COMPAS algorithm scored black defendants as roughly twice as likely to reoffend as their white counterparts. As a result, black defendants who did not go on to reoffend were falsely classified as medium/high risk at roughly double the rate of white defendants, while white defendants who did reoffend were mislabelled as low risk roughly twice as often.

And yet, within each risk category the proportion of re-offenders is roughly the same across races: a black defendant categorised as medium/high risk is about as likely to reoffend as a white defendant in the same category, and the same holds for those labelled low risk. This means that COMPAS paradoxically proves race isn’t a good indicator of somebody’s risk of reoffending – even as its errors fall far more heavily on black defendants.
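To make that distinction concrete, here is a minimal sketch of the kind of audit an independent investigation can run. The counts below are invented for illustration – they are not the actual COMPAS data – but they show how a score can look "fair" within each risk band while still producing roughly twice as many false positives for one group.

```python
# Illustrative audit of a risk score's error rates by group.
# All counts are invented for demonstration; they are NOT the COMPAS data.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    high_risk_reoffended: int  # labelled medium/high risk, did reoffend
    high_risk_did_not: int     # labelled medium/high risk, did not reoffend (false positives)
    low_risk_reoffended: int   # labelled low risk, did reoffend (false negatives)
    low_risk_did_not: int      # labelled low risk, did not reoffend

    def false_positive_rate(self) -> float:
        """Share of non-reoffenders wrongly labelled medium/high risk."""
        non_reoffenders = self.high_risk_did_not + self.low_risk_did_not
        return self.high_risk_did_not / non_reoffenders

    def false_negative_rate(self) -> float:
        """Share of reoffenders wrongly labelled low risk."""
        reoffenders = self.high_risk_reoffended + self.low_risk_reoffended
        return self.low_risk_reoffended / reoffenders

    def reoffence_rate_in_high_band(self) -> float:
        """How often the medium/high-risk label turns out to be right."""
        labelled_high = self.high_risk_reoffended + self.high_risk_did_not
        return self.high_risk_reoffended / labelled_high

# Counts chosen so that both groups reoffend at the same rate within each
# risk band (about 60% in the medium/high band), yet the errors land very
# differently: Group A gets far more false positives, Group B far more
# false negatives.
group_a = GroupOutcomes(high_risk_reoffended=300, high_risk_did_not=200,
                        low_risk_reoffended=100, low_risk_did_not=400)
group_b = GroupOutcomes(high_risk_reoffended=150, high_risk_did_not=100,
                        low_risk_reoffended=150, low_risk_did_not=600)

for name, g in [("Group A", group_a), ("Group B", group_b)]:
    print(f"{name}: false positive rate = {g.false_positive_rate():.0%}, "
          f"false negative rate = {g.false_negative_rate():.0%}, "
          f"reoffence rate in medium/high band = {g.reoffence_rate_in_high_band():.0%}")
```

In this made-up example both groups show a 60% reoffence rate among those labelled medium/high risk, yet Group A’s false positive rate is more than double Group B’s – the same pattern of disparity described above.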

Many other algorithms have since proven to be racially biased, like Google Photos’ image-labelling algorithm, which tagged photos of people of colour as “gorillas”. Understanding how algorithms become biased is critical for operating a socially responsible technology business.

Invasion of privacy

At this point, voice assistant devices have become commonplace in many households around the world. Sold with the promise of making life at home more efficient, convenient and comfortable, Amazon’s Alexa alone has found its way into more than 100 million devices. But what seemed like a promise of relief has quickly turned into a cause for alarm for some customers.

Some customers have described “Kafkaesque” situations in which their voice assistants started repeating, over and over again, commands they had given days earlier. Other devices were able to access supposedly confidential audio files that someone else’s device had recorded. And virtually every user can recall moments when their device turned on without being prompted. Isolated incidents, or inherent flaws within the technology?

Moreover, whenever voice assistants are asked to perform a particular task, the voice recording is processed by an AI algorithm that not only listens to it but also transcribes it. Amazon has admitted to storing these transcripts on its own servers. This means that Amazon is in possession of what is effectively a written copy of its users’ most private conversations and intimate moments at home.

And even when they aren’t feeding information back to those servers, voice assistants are always listening to what you say – how else would they hear the prompt that activates them? Amazon claims these recordings are only listened to by humans in some instances, for the purpose of improving Alexa’s services. There have been incidents, however, where recordings have shown up as evidence in court cases without explanation. Some experts have gone so far as to equate home assistants with a state of “constant surveillance” – a situation not dissimilar to the one described in George Orwell’s novel “1984”.

While Amazon surely (or rather hopefully) didn’t intend its products to have these consequences, this is a telling example of the inherent flaws in a technology’s design.

Declining mental health & well-being

Originally, social media platforms like Facebook and Instagram (or MySpace back in the day) were designed to give people an online space to be themselves and connect with their friends and family, no matter where in the world they may be. Facebook’s vision statement says that the company wants to empower people to build community and bring the world closer together. While this is a noble pursuit, the platform has since morphed into something far bigger and more powerful than originally intended.

Let’s think about our own social media presence. We generally tend to present a heavily embellished, edited version of ourselves online. This makes sense, as our online network has expanded from just friends and family to acquaintances, cousins twice removed or, in the case of LinkedIn, even our professional contacts.

Given the breadth of our personal social networks, we naturally want to show our most confident, intelligent and best self. This doesn’t mean that our online personas are made up; it just means that we never give people an accurate or full account of our real, multifaceted selves.

These “best” versions that we see of other people in our network can have very serious effects on our self-image and self-esteem. An unfortunate truth about social media is that it directly affects our mental health and has been linked to heightened levels of social anxiety, depression and even suicide.

While there are undoubtedly many benefits to using social media, we need to reflect on and think critically about how these platforms affect our mental health and social wellbeing. Some warning signs are already looming on the horizon, as research suggests that the recent increase in anxiety and depression among adolescents can be partially attributed to excessive social media use.

Social & democratic risks

Social media is also a major cause for concern when it comes to social and democratic risks. Every day we willingly upload and share information about our daily lives, personal thoughts, political views and, of course, what we had for breakfast. By providing companies with this information on a silver platter – for free – we’re giving them insights into who we are, what we like and how we think.

Facebook doesn’t just store our information and data; it analyses our data and distils it into psychological profiles. At best, these profiles are used to determine the content we like most, to keep us glued to our screens with ever-refreshing news feeds, and to customise the ads we see, all with the aim of getting us to buy products we may not really need. Perhaps worst of all, social media can be used to actively disenfranchise voters and undermine democracy.

The Cambridge Analytica scandal is possibly one of the starkest examples of how things can go very wrong when individuals and organisations with not-so-good intentions get their hands on our data. Using data scraped and bought from Facebook, the company allegedly manipulated people’s political state of mind by presenting highly tailored campaign material, along with other messages, all aimed at influencing how people voted in the 2016 U.S. presidential election. More recently, it has come to light that the Trump campaign used psychological profiles to actively discourage people from voting. Trump’s strategists acquired the data of over 200 million U.S. citizens and sorted them into distinct audience categories.

Voters who were more sympathetic to the Democratic Party but not necessarily core voters were dubbed “deterrents”. Trump’s chief data scientist later said that they “hope they don’t show up to vote”. And it worked: a report by Channel 4 revealed that in 16 key swing states, a disproportionate number of these deterrents who had voted in the previous election ended up not going to the polls in 2016. They were subjected to highly targeted ads on Facebook and other platforms; in some instances the campaign released 6 million different versions of the same message.

This is perhaps one of the greatest social risks facing our world: the way social media can be used to diminish our trust in people, society and modern democracy.

Many more areas of concern

The issues mentioned above only touch upon some of the problems we’re encountering today; there are many other areas of concern that we haven’t discussed, like the future of autonomous warfare and the possibility of Artificial General Intelligence (AGI) – the point at which AI attains human-like cognition, including the ability to sense and feel the world the way humans do.

As we mentioned at the beginning, most, if not all, entrepreneurs start out with not just good but the very best of intentions, often with the aim of changing the world for the better. This, however, can only be achieved if you keep the inherent risks and problems of your technology in mind, and if you act proactively to mitigate and minimise those risks for your own product and business.


If you want to find out more about your own and your business’s ethics, take our Responsible Technology Assessment here.
