An Introduction to Responsible Technology

Henry Dobson & Babett Kürschner

Facial recognition systems tracking our every move. Personal assistants like Siri or Alexa listening in on our conversations at home. Social media platforms harvesting vast amounts of personal data. It’s undeniable that the technology we create is becoming smarter, faster and more powerful each and every day. But with great power comes great responsibility.

The more time we spend online, the more we expose ourselves to digital risks, psychological harms and other kinds of danger. The use of modern technologies such as big data and artificial intelligence in everyday applications is rapidly increasing and the necessity for socially responsible technology is growing simultaneously. But what exactly is responsible technology? And why is it so important?

The future of technology at the crossroads

When you ask anybody what role they think technology will play with regard to human progress, the answer will probably be something along the lines of technology enabling us to live longer, happier and healthier lives. Some of the world’s top tech CEOs are already talking about a world in which their products work towards solving major problems like poverty, famine and climate change.

While this sounds like a great roadmap, in practice technological progress alone doesn’t necessarily lead to a prosperous future. Not all progress is automatically good progress. While there’s nothing wrong with technology when it’s used appropriately and designed responsibly, supposedly good technology could easily cause a lot more problems than it actually solves if we don’t pay attention to the adverse effects of the products we design.

Technology will only be as good as the people who build it. In order to stay on track towards a better future, we need to recognise that the cure to avert serious harm isn’t to make our technology smarter, faster or more powerful – the way forward is to make sure our technology is designed with an ethical mindset.

What is ethics in technology?

Ethics is a field of moral philosophy that studies the moral views, attitudes and principles that govern the behaviour of individuals as well as businesses and large organisations. Ethics concerns itself with the factors that lead to the health and well-being of individual people and broader society, extending to animals, plants and the planet too.

Logically, ethics also studies behaviours that lead to the exact opposite and can therefore be considered bad, harmful or detrimental to our existence. When it comes to technology, ethicists and philosophers study the behaviour of computers, robots and machines in specific scenarios. They try to assess the moral significance of these scenarios and the ethical concerns of those affected.

To steer or not to steer – that is the question

A classic example of an ethical dilemma in technology is the study of what is known as the Trolley Problem. Imagine you’re in the following situation: by pure accident, you find yourself at the wheel of a trolley. And you’re speeding down the train tracks!

Ahead of you are five people, tied down on the tracks, who will most definitely be killed by the speeding trolley that you’re driving. But wait! There’s a second track, splitting off to your right. On that track, however, stands a workman who can’t escape – and will most definitely be killed if you divert your trolley.

This puts you in a moral dilemma. Do you allow the trolley to carry on, letting the accident take its course, and let five people die? Or do you actively steer the trolley onto the other track and kill the workman instead?

A research group at MIT has applied the Trolley Problem to autonomous, self-driving cars. Unlike human beings who are moral agents, AI systems and machines are non-moral agents, which means that they don’t possess any instinctive sense of morality.

If a self-driving car perceived the same situation as you on those tracks, which way should it turn? What moral decisions should a programmer encode into the AI software of the self-driving car? And in the case of a fatal accident, who would be morally responsible for the deaths involved – the self-driving car or the person who programmed it?

Working towards a better future

The reason why technologies like facial recognition systems and certain social media platforms often cause harm to their users is not that they were purposefully built to do so. It’s that designers, developers and ultimately founders often built them without ethics in mind.

For technology to enable us to live longer, happier and healthier lives, we need to think about potential risks before we get to work. Given the inherent risks, dangers and harms of modern technology, we need to reflect on its impact on all of our stakeholders. We can’t wait until we have to do damage control. In order to make sure our solutions aren’t accidentally creating more problems, we need to decide if we’re happy with the direction our technology has taken – and if that’s not the case, we have to steer the metaphorical trolley onto a track that doesn’t cause casualties.

A solution that has a net-negative effect is ultimately not a success but a failure. We have to take collective responsibility for the way in which we design, develop and use technology. With the world’s most powerful algorithms and mobile devices in the palms of our hands, responsible technology is not an option – it’s a moral and ethical necessity.


If you want to find out more about your own and your business’s ethics, take our Responsible Technology Assessment here.
