Can my design be used for evil?

There has been a lot of talk about ethics in UX circles over the last couple of years. This is a good thing. However, most of it has not been actionable in everyday work. And, to be honest, most ethically problematic products weren’t designed to be unethical. I am quite sure the designers of smart thermostats, easier purchase flows, sharing economy apps and social networks didn’t expect their work to be used for domestic abuse, unwanted purchases, worker exploitation and skewed world views. In my experience, UX designers are generally a group of people who believe in the good of their fellow humans, which means that most of the time they don’t even consider how their designs could be used in unintended ways that might be harmful or dangerous. But maybe we, as a group, should. Maybe we should try to imagine the worst ways our designs could possibly be used as part of our design process, so we can at least try to mitigate the risk of that happening.

To help you do that for your own product or service, I have collected some examples of ways good designs have been used for bad things, which you can use as inspiration for imagining how your great designs might be used in ways you probably didn’t intend. I have also provided some questions you can ask yourself to see if your product or service runs the risk of causing harm. The rest is up to your own vivid imagination and ability to pretend you are evil. So let’s put on our arch-villain persona and get to it.

Business goals and how they can be taken to the extreme

Companies need to make money to pay salaries (and to make shareholders and investors happy, if you believe in that model). Goals are often formulated in a measurable way, as Key Performance Indicators (KPIs), in order to focus attention on what brings in the money. This also means that designs are made to optimize these KPIs. Facebook makes money on targeted ads, so its main KPI is said to be engagement, i.e. how much time you spend on the site and how much you interact with it. How do you get people engaged? Show them what they want to see. Show them what they like. Or show them what they don’t like, so they feel they have to react and engage.

Facebook’s engagement metric might sound innocent enough and obvious to design for. But we have seen the results of this over the last few years in the form of growing extremism and increasing difficulty for people with opposing worldviews to understand and communicate with each other. Since people with opposing views no longer even share the sources their information comes from, and instead have it served to them in forms that can only rarely be considered neutral, this kind of information filtering reinforces the sense of “I am right and the others are wrong”. Instead of “giving people the power to build community and bring the world closer together”, which actually is their mission statement, they in practice do the opposite. This is not meant as a rant about the evils of Facebook, but as a clear illustration of how blindly optimizing for certain measurements might lead you astray.

What questions can you ask yourself to address this:

  • What are the KPIs in your company and what would an extreme optimization for them result in?
  • If there are different levels of KPIs in your company – how do yours relate to, support, or conflict with those of your colleagues?

Convenience vs privacy

People are sharing more and more of their private data with the apps and websites they use. They track their spending, health, pregnancies, and travels, and get coaching and services in exchange for the data shared. Collecting the information might seem harmless, but at some point, someone might decide to share it in a way that is not. Take the example of the pregnancy-tracking app that some US employers provide to their employees, and which shares whatever its users enter with those employers. There are also examples where sharing very personal data might seem voluntary, but in order to receive a service is not, e.g. when a health insurer offers discounts if you agree to share your Fitbit data with that company. I am quite sure that was not what the designers at Fitbit originally had in mind when designing the service. (If they did – shame on them!)

What questions can you ask yourself to address this:

  • Do we really, really, really need this data?
  • How can we assure ourselves and our users that the data we collect will not be abused?
  • If we are using third parties to handle some of our data (e.g. a third party handling the log-in), are we sure we can trust them with it?
  • What can we, as designers, ask of our companies in terms of taking responsibility for the data they collect? What power do we have?

Sharing data with the world

It is not only apps or services that get access to data shared by their users – the users themselves can choose to share all kinds of information with the world. This can have many beneficial effects, such as encouragement when exercising, but also unintended, negative consequences. Consider, for example, the unintended exposure of secret US military bases through data shared by soldiers using fitness apps when going for a run. Or burglars using social media posts to schedule their break-ins based on when the inhabitants are away. Or online fraudsters who find the answers to security questions in people’s social media profiles.

What questions can you ask yourself to address this:

  • How can the data shared be used in a not-so-friendly way, i.e. what can be learned from it apart from what it was intended for?
  • How can we prevent misuse?

Letting people communicate freely can have dire consequences

A lot of platforms for all kinds of things, ranging from run tracking to dating, have introduced ways for users to interact and communicate with each other. This often leads to a positive experience of using the platform, but it can also have built-in dangers. For example, in-game chat in children’s games has unfortunately become a hunting ground for sexual predators, and in games mostly played by adults, a tool for harassment, often of female players. Communication in dating apps, on the other hand, has been used not only for finding love, but also for scamming lonely people out of their hard-earned money. It is not easy, or even ethical, to keep tabs on what people say online, and there are very different views on what responsibilities the provider of a social platform has in such cases. However, as a supplier you cannot ignore that providing a social platform might invite problems such as those listed above.

What questions can you ask yourself to address this:

  • What kind of users does our platform have and what kind of abusers might target such users?
  • How can we prevent such abuse?
  • If abuse happens, how can it be detected, reported and acted on?

Internet of Things (IoT)

Something that has become very popular is to connect everything to the internet. You can control who enters your home, track your weight, and even control your toaster using an app on your phone. However, as some developer friends of mine often say, “the S in IoT stands for security” – there is no S, and too often no security either. When designing your revolutionary hardware gadget, it’s easy to focus on convenience and forget about security. This can have quite scary consequences, as some parents experienced when a stranger started talking to their child through their online baby monitor.

However, it is not only security in relation to strangers that can be an issue. As mentioned in the introduction, this kind of technology is nowadays also used for domestic abuse. Connected things such as locks, cameras, thermostats, and lights are increasingly showing up in domestic abuse cases, having been used to control, monitor and terrorize people in their own homes. In those cases, the person controlling the devices might have been given access at some point, and the person being abused lost control over them as a result, or was unable to remove the access privileges for various reasons.
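To make that point a little more concrete, here is a minimal sketch of what it could look like if revoking access and wiping all previous grants were first-class operations in a connected device rather than an afterthought. It is purely hypothetical – not based on any real product – and the names (DeviceAccess, grant, revoke, factory_reset) are made up for illustration.

```python
# Hypothetical sketch, not taken from any real product: a minimal access
# registry for a connected home device where revoking access and wiping
# all previous grants are first-class operations.

from dataclasses import dataclass, field


@dataclass
class DeviceAccess:
    owner: str                                      # current primary user
    grants: set[str] = field(default_factory=set)   # other users with access

    def grant(self, requested_by: str, user: str) -> None:
        # Only the current owner can hand out access to someone else.
        if requested_by != self.owner:
            raise PermissionError("only the owner can grant access")
        self.grants.add(user)

    def revoke(self, requested_by: str, user: str) -> None:
        # The owner can remove anyone; any user can remove themselves.
        if requested_by not in (self.owner, user):
            raise PermissionError("not allowed to revoke this access")
        self.grants.discard(user)

    def factory_reset(self, new_owner: str) -> None:
        # Imagined as triggered from the physical device itself, so it works
        # even if the person in the home has lost control of the online
        # account. It wipes every previous grant, including the old owner's.
        self.grants.clear()
        self.owner = new_owner


# Example: a previous partner was once given access to the smart lock;
# the remaining occupant resets the device and starts from a clean slate.
lock = DeviceAccess(owner="alice")
lock.grant("alice", "bob")
lock.factory_reset(new_owner="alice")
assert lock.grants == set() and lock.owner == "alice"
```

The design choice worth noting is that the reset is tied to the physical device, so someone standing in the home can always reclaim control, regardless of who currently holds the online account.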

What questions can you ask yourself to address this:

  • How do we ensure that only the intended users have control and access to these devices online?
  • Once access has been given, how can users revoke it themselves when needed, or even reset the system so that no previous access privileges are kept?
  • Does the user have the right, and the ability, to know that their data is protected not only from third-party partners and hackers, but also within the company collecting it?

Social impacts of the sharing economy

When the sharing economy started, it did so with a promise of making resources more readily and affordably available, as well as more democratic, since pretty much anyone could share anything. Services like AirBnB promised a more personal experience while travelling, as you would be staying in people’s homes instead of a generic hotel, and Uber allowed pretty much anyone with a car to earn some extra money by acting as a taxi without having to get a taxi license.

A few years later, we have seen that although the intentions, especially of the end users, have mostly been good, these services have had some effects that are not so positive. AirBnB, for example, has been banned or restricted in some cities because it reduced the availability of flats for long-term rental, as landlords figured they would earn more money by putting them up as holiday accommodation. There have also been reports of properties available on AirBnB being used for human trafficking since, unlike in hotels, there is rarely any control of who actually uses the property. Uber, on the other hand, has pushed many drivers to trade health insurance and other benefits for being on their own. Uber refuses to see its drivers as employees – after all, they are only sharing their car and happen to be driving it.

This one is a bit tricky to address through design, but it should still be listed, as it is an example of negative consequences of an idea that originally appears good. If such consequences are identified by anyone working with the idea, they should be brought up for discussion – no matter whether they are found by the designer, the CEO or the receptionist.

What questions can you ask yourself to address this:

  • What potential negative impacts could the thing I work on have on the society in which it is used?
  • Who will pay the ultimate price for a service that makes something cheaper than getting the same thing in the conventional way?

“Evil”, you say?

To be fair, even the people misusing the things we design might not consider themselves evil. Very few people do. Everyone has their own rationale for their actions, and some bad outcomes might even be the result of an unfortunate combination of factors. Even so, it is up to us, at the design stage, to try to eliminate as many opportunities as possible for our designs to be misused. Many of the issues discussed in this post revolve around data protection and privacy. Considering the consequences of collecting and using personal data should be part of every product or service design process. That, and thinking about how greed can lead to misuse. We should design to make lives better, but not forget to think about how our designs might also make them worse.

Disclaimer: I am no longer at Crisp, so the comment section for this post has been closed. However, if you want to get in contact with me regarding any of the content, please email me at anneli.olsen [at] tinypaws.se.
