Responsible Tech in an Era of Hostile Tech

Automatic Summary

Responsibility and Technology: Understanding Its Consequences

In exploring our roles as technologists, we are often led down divergent paths. What happens when the technologies we create become unintentionally harmful or dangerous to society? Famed author William Gibson posited that "technologies are morally neutral until we apply them. It's only when we use them for good or evil that they become good or evil." Moreover, Tim Harford of the Financial Times suggests that we cannot view these inventions in isolation, as they profoundly shape the societies around us.

The Notion of Responsible Technology

Responsible technology reminds us to widen our perspective beyond our target audiences and consider the wider societal and economic implications of our solutions. This umbrella term encourages us to ask questions like 'Are we bettering or worsening societal or economic inequities?' and 'What could be the unintended consequences?' Understanding the broad consequences of our technologies, even when we can't fully predict them, is key to responsible tech.

The Dark Side of Technology: Hostile Tech

While hackers, ransomware, and disinformation bots are easily identifiable as hostile tech, there are other, more concealed threats. What about, for example, tracking algorithms? They might be beneficial to some, but considered hostile by others who value their privacy.

Ethical Considerations and Fairness

How does foreseeable bias in your technology affect different demographic groups? For instance, a common criticism of facial recognition software is that it often fails to identify women and Black faces as accurately as it identifies white male faces. Similarly, a credit scoring algorithm that uses proxies for socio-economic status can unjustly label people as credit risks. The concept of fairness here is complex and context-dependent, which is particularly evident in autonomous driving, where it raises questions such as whether a self-driving car should prioritize the lives of pedestrians or the driver.
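To make the facial recognition concern concrete, consider a minimal sketch of auditing a classifier's false positive rate per demographic group. Everything below is a hypothetical stand-in: the data, column names, and numbers are illustrative, and a real audit would use the system's actual predictions and ground truth.

    # A minimal sketch of a per-group false positive rate audit.
    # All data and column names here are hypothetical stand-ins.
    import pandas as pd

    # Each row: demographic group, the ground truth (1 = true match),
    # and what the recognition system predicted.
    results = pd.DataFrame({
        "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
        "actual":    [0, 0, 0, 1, 0, 0, 0, 1],
        "predicted": [0, 0, 1, 1, 1, 1, 0, 1],
    })

    def false_positive_rate(group_df):
        # FPR = wrongly flagged matches / all true non-matches
        negatives = group_df[group_df["actual"] == 0]
        return (negatives["predicted"] == 1).mean()

    rates = {name: false_positive_rate(g) for name, g in results.groupby("group")}
    print(rates)
    # A large gap means the cost of a false match falls unevenly on one group.
    print("FPR gap:", max(rates.values()) - min(rates.values()))

Which gap counts as unacceptable is exactly the context-dependent fairness judgment the section describes; the metric only makes the disparity visible.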

Unintended Consequences and Mitigation

As technology becomes interlaced with ever more of our lives, the footprint of what could go wrong when issues or failure modes are overlooked becomes enormous. Responsible technologists need to consider not only the possible unintended positive effects but also the possible unintended negative impacts of their work, and strive to mitigate those risks.

Navigating the World of Responsible Technology: Recommendations

1. Broader Stakeholder Considerations: Often, a minor tweak to our product or our approach can make our technology less hostile to those outside our original target audience.

2. Harnessing Diverse Perspectives: Building a diverse team helps bring differing viewpoints into the discussion, reflecting different cultural and socio-economic backgrounds, among other factors.

3. Responsible Tech Playbook: Unable to build a diverse team? Use tools like the Responsible Tech Playbook, a collection of facilitation techniques that prompt us to look at problems differently and from various perspectives.

Our Roles As Technologists

It's vital to remember that while some people intentionally create hostile tech, many hostile technologies aren't intended to be harmful. It's our duty to minimize the negative impact and maximize the positive impact our technologies have. Building on William Gibson's words, it's our responsibility as technologists to make that determination and to decide for ourselves that we are going to be on the side of the responsible technologist.

Final Thoughts

In an increasingly digital world, the importance of responsible technology cannot be overstated. As technologists, we should take responsibility for considering the potential consequences of our creations, both intended and unintended, to ensure we create technologies that serve all of society effectively and efficiently. Remember: behind every line of code, there is an ethical implication. Let's continue the pursuit of technology that is useful, equitable, inclusive, and, above all, responsible.


Video Transcription

I want to start with a few quotes about technology, because when we think about technology, we all have different perspectives. This first one comes from William Gibson: "I think that technologies are morally neutral until we apply them. It's only when we use them for good or evil that they become good or evil." And if you'll notice, I emphasized that word "we", because technologies are not just inert things; they happen because we interact with them, we create them, we bring them into the world. The next quote is from Tim Harford of the Financial Times: "New inventions do not appear in isolation; they profoundly shape the societies around us." And I would assert (this is now Rebecca talking, not Tim Harford) that they also are shaped by those societies.

So how we react to technology, the influence that technology has, is shaped by the societies around us. And then one of my personal favorites is from Marc Andreessen: "The historical track record of technology innovators predicting the consequences of their innovations is very poor."

So is everybody else's. As technologists, we are not very good at really understanding what is going to happen as a result of our technology, because as problem solvers we have this laser focus on "here's our audience." We have this entire discipline of user-centered design and user research and accessibility to understand how our users are going to interact with the solution that we've given them to their problem. But we have blinders on: we're looking at our problem and we're looking at our target audience, and we aren't always looking at the other people around us, the people that we aren't thinking about right now. And that's where this notion of responsible technology comes into play.

And we can have a discussion: is it responsible technology? Is it ethical technology? I consider responsible technology to be an umbrella term that takes in ethical considerations as well as considerations of equity. Are we bettering, or making worse, societal or economic inequities?

But it is also about what we as technologists need to do to think about the consequences of the technology that we are creating. There's clearly an intended consequence: the problem that we want to solve, the solution or service that we want to deliver to our customers or to society. But then there can be unintended consequences, and there might be second-order effects of those technologies as well.

And what responsible tech gets at is our responsibility, as the creators and the deployers and the implementers of technology, to take into account the broad consequences of our technologies, even if, as Marc Andreessen points out, we're not very good at predicting what those are.

Now, let's talk a little bit about what I mean by hostile tech, because there's a whole category of technology that people would easily recognize as hostile: hackers, ransomware, disinformation bots. All of those things are easily identifiable as hostile tech. But what about, say, tracking algorithms?

I was talking to a marketing professional not all that long ago, and she said no one cares about being tracked online if the recommendations and the information that they get back as a result of that tracking are useful. And I disagreed with her. I said it might be that many people don't mind, or even that most people don't mind, but a lot of people do mind. No matter what they would be giving up, they don't want to be tracked. They consider that tracking to be hostile. And so we need to take into account the perspectives of the people who are involved.

Sometimes this issue with the technology isn't even intentional. One of my favorite examples when I talk about bias in AI is a teaching hospital that tried to develop a model to help with recommendations on when patients should be put into the intensive care unit after a particular procedure. They used two different algorithms to train on the same data set, to see what kinds of models they got. The first model was very good at making predictions that tracked with the data set, but they couldn't really tell what the basis was for the recommended actions. The other technique didn't come up with quite as good a model, but they could actually interrogate the model and understand the rationale for the recommendations being made.

And what they learned is that they had a bias in their data set. Their policy had always been, for this particular procedure, that all people with asthma would immediately be put into intensive care. And so their data set had nothing to say about people with asthma, and not surprisingly, the resulting model said nothing about people with asthma either. Were those modelers malicious? Did they have ill intent against people with asthma? Of course not. What they didn't understand was that their data set had an underlying bias that they didn't recognize.
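The talk doesn't name the algorithms the hospital used, so what follows is only a sketch of the general idea: pair an audit of the training data with a model whose reasoning can be read directly. The data, feature names, and choice of logistic regression here are all hypothetical.

    # A minimal sketch of catching the asthma-style bias: audit the data,
    # then use an interpretable model whose weights can be read directly.
    # All data and feature names are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    features = ["age", "blood_pressure", "asthma"]
    X = np.column_stack([
        rng.normal(60, 10, n),   # age
        rng.normal(120, 15, n),  # blood pressure
        np.zeros(n),             # asthma: constant, because asthma patients
    ])                           # were always routed straight to the ICU
    y = rng.integers(0, 2, n)    # 1 = later needed intensive care (synthetic)

    # Warning sign one: a feature with zero variance carries no signal,
    # so the model can learn nothing about it.
    for name, col in zip(features, X.T):
        if col.std() == 0:
            print(f"'{name}' is constant in the training data")

    # Warning sign two: with an interpretable model we can read the learned
    # weights and confirm that asthma plays no role in its recommendations.
    model = LogisticRegression().fit(X, y)
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")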

Now, I'm sure that if people with asthma had died as a result of that model, that would have felt pretty hostile, but it certainly wasn't malicious and it certainly wasn't intentional. Fortunately, they had the foresight to use those two different techniques to determine the basis of the recommendations, so that they could understand that bias and possibly account for it.

So what kinds of areas might we be thinking about? Let's start with the environment. When you're developing a product, particularly a physical product, are you thinking about the communities around where that product will be manufactured? Are you thinking about the community and the environment around where the raw materials will be sourced?

What about equality? Does your product in some way discriminate against people? Again, facial recognition software is well known to not do as good a job on women's and Black faces as it does on white male faces. A lot of that has to do with the training set that's used. And so, if you're deploying facial recognition software, the probability of a false positive will be much greater for certain demographic groups than for others. Is that fair?

What about a credit scoring algorithm? As a bank, for example, how should I make the determination of whether or not to use proxies for particular attributes of people who are applying for credit? It might be that if I find and use some kind of proxy for socio-economic status, or educational upbringing, or race, I get more profit, because the model makes recommendations that tend to steer me away from people who might not be quite as good a credit risk.

But is that fair? Are you improperly labeling people as a credit risk based on factors that don't actually apply to them? As organizations, we need to come to terms with what constitutes "fair" in our context.
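As a sketch of what coming to terms with "fair" might look like in practice, a team could screen candidate features for proxy effects before they enter a credit model. The data and column names below are hypothetical, and a simple correlation is only a starting point, not a complete fairness test.

    # A minimal sketch of screening candidate features for proxy effects.
    # Data and column names are hypothetical; real screening would use
    # richer tests than a single correlation.
    import pandas as pd

    applicants = pd.DataFrame({
        "zip_code_income_rank": [0.9, 0.8, 0.2, 0.1, 0.85, 0.15],
        "years_at_address":     [10, 2, 8, 1, 5, 7],
        "protected_attribute":  [0, 0, 1, 1, 0, 1],  # e.g. a demographic flag
    })

    # A feature that strongly tracks a protected attribute can smuggle that
    # attribute into the model even when the attribute itself is excluded.
    for feature in ["zip_code_income_rank", "years_at_address"]:
        corr = applicants[feature].corr(applicants["protected_attribute"])
        flag = "POSSIBLE PROXY" if abs(corr) > 0.7 else "ok"
        print(f"{feature}: corr={corr:+.2f} [{flag}]")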

It gets even more complex when you start talking about autonomous driving. Who do you really want making the decision about whether the car prioritizes the life of the driver or the life of the pedestrian? Personally, I'm a software developer; I don't want the responsibility of deciding how that happens. And so we have this broad range of areas where technology is having an increasing impact on our lives, and we have to figure out how, as technologists, we respond to that.

I've given a lot of examples of how this can play out in different realms, but I want to talk a little bit about why this is such an important topic now. If you think back to when people like me, with all this gray hair, got involved in technology, we were doing things like account ledgers, accounts payable, payroll. There weren't many gray areas with respect to what the right thing to do was. No, you don't cheat. No, you don't break the law. Yes, you pay in the withholding taxes you're supposed to pay in. For so many of those things, it was very straightforward what the right thing to do was. But the surface area where technology impacts our lives has grown: in the software realm, things like criminal justice and medical decision-making and credit scoring, but also in the ways it's starting to impact us physically, with more and more wearables, more and more medical technology, and more and more automation in things like manufacturing and chemical processing.

And so the footprint of what can go wrong, if we don't take particular problems into account, if we don't take particular failure modes into account, is enormous. We need to spend more time thinking about how to do a better job of predicting the possibly unrecognized positive unintended consequences of our technology, but also the unintended negative consequences, and what we might do to mitigate those.

And so what should we do? I don't think it's terribly controversial to say that the software developers at Volkswagen, whether it was an individual or a team, who made the decision to write the code that said "if I'm hooked up to an exhaust-testing machine in the United States of America, the engine runs this way, and if I'm not hooked up to that machine, the results should look like this" made a bad choice. I don't think anybody would argue with that. But past those obvious cases, we have a responsibility to consider the broader range of stakeholders. We need to lift our heads up and, instead of just focusing on our target audience, look around. Maybe there's another target audience out there, and with a very minor tweak either to our product or to our message we might have vastly expanded opportunities. Or maybe, if we think about how somebody who doesn't look like us, or who comes from a different culture or a different socio-economic or racial background, might feel about our technology, we can make decisions that help it be less hostile to people who weren't in our original target audience.

But how can we do that? We can't have a representative from every culture and every language and every religious background and every sexual identity and race and country and socio-economic group; we'd never make any decisions with a team that big. But we do have to do what we can to bring diverse perspectives into the discussion, and where you can't do that by building a diverse team, there are strategies that will allow you to do it. Many teams in different organizations have developed facilitation techniques that prompt us to look at problems differently and from different perspectives. And, with permission from these different groups, we've collected those techniques into something we call the Responsible Tech Playbook, and it's available from our website completely free: no paywalls, no "you've got to give us an email address" kind of thing.

It's a variety of facilitation techniques, some modeled after things like threat modeling; there are the Tarot Cards of Tech, and there's something called Consequence Scanning. All of these help prompt you, as you're trying to understand what your technology might represent, to step back and think: well, what about this? What about people from here? What about someone who wants to do this? And it's that diversity of perspective, that getting our heads up, that will allow us to start looking more broadly at what impact our technology is having. Because, yes, there are hackers out there; there are people who are malicious, who are intentional about creating hostile technology.

But the vast majority of technology that ends up being hostile wasn't intended to be. And it's our responsibility as technologists to do what we can to not create hostile technology when we don't intend to. I want to go back to that quote from William Gibson: "I think that technologies are morally neutral until we apply them. It is only when we use them for good or evil that they become good or evil." It's our responsibility as technologists to make that determination and to decide, for ourselves, that we are going to be on the side of the responsible technologist; that we are going to do what we can to ensure that the technology we create has as broad a positive impact as possible and as small a negative impact as possible; and that we do what we can to mitigate the risks. Thank you so much for your attention.