IAC19 Dan Zollman on Ethical Design

Cersi

Published 2019/03/22 in the Design category

Transcript
1. I’m an information architect and strategist based in Boston, MA. Ethics has been an important theme for my entire career, but I feel that I’m only beginning to understand how to approach ethics in practice. This talk represents my current understanding. The speaker notes contain most of the spoken part of this presentation, with additional examples, resources, or citations below the separator. --Other notes/background: • Please feel free to contact me to discuss any aspect of this topic. • I’m also part of an online community for Ethical Design which has global, English-speaking participation and is connected with local meetups in several cities. • Pronouns: he/him/his
2. This is my vision statement for those working in design and technology fields.
3. An online community for this topic has its origins in the IA Summit.
4. The part I want to focus on is agency.
5. The practical reality of ethics for designers, and anyone working in technology, is that it’s complex and it’s political.
6. We need a systemic approach in order to understand our role in this complex and political environment.
7. We’ll bring that systemic approach through five themes, focusing on theory and principles, with a bit about practices at the end.
8. A core idea for this session: Ethical practice is characterized by ongoing questions that are never fully answered. I will raise some of these questions, but I can’t give you the answer. Only YOU can answer them in your own situation. It’s an ongoing, active process. It’s about developing awareness and the capability to do this work.
9.
10. We know there are serious ethical problems in design and technology. We hear about these issues everywhere—and we know that UX professionals are directly implicated in many of them. But it’s not just these issues. All design has an ethical dimension.
11. In design, we make decisions that affect other people’s lives. We make decisions about what things are and how people should behave. That’s not a neutral act. The idea of human-centered design means we’re trying to improve people’s lives. Design for good. But we need to interrogate that. Is it really good? For whom? Who gets to decide that? This is a starting place for ethical questioning. --• “Design is applied ethics.” – Cennydd Bowles • Alain Findeli: When approaching a problem, both technological and behavioral solutions are always available; product design already involves a choice to use a technological solution. “Choosing a technological mediation is a matter of ethics.” (Findeli, “Ethics, aesthetics, and design”) • “Human decision making, while often flawed, has one chief virtue. It can evolve. As human beings learn and adapt, we change, and so do our processes. Automated systems, by contrast, stay stuck in time until engineers dive in to change them.” (Cathy O’Neil, Weapons of Math Destruction)
12. There are many ways of defining “good” that may overlap or compete in practice. Perhaps good design enhances human wellbeing. Perhaps good design aligns with our values, beliefs, and ideology. We’ve seen this question with purportedly neutral platforms like Twitter, Facebook, or Bitcoin. Platforms that are intended to break down political hierarchies, and to give everyone a voice, actually do express a very particular ideology. Perhaps good design enhances societal and environmental outcomes. Or it enhances social justice. That’s about fairness, equality and equity, and combating injustices in the world. ----Other examples of how these might compete in practice: • If you have a libertarian ideology, that may shape or delimit your understanding of what approaches to social justice are acceptable. • Products designed in the interest of human wellbeing cannot be “universally” designed, i.e., you cannot include everyone or provide an equal/equitable outcome for everyone.
13. We could look at it from a human rights perspective. Based on my reading, these are all the rights listed in the UN Universal Declaration of Human Rights from 1948. There are a lot of them! Values may differ between users, clients, and ourselves. They differ from user to user, and between communities of users/stakeholders. And as professionals, we are personally invested and bring closely held values to the table. Design requires difficult decisions about which kinds of good we’ll prioritize. Even simple design problems can be ethically complex. --Note: Recently, many statements of values/principles/heuristics/ethical frameworks have been proposed by designers in the UX world. I find most of these untenably narrow and relative to Western values. For example, many of them implicitly prioritize individual autonomy as an overarching framework. Others mix human rights with particular ideologies, such as “free and open source”, without acknowledging the limitations of the latter.
14. The philosophy of ethics offers many frameworks that can help us. Even though I’m trying to give you a birds-eye view, this presentation favors a consequentialist framework, because I’m going to emphasize the consequences our actions have for others. But even then, ideas from virtue ethics are mixed in. These frameworks can help sensitize us to our moral reasoning, allowing us to be more conscious and deliberate in our decisions. Ultimately, you still have to make your own decisions.
15. In addition to making decisions about values, we have to make decisions about what role we’ll play as practitioners and what kind of responsibilities we’ll take on. Perhaps we should focus on serving the client. That’s the realm of professional ethics. Perhaps we should not only serve the client, but make sure we do no harm to users. (That comes from medical ethics and bioethics.) Beyond that, maybe we should effect positive change for users. Or going further, effect political change. Some people would say: yeah! We should always use design for political change. Others would say no: It would be unethical to use a client engagement as a vehicle for political ends. So we have to make these decisions too. The problem with this framing is that it implies we have the option of not having a role, not being involved….
16. But “do no harm” is not always possible, because harm is already being done. We already participate in dynamic systems where good and bad things are happening. We’re a part of that. All technology has positive and negative consequences. Any organization you work for can both improve and perpetuate systemic problems.
17. We can’t be neutral and have to choose between imperfect alternatives, weighing the upsides and downsides of each. On one hand, we can’t fix everything, and we can’t take the world on our shoulders. But we do have some power, and maybe we can take some things on our shoulders. According to Jared Spool, there is more demand for designers than supply. That means we have choices about the work we do. If you’ve got a seat at the table, what do you plan to do with it?
18. Let’s go deeper into this idea that we are a part of dynamic systems.
19. We participate in larger systems where we can’t specify the outcomes the way we can write a design specification. Design is a distributed, social process.
20. (This slide builds out from the center of the left circle.) Design is a collective process involving many people who have different goals and intentions, different values and beliefs. Those people may be diverse, or not so diverse. They have different points of view. They have different levels of power and influence over the process. All of this shapes the outcomes of design. The product is also shaped by the design skills, processes, and capabilities we have in the organization. It is shaped by the organizational structure, information flows, and incentives. It is shaped by the external forces the organization has to respond to. Political, economic, social, technological, legal, environmental. And all of this is permeated by culture—the shared understanding we have about who we are and what we believe to be true about the world.
21. It is through all of these structures and processes that we gain an understanding of the world of our users and the context(s) we are designing for. We understand that world through a combination of research and the knowledge and lenses already embedded in the organization. Then all of this shapes the products and services we produce.
22. And finally, once all of that happens, the actual meaning and function of these products are reconstructed in the context of use. There are many more complex structures in that environment, shaping the outcomes of our design. So this is a complex process.
23. The responsibility for these outcomes is distributed across many participants in the process. There are a few systems principles we can apply to any complex process like this. One is that there is no simple relationship between cause and effect. For any given outcome, there is rarely one single root cause. Instead, there is joint determination by many causes. It’s not nature OR nurture, but nature AND nurture. Two, cause and effect can be nonlinear, bidirectional or circular. You’ve seen this if you’ve ever had a supervisor who was micromanaging you, and you responded to that by pushing back and trying to create space for yourself, and they responded to that by micromanaging even harder. It’s a circular relationship. And third, people have limited information about the system around them. They make the best decisions they can based on the information they have, but that information is incomplete. That’s the principle of bounded rationality.
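The circular cause-and-effect idea can be sketched in a few lines of code. This is a minimal, hypothetical simulation of the micromanagement loop described above; the variable names and coefficients are invented for illustration and are not from the talk.

```python
# Illustrative sketch of a reinforcing feedback loop: a supervisor's oversight
# provokes pushback, which provokes more oversight. Coefficients are arbitrary.

def simulate_feedback(steps: int = 5) -> list:
    oversight = 1.0   # how tightly the supervisor manages
    pushback = 0.0    # how hard the report pushes for autonomy
    history = []
    for _ in range(steps):
        pushback = 0.8 * pushback + 0.5 * oversight   # pushback responds to oversight
        oversight = oversight + 0.6 * pushback        # oversight responds to pushback
        history.append((round(oversight, 2), round(pushback, 2)))
    return history

if __name__ == "__main__":
    for step, (o, p) in enumerate(simulate_feedback(), start=1):
        print(f"step {step}: oversight={o}, pushback={p}")
```

Because each variable feeds the other, both escalate without either party "causing" the outcome alone, which is the point of the circular-causation principle.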
24. To illustrate this, let’s look at a simple scenario. Let’s imagine there’s a startup building a new enterprise software platform, and they have six months to launch an MVP. There’s a UX designer who’s doing the very best lean UX research, doing site visits with potential customers to understand their needs. The product manager and developer also want to build the best product, but they know they have time and resource constraints, so they all work together to prioritize features and build something they can start selling to customers. Then there’s a second company that’s doing some business process reengineering, and they’re under pressure to meet a deadline that came down from the CEO. After a lot of negotiation, they purchase this product from the first company and roll it out to the employees. Unfortunately, it’s a disaster. People can’t figure out how to use the software. It doesn’t match the way they do their work. The IT department is overwhelmed with support tickets, and they can’t help. But people are relying on this platform to do their jobs. They can’t get things done. So projects fail. They lose business. Some employees get bad performance reviews. At the end of the year, they don’t get raises or bonuses. So we’re talking about something very mundane, it’s not Cambridge Analytica, but it’s something any of us may have experienced if you’ve worked either on the production end or the receiving end of enterprise software, and this had a serious impact on the lives of these employees.
25. What would have had to happen in order for this to be a successful software rollout? We could look at the way the original user research was done, or who was included in that research. We could look at the way the product team prioritized features and planned the product. The alignment between the product team and the sales team, which informed the sales team’s understanding of the strengths and weaknesses of the product, and their choices about which customers to pursue. In the second organization, we could look at the quality and the timeline of the requirements gathering process. The way they configured and customized the software. The training given to employees as the software was rolled out. And whether or not the IT support desk had the right knowledge and procedures to support the product. So a lot of this falls outside the traditional responsibilities of design and UX, but if what we care about is the outcomes for users and stakeholders, we can imagine how all of this could have gone differently. --For practical guidance on how the UX function can align with and collaborate across other organizational functions, see, for example, Alesha Arp’s presentation from the same conference (IA Conference 2019).
26. There are a few more principles illustrated in this story. • Design decisions have unintended consequences. Some of these can be anticipated; others can’t. • The effects of these decisions scale; the product team is only a few people, but this affects hundreds or thousands of people. • The impact is felt across space and time. The users are far away, and the effects may not be felt for months or years. The designer never experiences or observes these outcomes. • Again, the context of use is a big part of the outcome. What if this software had been rolled out to a university? A hospital? A construction company? The consequences would have been much different. • We can see how the voices who are included in the design process, or the voices that are not included, also affect the outcome.
27. Based on what we’ve been talking about, let’s do an exercise… • Partner with someone in the audience and decide who is “A” and who is “B”. • Individually first, read the scenario and reflect on it. • Once you’ve answered the question, discuss with your partner. This should be a judgment-free conversation. Just listen to what the other person has to say. https://goo.gl/forms/mqCYSQmNeryiHtlv1
28.
29. Did anyone reconsider after the conversation? Obviously, these are leading questions, and they’re written to sway you. But based on the information in the scenario, there is no correct or incorrect response to the poll. Each scenario… • Was told from a different person’s point of view. • Had different information. • Had different levels of insight into the CompuGuard company. • Emphasized different pros and cons. You also probably had certain beliefs, feelings, or past experiences that may have affected your response. You may have had past experiences with privacy, surveillance, or authority, that influenced your answer. And whether or not you felt you could make a difference in this company may have influenced your answer. This decision comes down to weighing a whole bunch of pros and cons, which is difficult because on either side, you have important human values that are all right, but they conflict with each other. And you don’t have enough information or enough foresight to know the consequences of your decision.
30. A couple more principles here: • Extending the idea of unintended consequences, technology can be misused or abused. CompuGuard’s customers, even if there were checks and balances, could abuse this technology to spy on their employees. • People with different points of view and different information have different realities about what is right and wrong. This is very real for them. • And last, did anyone find that in scenario B, the information I gave you about CompuGuard’s competitors led you to make a decision you might not otherwise have made?
31. So that leads us into the idea of hierarchy. We can see how much power we have as professionals who shape the technological world, yet we work within larger structures that shape the options available to us, and limit our agency.
32. If you’ve been to any IA conference, you’ve probably seen this diagram, the idea of pace layering, from Stewart Brand. This is just one way of looking at civilization.
33. Every complex system has many layers of hierarchy and scale and subsystems, systems inside systems inside systems. There are small-scale structures that move and change very quickly, and there are larger-scale structures that change very slowly. It’s from these fast-moving structures that systems learn, and from the larger, slower structures that we get stability and consistency. This relates to design in a very important way…
34. Very important idea: Design reproduces dominant sociopolitical relationships unless there is a deliberate effort to change them.
35. We’ve seen algorithms that amplify racism. Design for healthcare that improves outcomes, but benefits the wealthy. Urban planning that reinforces segregation in cities.
36. Even our exemplars of the very best UX and service design come from companies that have done a lot of good in the world, but also have externalities. These are costs that the company doesn’t take responsibility for, and that are absorbed by other people. A good user experience doesn’t guarantee that this won’t happen. It can even facilitate it.
37. So how is it that design does this? This is a course unto itself, but I’ll talk about three mechanisms by which design reproduces these relationships.
38. The first is economic incentives. If you think about it, it’s strange that we manufacture tons and tons of plastics so you can carry food to your home and throw them away, using gasoline to drive back and forth over and over again. But these wastes don’t go outside the system; they are externalities that come back to us in the form of other costs. But we’ve built, and rely on, this infrastructure of food distribution and suburban sprawl. It’s too expensive to change it, so we keep designing new products that continue the cycle.
39. And in a society with great wealth disparity, investors follow those who have money to spend. That applies to consumer products and services as well as hospitals and universities. The result for designers is that designers who want to make a positive difference in the world are more likely to find paying work that serves affluent customers.
40. Another mechanism: Governing mentalities. I’m borrowing and extending this term from STS scholar Nancy Campbell, who used this to describe the way human beings are seen and understood from the point of view of policy making. You may have heard about predictive algorithms and how they’re being used in law enforcement and criminal justice. They’ve been attacked for the way they make highly biased and inaccurate judgments about human beings. But going beyond bias, these technologies tell us something about how the justice system sees criminals, and criminality—perhaps as a deterministic set of personality traits. That mentality is the deeper structure in the hierarchy that I’m talking about.
41. Governing mentalities describes “those widely shared values, norms, expectations, and assumptions of how the world operates.” They “are simultaneously the most important and the most difficult to identify: they are pervasive, subtle, distributed patterns of thought that underpin social activity and personal interpretations.” So they are incredibly important, and they’re often invisible. --“Governing mentalities—those widely shared values, norms, expectations, and assumptions of how the world operates—are simultaneously the most important and the most difficult to identify: they are pervasive, subtle, distributed patterns of thought that underpin social activity and personal interpretations. Governing mentalities shape how people interpret macro social-cultural phenomena and how they think about their own lives and identities. Coming to terms with the analytic and practical tensions associated with the persistence of such forces is a serious challenge to design thinking. Feminist design scholarship emphasizes the importance of this challenge by showing how governing mentalities impinge on design practice to systematically shape outcomes.” From Dean Nieusma, “Alternative Design Scholarship: Working toward Appropriate Design.” In Design Issues, Vol. 20, No. 3, STS and the Social Shaping of Design (Summer, 2004). Nancy Campbell introduces the idea of “governing mentalities” in her book, Using Women: Gender, Drug Policy, and Social Justice.
42. Let’s connect this to a few other ideas. For decades, color film was specifically designed to capture the skin tones of white people. The standard color reference cards used throughout the photography industry were these Shirley cards, photos of white women. You could not take a good photo of a black person with this film. This only changed in the 1970s when advertisers started to complain that they couldn’t take good photos of wood furniture and chocolate.
43. During Apartheid in South Africa, some government officials used Polaroid ID-2 cameras for exactly this reason. These cameras had a “boost” button that would increase the flash brightness by 42%, perfectly tuned to capture black skin. They used these cameras to take the ID photos for the passbooks that the government used to control the movement of black people around the country. So this is an example of abuse not intended by the designers, and it’s also an example of inclusive design. Inclusive design is incredibly important—but in this instance, it did not lead to something more just. It was a tool in a system that was vastly unjust.
44. This is what it looked like if you took a photo with Kodak film. You can see some of the faces in this picture, but if you look at the man on the right, you can only see his teeth and his eyes. Source: Vox, “Color film was built for white people. Here's what it did to dark skin.” (video). https://www.youtube.com/watch?v=d16LNHIEJzs
45. The writer Syreeta McFadden wrote powerfully about this: “Kodak never encountered a groundswell of complaints from African-Americans about their products. Many of us simply assumed the deficiencies of film emulsion performance reflected our inadequacies as photographers.” The perception is not that this is about the technology. It’s reflected back into self-perception and internalized.
46. She continues to write powerfully about how these widely seen images serve to place a lower value on black skin and reinforce certain kinds of cultural perceptions AND self-perceptions of African Americans. If someone tells you that this is an inherent limitation of the technology, or that it’s the consumer’s job to pick a better camera, I ask you to reconsider whether it was within the power of designers and engineers to change this cultural reality. --Full quote: “I don't know when the first time was I learned that I was ugly. Or the part where I was taught to despise my dark skin, or the part where my mother's friends or old aunts yelled at us to stay out of the sun and not get so dark. I hear this from dark girls all the time. I don't know how we were taught to see a flattened blackness, to fear our own shades of dark. I do know how we accepted the narratives of white society to say that dark skin must be pitied, feared, or overcome. There are overwhelming images of dark-skinned peoples in Western imagination that show us looking desperate, whipped, animalistic. Our skin blown out in contrast from film technologies that overemphasize white skin and denigrate black skin. Our teeth and our eyes shimmer through the image, which in its turn become appropriated to imply this is how black people are, mimicked to fit some racialized nightmare that erases our humanity.” Syreeta McFadden, “Teaching the Camera to See My Skin”, https://www.buzzfeednews.com/article/syreetamcfadden/teaching-the-camera-to-see-my-skin
47. Kodak fixed their film, but this hasn’t gone away. There are still problems with each new photography and facial recognition technology that comes out. When this happens, it says: The camera doesn’t see you. This product isn’t made for you. You don’t get to use this. Most importantly, it reinforces perceptions of how you are seen, and exactly how much you are valued, as a person of a non-white race. ---More about this: https://revisionpath.com/exposed-films-legacy-racist-technical-development-lives-digital-age/ https://imgur.com/aWUHqz6
48. We know that facial recognition technologies are less well-tested, and less accurate, for non-white people. As facial recognition becomes more and more ubiquitous – in places where being recognized by a computer can get you arrested – this is critically important, and it disproportionately affects marginalized people. But we have to remember that this isn’t just an issue of inclusion, because there are deeper structures in the system hierarchy. --More about this: https://www.washingtonpost.com/news/the-switch/wp/2018/05/22/amazon-is-selling-facial-recognition-to-law-enforcement-for-a-fistful-of-dollars/?noredirect=on&utm_term=.5ead328c67f0
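One way accuracy disparities like these are made visible is by computing error rates separately per demographic group rather than in aggregate. Here is a minimal sketch of such an audit; the data and group names are synthetic and invented for illustration (real audits of commercial systems use benchmark datasets).

```python
# Illustrative sketch (not from the talk): auditing a classifier's false-match
# rate per group. An aggregate accuracy number can hide large disparities.
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, actual_match, predicted_match) tuples."""
    negatives = defaultdict(int)   # true non-matches seen per group
    false_pos = defaultdict(int)   # non-matches wrongly flagged as matches
    for group, actual, predicted in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Synthetic data: equal group sizes, unequal error rates.
records = (
    [("group_a", False, True)] * 1 + [("group_a", False, False)] * 99
    + [("group_b", False, True)] * 8 + [("group_b", False, False)] * 92
)
rates = false_match_rate_by_group(records)
print(rates)  # group_b's false-match rate is 8x group_a's
```

In a law-enforcement context, a higher false-match rate for one group translates directly into more wrongful stops or arrests for that group, which is why disaggregated evaluation matters.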
49. Going back to the problem of bias in algorithms, Julia Powles and Helen Nissenbaum write: “Alleviating this problem by seeking to ‘equalize’ representation merely coopts designers in perfecting vast instruments of surveillance and classification.” ---Julia Powles & Helen Nissenbaum, “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence” https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53
50. The governing mentality in the age of ubiquitous computing is one where every detail of our lives and bodies is subject to observation and measurement so that it can be managed—most likely, by the powerful entities who are gathering or receiving that data.
51. A third mechanism I want to talk about is the idea of narratives. These are the stories that we tell, explicitly and implicitly, about who we are, where we came from, why we’re doing what we’re doing, and why what we’re doing is good. It’s not that narratives are inherently bad. What I want to point out is that narratives express a certain point of view, and they may obscure alternative narratives and alternative points of view. We have social narratives about human nature and politics.
52. We have narratives about technology, like: “AI is inevitable.” Well, that doesn’t have to be true. Or that certain technologies are neutral, or that a technology will lead to revolution.
53. Organizations have narratives. We saw this in the exercise. The security company, CompuGuard, had a narrative about why what they were doing was important and necessary and good. Again, I’m not saying this is inherently bad. I’m saying it is a specific point of view, and it may obscure or even push out alternative points of view.
54. We also have narratives in design. We should reflect on these and understand the limitations of the stories we tell ourselves about design.
55. We like to say that design starts with desirability for the user. But the reality is that we usually work within the envelope of a business model. If it’s not viable, it doesn’t happen.
56. Thomas Wendt has explained that this can lead us to confuse human needs with market needs, and confuse humans with consumers. It leads us to create unsustainable products and set aside the externalities. --Thomas Wendt: “centricity forecloses on everything outside the center” “human becomes sovereign in a system of which we are only a small part” From “Critique of Human-Centered Design OR Decentering Design”
57. Lean is a narrative. There are many versions. In some versions, you avoid creating long-term plans. You identify the most stripped-down, low-hanging fruit you can find, build it as quickly as possible, then push it out to your customers. The problem is, this picture is wrong. There’s no way a skateboard is a hypothesis for an SUV. You’re not validating anything here. You’re falling into the trap of the bounded rationality of short-term thinking. You’re releasing a product in the real world that could actually put people at risk. There are real examples of MVPs putting people at risk because of this thinking. --https://hackernoon.com/mvp-paradox-and-what-most-founders-need-to-be-aware-of-3a5f8c3acb76 https://medium.com/theory-and-principle/what-does-viable-mean-in-the-context-of-consumer-legal-tech-mvps-f2801dd13152
58. The MVP is the seed that grows into the long-term product. We need to think carefully about that long-term vision. What needs to be true about that seed in order for it to have positive long-term outcomes? How do we build those relationships into the product from the beginning? How will we measure and evaluate that? https://hackernoon.com/mvp-paradox-and-what-most-founders-need-to-be-aware-of-3a5f8c3acb76
59. There is also a narrative that design will save the world.
60. We’ve recognized that design has produced the unsustainable civilization we live in. But then design is positioned as the solution. As something that can solve global problems. A discipline that is special compared to all others. This narrative can’t be true. I think design is a critical part of our future, but design has limitations too. Design cannot transform politics and social hierarchies. It’s one among many forms of knowledge and expertise that are needed to form an ethical practice. --Arturo Escobar asks, “If we start with the presupposition, striking perhaps but not totally far-fetched, that the contemporary world can be considered a great design failure, certainly the result of particular design decisions, can we design our way out?” If the answer is to be yes, then we need to move beyond problem solving.
61. We talk about designing the end-to-end experience. But whose experiences are we designing? The people who will go inside the building. Not the people who were sitting on the sidewalk across the street from this sign. (Gensler is an architecture and design firm; I took this photo in downtown Boston.) This kind of statement implies that the designer has greater wisdom and more complete knowledge of the people whose experiences are being designed. You can’t design the human experience. What this really says to me is that we’re going to take more and more parts of the human experience and turn them into consulting projects. And then measure and data mine them so we can add even more technology.
62. So let’s put this all together: hierarchies, incentives, governing mentalities, narratives. The intent, motivation, biases, and assumptions; the voices in design; organizational structure and culture; narratives, mentalities, and structural forces are reified and embedded in design, structuring the experiences of stakeholders in aggregate, forming systems of participation in social, cultural, and civic life. There are cycles here. Virtuous cycles of inclusion and justice, and vicious cycles of injustice and marginalization.
63. Very, very carefully, I want to make a qualified analogy between technology and government. People rely on them in order to participate in contemporary life and realize their human rights. Decisions about technology and policy scale to affect many people across time and space. And in democracy, there’s a fundamental tradeoff between the ability of people to participate directly in that decision making, and the need to designate representatives, because not everyone can participate in every decision.
64. So what if designers are like legislators in the digital and physical world? How would you change your process if you knew you had this responsibility? What if we replaced the word “users” with “constituents”? How would that change how you design? I don’t have answers to these questions.
65. I do think we can challenge hierarchies. And we should. This starts with us. The more we reflect and understand ourselves, recognize our own values, beliefs, assumptions; our governing mentalities and narratives; the better prepared we are to act. And together, we can ask questions, start dialogue, and change things together. In an organization, it only takes a few people to start a movement.
66.
67. As designers, we work in complex systems, but we are located in these systems, in a particular place.
68. We have a perspective, a limited view of this system.
69. It’s only by bringing together stakeholders with different points of view and different value systems that a bigger picture starts to form. --In systemic design, The Law of Requisite Variety says that the design process has to involve perspectives from every part of a system in order to account for its complexity. Arie de Geus “…the real purpose of effective planning is not to make plans but to change the microcosm, the mental models that these decision makers carry in their heads.” Fenn & Hobbs – ethical pluralism across multiple stakeholders in a changing sociocultural ecosystem 69
70. Repurposing the words of anthropologist Lucy Suchman: “multiple, located partial perspectives…find their objective character through ongoing processes of debate”. It’s through this debate, having perspectives that conflict, going through that agonism (or struggle), that we get new insight. We have to let go of the need for objectivity—the need to understand the entire system, to have the absolute truth about what solution is right, because this is subjective and contextual. Instead, the goal is to place “all persons as ends in themselves”. --Lucy Suchman, “Located accountabilities in technology production.” Alain Findeli, “Ethics, aesthetics, and design” 70
71. I think designers are in a unique position to facilitate this dialogue, to bring in differing voices, to reflect, discuss, and wrestle with difficult questions together. Designers have the ability to creatively reframe disagreement, and to help the group reach shared understanding. 71
72. I want to make a distinction between codes of ethics and ethical practice. A code of ethics is static and doesn't change when you bring it into a new project. In contrast, ethical practice involves new ethical thinking and reflection in each new situation, because the answer is always different. 72
73. Ethical practice is ongoing, social, personal, reflective, inclusive, pluralistic, dialogic, and contextual. Knowing this is more important than having any one method or technique. 73
74. One way to approach this is by looking at our relationships. Bringing attention to our relationships and explicitly defining them can help us • Facilitate dialogue • Acknowledge power relationships • Provide scripts to guide how we interact with each other • Structure our decision-making processes. --In Contextual Design, Karen Holtzblatt and Hugh Beyer explain how relationship models come with implicit "scripts" that shape our interactions, and we immediately draw on these scripts intuitively. For contextual design research, they propose the master/apprentice relationship as a template for the relationship between the user (master), who demonstrates their embodied work process, and the researcher (apprentice). Note that I am not suggesting that master/apprentice is the appropriate model for all user-designer relationships. 74
75. I'll briefly discuss three relationships (designer-client, designer-user, and designer-product; the user-product relationship is not covered in this version of the presentation). 75
76. In the designer-client relationship, a key question is: Should the designer be a neutral facilitator of clients’ goals? Or an expert who gives strong ethical direction? There isn’t just one way of defining this. Harold Nelson and Erik Stolterman write about the designer/client power relationship and how this can have different configurations. --Harold Nelson and Erik Stolterman, “The Design Way” 76
77. Nelson and Stolterman write about the idea of a "service relationship" between designer and client. It's not a helping relationship, which is a habitually unequal relationship that reinforces itself over time. It's an equitable partnership that brings everyone along in the process together. There is no assumption of inequity in the capacity to contribute: the designer has very important expertise to contribute, and so does the client. It's a complex relationship where everybody has a different role, but each is equally valid—with the understanding that a variety of stakeholders must also be brought in to make sure all interests are sufficiently represented and externalities are accounted for. Skills required in this relationship include active listening and facilitation. --• "This team boasts a composition of diverse roles that are distinctly different but always equitable in character. Because of this, those in the role of 'client' experience change motivated out of their own desiderata, rather than someone else's limited understanding of what is best for them. The client, in this case, is a full member of the design team. There is no assumption of inequity in the client's capacity to contribute." (The Design Way, 55-56, my emphasis) • Peter Jones observes: "User experience practice often promotes an authoritative position toward influencing changes to product requirements conferred by their privileged access to user data. In uncertainty situations, attempts to leverage this position can make matters worse, as it tends to polarize team members into opposing camps." (Peter Jones, We Tried to Warn You: Innovations in leadership for the learning organization) 77
78. 78
79. In my mind, the most harmful governing mentality in user-centered design is the idea that “users don’t know what they want”. That puts us in the position of scientists observing people as research subjects, and then trying to deduce our own abstract representations of what people want or need. 79
80. Steve Woolgar has observed that the practice of usability testing is as much about getting users to behave the way we want, as it is about getting the product to behave the way users want. I argue that this is transaction-centered, not human-centered. --It also places observable, short-term outcomes as the primary or only indicators of the success of a product. This fails to reflect the long-term processes of learning, habituation, social behavior, etc. 80
81. There may not be a way of neutralizing the power that designers have compared to users, especially when working within a business model that unilaterally sets the terms for this relationship. We can try not to abuse that power, which takes incredible restraint, and ask how we can live up to the responsibility that comes with it. To the extent it can be done fairly, we can think about the idea of equitable partnership, and the idea that stakeholders have necessary contextual knowledge and expertise in their own lives. The designer can help stakeholders articulate what they want, need, and value. Instead of envisioning futures for them, we can guide them in envisioning their own futures, and design with them. The designer/user relationship is not one of a scientific researcher studying participants, but one of "moral engagement". --Additional notes: • As another critique of the "helping" relationship, see Liz Jackson's keynote talk at the recent Interaction19 conference, "Empathy reifies disability stigmas." https://interaction19.ixda.org/program/keynote--liz-jackson/ • "we need to begin by problematizing the terms 'designer' and 'user' and reconstructing relevant social relations that cross the boundaries between them." (Suchman, "Located Accountabilities") • Lucy Suchman has also written about the invisible forms of labor that go into the 81
82. use and application of a new technology. "Recent research on the actual work involved in putting technologies into use highlights the mundane forms of inventive yet taken for granted labor, hidden in the background, that are necessary to the success of complex sociotechnical arrangements." (Suchman, "Agencies in Technology Design: Feminist Reconfigurations"). The "finished product" is not finished; adoption cannot be taken for granted. • Again, see the note on slide 75 about the master/apprentice relationship model in Contextual Design. 81
83. Finally, as designers, we can remember that problems are always seen from a point of view, so we can start with self-awareness. If you're designing an educational system, whether you see education as exciting and inspiring, or as a form of authority to be challenged, changes the way you approach your work. Second, even simple design problems are ethically complex. We can't fully specify complex systems, or keep them in stasis, so the language of problem/solution isn't quite sufficient. Focusing on localized problems or end-to-end experiences may lead to unsustainable solutions. Instead, our job is to create the infrastructure for people and systems to thrive and to be healthy. It's to create the conditions for behaviors that persist after our work is done. Our job is to be good ancestors. --• Another dimension of this relationship—overlapping with the designer-client relationship—is to see design situations as opportunities for strategic change to processes, policies, and infrastructure. I highly recommend the short book on strategic design, "Dark Matter and Trojan Horses: A Strategic Design Vocabulary" by Dan Hill. • See also critiques of anthropocentrism in design, such as Thomas Wendt's "Decentering Design." 82
84. Just in the last one or two years, many new toolkits and frameworks have come out for ethical design. We might ask whether we should introduce new tools or modify our existing ones, and I think a combination is appropriate. But the important thing is the questions we’re asking in all steps of the process, because everything we do in design is part of an ethical practice. I’m an information architect, so I created a classification with 8 categories…. 83
85. As we’re approaching a project and forming a team: Which voices will be included? How will we make sure they are fairly represented? How will the process be facilitated so their perspective is actually accounted for in the design process? We can reflect personally and as a group on how we are already starting from a specific place and point of view. What values, beliefs, assumptions, and feelings are we bringing into the work? These are just examples. 84
86. In the design process, we're already asking certain questions. The design process is essentially about asking and answering questions about the design situation and the potential solution. We can incorporate ethical reflection and discussion into these activities, and into the artifacts we produce along the way—UX strategy briefs, personas, journey maps. What are our goals? How do we want the world to be different? What might the future look like? (Again, these are just examples.) 85
87. Through all of these conversations, we can make the ethical dimensions visible in the work we do, make explicit the points of view we embed in our work. Second, we can look at the inverse, the negative space, and ask about the alternative points of view that we might be excluding or overlooking. --This is a work in progress. I will continue to develop this list and compile resources, frameworks, and tools in each category. 86
88. This is also a call to action for our professional communities and professional organizations. It's our responsibility too. We need to increase the agency of every practitioner to do ethical work. Ethics and responsibility should be integral to the culture we foster in our conferences and magazines, and to our education (both academic and professional). We can build support networks, locally and online. This impacts the way we position the value of information architecture and UX, and how we build practices in organizations. We can also develop career paths and create more opportunities for responsible and meaningful work. 87
89. Let’s build a resource library, a conference, and a community for ethical design. I invite you to join the Ethical Technology / Ethical Design community. You can sign up at ethicsofdesign.org. 88
90. We have agency to ask questions, to change the way we work, to challenge the systems around us. We can’t control everything, we can’t fix everything. Bad things will happen, our work will be misused. But we do have agency to ask questions, start conversations, facilitate dialogue about ethics, examine the beliefs, assumptions, and narratives around us, challenge the systems that aren’t working, and build these ongoing ethical practices. 89
91. Fundamentally, design is the deliberate, skilled framing of goals and plans to achieve them. That is why technology and business need design. And that’s why we need ethical design. 90
92. Thank you! I recommend the book Future Ethics by Cennydd Bowles: https://www.future-ethics.com/ Request an invitation to the Ethical Technology group in Slack https://ethicsofdesign.org 91