So what can we do? This is a big topic, but a few steps towards addressing ethical issues are:

    • Analyze a project you are working on.
    • Implement processes at your company to find and address ethical risks.
    • Support good policy.
    • Increase diversity.

    Let’s walk through each of these steps, starting with analyzing a project you are working on.

    It’s easy to miss important issues when considering ethical implications of your work. One thing that helps enormously is simply asking the right questions. Rachel Thomas recommends considering the following questions throughout the development of a data project:

    • Should we even be doing this?
    • What bias is in the data?
    • Can the code and data be audited?
    • What is the accuracy of a simple rule-based alternative? (See the sketch just after this list.)
    • What processes are in place to handle appeals or mistakes?
    • How diverse is the team that built it?
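
    One of these questions, the accuracy of a simple rule-based alternative, can often be checked directly. Below is a minimal, purely illustrative sketch: the dataset, feature names, and the 0.45 threshold are all made up, and in practice you would use your own data and a rule proposed by domain experts. It compares a trained model against a one-line hand-written rule; if the rule is nearly as accurate, the simpler and more auditable option may be preferable.

```python
# Hypothetical sketch: before shipping a complex model, measure how much
# accuracy it actually buys over a simple, auditable rule. All data here
# is synthetic and the 0.45 threshold is made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy "loan default" data: income, existing debt, and a noisy label.
income = rng.normal(50_000, 15_000, size=1_000).clip(min=10_000)
debt = rng.normal(20_000, 8_000, size=1_000).clip(min=0)
noise = rng.normal(0, 0.08, size=1_000)
defaulted = ((debt / income + noise) > 0.45).astype(int)

X = np.column_stack([income, debt])
X_train, X_test, y_train, y_test = train_test_split(
    X, defaulted, test_size=0.25, random_state=0
)

# Option 1: a machine learning model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
model_acc = accuracy_score(y_test, model.predict(X_test))

# Option 2: a single hand-written rule that anyone can read and audit.
rule_preds = (X_test[:, 1] / X_test[:, 0] > 0.45).astype(int)
rule_acc = accuracy_score(y_test, rule_preds)

print(f"Model accuracy:      {model_acc:.3f}")
print(f"Rule-based accuracy: {rule_acc:.3f}")
```

    If the two numbers are close, the hand-written rule is far easier to explain, audit, and appeal against, which is exactly the kind of alternative this question asks you to look for.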

    These questions may help you identify outstanding issues, and possible alternatives that are easier to understand and control. In addition to asking the right questions, it’s also important to consider practices and processes to implement.

    One thing to consider at this stage is what data you are collecting and storing. Data often ends up being used for different purposes than it was originally collected for. For instance, IBM began selling to Nazi Germany well before the Holocaust, including helping with Germany’s 1933 census conducted by Adolf Hitler, which was effective at identifying far more Jewish people than had previously been recognized in Germany. Similarly, US census data was used to round up Japanese-Americans (who were US citizens) for internment during World War II. It is important to recognize how data and images collected can be weaponized later. As one Columbia professor has warned, “You must assume that any personal data that Facebook or Android keeps are data that governments around the world will try to get or that thieves will try to steal.”

    The Markkula Center has released An Ethical Toolkit for Engineering/Design Practice that includes some concrete practices to implement at your company, including regularly scheduled sweeps to proactively search for ethical risks (in a manner similar to cybersecurity penetration testing), expanding the ethical circle to include the perspectives of a variety of stakeholders, and considering the terrible people (how could bad actors abuse, steal, misinterpret, hack, destroy, or weaponize what you are building?).

    Even if you don’t have a diverse team, you can still try to proactively include the perspectives of a wider group, considering questions such as these (provided by the Markkula Center):

    • Whose interests, desires, skills, experiences, and values have we simply assumed, rather than actually consulted?
    • Who are all the stakeholders who will be directly affected by our product? How have their interests been protected? How do we know what their interests really are—have we asked?
    • Who/which groups and individuals will be indirectly affected in significant ways?
    • Who might use this product that we didn’t expect to use it, or for purposes we didn’t initially intend?

    Ethical lenses

    The Markkula Center offers another useful resource on ethical decision making, which considers how different foundational ethical lenses can help identify concrete issues, and lays out the following approaches and key questions:

    • The rights approach: Which option best respects the rights of all who have a stake?
    • The justice approach: Which option treats people equally or proportionately?
    • The utilitarian approach: Which option will produce the most good and do the least harm?
    • The common good approach: Which option best serves the community as a whole, not just some members?
    • The virtue approach: Which option leads me to act as the sort of person I want to be?

    Markkula also suggests going deeper by looking at a project through the lens of its consequences, asking questions such as:

    • Will the effects in aggregate likely create more good than harm, and what types of good and harm?
    • Are we thinking about all relevant types of harm/benefit (psychological, political, environmental, moral, cognitive, emotional, institutional, cultural)?
    • How might future generations be affected by this project?
    • Do the risks of harm from this project fall disproportionately on the least powerful in society? Will the benefits go disproportionately to the well-off?
    • Have we adequately considered “dual-use”?

    The alternative to this consequentialist lens is the deontological perspective, which focuses on basic concepts of right and wrong:

    • What rights of others and duties to others must we respect?
    • How might the dignity and autonomy of each stakeholder be impacted by this project?
    • What considerations of trust and of justice are relevant to this design/project?
    • Does this project involve any conflicting moral duties to others, or conflicting stakeholder rights? How can we prioritize these?

    One of the best ways to help come up with complete and thoughtful answers to questions like these is to ensure that the people asking the questions are diverse.

    Currently, less than 12% of AI researchers are women, according to a study from Element AI. The statistics are similarly dire when it comes to race and age. When everybody on a team has similar backgrounds, they are likely to have similar blindspots around ethical risks. The Harvard Business Review (HBR) has published a number of studies showing the many benefits of diverse teams.

    Diversity can lead to problems being identified earlier, and a wider range of solutions being considered. For instance, Tracy Chou was an early engineer at Quora. She wrote of her experiences, describing how she advocated internally for adding a feature that would allow trolls and other bad actors to be blocked. Chou recounts, “I was eager to work on the feature because I personally felt antagonized and abused on the site (gender isn’t an unlikely reason as to why)… But if I hadn’t had that personal perspective, it’s possible that the Quora team wouldn’t have prioritized building a block button so early in its existence.” Harassment often drives people from marginalized groups off online platforms, so this functionality has been important for maintaining the health of Quora’s community.

    A crucial aspect to understand is that women leave the tech industry at over twice the rate that men do, according to the Harvard Business Review (41% of women working in tech leave, compared to 17% of men). An analysis of over 200 books, white papers, and articles found that the reason they leave is that “they’re treated unfairly; underpaid, less likely to be fast-tracked than their male colleagues, and unable to advance.”

    Studies have confirmed a number of the factors that make it harder for women to advance in the workplace. Women receive more vague feedback and personality criticism in performance evaluations, whereas men receive actionable advice tied to business outcomes (which is more useful). Women frequently experience being excluded from more creative and innovative roles, and not receiving high-visibility “stretch” assignments that are helpful in getting promoted. One study found that men’s voices are perceived as more persuasive, fact-based, and logical than women’s voices, even when reading identical scripts.

    Receiving mentorship has been statistically shown to help men advance, but not women. The reason behind this is that when women receive mentorship, it’s advice on how they should change and gain more self-knowledge. When men receive mentorship, it’s public endorsement of their authority. Guess which is more useful in getting promoted?

    As long as qualified women keep dropping out of tech, teaching more girls to code will not solve the diversity issues plaguing the field. Diversity initiatives often end up focusing primarily on white women, even though women of color face many additional barriers. In interviews with 60 women of color who work in STEM research, 100% had experienced discrimination.

    This is a challenge for those trying to break into the world of deep learning, since most companies’ deep learning groups today were founded by academics. These groups tend to look for people “like them”—that is, people that can solve complex math problems and understand dense jargon. They don’t always know how to spot people who are actually good at solving real problems using deep learning.

    This leaves a big opportunity for companies that are ready to look beyond status and pedigree, and focus on results!

    The professional society for computer scientists, the ACM, runs a data ethics conference called the Conference on Fairness, Accountability, and Transparency. “Fairness, Accountability, and Transparency” used to go under the acronym FAT, but the conference now uses the less objectionable FAccT. Microsoft has a group focused on “Fairness, Accountability, Transparency, and Ethics” (FATE). In this section, we’ll use “FAccT” to refer to the concepts of fairness, accountability, and transparency.

    FAccT is another lens that you may find useful in considering ethical issues. One useful resource for this is the free online book Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, Moritz Hardt, and Arvind Narayanan, which “gives a perspective on machine learning that treats fairness as a central concern rather than an afterthought.” It also warns, however, that it “is intentionally narrow in scope… A narrow framing of machine learning ethics might be tempting to technologists and businesses as a way to focus on technical interventions while sidestepping deeper questions about power and accountability. We caution against this temptation.” Rather than provide an overview of the FAccT approach to ethics (which is better done in books such as that one), our focus here will be on the limitations of this kind of narrow framing.
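
    To make concrete what a narrow technical framing can look like, here is a small sketch of one common group fairness metric, the demographic parity difference: the gap in positive-decision rates between two groups. The predictions and group labels below are made up for illustration. A metric like this can flag a disparity, but it says nothing about whether the system should exist in the first place, which is exactly the kind of question the following example forces.

```python
# Hypothetical sketch: a single group-fairness metric (demographic parity
# difference) computed on made-up model decisions. A small number here does
# not mean the system is ethical; it only measures one narrow property.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])   # 1 = favourable decision
groups = np.array(list("aaaaabbbbb"))               # group label for each person

rate_a = preds[groups == "a"].mean()   # positive-decision rate for group a
rate_b = preds[groups == "b"].mean()   # positive-decision rate for group b

print(f"Positive rate, group a: {rate_a:.2f}")
print(f"Positive rate, group b: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
```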

    One great way to consider whether an ethical lens is complete is to try to come up with an example where the lens and our own ethical intuitions give diverging results. Os Keyes, Jevan Hutson, and Meredith Durbin explored this in a graphic way in their paper “A Mulching Proposal: Analysing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry”.

    In the paper’s abstract, the rather controversial proposal (“Turning the Elderly into High-Nutrient Slurry”) and the results (“drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system”) are at odds… to say the least!

    In philosophy, and especially philosophy of ethics, this is one of the most effective tools: first, come up with a process, definition, set of questions, etc., which is designed to resolve some problem. Then try to come up with an example where that apparent solution results in a proposal that no one would consider acceptable. This can then lead to a further refinement of the solution.