by Linda Kinstler

Salesforce's ethics team introduced tools and processes to help employees and clients "stretch their moral imagination."
Photo: Alex Zyuzikov/Getty Images
They were hired to save tech's soul. Will anyone let them?
Fifty-two floors below the top of Salesforce Tower, I meet
Paula Goldman in a glass-paneled conference room where the words
EQUALITY OFFICE are spelled out on a patchwork bunting banner, the
kind of decoration you might buy for a child's birthday party.
Goldman has a master's degree from Princeton and a Ph.D. from
Harvard, where she studied how controversial ideas become mainstream. She arrived at
Salesforce just over a year ago to become its first-ever
Chief Ethical and Humane Use Officer, taking on a decidedly ambiguous title that was created specifically for her
unprecedented, ambiguous, yet highly specific job:
see to it that
Salesforce makes the world better, not worse.
"I think we're at
a moment in the industry where we're at this inflection
point," Goldman tells me.
"I think the tech
industry was here before, with security in the '80s. All of
a sudden there were viruses and worms, and there needed to
be a whole new way of thinking about it and dealing with it.
And you saw a
security industry grow up after that. And now it's just
standard protocol. You wouldn't ship a major product without
red-teaming it or making sure the right security safeguards
are in it."
"I think we're at
a similar moment with ethics," she says.
"It requires not
only having a set of tools by which to do the work, but also
a set of norms: that it's important.
So how do you
scale those norms?"
I ask her how those norms
are decided in the first place.
"In some sense, it's
the billion-dollar question," she says.
"All of these issues
are extremely complicated, and there's very few of them where
the answer is just absolutely clear. Right? A lot of it does
come down to, which values are you holding up highest in your decision-making."
In the wake of the
Cambridge Analytica scandal,
employee walkouts, and other political and privacy incidents, tech
companies faced a wave of pressure to hire what researchers at the Data & Society Research Institute call "ethics owners": people responsible for operationalizing the "domain-jumping, and irresolvable debates about human values that underlie ethical inquiry" in practical and demonstrable ways.
Salesforce hired Goldman away from the Omidyar Network as the
culmination of a seven-month crisis-management process that came
after Salesforce employees protested the company's
involvement in the Trump administration's immigration work.
Other companies, responding to their own respective crises and concerns, have hired a
small cadre of similar professionals - philosophers, policy experts,
linguists and artists - all to make sure that when they promise
not to be evil, they actually have a coherent idea of what that means.
So then what?
While some tech
firms have taken concrete steps to insert ethical thinking into
their processes, Catherine Miller, interim CEO of the ethical
tech think tank Doteveryone, says there's also been
a lot of "flapping round" the subject.
Critics dismiss it as "ethics washing":
the practice of merely kowtowing in the direction of moral values in
order to stave off government regulation and media criticism.
The term belongs to
the growing lexicon around technology ethics, or "tethics," an
abbreviation that began as satire on the TV show "Silicon Valley,"
but has since crossed over into occasionally earnest usage.
"If you don't
apply this stuff in actual practices and in your incentive
structures, if you don't have review processes, well, then, it
becomes like moral vaporware," says Shannon Vallor, a
philosopher of technology at the Markkula Center for Applied
Ethics at Santa Clara University.
Moral vaporware is something that you've promised and you meant to deliver, but it never ships.
Google, infamously, created an
AI Council and then, in April of last year,
disbanded it after
employees protested the inclusion of an anti-LGBTQ advocate.
The company's approach to ethics includes the use of "Model
Cards" that aim to explain its AI.
The Model Cards don't amount to "anything that has any teeth," says Michael Brent, a data ethicist at Enigma and a philosophy professor at the University of Denver. "It's just like, 'Here's a really beautiful card'."
The company has
made more-substantial efforts:
Vallor just completed a tour of duty at Google, where she
taught ethics seminars to engineers and helped the company
implement governance structures for product development.
"When I talk about ethics in organizational settings, the way I
often present it is that it's the body of moral knowledge
and moral skill that helps people and organizations meet
their responsibilities to others," Vallor tells me.
More than 100
Google employees have
attended ethics trainings developed at the Markkula center.
The company also offers a fairness module as part of its Machine Learning Crash Course and updates its list of "responsible AI practices" quarterly.
"The majority of the people who make up these companies want to build
products that are good for people," Vallor says.
"They really don't want to break democracy, and they really don't want to
create threats to human welfare, and they really don't want to
decrease literacy and awareness of reality in society.
They want to
make things they're proud of.
So am I going
to do what I can to help them achieve that? Yes."
The Markkula center, where Vallor
works, is named after Mike Markkula Jr., the "unknown"
Apple co-founder who,
in 1986, gave the center a starting seed grant in the same
manner that he gave the young Steve Jobs an initial loan.
He never wanted his
name to be on the building - that was a surprise, a token of
gratitude, from the university.
He has since retreated to a quiet life, working from his sprawling gated
estate in Woodside.
These days, he
doesn't have much contact with the company he started,
"only when I
have something go wrong with my computer," he tells me.
But when he arrived
at the Santa Clara campus for an orientation with his daughter in
the mid-'80s, he was Apple's chairman, and he was worried about the
way things were going in the Valley.
"It was clear
to us both, Linda [his wife] and I, that there were quite a few
people who were in decision-making positions who just didn't
have ethics on their radar screen," he says.
"It's not that
they were unethical, they just didn't have any tools to work with."
At Apple, he spent
a year drafting the company's "Apple
Values" and composed its
famous marketing philosophy ("Empathy, Focus, Impute.").
He says that there
were many moments, starting out, when he had to make hard ethical decisions. "I was the guy running the company, so I could do whatever I wanted," he says.
"I'd have a
heck of a time running Apple today, running Google today," he says.
"I would do a
lot of things differently, and some of them would have to do
with philosophy, ethics, and some of them would have to do with
what our vision of the world looks like 20 years out."
Former Apple CEO Mike Markkula
spent a year crafting the company's values statement.
Today, the ethics center at Santa Clara University
is named for him.
Photo: Tom Munnecke/Getty Images
The Markkula Center for Applied Ethics
is one of the most prominent voices in tech's ethical awakening.
On its website, it
offers a compendium of materials on technology ethics, including a toolkit ("Tool 6: Think About the Terrible People"), a list of "best ethical practices" ("No. 2: Highlight the Human Lives and Interests behind the Technology"), and an app ("Ethics: There's an App for That!" reads a flier posted at the entrance).
Every one of these
tools is an attempt to operationalize the basic tenets of moral
philosophy in a way that engineers can quickly understand and apply.
But Don Heider,
the Markkula center's executive director, is quick to acknowledge
that it's an uphill fight.
"I'd say the
rank-and-file is more open to it than the C-suite," he says.
At Salesforce, practitioners like Yoav Schlesinger, the company's principal of Ethical AI Practice, worry about imposing an "ethics
tax" on their teams - an ethical requirement that might call for
"heavy lifting" and would slow down their process.
direction, the company has rolled out a set of tools and processes
to help Salesforce employees and its clients,
moral imagination, effectively," as Schlesinger puts it.
The company offers an educational module that trains developers in how to build
"trusted AI" and holds employee focus groups on ethical questions.
"The task is not teaching ethics, like teaching deontological versus
Kantian or utilitarian approaches to ethics - that's probably
not what our engineers need," he says. "What they need is training in ethical risk spotting: How do you identify a
risk, and what do you do about it when you see it from a process
perspective, not from a moral perspective."
"It's just more that we're focused on the practical, 'what do you do about it,' than we are about the theory," Goldman says.
The company has
also created explainability features, confidential hotlines, and
protected fields that warn Salesforce clients that data like ZIP codes are highly correlated with race.
They have refined
their acceptable use policy to prevent their e-commerce platform from
being used to sell a wide variety of firearms and to prevent their
AI from being used to make the final call in legal decision-making.
The Ethical and
Humane Use team holds office hours where employees can drop by to ask questions.
They have also
begun having their teams participate in an exercise called "consequence
scanning," developed by researchers at Doteveryone.
Teams are asked to
answer three questions:
"What are the
intended and unintended consequences of this product or feature?"
"What are the
positive consequences we want to focus on?"
"What are the
consequences we want to mitigate?"
The whole process
is designed to fit into Agile software development, to be as
minimally intrusive as possible.
Like most ethical
interventions currently in use, it's not really supposed to slow
things down, or change how business operates.
Beware the "ethics of running code," says Subbu Vincent, a former software engineer
and now the director of media ethics at the Markkula center.
Engineers, he says,
"always want to
layer their new effort on top of this system of software that's
handling billions of users. If they don't, it could end their careers."
And therein lies the problem: These interventions, while well-intentioned and potentially impactful, tend to suggest
that ethics is something that can be quantified, that living a more
ethical life is merely a matter of sitting through the right number
of trainings and exercises.
"The problem is that the solutions that are coming out are using the language
of, 'hey, we'll fit within the things you're already familiar
with'," says Jacob Metcalf, a researcher at Data & Society.
"They're not saying, 'hey, maybe don't be so voracious about user data, maybe you don't need to grow to scale using these exploitative techniques.' And they're not forcing a change in the diversity of who is in the room."
With Danah Boyd and Emmanuel Moss, Metcalf recently surveyed a
group of 17 "ethics owners" at different companies.
One engineer told
them that people in tech
"are not yet
moved by ethics."
An executive told
them that market pressures got in the way:
"If we play by
these rules that kind of don't even exist, then we're at a
disadvantage," the executive said.
The "ethics owners"
they spoke to were all experimenting with different approaches to
solving problems, but often tried to push for simple, practical
solutions adopted from other fields, like checklists and educational modules. "By framing ethics as a difficult but tractable technological problem
amenable to familiar approaches, ethics owners are able to
enroll the technical and managerial experts they feel they need
as full participants in the project of 'doing ethics'," the researchers wrote. But "building a solution in the same mold that was used to build the
problem is itself a form of failure."
If and when ethics
does "arrive" at a company, it often does so quietly, and ideally
"Success is bad
stuff not happening, and that's a very hard thing to measure,"
says Miller, the acting CEO of Doteveryone.
In a recent
survey of UK tech workers, Miller and her team found that 28%
had seen decisions made about a technology that they believed would
have a negative effect upon people or society.
Among them, one in
five went on to leave their companies as a result.
At Enigma, a small
business data and intelligence startup in New York City, all new
hires must gather for a series of talks with Michael Brent, the philosophy
professor working as the company's first data ethics officer.
At these gatherings, Brent opens his slides and says, "Now we're going to do an hour-long introduction to the European-based, massively influential moral theories that have been suggested in the past 2,400 years. We have an hour to do it."
The idea is that
starting at the beginning is the only way to figure out the way
forward, to come up with new answers.
"The theories that we're looking at don't obviously have any direct application - yet - to these new issues. So it's up to us.
We're the ones
who have to figure it out," he says.
The engineers he
works with - "25-year-olds, fresh out of grad school, they're young,
they're fucking brilliant" - inevitably ask him whether all this
talk about morals and ethics isn't just subjective, in the eye of the beholder.
They come to his
office and ask him to explain.
"By the end,"
he says, "they realize that it's not mere subjectivism, but
there are also no objective answers, and to be comfortable with
that gray area."
Brent met Enigma's
founders, Marc DaCosta and Hicham Oudghiri, in a
philosophy class at Columbia when he was studying for his doctorate.
They became fast
friends, and the founders later invited him to apply to join their
company. Soon after he came on board, a data scientist at Enigma
called him over to look at his screen.
It was a list of
names of individuals and their personal data.
"I was like,
whoa, wow... OK. So there we go. What are you going to do with
this data? Where did it come from? How are we keeping it safe?"
The engineer hadn't
realized that he would be able to access identifying information.
"I'm like, OK,
let's talk about how you can use it properly."
The fact that
Brent, and many others like him, are even in the room to ask those
questions is a meaningful shift.
Ethicists are now consulting with companies and co-authoring
reports on what it means to act ethically while building
unpredictable technology in a world full of unknowns.
One Ph.D. who works at a big tech company tells me that the
interventions he ends up making on his team often involve having his
colleagues simply do less of what they're doing and articulate ideas in a sharper, more precise manner.
"That's because, to actually make these products and do our jobs, all the
machine learning is built around data. You can't really avoid
that for now," he tells me.
"There are a
lot of stops in place … It's basically really hard to do our job."
Anyone who wants to say they are ethicists can just say it
But for every
professional entering the field, there are just as many - and
probably more - players whom Reid Blackman, a philosophy
professor turned ethics consultant, calls "enthusiastic amateurs."
"They're engineers who care, and who somehow confuse their caring with an expertise. So then they
bill themselves as, for instance, AI ethicists, and they are
most certainly not ethicists.
I see the
things that they write, and I hear the things that they say, and
they are the kinds of things that students in my introduction to
ethics class would say, and I would have to correct them on," he says. "They're reinventing the wheel, talking about principles or whatever.
It's the Wild West, and anyone who wants to say they are
ethicists can just say it. It's nonsense."
The result is that
to wade into this field is to encounter a veritable tower of Babel.
A recent study of 84 AI ethics guidelines from around the world found that "no single ethical principle appeared to be common to the entire corpus of documents, although there is an emerging convergence around the following principles: transparency, justice and fairness, non-maleficence,
responsibility and privacy."
This is also, in
part, a geopolitical problem: Every government wants its code of AI ethics to be the one that wins. (See, for example, the White House's recent explication of "AI with American values.") "All of these AI ethics principles are brought out to support a
particular worldview, and a particular idea of what 'good'
is," Miller says.
"It is early
days," says Goldman, so it's not necessarily surprising that
people would be using different vocabularies to talk about it.
"That is true of how fields get created. I'm sure it's true of how security got created, too."
I asked her and
Schlesinger what would happen if a Salesforce client decided
to ignore all of the ethical warnings they had worked into the
system and use data that might lead to biased results.
The thing is, ethics at this point is still something you can opt out of. Schlesinger explains that Salesforce's system is right now designed to give the customer "the opportunity to decide whether or not they want to use the code." "We believe that customers should be empowered with all the
information to make their decisions, but that their use cases
are going to be specific to them and their goals."
At Enigma, the company's co-founders and leadership team can choose
not to listen to Brent's suggestions.
"I'm going to
say, OK, here's what I think are the ethical risks of developing
this kind of product," he says.
"You guys are
the moneymakers, so you can decide what level of risk you're
comfortable with, as a business proposition."