On this episode of There Has to Be a Better Way?, co-hosts Zach Coseglia and Hui Chen talk to Dr. Rumman Chowdhury, a pioneer in the field of responsible AI. Currently a Responsible AI Fellow at Harvard, with prior leadership roles at Twitter and Accenture, Rumman has first-hand insight into the real harms of AI, including algorithmic bias.


Transcript:

Zach Coseglia: Welcome back to the Better Way? podcast brought to you by R&G Insights Lab. This is a curiosity podcast for those who find themselves asking, "There has to be a better way, right?" There just has to be. I'm Zach Coseglia, the co-founder of R&G Insights Lab, and I'm here, as always, with my friend, colleague and collaborator, Hui Chen. Hi, Hui.

Hui Chen: Hi, Zach. It's so great to be here. Hello, everyone.

Zach Coseglia: Hui, we have a really exciting guest with us today. I've been looking forward to this for many weeks, and I know you have as well. We have Dr. Rumman Chowdhury with us. Rumman, welcome to the Better Way? podcast.

Rumman Chowdhury: Thank you so much for having me.

Zach Coseglia: Absolutely. We usually start with a pretty hard-hitting question, which is: Who are you? So, Rumman, why don't you tell us a little bit about yourself and introduce yourself to our listeners.

Rumman Chowdhury: I am a data scientist and a social scientist. I did most of my work in political science, where I'm known as a "quant" (a quantitative analyst). I became a data scientist in the early days of data science, but for the past six years, I have been working in the field of applied responsible AI. I know AI has come full circle in being front and center in a lot of conversations, especially around ethical and responsible use—that's what I do. I've been more on the applied side of things than the research side, although I dabble in both. I was Accenture's first global lead for responsible AI. Back then, nobody really knew what the term meant. I built a practice around it. I then left to create my own startup, an algorithmic auditing startup called Parity. I was acquihired over to Twitter where, until last November, I was the engineering director of the Machine Learning Ethics, Transparency and Accountability team. Currently, I am a Responsible AI Fellow at Harvard's Berkman Klein Center for Internet & Society, amongst multiple other things that would probably take the rest of the podcast to explain.

Zach Coseglia: One of the things that really has made me excited about having you on the podcast—and this has been a point of excitement for many of our guests on the podcast—is that we often find in our world that there are people who talk about data science and there are people who talk about behavioral science, which are both areas of interest for us, and there are people who talk about AI, but a lot of the people who are talking about these topics aren't necessarily experts in these topics. Here we have you coming to talk to us today about artificial intelligence, data science, and responsible AI and ethics in the field of artificial intelligence. I'm not sure that we could find a better expert anywhere in the world on these topics than you.

Zach Coseglia: As AI continues to find its way into mainstream consciousness, we see lots of terms getting thrown around, sometimes incorrectly to this point, and sometimes not. We here at the Better Way? podcast very much believe that precision matters. We would love to just start with some of the basics, and to have you help us define some of the key terms in this world. I'd actually just like to start with the most obvious one, which is "artificial intelligence."

Rumman Chowdhury: I said at the beginning of the podcast that everything has come full circle, and one of the earliest battles I gave up fighting was what got to be defined as "artificial intelligence." Today, the term is so muddied, anything that is computationally derived is now artificial intelligence as long as it has a reasonably cool and sophisticated-looking user interface. As long as I can make a chatbot sound like it's chat-botty, even if it's a basic decision tree behind it—which literally looks like a tree, "If person says X, say Y"—that gets called artificial intelligence. In my world, if I'm speaking with my data scientist/technologist hat on, artificial intelligence is a very specific thing. These are usually self-trained or real-time adaptable, sophisticated systems, built on massive amounts of data, that use things like neural networks—"neural nets," as they're called. Neural nets are designed to mimic the way the human brain works, and that's where the term artificial intelligence comes from. The term was born out of a conference at Dartmouth in the 1950s. In my mind, as a technologist, what artificial intelligence does not refer to are lots of the things one can do in Excel, like basic statistical models, mathematical models or regression models, but all of that gets lumped into AI. Now I'll put my consultant/person-who-builds-stuff hat on: AI is like art—it's whatever the person wants it to be.
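To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of "chat-botty" system Rumman describes: a hand-written decision tree with canned replies, no training data and no learning, which is exactly why a technologist would hesitate to call it AI. The rules and wording are invented purely for illustration.

```python
# A hand-written decision tree ("if person says X, say Y"): no data, no learning.
# Every rule and reply below is invented purely for illustration.

RULES = {
    "hours": "We're open 9 a.m. to 5 p.m., Monday through Friday.",
    "refund": "Refunds are processed within 5-7 business days.",
    "agent": "Connecting you to a human agent now.",
}

def reply(user_message: str) -> str:
    """Walk the 'tree': check each keyword in order and return its canned answer."""
    text = user_message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("How do I get a refund?"))  # canned answer: nothing 'intelligent' happened
```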

Zach Coseglia: We often hear AI paired with ML. So, talk to us about machine learning: How you define machine learning, and how it fits into the broader artificial intelligence ecosystem.

Rumman Chowdhury: The happy medium in this battle over what is AI has become "AI/ML," and that actually is the overall encompassing term. Machine learning models are a lot more static—they are trained on a very specific body of data. These are the more mathematical, statistical models. So, the big difference here isn't even about what it looks like to the user—it's what happens under the hood. And that's why I say this is a debate best left to the people who build the stuff to argue over with each other; at the end of the day, for the people on the receiving end of it, it actually doesn't matter. The reason it doesn't matter is that when we get into responsible and ethical AI—where bias comes from, how we should be thinking about it—the distinction doesn't change very much, because so much of responsible or irresponsible use is about how a system is used in the real world rather than what's under the hood (what parameter you chose for some function).

Zach Coseglia: What about the term "predictive analytics," which is another term that's often used in some of the ponds that we swim in? Is there a difference between predictive analytics and machine learning, or are they two words for more or less the same thing?

Rumman Chowdhury: I will preface my answer by saying one of the things I am most known for is my candor, and, as a former consultant myself, I'll say predictive analytics is data science in a new dress. Once data science was out of vogue, then it became either AI/ML engineering if you were more on the programming side, or if you were more on the math side, it became predictive analytics. So, these are all just the same thing in slightly different outfits.

Zach Coseglia: The last term that I want to talk about—and this is what's so fascinating about this space—is a term that probably wasn't on anyone's tongue three or four months ago and now is a term that everyone just uses in normal language, and that's "generative AI."

Rumman Chowdhury: Generative AI is a class of models that can generate images, audio, video and text. I think people tend to focus a lot on language models, and I have theories as to why, but essentially, generative AI models can build media based on simple, plain-language prompts from the user. One of the biggest revolutions in this new wave of generative AI—I say "new wave" because generative AI is not a new technology; this is just a new face of it—was actually cracking that user functionality. The big thing about GPT-4—and it's called "GPT-4" because there was a three, a two and a one—was that now, finally, people who did not know how to code could communicate with this AI model and get it to produce some sort of an output. So, this new wave of generative AI is revolutionary in two ways. One is that the output it creates is much more convincing—it creates synthetic media, things that did not exist before. It's not just repeating something verbatim that exists on the internet. Also, the user functionality has become so simple that a child can go in and interact with it and get it to say something, build something or do something.

Zach Coseglia: The last term or set of terms that I'd love to get your definition on are "responsible AI" or "trustworthy AI," which I know is an area that you focus on and one where you are probably among a small group of thought leaders on the topic.

Rumman Chowdhury: The simplest definition is "creating artificial intelligence that improves humanity and impacts society positively writ large." That's a pretty vague and broad remit, so to be a little bit more specific, it's interesting how the field of responsible AI or trustworthy AI started as ethical AI. People got squeamish about ethics because it implies moral frameworks. I came into the field in this age of responsible AI, and even then, people felt kind of squishy about "responsible": "What do we mean by responsible? Responsible to whom? Does responsible mean liable?" Now, the word has become "trustworthy." They all have their different benefits. What I do like about the term "trustworthy" is that I really do think it gets at the benefit to humanity. I have to be able to trust something to use it, and we are only going to benefit from something if people are confident in the output or the outcome of whatever it is they're using. The term "trustworthy" also starts to get past a little bit of the technological gatekeeping. This idea—and it's been espoused by folks like Eric Schmidt—that "only technologists can understand the technology" is actually deeply untrue. So, what I take responsible AI to be is not just a group of experts building technology to benefit humanity but also ways in which people who are not technologists can meaningfully contribute to the development of the technology that impacts them.

Zach Coseglia: I think that sometimes in our worlds, when we're talking about governance, compliance, ethics, responsibility, trustworthiness or whatever term we use, it's viewed as, "Yes, there's the business side, and we'll try to make that trustworthy or responsible." It's sort of an add-on, as opposed to something that's deeply ingrained in the actual product or the actual business strategy. To this point, I want to actually read a quote from your testimony to Congress a few weeks ago. You said, "There is concern about the U.S. remaining globally competitive if it is not investing in AI development at all costs. This is simply untrue. Building the most robust AI industry isn't about powerful models, processors and microchips." You say, "The real competitive advantage is trustworthiness."

Rumman Chowdhury: Absolutely. It's interesting—that perspective of mine is born out of years of working with clients, and, in particular, having worked at Twitter. Over the last year, we've heard a lot about how easy it is to build a social media platform—lots of engineers saying they could pull together Twitter in a weekend—and actually, that is true, but it misses the point of what a company like Twitter was. A company like Twitter was about its trust and safety—it was the fact that users felt confident that they could go on there and use the platform, and not be harassed, not have somebody say malicious things to them and not see content that was harmful or degrading. And that actually applies to just about any technology. The value for regular people using any product—again, it doesn't have to be an AI product—is reliability and trustworthiness. People don't buy just the coolest, flashiest products—some percentage of people do—the vast majority of people buy, use and interact with the most reliable products. The history of tech actually is the history of reliable products, not necessarily the coolest, flashiest technology. So many smart companies are investing in diligence, compliance and responsible use, and they're concerned about what their users are worried about and how they can address those concerns just as much as they are about how cool and flashy their technology is.

Hui Chen: Rumman, I'd love to hear your thoughts on: What should ordinary people be worried about today, given where we are?

Rumman Chowdhury: If it's in particular about generative AI, my big concern is "information integrity," which is actually the broader term. People use "misinformation" and "disinformation," but I think those terms have become a little bit conflated with, let's say, news, facts or elections. In addition, we now have things like hallucination. For example, even early on, there was this really funny incident with ChatGPT: when you asked it about me, it made up this whole bizarre story that I was a social media influencer, which was, by the way, very gendered—it was very focused on how I look and how many shoes I have. I have a public persona, but nothing about my public persona is based on appearance, and anyone listening to the podcast who is a woman understands why I do that. So, it's very interesting that the assumption it made somehow—and I really don't even know how it made it—was that my public persona was based on my appearance, and that I was some sort of a social media influencer. They've corrected that, but now there is this very subtle thing it does where, if you ask it for my bio, it says that I worked for IBM. I've never worked for IBM. It's very convincing, it's very real-sounding, and it sounds like I would have or could have worked somewhere like IBM. So, that's what I worry about. When I say "information integrity," this is not just fact-checking or election data—it's these subtle incorrect things that may get slipped in and completely distort someone's view of the world.

Zach Coseglia: Many of us have a view of AI that's shaped by the movies and by popular culture. I guess I'd love to hear just your thoughts, because I imagine it's on the mind of a lot of our listeners. How fantastical are those depictions of how wrong this can go and how worried should we be as a result?

Rumman Chowdhury: Yes, there is this increasing narrative of runaway AI, and, in part, it's fed by movies, etc. In part, it's also fed by this larger movement of existential risk. I'll put it this way: We have documented facts and data to support addressing things like algorithmic bias, harms to underrepresented communities, and global impacts on things like mental health for individuals. We have zero empirical evidence to support the idea that AI will come alive and set off nuclear weapons. And actually, when I say we have zero empirical evidence, I mean, as a scientist, we have empirical evidence to the contrary. GPT-4's system card—which anybody can download off the internet, it's very interesting—walks through all of the safety checks that they did for their model. They actually have a section on hiring a group that tested for existential risk—in other words, "Could it accidentally leak nuclear secrets or set off a weapon or whatever?" And the answer was, "No, it couldn't." So, the current state of this technology is provably not in a state where it can do any of the fantastical things in movies. The narrative then becomes, "What about future technologies?" Sure, many things are possible in future technologies, but again, grounding it in reality, we have empirical evidence of already existing harms in the very basic models that are built today. Maybe it makes more sense to solve the problems of today than to worry about fantastical problems of tomorrow.

Hui Chen: Going from the fantastical to the practical, I'm curious as to your thoughts on, as you see how companies and individuals are using ChatGPT, how are they using it right, and how are they using it wrong?

Rumman Chowdhury: I think a fun and interesting way to use not just ChatGPT, but this whole genre of generative AI, really has more to do with things that are low stakes—so, writing really basic introductory blog posts, as a starting point. A lot of people I know who use these products use them to augment the work they're doing, so it actually doesn't replace anything they do. I have seen it be useful if I need to write a post for LinkedIn—400 words on some very vague concept that is not time-relevant. By the way, most of these models are not trained on anything past 2020 or 2021, so don't ask them for any current information—they can't tell you. It's a nice way to kick off the creative process—I'll put it that way. Most interestingly, the most relevant applications I have seen have actually been in image generation. One was a conversation I had with a photographer, and I asked her how she feels about image generation, given that she's a photographer—is she worried about her job? She actually said, "No, I'm super excited. I spend hours on image generation." If she's going to do a model shoot the next day, she has it generate her ideas as images. She takes them with her, and she shows her models, "This is what I want this to look like." And I thought, "What an interesting way to make her life and her job easier." But nowhere in that is the model losing their job or is she losing her job, because she still has to have the creative vision. I've heard similar things from graphic designers who say it actually makes it easier for them to converse with their clients. What they'll say is, "Client X, can you go and generate a few pictures that capture the feel of what you want from me? Then, I will come back and make you the more professional version of it." I'm already finding a lot of people in creative industries—again, copyright and intellectual property aside—trying to tap into it to make their lives easier.

I actually find that text generation is the most problematic and the least helpful. I have tried to use it to summarize these workshops that we were doing at Harvard. I found its summaries to be so bad, I just scrapped them and did them from scratch. That's actually because what we did was a workshop of experts in the field of responsible AI, whether it was law, policy, etc., and what the model couldn't do was create a sophisticated narrative. It couldn't synthesize expert conversation and make a sophisticated, unique story about AI and policy, even though all the seeds were there. But as an expert in the field, I can look at that text, and I can pull out meaning in a way that the models absolutely could not. So, it's a mixed bag.

Hui Chen: That's interesting, because one of the things I've read about generative AI is that it is not good at issue-spotting, which is basically one way I interpreted what you just said—that experts are certainly good at spotting the issues in a discussion or in an emerging situation. Why is that? Why is generative AI not good at something like issue-spotting?

Rumman Chowdhury: Yes, and that's a great way to put it. I'll also add that it's not particularly good at synthesis, and synthesis was what it struggled with when it came to the content I was trying to build. To understand why these models, especially language models, don't do that well at this, it helps to understand how they're built, how they're acting and what they're doing. Some people jokingly call it "spicy auto-correct," but that's kind of what it is—it's a sentence completion model, and if I give it a word, it has probabilities for what the next word can and should be. It's very similar to searching in a search engine: if you search for a very vague term, you can get a bunch of garbage, but the more specific your search is, the closer you'll get to the thing you're looking for—it's the same thing. If I've written a prompt with enough words to give this model context, what it actually does is create a next-word probabilistic model based on this complicated neural network, and then it'll start generating sentences. It doesn't do issue-spotting—nowhere in this is it synthesizing or "thinking." It's not doing any of the above.
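As a rough illustration of the "sentence completion" mechanic Rumman describes, here is a toy sketch in which simple word-pair counts over a made-up corpus stand in for the large neural network a real model uses. The corpus and code are assumptions for illustration only; they show the sampling mechanic, not how GPT-4 is actually built.

```python
# Toy next-word model: given the current word, sample the next one from a
# probability distribution learned by counting word pairs in a tiny corpus.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: no understanding, just repeated sampling.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```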

Zach Coseglia: Let's stay grounded in reality rather than going to some of the fantastical existential threats. One of the real risks that you articulated was "algorithmic bias." Why don't we start by just defining that term.

Rumman Chowdhury: Algorithmic bias, very specifically, is when, due to the design of the model that is built—I'll also define a "model" (a model is data plus an algorithm), and that's very important—the model skews towards a systemic set of answers that is misaligned with the intended or expected use of that model. I chose my words very carefully in that definition. Algorithmic bias is not my feelings about an algorithmic output. Algorithmic bias has to be backed somehow, either mathematically, by looking at at-scale issues in your data or the mathematical limitations of your model, and it's also helpful if it's possible to demonstrate it in the output of the model at scale. Usually, people go backwards: they demonstrate that there is some sort of undesirable output, and that it's at scale, which is fine. But even then, especially given how artificial intelligence models are built, that output may or may not be the result of the algorithm itself.
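Here is a minimal sketch of the kind of at-scale check implied by that definition: comparing a model's positive-outcome rates across groups rather than reacting to a single output. The records and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions, not part of any real audit.

```python
# Compare a model's positive-outcome rates across groups at scale.
# The records and the 0.8 threshold are illustrative assumptions only.

records = [
    # (group, model_said_yes)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def positive_rate(group: str) -> float:
    """Fraction of records in `group` for which the model gave a positive outcome."""
    outcomes = [yes for g, yes in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = positive_rate("group_a"), positive_rate("group_b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group_a: {rate_a:.2f}, group_b: {rate_b:.2f}, ratio: {ratio:.2f}")

# A low ratio flags a systematic skew worth a root-cause analysis; on its own,
# it does not say whether the data, the algorithm, or the world is the cause.
if ratio < 0.8:
    print("Disparity exceeds the rule-of-thumb threshold: investigate further.")
```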

Zach Coseglia: Maybe you can share a couple of examples, some of which I'm guessing that our listeners are familiar with, but some specific examples of how algorithmic bias has actually materialized in harm, because that's the point that we're picking up on, that there has been demonstrable harm already.

Rumman Chowdhury: Absolutely. There's a really great book by David Robinson, and it is actually a positive read about algorithmic bias and how a particular community overcame it. In this case, there was a series of articles in the Boston Globe about how, in the medical community, algorithms were being used to determine whether or not people should be placed on the kidney transplant list, and the algorithm was biased against Black people. Black patients were systematically being undervalued compared to identical white patients over and over again, and not added to the kidney transplant list. Fundamentally, what it boiled down to was human behavior. Doctors tend to dismiss Black patients, in particular Black men, when they express signs of discomfort, pain, etc. This is very well documented in the medical literature. We see the same in things like maternal health and how Black women are treated. In this very specific case, it translated into how Black patients were or were not previously recommended for kidney transplants, and that data being blindly fed into a machine learning model, an algorithm—on the presumption that data is neutral, or algorithms are neutral, and this is going to be less biased—actually produced and amplified a more biased output. The solve for things like that is not easy. It's kind of a cliché, but people say, "AI is a mirror." I think sometimes people mean that to say, "AI is just like us." I actually take it to mean, "AI is a mirror with really good lighting, and you can see all the pimples on your face." That is my interpretation of "AI is a mirror"—it shows you the pimple; here, the pimple was the fact that doctors discriminate against Black patients, and no matter how many layers of algorithmic curation you hide it behind, it's still going to manifest itself. So, fundamentally, it was based on flawed data.

I can give you an example of something that people often think is algorithmic bias, but the jury's out, and this is based on work that we did at Twitter. We had a paper, published in PNAS, on algorithmic amplification of political content, and a lot was written about it. Now, of course, whether or not there is shadow banning, stifling or algorithmic dampening of certain perspectives on social media is forever a topic of conversation. My team actually investigated it, and we found that in seven out of eight countries—surprisingly to most, I think—the algorithmically curated timeline amplified center-right content slightly more than it did any other content. But the story can't really end there, because, like I said, we're starting with the output. We're seeing this as the output compared to the reverse-chronological timeline, for which there's no algorithmic amplification (or, if I'm being overly technical, the amplification is based on time zone). What that does tell us is that there is an undesirable output. In the blog post that we have, what we talk about is that, until we do a root cause analysis, it's unclear whether this is a function of algorithmic bias—in other words, our machine learning models somehow picked up what center-right content was, even though they were not told to (it was not an explicit variable), and amplified it artificially, in which case it is a problem for Twitter, for my team, to fix—or whether it was simply reflecting what people's sentiment was at that time. Again, these models don't act of their own accord—they're based on the data we feed them. Now, the timeframe for our data was April to August of 2020, which was actually a time in which a lot of people were talking about center-right content. We had multiple elections coming up. We had Trump running for re-election. If it is the case that these models were just picking up what people were saying, now we start to get into the existential or the philosophical: Who's to say what is and isn't fair in this case? I definitely think that's way above my pay grade. I certainly do not want the responsibility of defining what democratic discourse is. But the tweet-size takeaway people repeat is, "See, Twitter's algorithm was biased toward conservative viewpoints." The answer is, actually, it's not that simple—just because we see an output doesn't mean I, or anybody else, understands where it came from.

Zach Coseglia: So, what do we do about this, and how do we know whether it has algorithmic bias? You advocate for funding of independent groups to conduct red teaming and adversarial auditing, and you also advocate for legal protections so that these individuals, when operating in the public good, are not silenced with litigation—this was something that you talked about when you were testifying in front of Congress a couple of weeks ago. But maybe, start by defining a "bias bounty" for us and what "red teaming" means.

Rumman Chowdhury: Yes. We're getting at all of the stuff that I am deeply passionate about now. My starting point is that governance requires an ecosystem, meaning all kinds of actors. Traditionally, governance has skewed towards what is possible in-house at companies, and while that is needed, it is insufficient. When I was at Twitter, we actually held the first algorithmic bias bounty, and that was at a conference called Def Con, which is the biggest hacker conference in the world. We opened up one of our Twitter models (the code for the model) to the public, and we offered prizes if people found flaws. There was a grading rubric, everything was made very public, and we learned some amazing things. People are way smarter than the average team at any company could be—no matter how many brilliant minds I could curate and pay extremely high salaries to at Twitter, we're not going to be able to encompass every form of algorithmic bias or harm. So, not only were we impressed by the types of answers we got and the novel approaches, but also by the fact that some of our winners weren't even programmers.

A bias bounty is a program in which there's a challenge people can submit to, with a particular goal around identifying or reducing algorithmic bias. After we were all laid off from Twitter, I started a small group to do bias bounties outside of a corporate entity. We did our first one last fall—it was a very technical bias bounty. We asked people to design an image classifier for race and gender—the kind used, for example, in medical classification and in other limited scenarios—that actually optimizes towards lower bias rather than fastest performance, so that was really interesting as well. A lot of people got back to us and said they'd never actually had to think about these issues before, and, being given a structured competition, they were able to improve how they did their work and think about their own practice differently.
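As a sketch of what "optimizing towards lower bias rather than fastest performance" could look like in a grading rubric, here is a hypothetical scoring function that rewards overall accuracy but penalizes the accuracy gap between groups. The weights, group names and numbers are invented for illustration; this is not the actual rubric used in the bounty.

```python
# A hypothetical bounty rubric: reward mean accuracy, penalize the gap between
# the best- and worst-served groups, so a fairer model can beat a flashier one.
# The bias_weight, group names and numbers are illustrative assumptions.

def bounty_score(per_group_accuracy: dict, bias_weight: float = 2.0) -> float:
    """Higher is better: mean accuracy minus a penalty on the largest group gap."""
    accuracies = list(per_group_accuracy.values())
    mean_acc = sum(accuracies) / len(accuracies)
    gap = max(accuracies) - min(accuracies)
    return mean_acc - bias_weight * gap

# Submission A: slightly higher average accuracy, but much less even across groups.
print(bounty_score({"lighter_skin": 0.95, "darker_skin": 0.75}))  # 0.85 - 2*0.20 = 0.45
print(bounty_score({"lighter_skin": 0.88, "darker_skin": 0.86}))  # 0.87 - 2*0.02 = 0.83
```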

The thing we're doing now, which has gotten a lot of press and is eating up a lot of my time, is the largest-ever generative AI red-teaming exercise. A red-teaming exercise is when experts or individuals from outside a company are given access to that company's models in order to identify harms. Traditionally, red teaming happens behind closed doors: an unspecified number of experts, curated by the company but not employed by it, are invited to a special multi-day or multi-week iterative process where they provide feedback. None of this is new—all of it exists in information security and cybersecurity. But it's interesting to start bringing it into responsible AI, because the problems are really difficult in a different way, so we're borrowing some of the structure that's been built in infosec.

I'm super excited about this event we're doing at Def Con this year. We're doing, as I mentioned, the largest-ever generative AI, large-language-model red team. We are giving basically anybody at Def Con access, for 50 minutes, to compete in what's called a "capture the flag"—a competition to score the most points by identifying different kinds of harms in every major large-language model. Every single large-language-model company is giving access to their models for people to find harms. And it's more than a competition—what we're hoping to do is educate people and create awareness that these practices can exist, but also start to create some infrastructure and institutions around harms reporting, responsible disclosure, responsible release of data, liability, etc. All of this is completely untrodden territory in responsible AI. This competition is fun—it's supported by the White House, which has been really exciting—but what I'm looking forward to most is what we do after the event and the kind of precedent we set for the industry.

Zach Coseglia: On this point, I want to actually go back to another quote from your recent congressional testimony—there were so many wonderful nuggets in it. This one, which I know Hui is going to love, is about governance and innovation. You said, "It's important to dispel the myth that governance stifles innovation. This is not true." These are your words: "In my years of experience delivering industry solutions and responsible AI, good governance practices have contributed to more innovative products." You then added, "I use the phrase, 'Brakes help you drive faster,' to explain this phenomenon. The ability to stop a car in dangerous situations enables us to feel comfortable driving at fast speeds. Governance is innovation." So, Rumman, tell us about why, in your mind, "governance," "responsible," "ethical," "transparent," whichever word we use, doesn't mean, "stifling innovation," and, in fact, how the work that you're doing around algorithmic bias, bias bounties, red teaming is proof of that.

Rumman Chowdhury: Absolutely. Again, this is from my time at Accenture. The first questions, of course, that any good company asks are: "Is there any potential liability for me? What are the laws? How are they applied? I just want to make sure that we're not going to get sued later on." These are all actually very smart questions to be asking. Then, the next wave of questions is: "How can I protect myself against reputational risk? I'm hearing all sorts of stuff about algorithms, and how they're not going to perform the same for different kinds of people. How do I protect against that happening, especially with my B2C customers?" Anybody who was customer-facing was deeply interested in responsible AI as a way of stress testing whether or not they should invest in a particular innovation.

I'll give you a specific example. I had a cosmetics client who wanted to introduce AI-enabled color matching for foundation. I shared some of Joy Buolamwini's and Timnit Gebru's work on Gender Shades and how algorithmic detection models often literally did not see darker-skinned Black people, and their decision, at that time, was to say, "I don't think it's worth us investing in this technology." Again, they're B2C. If you know anything about the makeup industry, it's very volatile, it can be very fickle, it's very trend-driven, and reputational risk is one of their biggest risks. They could not risk potentially alienating darker-skinned customers and having them move to a competitor and buy foundation from somebody else whose product didn't literally discriminate against them algorithmically. It's interesting, because makeup is something that's feminized—it gets dismissed. But it is a multi-billion-dollar industry. These companies may not have massive impact in one sense, but in a financial sense, these are massive clients and massive markets, and losing customers worth hundreds of millions of dollars, if not more, because you weren't careful about something, frankly, as basic as "Does it recognize your customers?" is just such a fatal flaw. These are the conversations we should be having.

Hui Chen: I so appreciate that you've framed governance in terms of concerns for legal liability as well as reputational concerns, because, particularly with what we're talking about here, legally you're often in uncharted territory. You're dealing with legal issues that nobody has seen before, so you really need to ask that question. But more importantly, you need to be asking the reputational question, which goes back to the question Zach asked you at the beginning—"Who are you?"—who are we as a company?

Rumman Chowdhury: Responsible AI is not just about liability—it's about ensuring what you are building is enabling human flourishing. I have an op-ed in The Hill about exactly this—about if we're going to think about governance methodologies, specifically about the concept of global governance bodies, we should actually align the mission of these bodies to the concept of flourishing, rather than stopping bad things from happening. This is really important to me—it's important to me for multiple reasons. It goes back to my testimony. Governance is not just punishment—governance is also rewards, and hand-in-hand with opening up things like better standards, cultural norms, access to third-party individuals, is also the positive reputational benefit companies get from engaging with these groups and bodies.

Today, we're in this world of actually a very negative feedback loop where somebody has a gotcha moment, they post it on social media, everything goes viral, and the company is now scared, or the company is now publicly shamed. What that actually leads to is a failure state because then companies will just shut their doors, they don't engage with anybody, and they're like, "You know what? We're just going to go build stuff quietly. And as long as no one's paying attention to us, we're going to build whatever we want to build," which is the actual wrong way to do any of this. So, my goal is to open up these different avenues by which people can safely engage with companies, but also companies can safely engage with different experts. I actually think that so much of this is driven by a lack of knowledge, a lack of information, a lack of access, and enabling that conversation is what will make better tech.

Zach Coseglia: Rumman, before you go, we have a few questions that we ask everyone at the end of the podcast. These are really intended to be fun, just to get to know you a little bit better. It's inspired for me by James Lipton and Inside the Actors Studio; for others, by Proust, Vanity Fair, Bernard Pivot. The first question, Rumman, is a choice—you could choose one of these two questions. The first is: If you could wake up tomorrow having gained any quality or ability, what would it be? Or you can answer: Is there a quality about yourself that you're currently working to improve, and if so, what is it?

Rumman Chowdhury: The ability to fly—I just feel like it'd be very calming.

Zach Coseglia: Agreed. I like that.

Hui Chen: That sounds wonderful. The next one is also a choice of two. You can answer either: Who is your favorite mentor? Or: Whom do you wish you could be mentored by?

Rumman Chowdhury: Azeem Azhar, who writes the newsletter Exponential View, is somebody I love. He's like a big brother to me. He was one of the people I sought advice from after all of us got laid off from Twitter. Last year was a very traumatizing and difficult year, especially as a team leader and a manager, trying to hold your team together and help them move on—it was really tough. Azeem was very sweet and gave me some of the best advice, so I very much appreciate the advice he's given me.

Zach Coseglia: That's great. From that to this, what's the best job, paid or unpaid, that you've ever had?

Rumman Chowdhury: It's hard to say it's the best—I definitely reminisce very fondly about my very first job. I worked at Barnes & Noble, and it was literally my first job out of college.

Hui Chen: What is your favorite thing to do?

Rumman Chowdhury: Probably, read. Yes, that's an easy answer: Read.

Zach Coseglia: It's a good one, a common one. I like that. What's your favorite place? And you can interpret "place" however you'd like.

Rumman Chowdhury: Home. One can interpret "home" to mean whatever you want it to mean. Home is where my family is. Home is where my pets are. Home is where my stuff is. Home has meant different things. Home is more of a psychology thing than a physical thing, as somebody who has spent significant amounts of time on the road.

Hui Chen: What makes you proud?

Rumman Chowdhury: Seeing the people I've helped do well. I am so proud to see my former team thriving, being really successful, and even moving onto other jobs and roles. I think one of the proudest moments I had was last week when a former member of my team, who has a new job working at a company that doesn't do AI—she's an engineer, she's working in engineering—said that she's convinced them to start doing fairness metrics and fairness principles in what they're doing. I was so immensely proud of that. I was so immensely proud of her.

Zach Coseglia: That's nice. We go from the deep to the opposite of deep... the next question is: What email sign-off do you use most frequently?

Rumman Chowdhury: "Best." It's actually really easy to type—it's only four letters. One of my friends calls my inbox, "A super fun site," which is the best description. So, if any way I can save a millisecond, "Best, Rumman." Amazing.

Hui Chen: The next question is: What trend in your field is most overrated?

Rumman Chowdhury: Easy: arguing about whether or not AI models are alive. It just misses the point.

Zach Coseglia: All right. The last question, Rumman: What word would you use to describe your day so far?

Rumman Chowdhury: "Structured." "Structured" is something I'm proud of when I can enable some structure in my day.

Zach Coseglia: Absolutely. I love that. Rumman, thank you so much for humoring us with this. And thank you so much for a wonderful conversation about the work that you're doing, your point of view, and helping our listeners define some of these really important terms so that we can be smarter about these issues that are going to be part of our future.

Hui Chen: I have to tell you how often Zach speaks of you and how highly he speaks of you, and now, I have an appreciation as to why.

Rumman Chowdhury: Thank you.

Zach Coseglia: Thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that's happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple, Google, and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don't hesitate to reach out—we'd love to hear from you. Thanks again for listening.
