On this episode of There Has to Be a Better Way?, co-hosts Zach Coseglia and Hui Chen talk to Antoine Ferrère, global head of behavioral and data science in the ethics, risk and compliance department at Novartis. With master's degrees in both management and behavioral science, Antoine discusses how his multidisciplinary team at Novartis applies behavioral and data science at scale to drive ethical behaviors, reduce risks and ensure compliance across all areas of the company. He also discusses the multi-year studies his team has implemented to better understand the role of psychological safety in both speaking up and "listening up."

Transcript:

Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast where we ask, "There has to be a better way, right?" There just has to be. I'm Zach Coseglia, the co-founder of R&G Insights Lab, and I'm joined, as always, by the one and only Hui Chen. Hi, Hui.

Hui Chen: Hi, Zach. Hi, everyone.

Zach Coseglia: I'm so excited for today's discussion. We are joined today by Antoine Ferrère, who is the global head of behavioral and data science in the ethics, risk and compliance (ERC) department at Novartis. Antoine, welcome—thanks for joining us.

Antoine Ferrère: Thank you so much. Good to be here with you guys.

Zach Coseglia: We are happy to have you. I think that we want to start by just getting to know you a little bit better. We ask this of pretty much all of our guests and we'll ask it of you: Who is Antoine?

Antoine Ferrère: I'm a behavioral scientist and I've been working at Novartis for four years now in that department you mentioned. For my background, originally, I did a master's in management, so that was business school. I did 10 years of consulting—pretending I was doing strategy, but doing mostly technology implementation, which is quite common in consulting, as well as back-office, offshoring, near-shoring, and optimization. After 10 years of that, I took a turn into behavioral science, did a master's at the London School of Economics, and then did a bit of a pivot with my career and ended up in this marvelous position that I have at Novartis, able to do all the great things that I'm sure we'll talk about. But that's just on the professional side. Importantly, I'm also a husband—like I say, father of three, husband of one. I'm French, and I work in Switzerland, but I live across the border in France. So, that's where I am.

Zach Coseglia: Terrific. I thought it was really interesting when we look at your title and we look at the group that you're a part of—we've got three words: "ethics," "risk," and "compliance." Talk to us about the intentionality behind that name for the group and how you think of the difference between "ethics," "risk," and "compliance," which are words that sometimes, in our world, get bundled together and treated as one and the same.

Antoine Ferrère: I'll give you my interpretation of what I think that is and what I think is important. Basically, the function came about a couple of years ago as an extension of what I believe was the integrity and compliance function within the legal team. The function and the three words in its name do represent a somewhat larger or extended scope compared to a traditional compliance function. So, let's start with "compliance"—this is traditional compliance and the managing of key risks, public perception, and regulatory risks. The "risk" part represents the fact that we're doing more than just managing typical compliance risk for the firm or organization. You can think about transparency or about transfers of value and other things, and the risk part represents the fact that the function is also responsible for managing the entire risk management framework for Novartis. The function is not accountable for all those risks—we're accountable for managing that framework—but how do we define, measure, manage, and mitigate those risks? So, that's the risk part. The "ethics" really represents the direction the function has taken over the last few years (and maybe why I'm here today): a realization that we have to go beyond compliance, because within the word itself, there's this notion of complying with something—complying with some external requirement that is imposed on us. I think the ethics probably captures this—and I don't know if that can fully apply to a large organization such as Novartis—but I'd say it is an intrinsic motivation from the company to do what's right irrespective of what we have to comply with, if that makes sense.

Hui Chen: That makes so much sense, Antoine. I'm smiling as you're saying that because we so often hear "ethics" and "compliance" used interchangeably, and I've always tried to emphasize that extrinsic versus intrinsic orientation of the two. Compliance is all about what you're required to do—it's about somebody else's expectations of you, whereas ethics is about what you believe, what your values are, and what you think is right. I'm curious: To what extent do you talk about those differences with the employees in the company?

Antoine Ferrère: Whether we do that explicitly or implicitly, given some of the choices that we've made in the function, we certainly reflect and illustrate that distinction. For example, the project I joined Novartis on was the launch of the new code of ethics. Of course, it's never called a "code to comply," but it could have been called a "code of conduct," and so, that in itself is a signal that what we're talking about here is more than just compliance with regulation. That document and that positioning are important because they clarify what is expected of people within Novartis, and they also send a strong signal. Within that document, we also take the point of view of the different commitments we made—and not commitments to anybody other than ourselves—so I think that also reflects that self-motivated desire to be ethical. Yes, we have to be compliant, of course, and as a company working in the pharmaceutical industry, our margin of error is incredibly thin and we have to recognize that—but, of course, that is not the reason to want to be ethical. Sometimes, people want to justify why you need to be ethical: "It's good for business. It's sustainable." I always try to refrain from that and say, "No, there's no reason beyond the fact that it's great, that it's the right thing to do. Yes, there might be a business case for ethics, but one doesn't need one." It can sometimes be self-defeating to turn everything into a question of sustainable performance or things like this.

Zach Coseglia: I want to talk a little bit more about multidisciplinary teams—that's been one of the Better Ways that we've talked about on this podcast before, and it's very much the promise of this lab that Hui and I are a part of here at R&G Insights Lab. By multidisciplinary teams, for us, it's about having data scientists and behavioral scientists, and it's about that for you, too. So, tell us about your team and what may seem to some—certainly not to us—like a surprise to have a behavioral scientist and data scientists within an ethics and compliance function.

Antoine Ferrère: I think it is, indeed, a surprise, certainly outside of the organization, to other people when they ask, "What do you do?" "I work in ethics, risk, and compliance." "Great. What do you do there? Are you a lawyer?" "I do social science and behavioral science." And then, you can see something in their eyes—they're like, "Oh, great." Or they say, "What?" It is quite surprising, to some extent, to people within the organization as well. I guess you've given both the question and the answer here, which is "multidisciplinary"—that's how, when I started and even to this day, we try to frame the value we bring to ethics, risk, and compliance.

Coming back to the code of ethics, a good way of explaining why this makes sense is to recognize that we are—if I can use that phrasing—in the business of changing behaviors. That's what business is about. That's what corporations are about. That's what functions are about. We want to make sure that people behave in certain ways. In whatever we do, whatever systems we use, whatever policies we have, we want to make sure that, in all cases, people do the right thing, whatever and however we define that. It's about people doing things, and it's about influencing behaviors through different mechanisms. And so, once you recognize that and you say, "If we're in the business of driving certain behaviors, then shouldn't we look to the science of that?"—it sounds like a good idea, right? Then, you say, "Yes, sure. What kind of stuff drives behaviors?" And there are many things. There is the code of ethics. There is the clarity of expectation. There is clarity about what people ought to do and the signaling that comes with it, what we stand for and our values, and leading by example, and that's really important—if it's not there, it's not going to work. But this is also about other things: What's going on immediately around us at the moment we act? What's our emotional state? What are the cues we take from what other people do around us that might also influence our behaviors? Social science and behavioral science try to look at it all together and say, "What do we do about the drivers and derailers of certain behaviors, and can we go about it systematically?" And those two things work together.

Zach Coseglia: Tell us a little bit more about your team. What skill sets are there? I think that folks should hear that because there's definitely interest and momentum, I think, around data science and behavioral science, but you need data scientists and behavioral scientists to actually do it. So, tell us about the skill sets that you've actually built internally.

Antoine Ferrère: There's plenty of interest, and if I were somebody with a sense of humor, I would say that, indeed, I think there are probably a lot of opportunities. There have been many times I've been approached by different companies wanting to know what we're doing, so there is some interest there. We started with just me as a behavioral scientist, really just a contractor for a couple of months: "Help us out with the code of ethics. We heard about System 1 and System 2. We heard about all the biases. Sounds really fascinating. We've got an intuition that this might work, but we don't really have an idea of how we could transfer this bunch of really interesting knowledge into the whole machinery of stuff that we've been doing." So, that was my first goal, to try to translate that into something that an organization and the compliance department can do. Then, about two years ago, we decided to combine what was the reporting and analytics team with the behavioral science team, and that's why we group together behavioral and data science, because, essentially, it is about the same thing.

On my team, we've got behavioral scientists. To clarify a bit more, the behavioral scientists that we have are people who have done a master's not just in psychology, but in behavioral science. It is the same and not the same—it depends on what you've been doing in psychology. Some of them come with a background of having worked in companies or areas that are adjacent to the problem we're trying to solve, such as audit, at the Big Four or other consultancies, or in the pharma space. So, I think a bit of understanding of the risk lens is needed, whether that's compliance risk, other kinds of organizational risk, or a regulatory perspective. And then, we also have somebody who's more of a researcher or lead researcher. She has a Ph.D. in behavioral economics/neuroscience—she's great. She also has some experience with being in a think tank in large organizations. So, that's on the behavioral science side.

To finish, hopefully, rather quickly on the data science side, there we've got a different set of skills. We've got a couple of real data scientists/data engineers: "Let's grab the data. Let's write some Python code. Let's take algorithms off the shelf and adjust them." There's some machine learning and all that stuff. To have the most impact and bang for the buck, we work in tandem with IT, but we also have a bit of self-service capacity to develop our own solutions and data products, or to make our own adjustments to a certain design problem in certain systems. So, we have a full-time designer—she's a data designer. It's a very hard skill to find because it's actual design, graphic design, UI/web design, but also a good sense of data and a good sense of psychology—so, a really rare niche skill there. She's amazing, and she is helping us with design problems, either data or non-data problems. We also have a few contractors who are doing more development, like web development, so we can build our data products.

Hui Chen: Antoine, this was so interesting to hear about your team, which is the kind of team that most organizations would probably say they don't have the luxury of building. Having this team, how does the team work with the rest of, first, the ethics and compliance organization, and then, how does it work with the rest of the company?

Antoine Ferrère: Compliance, in itself, is a bit of a meta team because we don't own the business processes—we're trying to influence them for the better. We, ourselves, don't own the compliance process—we're trying to make sure we influence it so that it is better designed on one side (that's the work with ERC), but also making sure that we generate and build the data and are able to report good insights back to them. And so, our work depends on the willingness and the ability of the rest of the people to work with us—it's really important for us to respect that, the fact that we have to engage with them. The beauty and the challenge of behavioral science in general—even within ethics, risk, and compliance—is that you have to be choiceful in terms of what you do, because there are so many things you can start looking at. What we do and what we choose to do is, as it is in reality, a bit of a mixed bag: What was the starting point? That was the code of ethics and the work on ethical culture. What are the opportunities? Where can we have a quick impact? How do we scale up? So, that's how we do it, but we work across everything from compliance management and anti-bribery or inappropriate-offer policies to conflict of interest, third-party risk management, and the speak-up office policy (the grievance mechanism that we have), helping them redesign their new processes and systems, making sure they generate the right data, and encouraging people to share what they think. You can also think about more local country or regional teams that we work with, trying to help them with their cultural planning around ethics. So, it's really a broad set of different teams that we work with.

Then, for the rest of the organization, we recognized very early that, first of all, we don't own ethics—it doesn't work like that—everybody does. But if there's one function in a corporation that gets to define a little bit what it feels like to be part of a company, it's the human resources function. They don't own culture either, but they really influence the way work is perceived as well. We recognized that very early and said, "We're going to work with those guys on ethics, because that's how we're going to work. Let's look at the performance management system, including recognition and rewards. Let's look at what kind of behavior it drives. How do we frame things in a good way? Let's look at leadership development and hiring. How do we hire for integrity? What does it mean? Is it about asking a couple of questions about ethics or is it about something else?" So, this is where we have the most interaction, I think, because this is where we can have the most impact, in general.

Zach Coseglia: Antoine, let's dive a little bit deeper into a topic where I know you've done a lot of work, and that is psychological safety. Last year, you coauthored an article about psychological safety with others from Novartis, but also with Amy Edmondson from Harvard Business School. Tell us about your research in this area.

Antoine Ferrère: I'll start the story with the ethics survey where, basically, we sent it to 150,000 people across the world at Novartis, internal employees and long-term contractors, in 50 languages. We wanted to measure what the real level of psychological safety was across teams in Novartis because we did not have any robust measure existing, so we reached out to Amy Edmondson. We actually tweaked the six-point scale that she came up with for measuring psychological safety a little bit. We changed one of the items to make it more relevant, because she also felt, "Maybe it's due for a bit of an update. This one doesn't really make sense anyway." So, we had the chance to do that. But then, we had massive amounts of data about psychological safety across 50 countries, different demographics, and functions, in a way that was truly interesting. For the first year of the survey—and the same for subsequent years—we had about 30,000 completed answers, so that's a lot of data points about psychological safety.

We did some stats and we realized that psychological safety is beautifully associated—and I say "beautifully" because everything was matching almost in a perfect story—in the way you would expect: the more psychological safety there is in a team, the more likely members of that team are to use formal channels to report issues, and the less psychological safety there is on a team, the more likely they are to deal with issues in a way that is not useful to Novartis, which is to talk to family and friends or keep them to themselves. It was really just like the stars were aligning. Then, we ran some regression analysis and we saw that psychological safety, within the context of the survey, was "predicting" the kind of reporting behaviors. So, that was the insight that we found, and that's largely, in a way, what we talked about with Amy Edmondson in the MIT article.
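For illustration only, here is a minimal sketch of the kind of regression analysis described above, written in Python against entirely synthetic data. The column names ("psych_safety," "used_formal_channel"), the effect size, and the sample size are assumptions made for the example, not the actual Novartis survey fields or results.

```python
# Minimal, illustrative sketch: does a team psychological safety score "predict"
# whether a respondent used a formal channel to report an issue?
# All data below is synthetic; column names and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 30_000  # roughly the number of completed responses mentioned in the episode

psych_safety = rng.integers(1, 8, size=n)                  # 1-7 survey score
p_formal = 1 / (1 + np.exp(-(-2.0 + 0.4 * psych_safety)))  # assumed positive link
used_formal = rng.binomial(1, p_formal)                    # 1 = used a formal channel

df = pd.DataFrame({"psych_safety": psych_safety,
                   "used_formal_channel": used_formal})

# Logistic regression of reporting behavior on psychological safety
model = smf.logit("used_formal_channel ~ psych_safety", data=df).fit(disp=False)
print(model.summary())
print("Odds ratio per scale point:", np.exp(model.params["psych_safety"]))
```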

Typically, if you run a compliance department—and I'm sure many compliance leaders will be listening to this podcast—what do you have? You have your speak-up mechanism (your grievance mechanism), and you have a hotline where people can text or chat. Then, you think about, "How do we design this? How do we write the policy?" But what we found in reality—and it may not be a bad thing, I think, by design—is that only a very slim minority of people actually report through those channels what they feel might not be ethical, maybe because that behavior doesn't reach a threshold where they feel they need to escalate it, and there are different ways they report things. What's the first thing that people do when they think they see something that they feel is unethical? The first thing they do, 88% of them, is choose to talk to their managers. What does that say about your role in compliance? If you want to drive ethical behavior, if you want people to speak up, where do you put your attention: more tweaking of your speak-up office, or making sure that managers are equipped to deal with ethical problems? Where do you get the most bang for the buck? Both are important, but you cannot neglect the role of managers. So, that was a bit of the message coming from the data as well: that managers are the first port of call for any potential behavior that is deemed unethical and needs to be addressed.

I do want to mention, though, that months ago we published a follow-up article with MIT, not with Amy Edmondson this time, coming out of this research, because we found that psychological safety is really important, but then the question is: What do we do? Do we organize webinars on psychological safety? Do we have a culture journey on psychological safety? Do we train people on psychological safety? Do we have a new psychological safety policy, maybe?

Zach Coseglia: What is the answer? What do we do?

Antoine Ferrère: We know that psychological safety is important, but what do we do to increase it? There was not any solid randomized controlled trial being done, so that's what we did with one division of Novartis, Sandoz. We did a very large randomized controlled trial, which is nothing more than separating managers and teams into different experimental conditions—we called it a "pilot" as well, as that feels less scary. Some managers received one kind of instruction in terms of how to conduct their next one-to-ones, some others received a different set of instructions, and the third group was the control group, very much like the placebo group you have in pharma or clinical trials, which just received an email saying, "There's a study going on." Then, we had the managers frame conversations with their team members differently, and we found that had a big effect on psychological safety for the cost of just an email. I think the most interesting takeaway is the following: First of all, we never talked about psychological safety in the experiment. So, you can increase psychological safety without talking about it. You can increase ethics without talking about ethics. You can maximize integrity without having to talk about integrity. Remember, not that I said it's not important, it is, but you can also do it without, because that seems to be the 101 change management strategy for any corporation: "We've got something, we want to talk about it, and we're going to educate people about that thing." It doesn't always work like that—you want people to experience it.
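As an illustration of the mechanics being described, here is a minimal sketch, in Python, of randomly assigning managers to three arms (two instruction variants plus an email-only control). The roster, condition names, and output file are assumptions made up for the example, not the actual Sandoz study design.

```python
# Illustrative only: random assignment of managers to three experimental conditions,
# roughly mirroring the two-instruction-variants-plus-control design described above.
import random
import pandas as pd

managers = [f"manager_{i:04d}" for i in range(1, 301)]  # hypothetical roster of 300

conditions = ["framing_instructions_A", "framing_instructions_B", "control_email_only"]
random.seed(7)            # fixed seed so the assignment is reproducible and auditable
random.shuffle(managers)  # shuffle, then deal round-robin to keep arm sizes balanced

assignment = pd.DataFrame({
    "manager_id": managers,
    "condition": [conditions[i % len(conditions)] for i in range(len(managers))],
})
assignment.to_csv("rct_assignment.csv", index=False)     # hypothetical output file
print(assignment["condition"].value_counts())
```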

Hui Chen: Antoine, what I so love about this is your willingness to experiment—this is something that we so encourage people to do. This is a situation where you say, "We're not finding a lot of research on how this is done, so let's start with a little experiment." I so much appreciate that effort, but I want to take this a little bit further, about the importance of psychological safety. We keep saying, "It's important. It's important." Do you have any insights about its importance beyond speak-up? In my imagination, when you have psychological safety, it ought to contribute to the performance of the team, so it's not just about speaking up when you see a problem, it's about speaking up when you have an idea. This may have nothing to do with misconduct or with ethics, it's just, "I'm seeing the way we're doing things. I don't think it's terribly efficient. I have this new idea about how we can do our job differently. We can do it more efficiently. We can do it with greater impact." Whatever the new idea is, psychological safety, at least in my assumption, should also help with that. So, psychological safety should contribute to at least a healthy team discussion, a healthy team brainstorming of new ideas, and a willingness to receive new ideas, which then, in turn, should have some kind of positive impact on performance. Do you have any insights on the importance and contribution of psychological safety in this space beyond just speak-up?

Antoine Ferrère: Absolutely. I'm really happy you brought that up because that's one of the things that we reflected on as well. Yes, I think you're right—it matters for more than just misconduct and speaking about problems. And that's why it is such a perfect segue—the perfect vehicle (more than segue)—for running an experiment, pilot, or trial (whatever you want to call that), in a way that is less scary for your organization across functions between ethics, risk, and compliance, HR, and even business units. We know psychological safety is great for many things—we know it is great for performance. And so, in the pre- and post-surveys, we also measured different desirable things that had nothing to do with ethics (they have things to do with ethics, but not on the face of it): "How much do I trust my manager? What's my sense of progress in the organization?" And these other factors also improved alongside psychological safety. The trap we all fall into is to think about the whole structure—we've got functions, so we work with concepts that are linked to a function. That's not how the human brain works. You don't go in to work thinking you walk into a function. You're there. You meet people. You do things. It's all messy. It's all related. So, it is not just about ethics, and I think that's why it's very important to work on that.

Hui Chen: I'll just add, beyond that, I have to again think there's a correlation between psychological safety and how valued an employee feels in an organization. That ability and that safety gives you that sense that you're being listened to, that your contributions matter, and that, in turn, has to translate into some kind of sense of commitment to the organization, the sense of belonging, and/or the sense of believing that you're a valued member of your team. So, I would love to see more people experiment and come up with empirical answers to some of those.

Antoine Ferrère: What I think is important to clarify as well, and you've alluded to it in what you've just said, is that sometimes—again, in a large organization, whether that's ethics, risk, and compliance, human resources, or any other function—we think about change: "We've got to give people information. We've got to tell them they have to do something. So, we're going to tell them they have to speak up." But I think companies don't have a speak-up problem—they have a listen-up problem. Certainly, if people are worth communicating to, they're also worth listening to. I think we've been, as organizations, with our finger on the talk button of the walkie-talkie for a very long time, and it's time we release it and try to listen to what's on the other side. We know that from the survey as well, because in the second year of the survey, we went further, asking, "Did you see something unethical in the last six months? What was it? How did you react? Did you speak to somebody?" But also asking, "You spoke to your manager? Great. What was the reaction? How did they make you feel? Do you think the problem was addressed?" Again, I think ethics is doing the hard thing when it's the right thing to do, and so, this is about listening as much as it is about speaking. Maybe we should call it a "listen-up problem" rather than a "speak-up problem."

Hui Chen: I have to say, as you were talking, you could see my head was nodding so vehemently that it was probably going to roll off my shoulders. I cannot tell you how much I agree with that because I have been saying that listening is one of the least utilized tools in the compliance and ethics tool kit, and in some of the tool kits, it doesn't even exist. As a parallel to "speak-up," we need to have "listen-up."

Zach Coseglia: I have one more topic before we get to our fun little questionnaire and get to know you a little bit better, Antoine, and that's to come back to the code of ethics, and specifically, the decision-making framework that you've embedded within it. Hui and I were talking the other day in the context of putting something together for a client where we were talking about the culture of ethics and integrity, and Hui said to me, "Why don't we just frame this as 'smart decision making?'" You all have actually created a framework to help your people make smarter decisions, to make more ethical decisions, so tell us about that.

Antoine Ferrère: When we set out to look at a behavioral change strategy for the code of ethics, there's the visible part of ethics and the code of ethics: the clarity of expectation, the signaling (that's really important), the walking of the talk, and the displaying of the values. Then, there's the more invisible part, as a way to put it, which is the influence of our direct environment, other colleagues' behavior, implicit or explicit goal setting, and all those other things that we know drive behavior. Within that visible, very deliberate part of ethics, there's this decision-making process that we are engaged in, and that sometimes we recognize we're engaged in—there are sometimes those dilemmas that we feel. And so, we wanted, for that case, to develop something to guide people when they really recognize there's a dilemma they have to solve. We know that's not always going to be the case—most of the time, you go about your day without thinking about what you're thinking; otherwise, it's going to be problematic. You cannot solve that only with mindfulness. Sometimes, yes—sometimes, people think about what they think, and they want to engage with a bit of a tool to help them think through that, and so, that's what we did. We wanted to go beyond just the typical reflection question you have in a traditional code of conduct, which is, "What would you think? What would you do if it were on the front page of the newspaper?" I think those are very useful questions, but we wanted to take them and build something with them.

What we did is we looked at decision science, which is one of the three pillars of my team—we do behavioral science, decision science, and data science—and we said, "What's the science of decision making? What's a good decision?" And there's a science to that, as you know. There's the quality of the information, the quality of the inference on that information, the quality of the alternatives being considered, and the call to action—there are many steps and many frameworks with which you can assess that. Then, we brought the ethics angle to say, "What are the different ways the derailers, the cognitive and motivational biases, can come into play?" It can be groupthink, overconfidence, confirmation bias, or many other things that get in the way of decision-making, but that are also relevant for decisions that have an element of morality within them, which is how you could characterize ethical decision-making in more layman's terms. And so, we built that series of 15 very simple questions. There's no AI. There's nothing really crazy. We just thought, "What can we do if we have 15 questions? What are the best questions we can ask to get a sense of the quality of that ethical decision making?" Some questions are very simple, just simple ways of knowing whether or not somebody might be susceptible to certain biases, and, again, it's quite crude, but it works. For example, "As part of that decision, do you have to say no to somebody you like, or do you have to say yes to somebody you don't like? Do you have some skin in the game there? Can you reflect on that?" But some of the ways we try to get at the biases are a bit less obvious. For example, for all of those 15 questions, you always have a choice to say, "I'm sure," "I'm not sure," etc., and that then leads to specific exercises for the biases that we identify, and people really liked it. When you go through that process, you realize, "Here are the kinds of biases you might have on that decision. And here's a bunch of things you can do with your team or by yourself to try to do something about it." So, that's what we try to do, but there's much more we can do, of course, and we're still trying to improve it and make sure it's being used. The limitation being that it's only going to work when people think to use it.
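As a purely illustrative sketch of how such a question-to-bias mapping could be structured, here is a small Python example. The question wording, bias labels, exercises, and helper names below are hypothetical stand-ins, not the content of the actual Novartis framework.

```python
# Illustrative only: each question screens for certain biases, and an "I'm not sure"
# answer triggers a suggested debiasing exercise, mirroring the flow described above.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    biases: list[str]                                        # biases screened for
    exercises: dict[str, str] = field(default_factory=dict)  # keyed by answer

QUESTIONS = [
    Question(
        text="Does this decision require saying no to somebody you like, "
             "or yes to somebody you don't like?",
        biases=["affinity bias", "motivated reasoning"],
        exercises={"I'm not sure": "Write down what you would decide if the person were a stranger."},
    ),
    Question(
        text="Have you actively looked for information that contradicts your preferred option?",
        biases=["confirmation bias"],
        exercises={"I'm not sure": "Ask a colleague to argue the opposite case for five minutes."},
    ),
    # ...the real framework has 15 questions; only two hypothetical ones are shown here
]

def debrief(answers: dict[int, str]) -> list[str]:
    """Return the debiasing exercises triggered by a user's answers."""
    suggestions = []
    for idx, answer in answers.items():
        exercise = QUESTIONS[idx].exercises.get(answer)
        if exercise:
            suggestions.append(f"{', '.join(QUESTIONS[idx].biases)}: {exercise}")
    return suggestions

print(debrief({0: "I'm not sure", 1: "I'm sure"}))
```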

Zach Coseglia: You've given us so much to talk about and think about, so much so that I think we can do another whole session with you, and maybe we should do just that at some point, but for now, let's take a pause from the nitty gritty weeds of behavioral and data science, and let's just talk about Antoine. We have a questionnaire that we ask all of our guests, and it's inspired by Proust, Bernard Pivot, and Vanity Fair...

Antoine Ferrère: It's very French.

Zach Coseglia: There we go. For me, it's James Lipton from Inside the Actors Studio, which is not very French. We've got a series of questions we're going to ask you. I'll ask you the first one. You actually have a choice of two questions—you can answer whichever one you want. The first question is: If you could wake up tomorrow, having gained any quality or ability, what would it be? Or: Is there a quality about yourself that you are currently working to improve, and if so, what is it?

Antoine Ferrère: I think one that I'm trying to improve is—further to what I was talking about earlier, so there's even a little segue there—how do I think less about what I'm thinking about? I'm very meta. I'm very self-conscious in good ways. I think I'm just trying to find ways of experiencing more of the things, rather than thinking about the things that I'm experiencing, and that's something quite fundamental. So, I'm just trying to look at myself and say, "How do I improve my experience by maybe thinking less about my experience and more enjoying that experience itself?"

Hui Chen: The next question is a choice of either: Who is your favorite mentor? Or: Whom do you wish to be mentored by?

Antoine Ferrère: I'll answer both questions—here's the bonus—both in a very bad way, though. To be entirely cliché and fanboy-sounding from a behavioral science perspective, somebody I would love to meet, before it's too late, is Daniel Kahneman, who has such a great mind. Again, as I said, it's not really a big surprise for a behavioral scientist, but I really admire not only his insights, but how he seems to carry himself through all the conversations that I've seen him having. So, I think he'd be a good mentor—I'm sure that I could learn from him, not only on the behavioral science side, but also, hopefully, from him as a human being.

Then, on the mentee side, without naming somebody specific, I think mentoring is really interesting and something I really like, and actually, I need to do more of it, because every time I have those conversations, whether formally or informally, I find them interesting. There's nothing more satisfying than just empowering people, giving them confidence, and seeing them bloom. I think that is something I value very much. Of course, I try to do that with my team, but it's not the same relationship, because you're encumbered by hierarchical lines in a way that you're not with a mentee. So, I like the process.

Zach Coseglia: That's great. One last pick one of two. The first option is: What's the best place you've ever worked? Or: What's the best job, paid or unpaid, that you've ever had?

Antoine Ferrère: I'm afraid to give you the very boring but truthful answer: My current job is the best job I've ever had. I love it. Of course, there are ups and downs, but in general, I'm super happy to do what I'm doing now. I feel very lucky, as well. I'm kind of waiting for the moment when somebody's going to pull the curtain and wake me up, and say, "What are you doing here?" So, I really try to enjoy the impact that we have.

Zach Coseglia: I'd say that there's a theme or a trend there from our guests—that's a pretty common response. But it's great—it means we've got folks who really love what they're doing.

Hui Chen: The next question is: What is your favorite thing to do?

Antoine Ferrère: This is so hard—I like so many things. I'll tell you, my answer is that I have no favorite thing to do. When I was a kid, I had no passion for one single thing; I was so curious that I was almost jealous of people who had a favorite thing to do or a favorite hobby, because I was always like, "I don't know. I like many, many things." So, in a meta way, my answer is that doing different things is my favorite thing to do.

Zach Coseglia: I love that. Let's see if you have a favorite for this question: What is your favorite place?

Antoine Ferrère: I think the place I like to be most is when I'm not working from home, and I go to the office, and I come back in the evening, open the door of the house, and see the kids and my wife—that's a good place to be. I like going back home. So, I think, in the totality of my experience, it's good and I like being there.

Zach Coseglia: Very nice. That's really nice.

Hui Chen: What makes you proud?

Antoine Ferrère: My brain is primed to think about the kids now because I just talked about them, so my answer would be, of course, that I'm proud of my family. But maybe on a narrower professional topic, I'm really happy about and proud of both the work that the team is doing and the space that is being given to me and the team to do that work, through the sponsorship that we have.

Zach Coseglia: All right. We go from the somewhat deep to the very shallow, which is: What email sign-off do you use most frequently? How do you end your emails?

Antoine Ferrère: It can range all the way from, "Best regards," which is very rare for me, maybe if I want to be super formal, to sometimes, "Thanks!"—it depends on the nature. Most of the time, though, I just say, "Best," do the line, "Antoine"—that's my default.

Zach Coseglia: I'm a fan of, "Thanks!" sometimes, as well.

Antoine Ferrère: Yes, I like the exclamation point, as well. It relieves the tension in a lot of emails.

Hui Chen: Next question: What trend in your field is most overrated?

Antoine Ferrère: It's when we reduce the field of behavioral science and its learnings not to its fundamentals—which are the scientific approach to solving problems, embracing social science in general, and trying things out—but to simply a list of biases that exist. I think that just is not a sustainable thing—there are more than 200 of them, and some of them are the same. So, I sometimes question whether that is either overhyped or the right approach. I think we always carry the risk, whether we go for the biases or not, of being seen as magicians coming in and pulling a rabbit out of the hat. But we're not magicians. We're scientists. And science is, most of the time, slow. If people remember one thing from behavioral science and the view I've shared in this podcast, it is that behavioral science is not about transformation—it's about slow, marginal gains that accumulate in a way that is reliable over time. That's what this is about.

Zach Coseglia: Yes, I love that answer. I think that we find, all the time, that the average observer has a very pop-psych view of behavioral science, and it has, at times at least, become a little reductionist or resulted in folks thinking that there's some sort of silver bullet. So, I don't think that, at least in the circles that we swim in, that answer will make you any enemies. Finally, one last question: What word would you use to describe your day?

Antoine Ferrère: "Inspiring."

Zach Coseglia: That's amazing. I love that. We'll take that.

Hui Chen: That's a good word to end your day on.

Zach Coseglia: Definitely. Antoine, you've given us so many wonderful Better Ways, from experimentation, to your lifelong sense of curiosity and the value that that can bring in our work, to the concept of listening up, not just speaking up, and so much more. Thank you for your time in joining us on the Better Way? podcast. Anything else you want to say to our listeners before we cut out?

Antoine Ferrère: It's been a pleasure.

Zach Coseglia: Thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that's happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple, Google, and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don't hesitate to reach out—we'd love to hear from you. Thanks again for listening.
