In the second episode of Ropes & Gray's podcast series, Culture & Compliance Chronicles, litigation & enforcement attorneys Tina Yu and Amanda Raad, who co-chairs the firm's global anti-corruption and international risk practice, continue their conversation with Jules Colborne-Baber, a partner and forensic audit expert at Deloitte UK, and Richard Bistrong, CEO of Front-Line Anti-Bribery LLC, about behavioral science and compliance. The first part of their conversation focused on the importance of incorporating behavioral science in compliance programs. In part two, they turn their attention to the application of behavioral science in compliance testing and monitoring.



Transcript:

Tina Yu: Hi everyone, and welcome back to Culture & Compliance Chronicles, a podcast series focused on the behavioral science approach to risk management. I'm Tina Yu, a litigation & enforcement associate at Ropes & Gray. I'm once again joined by my colleague Amanda Raad, a litigation & enforcement partner and co-chair of Ropes & Gray's global anti-corruption and international risk practice, as well as Richard Bistrong, CEO of Front-Line Anti-Bribery LLC, and Jules Colborne-Baber, a partner at Deloitte. In this episode, we'll turn to the application of behavioral science in compliance testing and monitoring. Jules, since you are the audit expert here, what are your thoughts?

Jules Colborne-Baber: Richard, we've talked about whether someone checking in on you would have made a difference. I suppose the other aspect is around testing and monitoring, and to my mind, that's a critical part of any compliance program. And often, it's an underrated part or a part that I think organizations don't necessarily focus on sufficiently. They focus heavily on risk assessments, on putting policies and procedures in place, providing training, but then there's maybe a lack of resources, a lack of focus on testing and monitoring capabilities from the second line. To my mind, testing and monitoring is a critical element of a program to ensure the program's design is appropriate, but also that the program and its procedures are operationally effective, so it's actually working in practice. Clearly, you've got testing and monitoring being two quite different things. You've got the testing, which tends to be the deep dives, the operational effectiveness and design effectiveness using sampling-based methodologies, preferably statistically significant sampling methodologies. And then you have monitoring, which increasingly is around using significant data sets or even complete data sets and putting analytics over those data sets. There may be quite straightforward tests, red flag tests, or slightly more complicated analytics where you've got supervised learning or indeed unsupervised learning where you're looking for anomalies from what is normal for you, so there are the testing and monitoring aspects to it. To my mind, it's key because one of the things it does is actually provide critical information and data back to relevant stakeholders within the governance chain that the program is operating as designed and operating effectively. From a testing and monitoring perspective, what's your view, and do you think that really works to detect bad conduct? From my perspective I think it's critical, but I just wonder what your thinking was.

Richard Bistrong: It's a great question, Jules. I think part of it is, for lack of a better term, there's high altitude and there's transactional, and I think in our data-rich world it's easy to get very transactional. So looking at my own experience where the bribery conspiracies happened in places like New York, in the Netherlands, what we might consider to be low-risk regions – good governance, good transparency. The actual amounts, Jules, were quite small. For example, on the United Nations bribery conspiracy, the commission rate was under five percent, well under that red flag territory. So sometimes low commissions and low-risk regions, again, that doesn't necessarily mean there's good news, this can happen anywhere – so as I often share, if someone was looking at me transactionally, in terms of data, I don't think it would've thrown out too many red flags. But if someone was looking at me holistically, through data, I was a walking red flag. If someone was analyzing and combing through my discounts, my marketing allowances, my rebates in similar market conditions, everything was an anomaly – there was no fact pattern to my business conduct. So if someone was just sort of gathering that data at a high-level and just saying, "What is his business conduct with respect to the identification of discounting and marketing allowances?" they would have seen no rhyme or reason. I think that would have then led to some questions (to me) as to, "Why are you doing things in such a different way in similar market conditions?" So I think that monitoring, Jules, and that testing, as you identified, you can't substitute doing it transactionally, but there's also that higher level and just, "How does this look atmospherically?" And data can really help us in that world.

Jules Colborne-Baber: I think that's right. I think there's no doubt there are challenges still today in terms of data quality and data governance, and for compliance teams to obtain that sort of data, but I think you're absolutely right. You've now got the opportunity bringing diverse data sets together or data points together, starting with using some of that intelligent analytics to identify anomalies where, as you were saying, you're not looking at one particular thing or just two particular data points – you're looking at a number, bringing that together, and saying, "What looks normal for you and what doesn't look normal for you in terms of, let's say, sales practices in your sales force?" And it's that that enables you to then start to really drive out the identification of risk.

Richard Bistrong: And that starts to pull the threads out if something's not right.

Jules Colborne-Baber: I was just going to say we're certainly seeing organizations really starting to invest and think about that type of analytics nowadays. Are you seeing something similar?

Amanda Raad: Yes. I think absolutely people are trying to figure out 1) what data they have and 2) how best to use it. I think some organizations are getting overwhelmed with some of that decision making, honestly, just because there is so much data out there and trying to really decipher, "Where do I focus my time and energy best?" And sometimes I think organizations spend a little too long trying to get it perfect instead of trying and realizing there really is no perfect, that you just have to start to work with the data, and ask questions around the data, and don't look at the data in isolation. I think where I have seen the use of data and analytics be the most successful is where you take a data set, and it may not even be a very big data set, but you take a data set just for pure testing purposes and you start to ask some questions, and talk to people, and interpret some of that data. And it can pull threads that then lead you to ask more questions of different people. And, all of a sudden, you're starting to learn a little bit more. So you don't have to design the perfect risk assessment, the perfect analysis, but really just start using the data that already exists within the organization, and start talking to people, and really understanding what the data means.

Jules Colborne-Baber: I couldn't agree more. I think for me, the mantra has got to be "start small" in terms of the data, the number of use cases that you're looking at and demonstrate value. Then you can build from there because you can start to get stakeholders on board through demonstrating value when you start small – a pilot, a proof of concept and build out from there.

Tina Yu: Amanda, just to tie all of this together, earlier on, Richard was talking about how bad conduct was hiding behind good performance and if somebody had checked in on him earlier, that might have prompted him to speak up about the issues that he had been facing at that stage instead of letting it fester. I honestly think that really leads to this broader concept of speaking up and how that contributes to positive compliance culture. So what's your take on that?

Amanda Raad: Speaking up is a huge part of any successful compliance infrastructure. It's really tough because there's a couple kinds of "speak up." I think there's the speak up that Richard talked about, which is: Would Richard feel comfortable himself raising a tough situation that he might have been in at any particular time? So when you are facing a tough decision, do you feel comfortable actually flagging that truthfully and transparently to somebody else within the organization? That's one. And then of course there's the other where you see secondhand something happening that you feel the need to speak up about or not to speak up about. All of this, of course, goes to culture, right? Nobody is going to speak up in an organization with a culture where speaking up isn't hugely valued. I think everyone is on board with appreciating that of course we need to encourage openness, and transparency, and speak up, and anti-retaliation policies – that's there and all the documentation says the right thing, but the question is: How do you demonstrate to people that it actually is safe and that it actually is valued so that they actually take the step to make the comments or to raise their hand? And people are watching – every decision that an organization makes really is a decision that is seen by the organization. So when you receive a speak-up allegation, what are you doing? How is it being handled? Is it being looked into in a clear, meaningful way? Is the person that raised the complaint or allegation aware that it's being looked into in a clear and transparent way? Or is the company taking a more defensive stance from day one and really trying to defend against the particular allegation or poke holes in the complaint instead of really just trying to understand first what may or may not have happened? So you really have to think about that every day. It can't just be in a policy or in a statement.
It has to be across the board as you live and breathe, dealing with some of these challenges. I think that is something that is hard, and it's hard especially when you're in a high-enforcement environment. When a company does receive a high volume of complaints, or allegations, or speak-up comments, that's a good thing because it means you have a good culture and it means that people are feeling comfortable enough to raise them. But then people have this hesitancy of, "Well, is it too many? And what would a regulator think that I'm getting this many? And how should we really effectively move through these?" And you start to get into some of these trickier questions that can lead you down a dangerous road. So certainly you need to have a process in place and a policy and a procedure in place, but I think it really comes down to how you deal with these really important moments where someone either is able to raise their hand and say, "I don't know how to do this deal without maybe going into a gray area. How do you handle that?" or, "How do you handle an internal review or an internal investigation that somebody else has flagged?" But I'm interested, Jules, in your thoughts on how you're seeing companies really struggle with that.

Jules Colborne-Baber: So I think you basically covered it all, which is fantastic. I think there's just a few other things. One, I think, is around organizations needing to spend the time, and typically that's through surveys, to understand really where there are issues and how comfortable their employees are within the organization in terms of reporting issues. I think surveys are a valuable tool to understand where barriers or blockers might be, to understand the comfort with which employees will report and the channels that they are prepared to use, so I think that is important. On your other point around retaliation, clearly in terms of the way an organization deals with concerns, the retaliation can be quite subtle, can't it? It's not always obvious, so I think it's very important that that is appropriately dealt with. One thing I have seen one organization do, in terms of the issues, is sharing results with the organization. It was quite a decentralized organization across the world, and through an app, it would share, clearly anonymously, an issue that was raised, the investigation – how it was dealt with – and the result that arose. So that was part of the way of sharing lessons and indeed, to a degree, training its organization, so that the investigation team would share those types of issues on a relatively regular basis. So I think there's a couple of things: Ultimately, I think it comes down to, again, the leadership culture, and the trust that people need to have in their leadership, because they need to be able to trust leadership in order to disclose what they feel are sensitive things to a hotline or management, and so I think that is critical.

Amanda Raad: Richard, what do you think on speak up, both from the perspective of the individual facing the challenging situation and just from a culture perspective?

Richard Bistrong: I think it's a little bit of a muscle and it needs a little bit of practice – that it just might not come intuitively, especially in the scenario of, "Now, I need to speak up about myself." As you shared, Amanda, the situation where someone's working on a deal and they realize, "I can't do this without sacrificing integrity. What do I need to do in this situation?" I think what I've seen a lot of organizations do is have if-then planning sessions and ethical dilemma workshops where a lot of these scenarios are somewhat predictable. I think we've seen very few surprises in our regulatory world. So why not act these out, do some if-then planning in the safe zone where people can talk about the different types of responses? And there may not be a perfect answer. This might be about risk mitigation, but at least you're unpacking all the workable alternatives together as a team. So I think that when someone's in that worst possible moment, jet lagged, sleep deprived, and struggling with success and they need to make a decision, they can reference, "These are some of the things that we've talked about. These are some of the people that I can call." Going back to what you both addressed that, "My organization has sparked a culture where silence is not golden but speaking up about risk, whether it's my own or something I've observed, is what is cherished." So I don't think we can paper over that – I think it needs to be brought to life and to shine some light on it. I think that having these exercises and this sort of safe dialogue, going back to when there's not a problem at hand, is what will spark that culture that we're looking for when an issue is at hand.

Tina Yu: Jules and Richard, thank you both so much for joining Amanda and me for this insightful discussion. And thank you to our listeners. We appreciate you tuning into our Culture & Compliance Chronicles podcast series.

Originally published by Ropes & Gray, April 2020

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.