Kareem Saleh, CEO of FairPlay, on Building the AI Enablement Layer for Financial Services


Kareem Saleh is the CEO and co-founder of FairPlay, an AI enablement platform helping financial institutions test, tune, and monitor AI systems in production. Four years after his first appearance on the show, Kareem returns to discuss how FairPlay has evolved from credit model fairness into a full AI enablement infrastructure layer and why the rise of generative and agentic AI has made that work more consequential than ever.

What We Covered

  • How generative AI changes the definition of fairness in financial services
  • The shift from model validation to continuous, system-level testing
  • FairPlay’s three core capabilities: testing, optimizing, and validating AI systems
  • How one customer added one day to their model development cycle but saved 60–90 days in compliance review
  • The 25–33% of declined applicants who would have performed as well as the riskiest approvals
  • Why the question for legacy institutions has flipped, from “is AI safe enough to try?” to “is it riskier not to adopt AI?”
  • The political environment and how fairness demand reconfigures, not disappears, across administrations
  • State-level regulatory frameworks filling the federal enforcement gap
  • Kareem’s Congressional testimony on AI and algorithmic bias
  • The agentic AI opportunity in KYC, BSA, and AML workflows
  • Regulatory look-back risk and why today’s decisions can become 2029’s consent orders
  • Cash flow underwriting risks and climate risk as underappreciated threats to credit portfolios

Key Takeaways

Fairness in generative AI is fundamentally different from traditional ML — you’re debiasing reasoning and language, not just predictions, which means there is no single ground truth to evaluate against.

Institutions leaving 25–33% of creditworthy applicants on the table aren’t lowering standards. Their models simply have blind spots that better tooling can surface and fix.

The need for fairness infrastructure doesn’t decline with political shifts. It reconfigures into new domains — debanking, rural access, viewpoint discrimination — and state regulators are accelerating to fill any federal gap.

Agentic AI raises the stakes dramatically because systems are now acting autonomously and reasoning on their own, not just scoring risk, which expands the failure surface significantly.

About Kareem Saleh

Kareem Saleh is the CEO and co-founder of FairPlay, an AI enablement platform purpose-built for financial services. He has testified before Congress on AI and algorithmic bias and is a prominent voice on responsible AI adoption in regulated industries. He previously held senior roles in government and financial services before founding FairPlay.

Transcription

Kareem (00:09.23)
We’re at a moment where there is less visible federal enforcement and in some cases, I think a perception of lighter oversight. And that can create a dangerous dynamic that can lull people into a sense of complacency and cause institutions to start to underinvest in compliance. But the reality is many of these laws, especially around non-discrimination, are still on the books and they have long statutes of limitations, five, sometimes six years. So the decisions that financial institutions make today can become the basis for enforcement actions years down the line. In other words, the seeds of 2029’s consent orders are being planted right now.

Peter (00:52.174)
This is the Fintech One-on-One podcast, the show for Fintech enthusiasts looking to better understand the leaders shaping Fintech and banking today. My name is Peter Renton and since 2013, I’ve been conducting in-depth interviews with Fintech founders and banking executives. My guest on the show today is Kareem Saleh, the CEO and co-founder of FairPlay. When Kareem founded FairPlay, the mission was straightforward: help financial institutions build fairer, more accurate, machine learning based credit models. But then the world changed. Generative AI arrived, agentic systems started entering production, and suddenly the challenge of understanding how AI systems behave became orders of magnitude more complex. Rather than being disrupted by that shift, FairPlay leaned in, evolving from model validation into what Kareem describes as the AI enablement layer for financial services. In our conversation, we get into how fairness in a generative AI world is fundamentally different from fairness in traditional machine learning. We talk about how financial institutions are working with FairPlay to put AI into production safely at scale. We also discuss the political environment and why Kareem believes the demand for fairness infrastructure doesn’t decline with a change in administration, it just reconfigures. And we get into the agentic AI opportunity where FairPlay is building, testing and monitoring infrastructure for autonomous systems operating in high stakes workflows like KYC, BSA and AML. It’s a fascinating conversation about trust, risk and building with AI. Now let’s get on with the show.

Peter (02:46.7)
Welcome back to the podcast, Kareem.

Kareem (02:48.76)
Peter, it’s great to be back. It’s hard to believe it’s been four years.

Peter (02:52.814)
Yes, indeed. It has been almost four years, 2022, and let’s face it, a lot has happened. So basically, maybe you could catch us up on how FairPlay has evolved over the last four years and how you’ve kind of adapted to this gen AI world that we now live in.

Kareem (03:12.91)
Well, the short answer is that we had to rethink the problem and the solution entirely because generative AI magnifies and complicates the exact issues we’ve been talking about for years. Is the AI accurate? Is it biased? Is it stable? Is it robust to manipulation? But generative AI introduces a major shift, right? Because in traditional machine learning, the models produce numbers, but in generative AI, the models produce ideas. That is a revolution because in underwriting, you get a risk score, it maps to a probability of default, and you eventually observe some ground truth, like did the borrower pay back or not? But in generative AI, often there is no ground truth. The output might be a recommendation, a summary, a justification, a strategy. So the question becomes, how do you evaluate whether an idea is correct? Even harder, how do you evaluate whether an idea is fair? Because fairness in generative AI is fundamentally different. You’re no longer just debiasing a prediction, you’re trying to debias reasoning and language and judgment. And that runs into a deeper problem, which is that we don’t all agree on what a fair idea is. And the definition of what a fair idea is changes over time. What was a fair idea 50 years ago may not be considered a fair idea today, and what’s acceptable today may not be tomorrow. On top of that, generative AI systems are non-deterministic, and that is just a fancy way of saying, if you ask them the same question twice, you may get two different answers. So now you’re not just testing outputs, you’re testing distributions of possible outputs. So all of this has forced us to evolve FairPlay in a big way. We’ve moved from model validation to system-level validation. We’ve moved from static testing to continuous testing and from single outputs to behavioral patterns across many different runs. So today we’re building infrastructure to answer a whole new class of questions. How does this AI system behave and evolve over time? How does it behave across different users? How does it behave under different prompts? How does it behave in certain edge cases? Because in a gen AI world, the real risk isn’t just a bad answer. It’s systematic patterns of behavior that you don’t fully understand.
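
The distributional testing Kareem describes can be sketched in a few lines. This is an illustrative toy, not FairPlay’s method: `query_model` is a hypothetical stand-in for a real LLM call, and the comparison simply measures how far apart the outcome distributions of two prompts drift across repeated runs.

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a non-deterministic LLM call.
    return random.choice(["approve", "refer", "decline"])

def sample_outputs(prompt: str, runs: int) -> Counter:
    """Collect the distribution of outputs over repeated runs of one prompt."""
    return Counter(query_model(prompt) for _ in range(runs))

def max_rate_gap(dist_a: Counter, dist_b: Counter, runs: int) -> float:
    """Largest per-outcome rate difference between two output distributions."""
    outcomes = set(dist_a) | set(dist_b)  # Counter returns 0 for missing keys
    return max(abs(dist_a[o] - dist_b[o]) / runs for o in outcomes)

runs = 200
base = sample_outputs("Summarize applicant A's credit profile", runs)
variant = sample_outputs("Summarize applicant B's credit profile", runs)
print(base, variant, round(max_rate_gap(base, variant, runs), 3))
```

The point of the sketch is that a single run tells you little about a non-deterministic system; only the shape of the distribution over many runs does.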

Peter (05:48.436)
Or someone could actually try and trick the model and put in a prompt that is designed to create something that they want.

Kareem (05:57.304)
That’s right, adversarial attack is another area of vulnerability where generative AI adds a new layer of complication.

Peter (06:04.802)
Anyway, we’ll dive into that in a little bit, but when you’re talking to people at conferences or whatever, how are you describing FairPlay? Take us through sort of your core offerings today.

Kareem (06:16.364)
Yeah, so at a high level, FairPlay helps financial institutions adopt AI faster and more safely. We focus on three core capabilities: testing your AI systems and agents, optimizing your AI systems and agents, and validating your AI systems and agents. We help banks, fintechs, insurance companies answer questions like, are there blind spots in my AI systems? Where are my AI systems leaving money on the table? Where are my AI systems causing me to take on hidden risk? Some of the largest institutions in financial services are using FairPlay to identify issues they wouldn’t otherwise see and fix them in a profitable, defensible way. And the outcomes are very tangible. Higher approval rates, faster speed to market for new models, and greater confidence in deploying AI in production. We’re also seeing really strong adoption on the agent side, especially in high-stakes workflows like KYC, BSA, AML, and collections. Because in those environments, it’s not enough for an AI system to work, in quotes, right? It has to be consistent and explainable and aligned with regulatory expectations. And so that’s where we come in. We give institutions the tooling to test how these systems behave, to improve their performance, and to stand behind them with confidence. Ultimately, our goal is simple: to help our customers make more money and do more good.

Peter (07:43.662)
I was reading about one of your customers actually that shared that using FairPlay added one day to their model development lifecycle, but saved 60 to 90 days in compliance and model validation reviews. Speed to compliance — is that sort of a real tangible benefit today versus what it was like a few years ago?

Kareem (08:07.438)
Yeah, that’s a great question. I would say that it is part of the value, but not the whole story. The core value prop hasn’t changed, right? We help financial institutions make more money and do more good using AI. Now, the way we deliver that value has expanded, right? So if you break down our platform, you see it across three different layers. The first is optimization tools. These are about direct business impacts like increasing accuracy, approval rates, take rates, financial inclusion. This is where customers see immediate revenue lift. The second is on model validation. And that’s really where that quote comes from. Yes, we might add a day to the development process, but we save 60 to 90 days in compliance and validation. So it’s not just speed to compliance, it’s really speed to production. And then the third is agentic testing and monitoring. And this is the newest layer and increasingly critical because financial institutions are now deploying systems that interact with customers and operate with a degree of autonomy and reason on their own. And so the risk isn’t merely just model error. It’s taking the wrong action, saying the wrong thing, drifting over time. And as a consequence, you need continuous evaluation and control. So when you put all of that together, FairPlay gives institutions the ability to test, tune, and monitor their AI systems in ways that are good for the business, good for the customer, and good for the community.

Peter (09:38.24)
Is this sort of the custom systems they might’ve purchased from an AI vendor, or does this include like how they’re using ChatGPT or Claude or something like that?

Kareem (09:48.288)
All of the above, right? It can be AIs that you’ve developed in-house, AI sourced from a third party, AI systems that you’ve knit together across several vendors. At the end of the day, these systems have a tendency towards drift. They often rely on underlying foundation models that the financial institutions themselves don’t control and which can be changed without warning. And so what we’re seeing is an emerging need for what we call safe AI sourcing, which is — how do you know that a given AI solution, whether developed internally or sourced from a third party, is fit for purpose, is going to achieve your objectives, isn’t going to run your bank off a cliff, or do harm to a consumer?

Peter (10:29.624)
I’d love to talk about the lending space, where you got your start, and looking at the fairness of different models, but also the effectiveness. One of the other things in my research I discovered here was a large percentage of applicants that financial institutions declined probably would have performed as well as the riskiest people that they approve. Tell us a little bit about what you’ve discovered there, and if you could share some of the data.

Kareem (10:57.656)
Yeah, so what we find is that something like twenty-five to thirty-three percent of the highest scoring folks that most lenders decline would have performed as well as the riskiest folks they approved. And that is a staggering number and it tends to get lenders’ attention pretty quickly. This represents a very meaningful business opportunity. Now we have to walk lenders through how we came to that judgment, and anytime that you’re evaluating a new credit strategy versus an existing one, you’ve got to do all kinds of analyses, like swap set analyses and reject inference analyses, to identify these people who were previously declined and then make judgments about how well they actually would have performed. Fundamentally, what we’re doing when we do these analyses is asking, what did the model miss? And in many cases, the answer is that the model is overweighting certain variables and underweighting others that are equally or even more predictive. So for example, a lender might rely heavily on consistency of employment. And if you think about it, consistency of employment is a perfectly reasonable variable on which to assess the creditworthiness of a man. But all things being equal, consistency of employment is going to have a disparity-driving effect for women who take time out of the workforce to care for a loved one or raise a family. And so oftentimes what we’re doing is helping lenders reduce their over-reliance on a variable like consistency of employment and tune up the influence of other signals which are predictive but have less of a disparity-driving effect — like the accumulation of professional credentials, stability of income, stability of residence, etc. And what we find is that something like twenty-five to thirty-three percent of the high scoring applicants actually often look very similar to good borrowers on those dimensions. And that’s the aha moment. It’s like, you’re not lowering your standards. You’re finding good borrowers that your model is currently overlooking.
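
The swap set idea can be illustrated with a toy calculation. The applicant IDs and default probabilities below are invented, and real reject inference is far more involved; the sketch only shows the core comparison: find declined applicants whose inferred risk is no worse than the riskiest loan the lender already approved.

```python
# Applicants as (id, inferred_default_probability). In practice these
# probabilities come from reject inference; here they are illustrative.
approved = [("A1", 0.02), ("A2", 0.05), ("A3", 0.11)]
declined = [("D1", 0.09), ("D2", 0.10), ("D3", 0.25), ("D4", 0.31)]

# Risk tolerance implied by current policy: the riskiest approved loan.
riskiest_approved = max(p for _, p in approved)

# "Swap-in" candidates: declined applicants whose inferred risk is no
# worse than risk the lender already accepts.
swap_ins = [aid for aid, p in declined if p <= riskiest_approved]
share = len(swap_ins) / len(declined)

print(swap_ins, round(share, 2))  # → ['D1', 'D2'] 0.5
```

In this toy, half the declines are likely creditworthy under the lender’s own revealed risk tolerance, which is exactly the kind of finding the swap set analysis surfaces.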

Peter (12:58.71)
This is beyond the fairness piece that you talked about when we last chatted four years ago that was really focused on are you being fair in your underwriting. This is about are you being effective, right?

Kareem (13:11.566)
Both, right? I mean in some sense if your model is missing a population that would have paid you back, that is both a model quality issue and a fairness issue. But what we’re finding is that in this environment, lenders are very keen to look for new sources of hidden yield. And when we are able to identify these populations, the question becomes — how much opportunity are we leaving on the table if we don’t do this?

Peter (13:41.294)
So then who are you really focused on today? I mean, you’ve got some of the big names in fintech that are paying clients at FairPlay. Are you more and more focused on banking, credit unions? Where are you sort of finding the most traction right now?

Kareem (13:56.514)
Yeah, so we’re fortunate to work with a broad cross section of leaders across the financial services and fintech industry. That includes several top 20 banks. It includes two of the three major mobile network operators. It includes the nation’s largest white label credit card issuer, the second largest credit score provider. As you know, we are also deeply embedded in the banking as a service ecosystem with key players like Pathward. And on the FinTech side, we partner with many of the companies that are really shaping the industry, whether it’s Plaid, Chime, Upgrade, Octane, Figure, Varo, Happy Money, Splash Financial. And what’s really exciting is the diversity of use cases that we see, from underwriting to pricing to model validation to agentic-driven workflows. And what they all have in common is a recognition that AI is now core to the financial services business. And our customers come to us because they need better visibility, better control, more confidence in how those AI systems perform.

Peter (14:59.458)
I know you’ve been working with some of those clients for many years now. I’d love to kind of get a sense of what your sophisticated clients, the ones that have been with you a long time, what they know and how they operate versus someone who’s just coming to you for the first time.

Kareem (15:15.694)
There is a pretty wide spectrum in our customer base. So on the more sophisticated end, you’ve got fintechs that are building complex machine learning models. They’re using alternative data and cash flow underwriting. They’re deploying AI agents in production for things like sanction screening, politically exposed person checks, adverse media monitoring, and their teams are very technical. Their questions are, how do we push the performance of this AI system further? How do we accelerate our path to production? How do we safely scale these agents? On the other end of the spectrum, you have large banks and they are still earlier in their AI journey. They are trying to figure out how to transition from traditional underwriting and human-driven compliance processes to more automated, data-driven decisioning. And importantly, they don’t just need powerful tools, they need tools that are accessible to non-technical teams in risk and compliance and legal. A big part of what we do is bridge that gap to make these more advanced AI systems understandable and controllable.

Peter (16:24.75)
When you are talking to — often it’s going to be a traditional bank that is wary, that’s risk averse, they’re talking to you, but they’re not sure whether this is something that they can go with. Trust is a big issue. How do you develop trust with a traditional financial institution that has never done business with you before, but recognizes they have a problem?

Kareem (16:51.054)
Yeah, it’s a great question because the challenge isn’t just technical, it’s organizational and cultural. Historically, these institutions faced two big constraints. The first was that the tooling all sucked. Until recently, they simply didn’t have the right tools, especially tools that could evaluate AI systems rigorously, quickly, and do that in a way that satisfies regulators. The second was a talent mismatch. AI is highly technical. But if you look at most banks and insurance companies, their compliance and risk teams are largely non-technical. They tend to be mostly made up of lawyers. And so you have this gap, which is that the people responsible for oversight didn’t have the tools or the interface to effectively oversee AI systems. And so I think the way that we have built trust at FairPlay is by closing that gap. We have taken these capabilities that used to live only with quantitative teams and made them accessible to compliance and risk and legal through interfaces and workflows that are intuitive and transparent and aligned with how those teams actually think. I think that’s been a big part of the unlock, but there’s also a mindset shift. For risk-averse institutions, the question used to be, is this AI safe enough to try? Now it’s becoming, is it riskier not to adopt AI?

Peter (18:18.146)
That has become the change, I think — it’s riskier to do nothing.

Kareem (18:22.23)
And they’re right to worry about it because with all of these new bank charters being granted by the OCC, competitors are coming fast and they’re coming armed with AI native infrastructure. So bringing legacy institutions along is really about two things. It’s one, lowering the barrier to adoption and two, increasing confidence. So lowering the barrier to adoption is better tools, and increasing confidence is thorough testing, validation, continuous monitoring. Once you do that, AI stops feeling like a leap of faith and starts feeling more like a controlled, measurable upgrade to how the business actually operates.

Peter (19:03.222)
So I do want to — I don’t like getting political on my show very much, but I think it’s an objective statement to say the Trump administration does not have fair lending as a priority right now. And someone like you who’s built your business on fairness and regulatory compliance — how does this political shift change your pitch to banks and others?

Kareem (19:30.594)
Yeah, it’s a reasonable question, but I actually think the premise is a bit incomplete. Fairness does not go away with a change in administration. What changes is which fairness issues get emphasized and how they’re framed, right? And you see this in the increased focus on issues like debanking and concerns about viewpoint discrimination and access for rural communities and rural Americans. These are all fundamentally fairness questions. And here’s the key point — you can’t even begin to answer those questions without bias measurement and monitoring technology. So in a sense, the need for fairness infrastructure doesn’t decline. It actually expands into new domains. The second dynamic we’re seeing — even if there is less emphasis at the federal level, the states are stepping in aggressively, right? New York, New Jersey, Colorado, Maryland, Massachusetts — they’re all advancing frameworks that focus on disparate impact and require more rigorous AI testing and validation. So from our perspective, this isn’t a headwind, it’s just a reconfiguration of the demand. And it reinforces something we’ve always believed, which is that FairPlay builds infrastructure for understanding how AI systems behave, for controlling risk, and for making better decisions. And those needs persist across political cycles. They persist across regulatory regimes. They persist across industries. So honestly, our pitch and our roadmap haven’t changed. We still help institutions test, tune, and monitor AI systems in ways that are profitable for the business, defensible to regulators, and responsible to society. And we think that value proposition is durable no matter who’s in office.
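
The bias measurement Kareem says these questions depend on often starts with very simple metrics. As an illustrative sketch (not FairPlay’s methodology), the adverse impact ratio compares approval rates between a protected group and a control group; values below roughly 0.8 are a common red flag under the so-called four-fifths rule. The decision lists below are invented.

```python
def approval_rate(decisions: list[int]) -> float:
    """Share of applicants approved (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected: list[int], control: list[int]) -> float:
    """Protected-group approval rate divided by control-group rate.
    Ratios below ~0.8 commonly trigger disparate impact review."""
    return approval_rate(protected) / approval_rate(control)

# Illustrative decisions only.
control   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approval
protected = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # 50% approval

air = adverse_impact_ratio(protected, control)
print(round(air, 3))  # → 0.625
```

A one-time number like this is only a starting point; the state frameworks Kareem mentions push toward computing such metrics continuously, across segments and over time, rather than in a single audit.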

Peter (21:23.03)
It’s a good point. I hadn’t really thought of it because really there’s still fairness questions that the Trump administration is focused on. They’re just very different fairness questions than the Biden administration was focused on. Interesting. Okay. So one of the things that I know that you’ve done is you actually testified before Congress about AI and algorithmic bias. And I’d love to get your sense — beyond the regulators — what do you think about the lawmakers in Congress? I’d love to get your sense of their level of understanding of these issues and what is one thing that you wish they really understood better about how these systems work.

Kareem (22:04.494)
Yeah, I think there is often a gap between what policymakers understand about AI and how they talk about AI publicly. And that’s not surprising because the politics of AI are inherently complicated. On the one hand, there is real skepticism. Many Americans are uneasy about AI. They have concerns about job losses. They have concerns about bias. They have concerns about summoning the demon. On the other hand, I think there is a broad recognition that AI is foundational to the future of the economy and it’s foundational to America strategically. We need AI to remain competitive globally. And so I think what you’re seeing is that folks in Congress and policymakers are navigating this tension between risk and opportunity, between control and innovation. And I liken AI to nuclear power, right? When properly harnessed, it creates enormous value. But without the right safeguards, the risks can be significant. And that leads us to the real policy questions, which are — who writes the rules? And what standards do we use to evaluate these systems? And whose throat do you choke when something goes wrong? And how do we enforce those standards? And none of those are easy questions. We are still as an industry early in working through them. If there’s one thing I wish policymakers better understood, it’s this: because generative AI systems are dynamic, you can’t regulate them effectively with one-time audits or check-the-box compliance. You need ongoing measurement, continuous monitoring, clear quantitative standards — because our goal is to ensure that as these AI systems scale, they’re reliable, explainable, and aligned with societal expectations. That’s a gap that we’re still working to close.

Peter (23:55.276)
So I want to talk about agentic AI. I saw that you launched last year the agentic assurance platform partnering with Arva AI, which I know well from our AI Native conference days — validating agentic AI systems for AML and KYB use cases. How big a leap was this for you? You were talking about fairness and discriminatory bias in credit models. Now you’re talking about governing agentic AI systems. They seem on the face of it very, very different things, but maybe you can explain how you made that leap.

Kareem (24:34.606)
It’s actually been a very natural evolution. In both cases, whether it’s a credit model or an AI agent, you’re still asking the same core questions. How accurate is this system? Who does it work well for? Who does it miss? What are the blind spots? How stable is it over time? You’re also testing what happens when the inputs change. What happens when the environment changes? What are the failure modes? The difference is with agentic systems, the stakes are higher because the system is now not just scoring risk, it’s acting autonomously, it’s reasoning, it’s interacting with users. So the surface area for risk expands dramatically, but the core discipline is the same, which is — measure behavior, identify weaknesses, improve performance. I’d say there’s one underlying truth that drives everything about our business, which is that right now people don’t fully trust AI systems. And so we’re building the infrastructure to help earn that trust.

Peter (25:37.07)
Yeah, that makes perfect sense because I think the trust will come, but it’s only going to come from companies like you making these systems trustworthy, right? And agentic AI is really hot right now. People are talking about letting it loose in all kinds of different areas — personal financial management, e-commerce purchases, a whole range of different things. So it feels to me like a big opportunity. I presume you’re seeing this the same way — is this going to be a big part of your energy going forward, really getting more into validating agentic AI systems?

Kareem (26:16.2)
Yes, it is absolutely where a lot of our energy is going, but I’d frame it slightly more broadly. We see FairPlay as the AI enablement layer for financial services, right? We offer solutions that accelerate responsible AI adoption. So that means helping financial institutions test their AI, optimize it, validate it across both traditional models and agentic systems, with the goal of accelerating AI adoption in regulated industry. Because right now the bottlenecks to AI adoption are confidence, control, and compliance. And so what we do is find and fix AI blind spots in ways that allow institutions to move faster, take more risk, and unlock more value. So as you think about FairPlay going forward, think of us as the company that helps financial institutions actually put AI into production safely and at scale.

Peter (27:09.582)
I want to give a plug to my good friend Alex Johnson and his Fintech Takes show. You did a six-part series — I listened to all of them last year — called Model Citizens. I found it fascinating. What are some of the things you learned in that series and what’s an underappreciated compliance risk that most lenders aren’t taking seriously yet?

Kareem (27:32.066)
Yeah, I think there are a few risks that are still underappreciated. The one that really stands out to me is the regulatory look-back risk. Right, so we’re at a moment where there is less visible federal enforcement and in some cases I think a perception of lighter oversight. And that can create a dangerous dynamic that can lull people into a sense of complacency and cause institutions to start to underinvest in compliance. But the reality is many of these laws, especially around non-discrimination, are still on the books. And they have long statutes of limitations — five, sometimes six years. So the decisions that financial institutions make today can become the basis for enforcement actions years down the line. In other words, the seeds of 2029’s consent orders are being planted right now. The second area I’d highlight is cash flow underwriting. It is incredibly powerful — it’s probably the most important innovation in underwriting right now — but it introduces new risks. How do you define income? How do you treat volatility? Can certain signals, like how much you spend on hair or how much you spend on clothes every month, introduce unintended disparities? So it’s not just about adopting cash flow underwriting. It’s about using it carefully and correctly. Finally, I think one underappreciated risk I’d mention is climate risk. I live in Los Angeles. We have seen firsthand how disruptive climate risk can be. We’re seeing insurance become more expensive or unavailable altogether. And I think it’s only a matter of time before that directly impacts property values, credit risk, portfolio stability. And I think many lenders are still underestimating how quickly climate risk will reprice entire markets.

Peter (29:25.582)
That’s a whole can of worms which we’re not going to get into, but I do want to end with looking towards the future. You’ve got what seems to be a phenomenal opportunity here to be one of the key players in the movement towards an AI-centric financial system. If you look out five years, what would you like FairPlay to have achieved for you to say that you’ve become a big success?

Kareem (29:55.022)
For me, success in five years is pretty clear. FairPlay becomes the AI enablement infrastructure for financial services. Meaning, if you’re deploying AI in lending, in insurance, or banking, you’re using FairPlay to test it, tune it, and monitor it in production. We become part of the core stack — the layer that gives institutions confidence, control, and accountability. But that’s only half the story. The other half, and the more important half, is the human impact. Five years from now, I want to be able to point to millions more deserving borrowers getting approved, better pricing for people who were previously mispriced, and more capital flowing into communities that have been overlooked. Not because standards were lowered, but because the models got smarter. Because we helped institutions see what they were missing, correct their blind spots, and make better decisions. So success to me is this combination — on the one hand, we’re the infrastructure layer that powers responsible AI adoption. On the other hand, we’ve made the financial system more accurate, more inclusive, and ultimately more fair in how it allocates opportunity. So if we can do both of those things, I think we’ll be a very big success.

Peter (31:05.762)
Well, that’s a good place to leave it, Kareem. Really great to chat with you again on the show. I look forward to following along as you make that success a reality. Best of luck to you.

Kareem (31:18.03)
Thank you, Peter. Great to be here.

Peter (31:26.04)
You know, one of the many things that struck me in this conversation was the point that Kareem made that the question for traditional financial institutions has flipped. It used to be, is this AI safe enough to try? Now it’s, is it riskier not to adopt AI? Especially with new AI native bank charter applicants coming armed with modern infrastructure. And this is what the head of the OCC, Jonathan Gould, said at the AI Native Banking and Fintech Conference in Salt Lake City last September. He said, the riskiest thing for banks to do right now is nothing. This is a profound shift in how legacy institutions think about AI adoption. And it’s really the commercial tailwind behind everything FairPlay is building. It reframes their entire value proposition. They’re not a compliance cost, they’re the thing that makes the leap possible. Anyway, that’s it for today’s show. If you enjoy these episodes, please go ahead and subscribe, tell a friend, or leave a review. And thanks so much for listening.