For almost a decade, I have been developing anti-fraud solutions for major global tech platforms: first at Yandex, where I built protection systems from scratch for services like Yandex Market, and later at Meta, where I worked on Ads and Commerce Integrity.

Along this journey, I have been privileged to see firsthand the rapid evolution of both fraud and the systems we build to stop it. Fraudsters adapt at breakneck speed; meanwhile, the platforms we inhabit and the marketplaces we build must protect vast ecosystems of buyers, sellers, and advertisers who expect us to keep their interactions “seamless.”

Now in 2025, I want to share the insights and the lessons I’ve gathered along the way. I hope these will help modern marketplaces stay both secure and user-friendly in the face of persistent threats.

The hidden cost of marketplace fraud

When I began developing thorough anti-fraud systems for Yandex Market in 2018, many companies saw fraud as nothing more than a budget line item. To them, it was an acceptable cost of doing business. They set thresholds for chargebacks and lost revenue and only stepped in when those figures reached an uncomfortably high level.

But I think of fraud as more than a financial nuisance; it is something that could just as easily defraud any one of us. I consider it a cornerstone threat to the trust that digital platforms must earn if they are to exist and grow.

If users can’t trust a platform to deliver on its essential promises, namely that the user experience will be authentic, fair, and safe, that platform will suffer a dangerously high churn rate and a poor reputation. Platforms are simply too easy to replicate and improve upon for the successful ones not to make authenticity and safety their essential guideposts.



Beyond rule-based systems: the evolution of detection methods

At Yandex, we were heavily reliant on static, rule-based systems that functioned like automatons: if a user created too many accounts per day or paid with identical details across suspicious profiles, we flagged them. While these rules caught the obvious patterns (and did so quite well), they were far too rigid to keep pace with the quickly evolving tactics of our adversaries. Fraudsters figured out that it was much safer to shift between as many accounts as possible and to use a far broader range of (often plausible-sounding) cover stories.
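To make the limits of this approach concrete, here is a minimal sketch of the kind of static rules described above. The thresholds, field names, and rule logic are all illustrative, not Yandex’s actual values:

```python
from collections import Counter

# Illustrative threshold; a real system would tune this per signal.
MAX_ACCOUNTS_PER_DAY = 5

def flag_by_rules(events):
    """Flag user IDs that trip simple static rules.

    `events` is a list of dicts like
    {"user": "u1", "action": "create_account", "payment_hash": "abc"}.
    """
    flagged = set()

    # Rule 1: too many account creations attributed to one user/device.
    creations = Counter(e["user"] for e in events
                        if e["action"] == "create_account")
    flagged |= {u for u, n in creations.items() if n > MAX_ACCOUNTS_PER_DAY}

    # Rule 2: identical payment details reused across several profiles.
    by_payment = {}
    for e in events:
        if e.get("payment_hash"):
            by_payment.setdefault(e["payment_hash"], set()).add(e["user"])
    for users in by_payment.values():
        if len(users) > 3:  # illustrative threshold
            flagged |= users

    return flagged
```

The brittleness is visible in the code itself: a fraudster who stays just under each threshold, or who varies payment details per account, passes every rule untouched.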

Our big breakthrough in countering this wave of badness came when we began to focus on behavioral fingerprinting and on grouping (or clustering) not just the overt actions of the users we flagged, but also the many subtle signals (like mouse movements, session lengths, and yes, quite a few signals that are unique to each person’s way of typing) that form a signature for each user.

At our best, we could detect fraud because even when the bad guys were using new accounts or new IP addresses, they were still signing all their work with a unique fraudster fingerprint.
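As a toy sketch of that idea, assuming illustrative feature names like keystroke timing and mouse-path entropy (not our actual signal set), accounts whose behavioral vectors sit close together can be greedily grouped as likely the same operator:

```python
import math

def fingerprint(session):
    """Reduce a session to a behavioral feature vector (names illustrative)."""
    return (
        session["avg_keystroke_ms"],    # typing cadence
        session["mouse_path_entropy"],  # how "human" the cursor movement looks
        session["session_seconds"],
    )

def cluster_accounts(sessions, threshold=5.0):
    """Greedy single-link grouping of accounts by fingerprint distance."""
    clusters = []  # list of (representative vector, set of account IDs)
    for s in sessions:
        fp = fingerprint(s)
        for rep, accounts in clusters:
            if math.dist(rep, fp) <= threshold:
                accounts.add(s["account"])
                break
        else:
            clusters.append((fp, {s["account"]}))
    return [accounts for _, accounts in clusters]
```

A production system would use proper clustering over many more signals, but the principle is the same: fresh accounts and fresh IPs don’t change the vector, so the cluster survives the disguise.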

The balancing act: precision vs. friction

Throughout my career, one of the toughest challenges I’ve faced has been ensuring that strong fraud defenses don’t produce a terrible user experience. We tried to express both false negatives and false positives in terms of real money: a system that blocks a lot of fraud can be just as damaging if it also blocks a lot of good customers.

I’ve also found that friction has a habit of creeping into the onboarding process. For instance, we might require identity verification for new merchants or impose frequent authentication checks on repeat buyers. While these measures reduce fraud, they can also push honest users away.

When we analyzed Meta’s onboarding flows closely, we discovered that certain verification steps were overly strict, accounting for roughly 80% of onboarding friction. Adjusting those requirements, removing some steps and streamlining others, maintained the same security level while removing unnecessary hurdles for legitimate users.

One of the most effective approaches today is a tiered one: honest, low-risk users and transactions breeze through with little friction, while higher-risk interactions trigger robust post-checks. Done well, the low-risk user is hardly aware that anything beyond business as usual is happening.

During periods when risk signals spike across the platform, we can afford to be a little more paranoid about the kinds of money and identity requests we’re willing to fulfill.
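A tiered router like this can be sketched in a few lines. The score bands, action names, and the platform-wide alert mechanism are all illustrative assumptions, not any platform’s actual policy:

```python
def route(risk_score, platform_alert=False):
    """Return the friction applied to a transaction given its risk score.

    `platform_alert` models periods when risk signals spike platform-wide,
    shifting every band toward more scrutiny.
    """
    low, high = (0.3, 0.7) if not platform_alert else (0.15, 0.5)
    if risk_score < low:
        return "allow"             # low-risk users breeze through
    if risk_score < high:
        return "allow+post_check"  # asynchronous review, invisible to the user
    return "step_up_auth"          # explicit verification for high risk
```

The key design choice is that the middle band stays invisible: the transaction completes normally while checks run after the fact, so only genuinely high-risk interactions ever see friction.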

The multi-layered defense strategy

If there’s one lesson I’ve learned during my years at both Yandex and Meta, it’s that no single tactic can cover all angles. Always remember: fraudsters are expert puzzle-solvers. Close one avenue, and they’ll try another. An effective anti-fraud program, therefore, is by necessity a multi-layered affair.

For instance, at Yandex Market, we constructed a complete system that comprised:

  • User trustworthiness scoring: assessing behavior to catch abusers of promo codes, returns, or special prices.
  • Merchant authentication and quality assurance: ensuring that merchants not only had the right credentials but also kept the promises that made them seem trustworthy in the first place.
  • Detection of listing inflation and ad fraud: stopping, if not outright preventing, attempts to make a product or an ad seem more valuable than it actually is.
  • Transaction anti-fraud: ensuring that we could spot and stop in a timely manner any attempts to use stolen credit cards or otherwise coordinate scams that employed our product as a way to get money.
  • Monthly reporting: compiling a report that would help us spot trends, figure out what was going on, and if possible, patch any weaknesses that we were responsible for.

In addition to these core functions, we had to think through and provide for quite a few other details.

Separate safeguards address the different ways attackers might try to fool our systems: if a fraudster gets past one layer, another layer is likely to catch them.
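The layering can be sketched as independent checks run in sequence, where a transaction is blocked if any layer fires. All the check logic and field names here are illustrative:

```python
def trust_score_layer(txn):
    """Fires when the buyer's trust score is suspiciously low."""
    return txn.get("user_trust", 1.0) < 0.2

def merchant_layer(txn):
    """Fires when the merchant has not passed verification."""
    return not txn.get("merchant_verified", False)

def payment_layer(txn):
    """Fires when the payment instrument is known-bad."""
    return txn.get("card_reported_stolen", False)

LAYERS = [trust_score_layer, merchant_layer, payment_layer]

def evaluate(txn):
    """Return the name of the first layer that flags the transaction, or None."""
    for layer in LAYERS:
        if layer(txn):
            return layer.__name__
    return None
```

Because each layer looks at a different facet of the transaction, evading one (say, by using a verified merchant account) does nothing to evade the others.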

This basic setup hasn’t changed; what has changed is that the safeguards have become far more sophisticated, especially with the recent adoption of AI-driven anomaly detection and real-time data consolidation. It remains probably our best defense against the many ways our systems can be attacked.



The power of cross-validation and human review

Fraud detection is largely the product of machine learning and automation today. But I’ve come to understand that human oversight is as crucial as ever, if not more so, to catch the emerging or subtle threats that can slip by even our best algorithms.

At Meta, we balanced algorithmic flagging with an expert review system: if a user or transaction hit a certain risk threshold, a team of specialists was ready to investigate at a moment’s notice.

This mixed approach is very good at finding patterns that the algorithms haven’t yet learned. Each time the human team uncovered a new variant of fraud, we could feed that knowledge back into the machine learning model and make it stronger for the next time out.
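The triage side of that loop can be sketched as follows; the score thresholds and action names are illustrative assumptions, not Meta’s actual system:

```python
def triage(case, model_score, review_queue, low=0.2, high=0.8):
    """Auto-decide clear cases; escalate the gray zone to human reviewers."""
    if model_score >= high:
        return "auto_block"   # model is confident this is fraud
    if model_score <= low:
        return "auto_allow"   # model is confident this is benign
    review_queue.append(case)  # humans investigate the uncertain middle
    return "needs_review"

def incorporate_labels(training_set, reviewed_cases):
    """Fold human verdicts back in so the next model learns new fraud variants."""
    training_set.extend(reviewed_cases)
    return training_set
```

The point of the design is the loop: every human verdict on a gray-zone case becomes a labeled example, so the model’s uncertain band shrinks with each retraining cycle.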

The human analysts also brought something essential that pure data sometimes lacks (especially in edge cases such as the suspicious traffic spikes that turned out to be legitimate promotional campaigns): a real-life understanding of the kind of context that makes all the difference in whether something is a fraud or not.

As AI advances even further, the balance may shift more toward automation and away from human review, but I doubt it will ever tip completely. After all, people are still much better than algorithms at spotting hard-to-detect errors and at reading the signals that indicate something is amiss. In short, human oversight is crucial to making sure that what we do online is both ethical and fair.

Building for future threats: anticipating fraud evolution

One issue that has grown every year is the need to stay one step ahead of fraudsters rather than merely reacting to their latest schemes. During my time building anti-fraud solutions at Yandex and, later, investigating incidents at Meta, I saw just how quickly malicious actors pivot when their old tricks fail. A new policy or anti-fraud measure might deter them for barely a moment before they find a workaround that leaves you blindsided.

To get ahead of this, some firms have created “red teams” that mimic hostile actors and attempt to break into their own systems. These would-be attackers are good at thinking like the bad guys, probing how every aspect of a product or policy might be used (or misused) to gain access. Simulating threats this way is an effective means of surfacing gaping holes in your security.

Taking these steps is more crucial now than ever. AI-fueled fraud rings have become far too clever, with bots able to convincingly mimic human behavior and operators running advanced scripts that coordinate at a disturbingly large scale. Staying ahead of such threats requires not just funding but also a serious commitment to R&D, multidisciplinary cooperation, and a culture in which every assumption about your current defenses is rigorously challenged.



Measuring what matters: beyond detection rates

For many years, a single statistic measured the capability and effectiveness of fraud teams: how many fraudulent actions they blocked. But that statistic alone barely scratches the surface of whether these teams positively impact the business. You could catch a large volume of fraud, but if you also mistakenly block thousands of legitimate users, you might do more harm than good.

At Meta, we put holdouts, human review validations, and cross-validation methods to work refining our evaluations. We then translated these metrics into money, showing leadership the trade-off between blocking fraud and reducing friction. This “monetary coverage” approach let us see missed fraud and false positives in concrete financial terms, which made prioritization far easier.

These days, it’s equally vital to gauge overall business impact, user retention, and brand trust alongside the more traditional catch-rate and fraud measurements. High-level data might say your false-positive rate is only 2%, but if that 2% includes high-value sellers or brand-new customers, the damage can be disproportionately large.
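A minimal sketch of such a monetary-coverage calculation, assuming each evaluated decision carries a dollar value (the schema and values are illustrative):

```python
def monetary_impact(decisions):
    """Convert evaluation errors into dollars.

    `decisions` holds dicts with `kind` in {"fn", "fp"} and a `value` in
    dollars: a false negative (fn) costs the fraud amount we failed to
    block; a false positive (fp) costs the estimated lifetime value of the
    legitimate user we wrongly blocked.
    """
    missed_fraud = sum(d["value"] for d in decisions if d["kind"] == "fn")
    blocked_good = sum(d["value"] for d in decisions if d["kind"] == "fp")
    return {
        "missed_fraud": missed_fraud,
        "blocked_good_users": blocked_good,
        "total_cost": missed_fraud + blocked_good,
    }
```

Weighting false positives by user value is exactly what exposes the 2% problem above: a handful of wrongly blocked high-value sellers can outweigh a large amount of blocked fraud.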

Building trust through transparency

One frequently neglected aspect of combating fraud is transparency. During six months of work that increased revenue and user engagement on Yandex Market, our team observed how pivotal it was for users to understand some basic details about our security measures. They certainly don’t need to see the nitty-gritty algorithms performing our scam-spotting magic, but they do want some confirmation that we’re protecting them from scams, bad listings, and payment fraud.

By clearly conveying overall security measures, without revealing the precise details that bad actors might use to circumvent our safeguards, we built user trust. Even straightforward statements like, “We check every transaction in real time” or “We put merchants through a multi-step verification process before allowing them to sell on our site” go a long way toward assuaging user fears. That will be truer than ever in 2025, when skepticism around data usage and AI decision-making is at an all-time high.



Looking ahead: trends shaping fraud detection in 2025

Several trends are taking shape as we move into 2025, each affecting the future of fraud detection in online marketplaces.

Regulatory scrutiny and compliance

Globally, governments are focusing more attention on data handling, identity verification, and consumer protection. Companies that are a step ahead aren’t just complying with the bare minimum; they’re doing much better than that. They’re practicing data ethics.

These companies see laws like the GDPR (General Data Protection Regulation) or CCPA (California Consumer Privacy Act) as a floor, not a ceiling, for how they handle data. They use these regulations as a guide to treat all of their users fairly, no matter their demographic.

Decentralized and AI-powered fraud

Fraud rings are using machine learning of their own to seek out patterns and weaknesses in a platform’s defenses, and to scale their schemes accordingly. Stopping these clever adversaries requires equal parts sophisticated AI and a 21st-century “discover, defend, and iterate” strategy.

Cross-platform collaboration

Bad actors don’t confine themselves to a single marketplace; they frequently employ the same pilfered credentials or tactics across many venues. In the tech world, working together has become the norm, indeed, the necessity, for both industry giants and smaller platforms. By pooling and analyzing the kind of data that pseudonymous fraudsters would rather not have out in the open, they can all make better, more secure products.

User empowerment

An increasing number of marketplaces now educate users about typical scams, phishing attempts, and social engineering. Through measures such as two-factor authentication prompts or prominent disclaimers shown before transactions, platforms try to ensure that users are on the lookout for any funny business. This model spreads the responsibility for detecting fraud between the platform and its users and reduces the impact fraud can have.

The human element

When I think about the past 10 years, I know for certain that our best weapon against fraud isn’t just technology. Machine learning and data analytics play starring roles, but the real fraud-busting line of defense is the human one and, underneath it, the layer of trust we build with our users.

At its essence, preventing fraud is about maintaining the relationships that fuel a bustling marketplace. Every time a user registers or finishes a transaction, they take a leap of faith that the platform is on their side. For me, that’s why I come to work every day: to ensure that this trust is well placed. Even as fraud evolves at breathtaking speed, we can watch, adapt, and protect the marketplace experience so that honest participants can engage with confidence.

Certainly, the arms race between con artists and defenders will go on, but I’m optimistic. By balancing security with convenience, practicing threat modeling, layering our defenses, and keeping users in the know, we have a chance to create a safer, stronger place for everyone involved. And in the end, it’s not just about stopping the bad guys. It’s about protecting how people connect and trust one another, which is at the heart of any marketplace worth using.