
By Rob Cutler
Managing Director at Nexus AML
Financial crime compliance teams are under intensifying pressure.
Transaction volumes continue to rise, while expectations around consistency, traceability, and speed are constantly increasing.
Criminal behaviour evolves just as fast, often exploiting operational bottlenecks such as fragmented data and inconsistent decision-making.
Modernisation discussions can become overly binary. Some push for “more rules-based logic”, others for “more AI”, while teams on the ground worry that automation will dilute human judgement.
A more useful framing is that financial crime analysis is delivered through three methods that can be combined deliberately:
- Rules-based logical analysis
- Human Execution
- Artificial Intelligence (including Machine Learning)
These are methods, not competing ideologies. Each performs best under specific conditions. Here is how each method functions, and how they can work together most effectively.
Method 1: Rules-based logical analysis as the foundation for deterministic control
Rules-based logical analysis uses structured conditional logic such as “if”, “and”, “or”, and “then”.
Data fields are compared against reference information and thresholds to generate consistent outcomes. This approach is predictable and straightforward to evidence.
Rules-based analysis is strong where inputs are clean, structured, and stable. It is typically effective for baseline checks, simple typologies, and high-volume controls.
Its limitations are well known. As rule sets expand, they can become harder to oversee. Interactions between rules can produce unintended outcomes, and maintaining performance often requires continuous tuning. Rules also depend heavily on data quality. If data is incomplete or poorly standardised, rules can miss risk, or generate excessive false positives.
Used well, rules provide a stable base layer and clear control boundaries. Used alone, they can become brittle as complexity rises and typologies change.
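The conditional logic described above can be sketched in a few lines. This is a minimal illustration, not a real rule set: the field names, thresholds, and reference data are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical transaction record; field names are illustrative only.
@dataclass
class Transaction:
    amount: float
    currency: str
    country: str
    customer_risk: str  # "low", "medium", or "high"

# Placeholder reference data and threshold, standing in for real
# sanctions lists and policy-defined limits.
HIGH_RISK_COUNTRIES = {"XX", "YY"}
REPORTING_THRESHOLD = 10_000.0

def evaluate(txn: Transaction) -> list:
    """Apply deterministic if/and rules and return any triggered alerts."""
    alerts = []
    if txn.amount >= REPORTING_THRESHOLD:
        alerts.append("THRESHOLD_BREACH")
    if txn.country in HIGH_RISK_COUNTRIES and txn.customer_risk == "high":
        alerts.append("HIGH_RISK_COMBINATION")
    return alerts
```

The same input always produces the same alerts, which is what makes this approach predictable and straightforward to evidence. It is equally visible why the approach degrades when inputs are incomplete: a missing or badly standardised country code simply fails the condition and the risk passes through silently.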
Method 2: Human Execution as the adaptive layer where accountability sits
Human Execution is the method where analysts and investigators apply controls directly through judgement, rather than relying on deterministic rules or model outputs.
In practice, this includes working alerts, reviewing documents, and determining whether to clear, escalate, or exit.
Human Execution is essential when the scenario is complex, or the decision requires interpretation. It is also where accountability ultimately sits. Even in automated environments, high-risk decisions and edge cases must be defensible and consistent.
Human Execution also provides crucial safeguards. People validate whether rule sets are producing the intended effect, and whether AI outputs remain reliable.
The constraints are familiar. Skilled analysts are expensive, capacity does not scale instantly, and repetitive work increases fatigue and inconsistency risk. These realities are a reason to modernise, reducing low-value manual effort while keeping judgement where it matters.
Method 3: Artificial Intelligence (including Machine Learning) to deliver scale and workflow acceleration
Artificial Intelligence focuses on building systems capable of tasks such as pattern recognition and language processing.
Machine Learning sits within AI, and refers to algorithms that learn from data and improve performance over time.
For financial crime teams, AI is often most valuable as workflow acceleration rather than full replacement. Relevant approaches include:
- Generative AI for case summaries
- Large Language Models for language understanding and generation
- Large Reasoning Models to improve reliability by structuring reasoning before answers are produced
- Agentic AI that can execute multi-step tasks towards defined goals with limited human intervention
- Explainability and orchestration techniques that help keep workflows transparent and auditable
Operationally, AI can reduce friction. It can support triage and prioritisation, identify patterns and summarise complex histories.
The difference between useful and risky adoption is governance and design. AI systems do not understand intent in the way humans do. They detect patterns and can produce outputs that appear confident even when wrong.
Generative AI can create plausible text containing errors. Models can drift over time as behaviours change. Agentic AI introduces additional risk because it can take actions, not just provide recommendations.
That is why AI needs clear boundaries, monitoring, and human oversight, especially where decisions are high impact.
How to choose the right mix across the three methods
A practical way to design an operating model is to decide which method should lead, and which should support, at each stage of the workflow. Three factors drive that choice.
- Complexity
As the number of data points rises and interactions become more nuanced, pure rules struggle. Human Execution becomes more important, and AI can add value by structuring information and highlighting patterns.
- Repeatability
When cases resemble each other, AI and Machine Learning can scale effectively because they learn from repetition. When cases are genuinely unique, Human Execution remains essential, with AI focused on support rather than decisioning.
- Data availability and quality
Rules-based logical analysis and AI depend on usable inputs. If critical data is missing or inconsistent, Human Execution plays a larger role. Upstream data improvements are required before automation can be relied on.
A pragmatic adoption path that strengthens control
For many organisations, the safest and fastest route to value is workflow augmentation rather than autonomous decisioning.
That means using AI to reduce manual effort around judgement, while keeping accountability clear.
A common pattern is:
- Rules-based analysis enforces policy requirements and baseline controls
- AI accelerates enrichment, prioritisation and summarisation
- Human Execution remains the point of judgement, escalation, and final decision
Over time, teams can shift specific repeatable tasks from Human Execution into rules or AI. But crucially, only when performance is proven and governance is in place. Treat automation as a control that must be monitored and managed, not a one-time deployment.
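The layered pattern above can be sketched as a simple routing function: rules enforce baseline controls, an AI step prioritises, and a human makes the judgement call. The field names, scoring stub, and queue names are illustrative assumptions, not a prescribed design.

```python
def rules_layer(case: dict) -> bool:
    """Deterministic baseline control: hard escalation on a policy breach."""
    return case.get("sanctions_hit", False)

def ai_priority(case: dict) -> float:
    """Stand-in for an ML prioritisation model: higher means review sooner."""
    return 0.9 if case.get("unusual_pattern") else 0.2

def route(case: dict) -> str:
    """Route a case through rules, AI prioritisation, then human review."""
    if rules_layer(case):
        return "escalate"           # policy requires it; no discretion
    if ai_priority(case) >= 0.5:
        return "human_review_high"  # analyst judgement, prioritised queue
    return "human_review_routine"   # analyst judgement, standard queue
```

Note that every non-deterministic path still ends with a person: the AI layer only reorders the work, while clearance and escalation decisions remain with analysts.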
Combining approaches
Rules-based logical analysis provides deterministic control and auditability.
Human Execution provides judgement, adaptability, and accountability.
Artificial Intelligence, including Machine Learning, provides scale and workflow acceleration when designed with the right boundaries.
The most resilient operating models combine all three methods deliberately, matching the approach to complexity, repeatability, and data realities. Nexus AML’s AI and Automation whitepaper explores the topic in depth, outlining practical operating model guidance, use cases, and governance considerations for adopting AI safely in financial crime workflows. To find out more, read the report: Anti-Money Laundering Operations Management: Balancing The Three-Method Framework for Financial Crime Operations.
About Nexus AML
Nexus AML supports financial crime teams in designing and delivering agentic AI operating models across CDD, TM, screening, and fraud. These models combine Rules-based logical analysis and Artificial Intelligence (including Machine Learning), governed by in-house SMEs, and are packaged into one auditable managed service.







