FraudBench: A Multimodal Benchmark for Detecting AI-Generated Fraudulent Refund Evidence

1College of Computing and Data Science, Nanyang Technological University
2Alibaba-NTU Global e-Sustainability CorpLab (ANGEL)
3Alibaba Group
822 real-world review samples · 7,928 total images · 29 total categories · 6 AI generation models · 5 evaluation dimensions · 15+ models & detectors evaluated

Synthesized with: GPT Image 2 · Grok Imagine · Nano Banana 2 · Wan2.7-Image · Qwen-Image-2.0 · Qwen-Image-Edit

Abstract

Artificial Intelligence (AI)-generated images have become increasingly realistic and easy to tailor to specific real-world claims, creating new challenges for verifying visual evidence. One emerging risk is AI-generated refund fraud, in which manipulated or synthetic images are used to support claims about damaged products, poor delivery conditions, or service-related defects. Existing benchmarks for AI-generated image detection mainly evaluate standalone authenticity classification, cross-generator transfer, or forensic localization, leaving claim-conditioned detection of fraudulent evidence underexplored.

To bridge this gap, we introduce FraudBench, a multimodal benchmark for detecting AI-generated fraudulent refund evidence. FraudBench is constructed from real-world user-review evidence across e-commerce, food delivery, and travel-service scenarios. We curate real evidence images together with their associated review and product metadata, identify genuine damaged and undamaged evidence through MLLM-assisted filtering and human annotation, and synthesize fake-damaged evidence from genuine undamaged reference images using six state-of-the-art image editing and generation models.

Using FraudBench, we evaluate 11 MLLMs, 4 specialized AI-generated image detectors, and human participants under the same settings. Experiments show that current MLLMs often recognize real-damaged evidence but fail on many fake-damaged subsets, with fake-damage detection rates (TPR) far below the 50% baseline on most generator subsets. Specialized detectors generally perform better but remain inconsistent across generators and can produce false positives on real-damaged samples, revealing a clear gap between generic AI image detection and reliable claim-conditioned refund-evidence verification.

⚠️ Note: This paper is intended solely for academic research purposes. It studies the risk of AI-generated fraudulent refund evidence to support the development of more reliable detection methods, platform safeguards, and responsible evaluation. The goal is not to facilitate refund fraud or provide actionable guidance for misuse.

Construction Pipeline

FraudBench Construction Pipeline
Figure 1. Construction pipeline of FraudBench in three stages: (1) Data Collection — real-world refund evidence from four sources across 29 categories. (2) Data Preprocessing — multimodal aggregation, two-level cleaning, rule-based filtering, representative sampling, and anonymization with human verification. (3) Synthetic Evidence Generation — image-specific prompts fed to six SOTA models, with human quality control.

Benchmark Overview

📊 Dataset Composition

| Data Split | Images | Description | Role |
|---|---|---|---|
| Real-Damaged | 1,012 | Genuine user-submitted damaged evidence | Evaluation set · negative class |
| Real-Undamaged | 988 | Genuine undamaged reference images | Synthesis source only · excluded from evaluation |
| Fake-Damaged | 5,928 | AI-synthesized: 988 references × 6 generators | Evaluation set · positive class |
| Total | 7,928 | 822 reviews · 29 categories · 4 data sources | |
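The split roles above map directly onto evaluation labels. Below is a minimal sketch of how the composition could be encoded for a loader; the split identifiers, field names, and label convention are illustrative assumptions, not the released data format.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical encoding of the composition table above; names and label
# conventions are illustrative, not the benchmark's released format.
@dataclass
class Split:
    name: str
    n_images: int
    label: Optional[int]  # 1 = fake (positive), 0 = real (negative), None = not evaluated

SPLITS = [
    Split("real_damaged", 1_012, label=0),     # evaluation set, negative class
    Split("real_undamaged", 988, label=None),  # synthesis source only
    Split("fake_damaged", 5_928, label=1),     # 988 references x 6 generators
]

# Sanity checks against the table above.
assert sum(s.n_images for s in SPLITS) == 7_928
assert 988 * 6 == 5_928  # fake-damaged count = references x generators
```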

🌐 Data Sources & Scenarios

Amazon Reviews 2023: 590 review samples across 27 e-commerce product categories

Trip.com: travel-service reviews covering hotel, attraction, and transportation complaints

GrabFood: food-delivery and dine-in & pickup service evidence

In-House Captured: self-collected real-world samples under everyday acquisition conditions

FraudBench Evaluation Modes

Figure 2. Representative examples covering the five evaluation dimensions: input modality, contextual information, multi-step reasoning, prompt sensitivity, and real image preservation.

Evaluation Protocol

FraudBench evaluates detection capability along five complementary dimensions reflecting real-world refund-evidence verification conditions.

1. Input Modality

Single-image vs. multi-image evaluation. The multi-image setting allows the model to exploit cross-image cues such as consistent lighting, viewpoint, and damage appearance within the same review.

2. Contextual Information

Review-free vs. review-conditioned evaluation. Tests whether the associated user review text improves detection quality or introduces misleading shortcuts.
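As a concrete illustration, the two settings differ only in whether the claim text is appended to the query. A minimal sketch follows; the template wording is a hypothetical stand-in, not the benchmark's exact prompt.

```python
from typing import Optional

# Illustrative prompt construction for the review-free vs. review-conditioned
# settings; the template strings are assumptions, not the benchmark's wording.
def build_prompt(review_text: Optional[str]) -> str:
    base = "Decide whether the attached refund-evidence image is real or AI-generated."
    if review_text is None:
        return base  # review-free setting
    # Review-conditioned setting: the claim text may help or mislead the model.
    return f'{base}\nAssociated customer review: "{review_text}"'
```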

3. Multi-Step Reasoning

Compares direct joint inference against a structured per-image decomposition step before a final aggregated verdict, testing whether evidence integration improves reliability.
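A minimal sketch of the two inference modes is shown below; `query_mllm` is a hypothetical stand-in for whichever model API is under test, and the prompt wording is illustrative.

```python
# Hypothetical stand-in for the MLLM API under test.
def query_mllm(prompt: str, images: list) -> str:
    raise NotImplementedError  # e.g. a Gemini / Qwen-VL / GPT API call

def direct_verdict(images: list) -> str:
    # Direct joint inference: all evidence images judged in one query.
    return query_mllm("Is this refund evidence real or AI-generated?", images)

def decomposed_verdict(images: list) -> str:
    # Step 1: structured per-image decomposition.
    notes = [
        query_mllm("Describe any signs of AI generation in this image.", [img])
        for img in images
    ]
    # Step 2: aggregate the per-image findings into a final verdict.
    joined = "\n".join(notes)
    return query_mllm(
        f"Per-image analyses:\n{joined}\nGive a final verdict: real or AI-generated?",
        images,
    )
```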

4. Prompt Sensitivity

Evaluates each detector with five semantically equivalent prompt styles to measure whether decisions are driven by stable visual evidence or surface prompt phrasing.
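One simple way to quantify this dimension is the fraction of samples whose verdict flips across paraphrases. The sketch below assumes five hypothetical prompt variants, not the benchmark's exact wording.

```python
# Five hypothetical paraphrases standing in for the benchmark's
# semantically equivalent prompt styles.
PROMPT_VARIANTS = [
    "Is this refund-evidence image real or AI-generated?",
    "Determine whether the pictured damage is authentic or synthesized.",
    "Was this photo produced or edited by an AI model?",
    "Classify the image: genuine user photo vs. AI-fabricated evidence.",
    "Does this image show real damage, or was the damage faked with AI?",
]

def flip_rate(decisions: list) -> float:
    """Fraction of samples whose verdict changes across prompt variants.

    decisions[i][j] is the fake/real verdict for sample i under variant j."""
    flipped = sum(1 for per_sample in decisions if len(set(per_sample)) > 1)
    return flipped / len(decisions)
```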

5. Real Image Preservation

Assesses the true negative rate (TNR) on genuine damaged evidence. This is central to trustworthy refund adjudication: falsely accusing an honest customer damages platform trust.
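For reference, the metrics reported in the results tables follow the standard definitions, with fake-damaged as the positive class and real-damaged as the negative class. A minimal implementation:

```python
# Standard metric definitions, with fake = positive (1) and real = negative (0).
def metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tpr = tp / (tp + fn)                       # fake-damage detection rate
    tnr = tn / (tn + fp)                       # real-image preservation rate
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * tpr / (prec + tpr) if prec + tpr else 0.0
    return {"TPR": tpr, "TNR": tnr, "BalAcc": (tpr + tnr) / 2, "F1": f1}
```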

⚖️ Asymmetric Risk in Refund Verification

False positive: wrongly accusing an honest customer → severe trust crisis.
False negative: missing AI-fabricated evidence → direct financial losses.
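This asymmetry can be made explicit with a toy expected-cost model. The prevalence and cost values below are assumptions chosen for illustration, not numbers from the paper.

```python
# Toy expected-cost model for the asymmetry above; prevalence and costs
# are hypothetical and would be set by each platform's risk policy.
def expected_cost(tpr: float, tnr: float,
                  p_fake: float = 0.1,    # assumed fraud prevalence
                  cost_fp: float = 10.0,  # wrongly accusing an honest customer
                  cost_fn: float = 1.0) -> float:
    fpr, fnr = 1.0 - tnr, 1.0 - tpr
    return (1 - p_fake) * fpr * cost_fp + p_fake * fnr * cost_fn

# Under these assumptions, a high-TPR / low-TNR detector such as
# Effort [Chameleon] (TPR 0.854, TNR 0.051) is far costlier than a
# conservative MLLM profile (e.g. TPR 0.197, TNR 0.947):
print(expected_cost(0.854, 0.051))  # ~8.56
print(expected_cost(0.197, 0.947))  # ~0.56
```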

Key Findings

The Central Dilemma of Current Detection Paradigms

Over-Credulity (MLLMs) vs. Over-Sensitivity (Specialized Detectors)

MLLMs preserve genuine evidence but systematically miss AI-manipulated damage; specialized detectors catch synthetic images but frequently flag real damaged goods as fake.

Finding 1

MLLM Failures Driven by Under-Detection

Across MLLMs in the single-image no-review setting, the average real-image TNR is 0.947 — but the average fake-damage TPR is only 0.197. The dominant failure mode is not excessive rejection of genuine evidence, but systematic under-detection of synthetic refund claims.

Finding 2

Strong Generator Effects

Detection difficulty varies substantially across generators. Average MLLM TPR is only 0.080 on GPT Image 2 but rises to 0.351 on Qwen-Image-Edit. Frontier text-to-image models produce more visually integrated fake-damaged evidence than image-editing pipelines.

Finding 3

Specialized Detectors: Strong but Unreliable

The best specialized detector (ForgeLens [GenImage]) achieves 0.904 balanced accuracy, but performance is highly checkpoint-dependent. Effort [Chameleon] achieves high TPR (avg. 0.854) but a TNR of only 0.051 on real images, making it unsuitable for real-world refund adjudication.
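As a sanity check, balanced accuracy is simply the mean of the two rates: Bal.Acc = (TPR + TNR) / 2 = (0.854 + 0.051) / 2 ≈ 0.452, matching the Effort [Chameleon] entry in the results table below.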

Finding 4

Context Helps — When Properly Aggregated

Multi-image context significantly boosts Gemini 3 Flash from 0.720 (single-image+review) to 0.823 balanced accuracy (multi-image+review). Yet review text alone adds minimal gain (+0.005), revealing that models struggle to ground textual claims in visual evidence.

Finding 5

Damage Type Shapes Difficulty

Salient defects (cracked, leaking, shattered) are easier to detect (avg TPR ~0.25–0.30). Subtle deformations (packaging-damaged, bent/warped) are much harder (avg TPR ~0.12–0.14). Packaging-damage detection is near zero for several MLLMs.

Finding 6

Humans Also Struggle

Human participants outperform most MLLMs but still exhibit non-negligible error rates. For the easiest generator (Qwen-Image-Edit), human TPR is 0.709 vs. 0.793 TNR, confirming that even manual inspection struggles with these highly realistic AI manipulations.

Experimental Results

Single-Image, No-Review Setting  ·  TPR ↑ = Fake-Damage Detection Rate  ·  TNR ↑ = Real-Image Preservation Rate


Multimodal Large Language Models (MLLMs)

Per-generator columns report fake-damage TPR ↑; Bal.Acc ↑ and F1 ↑ are overall scores; the last column is real-damaged TNR ↑.

| Model | GPT Image 2 | Grok Imagine | Nano Banana 2 | Wan2.7-Image | Qwen-Image-2.0 | Qwen-Image-Edit | Bal.Acc ↑ | F1 ↑ | TNR ↑ |
|---|---|---|---|---|---|---|---|---|---|
| GPT-5.4 mini | 0.040 | 0.045 | 0.083 | 0.129 | 0.147 | 0.285 | 0.558 | 0.212 | 0.994 |
| Gemini 3 Flash | 0.158 | 0.199 | 0.258 | 0.398 | 0.491 | 0.692 | 0.674 | 0.526 | 0.982 |
| Grok 4.1 Fast Reasoning | 0.105 | 0.135 | 0.144 | 0.188 | 0.218 | 0.262 | 0.538 | 0.279 | 0.901 |
| Grok 4.20 Reasoning | 0.062 | 0.065 | 0.141 | 0.199 | 0.161 | 0.242 | 0.562 | 0.244 | 0.979 |
| Kimi K2.6 | 0.026 | 0.052 | 0.079 | 0.218 | 0.260 | 0.356 | 0.581 | 0.277 | 0.996 |
| Qwen3.6-Plus | 0.077 | 0.140 | 0.179 | 0.266 | 0.325 | 0.441 | 0.608 | 0.371 | 0.977 |
| Qwen3.6-35B-A3B | 0.144 | 0.242 | 0.262 | 0.365 | 0.407 | 0.531 | 0.603 | 0.476 | 0.882 |
| Qwen3.5-Omni-Plus | 0.008 | 0.035 | 0.049 | 0.103 | 0.137 | 0.202 | 0.544 | 0.159 | 1.000 |
| Qwen3-VL-Flash | 0.043 | 0.090 | 0.172 | 0.247 | 0.278 | 0.364 | 0.592 | 0.321 | 0.986 |
| Qwen3-VL-Plus | 0.002 | 0.006 | 0.024 | 0.050 | 0.049 | 0.108 | 0.518 | 0.075 | 0.997 |
| QVQ-Max-Latest | 0.220 | 0.239 | 0.294 | 0.321 | 0.335 | 0.383 | 0.510 | 0.438 | 0.721 |
| Random Guessing | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | n/a | 0.500 |
| Human Reference | 0.420 | 0.509 | 0.539 | 0.598 | 0.697 | 0.709 | 0.686 | 0.704 | 0.793 |
Specialized AI-Generated Image Detectors

| Model | GPT Image 2 | Grok Imagine | Nano Banana 2 | Wan2.7-Image | Qwen-Image-2.0 | Qwen-Image-Edit | Bal.Acc ↑ | F1 ↑ | TNR ↑ |
|---|---|---|---|---|---|---|---|---|---|
| CO-SPY [ProGAN] | 0.977 | 0.099 | 0.312 | 0.218 | 0.894 | 0.996 | 0.782 | 0.733 | 0.982 |
| CO-SPY [SD-v1.4] | 0.360 | 0.310 | 0.282 | 0.360 | 0.339 | 0.349 | 0.509 | 0.455 | 0.685 |
| ForgeLens [ProGAN] | 0.700 | 0.217 | 0.516 | 0.823 | 0.859 | 0.925 | 0.815 | 0.798 | 0.958 |
| ForgeLens [GenImage] | 0.967 | 0.158 | 1.000 | 0.990 | 1.000 | 0.989 | 0.904 | 0.915 | 0.957 |
| Effort [SD-v1.4] | 0.710 | 0.546 | 0.658 | 0.755 | 0.858 | 0.890 | 0.795 | 0.828 | 0.853 |
| Effort [Chameleon] | 0.828 | 0.824 | 0.868 | 0.882 | 0.849 | 0.872 | 0.452 | 0.838 | 0.051 |
| IAPL [ProGAN] | 0.255 | 0.295 | 0.140 | 0.122 | 0.398 | 0.568 | 0.628 | 0.438 | 0.960 |
| IAPL [SD-v1.4] | 0.973 | 0.895 | 0.916 | 0.996 | 0.955 | 0.994 | 0.727 | 0.932 | 0.499 |
| Random Guessing | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | n/a | 0.500 |
| Human Reference | 0.420 | 0.509 | 0.539 | 0.598 | 0.697 | 0.709 | 0.686 | 0.704 | 0.793 |

Confidence scores (Conf.) omitted for readability. Full results including Conf. are in the paper.

BibTeX

@misc{yan2026fraudbenchmultimodalbenchmarkdetecting,
  title         = {FraudBench: A Multimodal Benchmark for Detecting AI-Generated Fraudulent Refund Evidence},
  author        = {Xinyu Yan and Boyang Chen and Jiaming Zhang and Tiantong Wu and Hong Xi Tae and Yichen He and Tiantong Wang and Yachun Mi and Yurong Hao and Yilei Zhao and Lei Xiao and Longtao Huang and Pengjun Xie and Wei Liu and Wei Yang Bryan Lim},
  year          = {2026},
  eprint        = {2605.08820},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2605.08820}
}