🚀 FastAT Benchmark

A Comprehensive Framework for Fair Evaluation of Fast Adversarial Training Methods

📋 Overview

The FastAT Benchmark provides a rigorous and fair evaluation framework for fast adversarial training methods. Unlike public leaderboards that allow diverse combinations of model architectures, data sources, and computational budgets, our benchmark establishes conditions where all methods compete on equal footing.

This platform implements more than a dozen representative FastAT methods in a unified codebase, ensuring fair and reproducible comparison across different algorithmic innovations. By removing the advantages conferred by massive computational resources and unlimited external data, the benchmark gives the research community a transparent baseline for evaluating fast adversarial training techniques.

✨ Key Features

🎯 Unified Architecture

All methods are evaluated on identical network structures, eliminating performance differences that arise from architectural advantages rather than from the training procedure itself.

⚙️ Standardized Settings

Consistent training schedules, optimizers, learning-rate policies, and data-augmentation strategies prevent the experimental setup from favoring any particular method.

🚫 No External Data

Strictly prohibiting additional or synthetic data beyond the original benchmark training set ensures that observed gains stem solely from the learning algorithm.

📊 Dual-Metric Framework

Evaluates both robustness performance (accuracy against strong attacks) and computational cost (GPU hours and memory footprint).
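As a concrete illustration of what a dual-metric record could hold, here is a minimal sketch. The class name, field names, and the cost-normalized summary are assumptions for illustration, not part of the benchmark's actual reporting code.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    """One row of a dual-metric report: robustness plus computational cost.
    Hypothetical structure, not the benchmark's real schema."""
    method: str
    clean_acc: float       # accuracy on the unperturbed test set (%)
    robust_acc: float      # accuracy under the strongest evaluated attack (%)
    gpu_hours: float       # wall-clock training time times number of GPUs
    peak_memory_gb: float  # peak GPU memory footprint (GB)

    def cost_normalized_robustness(self) -> float:
        # Illustrative summary statistic: robust accuracy earned per GPU hour.
        return self.robust_acc / self.gpu_hours
```

A record like this makes the two axes of the benchmark explicit: a method that trades a little robustness for a large drop in GPU hours can still score well on the cost-normalized view.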

🔬 Comprehensive Evaluation

Includes diverse attack methods for thorough robustness assessment: PGD with varying iteration counts, AutoAttack, and CR Attack.
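For readers unfamiliar with the evaluation attacks, a PGD attack in the L-infinity ball can be sketched in a few lines. The sketch below is illustrative only, not the benchmark's implementation: it takes a user-supplied `grad_fn` (the gradient of the loss with respect to the input), and the default budgets are common CIFAR-style values assumed for the example.

```python
import numpy as np

def pgd_attack(x, grad_fn, epsilon=8/255, alpha=2/255, steps=10, rng=None):
    """Projected gradient descent in the L-infinity ball (illustrative sketch).

    x       : clean input, values in [0, 1]
    grad_fn : callable returning the loss gradient w.r.t. the input
    epsilon : L-infinity perturbation budget
    alpha   : per-iteration step size
    steps   : number of PGD iterations (e.g. 10/20/50 as in the benchmark table)
    """
    rng = rng or np.random.default_rng(0)
    # Random start inside the epsilon ball.
    delta = rng.uniform(-epsilon, epsilon, size=x.shape)
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(x + delta))  # ascent step on the loss
        delta = np.clip(delta, -epsilon, epsilon)            # project back into the ball
        delta = np.clip(x + delta, 0.0, 1.0) - x             # keep valid pixel range
    return x + delta
```

Varying `steps` gives the PGD-10/20/50 columns of the comparison table; more iterations make the attack stronger and the reported robust accuracy lower.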

🔄 Unified Implementation

All supported FastAT methods are re-implemented behind a common interface for data loading, model initialization, training loops, and evaluation protocols.
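One common way to build such a unified interface is a method registry, where every FastAT variant only customizes how it crafts perturbations and plugs into one shared training loop. The sketch below is hypothetical; the names `METHOD_REGISTRY`, `AttackStep`, and `build_method` are illustrative, not the benchmark's actual API.

```python
# Hypothetical registry pattern for a unified FastAT codebase.
METHOD_REGISTRY = {}

def register(name):
    """Class decorator: registers a method under a string key."""
    def wrap(cls):
        METHOD_REGISTRY[name] = cls
        return cls
    return wrap

class AttackStep:
    """Common contract: each method only defines how it perturbs a batch."""
    def perturb(self, x, grad):
        raise NotImplementedError

@register("identity")
class NoAttack(AttackStep):
    # Degenerate placeholder that shows the plug-in shape:
    # it returns the input unchanged (i.e., standard training).
    def perturb(self, x, grad):
        return x

def build_method(name):
    """The shared training loop looks methods up by name from the configs."""
    return METHOD_REGISTRY[name]()
```

With this shape, adding a new FastAT method means writing one subclass; data loading, optimization, and evaluation stay identical across methods, which is exactly what makes the comparison fair.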

📚 Supported Methods

📄 FGSM-RS (ICLR 2020)
📄 GRAD-ALIGN (NeurIPS 2020)
📄 FREE-AT (NeurIPS 2019)
📄 FGSM-PGI (ECCV 2022)
📄 FGSM-PCO (ECCV 2024)
📄 SSAT (AAAI 2021)
📄 NU-AT (NeurIPS 2021)
📄 N-FGSM (NeurIPS 2022)
📄 GAT (NeurIPS 2020)
📄 AAER (NeurIPS 2023)
📄 ELLE (ICLR 2024)
📄 FGSM-AT (ICLR 2015)
📄 PGD-AT (ICLR 2018)
📄 FGSM-UAP (AAAI 2023)
📄 LIET (ICCV 2025)
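To make the family of single-step methods above concrete, the core idea of FGSM-RS (FGSM with a random start) can be sketched as follows. This is an illustrative sketch of the general technique, not the benchmark's implementation; `grad_fn` and the default budgets (with the step size set slightly above epsilon, as is common for this method) are assumptions.

```python
import numpy as np

def fgsm_rs_perturb(x, grad_fn, epsilon=8/255, alpha=10/255, rng=None):
    """Single-step FGSM with random initialization (FGSM-RS style sketch).

    grad_fn returns the gradient of the training loss w.r.t. the input.
    """
    rng = rng or np.random.default_rng(0)
    delta = rng.uniform(-epsilon, epsilon, size=x.shape)  # random start in the ball
    delta = delta + alpha * np.sign(grad_fn(x + delta))   # one signed gradient step
    delta = np.clip(delta, -epsilon, epsilon)             # project to the epsilon ball
    return np.clip(x + delta, 0.0, 1.0)                   # keep valid pixel range
```

The contrast with multi-step PGD training is the point of the "fast" in FastAT: one gradient computation per batch instead of ten or more, at the risk of catastrophic overfitting that many of the listed methods (GRAD-ALIGN, N-FGSM, AAER, ...) were designed to mitigate.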

🔄 Benchmark Training Flow

๐Ÿ“ Load Configuration
โ†“
common.yaml
method.yaml
โ†“
Initialization Phase
๐Ÿ“Š Data Loaders
๐Ÿง  Model Init
โšก Optimizer
โ†“
๐ŸŽฏ Select FastAT Method
โ†“
Training Phase
๐Ÿ”„ Training Loop
โœ“ Validation
โ†“
Evaluation Phase
๐Ÿงช Final Evaluation
โ†“
PGD Attack
AutoAttack
CR Attack
โ†“
๐Ÿ“ˆ Log Results & Metrics
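The configuration step merges a shared common.yaml with a method-specific method.yaml. The exact schema is not documented here, so the fragment below is only a hypothetical illustration of how such a split might look; every key and value is an assumption.

```yaml
# common.yaml -- settings shared by every method (hypothetical keys)
dataset: cifar10
architecture: preact-resnet18
epochs: 30
optimizer:
  name: sgd
  lr: 0.1
  momentum: 0.9

# method.yaml -- per-method overrides (hypothetical keys)
method: fgsm-rs
epsilon: 0.0314   # 8/255
alpha: 0.0392     # 10/255
```

Keeping everything except the `method` block in common.yaml is what enforces the standardized-settings guarantee: only the attack-specific hyperparameters vary across runs.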

📊 Method Comparison

Each method is reported across the following columns: Method, Clean Acc (%), PGD-10 Acc (%), PGD-20 Acc (%), PGD-50 Acc (%), AA Acc (%), CR Acc (%), Training Time (s), and Memory (GB).

📈 Training Progress Comparison