Main
- Early Work
- Gradient/Trajectory Matching Surrogate Objective
- Distribution/Feature Matching Surrogate Objective
  - Dataset Distillation via the Wasserstein Metric
- Better Optimization
  - On Implicit Bias in Overparameterized Bilevel Optimization
  - On the Size and Approximation Error of Distilled Sets
  - MIM4DD: Mutual Information Maximization for Dataset Distillation
  - MGDD: A Meta Generator for Fast Dataset Distillation
  - Dataset Distillation in Latent Space
- Distilled Dataset Parametrization
  - Dataset Condensation with Latent Space Knowledge Factorization and Sharing
  - Slimmable Dataset Condensation
  - Few-Shot Dataset Distillation via Translative Pre-Training
- Generative Prior
  - Dataset Condensation via Generative Model
  - Latent Dataset Distillation with Diffusion Models
- Label Distillation
- Dataset Quantization
- Multimodal Distillation
- Self-Supervised Distillation
  - Self-Supervised Dataset Distillation for Transfer Learning
- Benchmark
  - A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness
  - DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
- Survey
  - Data Distillation: A Survey
- Ph.D. Thesis
  - Data-efficient Neural Network Training with Dataset Condensation
- Workshop
  - 1st Workshop on Dataset Distillation
Applications
- Continual Learning
  - Reducing Catastrophic Forgetting with Learning on Synthetic Data
  - Sample Condensation in Online Continual Learning
  - An Efficient Dataset Condensation Plugin and Its Application to Continual Learning
- Privacy
  - Privacy for Free: How does Dataset Condensation Help Privacy?
  - No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy"
  - Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
  - Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective
- Medical
  - Communication-Efficient Federated Skin Lesion Classification with Generalizable Dataset Distillation
  - Progressive Trajectory Matching for Medical Dataset Distillation
- Federated Learning
  - Federated Learning via Synthetic Data
  - Distilled One-Shot Federated Learning
  - FedSynth: Gradient Compression via Synthetic Data in Federated Learning
  - Meta Knowledge Condensation for Federated Learning
  - FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
  - Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy
  - Federated Virtual Learning on Heterogeneous Data with Local-global Distillation
- Graph Neural Network
  - Graph Condensation via Receptive Field Distribution Matching
  - Fair Graph Distillation
- Neural Architecture Search
- Fashion, Art, and Design
  - Learning from Designers: Fashion Compatibility Analysis Via Dataset Distillation
- Knowledge Distillation
- Recommender Systems
  - Gradient Matching for Categorical Data Distillation in CTR Prediction
- Blackbox Optimization
- Trustworthy
  - Can We Achieve Robustness from Data Alone?
  - Towards Robust Dataset Learning
  - Rethinking Data Distillation: Do Not Overlook Calibration
  - Towards Trustworthy Dataset Distillation
  - Towards Adversarially Robust Dataset Distillation by Curvature Regularization
- Retrieval
  - Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching
- Text
  - Data Distillation for Text Classification
- Video
- Time Series
  - Dataset Condensation for Time Series Classification via Dual Domain Matching
- Tabular