Scientific Publications
Divide and not forget: Ensemble of selectively trained experts in Continual Learning
Abstract: Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-experts technique, where different models work together to solve the task. However, the experts are usually trained all at once using whole task data, which makes them all prone to forgetting and increases the computational burden. To address this limitation, we introduce a novel approach named SEED. SEED selects only one expert, the most suitable for the considered task, and uses data from this task to fine-tune only this expert. For this purpose, each expert represents each class with a Gaussian distribution, and the optimal expert is selected based on the similarity of those distributions. Consequently, SEED increases diversity and heterogeneity within the experts while maintaining the high stability of this ensemble method. Extensive experiments demonstrate that SEED achieves state-of-the-art performance in exemplar-free settings across various scenarios, showing the potential of expert diversification through data in continual learning.
Type of Publication: publication
Title of Journal: International Conference on Learning Representations (ICLR), Vienna, Austria, 7-11.05.2024
Authors: Grzegorz Rypeść; Sebastian Cygert; Valeriya Khan; Tomasz Trzciński; Bartosz Zieliński; Bartłomiej Twardowski
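The expert-selection step described in the abstract above can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the authors' implementation: it assumes diagonal-covariance Gaussians per class, a symmetric KL divergence as the similarity measure, and a hypothetical expert.extract_features method; SEED's exact criterion may differ in its details.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit a diagonal-covariance Gaussian (mean, variance) per class from feature vectors."""
    gaussians = {}
    for c in np.unique(labels):
        x = features[labels == c]
        gaussians[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)  # small variance floor for stability
    return gaussians

def symmetric_kl_diag(g1, g2):
    """Symmetric KL divergence between two diagonal Gaussians."""
    (m1, v1), (m2, v2) = g1, g2
    kl12 = 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
    kl21 = 0.5 * np.sum(np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / v1 - 1.0)
    return 0.5 * (kl12 + kl21)

def select_expert(experts, task_images, task_labels):
    """Pick the expert whose feature space separates the new task's classes best,
    i.e. whose per-class Gaussians are least similar (largest mean divergence).
    Assumes each task contains at least two classes."""
    best_expert, best_score = None, -np.inf
    for expert in experts:
        feats = expert.extract_features(task_images)  # hypothetical feature-extraction call
        gaussians = fit_class_gaussians(feats, task_labels)
        classes = list(gaussians)
        divs = [symmetric_kl_diag(gaussians[a], gaussians[b])
                for i, a in enumerate(classes) for b in classes[i + 1:]]
        score = float(np.mean(divs))  # higher = better class separation
        if score > best_score:
            best_expert, best_score = expert, score
    return best_expert
```

In this reading, only the returned expert is then fine-tuned on the task data, which is what keeps the remaining experts stable and diverse.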
Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning
Abstract: Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, existing exemplar-free methods are typically evaluated in settings where the first task is significantly larger than subsequent tasks. Their performance drops drastically in more challenging settings starting with a smaller first task. To address this problem of feature drift estimation for exemplar-free methods, we propose to adversarially perturb the current samples such that their embeddings are close to the old class prototypes in the old model embedding space. We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly. We exploit the fact that adversarial samples are transferable from the old to the new feature space in a continual learning setting. The generation of these images is simple and computationally cheap. We demonstrate in our experiments that the proposed approach better tracks the movement of prototypes in embedding space and outperforms existing methods on several standard continual learning benchmarks as well as on fine-grained datasets.
Type of Publication: publication
Title of Journal: The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 17-21.06.2024
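The drift-compensation idea in the abstract above lends itself to a short sketch. This is a hedged illustration under assumptions, not the paper's exact procedure: old_model and new_model are assumed to map images directly to embeddings, and the perturbation uses a plain signed-gradient step toward a single old-class prototype.

```python
import torch

def perturb_towards_prototype(old_model, images, prototype, steps=10, step_size=1.0 / 255):
    """Adversarially nudge current-task images so that their embeddings in the OLD
    feature space move close to an old-class prototype (illustrative sketch only)."""
    x = images.clone().requires_grad_(True)
    for _ in range(steps):
        emb = old_model(x)  # embeddings in the old feature space
        loss = ((emb - prototype) ** 2).sum(dim=1).mean()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= step_size * grad.sign()  # signed-gradient step that reduces the distance
            x.clamp_(0.0, 1.0)            # keep a valid image range
    return x.detach()

def compensate_prototype(old_model, new_model, images, prototype):
    """Estimate the feature drift from the old to the new model on the perturbed
    images and shift the stored prototype accordingly."""
    x_adv = perturb_towards_prototype(old_model, images, prototype)
    with torch.no_grad():
        drift = (new_model(x_adv) - old_model(x_adv)).mean(dim=0)
    return prototype + drift
```

Because the same perturbed images are passed through both models, the difference of their embeddings yields a per-prototype drift estimate without storing any exemplars of previous tasks.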
Towards generalisable and calibrated audio deepfake detection with self-supervised representations
Abstract: Generalisation, the ability of a model to perform well on unseen data, is crucial for building reliable deepfake detectors. However, recent studies have shown that current audio deepfake models fall short of this desideratum. In this work, we investigate the potential of pretrained self-supervised representations for building general and calibrated audio deepfake detection models. We show that large frozen representations coupled with a simple logistic regression classifier are extremely effective in achieving strong generalisation capabilities: compared to the RawNet2 model, this approach reduces the equal error rate from 30.9% to 8.8% on a benchmark of eight deepfake datasets, while learning fewer than 2k parameters. Moreover, the proposed method produces considerably more reliable predictions than previous approaches, making it more suitable for realistic use.
Type of Publication: Conference paper
Title of Journal: Interspeech 2024
Authors: Octavian Pascu; Adriana Stan; Dan Oneata; Elisabeta Oneata; Horia Cucu
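The pipeline described in the abstract above, a frozen self-supervised encoder followed by a logistic regression head, can be sketched as follows. This uses torchaudio's WAV2VEC2_BASE purely as a stand-in for whichever pretrained representation is chosen, mean-pools frame-level features into one embedding per utterance, and assumes placeholder variables train_waveforms, y_train and test_waveforms (16 kHz mono tensors and binary labels) that are not part of the original text.

```python
import numpy as np
import torch
import torchaudio
from sklearn.linear_model import LogisticRegression

# Frozen self-supervised encoder (WAV2VEC2_BASE is only a stand-in choice here).
bundle = torchaudio.pipelines.WAV2VEC2_BASE  # expects 16 kHz mono input
ssl_model = bundle.get_model().eval()

@torch.no_grad()
def embed(waveform):
    """Mean-pool the last layer of frozen SSL features into one utterance embedding."""
    features, _ = ssl_model.extract_features(waveform)  # list of per-layer (1, frames, dim) tensors
    return features[-1].mean(dim=1).squeeze(0).numpy()

# train_waveforms / test_waveforms: lists of (1, num_samples) tensors (placeholders);
# y_train: 1 = spoofed, 0 = bona fide (placeholder labels).
X_train = np.stack([embed(w) for w in train_waveforms])
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_test = np.stack([embed(w) for w in test_waveforms])
spoof_scores = clf.predict_proba(X_test)[:, 1]  # detection scores for downstream evaluation
```

Only the logistic regression weights are trained; the encoder stays frozen, which is where the small learned-parameter count mentioned in the abstract comes from.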