๐—ฃ๐—ฎ๐—ฝ๐—ฒ๐—ฟ ๐—”๐—ฐ๐—ฐ๐—ฒ๐—ฝ๐˜๐—ฎ๐—ป๐—ฐ๐—ฒ ๐—”๐—ป๐—ป๐—ผ๐˜‚๐—ป๐—ฐ๐—ฒ๐—บ๐—ฒ๐—ป๐˜ : Transitioning Heads Conundrum: The Hidden Bottleneck in Long-Tailed Class-Incremental Learning has been accepted at TMLR 2026

The paper titled "Transitioning Heads Conundrum: The Hidden Bottleneck in Long-Tailed Class-Incremental Learning" has been accepted at TMLR 2026 (Transactions on Machine Learning Research).

Authors: Rahul Vigneswaran K, Hari Chandana Kuchibhotla, Vineeth N Balasubramanian

๐Ÿ‘ Congratulations to all the authors!

๐Ÿ” Key Highlight: This work introduces DEREK (DEcoupling Representations for Early Knowledge Distillation), a method addressing a previously overlooked challenge in Long-Tailed Class-Incremental Learning (LTCIL): the Transitioning Heads Conundrum.

In LTCIL, head classes that are well-represented in earlier tasks become tail classes in subsequent tasks due to memory constraints, leading to accelerated catastrophic forgetting. DEREK mitigates this by decoupling head and tail learning via specialized expert networks and applying Early Knowledge Distillation before data constraints take effect, preserving rich representations.
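The two ingredients described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's implementation): `decoupled_forward` routes a sample to a separate head-class or tail-class expert, and `kd_loss` is the standard soft-label knowledge-distillation objective that an "early" teacher snapshot could supply before memory constraints shrink the data for former head classes. All function and parameter names here are assumptions for illustration only.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Standard knowledge-distillation term: KL(teacher || student) on
    # temperature-softened distributions, scaled by T^2. The teacher would
    # be a snapshot taken *early*, while head-class data is still abundant.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

def decoupled_forward(x, head_expert, tail_expert, is_head_class):
    # Hypothetical decoupling: head and tail classes are handled by
    # separate expert networks instead of one shared classifier.
    expert = head_expert if is_head_class else tail_expert
    return expert(x)
```

For example, `kd_loss(student, teacher)` is zero when the student already matches the early teacher, and grows as the student's predictions on former head classes drift, which is one plausible way to penalize the accelerated forgetting described above.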

Evaluated on two LTCIL benchmarks across 12 experimental settings and against 24 baselines, DEREK consistently establishes new state-of-the-art performance.