This paper summarizes recent advances in PRLM language recognition within the context of the NIST 2007 Language Recognition Evaluation (LRE). We present a comparison of binary decision tree (BT) models versus N-gram models when adaptation from a universal background model (UBM) is used; we introduce multi-models, an anchor-model-like approach to scoring; and we adopt a factor-analysis framework for intersession variation compensation.
Rong Tong, Bin Ma, Haizhou Li, Eng Siong Chng
Mehdi Soufifar, Marcel Kockmann, Lukáš Burget, Oldřich Plchot, Ondřej Glembek, Torbjørn Svendsen
Mohamed Faouzi BenZeghiba, Jean-Luc Gauvain, Lori Lamel