Yogesh Kumar Sharma, Harish Padmanaban
Deep neural networks (DNNs) have revolutionized artificial intelligence, yet they often struggle with out-of-distribution data, a common challenge in real-world applications where domain shifts are inevitable. This limitation stems from the standard assumption that training and testing data follow the same distribution, an assumption frequently violated in practice. While DNNs perform well given vast data and computational capacity, they are far less effective with small amounts of labelled data or under distributional change, leading to overfitting and poor generalization across tasks and domains. Meta-learning and Generative Adversarial Networks (GANs) offer a promising remedy: by learning knowledge that transfers across tasks and supports rapid adaptation, they remove the need to learn every task from scratch.
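The "learn transferable knowledge for quick adaptation" idea can be illustrated with a minimal first-order MAML-style loop on a toy family of linear-regression tasks. This is a sketch of the general meta-learning technique, not the paper's method; the task family (scalar slopes), learning rates, and function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(a, n=20):
    # A toy task: fit y = a * x with a single scalar weight.
    x = rng.uniform(-1, 1, n)
    return x, a * x

def loss_grad(w, x, y):
    # Mean-squared error and its gradient w.r.t. the scalar weight w.
    err = w * x - y
    return np.mean(err ** 2), 2 * np.mean(err * x)

def fomaml(meta_steps=200, inner_lr=0.1, outer_lr=0.05):
    # First-order MAML sketch: learn an initialization w that adapts
    # to a new task after one inner gradient step.
    w = 0.0
    for _ in range(meta_steps):
        a = rng.uniform(-2, 2)            # sample a task (its slope a)
        x, y = task_batch(a)              # support set
        _, g = loss_grad(w, x, y)
        w_adapted = w - inner_lr * g      # inner-loop adaptation
        xq, yq = task_batch(a)            # query set, same task
        _, gq = loss_grad(w_adapted, xq, yq)
        w -= outer_lr * gq                # first-order outer update
    return w

w0 = fomaml()
```

The meta-learned initialization `w0` is the "transferable knowledge": on a previously unseen task, one or two gradient steps from `w0` already reduce the loss substantially, instead of training from scratch.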