Summary form only given, as follows. A maximum likelihood neural network has been designed for problems that require adaptive estimation of metrics in classification spaces. Examples of such problems are the XOR problem and most multiclass classification problems with complicated classifier boundaries. The metric estimation achieves flexible classifier boundary shapes using a simple architecture without hidden layers. This neural network learns much more efficiently than other neural networks or classification algorithms, and it approaches the theoretical bound on adaptive efficiency given by the Cramer-Rao theorem. It also provides for optimal fusing of all available information, such as a priori and real-time information coming from a variety of sensors of the same or different types, and utilizes fuzzy classification variables to make efficient use of incomplete or erroneous data, including both numeric and symbolic data.
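The abstract's claim that adaptive metric estimation can solve XOR without hidden layers can be illustrated with a minimal sketch. The code below is not the authors' architecture; it is an assumed stand-in that captures the general idea: model each class as a maximum-likelihood Gaussian mixture whose means and full covariances (the adaptive "metrics") are estimated by EM, then classify by comparing class log-likelihoods. All function names and parameters here are illustrative choices, not from the paper.

```python
import math, random

def gauss2_logpdf(x, mu, cov):
    # Log density of a 2-D Gaussian with full covariance matrix.
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mu[0], x[1] - mu[1])
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * q

def fit_mixture(points, k=2, iters=30):
    # EM for a k-component Gaussian mixture: maximum-likelihood estimates
    # of weights, means, and covariances (the adaptively estimated metric).
    mus = [points[0], points[-1]]        # crude init: opposite ends of the data
    covs = [((0.1, 0.0), (0.0, 0.1))] * k
    ws = [1.0 / k] * k
    for _ in range(iters):
        # E-step: component responsibilities for each point.
        resp = []
        for x in points:
            lps = [math.log(ws[j]) + gauss2_logpdf(x, mus[j], covs[j]) for j in range(k)]
            m = max(lps)
            ps = [math.exp(l - m) for l in lps]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: ML updates (small diagonal jitter keeps covariances invertible).
        for j in range(k):
            nj = sum(r[j] for r in resp)
            ws[j] = nj / len(points)
            mx = sum(r[j] * x[0] for r, x in zip(resp, points)) / nj
            my = sum(r[j] * x[1] for r, x in zip(resp, points)) / nj
            mus[j] = (mx, my)
            sxx = sum(r[j] * (x[0] - mx) ** 2 for r, x in zip(resp, points)) / nj + 1e-3
            syy = sum(r[j] * (x[1] - my) ** 2 for r, x in zip(resp, points)) / nj + 1e-3
            sxy = sum(r[j] * (x[0] - mx) * (x[1] - my) for r, x in zip(resp, points)) / nj
            covs[j] = ((sxx, sxy), (sxy, syy))
    return ws, mus, covs

def loglik(x, model):
    # Class log-likelihood under the fitted mixture.
    ws, mus, covs = model
    lps = [math.log(w) + gauss2_logpdf(x, mu, cov) for w, mu, cov in zip(ws, mus, covs)]
    m = max(lps)
    return m + math.log(sum(math.exp(l - m) for l in lps))

random.seed(0)
def blob(cx, cy, n=30):
    # Noisy samples around one XOR corner.
    return [(cx + random.gauss(0, 0.08), cy + random.gauss(0, 0.08)) for _ in range(n)]

class0 = blob(0, 0) + blob(1, 1)   # XOR output 0: corners (0,0) and (1,1)
class1 = blob(0, 1) + blob(1, 0)   # XOR output 1: corners (0,1) and (1,0)
m0 = fit_mixture(class0)
m1 = fit_mixture(class1)

def classify(x):
    # Maximum-likelihood decision between the two class models.
    return 0 if loglik(x, m0) > loglik(x, m1) else 1

print([classify(p) for p in [(0, 0), (1, 1), (0, 1), (1, 0)]])
```

No hidden layer is involved: the curved (quadratic) decision boundary comes entirely from the estimated means and covariances, which is the sense in which an adaptive metric yields flexible boundary shapes with a simple architecture.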
Leonid Perlovsky, Margaret M. McManus