JOURNAL ARTICLE

Recurrent auto-associative networks and sequential processing

Abstract

A novel connectionist architecture is presented that develops static representations of structured sequences. The model is based on simple recurrent networks trained on an auto-association task in a way that guarantees the development of unique static representations. The model can be applied to the modeling of natural language, cognition, and other sequential domains.
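The core idea can be illustrated with a minimal sketch: an Elman-style recurrent encoder folds a symbol sequence into a single static hidden vector, and a recurrent decoder unfolds that vector back into a sequence. Layer sizes, the tanh nonlinearity, and the weight initialization below are illustrative assumptions, not details taken from the paper, and the training loop that would make the reconstruction accurate is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
IN, HID = 4, 8                           # one-hot symbol size and hidden size (assumed)

W_in  = rng.normal(0, 0.1, (HID, IN))    # input -> hidden
W_rec = rng.normal(0, 0.1, (HID, HID))   # hidden -> hidden (recurrence)
W_out = rng.normal(0, 0.1, (IN, HID))    # hidden -> output (decoder readout)

def encode(seq):
    """Fold a sequence of one-hot vectors into one static representation."""
    h = np.zeros(HID)
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h                             # the static code for the whole sequence

def decode(h, steps):
    """Unfold a static representation back into `steps` output vectors."""
    outputs = []
    for _ in range(steps):
        h = np.tanh(W_rec @ h)           # run the recurrence with no new input
        outputs.append(W_out @ h)
    return outputs

seq = [np.eye(IN)[i] for i in (0, 2, 1)]  # toy sequence of symbols 0, 2, 1
rep = encode(seq)                         # fixed-size vector, independent of sequence length
out = decode(rep, len(seq))
```

Training on the auto-association task would then adjust the three weight matrices so that `decode(encode(seq), len(seq))` reproduces `seq`, which is what forces distinct sequences onto distinct static codes.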

Keywords:
Connectionism, Computer science, Associative property, Simple (philosophy), Artificial intelligence, Task (project management), Recurrent neural network, Association (psychology), Content-addressable storage, Content-addressable memory, Cognitive architecture, Cognition, Bidirectional associative memory, Architecture, Natural language processing, Cognitive science, Theoretical computer science, Artificial neural network, Engineering

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 13
Citation Normalized Percentile: 0.11

Topics

Neural Networks and Applications (Physical Sciences → Computer Science → Artificial Intelligence)
Fuzzy Logic and Control Systems (Physical Sciences → Computer Science → Artificial Intelligence)
Cognitive Science and Education Research (Life Sciences → Neuroscience → Cognitive Neuroscience)

Related Documents

BOOK-CHAPTER

Auto-associative Networks

WORLD SCIENTIFIC eBooks, Year: 2018, Pages: 179-197
JOURNAL ARTICLE

Sequential associative processing

Clark C. Guest

Journal: Annual Meeting Optical Society of America, Year: 1985, Pages: WT2-WT2
BOOK-CHAPTER

Recurrent Neural Networks for Sequential Processing

Lyndon White, Roberto Togneri, Wei Liu, Mohammed Bennamoun

Studies in computational intelligence, Year: 2018, Pages: 23-36
BOOK-CHAPTER

Dynamical Recurrent Networks for Sequential Data Processing

Stefan C. Kremer, John F. Kolen

Lecture notes in computer science, Year: 2000, Pages: 107-122