Machine translation is the automated translation of text from one natural language to another. The most common approach to the problem is statistical machine translation, which is convenient for language pairs with similar grammatical structures but requires vast datasets. These conventional models, however, perform poorly for languages that do not share grammar and contextual meaning. Lately this problem has been addressed by neural machine translation (NMT), which has proved to be an effective remedy. NMT requires only a small amount of training data, but it can translate only words seen during training. In the proposed system, a fixed-length vector identifies the words that contribute most to the translation of the text and assigns a weight to each word. An encoder-decoder architecture with a Long Short-Term Memory (LSTM) neural network is employed, and the trained model is invoked with the previous sequences and states. Unlike a plain LSTM, the proposed model improves translation performance through the attention vector and by returning the sequences of previous states. An English-Hindi sentence corpus is used here to implement a model with and without attention. Evaluation of the results shows that the proposed solution reduces the complexity of training a neural network and increases translation performance.
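The attention step the abstract describes, weighting each source word by its contribution to the translation and summing the encoder states into a context vector, can be sketched as follows. This is a minimal illustration assuming dot-product scoring; the paper's exact scoring function and the `attention_context` name are assumptions, not the authors' implementation.

```python
import math

def attention_context(decoder_state, encoder_states):
    """Dot-product attention sketch: score each encoder hidden state
    against the current decoder state, softmax the scores into weights
    (one per source word), and return the weighted sum of encoder
    states (the context vector) together with the weights."""
    # Alignment score for each source position (dot product).
    scores = [sum(d * e for d, e in zip(decoder_state, state))
              for state in encoder_states]
    # Numerically stable softmax over the scores.
    max_s = max(scores)
    exps = [math.exp(s - max_s) for s in scores]
    total = sum(exps)
    weights = [x / total for x in exps]
    # Context vector: weighted sum of encoder states.
    dim = len(encoder_states[0])
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy example: three encoder states with a 2-dimensional hidden size.
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec = [1.0, 0.0]
context, weights = attention_context(dec, enc)
```

In a full encoder-decoder model the context vector is concatenated with the decoder state before predicting the next target word, which is what lets the network focus on the relevant source words at each step.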
Sai Yashwanth Velpuri, Sonakshi Karanwal, R. Anita,
K. Adi Narayana Reddy, G. Shyam Chandra Prasad, A. Rajashekar Reddy, Lalan Kumar, Kannaiah