<p>The term ‘Big Data’ refers to data sets whose size, complexity, and growth rate make them difficult to capture, store, manage, process, retrieve, or analyse with conventional RDBMS databases. Many techniques are used to analyse this massive amount of data, and Hadoop is a good choice. Hadoop is open source software that supports the processing of huge volumes of data with efficient storage, powerful data-recovery solutions, and scalability, and it is easy for both programmers and non-programmers to work with. Hadoop can process massive amounts of data very quickly, is very cost effective, and creates duplicate copies of data to prevent data loss in case of system failure. It has a very high degree of fault tolerance and can scale from a single server to thousands of machines. This paper introduces big data and Hadoop, the HDFS architecture, MapReduce, applications of Hadoop, and the conclusion.</p>
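<p>As a brief illustration of the MapReduce programming model referred to above, the following is a minimal word-count sketch written against Hadoop's standard Java MapReduce API (org.apache.hadoop.mapreduce). It is not code from the paper; the class name and input/output paths are purely illustrative. The mapper emits (word, 1) pairs and the reducer sums the counts for each word, which Hadoop runs in parallel across the cluster.</p>
<pre><code class="language-java">
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: splits each input line into tokens and emits (word, 1).
  public static class TokenizerMapper
      extends Mapper&lt;Object, Text, Text, IntWritable&gt; {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums all the 1s emitted for the same word.
  public static class IntSumReducer
      extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt; {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
</code></pre>
<p>On a cluster, such a job would typically be packaged into a JAR and submitted with the hadoop command-line tool, for example <code>hadoop jar wordcount.jar WordCount /input /output</code> (paths shown here are hypothetical); HDFS then supplies the input splits and stores the replicated output blocks.</p>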