Hadoop Common: contains libraries and utilities required by other Hadoop modules
Hadoop Distributed File System (HDFS): a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster
Hadoop YARN: a resource-management platform responsible for managing the computing resources in clusters and using them for scheduling users' applications
Hadoop MapReduce: a programming model for large-scale data processing (see the sketch after this list)
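To make the MapReduce programming model concrete, the canonical word-count job is sketched below using Hadoop's Java MapReduce API: the mapper emits a count of 1 for every word it sees, and the reducer sums those counts per word. The class names and the input/output paths taken from the command line are illustrative, not part of the source text.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in an input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts collected for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```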
All of the modules in Hadoop are designed with the fundamental assumption that hardware failures (of individual machines, or racks of machines) are common and should therefore be handled automatically in software by the framework. Apache Hadoop's MapReduce and HDFS components were originally derived from Google's MapReduce and Google File System (GFS) papers, respectively.
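One concrete form this failure handling takes is HDFS block replication: each block is stored on several machines, and the NameNode re-replicates blocks automatically when a node or rack is lost. The minimal sketch below uses the HDFS Java client API to inspect and adjust a file's replication factor; the file path and the factor of 3 are assumptions for illustration only.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // connects to the configured default file system
    Path file = new Path("/data/example.txt");  // hypothetical file

    FileStatus status = fs.getFileStatus(file);
    System.out.println("current replication: " + status.getReplication());

    // Ask HDFS to keep 3 copies of each block; after a machine failure,
    // re-replication back to this target is handled by the framework.
    fs.setReplication(file, (short) 3);
  }
}
```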