Apache Hadoop Tutorial for Beginners

A place to get started with the Apache Hadoop ecosystem: hands-on examples that will help you understand each ecosystem component in detail.

Apache Hadoop

Apache Hadoop is a software framework that allows for the distributed processing of large data sets across clusters of computers, using a simple programming model called MapReduce and a distributed storage component called HDFS (Hadoop Distributed File System). It is designed to scale up from a single machine to thousands of machines, each offering local computation and storage. Hadoop is used to develop data processing applications that run in a distributed fashion, and it follows a master-slave architecture. A minimal MapReduce sketch follows below.
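To make the MapReduce model concrete, here is a minimal sketch of the classic WordCount job written against the Hadoop MapReduce Java API (the org.apache.hadoop.mapreduce packages). The class names TokenizerMapper and IntSumReducer and the input/output paths are illustrative, not part of any fixed API; the input and output directories are passed as command-line arguments.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every word in its input split.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Assuming the Hadoop client libraries are on the classpath, you would package this into a jar and submit it with something like: hadoop jar wordcount.jar WordCount /input /output (the jar name and paths here are hypothetical).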


Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
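One concrete example of this is HDFS block replication: each block of a file is stored on several DataNodes, so the loss of one machine does not lose data. The sketch below uses the HDFS Java API (org.apache.hadoop.fs.FileSystem) to read and change a file's replication factor; the path /user/demo/input.txt is purely illustrative, and the Configuration is assumed to pick up core-site.xml / hdfs-site.xml from the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
  public static void main(String[] args) throws Exception {
    // fs.defaultFS in the loaded configuration should point at the NameNode.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical HDFS path, used only for illustration.
    Path file = new Path("/user/demo/input.txt");

    // Report how many DataNodes currently hold a copy of each block.
    FileStatus status = fs.getFileStatus(file);
    System.out.println("Replication factor: " + status.getReplication());

    // Raise the replication factor for extra fault tolerance.
    fs.setReplication(file, (short) 3);
  }
}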

Hadoop Ecosystem Components

  1. HDFS
  2. MapReduce
  3. YARN
  4. Hive
  5. Sqoop
  6. Pig
  7. Oozie
  8. Flume
  9. HBase
  10. Zookeeper

Happy Learning!!!
