
What is Apache Hadoop? And should my organization use it?

Updated: Jan 9


As the need for big data processing grows across the industry landscape, so does the number of tools available to process that data. Selecting the right framework for your industry's specific needs is paramount to ensuring fast and reliable analysis. One such framework, Apache Hadoop, is open-source and designed with scalability and fault tolerance at its core.


In this article, we will delve into the fundamental aspects of Apache Hadoop, explore its key components, and discuss whether your organization should consider adopting it.


What is Hadoop?


From Apache's website, "the Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models". In other words, it is a computing architecture that executes individual data processing tasks across multiple computers and/or servers in parallel, allowing for fast and scalable analysis. It splits the input data across multiple machines, each of which is called a "node".


Take a quick look at this diagram and we'll get back to the terms.



In very simple terms, there is a managing node called the "NameNode" that stores and manages all metadata for each "DataNode". Each DataNode does the data processing - but only on its assigned portion of the data. It has no knowledge of the rest of the system. After all the data blocks have been processed on each DataNode, the results are collected and organized by the resource manager, which produces the final output.
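
To make the NameNode/DataNode relationship concrete, here is a minimal sketch using Hadoop's Java FileSystem API to ask the NameNode where the blocks of a file live. The cluster address and file path are placeholders, not real values.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocations {
    public static void main(String[] args) throws Exception {
        // Connect to the cluster; fs.defaultFS normally comes from core-site.xml.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // placeholder address
        FileSystem fs = FileSystem.get(conf);

        // Ask the NameNode for the metadata it keeps about a file's blocks.
        Path file = new Path("/data/example.csv"); // hypothetical file
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        // Each block lives on one or more DataNodes; print where.
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```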


Apache Hadoop has a built-in resource manager called YARN that schedules and tracks all the tasks on each DataNode, allowing for efficient use of all available computing power. The way data is split and processed on each DataNode can be handled in a number of ways using a processing framework layer such as MapReduce, Tez, Storm, or Spark. MapReduce is the built-in processing framework, but others can be easily integrated - the need for another framework depends on the source, type, and processing time required for the data. For example, MapReduce cannot handle real-time processing, but Spark can.
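
To illustrate the MapReduce model itself, below is a condensed version of the classic word-count job written against Hadoop's Java MapReduce API. The mapper runs on each node's slice of the input and emits (word, 1) pairs; the reducer sums the counts for each word. Input and output paths are supplied on the command line.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: runs against each node's local block of input, emitting (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce: gathers every count emitted for a given word and sums them.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```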


To ensure data is not lost across the multiple nodes in the distributed file system architecture - called the Hadoop Distributed File System (HDFS) - the architecture keeps multiple copies of each data block (three by default) on different nodes. It also supports a Secondary NameNode, which periodically checkpoints the NameNode's metadata so that it is not lost if the NameNode fails.
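
Replication is configurable both cluster-wide and per file. The sketch below, again using the Java FileSystem API, writes a small file with the default replication factor of three and then changes it for that one file; the path shown is purely illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // dfs.replication defaults to 3; clusters typically set it in hdfs-site.xml.
        conf.set("dfs.replication", "3");
        FileSystem fs = FileSystem.get(conf);

        // Write a small file; its blocks will be replicated across DataNodes.
        Path path = new Path("/data/hello.txt"); // hypothetical path
        try (FSDataOutputStream out = fs.create(path)) {
            out.writeUTF("hello hdfs");
        }

        // Replication can also be changed after the fact for an individual file.
        fs.setReplication(path, (short) 2);
        System.out.println("replication now: " + fs.getFileStatus(path).getReplication());
        fs.close();
    }
}
```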



If your organization is completing a data processing task where the total amount of data will grow drastically, the Apache Hadoop framework is likely the right choice, as it allows additional nodes to be added easily without reconfiguring your whole data management system. Additionally, Hadoop is cost-effective at scale because it lets you connect many pieces of inexpensive hardware for simultaneous processing instead of purchasing expensive machines with immense processing capabilities.


If you want to learn more about how to set up your own single-node cluster, here is Apache's guide on how to do so. If you want a deeper understanding of Hadoop's internals, here is an excellent step-by-step explanation of all the intricacies.


Who uses Hadoop?


The Hadoop ecosystem is vast and is utilized across various industries. It continues to grow and expand as new APIs and frameworks are developed. Here is a link to a massive list of companies that use it, compiled by Apache. Let's walk through a few use cases:

  1. Financial Sector - Hadoop is ubiquitous in the financial sector for risk modeling and portfolio analysis. Banks and other financial institutions have large volumes of market data and customer data that can be leveraged to make calculated decisions. JPMorgan Chase, with over $350 billion in assets, has stated it prefers HDFS for analysis because of how fast its volume of data is growing and how well HDFS handles unstructured data.

  2. Retail - Hadoop is a great tool for log storage, report generation, search optimization, and ad targeting. eBay is a big player in this space, with a 532-node cluster. Multiple organizations integrate Machine Learning algorithms into their Hadoop clusters.

  3. Media and Marketing - Companies including Facebook, FollowNews, hiring sites, iNews, and LinkedIn use Hadoop to ingest and evaluate consumer data. It is also used for internal searching, analyzing click flow, and data filtering.

  4. Research - Multiple research institutions, including Cornell University Web Lab, IBM, and the Global Biodiversity Information Facility use it to analyze massive data sets and run Machine Learning algorithms.


Is Hadoop right for your organization?


In general, Apache Hadoop is a versatile framework that can be applied to most cases where big data is analyzed. The base framework does have limitations with real-time processing and other complex algorithms, but additional APIs and processing framework layers can be added to overcome these limitations. Some questions to consider before choosing it for your organizational data needs:

  1. Does your organization possess the technical capacity to maintain an Apache Hadoop framework? If not, do you have reliable advisors and vendors who can help?

  2. How does your organization plan to manage data storage with Hadoop Distributed File System (HDFS)?

  3. What specific use cases and processing requirements does your organization have?


If you're deciding how to best manage your data to gain efficiencies, scale into the future, and simplify tough decisions, we'd love to help. Just reach out here or click on the "free consult" button to put time directly on our calendars.