Apache Hadoop is an open-source software framework, licensed under the Apache v2 license, that supports data-intensive distributed applications. It supports running applications on large clusters of commodity hardware, and the framework transparently provides both reliability and data motion to applications. Hadoop implements a computational paradigm named map/reduce, in which an application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both map/reduce and the distributed file system are designed so that node failures are automatically handled by the framework. This enables applications to work with thousands of computationally independent computers and petabytes of data. Hadoop was derived from Google's MapReduce and Google File System (GFS) papers.
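The map/reduce paradigm described above can be illustrated with the classic word-count example. The sketch below is not Hadoop's actual Java API; it is a minimal, framework-free simulation of the three conceptual phases (map, shuffle, reduce) that Hadoop distributes across cluster nodes, with function names chosen here for illustration:

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in one input fragment.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key. In Hadoop the framework
    # performs this step between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: combine the grouped values for each key, here by summing counts.
    return {key: sum(values) for key, values in groups.items()}

# Each document stands in for a fragment of work that could run on any node.
documents = ["the quick brown fox", "the lazy dog and the fox"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
# counts["the"] == 3, counts["fox"] == 2
```

Because each mapper call depends only on its own input fragment, a failed fragment can simply be re-executed elsewhere, which is the property that lets the framework handle node failures automatically.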
The entire Apache Hadoop “platform” is now commonly considered to consist of the Hadoop kernel, MapReduce, and the Hadoop Distributed File System (HDFS), along with a number of related projects, including Apache Hive, Apache HBase, and others.
Hadoop is written in the Java programming language and is a top-level Apache project built and used by a global community of contributors. Its related projects (Hive, HBase, ZooKeeper, and so on) likewise draw many contributors from across the ecosystem.