Sunday, February 23, 2014

Important Hadoop Articles

Apache Spark, an in-memory data-processing framework, is now a top-level Apache project. That’s an important step for Spark’s stability as it increasingly replaces MapReduce in next-generation big data applications.

MapReduce was fun and pretty useful while it lasted, but it looks like Spark is set to take the reins as the primary processing framework for new Hadoop workloads. The technology took a meaningful, if not huge, step toward that end on Thursday when the Apache Software Foundation announced that Spark is now a top-level project.
Spark has already garnered a large and vocal community of users and contributors because it’s faster than MapReduce (in memory and on disk) and easier to program. This means it’s well suited for next-generation big data applications that might require lower-latency queries, real-time processing or iterative computations on the same data (e.g., machine learning). Spark’s creators at the University of California, Berkeley, have founded a company called Databricks to commercialize the technology.
Spark is technically a standalone project, but it was always designed to work with the Hadoop Distributed File System. It can run directly on HDFS, inside MapReduce and, thanks to YARN, alongside MapReduce jobs on the same cluster. In fact, Hadoop pioneer Cloudera is now providing enterprise support for customers that want to use Spark.

However, MapReduce isn’t yesterday’s news quite yet. Although many new workloads and projects (such as Hortonworks' Stinger) use alternative processing frameworks, there’s still a lot of tooling for MapReduce that Spark doesn’t have yet (e.g., Pig and Cascading), and MapReduce is still quite good for certain batch jobs. Plus, as Cloudera co-founder and Chief Strategy Officer Mike Olson explained in a recent Structure Show podcast, there are a lot of legacy MapReduce workloads that aren’t going anywhere anytime soon even as Spark takes off.
If you want to hear more about Spark and its role in the future of Hadoop, come to our Structure Data conference March 19-20 in New York. Databricks co-founder and CEO Ion Stoica will be speaking as part of our Structure Data Awards presentation, and we’ll have the CEOs of Cloudera, Hortonworks, and Pivotal talking about the future of big data platforms and how they plan to capitalize on them.

Cloudera launches in-memory analyzer for Hadoop


Hadoop distributor Cloudera has released a commercial edition of the Apache Spark program, which analyzes data in real time from within Cloudera’s Hadoop environments.

The release has the potential to expand Hadoop’s use for stream processing and faster machine learning.

“Data scientists love Spark,” said Matt Brandwein, Cloudera director of product marketing.

Spark does a good job at machine learning, which requires multiple iterations over the same data set, Brandwein said.

“Historically, you’d do that stuff with MapReduce, if you’re using Hadoop. But MapReduce is really slow,” Brandwein said, referring to how the MapReduce framework requires repeated reads and writes to disk to carry out machine learning duties. Spark can do this work while the data is still in working memory. Maintainers of the software claim that Spark can run programs up to 100 times faster than Hadoop MapReduce, thanks to its in-memory design.
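The contrast Brandwein describes can be seen in a toy sketch. The following is plain Python, not actual Spark or MapReduce code: an iterative algorithm (such as machine learning training) touches the same data set once per pass, and the only difference between the two styles below is whether each pass re-reads the data from disk or reuses an in-memory copy.

```python
import os
import tempfile

def load_from_disk(path):
    """Simulate a full scan of the input file on disk."""
    with open(path) as f:
        return [float(line) for line in f]

def iterate_mapreduce_style(path, iterations):
    """MapReduce-style: disk I/O on every single pass."""
    total = 0.0
    for _ in range(iterations):
        data = load_from_disk(path)   # re-read from disk each iteration
        total += sum(data)
    return total

def iterate_spark_style(path, iterations):
    """Spark-style: read once, then iterate over the cached copy."""
    data = load_from_disk(path)       # one read, kept in working memory
    total = 0.0
    for _ in range(iterations):
        total += sum(data)            # every pass hits the in-memory copy
    return total

# Both styles compute the same answer; only the I/O pattern differs.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("1.0\n2.0\n3.0\n")
    path = f.name
mr_total = iterate_mapreduce_style(path, 10)
spark_total = iterate_spark_style(path, 10)
os.unlink(path)
```

In real Spark the caching step is explicit (an RDD is marked for reuse), but the principle is the same: paying the disk cost once instead of once per iteration.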

Spark is also good at stream processing, in which it can monitor a constant flow of data and carry out certain functions if certain conditions are met.

Stream processing, for instance, could be applied to fraud management and security event management. “In those cases, you’re analyzing real-time data off the wire to detect any anomalies and take action,” Brandwein said. The data can also be off-loaded to the Hadoop file system for further interactive and deeper batch-processing analysis.
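A hedged, plain-Python sketch of the pattern Brandwein describes: watch a continuous flow of events and take action whenever a condition is met. The threshold, field names and event data below are hypothetical, invented purely for illustration.

```python
THRESHOLD = 1000.0  # hypothetical alert threshold, not from the article

def detect_anomalies(events, threshold=THRESHOLD):
    """Yield an alert for every event whose amount exceeds the threshold."""
    for event in events:
        if event["amount"] > threshold:
            yield ("ALERT", event["user"], event["amount"])

# A small batch of events standing in for a live feed off the wire.
stream = [
    {"user": "alice", "amount": 40.0},
    {"user": "bob",   "amount": 2500.0},  # the anomaly
    {"user": "carol", "amount": 12.5},
]

alerts = list(detect_anomalies(stream))
```

A real deployment would run this logic continuously over an unbounded feed, with the raw events also landing in HDFS for the deeper batch analysis the article mentions.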

First developed at the University of California, Berkeley, Apache Spark provides a way to load streaming data into the working memory of a cluster of servers, where it can be queried in real time. It has no upper limit on the number of servers, or the amount of memory, it can use.

It relies on the latest version of the Hadoop data-processing framework, which uses YARN (Yet Another Resource Negotiator). Spark does not require the MapReduce framework, though, which operates in batch mode. It has APIs (application programming interfaces) for Java, Scala and Python. It can natively read data from HDFS (the Hadoop Distributed File System), the HBase Hadoop database and the Cassandra data store.
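To give a feel for the programming model those APIs expose, here is the classic word count expressed with Spark's flatMap / map / reduceByKey vocabulary, sketched in plain Python so it runs without a cluster. The input lines are invented for illustration; in actual PySpark the same three steps would be chained on an RDD.

```python
# In real PySpark this pipeline would read roughly as:
#   sc.textFile(...).flatMap(...).map(...).reduceByKey(...)
# Below, each Spark operation is mimicked with plain Python.

lines = ["spark runs on hadoop", "spark reads from hdfs"]

# flatMap: split each line into words, flattening into one list
words = [word for line in lines for word in line.split()]

# map: pair each word with a count of 1
pairs = [(word, 1) for word in words]

# reduceByKey: sum the counts for each distinct word
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n
```

The same pipeline, written against Spark's API, would distribute each stage across the cluster automatically, which is a large part of why the article calls it easier to program than raw MapReduce.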

More than 120 developers have contributed to the Apache Spark project, and the technology has been used by Yahoo, Intel and a number of other, smaller companies. Databricks, which offers its own commercial version of Spark, provides Spark support on behalf of Cloudera’s customers.

The idea of applying Hadoop-style analysis to streaming data is not a new one. Twitter maintains Storm, an open-source stream-processing system it uses to analyze messages.

In addition to Spark, Cloudera also announced that it has repackaged its commercial Hadoop offering into three separate packages: the Basic edition, the Flex edition and the Enterprise Hub edition. The Enterprise Hub edition bundles all of the additional tools that Cloudera has integrated with Hadoop, including HBase, Spark, backup capabilities and the Impala SQL analytics engine. The Flex edition allows the user to pick one additional tool beyond core Hadoop.

Cloudera has also renamed its Cloudera Standard edition to Cloudera Express.
