Wednesday, October 30, 2013

EMC Isilon For Hadoop – No Ingest Necessary







In a traditional Hadoop environment, the entire data set must be ingested into HDFS (with three or more copies made of each block) before any analysis can begin, and once the analysis is complete the results must be exported back out. Why does this matter? COST. Ingest and export are tedious, time-consuming processes, and maintaining multiple copies of the data adds storage expense on top. With EMC Isilon HDFS, the entire data set can be analyzed immediately, in place, with no need to replicate it, and the results are immediately available to NFS and SMB clients as well.


If you don’t already own Isilon for your Hadoop environment, it is worth exploring the many benefits Isilon brings over HDFS running on compute hosts. If you are already an Isilon customer, no data movement is required at all: Isilon offers in-place analytics on your existing data, eliminating the need to build a dedicated Hadoop storage infrastructure.


Ryan Peterson, Director of Solutions Architecture at Isilon, likes to say that Isilon dedupes Hadoop, since Isilon satisfies Hadoop’s need to see multiple copies of the same data without actually making those copies. In fact, with today’s release of OneFS 7.1, a new feature called SmartDedupe can reduce storage by approximately another 30%. Ryan now calls this Hadoop Dedupe Dedupe: the first ‘Dedupe’ removes the 3x replication, and the second ‘Dedupe’ shaves off another 30%. Clever!


I sat down with Ryan Peterson so he could walk us through Hadoop Dedupe Dedupe:


In a traditional Hadoop deployment, data loss from hardware failure is handled by replicating each block of data a minimum of three times (3x by default), resulting in at least four copies of the data: the existing primary storage copy plus three Hadoop storage copies.


Isilon for Hadoop turns this paradigm upside down. If the existing primary data is NOT already on Isilon, only 2.2 copies of the data are required to protect against data loss from hardware failure: the first copy is the existing primary data outside Isilon, and the second is on Isilon, where the N+M RAID-like distributed parity scheme makes roughly 1.2 copies while providing high availability and resiliency against node and disk failures.


If the primary data is already on Isilon, there is no need for a separate Hadoop storage infrastructure in the first place, and only 1.2 copies of the data are made instead of 4. With Isilon’s new de-duplication feature, the storage requirement drops by approximately another 30%.
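
To make the copy-count comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The 3x replication factor, ~1.2x N+M protection overhead, and ~30% SmartDedupe savings are the approximate figures quoted above; the function names and constants are purely illustrative.

    # Back-of-the-envelope copy counts, using the approximate figures above.
    HDFS_REPLICAS = 3        # Hadoop's default replication factor
    ISILON_OVERHEAD = 1.2    # approximate N+M distributed-parity overhead
    DEDUPE_FACTOR = 0.7      # ~30% reduction from SmartDedupe (illustrative)

    def copies_traditional():
        # primary copy outside Hadoop plus three HDFS replicas
        return 1 + HDFS_REPLICAS

    def copies_on_isilon(primary_on_isilon=False, dedupe=False):
        protected = ISILON_OVERHEAD * (DEDUPE_FACTOR if dedupe else 1.0)
        # if the primary data is NOT already on Isilon, it counts as one more copy
        return protected if primary_on_isilon else 1 + protected

    print(copies_traditional())                                   # 4 copies
    print(copies_on_isilon())                                     # 2.2 copies
    print(copies_on_isilon(primary_on_isilon=True))               # 1.2 copies
    print(copies_on_isilon(primary_on_isilon=True, dedupe=True))  # ~0.84 copies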


So if a customer has 300TB of raw data, a traditional deployment needs 900TB of new storage to run the Hadoop cluster. If that data is already on Isilon, however, no new storage is needed at all: Hadoop runs directly against the primary data, which, with the ~1.2x N+M protection overhead and ~30% deduplication, occupies only about 252TB of raw capacity (300TB x 1.2 x 0.7).


And if the data is not yet on Isilon, only 252TB of new storage is needed, as against 900TB.
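
For a quick sanity check of those figures, here is the same arithmetic spelled out; it simply assumes the ~1.2x protection overhead and ~30% deduplication quoted above apply to the full 300TB data set.

    # The 300TB example worked out with the same assumed factors.
    raw_data_tb = 300

    traditional_tb = raw_data_tb * 3        # 3x HDFS replication -> 900TB
    isilon_tb = raw_data_tb * 1.2 * 0.7     # N+M overhead, then ~30% dedupe -> 252TB

    print(f"Traditional Hadoop storage needed: {traditional_tb:.0f}TB")
    print(f"Isilon, protected and deduped:     {isilon_tb:.0f}TB")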


Wait a minute, is this Hadoop Dedupe Dedupe Dedupe?




The post EMC Isilon For Hadoop – No Ingest Necessary appeared first on EMC Big Data.








