Editor's note: this article was first posted on the Terracotta blog and is re-posted with permission. By Gagan Mehra.
When many of us hear the term Big Data, we think Hadoop. That's only natural, as Hadoop has helped countless organizations overcome huge Big Data challenges at relatively low cost. No surprise, therefore, that analysts expect global demand for Hadoop to grow to $13.95 billion by 2017.
That said, Hadoop isn’t a complete answer to Big Data. While Hadoop is great for batch processing and storage of very large data sets, it can take hours to produce results. Then, once you gather insights from Hadoop, it can take even longer to share those insights with your enterprise apps. Every second your apps can’t see the latest insights is time that your Hadoop-derived intelligence could be delivering value, but isn’t.
To address this challenge, Terracotta recently announced the BigMemory-Hadoop Connector, a game-changing solution that lets Hadoop jobs write data directly into BigMemory, Terracotta’s in-memory data management platform. This enables downstream applications to get instant access to Hadoop results by reading from BigMemory. Hadoop jobs also execute faster, as they can now write to memory instead of disk (HDFS). The result can be a significant boost in competitive advantage and enterprise profitability.
(For those familiar with Hadoop: the BigMemory-Hadoop Connector also lets you read streaming output from Hadoop, allowing apps to get Hadoop results even faster.)
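To make the write-to-memory/read-from-memory flow concrete, here is a minimal sketch of the pattern. This is not the connector's actual API: a plain `ConcurrentHashMap` stands in for the BigMemory cache, and the class, method, and key names are all illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the pattern described above.
// NOTE: a ConcurrentHashMap stands in for a BigMemory cache; the real
// connector publishes Hadoop results through Terracotta's in-memory data
// management platform, and all names below are illustrative only.
public class HadoopResultConsumer {

    // Stand-in for the shared in-memory store that a Hadoop job
    // would populate via the BigMemory-Hadoop Connector.
    static final Map<String, Long> resultCache = new ConcurrentHashMap<>();

    // Simulates the Hadoop side: the job writes aggregated results
    // straight into memory instead of to HDFS.
    static void hadoopJobWritesResults() {
        resultCache.put("pageviews:2013-07-01", 1_204_577L);
        resultCache.put("pageviews:2013-07-02", 1_310_092L);
    }

    // Simulates a downstream enterprise app: it can read an insight
    // the moment the job publishes it, with no HDFS round trip.
    static Long appReadsResult(String key) {
        return resultCache.get(key);
    }

    public static void main(String[] args) {
        hadoopJobWritesResults();
        System.out.println(appReadsResult("pageviews:2013-07-01")); // prints 1204577
    }
}
```

The point of the sketch is the decoupling: the producer (the Hadoop job) and the consumers (your apps) share one in-memory store, so insights are visible to apps as soon as they are written, rather than after a write to disk and a separate export step.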
If you run a Hadoop project and you’d like to see what the BigMemory-Hadoop Connector can do, download our early access version at: http://www.terracotta.org/downloads/hadoop-connector
Of course, if you’d like to learn more about how BigMemory can help your organization make the most of Hadoop, contact me directly at [email protected], or post in the comments.