Home

Enlighten your Big Data with Apache Spark™

Hadoop => The most cost-effective and scalable system to store Big Data.
Spark => A simple, unified platform for all Big Data compute needs.
Hadoop + Spark => Complete Business Insights


  • Storage

  • Compute

  • Insights

Have you formed your Data Lake yet?

How sick are you of your data sitting in silos?
How cumbersome is it to get even a simple piece of information from other departments?

A Big Data Lake at the cost of an EDW bucket

Hadoop and other Big Data technologies let you collect all the data in one system, at a fraction of the cost of traditional EDW systems.

No need to decide what data to save and what to throw away. No need to archive data to tape drives.

The benefit: “Get every ounce of Insight from data”

Our expertise in Big Data systems enables us to advise you on the right strategy to create and maintain a Big Data Lake.

We have the right tools and expertise to process your Big Data once the lake is formed.

Spark – The Unified Platform for Big Data Apps

Spark provides a single platform with libraries for all of your Big Data compute needs.

No disparate compute tools, just libraries

Over the years, multiple technologies have emerged to cater to different Big Data compute needs: Storm (streaming), MapReduce, Hive (SQL-like interface), Pig (high-level scripting), Mahout (machine learning), etc.

These technologies came with their own sets of features, as well as challenges. Spark completely changed the game: it caters to different compute needs simply by providing the right libraries. The following libraries come bundled with Spark as standard:

  • Spark SQL
  • Spark Streaming
  • MLlib (Machine Learning Library)
  • GraphX

Our team of experts can help you process data using Spark and its libraries, so that you can derive actionable insights that improve your business.
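
To make the "just libraries" point concrete, here is a minimal sketch, assuming a Spark 1.4+ spark-shell (where sc and sqlContext are predefined) and a hypothetical people.json file with age, height, and weight fields: the same session serves both Spark SQL and MLlib, with no separate tools to install.

  import org.apache.spark.mllib.clustering.KMeans
  import org.apache.spark.mllib.linalg.Vectors

  // Spark SQL: query structured data through the DataFrame API
  val people = sqlContext.read.json("people.json")   // hypothetical input file
  people.filter(people("age") > 21).show()

  // MLlib: cluster the same data with k-means -- no separate tool required
  val points = people.selectExpr("cast(height as double)", "cast(weight as double)")
    .rdd
    .map(r => Vectors.dense(r.getDouble(0), r.getDouble(1)))
    .cache()
  val model = KMeans.train(points, 2, 20)   // k = 2 clusters, 20 iterations
  println(model.clusterCenters.mkString(", "))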

Eureka or Enlightenment phase

The promise of Big Data lies in being able to make more informed decisions – to increase sales, decrease costs, or execute your mission more efficiently. Our Big Data Analytics provide useful insights that until now could only be suggested by sampling, or were completely invisible.

Visualize your way to insights

The insights you need are buried in huge amounts of fast-moving data in a variety of data types. Looking at raw data is not only inefficient but also boring. Humans believe in the power of stories, and the moment you start visualizing data, it starts telling stories.

We have expertise in all industry-leading visualization tools such as Tableau, Datameer, and QlikView. We can also help you create custom dashboards that provide a tailor-made visualization interface.

Here are some examples of custom visualizations.

Ask about our Big Data POC at no cost or obligation.

From Our Blog

Is data locality really a virtue?

Hadoop started with data locality as one of its primary features. Compute happens on the node where the data is stored, which reduces the amount of data that needs to be shuffled over the network. Since every commodity machine has some basic compute power, you do not need specialized hardware, and that brings the cost down to a fraction of what it would be otherwise. ... More

Spark: JDBC Using DataFrames

From Spark 1.3 onwards, JdbcRDD is no longer recommended, as DataFrames have support for loading data over JDBC. Let us look at a simple example in this recipe. Using JDBC with Spark can be slightly confusing, so I thought I would put together a simple use case to explain the functionality. Most probably you'll use it with spark-submit, but I have put it here in spark-shell to illustrate ... More
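
As a companion to the post above, here is a minimal sketch of the DataFrame JDBC source, assuming a Spark 1.4+ spark-shell with sqlContext predefined, a MySQL JDBC driver on the classpath, and a hypothetical person table in an hr database:

  // Hypothetical connection details -- adjust URL, driver, and table for your setup
  val people = sqlContext.read.format("jdbc").options(Map(
    "url"     -> "jdbc:mysql://localhost:3306/hr",
    "driver"  -> "com.mysql.jdbc.Driver",
    "dbtable" -> "person"
  )).load()

  people.printSchema()              // schema is inferred from the table metadata
  people.filter("age > 21").show()  // simple filters are pushed down to the database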

We Oxygenate the Ecosystem

As InfoObjects approaches 10 years since its founding, one question came to mind during my thinking time this morning. The question started with: why InfoObjects? And very soon it changed into: why the consulting business? This blog should be a good read not only for our customers but also for new joiners who make the decision to choose a ... More

Apache Spark Shining at Strata

This year Strata moved from Santa Clara to San Jose. A lot of things were different: a bigger expo hall, less parking, etc. What caught my attention, though, was something else. This was the first time Apache Spark was put at the same level as Apache Hadoop. Until last year, Apache Spark was considered one part of the Hadoop ecosystem, like ... More

Demystifying Compression

Compression has an important role to play in Big Data technologies. It makes both storage and transport of data more efficient. Then why are there so many compression formats, and what do we have to balance when deciding which compression format is better? When data is compressed, it becomes smaller, so both disk I/O and network I/O ... More
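
As a taste of the trade-off discussed in that post, here is a minimal sketch, assuming a spark-shell with sc and sqlContext predefined and hypothetical input and output paths, showing two common codec choices:

  import org.apache.hadoop.io.compress.GzipCodec

  // Gzip: high compression ratio, but not splittable -- suits cold, archival data
  sc.textFile("/data/raw/events")
    .saveAsTextFile("/data/archive/events_gz", classOf[GzipCodec])

  // Snappy: lower ratio but much cheaper CPU -- a common choice for hot Parquet data
  sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")
  sqlContext.read.json("/data/raw/events").write.parquet("/data/warm/events_parquet")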