Need help unlocking the value of Big Data?

Hadoop and the Cloud make it possible and affordable. Let InfoObjects help show you the way to improved business performance based on intelligent analysis of your Big Data.

  • Take Aim

  • Application Development

  • Focused Analytics

What do you need to know?

It’s an age-old challenge: identify the data you need to make better business decisions and get useful information as fast as possible. Big Data increases the challenge, which is where we come in.

Get the highest return on your investment

We help you go after the low-hanging fruit first – to apply efficient, cost-effective methods for extracting the insights you need, and continually improve the process to realize the value hidden in your business data.

Based on your IT environment and in-house capabilities, we create an architecture and detailed roadmap for your Big Data initiative. While we favor open source tools and technologies, we leverage your existing applications where possible to maximize your overall ROI.

Our Big Data advisory services ensure your aim is on target, and that the plan makes sense for your organization. Then we provide the services you need to complement your team in executing your Big Data projects.

Ask about our free Big Data POC – no cost, no obligation.

Leverage the Hadoop ecosystem

InfoObjects developers are experts in Hadoop technologies – down to the source-code level – and can implement all the IT infrastructure and applications needed for your Big Data challenges.

Expertise by the project or by the hour

We can take on your entire Hadoop development as a turnkey project, or assist your team in developing effective Hadoop applications. Either way, we give you access to the broad and deep expertise needed to ensure the right mixture of these Big Data ingredients:

  • HDFS (Hadoop Distributed File System) – Java-based file system for storing and accessing your Big Data
  • MapReduce – programming model and Java API for processing structured and unstructured data on large Hadoop clusters
  • HBase & Cassandra – Open Source “NoSQL” database systems
  • Hive – SQL-like query interface for Hadoop
  • Pig – high-level scripting language that compiles to MapReduce for processing and analyzing large data sets
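To make the MapReduce ingredient above concrete, here is a minimal pure-Python sketch of the map-shuffle-reduce pattern behind the classic word count. This is illustrative only, not Hadoop API code; on a real cluster the same three phases run in parallel across many machines.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as Hadoop does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big insights", "big clusters"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])  # 3
```

The same decomposition is what lets Hadoop scale: map and reduce tasks have no shared state, so the framework can distribute them freely across a cluster.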

Our promise to you: Whether you outsource to us or want an expert consultant as part of your team, we help deliver cost-effective applications optimized for your needs, yielding practical, actionable metrics that improve your business.

Ask about our free Big Data POC – no cost, no obligation.

Make Big Data work for you

The promise of Big Data lies in being able to make more informed decisions – to increase sales, decrease costs, or execute your mission more efficiently. Our Big Data Analytics provide useful insights that until now could only be suggested by sampling, or were completely invisible.

Why sample when you have all the data?

The insights you need are buried in huge amounts of fast-moving data in a variety of data types. Sampling algorithms and stochastic methods were used in the past for data modeling, but are no match for the power of analytics drawn from all the data all the time.
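The gap between sampling and full-data analytics is easy to see on rare events. The sketch below uses purely synthetic data (a made-up 1% "fraud" rate, not a real workload): the exact rate comes from scanning every record, while a small random sample can easily miss or over-represent the rare cases.

```python
import random

random.seed(7)
# Synthetic event stream: roughly 1% of records are "fraud" (illustrative only)
events = ["fraud" if random.random() < 0.01 else "ok" for _ in range(100_000)]

# Exact answer, computed from all the data
exact_rate = events.count("fraud") / len(events)

# Estimate from a small random sample, as older sampling approaches would do
sample = random.sample(events, 500)
sample_rate = sample.count("fraud") / len(sample)

print(f"exact={exact_rate:.4f} sample={sample_rate:.4f}")
```

With all the data, the "estimate" is simply the answer; the sample's error bar never goes away.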

We help you pose the right questions and use all your data – machine and human, text and image, structured and unstructured – to find answers. Big Data Analytics enable the Santa Cruz Police Department to implement more efficient predictive policing, and leading insurance companies to reduce the cost of health care through predictive policies and services.

Our solutions focus on the critical issues affecting the performance of your organization, giving you access to key metrics in near real time, in a form that helps you make the most informed decisions possible.

Ask about our free Big Data POC – no cost, no obligation.

From Our Blog

Boiling the Big Data Ocean

Cloudera's announcement of a $900M funding round is still settling in people's minds. It also got me thinking about how it's going to affect us as a relatively smaller player in the Big Data/Hadoop space. Interestingly, a huge amount of this round – $740M – has come from our very own neighbor, Intel. The following repercussions come to the forefront: Expand the Hadoop Ecosystem. The ...

The dust is settling in the Big Data space

The Big Data space is interesting in many ways. Big Data is changing the landscape, but the landscape is also changing Big Data. In this blog, I will look at them from different angles. Gartner Big Data Hype Curve: below is Gartner's hype cycle for emerging technologies. According to this graph, Big Data is about to reach the peak of the hype cycle. This data ...

FileDescriptors and HBase

Though HBase works on HDFS, its appetite for open file handles comes close to that of any regular database: it needs a lot of file descriptors. Linux, by default, limits the number of open file descriptors to 1024. You can check this by issuing ulimit -n. To change the limit, open /etc/security/limits.conf as root (sudo vi /etc/security/limits.conf) and add entries such as: hduser soft nofile 10240, hduser ...
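The same check can be done programmatically. Here is a minimal Python sketch using the standard resource module (Unix-only); the 10240 figure is the value from the excerpt above, not a universal recommendation:

```python
import resource

# Query the soft and hard limits on open file descriptors
# (the soft limit is what `ulimit -n` reports)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# A process may raise its own soft limit up to the hard limit without root;
# raising the hard limit itself requires the limits.conf change described above.
# resource.setrlimit(resource.RLIMIT_NOFILE, (min(10240, hard), hard))
```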

Moving Data into and out of Hadoop

Hadoop, though extremely powerful, is not an island. It needs to import and export data from and to a slew of sources. Typically, Hadoop ingests data from four types of sources: logs, files, NoSQL stores, and OLTP databases. Log: log files present a very interesting use case for Hadoop; log data is very high-volume and semi-structured, and can be moved using a streaming tool like Storm or Flume. Files: files ...
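As a sketch of what ingesting log data involves, here is a small Python example that turns one semi-structured log line into a structured record – the kind of per-record transform a Flume or Storm pipeline applies in flight. The log format, field names, and sample line are all hypothetical, chosen only to resemble a common access-log shape:

```python
import re

# Hypothetical access-log layout: ip, two unused fields, [timestamp], "request", status
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})'
)

def parse_line(line):
    """Turn one semi-structured log line into a structured record, or None."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '10.0.0.1 - - [21/Apr/2014:10:00:00 +0000] "GET /index.html HTTP/1.1" 200'
record = parse_line(line)
print(record["status"])  # 200
```

Lines that fail to parse return None rather than raising, so a pipeline can route malformed records to a dead-letter path instead of stalling the stream.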

Hadoop: What is interactive, batch, streaming, real time, NoSQL?

I am sure everyone is confused by the different terminology around Hadoop – words like streaming, real time, etc. So here's some clarification. Batch: batch means running a query on a schedule. You already know what your question is: you have written a MapReduce program to process data, and your data is in a few large files as opposed to being spread out. ...
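The batch-versus-streaming distinction the excerpt starts to draw can be sketched in a few lines of plain Python (illustrative only, not Hadoop or Storm code): batch computes its answer from the complete data set in one scheduled pass, while streaming maintains a running answer as each record arrives.

```python
def batch_total(records):
    # Batch: the full data set is available up front;
    # one scheduled pass over all of it produces the answer.
    return sum(records)

def streaming_totals(record_stream):
    # Streaming: records arrive one at a time;
    # emit an updated running total after each record.
    total = 0
    for r in record_stream:
        total += r
        yield total

data = [3, 1, 4, 1, 5]
print(batch_total(data))                    # 14
print(list(streaming_totals(iter(data))))   # [3, 4, 8, 9, 14]
```

Both end at the same final answer; the difference is that the streaming version had a usable (partial) answer the whole time, which is exactly what "real time" buys you.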