soam's home


Data Trends For 2011

From Edd Dumbill at O’Reilly Radar come some nice thoughts on key data trends for 2011. First, the emergence of a data marketplace:

Marketplaces mean two things. Firstly, it’s easier than ever to find data to power applications, which will enable new projects and startups and raise the level of expectation—for instance, integration with social data sources will become the norm, not a novelty. Secondly, it will become simpler and more economic to monetize data, especially in specialist domains.

The knock-on effect of this commoditization of data will be that good quality unique databases will be of increasing value, and be an important competitive advantage. There will also be key roles to play for trusted middlemen: if competitors can safely share data with each other they can all gain an improved view of their customers and opportunities.

There are a number of companies emerging that crawl the general web, Facebook and Twitter to extract raw data, process/cross-reference that data and sell access to it. The article mentions Infochimps and Gnip. Other practitioners include BackType, Klout, Rapleaf, etc. Their success indicates a growing hunger for this type of information. I’m definitely seeing this need where I am currently. Limelight, by virtue of its massive CDN infrastructure and customers such as Netflix, collects enormous amounts of user data. Such data could greatly increase in value when cross-referenced against other databases that provide additional dimensions such as demographic information. This is something that might best be obtained from some sort of third-party exchange.

Another trend that seems familiar is the rise of real time analytics:

This year’s big data poster child, Hadoop, has limitations when it comes to responding in real-time to changing inputs. Despite efforts by companies such as Facebook to pare Hadoop’s MapReduce processing time down to 30 seconds after user input, this still remains too slow for many purposes.
[…]
It’s important to note that MapReduce hasn’t gone away, but systems are now becoming hybrid, with both an instant element in addition to the MapReduce layer.

The drive to real-time, especially in analytics and advertising, will continue to expand the demand for NoSQL databases. Expect growth to continue for Cassandra and MongoDB. In the Hadoop world, HBase will be ever more important as it can facilitate a hybrid approach to real-time and batch MapReduce processing.

Having built Delve’s (near) real-time analytics last year, I am familiar with the pain points of bending Hadoop to fit this kind of role. In addition to NoSQL-based solutions, I’d note that other approaches are emerging:

It’s interesting to see how a new breed of companies has evolved from treating their actual code as a valuable asset to giving away their code and tools and treating their data (and the models they extract from that data) as major assets instead. With that in mind, I would add a third trend to this list: the rise of cloud-based data processing. Many of the startups in the data space use Amazon’s cloud infrastructure for storage and processing. Amazon’s Elastic MapReduce, which I’ve written about before, is a very well put together and stable system that obviates the need to maintain a continuously running Hadoop cluster. Obviously, not all applications fit this model, but for those that do, it can be very cost-effective.
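To make that concrete, here is a minimal sketch of what a transient Elastic MapReduce run looks like with the boto library. The S3 paths, script names and instance settings are invented for illustration; treat it as a sketch rather than production code.

    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    # Credentials come from the environment or the boto config file.
    conn = EmrConnection()

    # One Hadoop Streaming step; all S3 paths here are made up for illustration.
    step = StreamingStep(
        name='daily log crunch',
        mapper='s3n://my-scripts/mapper.py',
        reducer='s3n://my-scripts/reducer.py',
        input='s3n://my-logs/2011/01/15/',
        output='s3n://my-results/2011/01/15/')

    # The job flow spins up, runs its steps and shuts down, so there is no
    # standing Hadoop cluster to feed and water between runs.
    jobflow_id = conn.run_jobflow(
        name='nightly analytics',
        log_uri='s3n://my-logs/emr/',
        steps=[step],
        num_instances=4,
        master_instance_type='m1.small',
        slave_instance_type='m1.small')
    print(jobflow_id)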

View From My Office

After a couple of years of working remotely, it still feels strange to have an office of my own, let alone one with a modicum of a view of the downtown SF skyline, so I am enjoying it while it lasts. We’re scheduled to move to a more central SOMA location sometime in the next month.

[Photo: View From My Office]

Gesture Recognition And Music

Despite Farhad Manjoo’s assertions that a week at CES is essentially a week wasted (“The Most Worthless Week in Tech”), I found this LA Times article talking about gesture recognition vendors at CES to be particularly interesting:

Competing examples on display were from PrimeSense, the Israeli designers of the microchips that power Microsoft’s popular controller-free Kinect gaming accessory, and Softkinetic, a Belgian rival that powered an interactive billboard in Hollywood last summer for “The Sorcerer’s Apprentice.” The former relies on an approach called structured light — a projector fills the area in front of the display with beams of infrared light, then a sensor detects how the beams are distorted by moving objects. The latter takes the so-called time of flight approach, which detects motion by projecting light in front of a display and measuring how long it takes to bounce back.

PrimeSense has a considerable head start in the gesture recognition field thanks to the inclusion of its technology in Kinect — Microsoft sold some 8 million units of the device in 60 days. But games are “just the tip of the iceberg,” said Uzi Breier, executive vice president of PrimeSense. “We’re in the middle of a revolution. We’re changing the interface between man and machine.”

PrimeSense is focused on living room devices, while SoftKinetic is also active in display advertising and medical applications. Breier said other possible uses include automobile security and safety, robotics, home security and rehabilitation.

To this list of uses, I would add another: music. Anyone who has played air guitar, air drums and/or the theremin would agree, I think. Percussion, in particular, would be a natural fit. Perhaps, in the future, conducting itself would be the actual performance and the orchestra would not even be there!

[Photo: Theremin]

EC2 Instance CPU Types

Amazon provides a whole variety of instance types for EC2 but lists their CPU capabilities in terms of “EC2 Compute Units,” where

One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

That’s somewhat helpful for m1.small, but what about c1.xlarge, which has something like 20 EC2 Compute Units? How does that map to real-world hardware? Fortunately, I found a cloud computing presentation by Constantinos Evangelinos and Chris Hill of MIT/EAPS which contains mappings for most of the common EC2 instance types. It’s from 2008 but should still be applicable. Drawing from the slides, we have:

  • m1.small => (1 virtual core with 1 EC2 Compute Unit) => half core of a 2 socket node, dual core AMD Opteron(tm) Processor 2218 HE, 2.6GHz
  • m1.large => (2 virtual cores with 2 EC2 Compute Units each) => half of a 2 socket node, dual core AMD Opteron(tm) Processor 270, 2.0GHz
  • m1.xlarge => (4 virtual cores with 2 EC2 Compute Units each) => one 2 socket node, dual core AMD Opteron(tm) Processor 270, 2.0GHz
  • c1.medium => (2 virtual cores with 2.5 EC2 Compute Units each) => half of a 2 socket node, quad core Xeon E5345, 2.33GHz
  • c1.xlarge => (8 virtual cores with 2.5 EC2 Compute Units each) => one 2 socket node, quad core Xeon E5345, 2.33GHz
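For quick back-of-the-envelope math, the mapping above boils down to a small table of cores and ECUs per core. Here is a short Python sketch that tabulates it; the numbers are just the ones quoted above.

    # Virtual cores and ECUs per core, as quoted in the Evangelinos/Hill slides.
    INSTANCE_ECUS = {
        'm1.small':  (1, 1.0),
        'm1.large':  (2, 2.0),
        'm1.xlarge': (4, 2.0),
        'c1.medium': (2, 2.5),
        'c1.xlarge': (8, 2.5),
    }

    for itype, (cores, ecus_per_core) in sorted(INSTANCE_ECUS.items()):
        total = cores * ecus_per_core
        print('%-10s %d cores x %.1f ECU = %4.1f ECUs total' % (
            itype, cores, ecus_per_core, total))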

MapReduce vs MySQL

Brian Aker talks about the post-Oracle MySQL world in this O’Reilly Radar interview. Good stuff. One section, though, caused me to raise an eyebrow:

MapReduce works as a solution when your queries are operating over a lot of data; Google sizes of data. Few companies have Google-sized datasets though. The average sites you see, they’re 10-20 gigs of data. Moving to a MapReduce solution for 20 gigs of data, or even for a terabyte or two of data, makes no sense. Using MapReduce with NoSQL solutions for small sites? This happens because people don’t understand how to pick the right tools.

Hmm. First of all, just because you have 10-20GB of data right now doesn’t mean you’ll have 10-20GB of data in the future. In my experience, once you start getting into this range of data, scaling MySQL becomes painful. More likely than not, your application has absolutely no sharding/distributed processing capability built into your MySQL setup, so at this point your choices are:

  1. vertical scaling => bigger boxes, RAID/SSD disks etc.
  2. introduce sharding into MySQL and retrofit your application to deal with it
  3. bite the bullet and offload your processing into some other type of setup such as MapReduce

(1) is merely kicking the can down the road.

(2) involves maintaining more MySQL servers, worrying about sharding schemes, setting up a middleman to deal with partitioning, data collation, etc.
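To give a flavor of what (2) pulls into your application code, here is a toy sketch of application-level sharding. The shard list, routing scheme and query flow are invented for illustration and skip over rebalancing, replication and everything else that makes this painful in practice.

    # Toy illustration of application-level sharding (option 2). The DSNs and
    # routing scheme are made up; real setups also need rebalancing, replicas, etc.
    SHARD_DSNS = [
        'mysql://db0.internal/events',
        'mysql://db1.internal/events',
        'mysql://db2.internal/events',
        'mysql://db3.internal/events',
    ]

    def shard_for(user_id):
        """Every query path in the app now has to route through something like this."""
        return SHARD_DSNS[user_id % len(SHARD_DSNS)]

    def group_by_shard(user_ids):
        """Cross-shard queries become scatter/gather: group keys per shard,
        issue one query per shard, then collate the results in the application."""
        plan = {}
        for uid in user_ids:
            plan.setdefault(shard_for(uid), []).append(uid)
        return plan

    print(group_by_shard([1, 2, 5, 8, 13]))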

In both (1) and (2), you still have to worry about many little things in MySQL such as setting up replication, setting up indexes for tables, tuning queries and so on. And in (2), you’ll have more servers running. While it is true that MySQL clustering exists, as does native partitioning support in newer MySQL versions, setting that stuff up is still painful and it’s not clear whether the associated maintenance overhead is worth the performance you get.

It’s not a surprise that more and more people are turning to (3). A Hadoop cluster provides more power out of the box than a sharded MySQL setup, and a more brain-dead simple scaling path: just add more machines! Yes, there are configuration issues involved in a Hadoop cluster as well, but I think they’re far easier to deal with than the equivalent MySQL setup. The main drawback is that (3) only works if your processing requirements are batch-based, not real-time.

It is true that not all of the technologies in the Hadoop ecosystem beyond Hadoop itself are all that mature. BigTable-style solutions like HBase are still not that easy to set up and run, and Pig is still evolving, but Cascading is an amazing library. Additionally, if one uses Amazon’s cloud products judiciously, it may actually be possible to do (3) really cheaply (as opposed to (2), which requires more and bigger machines).

How? Store persistent files (logs, etc.) in S3. Use Elastic MapReduce periodically so you are not running a dedicated Hadoop cluster. Use SimpleDB for your database needs. SimpleDB has limitations (a 2,500-item limit on selects, restricted attributes, strings only), but more and more people (such as Netflix) are using it for high-volume applications. Furthermore, all of these technologies are enabling single entrepreneurs to do things like crawl and maintain big chunks of the web so that they can build interesting new applications on top, something that would have been cost-prohibitive in the older MySQL world. I hope to write more about this soon.
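As a rough sketch of how little glue that stack needs, here is what the S3 and SimpleDB pieces might look like with the boto library. The bucket and domain names are invented, and error handling, retries and select pagination are all omitted.

    import boto
    from boto.s3.key import Key

    # Persistent files (logs, etc.) go to S3; the bucket name is made up.
    s3 = boto.connect_s3()
    bucket = s3.create_bucket('my-log-archive')
    key = Key(bucket)
    key.key = 'logs/2011-01-15/access.log.gz'
    key.set_contents_from_filename('access.log.gz')

    # Aggregates and lookups go to SimpleDB; the domain name is made up.
    sdb = boto.connect_sdb()
    domain = sdb.create_domain('user_stats')
    domain.put_attributes('user:1234', {'views': '42', 'country': 'US'})

    # SimpleDB is strings-only and caps each select, hence the limitations above.
    for item in domain.select("select * from `user_stats` where country = 'US' limit 10"):
        print('%s: %r' % (item.name, dict(item)))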

Brave New World Of Oversharing

From the New York Times:

“Ten years ago, people were afraid to buy stuff online. Now they’re sharing everything they buy,” said Barry Borsboom, a student at Leiden University in the Netherlands, who this year created an intentionally provocative site called Please Rob Me. The site collected and published Foursquare updates that indicated when people were out socializing — and therefore away from their homes.

In this day and age of Too Much Information (TMI), the only real security, it would seem, is of the “security through obscurity” variety. If everyone flooded the web with the minutiae of their day-to-day lives, chances are it would be tough to single out anyone in particular. That approach, however, puts early adopters at risk. No longer would they be just a face in the crowd. Comes with the territory, I guess.

That being said, websites making said TMI possible should probably realize there are still some boundaries best left uncrossed.

Recruiter LOL

[Screenshot: LinkedIn]

The picture says it all really. For the record, the full subject line from the recruiter was “Data Analytics Architect Opportunity – NOT SPAM.”

EC2 Reserved Instance Breakeven Point 2.0

After Amazon’s reserved instance pricing announcement last year, there were quite a few folks writing about the breakeven point for your EC2 instance, i.e. the length of time you’d need to run your instance continuously before the reserved pricing turned out to be cheaper than the standard pay-as-you-go scheme. Looking around, I believe the general consensus was that it would take around 4643 hours, or 6.3 months. See here, here and here, for example.

Around late October of last year, Amazon announced even cheaper pricing for their EC2 instances. However, not seeing any newer breakeven numbers computed in the wake of the lower prices, I decided to post some of my own. These are for one-year reserved pricing in Amazon’s US East (Northern Virginia) data center. All data is culled from the AWS EC2 page.

As we can see, the breakeven numbers have dropped quite a bit, down to 4136 hours for most of the instance types, a drop of about 500 hours. That translates to reaching breakeven about three weeks earlier than before, at roughly 5.7 months. Interestingly enough, the high-memory instances have slightly earlier breakeven points (by about 50 hours or so). Not quite sure why.
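For anyone wanting to redo the math, the breakeven point is just the upfront reserved fee divided by the hourly savings. A quick Python sketch, using m1.small US East numbers that reproduce the figures above; treat the exact dollar amounts as illustrative and check the current AWS price list before relying on them.

    def breakeven_hours(upfront, on_demand_rate, reserved_rate):
        """Hours of continuous use at which the reserved instance becomes cheaper."""
        return upfront / (on_demand_rate - reserved_rate)

    # m1.small, 1-year term (illustrative prices from around that time):
    # old:  $325.00 upfront, $0.100/hr on-demand, $0.03/hr reserved
    # new:  $227.50 upfront, $0.085/hr on-demand, $0.03/hr reserved
    print(breakeven_hours(325.00, 0.100, 0.03))   # ~4643 hours, or ~6.3 months
    print(breakeven_hours(227.50, 0.085, 0.03))   # ~4136 hours, or ~5.7 months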

Netflix + AWS

Recently, I discovered Practical Cloud Computing, a blog run by Siddharth Anand, an architect in Netflix’s cloud infrastructure group. In a recent post, he writes:

I was recently tasked with fork-lifting ~1 billion rows from Oracle into SimpleDB. I completed this forklift in November 2009 after many attempts. To make this as efficient as possible, I worked closely with Amazon’s SimpleDB folks to troubleshoot performance problems and create new APIs.

Why would they need something like this? In another entry titled Introducing the Oracle-SimpleDB Hybrid, Siddharth writes:

My company would like to migrate its systems to the cloud. As this will take several months, the engineering team needs to support data access in both the cloud and its data center in the interim. Also, the RDBMS system might be maintained until some functionality (e.g. Backup-Restore) is created in SimpleDB.

To this aim, for the past 9 months, I have been building an eventually-consistent, multi-master data store. This system is comprised of an Oracle replica and several SimpleDB replicas.

In other words, Netflix is planning to move many of its constituent services into the AWS cloud, starting with its main data repository. This sounded like a pilot project, albeit a massive one, and understandably so given the size of Netflix. If this went smoothly, the immediate upside would be Netflix not spending a fortune on Oracle licenses and maintenance. In addition, AWS would have proved itself able to handle Netflix’s scale requirements.

Evidently things went well, as I came across a slide deck detailing Netflix’s cloud usage further.

Fascinating stuff. From the deck, it appears that in addition to using SimpleDB for data storage, Netflix is using many AWS components for its online streaming setup. Specifically:

  • EC2 for encoding
  • S3 for storing source and encoded files
  • SQS for application communication

I also saw references to EBS (Elastic Block Store), ELB (Elastic Load Balancing) and EMR (Elastic MapReduce).
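Purely as an illustration of how such a pipeline hangs together (a sketch, not Netflix’s actual code), the “S3 for files, SQS for work items” pattern is only a few lines with boto; the bucket and queue names are invented.

    import boto
    from boto.sqs.message import Message

    s3 = boto.connect_s3()
    sqs = boto.connect_sqs()

    # Source files live in S3; work items go onto an SQS queue. Names are made up.
    bucket = s3.create_bucket('video-masters')
    queue = sqs.create_queue('encoding-jobs')

    msg = Message()
    msg.set_body('s3://video-masters/titles/12345/source.mov')
    queue.write(msg)

    # An encoder running on EC2 would poll the queue, pull the source from S3,
    # encode it, upload the output, and only then delete the message.
    job = queue.read(visibility_timeout=600)
    if job is not None:
        source_uri = job.get_body()
        # ... run the encode and upload the result to an output bucket ...
        queue.delete_message(job)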

I think for the longest time, AWS and other services of its ilk were viewed as resources used by startups (such as ourselves) in an effort to ramp up to scale quickly so as to go toe-to-toe with the big guys. It’s interesting to see the big guys get in on the act themselves.

Replacing my MacBook Pro Drive

Back in grad school, my thesis advisor Brian Smith, to his eternal credit, really put the systems into computer systems where our research was concerned. He also placed the same emphasis on our group and how we dealt with our own computers. We joked that, much like Marine Boot Camp, our group members needed to know how to take apart and put together an entire computer in less than a minute in order to be able to graduate!

I tried carrying this DIY ethic into my post-graduate career. There were stumbles, though, when I first started dealing with laptops. In 2000, I permanently crippled my Dell Inspiron while taking it apart to replace the internal hard drive. It never worked quite as well post-operation and was literally held together by liberal applications of masking tape and glue. I still take pride in having been able to sell it to a fly-by-night computer repair operator in Kolkata sometime around 2005. It probably exists in some incarnation somewhere, fueling some kid’s IIT aspirations right about now.

Given my last experience, I was somewhat nervous about replacing my MacBook Pro’s hard drive. No question it was due – two years of hard labor had squeezed the existing Fujitsu down to a cacophonous, joint-on-joint grinding. Each day, I thought, would be its last. However, it wasn’t a straightforward process. Apple provides a how-to on its site if you want to upgrade your MacBook, but for your Pro, no dice.

Luckily, help exists online, in particular here, here, here and here. The basic procedure is the same: you first buy a 2.5″ SATA drive, ideally 7200 RPM for faster performance, even though it’s a bigger drain on the battery. I’ve never gone wrong with Western Digital, so I bought a WD 320GB Scorpio. Next, you’ll need a 2.5″ enclosure – make sure it can handle SATA drives; I learned that the hard way. Finally, you’ll need a Phillips and a T-6 Torx screwdriver. I bought everything except the Phillips screwdriver (I already own a set) from Fry’s. Not the cheapest, but at least they’ll take returns in the first 30 days.

After fitting the WD drive in the enclosure, I hooked it up to the Pro’s USB port and used SuperDuper to completely clone my main HD onto an externally bootable drive. I rebooted the Pro from the external drive to confirm it worked (hold down the Option key when rebooting your Pro and, if multiple bootable devices are available, it’ll ask you to choose).

For the actual physical work, I printed out iFixit’s guide and followed it step by step. You have to take out a lot of screws and some parts. To keep track, I laid all the pages of the guide side by side and placed each set of screws next to the corresponding pictures as I completed each step. This was particularly useful when putting everything back together again. My Phillips screwdriver is magnetized, so it holds screws in place, which was invaluable since many of the screws in the Pro’s casing are tiny and placing them can be tricky.

It was quite a relief when I put everything back together again, powered up my laptop and, after a brief yet agonizing pause, the Apple logo came on. Soon after, the machine booted up quite happily and my desktop appeared. Now I have a souped-up box and have quite possibly saved my company, Delve, a fair chunk of change by not having to replace my laptop. One of the keys on the keyboard is still loose, but I am hoping the tape-and-chewing-gum approach will work in holding it in place!
