On Friday, Russell Garland of the WSJ wrote about the “Data Tsunami” that is coming due to increased volumes of data being generated by everything from the Facebook Social Graph to the emerging Interest Graph to genomics (to name just the most obvious growth drivers). “Tsunami” is probably too small a word (unless you are talking about Jupiter-scale growth). Take a look at these interesting numbers:
- The average human brain can take in and remember about one byte per second (two gigabytes over an average life time, including sleep)
- The amount of data storage in the world in 2000 was roughly 300,000 terabytes—about 0.03 “brains’ worth” of storage for every person on Earth[1,2]
- This amount grew to approximately 1,200,000,000 terabytes by 2010—about 90 “brains’ worth” of storage for every person on Earth.[2,3] No wonder we feel so overloaded with data!
- By 2020, this will get even more outlandish. We will have 36,000,000,000 terabytes of data—about 2,400 “brains’ worth” of storage for every person on Earth.[2,3]
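The “brains’ worth” arithmetic above can be checked with a quick back-of-the-envelope script. A minimal sketch: the world population figures below are my rough assumptions for each year, and the 2010 total is read as 1.2 billion terabytes (about 1.2 zettabytes), which is the reading that makes the ~90 brains’ worth figure come out.

```python
# Back-of-envelope check of the "brains' worth" figures.
# Assumptions (not from the article): rough world population per year.

BRAIN_BYTES = 2 * 10**9  # ~2 GB taken in over a lifetime, per the figure above


def brains_per_person(terabytes, population, brain_bytes=BRAIN_BYTES):
    """Express total world storage as lifetime 'brains'
    worth' of data per person."""
    total_bytes = terabytes * 10**12
    return total_bytes / population / brain_bytes


# (year, total storage in terabytes, assumed world population)
estimates = [
    (2000, 300_000, 6.1e9),
    (2010, 1_200_000_000, 6.9e9),
    (2020, 36_000_000_000, 7.8e9),
]

for year, tb, pop in estimates:
    print(f"{year}: ~{brains_per_person(tb, pop):,.2f} brains' worth per person")
```

Running this reproduces the order of magnitude of each bullet: a few hundredths of a brain in 2000, tens of brains in 2010, and thousands of brains by 2020.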
Managing storage of this volume of data will be an interesting challenge for companies like EMC, IBM and Oracle (one aided greatly by Moore’s Law). However, being able to understand it will require a complete reinvention of how we process, explore and analyze data.
These new technologies will be as far ahead of today’s data warehousing and reporting technologies as the spreadsheet was ahead of manual ledgers. They will use non-linear rule engines and artificial intelligence to find trends and determine which data are most important. They will use new data visualization techniques, leveraging everything from 3D to augmented reality (AR) technology to enable human-scale brains to explore results and conduct analyses. This, in turn, will drive new physical interfaces, from the desktop to mobile to even wearables.
It should be a very interesting ride!