Editor’s note: Guest writer Vivek Wadhwa is an entrepreneur turned academic. He is a Visiting Scholar at the School of Information at UC-Berkeley, Senior Research Associate at Harvard Law School and Director of Research at the Center for Entrepreneurship and Research Commercialization at Duke University. You can follow him on Twitter at @vwadhwa and find his research at www.wadhwa.com.
DURHAM, N.C. – LinkedIn founder Reid Hoffman recently said that “if Web 1.0 involved go search, get data and some limited interactivity, and if Web 2.0 involves real identities and real relationships, then Web 3.0 will be real identities generating massive amounts of data.”
Reid is a visionary and certainly had this right. But the information that Reid described is just the tip of the iceberg. We are already gathering a thousand times more data than that. The growth is exponential, and the innovation opportunities are even bigger than Silicon Valley can imagine.
I’m going to explain why I believe this. But let me start with a short history lesson.
Over the centuries, we gathered a lot of data on things such as climate, demographics, and business and government transactions. Our farmers kept track of the weather so that they would know when to grow their crops; we had land records so that we could own property; and we developed phone books so that we could find people. Web 1.0 made it possible to make this information globally available and searchable.
This rapidly evolved into Web 2.0. Now data were being captured on what news we read, where we shopped, what sites we surfed, what music we listened to, what movies we watched, and where we travelled. And “the powers that be” started gathering information about our age, health, education, and socioeconomic status.
With the advent of LinkedIn, Myspace, Facebook, Twitter, and the many other social-media tools, the Web became “social” and “the powers that be” began to learn all about our work history, social and business contacts, and what we like—our food, entertainment, sexual preferences, etc. This is what Reid Hoffman calls Web 3.0.
But there is much, much more happening in the Web 3.0 world. It’s not just “social”.
In 2009, President Obama launched an ambitious program to modernize our healthcare system by making all health records standardized and electronic. The goal is to have all paper medical records—for the entire U.S. population—digitized and available online. This way, an emergency room will have immediate access to a patient’s medical history, the effectiveness of medicines can be researched over large populations, and general practitioners and specialists can coordinate their treatments.
The government is also opening up its massive datasets of information with the Data.gov initiative. Four hundred thousand datasets are already available, and more are being added every week. They include regional data on the efficiency of government services, on poverty and wealth, on education, on federal government spending, on transportation, and more. We can, for example, build applications that challenge schools or health-care providers to perform better by comparing various localities’ performance. And we can hold the government more accountable by analyzing its spending and wastage.
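The kind of locality-comparison application described above can be sketched in a few lines. The dataset shape here (locality, per-student spending, average test score) and the “efficiency” metric are purely illustrative assumptions; real Data.gov datasets vary widely in format and would need their own cleaning and schema handling.

```python
import csv
import io

# Stand-in for a downloaded Data.gov CSV; the columns are hypothetical.
raw = """locality,per_student_spending,avg_test_score
Springfield,9500,78
Riverton,12000,74
Lakeside,8800,81
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Toy "efficiency" metric: test points achieved per $1,000 spent.
for r in rows:
    r["efficiency"] = float(r["avg_test_score"]) / (
        float(r["per_student_spending"]) / 1000
    )

# Rank localities so under- and over-performers stand out.
ranked = sorted(rows, key=lambda r: r["efficiency"], reverse=True)
for r in ranked:
    print(f'{r["locality"]}: {r["efficiency"]:.2f} points per $1,000')
```

A real application would swap the inline CSV for a dataset fetched from Data.gov and a metric chosen with domain experts; the point is only that once the data are public and machine-readable, the comparison itself is trivial to compute.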
There are more than 24 hours of video uploaded to YouTube every minute, and far more video is being collected worldwide through the surveillance cameras that you see everywhere. Whether we realize it or not, our mobile phones are able to keep track of our every movement—everywhere we go; how fast we move; what time we wake. Various mobile applications are beginning to record these data.
And then there is the human genome. We first learned how to sequence it only a decade ago, at a cost of billions of dollars. The price of sequencing an individual’s genome is dropping at a double exponential rate, from millions to about $10,000 per sequence in 2011. More than one million individuals are projected to be sequenced in 2013. It won’t be long before genome sequencing costs $100—or is free—with services that you purchase (as with cell phones).
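To make the “it won’t be long” claim concrete, here is a toy projection. The decay rate is an assumption chosen only for illustration—a constant 10x price drop every two years, starting from the ~$10,000 figure cited for 2011—not a measured cost curve, which in reality is irregular.

```python
# Illustrative projection: assume (purely for the sake of arithmetic)
# that sequencing cost falls 10x every two years from $10,000 in 2011.
cost, year = 10_000.0, 2011
while cost > 100:
    year += 2
    cost /= 10
print(f"Under this assumption, sequencing hits $100 around {year}")
```

Even under a much gentler decline than the historical one, the $100 genome arrives within a handful of years, which is what makes the consumer-scale scenarios below plausible.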
Now imagine the possibilities that could derive from access to an integration of these data collections: being able to match your DNA to another’s and to learn what diseases the other person has had and how effective different medications were in curing them; learning the other person’s abilities, allergies, likes, and dislikes; who knows, maybe being able to find a DNA soul mate. We are entering an era of crowd-sourced, data-driven, participatory, genomic-based medicine. (If you’re interested, Dr. Daniel Kraft, a physician–scientist who chairs the Medicine track for Singularity University, is hosting a program called FutureMed, next month, which brings together clinicians, AI experts, bioinformaticists, medical-device and pharma executives, entrepreneurs, and investors to discuss these technologies.)
You may think that the U.S. leads in information collection. But the most ambitious project in the world is happening in India. Its government is gathering demographic data, fingerprints, and iris scans from all of its 1.2 billion residents. This will lead to the creation of the largest, most complex identity database in the world. I’ll cover this subject in a future piece.
It’s not all wine and roses. There are major privacy and security implications such as those I discussed in this piece. Forget about the “powers that be”: merely the information that Google is gathering today would make Big Brother envious. After all, Google is able to read our e-mails even before we do; it knows who our friends are and what they tell us in confidence; it maintains our diaries and our calendars; it can even guess what we are thinking by watching our surfing habits. Imagine what happens once Google has access to our DNA information.
Regardless of the risks and security implications, the technology will advance.
This period of history has been called the Information Age because it makes available instant access to knowledge that would have been difficult or impossible to find previously. I would argue that we are way beyond this; we’re at the beginning of a new era: the New Information Age.
In previous technology revolutions, companies such as IBM, Microsoft, Oracle, Google, and Facebook were born. Such giants get mired in the technologies that they helped create; they stagnate because they are making too much money and are afraid to obsolete themselves. It is ambitious startups that come along to change the world. I have little doubt that the next Facebook and Google are already being hatched in a garage somewhere.
Get the latest news alerts: Follow WRAL Tech Wire on Twitter.