Over the last 10 years, my team at RIoT and I have interfaced with more than 3,000 companies engaged in the data economy. While those companies serve a myriad of industries with a diversity of technologies and solutions, a limited number of core themes are universally challenging. At the top of the “everybody is concerned” list is data privacy.

Privacy is a hot-button topic for sure, but ask people to define exactly what privacy is, and many have difficulty articulating an answer.

Webster’s Dictionary defines privacy as: “the quality or state of being apart from company or observation.”

Interesting, but not that practically useful. In a legal context, privacy is generally defined as the right of an individual not to have their personal information used maliciously or disclosed publicly, except in some cases deemed “in the public interest.”

In the US, the Privacy Act of 1974 regulates how government agencies may collect, store, maintain, disseminate and use personal information held in government systems. Over the years, specific types of data have warranted additional regulation. Healthcare data, for example, is protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which ensures personal health data is not unfairly used by the insurance industry. HIPAA’s impact reached far beyond healthcare, driving cellular networks to increase encryption and data centers to layer on additional cybersecurity.

There is increasing public pressure to create new regulations for data privacy. Prior regulation did not envision how easy it would be to capture personal information through technology, nor how easily “non-personal” data points could be analyzed together to deduce personal information accurately. Furthermore, there are few clear definitions of what “improper use” entails.
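To make that concrete, here is a toy sketch of how “non-personal” data points combine. Joining an anonymized record set to a public roster on quasi-identifiers such as ZIP code, birth date and sex can re-identify an individual. Every record below is fabricated for illustration.

```python
# Toy re-identification: two datasets that each look "non-personal"
# become identifying when joined on shared quasi-identifiers.
# All records are fabricated for illustration.
anonymized_health = [
    {"zip": "27601", "birth": "1984-03-12", "sex": "F", "diagnosis": "asthma"},
]
public_roster = [
    {"zip": "27601", "birth": "1984-03-12", "sex": "F", "name": "Jane Doe"},
]

def quasi_key(r):
    return (r["zip"], r["birth"], r["sex"])

roster_index = {quasi_key(r): r["name"] for r in public_roster}

for record in anonymized_health:
    name = roster_index.get(quasi_key(record))
    if name:
        # A "non-personal" health record just became personal information.
        print(f"{name} -> {record['diagnosis']}")
```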

I’m not convinced that additional “restrictions on use” of data would be particularly meaningful, or that they would create the halo-effect benefits that legislation like HIPAA did. Most improper uses are already covered by existing laws. When someone maliciously uses our personal data to:

●     Impersonate us – Identity Theft / Criminal Impersonation

●     Improperly access a facility – Trespassing / Breaking & Entering

●     Access money, accounts or goods – Larceny / Theft / Fraud

●     Spread misinformation – Libel / Fraud / Identity Theft

●     Embarrass, humiliate or intimidate – Blackmail / Invasion of Privacy

In my opinion, the real problem is an incongruity between what commercial enterprises see as “proper use” of personal data and what individuals are comfortable with. Consider marketing. Companies would argue that using personal data to improve the quality of advertising actually benefits the consumer and is a far cry from improper use. Many consumers argue that highly personalized advertising infringes on personal privacy. The disagreement comes down to who decides whether the use creates value: the company or the consumer.

It is not technically difficult to execute extremely personalized advertising without breaking privacy laws. I’ll explain by example.

Let’s say I go shopping in a retail store, looking at polka-dot dresses as a gift for my daughter. If I have my phone with me or am wearing a smart watch, those devices will make contact with the store’s wifi network. Portable devices are constantly scanning in the background for available networks and other devices. And since wifi routers rarely move, their locations are well understood. Google, for example, captured and geolocated wifi network names as its vehicles traversed public streets to build Google Street View. Many retailers put RFID tags on products like apparel, creating another opportunity for an electronic handshake. Some locations deploy geofencing technology that tracks the path digital devices follow through a store.
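For the technically curious, here is a minimal sketch of what capturing that background chatter can look like, using the open-source scapy library. The monitor-mode interface name is an assumption, and modern phones randomize their hardware addresses, so treat this as an illustration of the mechanism rather than how retail analytics platforms actually operate.

```python
# Sketch: passively observing Wi-Fi probe requests with scapy.
# Assumes a wireless interface ("wlan0mon", hypothetical) already in
# monitor mode and root privileges; modern phones randomize the MAC
# addresses they broadcast, which limits naive tracking like this.
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2  # transmitter address (often randomized)
        elt = pkt.getlayer(Dot11Elt)
        ssid = elt.info.decode(errors="ignore") if elt else ""
        print(f"device {mac} probing for network '{ssid}'")

sniff(iface="wlan0mon", prn=handle, store=False)
```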

Technically, your phone’s location is private data, and the wifi system, RFID tags and other systems see only a de-personalized identifier. But nearly every application on your device buries a clause in its terms of use allowing it to capture location data and share it with third parties. If I linger too long in front of the polka-dot dress display, an algorithm may infer shopping intent from the time, position and pattern of digital handshakes.
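A toy version of that dwell-time inference might look like the sketch below. The zone names, event format and 90-second threshold are all assumptions of mine, not any vendor’s actual logic.

```python
# Toy dwell-time heuristic: flag shopping intent when a de-personalized
# device lingers in one zone past a threshold. The threshold, zone
# names and event format are illustrative assumptions.
from collections import defaultdict

DWELL_THRESHOLD_S = 90  # assumed cutoff for "lingering"

def infer_intent(events):
    """events: time-ordered (device_id, zone, timestamp_seconds) tuples."""
    entered = {}                # device_id -> (zone, entry_time)
    intents = defaultdict(set)  # device_id -> zones with inferred intent
    for device, zone, ts in events:
        prev = entered.get(device)
        if prev and prev[0] == zone:
            if ts - prev[1] >= DWELL_THRESHOLD_S:
                intents[device].add(zone)
        else:
            entered[device] = (zone, ts)  # new zone: restart the clock
    return intents

pings = [("a1b2", "dress-display", 0), ("a1b2", "dress-display", 120)]
print(dict(infer_intent(pings)))  # {'a1b2': {'dress-display'}}
```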

Continuing the story, let’s say I choose not to purchase anything and head home. Because I never passed through checkout (and certainly never used mobile pay), the store’s point-of-sale system holds no digital fingerprint of my visit. As soon as I arrive home, my phone electronically “bumps into” my smart TV. Remember, each device in this story shares only a de-personalized identifier with the others. At no point has my name, Social Security number or fingerprint been transmitted.

The retailer doesn’t need them. The history of device interactions is enough. The device manufacturers share that interaction history – an easy connect-the-dots analysis of my activity – with the TV network, which now knows that serving an advertisement for polka-dot dresses carries a pretty good probability that I’ll notice. Maybe I’ll have buyer’s remorse that I didn’t purchase at the store and follow through with a purchase now.
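Here is a sketch of how simple that connect-the-dots linkage can be once two logs carry the same de-personalized identifier. The hashing scheme and log formats are my own illustrative assumptions; real ad-tech identity graphs are far more elaborate.

```python
# Sketch: linking "de-personalized" histories across devices. Hashing a
# hardware address yields a stable pseudonym; once the store's log and
# the smart TV's log share that pseudonym, the dots connect themselves.
# The hashing scheme and log contents are illustrative assumptions.
import hashlib

def pseudonym(mac: str) -> str:
    return hashlib.sha256(mac.lower().encode()).hexdigest()[:12]

phone_mac = "aa:bb:cc:dd:ee:ff"  # hypothetical device address

store_log = {pseudonym(phone_mac): ["lingered: polka-dot dress display"]}
tv_log = {pseudonym(phone_mac): ["device joined home network"]}

# Same pseudonym on both sides: no name, SSN or fingerprint was ever
# transmitted, yet the combined history is enough to target an ad.
for pid in store_log.keys() & tv_log.keys():
    print(pid, store_log[pid] + tv_log[pid])
```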

It would be extremely difficult to craft regulation restricting the use of device location in all cases, or throttling background device-to-device communication. These capabilities are genuinely useful and at times necessary to deliver services that we want and depend on every day.

Where I think regulation should go is in the direction of giving consumers control over how their data is used. End user license agreements protect the commercial interests of the companies using the data, not the interests of customers. Many devices and software programs won’t install or operate at all if users don’t surrender to the company’s terms. And when a company does allow users to exert control over how data is used, the onus is nearly always on the user to take extra action to restrict data use from the default state.

I think it would be useful to turn the tables and require that services be designed to function in a default state that shares no personal or device data outside the application itself. Users could opt in to additional uses of their data, of course, but only where they truly see the value of consciously making their data available.
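As a sketch of what “private by default, opt in explicitly” could look like inside an application, consider the following. The field names are hypothetical; the point is the default state, not a real API.

```python
# Sketch: "private by default" consent settings. Every sharing flag
# starts False, and data leaves the app only after an explicit opt-in.
# All field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DataSharingConsent:
    share_location: bool = False
    share_device_interactions: bool = False
    share_with_third_parties: bool = False

    def opt_in(self, setting: str) -> None:
        """Flip a single flag only on an explicit user action."""
        if isinstance(getattr(self, setting, None), bool):
            setattr(self, setting, True)

consent = DataSharingConsent()    # default: nothing shared
consent.opt_in("share_location")  # user consciously trades data for value
print(consent)
```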

This position puts control in the hands of the consumer. It puts more work on industry to build trust with users through transparency and value creation. When companies offer opportunities that customers truly find valuable, users will offer their data.

A major counterargument from the tech sector is that if companies cede control of private data to the user, rather than taking default control themselves, it will diminish the overall market value for industry. Many applications that today are free would need to become paid services, driving up costs for consumers.

I’ll offer a response that is business 101. True market validation has always been defined as delivering products that people are willing to buy. It is time to stop offering “free” tech as a smokescreen for “I’ll take your data.”