Obscuring data from Service Providers

In the midst of my thoughts on enabling people to own their content, I stumbled upon the problem of limiting the Service Providers' ability to cross-reference users' data. In the current situation, Service Providers can generate statistics from users' data. It enables them to sell contextualized ads, check relations between people, analyze photos, etc. Users have absolutely no way to prevent that from happening. Even if people store content outside of those sites, they lose control of their data as soon as they hand it over to a given site.

Some users may accept that exploitation because they get services, or even direct earnings, in return. As of today, websites like Facebook and others get it for free.

One can argue that this is precisely their business model, but I would not agree. Let me use a metaphor to compare that situation to other businesses. Imagine you are a large oil company and you want to exploit an oil field you discovered abroad. Could you imagine being able to exploit it without paying any license fees to the government of the state it belongs to? Could you imagine not paying fees or taxes on the volume you extract from that field?

If you answered no to both questions, then you have understood my point: you are, right now, an open oil field on which absolutely no fees or taxes are paid. It is in fact an unfair, unbalanced business, and it gives you a simple explanation for the current leap in profit of those large service providers. In a way, users could ultimately be considered unpaid employees, but no, guess what, you are only users :).

I hope the reason I am thinking about this is clear to you now (even if the digression was a bit long for most of you…).

One can ask: how much is my data worth? I would answer: not much taken alone, but a lot when all users' data is put together. Some can answer that question precisely: your online service providers and ad brokers.

So now I can come to my idea for today: content-filtering browsers.

Imagine for a second the whole Internet suddenly speaking Latin, with blank media everywhere!? A dumb browser would only display that fake content to you, but a smart one would retrieve much more. It could query your content provider for your real content, using metadata embedded in the fake content (the Latin text and blank media).

Yes, the idea is that service providers would only store fake content containing metadata, and only people with the right identity could retrieve the real content from your personal store.
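
To make that concrete, here is a minimal sketch, in TypeScript, of what the "smart browser" side could look like. The osafe: marker format, the content provider API and the token mechanism are my own assumptions for illustration, not an existing standard.

```typescript
// Hypothetical sketch: how a browser (or extension) might resolve real content
// hidden behind a fake-Latin placeholder. The "osafe:" marker and the personal
// content store API are assumptions, not an existing standard.

const MARKER = /osafe:(https:\/\/[^\s"]+)/; // short URL embedded in the fake text

async function resolveRealContent(placeholder: string): Promise<string> {
  const match = placeholder.match(MARKER);
  if (!match) {
    // No marker: this is ordinary content, display it as-is.
    return placeholder;
  }
  // Ask the user's own content provider for the real content.
  // Proving "the right identity" is reduced here to a bearer token
  // issued by the content provider; any real scheme would do.
  const response = await fetch(match[1], {
    headers: { Authorization: `Bearer ${await getUserToken()}` },
  });
  if (!response.ok) {
    // Not authorized: keep showing the fake content, as described above.
    return placeholder;
  }
  return response.text();
}

// Placeholder for whatever credential mechanism the content provider uses.
async function getUserToken(): Promise<string> {
  return "user-access-token"; // assumption: stored by the browser/extension
}
```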

What difference does it make?

The basic answer is: by putting your data under a content service's control, you are able to control who accesses it. You can put authorizations on it, and you can even monetize it (by logging who requests it and relying on a defined contract with them). This monetization could end up as real money, or as any other kind of currency you may spend on services. You may authorize your current service providers (like Facebook) to access the real resources, but you are also able to revoke that authorization (answering with fake content, or with an error if you want to be fair).
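
As an illustration, here is a rough sketch of what the content provider's side of that access control could look like. The data structures and the per-item set of authorized clients are hypothetical; the point is only that the logging and revocation described above are simple to implement.

```typescript
// Hypothetical sketch of the content provider's access check: every request is
// logged (the basis for monetization) and callers without an authorization
// simply get the fake content back. Names and structures are illustrative.

interface StoredItem {
  realContent: string;
  fakeContent: string;            // latin text / blank media carrying metadata
  authorizedClients: Set<string>; // e.g. "facebook.com", a friend's identity…
}

const accessLog: { client: string; itemId: string; at: Date }[] = [];

function serveContent(items: Map<string, StoredItem>, itemId: string, client: string): string {
  const item = items.get(itemId);
  if (!item) throw new Error(`unknown item ${itemId}`);

  // Log every access: this record is what a billing contract could be built on.
  accessLog.push({ client, itemId, at: new Date() });

  // Revoking an authorization is just removing the client from the set;
  // afterwards the caller only ever sees the fake content again.
  return item.authorizedClients.has(client) ? item.realContent : item.fakeContent;
}
```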

How could we do that?

Simple: remember how you publish content on Twitter? Yes, short URLs. In our case, those short URLs would point to your real media or textual content (possibly compressed), hosted at a content provider URL.

At the editing stage, your browser should provide a function to type or upload data when editing content on your service provider's site. It should then submit your real content to your content provider and retrieve a short URL in return. It would then embed that short URL in basic Latin text, or inside the metadata of a fake image or media file that it uploads. The good news is that, even when you are editing complex content, that content basically lives in the browser, so the browser can always capture it and replace it with fake content (even in Google apps…).
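
Here is a rough sketch of that editing flow, again in TypeScript. The content provider endpoint and the osafe: marker are illustrative assumptions, just to show how little the flow asks of the browser.

```typescript
// Hypothetical sketch of the editing step: the browser keeps the real content,
// sends it to the user's content provider, and hands the service provider only
// a fake latin text carrying the returned short URL. The endpoint URL and the
// "osafe:" marker format are assumptions.

const LOREM = "Lorem ipsum dolor sit amet, consectetur adipiscing elit.";

async function publishThroughContentProvider(realContent: string): Promise<string> {
  // 1. Store the real content on the user's own content provider.
  const response = await fetch("https://content-provider.example/items", {
    method: "POST",
    headers: { "Content-Type": "text/plain" },
    body: realContent,
  });
  const { shortUrl } = (await response.json()) as { shortUrl: string };

  // 2. Build the fake content the service provider will actually receive:
  //    ordinary-looking latin text with the short URL as embedded metadata.
  return `${LOREM} osafe:${shortUrl}`;
}
```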

Why does it require a change in browsers?

The basic answer is: the only way to prevent others, and your service providers in particular, from reading your data without your permission is to make sure they never receive the real content. There is no other way to achieve that than extending the browser…

How much work does that require?

  • Define a new standard for data indirection (a minimal sketch follows this list): simple.
  • Implement open-source libraries providing client and server reference implementations: easy.
  • Upgrade the four main browsers with the new protocol for media and data editing: medium to complex. Open-source browsers like Firefox should come first… Extensions could be developed to demonstrate the principles.
  • Bring standard content services into the infrastructure: new service providers will be happy to run this new business (and even the current ones might like it).
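
To give an idea of how simple that indirection standard could be, here is a minimal sketch of the two record shapes it might have to define. All field names are illustrative assumptions, not an existing specification.

```typescript
// A minimal sketch of what the "data indirection" standard could specify:
// the shape of the placeholder metadata and of the content provider's answer.

// What the service provider stores instead of the real content.
interface IndirectionRecord {
  version: 1;
  shortUrl: string;   // points into the user's content provider
  mediaType: string;  // MIME type of the real content, e.g. "image/jpeg"
  checksum?: string;  // optional integrity check of the real content
}

// What the content provider returns to an authorized browser.
interface ResolvedContent {
  mediaType: string;
  body: string;       // the real content (base64-encoded for binary media)
}

// Example placeholder a browser could hand to the service provider:
const example: IndirectionRecord = {
  version: 1,
  shortUrl: "https://content-provider.example/i/abc123",
  mediaType: "text/plain",
};
```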

Provided that the new protocol lets you export your data whenever you want and import it into another content service provider, it enables people to take back control over their content and over the service providers they choose to trust.

For a presentation of the content service provider idea, please read my OpenSafe article.
