Let's say a company is thinking of using your private data in a new way. This could be a new feature that collects more private data, or uses already-collected data in new ways, like face recognition. Or it could be a policy change, like customising ads based on browsing history. In any case, not everyone will be okay with the change. How should the company disclose it?
There is a spectrum of approaches it can take, from the least privacy-respecting to the most. The more sensitive a change is, the higher the level of disclosure warranted. Get it wrong, and you lose users' trust.
I'm not taking a stand on whether a change that reduces your privacy is good or bad. That depends on the feature, and on the person making the judgment. I let Google Maps track where I am, so that it shows me a customised and more useful map that shows the places I care about. But someone else may take a different view. Phrasing it as a binary question, "Is it right for Google Maps to track my location?" is often misguided. When it comes to privacy, there's often no one right answer.
Rather, the question should be, "Did they notify me appropriately?" And "appropriately" depends on how sensitive the change is. I've identified several levels of disclosure, ordered from the least privacy-respecting to the most.
Put differently, when a user has a negative opinion of a change you're making — they see no value, and only a loss of privacy — the lower the level of disclosure you choose, the more you piss that user off and lose their trust.
Which is not to say that you should automatically choose the highest level of privacy disclosure, because then fewer users will use your feature, missing the benefits it provides.
Level 1: We'll do something with your private data, and you have no choice about it: Here, the company makes a change that reduces your privacy, and doesn't let you opt out. An example is US cellular network Verizon's supercookies. If you as a user disagree with the decision, you'll find that this is the worst approach for the company to take: It signals that the company will do whatever it feels like, and if you don't like it, screw you.
A real-world analogy is finding out that a friend has "borrowed" your car for a few hours without telling you, and when you tell him not to do it again, he says he doesn't care what you think.
Level 2: We'll let you opt out, if you discover it: The company provides an opt-out, but doesn't proactively inform users. If you discover it, you can toggle the setting to opt out. The problem is that hearing about bad news from a third party reduces your trust in the company.
This is like the aforementioned car-borrowing friend agreeing not to do it again when confronted about it. He still didn't bother to tell or even ask you. You heard about it from a third person.
Level 3: We'll inform you after the fact: Here, you get a dialog box that says:
We have been doing XYZ.
[Opt out] [OK]
This still reduces trust, because the damage is already done. I once found myself thinking, "Assholes. They invited themselves to misuse my private data, violating my trust in them, and are now telling me after the fact."
This is like the aforementioned friend saying, "Hey, I took your car without asking you, and I've now returned it. I'll take it again the next time I need it, unless you tell me not to."
Level 4: We'll inform you ahead of time: Here, you get the same dialog box as before, but before the change goes into effect. The default — if you just press OK or close the dialog box or ignore the notification — is still the same: the company goes ahead and uses your private data for whatever purpose you were notified about. The only difference is that you're notified ahead of time, before the damage is done.
If you disagree, this still feels wrong. I once found myself thinking, "These jerks are out to violate my privacy. Good thing I've read the fine print to guard myself from them." When your users think they have to guard themselves from you, you're in an antagonistic relationship. You've failed, at least for that user.
This is like the aforementioned friend saying, "Hey, I'll take your car this Saturday."
Level 5: We'll ask you ahead of time (opt-in): This time, the default is more conservative. It's an opt-in, rather than the opt-out of earlier levels. If you do nothing, your private data won't be used for whatever purpose you've been asked about.
This is highly privacy-respecting, and will piss off hardly anyone. But many people will miss the benefits of the feature, since the majority stick with the defaults.
This is like the aforementioned friend asking, "Can I borrow your car this Saturday?"
Level 6: We won't do it: Some things are judged to be such bad ideas that the company doesn't do them at all, not even as an opt-in. For example, Google doesn't sell your search history to the highest bidder.
In summary, these are six levels of increasing privacy disclosure. Choose the right level depending on how sensitive you judge the change you're making to be, and you win users' trust. Get it wrong, and you lose users' trust, and subject yourself to bad press and government action.
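Since the six levels form an ordered spectrum, they can be encoded as an ordered enum, which makes "is this disclosure at least as privacy-respecting as the bar we set?" a simple comparison. A minimal sketch (the names and the `meets_bar` helper are my own invention for illustration, not from any product or policy framework):

```python
from enum import IntEnum

class DisclosureLevel(IntEnum):
    """The six disclosure levels, ordered from least to most privacy-respecting.
    Names are hypothetical shorthand for the levels described above."""
    NO_CHOICE = 1        # do it; no opt-out at all
    SILENT_OPT_OUT = 2   # opt-out exists, but users aren't told about it
    INFORM_AFTER = 3     # notify after the fact, with an opt-out
    INFORM_BEFORE = 4    # notify ahead of time; default is still opted in
    ASK_FIRST = 5        # opt-in: nothing happens unless the user agrees
    DONT_DO_IT = 6       # don't ship the change at all

def meets_bar(chosen: DisclosureLevel, required: DisclosureLevel) -> bool:
    """True if the chosen level is at least as privacy-respecting as required.
    IntEnum members compare by value, so this is just >=."""
    return chosen >= required
```

For example, `meets_bar(DisclosureLevel.ASK_FIRST, DisclosureLevel.INFORM_BEFORE)` is true: an opt-in clears a bar that only demands advance notice.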