Jeff Johnson (My apps, PayPal.Me, Mastodon)

Technology is never a substitute for consent

January 4, 2025

This is a follow-up to my recent blog posts Apple Photos phones home on iOS 18 and macOS 15 and The internet is full of experts. I've read a lot of the discussion about and responses to my blog posts, which has forced me to hone my own thinking and arguments on the subject. I characterized Apple's new Enhanced Visual Search feature as a privacy violation, but my criticism was perhaps too vague, because there are different ways of understanding the notion of privacy. One natural way of understanding privacy is as synonymous with secrecy. According to this interpretation, if my data is private, then nobody except me can read my data. However, on reflection, I think that my primary objection to Enhanced Visual Search was inspired by a different, though related, understanding of privacy. (You might say that all the meanings of privacy have a "family resemblance," to use Wittgenstein's term.) The right to privacy can also mean the right to private ownership.

According to this alternative interpretation of privacy, the data on my computers is mine, to do with as I please, not as anyone else pleases. My computers, and the data on my computers, should not leave my possession, the "privacy" of my own home, without my consent. If others want access to my data, I may grant permission, but only if they ask and I agree.

With Enhanced Visual Search, Apple appears to focus solely on the understanding of privacy as secrecy, ignoring the understanding of privacy as ownership, because Enhanced Visual Search was enabled by default, without asking users for permission first. The justification for enabling Enhanced Visual Search by default is presumably that Apple's privacy protections are so good that secrecy is always maintained, and thus consent is unnecessary.

My argument is that consent is always necessary, and technology, no matter how (allegedly) good, is never a substitute for consent, because user privacy entails user ownership of their data. The problem with many of the responses to my original blog post is that they place the burden of proof on users such as me to explain why Apple's technology doesn't maintain perfect secrecy. We mere users are at a massive disadvantage in this argument, because we're not experts on subjects such as homomorphic encryption, and thus we struggle to understand the technical details and raise technical objections to the implementation.

I'm not claiming that Apple's privacy protection technology is flawed. I have no idea whether it's technically flawed. I do think it's reasonable for users to worry that any new Apple technology might be flawed in some way, given Apple's atrocious lack of quality assurance, as well as the endless list of security vulnerabilities. In any case, though, I think the question of technical perfection is mostly a red herring. Technology is never a substitute for consent. The following is not a sound argument: "Apple keeps your data and metadata perfectly secret, impossible for Apple to read, and therefore Apple has a right to upload your data or metadata to Apple's servers without your knowledge or agreement." There's more to privacy than just secrecy; privacy also means ownership. It means personal choice and consent.

The point of obtaining user consent is not to make users understand all of the technical details of the technology involved. Ideally, you could educate users in this way, but practically speaking, in the majority of cases, you probably won't. The point is to respect the autonomy of users, their ownership rights. Ultimately, it's the responsibility of the individual user to become fully technically informed or not, but it's the responsibility of the technology vendor to ask for permission to access user data, regardless of how technically informed the user might be.

There's a straw man reaction to my argument, which is that I'm somehow demanding specific, separate user consent for every individual HTTP request that ever leaves the user's computer. Of course, that's ridiculous! It's common knowledge that a web browser, for example, connects to the internet, and thus entering a URL in the address bar or clicking a link can be interpreted by the browser as user consent to do what is necessary to load the website in the browser, including the transmission of packets over the internet. On the other hand, simply using a web browser does not imply user consent for sending usage data directly to the browser vendor, even if such data is (allegedly) "anonymized." Analytics require separate consent.

Consent fatigue—when users become overwhelmed with the number of permission requests and end up perfunctorily granting all permissions in order to get work done—is a legitimate problem. I take this problem seriously, and I personally think that Apple presents permission requests too often, needlessly. However, a lot of these permission requests are for actions that the user has explicitly initiated, but the computer paternalistically interrupts to ask, "Are you sure you really want to do the thing that you're already trying to do?" This form of annoyance is entirely different from the computer initiating actions on its own, in the background, without any user action, and without any user knowledge. Enabling Enhanced Visual Search was something that Apple wanted to happen, not something that I ever explicitly requested myself. As I said before, I've never even tried to search my own photos library for landmarks. I'm not interested in that feature.

Avoiding consent fatigue is a matter of user interface design. Isn't Apple supposed to be good at user interface design? Indeed, Apple was good at user interface design, under the leadership of Steve Jobs. Not so much under the leadership of Tim Cook. During the Jobs era, Apple parodied the consent fatigue of Windows Vista. Again, though, avoiding consent fatigue is not the same as avoiding consent entirely. There's no excuse for ignoring the individual user's preferences. I think the key to obtaining consent while avoiding consent fatigue is to judiciously "package" consent so that requests are minimized as much as possible, presented only at appropriate times without getting in the way of the user's workflow. It's a difficult problem, but it's a solvable problem with design skill. No amount of engineering skill, no advances in privacy and security technology, can "solve" the problem by making consent obsolete. That's not how it works.

Appendix: Technical Details

I said above that the question of technical perfection is mostly a red herring. Nonetheless, I think the technical debate over Enhanced Visual Search has been oversimplified, and I wanted to address that, without distracting from my main argument, so I'm putting this in an appendix, to be considered independently. The oversimplification is that the data from your photos—or metadata, however you want to characterize it—is encrypted, and thus there are no privacy issues. Not even Apple believes this, as is clear from their technical papers. We're not dealing simply with data at rest but rather data in motion, which raises a whole host of other issues. From Apple's machine learning research blog post:

Identifying the database shard relevant to the query could reveal sensitive information about the query itself, so we use differential privacy (DP) with OHTTP relay — operated by a third party — as an anonymization network which hides the device's source IP address before the request ever reaches the Apple server infrastructure. With DP, the client issues fake queries alongside its real ones, so the server cannot tell which are genuine. The queries are also routed through the anonymization network to ensure the server can’t link multiple requests to the same client.

Thus, the question is not only whether Apple's implementation of Homomorphic Encryption (and Private Information Retrieval and Private Nearest Neighbor Search) is perfect but whether Apple's entire apparatus of multiple moving parts, involving third parties, anonymization networks, etc., is perfect. I think some skepticism is reasonable and warranted, especially for brand new technology that hasn't been validated by external experts.

I believe that OHTTP, Oblivious HTTP, is the same as or very similar to the technology behind iCloud Private Relay (which, incidentally, as far as I can tell, is the cause of a majority of the web page loading issues that many users unfortunately experience in Safari). With iCloud Private Relay, Apple partners with an internet provider, typically Cloudflare, and the theory is that a user's internet traffic is kept private because it first goes through two hops, Apple and its partner, with each hop receiving only partial information, and thus neither of the hops has enough information to identify both the user's IP address and sent data. I've always been somewhat skeptical of this scheme, because the two hops, the two companies, are already acting in partnership, so what is there technically in the relay setup to stop the two companies from getting together—either voluntarily or at the secret command of some government—to compare notes, as it were, and connect the dots?
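The split of knowledge between the two hops can be modeled in a few lines. This is a toy model, not Apple's or Cloudflare's implementation: the names are made up, and a hash stands in for real onion-style encryption. The point is only to show what each hop's logs contain, and why combining the two logs would defeat the scheme.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class HopLog:
    """What a single relay hop is able to observe and record."""
    seen: list = field(default_factory=list)

def two_hop_relay(client_ip: str, destination: str, payload: str,
                  ingress: HopLog, egress: HopLog) -> None:
    """Toy model of a dual-hop relay in the style of iCloud Private Relay.

    The ingress hop sees the client's IP but only an opaque blob; the
    egress hop sees the destination and payload but only the ingress
    hop's address, never the client's IP.
    """
    # Stand-in for encryption: the ingress hop can't read the destination.
    blob = hashlib.sha256(f"{destination}|{payload}".encode()).hexdigest()
    ingress.seen.append({"from": client_ip, "blob": blob})
    egress.seen.append({"from": "ingress-hop", "to": destination,
                        "payload": payload})
```

Neither log alone links `client_ip` to `destination`. But a simple join of the two logs, by timestamp or by colluding operators, reconstructs the full picture, which is exactly the scenario I'm skeptical about: nothing technical in the relay design prevents the two partners from comparing notes.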
