🇬🇧 Howdy English speaker, please note that this article was translated from French using Lumo, a privacy-friendly AI. Forgive me for any typos in the translation, and the occasional weird passive sentence that we use so much in la langue de Molière.


The centralized, uncontrolled accumulation of information about citizens creates the conditions for an authoritarian regime. Just ask former East Germans. That’s why, in a democracy, it is the people who hold the right to privacy and the government that must act in broad daylight. It cannot be otherwise. (The Guardian, 30 January 2026)

We need to talk about something.

Have you ever felt that your phone is listening to you? That an ad appearing two hours after you discussed that exact product or service with your family is too implausible a coincidence, and that you must therefore be spied on all day?

In reality, your phone isn’t listening. Alphabet (Google), Amazon, Meta (Facebook), Apple, Microsoft and the like—what we used to call the GAFAM a few years ago—don’t need to. They already know everything about you, without having to waste massive amounts of energy scanning 5 billion (the number of smartphones in circulation) × 16 hours (roughly a waking day) of hypothetical audio recordings every day. It is far easier to obtain more information by other means.

Starting premise: the information economy

These companies build their business on the myriad of data we generate ourselves throughout the day. The social networks we consume know all our tastes; the cookies and trackers present on the majority of websites know all our browsing; and our devices, as they come within range of Wi‑Fi networks and connect to cell antennas, know all our movements. All of this allows them to build a user profile made of thousands of data points. Data collection thus does not even need microphones or cameras; it is passive, powered by the aggregation of our data and its algorithmic analysis.

A single example: the mind‑blowing quality of Spotify recommendations, offered by an algorithm trained solely on your listening data, which knows you better than you know yourself. Now imagine the size of the profile stored on Google’s servers, which hold your emails, your calendar, your photos, your trips recorded with Google Maps, your search history and your long conversations with its Gemini AI (the equivalent of ChatGPT).

So, what are these data used for? To serve you targeted advertising, which has a higher chance of generating a click than a random ad. Our information truly functions as a currency, which we exchange for the provision of a service, most often free. Personally, the amount of information collected behind my back does not pose an existential problem for me. I’m the first to listen to my Spotify “Wrapped” summary in December, after all.

My problem lies elsewhere. My discomfort is larger. And this long introduction had only one purpose: to make you aware of the staggering amount of data we produce, how easy it is to collect, and the ignorant or passive acceptance with which we greet that collection.

Toward control of information

News outlets and TV channels have not lacked topics for several months now, in a world constantly boiling over. It is unsurprising, then, that this matter receives little attention, and consequently raises little general concern.

This issue is the constant reinforcement of control over our information by governmental bodies: national, supranational and foreign.

The European example: Chat Control

Chat Control map

The European Union has been looking for a solution for several years to fight child sexual abuse. It is pushing to implement “Chat Control”: an automatic, mandatory scan of every message sent by every citizen. To understand all its implications, a short technical detour is necessary.

Encryption. When a conversation is unencrypted, the information travels “in the clear” over the network. Any observer on the device (for example, the Instagram app) or on the network (for example, the NSA) can read and store the conversation. When a conversation is encrypted, a mathematical algorithm converts the messages into a string of characters decipherable only with a key. Only the sender and the recipient possess this key, making any third‑party decryption attempt computationally infeasible. This is excellent practice for protecting the right to privacy: correspondence should be seen only by its intended recipients.
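
To make this concrete, here is a minimal sketch of symmetric encryption in Python using the cryptography library. It is an illustration only, under the assumption of a key already shared between the two parties; real messengers such as Signal or WhatsApp layer far more elaborate protocols (asymmetric key exchange, ratcheting) on top of this principle.

```python
from cryptography.fernet import Fernet

# A key shared only by the sender and the recipient.
key = Fernet.generate_key()
cipher = Fernet(key)

# What travels over the network is unreadable to any observer.
ciphertext = cipher.encrypt(b"See you at 8pm, usual place")
print(ciphertext)  # opaque bytes to the Instagram app or the NSA

# Only a holder of the key can recover the original message.
print(cipher.decrypt(ciphertext))  # b'See you at 8pm, usual place'
```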

One flagship proposal of Chat Control is the mandatory comparison, by every messaging app, of the content sent, especially photos, against a database of known child sexual abuse material. Every item that matches an entry in the database is automatically reported to the police. A good idea at first glance…?
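
To situate where this scanning would sit, here is a deliberately naive sketch assuming an exact hash comparison. Real proposals rely on perceptual fingerprints (PhotoDNA‑style hashes that survive resizing and recompression) rather than the SHA‑256 used here, and the database entry below is a made‑up value; only the matching logic and its position in the pipeline matter.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def must_report(photo: bytes) -> bool:
    """Fingerprint an outgoing photo and check it against the database."""
    return hashlib.sha256(photo).hexdigest() in KNOWN_HASHES

# The check runs on every photo *before* encryption, which is exactly
# where the confidentiality promise breaks.
outgoing_photo = b"...raw bytes of the photo being sent..."
if must_report(outgoing_photo):
    print("Match: automatic report to the police.")
else:
    print("No match: the message goes through.")
```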

The first problem is the breaking of encryption. To scan a message, a third party must be allowed to compare it to the database. The promise that only the sender and recipient can read it is therefore broken 1. Weakening encryption also opens the way to increased hacking and disclosure risks (financial, judicial, journalistic, medical data).

The second problem is precedent. Once such a system is authorized and technically implemented to target child abusers, whoever controls the database can add new content to it without prior audit. The ideal candidate? An executive power seeking expanded control. Tomorrow, simply by adding new potential matches to the database, France under Marine Le Pen could decide to target immigrants, Hungary under Viktor Orbán could target LGBT people, and conservative Poland could target and report attempts at abortion.

The third problem is false positives. Any suspicious image, even one sent within a legal and consensual framework (e.g. vacation photos, or an exchange between two teenagers), could potentially be reported to police and judicial authorities. Beyond casting false accusations on innocent people, experts agree that the gigantic number of false‑positive reports would be counter‑productive: it would tie up massive resources examining each case, resources better spent on targeted, court‑ordered investigations. Some 100 billion messages are sent on WhatsApp daily. Even assuming 99.99 % accuracy (a false‑positive rate of only 0.01 %), ten million messages would have to be examined by humans each day. And that is only WhatsApp!
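
For the skeptical, the back‑of‑the‑envelope arithmetic:

```python
messages_per_day = 100_000_000_000  # WhatsApp alone
false_positive_rate = 0.0001        # i.e. 99.99 % accuracy

flagged_per_day = messages_per_day * false_positive_rate
print(f"{flagged_per_day:,.0f} messages to review every day")
# -> 10,000,000 messages to review every day
```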

The fourth problem (and not the last, but I promise to stop here) is the highly contestable legality of this measure. Mass surveillance without proportionate targeting and judicial oversight is probably illegal, violating fundamental rights guaranteed by numerous texts such as the EU Charter (articles 7 and 8) or the French Constitution (revision of Article 66 in 1995, revision of Article 2 in 1999).

The Chat Control example is revealing. It sets a precedent for generalized surveillance, starting from a poorly studied, technically flawed idea that contradicts fundamental human‑rights texts.

The international example: internet segmentation through age verification

On Tuesday 27 January, the French National Assembly voted to adopt the bill banning social networks for those under 15. On paper, another seemingly good idea. We know the problems caused by social networks, especially for developing brains and psyches that need time for other educational and social activities.

France would be only the second (and likely not the last) country to adopt this ban, after Australia. The wording of the idea emphasizes protecting minors, a stance that is hard to argue against.

The main problem appears when the idea is flipped: to access a social network, every citizen will have to prove their eligibility by exposing personal data—their age, and possibly their full identity—to the network. Among the envisaged solutions:

  • A “proof” via facial scan and AI analysis: this exposes a user (say, a 16‑year‑old) to machine error (being estimated at 14) and to denial of access to a site they should be allowed to use.
  • A proof by sending an identity document: in a simplified, naïve implementation, this exposes the user to probable future data leaks, which have been occurring for years and do not spare government services (URSSAF, France Travail). It also excludes those without an identity document. Finally, it attacks privacy, because the website would gain access to the person’s identity. If decision‑makers choose this method, it is essential to devise a serious technical implementation that could partially solve this problem 2 (see the sketch after this list).
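
To give an idea of what the “double anonymity” of footnote 2 could look like, here is a heavily simplified sketch: an accredited verifier checks the ID document once and signs an anonymous “over 15” token, which any network can then verify without ever learning the user’s identity. All names here are hypothetical, and a production scheme would need blind signatures or zero‑knowledge proofs so that the verifier and the network cannot collude to link the two steps.

```python
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Accredited verifier (hypothetical "age authority") ---
verifier_key = Ed25519PrivateKey.generate()
VERIFIER_PUBLIC_KEY = verifier_key.public_key()  # published, known to every network

def issue_age_token() -> tuple[bytes, bytes]:
    """Called once, after the verifier has checked an ID document.
    The token proves "over 15" but carries no identity, only a random nonce."""
    token = b"over-15:" + secrets.token_bytes(16)
    return token, verifier_key.sign(token)

# --- Social network (never sees the ID document) ---
def network_accepts(token: bytes, signature: bytes) -> bool:
    try:
        VERIFIER_PUBLIC_KEY.verify(signature, token)
    except InvalidSignature:
        return False
    return token.startswith(b"over-15:")

token, sig = issue_age_token()      # step 1: user proves their age to the verifier
print(network_accepts(token, sig))  # step 2: True, access granted, identity never shared
```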

The examples of the United Kingdom and Australia nevertheless cast doubt on legislators’ ability to implement a technically and ethically robust solution (assuming one exists). The absence of a finished solution preserving anonymity, or at least pseudonymity, would ultimately introduce the same problems as Chat Control: increased risk for journalists or activists who cannot remain anonymous during their research, and a degraded right to privacy for everyone.

I’m moving a second, more philosophical problem to the footnotes because, although relevant, it diverts us from the core issue of digital liberty erosion 3.

As a counterpoint, one could argue that, after all, we must already identify ourselves to porn sites (since 2025 in France). Should we abolish all age verification on those sites and implicitly re‑allow minors, under the pretext of strict privacy preservation?

I’ll conclude by acknowledging that this argument is valid and that protecting minors from unsuitable content is absolutely desirable. From that angle, and if a societal and scientific consensus motivates it, it also seems legitimate to prefer protecting children from social networks.

But I want to stress the imperative of designing a serious technical implementation so as not to compromise everyone’s privacy 2. A progressive set of protective measures also seems preferable to me to an outright ban.

Unfortunately, major doubts remain today about whether legislation will impose a responsible technical solution, mainly because of an overly hasty agenda driven by inexperienced politicians.

The conspiracy trap: what are the motivations?

These two examples reflect the difficulty of navigating between simple, absolutist solutions and complex, progressive ones. The desire to solve quickly the problems of child protection or crime fighting—issues on which everyone agrees, but which concern a minority—often justifies a hastily implemented measure that impacts the majority.

Very often the slope becomes steep once the precedent is set. Thus in France we now hear the Minister for Digital Affairs saying she’s ready to fight against a random set of “internet” tools: “VPNs (ed.: virtual private networks, major privacy‑protecting tools that many authoritarian countries such as Russia, Iran or China heavily restrict), are the next item on my list.” Or the British Secretary of State literally wanting to create a panoptic surveillance system for the whole country, boosted by artificial intelligence.

Panoptic prison: centralized, globalized surveillance. Source: The New York Times

During my research, one question kept coming back to me: why do our governments want to implement mass surveillance? Is it not just conspiratorial thinking to see our politicians as inherently sneaky, evil, and driven by a hidden agenda? I have ideas rather than established answers.

The capitalist idea

More control over data ultimately means more money for the tech billionaires of the GAFAM, as seen in the first part. The erosion of privacy is thus more a side‑effect of the money race than a goal in itself. And the more these corporations earn, the more lobbying power they gain over politics.

The political idea

A desire to influence a people that is losing sovereignty, for instance through more targeted advertising. This happened during the Cambridge Analytica scandal, where personal data from 87 million Facebook users were used to sway voting intentions toward the Republican Party, or toward the “Leave” side of the 2016 Brexit referendum.

Even today, Orbán’s Fidesz party in Hungary uses personal data obtained from connections to public administrative services to tilt election outcomes in its favor.

The “invisible self‑reinforcing” idea

There may not be an explicit will to control information and communications. Rather, a slippery slope that locks in a little more with each precedent (a ratchet effect).

Governments are prompted to adopt protective measures in response to statistically rare but highly visible threats: terrorism, child sexual abuse. One only needs to recall the disproportionate U.S. response after the September 11 attacks: a full‑scale, 20‑year invasion of Afghanistan and the establishment of a domestic and international surveillance system by the NSA under the cover of the Patriot Act.

Populations do not perceive the other side of the coin: less visible but more insidious dangers. The erosion of privacy, gradually turning into digital totalitarianism, also erodes democratic systems. The irony? It’s exactly what terrorists want: for Western civilization to self‑destruct from within with minimal effort.

Often, these “protective” measures are implemented by bodies derived from the executive, without a direct mandate from the people: the NSA for the Patriot Act, the European Commission for Chat Control. This makes it even harder to grasp the scale and consequences of such measures.

What are the consequences?

Your life sold for €5 on the dark web

Blogger Korben wrote an excellent article on recurring data breaches. Not a day goes by without a company admitting it has been hacked and has leaked its customers’ data. This also applies to state services, as seen above with France Travail (December 2025) or URSSAF (November 2025).

The result? With the plethora of data stored by all the services we use, it becomes easy to cross‑reference leaks, assemble a profile containing the entirety of our information, and sell it to scammers for cheap. This fuels the rise of fake‑parcel scams (it is easy to sound credible when your address has already leaked) or fake‑banking‑advisor scams (credible when your bank details have already leaked).

It is impossible to guarantee inviolable security. We must therefore assume data breaches are inevitable and, accordingly, do everything possible to minimize our digital footprint. Not easy when governments choose the path of weakening encryption (see Chat Control) or constantly demand more information (see age verification).

AI‑boosted Big Brother

Surveillance technologies and artificial intelligence have become so powerful and widespread that a 1984‑style future can no longer be ruled out. Imagine if the USSR had had access to these technologies: it would have been easy for the regime to detect any thought contrary to state ideology before a citizen even became aware of it. On a smaller scale, we already see such advanced control technologies deployed in China or Russia.

But you don’t need to go to authoritarian countries; just cross the Atlantic. In the United States, The Guardian reports that the “Mobile Fortify” app gives ICE access to a mountain of information from a simple facial scan in the street. The agency, weaponized by Trump, uses it to detain individuals at random, often without immediate recourse for false positives. As Korben summarizes, “it flips the burden of proof: an algorithm becomes more reliable than an official document.”

Elite hypocrisy

Finally, another disturbing aspect: the hypocrisy of the governing “elites”. To bounce back to current events, it’s enough to consider the number of witnesses, even accomplices, among heads of state, ministers and deputies of all kinds mentioned in the Epstein files. The argument of protecting children, at least when it comes from these “elites”, loses some of its meaning, doesn’t it?

Another controversial aspect of the proposed implementation of Chat Control is that politicians would be exempt from it. This is an acknowledgement of the dangers posed by weakening confidentiality and encryption, but it also shows that politicians refuse to bear the consequences of the measures they inflict on the rest of the population.

This double standard, this hypocrisy, erodes people’s trust in their representative institutions in the long run, and with it their trust in democracy.

What solutions?

Thus we see that respect for privacy, both in the physical world and in the digital world, is inalienable. There are many additional arguments:

  • Alex Winter rejects the idea that if we have nothing to hide, we have nothing to fear: “Privacy is useful. That’s why we have blinds on our windows and a door to our bathroom.”

  • Edward Snowden argues that “claiming you don’t care about the right to privacy because you have nothing to hide is equivalent to saying you don’t care about freedom of expression because you have nothing to say.” 4

The first—and most important—action is therefore to recognize the fundamental importance of privacy and to adopt digital minimalism by default.

It is essential to educate people on these core issues, which receive far too little media exposure.

By denouncing measures labeled “protective”, we are not siding with pedophiles, criminals, or terrorists. We are saying that we do not accept the blanket application of just anything under the pretext of fighting them. Rather than quick‑fix substitutes, we need targeted, progressive, transparent, and scientifically grounded responses.

I am also preparing an article on the solutions we can all adopt, at our own scale, to limit the damage. Stay tuned…!

References


  1. One counterargument is that it would not be necessary to break the encryption to scan messages if this were done locally on the machine. This is theoretically possible, but experts are very skeptical about its practical implementation and believe that, in the long term, there is a high probability that matching will be done on a remote server controlled by a third party. ↩︎

  2. This refers to the double anonymity technique. Instead of sending a complete identity document directly to the social network, the user generates an “age token” from an accredited verifier. They can then use it on any network without their identity being linked to it (https://openageinitiative.org). ↩︎ ↩︎

  3. Banning social networks (or other apps and sites, once a precedent is set) for those under 15 removes 17 % of the population from the public forum that social media has become. Despite their downsides, social networks also play a socializing, educational, and world‑opening role when used responsibly. Philosopher Roger‑Pol Droit, for example, raises the question of learning to tell truth from falsehood on the networks (C À Vous episode of Tuesday, 27 January 2026). Rather than a pure ban, he suggests educating critical thinking through better use of these networks. Most young people cannot distinguish real information from fake. Instead of being thrown abruptly into the deep end upon reaching an arbitrary “digital majority”, it would therefore be important to gradually teach proper use of social networks. ↩︎

  4. The “I have nothing to hide” argument is fallacious: even if it is true for you, it isn’t for your neighbour, your sister, a whistle‑blower, or a journalist. An example: the US government can already ask Google for a list of people who have searched for specific terms. This precedent would allow any extremist government to ask Google for a list of people who do not fit its narrowly defined criteria: in Texas, for instance, a distressed woman seeking an abortion. ↩︎