Opinions

Clippy Didn’t Spy—Today’s AI Does


The Clippy movement rejects exploitative tech practices and calls for transparency, but lasting change will require organized efforts beyond symbolism.

Reading Time: 5 minutes

By Haley Heredia

Recently, a wave of internet users has changed their online profile pictures to an image of Clippy, the old Microsoft paperclip assistant, as part of a growing protest against corporate overreach in digital spaces. The movement, sparked by YouTuber Louis Rossmann, targets the growing surveillance built into consumer technology by Big Tech companies, particularly in artificial intelligence (AI) products.

Clippy, officially named Clippit, was an animated paperclip-shaped Office Assistant designed to help users navigate Microsoft Office programs such as Word, Excel, and PowerPoint. It used simple statistical techniques, an early form of machine learning, to make suggestions based on what it detected the user was trying to accomplish, such as writing the opening of a letter or formatting a document.

While Microsoft intended to make Windows software more user-friendly, Clippy was widely criticized for its intrusive pop-ups and lack of functionality beyond basic tasks. Despite those criticisms, Clippy has since become a symbol of a more transparent era of digital assistance. Unlike many of today’s AI systems, which operate behind layers of data harvesting, Clippy did not collect data or track users. It ran entirely offline and locally. It was harmless.

Today’s AI tools are certainly more powerful than Clippy ever was. They can autocomplete essays, simulate conversations, and predict behavior. Big Tech has pioneered transformative tools that boost general productivity. The goal behind these developments is not inherently bad: companies improve technology to make it more helpful, accessible, and efficient for consumers. The methods used to achieve those goals, however, particularly constant, forced data collection, raise serious concerns.

Through the use of tracking tools, passive monitoring, and data brokering, companies can collect sensitive information, such as biometric, behavioral, communication, and geolocation data from both users and non-users. Non-user data is often collected through indirect means, including data brokers aggregating information from publicly available records, retail transactions, and social media scraping. For instance, data brokers may purchase contact information, location histories, or consumer profiles from third-party apps and services, allowing them to build detailed logs on people who have never consented to such surveillance. This means even people who do not interact with AI tools can be negatively affected by them. 

Furthermore, the mass collection of personal information by tech companies poses serious risks to users. Data breaches can expose private information, leaving users vulnerable to identity theft, financial fraud, and long-term reputational harm. In 2025, the Federal Trade Commission received 5.7 million reports of fraud and identity theft, with estimated losses from cybercrimes reaching $10.2 billion. When users attempt to protect their data, they are often met with limited functionality or outright exclusion from services. It is a coercive system: using these technologies, which is all but necessary in the current technological climate, requires surrendering privacy.

Even more alarming is the psychological manipulation that companies can carry out by harnessing data with machine learning models. The lack of meaningful consumer safeguards on social media platforms has allowed AI systems to be trained on sensitive user data, including private messages, biometric scans, and voice recordings. Platforms can pair adaptive AI models with this personal data to infer users’ emotional states and vulnerabilities, then use that insight to push targeted advertisements or content designed to influence users. An FTC report specifically identifies children and teenagers as being at greater risk of manipulation, surveillance, and long-term profiling without meaningful consent.

Although the collection and processing of personal data are regulated under laws like the GDPR in Europe and the CCPA in California, enforcement is inconsistent, and companies often exploit loopholes in these unclear or outdated laws. Moreover, companies can use information unprotected under the law to reveal the identities of users. One study found that 83 percent of Americans can be uniquely identified using just three data points: date of birth, gender, and zip code. The same research concluded that, with 15 data points, companies could correctly identify up to 99.98 percent of individuals.

Digital autonomy, privacy, and the free internet are slowly becoming concepts of the past, to the detriment of practically all internet users. The Clippy movement has grown rapidly since its inception on August 7, 2025, with millions of views on Rossmann’s video and widespread adoption of Clippy profile pictures across platforms such as YouTube, X/Twitter, and Reddit. In addition, Rossmann and his followers have created the online encyclopedia Consumer Rights Wiki to document the systematic manipulation of digital-era customers. As on Wikipedia, users can freely create and edit articles detailing how products and services handle customer data. While the movement has not yet led to a resolution, it has succeeded in making visible the frustrations of users who feel exploited by the current digital system.

However, a significant portion of users remain unaware of the extent to which their personal data is monitored. If we truly intend to reclaim our digital autonomy, we, as users of the free internet, need to go beyond nostalgia and memes. Broader efforts are already underway, driven by technologists, educators, and everyday users who refuse to accept unethical online data practices. Organizations such as the Electronic Privacy Information Center and the Center for Democracy and Technology have advocated for algorithmic accountability and stronger data protection laws, including support for the Artificial Intelligence Civil Rights Act of 2024. Meanwhile, grassroots data activism has flourished, with local communities organizing workshops, campaigns, and educational initiatives centered on personal data usage. Supporting these local and international bodies is essential to building a widespread movement and transforming passive awareness into meaningful change.

These movements may not carry the charm and symbolism of Clippy, but they signal a collective shift toward technologies that respect privacy by design. That means choosing tools and platforms that operate locally, are open source, minimize data collection, and give users real control over their information. Alternatives such as Element for messaging, Firefox Private Browsing for web browsing, and Proton Mail for email are examples of software that values privacy. Market pressure from consumers can shift business incentives faster and more cleanly than legislation, which is often slow and tangled in bureaucracy.

However, if corporations continue to ignore ethical boundaries and exploit sensitive data, legislation will become essential. A targeted approach will be needed to prevent coercive data collection without expanding unnecessary government oversight into the digital world and AI development. Existing frameworks such as the CCPA, the Colorado Privacy Act, and Japan’s Act on the Protection of Personal Information offer promising models that could be adapted to a minimal, night-watchman approach to governance.

The Clippy movement is a clever reminder that technology can progress without being invasive. Although Clippy was never respected as a Microsoft Office Assistant, it has become a symbol of a time when advancements in the online world did not demand the sacrifice of digital autonomy. Through this protest and those that follow, internet users can push companies toward a more transparent and ethical data ecosystem, one that treats privacy as a right rather than a rare privilege.