The Technology in This Course Is Not Neutral
Facebook knows your political leaning, relationship status, financial situation, and health conditions -- even if you have never posted about any of them. It infers them from your behavior: what you click, how long you pause on a post, who you message, what you search, what you buy through tracked links. Clearview AI scraped 30 billion photos from the public internet -- Instagram, Facebook, LinkedIn, news sites -- to build a facial recognition database that law enforcement agencies across the world use without most people's knowledge or consent. China's social credit system monitors citizens' purchases, social media activity, traffic violations, and social connections to generate a score that determines access to travel, loans, schools, and jobs. Your phone tracks your location 24 hours a day and shares it with an average of 40 apps.
None of this is accidental. Every piece of technology in this course was built by people with specific incentives, and understanding those incentives is as important as understanding the code. The algorithms that recommend your next video were optimized for engagement, not your wellbeing. The data collection that powers "free" services was designed to maximize advertising revenue, not to protect your privacy. The AI systems making decisions about loan approvals, hiring, and criminal sentencing were trained on historical data that reflects decades of human bias. This article is about the gap between what technology can do and what it should do -- and why that gap is your problem whether you build technology or simply live in a world shaped by it.
The Attention Economy: You Are the Product
If you are not paying for the product, you are the product. That line has become a cliche, but its mechanics are worth examining precisely. Social media platforms are advertising businesses. Facebook (Meta), Instagram, TikTok, YouTube, and Twitter/X do not sell social networking. They sell your attention to advertisers. Their revenue is directly proportional to how much time you spend on the platform. Every design decision flows from that single incentive.
Infinite scroll removes natural stopping points. Without it, you would hit the bottom of a page and decide whether to continue. With infinite scroll, the content never ends, and the decision to stop becomes an active effort rather than a passive default. Notification triggers are engineered to pull you back: "Someone liked your post!" is not informing you of something urgent. It is exploiting a dopamine response to re-engage you. Algorithmic amplification prioritizes content that generates reactions -- and research consistently shows that outrage, fear, and controversy generate more engagement than calm, nuanced content. The algorithm does not have a political agenda. It has an engagement agenda, and emotional extremes serve that agenda.
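To make the incentive concrete, here is a minimal sketch of an engagement-optimized ranker -- toy Python with invented posts and weights, not any platform's actual code. Notice what the objective contains: predicted reactions. Notice what it does not contain: any measure of whether the content is good for you.

```python
# Toy illustration of engagement-optimized ranking (not any platform's real code).
# Each post gets a score from predicted reactions; nothing in the objective
# measures user wellbeing -- only expected engagement.

posts = [
    {"id": 1, "predicted_likes": 0.20, "predicted_comments": 0.02, "predicted_shares": 0.01},
    {"id": 2, "predicted_likes": 0.05, "predicted_comments": 0.15, "predicted_shares": 0.10},  # outrage bait
    {"id": 3, "predicted_likes": 0.30, "predicted_comments": 0.01, "predicted_shares": 0.00},
]

# Hypothetical weights: comments and shares keep people on the platform longer
# than passive likes, so an engagement objective values them more heavily.
WEIGHTS = {"predicted_likes": 1.0, "predicted_comments": 5.0, "predicted_shares": 8.0}

def engagement_score(post):
    return sum(WEIGHTS[key] * post[key] for key in WEIGHTS)

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], round(engagement_score(post), 2))
```

Under these assumed weights, the provocative post that draws comments and shares rises to the top of the feed ahead of content people merely like -- which is the amplification pattern described above, produced by nothing more sinister than an objective function.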
In 2021, former Facebook employee Frances Haugen leaked internal documents (the "Facebook Files") showing that the company's own research found Instagram made body image issues worse for one in three teen girls. The research also showed that the platform's algorithm steered users toward increasingly extreme content because engagement metrics rewarded it. The company knew. The company continued. The incentive structure made it rational to continue -- user wellbeing and advertising revenue pointed in different directions, and revenue won.
The attention economy is not a conspiracy. It is an incentive structure. Nobody at Facebook decided to harm teenagers. But the system was designed to maximize engagement, engagement correlates with emotional intensity, and emotional intensity correlates with anxiety and comparison. The harm is a side effect of optimization. Understanding this matters because it means the solution is not "better people at Facebook" -- it is changing what the system is optimized for. As long as the business model is attention-based advertising, the incentive to maximize screen time will override every internal research report about user harm.
Data Collection: The Infrastructure of Surveillance
The scale of data collection in modern technology is difficult to comprehend until you see it mapped out. Every interaction with a digital device generates data, and that data is collected, stored, analyzed, and often sold.
The data broker industry is the infrastructure most people never see. Companies like Acxiom, Oracle Data Cloud, and LexisNexis aggregate data from public records, purchase histories, app trackers, loyalty programs, and dozens of other sources to build profiles containing thousands of data points per person. Your profile might include: estimated income, health conditions (inferred from purchases), political affiliation (inferred from browsing), religious affiliation, whether you own or rent, your credit score range, whether you are pregnant (Target famously inferred a teenager's pregnancy from her purchase patterns before her father knew). These profiles are sold to advertisers, insurance companies, potential employers, political campaigns, and anyone willing to pay -- often for fractions of a cent per person.
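The inference step is not magic. A toy sketch -- with invented rules and categories, nothing resembling a real broker's model -- shows how purchase history becomes a labeled profile:

```python
# Toy sketch of attribute inference from purchase history (invented rules,
# not any broker's actual model). Real brokers combine thousands of signals,
# but the principle is the same: behavior in, labeled profile out.

INFERENCE_RULES = {
    "prenatal_vitamins": {"likely_pregnant": 0.8},
    "unscented_lotion": {"likely_pregnant": 0.3},
    "glucose_monitor": {"diabetes_risk": 0.7},
    "luxury_watch": {"high_income": 0.6},
}

def build_profile(purchases):
    profile = {}
    for item in purchases:
        for attribute, weight in INFERENCE_RULES.get(item, {}).items():
            # Accumulate evidence; cap at 1.0 so scores read as rough confidence.
            profile[attribute] = min(1.0, profile.get(attribute, 0.0) + weight)
    return profile

print(build_profile(["prenatal_vitamins", "unscented_lotion", "luxury_watch"]))
# {'likely_pregnant': 1.0, 'high_income': 0.6}
```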
Government surveillance adds another layer. The Snowden revelations in 2013 exposed that the NSA was collecting metadata (who called whom, when, for how long) on virtually every phone call in the United States through its bulk phone-records program, and pulling internet communications from major technology companies through the PRISM program. Metadata does not reveal what you said, but it reveals who you talk to, how often, at what times, and from where -- which is often more revealing than content. A call to an oncologist at 2 AM tells a story without any words.
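A small, fabricated example makes the point: even a handful of metadata records, with no content at all, supports inferences most people would consider deeply private.

```python
from collections import Counter

# Fabricated call metadata: no content, just who, when, how long.
# Even this tiny log supports inferences -- a recurring clinic number,
# a late-night crisis call -- without a single recorded word.
calls = [
    {"to": "oncology_clinic", "hour": 2,  "minutes": 11},
    {"to": "oncology_clinic", "hour": 14, "minutes": 25},
    {"to": "mom",             "hour": 19, "minutes": 40},
    {"to": "oncology_clinic", "hour": 9,  "minutes": 5},
    {"to": "support_hotline", "hour": 3,  "minutes": 55},
]

contact_freq = Counter(call["to"] for call in calls)
late_night = [call["to"] for call in calls if call["hour"] < 5]

print("Most contacted:", contact_freq.most_common(2))
print("Late-night calls:", late_night)
```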
AI Bias: When Algorithms Inherit Human Prejudice
Machine learning systems learn from historical data. If that data reflects human bias -- and it almost always does -- the system will reproduce and amplify that bias at scale. This is not a theoretical concern. It is happening now, in systems making consequential decisions about real people's lives.
Amazon's hiring AI (2018): Amazon built an AI resume screening tool trained on ten years of hiring data. Because the company had historically hired predominantly men (especially in technical roles), the system learned that resumes mentioning "women's" -- as in "women's chess club captain" or "women's college" -- were negative signals. It systematically penalized female candidates. Amazon scrapped the tool after discovering the bias.
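The failure mode is easy to reproduce in miniature. The toy Python below (invented resumes, not Amazon's system) scores each word by the hire rate of past resumes containing it; because the historical outcomes are skewed, the token "women's" ends up acting as a penalty, and an otherwise identical resume scores lower.

```python
from collections import defaultdict

# Toy reconstruction of the failure mode (not Amazon's actual system).
# Historical outcomes are skewed: resumes containing "women's" were rarely
# marked as hires, so a model scoring words by hire rate learns that token
# as a negative signal.

history = [
    ("software engineer chess club", 1),
    ("software engineer robotics", 1),
    ("software engineer women's chess club", 0),
    ("software engineer women's college robotics", 0),
    ("software engineer debate club", 1),
]

hired = defaultdict(int)
seen = defaultdict(int)
for resume, was_hired in history:
    for word in set(resume.split()):
        seen[word] += 1
        hired[word] += was_hired

def word_score(word):
    # Hire rate of past resumes containing the word (neutral 0.5 if unseen).
    return hired[word] / seen[word] if seen[word] else 0.5

def score_resume(resume):
    words = resume.split()
    return sum(word_score(word) for word in words) / len(words)

print(score_resume("software engineer chess club"))          # higher
print(score_resume("software engineer women's chess club"))  # lower, same qualifications
```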
Facial recognition accuracy gap: A landmark MIT study by Joy Buolamwini and Timnit Gebru (the Gender Shades project) found that commercial facial analysis systems from IBM, Microsoft, and Face++ misclassified the gender of dark-skinned women at error rates of up to 34.7%, compared with 0.8% for light-skinned men. The cause: training datasets overwhelmingly featured lighter-skinned faces. When the systems encountered faces that looked different from their training data, they failed. This is not an abstract accuracy problem. When facial recognition is used by law enforcement, error rates that lopsided mean innocent people are identified as suspects.
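The standard defense against this failure is disaggregated evaluation: report error rates per subgroup rather than one aggregate accuracy figure. A minimal sketch with made-up records shows how an overall number can hide the gap.

```python
# Minimal sketch of disaggregated evaluation: report error rates per subgroup
# instead of a single overall accuracy. The records below are invented, but the
# lesson mirrors Gender Shades -- an aggregate number can hide a large gap.

results = [
    {"group": "light_skinned_men", "correct": True},
    {"group": "light_skinned_men", "correct": True},
    {"group": "light_skinned_men", "correct": True},
    {"group": "dark_skinned_women", "correct": False},
    {"group": "dark_skinned_women", "correct": True},
    {"group": "dark_skinned_women", "correct": False},
]

groups = {}
for record in results:
    total, errors = groups.get(record["group"], (0, 0))
    groups[record["group"]] = (total + 1, errors + (not record["correct"]))

overall_errors = sum(errors for _, errors in groups.values())
overall_total = sum(total for total, _ in groups.values())
print(f"Overall error rate: {overall_errors / overall_total:.0%}")
for group, (total, errors) in groups.items():
    print(f"{group}: {errors / total:.0%} error rate")
```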
Predictive policing: Systems like PredPol analyze historical crime data to predict where crimes are likely to occur. But historical crime data reflects where police have been deployed, not where crime actually occurs. Neighborhoods that were over-policed in the past generate more arrest data, which makes the algorithm predict more crime in those neighborhoods, which leads to more police deployment, which generates more arrests -- a self-reinforcing feedback loop that mathematically encodes discriminatory policing patterns.
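The loop is easy to simulate. In the sketch below (invented numbers, deliberately simplified), two neighborhoods have identical true crime rates, but patrols are allocated according to last year's arrest counts -- and the historical disparity never corrects itself.

```python
# Small simulation of the predictive-policing feedback loop, under simplifying
# assumptions: two neighborhoods with IDENTICAL true crime rates, arrests
# proportional to patrols sent, and patrols allocated by last year's arrests.

TRUE_CRIME_RATE = 0.05                    # identical in both neighborhoods
TOTAL_PATROLS = 100
arrests_last_year = {"A": 60, "B": 40}    # A was historically over-policed

for year in range(1, 6):
    total = sum(arrests_last_year.values())
    patrols = {n: TOTAL_PATROLS * arrests_last_year[n] / total for n in arrests_last_year}
    # Arrests track where officers are sent, not where crime actually is.
    arrests_last_year = {n: patrols[n] * TRUE_CRIME_RATE * 20 for n in patrols}
    print(f"Year {year}: A gets {patrols['A']:.0f} patrols, B gets {patrols['B']:.0f}"
          " -- true crime rates are identical")
```

The output is the same split every year: the algorithm keeps sending the majority of patrols to the historically over-policed neighborhood even though nothing about the underlying crime justifies it. The past disparity is frozen into the prediction.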
The fundamental problem with AI bias is not that algorithms are racist or sexist. It is that they are optimizers, and they optimize for patterns in the data they are given. If the data contains the consequences of decades of discriminatory practices -- in hiring, lending, policing, healthcare -- the algorithm will learn those patterns and reproduce them. "Bias in, bias out" is the concise version. The expanded version: bias in, amplified bias out, at a scale no human hiring manager or loan officer could match. One biased human makes biased decisions about hundreds of people. One biased algorithm makes biased decisions about millions.
Privacy Regulation: The Global Patchwork
Governments have begun responding to the data collection crisis, but their approaches vary dramatically. Four approaches illustrate the spectrum.
The EU's General Data Protection Regulation (GDPR, in force since 2018) is the strongest privacy regulation in the world. It gives EU residents the right to access (see all data a company holds about you), the right to deletion (demand your data be erased), and the right to data portability (export your data in a usable format), and it requires explicit consent before data collection. Companies must have a legal basis for processing personal data. Violations carry fines of up to 4% of global annual revenue; Meta was fined $1.3 billion in 2023 for transferring EU user data to the US without adequate protections. GDPR applies to any company that processes EU residents' data, regardless of where the company is located.
The California Consumer Privacy Act (CCPA, in force since 2020) gives California residents the right to know what data is collected, the right to delete it, and the right to opt out of data sales. It is weaker than GDPR: it allows data collection by default (opt-out versus GDPR's opt-in) and applies only to businesses meeting certain size thresholds. Fines are modest by comparison ($7,500 per intentional violation). Its real significance is as the strongest US privacy law in a country with no comprehensive federal privacy legislation; many companies apply CCPA standards nationwide because managing different data practices per state is impractical.
The United States has no federal privacy law comparable to GDPR. Privacy protections are sector-specific: HIPAA covers health data, FERPA covers education records, COPPA covers children's data online. But there is no general law governing what a tech company can collect, how long it can retain it, or who it can sell it to. The result is a patchwork where the same company can be subject to GDPR in Europe, CCPA in California, LGPD in Brazil, and nothing at all in most US states. Industry lobbyists have consistently blocked federal privacy legislation, arguing it would stifle innovation. Consumer advocates counter that the absence of regulation has enabled a surveillance economy that profits from the systematic erosion of privacy.
China's Personal Information Protection Law (PIPL, 2021) grants individuals privacy rights similar to GDPR -- against private companies. But the Chinese government retains extensive surveillance capabilities, including mandatory real-name registration for internet use, the "Great Firewall" censorship system, and social credit monitoring. The law protects citizens from corporate data abuse while preserving the state's ability to monitor comprehensively. This model -- privacy from companies, transparency to the state -- represents a fundamentally different philosophy than Western approaches that attempt to protect privacy from both corporate and government intrusion.
Algorithmic Accountability: Who Is Responsible When AI Causes Harm?
When a human loan officer denies your application, you can ask why. You can appeal. There is a person responsible for the decision. When an algorithm denies your application, the situation becomes murky. The algorithm's decision may be based on hundreds of variables weighted in ways that no human fully understands (the "black box" problem). Who is responsible? The company that deployed the algorithm? The engineers who built it? The data scientists who selected the training data? The original data collectors?
This accountability gap plays out across consequential domains:
Autonomous vehicles: In 2018, an Uber self-driving car killed a pedestrian in Tempe, Arizona. The system detected the person but classified them as an unknown object, then a vehicle, then a bicycle -- never triggering emergency braking. Who bears responsibility? The safety driver who was watching a video on her phone? Uber's engineering team? The algorithm that misclassified? The answer, in this case, was the safety driver (charged with negligent homicide) -- but the systemic question remains unresolved.
Content moderation: Algorithmic content recommendation has been linked to radicalization pathways. A 2019 study found that YouTube's recommendation algorithm could lead a user from mainstream political content to extremist content within a sequence of recommended videos. When a teenager is radicalized through algorithmically recommended content, is the platform responsible? Current Section 230 protections in the US largely shield platforms from liability for user-generated content -- but the algorithm that actively recommends content is not user-generated. It is the platform's product.
Healthcare decisions: An algorithm used by US health systems to allocate healthcare resources was found to systematically deprioritize Black patients. The system used healthcare spending as a proxy for health needs. But due to systemic inequities, Black patients had historically spent less on healthcare (not because they were healthier, but because they had less access). The algorithm interpreted lower spending as lower need, directing resources away from the people who needed them most. The algorithm was not designed to discriminate. It optimized faithfully for the metric it was given -- and the metric was wrong.
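The proxy failure is easy to see with toy numbers (invented patients, not the actual system studied): rank by past spending and the sickest patient, who had the least access to care, drops to the bottom of the list.

```python
# Toy illustration of the proxy problem (invented numbers, not the actual
# system studied). Ranking patients by past spending instead of actual need
# systematically deprioritizes patients with less access to care.

patients = [
    {"name": "patient_1", "true_need": 8, "past_spending": 9000},  # good access to care
    {"name": "patient_2", "true_need": 9, "past_spending": 3000},  # high need, low access
    {"name": "patient_3", "true_need": 4, "past_spending": 7000},
]

by_proxy = sorted(patients, key=lambda p: p["past_spending"], reverse=True)
by_need = sorted(patients, key=lambda p: p["true_need"], reverse=True)

print("Ranked by spending proxy:", [p["name"] for p in by_proxy])
print("Ranked by actual need:   ", [p["name"] for p in by_need])
# The sickest patient (patient_2) falls to last place under the proxy metric.
```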
What You Can Do: Practical Privacy Protection
You cannot eliminate your digital footprint, but you can reduce it significantly. These are not theoretical recommendations -- they are specific actions ranked by impact.
Audit app permissions. Go to your phone's settings and review which apps have access to your location, camera, microphone, contacts, and photos. Revoke permissions that are not essential to the app's function. A flashlight app does not need your location. A game does not need your contacts. Most apps request maximum permissions because data is valuable, not because they need it to function.
Use a privacy-focused browser and search engine. Firefox with uBlock Origin blocks trackers by default. Brave blocks ads and trackers natively. DuckDuckGo does not track search queries or build a profile. Switching from Chrome + Google to Firefox + DuckDuckGo eliminates one of the largest data collection pipelines in your daily life.
Install an ad blocker. uBlock Origin is free, open-source, and blocks not just ads but tracking scripts, fingerprinting attempts, and malicious domains. This is not about avoiding annoyance -- it is about cutting the data collection pipeline. Every ad that loads on a page also loads tracking scripts from multiple advertising networks, each building a profile.
Use encrypted messaging. Signal is end-to-end encrypted, open-source, collects virtually no metadata, and is free. Use it for any conversation you would not want a data breach to expose. WhatsApp uses the same encryption protocol but collects metadata (who you message, when, how often, your contacts) and shares it with Meta.
Read privacy policies (or use tools that read them for you). Terms of Service; Didn't Read (tosdr.org) rates privacy policies in plain language. It is not realistic to read every privacy policy in full (they average 4,000+ words), but knowing the worst offenders helps you make informed choices about which services you use.
Opt out of data broker lists. Services like DeleteMe and Privacy Duck submit opt-out requests to major data brokers on your behalf. You can also do this manually (each broker has a removal process, though they make it deliberately cumbersome). This does not eliminate your data from the internet, but it reduces the number of places where it is aggregated and sold.
Where Privacy and Ethics Take You Next
Privacy and ethics in technology are not peripheral concerns that you consider after the engineering is done. They are design constraints that shape what you build and how you build it. The most consequential decisions in technology are not about which algorithm to use or which cloud provider to choose -- they are about what data to collect, who has access, what happens when systems make mistakes, and who bears the cost of those mistakes.
The regulatory landscape is tightening. The EU's AI Act, passed in 2024, will require transparency and human oversight for high-risk AI systems. GDPR enforcement is intensifying, with fines reaching billions. Companies that treat privacy as an afterthought are accumulating legal and reputational risk that will compound over time.
But regulation is reactive. It addresses harms after they occur. The deeper shift is in professional norms -- the growing expectation that engineers, product managers, and designers consider the ethical implications of their work before deployment, not after a crisis. The most valuable skill you can develop is the habit of asking: "Who is affected by this system? What happens when it fails? Whose interests does it serve? And who is not in the room when these decisions are made?"
The takeaway: Technology is built by people with incentives, and understanding those incentives is as important as understanding the technology itself. The attention economy optimizes for engagement, not wellbeing. Data collection powers a $250 billion surveillance industry. AI systems reproduce the biases embedded in their training data. Privacy regulation is catching up but remains fragmented. The tools to protect yourself exist -- privacy-focused browsers, encrypted messaging, ad blockers, permission audits -- and using them is a practical first step. The larger step is recognizing that privacy and ethics are not someone else's problem. Every person who builds, uses, or is affected by technology has a stake in how these questions are answered.
