/// Critical AI Review
Neil Singh · T90 Ventures
Ethics · Critical Analysis

The Ethics Problem We’re Not Talking About: Big Tech Leadership and AI

When we discuss AI ethics, are we asking the wrong questions?

When most people talk about AI ethics, they focus on algorithms: Is the AI biased? Does it make fair decisions? Can it be trusted not to discriminate? These are important questions. But there’s a more fundamental issue we’re overlooking: What happens when AI is built by leaders with questionable ethical track records?

If the executives steering AI development have demonstrated a pattern of prioritizing profit over people, manipulating user data, or operating without accountability, why would we expect the AI they create to be any different?

The Leadership Problem

The major AI platforms shaping our future aren’t built in a vacuum. They’re products of specific companies, led by specific individuals whose values and decisions directly influence what these technologies become.
Consider the track record:

Meta’s Mark Zuckerberg has faced multiple Congressional hearings over data privacy violations, including the Cambridge Analytica scandal, in which 87 million users’ data was harvested without consent. The company was fined $5 billion by the FTC in 2019, one of the largest privacy penalties in history, for deceiving users about their ability to control their privacy. Despite repeated promises to prioritize user privacy, Meta continues to face lawsuits over its data practices and has been accused of knowingly harming teen mental health while publicly denying it.

X’s Elon Musk acquired Twitter and gutted content moderation teams, leading to a documented surge in hate speech and misinformation. His companies have faced numerous labor violations, SEC investigations over securities fraud, and allegations of creating hostile work environments. Musk has used his platforms to spread conspiracy theories and manipulate markets, demonstrating a pattern of prioritizing personal interests over public responsibility.

OpenAI’s leadership transition revealed internal chaos when Sam Altman was briefly ousted by the board over concerns about transparency and safety, only to return days later with most of the board replaced. The company that promised to develop AI for the benefit of humanity transitioned from a non-profit to a capped-profit structure, with Altman reportedly set to receive equity despite earlier claims that he had no financial stake. Questions about OpenAI’s commitment to safety over commercialization remain unanswered, and they have only intensified with the company’s recent announcement that it is considering allowing AI-generated adult content on its platform.

Google’s AI development operates under a company that has faced antitrust lawsuits across multiple continents, been fined billions for anti-competitive practices, and has a history of collecting user data far beyond what users understand or consent to. The company’s “Don’t Be Evil” motto was quietly removed from its code of conduct in 2018.

These aren’t isolated incidents. They represent patterns of behavior from the individuals and organizations building the AI tools millions of people now depend on daily.

Why Leadership Ethics Matter for AI

Here’s the uncomfortable truth: AI systems reflect the values and priorities of those who create them.

When leadership prioritizes growth metrics over user welfare, AI gets optimized for engagement even when that engagement comes from divisive content or addictive design patterns. When executives view user data as a commodity rather than a responsibility, AI systems are built to extract maximum information with minimum transparency.

The decisions about what data to collect, how to use it, what safety measures to implement, and when to release powerful technology to the public aren’t made by algorithms. They’re made by people, specifically, by executives with demonstrated track records that should give us pause.

The Questions We Should Be Asking

Instead of just asking “Is this AI biased?” we need to ask harder questions about the people and incentives behind it.

The pattern is troubling: companies rush AI products to market with minimal safety testing, harvest user interactions as training data (often without clear consent), and resist meaningful regulation while claiming to self-regulate. Meanwhile, executives reap enormous personal wealth while facing minimal consequences for failures.

What This Means for You

If you’re using AI tools daily, and most of us are, you need to think critically about who built them and why. This doesn’t mean AI is inherently bad or that every tool from major tech companies should be avoided. But it does mean being intentional about your choices:

Consider the source. When selecting AI tools, research the company behind them. What’s their track record on privacy? Have they faced legal action for unethical practices? How transparent are they about data usage?

Diversify your tools. Don’t rely exclusively on one company’s ecosystem. Use companies that have made explicit commitments to ethical AI development and transparency. Vote with your usage and your wallet.

Read the fine print. Before using an AI platform, understand what data you’re providing and how it might be used. Many free AI tools use your inputs to train future models; is that something you’re comfortable with?

Support regulation. Contact your representatives about AI oversight. The European Union’s AI Act is setting important precedents, but the U.S. lags behind. Without regulatory guardrails, we’re trusting the same executives who’ve already violated that trust to police themselves.

The Alternative Path

Not all AI development follows the “move fast and maximize profit” model. Some companies are building with ethics as a foundation.

These alternatives may not always have the flashiest features or the largest marketing budgets, but they represent a different approach, one where ethical considerations aren’t just PR talking points but are fundamental to the business model.

The Bottom Line

AI isn’t going away. It’s becoming more powerful and more integrated into our daily lives. That’s precisely why the ethics of the people building it matters so much.

We’ve spent years watching big tech leaders prioritize profit over privacy, growth over safety, and speed over responsibility. They’ve faced fines, lawsuits, and public outcry, yet the pattern continues. Now these same leaders are building the most powerful technology humanity has ever created.

The question isn’t whether AI can be ethical. It’s whether we’re willing to demand that the humans behind it demonstrate ethical leadership before we hand them more power over our information, our decisions, and our future. We deserve AI built by people who’ve proven they can be trusted with it. Until big tech leadership demonstrates genuine accountability and ethical behavior, we should be very careful about how much power we give their AI tools.

Final Thought

The choice is ours for now. The decisions we make today about which AI platforms to support will shape what tomorrow’s AI landscape looks like. Choose wisely.
