It’s fine for me to use AI, but not for you
Real-world examples of self-other bias in AI use
Hello hello!
A warm welcome to the 3 new subscribers who’ve joined us since last week. It’s great to have you here 🙂
Read time: 2 minutes 41 seconds
I asked one of my team members to draft an email for the product head, highlighting the business impact of a recent product tweak. He sent me a long AI-generated draft to review.
I read the first 2 lines.
I clenched my jaw and started shaking my head.
“This is not it. People have stopped thinking because of AI,” I mumbled.
Maybe it was just a lousy draft. But last week, I learned that there could be a completely different psychological reason for my reaction, too.
It’s called the self-other bias.
💁‍♂️ What is self-other bias in AI use?
Research shows that people tend to:
View their own use of GenAI (like ChatGPT) as creative, thoughtful, or supportive.
View others' use of GenAI as lazy, unoriginal, or unethical — even when both used it similarly.
This bias, highlighted in a recent study (Celiktutan et al., 2024), can affect everything from team dynamics to hiring and public trust.

🧠 Why it works
Reason #1: We tend to judge ourselves and others very differently, especially when it comes to bad behavior. If someone else is late, we’re quick to think, “They’re just not punctual” (an internal reason). But if we’re late? It’s definitely because of external factors like traffic or an unexpected delay (an external reason).
The same bias applies to how we view AI usage. When we use generative AI, we assume the best intentions and see it as a helpful tool. But when others use it? We’re more likely to question their effort or creativity.
Reason #2: We also overestimate how much external tools (like a product or technology) influence others compared to ourselves. For instance, if we see someone glued to their phone in a group setting, we might think, “That phone is making them anti-social.” But when we’re the ones scrolling? We’re just taking a quick break.
🤔 What to do about it?
Let’s look at it from both sides.
When you’re on the receiving end of someone’s AI-assisted work,
Reflect Before You Judge
Ask yourself: "Would I react the same way if I were the creator?"
Be conscious of the self-other bias and focus on output quality, not the tool.
When you’re the sender and have used AI,
Specify how you used it
For example: “I got the reference X from AI.”
Focus on Output Quality, Not the Tool
Evaluate ideas, originality, and effectiveness, not just the presence of AI.
Encourage Dialogue
Talk openly about how people use AI and what value they add.
Note: Given the speed with which AI is evolving, perceptions of its use might change over time.
Real-World Examples of Self-Other Bias, and What Founders Can Learn from Them
Brainstorming Emails
You: "I just used ChatGPT to organize my thoughts."
Them: "They probably copied the whole thing."
Impact: Trust erodes; initiative gets second-guessed.
Builder’s takeaway: Set a culture where tools are seen as thought partners, not shortcuts, starting with how leaders talk about their own use.
Hiring
You: "AI helped polish my presentation."
Hiring team: "They let AI do the work."
Impact: Strong candidates are overlooked based on assumptions.
Builder’s takeaway: Train hiring teams to separate tool use from talent. If you punish candidates for using AI well, you’ll miss your best people.
Team Projects
You: "AI gave me some ideas — the strategy is mine."
Colleague: "They relied too much on AI."
Impact: Credit is unevenly shared; collaboration suffers.
Builder’s takeaway: Build systems for fair credit. AI use should not erase individual contribution.
Creative Work
You: "AI refined my brand message."
Others: "That feels generic. Must be AI."
Impact: Good work gets dismissed; creative risk-taking slows down.
Builder’s takeaway: Bias against AI outputs can block good work. Judge results by clarity and impact, not how “human” they feel.
Until next time!
Saurabh 👋