
AI and the Problem of Responsibility Laundering

AI is a powerful tool that transforms how we work. But when engineers start citing ChatGPT as their authority, we have a problem. This is about intellectual ownership and why abdication erodes trust.

December 4, 2025 · 7 min read

Cody Williamson

Senior Software Engineer

Let me be direct about something that’s been bothering me for a while now. There’s a pattern emerging in professional software engineering that we need to call out. I’m calling it responsibility laundering: using AI as a shield for your own thinking. When someone says “ChatGPT said” or “I did my research with Claude” in a technical conversation, they’re not just being lazy. They’re abdicating intellectual ownership and expecting the rest of us to accept it. That’s not how professional engineering works, and it never will be.

Before I go further, let me be clear. I’m not anti-AI. I use it constantly. I’ve used ChatGPT, Claude, Copilot, and Gemini extensively, and I know their quirks. I pay for premium subscriptions because the productivity gains are real. AI has transformed how I approach architecture decisions, fills gaps in my knowledge around planning and pattern recognition, helps me iterate on solutions rapidly, and pulls my scattered thoughts into something coherent. It helped me write this article. But these are my thoughts, my opinions, and my responsibility to defend.

The Core Problem

Here’s what I keep seeing in professional settings. Engineers starting conversations with phrases like “I asked ChatGPT and it said we should” or “Copilot suggested this approach.” Sometimes people literally paste LLM responses into technical discussions without any synthesis or critical evaluation. Brother, if you’re going to do that, I might as well skip the middleman. Hint: the middleman is you. I can prompt the model myself.

When you position an AI as the authority behind your thoughts and proposals, you’ve made a choice. You’ve chosen to launder your responsibility through a model instead of owning your position. And in doing so, you’ve told everyone in that conversation that your own competence isn’t on the line. If the solution is wrong, it’s not your fault. ChatGPT said it. This is fundamentally corrosive to professional trust, and it should be actively called out when it happens.

The reason this matters isn’t philosophical. It’s practical. When I’m working through a complex system design or debugging a production issue, I need to know that the people I’m collaborating with have actually processed the problem. I need to know they understand the trade-offs, the constraints, the reasons behind their recommendations. If someone’s contribution is “the AI said this,” I have no way to evaluate their understanding. I don’t know if they can defend the position. I don’t know if they’ll recognize when the approach falls apart. The trust is gone.

The Vibe Coding Problem

This goes beyond conversations. It shows up in pull requests too. If someone checks in a mountain of Cursor docs and GitHub Copilot artifacts along with their code, they’ve immediately set the tone for that review. Yes, I understand that documentation for the LLM context is sometimes necessary. I get it. But when the code itself is bloated with verbosity, excessive inline comments, and questionable architectural decisions, and there’s markdown doc after markdown doc explaining basic concepts, it becomes very hard to trust that the author actually understands what they’ve written.

There’s nothing wrong with using AI to accelerate your work. The problem is when the work product makes it obvious that no human brain fully processed what was being built. When I see that in a review, I can’t trust the author going forward. I don’t know what they contributed versus what the model hallucinated. I don’t know if they can maintain this code. I don’t know if they can extend it when requirements change. The foundation is already shaky.

⚠️ The Real Cost of Unowned Code

If you’re hitting “Continue to iterate” repeatedly, or you’re starting new chat sessions because context is lost, or you’ve gone so far down a technical rabbit hole that you understand literally zero of what’s happening, stop. You’re building on a foundation you can’t support. This will cost you time, trust, or both.

The Right Way to Use AI

I’ve been there. I’ve gotten so deep into an AI-assisted implementation that restarting made more sense than trying to understand what I’d built. I’ve lost hours because I let the model lead instead of using it as a tool. This is the trap, and it’s really easy to fall into when the productivity feels so high in the moment.

Here’s what actually works. You write your first implementation by hand, in a way you’ve designed and fully understand. You make the decisions. You own the architecture. Then you use AI to enhance and improve it in the kind of rapid iterative cycles we’ve never had before as software developers. The AI becomes an accelerator for your thinking, not a replacement for it. You know what you built. You can defend it. You can extend it. You can throw out the AI’s suggestions when they don’t fit, because you have the context to evaluate them properly.

This feels right because it is right. The human brain does the reasoning and makes the calls. The AI handles the mechanical overhead, surfaces options, fills knowledge gaps, and helps you move faster. But the ownership never leaves you. Your name is on the commit. Your reputation is on the line. Your career depends on the quality of what you ship. Act accordingly.

The Standard

There’s a lot of noise in the industry right now about AI replacing developers or vibe coding being the future of software. Some of that will probably happen eventually. But right now, if you’re a professional software engineer getting paid to build production systems, your job is to own your work. AI is a tool. An incredibly powerful one. But tools don’t have careers. You do.

Here’s the standard. AI may assist. Humans must decide. Humans always own the outcome.

If you’re using AI in a way where you couldn’t defend your solution without it, you’ve gone too far. If your contribution to a technical discussion is summarizing what a model told you, you haven’t contributed. If your pull request makes it clear that you don’t fully understand the code you’re submitting, expect to be called out. This isn’t about gatekeeping or being anti-technology. It’s about maintaining the baseline of competence that professional engineering requires.

✨ The Ownership Test

Before submitting any AI-assisted work, ask yourself: can I explain every decision in this code without referencing what the AI suggested? Can I extend this system when requirements change? Would I be comfortable defending this in a PR review? If the answer to any of these is no, you have more work to do.

Keep Your Edge

Our jobs will continue to transform. Our roles will look different. We’ll use AI more and more as the tools get better. That’s not a threat, it’s just reality. But until the day comes when the models can truly reason about complex systems end to end, your job is to stay sharp. Keep flexing the mental muscles that made you an engineer in the first place. Don’t let the convenience of AI assistance erode the skills you built over years of practice.

There’s nothing like writing your first implementation by hand, understanding every line, then watching AI help you polish it faster than you ever could alone. That’s the sweet spot. That’s where the real productivity lives. Not in blindly accepting suggestions, but in having the judgment to know which ones to take.

Use the tool. Don’t let the tool use you. And for the love of all things, stop citing ChatGPT in professional conversations. Own your work. That’s the job.
