I used to contribute to open source as a software engineer at Oracle. Then I pivoted to product, and for nearly a decade, meaningful contributions felt out of reach. Here's how AI changed that.
The OpenStack days
Back when I was a software developer at Oracle, open source was part of the job. I worked on OpenStack cloud integrations, helping ensure Oracle ZFS storage systems played nicely with the platform. When you build on open source, you fix bugs, you add features, you improve documentation. It's a natural part of the workflow.
Then I became a Product Manager. I still enjoyed writing code, but as I spent more time with customers, I gravitated toward the why — thinking through problems from the customer's perspective.
The PM gap
As a technical PM, I was still around code — reviewing PRDs, discussing technical trade-offs, working closely with engineers — but I wasn't reading or writing it anymore. And time? PMs get far fewer deep-focus blocks than engineers do. You're in meetings, writing docs, aligning stakeholders. So for nearly a decade, my open source contributions stopped. I became a consumer, not a contributor. It never felt quite right.
Enter OpenClaw
A few weeks ago, I started using OpenClaw — an open source AI assistant framework. It's exactly the kind of tool I love: hackable, designed for people who want to own their tools rather than rent them.
I started setting up a custom dashboard (OpenClaw Deck) to interact with multiple AI agents side-by-side. As I used it, I noticed gaps. The model selection was hardcoded. The configuration didn't match my actual gateway setup. Little friction points that broke the experience.
Normally, this is where I'd file an issue and hope someone else fixed it. But I had my OpenClaw agent running in my workspace. It could read code, make changes, run tests, even commit to GitHub.
So I tried something.
The contribution flow
I described the problem: "The Deck has hardcoded models, but I want it to fetch them dynamically from the OpenClaw Gateway."
OpenClaw explored the codebase, identified where the hardcoded values lived, and proposed a solution: add a /config endpoint to the Gateway, then fetch from it in the Deck.
Here's the thing — I don't write TypeScript professionally. I haven't touched Node.js internals in years. But I didn't need to be an expert. OpenClaw handled the syntax, the imports, the boilerplate. I focused on the product logic: what should the API return? How should the UI behave if the fetch fails? What's the right fallback?
We iterated. The agent wrote code, I reviewed it. When something looked off, I asked questions. "Why is this port hardcoded?" "Should we handle CORS here?" "What happens if the gateway is down?"
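The pattern we converged on is simple enough to sketch. Here's a minimal, hypothetical version of the Deck-side logic: try to fetch the model list from the Gateway, and fall back to a hardcoded default if the request fails or the response looks wrong. The names (`fetchModels`, `GatewayConfig`, the fallback list) are illustrative, not OpenClaw's actual API.

```typescript
// Fallback used when the Gateway is unreachable or returns junk.
// (Illustrative name; the real list lived in the Deck's config.)
const FALLBACK_MODELS = ["default-model"];

interface GatewayConfig {
  models: string[];
}

// Takes the fetcher as an argument so the failure cases are easy
// to exercise without a running Gateway.
async function fetchModels(
  getConfig: () => Promise<GatewayConfig>
): Promise<string[]> {
  try {
    const config = await getConfig();
    // Guard against a malformed response: only trust a non-empty
    // list of model names.
    if (Array.isArray(config.models) && config.models.length > 0) {
      return config.models;
    }
    return FALLBACK_MODELS;
  } catch {
    // Gateway down or request failed: degrade gracefully instead
    // of breaking the UI.
    return FALLBACK_MODELS;
  }
}
```

Injecting the fetcher rather than calling the network directly is what made the "what happens if the gateway is down?" question easy to answer, and to test, during review.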
After some back and forth, we had working code, passing tests, and submitted PRs.
You still need to review
Here's what I want to emphasize: AI didn't replace my judgment. It augmented it.
I read every line of those PRs. I didn't rubber-stamp the code because the AI "knows TypeScript." I asked about the CORS headers. I questioned the error handling. I made sure the fallback behavior made sense.
You don't need to be an expert in the language to review code effectively. You need to understand the intent and the implications. Does this change do what we think it does? Are there edge cases? Security implications?
This is where PM instincts actually help. PMs are trained to ask "what if" questions, to think about failure modes, and to consider the user experience when things go wrong.
The risks for maintainers
I want to acknowledge something: this new era of AI-assisted contributions creates real challenges for open source maintainers.
When contributors use AI to generate code, the burden of verification shifts. Maintainers have to be more vigilant about:
- Security: Is that regex actually safe? Is that SQL properly parameterized?
- Quality: Does this follow the project's patterns? Is it tested?
- Intent: Does the contributor actually understand what they're submitting?
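To make the security bullet concrete, here's the kind of contrast a reviewer looks for, sketched with a stand-in `{ sql, params }` shape rather than any real database client:

```typescript
// BAD: user input spliced directly into the SQL string. An input
// like "'; DROP TABLE users; --" escapes the quoted literal.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// GOOD: a fixed query with a placeholder, plus the value passed
// separately so the driver can escape it. The { sql, params } shape
// is illustrative; real clients take these as two arguments.
function safeQuery(username: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE name = ?", params: [username] };
}
```

The unsafe version hands attacker-controlled text straight to the database; the parameterized version keeps the query shape fixed no matter what the user typed. That difference is exactly what a maintainer has to verify, whether the code came from a human or an AI.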
What I learned
AI changes the time equation. What used to take a full day of ramp-up can now happen in a single sitting. The barrier isn't knowledge anymore — it's intent and review.
Your non-coding skills matter more than you think. As a PM, the ability to specify requirements, consider edge cases, and review holistically became the bottleneck — not the ability to write TypeScript. Clear problem definition beats syntax knowledge every time.
Open source needs more reviewers, not just more coders. If AI generates more contributions, the scarce resource becomes thoughtful review. That's a skill PMs have in abundance.
What's next
I'm not going to pretend I'm now a prolific open source contributor. I still have a day job. I still have time constraints.
But the mental block is gone. I know that when I encounter friction in tools I use, I can fix it. Not just file an issue — actually fix it.
That's a capability I haven't had in nearly a decade. It feels good to be back.
If you're a PM or former engineer who misses contributing to open source, I'd encourage you to try. Pick a tool you use daily. Find a small friction point. Use AI as a multiplier, not a replacement. You might be surprised what you can build.
P.S. — If you're an open source maintainer, I'd love to hear your thoughts. How are you handling AI-assisted contributions? What signals do you look for to know a contributor actually understands their PR? Drop me a line.