The Quiet Risk in the Age of DIY Software
As AI makes it easier than ever to assemble your own software stack, we’re confusing capability with accountability. Here’s the quiet risk hiding in the DIY era.
Two weeks ago, a customer forwarded me a link with a simple note: “Should we just build this ourselves?”
They’re a mid-sized operations team. Thoughtful. Not reactionary. But they’d just seen a demo of an AI “Chief of Staff” stitched together from a few tools and an LLM. It looked fast. Fluid. Personal. No procurement process. No sales cycle. Just… assemble and go.
The question wasn’t hostile. It was curious. And underneath it, I could hear something deeper: If the tools are this powerful, what are we really paying for?
If you’ve been following product conversations this week, you’ve felt it too. Playbooks for autonomous agents. The “IKEA SaaS” model — assembly required. Debates about whether customers can just use ChatGPT instead. Stories of vibe-coded apps exposing 18,000 users because basic flaws were missed. Even research posts about misused Likert scales — beautiful dashboards, shaky foundations.
It all points to the same tension: building is easier than ever. Owning the consequences is not.
As someone who spends her days in customer conversations — in the moments after implementation, during renewals, inside escalation calls — I’ve been watching this shift closely. And I think we’re underestimating one quiet risk.
We’re confusing assembly with accountability.
The IKEA Temptation
There’s something undeniably appealing about the new modularity of software.
You can:
- Connect an LLM to your CRM in an afternoon
- Spin up a productivity app without a formal roadmap
- Chain tools together with MCP servers instead of copy-pasting between tabs
- Build an internal “agent” that drafts, summarizes, routes, and reports
The barrier to experimentation has collapsed. And that’s a gift.
In 2024, Gartner estimated that over 70% of new enterprise applications would use low-code or no-code technologies. That number felt aggressive at the time. Now it feels conservative.
But here’s what I’ve learned from onboarding teams over the years: the cost of a product isn’t just in its build. It’s in its behavior over time.
Flat-pack furniture works because the physics don’t change once you assemble it. A bookshelf doesn’t reinterpret your intent next Tuesday. It doesn’t quietly update itself. It doesn’t route sensitive data to the wrong place because a prompt was phrased loosely.
Software does.
And when customers ask, “Should we just build this ourselves?” what they’re really asking is:
“Are we buying capability — or are we buying responsibility?”
That’s a very different evaluation.
When Autonomy Meets Exposure
The conversation about agents earning trust is important. But trust doesn’t emerge from autonomy alone. It emerges from predictable boundaries.
This week’s story about a vibe-coded app exposing 18,000 users isn’t surprising. It’s inevitable in a world where speed outruns review.
In customer support calls after security incidents, I’ve noticed something consistent. The distress isn’t just about the breach. It’s about the realization that no one had fully traced the system’s edges.
Who had access?
Where was data stored?
What assumptions were baked into the workflow?
What happens when the model changes?
These aren’t glamorous questions. They don’t show up in launch tweets. But they define whether a product is durable.
A 2023 IBM report found the average cost of a data breach was $4.45 million globally. That number gets quoted often. What doesn’t get quoted is the operational cost afterward: the slowed roadmap, the legal reviews, the internal morale hit.
Autonomy without guardrails doesn’t scale. It compounds risk.
And here’s the part we don’t talk about enough: customers don’t just adopt your product. They inherit your decisions.
When those decisions are invisible, that inheritance becomes fragile.
The Research Illusion: Measuring Confidence, Not Competence
Another thread I’ve noticed: beautifully structured research guides about Likert scales and attitudinal surveys.
I love good measurement. I’ve built onboarding dashboards that track adoption to the decimal point. But I’ve also sat in renewal conversations where a customer says, “Yes, satisfaction is high — but we’re still rebuilding half of it internally.”
There’s a difference between:
- “This works well enough”
- “This is resilient under pressure”
Likert scales often capture the first.
The second shows up in moments of stress:
- When a key employee leaves
- When compliance requirements change
- When usage triples unexpectedly
- When an integration fails at 2 a.m.
In a recent post-implementation review with a fintech client, our NPS score was strong. Usage metrics were climbing. But when we asked a more open question — “Where would this break if your volume doubled?” — the room went quiet.
That silence told us more than the dashboard.
Confidence is easy to measure. Structural soundness is harder.
As builders and researchers, we need to design feedback loops that surface not just delight, but durability. That means asking questions like:
- Where do you still keep manual backups?
- What workarounds feel “temporary” but have lasted months?
- What scenario makes you nervous that we haven’t discussed?
Those questions rarely trend on Medium. But they prevent churn.
Executive-Led Growth and the Return of the Human Face
The debate about B2B SaaS megaphones being broken — about corporate pages losing reach — doesn’t surprise me either.
When tools are modular and replicable, differentiation shifts.
If anyone can assemble the pieces, then what matters is:
- Who stands behind the system
- Who explains trade-offs transparently
- Who shows up when something breaks
In enterprise conversations, I’ve seen deals hinge less on feature matrices and more on executive presence. Not charisma — clarity.
A CTO once told me during a renewal discussion:
“I don’t expect perfection. I expect visibility.”
That line has stayed with me.
In a world where customers can experiment with AI themselves, your moat isn’t just functionality. It’s your willingness to be accountable in public.
That’s harder to replicate than code.
Scaling Isn’t About Users. It’s About Consequences.
There’s a popular question making the rounds: what changes when you scale from 100 to 100,000 users?
From a customer success perspective, here’s what changes first: the margin for ambiguity disappears.
At 100 users:
- You can manually patch gaps
- You can clarify misunderstandings individually
- You can rely on tribal knowledge
At 100,000 users:
- Every unclear workflow becomes a support ticket multiplier
- Every security shortcut becomes systemic
- Every vague promise becomes contractual risk
The DIY era accelerates early growth. But it also accelerates exposure.
I’ve worked with teams who brilliantly hacked together internal tools to reach their first thousand users. But when enterprise customers arrived, the conversation changed. Suddenly it wasn’t about cleverness. It was about:
- Audit trails
- Role-based permissions
- Data residency
- Incident response protocols
The scaffolding that feels unnecessary at 10 users becomes non-negotiable at 10,000.
And here’s the insight I keep returning to:
Ease of assembly does not reduce the need for architecture. It increases it.
Because the faster something spreads, the faster its weaknesses propagate.
What This Means for How We Build — and Listen
As someone who lives in the space between product and customer reality, I don’t think the answer is to resist the DIY wave. It’s to mature alongside it.
A few shifts I’m encouraging internally and with customers:
1. Design for Transparency, Not Just Output
If your system makes autonomous decisions, show the reasoning trail. Not because it’s trendy — because it lowers support friction and builds informed trust.
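To make "show the reasoning trail" concrete: it can be as simple as attaching a structured log entry to every autonomous action, so a support team can replay what the system did and why. A minimal sketch, with hypothetical names throughout (this is not any specific product's API):

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionRecord:
    """One entry in an agent's reasoning trail (hypothetical schema)."""
    action: str     # what the agent did
    inputs: dict    # the data it acted on
    rationale: str  # why, in plain language
    timestamp: float = field(default_factory=time.time)

class ReasoningTrail:
    """Collects decision records so they can be reviewed after the fact."""
    def __init__(self):
        self._records = []

    def log(self, action, inputs, rationale):
        self._records.append(DecisionRecord(action, inputs, rationale))

    def export(self):
        # JSON export makes the trail easy to attach to a support ticket
        return json.dumps([asdict(r) for r in self._records], indent=2)

trail = ReasoningTrail()
trail.log(
    action="route_ticket",
    inputs={"ticket_id": "T-1042", "topic": "billing"},
    rationale="Keyword 'invoice' matched the billing queue rules.",
)
print(trail.export())
```

The point isn't the data structure; it's that the rationale is captured at decision time, not reconstructed during an escalation call.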
2. Treat Feedback as Structural Inspection
Don’t just ask, “Do you like it?”
Ask, “Where does this feel brittle?”
Collecting product feedback isn’t about feature prioritization alone. It’s about mapping stress points before they snap.
3. Separate Experimentation from Exposure
Encourage sandbox environments. Clear labeling. Guardrails. Make it easy to try — and hard to accidentally endanger real data.
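One lightweight way to make "hard to accidentally endanger real data" real is a gateway that routes all writes to a sandbox store unless production is explicitly enabled and explicitly confirmed. A sketch under assumed names (nothing here reflects a specific product):

```python
class GuardrailError(Exception):
    """Raised when a write would touch real data without explicit confirmation."""

class DataGateway:
    """Routes writes to a sandbox store unless production is explicitly enabled."""
    def __init__(self, environment="sandbox"):
        if environment not in ("sandbox", "production"):
            raise ValueError(f"unknown environment: {environment}")
        self.environment = environment
        self.sandbox_store = {}
        self.production_store = {}

    def write(self, key, value, confirm_production=False):
        if self.environment == "production":
            # A second, explicit flag is required before touching real data
            if not confirm_production:
                raise GuardrailError(
                    "production write requires confirm_production=True"
                )
            self.production_store[key] = value
        else:
            self.sandbox_store[key] = value

gateway = DataGateway()  # defaults to sandbox: easy to try
gateway.write("customer:42", {"email": "test@example.com"})
```

The design choice worth copying is the asymmetry: experimentation is the zero-friction default, and reaching real data requires two deliberate steps.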
4. Make Accountability Visible
Whether through executive voices, public roadmaps, or transparent incident reports — show that someone is awake at the wheel.
Customers are remarkably forgiving when they see responsibility. They’re far less forgiving when they see deflection.
The Question Behind “Should We Just Build It?”
When that customer asked if they should build it themselves, we didn’t respond with a defensive pitch.
We walked through scenarios.
- Who maintains the prompts when models update?
- Who monitors for drift?
- Who ensures compliance as regulations evolve?
- Who owns the incident call at midnight?
By the end of the conversation, the question had changed.
It wasn’t “Can we build this?”
It was:
“Do we want to own this layer of risk?”
Sometimes the answer will be yes. And that’s healthy. The ecosystem should be flexible.
But as we move deeper into this era of agents, modular stacks, and rapid assembly, I hope we keep one principle steady:
Software isn’t just something we construct. It’s something we are accountable for over time.
And accountability doesn’t trend. It doesn’t demo well. It doesn’t fit neatly into a playbook.
But in every renewal call, every escalation, every late-night incident review — it’s the thing customers remember.
In the end, the real product isn’t the interface or the agent or the clever integration.
It’s the quiet promise that someone is paying attention — especially when things get complicated.
That promise is harder to assemble.
And it’s worth more than we’re currently pricing in.
Jade leads Customer Success initiatives at Round Two. She is passionate about understanding people's needs and how product feedback tools like Round Two can help generate more useful insights.