The Shamrock Problem

When Doing It the Hard Way Is the Point

I decided I wanted to make fabric shamrocks.

In my mind, they were charming. Crisp little shapes with personality. The kind of handmade detail that looks effortless but makes you feel slightly superior for having made it yourself.

My first instinct was to machine sew them. If I was going to do this, I might as well do it efficiently. Clean seams. Consistency. Scale, even if the scale was only six shamrocks.

But the more I pictured feeding tiny curves under the presser foot, the less efficient it felt. It felt like a fight waiting to happen.

So I pivoted. I would hand sew them. Slower. Simpler. More control.

It was still a fight.

Halfway through the third one, I realized I was solving the wrong problem. The issue was not machine versus hand. The issue was the material.

The fabric was too limp. Too structurally unreliable. What looked crisp in my imagination collapsed in reality. They did not hold their shape. They slouched.

That was the insight.

And I would not have discovered it by optimizing my stitching method. I had to wrestle with the thing itself.


Automation Is Often a Timing Problem

In leadership and operations, we are wired for efficiency. If something works, we want to systematize it. If it repeats, we want to automate it.

That instinct is smart. It just has a sequencing requirement.

Too often, teams automate before they truly understand the work. They build workflows around assumptions. They design systems based on how a process should function, not how it actually behaves under pressure.

The result is speed layered on top of fragility.

When you do something manually, you see what does not show up in the planning document. You feel the friction. You notice the edge cases. You discover where the idea bends, or breaks.

Manual work produces evidence. Not theoretical evidence. Operational evidence.

And that evidence changes decisions.


The Real Issue Is Not Fear. It Is Signaling.

Manual work has a branding problem.

It can look inefficient. It can look like backtracking. In organizations that prize scale, doing something by hand can feel like admitting the system is not ready.

So we label it “wasted time.”

We say, “We don’t have the capacity to do this manually.”

Sometimes that is accurate.

But sometimes manual work is not regression. It is risk management.

Automation creates distance. Humans create discernment.

Software handles the common case beautifully. Humans catch the anomalies. The subtle errors. The edge conditions that technically pass the rule set but fail the real-world test.

There are moments when human hands on a process are not indulgent. They are protective. They reduce downstream error. They surface hidden complexity before it multiplies.

The goal is not to remain manual. The goal is to understand the terrain well enough to automate responsibly.

If you skip that step, you institutionalize blind spots.


Tiny Tests Prevent Expensive Mistakes

This is where curiosity becomes operational, not philosophical.

Before you scale something, run it once by hand. End to end. Notice where it slows down. Notice where it surprises you. Notice what feels structurally weak.

Then ask three questions:

  • Is the friction about skill?
  • Is it about process design?
  • Or is the core idea itself flawed?

In my case, the shamrocks were not difficult because I lacked technique. They were difficult because the material could not support the outcome I wanted.

That distinction matters.

When leaders skip the manual phase, they often try to automate around structural weakness. They add steps. They add checks. They add layers. The system becomes heavier and more complex, all to compensate for a shaky foundation.

Sometimes the smarter move is to admit the fabric is wrong.

Every meaningful initiative has a bend in the road. A moment when the early signals tell you whether this will scale cleanly or fight you at every turn.

Manual work helps you see that bend sooner.

My shamrocks never made it to prime time. But they saved me from doubling down on a flawed idea and blaming my tools.

Try one small test this week. Do it manually before you build the machine. Gather evidence with your own eyes. Then decide what deserves to scale.

Curiosity first.
Evidence next.
Automation last.

That sequence is not inefficient.

It is disciplined.