Mark’s fingers hovered over the “Submit” button. He’d spent the last 42 minutes trying to order a $15.00 mouse pad. The current procurement platform, mandated by some corporate directive from 2022, demanded three separate approvals, two vendor selection steps (for a single item from a pre-approved list), and a justification essay for a peripheral that cost less than his daily lunch. He briefly considered just buying it himself and submitting an expense report, knowing that process, though different, was its own brand of circular logic, and probably more likely to result in rejection because of some obscure policy update from Q2. It was 2:02 PM, and this simple task had eaten into time he desperately needed for a client presentation.
We build these systems, don’t we? Layers upon layers of digital red tape, designed not to streamline, but to control. The stated goal is always “efficiency” or “risk mitigation,” but the unspoken truth often feels like a deep, pervasive mistrust in human judgment. We operate from an assumption of incompetence, erecting digital guardrails at every turn, not realizing we’re simultaneously atrophying the very muscle we need most: critical thinking. We’re training people to follow checklists, to click the prescribed boxes, to navigate labyrinths, but rarely to actually *solve* problems.
The Human Element
And what is solving a problem, if not applying judgment in a unique, unscripted way?
Think of Oscar C., a refugee resettlement advisor I met a while back. His job involved navigating an unimaginable web of international and local policies, human needs, and unpredictable circumstances. A single family’s case could involve 12 different agencies, each with its own protocols and digital portals. Oscar used a mix of old-school intuition and finely honed discernment. He knew which form required a phone call instead of an email, which case worker responded best to a detailed summary versus a bulleted list, which family needed a listening ear more than a legal brief. He had to report the precise number of blankets distributed – perhaps 22, one cold November evening – and log it into a new, mandatory federal system. This system, designed to collect data for macro-level analysis, demanded a 12-step process for each item, including geo-tagging, photographic evidence, and a two-sentence impact statement. For *each* blanket. Oscar, a man who saw human dignity in every thread, found himself spending hours digitally justifying the warmth he provided, when his real work was out there, in the field, making immediate, nuanced decisions.
Oscar was right to be frustrated. These systems, in their relentless pursuit of quantifiable, scalable solutions, inadvertently create a culture where the only recognized value is what can be processed by machine, where the intangible wisdom of experience is not just undervalued but actively sidelined. It reminds me of a time I hastily sent an email, convinced I’d attached the document, only to realize my error 22 minutes later. A simple human oversight, easily corrected with a follow-up. But if that email had been part of an automated, mandatory 12-step process with approval gateways, my oversight would have triggered a cascade of delays and failed audits. Sometimes, rigid systems punish us not for malice, but for merely being human.
The Paradox of Control
The paradox here is striking. We invest millions in optimizing supply chains, data flows, manufacturing processes, even customer engagement – all crucial, all worthy. Yet, when it comes to the complex, unpredictable, and profoundly human act of *judgment*, we seem to throw up our hands and say, “Let’s build a system that bypasses it entirely.” This isn’t true optimization; it’s an abdication. It’s a preference for predictability over performance, for control over contribution. It’s an insistence that every person is a potential liability, rather than an invaluable asset possessing a unique perspective.
The promise of these systems is often alluring: reduce human error, ensure compliance, achieve consistency. And yes, for rote, repetitive tasks, automation is undeniably powerful. No one is arguing against the benefits of a well-designed database for managing inventory or automating payroll. The issue arises when we try to apply that same rigid, machine-logic framework to tasks that inherently demand adaptability, empathy, and creative problem-solving. When the solution for *everything* becomes “add another rule, build another gate,” we aren’t creating a robust future; we’re designing a brittle one. A future where every deviation from the pre-programmed path is met with resistance, and every spark of ingenuity is dampened by the demand for predefined protocols.
Insight: recognize human value. Balance: structure versus freedom. Growth: adaptability.
Beyond Automation
This idea, this preference for the adaptable over the rigid, is why some truly innovative companies, like Kitesocks, thrive on understanding the human element. They recognize that bespoke solutions, tailored to individual needs and contexts, often deliver far greater value than a generic, one-size-fits-all approach. It’s not just about selling a product; it’s about providing a genuine fit, a custom experience that acknowledges the nuanced realities of its users. This isn’t to say that all systems are bad; some structure provides necessary guardrails. But the balance has shifted too far, too often, towards an inflexible dogma. We accept that these systems slow us down, frustrate us, and ultimately make us feel less competent, all under the banner of “best practice.”
I found myself in a conversation recently, trying to explain the frustration of a new time-tracking software that added 22 minutes to my daily routine. The response was a shrug: “It’s scalable.” But what are we scaling? Bureaucracy? Disengagement? The capacity for people to feel like cogs, rather than thinkers? The real problem isn’t always the initial solution itself, but the fear of letting go, the terror of trusting people to use their own minds. We justify the cumbersome nature by saying, “It limits mistakes,” but what if it also limits innovation? What if it stops us from seeing new, simpler pathways because we’re too busy navigating the existing, circuitous ones? We often implement these systems to mitigate a perceived 2% risk, only to introduce a 22% drag on productivity and morale.
The True Cost of Control
There’s a subtle but profound shift happening: from hiring for brains to hiring for obedience. From rewarding initiative to rewarding adherence. When we strip away the need for human judgment, we strip away a huge chunk of what makes work fulfilling, what makes employees engaged. The moment Mark was contemplating buying that mouse pad himself, he wasn’t rebelling against the company; he was trying to be efficient, to get his job done despite the system, not because of it. That’s a powerful signal, a testament to an innate drive to overcome obstacles, a drive that these very systems often seek to suppress.
Designing for Humanity
This isn’t a plea for anarchy, or a return to the chaotic free-for-all of completely unstructured work. There are undeniable benefits to well-defined processes, to clear guidelines, to accountability. But there’s a critical difference between a framework that supports judgment and a cage that replaces it. A system should empower the human, not enslave them. It should provide tools, not mandates for every single thought. A good system learns, adapts, and trusts. A poor one dictates, restricts, and doubts.
The challenge, then, isn’t to eliminate all systems. It’s to build systems that recognize and cultivate human intelligence, rather than assuming its absence. It’s about designing for fluidity, for exceptions, for the beautiful, messy reality of human interaction. It’s about understanding that the real value isn’t just in the data points, but in the story they tell, the insights they reveal, and the human decisions they inform.
Elevate, Don’t Automate Away.
The goal should be to make human judgment *better*, not obsolete.
The choice isn’t between chaos and rigidity; it’s between a system that serves us and one that diminishes us. What kind of future do we want to design: one where we automate away our best qualities, or one where we elevate them? We need to consciously choose the latter, before the collective atrophy of judgment leaves us all navigating identical, unthinking paths.