Defining the Coast: What 'Fake Work' Actually Looks Like in Tech
The term 'fake work' is provocative, but in the complex machinery of a technology organisation, it's rarely about blatant idleness. It's more insidious. It's the activity that consumes time and resources but generates negligible value towards core business objectives. You see it in the senior engineer who spends weeks perfecting a beautifully abstracted, over-engineered framework for a problem that requires a simple, maintainable script. You see it in the endless cycle of meetings to plan the planning session for a project that lacks clear stakeholder alignment. It manifests in the meticulously crafted dashboard that tracks metrics no one uses to make decisions, or the relentless refactoring of stable legacy code under the banner of 'tech debt' without a clear risk or velocity payoff. This isn't about laziness; it's often about misaligned incentives, poor role clarity, or individuals optimising for local recognition (looking technically sophisticated) over global outcomes (shipping customer value).
From an Applied Leadership perspective, the first challenge is diagnosis. Is this coasting, or is it a symptom of a broken process? A team generating reams of documentation but no deployable code might be responding to past audit failures. An analyst building ever-more complex models might be unsure what the business actually needs to decide. The leader's role is to distinguish between activity and progress. This requires moving beyond vague concerns about 'productivity' and asking specific, value-oriented questions: What user problem did this work solve? What decision was enabled by this analysis? If this output disappeared tomorrow, who would notice and why? This shifts the conversation from effort (inputs) to impact (outputs and outcomes), creating a tangible framework for identifying work that has drifted into the realm of the performative rather than the practical.
The Systemic Roots: How Processes and Culture Breed Performative Activity
Individuals rarely choose 'fake work' in a vacuum. More often, it's a rational adaptation to a dysfunctional system. Consider the common annual performance review cycle, heavily weighted towards demonstrable 'achievements'. This incentivises engineers to favour greenfield projects they can own and showcase over the gritty, essential work of maintaining and debugging critical systems. It encourages product managers to champion flashy new features while deprioritising foundational usability improvements that are harder to measure. The culture of 'busyness as a badge of honour' prevalent in many tech firms further entrenches this. When leadership celebrates long hours and full calendars, employees learn to optimise for visible activity, not silent, deep work that often yields the highest leverage. The result is a theatre of productivity: calendars packed with syncs, Slack buzzing with notifications, and Jira boards overflowing with tickets, yet forward momentum on strategic goals feels glacial.
Another potent systemic driver is the separation of 'strategy' from 'execution'. When company strategy is a vague set of aspirational slides disconnected from quarterly team objectives, teams lack a true North Star. They default to what is measurable, familiar, or politically safe—the very definition of coasting along a path of least resistance. Decision-Making at the leadership level either compounds or solves this. A decision to fund projects based on which VP shouts loudest, rather than a disciplined evaluation of expected value and resource constraints, guarantees that a portion of the portfolio will be fake work. Conversely, a decision to implement a rigorous, transparent prioritisation framework (like Weighted Shortest Job First or clear OKRs tied to strategy) forces conversations about value and trade-offs, systematically starving performative projects of oxygen and redirecting energy to genuine value creation.
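To make the WSJF idea concrete, here is a minimal sketch of how a backlog might be ranked by it. WSJF divides a Cost of Delay estimate (here, the common sum of business value, time criticality, and risk reduction scores) by a job-size proxy for duration; the project names and scores below are entirely hypothetical.

```python
# Hypothetical backlog items, each scored 1-10 on the three
# Cost of Delay components and on job size (a duration proxy).
backlog = [
    # (name, business value, time criticality, risk reduction, job size)
    ("Checkout redesign",      8, 6, 3, 8),
    ("Internal admin tooling", 3, 2, 2, 5),
    ("Payment retry fix",      6, 9, 7, 2),
]

def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """Weighted Shortest Job First = Cost of Delay / Job Size."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# Highest WSJF first: small, urgent, valuable work rises to the top.
ranked = sorted(backlog, key=lambda item: wsjf(*item[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: WSJF = {wsjf(*scores):.2f}")
```

Note how the small, urgent fix outranks the large showcase project: the framework makes the trade-off explicit instead of leaving it to whoever shouts loudest.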
The Data Science Team's Particular Vulnerability
Data science teams are exceptionally prone to 'fake work' due to the inherent ambiguity of their success metrics. Without tight coupling to business decisions, a data scientist can spend months building a model with a stunning 99% accuracy on a historical dataset, only for it to sit unused because it doesn't address a timely operational need or integrate into a user-facing workflow. The work is technically real and challenging, but its impact is fake. The allure of complex methodologies (a novel neural network architecture) can overshadow the simpler, more robust statistical model that would be easier to deploy, monitor, and explain. This is where Data Science must be applied, not abstract. The discipline must start with the decision: "What will we do differently if this model succeeds?" If the answer is unclear, the work is at high risk of being an academic exercise dressed in corporate clothing.
Quantifying the Drift: Using Data to Surface Value Disconnects
Gut feeling about inefficiency is not enough for an applied leader. You need evidence. This is where a diagnostic, data-informed approach separates effective management from vague dissatisfaction. Start by mapping work streams to value streams. For an engineering team, this could involve analysing cycle time and throughput not just in aggregate, but categorised by ticket type: new feature vs. bug fix vs. internal tooling vs. 'architecture'. A ballooning proportion of time spent on internal tooling with no corresponding improvement in feature deployment speed is a red flag. For a product team, track the percentage of shipped features that achieve predefined adoption or engagement thresholds. A low success rate suggests development effort is disconnected from user needs. The goal isn't to create a surveillance state, but to illuminate patterns and spark focused, qualitative investigation.
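The categorised cycle-time analysis described above can be sketched in a few lines. This is an illustrative toy, not a prescription: the ticket log, type labels, and dates below are all invented, and in practice the data would come from your issue tracker's export or API.

```python
from collections import defaultdict
from datetime import date

# Hypothetical ticket log: (type, started, finished).
tickets = [
    ("feature", date(2024, 3, 1), date(2024, 3, 8)),
    ("feature", date(2024, 3, 4), date(2024, 3, 6)),
    ("bug",     date(2024, 3, 2), date(2024, 3, 3)),
    ("tooling", date(2024, 3, 1), date(2024, 3, 15)),
    ("tooling", date(2024, 3, 5), date(2024, 3, 19)),
]

def effort_share_by_type(tickets):
    """Total cycle-time days per ticket type, as a share of all effort."""
    days = defaultdict(int)
    for kind, start, end in tickets:
        days[kind] += (end - start).days
    total = sum(days.values())
    return {kind: d / total for kind, d in days.items()}

shares = effort_share_by_type(tickets)
for kind, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{kind}: {share:.0%} of cycle-time days")
```

In this toy dataset, internal tooling absorbs well over half of the team's cycle-time days; on its own that proves nothing, but it is exactly the kind of pattern that should trigger a qualitative conversation about whether the tooling work is paying off in deployment speed.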
Data Science techniques can be applied here at a meta-level. Simple correlation analysis can reveal if certain project characteristics (e.g., number of pre-kickoff meetings, size of initial specification document) are predictive of longer delivery times or lower post-launch usage. Natural language processing on Jira tickets or Slack channels can help identify themes of confusion or misalignment early. The key is to use data to ask better questions, not to deliver definitive verdicts. For instance, data might show that projects initiated by the marketing team have a 70% lower adoption rate than those initiated by customer support. This isn't an indictment of marketing, but a signal to investigate the decision-making and requirement-gathering process for those projects. The data points to a potential disconnect; the leader must then do the human work of understanding why.
The Leader's Toolkit: Interventions to Redirect Energy Towards Impact
Identifying 'fake work' is only half the battle; the other half is redirecting the energy without demoralising the team. The blunt instrument of simply demanding "more value" fails. Effective intervention is surgical. First, clarify the 'Why' relentlessly. For every initiative, mandate a simple, one-page charter that answers: What user or business problem are we solving? How will we measure success? What is the cost of delay? This forces value articulation before a single line of code is written. Second, implement shorter feedback loops. Replace quarterly big-bang releases with weekly or bi-weekly small batches of work. This surfaces misalignment quickly—if a feature sits unused for two weeks after release, the next batch can be pivoted. It transforms a long coasting period into a series of small, correctable drifts.
Third, and most critically, change the conversation in one-on-ones and retrospectives. Move from "What did you do?" to "What did you learn?" and "What impact did it have?". This psychological shift is profound. It signals that thoughtful experimentation that yields learning is more valuable than flawless execution of a pointless task. It empowers individuals to question the work itself. Finally, as an Applied Leadership practice, you must model the behaviour. Be transparent about your own time. Explain your decisions in terms of strategic value and trade-offs. Publicly cancel projects that no longer serve a clear purpose, framing it not as failure but as intelligent resource reallocation. This demonstrates that the ultimate 'fake work' is leadership that lacks the courage to stop work that has lost its meaning.
From Coasting to Contribution: Fostering a Culture of Authentic Work
Eradicating 'fake work' entirely is a fantasy; the goal is to minimise it and create a culture where it is safe to call it out. This requires building psychological safety, but of a specific kind: safety to challenge the validity of the work, not just to report on its progress. Leaders must actively invite dissent on project relevance. In planning meetings, ask: "Are we all convinced this is the most valuable thing we could be doing right now? If not, what's holding us back from saying so?" Celebrate when a team proactively kills its own project because the learnings showed it wouldn't deliver value. This turns the avoidance of wasted effort into a shared victory.
Ultimately, a culture of authentic work is anchored in transparency and connectedness. Individuals need to see how their work ladders up to customer and business outcomes. Use tools like internal 'showcases' where teams present not just what they built, but the problem it solved and the data showing its effect. Rotate engineers and analysts through stints in support or sales to directly hear customer pain points. This human connection is the most potent antidote to abstract, coasting work. When a developer has just spent an hour listening to a frustrated user, they return to their desk with an innate filter for value. They will naturally question the utility of that abstract framework. They become self-correcting agents, aligned not with the process, but with the purpose. This is the end state of effective Applied Leadership: not a leader who polices work, but one who architects a system where genuine contribution is the most rewarding path for everyone.
The notion of 'fake work' is less a moral failing and more a systemic design problem. It emerges from misaligned incentives, vague strategy, and feedback loops that measure activity instead of impact. For the leader, the task is part detective, part architect, and part coach. It begins with the courage to question the value of ongoing work using data and pointed questions. It requires the discipline to implement processes—like clear charters and short feedback cycles—that force alignment. And it culminates in cultivating a culture where the highest form of professionalism is not blind execution, but thoughtful contribution. The actionable takeaway is this: next week, in your team meeting, don't just review the status of tasks. Pick one ongoing project and ask the team, "If we stopped this today, what would be the consequence?" The honesty of the answer will tell you everything you need to know about whether you're steering a ship or simply admiring the wake it leaves as it drifts.