10x Asymmetric Risk
Why Great Leaders Learn to Lose Small and Win Big
Before I ever led large engineering organizations or AI platforms, I learned an unexpected lesson about risk in a much smaller room.
Early in my VMware career, I was deeply technical and deeply uncomfortable speaking in front of groups. Like many engineers, I believed good work should speak for itself. Then I joined the VMware VirtualSpeak Toastmasters club, almost as an experiment. The downside felt real. Public speaking meant visible failure. Awkward pauses. Forgetting words. Looking unpolished in front of peers I respected.
The upside, at the time, was unclear.
What made it worth trying was the asymmetry. The cost of failure was embarrassment in a safe room. The potential upside was learning a skill that could compound for decades. I gave my first few talks badly. Then less badly. Over time, I found my voice. Eventually, I led the club as VP of Education and helped it reach “Select Distinguished” status for the first time.
That experience permanently changed how I think about risk. Not as something to avoid, but as something to design so that failure teaches cheaply and success scales quietly.
Most leadership lessons since have been variations of that same pattern.
Symmetric Risk Is the Silent Killer of Organizations
Symmetric risk occurs when effort, cost, and downside scale roughly in proportion to the upside.
Six months of work. Real headcount. High organizational visibility. If it works, you get a modest improvement. If it fails, you lose time, credibility, and momentum.
Symmetric risk feels responsible. It looks mature in planning decks and steering committees. It is also how organizations slowly lose their edge without ever making a clearly bad decision.
Across large companies and startups, I have seen this pattern repeat. Teams overinvest before learning. Leaders commit reputational capital too early. By the time real data shows the idea is flawed, walking away is no longer politically or emotionally possible.
Asymmetric risk flips this equation. The downside is intentionally capped. The upside is allowed to be nonlinear.
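The arithmetic behind this is simple to sketch. Here is a minimal simulation, using illustrative numbers of my own rather than figures from any real project, comparing one large symmetric bet with a portfolio of small capped-downside bets that each usually fail:

```python
import random

random.seed(42)

def simulate(bets, p_success, downside, upside, trials=100_000):
    """Average net outcome per trial of running `bets` independent bets."""
    total = 0.0
    for _ in range(trials):
        for _ in range(bets):
            if random.random() < p_success:
                total += upside
            else:
                total -= downside
    return total / trials

# One big symmetric bet: downside roughly equals upside.
symmetric = simulate(bets=1, p_success=0.5, downside=100, upside=120)

# Ten small asymmetric bets: each downside is capped at a tenth
# of the symmetric bet's, yet the upside stays nonlinear.
asymmetric = simulate(bets=10, p_success=0.2, downside=10, upside=120)

print(f"symmetric bet EV:   {symmetric:.1f}")
print(f"asymmetric bets EV: {asymmetric:.1f}")
```

Even though each small bet fails 80 percent of the time, the capped losses mean the portfolio's expected value dwarfs the single "safe" bet. The numbers are hypothetical; the shape of the result is the point.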
Lesson One: Small Experiments Can Carry Disproportionate Impact
When generative AI first entered the enterprise conversation, the dominant reaction was caution. Hallucinations. Compliance risk. Brand exposure. All valid concerns.
At Uber, the temptation was to wait. To let the technology mature. To watch others take the early hits.
Instead, we asked a different question. Where can we learn fast, fail quietly, and still unlock real leverage if things work?
The answer was internal agent workflows. Conversation summarization. Assistive tooling. No customer-facing language. No automated decisions. Humans firmly in control.
The initial investment was intentionally small. A senior but compact team. Clear success metrics. Explicit exit criteria. If it failed, the blast radius would be limited to a narrow operational surface area.
What happened instead was a step-function shift. Productivity gains were real. Accuracy crossed thresholds we could defend. Trust grew organically. That early, asymmetric bet became the foundation for copilots, bots, and platform-level GenAI adoption that ultimately delivered hundreds of millions in business impact.
The insight was not technical brilliance. It was risk design.
Lesson Two: Platform Bets Are Only Asymmetric If You Can Survive the Valley
Earlier in my career at Springpath, and later at Cisco, I learned a harder lesson about asymmetric risk. Platform work is inherently high upside, but it is also where teams most often overcommit too early.
Building replication primitives, telemetry pipelines, orchestration layers, and data protection frameworks does not generate immediate applause. These investments delay visible wins. They demand patience from stakeholders who are often measured quarter to quarter.
What made these bets work was intentional staging. We shipped the smallest viable primitives that could support multiple futures. Each phase had a clear question it was designed to answer. Adoption, reliability, extensibility.
If a signal failed to materialize, we could stop without having sunk years of effort. If it worked, the leverage multiplied. Native replication unlocked new enterprise use cases. SaaS analytics changed how customers operated our product. Data-driven insights expanded the total addressable market and accelerated acquisition.
The lesson was clear. Platform investments only become asymmetric when leaders preserve optionality long enough to learn.
Lesson Three: Careers Are Portfolios, Not Single Bets
Asymmetric risk applies just as much to careers as it does to products.
Looking back, many of the inflection points in my career did not optimize for safety. They optimized for learning velocity and surface area. Working deep in hypervisors and distributed systems. Joining early-stage companies before outcomes were clear. Taking on ambiguous leadership roles where success was not guaranteed.
What made those risks rational was not confidence. It was asymmetry.
The downside was bounded. Skills would compound. Technical depth would transfer. Reputation would survive. The upside, if things worked, was disproportionate. Scope expansion. Trust. The ability to shape systems and organizations rather than just contribute within them.
Leaders who stall often mistake predictability for prudence. In reality, they accumulate symmetric risk over time. Stable roles. Incremental scope. Minimal exposure. Eventually, relevance erodes.
Lesson Four: AI Magnifies Asymmetry for Better and Worse
AI is the most asymmetric force I have seen in my career.
A small, well-positioned team can now compress years of tooling into months. Entire categories of manual work can disappear. New operating models become possible almost overnight.
At the same time, the downside scales just as fast. Poorly governed AI systems can erode trust instantly. Errors propagate at machine speed. Regulatory and brand risks multiply.
This is why AI leadership is not about enthusiasm. It is about architecture.
The teams that succeeded did not start by asking whether something could be automated. They started by asking what failure would look like, how fast it could be detected, and how cleanly it could be reversed.
Human-in-the-loop designs. Progressive exposure. Kill switches. Observability before optimization. These were not constraints. They were what made asymmetric upside possible in the first place.
Lesson Five: Culture Determines Whether Asymmetric Risk Is Even Possible
No amount of strategy matters if culture punishes failure indiscriminately.
In environments where every miss triggers blame, leaders default to safe, symmetric bets. Innovation slowly disappears while execution theater increases.
The healthiest cultures I have been part of share a few consistent traits. Decision ownership is clear. Retrospectives are honest rather than performative. There is a shared understanding that a bad outcome does not automatically imply a bad decision.
When those conditions exist, teams are willing to take many small, thoughtful risks. Over time, those compound into outcomes that look inevitable in hindsight but were anything but.
A Practical Test for Leaders
Before committing to any meaningful initiative, I ask myself a simple set of questions.
What is the maximum credible downside, not the theoretical one? Can it be capped or reversed? If this works unusually well, does it change the trajectory or merely meet expectations? And are we staffed for learning speed rather than just delivery?
If downside is uncapped and upside is incremental, it is not worth doing.
If downside is survivable and upside is nonlinear, hesitation becomes the real risk.
Closing Thought: Leadership Is the Art of Risk Design
The most persistent myth in leadership is that great outcomes come from avoiding risk.
They come from repeatedly taking the right kind of risk. Small enough to survive. Structured enough to learn. Bold enough to matter.
Asymmetric risk is how platforms replace features. How AI moves from demos to durable advantage. How leaders transition from managing teams to shaping systems.
The goal is not fearlessness. It is disciplined courage.
And over the long arc of a career, that discipline is what separates leaders who deliver momentary wins from those who leave lasting systems behind.
~10xManager