The American Healthcare System is Failing
Five years from now, we won’t recognize the system we have today.
The American healthcare system is collapsing—and fast. In five years, it may be unrecognizable. I foresee two decades of chaos. People will die from lack of access to care. Families will be shattered. Large employers will dominate even more, leveraging their economic advantages to build systems far beyond today's self-insurance models, while smaller companies simply get out of the business of healthcare altogether.
The concept of pre-existing conditions will return, albeit through different systems that skirt the ACA. Government won’t stand in the way, because it has no answers to the problem.
Serious medical conditions will make people unemployable. Chronic illnesses will limit job options and force people to accept lower wages for coverage. Cash-pay systems will surge. Americans will need to become more self-reliant.
No one wants to talk about this—it’s too ugly, and there are no clear solutions. But ignoring it won’t help. Our tendency to wait for collapse before acting is deeply ingrained. We suffer from normalcy bias, avoiding hard decisions because the costs are immediate and the benefits distant. We don’t fix what’s broken—we wait until it’s shattered.
While I make predictions, I know they're wrong. But they help us think and prepare. An economist I follow tells the story of a WWII weather officer whose team was tasked with long-term forecasts. The team could never do better than a coin flip. When the team asked to be relieved of its useless duties, the reply came back: "The commanding general knows the forecasts are no good. But he needs them for planning purposes."
I share long-term forecasts for the same reason—to help you prepare for an uncertain future.
How America’s Healthcare System Became So Broken
The evolution of today’s broken healthcare system began in earnest after World War II, but its malignancy took root in 1965—the year Medicare became law. While I was born that year, I don’t think my birth was the cause; it’s mere coincidence. Still, the timing is symbolic. Medicare was a well-intentioned response to a growing crisis, but it also marked the beginning of a long, slow unraveling of the economic foundation of health insurance.
Insurance is about risk. In that sense, there is no health insurance industry in the United States, because the system is no longer based on risk.
The Seeds of Medicare
Medicare was created to solve a very real problem: seniors were living longer, but they couldn’t afford the medical care that came with aging. Several forces converged to make this urgent:
Medical science was advancing, offering new ways to extend life—but at a cost.
The post-war economy had become mobile, scattering families and leaving elderly parents without informal care.
Social Security provided income, but not healthcare, and private insurers refused to cover older adults, citing the greater risk.
Politicians responded to pressure from a growing, aging voter base.
Medicare was not designed as traditional insurance. It was—and still is—a government-run payment system, more akin to Social Security than to private health coverage. It reimburses providers for services, regardless of individual risk. At the time, this seemed manageable. People didn’t live much beyond 65, and medicine was relatively inexpensive. But that didn’t last.
Employer-Sponsored Insurance: A Wartime Workaround
Meanwhile, the private insurance model was taking shape in a very different way. During WWII, the U.S. government froze wages to control inflation. Employers, desperate to attract workers, began offering health insurance as a fringe benefit—something not counted as wages and exempt from income tax. This workaround became the foundation of the employer-sponsored insurance system we still rely on today.
Initially, these plans covered basic hospitalization and surgical services. Over time, they expanded to include physician visits, major medical coverage, and eventually dental and vision care. But the system was never designed for portability, equity, or long-term sustainability. It was a patchwork solution to a wartime problem that calcified into national policy.
The ACA and the Collapse of Risk-Based Insurance
Risk-based employer health plans began to unravel with the passage of HIPAA in 1996. While most associate HIPAA with privacy, its more significant impact was on accessibility and portability. The law restricted how group plans could handle pre-existing conditions, marking the start of a major shift away from risk-based concepts.
Key provisions included:
· Shorter waiting periods: Plans could exclude coverage for pre-existing conditions for no more than 12 months.
· Credit for prior coverage: Previous insurance counted toward reducing or eliminating exclusion periods.
· Portability: Coverage could be maintained when changing jobs or employment status, if eligibility criteria were met.
HIPAA made it easier for individuals to obtain and keep coverage despite pre-existing conditions. In doing so, it began the separation of health insurance from individual health risk—undermining the foundation of risk-based employer plans.
The Affordable Care Act (ACA) was another response to crisis—this time, the crisis of millions of uninsured Americans and, despite HIPAA, rampant discrimination against those with pre-existing conditions. The ACA's mandate that insurers cover everyone, regardless of health status, was a moral and political victory. But economically, it broke the back of the traditional insurance model.
Insurance depends on risk pooling: healthy people pay in, sick people draw out. When coverage is guaranteed and premiums can't be adjusted for risk, healthy individuals have little incentive to buy insurance until they need it. The ACA tried to counter this with an individual mandate, but Congress zeroed out the penalty in 2017, effectively eliminating it. Now, many treat insurance like a fire extinguisher—something to buy only when the flames are visible.
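To make the pooling arithmetic concrete, here is a minimal sketch. Every figure in it is hypothetical, chosen only to illustrate the mechanics:

```python
# Stylized risk-pool arithmetic (all figures are hypothetical).
# A break-even premium is simply total expected claims divided by headcount.

def break_even_premium(expected_costs):
    """Average expected annual claims across everyone in the pool."""
    return sum(expected_costs) / len(expected_costs)

# A pool of 8 healthy members ($1,000/yr each) and 2 sick members ($50,000/yr each).
full_pool = [1_000] * 8 + [50_000] * 2
print(f"Full pool:    ${break_even_premium(full_pool):,.0f}")     # $10,800

# If half the healthy members wait until they're sick to buy in:
thinned_pool = [1_000] * 4 + [50_000] * 2
print(f"Thinned pool: ${break_even_premium(thinned_pool):,.0f}")  # $17,333
```

Losing just four low-cost members pushes the break-even premium up by roughly 60 percent, and, as we'll see below, the cycle then repeats.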
Why Healthcare Insurance Isn’t Really Insurance Anymore
One of the most fundamental problems with the U.S. healthcare system is that it no longer behaves like insurance. In traditional insurance markets—life, auto, homeowners—the concept of risk is central. You pay premiums not because you expect to “get your money’s worth,” but to protect against unlikely but potentially catastrophic events.
With life insurance, we hope never to collect. I hope to stop paying premiums at some point, maybe at 70, 80, or even 90, and live on, never having collected a penny.
With homeowners insurance, a majority of homeowners go their entire lives without ever filing a claim.
With auto insurance, a single claim every few years is typical. Multiple claims can label you a high-risk driver, making coverage more expensive or even inaccessible.
These systems work because they rely on risk stratification and behavioral incentives. People are rewarded for being low-risk, and penalized for being high-risk. This keeps premiums affordable and encourages responsible behavior.
Healthcare insurance, however, has abandoned this model. Especially since the ACA mandated coverage for pre-existing conditions, there is no meaningful risk assessment in the system. People can wait until they’re sick to buy coverage, knowing they cannot be denied. This creates a dynamic where insurance is no longer about protecting against future risk—it’s about paying for known costs.
In effect, healthcare insurance has become a cost-sharing mechanism, not a risk-based product. People calculate whether they'll "get back more than they pay in," and if they're young and healthy, they often opt out entirely. There's little incentive to maintain continuous coverage, and no penalty for waiting until care is needed. Even if you have to wait for the next ACA open enrollment period, you are never more than 365 days away from being able to purchase a good policy. In extreme cases, you can quit your job and qualify right away, only to seek a new job a week later. If you suddenly face hundreds of thousands of dollars in cancer treatments, quitting your $80,000-a-year job isn't such a bad idea if it means you can purchase subsidized coverage through the marketplace. Better yet—get yourself fired and collect unemployment insurance, too.
These realities have broken the economic logic of insurance. Without a balanced risk pool—where healthy individuals subsidize the sick—the system becomes unstable. Costs rise, premiums spike, and insurers either exit the market or restrict coverage in other ways.
Death Spiral: The Risk Pool Worsens Each Year
As medical costs rise, so do premiums for fully insured plans and expenses for self-insured employers. In response, younger, healthier, and lower-wage employees increasingly opt out: some simply can't afford their share, even with employer contributions; others understand they can buy in later, when they need care.
As low-cost individuals exit, the average risk—and cost—of those remaining increases. Despite employers’ efforts to control expenses, costs continue to rise at twice the rate of inflation. The issue isn’t poor cost control—it’s the growing concentration of high-risk individuals.
We’re nearing a tipping point. Carriers that once required 70% employee participation have dropped thresholds to 50% or even lower, reflecting the accelerating collapse of the traditional risk pool.
As costs rise, the risk concentration increases. It’s an unsustainable trend.
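For readers who want to see the mechanism play out, here is a minimal simulation of that spiral. The dropout rule, the `tolerance` parameter, and every number below are assumptions invented for illustration, not a model of any real plan:

```python
# Minimal adverse-selection "death spiral" simulation (illustrative only).
# Each year the premium resets to the remaining pool's average expected cost,
# and anyone whose expected cost falls too far below the premium drops out.

def simulate(expected_costs, tolerance=1.0, years=8):
    premium = sum(expected_costs) / len(expected_costs)
    print(f"Start:  {len(expected_costs):3d} members, premium ${premium:,.0f}")
    for year in range(1, years + 1):
        # A member stays only if the premium is at most `tolerance` times
        # their own expected claims (tolerance > 1 mimics risk aversion).
        expected_costs = [c for c in expected_costs if premium <= c * tolerance]
        if not expected_costs:
            print(f"Year {year}: pool is empty")
            return
        premium = sum(expected_costs) / len(expected_costs)
        print(f"Year {year}: {len(expected_costs):3d} members, premium ${premium:,.0f}")

# 100 members with expected annual costs evenly spread from $500 to $50,000.
members = [500 * (i + 1) for i in range(100)]
simulate(members)
```

With `tolerance=1.0` (no risk aversion at all), this toy pool unravels from 100 members at a $25,250 premium to a single member paying $50,000 within about seven years. Raising `tolerance` slows the unraveling but does not restore the original pool; the premium still settles well above where it started.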
The Ripple Effects of a Riskless System
The absence of risk-based incentives in healthcare insurance doesn’t just distort consumer behavior—it reverberates throughout the entire system.
1. Innovation Gets Skewed Toward Profit, Not Prevention
In a system where insurance is expected to cover everything, innovation tends to focus on billable interventions rather than preventive solutions. Pharmaceutical companies and device manufacturers prioritize treatments that can command high reimbursement rates. Preventive care, lifestyle medicine, and long-term health investments—while often more cost-effective—struggle to gain traction because they don’t fit neatly into the fee-for-service model.
2. Providers Are Incentivized to Maximize Billing
Hospitals and physicians operate in a system where revenue is tied to volume and complexity of services. Without meaningful cost-sharing or risk-based pricing, there’s little incentive to limit unnecessary procedures or optimize efficiency. The result is a system that rewards doing more, not doing better.
3. Insurance Markets Lose Their Economic Logic
Traditional insurance markets rely on underwriting—assessing risk and pricing accordingly. In healthcare, that logic has been legislated out. Insurers can’t charge more for higher-risk individuals, nor can they deny coverage. This forces them to rely on broad subsidies, government support, or aggressive cost controls to remain solvent. It also drives consolidation, as smaller insurers struggle to compete without the ability to manage risk.
4. Consumers Become Passive Participants
When insurance is expected to cover everything, consumers disengage from cost awareness. There’s little incentive to shop for value, ask questions, or consider alternatives. This erodes market discipline and drives prices higher across the board.
A System Designed to Fail Gracefully, Not Succeed Efficiently
The current healthcare system is built to absorb crisis, not prevent it. It’s reactive, not proactive. It’s designed to pay for illness, not invest in health. And because it lacks the core mechanics of insurance—risk, incentive, and accountability—it continues to spiral toward unsustainability.
Why Cost Control Is Mostly a Myth
While some costs are theoretically controllable, the reality is far more complex. Here’s why:
Employee Turnover Undermines Long-Term Investment
The median employee tenure in the U.S. is only about four years. Investing in employee health is a long-term proposition, but many employees won't be around long enough for those investments to pay off. Older, higher-earning employees may stay longer, but they also tend to be more expensive to insure.

Prevention Isn't Always Cost-Saving

Preventive care is ethically and clinically important, but it's not always economically efficient. Early detection of a disease like cancer may reduce initial treatment costs, but it can also lead to years of ongoing care. The economics are murky, even if the clinical benefits are clear.

Better Benefits Attract Higher-Risk Employees

Generous health plans tend to attract employees with greater medical needs. This is especially true in competitive labor markets, where workers with chronic conditions seek out employers with robust coverage.

Better Benefits Also Retain Higher-Risk Employees

Once enrolled, employees with significant health issues are less likely to leave, knowing they may not find comparable coverage elsewhere. This creates a retention bias that increases long-term costs.

Private Employers Pay Far More Than Government Programs

The private sector pays 3–4 times what Medicare and Medicaid pay for the same services. This cost-shifting inflates prices in the employer-sponsored market, driving healthcare inflation at double the rate of general inflation.

Adverse Selection Within the Workforce

As premiums rise, healthier employees opt out of coverage, leaving a sicker, more expensive pool behind. This self-selection accelerates cost increases and undermines the risk pool.
The Illusion of Control
The belief that employers can meaningfully control healthcare costs by adjusting plan design or negotiating better rates is largely an illusion. In self-insured arrangements, insurance carriers profit handsomely—not because they bear risk, but because the employer does. Carriers merely administer the plan and collect fees, while the financial burden of claims falls squarely on the employer.
Each year, costs are shuffled to create the appearance of savings, but the underlying system continues to erode. Worse still, in the self-insured market, rising healthcare prices actually benefit carriers. Their incentives to reduce costs exist only in fully insured products, where they bear the risk. This creates a deep conflict of interest—one that won’t be resolved until self-insured plans disappear. Yet even then, the challenge of covering pre-existing conditions remains unresolved.
Many smaller employers have already opted out of traditional healthcare altogether. Others are turning to ICHRAs (Individual Coverage Health Reimbursement Arrangements), giving employees money to purchase their own coverage. As this trend accelerates, fully insured plans may see a resurgence. But their long-term viability is questionable, as they're required to cover the most expensive pre-existing conditions without adequate compensation for the risk.
Change is coming quickly; indeed, it was already underway. Now, with a government poised to accelerate the shift, the transformation of employer-sponsored healthcare is inevitable—and imminent.
This article is already long enough, so I won't get into solutions here; I'll leave those for future articles. That said, there is one simple thing we can do to get started: STOP USING THE WORD INSURANCE when it comes to healthcare. We do not have a health or medical insurance system. We have a payment system. That would be a good, simple place to begin.
Get ready for a wild ride!