Long before anyone thought of healthcare as an industry, solo practitioners tended to the illnesses and health needs of the people in their communities. Sliding fee scales, reimbursement systems, and malpractice insurance did not exist.
Doctors came to the patient’s home, provided what care and comfort they could, and were often paid whatever the patient or family could afford. Hospitals as we think of them today also did not exist.
The hospitals that did exist were not places to go to be cured and made whole; they were places where only the poorest of the poor went to die. Anyone who had family or money was taken care of in the home.
As the United States entered the 1900s, hospitals began to change, primarily due to the great advances in science that were occurring worldwide. From the late 1800s to the mid-1900s, the introduction of germ theory, the discovery of vitamins and insulin, the introduction of food safety standards and waste management, and the widespread accomplishments of vaccination programs led the way to understanding the disease process and, in turn, to the development of effective prevention measures and treatments for many diseases and illnesses.
This new belief in science provided a firm foundation for what we now think of as the healthcare industry.
Until the time of the Great Depression, the US government saw no role for itself in the personal health and well-being of US citizens. The Great Depression’s unprecedented unemployment, homelessness, and suffering provided the impetus for one of the United States’ first major social welfare programs: Social Security.
At first, Social Security provided no benefits specific to healthcare, but it did set a precedent for the government’s later involvement in caring and providing for the health of US citizens.
Prepaid healthcare, also known as health insurance, truly came into its own after World War II. The US government was surprised by the number of young men who had been unable to enlist in the armed services because of physical and health deficiencies.
These men were predominantly in their late teens to late twenties, a time when they should have been in their prime. The government’s concern over this issue was threefold. First, the inability of so many men to serve in the military was a realistic security concern.
Second was the concern that many men with health and physical problems would not be able to fully participate in, and contribute to, the nation’s workforce, productivity, and long-term economic advancement. Finally, for moral and ethical reasons, perhaps the government did have a responsibility to help ensure US citizens’ health and well-being.
To address these concerns, the federal government created three federal programs. It established the National Institutes of Health to research and find cures for diseases; passed the Hill-Burton Act, which offered low-interest loans to communities to build hospitals and clinics; and assigned health insurance tax-exempt status, which provided that neither employers nor employees would pay tax on the money employers used to purchase health insurance for their employees.
Through the creation and implementation of all three programs, the federal government’s role in them was indirect; it created an environment in which US citizens could live healthier and more productive lives but did not actually provide care directly.
Assignment of tax-exempt status to health insurance in particular did just what the federal government wanted it to do: It created a strong incentive for employers to provide health insurance for their employees and for employees to have that insurance purchased on their behalf with pretax dollars.
The large-scale health insurance industry that developed in the United States reimbursed hospitals and physicians on a fee-for-service, retrospective basis, which is referred to as the usual, customary, and reasonable (UCR) reimbursement system.
The amount the provider receives is decided after the service has been delivered. After providing care to the patient, the hospital or physician submits an itemized bill requesting reimbursement from the patient’s health insurance carrier.
Upon receiving the request for reimbursement, the insurance company determines whether the requested amount falls within the usual range of charges for those services and within the range of reimbursement it customarily remits for them; it then decides whether it is reasonable to pay the provider the amount requested.
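To make the retrospective logic concrete, the following is a minimal Python sketch of a UCR-style check. The service names, area charge history, and percentile cutoff are all hypothetical; this is an illustration of the general idea, not any actual payer’s rules.

```python
# Simplified, hypothetical sketch of a retrospective UCR check.
# The service names, charge history, and percentile cutoff are invented
# for illustration; real payer screens were far more involved.

def ucr_allowed_amount(requested, area_charge_history, percentile=0.9):
    """Pay the requested charge if it falls within the usual and
    customary range; otherwise cap it at the chosen percentile of
    what area providers have historically billed for the service."""
    history = sorted(area_charge_history)
    cutoff_index = min(int(len(history) * percentile), len(history) - 1)
    customary_cap = history[cutoff_index]
    return min(requested, customary_cap)

# Itemized claim submitted by the provider after care was delivered.
claim = {"office visit": 80.0, "lab panel": 45.0, "injection": 40.0}

# Hypothetical history of what providers in the area have charged.
area_charges = {
    "office visit": [60, 65, 70, 75, 80, 85, 90],
    "lab panel":    [30, 35, 40, 40, 45, 50, 55],
    "injection":    [20, 22, 25, 28, 30, 32, 35],
}

reimbursement = {
    service: ucr_allowed_amount(amount, area_charges[service])
    for service, amount in claim.items()
}
print(reimbursement)                 # allowed amount per service
print(sum(reimbursement.values()))   # total remitted to the provider
```

The essential point the sketch captures is that the payment decision happens only after the itemized bill arrives; nothing is fixed in advance.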
Reimbursement requests are unbundled—listed piece by piece and service by service—because it is more profitable for the provider to request reimbursement for each and every small segment of care and service delivered than to count all care rendered as one service (i.e., bundle) and request reimbursement for just that one bundle.
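As a rough arithmetic sketch of that incentive, the figures below (all invented) compare an itemized claim with what a single bundled charge for the same care might have looked like.

```python
# Hypothetical arithmetic (all figures invented) showing why unbundling
# was more profitable under UCR than submitting one bundled charge.

itemized_charges = {           # each segment of care billed separately
    "room and board": 400.0,
    "nursing care":   150.0,
    "supplies":        90.0,
    "medications":     60.0,
}
bundled_charge = 550.0         # what one all-inclusive charge might have been

unbundled_total = sum(itemized_charges.values())
print(unbundled_total)                   # 700.0 requested when unbundled
print(unbundled_total - bundled_charge)  # 150.0 more than the bundled request
```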
Finally, because UCR reimbursement has no preset fee schedule, providers could steadily increase the amounts they requested and thus push the usual and customary reimbursement ranges higher and higher.
By the time Medicare and Medicaid were created in the mid-1960s, the UCR reimbursement system was firmly ingrained, and it became the payment mechanism for these two public programs.
By the 1970s, however, Medicare and Medicaid had grown into hugely expensive programs, and government at both the state and federal levels began to question the wisdom of UCR reimbursement, which had allowed healthcare costs to rise much faster than the pace of inflation. The healthcare industry seemed to be spinning out of control.
Health technology had become a double-edged sword: New technologies were paving the way for hospitals to become science fiction–like miracle factories, but they were being developed and adopted so rapidly that they were often implemented before their true costs and benefits had been thoroughly analyzed.
The number of health professions had exploded, each with its own education system and licensing requirements. Many thought that healthcare had simply become too big and too expensive to sustain.
In the 1972 presidential election, each candidate clearly needed to have a healthcare plank in his platform. The Democrats continued to champion national health insurance.
The Republicans needed a healthcare initiative distinct from national health insurance, and thus they introduced health maintenance organizations (HMOs).
The Republican candidate, Richard Nixon, won the election, and soon thereafter Congress passed the HMO Act, which provided start-up monies for these new organizations.
HMOs and managed care are beautiful in theory. On paper, they effectively reverse all the perverse financial incentives of the UCR system and promote a true health-oriented approach to care.
In practice, however, managed care became rigid, restricting hospital care and reimbursing only for preapproved services from specific HMO providers.
Still, many HMO patients had more favorable health outcomes at a lower cost than did patients with traditional fee-for-service health insurance, and HMOs and managed care evolved and grew in the private health insurance sector.
Many people are unaware of the high level of competition among healthcare facilities, the tight profit margins reported by some facilities and the large profits reported by others, the high level of liability involved, the complexity of reimbursement for patient care, and the high cost of personnel and administration in this part of US healthcare.
Throughout the 1970s, a number of cost control mechanisms were introduced in an attempt to control Medicare costs and spending. None of them was successful at staunching the apparently unlimited need and demand for healthcare services among the nation’s elderly.
Thus, in 1983 Medicare introduced a new payment system for hospitals based on diagnosis-related groups (DRGs), a prospective, fee-schedule approach that reimbursed a predetermined amount for each diagnosis rather than for each service provided.
In a prospective payment system, providers know before the service is rendered the amount the insurance company will reimburse. While this approach may sound reasonable today, in 1983 it was earth-shattering.
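The sketch below, with an invented DRG rate and invented cost and charge figures, contrasts the two payment logics: under the retrospective approach the payment tracks the charges, while under the prospective approach the hospital’s margin depends on keeping its own costs below the preset rate.

```python
# Hypothetical comparison of retrospective (UCR) and prospective (DRG)
# reimbursement for a single inpatient stay. All dollar amounts invented.

def retrospective_payment(itemized_charges):
    """Under UCR, the hospital bills its charges after the fact and,
    if they pass the usual/customary/reasonable screens, is paid them."""
    return sum(itemized_charges)

def prospective_payment(drg_rate, actual_cost):
    """Under a DRG, the payment is fixed in advance by diagnosis;
    the hospital keeps any surplus or absorbs any loss."""
    return drg_rate, drg_rate - actual_cost

charges = [400.0, 150.0, 90.0, 60.0]   # itemized charges for the stay
actual_cost = 620.0                    # what delivering the care actually cost
drg_rate = 650.0                       # preset payment for this diagnosis

print(retrospective_payment(charges))    # 700.0, regardless of cost
payment, margin = prospective_payment(drg_rate, actual_cost)
print(payment, margin)                   # 650.0 30.0: a thin, fixed margin
```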
For many services, hospitals did not know how much it actually cost them to care for patients; they counted on being generously reimbursed by insurance and then shifting some of that over-reimbursement to cover the care they provided free of charge to those unable to pay.
Cost shifting was a practice virtually unknown to anyone who was not a hospital CEO or CFO. It did, however, enable hospitals to meet their broad missions of caring for the poor and the uninsured.
The introduction of DRGs and other prospective reimbursement mechanisms forced providers to become far more financially sophisticated and fiscally responsible.
Medicare reimbursement made up a large share of most hospitals’ and physicians’ revenue. Providers soon found that when reimbursement was strictly limited to a predetermined amount, they could remain viable only if they eliminated waste and allocated resources wisely across every service provided to every customer.
Inevitably, certain services came to be recognized as far more profitable than others, and certain customers as far less profitable than others. For the first time in its history, the healthcare industry needed to think about what it was providing, how it was providing it, and to whom it was providing it.
In this way, the healthcare industry slowly and painfully adopted a competitive, market-oriented approach to seeking and caring for customers.