Hey, I have a right to make a buck
I’m a corporation with rules to protect me
If you get hit by a puck
And once had a scratch on your knee
Then why should I pay for your care?
That pre-condition was there
And I will fight to the death
Or until you take your last breath
To prove you haven’t a claim
Because it is you who’s to blame
The California Nurses Association recently conducted a study to determine the claim rejection rate of major insurers in California. As you may have already heard, the rejection rate averaged 22%. These insurers rejected over 45 million claims, according to numbers that the insurers themselves submitted. While the rejections were not characterized in terms of rationale, the sheer numbers are staggering and point to several issues that underlie insuring healthcare. The insurers claim that many of the rejections are related to paperwork, as though that were something over which they had no control. They were also unable or unwilling to provide numbers to support their defense of how many claims they actually accept.
The claims rejection numbers varied from a high of 39.6% for PacifiCare to a low of 6.5% for Aetna, with Cigna coming in at 33% and Health Net rejecting 30%. Anthem Blue Cross and Kaiser each reported 28% rejections. We know that the claims business is complicated. I personally have three levels of coverage, including Medicare, a paid-for supplement from an insurance company, and Tricare (the military medical insurance). This often results in confusion when my healthcare provider (physician or hospital, for example) has difficulty fitting a claim into the right category and routing it to the proper insurer, even though two of the three are government entities. Providers don't always get it right.
According to my physician son, who is a gifted analyst and has an IQ over 180, the process is obscure and requires a medical administrative expert rather than an automated system or a "fill in the blanks" approach. There are medical procedures that are similar to each other, and yet the reimbursement rate may vary according to the description, so there is no guarantee that a claim will be paid at the rate expected, or paid at all. This alone sometimes results in multiple submissions and rejections. He has also learned that not all insurance companies have the same rules, so the combinations and permutations of filing a correct claim approach infinity as the number of providers and the number of insurers increase. That alone may be a good reason to consider a single-payer system, although single payer is anathema to the "free" market approach. I use the word "free" because the free market is a relative term: one insurer in Alabama (Blue Cross), for example, covers 89% of those insured. There may, in reality, be few choices in the "free" market.
We have outlined a few sources of increasing costs and shrinking coverage for those who happen to be insured. Denial of claims may also lead to denial of coverage for pre-existing conditions. We will consider that separately in the paragraphs ahead. First, let us look at the history of the healthcare insurance system we have cobbled together over the years. It may hold the key to some of our frustration.
Healthcare and health insurance, as we now know them, really began after World War II, but there were precursors of note. Prior to about 1920, hospitals did not deliver the preponderance of healthcare; individual physicians did, and often they delivered care in the patient's home. In 1929, Baylor University began the first medical insurance system, which eventually became Blue Cross, by offering to insure against hospital expenses for a cost of about $1.20 per month. Physicians, concerned that hospitals could take their place in delivering care, later followed up with what eventually became Blue Shield as a way to get paid for their work without depending on hospitals. Only a few years later, the Depression forced everyone to rethink insurance as hospital endowments were lost to the economics of the time and many physicians were going unpaid.
In the 1920s and earlier, medical care was actually less expensive than the loss of wages for being sick or incapacitated. People and corporations became aware that it was cheaper to pay for insurance than to pay for an unstable workforce that might miss work due to injury or illness. That is right: it was CHEAPER to have insurance than to lose wages or to lose the critical skills of a worker. The government was not involved. Businessmen protected their businesses, and individuals protected themselves, through cheap insurance.
After World War II, however, the march of technology was such that the cost of care began to rise to pay for the increasingly complex equipment, procedures, and chemistry involved. Insurance was still cheaper than losing the services of a worker, but the margin began to narrow, and it became commonplace for corporations to contract for medical insurance coverage offered as a benefit to employees. The government remained apart from healthcare and from insuring citizens for care. Other nations developed systems that essentially viewed healthcare as a benefit provided through a tax system, ensuring that providers could be compensated without worrying about endowments and that citizens could receive care regardless of job status. Kaiser was a major government production contractor, but it developed its own healthcare system to ensure a steady workforce; Kaiser Permanente survives today despite the decline of Kaiser Industries. We did not choose the same path as most nations and thereby built up a whole industry out of the accidental circumstances of the 1920s and 1930s.
This served us reasonably well until medical costs spiraled out of control and the understandable reaction of corporations was to reduce or eliminate health insurance. Despite an increasing population, the number of insured declined. With fewer insured and a need to stay economically viable in an environment of rising medical costs, the price of insurance went up, causing even more employers to reduce or eliminate coverage. A cobbled system began to fail and eventually fall apart. If you were unfortunate enough to suffer a major illness or injury, you became a likely candidate for bankruptcy, which, in turn, placed further pressure on individuals and corporations, including insurance companies. Our healthcare system depends on timely and accurate financial support. The system is now broken.
Health insurance companies depend on a regular flow of premiums into their coffers in order to provide the service of insuring people against loss due to some undesirable health condition. That is a given. It is the way of the corporate world. That is capitalism 101, and it is to be expected…or is it?
If we encourage the same business behavior from health insurers as we get from manufacturing or accounting services, for example, we would expect unprotected open-market competition and also internal scrutiny by each company to reduce costs and increase profits. That makes sense, and cents. If I can get cheaper materials or labor and still produce a quality, competitive product or service, then there is no harm to the consumer. In insuring healthcare, however, my profit is affected more directly by denying or delaying care through the claims process, so that my premium collections can outweigh my expenses. This simply is not the same as manufacturing or accounting. It does not work, and yet we cannot assume that an insurance company will become a non-profit entity because it has altruistic management. The management would, and should, be fired if it cannot turn a profit. The incompatibility is inherent. Therefore claim denial becomes increasingly likely as costs rise. If the cost of my raw materials increases and the cost of my labor increases, there simply are not many ways to stay in business unless I price my product higher and reduce my expenses (claims). Pre-existing conditions have become synonymous with claim denial. Unfortunately, the associated logic has more to do with profit than medicine. Exactly how is acne a pre-existing condition for breast cancer, for example? The examples are legion. And they are disturbing. Treating cancer is expensive. Claim denials are cheap.
It is time to develop a system that emphasizes the medical over the fiscal and to think the unthinkable: that perhaps the common good is no longer well served by a system that was cobbled together when medicine was crude and cheap. Medicine is now sophisticated and expensive. Get over it. We can help ourselves either by going to a single-payer system with one set of rules or by injecting competition into the equation through a combination of government oversight, incentives, and penalties, and seeing if that experiment works well enough to save health insurance companies. You decide.
Peace,
George Giacoppe
15 August 2009
Thursday, September 03, 2009
Tortuous Presumptions
The recent release of the “CIA Inspector General’s Special Review of Counterterrorism Detention and Interrogation Activities, September 2001 to October 2003”—detailing once again the appalling torture techniques employed by U.S. interrogators in their attempt to get information from “the worst of the worst”—has been discussed by experts far more qualified than I. One aspect of the report, however, especially as disclosed by former CIA analyst Ray McGovern (“Closing in on the Torturers,” Aug. 26, 2009, consortiumnews.com), struck me forcefully. It concerned the operating assumption among interrogators, in the absence of any evidence, that their Al-Qa’ida captives (called “high value detainees”) must have had crucial information and were refusing to give it up. Here is what the report says:
According to a number of those interviewed for this Review, the Agency’s intelligence on Al-Qa’ida was limited prior to the CTC (Counterterrorist Center) Program. The Agency lacked adequate linguists or subject matter experts and had very little hard knowledge of what particular Al-Qa’ida leaders—who later became detainees—knew. This lack of knowledge led analysts to speculate about what a detainee ‘should know’…When a detainee did not respond to a question posed to him, the assumption at Headquarters was that the detainee was holding back and knew more; consequently, Headquarters recommended resumption of EITs [enhanced interrogation techniques].
McGovern adds one more bit of data from the Review, and then a comment:
Some participants in the Program, particularly field interrogators, judge that CTC assessments to the effect that detainees are withholding information are not always supported by an objective evaluation of available information and the evaluation of the interrogators but are too heavily based, instead, on presumptions of what the individual might or should know.
And then comes McGovern’s comment:
“People were tortured on the basis of ‘presumptions.’ Nice.”
What struck me when I read this was how similar it sounded to the root rationale governing the arrest and detention of American civilians during World War II. The phrase then in vogue among the FBI, military intelligence, and the Alien Enemy Control Division of the Department of Justice was “potentially dangerous.” This was the term used to justify first investigating and then preparing dossiers on thousands and thousands of Americans with roots in the three prospective enemy nations—Japan, Germany, and Italy—even before war broke out. These investigations were undertaken primarily by J. Edgar Hoover’s FBI, starting in 1936 after a meeting the Director had with President Franklin Roosevelt. By 1940, the individuals investigated—many of them targeted by informants—were placed on a Custodial Detention Index. The term “custodial detention” clearly indicates that anyone on the list was automatically a candidate for arrest and detention in the event the United States entered the war, which it did on December 7, 1941. And on that date, and in subsequent months, thousands on the list (some 60,000 domestic arrests were made during the war) were arrested, detained, interrogated about their activities and associations and, if they could not “prove their innocence,” interned at Army-run camps for the duration of the war. Most were so-called “enemy aliens,” immigrants who had been born in Italy, Japan, or Germany and had not yet become U.S. citizens, but many were naturalized U.S. citizens with roots in those now-enemy nations.
It was with respect to the latter that the Department of Justice, under the direction of Attorney General Francis Biddle, looked into the reasoning behind the term “potentially dangerous” in about 1943 and came to some stunning conclusions. It should be noted that both Biddle and his predecessor, Robert Jackson (later elevated to the Supreme Court), had earlier expressed reservations about many such wartime assumptions. Specifically, Jackson had warned about the casual use of the term “subversion” or “subversive activity” with regard to the spying then being done on Americans. Jackson maintained that subversion was a dangerous concept because there were “no definite standards to determine what constitutes a ‘subversive activity’, such as we have for murder or larceny.” The Attorney General expanded on this problem with more examples:
Activities which seem benevolent or helpful to wage earners, persons on relief, or those who are disadvantaged in the struggle for existence may be regarded as “subversive” by those whose property interests might be burdened thereby. Those who are in office are apt to regard as “subversive” the activities of any of those who would bring about a change of administration. Some of our soundest constitutional doctrines were once punished as subversive.
That the Attorney General knew whereof he spoke could have been grimly attested to by one Italian immigrant and “enemy alien” named Federico Dellagatta. Dellagatta had been reported for making suspect statements—“irresponsible talk about the greatness of the Italian people and the Italian army”—while shining shoes in Providence, RI’s Union Station. He was arrested and detained by the FBI, judged no danger to the nation by his hearing board, and recommended for parole. But when his case was reviewed by the DOJ’s Alien Division, the term “subversive activity” came into play, with grim results for the bootblack. Here is what the reviewer said:
“In the opinion of this reviewer, subject’s persistent talk in praising and boasting of the greatness of the Italian people and of the Italian army while employed in a shoe shining shop constitutes downright subversive activity.” [emphasis added, ed.]
Because of his “subversive” talk, therefore, Dellagatta was interned. Francis Biddle, shortly afterward, weighed in on the related danger of sedition statutes, one of which had been quietly included in the Alien Registration Act of 1940. The act made it a criminal offense for anyone to advocate overthrowing the Government of the U.S. or any state, or even to be “a member of an association which teaches, advises or encourages such an overthrow.” For Francis Biddle, then Solicitor General, such sedition statutes were too easily misused, and often conflicted with the bedrock First Amendment right to free speech. As he later wrote in his autobiography, In Brief Authority:
History showed that sedition statutes—laws addressed to what men said—invariably had been used to prevent and punish criticism of the government, particularly in time of war. I believed them to be unnecessary and harmful.
When he became Attorney General, Biddle opposed many of the measures demanded by the military (though, to his everlasting shame, he cooperated in the internment of 110,000 Japanese, including 70,000 U.S. citizens), especially its Individual Exclusion Program aimed at naturalized citizens of German and Italian descent. Biddle actually refused to prosecute several who violated their exclusion orders. His real objections came in 1943, however, when he ordered his department to prepare a report on the Program. After examining and completely invalidating the entire rationale for removing individuals from allegedly vulnerable coastal zones, the report then attacks the concept of “potential dangerousness” as the basis for exclusion (and, by implication, for internment as well). It notes, first, that “the concept of potential dangerousness itself contains the element of possibility.” Saying that someone is “potentially dangerous,” in other words, is equivalent to saying that he “might possibly be a possible threat.” The report then concludes:
Practically, the use of phrases such as this [i.e. ‘potentially dangerous’] suggests that those who use them hold the view that a subject of an exclusion case must be excluded unless it is clear that there is no reason to exclude him. This is analogous to saying that the burden of proof is on the excludee, although the excludee, of course, cannot meet the burden, since he is not advised of the charges against him.
Unfortunately, there were no Robert Jacksons or Francis Biddles in George W. Bush’s Department of Justice, or in his CIA. Where those two WWII Attorneys General understood and, for the most part, respected the law, the Constitutional protections afforded all persons in the United States (such as the right to know what one is charged with), and the presumption of innocence enshrined in English law since the 12th century, Bush’s political appointees did not. Therefore, it seemed perfectly natural to them and their underlings to make “presumptions” about what a detainee could be expected to know, and to torture him if he did not reveal what was expected. Of course, as lawyers, they were adept at coining novel names for such practices, names like Enhanced Interrogation Techniques. They were also adept—Yoo, Addington, Bybee, Gonzales, on up to the President and Vice President—at issuing diabolical directives both to define what torture was (or, more often, was NOT) and to explain why the interrogators who employed it could not be liable for prosecution. As the Review notes:
The OLC [Office of Legal Counsel, where Yoo and Bybee worked] determined that a violation of Section 2340 [of the torture statute, Title 18 of the U.S. Code] requires that the infliction of severe pain be the defendant’s “precise objective.” OLC also concluded that necessity or self-defense might justify interrogation methods that would otherwise violate Section 2340A.
OLC produced another legal opinion on 1 August 2002 at the request of CIA…The opinion concluded that use of EITs on Abu Zubaydah would not violate the torture statute because, among other things, Agency personnel: (1) would not specifically intend to inflict severe pain or suffering, and (2) would not in fact inflict severe pain or suffering.
So there you have it. Interrogators “presume” that a detainee knows more than he’s saying, and on that basis get permission to use “Enhanced Interrogation Techniques” like wall slamming, sleep and food deprivation, and waterboarding. Then, having done this—in Zubaydah’s case, using the waterboard some 83 times—they say that legally ‘We didn’t intend to hurt the little fellow, nor did we even know it hurt or caused any suffering whatever; we only wanted information. The fact that people tend to emerge from these sessions gibbering like idiots may be due to the diabolical training they all get. And besides, the bosses insisted.’
Though torturing suspects based on a “presumption” of what they know is different from interning them, or excluding them from vast areas because of their “potential danger,” the entire policy forms a continuum that turns on the same idea. That idea seems to be that, regardless of the law, one can never take too many precautions, or be too squeamish about methods, when confronting what one presumes to be a “potentially dangerous” or “potentially knowledgeable” population.
Lawrence DiStasi