Deny, Defend, Depose

Did I hallucinate reading that the CEO guy who was killed recently was in charge of a company that used AI to routinely deny critical health care coverage to people who subsequently died because they didn't get the surgeries and other care they needed to live? :huh:

I don't think hallucinations are the issue. What may be out of order is trusting some journalist or businessman who misrepresents a couple of formulas as "life-controlling AI".

I wouldn't call a program designed to deny patients care based on a combination of metrics an "Artificial Intelligence". Hence I am skeptical that an AI entity has gained a measure of control over humans.

It's a good old classical natural intelligence doing its dark thing.
 
Did I hallucinate reading that the CEO guy who was killed recently was in charge of a company that used AI to routinely deny critical health care coverage to people who subsequently died because they didn't get the surgeries and other care they needed to live? :huh:

It's important to understand that this is not AI controlling people's lives; this is health insurance executives implementing a chatbot that tells them to deny claims, and then they deny claims and say the AI told them to do it. The main use of AI in these contexts is not to "control lives" but, essentially, to launder decisions that are being made by humans. It is similar to Israel using a chatbot that they program to tell them to kill Palestinians, and then they go and kill Palestinians.
 
It's not a human personally reviewing each and every file and stamping "DENIED" on them, right? They programmed a machine to do that, from what I read.

Even something like the automated answering services that most businesses have now is enough to drive me into a fit of rage. I can't see how it benefits anyone to make the customer so angry by the time they get to talk to a human (if they get to talk to a human) that they have trouble calmly articulating the problem and mustering the patience to deal with the excuses the customer service agent is mandated to give.

It's important to understand that this is not AI controlling people's lives; this is health insurance executives implementing a chatbot that tells them to deny claims, and then they deny claims and say the AI told them to do it. The main use of AI in these contexts is not to "control lives" but, essentially, to launder decisions that are being made by humans. It is similar to Israel using a chatbot that they program to tell them to kill Palestinians, and then they go and kill Palestinians.

You're using semantics to arrive at the same result. It's not a human that directly denied the claims. And if this results in the claimant's death through denial of critical services, how is that not controlling the claimant's life? You can't reason with a machine.
 
It's not a human that directly denied the claims.

Automated programs designed to deny insurance claims were in use decades before the modern take on AI came along in 2022-23.

The fact that a human automated his routine doesn't magically make that human any less responsible.
 
You're using semantics to arrive at the same result. It's not a human that directly denied the claims. And if this results in the claimant's death through denial of critical services, how is that not controlling the claimant's life? You can't reason with a machine.
I think that's the idea. They slough it off on... well, you could call it a Terminator 💀
 
Automated programs designed to deny insurance claims were in use decades before the modern take on AI came along in 2022-23.
Dealing with automated voices over the phone has been making people want to kill themselves for at least a couple of decades now.

"Your concerns are very important to us, please wait & someone will be with your shortly" certainly has been the last thing some people have heard :(
 
You're using semantics to arrive at the same result. It's not a human that directly denied the claims. And if this results in the claimant's death through denial of critical services, how is that not controlling the claimant's life? You can't reason with a machine.

It's an important point, not semantics, because we don't want to lose sight that the problem here is not the chatbots as such, it is the humans programming and using them.
 
They hit the number of denials they aim for, with such a tool ostensibly providing cover. If it doesn't deliver that number, it'll be tweaked.

Removes Mr. Incredible from the equation.
 
Dealing with automated voices over the phone has been making people want to kill themselves for at least a couple of decades now.

"Your concerns are very important to us, please wait & someone will be with your shortly" certainly has been the last thing some people have heard :(

And it should be pointed out that this is entirely intentional: there are people sitting in a corporate office saying, "Well, if we make the phone tree x amount more complicated, then y% of people will just give up rather than wait on hold for 3 hours, and this improves our margins."
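
Purely as a back-of-envelope illustration of that calculus (a minimal Python sketch; every number below is invented):

# Sketch of the cynical phone-tree math described above.
# All figures are made up for illustration.

calls_per_month = 100_000
cost_per_handled_call = 6.00  # dollars of agent time per call that reaches a human

def monthly_support_cost(giveup_rate: float) -> float:
    """Cost of only the calls that survive the phone tree."""
    return calls_per_month * (1 - giveup_rate) * cost_per_handled_call

simple_tree = monthly_support_cost(giveup_rate=0.10)   # hypothetical give-up rate
hostile_tree = monthly_support_cost(giveup_rate=0.35)  # hypothetical give-up rate

print(f"simple tree:  ${simple_tree:,.0f}")                 # $540,000
print(f"hostile tree: ${hostile_tree:,.0f}")                # $390,000
print(f"'savings':    ${simple_tree - hostile_tree:,.0f}")  # $150,000

The customers who gave up don't show up anywhere in that spreadsheet, which is rather the point.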
 
And it should be pointed out that this is entirely intentional: there are people sitting in a corporate office saying, "Well, if we make the phone tree x amount more complicated, then y% of people will just give up rather than wait on hold for 3 hours, and this improves our margins."
Of course. Like your avatar would say, it's all mathematics.
 
(image attachment)


Nice photo.
 
Did I hallucinate reading that the CEO guy who was killed recently was in charge of a company that used AI to routinely deny critical health care coverage to people who subsequently died because they didn't get the surgeries and other care they needed to live? :huh:

If workers deviate more than 1% from the AI, they are disciplined or fired.


When patients or their doctors have requested to see nH Predict's reports, UnitedHealth has denied their requests, telling them the information is proprietary, according to the lawsuit. And, when prescribing physicians disagree with UnitedHealth's determination of how much post-acute care their patients need, their judgments are overridden.

:undecide:

The lawsuit argues that UnitedHealth should have been well aware of the "blatant inaccuracy" of nH Predict's estimates based on its error rate. Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims. This makes it obvious that the algorithm is wrongly denying coverage, it argues.

But, instead of changing course, over the last two years, NaviHealth employees have been told to hew closer and closer to the algorithm's predictions. In 2022, case managers were told to keep patients' stays in nursing homes to within 3 percent of the days projected by the algorithm, according to documents obtained by Stat. In 2023, the target was narrowed to 1 percent.

And these aren't just recommendations for NaviHealth case managers—they're requirements. Case managers who fall outside the length-of-stay target face discipline or firing. Lynch, for instance, told Stat she was fired for not making the length-of-stay target, as well as falling behind on filing documentation for her daily caseloads.

Ultimately, case managers do not decide on coverage or denials—those decisions fall to NaviHealth's physician medical reviewers. But, those physicians are advised by the case managers, who are held to the 1 percent target.

:dubious:
...
And case managers are specifically trained to defend the algorithm's estimate to patients and their care providers. One training document obtained by Stat discussed the blunt tactics case managers were told to take when patients and caregivers pushed back on denials. It stated:

  • If a nursing home balked at discharging a patient with a feeding tube, case managers should point out that the tube needed to provide "26 percent of daily calorie requirements" to be considered as a skilled service under Medicare coverage rules.
  • If a nurse took a broader tack, and argued a patient was unsafe to leave, case managers were instructed to counter, in part, that the algorithm's projections about a patient's care needs, and readiness for discharge, are based on a "severity-adjusted" comparison to similar patients around the country. "Why would this patient be any different?" the document asks.

No winning

Even for the patients who appeal their AI-backed denials and succeed at getting them overturned, the win is short-lived—UnitedHealth will send new denials soon after, sometimes within days.

A former unnamed case manager told Stat that a supervisor directed her to immediately restart a case review process for any patient who won an appeal. "And 99.9 percent of the time, we're going to turn right back around and issue another [denial]," the former case manager said. "Well, you won, but OK, what'd that get you? Three or four days? You’re going to get another [denial] on your next review, because they want you out."

:cringe:

The whole article (from late 2023) is basically a try-not-to-die-from-anger test. :mad:
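
To make the 1 percent target quoted above concrete, here is a minimal sketch (Python; only the 3% and 1% tolerances come from the article, the 20-day projection is a made-up example) of how narrow that compliance band really is:

# Sketch of the length-of-stay band implied by the targets in the
# article: within 3% of projected days in 2022, within 1% in 2023.

def allowed_band(projected_days: float, tolerance: float) -> tuple[float, float]:
    """(min, max) stay that keeps a case manager inside the target."""
    return (projected_days * (1 - tolerance), projected_days * (1 + tolerance))

projected = 20.0  # hypothetical algorithm projection, in days

for year, tol in [("2022", 0.03), ("2023", 0.01)]:
    lo, hi = allowed_band(projected, tol)
    print(f"{year}: {projected:.0f}-day projection -> {lo:.1f} to {hi:.1f} days allowed")

# 2022: 20-day projection -> 19.4 to 20.6 days allowed
# 2023: 20-day projection -> 19.8 to 20.2 days allowed

At 1 percent, the human in the loop is allowed less than a quarter of a day of average disagreement with the algorithm on a 20-day projection, which is to say: no real disagreement at all.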
 
How is this even legal?
Private, for profit health insurance companies are all about shareholder value, not healthy subscribers.
 
Private, for profit health insurance companies are all about shareholder value, not healthy subscribers.
Yeah, well, that explains why they attempt this; it doesn't explain how it's not punished harshly by law.
 
Yeah, well, that explains why they attempt this; it doesn't explain how it's not punished harshly by law.
Because the for-profit health insurance companies and their mates write the laws. The people keep electing people like Trump rather than people like Bernie, and that keeps it that way.
 
Automated programs designed to deny insurance claims were in use decades before the modern take on AI came along in 2022-23.

The fact that a human automated his routine doesn't magically make that human any less responsible.
I didn't say it did.

I think that's the idea. They slough it off on... well, you could call it a Terminator 💀
I guess that's how the dead guy was able to sleep at night. Wonder how his family felt about it, or if they were of the same sociopathic mindset.

Dealing with automated voices over the phone has been making people want to kill themselves for at least a couple of decades now.

"Your concerns are very important to us, please wait & someone will be with your shortly" certainly has been the last thing some people have heard :(
I've actually told the human customer service agent that the automated system makes me so angry and frustrated that I want to reach into the phone and rip its electronic head off. Some of them have laughed, and I said, "I'm not joking. Please pass this feedback up the line: it's counterproductive to good customer service to make people angry before we ever get to talk to a human."

And it should be pointed out that this is entirely intentional: there are people sitting in a corporate office saying, "Well, if we make the phone tree x amount more complicated, then y% of people will just give up rather than wait on hold for 3 hours, and this improves our margins."
When you're on hold, there's a constant barrage of "use the website to fix your problem."

Well, there are times when the reason I'm calling is that the website doesn't work. So how am I supposed to use a website I can't access?

Some agents have actually said, "Don't you have a friend whose computer you could use?"

That's so not the point.

If workers deviate more than 1% from the AI, they are disciplined or fired.
...
The whole article (from late 2023) is basically a try-not-to-die-from-anger test. :mad:
And they wonder why people are starting to remember their history.

It's like that here. I have never so profoundly hated a group of people as I hate the current provincial government we have.
 
I don't need to know the ins and outs of their policies to believe that cold-bloodedly shooting someone in the street is morally reprehensible. Even if he's guilty of what you claim, there's this thing called "due process".
Are you smelling coffee or smelling your own farts?

I think you're one personal tragedy away from being a rabid supporter of violence yourself. Not everyone is born with empathy, but when things happen to them, people often change tack.
 
In a just, or let us say 50% just, society the CEO wouldn't have dared do what he did. The tragedy, or whatever, is that his death contributes nothing to any solution that would have provided improved health coverage to Americans. As everybody follows American capitalism, that's also our future in the rest of the world. The killer is an agent of the Elon Musks of this world; he is not brave or anything, and he will be as comfy as Epstein was, before whatever happened to him. As one of the billions he has acted against, I have NO obligation to like Luigi whatever.
 