bradreaves2 4 hours ago [-]
I figured this was “CEO said a thing” journalism [1], but buried in the last paragraph is a real scorcher:
> “Undeniable proof that confidently uninformed hospital administrators are a danger to patients: easily duped by AI companies that are nowhere near capable of providing patient care,” [Radiologist Dr.] Suhail told Radiology Business. “Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive. But in some sense, they’re correct: Hospitals are happy to cut costs even if it means patient harm, as long as it’s legal.”
Well, let's not forget the conflict of interest on the other side as well: someone who has invested decades of professional experience into a very lucrative field that AI is already obliterating in some narrow areas.
Getting rid of radiologists is as much nonsense and saber rattling as suggesting using AI would harm patients.
The answer is clearly just the same as in software development or any other AI impacted field: Let the best professionals handle 10x+ the volume. What that means for all the rest of employees is the question of the century though...
teeklp 3 hours ago [-]
> Getting rid of radiologists is as much nonsense and saber rattling as suggesting using AI would harm patients.
Did a chatbot tell you that? What makes you think it is so?
compounding_it 4 hours ago [-]
If hospitals are so concerned about cutting costs, getting sued is probably worse. However, they are all insured against malpractice. I would be careful about insurers who could default if they face too many malpractice claims.
Cthulhu_ 3 hours ago [-]
Isn't it also in the insurer's best interest that the hospitals do good work? They'd be another force against hospitals using AI to diagnose or misdiagnose people.
Of course, given that these are legal cases, it would take years for any consequences to be turned into actions.
ricardobayes 4 hours ago [-]
To be frank I'm more concerned about non-litigious countries here, as the potential downsides of rolling out "AI radiologists" are much lower there. Some of those countries have multi-month or even year-long waitlists for specialist consultations, so it might be even more tempting at the healthcare-management level.
_dark_matter_ 3 hours ago [-]
For folks with long wait times, maybe the advantage of "immediate access to AI radiologist" beats out "wait for human radiologist"? Would be interesting to weigh those harms against each other.
palmotea 1 hours ago [-]
> For folks with long wait times, maybe the advantage of "immediate access to AI radiologist" beats out "wait for human radiologist"? Would be interesting to weigh those harms against each other.
The harm of getting surgery to remove tissue due to a false positive seems pretty big.
freejazz 3 hours ago [-]
>If hospitals are so concerned about cutting costs, getting sued is probably worse.
That hasn't stopped them any other time they cut costs. Have you ever spoken to a nurse who works in a hospital?
thefz 48 minutes ago [-]
Some hospitals having a CEO is an aberration
luma 4 hours ago [-]
Brother-in-law graduated med school in the early 90s and has been a practicing ER physician since. We discussed this recently and he related that his advisors told him not to go into radiology back in the late 80s because the assumption was that computers were going to take over the field. He's not too far away from retirement and it's only now that we're starting to see some signs of this prediction from 30+ years ago.
As others in the thread note, there are plenty of concerns around operational use of AI solutions in the medical space, but radiology has a much larger target painted on it than other practices as a fair portion of the job (but certainly not all!) can boil down to high-skill pattern recognition from visual inputs. The current list of AI-enabled devices going through FDA approval is public, more than 3/4 of the list are targeting radiology use cases: https://www.fda.gov/medical-devices/software-medical-device-...
storus 4 hours ago [-]
The issue with radiologists is that on average they are able to spot ~35% of correct diagnoses, while the world's best radiologists manage ~45%. AI might get us to ~50%, which is ~15 percentage points better than an average radiologist (who still needs to review it).
orwin 3 hours ago [-]
Maybe "radiologist" means something different in my country, but here radiologists don't diagnose (except for a broken bone or the like); oncologists do. I did an observation internship with a radiologist when I was 20 (95% of my family are doctors/nurses/PTs; I wanted to know what a degree in physics could let me do in the field, and radiology was the only path to medicine from my initial training where I only lost one year, not two). You spend your time calculating doses, finding patient history, and calibrating machines; it's much more a technician role than an MD's. In any case, even if US radiologists do diagnose cancer, that's such a small part of their job it shouldn't matter.
Betelbuddy 3 hours ago [-]
And you are going to provide the references that will sustain this opinion, so we can elevate it to a fact...
jatora 2 hours ago [-]
It's fine to ask for sources. It's also fine not to give sources when relaying information in freeform comments. It's not fine to ask for sources in the tone you are using, though, as though you are annoyed and simply expect sources to always be included with claims. There are better ways of accomplishing your goals.
Betelbuddy 2 hours ago [-]
Someone drops very specific percentages about diagnostic accuracy, numbers that, if true, have serious implications for patient outcomes, and your concern is that I did not ask nicely enough for a source? I could not think of a more HN-typical response.
I did not even call the claim false, even if it almost deserves it. I said, essentially: let's see the references so we can treat this as fact rather than opinion.
What you did is write a longer and more prescriptive comment about my tone than anything anyone has written about the actual substance :-)). You tone-policed a one-line request for evidence while giving a complete pass to unsourced medical statistics presented as fact.
If we are ranking things that erode discourse quality, I would say you are higher on the list.
jatora 1 hours ago [-]
My tone attacked a passive aggressive, entitled, and lazy comment. Calm down and just learn a better way to approach things. Other commenters skeptical of the claim approached it in a much more mature manner.
Betelbuddy 1 hours ago [-]
Three comments in... and you still have not said a single word about whether radiologists actually catch 35% of diagnoses. But you have found time to call me passive aggressive, entitled, lazy, and immature. For one sentence. Asking for a source...
You are now, multiple comments deep, doing the thing you accuse me of...being more invested in tone than substance.
The irony is genuinely impressive at this point.
storus 47 minutes ago [-]
If you look at early stage diseases it's probably even way less than 35%...
Forgeties79 51 minutes ago [-]
> Calm down
You had a point until you did that.
czbond 3 hours ago [-]
^ Knowing this, I would believe the best course of action for a hospital administrator would be to implement a "blind workflow" to reduce risk & lawsuits.
A radiologist should separately review a scan, an AI separately review it, and then combine the 2 results for review.
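A minimal sketch of what such a blind double-read rule could look like; the function and the escalation policy are illustrative assumptions, not a description of any real product:

```python
def blind_double_read(radiologist_flags: bool, ai_flags: bool) -> str:
    """Combine two independent reads of the same scan.

    Each reader reports True if they flag a suspicious finding,
    without seeing the other reader's result first ("blind").
    """
    if radiologist_flags and ai_flags:
        return "concordant positive"   # proceed to follow-up imaging/biopsy
    if not radiologist_flags and not ai_flags:
        return "concordant negative"   # routine recall interval
    # Disagreement is escalated rather than silently resolved,
    # which is where the risk-reduction argument comes from.
    return "discordant: escalate to a second human reader"
```

The point of keeping the reads blind is that the combined sensitivity can exceed either reader's alone only if their errors are at least partly independent.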
camdenreslink 3 hours ago [-]
I have seen very conflicting data on this. You shouldn’t state it so confidently.
hattar 3 hours ago [-]
I assume the numbers are made up as an example.
I worry that rational takes like this end up completely lost in the battle between motivated parties who yell far louder, but have minimal investment in actual outcomes for those who will be depending on these technologies. The debate over self-driving vehicles is another example.
yread 3 hours ago [-]
Persuade someone to run a prospective trial and show the outcomes. Everything else is bullshit
Forgeties79 3 hours ago [-]
Where are you getting these numbers? Even a cursory search doesn’t put the numbers anywhere near such poor performance by real people.
AI at 50% would be notably worse (also where are you getting that number?)
storus 3 hours ago [-]
From radiologist AI training datasets, evaluated long-term/post-mortem.
nchmy 3 hours ago [-]
Sauce or gtfo
Forgeties79 3 hours ago [-]
I hate to be “source?” about it but your numbers are so far off what every search result is showing.
storus 3 hours ago [-]
I am not saying those are for all diagnoses, but for some tricky yet important ones (i.e. detecting them early might save your life).
Forgeties79 2 hours ago [-]
You did not give specificity of any kind until now, and now I’m even more curious where these numbers are coming from.
storus 33 minutes ago [-]
Some data (average radiologist score):
Early-Stage Lung Cancer (via Chest X-ray) 33.3%
Clinical Staging of Stage I Pancreatic Cancer (via CT, MRI, EUS) 21.6%
Breast Cancer (via Mammography in Dense Tissue) 30%
Cuneiform fractures (foot, X-Ray) 0%
Midfoot fractures (general, X-Ray) 12.5%
Cuboid fractures (X-Ray) 14.29%
Navicular fractures (X-Ray) 22.22%
Talus fractures (X-Ray) 21.43%
Individual radiologists often scored 5% in those as well. The skill distribution is brutal.
beej71 3 hours ago [-]
"There is almost certainly cancer there."
Are you sure?
"You're right to push back. Upon reinspection, it appears to be something else."
voidUpdate 4 hours ago [-]
> and is “actually better than human beings,” he told the audience.
> “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
What's the false negative rate for human beings? And what about women that are considered high risk? Is it better or worse?
malfist 3 hours ago [-]
I'm also suspicious of that 3 out of 10k times. Did they compare an AI examination against a human 10,000 times in novel scenarios? Or did they run it against some data set that's probably in the training data? Or did they run it against some synthetic dataset that is not a good representation of the real world?
Or did they run 5 tests, find zero inaccuracies, and extrapolate to 10,000, but figure 0 mistakes was too unbelievable and would give away the game?
Did they test the X-ray reading only on uncomplicated cases, like young healthy people with no deformities? Or did they test it on complex cases too, maybe cases where there are multiple issues and some should be ignored, like the elderly or people with differently shaped bodies?
Also, what is "wrong" here? Is it a false negative, or a false positive? Is it a misdiagnosis? There's levels of wrongness, especially in the medical field.
voidUpdate 3 hours ago [-]
"wrong" is a false negative. It says that if the test came back negative, it was wrong 3 in 10,000 times, which means there was actually cancer that it didn't find
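To make that distinction concrete: "wrong 3 times out of 10,000" as read here is P(cancer | negative test), which is not the same as the fraction of all cancers the test misses. A toy back-of-the-envelope sketch, where the prevalence and the negative-test fraction are assumed numbers, not figures from the article:

```python
# "Wrong 3 in 10,000 negatives" is P(cancer | negative test), i.e. 1 - NPV.
# Turning that into "what fraction of cancers get missed?" needs base
# rates; both rates below are illustrative assumptions.
screened = 1_000_000
prevalence = 0.005            # assumed: 0.5% of screened women have cancer
negative_fraction = 0.95      # assumed: 95% of screening tests come back negative

cancers = screened * prevalence               # cancers in the cohort
negatives = screened * negative_fraction      # negative reports issued
missed = negatives * (3 / 10_000)             # cancers hiding in the negatives
miss_rate_per_cancer = missed / cancers       # fraction of all cancers missed

print(f"missed: {missed:.0f} of {cancers:.0f} cancers "
      f"({miss_rate_per_cancer:.1%})")
```

Under these assumed numbers, a 3-in-10,000 error rate on negatives still corresponds to several percent of all cancers being missed, which is why the base rate matters so much when judging the headline figure.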
cbg0 4 hours ago [-]
> Sandra Scott, MD, CEO of the One Brooklyn Health, a small hospital facing tight margins, agreed with this line of thinking, according to Crain’s.
Does this CEO of a small hospital realize that their hospital will take the legal responsibility if there's no doctor to sue for malpractice?
catapart 4 hours ago [-]
Speaking of which... when people talk about "replacing" humans with AI, it makes me wonder if there's some kind of law we can push for that says "if you are part of the chain of command that signs off on AI being able to make final determinations, and that causes legal issues, you will be legally liable in place of the AI, since computers cannot be liable." Let a jury decide who, in the chain, bears what burden, case by case, but provide for prima facie liability for all parties in the chain, when a valid suit is tried. I want to see how strong the push is for AI when it's the CEO's personal money on the line.
stvltvs 2 hours ago [-]
The chain of responsibility must include the AI vendor. If vendors aren't liable for malpractice, there will be less incentive for all due diligence when lives are on the line.
ricardobayes 3 hours ago [-]
Honestly yes, you are 100% right that it should be a responsibility thing. I remember back in the day it was said that self-driving car companies would have legal responsibility in case of an accident. I remember that kind of put a damper on the rollout and also took a lot of hype and focus away from the whole industry.
emceestork 4 hours ago [-]
Hospitals already usually pay for malpractice insurance on behalf of the physicians.
devilbunny 3 hours ago [-]
They do, but it’s the physician who is personally liable, not the hospital. It’s just another form of compensation.
My wife and I are both physicians. Our house doesn’t belong to either of us, strictly; it belongs to our marriage. You have to have a legal claim against both of us to put it in jeopardy.
guzfip 4 hours ago [-]
I’m sure there’s a golden parachute somewhere around to save her.
dist-epoch 4 hours ago [-]
This works in the other direction too: a human misses a cancer that 10 out of 10 radiology models say is there with 99% confidence. That hospital will lose in court for negligence.
cbg0 3 hours ago [-]
Is this the norm in US courts, evaluating a human's performance against LLMs?
freejazz 3 hours ago [-]
If it is a regular practice of such doctors to use such tools, and that doctor did not, then it is malpractice. That is how malpractice works. You have to fall below the standard of care in a way that proximately caused the damages.
mcphage 3 hours ago [-]
> a human mises a cancer, that 10 out of 10 radiology models say it's there with 99% confidence
I think the cases where judgments differ, whether between humans, AIs, or both, will be the difficult-to-discern cases, where no human and no LLM will have 99% confidence.
squidhunter 4 hours ago [-]
When can we start replacing CEOs with AI?
lapcat 3 hours ago [-]
Do you think this would help?
The CEO is an employee of the board of directors and the stockholders. An AI CEO would no doubt be as ruthless as a human CEO, if not more so. In other words, I wouldn't anticipate any improvement in CEO behavior.
squidhunter 3 hours ago [-]
If I were going to reduce labor costs by 1M+/year, I would rather eliminate 1 CEO than 10 radiologists. I would much rather have 1 unemployed CEO in society than 10 unemployed radiologists. At the very least, "AI" should replace through attrition rather than direct layoffs...
lapcat 3 hours ago [-]
> If I was going to reduce labor costs by +1M/year, I would rather eliminate 1 CEO then 10 radiologists.
This is a false dichotomy. Why not both?
I think it's a bit strange to hope or assume that an AI CEO would somehow preserve human jobs.
stvltvs 2 hours ago [-]
Missing the point. If CEOs realize that they're more replaceable by AI than nurses and medical assistants, for example, then maybe they'll take a more nuanced view of the technology.
lapcat 1 hours ago [-]
No, you're missing the point, because the views of the people to be laid off are irrelevant. Again, the stockholders own the company, not the CEO. If CEOs start changing their tune on AI as soon as their own jobs are at stake, that would just demonstrate to the stockholders that human CEOs are untrustworthy and need to be replaced.
Before AI came along, CEOs were already arbitrarily laying off workers, to please the stockholders. The stockholders like these cost-cutting measures, and whether the measures make sense is secondary to the CEOs doing what their bosses want. If the stockholders believe that they can cut the CEOs too, they surely will.
rickydroll 4 hours ago [-]
I don't think AI is ready to replace CEOs, but it would make a good assistant for an H-1B CEO.
parliament32 3 hours ago [-]
If anyone is replaceable by AI, executives are first in line. Make "decisions" based on expert input, give presentations, sit in meetings and on calls. No liability, no concrete "work product" to speak of, so why not?
saintfire 4 hours ago [-]
Not sure AI is ready to replace anyone but that doesn't seem to be the road block.
Molitor5901 3 hours ago [-]
Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI Westchester uses misses very few breast cancers and is “actually better than human beings,” he told the audience.
“For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
Sounds like 3 misses out of 10,000 is an acceptable level of risk for this CEO. It would be interesting to put radiologists up against AI to see which gets better results, but I would still rather have a human read my chart and then have AI give the second opinion, rather than the other way around.
WarmWash 3 hours ago [-]
I stand to be corrected, but last time this cropped up about a year ago, there was a pretty severe mismatch in the use of the word "AI".
The NYT ran a story about "AI taking over radiology", where they talked to radiologists at the Mayo clinic (who have an AI research lab), who flatly told NYT that no - AI will not be replacing radiologists, the AI is not good enough.
Here is the rub though: the "AI Lab" was doing research using local CNNs with ~30M parameters. Basically 2017 consumer-GPU-tier AI tech.
I don't know yet if there has been a modern transformer of datacenter scale that has been explicitly pre-trained for medicine/radiology, along with extensive medical/radiology RLHF.
elephanlemon 4 hours ago [-]
>amid rising demand for imaging
Okay so demand for imaging is up, so we should GET RID of the radiologists? How about we AUGMENT them with AI so that they can do their job better and faster? Why does it need to be either or?
storus 4 hours ago [-]
Currently they are augmenting them with Indian radiologists and just signing off on whatever they find.
givemeethekeys 3 hours ago [-]
AI = Actual Indian.
seesthruya 3 hours ago [-]
This is illegal in the USA.
storus 4 hours ago [-]
How about we start replacing, with AI, all the companies that are replacing humans with AI? Since they decided to participate in the economy one-way (take the money, give nothing back), we can make sure that one-way trend is done away with rapidly. The cost of running a company will approach zero in the future. We now have massively profitable companies making record layoffs; something doesn't compute.
georgeecollins 4 hours ago [-]
It would be interesting to start a co-op or non-profit run by AI for the benefit of the employees and customers. If it worked it would have a huge competitive advantage. I guess the question is where would the capital come from, but as a co-op the employees could buy in and just take the profits as a distribution.
Thinking about this some more: US tax laws really favor income from investment over income from wages. So ideally a co-op member would put something in to join, get a wage, and have an appreciating asset in a tax advantaged account.
storus 3 hours ago [-]
Something like that. I'll try to do it as a side project next as I have some spare compute and ran 99% automated e-commerce companies before.
roody15 4 hours ago [-]
This should not be surprising. The CEO's objective is likely to increase profits, so that will be his/her primary focus. Even if the technology is not ready for prime time, just making announcements like this likely helps increase negotiating pressure on radiologist group contracts and salaries.
seesthruya 4 hours ago [-]
Here we go again. There's something about radiology that makes it the perfect bait for nerd sniping. I guess it's probably the misunderstanding that it is exclusively pattern recognition.
Here are my opinions, after a 20 year career as a diagnostic radiologist, and 45 years as a hobbyist computer programmer
1. There are no products currently on the market that can replace a radiologist.
2. If you can't fully and completely replace radiologists, you will still need them around in significant numbers.
3. Because of the infinite variation in human anatomy, physiology, and pathology, it is my opinion that AGI will be required to fully and completely replace radiologists.
4. Once AI is strong enough to replace radiologists, it will be strong enough to replace every other job as well.
5. Based on current RVU compensation models, any cost savings achieved by hospitals replacing radiologists with AI will quickly be lost by reimbursements being adjusted down. There is no way an insurance company will pay the same for an AI interpretation and a human interpretation.
6. There are significant unanswered medicolegal questions that will need to be addressed before AI can operate unsupervised.
In conclusion, I will work as a human radiologist until I retire in 10 years
malfist 3 hours ago [-]
To support your point, my dentist office did a trial using AI to read the xrays they take of your jaw and teeth. According to the AI reading my xray, I needed every single filling in my teeth replaced because they were all showing signs of leaking.
The dentist reviewed it and told me that there's just too much variation in how places do fillings, and in the different densities of the filling versus the replaced tooth material, for the AI to make good judgments. He didn't think any of my fillings needed replacing and said I likely have many more years before they fail.
Betelbuddy 3 hours ago [-]
Its hysterically funny you are being downvoted...
malfist 3 hours ago [-]
Sometimes people can't tolerate hearing other's lived experiences.
coldtea 3 hours ago [-]
Anything to please the stockholders. It's not like patient's best interests mattered much to them before AI either.
beardyw 3 hours ago [-]
"We could replace a great deal of radiologists with AI at this moment"
Perhaps they cost a great number of money?
amluto 3 hours ago [-]
I find the whole field of radiology to be utterly baffling. There are doctors who specialize in, and hopefully understand, specific diseases and/or parts of the body. But we have radiologists who are supposed to be able to look at images, taken by quite a variety of technologies and parameters, of any part of the body, and are expected to accurately interpret the findings, possibly without any relevant context.
In my personal experience interacting with the medical system, it’s, unsurprisingly, quite common for an actual specialist to look at the same images a radiologist looked at and see something quite different. And it’s nearly always the case that a specialist, or a reasonably careful non-specialist willing to read a bit of the literature or even ask a chatbot [0], will figure out that at least half of what the radiologist says is utterly irrelevant.
So I think that the degree to which ML can perform as well as a radiologist is not necessarily a great measurement for ML’s ability to assist with medical care.
[0] Carefully. Mindlessly asking a chatbot will give complete nonsense.
rich_sasha 2 hours ago [-]
I'd take it further/slightly parallel direction. Medicine is at the same time a science and a weird "feel and experience" area.
On the one hand it's a science: controlled experiments, calculated dosages, all based on an understanding of low level biology, fancy imaging methods, measuring currents in people's bodies and so on.
On the other hand, there seems to be plenty of "he seems fine to me", "tests came back fine but something seems off to me so let's try another test", "doesn't seem to be responding to this drug, let's try the other one", "in my experience this drug works better than that one". It seems like a pretty big chunk of subjectivity is actually a part of the field.
amluto 2 hours ago [-]
> On the one hand it's a science: controlled experiments
Those experiments are so hilariously expensive these days, and the results are often not actually fully published, so good data is often unavailable.
> calculated dosages
Often calculated based, in large part, on researchers’ vibes and their vibes when designing experiments.
> all based on an understanding of low level biology
There are many, many drugs with partially or even almost fully unknown mechanisms.
seesthruya 3 hours ago [-]
Radiologists work best in consultation with the physicians ordering the studies. Sadly, this is less and less common as workloads increase in medicine. When I started 20 years ago there were whole teams that came through the radiology department every morning to review all of the cases on their patients. Now I go weeks without seeing another physician.
devilbunny 3 hours ago [-]
Irrelevant to them. A radiologist is on the hook for missing a tiny possible tumor in a scan for a blood clot.
They like to show off occasionally. We had a rectal foreign body that was described as a Phillips-head screwdriver. I was hoping to catch them out by noticing it was Pozidriv, but it was in fact a Phillips.
Shank 4 hours ago [-]
> “For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
I mean, given the choice, I think I would prefer to have both a human radiologist review AND an AI review. 3/10,000 sounds like a very good rate, but a false negative on a cancer diagnosis is life-threatening, no?
jon-wood 4 hours ago [-]
"The AI is wrong only 3:10,000 times" is a statement screaming out for the follow up question "how often are the humans wrong". Maybe 3:10,000 is astonishingly good, maybe humans are 10x or 100x better, right now I have no real way of knowing short of a literature review in a field I know nothing about.
zamadatix 4 hours ago [-]
At a certain point the false positives start creating more harm than trying to further reduce the false negatives (which is, perhaps counterintuitively, eventually true for even the most serious of risks). Whether that's the case here depends on a lot of information not in the article.
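That trade-off can be made concrete with a toy screening cohort. Every number below (prevalence, sensitivity, specificity) is an illustrative assumption, not a figure from the article:

```python
# Toy screening cohort: at low prevalence, even decent specificity
# produces far more false alarms than missed cancers.
population = 100_000
prevalence = 0.005        # assumed: 0.5% of those screened have cancer
sensitivity = 0.94        # assumed: fraction of cancers the test catches
specificity = 0.90        # assumed: fraction of healthy cases cleared

cancers = population * prevalence                          # true cancer cases
missed = cancers * (1 - sensitivity)                       # false negatives
false_alarms = (population - cancers) * (1 - specificity)  # false positives

print(f"missed cancers: {missed:.0f}")
print(f"false alarms:   {false_alarms:.0f}")
```

Under these assumptions the false alarms outnumber the missed cancers by orders of magnitude, and pushing the miss count further down by flagging more aggressively inflates the false-alarm count, each of which carries its own harm (unnecessary biopsies or surgery, as noted upthread).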
jacknews 4 hours ago [-]
Surely they could offer a cheaper 'unregulated, no guarantee' AI interpretation with a confidence rating, and an optional follow-up 'are you sure?' expert assessment at full price.
OTOH they're probably planning to charge full price anyway, but massively reduce costs, because, profit.
mothballed 4 hours ago [-]
You can already cash pay to have imaging in my state, without any prescription, then send the images off to some voodoo witch in the Congo if you want. Seems to be the way to do it, just have the hospital do imaging and then the patient does with the image whatever they want. Then the hospital has no liability except in the case they did not image it correctly.
mrtksn 4 hours ago [-]
Why is AI able to do everything except CEO and social-media hype work? Why do engineers and doctors still need CEOs to do their jobs?
From the votes I see that this is an unpopular opinion, but apparently there are close to 400 million companies in the world, of which 60K are publicly traded.
I am sure there's enough data to train a top-notch CEO on this, since they are required to keep records all the time and give speeches for a living.
Surely privately owned companies where the CEO is also the owner wouldn't like it, but replacing the CEO with an AI in institutions with professional CEOs seems overdue. The radiologist AI certainly will be much better served by an AI CEO.
Betelbuddy 3 hours ago [-]
I am pretty sure current AI is not capable of replacing radiologists, but I am pretty sure it is already good enough to replace 90% of current CEOs. I have worked with multiple CEOs...
SV_BubbleTime 4 hours ago [-]
With a little extra irony, I’m honestly certain our HR dept could easily be replaced with AI to far better effect. They would surely disagree.
mrtksn 4 hours ago [-]
The job description should be a sufficient prompt to replace HR; add some RAG and skill files based on a few months of in-company chat-tool data and paperwork, and I don't see why there's still HR around. The AI HR can choose to hire entertainers etc. for some tasks, but why keep HR on the payroll all the time?
SV_BubbleTime 3 hours ago [-]
> I don't see why there's still HR around
Main reasons…
1. HR doesn’t work with you. They work for your CEO or Board. Consider them a toxic entity if you ever have a real problem.
2. HR is a socially accepted jobs program for people without any discernible skills, beyond basic data entry and organization. Effectively no one else wants to do it. The issue is that with point one, these people are told they are important and it immediately goes to their heads.
k2xl 4 hours ago [-]
Didn't we just hear predictions about this from Geoffrey a few years ago that turned out to be false? I could have sworn I heard Jensen talk about how the inverse has happened?
Don't we have more radiologists than we did five years ago?
GerryAdamsSF 4 hours ago [-]
He is blatantly and obviously lying likely to boost stock prices. Radiologists do physical procedures too.
OutOfHere 3 hours ago [-]
Interventional radiologists do procedures, but most radiologists are not interventional. If their jobs are on the line, I guess they will have to be.
GerryAdamsSF 2 hours ago [-]
[dead]
RA_Fisher 4 hours ago [-]
That’s good; reducing healthcare costs will increase access and boost our health.
Agree that AI should replace CEOs. They’re often biased in unhelpful ways that AI isn’t and it costs people wellbeing.
[1] https://karlbode.com/ceo-said-a-thing-journalism/
Getting rid of radiologists is as much nonsense and saber rattling as suggesting using AI would harm patients.
The answer is clearly just the same as in software development or any other AI impacted field: Let the best professionals handle 10x+ the volume. What that means for all the rest of employees is the question of the century though...
Did a chatbot tell you that? What makes you think it is so?
Of course, given that these are legal cases, it would take years for any consequences to be turned into actions.
The harm of getting surgery to get tissue removed due to a false positive seems a pretty big.
That hasn't stopped them any other time they cut costs. Have you ever spoken to a nurse who works in a hospital?
As others in the thread note, there are plenty of concerns around operational use of AI solutions in the medical space, but radiology has a much larger target painted on it than other practices as a fair portion of the job (but certainly not all!) can boil down to high-skill pattern recognition from visual inputs. The current list of AI-enabled devices going through FDA approval is public, more than 3/4 of the list are targeting radiology use cases: https://www.fda.gov/medical-devices/software-medical-device-...
I did not even call the claim false, even if it almost deserve it...I said, essentially ...let's see the references so we can treat this as fact rather than opinion.
What you did is write a longer and more prescriptive comment about my tone than anything anyone has written about the actual substance :-)). You tone policed a one line request for evidence while giving a complete pass to unsourced medical statistics presented as fact.
If we are ranking things that erode discourse quality, I would say you are higher on the list.
You are now, multiple comments deep, doing the thing you accuse me of...being more invested in tone than substance.
The irony is genuinely impressive at this point.
You had a point until you did that.
A radiologist should review a scan independently, an AI should review it separately, and then the two results should be combined for final review.
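That double-read idea can be sketched as a simple triage rule; everything below (labels, the "adjudicate" step) is an illustrative assumption, not any real product's workflow:

```python
# Sketch: independent double-read, combining a radiologist's and an AI's
# findings and flagging disagreements for a second human review.

def combine_reads(human_positive: bool, ai_positive: bool) -> str:
    """Return the next step for a single scan given two independent reads."""
    if human_positive and ai_positive:
        return "report_positive"   # both agree: proceed to work-up
    if not human_positive and not ai_positive:
        return "report_negative"   # both agree: routine follow-up
    return "adjudicate"            # disagreement: second radiologist reviews

# Disagreements are the interesting cases: they surface exactly the
# scans where either reader alone might have erred.
reads = [(True, True), (False, False), (True, False), (False, True)]
outcomes = [combine_reads(h, a) for h, a in reads]
# outcomes == ["report_positive", "report_negative", "adjudicate", "adjudicate"]
```

The point of the rule is that neither reader can silently override the other; every disagreement costs a human look, which is where the safety comes from.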
I worry that rational takes like this end up completely lost in the battle between motivated parties who yell far louder, but have minimal investment in actual outcomes for those who will be depending on these technologies. The debate over self-driving vehicles is another example.
AI at 50% would be notably worse (also where are you getting that number?)
Early-stage lung cancer (chest X-ray): 33.3%
Clinical staging of stage I pancreatic cancer (CT, MRI, EUS): 21.6%
Breast cancer (mammography in dense tissue): 30%
Cuneiform fractures (foot X-ray): 0%
Midfoot fractures (general, X-ray): 12.5%
Cuboid fractures (X-ray): 14.29%
Navicular fractures (X-ray): 22.22%
Talus fractures (X-ray): 21.43%
Individual radiologists often scored 5% in those as well. The skill distribution is brutal.
Are you sure?
"You're right to push back. Upon reinspection, it appears to be something else."
“For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
What's the false negative rate for human beings? And what about women that are considered high risk? Is it better or worse?
Or did they run 5 tests, find zero inaccuracies, and extrapolate to 10,000, but thought 0 mistakes was too unbelievable and would give away the game?
Did they test the X-ray on only uncomplicated cases, like young healthy people with no deformities? Or did they test it on complex cases too, maybe cases where there are multiple issues and some should be ignored, like the elderly or people with differently shaped bodies?
Also, what is "wrong" here? Is it a false negative, or a false positive? Is it a misdiagnosis? There's levels of wrongness, especially in the medical field.
Does this CEO of a small hospital realize that their hospital will take the legal responsibility if there's no doctor to sue for malpractice?
My wife and I are both physicians. Our house doesn’t belong to either of us, strictly; it belongs to our marriage. You have to have a legal claim against both of us to put it in jeopardy.
I think the cases where judgements differ, whether between humans, AIs, or both, will be the difficult-to-discern cases, where no human and no LLM will have 99% confidence.
The CEO is an employee of the board of directors and the stockholders. An AI CEO would no doubt be as ruthless as a human CEO, if not more so. In other words, I wouldn't anticipate any improvement in CEO behavior.
This is a false dichotomy. Why not both?
I think it's a bit strange to hope or assume that an AI CEO would somehow preserve human jobs.
Before AI came along, CEOs were already arbitrarily laying off workers, to please the stockholders. The stockholders like these cost-cutting measures, and whether the measures make sense is secondary to the CEOs doing what their bosses want. If the stockholders believe that they can cut the CEOs too, they surely will.
“For women who aren’t considered high risk, if the test comes back negative, it’s wrong only about 3 times out of 10,000,” Lubarsky said.
Sounds like 3 wrongs are an acceptable level of risk for this CEO. It would be interesting to put radiologists up against AI to see which have better results, but I would still rather a human read my chart and then have AI give the second opinion, rather than the other way around.
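Worth noting how "3 wrongs" compounds at screening scale; with assumed round numbers (roughly the order of US annual screening mammography volume, not sourced figures):

```python
# Sketch: what "3 per 10,000 negatives" means at population scale.
# Both volume figures below are illustrative assumptions:
annual_screens = 40_000_000    # rough order of screening mammograms per year
negative_fraction = 0.90       # most screens come back negative
wrong_rate = 3 / 10_000        # the quoted false-reassurance rate

missed_per_year = annual_screens * negative_fraction * wrong_rate
print(round(missed_per_year))  # 10800 falsely reassured patients a year
```

A per-test rate that sounds negligible still turns into five figures of missed cases annually, which is why the ordering of human and AI reads is worth arguing about.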
The NYT ran a story about "AI taking over radiology", where they talked to radiologists at the Mayo clinic (who have an AI research lab), who flatly told NYT that no - AI will not be replacing radiologists, the AI is not good enough.
Here is the rub, though: the "AI lab" was doing research using local CNNs with ~30M parameters. Basically 2017 consumer-GPU-tier AI tech.
I don't know yet whether there has been a modern datacenter-scale transformer explicitly pre-trained for medicine/radiology, along with extensive medical/radiology RLHF.
Okay so demand for imaging is up, so we should GET RID of the radiologists? How about we AUGMENT them with AI so that they can do their job better and faster? Why does it need to be either or?
Thinking about this some more: US tax laws really favor income from investment over income from wages. So ideally a co-op member would put something in to join, get a wage, and have an appreciating asset in a tax advantaged account.
Here are my opinions, after a 20-year career as a diagnostic radiologist and 45 years as a hobbyist computer programmer:
1. There are no products currently on the market that can replace a radiologist.
2. If you can't fully and completely replace radiologists, you will still need them around in significant numbers.
3. Because of the infinite variation in human anatomy, physiology, and pathology, it is my opinion that AGI will be required to fully and completely replace radiologists.
4. Once AI is strong enough to replace radiologists, it will be strong enough to replace every other job as well.
5. Based on current RVU compensation models, any cost savings achieved by hospitals replacing radiologists with AI will quickly be lost by reimbursements being adjusted down. There is no way an insurance company will pay the same for an AI interpretation and a human interpretation.
6. There are significant unanswered medicolegal questions that will need to be addressed before AI can operate unsupervised.
In conclusion, I will work as a human radiologist until I retire in 10 years.
The dentist reviewed it and told me that there's just too much variation in how different practices place fillings, and in the densities of the filling versus the replaced tooth material, for the AI to make good judgements. He didn't think any of my fillings would need replacing; I likely have many more years before they fail.
Perhaps they cost a great deal of money?
In my personal experience interacting with the medical system, it's, unsurprisingly, quite common for an actual specialist to look at the same images a radiologist looked at and see something quite different. And it's nearly always the case that a specialist, or a reasonably careful non-specialist who is willing to read a bit of the literature or even ask a chatbot [0], will figure out that at least half of what the radiologist says is utterly irrelevant.
So I think that the degree to which ML can perform as well as a radiologist is not necessarily a great measurement for ML’s ability to assist with medical care.
[0] Carefully. Mindlessly asking a chatbot will give complete nonsense.
On the one hand it's a science: controlled experiments, calculated dosages, all based on an understanding of low level biology, fancy imaging methods, measuring currents in people's bodies and so on.
On the other hand, there seems to be plenty of "he seems fine to me", "tests came back fine but something seems off to me so let's try another test", "doesn't seem to be responding to this drug, let's try the other one", "in my experience this drug works better than that one". It seems like a pretty big chunk of subjectivity is actually a part of the field.
Those experiments are so hilariously expensive these days, and the results are often not actually fully published, so good data is often unavailable.
> calculated dosages
Often calculated based, in large part, on researchers’ vibes, both when designing the experiments and when interpreting them.
> all based on an understanding of low level biology
There are many, many drugs with partially or even almost fully unknown mechanisms.
They like to show off occasionally. We had a rectal foreign body that was described as a Phillips-head screwdriver. I was hoping to catch them out by noticing it was Pozidriv, but it was in fact a Phillips.
I mean, if I were a choosing person and I could choose to have a human radiologist review AND an AI review I think I would prefer that. 3/10,000 sounds like a very good rate but a false negative on a cancer diagnosis is life threatening, no?
OTOH they're probably planning to charge full price anyway, but massively reduce costs, because, profit.
From the votes I see that this is an unpopular opinion, but apparently there are close to 400 million companies in the world; of those, about 60K are publicly traded.
I am sure there's enough data to train a top-notch CEO model on this, since CEOs are required to keep records all the time and give speeches for a living.
Surely privately owned companies where the CEO is also the owner wouldn't like it but replacing the CEO with an AI in institutions with professional CEOs seems overdue. The radiologist AI certainly will be much better served by AI CEO.
Main reasons…
1. HR doesn’t work with you. They work for your CEO or Board. Consider them a toxic entity if you ever have a real problem.
2. HR is a socially accepted jobs program for people without any discernible skills, beyond basic data entry and organization. Effectively no one else wants to do it. The issue is that with point one, these people are told they are important and it immediately goes to their heads.
Don't we have more radiologists than we did five years ago?
Agree that AI should replace CEOs. They’re often biased in unhelpful ways that AI isn’t and it costs people wellbeing.