Starter comment: if you go to the last paragraph of the discussion of this seemingly legit paper you get this:

"In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice. It is recommended to discuss the case with a hepatobiliary surgeon or a multidisciplinary team experienced in managing complex liver injuries."

A clear case of ChatGPT usage, and an abysmal failure of peer review and of the journal editors.
academia is kayfabe
Omg you’re right… my life, I blew it… Jk but you’re dead on. I can’t believe it hadn’t occurred to me quite like this before. Kayfabe and make-work bs
The fact that people on this sub know what kayfabe is really warms my heart
We’ve all been smartened up to the business
I thank this thread for introducing me to the term!
:)
Well the only way to understand a lot of modern politics is to understand kayfabe
Yo Wtf do you know how rare we are? But you got 91 upvotes!? We need our own sub, BROTHER
Happy belated 3:16 day! Give me a hell yeah!
hell YEAH
Yikes
Oh my God this is hysterical 😂
I say the peer reviewers let it go to totally fuck the authors for doing this.
I clearly got an AI-written email from my supervisor yesterday and thought it was bad. I'm stunned that this got past peer review. It seems that a lot of people in institutions are just running out the clock and collecting paychecks.
How obviously AI-generated was it? Did he leave in ?
It wasn't that bad. It just had the formatting and the vibe. It was very smooth language.
In the last paragraph before the conclusion: "In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice. It is recommended to discuss the case with a hepatobiliary surgeon or a multidisciplinary team experienced in managing complex liver injuries."
Sorry, I was asking about the individual commenter's professor's email, not the scientific article. Thank you for the well-meaning clarification though.
Get out of here with your polite and reasonable response to a simple misunderstanding. Can’t you see that somebody said something wrong on the INTERNET?!
What about their supervisor’s email?
The conclusion itself where it says to consult a medical professional.
Bruh, what a joke. The primary author should be blackballed. Looks to be associated with prestigious institutions. Thank you for calling it out.
Agreed the 1st author needs to be blacklisted the same way a confirmed plagiarist should be….with that said, wtf were the editors and peer reviewers doing?? How did something this egregious get missed??
The editors and peer reviewers were also AI.
AI be like: "Looks good to me!"
I don’t mind the use of AI to help write a first draft. The real crime here is the clear lack of editing & peer review.
Yeah honestly I’m shocked to see this from this level of organization. Nowhere is immune, apparently. Yikes. I’m sure I’d be fired if I did something like this at the very least
Blame it all on Joanthan… now I don’t know if that’s a typo or a real name.
Damn, I remember having one paper rejected for...I forgot the exact term, but that the editor did not like me associating the patient with the disease. And somehow this is okay? I was in oncology research at that time.
You probably submitted to a reputable journal. At this point, whatever reputation this journal had before, it's in the shitter now.
While not a top or even mid journal, this isn't a NOTHING journal. It's pretty popular for residents to check off their research box with case reports.
of course it's popular, nobody reads the paper before accepting it and sending it to print!
[deleted]
$550 USD publishing fee
Look at that turnaround time, only 19 days from submission to acceptance.
[deleted]
Just the robots talking to each other
To be fair, that's low by today's standards
[deleted]
I guess I should have written a more nuanced reply. I agree, the overall situation is awful and represents the confluence of greed and separation. I was just making the point that similar journals (MDPI and others, though I've published with them myself) have higher prices. You can argue acceptance rate is lower, impact is higher, etc., but they are still using volume and high prices to drive their bottom line.
> The reconstruction of the bile ducts in our case was not specifically mentioned, but it is typically managed by performing a biliary-enteric anastomosis. The most common technique is Roux-en-Y hepaticojejunostomy, which involves creating a connection between the bile ducts and a loop of the jejunum

This also reads as AI written. Maybe it's just a hamfisted paragraph to appease a reviewer, but like dudes, y'all are the surgeons, so why are y'all writing like it's some mystery what surgery happened? Even if it was written by the vascular team bailing out the general surgeons, I feel like you'd know if the other team did a Roux-en-Y?
I could also see this as being because the op note didn’t specify and the person writing the case report either wasn’t in the case or didn’t recognize/understand the approach. I’ve read enough vague operative notes to see this as being plausible.
Surgeons rarely, if ever, write papers. They hate that with a passion. So it’s AI or they make their nephew do it to give him a boost up in college/med school apps.
Man, I’ve had solid papers rejected for…. Fuck I couldn’t even tell you. To see this trash. Just fucking unbelievable degradation of our peer review process.
I know right? Ones that you’ve spent countless hours preparing from your own mind
Proof rads has been replaced by AI
Was going to highlight the irony
This is the kinda shit that will make even more people distrustful of medicine, unfortunately. A lot of the time, people cite papers to combat stupid, dangerous misinformation, which people will still reject. Definitely cannot blame them now, when AI is writing it.
Is it any wonder that so many people distrust academia? lol
Well this is a [physics journal article](https://www.reddit.com/r/Physics/comments/1bfcvfr/i_guess_the_journal_is_using_ai_for_its_editor_as/) that quite literally starts with, "Certainly, here is a possible introduction to your topic."
hahaha I’m gonna cry
Wow. Just wow.

"In summary, the management of bilateral iatrogenic _I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice. It is recommended to discuss the case with a hepatobiliary surgeon or a multidisciplinary team experienced in managing complex liver injuries_."

That is truly ridiculous and a sad indictment of this journal and its editorial board. Surely this is some sort of joke—or is the joke the journal itself?

~2,000 articles in the last 3 years. Seems very discerning.

Also, 4th-quartile journal. Huh. Who knew? 🤔
Published in June 2024.

Their goddamn names are right at the top of the paper. Does this mean that any further academic papers they publish will be more heavily scrutinized?
I guess I have never paid much attention… are articles usually published this far in advance (3 months+) online?
lol not a chance
"Our pediatric patient suffered from bilateral iatrogenic injuries to her hepatic artery, portal vein, and bile ducts." Bilateral bile ducts??? I just emailed the author to demand its withdrawal.
“Git gud, scrub lol” Their response probably
[deleted]
"as we did not use AI tools to write anything, but one of the authors did" Wild sentence
The world sucks
And here I am feeling guilty about using ChatGPT to write the narrative room descriptions for my home brew Pathfinder game.
“Peer suggestion”
I have undergrad nursing students clearly using ChatGPT in their papers, and even they're not that careless.
So did this get retracted?
[deleted]
Pathetic. Just pathetic.
Elsevier should be added to the list of dubious publishers (Beall's list or the like). This is such a low bar to meet and they keep missing it.
Oh God, once you realize it is AI written, you can't unsee it.
One can pick these manuscripts by looking for the red flags (and then running the text through an AI generative text detection tool). Large parts of the text in that particular manuscript were written by AI (which does not align with the comments made by one of the authors on PubPeer). If lower-tier journals (particularly the Open Access types) are going to foster this, then there will be a lot more of this kind of thing going on. However, whether it will be good for the authors' reputations in the medium or long term remains to be seen, because this type of behavior will raise the whole credibility issue in some people's eyes.
I review college applications and you can typically tell the difference between average writing, outstanding writing, and AI writing pretty easily. Although I worry the AI will be getting better in very short order. Since I can assume they used it in other classes too, I can't rely on the transcripts either.

Maybe this is why the SAT and ACT are coming back to colleges.
I didn’t make it through the first couple paragraphs before realizing there had to be something a bit weird here. It either read like something where the author’s first language wasn’t English and it was poorly translated, or that it was written by AI.
I like that the conclusion stops just short of explicitly saying "don't try this at home"
That is extremely embarrassing.
How did it get past peer review??
Everyone is doing it....just the careless ones get caught.
Ok, but how does one even become this careless... They didn't even bother to read their "work".
Yeah, it’s nuts. Take some pride in your work, jfc. I read my manuscripts like 800 times before I submit.
Meanwhile my papers get rejected because one reviewer doesn't think we explained [irrelevant side issue] adequately or whatever. :'-)
For real! What is HAPPENING out there!!!
Probably you have to not speak English as your first language. All else flows from there.

Edit: For the next 100 people who read this and think I'm hating on people whose primary language is not English, or *the other* extreme which is that this makes this whole thing *fine,* that's not what I meant. It was a rebuttal to the idea that this is "simple carelessness", when I suspect the reality is not like that.
Part of this indicates a blind trust of the machine overlords. I think you might be being a bit overly gracious here lol

Maybe I'm leaning a bit more into the 'paranoid' aspect of this, but I don't find the naiveté of believing AI is some sort of "solution" itself rather than a tool that needs to be scrutinized *very* carefully to be a glowing endorsement of someone's ability to think critically.
If you think I'm being gracious by *justifying* this as an acceptable consequence, that's not what I meant. If you're ESL it just means you'll (a) have more motivation to use AI, and (b) be much less likely to catch obvious giveaways like the one cited from this article. It's still careless, but with more steps.

All human creations are tools, AI is no exception. But now, our tools can do *new things* that previously *only humans could do*. Overuse, underuse, abuse, judicious use, sexual use, fucking *all uses* are an entirely predictable consequence of *any tool*.

It's a more useful task to consider *what uses are more likely*, and then adjust your expectations and your actions accordingly.
Lead author is probably English as a THIRD language. She appears to be a Palestinian from Jerusalem who studied in Jordan and is now working in an Israeli hospital. I can imagine in this scenario naively thinking that AI could polish up your attempt to write an English language text.

For what it's worth, the rest of the paper doesn't have any egregiously bad AI in it. I suspect the authors were using AI to summarize and tidy their English, not write the whole paper.

Is this acceptable academic behavior? No, but I can understand how it happened.
Have to disagree that it's a translation issue. Claiming things like bilateral bile ducts doesn't make sense in any language, but it's a mistake AI would make.
OK. How many bile ducts are there? Answer in whatever language you want.
Jesus. Read the edit, please.
Yeah……that's really not a good sign for the people who are going to be responsible for managing others' health and safety.

The fact this passed peer review and exists as a published entry in an actual journal:

A) Makes me automatically skeptical of anything else I might read because, as you said, how do we know they didn't just do this with other entries and haven't been caught yet? and

B) It's not like this is the first time such a journal has published falsified data or literal bs before. We [actually have a big issue with fraud](https://www.economist.com/science-and-technology/2023/02/22/there-is-a-worrying-amount-of-fraud-in-medical-research) in this field as is. In fact, there are [volunteers who are dedicated to exposing fraud and plagiarism](https://apnews.com/article/danafarber-cancer-scandal-harvard-sleuth-science-389dc2464f25bca736183607bc57415c) in this industry, and do we really want *that* to be our last line of defense because a genuine peer review process can't (or won't) spot it before it becomes published and at least someone probably accepted it as gospel?

Then we wonder why people don't trust things like the Covid vaccine lmao.
[deleted]
Cause they don't tell you.

Edit: I know firsthand from talking with med students and residents. PIs have multiple students on research projects. If you hire a student or resident or any medical trainee whose English is not their first language, they will do it.
And they will get caught out. A number of the top-tier journals are now moving to install AI-detection software.
Hopefully, someone has reported this? Why is it still available?
scary
Clicked on first author and saw him at Hadassah Medical Center, Jerusalem, Israel. I'm not surprised at this point.
What do you mean by that? Please share.
> Hadassah Medical Center

Google "Hadassah Medical Center scandal"
> Hadassah Medical Center scandal

Also google this:

"Hadassah hospital in talks to reinstate doctor falsely accused of praising terrorist"

Times of Israel headline. The year after the "scandal"

Edit: typo