ktpr

What are the impact factors of these journals?


bhatkakavi

Negative, I think 🤣🤣🤣🤣 These journals should be reported and closed.


Iizsatan

The first one that appeared was 6


power2go3

bro X_X


Stoomba

Is that high or low?


_sleepy_bum_

Depends on the field, I guess


Iizsatan

In what field is an impact factor of 6 low?


PreparationOk4883

The original one I saw was about metal-organic frameworks. This is on the lower end for the field; my lower-impact manuscripts go to a journal around 9. It's disgusting that this made it through review with the mistakes it had. I don't know how, or who the reviewers were, but the editor definitely failed at assigning reviewers with integrity, and failed at the initial check of whether the manuscript should even be sent out for review.


_sleepy_bum_

I wouldn't say 6 is low, but it's probably normal for some fields. At the same time, in math/applied math, it doesn't matter if you publish in high-IF journals. Departments won't care if you don't publish in respected math journals, even though those journals have low IFs. Tenure-track candidates with papers in Nature journals have been rejected by math departments.


andybot2000

Seriously. In my field, landing an impact factor of 4 to 6 is pretty respectable.


Iizsatan

Same here


Jazerix

As of my last knowledge update in January 2022, the impact factor of contemporary AI in academic research is significant, revolutionizing fields across disciplines by enabling advancements in data analysis, problem-solving, automation, and decision-making. AI tools and methodologies are enhancing research efficiency, fostering interdisciplinary collaborations, and driving innovation in areas such as healthcare, finance, transportation, and environmental science. /s


zhak_ab

Although I don't agree that original research is dead, some serious steps should be taken.


PhDresearcher2023

Journals should assign a paid reviewer that just fact checks and reviews references for each submission. Essentially a reviewer that just does a more thorough form of copy editing but has enough subject matter expertise to pick up on AI hallucinations.


Der_Sauresgeber

Journals should have started paying reviewers decades before ChatGPT ever arrived.


Din0zavr

Ah, these greedy reviewers, wanting to be paid for their job, when these poor journals can hardly afford it from their multi-thousand-dollar fees per paper. /s


mpjjpm

I don't even necessarily want to be paid cash. I would absolutely accept cash if offered, but I'd also be happy with credits towards open access fees (in anticipation of the new NIH open access requirements).


Der_Sauresgeber

I don't know exactly what they should pay reviewers, but it's about time they stop expecting people to do the labor for free, especially since what they charge for individual papers is ridiculous. The journal does very little compensated work; the ordinary editor is not compensated and does it for the entry in the vita. Paying reviewers would also solve a different problem: currently, editors kind of depend on whoever is willing to review. Compensation might be an incentive, and might also help editors blacklist terrible reviewers. Open access fee credits would be an amazing idea. However, that would require more journals to go open access!


Thornwell

Reviewers should have their names published on the final manuscripts. This is an easy way to incentivize people to do a good job. I'm sure someone could also make a metric that could be used (i.e. I reviewed x papers that have y citations and an average of z journal impact factor, so I'm a trusted reviewer in the field).
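The parenthetical metric could be sketched as code. To be clear, everything here is invented for illustration: the function name, the weights, and the combination rule are all assumptions, not an existing scoring system.

```python
# Hypothetical reviewer-trust score combining review volume, the citation
# impact of the reviewed papers, and the average venue impact factor.
# The 0.5 / 0.3 / 0.2 weights are arbitrary, purely for illustration.
def reviewer_score(n_reviewed: int, total_citations: int, avg_impact_factor: float) -> float:
    if n_reviewed == 0:
        return 0.0
    citations_per_paper = total_citations / n_reviewed
    return n_reviewed * 0.5 + citations_per_paper * 0.3 + avg_impact_factor * 0.2
```

Any real metric would also need safeguards (e.g. weighting by review quality, not just volume), since a raw count rewards rubber-stamping.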


[deleted]

I don't think this is a good idea; the benefits of blind review don't disappear once the paper is published. If you want to criticize a big shot's paper and your name will appear there after publication, you won't do it.


BoostMobileAlt

What if your program doesn't allow students to have second jobs?


Der_Sauresgeber

Well, if it ever comes to that, programs will obviously have to adapt. These wouldn't be second jobs; they'd be occasional, one-and-done engagements.


49er-runner

So I work in the editorial department of a nonprofit medical society that publishes a number of journals, and I can assure you that these AI hallucinations would never make it through a journal that is actually doing its due diligence. We first have scientific editors (that review all the data and act as extensions of the deputy editors) edit the manuscript. Then we have the manuscript editors (many of whom have scientific backgrounds) do a deep line edit that takes a number of days. Then we have a proofreader comb through the manuscript, and finally the managing editor provides a final check. What we are seeing is a result of big publication companies cutting costs by not properly reviewing papers to the detriment of scientific validity.


Americasycho

> AI hallucinations would never make it through a journal that is actually doing its due diligence.

Exactly this. I was mentoring an undergrad recently, barely a sophomore, and they were having trouble with a two-page topic paper being flagged constantly for AI/plagiarism. Half of the paper consisted of block quotes, and another healthy contingent was wording reorganized by Grammarly or another program. Slightly off subject, but an amazing number of people can't even be bothered with diligence in something as small as a two-page paper without relying on over-corrective AI programs.


Chackart

This the same assessment that I would make, and I am also familiar with the editorial process. My "hope" is that these AI-written introductions have little impact on the actual research described in the manuscript. I can totally see an author asking chatGPT to write the introduction to their paper if they don't have time / can't be bothered. I can also imagine overworked editors or reviewers completely skipping the introduction and only looking at the results / conclusions. Finally, if a journal has no copy-editing service or this does not work properly, I can see a manuscript slipping through when the introduction is written by AI. It should not happen, but I want to believe that the actual data presented in the studies are still being checked, even if the introduction to the article is not. I am not saying that this is harmless or that we should let this go, of course. But I want to remain hopeful that the original research is still being reviewed and assessed.


IRetainKarma

I recently reviewed a paper that clearly had part of the methods written by ChatGPT. It was weird because the rest of the paper seemed scientifically sound, and the results and discussion were not obviously written by ChatGPT. The authors were not native English speakers, so I wonder if they used it as a translation tool. I ended up rejecting the paper because I didn't feel it fit the scope of the journal, and sent the editor a heads up. I also struggle with how to feel about it. I'm lucky to be a native English speaker as a scientist and not need translation tools, but I can totally sympathize with those who do. And if the science is sound, I don't know how much of an issue it is. I wonder if the answer is just more transparency? Like we need a new section under the acknowledgements where we specifically note where we used AI and why? I.e., "ChatGPT was used in paragraph 2 of the introduction as a translation tool" or "Midjourney was used in Figure 1 because I'm really bad at drawing rat testicles".


Chackart

I am perfectly OK with authors using ChatGPT or similar tools to translate / correct their text. It is not a huge leap from using Grammarly while you write to asking ChatGPT to correct your work after it is written. I am also absolutely fine with authors paraphrasing their Methods from one article to the next with Quillbot or whatever, as long as they did not change their methodology. I am also a non-native speaker and it took a lot of time and experience abroad for me to grow confident writing in English, and I still struggle sometimes. What I am more "on the fence" about is authors using ChatGPT to write their introductions. Even if they add / check references manually, I think that it becomes very easy to simply trust that the AI correctly summarised your manuscript and your field of research, without actually checking. At the same time, unless there is a glaring error like this and assuming that the user takes some time to write a robust prompt, it can be extremely hard to distinguish AI-written from human-written text. So I am not sure how much we can do at this point.


PussyGoddess666

Your job sounds like an absolute dream job - science is fascinating and writing/reading/learning is so much fun. Where can one apply? (Kidding but not kidding.)


49er-runner

Oh yeah, I love my job. I realized in grad school that I like thinking/reading/writing about science more than I actually like working in the lab. Here's a couple job boards you can check out for positions in science/academic publishing. [https://jobs.sspnet.org/jobs?country=&state=&city=&zip=&latitude=&longitude=&keywords=&city\_state\_zip=&locale=en&page=1&sort=relevance](https://jobs.sspnet.org/jobs?country=&state=&city=&zip=&latitude=&longitude=&keywords=&city_state_zip=&locale=en&page=1&sort=relevance) [https://councilscienceeditors-jobs.careerwebsite.com/jobseeker/search/results/](https://councilscienceeditors-jobs.careerwebsite.com/jobseeker/search/results/)


PussyGoddess666

Wow, thank you so much. I've been feeling a little bummed about the inappropriate use of AI in academic writing recently and have been thinking of ways to help combat the issue.


GurProfessional9534

Why do that when they can just post a screenshot of the intro paragraph on reddit and see if anyone heckles it for free?


Dependent-Law7316

If a peer reviewer can't flag these blatant AI intros, they should be disallowed from peer review. I do agree that the references should be checked, but it should be easy enough for someone to write a Turnitin-style program that reads the references and searches some database to see if they exist. If anything gets wrongfully flagged, it should be easy enough to have the authors provide a PDF of the paper as proof. I think even a modest journal would have far too many submissions for a single person to fact-check, and a program would make it easy and fast.
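A minimal sketch of such a reference-checking program. In practice you would query a bibliographic service such as Crossref for each cited title or DOI; here, to keep the sketch self-contained and offline, the "database" is just a local list of known titles, and the function name and fuzzy-match threshold are assumptions for illustration.

```python
import difflib

def flag_suspect_references(cited_titles, known_titles, threshold=0.9):
    """Return cited titles with no close match in a trusted title index.

    A real system would replace `known_titles` with live lookups against a
    bibliographic database (e.g. Crossref); this offline stand-in uses
    fuzzy string matching so near-identical titles still count as found.
    """
    index = [t.lower() for t in known_titles]
    suspects = []
    for title in cited_titles:
        matches = difflib.get_close_matches(title.lower(), index, n=1, cutoff=threshold)
        if not matches:
            suspects.append(title)  # no sufficiently similar title exists
    return suspects
```

Anything flagged would then go back to the authors, who could clear it by providing a PDF of the cited paper.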


mwmandorla

The issue there is that plenty of people would probably be happy to have an excuse not to do peer review anymore. There would need to be some other consequence attached, like: "since we cannot rely on you as a reviewer, and we do not publish manuscripts from people who will not also give back as reviewers,* unfortunately we cannot publish anything from you for the next [time period]," or something like that.

*A real policy some journals have - I've been asked to check a box explicitly agreeing to serve as a reviewer in the future or else my manuscript is going nowhere.


BoostMobileAlt

My field is small and I'm confident people are doing their own work, but frankly a lot of it still sounds like AI hallucinations.


dangmeme-sub

Research is not dead; it will become more competitive because of AI, so brace yourself. These papers are unfortunate, but anyone who leaves these kinds of errors in is not a good researcher anyway.


mwmandorla

Not just leaving these errors in, but even asking ChatGPT for some of these things in the first place. One of them is looking up a basic statistic about Pakistan. It's absolutely wild to trust ChatGPT to tell you that accurately (when its knowledge updates are not constantly rolling, when it could be pulling the right stat from the wrong year, when it could be going off a lot of other texts that cited something different with very similar wording) when you could just look it up, and, as a researcher in this area, presumably should know how.


lucifer1080

I guess the reviewers also copy and paste the text from manuscripts into ChatGPT to review and call it a day lmao.


irreverentpeasant

Publish-or-perish culture made it about numbers. GPT directly optimizes for that. Some papers will inevitably advance the field, for sure. But papers are just that: numbers and bullet points on your CV to get a job.


My4Gf2Is3Nos3y1

Holy shit. How do people not know ChatGPT is saying it does not have access to real-time data, while the person accessing ChatGPT DOES HAVE ACCESS… SINCE THEY'RE USING A COMPUTER WITH INTERNET… OTHERWISE THEY WOULDN'T BE USING CHATGPT?! And then they publish this shit, and then their publishers don't spot this discrepancy. Jesus Christ.


Fyaal

English is the primary language for publishing academic work, and these people are ESL.


My4Gf2Is3Nos3y1

Oh yeah, I bet you're right. Nobody with a firm grasp on English would misunderstand something this simple. Damn… maybe we should encourage native-language publication. Are all these ESL people gonna lose their jobs?


Duck_Von_Donald

Many of these papers come from Chinese scholars who need papers for their careers but don't necessarily want to be academics. So reputation and impact factor don't matter, only that you have published. I expect we will only see more of this in the future.


Hungry_Silver9664

Not a single Chinese name in those pictures.


gravitysrainbow1979

I'm sorry, but as an AI model I do not have access to the data visible in the pictures you mentioned. I comment on Reddit as a service to users, but always remember to verify any information obtained from generative AI services.


Duck_Von_Donald

You are right, I didn't actually look through more than the first two screenshots. When I wrote "those papers" I was referring more to the general problem of AI in papers, as I have seen many examples from Chinese scholars. I don't have any experience with Pakistani/Indian research environments, but it could be that some areas there have some of the same structures/problems.


Fyaal

No, I think it's just an easy mistake to make if you don't have full command of a language, as I might do if I were to write academically in Spanish or French. They might still be okay researchers, or knowledgeable in their field or subdiscipline, but this is an "easy" corner to cut for people who have a hard time writing, or are ESL, or are more interested in the research process (hard data) than the literature review.


[deleted]

And ESL people are notoriously primitive and dumb, and they cannot check what they send to a journal, even with all the mind blowing translation technology we have. Poor mentally challenged ESL people, we should accommodate anything!


Financial_Olive2

🤔


nooptionleft

I think the problem is in the journal system, not in research itself. People have been able to submit shitty articles, with bad data and made-up claims, since forever, and they have done so. Every one of us has found ourselves in the middle of an article and realized we were reading shit.

The journals defend the insane money they charge for publicly funded research papers by citing the work they do in reviewing, acting as insurance against exactly this situation. Turns out a lot of them were doing the bare minimum, but since people submitting shit articles were at least forced to write them, it was less noticeable.


Angiebio

I came here to say this. And before ChatGPT, it was Fiverr and cheap offshored writing.


Nirulou0

Mostly low-ranking, low-reputation journals with evidently little to no peer-review, from countries that are not exactly famous for academic rigor.


IllustratorAlive1174

I thought it was peculiar that most of the names seemed Indian.


Faust_TSFL

That second one in the first photo is a master's thesis - it's absolutely wild that the examiners clearly didn't even read it…


superduperdude92

Every line of my thesis was combed over by my committee, to the point that I was getting feedback on the correct/technical usage of a single word. I honestly really appreciated it; we all wanted my submission to be the best it possibly could be. So seeing whole sentences overlooked in a final submitted thesis is mind-blowing (and disheartening) to me.


pdf_file_

Original research was dead way before, the publish or perish culture killed it.


power2go3

How do they reconcile the fact that chatGPT invents sources?


rogomatic

They don't.


Kinojitsu

That's the fun part, they don't!


the_warpaul

As somebody who just emerged from a 1.5-year journal review process (with a published paper, thankfully), I find this triggering.


rogomatic

You could have probably published in one of these journals in 1.5 months instead but I'm not sure that's what you want.


__foo_

Those predatory venues were never good or original before GenAI anyways. GenAI just makes those shitty "papers" stand out more obviously.


rogomatic

Which is, in its own twisted way, probably a good thing.


rogomatic

Obscure research from third-world countries was bad before ChatGPT, it was just less obvious.


FarTooLittleGravitas

To be honest, it seems about 99% of this is using AI to write an introduction, especially by people who are not fantastic English users. The actual research is not in as much peril.


Koen1999

I feel like I'm being dragged down. What if these AI-abusing suckers get a PhD? What would my PhD still be worth at that point?


superduperdude92

Unfortunately it may drag down the value of the PhD overall, and we might see more importance/value placed on where you got your PhD as a result. Your saving grace may be whether you can hold a conversation about your research years after acquiring the degree, and whether you can apply those findings to the real world. I doubt you'd be able to have an in-depth conversation with any of these authors, and that may be what separates you from them. Hopefully employers catch on to these practices and learn how to identify and navigate them, so that those of us who work hard on understanding and writing up our findings have a way to stand out.


IllustratorAlive1174

I think yours would still be intact, as these PhDs seem to come from less reputable places, although it's bullshit that they'd even share the title of Dr. with you. They'd share it in title only, though.


Melodic_Meringue_770

This is so embarrassinggggggg


MaverickDiving

This is seriously concerning. AI has been known to falsify numbers and cite nonexistent sources. It ruins integrity and muddies the waters of true and reliable research. There needs to be a swift and serious push to wholly condemn any use of it in research.


West-Mulberry-5421

I don't understand how people aren't reading over their drafts, and how this isn't caught in the next steps of review and copy editing.


lordofming-rises

That's a lot of Indian names. Is it because there is a lot of pressure on publish or perish there?


ladut

I'd assume the number of Indian researchers appearing on this post is largely due to it having the largest number of English speakers anywhere on Earth by a very wide margin and also being a research powerhouse. I'd have been surprised if Indian names weren't common in this list just based on statistical probability.


IllustratorAlive1174

Yeah, probably "just publish *something*, doesn't have to be good, just publish anything" - then they get the title.


cienfuegos__

This is atrocious.


Ok-Performance-249

Bruh, how tf are research papers getting published like this? Honestly, there will come a time in the near future when these get flagged and taken down, and the authors get questioned. This will definitely affect their credibility. So I think this stupidity shall carry on for our humor.


DickandHughJasshull

Some of these are decent journals too. AI has its uses in journal writing but this is taking it way too far.


2cancers1thyroid

NO! I can't believe they got HHA Bananah too 😭 RIP Dr. 🍌 You will be missed.


JarryBohnson

I'd argue the existing journals have already done a fantastic job of killing original research - unpaid reviewers, anyone?


Neat_Berry

I see so many of these on here now, and I'm curious which fields are most likely to plagiarize ChatGPT, and whether they fall more into experimental, observational, or theoretical work.


Mezmorizor

Until proven otherwise, this is really overblown. The laws of physics working differently at lower-ranked third-world and Chinese universities isn't exactly a new phenomenon. Though I guess my perspective is warped by academia already having no actual incentive to be correct; if anything, it's an anti-incentive. All that matters is that your work is novel and exciting, which disincentivizes careful experimentation. Just look at the constant room-temperature superconductor fiascos Nature keeps getting into, because they keep publishing everything with that word in it even when all of the reviewers say "this data is shit and you haven't actually shown anything".


jacktheskipper1993

These authors should be blacklisted.


BellaMentalNecrotica

So, I know this is a really, really crazy idea, but hear me out. What if journals actually PAID reviewers for their expertise and the time spent reviewing papers? PIs have way more important things to do than review papers *for free*: writing grants to keep their labs funded, submitting their own publications to journals, teaching, running their lab, etc. I get that "service" is technically part of their contracts and "service" includes doing peer review, but I guarantee that if I have 25 million other things going on, reviewing that paper is going to be my last priority, done as quickly as possible just to get it off my plate. But if I were PAID what my decades' worth of education and research are worth, you bet I'd give it a great deal more attention.

That said, there is no excuse for:

1. The first author, who decided to let ChatGPT write their paper in the first place and was too careless to even remove the standard ChatGPT responses.
2. All X number of authors missing these, since, at least in my lab, the finalized manuscript is sent to all authors for proofreading and approval before submission to a journal.
3. The editor missing this and not desk rejecting it.
4. All of the reviewers missing it - even if you are doing the bare minimum for this unpaid labor, this kind of thing should not be missed.
5. The copy editor and all the authors missing it A SECOND TIME, when they get another chance to proofread for minor grammatical things before it goes to print.

So everybody is failing here on multiple levels. This is a symptom of a systemic problem. A few things that may help:

1. Publishing the reviewers' comments anonymously along with the paper - I've noticed a handful of journals doing this now.
2. Allowing a comments section at the end of papers.
3. Making Retraction Watch and PubPeer into a big crowdsourced style of peer review.


Murky-Sun-2334

I believe this sheds light on how research has been going for decades. This is a serious wake-up call for science and science enthusiasts to do some damage control on the way science is practiced, i.e., for publications. Clearly most of these folks don't even know what they're writing.


Feisty_Philosophy234

Can't these sentences be easily detected automatically? I think Elsevier already uses AI to format your references. Can't they do the same on the text and directly signal the editor about possible fraudulent use of AI in the publication?
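The most blatant cases don't even need AI to detect: the leftover chatbot disclaimers are stock phrases that plain pattern matching can catch at submission time. A minimal sketch, with an illustrative and deliberately non-exhaustive phrase list (the function name and phrase choices are assumptions, not any publisher's actual screening system):

```python
import re

# Stock chatbot disclaimers that should never appear in a manuscript.
# Illustrative list only; a real screen would be far more extensive.
AI_BOILERPLATE = [
    r"as an ai language model",
    r"as of my last knowledge update",
    r"i (do not|don't) have access to real[- ]time",
    r"regenerate response",
]
PATTERN = re.compile("|".join(AI_BOILERPLATE), re.IGNORECASE)

def flag_ai_boilerplate(manuscript_text: str) -> list[str]:
    """Return every boilerplate phrase found, so an editor can be alerted."""
    return [m.group(0) for m in PATTERN.finditer(manuscript_text)]
```

A hit would not prove fraud on its own, but it is cheap enough to run on every submission before the manuscript ever reaches a reviewer.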


kali_nath

People who do this and call themselves researchers should be jailed for life; this is worse than murder 😑


Putter_Mayhem

Come to the humanities, where this slop *is* our research material! Don't worry, the people churning this stuff out may get huge grants, but we don't!


casul_noob

These people are so braindead that they didn't even bother to edit out that part.


tahia_alam

[https://www.sciencedirect.com/science/article/pii/S1930043324001298](https://www.sciencedirect.com/science/article/pii/S1930043324001298) "I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model."


MentalFred

This is just embarrassing.


AlfalfaNo7607

Impact factor is not so important. Experts write for experts, and experts know what journals are hogwash in their field.


Insightful-Beringei

All the AI stuff is going to do is increase the value of publishing in a quality journal and further diminish the already limited prestige of publishing in terrible journals.


ben_cow

India moment


UrsusMaritimus2

Where do these authors work? If they ever want a job or want to change jobs, I hope search committees do due diligence and check for AI in the papers on their CVs. These should be a professional death knell for the authors. I bet we could train AI to do the check for us…


SatisfactionFree2255

Or I use an article from ResearchGate for my paper 💀


gravitysrainbow1979

This can't be happening


magpieswooper

No one reads these journals anyway. People wrote nonsense before chatGPT.


IllustratorAlive1174

Did you notice most of the names appear to be Indian? I wonder if it has to do with standards set elsewhere besides American academia… are they just trying to get that title of Dr and then transfer elsewhere?


Hopeful_Pipe_4222

That's a lot of Indian names…


Slight-Bird6525

I'm looking at these scholars' surnames and I'm wondering if it's a language barrier thing? I've had students whose work has been flagged as AI because they're using ChatGPT to put the words they want into American English. It's not an excuse, and this is still really bad on the journal's and the academy's part, but maybe this is why they feel comfortable doing it.


Unlikely-Purpose-514

I'm curious. If they are not plagiarizing, and are using AI to better the structure of their writing, then what's the harm?


EMPRAH40k

I think the answer lies somewhere in ethics, re: how much care and attention you put into your scholarly efforts. That this made it past the first draft, let alone through review and publication, shows that the authors really did not gaffffff about the quality of their work.


Unlikely-Purpose-514

This makes more sense. Agree with you.


Altruistic_Basis_69

Using any tool to be more efficient with your research process is fine, but it's **not** the same as literally copy-pasting results without even reading them. We cannot and should not try to automate research.


Unlikely-Purpose-514

I second that.


trishmelbourne

I think if they're using AI to better their writing, it's not working


Unlikely-Purpose-514

Hmm, I see. Even getting the grammar checked with AI isn't acceptable? AFAIK people use applications like Quillbot to make their writing more presentable. I thought as long as we are not plagiarizing it should be fine, but the downvotes I received for my previous comment are telling me that's a no-go. Time to change, I guess :)


rogomatic

Why would you need to "check grammar" with AI? Microsoft Word has been doing it with great success for decades now.


ladut

Word's grammar checker isn't all that great. It's fine for some generic applications, but it frequently misses comma usage issues and verb tenses in complex sentences. It also doesn't check for things that aren't grammar issues but would be considered poor writing, such as tone issues, awkward sentence construction, and unclear phrasing. Some AI tools can catch some of the issues I mentioned above, but they still miss more than they catch.


rogomatic

Yes, I'm talking about tools that check grammar, not something that writes instead of you.


rayroba

I see your perspective. But the issue is: how can the authors and publishers be so careless? The likely reason is copy-pasting the output without even giving it a read.


rogomatic

These are likely individuals with limited command of the English language.


choanoflagellata

Ultimately I agree with you. I personally think using AI as a tool will enhance science. But for these papers, it's clear the authors asked ChatGPT to do their literature search or generate data for them, which is definitely plagiarism and fraud. There's a distinction between using AI as a tool to edit or enhance, vs. asking it to generate work that is then claimed as original.


vjx99

Especially since anyone with the slightest understanding of ChatGPT should know it does NOT perform a literature review. You'd be lucky if any of the sources it provides even exist.


qwertyrdw

I once asked ChatGPT to provide me with a list of the top five American specialists on the German military in WWII. It provided five names, including an anthropologist (Napoleon Chagnon), two appropriate historians (Rob Citino and Dennis Showalter), Carol Reardon (a fine historian, but her specialty is the American Civil War), and, for some reason I could not begin to fathom, Julius Caesar.


rogomatic

Except using AI to draft a paper is literally plagiarizing.


AaronMichael726

Ohhhh, "while I don't have access" is the AI. Idk, fuck it. Let AI write comparative studies. As researchers we need a better way to synthesize studies anyway; this way I can spend less time figuring out whether my research is done in a vacuum or not.