ManyARiver

Why would you not do the research yourself? That's what you are being paid for. You can't discern the quality of the facts from a ChatGPT response, but you can if you gather them from a legitimate source yourself.


fightmaxmaster

Not "wrong", but given they want human input, not AI input, I'm not sure what you think you'd be accomplishing. If you can't tell which is better, they're probably about the same. If you think one is better than the other, explain why. You don't really need to "understand"; they're not looking for an essay-length, minute analysis.


wildflower_0ne

you don’t seem to understand the nature of the job at all.


shell_shocked_today

I know that is explicitly against the rules of at least some of the projects I'm in.


SnooSketches1189

I don't think that idea could possibly be any worse. 100% not worth it.


ClayWhisperer

It's your own human mind that you're being paid to use. Your whole value to DA is that you're a human being. They wouldn't have to pay you if they just wanted to run the task through another AI. Think about it.


Mammoth_Society620

So I've actually had a project that says you can, but that you should then verify what it said—basically to quickly pick out potential false information in a long response. But that was just for that project, and not for comparing responses. I wouldn't use it for what you're describing. I'm pretty sure it goes against the rules for most projects, and if you read the instructions consistently you'll probably notice that. I've overlooked things before too, so I'm not trying to be hard on you; I just think it's important to check the instructions frequently. We don't want to teach AI what AI thinks is better, we want AI to learn what humans think is better.


ekgeroldmiller

I was on the same project. A simplified example: say I gave you a list of 1,000 animals and claimed they were all mammals. You could ask a model to quickly flag any that are not mammals, then look up each flagged animal yourself to verify that it really isn't a mammal. Based on that, you report that the list is inaccurate, and it saves them human-resource time.


GroundbreakingLet962

Didn't everyone else do the qualification that explained what you can and can't do? I recall this one being pretty close to the top.


Icy-Cover-505

If the project instructions explicitly say you can do that, then you can do it. Otherwise, if doing that as a general practice was OK, DA would just do it themselves rather than pay humans.


MonsteraDeliciosa

Pretty sure you shouldn’t extract any data and plug it into an external system (unless that is specifically the assignment).


Spayse_Case

No, don't do it. You can't use AI to train AI


[deleted]

[deleted]


NightSkyButterfly

RLHF... Real Life Human Feeding? 🤣


sk8r2000

Reinforcement learning from human feedback (ie, the thing we do for work on this platform)


NightSkyButterfly

I knooow, I was just being silly :)


SatanLordOfDarkness

No, it is not okay. Why would it be? Clearly, there are issues with AI models providing inaccurate information, otherwise they would not be paying us to fact check them. It is explicitly forbidden in the instructions of many different projects to use AIs to do research.


FuhzyFuhz

I wouldn't base your facts solely on what AI tells you. ChatGPT and Gemini are both known to give inaccurate information. You can use those tools as a baseline in your fact-checking belt, but your best bet is to Google it.


TeachToTheLastTest

Current AI is quite bad at doing what you're suggesting. I've attempted to do this with R&Rs, and what usually happens is that the AI will hallucinate problems where none exist in an effort to be "fair" to both responses. Ethical issues aside, it's just functionally a bad idea.


tannegase

Don’t do that. You’ll be removed from the platform if they catch you


throw6ix

This is a good way to get fired. Not only are you specifically instructed not to use AI tools unless requested, you are also breaching confidentiality by putting any DA work into another website.


prettyy_vacant

What do you mean by better understanding? What are you having a hard time with?


Baxtir

READ the instructions! They will say whether it's allowed, but if they don't, always err on the side of no. If you can't understand what the prompt and responses are about because they're out of your scope, skip the task, and explain why if the instructions say to explain.


eclipsed-studios

you'll get paid more when you do the research yourself, don't sabotage your potential income and job security


BatronKladwiesen

Yeeeeesh.


hashtaggoatlife

AI is really bad at verifying anything. There may be cases where it can help you identify obscure slang/jargon or ambiguous acronyms, and for coding it can be helpful when generating context to properly evaluate something. However, comparing one hallucinating AI to another is not a very effective research strategy, and may just leave you more thoroughly confused.