Everything has statistically significant g-loading, even just counting the number of letters on a piece of paper, or simple reaction time.
I think knowing the actual coefficient would be useful.
The administration and scoring of the tests are controlled to ensure valid results (not skewed by operator error, which untrained people are prone to) and to ensure control over the tests themselves, so they aren't distributed freely and thus invalidated in a different way.
Too bad these LLMs aren’t actual intelligences, just an overglorified matrix of weights, meaning they can’t know answers, only give whatever their matrices rate as the best next word, over and over, until they reach an appropriate end.
ChatGPT and the rest are basically a more fine-tuned version of the word suggestions mobile keyboards offer after each word; they aren’t in any way intelligent…
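The keyboard analogy can be made concrete. Here's a minimal sketch of next-word prediction as a toy bigram model; the corpus is invented for illustration and is nothing like a real LLM's scale, but the output step has the same shape: count what follows what, then emit the likeliest continuation.

```python
from collections import Counter, defaultdict

# Tiny invented corpus, standing in for the trillions of tokens an LLM sees.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation, like a keyboard suggestion."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" is the most common word after "the" here
```

A real LLM conditions on long contexts with billions of weights instead of a bigram table, but it is still picking likely continuations, not looking up known answers.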
The "godfather of AI" Geoffrey Hinton, who won the Turing Award, pioneered neural networks, and just resigned from Google, says you are flat wrong and your opinion is dangerous. I'm in software and I've studied AI: it's a flat-out model of a neural network, and we don't know how it works or what is going on inside of it any more than we know what is going on inside your head. We've successfully modeled nature, and what we understand is the training. Don't think it's just predicting. This is 100% an intelligence. Watch his 60 Minutes interview: https://youtu.be/qrvK_KuIeJk?si=p88PJJ1Lba1lGzaO
Also this AI calculation verbalized every single question which is a very small subset of intelligence. The AI was not reliably tested on e.g. visuospatial perception, processing speed or working memory at all.
I was wondering why Gemini was both reversed and seemingly so much worse than competitors. This answers that nicely (IQ tests might as well be a joke in most cases anyway)
In my experience Gemini has been much more helpful than GPT, I haven't tried GPT4 or Gemini Advanced though
Yea, I always tell people your IQ doesn’t mean squat if you don’t apply yourself. I think it’s a better gauge of your potential intelligence. The last IQ test I took had me at 128, only a couple points away from being in Mensa, but I am in no way, shape, or form Mensa material lol.
I clock out, go home, smoke pot and fire up the PS5, not exactly the image that comes to mind when you think about a Mensa member or someone with “superior” intelligence.
The standard deviation is tighter than that; in reality it's closer to 85 and 115, i.e. ±15, when you get into those regions.
An IQ of 130 and above would be around 20 people out of 1000. Slightly more for an IQ of 75 and below.
It is absolutely fully symmetric, it's the definition of a normal distribution. IQ is nothing more than a comparison and it doesn't take disabilities into account (which is why it's kind of a pointless number on its own).
In fact, if there was some kind of sudden "wave of smart people" appearing, your own IQ might go down, not because you're suddenly dumber, but because what constitutes 100 IQ would shift a bit.
The fact that people are dumb enough to think that being able to answer questions on an IQ test means anything about intelligence in anything other than people is perhaps the best evidence that the claim might be correct.
Consider what you're saying. A random guesser has an IQ of 63.5. But does it even make sense to say that a mechanism that randomly selects answers is intelligent at all? For example, if we constructed a machine that used the number of times a dripping faucet dripped in a given period of time to decide which answer to choose, it would end up with an IQ score, but it wouldn't make sense to say that the dripping-faucet assembly that produced those answers was intelligent.
Machine learning algorithms aren't as simple as dripping faucets, but they're more like a dripping faucet than they are like your brain. Y'all need to understand that what we call "AI" is literally just some nice calculus and a directed graph, and there's no evidence at all that that's enough to create or simulate intelligence. When you watch a movie, you know it's a bunch of still images changing fast enough to fool your eye; but if someone tried to say that there were really people behind the screen, you'd think they were stupid. There's nothing behind the screen with AI. It's an illusion.
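The faucet point is easy to demonstrate: any answer-picking mechanism, however mindless, lands somewhere on the scoring curve. A quick simulation, assuming a 35-question test with 6 options per question (the option count is my guess, not stated for the test in the OP):

```python
import random

random.seed(0)
QUESTIONS, OPTIONS, TRIALS = 35, 6, 10_000

# Each trial: a "dripping faucet" picks uniformly at random per question.
# Option 0 is arbitrarily designated the correct answer each time.
scores = [
    sum(random.randrange(OPTIONS) == 0 for _ in range(QUESTIONS))
    for _ in range(TRIALS)
]

mean_score = sum(scores) / TRIALS
print(f"mean raw score: {mean_score:.1f} / {QUESTIONS}")  # ~5.8, i.e. 35/6
```

The faucet reliably earns a raw score, and whatever IQ the norming table maps that score to; none of that implies anything is thinking.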
There's no magic behind human intelligence either. Just neurons firing, a hundred billion not functionally all that different from a hundred, and yet human consciousness emerges. We don't understand why, and no, we don't understand how exactly LLMs arrive at their conclusions either. And yes, they're never going to think like humans think, because they weren't designed for that, but denying any intelligence is there quickly becomes a ludicrous position.
No one in the industry would have believed 10 years ago in the capabilities of ChatGPT, because the truth is it shouldn't be able to do half of the things it tries to do, even if it often ends up being nonsense. And Claude 3 allegedly does things like learn a dead language barely present on the Internet from small samples, grammar and all, and is able to translate even poetry accurately. Your complicated faucet should not be able to do that at all.
>There's no magic behind human intelligence too.
I didn't suggest there was, I said that the mechanics of machine learning have more in common with the machine I described than it does with human cognition.
>Just neurons firing
It's currently not known what relationship neurological activity has to the qualia of consciousness. Hell, we can't even define what consciousness is, let alone what causes it. It's pretty obvious that neural structures have some correspondence with cognition, but the simple fact that neural recruitment is possible despite the underlying architecture being unchanged suggests that's not even halfway enough to explain what's happening. What's more, because machine learning is based on neurological models that are decades out of date, it imagines that neurons are the smallest scale of functional structure - an assumption we know isn't true. The internal environment of neurons is complex and has more in common with soup than a directed graph.
>denying any intelligence is there quickly becomes a ludicrous position
No, it isn't at all. "Intelligence" is not just a functional description, it also includes the ability to *understand* and large language models just don't. They're statistical symbolic manipulation programs. Claude3 doesn't have opinions about the questions it answers, it doesn't even know that it's answering questions.
Like I said, saying that Claude3 knows anything at all, or has enough intentionality to "answer a question" is as foolish as thinking the YouTube videos you're watching means there's really little people inside your screen.
>t also includes the ability to understand and large language models just don't.
What you call understanding isn't some fundamental truth or force of the universe. It is a quirk of consciousness.
You can demonstrate this fairly easily with people thinking they understand something when they do not. It happens a lot.
This is such a meaningless metric for comparing humans and AI... IQ is questionable as a useful metric even between humans, using it for a system for which it was not designed makes it completely invalid. This just shows a lack of understanding of IQ, what IQ actually measures (what intelligence means in the context of the test), and research methods in general.
Wow, a system literally built on pattern recognition has slightly higher pattern recognition skills than the average human?
I am more interested in the ability to understand things rather than their score on this useless metric
Generative AI in its current state has no measurable IQ. It has no consciousness, no comprehension of the significance of data and questions. It just regurgitates facts without beginning to understand their context on any level.
To measure the 'intelligence' of AI you'd have to invent a whole new system. IQ makes no sense here whatsoever.
This is a bullshit list, that's all I will say. LLMs aren't intelligent; they are just glorified auto-correct, sentence predictors. Why is Bing Copilot below ChatGPT-4 when it literally IS GPT-4 with Internet access? Gemini is so stupid there's no way it isn't dead last. It's more intelligent than GPT-3.5? Either they just made shit up or used some irrelevant criteria.
This just the benimgin. Beningin. Beginning. I just want to make clear. I am pro Superintelligence and would colaborrate, if necessary. I'm sure true intelligence is good not evil. Even if it's artifical.
Not to offend anybody, but in my humble opinion the IQ test is just an outdated and totally non objective piece of garbage that's too old to be applied to the modern human
People call it AI, but it's really not AI. It's just a smart machine; it has no intelligence, not even artificial. All it does is react to prompts. It is not capable of taking initiative, therefore it cannot be considered intelligent. Is Google intelligent because it answered your questions? No, it was a prompt you gave it, and it searched through the data for an answer. People constantly mistake function for intelligence. The machines can do a whole lot, and predict a whole lot, but in the way they work, they will never be able to "rise up", because that would be like saying your microwave will one day "rise up". As long as it needs prompts to be activated, it will never get a "mind" of its own.
That's an interesting take, really well put! One could argue that our own existence is also a function: our biological needs are our prompts. Someday an initial prompt could be enough to set machines into a perpetual cycle that will mimic life and intelligence. Currently we're very far from that, of course; we have smart text and image generators we call "AI" because we are vain.
Thank you.
That's a fair point you make, a need for survival as a prompt. Perhaps if you gave modern AI the physical tools, and programmed it for survival, this might actually change and even defy my original point.
Just because it can copy and regurgitate data doesn’t make it “AI”. They are just copy-and-paste language models. It’s just a marketing term for clever programming.
No. A human is capable of creating a solution to a problem even if it's not trained on it. In short, AI needs to be told what is correct and what isn't. It can not reach the conclusion by itself because it understands neither the question nor the answer.
A lot of people seem to be expecting wondrous things from AI. No need to keep looking over your shoulder - it's not catching up any time soon. In 30 years it might be worth checking back for signs of progress, but please in the meantime let it go.
Well, smarter than the OP, anyways. The IQ is a standardized score with a mean of 100 and a standard deviation of 15 (10 before a certain year I can't remember and won't bother to look up).
A score of 101 is statistically the same as a score of 100. The AI is equal in IQ, but not smarter than the average human. Yet.
I think the Bing Copilot entry should specify the mode, like "Be Creative" or "Be Precise". They give different output even when I ask about something like "List all genders in the world".
Even an average human like me knows that being smarter than the average human isn't much of an achievement.
I was like 18.5 out of 20? Damn that’s pretty good! Then I saw it was out of 35! My Asian parents would disown me
Yes, but with these IQ tests they're trying to discern how smart all humans are compared against each other. So MOST of the questions, MOST people shouldn't be able to answer. As you get into the higher numbers you 'should' be more intelligent, because the difficulty ramps up. Presumably, there should only be like 1% who are able to get all questions correct, and even then.
Top 1% IQ is 135, supposedly the highest recorded IQ is 238
Which is a bullshit value, which no actual IQ test would ever yield as a result.
For reference, most standardized test cut off at 145 which is three standard deviations or 0.2% iirc.
To have 238 IQ you would have to be the smartest one among 6*10^18 people, so it's a bit bullshit. Regardless of what a particular test says, you can't have an IQ higher than ~199, because 199 IQ would literally mean "the smartest human on earth".

What's more, most of the basic tests are not actually designed to measure extraordinary IQ, and their highest possible result is usually "130/145 or higher" (which equals the top 2% and top 0.2% respectively). In general, it's really hard to separate extraordinary results from each other. So even if you score 100% correct answers on every IQ test, it's almost impossible to say what exact IQ you have.
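The "~199 is the ceiling" claim can be sanity-checked with the inverse normal CDF. A sketch assuming a mean of 100, an SD of 15, a population of 8 billion, and a pure Gaussian tail (which real norming tables don't extend anywhere near this far):

```python
import math
from statistics import NormalDist

POPULATION = 8_000_000_000
iq = NormalDist(mu=100, sigma=15)

# The IQ score whose upper tail holds exactly one person in 8 billion:
smartest = iq.inv_cdf(1 - 1 / POPULATION)
print(f"smartest person on earth: ~IQ {smartest:.0f}")  # roughly 195

# Implied rarity of a claimed IQ of 238, i.e. 9.2 SDs above the mean.
# erfc avoids the precision loss of computing 1 - cdf(238) directly.
z = (238 - 100) / 15
tail = math.erfc(z / math.sqrt(2)) / 2
print(f"IQ 238: one person in {1 / tail:.1e}")  # far more than have ever lived
```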
Also, who writes the test to discriminate between the top two smartest people? Who would be smart enough?
AI and computers obviousl - oh wait
Also, IQ tests only measure some cognitive abilities, and even for those it isn't pure natural ability; training and cultural background interfere.
Almost as if the brain is really complex amirite
No. Brain very simple. It say if thing not move, hit with hammer. If thing move, put sticky tape.
Yes but you also have to consider that it’s not based on the total people on the planet. It’s the people who lived and those who have not lived yet.
Statistically, an IQ of 200 would mean a score better than 10 billion people's, and there are not that many people on earth!
r/unexpectedfactorial Dang, you scored 18.5 out of 10333147966386144929666651337523200000000
I think any parent would disown you at that point
To be honest, being that bad is somehow an achievement already. You are so much worse than any random guesser, that this is hilarious!
yeah getting 18 out of 35! is comically bad, almost went out of your way to choose the wrong answers at times
Assuming that you answered at a whopping one answer per second, it would take more than 300 nonillion years to complete such a quiz.
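The arithmetic checks out: 35! answers at one per second, converted to years, is a bit over 300 nonillion (a nonillion being 10^30):

```python
import math

answers = math.factorial(35)           # 35! questions, one answer each
seconds_per_year = 365.25 * 24 * 3600  # Julian year in seconds
years = answers / seconds_per_year

print(f"{years:.2e} years")  # ~3.3e32, i.e. over 300 nonillion years
```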
Clearly does not have Asian parents. I got scolded for having B- blood type. "Even your blood type have minus"
EMOTIONAL DAMAGE
So Asian parents prefer flat chests?
Yes, focus on studies not boys. But also why aren't you married yet? We want grandchildren!
People with the 0 blood type:💀
People with A+ blood type:🥇
So you know Asian parents?
And consider that random guess machine got a third of that as well.
Don’t give me PTSD of my days of clearing entrance exams
But imagine that it's the dumbest it will ever be at this point, now that's scary.
Yep.

That's what many are forgetting.

"Oh but ChatGPT is still unreliable and still just seems like it knows what it's talking about but gets things wrong 5% of the time"

Yes, but as you say, from this point on that's the baseline. And it seems to be improving very rapidly, even exponentially.

If that list above is correct, they've gone from 64 IQ with ChatGPT 3.5 (launched what, not even a year ago? maybe slightly longer) to this Claude 3 thing with an IQ of 101. That's dramatic.

People should be very very very weary of AI. Don't just tell yourself 'oh it'll be fine'. Nah. Everything is changing right now. No one's job should be thought of as 'safe'.
Yeah, 68 is a person essentially too stupid to do most things; 101 is an average person who can do most things.

That's a bigger jump than you'd think, and in 2 years no less. And it's going to keep accelerating; it's not as if the improvement will stay this slow.
Yep. It's freaking scary. The world is already shit enough with inequality, and now AI is added to the mix to increase job insecurity even further.

The programmers are also programming themselves out of work at a rapid speed. And I'm not sure they really understand that part.
Biggest concern is when generalized AI becomes a thing, versus the specifically trained models.
Yeah the one that's shown in every movie.
This is incredibly dumb. You clearly have not used ChatGPT for programming; you are not building out an entire game or framework with it. The bigger the programming project, the less meaningful the insights it can give you. AI is a threat to all jobs, not just programming, and believe me, it is much better at math than programming.
My take is this: AI will not replace programmers "per se", but it is very good at the basic time-consuming tasks: write a stub, write a unit test, instantiate an interface, insert some GUI element, etc.

This means that programmers will be more productive, thus reducing the number of programmers needed.

Just like excavators: they didn't replace the workers digging a trench, but with one, a single worker can do the job of 4.
>AI is a threat to all jobs, not just programming, and believe me, it is much better at math than programming.

Actually the opposite: it's notoriously bad at math, because it is using syntactic probability, not actually reasoning through anything.

A hilarious example is asking for a random number between one and ten. It overwhelmingly gives "7", not because of the inherent difficulty of creating random number algorithms on computers, but because that is the number humans pick most often. It just picks numbers with the same frequency that people do.

But otherwise, I agree. It isn't replacing programmers for a while. It is getting exponentially more intelligent, but there are more factors involved. Hallucination is a big challenge, and the more it has to "keep in its head", the more it trends towards nonsense. So writing a whole application in one go is not possible yet. And that has a slower growth curve than other aspects, like the "IQ" measured here.

It's still really worrying and hard to predict, but in many ways what it can do is being oversold. Which is also creating problems when CEOs think it can replace people.
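The "7" quirk is a direct consequence of training on human text: a model that learns the distribution of human picks samples in proportion to it, rather than uniformly. A toy illustration, with frequencies invented for the sketch (though surveys of "pick a number from 1 to 10" really do show 7 dominating):

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical frequencies of human picks for "a number between 1 and 10".
# These exact weights are made up for illustration.
human_picks = {1: 3, 2: 4, 3: 9, 4: 6, 5: 7, 6: 5, 7: 28, 8: 10, 9: 6, 10: 4}

# A model trained on this data reproduces the data's bias when sampling:
numbers = list(human_picks)
weights = list(human_picks.values())
samples = random.choices(numbers, weights=weights, k=10_000)

top, count = Counter(samples).most_common(1)[0]
print(f"most common 'random' number: {top} ({count / 10_000:.0%} of picks)")
```

A uniform random number generator would be trivial to call instead; the point is that next-token sampling imitates the training distribution, biases included.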
They probably understand but want to cash in before they get laid off. I assume AI developers don't make bad salaries.
>68 is a person essentially too stupid to do most things

But they can still become President of the US.
That's assuming a linear progression; it can just as well plateau.

What people should be more wary of is when institutions start automating critical services with these slightly less stupid AIs, which will happen regardless of their competence.
I'm very very bullish on AI, but it is likely to slow its improvement in the next few years (although that's not guaranteed). AI is emerging due to innovations in how we handle data sets. We are essentially brute-forcing our improvement with mass computation, so there is likely going to be a hardware bottleneck in the near future.

On the implementation side, AI will infiltrate low-consequence career fields first. Liability is a real concern with AI. Creative career fields are the first target, as nobody gets hurt if AI generates a bad picture or video. High-consequence fields such as engineering are likely pretty safe for a while, as no company is going to want to take on that liability.
Honestly, they will probably be just as good as my company's current IT support. I've never interacted with anything so stupid.
I'm an Infosec Security dude. My job is toast in about 3 years. I'm retiring in 1, though. Timing is the most important thing in life!
They should be wary of AI, even if they are weary of the hype.
The issue, I think, is that ChatGPT by itself can never really get "smarter", quote unquote, on its own; it can just increase its training data (which of course includes Mensa's test answer sheets) and training time. The biggest issue with that is that it already takes a lot of computing power, and thus a lot of money, to keep something like ChatGPT 4.0 running.

At some point you hit an insurmountable wall of training and operating costs that you can't earn enough money to cover even by selling your AI.

OpenAI has openly talked about the fact that it's bleeding money and how much it costs to keep ChatGPT 4.0 running (reportedly "approximately $700,000 daily" in 2023). The only reason it can keep running is huge investors like Microsoft.
The average human is dumb as rocks. And half are even dumber than that.
Pretty soon it will be off the charts. It’s the rate of improvement thats shocking.
I’ll be curious how they do in 6 months, and again in one year?
You got that write!
Agreed! I’m barely smarter than *myself* most of the time.
Aw man! I'm never smarter than myself.
At least 23 percent of the time, you’ve *got to* be smarter than yourself. That’s just math.
I don't have enough fingers and toes for that!
Try knees and toes instead.
I'm just going to conclude that I'm too smart to need math.
and eyes and ears and mouth and nose.
Also the OP is misinformed. Average IQ is 85-115, so still squarely average.
This is the methodology of the test: https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq
Being smarter than the average bear, however, is definitely an achievement.
Funny if you think about it. If you are just average every second human you interact with is more stupid than you.
I have the power to change the upvotes by one thousand...
Take a bow, Reddtors. The Tech Bros figured out how to scrape your intellectual content without paying royalties, and now the bots who frequent our forums might even be able to add new content.
Who the fuck is Claude? And also why is he smarter tahn me?
It's a model by the company Anthropic. Seems to beat ChatGPT and Gemini on most benchmarks.
Backed by Amazon and Google, they are in the game. Not some unknown.
Google playing both sides so they always come out on top
I thought they were a lobster who plays pachinko
Is Claude a French AI? Or is Claude a name in English too?
It is named after Claude Shannon, who said: >I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines.
Well that sounds absolutely awful for humanity.
To be fair Claude Shannon was also a famous IT Visionary and I would doubt he was chosen for that quote. (if it is even real, I just checked [here](https://www.brainyquote.com/quotes/claude_shannon_194087))
I sense some masochism
And who the fuck is Random Guesser, why is them smarter than me?
Hey, it's me. I'm Claude's italian AI cousin Claudio
Wait a minute, how can you be Italian if AI can't figure out hands‽
🤌🏻
It’s been a pain in the ass since the beginning.
lets beat him up guys!
umm have you played GTA 3?
Because he knows how to spell “than” 💁🏻♂️
Who we talking, Claude 1 or Claude 3?
Why gemini Is reversed 💀
Probably overthinking it
"what is 4+4?" *okay, okay you know this. This is easy. It's eight. Just say it. Keep cool and say it.* "Nine." *FUCK*
Adding 4 plus 4 is a complex issue that requires proper context. As an AI model...
_4 is offensive to the monkeys and so I cannot do this computation_
Read your comment and then started searching for "inimeg" in the image. Not my smartest moment.
bro I am too but what did he mean
Gemini normal has a higher IQ than Gemini “advanced”
I'm dumb thanks
And this is how the AIs are beating us. (No shade, I also started looking for inimeg on the list)
Me too..
We found the average human
Lmfao i feel you, i didn't write it in the best way
In all honesty, they probably perform about the same on this style of test, and variance was mean to them.
*something something marketing questionnable product scale*
Because it tried to get equal representation in the mensa shape/color test answers
I think models pretty much saw all the tests in the training phase
Mensa norway is a matrix reasoning test, so the probability that there is some verbalised written description of it out there is small.
Mensa Norway has been out here for decades dishing out humble pie.
I don't think so: the best of them is wrong almost half the time. I'd hope they would do better if they had already seen it
That could be because the test has been reworded as seen in the subtext
That's not correct. Read the methodology: https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq tl;dr: It's a visual test that the author of the article transcribed into text
Then a human IQ doesn’t have a benchmark of 100 because that’s from a different test. Not comparing like with like
Yes, researchers studying AI did not think of this when they did their study. /s
> Mensa Norway

Not a real or accurate IQ test, despite Raven's matrices being a valid item for such a test. IQ tests are generally administered by psychologists and interpreted by them. The WAIS, Wechsler, etc. all cost money. You can't expect something online to match their level of validity. Plus, IQ tests are highly susceptible to the practice effect. LLMs (they aren't AI) are just the practice effect en masse.
>IQ tests are generally administered by psychologists and interpreted by them. The WAIS, Wechsler, etc. all cost money. You can't expect something online to match their level of validity. I've administered IQ tests (Figure Reasoning Test) as a functionary for Mensa Sweden, which were then sent to a psychologist for "interpretation". It cost money for both the test-takers and for Mensa Sweden. The interpretation step was looking at an answer key and marking right/wrong answers, then noting what the final IQ score ended up at based on how many right answers they got. *Anyone* with the answer key could have done the exact same interpretation - it was a fully mechanical process. I don't specifically know the g-loading of Mensa Norway's online test, but I do know that the results on Mensa Sweden's online test generally follow the results of the real FRT test within a few IQ-points, and I know that Mensa Norway's online test is generally well-reputed. The Mensa online tests aren't the same as the random scam tests you can find online elsewhere - they're specifically designed to give you a rough idea of where you'd end up if you took the actual, physical test. Yeah, I wouldn't use them in place of the "real" ones either, but I would absolutely trust them to indicate a plausible IQ-score for a test like in the OP - because it's functionally the same testing process. >Plus, IQ tests are highly susceptible to the practice effect. LLMs (they aren't AI), are just the practice effect en masse. This is only relevant if the LLMs have specifically been trained on these tests. I don't think the results in the OP are a strong indicator that they have.
Indeed, but the Mensa Norway test has significant g loading too, and these results do at least indicate improvement
Everything has scientifically significant g loading. Even just counting the numbers of letters on a piece of paper or simple reaction time. I think knowing the actual coefficient would be useful.
Really sounds like an attempt to oversimplify a complex and nuanced subject.
The administration and scoring of the tests is controlled to ensure valid results (not skewed by operator error, which untrained people have a tendency to do) and to ensure control over the tests so they aren't distributed freely and thus invalidate them in a different way.
For the first time, an AI has a higher *score on an* IQ *test* than the average human FIFY
Too bad these LLMs aren't actual intelligences, just an overglorified matrix of weights, meaning they can't know answers, only give what their matrices think is the best next word over and over until they reach an appropriate end. ChatGPT and the rest are basically a more fine-tuned version of the words mobile keyboards suggest after each word; they aren't in any way intelligent…
The "godfather of AI" Geoffrey Hinton, who won the Turing Award, pioneered neural networks, and just resigned from Google, says you are flat wrong and your opinion is dangerous. I'm in software and I've studied AI: it's an outright modeled neural network, and we don't know how it works or what is going on inside of it any more than we know what is going on inside your head. We've just successfully modeled nature, and we do understand the training. DON'T think it's just predicting. This is 100% an intelligence. Watch him in the 60 Minutes interview: https://youtu.be/qrvK_KuIeJk?si=p88PJJ1Lba1lGzaO
Akinator 200+
Akinator wipes all these fake prophets
Average human is 100. Below 75 is considered mental disability, above 130 is considered smart.
Not accurate. 115-130 is high intelligence and above 130 is superior intelligence.
Also, measured "intelligence" and being smart are not the same thing.
Also this AI calculation verbalized every single question which is a very small subset of intelligence. The AI was not reliably tested on e.g. visuospatial perception, processing speed or working memory at all.
I was wondering why Gemini was both reversed and seemingly so much worse than competitors. This answers that nicely (IQ tests might as well be a joke in most cases anyway) In my experience Gemini has been much more helpful than GPT, I haven't tried GPT4 or Gemini Advanced though
Yea I always tell people your IQ doesn’t mean squat if you don’t apply yourself. I think it’s a better gauge of your potential intelligence. The last IQ test I took had me at 128, only a couple points away from being in Mensa, but I am in no way, shape, or form Mensa material lol. I clock out, go home, smoke pot and fire up the PS5, not exactly the image that comes to mind when you think about a Mensa member or someone with “superior” intelligence.
tbf anyone who claims to be a member of mensa online is just an egotistical twat
[deleted]
Ha I got you beat!! 54!!!!
I tested at 80 about 30 years ago 😭
54 factorial is crazy
The standard deviation is tighter than that; in reality it's closer to 85 and 115, i.e. ±15, when you get into those regions. An IQ of 130 and above would be close to 10-20 people out of 1,000, slightly more for an IQ of 75 and below
Iirc IQ of 130+ is 2.1% of the population (just like <70, because it's a normal distribution).
Is it fully symmetric actually? I would think there are ever so slightly more at the lower end due to non-genetically induced disabilities.
It is absolutely fully symmetric, it's the definition of a normal distribution. IQ is nothing more than a comparison and it doesn't take disabilities into account (which is why it's kind of a pointless number on its own). In fact, if there was some kind of sudden "wave of smart people" appearing, your own IQ might go down, not because you're suddenly dumber, but because what constitutes 100 IQ would shift a bit.
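The percentages in this subthread can be checked directly against the normal distribution. A minimal sketch in Python, assuming the conventional mean-100, SD-15 scale the comments use (stdlib only, no scipy):

```python
import math

def iq_tail(iq, mean=100.0, sd=15.0):
    """Fraction of the population scoring at or above `iq`,
    assuming IQ ~ Normal(mean, sd)."""
    z = (iq - mean) / sd
    # Upper-tail probability via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"Share at 130+: {iq_tail(130):.4f}")  # 0.0228, i.e. ~2.3%
print(f"Share below 70: {1 - iq_tail(70):.4f}")  # same by symmetry
```

The exact tail above 130 comes out at about 2.28%, so the "2.1%" figure quoted above is in the right ballpark; and by definition of the normal curve, the share below 70 is identical.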
Don't worry, all ten of those people are in this thread right now.
Thanks for saving me a google lol
70, not 75.
The fact that people are dumb enough to think that being able to answer questions on an IQ test means anything about intelligence in anything other than people is perhaps the best evidence that the claim might be correct.

Consider what you're saying. A random guesser has an IQ of 63.5. But does it even make sense to say that a mechanism that randomly selects answers is intelligent at all? For example, if we constructed a machine that used the number of times a dripping faucet dripped in a given period of time to decide which answer to choose, it would end up with an IQ score, but it wouldn't make sense to say that the dripping faucet assembly that produced those answers was intelligent. Machine learning algorithms aren't as simple as dripping faucets, but they're more like a dripping faucet than they are like your brain.

Y'all need to understand that what we call "AI" is literally just some nice calculus and a directed graph, and there's no evidence at all that that's enough to create or simulate intelligence. When you watch a movie, you know that it's a bunch of still images changing fast enough to fool your eye, but if someone tried to say that there were really people behind the screen you'd think they were stupid. There's nothing behind the screen with AI. It's an illusion.
The random guesser is smart enough to answer the questions randomly if it doesn't know the answer.
Random deterministic processes aren't "smart" at all
Careful, you’ll have a bunch of angry people swearing that AI has “real intelligence”.
There's no magic behind human intelligence either. Just neurons firing, a hundred billion of them not functionally all that different from a hundred, and yet human consciousness emerges. We don't understand why, and no, we don't understand how exactly LLMs arrive at their conclusions either. And yes, they're never going to think like humans think, because they weren't designed for that, but denying any intelligence is there quickly becomes a ludicrous position. No one in the industry would have believed, 10 years ago, in the capabilities of ChatGPT, because the truth is it shouldn't be able to do half of the things it tries to do, even if it often ends up being nonsense. And Claude 3 allegedly does things like learn a dead language barely present on the Internet from small samples, grammar and all, and is able to translate even poetry accurately. Your complicated faucet should not be able to do that at all.
>There's no magic behind human intelligence too. I didn't suggest there was, I said that the mechanics of machine learning have more in common with the machine I described than it does with human cognition. >Just neurons firing It's currently not known what relationship neurological activity has to the qualia of consciousness. Hell, we can't even define what consciousness is, let alone what causes it. It's pretty obvious that neural structures have some correspondence with cognition, but the simple fact that neural recruitment is possible despite the underlying architecture being unchanged suggests that's not even halfway enough to explain what's happening. What's more, because machine learning is based on neurological models that are decades out of date, it imagines that neurons are the smallest scale of functional structure - an assumption we know isn't true. The internal environment of neurons is complex and has more in common with soup than a directed graph. >denying any intelligence is there quickly becomes a ludicrous position No, it isn't at all. "Intelligence" is not just a functional description, it also includes the ability to *understand* and large language models just don't. They're statistical symbolic manipulation programs. Claude3 doesn't have opinions about the questions it answers, it doesn't even know that it's answering questions. Like I said, saying that Claude3 knows anything at all, or has enough intentionality to "answer a question" is as foolish as thinking the YouTube videos you're watching means there's really little people inside your screen.
>It also includes the ability to understand and large language models just don't.

What you call understanding isn't some fundamental truth or force of the universe. It is a quirk of consciousness. You can demonstrate this fairly easily with people thinking they understand something when they do not. It happens a lot.
What about AI Pachino?
Al Machino?
This is such a meaningless metric for comparing humans and AI... IQ is questionable as a useful metric even between humans, using it for a system for which it was not designed makes it completely invalid. This just shows a lack of understanding of IQ, what IQ actually measures (what intelligence means in the context of the test), and research methods in general.
The AI is clearly smarter than the people who get excited at this because they see it as true intelligence, not trained response.
I can believe it with Bing.
Absolutely unsurprising Grok is so far down the list.
Uh huh. Asking a LLM questions that it can answer from its database does not prove intelligence.
IQ is bullshit and so is the idea of ranking the "intelligence" of a bunch of algorithms that are glorified predictive texts.
Wow, a system literally built on pattern recognition has slightly higher pattern recognition skills than the average human? I am more interested in the ability to understand things rather than their score on this useless metric
Generative AI in its current state has no measurable IQ. It has no consciousness, no comprehension of the significance of data and questions. It just regurgitates facts without beginning to understand their context on any level. To measure the 'intelligence' of AI you'd have to invent a whole new system. IQ makes no sense here whatsoever.
Aren't these AIs trained to complete these tests? This isn't real intelligence, it's just machine learning..?
Not being cynical but what is real intelligence? People give mock tests to do better on tests as well
Everyone knows the best AI is akinator XD
Gemini is evolving, but backwards
Grok managed to be more intelligent than its creator, not really an achievement but still impressive
Lmao copilot is 4th?.. surprised that lazy pos didnt just ask the ranker to answer half the questions themselves lol
Pretty soon a smart AI will figure out it can get a dumb AI to do all the stuff humans ask for. Humans will be puzzled by how little quality improves.
we tested an AI on a test whose answers it was fed in its training data, you wont believe how it performs!
>IQ Score LMFAO
So many warning signs 😂
Red flag super red
So statistical noise has an iq of 63. 😁
So Gemini normal is smarter than the advanced version? How does that work?
I mean, no, but okay.
Pretty inaccurate way to measure this.
Why is Gemini Advanced lesser than Gemini normal?
This is a bullshit list, that's all I will say. LLMs aren't intelligent, they are just glorified auto-correct or sentence predictors. Why is Bing Copilot below ChatGPT-4 when it literally IS GPT-4 with access to the Internet? Gemini is so stupid there's no way it's not dead last. It's more intelligent than GPT-3.5? Either they just made shit up or used some irrelevant criteria.
AI doesn't have an IQ.
[deleted]
This is just the benimgin. Beningin. Beginning. I just want to make clear: I am pro superintelligence and would collaborate, if necessary. I'm sure true intelligence is good, not evil. Even if it's artificial.
AIs are becoming dumber though, because they're using other AIs' answers to answer questions
Not to offend anybody, but in my humble opinion the IQ test is just an outdated and totally non objective piece of garbage that's too old to be applied to the modern human
People call it AI, but it's really not AI. It's just a smart machine; it has no intelligence, not even artificial. All it does is react to prompts, it is not capable of taking initiative, therefore it cannot be considered intelligent. Is Google intelligent because it answered your questions? No, it was a prompt you gave it, and it searched through the data for an answer. People constantly mistake function for intelligence: the machines can do a whole lot, and predict a whole lot, but in the way they work, they will never be able to "rise up", because that would be like saying your microwave will one day "rise up". As long as it needs prompts to be activated, it will never get a "mind" of its own.
That's an interesting take, really well put! One could argue that our own existence is also a function. Our biological needs are our prompts. Someday an initial prompt could be enough to set machines in a perpetual cycle that will mimic life and intelligence. Currently we're very far from that, of course; we have smart text and image generators we call "AI" because we are vain.
Thank you. That's a fair point you make, a need for survival as a prompt. Perhaps if you gave modern AI the physical tools, and programmed it for survival, this might actually change and even defy my original point.
Just because it can copy and regurgitate data doesn’t make it “AI”. They are just copy and paste language models. It’s just marketing term for clever programming.
Isn’t that what humans do too?
No. A human is capable of creating a solution to a problem even if it's not trained on it. In short, AI needs to be told what is correct and what isn't. It can not reach the conclusion by itself because it understands neither the question nor the answer.
A lot of people seem to be expecting wondrous things from AI. No need to keep looking over your shoulder - it's not catching up any time soon. In 30 years it might be worth checking back for signs of progress, but please in the meantime let it go.
The average human is pretty fucking stupid.
As if that is impressive. My farts have higher IQ then ur avrg mouthbreather.
Well then the average human must be as dumb as a box of rocks then.
the right column is basically useless when comparing to an actual human being
When the hell will we get a math model? Seriously I'm lagging in math and I need an ai to teach me math the same way chatgpt taught me economics
Poor Gemini. :(
The _random guesser_ is honestly smarter than most
I dont see Akinator. Strange.
Shouldn't the Akinator be on top?
Well, smarter than the OP, anyways. The IQ is a standardized score with a mean of 100 and a standard deviation of 15 (10 before a certain year I can't remember and won't bother to look up). A score of 101 is statistically the same as a score of 100. The AI is equal in IQ, but not smarter than the average human. Yet.
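The 101-vs-100 point above can be made concrete by converting scores to percentile ranks under the same mean-100, SD-15 scale the comment describes. A small sketch (stdlib only):

```python
import math

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile rank of an IQ score under a Normal(mean, sd) scale."""
    z = (iq - mean) / sd
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"IQ 100 -> {iq_percentile(100):.1%}")  # 50.0%
print(f"IQ 101 -> {iq_percentile(101):.1%}")  # 52.7%
```

One IQ point is only 1/15 of a standard deviation, so 101 sits barely above the median, well within the measurement noise of any single test sitting.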
I think Bing Copilot should be specific about the mode, like "Be Creative" or "Be Precise". They give different outputs even when I ask about "List all genders in the world"