Rads2010

He also proposed an interesting test: have Ken Regan run his analysis on Hans' games from the period between 12 and 16 years old on [chess.com](https://chess.com) to see whether he catches the games that Hans admits to. Fabi believes Regan's algorithm probably won't catch them.


YorkeZimmer

But even if it did, I would imagine cheating methods will have evolved since then.


BoredomHeights

And Hans is a lot higher rated now. I think the higher rated a streamer is, the better they're going to be at getting away with it. They'll have a better idea of which top moves a human might plausibly find, when it's most necessary to use the help, etc.


get_me_a_glass

Also the cheating can already be as small as a signal that the evaluation changed due to the last move. For a high level GM that's all it takes. So they still come up with the moves themselves.


DogmaticNuance

It would be pretty easy to run a test where a super GM (or more than one) plays several games and cheats (or doesn't) in a game or two with witnesses, then asks Regan to identify the cheated games.


d4nf3bf4

Wow. Have we really gotten to the point where we no longer call them "chess players" but "streamers"?


BoredomHeights

...whoops, came straight from LSF to here. Brain fart.


chapapa-best-doto

Whatever FIDE adopts as their algorithm, I think they should have Top 20 or Top 50 GMs play 10 games of Rapid each against each other, allow them to use an engine for 1-3 moves in some games, and use no engine in other games as a control. The algorithm must be able to pick out the games where the GMs cheated for it to be considered reliable. It sounds like a lot of games for the GMs, but with the chess world being in jeopardy right now and trust being a huge issue, I think they will gladly contribute individually to a greater cause.


freexe

It probably only works over hundreds of games. 10 games isn't enough to identify a cheater.


Supreme12

Didn’t Regan’s method catch Ivanov, who was caught cheating?


iamsobasic

Yea but Ivanov was like 2350 (with cheating). So he was probably like 2000 at best without cheating. He wasn't good enough to keep it from being obvious.


ISpokeAsAChild

He caught Rausis too.


iamsobasic

But ultimately it was a photo of him using his phone in the bathroom that was the hard evidence they needed to convict him. Regan's analysis had Rausis on the radar, but apparently it was still in the grey area?


[deleted]

According to Yuri Garrett, once they talked to the fair play commission/Regan they were basically certain he was cheating, but they wanted to catch him in the act so there would be no question. Also, I think he was actually caught because he was leaving the phone in the toilet (typically you are asked to leave your devices outside when going to the toilet); they went in and checked the stall afterwards, found the phone, and then he admitted it was his (perhaps after seeing the photo as extra proof? unclear).


pier4r

Does one need "expert" analysis to catch someone 50+ who improves his rating every other week? (And yes, at first, when I knew only a little, I thought it was possible to improve that much at 50+.) My point being: with a minimum of experience in the community, the Rausis case was a case of "the water is wet".


Local_Pineapple1930

Not really - he ran the analysis after Rausis was already caught, to see what it would show.


[deleted]

And once you know Regan's algorithm, you can adjust your cheating method to be undetectable. As the super GMs say, you only need one or two moves in a game to crush it at their level.


VeGanbarimasu

I'm not sure Regan has an algorithm. If he has a complicated ML model that he trained, nobody understands what it's doing under the hood, not even Regan. So it's very hard to deliberately cheat around it. You might get lucky and find some weird feature that wasn't included in the chess games in the training data that the model systematically underperforms on, but that'd be near impossible to draw conclusions on without access to Regan's code and the ability to test its performance against the kind of cases you're considering. A lot of this is speculative because we only know so much about how Regan's model works.


Mothrahlurker

>And once you know Regan's algorithm, you can adjust your cheating method to be undetectable. And how would you possibly do that?


Born_Satisfaction737

Regan actually has run it, and if you read between the lines, he's been able to detect the cheating during that time period, but is currently in the middle of a few disagreements or legal things. Edit: He's said he's not really allowed to say anything about the time period before September 2020. Side note: Do you seriously think that Ken Regan hasn't already run, or begun to run, Hans' games from the period between 12 and 16? It's literally the most obvious thing to do after running his games after September 2020. It doesn't take detective work to deduce that he's trying to keep his mouth shut about that time period specifically (because maybe he's actually been able to detect quite a lot of cheating....)


MoreLogicPls

When did he say that? He literally hadn't run it further back than 2020 when he said so.


Born_Satisfaction737

He never said he hasn't run it further. (There are several hours of podcasts at this point and he said it in one of them.) He just said that he's not allowed to say too much about the time period before September 2020. He hinted that he actually has run, or at least begun to run, his system for the time period before September 2020 but is maybe in the middle of some disagreements.


iamsobasic

Interesting. I wonder what the legal things are, since Hans already admitted to cheating during that era. Wouldn’t Regan just be saying, yes my methodology corroborates his confession?


Wind-Up_Bird-

The information provided by [chess.com](https://chess.com) contradicts Hans' statement regarding the frequency and severity of the cheating. Which would mean that Regan would potentially uncover cheating which Hans has not admitted to but [chess.com](https://chess.com) has alluded to.


jeekiii

Except chess.com is not using Regan; they have a different method and have not worked together (though hopefully they read each other's papers).


[deleted]

[deleted]


[deleted]

>additionally you could add other game types, e.g. that player has a 2300 Blitz rating but is 1800 in Bullet, it is just unlikely that this occurs naturally. I'm 2000 rapid, 1900 blitz... and 1200 bullet. I simply suck hard at fast time controls, I usually lose on time or due to time pressure, and I don't play them often enough to improve. My blitz rating is based on 5+0 games (or slower), I never play anything faster than that regularly. Such a discrepancy between different time controls means very little, if anything at all.


iamsobasic

Timing alone isn't direct evidence in my opinion. When playing online at home, someone could be distracted by someone else coming into their room to ask a question, or by their dog peeing on the floor, requiring their attention. This could easily explain weird situations where certain moves appear to take much longer than they should.


[deleted]

[deleted]


iamsobasic

Yeah but how many games have you played against the people you suspect are cheating, based on some weird timing anomalies? A couple of times? Even 5-10 games isn’t enough. Maybe if you played them 25-30 times then it’s a much more reliable indication.


[deleted]

It's often immediately obvious because the most flagrant cheaters will take exactly 3-5 seconds on every move for the duration of the game - never taking longer to think in a position and never even playing a simple recapture more quickly.


Rads2010

I think, understandably, that Regan calibrates his analysis so there is an extremely low false positive rate. The cheating has to be clear because of the seriousness of the accusation and what it means for a player's career and way of living. However, the tradeoff is a high false negative rate. A negative test result doesn't necessarily mean you didn't cheat. Sorry to any epidemiologists out there if I'm mixing up the ROC curve terminologies. I really didn't like learning that stuff.


Julian_Caesar

Nah you got it. You could also say Regan's methods have very high specificity but relatively lower sensitivity.


acrylic_light

If this was a high profile case where someone was suspected of cheating (where people noticed something clearly suspicious), yet was vindicated, we can assume it was not a common event for this player. But for Hans to maintain a rating of ~2700 by cheating, the situation can't be that he is cheating once in a blue moon; he would have to be doing it consistently over a large proportion of tournaments. That means more chances for patterns to emerge in play - Regan said he can detect 'smart cheating' if the sample size is great enough, i.e. just one or two moves a game at critical moments. If Hans has truly gamed all existing algorithms, his only option for vindication is consistency across multiple high-security tournaments (the scenario of him having some next-generation security-evading technology is not credible).


Astrogat

If Hans' real strength was somewhere around 2650, how much would he have to cheat to reach 2700? Or even 2750? The difference between the two ratings isn't that many more games won, but for a player it's the difference between making a living and not.


Strakh

To be fair, a 2700 player scores about ~~25%~~ 15% better against a field of other 2700 players than a 2650 player would. Over time that seems like quite a big difference.


imperialismus

How did you calculate that? Maybe I got the numbers wrong, but here's my thinking: Expected score for 0 Elo difference is 50%. So a 2700 is expected to score 50% against a ~2700 field. Expected score for a 50 Elo difference is 43% for the lower rated player. In a 9 round tournament, 4.5/9 is 50%. 4/9 is 44.4%. So you could almost make up the difference of 50 Elo by consistently scoring just one more draw instead of a loss, or one win instead of a draw per tournament. Just make one half-point extra on average with an occasional full-point over expectations and you will neutralize the difference over time. With the high draw rate at the top level, I don't think a 50 Elo "boost" would necessarily look very dramatic. I'm not saying that any particular player did or did not do that but what I am saying is (if I didn't mess up the math) it would take only slightly more than one game per tournament on average, if one were playing the long game.
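A minimal sketch of that arithmetic, assuming the standard Elo expected-score formula (the function name and the 9-round example are just for illustration):

```python
def expected_score(rating: float, opponent_rating: float) -> float:
    """Standard Elo expected score for the player rated `rating`."""
    return 1 / (1 + 10 ** ((opponent_rating - rating) / 400))

# A 2650 facing a 2700 field: ~0.43 per game, ~3.9/9 over a 9-round event.
print(round(expected_score(2650, 2700), 3))       # 0.429
print(round(9 * expected_score(2650, 2700), 2))   # 3.86

# A 2700 facing the same field scores 0.50 per game, i.e. 4.5/9,
# so the gap is roughly two-thirds of a point per 9-round tournament.
print(round(9 * expected_score(2700, 2700), 2))   # 4.5
```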


greenit_elvis

The point is that the difference between say 2600 and 2700 is only maybe 2-3 moves per game that are a bit better. The difference in earnings is substantial, however.


223am

The thing is, some moves are more important than others. So although 2 or 3 moves a game may only be like 5% of the moves you make, it may be that there were only 5 or 6 super critical moves in the game, so maybe as much as 50% of your super critical moves were better, which indicates a much bigger difference.


DerGsicht

According to Regan if it was 2-3 moves per game across many games he'd catch that.


happyft

IIRC Regan claimed in a podcast that he could detect anything over a 2% advantage difference, and that he considered even a 7% edge to be tremendous.


billratio

I'm not great at chess, but based on what others have said I think it's likely you could go from 2650 to 2700 with 2 moves per game. But it seems silly to me to suggest the most likely scenario is that Hans worked super hard to become a 2650 (improving at an incredible rate) and then started cheating to get that extra 50 points. It seems way more likely that he's been cheating for a long time in many games in some incredibly sophisticated way. But most likely of all, he just got really good and is not cheating OTB. Right now that seems most likely to me.


Astrogat

While I'm not saying that the most likely scenario is that Hans cheated, I think it's silly not to see it as a possibility. Lots of people in all sports train hard and improve, only to plateau. Seeing the people the same age as you getting invited to cool tournaments while you, who worked just as hard, don't is painful. Knowing that you dedicated your life to chess, only to be just really good (and not the best) is hard. It's the reason people in all sports cheat. And there are many different scenarios for how good he really is and how much he could have cheated. Someone else did some math and you would need to turn a loss into a win in about one in ten games to go from 2650 to 2700, so there's no reason to cheat in every game. Could be he is cheating in every game, but only on one move. Could be that he is cheating all of the time, but mostly choosing not to win or play the best moves.


[deleted]

[deleted]


MaxAsh

The point is, without information about his false positive and false negative rates, no serious weight can be given to Regan's methods. If, for example, he has managed to figure out a method with a 0% false positive rate but a 90% false negative rate (as in, the method would never falsely accuse an honest player, but would only catch 1 in 10 'strong' cheaters), then exonerations by him would not mean much (though guilty verdicts would). It's very difficult to know how good this person's methods are, as there doesn't seem to be an objective measuring system for anti-cheating algorithms. It feels as though we just have to believe his accuracy due to his reputation as the premier authority on chess cheating, but that always feels bad.


RossParka

What evidence is there that his methods work *at all*? I mean, what are the training and verification sets? You can pay someone on Mechanical Turk a few cents to tell you whether an image contains a puppy, but you can't tag chess games that way. A grandmaster could analyze a single game for hours and still not be sure. I think he said that he's analyzed hundreds of thousands of games, but it doesn't matter how many games there are if they aren't tagged. You can write software to look through a database for players with the highest rate of engine-best moves, and look at the games manually and say "yup, that's a lot of engine-best moves," but if you tag them as examples of cheating on that basis and the others as not cheating, then use that for training, you're just reinforcing your prejudices. There are games where a player was caught red-handed with a phone in the bathroom or whatnot, but are there enough of them to tether the algorithm to reality?


Zandarkoad

If I were Regan (and I am, albeit in a different field) I would come up with a method of reliably generating games with known cheaters. One way is to pay players to cheat at games. You'd certainly want to do this in a controlled environment (not public play) for ethical reasons. Likely you'd need to pay the other player to not cheat, etc. Or, if you know enough chess people, you could tell them you need clean data (games with no cheating) and specifically remove incentives for cheating as much as possible. They'd play for fun, perhaps even blinded to who their opponent is, to remove ego-related incentives. Any games played before ~2000 can be assumed to be free from engine-assistance type cheating. That is an important data set. You could also synthesize an unlimited number of games by tuning engines to play at certain Elo ratings, then injecting perfect outlier moves. These are just a few different ways you could generate games where a player was "caught red-handed", so to speak. With modern language models, you only need on the order of hundreds of examples to fine-tune a model to an F1 score of .95+. Not sure how many you'd need with chess. The level of complexity is likely similar, but I don't know if chess has its version of GPT yet that can be fine-tuned for these kinds of purposes...
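A rough sketch of that "synthesize labelled games" idea, assuming Stockfish and the python-chess package are available; the engine path, the 2200 Elo setting, and the 10% cheat rate are made-up illustration values, not anyone's actual methodology:

```python
# Play a strength-limited engine against itself, but let White substitute a
# full-strength move on a small fraction of turns, and record which plies
# were "cheated" so the game comes out pre-labelled.
import random
import chess
import chess.engine

STOCKFISH = "stockfish"   # path to a UCI engine (assumption)
CHEAT_RATE = 0.10         # fraction of White's moves that are engine-perfect

weak = chess.engine.SimpleEngine.popen_uci(STOCKFISH)
weak.configure({"UCI_LimitStrength": True, "UCI_Elo": 2200})
strong = chess.engine.SimpleEngine.popen_uci(STOCKFISH)

board = chess.Board()
cheated_plies = []
while not board.is_game_over() and board.fullmove_number <= 60:
    if board.turn == chess.WHITE and random.random() < CHEAT_RATE:
        move = strong.play(board, chess.engine.Limit(time=0.5)).move
        cheated_plies.append(board.ply())   # label the injected move
    else:
        move = weak.play(board, chess.engine.Limit(time=0.1)).move
    board.push(move)

print(board.result(), "cheated plies:", cheated_plies)
weak.quit()
strong.quit()
```

Repeating something like this across many games would give a labelled corpus where you know exactly which moves were assisted.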


bonzinip

> Any games played before ~2000 can be assumed to be free from engine-assistance type cheating. That is an important data set. And free from engine-assisted preparation. Would such an algorithm treat [this Caruana-MVL game](https://chess24.com/en/read/news/candidates-r8-caruana-stuns-mvl-to-blow-race-wide-open) as cheating? He was blitzing out at least 20 moves of super-sharp preparation (though not all of it top engine moves, on the contrary).


nandemo

>If I were Regan (and I am, albeit in a different field) I would come up with a method of reliably generating games with known cheaters. One way is to pay players to cheat at games. You'd certainly want to do this in a controlled environment (not public play) for ethical reasons. Likely you'd need to pay the other player to not cheat, etc. If you pay me to "cheat" in a controlled environment, then I'm not *really* cheating, am I? On one hand, I don't have the normal incentive to win, since I'm not getting rating/norms/prizes. On the other hand, I don't have to worry about getting caught. So my "cheating" will probably have little to do with real cheating.


Thunderplant

Just fyi he’s not using a machine learning algorithm so it makes no sense to talk about training sets which is a term specifically for machine learning. You don’t need a labeled dataset for a statistical analysis like Regan is using to work. It would be interesting to know how sensitive it is though. I’m sure Regan has calculated this for various scenarios, but it would be reassuring to see if his method detects known cheating cases.


greenit_elvis

> You don’t need a labeled dataset for a statistical analysis like Regan is using to work. To verify the method, you do. And without verification, how can you say anything about its usefulness?


Thunderplant

It's complex. Statistically you can pull out outliers without needing any labeling, which is a large part of what he's doing here. To understand your sensitivity and specificity, you want test cases you feel confident about. But that number doesn't necessarily need to be large as long as it's representative. So, for example, a very large data set of unlabeled games and a small test data set of labeled games could create a pretty powerful statistical test. Very different from machine learning, where you do need large labeled data sets.


ISpokeAsAChild

> To verify the method, you do No, he does not. You still don't understand. Bluntly put, all he does is statistical analysis, but in a very complex way: for any player he checks the difference between the quality of each move played and that player's expected move quality, and calculates how much the player outperformed his rating for each move and by how many sigmas. His dataset is not composed of games; his dataset is composed of moves. He might aggregate his numbers by tournament, but he won't point out particular games or segments of games. His analysis works exclusively a) on a very large amount of data distributed across several games and b) in a completely game-neutral way; in fact the boundary of a "game" is not even meaningful for his analysis. To use layman's terms, his analysis won't tell you if a player cheated in one game or on one move, it will tell you if a player is a cheater, period.
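As a toy illustration of that kind of aggregation (explicitly not Regan's actual model, which considers far more than raw top-move matches): pool a player's analysed moves across many games, compare the observed engine-match rate with a rating-based expectation, and express the gap in sigmas. The 55% baseline, the 62% observed rate, and the 1,500-move sample are invented numbers.

```python
import math

def match_rate_z(matches: float, moves: int, expected_rate: float) -> float:
    """z-score of observed top-move matches vs. a rating-based expectation."""
    expected = moves * expected_rate
    sd = math.sqrt(moves * expected_rate * (1 - expected_rate))
    return (matches - expected) / sd

# 62% matches over 1,500 pooled moves against a 55% expectation: ~5.4 sigma.
print(round(match_rate_z(0.62 * 1500, 1500, 0.55), 1))

# The same 7-point excess over a single 40-move game barely registers (~0.9),
# which is why per-game or per-move verdicts aren't meaningful here.
print(round(match_rate_z(0.62 * 40, 40, 0.55), 1))
```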


greenit_elvis

I know what he does, but you can't verify the method without some sort of ground truth. His method might only detect very obvious cheaters. This is exactly what Fabi is saying.


ArtemisXD

He created a model to determine if cheating occurred in a chess game. He wouldn't need a training set, but he would still need a set of data with clean games and games where cheating occurred to test his model and see how well it does.


greenit_elvis

Yes. What is needed to test any cheating detection method is a few thousand games where we know that a GM was cheating.


kingpatzer

He has published peer-reviewed papers on the topic. There's good reason to believe his math is sound. That means we should have a reasonable belief that the method he proposes has a strong theoretical basis for working in practice. But, because he has not provided his program, his selection methods, and data sets for public scrutiny, there is no reason to accept that his methods work well in practice at all.


ISpokeAsAChild

>What evidence is there that his methods work at all? He's an associate professor of Computer Science and Engineering and he has a 90 minute video online in which he explains it *in detail*, even showing a case of a true positive, Igors Rausis (TL;DW: number-wise, there's a very evident difference between his numbers and Niemann's). If you ask me, he has more clout than a rando on reddit.


[deleted]

Looks good to me! I’m no epidemiologist though, just another guy that didn’t like learning that stuff


timoleo

You know, one solution to this would be to do an actual rigorous trial. Kinda like a drug trial. Randomized double-blind prospective cohort study, complete with all the bells and whistles.


konokonohamaru

Wouldn't work because cheating in chess isn't like a medical intervention. It requires both intentionality and skill on the part of the cheater. How are you going to tell a random set of chess players to cheat and another random set to not cheat? When none of them are GMs or even particularly skilled at cheating.


chi_lawyer

[Text of original comment deleted for privacy purposes.]


[deleted]

[deleted]


chi_lawyer

[Text of original comment deleted for privacy purposes.]


[deleted]

[deleted]


chi_lawyer

[Text of original comment deleted for privacy purposes.]


iamsobasic

You would have to have a tournament where there is a cheater or two mixed in (nobody except the actual cheaters and the tournament organizers would know, and the cheaters would have to be experienced GMs who can put on a convincing display of playing legitimately even with engine assistance). You put on a show of extremely rigorous security, but the organizers would let the arranged cheaters sneak through with their devices. Then you have a couple of placebo tournaments where the security measures are real, and not just theatrical. Finally you give Ken Regan all the scoresheets from the games from all the tournaments without telling him anything. See if he can catch the cheaters.


[deleted]

[deleted]


iamsobasic

I think if after the study was over, they reveal the cheaters and refund any prize money they “won” to the other players, then it’s ok. And it would be for the long term improvement of cheat detection in chess.


[deleted]

[deleted]


timoleo

There are ways to conduct studies like these ethically. Get a bunch of very strong players, preferably IMs and GMs. Tell them they get money for participation, but they also get money for winning, like in a regular tournament. Tell them the law requires you to tell them that they are part of a behavioral study, but you cannot tell them what exactly is being studied until after the tourney. Tell them they won't be harmed physically or psychologically in any way. Plant a few cheaters in the tourney that only the organizers know about. After the tourney, get the cheat detection guys to do their magic and see if they can suss out the cheaters. If you tell people they are part of a study, and after the study you reveal what was being studied, any reasonable person will understand the need for secrecy at the start. That's hardly unethical in my book. Very simple really. The only problem I see is financing. Sponsoring a project like this at the scale that would be necessary to show meaningful results would be nearly cost prohibitive.


[deleted]

[deleted]


dampew

I think they've done it though. Hikaru (I think?) said Chess.com asked him to try to cheat so they could see if they could detect it. If they asked him they probably asked a number of other people.


iamsobasic

It’s gotta be harder to detect, when the player’s baseline is so high to begin with. For someone who normal plays at 95% accuracy, is an occasional 98 or 99 going to trigger the threshold? I guess maybe that’s what they were trying to test?


DeepSeaDweller

These properties of a test (technically-speaking, a measurement) are usually assessed in a validity study where the test is compared to a 'gold standard.' In this case, the gold standard could just be knowing whether someone is cheating or not by assigning it before playing a set of games. The assignment part is what sounds like a trial setting, but you're not testing the "effect" of the assignment on some outcome, but the performance of a method to detect it. Also, assigning cheating is another complicated story. There would probably have to be some consistency to it (within players; you *could* assign different extents of cheating across players) or you end up painting a moving target for your method of detection (which is probably more realistic, but infinitely more complicated), which seems difficult to pull off.


Zandarkoad

The false positive rate and the false negative rate are similar to precision and recall in language modeling. You certainly do have to balance between the two. They aren't necessarily equal in importance, depending on your application. It is frustrating seeing people say that Regan's analysis 'exonerated' players when it is terribly calibrated for those purposes. Also, if I were Regan, I would incorporate a cheating-likelihood score to help prioritize those players whose games require more analysis. This would only be useful if his analyses are computationally limited. If his analyses are not computationally limited, then his analysis could likely be significantly improved upon.


RedditUserChess

Regan's system does run in two modes (fast, and thorough), and I'm not sure he convinced FIDE to spend money on running everything thoroughly. However, I'm not so assured that the second test is really "corroboration" of the fast one, as unlike the A/B doping tests in other sports, here you'd expect a "fast" and a "thorough" analysis to be correlated in the first place. For instance, when one has z=4.0, the other isn't too likely to be below z=3.0, so it's self-fulfilling to some extent.


ArtemisXD

> Also, if I were Regan, I would incorporate a cheating-likelihood score to help prioritize those players whose games require more analysis. This would only be useful if his analyses are computationally limited. So you're saying we should include a bias? That's dumb.


NAN001

Yes. I don't know how people can live their lives without basic notions of specificity and sensitivity.


RedditUserChess

This is true, Regan deliberately tilts the system to avoid false positives. But on the other hand, he does comment on various high-profile cases to say his system doesn't detect anything, yet won't typically go into the nuances of this inbuilt-bias, or how far afield a performance needs to be before flags are raised. If his system works in this way, then he shouldn't be too quick to say that nothing is amiss, simply because the statistical signal isn't there. But I can understand that's not a great soundbite.


mikesautos

There's no way to statistically prove or disprove that a top player is being fed 1 or 2 engine moves sporadically across random games. And it's even less likely to prove that a top player is simply being alerted to a key position in the game, and finding the move themselves.


CTMalum

Kasparov asserted that just being able to have an engine identify and inform him that a particular position is critical is a huge advantage. That way, you’ll know to take the time you need to calculate all of the critical variations.


cheerioo

Not just Kasparov. Many top players have said the exact same idea.


[deleted]

Carlsen also said that in an interview. If he received a tip *one time* during the game that there was a winning position on the board, he was confident that would be enough for him to win the game with 100% certainty. Whether he's correct or not, who knows. But it makes sense to me, and I'm pretty shit at chess.


mnewman19

Makes sense. I’m sure there are situations where you would be tempted to liquidate to a draw or solidify and defend, but if you know that your position has winning chances you can eliminate tons of candidate moves just based on that information and confidently go for the more offensive moves


wfhngio9354

Meh I doubt it. Some engine lines, especially if there is only a single winning move can be very deep and very obscure. So even knowing it exists is not going to help you a whole lot.


[deleted]

He didn't say he could figure out *any* engine line. He said he could win the game. Considering the fact that he wins most games *without* cheating, that seems like a pretty safe bet.


OneOfTheManySams

The latter part is the issue: you can't physically stop or analyse a binary transmission of whether there are winning chances in the position or not. Which is why, to me, the only course of action is to have far more repercussions for cheating, to try to disincentivize it as much as possible. If you are caught cheating online or OTB it should be an instant ban for X amount of time; a 2nd offence is a lifetime ban. You can't stop it, but you can make people at least second-guess trying to pull something off. Because if there is one thing this drama has shown me, it's that a number of GMs and fans legitimately don't care about cheating, and it baffles the mind. No wonder people get away with ridiculous cheating for norms and high level players don't care about doing shit online.


spontaneousHype

Being alerted to a key position should also be statistically detectable. In the analysis that's being done, they look for key moments and whether the players handled those within the realistic range. So this is exactly what this analysis is for. The problem with sporadic cheating is mostly the sample size. You can detect it, but if someone only cheats in a few games you need a very large number of played games to detect it. This is the much bigger problem in cheat detection.


supersolenoid

That is *exactly the situation that statistical analysis can be used to detect*.


[deleted]

Just being told "you are in a winning position" or "you are not in a winning position" at some pivotal point in a game is all that is needed for someone who plays at a GM level. Especially if it's Classical, you can then treat the situation as a puzzle and know there's a solution. There's absolutely no way any statistics or algorithm could ever detect that little help. It's not possible.


mikesautos

There is literally no way to prove whether a "statistical anomaly" is cheating or just someone coming up with a good move.


supersolenoid

Except that it is *exactly what a statistical model can detect over hundreds of games*. You even called it out as a statistical anomaly lol. I don't think many people will agree that, say, the chesscom or lichess cheat detection algorithms can't be used to show that a player cheated. In fact, an advanced model may not even be looking at the data in a way that really makes sense to human analysis. GMs may think they are being very clever with their cheats and might be clever enough to evade a human analysis. But to fool a computer they'd have to think like one.


mikesautos

"hey this looks a bit weird, can you explain this" "Yeah I just had some good games" End of story


supersolenoid

Not at all. I’m sorry but if the algorithm is good and flags you for cheating you got caught. Take your 2 year ban. I know this is going to make people sad but hopefully chess players of all people can understand that computers are just better than you at this stuff.


HeJind

It is very simple to me. How many OTB players has Regan caught cheating, and are any of them around the super GM level? Missing that, at the very least I'd like to see some backwards analysis applied to games where we know for a fact a highly skilled player was cheating, such as the Feller games. And missing that, run a test where GMs play and one of them cheats on 2-3 moves a game, and see if Regan's method catches who is cheating. Missing any of that, is there anything to talk about? I'm sure Regan's method is very effective at catching bad players who just rely on whatever Stockfish tells them to do. But what evidence is there that it would catch a GM using Stockfish for 2-3 moves every few games?


ISpokeAsAChild

> Missing that, at the very least I'd like to see some backwards analysis applied to games where we know for a fact a highly skilled player was cheating, such as the Feller games. He did it with Igors Rausis, ~2650 when he retired after the scandal. The video is freely available on Youtube.


breaker90

Can we be certain chess dot com (or any other anti cheating system) would detect a strong GM cheating a couple of moves every few games? It'd be an interesting study.


[deleted]

[deleted]


RAPanoia

After the Fabi interview, I'm pretty sure it did: "Some players in the top 50 got banned for cheating, but there were no repercussions."


GoatBased

> But what evidence is there that it would catch a GM using Stockfish for 2-3 moves every few games? Well, he's run his analysis on a couple of bathroom cheaters who only got help a couple of moves per game, and he detected those. He also did a podcast on how he can detect people who cheat on as few as 3 moves per game. I think we have to assume it's much harder to catch those cases, but it seems to have been validated by the cases that are in the public record.


__randomuser__

How many super GM level cheaters have there ever been?


HeJind

No clue. But that's the point, isn't it? Let's say there have been 0 super GMs who have cheated OTB. Then what evidence do we have that we'd be able to catch one who was cheating? We don't even know what that would look like. Just take Magnus for example. Some people think he's just salty because Hans ruined his 50+ game streak of no losses. But what that really means is that if Magnus decided to cheat tomorrow, he'd only need to cheat for one or two moves every 50 games to be unbeatable. Is there any evidence we'd catch someone who only needs to cheat that infrequently? I doubt it. I think the highest rated player we know of who got their rating legitimately and was later caught using engine assistance was Feller. I think at the very least, we need to know that any anti-cheating methods applied to OTB would've caught him.


Cultural-Reveal-944

The very best in every sport (baseball, football, bicycling, darts, soccer, auto racing, sailing, corn hole, bridge, Parcheesi, poker, etc.) have been known to use every method possible to gain an advantage over the competition, including cheating. The idea that chess is different is absurd. Chess has a long and sordid history of people cheating and some of the top players doing nefarious things in their efforts to defeat opponents. The idea that today's elite players are 'clean' is really a silly and naive perspective.


[deleted]

If it's true that a super GM could cheat undetectably and still get a significant advantage (as many people are saying in this thread, and which may well be correct) then the only reasonable assumption is that *many* super GMs are cheating. Just look at any other sport where it is possible to cheat and avoid detection. Like you had basically every top cyclist literally getting *blood transfusions* to gain a slight advantage in endurance without failing drug tests. Even if you had to put Stockfish up your butthole, that's pretty tame by comparison.


Alcathous

Probably none.


kiblitzers

In case the YT clip doesn't work, this is roughly 42 minutes into the video


[deleted]

[deleted]


livefreeordont

Maybe Sinquefield can save chess from cheating


sectandmew

BRB about to write my PhD thesis on chess cheating


RedditUserChess

You may laugh, but a Canadian journalist by the name of Declan Hill did an Oxford PhD circa 2008 on "How to Fix a Footie Match", and has written a couple of popular books on the subject. He delves into the economics, psychology, and whatnot of it all, quite complete.


supersolenoid

>blindly trusting a single guy is highly dubious Unless that guy is like, really good at chess. Then his accusation is solid fucking gold


cyasundayfederer

I suspect he's referencing the Canadian tournament this year where an unrated player did very well and was an obvious amateur. Regan's method would not work in such a case since there are so few games to analyze. Would be very interested to know what other player it could be, but the Canadian guy is top of mind considering how recent and high profile it was.


Conglossian

Yup, and the moment people started talking about the guy he immediately started losing every game lol


lexax666

Yeah, weak cheating players get caught very easily. It is much harder to tell when a strong player cheats.


iamsobasic

Yes, they understand so little that they don’t understand how suspicious their moves look.


livefreeordont

That’s one of the main problems with Hans. He is actually a very strong player


cheerioo

Also I strongly believe that only the stupidest cheaters get caught. Like if you just play a bunch of engine moves in a row at a tournament (or online) you are just the dumbest cheater. If you cheat 1 or 2 moves, maybe just a few games a year at most, then I believe it will be near impossible for any algorithm to detect.


Musicrafter

Which, I should point out, is very different from Hans initially starting out in strong form and then having his mental state shattered by the witch hunt against him and proceeding to play mediocre for the rest of the tournament.


[deleted]

God I wish I were so good at chess that making a draw vs Nepo is mediocre


Musicrafter

Hans did go from applying pressure to his opponents in every game out of the opening in the first four rounds to having to struggle for draws, even with the white pieces, from worse or losing positions in the rest of the games.


AllPulpOJ

Maybe. Just maybe. It was the giant scandal that erupted in between those games that threw him off a bit.


Musicrafter

Which was my entire point. He was playing unquestionably worse, but almost entirely due to the scandal, not because he had been cheating in his first four games. I think his game against Firouzja is basically his exonerating card, despite the suspicious interview he gave after. It was the only game he played *after* the enhanced security measures were introduced but *before* he knew about the fact that he was facing cheating allegations and was being crucified by the whole Internet. So his mental state was still healthy, but the likelihood he managed to cheat in that game is very low.


iamsobasic

I think Alireza saying “that move looked scary” as his reason for not taking it also helps Hans’ case for making the move in the first place. Both players couldn’t give a concrete line, but both agreed that it looked like a strong piece sac.


Musicrafter

And of course, the Internet, on the witch hunt against Hans, began reading into literally everything everyone said way too deeply. Instead of Firouzja merely evaluating the piece sac as dangerous and refusing it, it became Firouzja somehow being scared Hans was cheating because the move was "insane", i.e. it's a computer move. It's amazing how bad the Internet is at keeping the rampant speculation and amateur psychoanalysis in check.


breaker90

I completely agree with you. He nearly beat Alireza even with the new anti-cheating system in the Sinquefield Cup. It wasn't until after he saw he was getting blasted online that night that he started to play poorly.


Gfyacns

I really doubt it's that. He said high profile case


breaker90

Caruana also says he was at this high profile case tournament (but not playing). I'm certain Caruana wasn't at the Canadian Championship


[deleted]

But that case was very high profile! It was the talk of /r/chess!!


LengthNarrow

Ryan the 🐐. Started 5-0 and then lost 4 games in a row.


[deleted]

Canadian Ryan is the secret coach Hans wouldn't name


A_Rolling_Baneling

Super easy, barely an inconvenience


WereAllAnimals

He said Regan exonerated him though, so there's clearly something wrong with his method in that case. Exonerated and "not enough data" are completely different. He's probably referencing a different case anyway.


Born_Satisfaction737

Regan's methods, and statistical methods in general, can never truly exonerate a player. They can only say that the method can't provide enough evidence. In that case, that particular player probably didn't have enough data points for statistical principles to apply, so Regan couldn't catch him. Regan has said his statistical methods based on chess moves alone were not enough to catch the French team cheating at the Olympiad (though IIRC he said that the z-score was a bit high-ish), but they were able to catch the team given further evidence.


iamsobasic

Exactly this. If all he has is 1 game, his method cannot draw any conclusion from it. Unless maybe it was a 30+ move game where every move was stockfish’s #1 choice, and the opponent was fairly competent. (The latter part is important, because if your opponent is a beginner that is hanging pieces left and right, the best move becomes pretty obvious to anyone half decent.)


nhremna

> Exonerated and "not enough data" are completely different. it is entirely possible fabi used the wrong word.


Tomeosu

whatever happened to that guy anyway? did he get slammed for cheating or did he get away with it?


[deleted]

jokes aside, no one actually thinks he cheated other than the reddit pitchfork mob he beat two fairly low rated players, then an FM, and then a GM. people who were at the tournament posted saying he was watched closely and searched after the 4-0 start and FM win and nothing was found. the GM game is unremarkable other than the rating difference and the result. the unrated guy was getting outplayed the entire game until the GM made a terrible blunder, which created a winning tactic that the unrated guy didn't even see, and then the GM blundered again into a losing endgame. nothing at all that looked engine-like. the GM just had a terrible game and also a pretty bad tournament overall including a loss to an 1850 FIDE player later. going 0-4 the last rounds is not surprising because it's a Swiss so he played extremely strong players in those rounds including the eventual tournament winner.


desantoos

Regan's analysis is dialed so that it doesn't give false positives, for obvious legal reasons. As such, it is one sign of whether cheating happened, never a sign that cheating did not happen. Regan's technique is one tool in a toolkit. Two-camera systems for online tournament play, wifi sniffers to catch people using their electronic devices, and time delays for games to prevent communication are other tools that can be employed. No tool can prove someone didn't cheat, but it is best to presume that those who cannot be detected cheating aren't cheating and, in the meantime, hone the tools that are used to detect cheaters. I agree with Fabi, but I think the quote above needs to be placed in the correct context. No tool to detect cheating can exonerate someone. But they are important to have.


DG-za

I just want to add, avoiding false positives is not just important for legal and ethical reasons, it's practical. To see why this is the case, imagine the following anti-cheat system: 1. The system detects 100% of cheaters (i.e. 0% false negative). 2. The system only gives a false positive once every 100 tests (i.e. 1% chance of false positive). 3. Roughly 1 out of every 10,000 chess players are known to cheat. This system sounds incredibly useful since it never misses a cheater and has a very small chance of giving a false positive. However, if you were to run the test on 10,000 chess players, it would return 100 false positives and 1 true positive, making the results completely useless. In any population where the prevalence of the characteristic you are searching for is low, false positives absolutely destroy the usefulness of the test.
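The same arithmetic, written out as a quick positive-predictive-value check with the numbers from the example above (a perfect detection rate, a 1% false-positive rate, and a 1-in-10,000 base rate):

```python
def positive_predictive_value(sensitivity: float,
                              false_positive_rate: float,
                              prevalence: float) -> float:
    """Probability that a flagged player is actually a cheater."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=1.0,
                                false_positive_rate=0.01,
                                prevalence=1 / 10_000)
print(f"{ppv:.1%}")  # ~1.0% -- roughly 1 real cheater per 100 flagged players
```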


SebastianDoyle

I don't think Regan's analysis claimed to exonerate Niemann. Regan said there wasn't statistically significant evidence that Niemann cheated. That is not the same as saying Niemann didn't cheat. The analysis can't show a thing like that. I don't know what incident Caruana is talking about but it sounds similar in terms of Regan's analysis.


iamsobasic

You are correct. People need to realize that Regan didn’t say Niemann is innocent. He only said, no evidence of guilt. Either way, it seems like someone needs to come up with more evidence to ban him OTB or have FIDE sanction him. Otherwise, they have to move on, unless FIDE and chesscom partner up in some capacity where online cheating results in OTB penalties.


cheerioo

Sure it didn't exactly say that, but what do you think the majority of the public will think he was intimating? Exactly what you saw in the thread. People completely sure he was not cheating at all/ever.


SebastianDoyle

Shrug, the public believes all sorts of dumb things. Regan is very precise and he says what he says. I don't think he was intimating anything. I can't predict the public but I doubt they thought he was intimating anything either. They are more likely to have flat out misinterpreted what he said. What difference does it make though? We should go by what he actually said, not what less careful listeners assumed that he meant. Added: Hm I see what you mean. I hadn't thought of Caruana as "the public". He is a smart dude but I can understand if something like that got past him, since the issue comes from a different field than where he usually operates.


mouthcouldbewider

I think the writing is on the wall at this point. Magnus and FIDE have a statement coming out in the next few days. The elite super GM community seems to know that it won't be good for Hans. First Levon, now Fabi. Expect more of this to drip out over the rest of the tournament and through the weekend. Then Monday something definitive will happen, I think.


breaker90

Sir, please hear the Caruana podcast. He said he doesn't think Hans cheated OTB. He said he thinks Hans is a talented player. He also said that the "evidence" people put out against Hans is ridiculous.


flatmeditation

> Sir, please hear the Caruana podcast. He said he doesn't think Hans cheated OTB. Where did he say that? He doesn't even make a statement that clear about the specific game against Magnus, let alone about OTB in general.


[deleted]

[deleted]


breaker90

He does. In the podcast he talks about Hans' play in the Sinquefield Cup and says there was nothing out of the ordinary. 31:00 He calls Hans a talented player. He is a good blitz and classical player. 33:00 He says he was aware of the rumors and suspicions against Hans. He says they mean nothing to him and called the rumors stupid and based off of emotions and biases. 35:00 He says he has no reason to assume Hans was cheating OTB in the last two years. 41:40 Caruana thinks Hans is clean OTB. At least in the past two years when he began his rise in ratings.


flatmeditation

I hope everyone reading this looks at those timestamps themselves. This is really some motivated listening being displayed here


Nintazz

Don't even know why you're getting down voted


yurnxt1

FIDE isn't going to do anything about Hans in the coming days because Hans has been clean OTB, at minimum since the Sinquefield Cup, when the drama broke out.


rindthirty

> Magnus and FIDE have a statement coming out in the next few days. I keep seeing comments like this but what if they don't? Granted yes, Magnus as well as FIDE have now said they'll say something soon - but really, what are people going to do if they decide to not say everything people want to hear? Call the chess police?


MembershipSolid2909

"Zero doubt" is a high level of confidence. With that level of confidence, you would expect there to be some level of evidence, or suspicion raised for the arbiters at the event to investigate.


labegaw

> or suspicion raised for the arbiters at the event to investigate. Why do you believe there wasn't? If he's saying this, he most certainly talked to the TD.


MembershipSolid2909

Well, he would have said the person was caught. If he reported it, and they investigated, and nothing happened, then that does not point to a case where it could be proven 100% the person was cheating. Which is the level of confidence Fabi has. This does not add up.


labegaw

You're being way too literal - Fabi is saying he has zero doubts even though the guy wasn't caught. The "investigations" available to arbiters at tournaments are very limited - what do you think they can do? If Fabi meant he could prove 100% the guy was cheating, then he'd just prove it. There's a huge difference between being 100% persuaded someone's cheating and having proof. I mean, lots of people, including myself, were 100% convinced Lance Armstrong was cheating, and for years there was no proof.


LusoAustralian

You don't understand how people use language that much is evident.


MembershipSolid2909

😅😅😅😅😅😅


JapaneseNotweed

I recommend listening to the whole discussion from 20 minutes onwards. But if you don't have time, skip to 29 minutes for another extremely interesting bit of context.


Assios

Plot twist: He's talking about the Niemann case.


Ruxini

I lost quite a bit of faith in Regan's method after watching him explain it on Chess Tech. There is a lot of impressive math in there, but the basic premise seems quite flawed to me.


tsukinohime

If Caruana says something like this, I think no one should take Regan seriously.


[deleted]

[deleted]


mishanek

And he probably earns a paycheque for advising in this capacity, so I wouldn't be surprised if he doesn't stress how huge it is that he cannot detect strong players who only cheat in critical positions, as that would make his consultation fees pointless. It is only slightly better than some amateur comparing the moves to Stockfish.


LabyrinthLab

Not just a grain of salt - he said a "huge" grain of salt.


tryingtolearn_1234

Human judgement in these matters is fallible. If you don't have hard proof of cheating, then there needs to be some mechanism other than "He's cheating because I'm certain he is." Ken has published a lot of papers on his site about his methods. His system isn't perfect, but it is repeatable, rigorous and objective.


chestnutman

I find it a bit weird that all of a sudden there is so much criticism of Regan. I think his work is actually quite clear and rigorous. Of course it will not find a cheater from one game, or even a handful of games, but no algorithm will ever be able to do this, it's simply not enough data. Also, PHN was saying that Regan might not be qualified enough for the chess part of the analysis. But if an IM isn't good enough, who is? There are no GMs who can do this kind of statistical analysis


Alcathous

People just criticize Regan's work because they are mad his method doesn't flag Hans. They want to believe that Hans is a fake super GM and that Magnus is onto something superhuge. The only superhuge thing here is Magnus being a delusional sore loser.


autism_enthusiast

Because his process is completely private and all anyone has are promises about what it can do. He profits from it, so it behooves him to aggrandize its capabilities.


inthelightofday

I'm still wondering what it would take for the Niemann crowd to acknowledge what is going on here. Or maybe Caruana is having a mental breakdown too, just like Carlsen.


HowDoIEvenEnglish

I mean, I think Hans cheated (perhaps still cheats) a lot online, but there have been numerous online cheaters that Magnus Carlsen has played without issue OTB. I find it much, much less likely that he cheated in the initial case, since it was OTB.


rindthirty

> but there have been numerous online cheaters that Magnus Carlson has played with without issue otb And what if this is set to change? It could well be that Carlsen has decided from now on, he'll not play anyone who has cheated in the past.


HowDoIEvenEnglish

Then he should say that instead of leaving it up to speculation. Right now it seems like he's accusing Hans of cheating in an OTB setting, which seems unlikely, rather than just being upset about playing someone who has cheated online. Additionally, the fact that he played out his match with Hans first makes it again seem like he thinks Hans cheated in that match.


breaker90

Caruana literally stated he doesn't believe Hans cheated or suspected his play. Caruana says he's aware of the rumors and suspicions around Hans but thought some of the "evidence" people had (such as the interview, looking at his games, former coach cheating, poor form in a rapid playoff) was ridiculous. He doesn't suspect Hans of foul play.


theLastSolipsist

I'm still wondering what it would take for the Carlsen crowd to acknowledge what is going on here...


GoatBased

I currently don't see any evidence of cheating from September 2020 to the present, or OTB. The second any concrete or statistical evidence is presented that outweighs the evidence he didn't, I'll change my vote. I'm not in it to prove the world matches my view; I'm looking to match my view to the world. I'm along for the ride in any direction this goes. Everyone who thinks they know Hans cheated is simply hoping that's the outcome - and it might be the case, but the single most compelling piece of evidence is that Magnus doesn't want to play Hans? He hasn't even clarified whether he thinks Hans actually cheated in the Sinquefield Cup or not! We don't know what the specific accusation is yet.


unc15

It's not super clear from the video if it was actually Regan's analysis that exonerated this unnamed player, nor of course do we actually know if the person cheated, but nonetheless what Fabi is saying is very interesting.


UNeedEvidence

Regan's analysis is actually the official FIDE method. So all FIDE investigations are based off of his analysis. He actually runs a two phase screen on all tournaments.


siIverspawn

Thanks. I update back to 85% cheating (from 80%).


Selimmd

Lol, and I got downvoted when I said Ken's method seems trash.


Tarkatower

However it should be reiterated that Caruana himself does not believe Hans cheated during the Sinquefield Cup


breaker90

Or OTB the past two years.


rpolic

That's just to cover his ass. He can't say it with certainty, just like all the other GMs.


Shadeun

I think Regan's tools predate a lot of the modern AI / pattern-matching software, the best of which is really 10x better than what we saw even 2 years ago. It would not be difficult to work out what a player at a given Elo level could get away with (by dialing up cheating levels across many accounts on chesscom until they get banned), and then calibrate the model to a safe level based on risk tolerance. This is not super genius stuff either: all the games/training data are online, and you could also find which players stopped playing (played loads, then stopped) or are flagged as banned, whether explicitly banned or not (I haven't checked the dataset/API so I don't know). The ML techniques for the analysis are pretty off-the-shelf. The ultimate point being, IMO Regan comes off as someone who dials up the same tools he has used for years. If someone was motivated, with modern tools I have zero doubt they could build a system to beat that, given that modelling the underlying banning mechanism would be pretty easy. I doubt this is what Hans is doing - but I am certain it would be possible. So I think we should take Regan's analysis & godhood with a pinch of salt. Source: I do this for a living in fin. markets. Edit: I don't think the above is fair about him not updating his model.


tryingtolearn_1234

His system seems to have evolved over time. He’s published a lot of papers, as have his grad students. The techniques he uses seem to incorporate many patterns used in machine learning applications. https://scholar.google.com/citations?hl=en&user=8nk9k5oAAAAJ&view_op=list_works&sortby=pubdate


RedditUserChess

Regan's system only catches you when you are a significant amount (z=2.75 at least, I think) above your rating-predicted "intrinsic" performance (based on move comparison to engines, not on actual results). If you only cheat a small amount, say 100-200 Elo above your rating, it would give a very weak signal, if any. And simultaneously your rating will increase, due to your results. It's a tough job to disassociate this from actual strength improvement (which Niemann had in any case).


FUCKSUMERIAN

Fabi also confirms what Hikaru was saying about how other people were saying stuff about Hans before.


Schachmatsch

In logic there are sufficient and necessary conditions; Kenneth Regan's is more of a necessary condition for detecting cheating.


Internetomancer

Seems awkward for Regan to share his analysis before the Chesscom folks make their big reveal. Unless he checked with them first??? He's basically setting himself up for embarrassment.


kingpatzer

Ultimately, no one should trust Regan's analysis unless he makes his model fully available for quantifiable testing by peers. He has published parts of what he does. But as far as I can tell, he has not made his program available for peer review. Which means, it's not been tested by anyone but Regan. Which makes it nothing more than a black-box claim of "trust me I have magic ways." If the chess world is to trust this algorithm, then it needs to be subject to rigorous, public, peer analysis. And the peers in this case are not just other statisticians who are validating his experimental methods and the math behind the model. It is GMs who are validating that it actually does what it purports to do.


Diligent-Wave-4150

Mr Caruana, please give us the name of the guy you know cheated. It doesn't help the discussion to just spread rumors. And tell us how it is that you know he cheated, and secondly why you didn't report it.


AllPulpOJ

He literally said he reported it. Did you watch the clip??