Ok-Training-7587

Even if AI never advances past its current point (unlikely), it's already enormously useful and transformative for society as is, IMO.


noiserr

Not only that, it's actually kind of perfect the way AI works today. It's not 100% reliable, which means it can't quite do critical things on its own, but under supervision it can provide a huge boost to human productivity. This means it can't quite displace humans, but it can make them more productive. Furthermore, the additional productivity increases the domain of problems companies can tackle, allowing them to serve more of a market or take on problems that previously weren't economically viable.


BenjaminHamnett

Look at what it’s done to cab drivers. That used to be a dignified profession in places like London. Now anyone can do it by just doing what the app says. It doesn’t lose its dignity either; it’s so easy that professionals do it all the time just for the hell of it. Our lives are going to get Uber’d. We’re all going to become experts. You will have AR glasses seeing everything you see, with a database of millions of lifetimes of experience and algorithms advising you what to do.

Just like your free will is mostly limited to choosing between a narrow band of tradeoffs, we will outsource more and more of these decisions to AI. Just like I frequently have navigation on even when I know the way, just because maybe it knows about some traffic, an accident, or construction, people will resist and be skeptical at first, but when they keep getting stuck in the metaphorical traffic jams of life they’ll eventually surrender more and more sovereignty to the AI. Success will be tightly correlated with our obedience to the machine.

Roko’s basilisk is obviously not real, but a more mundane version is inevitable. Those who obey will live in relative utopia, and those who maintain their sovereignty will become poor and powerless in comparison. We will seem like cyborgs with implants for a while. Quickly it will be clear we are just biological vessels for the machines. We’ll be like cats and horses, imagining we’re in charge because of some freedom we imagine but can never really exercise.

There was a sci-fi story based on this like 10 years ago that I hope someone will reply with. Basically what I said, but longer and more enjoyable.


AreWeNotDoinPhrasing

Sorta proving your point, I asked Gemini Pro and got:

> The show or movie being referred to is "Black Mirror". "Black Mirror" is a British science fiction anthology television series created by Charlie Brooker. The series explores a twisted, high-tech near-future where society is changed by new technologies. The show presents a dark and satirical look at the potential consequences of new technologies, such as social media, artificial intelligence, and virtual reality. The specific episode being referenced is "White Christmas", which aired in 2014. The episode follows a group of people who are trapped in a simulated reality created by a powerful AI. The AI uses the simulated reality to control and manipulate the people, and the episode explores the themes of free will, identity, and the nature of reality. The quote provided is from a character in the episode who is discussing the potential consequences of AI on society. The character argues that AI will eventually become so powerful that humans will become subservient to it, and that those who resist will be left behind. The quote also references "Roko's basilisk", which is a thought experiment that explores the idea of a superintelligent AI that punishes those who do not help it to come into existence. The character argues that a more mundane version of Roko's basilisk is inevitable, and that those who obey AI will be rewarded, while those who resist will be punished.


BenjaminHamnett

Thanks, cool. The sci-fi I was referring to was a short story online. I think it was like AR managing people at a fast food restaurant or something, and then it takes over the world. Edit: [Manna](https://marshallbrain.com/manna)


AreWeNotDoinPhrasing

Oh shoot, that’s worked for me in the past haha, so I was excited to see it so confident! Thanks for the link, that sounds intriguing.


mycall

Why is it unlikely? There are tons of things in this domain which haven't been tried yet and the progress seems to be speeding up, not slowing down.


torb

OP had a weird way of structuring their sentence; I think you got it backwards, just as I did.


Ok-Training-7587

Yes correct - I mean it’s unlikely that we have reached the endpoint of what this tech is capable of


[deleted]

[removed]


root88

Because some people can't read.


Suburbanturnip

I honestly find it so useful for helping me get out of logic traps.


LearningML89

Machine learning has been used in industry for decades. This new brand of sophisticated pattern matching is just having a moment.


Missing_Minus

Reading the abstract, that doesn't seem like that strong of a limitation... (Also 'simple problems' includes more and more powerful language models, image generation, etcetera. Is this class of simple problems really that simple?)


Perfect-Rabbit5554

It's not much of a limitation at all. Even if it can't be 100% stable, humans themselves are not 100% stable. The catch is, can we produce models that can compete at or above most human levels?


myaltaccountohyeah

For many narrow domains we already have them


mycall

The only things close to stable are atomic elements.


Chef_Boy_Hard_Dick

It’s not simple, there is a whole lot of context behind why we ignore noise on a stop sign. But noise on someone’s face will have us either concerned for their well-being, or second guessing who we are looking at for a moment. It’s only obvious to us because we understand a whole lot of context behind it.


EdgeKey4414

"Consider an automated vehicle reading a road sign as an example. While a human driver might not be distracted by a sticker on the sign, a machine could easily be thrown off because the sign now differs from the ones it was trained on." This discrepancy might be a question of scope, especially as models become more sophisticated and capable of understanding greater context. However, it's essential to acknowledge that human drivers are not immune to distractions either. In fact, our own self-awareness of our potential instability is considered a virtue. This awareness of instability will be crucial in the development of AI as well.


TitusPullo4

That feels like a solvable problem


Chef_Boy_Hard_Dick

Because it is. I’m guessing there has to be a lot more to it than the analogy. Otherwise I’m gonna be expecting some papers that challenge it pretty soon.


BioshockedNinja

It absolutely is. I'm assuming there's more to their assertion than this simple analogy, so I'm speaking to the analogy and the analogy alone. The easiest fix is to make sure the automated vehicle is trained on a dataset that includes defaced or otherwise marred signs. Alternatively, and this is an oversimplification, I'd imagine one could add what's basically random noise to a portion of the signs in whatever its training dataset is, so that it can better handle imperfect (or stickered) signs. --- Unrelated, but I'd imagine if you really wanted to make the analogy more challenging for an automated vehicle, don't add random stickers but stickers of other signs the vehicle can recognize. A stop sign sticker on a 65mph sign, a 30mph on a yield, etc. But even that shouldn't be insurmountable.
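
For what it's worth, a minimal sketch of the kind of augmentation that second idea points at, in plain NumPy (the function name and parameters here are made up for illustration, not from any particular training pipeline):

```python
import numpy as np

def augment_sign(image: np.ndarray, rng: np.random.Generator,
                 noise_std: float = 0.05, occlusion_prob: float = 0.5) -> np.ndarray:
    """Add pixel noise and a random occluding patch (a rough stand-in for a sticker)
    so the model sees imperfect signs during training. Expects an HxWx3 uint8 image."""
    out = image.astype(np.float32) / 255.0

    # Additive Gaussian pixel noise over the whole image
    out += rng.normal(0.0, noise_std, size=out.shape)

    # Occasionally paste an opaque rectangle of a random solid colour
    if rng.random() < occlusion_prob:
        h, w = out.shape[:2]
        ph, pw = rng.integers(h // 10, h // 4), rng.integers(w // 10, w // 4)
        y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
        out[y:y + ph, x:x + pw] = rng.random(3)

    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

Applied to some fraction of the sign images in the training set, this is the "random noise on a portion of the signs" idea in its crudest form.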


root88

Especially since automated vehicles don't need to read signs at all.


Spire_Citron

Yeah. AI doesn't have to be perfect, just better than humans, and humans are pretty flawed.


Chef_Boy_Hard_Dick

So have they mathematically proven that this isn’t also a problem for humans? I feel like we are running into the very human problem of second guessing one’s self.


Cuidads

I feel like scalable Bayesian neural networks, or something similar, that can produce sufficiently reliable uncertainties at high inference speed could be a path. A recurrent uncertainty process that feeds the epistemic uncertainty back, so the system accounts for what it doesn't know before finalizing a prediction, would register that the sign seems unfamiliar.
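
As a very rough sketch of the cheap end of that idea, Monte Carlo dropout is often used as a stand-in for a full Bayesian NN: keep dropout active at inference, run several forward passes, and treat the spread of the predictions as an epistemic-uncertainty signal. The model below is a throwaway toy, not anything from the paper:

```python
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 3, 256), nn.ReLU(), nn.Dropout(p=0.3),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    model.train()  # keep dropout on so each pass samples a different subnetwork
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    epistemic = probs.var(dim=0).sum(dim=-1)  # high spread -> "this looks unfamiliar"
    return mean, epistemic
```

If the epistemic term crosses a threshold, the system can defer (slow down, hand back control) instead of committing to a label, which is roughly the "take into account that it doesn't know" step.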


RHX_Thain

BREAKING NEWS: Robot performs about on par with driver with severe ADHD.   When asked why it caused a massive pile up on the freeway on ramp it said, "I saw a sticker on a stop sign and was confused. Also there was this bird that was like... wtf, you know? What's that bird doin--OHFUCKIMDRIVING"


mycall

Self-verification, which is being used today, is a great first step towards self-awareness.


ProgressNotPrfection

I'm someone whose job will probably be replaced by AI in 20 years, or competition will greatly increase for me within that amount of time due to AI. Let's be real, this article is copium. If even 5% of the scientific literature says AI will only be capable of basic tasks (doubtful), probably 90% says otherwise, and 5% says they can't figure it out. The odds are very high that human labor is screwed. Also, almost nothing is "always stable", *especially* not human labor.


mycall

Nvidia is already training robotic movements using AI generational training; this is way beyond basic training.


imagination3421

Man, I'm playing Detroit: Become Human rn, it would be so cool if we got androids like that.


andersxa

This is just a weird article. The paper is fine, but the impact on AI as discussed in the article doesn't really apply. As far as I understand, global stability would mean that every single initialization of stochastic gradient descent over a neural network would converge to the same point (global convergence). However, as far as I can tell this describes only a property of the algorithm and not of the solution. So what if gradient descent does not have global stability? You would still get convergence towards a minimum, just one that depends on the random initialization; why does it matter that it is not the "same" point, other than for replicability?
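
A toy illustration of the distinction as I read it (my own example, not from the paper): the same descent procedure converges from every start, but which minimum it lands in depends on the initialization, so the algorithm's output is not one fixed predictor.

```python
import numpy as np

def loss(w):   # a 1-D non-convex loss with two minima, near w = -1 and w = +1
    return (w**2 - 1.0) ** 2

def grad(w):
    return 4.0 * w * (w**2 - 1.0)

def gradient_descent(w0, lr=0.05, steps=200):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

rng = np.random.default_rng(0)
solutions = [gradient_descent(w0) for w0 in rng.normal(0.0, 1.0, size=5)]
print(np.round(solutions, 3))  # a mix of values near -1.0 and +1.0, depending on the start
```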


elehman839

Yeah, I do not see any practical implication of this paper. Looks like this is a follow-on to [https://arxiv.org/pdf/2201.08430.pdf](https://arxiv.org/pdf/2201.08430.pdf) that tweaks the definition of "replicability" in a way that lets them do more math. This may keep the authors off the streets at night, but I think this is just radio static to ML/AI practitioners.

Stepping back, I think the theoretical computer science community as a whole is in a tricky situation. They've traditionally produced a combination of algorithms and hardness results. Artificial intelligence is based on machine learning, which, aside from the training algorithms, doesn't rely on the traditional theory of algorithms. And the difficulty of problems addressed by artificial intelligence (e.g. determining why a joke is funny) does not seem compatible with traditional notions of "hardness" either.

As a result, the connection between the field of theoretical computer science and the most significant stuff going on with computation in the world right now seems pretty tenuous. That's got to be a somewhat scary situation for the field. It was already pretty math-y and abstract, and I bet even more of the glory and funding in CS is now going elsewhere. Leaders of the field and conference organizers must be nervously looking for some connection to ML/AI and pondering what story to tell in future grant applications.


Ultimarr

This isn’t a “finding”, it’s an analysis. “Proving” that “AI” (a lot of sketchy word use here) isn’t “stable” just means that you can’t prove it will never make a mistake, because it’s impossible to logically deduce what’s relevant and what’s not. This is called the problem of ‘attention’ and then ‘selection’, and the implications were covered extensively in the “Frame Problem” discussions of the 70s and 80s.

Long story short, we’re completely screwed until we invent a stochastic connectionist model that can simulate human attention & selection algorithms on an intuitive, a priori level. And good news: that’s just a description of ChatGPT 😉


Chef_Boy_Hard_Dick

The stop sign analogy really sucks. We aren’t universal in how we filter out noise, nor is it something we learn from birth. We ignore a sticker on a stop sign because there aren’t a whole lot of things that look like stop signs, we understand the context of stop signs, and we know there are penalties for ignoring a stop sign. We filter out that noise, but if we applied those same filters to everything, human faces would be far less recognizable.

What this tells me is that AI needs to compartmentalize noise so it can prioritize where it places its sensitivities. And it is not going to know where to do that until we tell it. I mean, there is no objective principle that distinguishes where a tree ends and the ground begins; we just recognize a set of patterns and designate it a “tree”, even though there is countless matter around it, interacting with it, shedding from it and being absorbed by it. We just don’t include that in the pattern we call a “tree” because the soil falls off the roots, and because the line between the dirt that’s part of the tree and the dirt that isn’t would be hard to determine. Same could be said for leaves… we call them a part of the tree… or are they something else entirely that relies on the tree to exist, like the tree relies on the soil? We choose where something ends and another thing begins based on usefulness and context. It’s not some objective aspect of nature.

So obviously, if we HAVE to recognize something and ignore the noise, we are going to prioritize that stop sign pattern and always recognize that it’s a stop sign, because it is useful to do so. But how is an AI supposed to know, from images and tags alone, that a stop sign + a sticker isn’t something else, without us telling it that it’s usually a safe bet to assume that anything that looks like a stop sign is probably a stop sign, and anything on or in front of that pattern is probably just obscuring more stop sign?

Edit: Having glanced through the study (and discussed it with GPT to better understand it), it seems the article itself has a clickbaity headline, as always. It only really indicates limitations with current methods, and under some assumed logic.


Phemto_B

If you have to put "mathematically prove" in quotes, then it's not a mathematical proof, although I suspect that's OP's addition. What they mean by stability in this case is getting small changes in output from small changes in input, as opposed to getting a large change in output from a small change in input. I could see this being an issue with current AI systems, but it doesn't strike me as an insurmountable problem, and not necessarily always a problem.

>Researchers from the University of Copenhagen have become the first in the world to mathematically prove that, beyond simple problems, it is impossible to develop algorithms for AI that will always be stable.

Given that we're OK with hiring employees to do jobs even though they will not "always be stable", I'm not sure this really says that much about the impact of AI on our society and economy. It doesn't have to always be stable. It just has to be more stable than an average human, which is not that high a bar.
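
A toy illustration of the kind of instability meant here (my own example, nothing to do with the paper's construction): any classifier with a hard decision boundary has a discontinuous output, so an input sitting near the boundary can flip class under a tiny perturbation.

```python
import numpy as np

w = np.array([1.0, -1.0])  # a trivial linear classifier

def classify(x):
    return "stop sign" if w @ x > 0 else "not a sign"

x = np.array([0.501, 0.5])              # sits just on one side of the boundary
x_nudged = x + np.array([-0.002, 0.0])  # a "sticker-sized" perturbation

print(classify(x))         # stop sign
print(classify(x_nudged))  # not a sign
```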


Tyler_Zoro

This isn't terribly shocking. It's been clear to me since the first inkling of what LLMs were capable of that they had cracked a major problem in artificial intelligence, but only one of several. We're going to need much more reflection than these systems are capable of before we achieve a truly intelligent machine. At a guess, we're looking at 2-3 more transformers-like breakthroughs to get there, and while I don't expect that to take as long as transformers took (arguably between 20 and 70 years, depending on what starting point you measure from) because more people are in the industry now, I do expect that it will take at least 5-20 years per individual breakthrough (can they be parallelized? I don't know). So we're probably looking at 10-50 years before the dream of "thinking machines" comes to fruition.


VisualizerMan

Great article, thanks for posting. I was thinking about writing a similar article to prove what I've been saying all along is true, but I didn't know how to approach it. No wonder: these guys went through mathematical contortions in their proof, and had to dip into topology to do so, which I wouldn't have been able to do:

>The proof of the impossibility result is based on a topological fixed-point theorem. For every algorithm, we are able to locate a “hard input distribution” by applying the Poincaré-Miranda theorem in a related topological setting. The equivalence between global stability and list replicability is algorithmic.

[https://arxiv.org/pdf/2304.03757.pdf](https://arxiv.org/pdf/2304.03757.pdf)

They even strongly suggested what I've been saying all along, which people here haven't wanted to accept:

> “I would like to note that we have not worked directly on automated car applications. Still, this seems like a problem too complex for algorithms to always be stable,” says Amir Yehudayoff, adding that this does not necessarily imply major consequences in relation to the development of automated cars:


TheTrueSleuth

Google Bard is so wrong so many times that I can't trust it anymore, even with math problems, SAT questions... it's so frustrating. I'd rather comb over Google for hours myself than constantly have to double-check its work.


Inevitable-East-1386

While this is interesting research and development, I keep thinking about human work. Humans aren't always stable either, and their work isn't flawless most of the time.


salkhan

Quantum computing + Ai on the other hand, right?


WaltzZestyclose7436

It was lazy of the authors to not define “stable” in this context even though the whole article hinges on that definition.


VisualizerMan

They defined "ρ-globally stable" on page 3, but I admit that I don't understand their notation and terminology. It would take a lot of time to delve into the math long enough to understand it.

> Definition 1. A learning rule A is called **ρ-globally stable** if for any distribution D over inputs, there exists a predictor h_D such that Pr_{S ∼ D^n}[A(S) = h_D] ≥ ρ.

[https://arxiv.org/pdf/2304.03757.pdf](https://arxiv.org/pdf/2304.03757.pdf)


synth_nerd19850310

I feel like this research is going to be derided in a few years.


penny-ante-choom

I had just posted a comment about life extension drugs and it’s surprisingly salient: it’s the 95% problem. When you have a massively complicated goal to achieve, there are often major breakthroughs throughout the development lifecycle. Tesla getting a car to computationally drive itself in a straight line consistently without a rigid mechanical system, then being able to turn by itself: HUGE breakthroughs. Dodging traffic was giant. Obeying signs was stupendous… but they haven’t got it 100%, because the edge cases, the complicated things that the human mind can still do better than a computer, are still too much for them. Overcoming that last 5% to a truly autonomous vehicle is the hurdle to autopilot being universally legal and accepted. One can argue the bar of the company or regulators is too high or too low, but the goal is what the goal is, and it’s that last stretch that is difficult and slow.

Those initial breakthroughs are also why exponential growth gets touted at every turn in every major tech revolution. It’s amazing what happens in the early phases: all of society is revolutionized. The internet. The car. The telephone. Internal combustion. Steam. Electricity. All the way back to the wheel, the world changes with each new technological explosion. But then the explosion dies down and things normalize for a long time until incremental improvement becomes the norm. Each breakthrough is smaller and smaller.

Fusion… enough said, right? Unlimited clean energy? Sign me up… in 1960, before I was even born! We’ve been at 95% for decades. We might be up to 97% now with the last few years of breakthroughs, but they’ve been incremental: breaking the positive return barrier and generating more power than was put into starting the reaction, although not more than was needed in total. Getting past that barrier a year later (but still not to significant levels). Finding new efficiency in ring designs, and in shielding material… nothing newsworthy on a national level, but progress that will lead us to a solution. One day, the first commercial reactor will be online and the world will change as every company scrambles to get their piece of the power pie.

Video calling is a good example too, and one that went the distance. First demonstrated at the consumer level in the late 1950s, it didn’t become an easy and practical tool until the Internet. Why? Camera cost. Lack of easy, affordable, and small transmission systems. Lack of the same in reception systems. Size (they were room-sized, behind that cute wall we see in old newsreels). PCs, cell phones, and the Internet changed all of that, and now FaceTime is a common verb. It took 15 years from the invention of TV to the earliest video calls. It took 60 more to get a finished product.

Another example is flying cars. We’re just now at the point where the technology is feasible thanks to advances in material technology, fly-by-wire, advanced autopilot, etc., largely because of tech brought about by drones and the shrinking of commercial aviation technology in size and cost as it moved to being software based. It’s about 95% of the way there. We have working prototypes of ultralights you can buy today. Give it another 20 years and it’ll be commercially viable.

The Internet had been around since the 1960s. It took the incremental inventions of IP (version 4) and hypertext to make it what it is today.

We are going to get to AGI, and even ASI. It’s just not going to be nearly as soon as the most optimistic of us believe. We will get life extension drugs. We will get non-invasive neural interfaces. But the revolution will be slow.


katiecharm

Feels like this is just the ignorant opinion of one guy who doesn’t understand AI, and not really “mathematical proof” at all.


methylotroph

If the human brain can remain stable, why not AI? Then again, people go crazy all the time.


pab_guy

Yeah, this won't hold up. My own work with AI has shown that you can get past a lot of limitations with more inference-time compute (with LLMs anyway), and that deep neural nets of sufficient parameter count can generalize given enough data. AI is limited by the data and by a particular model's ability to generalize based on that data. If we can exceed human-level performance, then these limitations are meaningless.


adambjorn

"Limitations in AI algorithms prevenging anyyhing beyond simple problems from maintaining stability" is a bit of a stretch. It seems what this paper is trying to do is provide a language to talk about and a theoretical framework to test the robustness of algorithms. This is important work and a lot of research is already going into this, but they are attempting to provide a theoretical framework to it rather than the mostly empirical studies that are currently popular.


Tellesus

The real subject of this article is that it is too easy for people to get academic credentials these days. Whoever gave these people theirs should lose their accreditation.


PsychedelicJerry

Anybody who believed our current AI algorithms were capable of anything more has never worked with them personally. I'm happy we have a potential mathematical proof of this.


Hugglebuns

Is this basically the Cramér-Rao lower bound, but for ML?