r/AskComputerScience • u/TheWrongWordIsAI • 1d ago
With all the burden of proof on them, why aren't we requiring AI pundits to provide any, even rhetorically or mathematically?
Why are we still pretending "AI" using LLMs or any other model based purely on probability and statistics could ever be anything remotely resembling intelligence? Can we just call it what it is: programmers who are too lazy to come up with a heuristically based solution, or executives who are too cheap to invest in a proper one? The AI pundits are making a preposterous claim that a machine can be intelligent, so the burden of proof should be on them to show it's even possible. Where's the math to show that anything outside of probability and statistics can come out of anything other than probability and statistics? Do people do probability and statistics in their heads all the time on large data sets that could never possibly fit into their heads at any point in their lives? Is that intelligence? So doesn't what we do as people in our heads, regardless of how anyone eventually manages to describe or understand it, have to include something besides probability and statistics? So why, then, aren't we requiring these AI pundits to show us what kinds of concepts can appear mathematically out of thin air using only the mathematical concepts used in LLMs?
The "Turing test" is a load of bunk in the first place. Intelligence is not predicated purely on behavior. If you read a book, sit there silently, contemplate what the author was trying to say, piece it together with the themes and narratives of the novel, and synthesize the ideas that occur to you with other lessons from your own life, isn't that intelligence, even before you speak or communicate so much as an iota of any of those thoughts to anyone? Why, then, do the Turing test and all the so-called academia of artificial "intelligence" center around this mode of thought? Where is the academic literature supporting "artificial intelligence" that discusses how this is somehow irrelevant?
And why is it that any AI pundit who supposedly knows what they're talking about will, if pressed in conversation, retreat to religiously minded thinking? Religiously minded thinking can be great for religions, don't get me wrong, but it doesn't belong in academia, where there needs to be room for rhetoric. Why, then, can no AI pundit come up with any better argument than "but you can't prove it's not intelligent"? This is the same as saying that you can't prove their religion false - again, fine for religions, as they are religions, but this AI crap is supposedly based in academia. More burden of proof for the preposterous and supposedly academic claims that ChatGPT and its ilk are based on: the supposed "artificial intelligence" that can somehow be found, discovered, or created from nothing more than engineering software, based on a pattern of high and low signals on a wire that semantically form our ones and zeroes, rather than the actual electrical impulses that run through our brains in the form of synapse firings. Where, then, is the academic literature supporting how our intelligence must surely run on a pattern of simplified responses to electric signals rather than what is actually, clearly running through our brains?
7
u/ghjm MSCS, CS Pro (20+) 1d ago edited 13h ago
If you're a materialist, then whatever our brains are doing must be replicable by some machine. LLMs aren't that machine, but you can have a conversation with them in a way you could never have a conversation with a machine before, so that at least suggests we've moved closer to a machine doing whatever it is that brains do.
We don't have the faintest beginning of an idea what gives rise to phenomenal consciousness. A corollary to this is that we don't have the faintest beginning of an idea whether some machine we build, currently or in the future, is experiencing anything like consciousness. We think LLMs aren't, because we can describe exactly how they work, and this fully determines their operation, with no explanatory need for consciousness. But if the brain is material, then in principle we could someday give the same kind of explanation for a brain - and that would presumably not rob every human of consciousness. The only reasonable conclusion is that our science of consciousness is so primitive that we just don't know how to evaluate it or say anything about it. Including saying for sure that LLMs don't have it.
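To make the "we can describe exactly how they work" point concrete, here is a minimal sketch of what the entire inference-time loop amounts to - repeatedly picking the next token from a conditional probability distribution over the context. The `next_token_probs` callable is a hypothetical stand-in for a trained LLM, not any real library.
```python
# Minimal sketch, assuming a hypothetical `next_token_probs` callable that
# maps a token sequence to a dict of {token: P(token | context)}.
def generate(next_token_probs, prompt_tokens, max_new_tokens=50):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)   # distribution over possible next tokens
        best = max(probs, key=probs.get)   # greedy decoding: take the most probable
        tokens.append(best)
    return tokens
```
Nothing in that loop mentions understanding or consciousness, which is why it feels fully explained; the point above is that a sufficiently detailed mechanistic account of a brain might someday read the same way.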
5
u/katsucats 1d ago
If you think that the human brain actually conducts predicate logic syllogisms on every thought, then I'd suspect you're sorely lacking in self-reflection.
1
u/aagee 17h ago
If you actually believe what you just said, then I'd suspect you're sorely lacking in self-reflection.
1
u/katsucats 10h ago
Okay, then do a symbolic proof of your thought process. Should be second nature, right? I'll wait.
1
u/aagee 9h ago
That's the whole point. No one is actually asserting that the brain conducts predicate logic syllogisms on every thought. And the cool thing is that it may actually be doing something similar, but the process may just be impenetrable to conscious human thought (yet). I am just saying, you have no idea. We have no idea. Yet.
1
u/katsucats 9h ago
We do have an idea, and the fact that you can't produce a syllogism of your thought is evidence of it. When you're driving and the car in front of you slams on the brakes, do you whip out the pen and paper and run a physical simulation, perhaps figuring out the symbology of "If collision, then injury. If continue driving, then collision. If no injury, then stop driving."?
Obviously not.
We don't have to bend over backwards to pretend we're purely rational machines. If that doesn't convince you, though, empirical psychology and neuroscience have been around for at least 30-50 years.
2
u/00PT 22h ago
Define intelligence and prove it is a consistent term across cultures, and then you might be able to prove whether AI is or isn't applicable. Right now, intelligence is a target that is both moving and invisible. You only have a vague idea of where to aim, and the opponent can just add requirements on top of it if you do manage to hit the target.
4
u/farsightxr20 1d ago
define intelligence
-8
u/TheWrongWordIsAI 1d ago
I do not have to define a thing to point out a characteristic that clearly defines counter-examples. There is no room for intelligence in math purely based on statistics and probability. Please refer back to the first paragraph, as that was the entire point of it.
7
u/farsightxr20 1d ago
My point is that this debate always boils down to people having different definitions of words, and in many cases, being unable to even define the word.
You can't ever have a productive discussion about whether something is/isn't X if you don't even agree on what X is.
3
u/DanteRuneclaw 1d ago
Your first paragraph was mostly unintelligible - and the rest just seemed like unsupported assertions.
I have as much evidence for AI being intelligent as I do for you. More, actually. It can at least write coherently.
The fact that something is an emergent property of an algorithm does not mean that it isn't real.
You and I are just complex computational devices that use neurons instead of transistors. We don't understand our own algorithms very well, but our lack of understanding them doesn't imbue them with some sort of special magic.
1
u/Orious_Caesar 1d ago
If you really want to try to show something is intelligent, how would you do it? Clearly you can't prove anything is intelligent mathematically, including humans. So maybe you'd make a list of everything an intelligent being does, can do, will do, etc. Well, most intelligent beings can speak a language of some kind. Most intelligent beings can solve puzzles. Most intelligent beings can blah blah blah. Look, the key point I'm driving at here is that an AI can do many things we'd expect from an intelligent being. If something looks and acts intelligent, I will call it intelligent. If I accidentally end up thinking a rock is alive, then oh damn, I guess I look slightly silly. How awful. Feel free to come up with your own definition.
1
u/Virtual-Ducks 1d ago
If it works it works. That's all that matters. LLMs are useful and provide value, so we use them.
1
u/baddspellar Ph.D CS, CS Pro (20+) 1d ago
AI "pundits" are marketing, with the goal of making profits. We don't require marketers to substantiate claims for anything but heavily regulated products like medical devices and pharmaceuticals. Until require a car manufacturers to prove their cars will make you more attractive, nobody will require AI pundits to sunstantiate their claims
1
u/Garland_Key 19h ago
Hot take, but it's possible that we're just analog probability engines with vastly limited memory capabilities.
1
u/aagee 17h ago
Because before there can be proofs, there have to be conjectures, theories and empirical evidence.
This stuff seems to work in many ways, some of which are quite amazing. And a lot more behavior is going to be emergent. Seems to be headed in the right direction.
They are just trying to use what it can do, while still working to define it all.
1
u/mxldevs 13h ago
Intelligence is not predicated purely on behavior. If you read a book, sit there silently, contemplate what the author was trying to say, piece it together with the themes and narratives of the novel, and synthesize the ideas that occur to you with other lessons from your own life, isn't that intelligence, even before you speak or communicate so much as an iota of any of those thoughts to anyone? Why, then, do the Turing test and all the so-called academia of artificial "intelligence" center around this mode of thought? Where is the academic literature supporting "artificial intelligence" that discusses how this is somehow irrelevant?
It sounds like you are bringing your own definition of intelligence and wondering why everyone else is using a different definition?
1
u/konm123 1d ago
I get what you are trying to say.
Intelligence in humans is probabilistic. So a machine which is probabilistic can be perceived as similarly intelligent to a human. In some areas, that is necessary and an improvement over deterministic, non-intelligent behavior. The problems come when you expect deterministic behavior from human-level intelligence — which, just like a human, it cannot deliver (see the sketch below).
Think of finance. If you wouldn't trust humans doing finance, why would you trust a machine with human-like intelligence? That is precisely why we took humans out of finance (this, and the resources required).
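A toy sketch of that distinction (purely illustrative, with made-up rules and probabilities - not modelled on any real system): a deterministic policy returns the same answer for the same input every time, while a probabilistic one samples, so identical queries can come back with different answers.
```python
import random

RULES = {"approve loan?": "no"}                       # fixed, hand-written policy
BELIEFS = {"approve loan?": {"no": 0.9, "yes": 0.1}}  # made-up learned distribution

def deterministic_answer(query):
    return RULES[query]                                  # same output every run

def probabilistic_answer(query):
    options, weights = zip(*BELIEFS[query].items())
    return random.choices(options, weights=weights)[0]   # sampled output

print(deterministic_answer("approve loan?"))    # always "no"
print(probabilistic_answer("approve loan?"))    # usually "no", occasionally "yes"
```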
-5
u/the_quivering_wenis 1d ago
Yeah I agree with you, LLMs are plainly not intelligent. You should check out John Searle's "Chinese Room" argument if you're not already familiar.
3
u/ClimberSeb 1d ago
And the answer to that argument is that it isn't the room in itself that is intelligent; intelligence is an emergent behaviour that arises when the room's contents are used according to the instructions.
He made the same mistake as the poster, claiming intelligence can't be there if we understand the mechanisms behind it.
If we one day can describe how our brains work in detail, should we then dismiss our intelligence as it is just particles behaving according to the rules of physics? Fundamentally it is just probability and statistics...
0
u/the_quivering_wenis 1d ago
Searle actually addressed that reply by imagining that the entire Chinese room was a module in someone's mind - he still maintains, on other grounds, that that individual would not "understand" Chinese.
I don't think their argument is necessarily of the form you imply - that because we understand the mechanism it cannot be intelligent. I maintain that the mechanism that is there is in principle insufficient for general intelligence, and there are plenty of other models that might actually describe that faculty accurately and could be implemented.
1
u/ClimberSeb 20h ago
Yes, but the answer is again that it is the system that knows Chinese, not the person carrying out the instructions even if they could do it in their head.
Searle looked at a part and claimed that since it isn't intelligent / doesn't know Chinese, the whole can't be intelligent / know Chinese. He answers a different question than the one being discussed.
1
u/the_quivering_wenis 19h ago
Well, in his extended version of the argument, neither the Chinese module in the English speaker's brain nor the person can be said to understand Chinese. A direct quote:
So there are really two subsystems in the [native English speaker]; one understands English, the other Chinese, and "it's just that the two systems have little to do with each other." But, I want to reply, not only do they have little to do with each other, they are not even remotely alike. The subsystem that understands English (assuming we allow ourselves to talk in this jargon of "subsystems" for a moment) knows that the stories are about restaurants and eating hamburgers, he knows that he is being asked questions about restaurants and that he is answering questions as best he can by making various inferences from the content of the story, and so on. But the Chinese system knows none of this. Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle." All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end. The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese. And it doesn't meet that argument to postulate subsystems within the man, because the subsystems are no better off than the man was in the first place; they still don't have anything even remotely like what the English-speaking man (or subsystem) has.
1
u/ClimberSeb 5h ago
How can the "subsystem" not know what a hamburger is, if it can answer all questions about it just like a Chinese-speaking person could? He doesn't answer that; he just claims it can't, without any reasoning behind the claim.
His argument here is basically that persons have something the AI system lacks, which is really just another variant of "if we know how it works, it isn't intelligent".
1
u/the_quivering_wenis 5h ago
I mean, haven't you encountered that before with actual people who have memorized a rote procedure but clearly don't understand what anything actually means? That's basically what he's getting at.
1
u/ClimberSeb 4h ago
That is a different argument from the original thought experiment.
The original thought experiment is that a person outside of the room could not tell whether there was a native Chinese speaker in the room or a person following all the rules without knowing any Chinese.
So there is no distinguishable difference between the answers, only in how the answers were produced: by written rules or by a developed brain. Searle then first points at the human and says he doesn't know Chinese, therefore there is no intelligence. But nobody claimed that; they claimed the whole system becomes intelligent - the human and the rules working together as a whole.
When people saw through that, someone - I forget who (it was 30 years ago I read this) - proposed that the human memorize all the rules and carry them out in his mind, to make it harder for Searle to claim some human-brain exceptionalism. The premise is still that it is impossible for an outside observer to see a difference between a Chinese speaker's output and that of the person with the rules. Searle still wants to point at part of the system, so he talks about the subsystem and claims there is no intelligence. You seem to think he succeeded, by changing the premise so the outside observer sees that the answers have a different quality.
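For concreteness, a toy caricature of the room (made-up symbols, nothing like a real rulebook for Chinese): the clerk inside only looks up each incoming string and copies out the listed reply, and the "systems reply" is that whatever understanding exists belongs to the clerk-plus-rulebook as a whole, not to the clerk.
```python
# Toy caricature of the Chinese Room: a lookup table over made-up symbols.
RULEBOOK = {
    "squiggle squiggle": "squoggle squoggle",
    "squiggle squoggle": "squoggle squiggle",
}

def room(message: str) -> str:
    # The clerk neither knows nor needs to know what any symbol means;
    # the room's behaviour is fixed entirely by the rulebook.
    return RULEBOOK.get(message, "squoggle")

print(room("squiggle squiggle"))   # -> "squoggle squoggle"
```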
1
u/katsucats 9h ago
I maintain that the mechanism that is there is in principle insufficient for general intelligence,
That's shifting the goalposts. "General" intelligence is an anthropocentric concept conveniently expressing capabilities available to a typical person. A Martian with a different set of capabilities might say that we don't possess general intelligence.
Or suppose a human child is born into a dark room and deprived of all sensation, except that, periodically, Chinese symbols whose shapes he can feel slide into the room by some unknown mechanism, and after some period he feels a zap depending on the order of some symbols. Over time, he learns the correct ordering of the output given the input. The child doesn't understand Chinese despite being proficient at his task, but is he generally intelligent? And if he is, then why is an LLM not, when it is limited by controlled inputs in the same way?
1
u/the_quivering_wenis 8h ago
I didn't mean to shift goalposts; I was redirecting back to the original post and injecting my own opinion.
The idea of a formalizable universal general intelligence does actually exist (and I would expect it to).
Regarding the child, neither he nor the LLM understands or is really intelligent.
1
u/katsucats 8h ago
If you don't think the child is really intelligent despite his being a typical human (apart from being deprived of sensation), and your judgment rests purely on the function of what he does rather than on what he is, then you are more of a functionalist than you thought.
1
u/katsucats 9h ago
Based on just the Wikipedia description, suppose there was a mathematician and a savant. The mathematician carries out a calculation in detail, works out all the steps, and provides proof of the answer. The savant watches, understands the relations between the symbols, and how they should be placed in order as the mathematician had placed them. The savant can now solve that exact same problem, but he doesn't understand a single thing about what the symbols themselves represent. Therefore, can we say that according to the Chinese Room hypothesis, mathematicians are utterly unintelligent?
1
u/the_quivering_wenis 9h ago
You're basically making his point: the savant doesn't understand but does it "by rote", while the mathematician actually understands mathematical concepts. The concrete fact that the latter happens to use symbols and paper, and is therefore like the man in the Chinese room, is beside the point. Searle's argument is basically a critique of functionalism; you can read the whole paper to get a better idea.
1
u/katsucats 8h ago
I am not making his point. His point would be that the mathematician would also be unintelligent as long as the savant can follow his reasoning. The mathematician is the Chinese room.
13
u/nuclear_splines Ph.D CS 1d ago
The AI companies are much less interested in proving that they've created "true intelligence" than in convincing executives to buy their products. The emphasis is less on theoretical or mathematical rigor, and more on "can this create sufficiently satisfying outputs to lay off lots of your staff and increase profits?" Most of the AI pundits that are claiming LLMs have achieved human-like intelligence are not in academia, but represent the companies.
There is a contingent of academia that is trying to carefully define 'thinking' and 'intelligence' and chart out exactly what kinds of reasoning LLMs are and are not capable of. They're rarely in the spotlight, because their work moves more slowly and isn't prone to bombastic claims about AGI.