Well, I did not hear "talking" even when primed to hear that word. It might have come up with "having carnal knowledge with" and been less wrong. You can look up videos of misheard lyrics as well. Still, maybe Biden was just having a brain fart and misspoke - it can happen - I do it often enough. Still, the word that came out was not "talking", no matter how you spin it. Why would AI transcribe it as such? Was the AI retrained for this contingency? Unlikely, but then we have two mysteries: what quirk of Biden's demented brain caused him to come up with this bizarre statement (not what I would call a joke), and why would the AI mistranscribe it as "talking"?
There are many ways that these LLM neural networks are like us (albeit with a brain the size of a worm's compared with ours - though not worm-like in terms of how it is about to transform society). We both work, in part, like prediction machines. When we don't have much signal but a load of noise, we still make predictions about meaning; we have to in order to survive. Biden is making a lot of weird noises, but we still have to try to predict what he is saying. The AI predicts what he most likely said. I tested this video on four people, without telling them what others were saying. Of course, they all could hear it was Biden speaking, so this would have massively influenced what they predicted he was saying. These were all people I would call politically non-partisan, although they are aware of Biden's dementia, of course. Two thought he very clearly said 'f****** your wife', one said he very clearly said 'talking to your wife', and the fourth said "I'm not sure" but, when pressed, said "I think he used the f word" - though he was very hesitant to speak his mind, particularly because he personally never uses the f word.
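To make "the AI predicts what he most likely said" concrete, here is a minimal sketch of the noisy-channel idea behind automatic transcription: the decoder scores each candidate word by an acoustic likelihood P(audio | word) weighted by a language-model prior P(word | context). Every number below is invented purely for illustration, and I am not claiming this is how any particular transcriber is built - but it shows why no retraining would be needed for the output to come out "sanitized": a strong prior against expletives in polite public speech can outvote the acoustic evidence on its own.

```python
# Toy noisy-channel decoder: posterior score ∝ P(audio | word) × P(word | context).
# All probabilities are invented for illustration only.

# How well each candidate matches the ambiguous audio (made-up likelihoods)
acoustic_likelihood = {
    "f******": 0.60,   # fits the sounds best
    "talking": 0.25,
    "mocking": 0.15,
}

# How probable each word is in this spot according to a hypothetical
# language-model prior trained mostly on polite public speech
lm_prior = {
    "f******": 0.001,  # expletives are rare in the training text
    "talking": 0.900,
    "mocking": 0.099,
}

# Bayes' rule, unnormalised: posterior score = likelihood × prior
posterior = {w: acoustic_likelihood[w] * lm_prior[w] for w in acoustic_likelihood}

for word, score in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {score:.4f}")

print("decoder output:", max(posterior, key=posterior.get))  # -> talking
```

On these made-up numbers the decoder prints "talking" even though the expletive fits the audio more than twice as well - the prior does all the work, no conspiracy or retraining required.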
ChatGPT 3.5 says:
Human Recognition of Words and Expectation
Human recognition of words heavily depends on expectation, which operates at both conscious and subconscious levels. Here’s how expectation influences our understanding:
Top-down Processing:
When we encounter language, our brains use top-down processing. Prior knowledge, context, and expectations shape how we perceive and interpret incoming information.
For example, if you see the word "piano" in a sentence about music, your expectation of related words like "keys," "notes," or "concert" may influence how you interpret the sentence.
Predictive Processing:
Our brains are predictive. As we read or listen to speech, we continuously predict what words or phrases might come next based on context and our understanding of the topic.
These predictions help us anticipate and comprehend language more efficiently.
Subconscious Influence:
Much of this expectation and prediction occurs subconsciously. We’re often unaware of the specific mechanisms at play, yet our brains seamlessly integrate these processes to facilitate rapid comprehension.
Contextual Priming:
Exposure to certain contexts or stimuli can prime us to expect specific words or meanings.
Hearing words associated with a particular topic can prime us to recognize related words more quickly in subsequent encounters.
Overall, human word recognition is intricately linked to our ability to predict and interpret based on context and expectation, often operating without our conscious awareness. This interplay between expectation and language processing is a fascinating area where cognitive science and linguistics intersect.
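That "predictive processing" point maps directly onto how LLMs are trained: at bottom they are next-word predictors. As a toy illustration, here is a bigram model, the crudest possible prediction machine, which just counts which word follows which in a tiny invented corpus (riffing on ChatGPT's "piano" example). Real models do this with neural networks over vast corpora, but the core task is the same.

```python
# Toy bigram "prediction machine": count which word follows which,
# then rank candidate next words by frequency. Corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "the piano has keys . the piano plays notes . "
    "we heard the piano at the concert ."
).split()

# Build bigram counts from adjacent word pairs
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Rank candidate next words by how often they followed `word`."""
    return following[word].most_common()

print(predict_next("piano"))  # [('has', 1), ('plays', 1), ('at', 1)]
print(predict_next("the"))    # [('piano', 3), ('concert', 1)]
```

After seeing "the piano" three times, the model expects "piano" after "the" - exactly the kind of contextual priming described above, and exactly why a model would expect "talking to your wife" rather than the alternative.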
Interesting that you tie linguistic understanding to prediction. I may have encountered this idea before, but cannot remember (there is a lot I do not remember). I am pretty sure our interpretation of words depends on context. Maybe it is statistical in nature. I think at one time I knew a lot more about that. I will have to ask Chat.
It is still an open question in my mind to what degree LLM AI is remotely similar to us.
The "hard problem of consciousness" looms large here. I have been obsessed with philosophy of mind issues since I was a teenager, and am far less confident that I have good answers than I was in my youth. LLM AI brings a whole new perspective. That it works at all flummoxes me. Do say it works because of neural nets and transformer architecture is just so much handwaving as far as I can see.
By the way, if Biden had not been such a monster throughout his life, I would have some sympathy for him.
F.U.C.K.I.N.G. There can be no other interpretation. There IS NO OTHER. They are an Insane Death Cult 😳 💀
I have to say, combining his lip movements with the sounds, it seems very clear to me too.
I think 🤔 that we call it "hearing" 👂?
So creepy but I agree. He did NOT say talking.
Another thing that is confusing is that it sounds very clearly like he used three words there, not four as in 'talking to your wife'.
Additionally, unlike songs etc., this audio is very clear. And we have the lip movements too.