Artificial Intelligence is a disorienting subject. A few days ago, I realized for the first time the poverty of human language as a communication medium. It is incredibly imprecise; semantic ambiguity is ubiquitous. If it weren't for vast amounts of shared background knowledge between human speakers, many utterances would be unintelligible.

This accounts for the impossibility of creating computers that can understand the natural language used by human beings. As usual, nature has evolved a twisty, inelegant solution that works well enough. It is very hard to copy her ways.

Ah yes. A few random thoughts.

In this David Friedman lecture, David quotes Ray Kurzweil, who has given an estimate for when AI will catch up to human brain capacity and functioning.

If I remember correctly, it's less than fifty years (correct me if I'm wrong). In my opinion, this is horribly inaccurate. That estimate is way too soon.

I cite the lecture because I find it informative, and peeps have been yapping about David since the dawn of man. I'm sure I could have just quoted Ray directly, but I chose not to.

I would think we all appreciate the imprecision, for without it, these huge wank sessions (blogs) would not exist. Not to mention the vast amount of art. However, hardly anybody can formulate logic through a mathematical equation; and it is math that harnesses the very precision that language, in general, lacks.

I too find language terribly insufficient. Which is why I'm a numbers man, but I think the insufficiency is its strength, as well as its beauty.

Manipulation through words

These seem appropriate links for this entry.

David Friedman on the use of word choice to manipulate opinion.

Ilya Somin comments.

Interesting stuff, I thought.

Thinking along these lines is not new. Linguists have been arguing about the extent to which language influences thought for a long time.



How many Lojban speakers

How many Lojban speakers does it take to change a broken light bulb?

Two. One to figure out what to change it into, one to figure out what kind of bulb emits broken light.

Arthur, That's a great


That's a great illustration of syntactic and semantic ambiguity. Only background knowledge allows humans to parse a phrase like "change a broken light bulb".

There are an infinite number of such phrases.

... a little light humor....

I tend to ask, "How many ___ does it take to screw in a light bulb?" That usage arguably provides more entertaining semantic ambiguities -- provided you have a large enough light bulb.

I know I am not the first

I know I am not the first person to think these thoughts, but it is interesting to note that we get along pretty well despite such ambiguity.

I just use this as a reminder that evolution produces strange beasts, one further reason to remember that humans are arbitrary and irrational creatures.

I BEG your pardon?

it is interesting to note that we get along pretty well despite such ambiguity.

How DARE you say such a thing about my mother???



Groucho Marx

"Time flies like an arrow; fruit flies like a banana." - Groucho

Guaranteed to make any artificially intelligent robot attacking a starship explode with a syntax error.


Single words such as "lie" are lexically ambiguous all by themselves: a false statement, or putting oneself in a horizontal position?

The ambiguity is a way of packing the information into the smallest possible set of expressions.

"I like painting my models nude"

But English is exceptionally simple... too simple. In other languages it is much easier to communicate very subtly different meanings.

language of numbers, predicate logic, General Semantics

I too find language terribly insufficient. Which is why I'm a numbers man....

While I was reading The Zen of Physics, a physics prof. remarked that much of the science in the book was now dated. Then he remarked that even when it wasn’t, the book was somewhat controversial. This appears to be a common dynamic in physics: the results of a given experiment will become well acknowledged, reproducible and reliable – that is, everyone agrees on the numbers – yet will provoke endless debate when people try to describe in words what the experimental results demonstrate.

I recall a class in which we were given the task of translating conversational English into predicate logic. Consider the sentence "Everybody loves my baby, but my baby don't love nobody but me." We'd say something like this: there exist b and me, elements of the domain of bodies. For all x, element of the domain of bodies, x L b, where "L" is the relation "loves." And for all x, b L x implies that x = me. Instantiating the first formula at x = b, we learn that b L b – that is, if everyone loves my baby, then my baby loves my baby. And substituting this third formula into the second, we learn that b = me – that is, if my baby loves my baby, and my baby don't love nobody but me, then I must be my baby. Ah, the benefits of rigorous language!
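The derivation above can be written out symbolically (a sketch; L(x, y) reads "x loves y," b is my baby, and m is me):

```latex
\begin{align*}
&\text{(1)}\quad \forall x.\; L(x, b)
    && \text{everybody loves my baby}\\
&\text{(2)}\quad \forall x.\; \bigl(L(b, x) \rightarrow x = m\bigr)
    && \text{my baby don't love nobody but me}\\
&\text{(3)}\quad L(b, b)
    && \text{instantiate (1) at } x := b\\
&\text{(4)}\quad b = m
    && \text{from (2) at } x := b \text{, together with (3)}
\end{align*}
```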

But one example that tripped everyone up was the sentence "Ponce de Leon searches for the Fountain of Youth." People translated it as follows: there exists p, element of the domain People, and there exists f, element of the domain Objects, such that p S f, where S is the relation "searches for." And what is f? Oh, f is the Fountain of Youth. Then do you REALLY mean to say "There exists f"? Does this mode of language pre-suppose the existence of the objects in the sentence? And if we're striving for rigor, what does "exist" mean anyway?
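The problematic translation looks something like this (a sketch; S(p, f) reads "p searches for f"):

```latex
\exists p\, \exists f\; \bigl( p = \text{PonceDeLeon} \;\wedge\; f = \text{FountainOfYouth} \;\wedge\; S(p, f) \bigr)
```

The trouble is that leading existential quantifier on f: the formula flatly asserts that the Fountain of Youth exists, which the English sentence never claimed.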

Of course, many people have wrestled with how to add rigor to language. For example, in Tractatus Logico-Philosophicus Ludwig Wittgenstein sought to boil down the rules of language to seven propositions. His (greatly revised) thoughts triggered a discussion that has evolved into the school of General Semantics. Adherents seek out ways to express ideas in conversational English with some precision and clarity. According to Wikipedia,

Many General Semantics practitioners view its techniques as a kind of self-defense kit against manipulative semantic distortions routinely promulgated by advertising, politics, and religion, as well as those found in self-deception.

The goal is not only to avoid language patterns that muddle the hearer's thoughts, but to avoid language patterns that muddle the speaker's thoughts. After all, no one spends more time listening to my thoughts than I do; no one is more burdened by my conceptual weaknesses than me.

(But with my frequent postings here, I'm striving to change that. Why not share the load...? :-) )

This accounts for the

This accounts for the impossibility of creating computers that can understand the natural language used by human beings.

Well, for some definition of "understand." It's possible to eliminate ambiguity from a natural language, but a computer still wouldn't be able to understand it, because computers can't understand anything.

But this reminds me of an idea I had a few years back. As I understand it, the hard part of machine translation is the parsing. Parsing natural language probably requires strong AI, but generating translated text from a parse tree seems like it should be a tractable problem. But parsing should also be tractable with human assistance, either by writing in an unambiguous subset of the source language, or by providing disambiguating feedback during the parsing process.

This wouldn't be terribly useful if you didn't know Polish and needed to translate a Polish document you found on the Web, but it would be very useful if you wanted to write a document and then translate it to many different languages.
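The human-assisted scheme above can be sketched as a toy program. Everything here is hypothetical: the parse trees, the choose callback, and the Polish glosses are illustrative stand-ins, not real machine-translation output. The point is only the shape of the pipeline: enumerate the parses, let a human pick one, then generate from the chosen tree.

```python
# Toy sketch of human-assisted translation: parse ambiguity is resolved
# by a human, and only then is target-language text generated.

# Each parse is a nested tuple showing what "broken" attaches to.
PARSES = {
    "broken light bulb": [
        ("bulb", ("broken", ("light", "bulb"))),   # a light-bulb that is broken
        ("bulb", (("broken", "light"), "bulb")),   # a bulb of broken light
    ],
}

def generate_polish(tree):
    # Illustrative glosses keyed by parse tree; a real generator would
    # walk the tree and apply target-language grammar rules.
    glosses = {
        ("bulb", ("broken", ("light", "bulb"))): "zepsuta żarówka",
        ("bulb", (("broken", "light"), "bulb")): "żarówka zepsutego światła",
    }
    return glosses[tree]

def translate(phrase, choose):
    """Parse, let a human disambiguate via `choose`, then generate."""
    candidates = PARSES[phrase]
    if len(candidates) == 1:
        tree = candidates[0]
    else:
        tree = candidates[choose(candidates)]  # the human-feedback step
    return generate_polish(tree)

# Simulated human who picks the ordinary reading:
print(translate("broken light bulb", choose=lambda c: 0))
```

In a real system the `choose` callback would present the candidate readings to the author interactively, which is what makes this workable for authoring documents but useless for translating a document you merely found.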

because computers can't

because computers can't understand anything.

Well, that depends. I may be misquoting you, but I'll throw this out there anyway.

I think you mean that in the here and now computers can't understand anything (Penrose likes to call this faculty "insight"), and I would agree 100%. Or maybe you mean it as it pertains to our language, or any language? No matter, I'll sidetrack here for a bit.

However, if we are talking about the potential of computer understanding in the future, that raises the question: are our brains an algorithmic process? If so, computers could potentially understand just as we can. The base-level process would be similar if not exactly the same: a functional equivalence. The difference between a Turing machine and your brain, then, would be a matter of complexity.

So, if our minds are algorithmic, the algorithm is of unimaginable complexity. Which is why the prediction by Ray Kurzweil that I cited earlier is, I believe, absolutely wrong.

It is my thought that we aren't even CLOSE to getting that far with AI, but I have a hunch our brains work like any other computer at the most basic of levels.