As Dembski points out, our use of language is dependent on our grasp of syntax (the rules governing the ways words can be fitted together), semantics (the meanings of words in isolation and in context), and pragmatics ("what the intent of the speaker is in influencing the hearer by the use of language"). Dembski paraphrases Larson as arguing that "we have, for now, no way to computationally represent the knowledge on which the semantics and pragmatics of language depend." In other words, we cannot program a computer with all it would need to know in order for the computer to be able to use language in exactly the way humans use it.
Here is an example that demonstrates this fact. Consider the following sentences, which are examples of what are known as Winograd schemas (see this document for other examples):
The car crashed into the truck because it was going too slow.
The car crashed into the truck because it was going too fast.
Note that the two sentences are identical except for one word ("slow" vs. "fast"). Yet that single-word difference changes the referent of the pronoun "it," and with it the meaning of the entire sentence. Human beings have no trouble understanding what the pronoun refers to in each sentence, but a computer can only guess, because it lacks an understanding of semantics. It is due to this lack of semantic understanding that, as Dembski puts it, "Siri and Alexa are such poor conversation partners"!
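The point can be made concrete with a small sketch. The toy resolver below uses a common purely syntactic heuristic (pick the noun nearest before the pronoun); the hard-coded noun list and the function name are illustrative assumptions, not part of any real system. Because the two sentences are identical up to the pronoun, any resolver that ignores meaning must return the same antecedent for both, even though humans read them differently:

```python
# A naive, purely syntactic pronoun resolver: it picks the noun nearest
# before "it" as the antecedent (a stand-in for recency-based coreference
# heuristics). The noun list is hard-coded for this toy example.

def resolve_it(sentence):
    """Return the closest noun preceding 'it' in the sentence."""
    nouns = {"car", "truck"}
    words = sentence.lower().replace(".", "").split()
    pronoun_idx = words.index("it")
    # Scan backwards from the pronoun for the nearest noun.
    for word in reversed(words[:pronoun_idx]):
        if word in nouns:
            return word

slow = "The car crashed into the truck because it was going too slow."
fast = "The car crashed into the truck because it was going too fast."

print(resolve_it(slow))  # truck
print(resolve_it(fast))  # truck -- same answer, though humans read "car"
```

In the "slow" sentence the heuristic happens to agree with human readers (the truck was going too slow), but in the "fast" sentence it does not (humans understand that the car was going too fast). Resolving the second case correctly requires knowledge about how collisions work, not just the words on the page.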
This inability of computers to grasp the subtleties of human language not only demonstrates the challenge of creating machines that can imitate human cognition, but also shows just how complex human language really is. How such complexity could have evolved through pure chance would seem extraordinarily difficult to explain.
Image of computers from Wikimedia Commons