It’s a common problem: one person says something, another hears the same words, yet each thinks the words mean something different. Sometimes they don’t even hear the same words, which only compounds the miscommunication.
The problem isn’t the speaker, the listener, or even their communication skills.
It’s human language.
(I’ll concede that maybe it’s the speaker’s fault for using one.)
Human languages are unreliable for a variety of reasons, unstated assumptions among them. Clearly, in any miscommunication, the fault lies in lost data integrity, and no spoken language was designed with this flaw in mind. Making matters worse, none were designed at all (Esperanto and other conlangs don’t count).
Clearly we need to implement some method of ensuring the integrity of intention across the communication. The logical conclusion is also the obvious one: we need a checksum.
My proposition is simple: you can continue to use your fallible language, but once you’ve finished speaking, you calculate a quick checksum of what you meant and say that too. Then your listeners can interpret what they heard, calculate their own checksum, and compare. If they get a different result, they can try some other possible meanings, and, failing that, ask for clarification.
This way, clarification only needs to be provided when it is necessary. If nobody queries you about what you meant to say, you can assume that they already know.
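To make the protocol concrete, here’s a minimal sketch in Python. It assumes (generously) that speaker and listener can both reduce an intended meaning to the same canonical string, and it borrows CRC-32 as a stand-in for whatever the real meaning-checksum algorithm turns out to be; the function names and example phrases are all hypothetical.

```python
import zlib


def checksum(meaning: str) -> int:
    """Reduce an intended meaning to a short checksum (CRC-32 as a placeholder)."""
    return zlib.crc32(meaning.encode("utf-8"))


def listen(spoken_checksum: int, candidate_meanings: list[str]) -> str:
    """Try each plausible interpretation until one matches the speaker's checksum."""
    for meaning in candidate_meanings:
        if checksum(meaning) == spoken_checksum:
            return meaning
    # No interpretation matched: only now do we have to ask.
    return "Sorry, what did you mean by that?"


# Speaker side: say the words, then append the checksum of the intent.
intent = "please take the bins out tonight"
spoken_checksum = checksum(intent)

# Listener side: check each interpretation they came up with.
print(listen(spoken_checksum, [
    "please take the bins out sometime this week",
    "please take the bins out tonight",
]))
```

The hard part, of course, is the canonicalisation step that this sketch waves away: both sides have to hash the same representation of the meaning, which is rather the whole problem.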
It’s foolproof too: anyone who isn’t able to determine the checksum, or isn’t even sure what they themselves mean, is a fool and not worth your time.
Now I just have to work out what the algorithm should be. I don’t expect that will take too long; after all, how difficult could it be?