Modern computer technology can be a little hard to keep up with. This is particularly true since so many terms seem to have slightly different meanings depending on where you look up the definition. “Bot” most certainly falls into that category. For the purposes of this article, though, “bots” refers to virtual computer assistants that communicate using spoken language. Many of us have versions of these assistants in our own homes or as apps on our smartphones. Siri, Alexa, and Cortana are just a few examples, but there are many more.

While it’s impressive to many of us that these recent advances in AI (artificial intelligence) can produce language at all, the language they do produce is primitive at best. When a machine speaks, it still sounds just like a machine. So if technology has come far enough to produce machines that speak, why can’t they produce language that is more natural and native-sounding? As it turns out, there is one major reason why.

That reason can be explained very simply: writing software that understands and produces language is difficult – very, very difficult. This is true even for huge companies with virtually unlimited money to spend on research and development. Many of the developers building these bots use an XML-based standard called AIML (Artificial Intelligence Markup Language) and a technique known as pattern matching, which means the developers themselves have to anticipate what a human user might ask, then hard-code both the expected question and the bot’s response. This is extremely challenging and time-consuming, to say the very least.
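
To make that concrete, here is a tiny sketch of what pattern matching boils down to. It is written in Python rather than actual AIML, and the patterns and replies are invented purely for illustration: every question the bot can handle has to be guessed at and written out ahead of time, and anything the developer didn’t anticipate falls through to a canned apology.

```python
import re

# A handful of hand-written pattern/response pairs, the way an AIML "category"
# maps a PATTERN to a TEMPLATE. These examples are invented for illustration;
# they are not taken from any real assistant.
CATEGORIES = [
    (r"HELLO( .*)?",           "Hi there! How can I help you?"),
    (r"WHAT IS YOUR NAME",     "I'm a demo bot built from hard-coded patterns."),
    (r"MY NAME IS (.+)",       "Nice to meet you, {0}."),
    (r"WHAT IS THE WEATHER.*", "I only know the questions my developer guessed at."),
]

def respond(user_input: str) -> str:
    """Return the first template whose pattern matches the normalized input."""
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in CATEGORIES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    # Anything the developer failed to anticipate ends up here.
    return "Sorry, I don't have a pattern for that."

print(respond("Hello!"))                     # Hi there! How can I help you?
print(respond("My name is Ada"))             # Nice to meet you, ADA.
print(respond("Can you book me a flight?"))  # Sorry, I don't have a pattern for that.
```

Scaling that up to cover every phrasing, every topic, and every bit of slang a real user might throw at a bot is exactly why the work is so slow and so hard.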

Although technology has brought us a long way down the AI road, we’re still not as far along as you might think. Bots are pretty impressive when it comes to simple communication, but expecting a bot to understand modern jargon and the constantly evolving English (or any other) language exceeds what even the most talented developer can accomplish for now. In fact, we’re far from being able to carry on anything beyond the simplest conversations with even the most technologically advanced bots available today.

Still, what developers have accomplished is impressive. While our bots’ conversation abilities may be limited, just consider what they CAN do: remembering earlier conversations, “learning” from their users, storing a remarkable amount of information, understanding context (usually), and even changing the subject when it’s appropriate to do so.
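
To give a sense of what “remembering earlier conversations” and “understanding context” amount to under the hood, here is a deliberately simple, invented sketch: the bot keeps a transcript plus a few remembered facts, and a later reply can reuse them. Real assistants do this on a far larger scale, but the basic idea is the same.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationMemory:
    turns: list[str] = field(default_factory=list)        # full transcript, in order
    facts: dict[str, str] = field(default_factory=dict)   # e.g. {"name": "Ada"}

    def remember_turn(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")
        # Crude "learning": pull the user's name out of one known phrasing.
        if speaker == "user" and text.lower().startswith("my name is "):
            self.facts["name"] = text[len("my name is "):].strip(" .!")

    def greeting(self) -> str:
        # Context-aware reply: use a stored fact if we have one.
        name = self.facts.get("name")
        return f"Welcome back, {name}!" if name else "Hello!"

memory = ConversationMemory()
memory.remember_turn("user", "My name is Ada.")
memory.remember_turn("bot", memory.greeting())
print(memory.greeting())                   # Welcome back, Ada!
print(len(memory.turns), "turns stored")   # 2 turns stored
```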

So, while the language of bots may sound unnatural, it’s important to stop and consider how far we really want this technology to go. If a bot is capable of carrying on in-depth conversations using human-like communication skills (something even many people are incapable of), what else will it be able to do? Display negative emotions of its own, or evoke those emotions in its users, perhaps? Misinterpret or ignore instructions because of our tone of voice? Gather personal information and pass it along to other devices, maybe? Make decisions on its own that affect our lives, then act on those decisions? Or listen in on our private conversations, store that information, and reuse it at inappropriate times in the future?

If you’re frustrated because your bot doesn’t sound enough like a native speaker of your language, you might want to rethink that. You’re probably better off – at least for the time being – seeking intelligent conversation from another human being. Bots are handy devices, without a doubt, but let’s not be in a rush to make them too human. The possible ramifications are unsettling at best, and frightening at worst. When it comes to artificial intelligence, be careful what you wish for!