Thoughts on the (Near) Future of AI Applications

February 25, 2023 5 min read


Since the release of ChatGPT by OpenAI, interest in AI applications has skyrocketed, and rightly so. ChatGPT suddenly presented us with a system that might even pass a not-too-strictly administered Turing test. If you have ever wondered how it would feel to meet an alien life form, a conversation with ChatGPT is the closest thing you could have experienced. ChatGPT has demonstrated that we are entering a new era of innovation based on AI technology, one that will bring significant and unpredictable changes to how we do things on this planet. Such significant impending change sparked our curiosity here at NOVEDGE and propelled us to explore how this technology works and how it could apply to our business. So far, we have come up with four different apps to help customers with Tech Support, Product Selection, Computer Configurations, and General Design Software Use suggestions. These are prototypes, but they can give you an idea of how powerful this technology is (and how much better it will become), and how easily it can integrate into various applications (it took us literally days to develop our prototype apps).

As interesting as these apps are, what I wanted to write about today is what I believe we have learned from our experience with AI, and what we see as the most impactful applications for the near future.

Even the smartest people that ever lived made mistakes. Why would computer intelligence be any different? Being intelligent does not mean being infallible.

by Cristiano Sacchi

First of all, a lot is being written about the fact that AI chatbots and related applications sometimes make silly mistakes and maybe even “embarrass” themselves. Well… as per the “I” in AI, if we see them as “intelligent” systems, we have to accept that they will always be fallible. Even the smartest people that ever lived made mistakes. Why would computer intelligence be any different? Being intelligent does not mean being infallible. I see this as an expectations mismatch. We have become accustomed to “algorithmic computing” (i.e., pretty much all the HW/SW that runs our world), and in the world of algorithms, any mistake/miscalculation/failure is called a “bug” and should be (and generally is) corrected. When you leave the world of well-defined algorithms, mistakes are not necessarily bugs - in the case of human intelligence we call them “lapses of judgment,” or in cases of presumably preventable misjudgment, “stupidity.” So, with AI we are in a different domain where, over time, the system will become smarter and smarter but never infallible.

Another important consideration that I think we should keep in mind is that intelligence is generally applied to problems with numerous unknown and/or partially known variables. We all live our lives in a universe of unfathomable complexity that we are incapable of predicting beyond a very short period of time. All we have to navigate this environment is our intelligence, and despite our best efforts we still constantly misjudge situations, with all the related negative consequences. AI, or any other “I”, cannot possibly be immune to this, because here the issue is not the ability of the system to solve problems but the fact that problems must be “solved” despite a structural lack of accurate inputs. In the world of algorithmic computing, most of the time, we simply refuse to “solve” under-constrained problems, labeling them as unsolvable. And when we do “solve” them, we typically tackle them with some sort of statistical/heuristic method that we fully expect to provide partial/inaccurate results, and by doing that we avoid the expectations mismatch that seems to be going on with AI.

If we can successfully get past this initial phase of expectations mismatch without tarnishing AI’s image before it can prove itself, we will probably get to a place similar to where we are with things like the weather forecast: we understand it is imprecise and occasionally plain wrong, but we are really glad we have it.

So, what could the near future bring when it comes to applications of AI? If you listen to what’s being discussed right now, the key issue is that we should be very worried about all the fake things AI will produce that will be indistinguishable from the real ones. Not to mention the issue of AI-generated fake content being inadvertently (or maybe even intentionally) fed back into AI, which would then generate an overwhelming amount of realistic garbage. In addition, AI is also supposed to, in no particular order, decimate the middle class by rendering many mid-level jobs redundant, render Google irrelevant because Microsoft made a deal with OpenAI, and ruin online search for all of us because it can make mistakes that we will not be able to spot, and so on… These are all very legitimate concerns, and given our recent experience with social media - which was supposed to happily connect us all and quickly turned into an untamable multi-headed beast that is corroding our social fabric - we should surely worry about them. Let’s call that the “dark side of AI”. But what could the “good side” bring?

My personal expectation is that AI will not have as much of an impact on search in the near future as is currently being talked about. I may be naïve here, but when it comes to finding things on the internet, Google works just fine. I do not remember the last time I scrolled to the third page of results, and I do not see the second page very often either. Sure, anything can be done better, and AI will end up enhancing search somehow, but in its current ChatGPT form I do not think search is where it will make the biggest difference right away.
In my opinion, ChatGPT is exactly what its name says: an unbelievable system you can chat with. I compare ChatGPT to a hypothetical 13-year-old savant who somehow managed to memorize the entire web and can talk about it. This is technology you can have a conversation with, and that, I think, is the lowest-hanging fruit. As such, I am not sure that search is the first place where I would like to see it. I do not feel I need it to find web pages better/faster: I want to talk to it. So, I want it to be the next iteration of Alexa, Siri, and Google Assistant, and I want it with me all the time. It can answer almost any question, it understands multiple languages and can translate between them (including computer languages), and it helped me write this blog post, too. I think the first jobs it will threaten will be those where people talk to people to provide information, entertainment, and possibly even some sort of emotional support (yes, it may even help people feel less lonely). Our little experiment confirms that, in their current form, AI language models are great at front-line support, where the interaction is not about finding specific information but about having a useful conversation about a topic to gather information. And that, I think, will be the first wave of applications.

What happens after that? At least a decade of breakneck innovation that is completely unpredictable. Let’s be watchful to make sure that the “dark side” does not prevail and enjoy the spectacular applications that this technology will deliver.
