AI in Data Analytics – 4 Hows and a Peek into the Future

Okay, we’ll do it, too: Let’s talk about AI in data analytics – specifically, let’s talk about why Large Language Models (LLMs) such as GPT and Gemini don’t deal well with data, and how they will eventually be able to.

AI in data analytics TLDR: LLMs are artificial neural networks built on relationships and association probabilities. The issue is: business data is almost never based on association probabilities – it’s causal, because that’s its point. So when you feed GPT-3 a bunch of numbers, it will look in its dataset for “associated numbers” … which is why it keeps making things up. Depending on what you want to do, a simple background analysis algorithm will support you a lot better. Eventually, though, we may see AI use association probabilities to connect events and outcomes with historical, numerical data – which is when the impact will be galactic and the predictive capabilities will approach deterministic levels … happy days.

Recently I asked OpenAI’s ChatGPT to assess a text I wrote. The answer was detailed, fair, coherent and intelligent – so basically it was amazing.
GPT is a Large Language Model or LLM which, if you boil it down to the very basics, is an algorithm that has learned how to speak. And while the global hype about this AI’s capabilities is justified, many are now trying to use it in different contexts. One of them is our field of expertise – data analytics, planning and forecasting.

And let me tell you, attempts to integrate GPT variants or API calls into forecasting models have, so far, failed spectacularly. So maybe we need to talk about what LLMs actually are and how they operate.

1. How LLM AIs function and why they’re not the answer

LLMs are artificial, human-corrected neural networks. They are, as mentioned above, algorithms designed to build a synapse-like web of probable relationships and associations, which are corrected either by comparison to real-life human usage or simply by human editors. They also rely heavily on datasets which are man-made and therefore error-prone.

LLM AIs are association professionals: they play a gigantic game of association with n-grams, letters, words, phrases, paragraphs and so on. If you feed an LLM a letter, say E, it might associate the most common things about E: the most common words beginning with E, ending with E, the words containing the most Es, as well as words loosely related to E, such as ‘alphabet’ – and, of course, everything connected to those in turn. It ‘connects the dots’ that it considers to have the highest probability of being associated, based on previous human usage in its dataset, and finally provides an answer from the information it deems most relevant. In doing so, it closely resembles what we consider linguistic intelligence. It can also spot opposites and easily identify outliers in the association net – it thus develops something like a rhetorical logic.
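
To make that association game concrete, here is a deliberately tiny Python sketch – a bigram counter, not a real LLM. It merely learns, from a toy corpus, which word most probably follows another, and then ‘writes’ by chaining the strongest associations:

    from collections import Counter, defaultdict

    # Toy illustration, not a real LLM: learn bigram association
    # counts from a tiny corpus and "write" by always chaining
    # the most strongly associated next word.
    corpus = ("the model predicts the next word "
              "the model associates most strongly").split()

    # Count how often each word follows each other word
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def most_likely_next(word):
        """Return the most probable follower of `word`, if any."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    # Generate a short continuation purely from association counts
    word, output = "the", ["the"]
    for _ in range(4):
        word = most_likely_next(word)
        if word is None:
            break
        output.append(word)

    print(" ".join(output))  # -> "the model predicts the model"

An actual LLM replaces the counting with a trained neural network over far richer context than single words, but the principle is the same: predict whatever is most probably associated next.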

However, language is not mathematically logical. It’s entirely based on relationships, association, context and convention. And data simply isn’t – because that’s its point. Data analyses are meant to be objective, interchangeable and mathematically correct. Data whose meaning changes with context is incredibly difficult to utilise effectively, which is why we standardise formats and find clever ways to extract clean, objective truths wherever we can.
The point of collecting and analysing data is always to draw conclusions and make predictions from it. For that, we need valid, clean and correct data that is mathematically causal – to the point where it’s a numerical representation of cause-and-effect chains. The utilisation of those chains is what drives our thirst to collect and understand data – it is not a self-serving effort.

2. How would we even use AI in data analytics?

And perhaps this is where we need to start the discussion: What do we actually want AI in data analytics to do for us? Don’t be fooled, it’s a rhetorical question: Obviously, we want artificial intelligence to do the work instead of human intelligence – if it can do a better job, that is.

In the context of controlling software, that means AI’s job is to analyse the data, draw conclusions from it and make predictions based on it within the context of current economic, cultural and political conditions – taking into account the occasional Suez Canal crisis, too. And then, with all that in mind, to increase or decrease budgets by an appropriate degree.

Large Language Models are not yet able to analyse data effectively – because instead of mathematical causality, they look for linguistic associations.
And really, this is the reason why we’re so enamoured with them. A technical reproduction of linguistic intelligence eluded us for so long that we have forgotten that mimicking mathematical intelligence is essentially the basis of computing. After all, if you ‘prompt’ a decent calculator with a mathematical formula, it will simply answer correctly straight away.

Sticking with that example – asking a Large Language Model to analyse your business data is like asking a calculator to tell you a joke. It’s not designed to do that, and it won’t excel at it – no pun intended.

3. How will LLMs impact business intelligence and forecasting?

LLM AIs shine whenever it comes to extracting information from human communication patterns. They are amazingly useful for attaining conceptual knowledge and spotting trends. For instance, an LLM that relies on a contemporary dataset can quickly give you insights into economic, political or cultural trends at a global level – use it to obtain background information for your planning processes. What will be the most likely fashion trends next year? What is the most talked-about kitchen appliance? What are common issues people voice about our product or a competitor’s? Because global markets are so heavily influenced by human psychology, you may also succeed in determining the trust a population has in a specific market or currency, which can directly benefit your budget planning by enabling you to make use of currency effects.
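
As a sketch of what such background research could look like in practice, here is a minimal Python example using OpenAI’s official client library. The model name and the prompts are illustrative assumptions, and the answer is qualitative background material – not something to feed into a forecast unchecked:

    # Minimal sketch: querying an LLM for qualitative trend background.
    # Assumes the official `openai` Python package and an OPENAI_API_KEY
    # environment variable; the model name is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute a current model
        messages=[
            {"role": "system",
             "content": "You are a market researcher summarising consumer trends."},
            {"role": "user",
             "content": "Which kitchen appliance categories are most talked about "
                        "right now, and what complaints do buyers voice most often?"},
        ],
    )

    print(response.choices[0].message.content)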

In short: Use LLMs for what they’re designed for – understanding human communication.

When it comes to planning and forecasting software, there may be a gimmicky API call that puts comments into a plan or provides context for a version, but nothing revolutionary, really. Some speculation: in the mid to long term, we could see LLMs that do what humans do – differentiate between objective data and associative probabilities. And once that is possible, we may see the advent of AI that can use associative probability functions to calculate the causal chain reactions of events within the context of human interaction, communication and psychology. Basically, algorithms that draw comparisons to historical events, analyse historical data and make predictions based upon it – at a level more detailed and more insightful than any human could ever manage.
Mind you, the processing power needed for that is immense, and the ethical implications are incalculable – so that won’t be publicly available for a while.

In the meantime, using comparatively simple statistical models in the background of your data landscape to spot outliers and calculate the probability of causal relationships is likely to be a much better bet. That is not to say, though, that making your system future-proof isn’t a priority right now.
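
What such a background model can look like, sketched in a few lines of Python: a z-score to flag outliers, and a correlation test as a first screen for relationships worth investigating. The figures and variable names are invented, and a significant correlation is of course only a hint at a causal relationship, not proof of one:

    # Sketch of a simple background analysis: z-score outlier detection
    # plus a correlation significance screen. Data is invented.
    import numpy as np
    from scipy import stats

    revenue = np.array([102.0, 98.5, 101.2, 99.8, 100.4, 135.0, 100.9, 99.1])
    ad_spend = np.array([10.1, 9.8, 10.0, 9.9, 10.2, 13.5, 10.1, 9.7])

    # Flag values more than 2 standard deviations from the mean
    z_scores = np.abs(stats.zscore(revenue))
    outliers = np.where(z_scores > 2)[0]
    print("Outlier indices:", outliers)

    # Pearson correlation with a p-value as a significance screen
    r, p_value = stats.pearsonr(ad_spend, revenue)
    print(f"r = {r:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Relationship worth investigating as potentially causal.")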

The fact remains: When data-driven, predictive AI with linguistic capabilities and enough processing power comes around, anyone who isn’t able to integrate it immediately will just get outsmarted at every step by the competition and the market at large.

4. How should we prepare for AI in data analytics?

Fortunately, preparing for that event is rather simple and coincides with good business practice in general: get your data in order, implement an effective business intelligence system with planning capabilities, minimise the manual effort needed and retain open APIs for the eventual integration of truly predictive AI. Need help with that? Well, that’s what we’re here for. Simply get in touch! We’ll be more than happy to help.

[Image: AI in data analysis – what does the future look like?]