AI fails


Psamathe

Veteran
Long term, remember the metaverse ... so important that Facebook even changed its name and invested US$46bn. But compared to that US$46bn on the metaverse, Zuckerberg is only pushing US$15bn into AI. Sign of confidence, or another of the many fads that are going to change the world?

AI has become a very overused term. It now seems to apply to basic machine learning or just pattern recognition, as every software product tries to jump on the bandwagon. But bandwagons pass, disappearing into the distance, as those who poured vast sums in lick their wounds.

Had a bit of an "exchange" with Adobe recently, as their photo DAM/editor was flagging images as having been edited (by their own software) using AI when they hadn't been. Initially their line was "You need to distinguish between Generative AI and just AI", which was drivel; eventually it became "Investigations have shown a bug, which we'll be fixing".

Not making predictions, but it wouldn't be the first computer/IT bubble to burst.

Ian
 
OP

briantrumpet

Legendary Member
Long term, remember the metaverse ... so important that Facebook even changed its name and invested US$46bn. But compared to that US$46bn on the metaverse, Zuckerberg is only pushing US$15bn into AI. Sign of confidence, or another of the many fads that are going to change the world?

AI has become a very overused term. It now seems to apply to basic machine learning or just pattern recognition, as every software product tries to jump on the bandwagon. But bandwagons pass, disappearing into the distance, as those who poured vast sums in lick their wounds.

Had a bit of an "exchange" with Adobe recently, as their photo DAM/editor was flagging images as having been edited (by their own software) using AI when they hadn't been. Initially their line was "You need to distinguish between Generative AI and just AI", which was drivel; eventually it became "Investigations have shown a bug, which we'll be fixing".

Not making predictions, but it wouldn't be the first computer/IT bubble to burst.

Ian

I still think that AI is in effect Google with knobs on, plus an extremely good ability to mimic human language. There's absolutely no doubt that the current models are extraordinary in their ability to crunch data (hence the language ability, and the speed at which they can code, for instance), but five-year-olds still have quite extraordinary abilities too, such as being able to tell the difference between 'blueberry' and 'blueberrby'.

FWIW, if you read up on infant cognition, even before language, toddlers have a concept of 'one' and 'two' ('three' takes a little while longer). If that's not miraculous, I don't know what is. And yes, I'm being entirely serious: it's just one tiny example of how the human brain has evolved to program itself to survive and flourish in an infinitely complex universe.
 

BoldonLad

Old man on a bike. Not a member of a clique.
Location
South Tyneside
I still think that AI is in effect Google with knobs on, plus an extremely good ability to mimic human language. There's absolutely no doubt that the current models are extraordinary in their ability to crunch data (hence the language ability, and the speed at which they can code, for instance), but five-year-olds still have quite extraordinary abilities too, such as being able to tell the difference between 'blueberry' and 'blueberrby'.

FWIW, if you read up on infant cognition, even before language, toddlers have a concept of 'one' and 'two' ('three' takes a little while longer). If that's not miraculous, I don't know what is. And yes, I'm being entirely serious: it's just one tiny example of how the human brain has evolved to program itself to survive and flourish in an infinitely complex universe.

To the best of my knowledge (not a high bar, I admit), we do not actually fully understand how human intelligence, memory, reasoning, etc. work; therefore, it is unlikely we have managed to replicate them in AI.
 
OP

briantrumpet

Legendary Member
To the best of my knowledge (not a high bar, I admit), we do not actually fully understand how human intelligence, memory, reasoning, etc. work; therefore, it is unlikely we have managed to replicate them in AI.

Quite correct. The often-quoted opinion is that we know more about the universe than we do about the human brain and its 86 billion neurons. Despite incredible advances in neurology and cognitive science, central questions such as 'what is consciousness?' remain almost wholly unanswered, not least because they are philosophical as well as scientific. (Try asking yourself the question "Who am I, how do I know who I am, and where in my body is the actual thing that I think of as me?" and you end up going down to the pub instead and finding a different, probably drunker 'me'.)
 
OP

briantrumpet

Legendary Member
And there I was thinking that at least it should be good with numbers...

[attached screenshot: an AI getting its numbers wrong]
 
True ... "shitification". So yes, Copilot Pro does a better job of searching in a fraction of the time.
If it filters out the right results. If not, you're back to square one, or you're relying on false information. (And yes, both can happen during a normal Google search too; I'm just pointing out that just because it's AI doesn't mean it's always better.)

I haven't used Deepseek, or asked any question critical of the Chinese government. But saying one system fails to answer one question doesn't negate the capability of other AI services with all questions.
That's correct, you didn't; you did miss the point though. First off, my point is that AI now is doing exactly what Google did before it got too big (and Yahoo and AltaVista before it). The second and more important point is that AI shows you what its programming wants to show you. I'm not going to add a conspiracy theory about Microsoft's, Google's or Apple's reasons, but let me just add one more thing: ask something like "how do I kill Google Assistant?" on an Apple phone and you get a detailed answer, and likewise "how do I kill Siri?" on an Android phone. The same Siri question on an Apple phone gets "I don't know how to respond to that", and similarly Google doesn't like to explain how to disable its own software either.

Innocent at first glance, but the wider picture is more worrying, especially as AI gets better and it's getting harder to tell what is AI and what is not.

I don't find Copilot to be very wrong - it can miss things, but you have control over the scope of the search from the message box. My experience is that if you phrase your questions in precise language, then Copilot Pro is generally accurate, and faster than Google searches, precisely because of the "shitification" that you mentioned yourself. Other people in my workplace agree with me.
Which is again exactly what people said when Google started to get less precise. Just to be clear, I'm talking about AI in general; I'm not singling out Copilot.
 
OP

briantrumpet

Legendary Member
Haha, at first I thought "That's an unusual roundabout!", then twigged that it's an AI creation, with phantom lanes and islands, and zebra crossings going in random directions. Why not just use a photo of an actual Indiana roundabout?

[attached: the AI-generated roundabout image]
 

Regular.Cyclist

New Member
AI is good when you feed it the data you wish it to assess. Ask it a question where it has free rein to find its own sources and you get failures.

I have used Copilot for the following with good success.
  • Producing minutes for a meeting from a transcription.
  • Writing my own job description from a really dated existing one and one closer to where it needs to be.
  • Writing a terms of reference for a specific group where lots of valid information is available.
Generative AI is hopeless, though, if you want realism in what it produces.
 

Psamathe

Veteran
An interesting article about AI hallucinations and why they probably cannot be resolved. I found it interesting as it puts part of the cause of the hallucinations down to the mathematical processes used in LLMs, and not only to the rubbish training data.
Why OpenAI’s solution to AI hallucinations would kill ChatGPT tomorrow
The paper provides the most rigorous mathematical explanation yet for why these models confidently state falsehoods. It demonstrates that these aren’t just an unfortunate side effect of the way that AIs are currently trained, but are mathematically inevitable.

The issue can partly be explained by mistakes in the underlying data used to train the AIs. But using mathematical analysis of how AI systems learn, the researchers prove that even with perfect training data, the problem still exists.
It's from The Conversation, so it's neutral, with no axe to grind about AI being good or bad.
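(For anyone curious about the maths: as I read it, the paper's core argument is a reduction, so treat this as my paraphrase rather than a quote. Generating a whole valid statement is at least as hard as the simpler yes/no task of judging whether a given statement is valid, which ties the two error rates together, roughly:

    generative error rate >= 2 x "is-it-valid" classification error rate

And since no classifier can be perfect on facts that turned up rarely or never in its training data, some rate of confident falsehoods survives even with perfect training data.)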

Ian
 
OP

briantrumpet

Legendary Member
An interesting article about AI hallucinations and why they probably cannot be resolved. I found it interesting as it puts part of the cause of the hallucinations down to the mathematical processes used in LLMs, and not only to the rubbish training data.

It's from The Conversation, so it's neutral, with no axe to grind about AI being good or bad.

Ian

Thanks. Very interesting. It doesn't sound like there's an easy (commercial) solution.

The underlying problem is that humans have a tendency to undervalue/disparage doubt. I know someone who used to have the grand title of 'Head of Climate' at the Met Office, and she said that although they calculate confidence ratings for every forecast, the feedback from the public was that they didn't want them published alongside the forecasts. So they didn't. I'd have preferred that they educated the public on the unavoidable doubt inherent in any system involving turbulence.

I've been mulling for a while what the Briantrumpet School of Excellence's motto would be, and the first part of it would be "Embrace doubt". (The mulling was in response to seeing that one extremely large local comprehensive school's three-word motto was "Learn - Progress - Grow". Meaningless tripe.)
 

Psamathe

Veteran
It doesn't sound like there's an easy (commercial) solution.
...
In my mind I try to distinguish between machine learning and artificial intelligence, though as it's a continuum I'd find it hard to argue a lot of cases as being one or the other. AI has become a marketing requirement for anything software: if your app or software doesn't have "AI" then your market is very limited. But a lot of that "AI" is definitely at the "machine learning", or often even "machine pre-learnt", end of the scale, the AI being no more than a tick in a box that isn't real.

To me the article moved the LLM more in the "machine learning" direction on the continuum: basically a lot of statistical analysis.

...
The underlying problem is that humans have a tendency to undervalue/disparage doubt. I know someone who used to have the grand title of 'Head of Climate' at the Met Office, and she said that although they calculate confidence ratings for every forecast, the feedback from the public was that they didn't want them published alongside the forecasts. So they didn't. I'd have preferred that they educated the public on the unavoidable doubt inherent in any system involving turbulence.
Personally I find the weather probabilities very useful. The UK Met Office do publish the probability of rainfall on their forecasts. Unfortunately I find the ICON (German) and even KNMI (Dutch) forecasting for the UK more accurate than the Met Office's, and the sources I use for those don't publish probabilities.
 
OP

briantrumpet

Legendary Member
Personally I find the weather probabilities very useful. The UK Met Office do publish the probability of rainfall on their forecasts. Unfortunately I find the ICON (German) and even KNMI (Dutch) forecasting for the UK more accurate than the Met Office's, and the sources I use for those don't publish probabilities.

Yes, indeed, re the rainfall % probability, but it's more the 'confidence index' that would be helpful in assessing whether or not the various models are converging.

I'm pretty sure I've mentioned it before, but Meteociel is really useful: you can Google "Meteociel xxx" for UK cities, and it gives you the raw output from five or six different models (including ICON) without human interpretation; if you look at the different models you can see the degree of convergence (or not) for yourself, giving a fair picture of 'confidence'.
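If you wanted to turn that eyeballing into a number, a crude 'confidence index' is just the spread of the models' outputs for the same place and time. A minimal sketch in Python (the model names and values below are made up for illustration, not real Meteociel output):

from statistics import mean, stdev

# Hypothetical rainfall forecasts (mm) from different models for the
# same city and day; illustrative values only.
forecasts = {
    "ICON": 4.0,
    "GFS": 5.5,
    "ECMWF": 4.5,
    "ARPEGE": 5.0,
}

values = list(forecasts.values())
# A low spread means the models are converging, i.e. higher confidence.
print(f"mean: {mean(values):.1f} mm, spread: {stdev(values):.1f} mm")

The smaller the spread relative to the mean, the more the models agree, which is exactly the 'convergence' you can see by eye on the site.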
 