BRFR Cake Stop 'breaking news' miscellany


C R

Guru
Sure, as you say, it's got really good at writing code,

Not really. It's got really good at producing output that mashes together answers from stackoverflow and others. For small bits it mostly works well enough. For bigger stuff it really provides no more than a rough idea of what you need to do. It is barely more than glorified code completion.
 
OP
briantrumpet

Über Member
Not really. It's got really good at producing output that mashes together answers from stackoverflow and others. For small bits it mostly works well enough. For bigger stuff it really provides no more than a rough idea of what you need to do. It is barely more than glorified code completion.

OK, that's been oversold to me too then. It seems to be another case of users of AI needing to know their shît before using it as a sometimes-useful tool. And I'm back to my maths teacher in 1975 not letting us use mechanical adding machines until we could do the mental/paper arithmetic at least roughly.
 

icowden

Shaman
OK, that's been oversold to me too then. It seems to be another case of users of AI needing to know their shît before using it as a sometimes-useful tool. And I'm back to my maths teacher in 1975 not letting us use mechanical adding machines until we could do the mental/paper arithmetic at least roughly.

From my point of view, as C R said, it's good at generating bare-bones code to use. However, quite a lot of the time the code will have basic errors in it. Sometimes the AI can fix the problem; sometimes it gets in a loop where it fixes problem A but creates problem B, and when it fixes that it recreates problem A. I have found that giving it a new input will make it think differently, but you have to be able to read and understand the code to help it out.

I've also found that Gemini is better for code than ChatGPT.
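
A hypothetical sketch of the kind of "basic error" described above: plausible-looking generated code that only someone who can actually read it will catch. The function names and the bug here are invented for illustration, not taken from any real AI output:

```python
# Invented example of a subtle "basic error" an assistant might produce.

def average(values):
    # Looks fine at a glance, but divides by n - 1 instead of n,
    # so it silently returns the wrong answer for every input.
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)

def average_fixed(values):
    # Corrected version, with the edge case the generated code ignored.
    if not values:
        raise ValueError("empty input")
    return sum(values) / len(values)

print(average([2, 4, 6]))        # 6.0 -- wrong, but no error is raised
print(average_fixed([2, 4, 6]))  # 4.0 -- correct
```

The point being that nothing crashes: the buggy version runs happily and returns a wrong number, which is exactly why you need to understand the code rather than just paste it in.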
 
OP
briantrumpet

Über Member
What could possibly go wrong?

 
  • Laugh
Reactions: C R
OP
briantrumpet

Über Member
From my point of view, as C R said, it's good at generating bare-bones code to use. However, quite a lot of the time the code will have basic errors in it. Sometimes the AI can fix the problem; sometimes it gets in a loop where it fixes problem A but creates problem B, and when it fixes that it recreates problem A. I have found that giving it a new input will make it think differently, but you have to be able to read and understand the code to help it out.

I've also found that Gemini is better for code than ChatGPT.

Does Copilot do similar?

I'll admit to not using any of them for pretty much anything, other than amusement at giving them ridiculous questions, and am still of the (maybe erroneous) opinion that they are in effect Google wrapped up with a human-like language ability. It's always been the case that to get the best answers out of Google you need to learn how to ask it questions, and to do that you at least need to know something about the subject and how to use key vocabulary. It seems that's still the case with AI. The danger is that if you don't already know something, then the answer it gives you could leave you worse than ignorant, by giving you false information that you believe is accurate & useful.

 
  • Like
Reactions: C R
You've lost me. Advertising sells us stuff we don't need. It doesn't in itself pretend to do anything else. The AI industry is pretending to be able to do things it can't, using the LLM output of human-like language as the smokescreen. As it stands, a large chunk of it is a fraud: the step-change in its ability to mimic human language has been the prompt for its hyperbolic claims of 'intelligence'.

Sure, as you say, it's got really good at writing code, but it's really important that people realise what it's not good at, as well as the dangers of its overuse to humans' ability to evaluate stuff & to develop deep understanding.

Advertising has some standards, so there needs to be a grain of truth to the teeth whitening toothpaste, but I doubt many people really believe the full extent of many of the promised benefits seen in adverts.

Similarly I wouldn't expect an industry lobby group to be entirely truthful. Even then I'm not sure anyone has claimed that AI will solve everything, so I think it is a strawman to argue against that.

For now, I find copilot more effective than google, so that's progress in my eyes.
 

Pross

Regular
OK, that's been oversold to me too then. It seems to be another case of users of AI needing to know their shît before using it as a sometimes-useful tool. And I'm back to my maths teacher in 1975 not letting us use mechanical adding machines until we could do the mental/paper arithmetic at least roughly.

In my industry we’ve got a generation of ‘engineers’ that have grown up using design software and who are really knowledgeable on how the software works but have a complete lack of understanding of engineering fundamentals to know if the output is correct. I’m nowhere near as good on the software but can look at the outputs and get a good feel for if it is correct. I’ve seen some whoppers that could have cost someone a fortune had the design made it to site.
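
That "feel for whether the output is correct" can be made explicit as a simple cross-check against a back-of-envelope figure. This is only an invented sketch (the numbers, function name, and tolerance are all made up for illustration, not from any real design package):

```python
# Hypothetical sketch: flag software outputs that stray too far from
# a rough hand estimate, instead of accepting them blindly.

def sanity_check(software_result, rough_estimate, tolerance=0.5):
    """Return True if the software's number is within `tolerance`
    (as a fraction) of the back-of-envelope estimate."""
    diff = abs(software_result - rough_estimate) / abs(rough_estimate)
    return diff <= tolerance

print(sanity_check(12.1, 10.0))   # within 50% of the hand estimate
print(sanity_check(121.0, 10.0))  # an order of magnitude out: flag it
```

It's the same discipline as the mental-arithmetic point: the check is only as good as the person's ability to produce the rough estimate in the first place.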
 
  • Like
Reactions: C R

icowden

Shaman
Advertising has some standards, so there needs to be a grain of truth to the teeth whitening toothpaste, but I doubt many people really believe the full extent of many of the promised benefits seen in adverts.
But only in some areas. The adverts in mobile gaming seem to be totally immune to the ASA. Thus the vast majority of adverts have at least one of the following issues:
  • are entirely misleading (i.e. the game shown isn't the actual game)
  • tell outright lies ("No adverts, no wifi")
  • pretend gambling games aren't gambling ("earn money with testerapp")
  • contain inappropriate sexual or adult content, yet are shown in a game with a 12+ rating
Mobile gaming seems to be a deregulated no-man's-land.
 
OP
briantrumpet

Über Member
Advertising has some standards, so there needs to be a grain of truth to the teeth whitening toothpaste, but I doubt many people really believe the full extent of many of the promised benefits seen in adverts.

Similarly I wouldn't expect an industry lobby group to be entirely truthful. Even then I'm not sure anyone has claimed that AI will solve everything, so I think it is a strawman to argue against that.

For now, I find copilot more effective than google, so that's progress in my eyes.

OK, points taken to an extent, but obviously I don't think it's a strawman, otherwise I'd not have made the comparison.

And yes, sure, there is progress (AI's ability to scrape & assimilate data is impressive), but at the moment its claims are wildly overstated, and given its prominence and the centralised and undemocratic control it gives a few very rich people (see Musk's aim of literally rewriting history), I'm still on the decidedly sceptical side of the argument, not least because of the real risk it poses to people's ability to develop deep understanding and to assess AI's output. It's not AI that is going to kill humans, but humans who don't recognise AI's flaws.
 
OP
briantrumpet

Über Member
In my industry we’ve got a generation of ‘engineers’ that have grown up using design software and who are really knowledgeable on how the software works but have a complete lack of understanding of engineering fundamentals to know if the output is correct. I’m nowhere near as good on the software but can look at the outputs and get a good feel for if it is correct. I’ve seen some whoppers that could have cost someone a fortune had the design made it to site.

I suspect that experience will be replicated in multiple industries/disciplines, and why I think being open-eyed about the limitations of AI and being aware of what one needs to do to use its output profitably & safely is paramount.
 
  • Like
Reactions: C R

BoldonLad

Old man on a bike. Not a member of a clique.
Location
South Tyneside
In my industry we’ve got a generation of ‘engineers’ that have grown up using design software and who are really knowledgeable on how the software works but have a complete lack of understanding of engineering fundamentals to know if the output is correct. I’m nowhere near as good on the software but can look at the outputs and get a good feel for if it is correct. I’ve seen some whoppers that could have cost someone a fortune had the design made it to site.

The same thing has happened with calculators/computers vs mental arithmetic: there are whole generations who have no "feel" for numbers; if the machine spits out a number, they accept it.

No doubt, similar with car diagnostic software etc etc
 
  • Like
Reactions: C R