swee'pea99
Member
Is this a fail?
"Sorry Dave, I did that."
(I do hope that is for real!!!)
Shades of Jackie/Jockie Wilson on TOTP.
OK, this is genuine I think. ChatGPT doing as good a job as it did for MattGPT.
It doesn't replace their normal modelled forecasts but is additional.

"AI as applied to weather forecasting is radically different to the physics based techniques currently widely used. There is no need to explicitly simulate the physics of the atmosphere; this is currently implicit in the re-analysis training data used. The user may not even know the impact of the local or large scale physical effects upon the process.
The major difference between forecasting techniques is:
• traditional weather forecasting models assume a reasonably accurate physical model of the Earth system. The biggest unknown is the initial conditions from which to start the forecast.
• AI and ML do not explicitly handle the physics of the atmosphere. The aim is to define an empirical model that links observational data at one time with similar data for a short time later. Currently the empirical model has been developed (or trained) on reanalysis fields as observations and compared with reanalysis fields for the later time. Having found this empirical model it can be used iteratively to compute forecast fields at successive time steps into the future.
Before an AI forecasting system can be implemented it has to have been trained on a large amount of observed data. At ECMWF the AIFS is trained mainly to produce six hour forecasts."
(https://confluence.ecmwf.int/display/FUG/Section+2.2+Artificial+Intelligence+Model)
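The iterative rollout the ECMWF text describes can be sketched in a few lines. This is a toy illustration, not the AIFS itself: the real trained model is replaced here by a hypothetical `step_model` stand-in, and the "reanalysis" field is just a dummy array. The point is only the structure: a learned six-hour step applied repeatedly to reach longer lead times.

```python
import numpy as np

def step_model(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained empirical model that maps the gridded
    atmospheric state at time t to the state at t + 6 h. Here it is
    a toy linear damping, purely for illustration."""
    return 0.99 * state

def rollout(initial_state: np.ndarray, n_steps: int) -> list:
    """Apply the 6-hour step model iteratively, as described above,
    to compute forecast fields at successive time steps."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(step_model(states[-1]))
    return states

# A 10-day forecast at 6-hour steps is 40 iterations of the step model.
analysis = np.ones((4, 4))  # dummy "reanalysis" initial field
forecast = rollout(analysis, n_steps=40)
```

The contrast with a physics-based model is that nothing in `step_model` encodes atmospheric dynamics; whatever physics it captures was absorbed from the reanalysis pairs it was trained on.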
A new report on the accuracy of AI overviews
https://futurism.com/artificial-intelligence/google-ai-overviews-misinformation
"The improvement between Gemini 2 and Gemini 3 may be papering over a more serious flaw. In the Oumi analysis, Gemini 2 provided answers that were “ungrounded” 37 percent of the time, meaning the AI Overviews cited websites that didn’t support the information they provided. But with Gemini 3, this jumped to 56 percent. On top of suggesting that the AI is pulling facts out of thin air, ungrounded responses make it difficult for users to verify the AI’s claim."