To answer Milzy's question about propaganda and misinformation: it's been shown repeatedly that you can get ChatGPT, and other systems, to produce racist and otherwise offensive output fairly quickly. I don't know about dangerous, but some of the reports are concerning, like a chatbot answering yes when asked "Should I kill myself?"
https://www.google.com/amp/s/amp.th...eply-sorry-for-offensive-tweets-by-ai-chatbot
It's going to have an impact on journalism and on students' writing fairly quickly. Sports Illustrated have just sacked their CEO for publishing articles written by AI, complete with AI-generated photos for the made-up journalists' profiles:
https://www.independent.co.uk/news/...insohn-sports-illustrated-fired-b2463446.html
Microsoft made a bit of an error with their chatbot in that it was open for all to use and it fed the dialogue back into the training process. Plus, I suspect, they used a much smaller training dataset than ChatGPT, which meant that if a significant portion of the training data was racist or otherwise inappropriate then the output reflected that. ChatGPT has a huge dataset, plus interaction with the tool is gated behind sign-up (and, I suspect, policed far more rigorously than Microsoft's was).
There is quite a lot of research around bias in AI. Fundamentally this is a result of bias in the dataset, which in turn reflects bias in how the dataset was assembled and, ultimately, bias in society more generally. A better-known example was an AI guessing people's jobs from photos, which showed a strong bias based on ethnicity.
I already use ChatGPT at work, typically where I have to write a long and involved email or document and, on reading it back, find it clunky and disjointed. ChatGPT has worked quite well at times to produce a better-constructed mail. I view this as editing rather than creating, similar to asking a colleague for their opinion on a mail before sending. I think it will be a long time (possibly never) before I feed ChatGPT the original email or document scope and just send off whatever it deems a suitable reply.
ChatGPT is astonishing, but underlying all the complex models is still the same premise: the model provides responses based on probabilities derived from its training data. Wider context and societal nuance will at best be skewed towards the norms implicit in that data. I think a great example of this is when I use it to write emails to my boss. They end up sounding very enthusiastic and corporate, with a lot of buzzwords. This might play OK in the US, but it comes across as hugely sarcastic to the British ear, a nuance that I find hilarious and my boss finds slightly irritating. ChatGPT is so good that I can ask it to dial back the enthusiasm, and it does then produce something more aligned with a UK email, but where is the fun in that?
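To make that premise a bit more concrete, here's a toy sketch of what "responses based on probabilities derived from previous data" means in practice: the model scores the possible next words given what has come so far and samples one. The vocabulary, context string, and probabilities below are entirely made up for illustration; real models do this over tens of thousands of tokens with billions of learned weights, but the basic loop is the same.

```python
import random

# Invented probabilities for what might follow a given phrase, standing in for
# the patterns a large model has absorbed from its training data.
toy_model = {
    "I am pleased to": {"announce": 0.4, "confirm": 0.3, "inform": 0.2, "report": 0.1},
}

def next_token(context, model_probs):
    """Pick the next word by sampling from the model's probability distribution."""
    tokens, probs = zip(*model_probs[context].items())
    return random.choices(tokens, weights=probs, k=1)[0]

print(next_token("I am pleased to", toy_model))  # usually "announce" or "confirm"
```

If the training data is full of enthusiastic corporate American emails, those are the continuations that get the high probabilities, which is exactly why my drafts come out sounding the way they do.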