ChatGPT - a way to go for reliable information

mikem

Well-known member
Hade is actually the angle of inclination of a fault / rift / lode, rather than the feature itself.
Sorry, yes it's a feature of the rift, but that is already implied in the word.

Graph of uses of hading from 1800 to present:
 

bills

New member
A good point, but as an engineering geologist rather than a mathematician I have relied on Schaum's handbook of Mathematical Identities and the like for 40 years, without feeling the need to prove each one. I think it comes down to trust, checking, and using it where appropriate; if I can use something that is going to give me the right solution efficiently, then I do not see why it should not be used as a tool like any other piece of computer software.
 

alanwsg

New member
I was trying to solve a puzzle and asked it ...
Give me a 9 letter word that means rotting or decaying

The reply ...
One possible word that means rotting or decaying and has nine letters is "putrefying".
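
A minimal sanity check, purely illustrative (the candidate words below are my own, not from the thread), shows why that answer needs checking: "putrefying" has ten letters, not nine.

# Check whether each suggested answer really has nine letters.
candidates = ["putrefying", "festering", "perishing", "decompose"]
for word in candidates:
    print(f"{word}: {len(word)} letters")
# putrefying: 10 letters -- does not fit the puzzle
# festering / perishing / decompose: 9 letters each -- these do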
 

aricooperdavis

Moderator
A lot of these comments are the same as complaining that your microwave hasn't cleaned the bathroom
Absolutely. ChatGPT isn't trying to be a generalized artificial intelligence, but a language model. The fact that it feels like a generalized intelligence says a lot about what we consider intelligence to be.
 

ChrisB

Well-known member
On another forum I moderate (not caving related) we require new users to have their first post approved, to eliminate spammers. We are now getting very plausible replies to posts, written in the style of ChatGPT. The other give-away is that the threads replied to are 5 to 10 years old (they hope nobody notices) and although it's a UK forum, the IP is somewhere in Asia and the email address is typically in Oz. But if you didn't know about ChatGPT, you'd think they were from a real person. It's only a matter of time before they improve to the point where you can't tell.

PS - yes, we do have a captcha, but I think AI is solving that too
 

2xw

Well-known member
On another forum I moderate (not caving related) we require new users to have their first post approved, to eliminate spammers. We are now getting very plausible replies to posts, written in the style of ChatGPT. The other give-away is that the threads replied to are 5 to 10 years old (they hope nobody notices) and although it's a UK forum, the IP is somewhere in Asia and the email address is typically in Oz. But if you didn't know about ChatGPT, you'd think they were from a real person. It's only a matter of time before they improve to the point where you can't tell.

PS - yes, we do have a captcha, but I think AI is solving that too
A month or so ago researchers tasked one of these models with solving a captcha, and it went on TaskRabbit and hired a human to do it, telling them it was a blind person who couldn't solve captchas.


There are other AIs that can detect ChatGPT output, but obviously I can see it adding to your burden as a moderator!
 

pwhole

Well-known member
There are other AIs that can detect ChatGPT output, but obviously I can see it adding to your burden as a moderator!

Just look for all the post's with the inappropriate apostrophe's, and they'll be the human ones. 'No HGV's' ;)
 

aricooperdavis

Moderator
I appreciate I'm being a bit anal here, but I think it's worth reiterating as this is very much in the public consciousness and a source of much discussion (and scaremongering) right now:

ChatGPT is not an artificial intelligence, it's a language model. It's not piloting drones, winning games of chess, or dreaming of electric sheep. It's just very, very good at guessing which word(s) are likely to follow others.
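
As a rough, hypothetical sketch of what "guessing the next word" means (a toy bigram counter, nothing like the transformer networks ChatGPT actually uses, but the same basic task):

from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent follower. Given context, guess the most likely next word.
corpus = "the cave was dark and the cave was cold and the passage was narrow".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word`, or None if unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cave' -- seen twice after 'the', vs 'passage' once
print(predict_next("was"))   # 'dark' -- three followers tie, first-seen wins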
 

mikem

Well-known member
It may not be, but it is from a company potentially working towards that end (from their website): "Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems."
 

aricooperdavis

Moderator
I don't think OpenAI will produce an AGI. I'm not convinced it's possible, or particularly useful, and they're certainly not trying to develop anything like one at the moment. However, I think their language models have the potential to be seriously disruptive to the labour market, and we're not at all prepared to deal with that very real and current risk.
 