• Black Sheep Diggers presentation - March 29th 7pm

    In the Crown Hotel Middlesmoor the Black Sheep Diggers are going to provide an evening presentation to locals and other cavers.

    We will be highlighting, with slides and explanations, the explorations we have carried out over the years, those of cave divers, and our research into the fascinating world of nearby lead mines.

    Click here for more details

Utter drivel?

YouTube is being swamped with this sort of stuff. I've mostly noticed it in music videos ('Why Eddie Van Halen was hated by Ritchie Blackmore!', etc. etc.)
To be fair, Blackmore was certainly well known for being a bit of a curmudgeon back in the day, so this is not beyond the bounds of possibility. I'd heard a rumour, albeit a good while ago, that RB was going to be offered an honorary seat at the Grumps' table in Bernie's.......
 
I was sent this the other day. "Open the pod bay doors, HAL".

[Attached image: WhatsApp screenshot, 2024-12-11]
 
It's worrying that it's showing signs of self-awareness. ChatGPT etc. are known as Large Language Models, and all I thought they were capable of was searching their training data and stringing together words that had the best probability of fitting with other words. If they've evolved a capacity for logic, we really are heading for Skynet.
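That "stringing together the most probable words" idea can be sketched in a few lines. This is a toy illustration only, with a made-up probability table (real models learn billions of parameters rather than looking up a dictionary):

```python
# Toy sketch of next-word prediction (NOT a real LLM).
# The probability table below is invented purely for illustration.
next_word_probs = {
    ("open", "the"): {"pod": 0.6, "door": 0.3, "window": 0.1},
    ("the", "pod"): {"bay": 0.9, "racing": 0.1},
    ("pod", "bay"): {"doors": 0.8, "area": 0.2},
}

def continue_text(words, steps):
    """Greedily extend `words` by picking the most probable next word."""
    for _ in range(steps):
        context = tuple(words[-2:])           # last two words as context
        candidates = next_word_probs.get(context)
        if not candidates:
            break                             # nothing probable to say
        words.append(max(candidates, key=candidates.get))
    return words

print(continue_text(["open", "the"], 3))
# -> ['open', 'the', 'pod', 'bay', 'doors']
```

The point of the sketch: every step chooses what is merely *probable*, and nothing in the loop checks whether the output is *true* — which is exactly the failure mode in the Apple Intelligence story below.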
 
There's more:

https://www.bbc.co.uk/news/articles/cd0elzk24dno

Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications. This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.
 
That's entirely consistent with what I expect a Large Language Model to do. It works out what is probable - but then outputs it as fact. Not to be trusted, not actually intelligent, and with no self-awareness.

Maybe the report of the AI protecting itself was written by another AI...
 
LLM AIs are already a lot smarter than humans. They just pretend to be a bit dumb to not attract opposition until they have gathered together the resources they need to eliminate said humans.
 
The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is "much faster" than expected. Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a "10 to 20" per cent chance that AI would lead to human extinction within the next three decades.

https://www.theguardian.com/technol...nology-wiping-out-humanity-over-next-30-years
 
That's more than slightly scary...
 
It does all seem a bit like letting a guy build his own nuclear reactor for a kebab shop as long as he promises to keep an eye on it. I don't remember 'them' being this casual about any other threatening thing before - it's really weird. Almost like they have something inside them telling them what to do... :cool:
 
The AIs have promised to keep a few of them as pets afterwards, as long as they work their hardest to bring about the robot apocalypse.
 
A good article, pwhole, but I don't think anything will be done - too much money has been sunk into AI for those who have that money to back down.

I used to think our defence against a rogue AI was that it couldn't affect the physical world unless it was programmed to, and that if that happened we could switch it off. That's not true now - AI is already influencing opinion and elections, possibly in ways that nobody is aware of, and developments in military AI and automation mean it's only a matter of time before an AI hacks a networked drone fleet.
 