• Black Sheep Diggers presentation - March 29th 7pm

    At the Crown Hotel, Middlesmoor, the Black Sheep Diggers will give an evening presentation to locals and other cavers.

    With slides and explanations, we will be highlighting the explorations we have carried out over the years, the work of cave divers, and our research into the fascinating world of the nearby lead mines.


AI-Generated Photographs

Well, it certainly helps with very bright highlights and dark shadows, of which there are many in cave photography! There's often so much more hidden in there, and it can usually be brought out to quite surprising levels, with little noise and usually no detriment to the image. My dream is that HDR becomes the common format, though that needs capable monitors, which don't yet fully exist. My latest monitor does have an 'HDR' setting, but it only really works on images specifically made that way, the result is probably only a 20% 'improvement', and it consumes a lot more power while doing it, so as a mere user there is arguably a limit to its usefulness. Public display of HDR images and video would certainly be useful, though, as much more information could be shown to the viewer - think presentations on cave paintings or similar, where access is not allowed.

My old APS-C DSLR from 2008, which I will still use underground, records a 10-bit RAW format. The new one I just bought, which is full-frame and will definitely not go underground, supports a 14-bit RAW format - an astonishing increase in fidelity, really. That's four extra bits, so sixteen times as many tonal levels: roughly two extra exposure stops at each end of the scale.
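
To put rough numbers on that (a back-of-envelope sketch, not from either camera's spec sheet - 'one stop per extra bit' is an idealisation for a linear sensor):

```python
# Rough arithmetic for RAW bit depth vs. tonal range.
# Each extra bit doubles the number of recordable tonal levels,
# which for an idealised linear sensor is about one extra stop.

def raw_levels(bits: int) -> int:
    """Distinct tonal levels a RAW file of this bit depth can record."""
    return 2 ** bits

for bits in (10, 12, 14):
    print(f"{bits}-bit RAW: {raw_levels(bits):>6} levels")

extra_bits = 14 - 10
print(f"14-bit vs 10-bit: {2 ** extra_bits}x the levels, ~{extra_bits} extra stops")
```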

Of course, the limitation of print is even worse, as 'LDR' is the only option available. But I still think starting with the best possible source material will produce the highest-quality output within the limitations of the medium - as in, a glossy, expensive large-format book or magazine as opposed to a laser-printed newsletter.
 
AI shouldn't be confused with proper photography. If you want to make an AI image then fair enough, as long as you say it's AI that made it. But if you try to pass it off as a proper photo, that's really cheating.
If I went to a restaurant, ordered a nice steak, and then took complete credit for it when it's the kitchen that actually did the work and all I said was 'steak and chips please', that would be very wrong. The same goes for AI pretending to be something real.
Also, I think Adobe recently ran an advert saying 'ditch the photoshoot' or similar and to use their AI stock photos instead - and apparently if you contribute to Adobe Stock they use your photos to train their AI as part of their Ts and Cs (as well as the bit that says they have a licence to use your stuff).
 
I liked this AI image of the biblical story of David fighting the ammonites.
 

ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations

Links to a paper "ChatGPT is bullshit":

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
 
I’m sorry, but any published article that references section headings that don’t exist in the paper I’m absolutely going to take with a pinch of salt… did nobody proofread this at all???

I get the point of it, but it misses a big issue: expectation vs reality.

Nobody who studied how these things learn the rules of language expected them to behave the way they do. They were never designed to retrieve truthful information; they just happen to manage it most of the time, almost by chance. Just look at the published results of all the exams these things have passed - it truly is impressive.

When they are plugged into other systems, techniques like retrieval-augmented generation (RAG) greatly improve the truthfulness of the generated statements.
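
For anyone unfamiliar with the idea, here is a minimal sketch of what RAG does - toy corpus, naive keyword retriever, and no real LLM call; every name in it is illustrative rather than any particular library's API:

```python
# Minimal RAG sketch: retrieve relevant text, then hand it to the
# model alongside the question so the answer is grounded in sources.
# CORPUS, retrieve() and answer() are all illustrative stand-ins.

CORPUS = [
    "Gaping Gill's main chamber is one of the largest in Britain.",
    "Lead was mined in Nidderdale from at least the medieval period.",
    "Titan in Derbyshire is the deepest known shaft in a British cave.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer(query: str) -> str:
    """Build a prompt that pins the model to the retrieved passages."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
    return prompt  # in a real system this prompt goes to the LLM

print(answer("Where was lead mined?"))
```

The point is that the model answers from the retrieved text rather than from whatever it 'remembers', which is why the outputs become more grounded.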

Anyone who thinks LLMs in their current state are windows onto truthful information is very wrong. Never rely on any of the outputs being truthful - they were never designed for that.
 
AI-generated images of the 'perfect' faces.

Textbook data bias there. But try to correct the bias too heavily yourself and you can go too far and generate things that are ridiculous (search for Google's ethnically diverse Nazis from earlier this year). In some cases, as bad as bias and racism are, they happened and they existed. So there is a dilemma over whether to remove them from the data completely or not: if you do, models won't learn what they are and that they are bad (we are still working on this).
 
I’m sorry, but any published article that references section headings that don’t exist in the paper I’m absolutely going to take with a pinch of salt… did nobody proofread this at all???
Is this within the PDF, or on the springer.com or hackaday.com pages? I'm afraid I can't spot the missing references. Could you provide examples so I know what I'm overlooking, please?
 
Is this within the PDF, or on the springer.com or hackaday.com pages? I'm afraid I can't spot the missing references. Could you provide examples so I know what I'm overlooking, please?
“In Sect. 3.2.3” and elsewhere in the PDF. I mean, from the name alone I can tell this is a comedic paper, so maybe it's expected that the basics were missed and not spotted during peer review. Clearly more thought was put into the title!

Just because something is published in a journal (not all are equal) doesn’t mean that it is credible.
 
AI output is tantamount to fiction and should be given the same reception. As made-up material that people are nevertheless willing to cite as evidence of truth, it should find plenty of willing supporters from the realms of religious zealotry.
 
But most of the controls in Adobe Camera Raw (which I use) simply affect the colour, brightness and all the usual stuff - there are a few 'erase' or 'heal' tools, but they're for removing spots and glitches, not faking photos. I think there's a real 'wrong end of the stick' issue here. Granted, Photoshop itself now has a few AI-based features that would allow you to 'fake' photos, but whether the shot was originally taken in RAW or not would make zero difference to that - it would just be better quality.

RAW formats simply allow a much greater dynamic range of brightness and colour in the captured photo - that's it. So shots that are difficult in terms of bright lights and dark shadows become much easier to recover. As a photographer, I want my output to look as good as possible - that just seems obvious to me. For example, it would be very difficult indeed to do something like this without shooting in RAW.
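
As a toy illustration of that headroom (not Camera Raw's actual pipeline - the sample values, the +2-stop lift and the simple gamma curve are all made up for the example), linear 14-bit sensor data can be pushed a couple of stops in the shadows before being squeezed down to an 8-bit output:

```python
import numpy as np

# Toy shadow-recovery example: linear 14-bit sensor values are
# lifted by two stops, then gamma-encoded down to 8-bit output.
# All numbers here are illustrative, not from any real camera.

raw = np.array([12, 90, 700, 9000, 16000], dtype=np.float64)  # linear 14-bit values
raw /= 2**14 - 1                       # normalise to 0..1

stops = 2.0                            # shadow lift: +2 stops of exposure
lifted = np.clip(raw * 2**stops, 0, 1)

encoded = (lifted ** (1 / 2.2) * 255).round().astype(np.uint8)  # simple gamma to 8-bit
print(encoded)  # deep shadows now land on usable 8-bit levels
```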

[attached photo]
That's Cowlow Nick, isn't it?
 