(Replying to PARENT post)

I think you've fallen into the trap of "AIs don't generalize, they memorize." But they do in fact generalize. The reason ChatGPT is so valuable is precisely that it can help out with situations that have never been seen before, not that it merely unlocks preexisting knowledge. The fella who saved their dog with ChatGPT comes to mind. https://nypost.com/2023/03/27/chatgpt-saved-my-dogs-life-aft...
๐Ÿ‘คsillysaurusx๐Ÿ•‘2y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

I think you have fallen into the trap of mistaking interpolation for generalization.

Working with these models every day, it's clear that they can certainly interpolate between points in latent space and generate sensible answers to unseen questions, but they don't generalize. I've seen far too many examples of models failing to display any sense of generalization to believe otherwise.

That's not to say that interpolation in a rich latent space of text isn't very useful. But it doesn't reach the level of abstraction that true generalization in this space would.
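
A rough toy picture of the distinction (a made-up 2-D stand-in for a latent space; real embeddings have thousands of dimensions, and all the points here are invented):

    # Interpolation: answering "unseen" queries that are still convex
    # combinations of training points. Generalization would mean handling
    # queries well outside the training hull. Toy 2-D example.
    import numpy as np

    # Pretend these are latent embeddings of prompts seen in training.
    train = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 1.0]])

    def interpolate(a, b, t):
        # Linear blend of two latent points; stays on the segment a-b.
        return (1 - t) * a + t * b

    def inside_training_hull(p, tri):
        # Barycentric test: is p inside the triangle of training points?
        a, b, c = tri
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        u = 1 - v - w
        return u >= 0 and v >= 0 and w >= 0

    # "Unseen" but interpolatable query: a blend of two training points.
    q_interp = interpolate(train[0], train[1], 0.3)
    # Query far outside anything in training: interpolation can't reach it.
    q_extrap = np.array([5.0, 5.0])

    print(inside_training_hull(q_interp, train))  # True
    print(inside_training_hull(q_extrap, train))  # False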

๐Ÿ‘คPheonixPharts๐Ÿ•‘2y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

Did you read the article you posted?

-ChatGPT initially gives the same diagnosis the vets did: Babesiosis

-It notes that, since the Babesiosis treatment didn't resolve all of the symptoms, Babesiosis may have been a misdiagnosis, or there may be a secondary condition/infection causing the remaining ones

-It suggests such a hypothetical secondary condition could be IMHA, which the article notes is an extremely common complication of Babesiosis in this specific dog breed

-A quick Google search brings up a fair amount of literature on the association between Babesiosis and IMHA

So in fact this is the opposite of a never-before-seen situation: ChatGPT was just regurgitating common comorbidities of Babesiosis, and the vets in question are terrible at their job.

๐Ÿ‘คZircom๐Ÿ•‘2y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

> With ChatGPT, you cannot realistically explain why you got some output in a way that anyone other than an AI/ML expert would find satisfying

IMHO OP is talking about "explainability" of the results, which is notoriously bad for current AI. For certain applications (idk if SQL would be one, but mortgage applications might be) you are required to be able to explain how the computer arrived at its decision.
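
A minimal sketch of what that requirement looks like in practice (features and data are made up; the point is just that a linear model's coefficients can be read off and audited, which you can't do with an LLM's output):

    # "Explainable" mortgage-style decision: each coefficient says how a
    # feature pushed the approval decision. Features/data are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "credit_score"]
    X = np.array([[60_000, 0.20, 720],
                  [30_000, 0.55, 580],
                  [90_000, 0.10, 800],
                  [25_000, 0.60, 550]], dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize so coefs compare
    y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

    model = LogisticRegression().fit(X, y)

    # This per-feature account is the kind of explanation a regulator can
    # audit; ChatGPT offers no comparable account of its outputs.
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")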

๐Ÿ‘คmejutoco๐Ÿ•‘2y๐Ÿ”ผ0๐Ÿ—จ๏ธ0

(Replying to PARENT post)

> I think you've fallen into the trap of "AIs don't generalize, they memorize."

Binary classifiers don't generalize?

Just because my output is not generative does not mean we cannot learn / generalize elsewhere. Think of it as a 2-stage process.
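
Something like this, where every function is a hypothetical stand-in rather than anyone's actual pipeline:

    # 2-stage setup: a generative stage proposes candidates, a binary
    # classifier accepts or rejects them. The final output is not
    # generative, but learning can happen in either stage.

    def generate_candidates(prompt: str) -> list[str]:
        # Stand-in for a generative model call (e.g. an LLM API).
        return [f"{prompt} -> option {i}" for i in range(3)]

    def accept(candidate: str) -> bool:
        # Stand-in for a trained binary classifier; trivial rule here.
        return "option 1" in candidate

    def two_stage(prompt: str) -> list[str]:
        return [c for c in generate_candidates(prompt) if accept(c)]

    print(two_stage("classify this"))  # ['classify this -> option 1']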

๐Ÿ‘คbob1029๐Ÿ•‘2y๐Ÿ”ผ0๐Ÿ—จ๏ธ0