(Replying to PARENT post)
Working with these models every day, it's clear that they can interpolate between points in latent space and generate sensible answers to unseen questions, but they don't generalize. I've seen far too many examples of models failing to display any real generalization to believe otherwise.
That's not to say that interpolation in a rich latent space of text isn't very useful. But it's not the same level of abstraction you'd get from true generalization in that space.
(Replying to PARENT post)
-ChatGPT initially gives the same diagnosis the vets did: Babesiosis
-It notes that either Babesiosis was a misdiagnosis or a secondary condition/infection is causing the symptoms that remained after the Babesiosis treatment didn't resolve all of them
-It suggests such a hypothetical secondary condition could be IMHA, which the article notes is an extremely common complication of Babesiosis in this specific dog breed
-A quick Google search brings up a fair amount of literature about the association between Babesiosis and IMHA
So in fact this is the opposite of a never-before-seen situation: ChatGPT was just regurgitating common comorbidities of Babesiosis, and the vets in question are terrible at their job.
(Replying to PARENT post)
IMHO OP is talking about "explainability" of the results, which is notoriously bad for current AI. For certain applications (idk if SQL would be one, but a mortgage application might be) you are required to be able to explain how the computer arrived at its decision.
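To make the contrast concrete, here's a minimal sketch of what "explainable" looks like, using scikit-learn with made-up applicant features (this is a toy example, not an actual underwriting model): an interpretable model like logistic regression lets you read the decision off its per-feature coefficients, which a black-box model can't give you directly.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Made-up applicant features: income, debt ratio, years employed
    feature_names = ["income", "debt_ratio", "years_employed"]
    X = np.array([[60_000, 0.2, 5],
                  [25_000, 0.6, 1],
                  [90_000, 0.1, 10],
                  [30_000, 0.5, 2]], dtype=float)
    y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

    pipe = make_pipeline(StandardScaler(), LogisticRegression())
    pipe.fit(X, y)

    # Each coefficient shows how a feature pushed the decision --
    # the kind of account a regulator can actually ask for.
    for name, coef in zip(feature_names,
                          pipe.named_steps["logisticregression"].coef_[0]):
        print(f"{name}: {coef:+.3f}")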
(Replying to PARENT post)
Binary classifiers don't generalize?
Just because my output is not generative does not mean we cannot learn / generalize elsewhere. Think of it as a 2-stage process.
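Rough sketch of that 2-stage idea with toy data (scikit-learn, invented examples, not my actual pipeline): stage 1 maps the input into a learned feature space, stage 2 is just a binary decision on top of it, so the binary output says nothing about how much generalization happens in stage 1.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy data, invented for illustration
    texts = ["refund my order please", "loving the new update",
             "this is broken again", "great support, thanks"]
    labels = [1, 0, 1, 0]  # 1 = complaint, 0 = not a complaint

    two_stage = make_pipeline(
        TfidfVectorizer(),     # stage 1: build a representation of the text
        LogisticRegression(),  # stage 2: binary decision on that representation
    )
    two_stage.fit(texts, labels)
    print(two_stage.predict(["the app crashed and I want my money back"]))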
(Replying to PARENT post)