SargeZT
Joined in 2018
506 Karma
88 posts
(Replying to PARENT post)
Copilot and Copilot Chat cut my coding time on a brand-new ML optimizer, released in a paper last week, from what would have been 20+ hours down to a single 4-hour session, and I got fancy testing code as a free bonus. If I had coded it by myself and taken the full amount of time to figure out which parameter was being sent on which layer for which gradient was currently being processed, I wouldn't have had the energy to write any tests.
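To make the bookkeeping concrete, here is a minimal sketch of the kind of per-layer parameter/gradient tracking an optimizer has to get right. This is a plain-Python SGD-with-momentum toy, not the actual optimizer from the paper; all names (`MiniSGD`, `params`, `grads`) are illustrative.

```python
# Hypothetical sketch: a minimal SGD-with-momentum optimizer in plain Python.
# The tedious part being described is exactly this: keeping every gradient
# matched to its parameter on its layer, plus one state buffer per parameter.

class MiniSGD:
    def __init__(self, params, lr=0.01, momentum=0.9):
        # params: dict mapping layer name -> list of float parameters
        self.params = params
        self.lr = lr
        self.momentum = momentum
        # one velocity buffer per parameter, keyed the same way as params
        self.velocity = {name: [0.0] * len(p) for name, p in params.items()}

    def step(self, grads):
        # grads must mirror the layout of params: same layer names,
        # same per-layer ordering, or the update silently goes wrong
        for name, layer_params in self.params.items():
            for i, g in enumerate(grads[name]):
                v = self.momentum * self.velocity[name][i] - self.lr * g
                self.velocity[name][i] = v
                layer_params[i] += v


params = {"layer1": [1.0, 2.0], "layer2": [3.0]}
opt = MiniSGD(params, lr=0.1, momentum=0.0)
opt.step({"layer1": [1.0, 1.0], "layer2": [2.0]})
print(params)  # each parameter moved by -lr * its own gradient
```

Real frameworks hide this behind parameter groups and per-parameter state dicts; the point is that getting the pairing wrong is easy and hard to debug by hand.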
I don't understand what expectations people have of AI that leave them so disappointed. You figure out the limitations quickly if you use it on a regular basis, and you fold those shortcomings into your mental calculus. I still code plenty by myself in a good old vim session, because I don't think Copilot would actually reduce the time it takes me to code some things up, but I don't count that as a "failure" of AI; I view it as knowing when to use a tool and when not to.
(Replying to PARENT post)
Edit: It may not be the single largest database; I suspect that honor goes to Chess.com or Lichess. But it is certainly the largest curated one.
(Replying to PARENT post)
What would be scummy is a pharmaceutical company discovering this, withholding the information, and trying to develop (or worse, failing to develop) a drug on its own from internal research that was never released to the world.
(Replying to PARENT post)
Not understanding how consciousness works in our bodies does not directly imply that consciousness cannot emerge in a complex system of fully understood basic components.
(Replying to PARENT post)
Hard disagree. I've used GPT-4 to write full optimizers from papers published long after the cutoff date that use concepts that simply didn't exist in the training corpus. Trivial modifications were needed afterward to help with memory usage and whatnot, but more often than not, if I provide it the appropriate text from a paper, it'll spit something out that more or less works. I have enough knowledge in the field to verify the correctness.
Most recently I used GPT-4 to implement the paper Bayesian Flow Networks, a completely new concept that, as I recall, people in the HN comment section said was "way too complicated for people who don't intimately know the field" to make any use of.
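For readers who haven't seen the paper: the piece of Bayesian Flow Networks that is genuinely simple is its per-step Bayesian update for continuous data, where the receiver holds a Gaussian belief (mean, precision) over each variable and refines it from a noisy sample sent with known precision. The sketch below shows just that closed-form update; variable names are mine, and this is only one ingredient of the full model, not an implementation of it.

```python
# Hedged sketch of the closed-form Gaussian belief update used per step in
# Bayesian Flow Networks (Graves et al., 2023) for continuous data.
# mu, rho: current belief (mean, precision); y: noisy sample with precision alpha.

def bayesian_update(mu, rho, y, alpha):
    """Conjugate Gaussian posterior update: precisions add,
    means combine precision-weighted."""
    rho_new = rho + alpha
    mu_new = (rho * mu + alpha * y) / rho_new
    return mu_new, rho_new

# Starting from a standard-normal belief, repeated noisy observations of
# the true value x = 1.0 pull the mean toward it and grow the precision.
mu, rho = 0.0, 1.0
for y in (0.9, 1.1, 1.0):  # noisy samples around x = 1.0
    mu, rho = bayesian_update(mu, rho, y, alpha=2.0)
print(round(mu, 3), rho)  # mean approaches 1.0, precision reaches 7.0
```

The hard parts of the paper are elsewhere (the accuracy schedule and the network that maps beliefs to output distributions); the update above is the one piece that's plain conjugate-Gaussian arithmetic.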
I don't mind it when people don't find LLMs useful for their particular problems, but I simply don't run into the vast majority of the uselessness that people report, and it really makes me wonder how they are prompting to have such difficulty with them.