SargeZT

✨ Programmer turned baker turned nurse with some mental illness. If you need something coded, frosted, or care completed, let me know.

📅 Joined in 2018

🔼 506 Karma

โœ๏ธ 88 posts

15 latest posts

(Replying to PARENT post)

> The language model's body of "knowledge" tends to fall off outside of functionality commonly covered in tutorials. Writing a "hello world" program is no problem; proposing a design for (or, worse, an addition to) a large application is hopeless.

Hard disagree. I've used GPT-4 to write full optimizers from papers published long after its cutoff date, using concepts that simply didn't exist in the training corpus. Trivial modifications were needed afterward to help with memory usage and whatnot, but more often than not, if I provide it the appropriate text from a paper, it'll spit out something that more or less works. I have enough knowledge in the field to verify the correctness.
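
To make the shape of that work concrete, here's a minimal sketch of the scaffolding such an implementation tends to involve, assuming PyTorch; `PaperOptimizer` is a made-up name and plain SGD stands in for whatever update rule a given paper actually specifies:

```python
# Hypothetical sketch: the torch.optim.Optimizer scaffolding that has to be
# right before a paper's actual update rule (not reproduced here) goes in.
import torch
from torch.optim import Optimizer


class PaperOptimizer(Optimizer):
    """Skeleton for an optimizer transcribed from a paper (illustrative only)."""

    def __init__(self, params, lr=1e-3, eps=1e-8):
        defaults = dict(lr=lr, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if len(state) == 0:
                    # Lazily allocate per-parameter buffers; keeping these in
                    # the right dtype/device is where memory-usage tweaks go.
                    state["step"] = 0
                    state["buf"] = torch.zeros_like(p)
                state["step"] += 1
                # Placeholder update: plain SGD stands in for the paper's rule.
                p.add_(p.grad, alpha=-group["lr"])
        return loss
```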

Most recently I used GPT-4 to implement the paper Bayesian Flow Networks, a completely new concept that, as I recall, people in the HN comment section said was "way too complicated for people who don't intimately know the field" to make any use of.

I don't mind it when people don't find LLMs useful for their particular problems, but I simply don't run into the vast majority of the uselessness that others report, and it really makes me wonder how people are prompting to end up with such difficulty.

👤SargeZT 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

You are literally describing the fundamental problem of truth in philosophy and acting as if it's different because a computer is involved at one step in the chain.
👤SargeZT 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

At my lab, a positive culture is viewed as a very bad sign. So bad in fact that it may necessitate IV antibiotics.
👤SargeZT 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

Show me one assistant who can promise they are right 100% of the time and I will show you one liar.

Copilot & Copilot Chat cut my coding time on a brand-new ML optimizer, released in a paper last week, from what would have been 20+ hours to a 4-hour session, and I got fancy testing code as a free bonus. If I had coded it by myself and spent the full amount of time figuring out which parameter was being passed for which layer and which gradient was currently being processed, I wouldn't have had the energy to write any tests.
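
For a sense of what that "free bonus" testing code looks like, here is a hedged sketch of the kind of sanity check involved, assuming pytest and PyTorch, with `torch.optim.SGD` standing in for the new optimizer:

```python
# Illustrative sketch only: run a few steps on a toy quadratic and check that
# the loss actually decreases. SGD is a stand-in for the optimizer under test.
import torch


def test_optimizer_reduces_loss():
    torch.manual_seed(0)
    target = torch.randn(10)
    param = torch.zeros(10, requires_grad=True)
    opt = torch.optim.SGD([param], lr=0.1)

    def loss_fn():
        return ((param - target) ** 2).mean()

    initial = loss_fn().item()
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        opt.step()

    assert loss_fn().item() < initial
```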

I don't understand what people's expectations of AI are that they're being disappointed. You figure out the limitations quickly if you use it on a regular basis, and you adapt those shortcomings into your mental calculus. I still code plenty by myself in a good old vim session, because I don't think Copilot would actually reduce the time it takes me to code something up, but I don't count that as a "failure" of AI; I view it as knowing when to use a tool and when not to.

👤SargeZT 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

It does not. Regular ChatGPT without plugins does not have access to any tools. Throw it a script with some weird outputs and it'll fail, every time. While the script 'evaluation' stuff can be pretty impressive, it is not actually executing anything.
👤SargeZT 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

Researchers unaffiliated with Facebook are allowed to possess and use the original weights, though, and they can make use of these weights.
👤SargeZT 🕑2y 🔼0 🗨️0

(Replying to PARENT post)

After I finished it myself, I ran it through ChatGPT (and Davinci, but that largely failed). It generated part 1 perfectly, but it was unable to make the jump to the three-way compartment intersections without significant prompting, and the first couple of times it completely lost the uppercase/lowercase distinctions. It was able to generate largely perfect tests from the examples, though, and I had it debug itself until it worked. It wasn't amazing code, but it passed its tests and actually thought of some edge cases I hadn't considered when I coded my solution, such as odd-length strings.
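
For context, a rough sketch of the puzzle as this description suggests it (compartment intersections, case-sensitive priorities, groups of three); this is illustrative only, not the code ChatGPT produced:

```python
# A guess at the puzzle being described; purely illustrative.
def priority(item: str) -> int:
    # Lowercase a-z map to 1-26, uppercase A-Z to 27-52: the case
    # distinction the model initially lost.
    if item.islower():
        return ord(item) - ord("a") + 1
    return ord(item) - ord("A") + 27


def part1(lines: list[str]) -> int:
    total = 0
    for line in lines:
        half = len(line) // 2  # odd-length strings are an edge case here
        common = set(line[:half]) & set(line[half:])
        total += sum(priority(c) for c in common)
    return total


def part2(lines: list[str]) -> int:
    total = 0
    for i in range(0, len(lines), 3):
        # The item shared by each group of three lines.
        common = set(lines[i]) & set(lines[i + 1]) & set(lines[i + 2])
        total += sum(priority(c) for c in common)
    return total
```
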
👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Oh, the software is absolutely awful, and you're spot on about the metadata (and the CBH format as a whole), but I would also toss in another point where it's (sadly) the best: opening prep. There just isn't much else that can compare, despite the awful interface.
👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Yes. Every single serious chess master uses Chessbase as it has the largest database available out there even though the price is pretty obscene. The customer base is more dedicated than most and it's probably the single most important tool a chess pro can get outside of an engine itself.

Edit: It may not be the single largest database, I suspect that honor goes to Chess.com or Lichess, but it is certainly the largest curated one.

👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

They should, but you're the only one whose responses you can control.
👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

I'm still living in 1708 Sweden, don't spoil anything for me!
👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Eventually it may contribute to commercialization, but for now it makes this a target for research by others. This isn't patenting anything; it's scientists saying 'this is an important bit when it comes to the effects of fasting, so let's focus on this.'

What would be scummy would be some pharmaceutical company discovering this, not releasing the information, and trying to develop (or worse, failing to develop) a drug on their own after internal research that was never released to the world.

👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

Provably, there's little that can be said about chess at large from opening to endgame. Heuristically speaking, though, matching moves against even a beginner is a very bad strategy for play, as a recent speedrun by GM Aman Hambleton on YouTube has been showing. [1]

[1] https://www.youtube.com/watch?v=uUfF6l4A6Ks

👤SargeZT 🕑3y 🔼0 🗨️0

(Replying to PARENT post)

By your logic, would we suddenly lose consciousness/sentience if we gained complete knowledge of our own bodies and brains? What if, at a sub-quantum level, everything decomposes to a simple Turing machine?

Not understanding how consciousness works in our bodies does not directly imply that consciousness cannot emerge in a complex system of fully understood basic components.

👤SargeZT 🕑3y 🔼0 🗨️0