snickell
📅 Joined in 2013
🔼 396 Karma
✍️ 68 posts
(Replying to PARENT post)
The physical aspect I can't give up is being able to hold the phone with my thumb on the bottom and my middle finger on the top, and scroll with my index finger to read. Wish I could buy that capability on a new iPhone, maybe even one slightly smaller.
Time to go find out if there's even a way to downgrade. Oof, this is slow.
(Replying to PARENT post)
In my experience, contributors to an open source project fall into two classes:
1) a bunch of people who contribute one or two PRs, but where it takes the maintainers more time to review/merge the PR than the dev time contributed
2) a much smaller set of people who come back and do more and more PRs, eventually contributing more time than it takes to review their work
A major existing reason to review PRs from class 1 "once or twice" contributors (perhaps the main reason?) is that all class 2 "maintainer-level" contributors start as class 1.
I agree there's an awkward middle ground here: now you have to define where the boundary between class 1 and class 2 lies. But I think if you graphed contribution levels, you'd find there's already something of a bimodal distribution in many projects anyway.
Show HN: "universal application where LLM does all computation directly"
This is based on a loop where user commands, or mouse clicks, are fed to the LLM, and the LLM is instructed to simply render the next frame, as if it were rendering a frame of video. In this particular case, we actually render static HTML+CSS as the "image", because image output from existing LLMs doesn't have high enough text fidelity.
Computation is done by the LLM itself: the LLM does not "write code", it IS the code.
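Roughly, the core loop is something like this (a minimal sketch, not the project's actual code; complete(), wait_for_event(), and display() are hypothetical stand-ins for the LLM API, input handling, and rendering):

    # Minimal sketch of the "LLM does the computation directly" loop.
    # complete(prompt) -> str is assumed to wrap whatever LLM API is in use;
    # wait_for_event() and display() are hypothetical I/O hooks.
    def run_app(complete, wait_for_event, display):
        frame = complete("Render the initial screen of the app as static HTML+CSS.")
        display(frame)
        while True:
            event = wait_for_event()  # e.g. a mouse click or a typed command
            frame = complete(
                "Current frame (HTML+CSS):\n" + frame + "\n"
                + "The user did: " + event + "\n"
                + "You ARE the computer: do whatever computation this implies, "
                + "then render the next frame as static HTML+CSS. Output only HTML+CSS."
            )
            display(frame)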
(Replying to PARENT post)
It takes mouse clicks, sends them to the LLM, and asks it to render static HTML+CSS of the output frame. HTML+CSS is basically a JPEG here; the original implementation WAS JPEG, but diffusion models can't do accurate enough text yet.
My conclusions from doing this project and interacting with the result: if LLMs keep scaling up in performance and down in cost, programming languages are going to fade away. The long-term future won't be LLMs writing code; it'll be LLMs doing the computation directly.
(Replying to PARENT post)
How much would AirBnB pay for the intelligence everyone gets all their info from to have a subtle bias like this? Sliiightly more likely to assume folks will stay in airbnbs vs hotels when they travel, sliiightly more likely to describe the world in those terms.
How much would companies pay to directly, methodically, and undetectably bias "everyone's most frequent conversant" toward them?
(Replying to PARENT post)
It's easy to focus on libgraft's SQLite integration (comparing it to Turso, etc.), but I appreciate that the author approached this as a more general, lower-level distributed storage problem. If it proves robust in practice, I could see this being used for a lot more than just SQLite.
At the same time, I think "low level general solutions" are often unhinged when they're not guided by concrete experience. The author's experience with sqlsync, and applying graft to SQLite on day one, feels like it gives them standing to take a stab at a general solution. I like the approach they came up with, particularly shifting responsibility for reconciliation to the application/client layer. Because reconciliation lives heavily in tradeoff space, it feels right to require the application to think closely about how it wants to do it.
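To make that concrete, here's a hedged sketch of what application-level reconciliation can look like. This is NOT graft's actual API; reconcile() and its three-way inputs are hypothetical, just to illustrate the tradeoff being pushed to the client:

    # Hypothetical three-way merge hook; NOT graft's real API, just an
    # illustration. The storage layer notices "ours" and "theirs" diverged
    # from "base"; the application supplies the domain-specific merge policy.
    def reconcile(base: set, ours: set, theirs: set) -> set:
        # Example policy for a page holding a set of rows: keep everything
        # either side added, drop anything either side removed.
        added = (ours - base) | (theirs - base)
        removed = (base - ours) | (base - theirs)
        return (base | added) - removed

A different application might prefer last-writer-wins, CRDT-style counters, or surfacing the conflict to the user; the point is that the policy is the application's, not the storage layer's.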
A lot of the questions here are requesting comparisons to existing SQLite replication systems; the article actually has a great section on this topic at the bottom: https://sqlsync.dev/posts/stop-syncing-everything/#compariso...
(Replying to PARENT post)
But LLM performance varies (and this is a huge critique!) not just with what they theoretically know, but with how, erm, cross-linked that knowledge is with everything else, and that requires lots of training data on the topic.
Metaphorically, I think this is a little like the difference, for humans doing math, between being able to list+define techniques for solving integrals vs being able to fluidly apply them without error.
I think a big and very valid critique of LLMs (compared to humans) is that they are stronger at "memory" than reasoning. They use their vast memory as a crutch to hide the weaknesses in their reasoning. This makes benchmarks like "convert from gtkmm3 to gtkmm4" both challenging AND very good benchmarks of what real programmers are able to do.
I suspect if we gave it a similarly sized 2kloc conversion problem with a popular web framework in TS or JS, it would one-shot it. But again, it's "cheating" to do this: it's leveraging having read a zillion conversions by humans and what they did.
(Replying to PARENT post)
UPDATE: naive (just fed it your description verbatim) cline + claude 3.7 was a total wipeout. It looked like it was making progress, then freaked out, deleted 3/4 of its port, and never recovered.
(Replying to PARENT post)
We're not going to settle the preference for dynamic vs static types here. It's probably older than both of us, with many fine programmers on both sides of the fence. I'll leave it at this: well-informed programmers choosing to write in dynamically typed languages DO read code without types, and have happily done so since the late 1950s (lisp).
The funny thing is, I experience the same "how do you even??" feeling reading statically typed code. There's so much... noise on the screen, how can you even follow what's going on with the code? I guess people are just different?
> LLMs will make fewer type errors, and more errors that are uncaught by types
The errors I'm talking about are like "this CSS causes the element to draw part of its content off-screen, when it probably shouldn't". In theory, some sufficiently advanced type system could catch that (without flagging elements you intentionally place off-screen)? But realistically: that's pretty challenging for a static type system to catch.
The errors I see are NOT errors that throw exceptions at runtime either; in other words, they are beyond the scope of current type systems, whether dynamic (runtime) or static (compile time). Remember that dynamic languages ARE usually typed; they are just type checked at runtime, not at compile time.
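(To make "typed, but checked at runtime" concrete, a tiny Python illustration:)

    # Python is dynamically typed, not untyped: the types are real and
    # enforced, just at runtime rather than at compile time.
    def add(a, b):
        return a + b

    add(1, 2)      # fine
    add(1, "two")  # raises TypeError at runtime; no compile-time complaint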
> perhaps that gives the delusion that the LLM is doing it completely without type system.
I mentioned coding in JS with cline, so no delusion. It does fine w/o a type system, and it rarely generates runtime errors. I fix those the way I fix runtime errors when /I/ program in a dynamic language: I see them, I fix them. I find they're a lot rarer, in both LLM-generated and human-generated code, than proponents of static typing seem to think?