ramchip
Joined in 2008
3,829 Karma
1,362 posts
(Replying to PARENT post)
https://www.latintimes.com/trump-ally-slammed-saying-alligat...
(Replying to PARENT post)
Looking at it naively - deriving a new key sounds similar to picking a new function within a family of possible functions?
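Roughly what I mean, as a toy Python sketch (hedged: the `derive` step is just a stand-in for a real KDF like HKDF, and all names here are illustrative):

    import hashlib
    import hmac

    def prf(key: bytes):
        """The member of the HMAC-SHA256 function family selected by `key`."""
        return lambda msg: hmac.new(key, msg, hashlib.sha256).digest()

    master = b"master secret"

    def derive(label: bytes) -> bytes:
        # Toy derivation: each label picks a new key, i.e. a new function
        # from the family. A real system would use HKDF or similar.
        return hmac.new(master, b"derive:" + label, hashlib.sha256).digest()

    f1 = prf(derive(b"encryption"))
    f2 = prf(derive(b"authentication"))

    assert f1(b"hello") != f2(b"hello")  # different keys -> different functions

Each derived key selects a different member of the HMAC-SHA256 family, so f1 and f2 behave as independent functions.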
(Replying to PARENT post)
> it's like reviewing a PR with no trust possible, no opportunity to learn or to teach, and no possibility for insight that will lead to a better code base in the future
The first part is 100% true. There is no trust. I treat any LLM code as toxic waste and its explanations as lies until proven otherwise.
I somewhat disagree with the second part. I've learned plenty of things from AI output and analysis. You can't teach it to analyze allocations or code complexity, but you can feed it guidelines or samples of code in a certain style, and that can be quite effective at nudging it towards similar output. Sometimes that doesn't work, and that's fine; it can still be a big time saver to have the LLM output as a starting point and tweak it (manually, or by giving the agent additional instructions).
(Replying to PARENT post)
Sometimes I give the agent a vague design ("make XYZ configurable") and it implements it the wrong way, so I'll tell it to do it again with more precise instructions ("use a config file instead of a CLI argument"). The best thing is you can tell it after it wrote 500 lines of code and updated all the tests, and its feelings won't be hurt one bit :)
It can be useful as a research tool too. For instance, I was porting a library to a new language, and I told the agent to 1) find all the core types, and 2) for each type, run a subtask to compare the implementation in each language and write a markdown file summarizing the differences with some code samples. 20 minutes later I had a neat collection of reports that I could refer to while designing the API in the new language.
(Replying to PARENT post)
I don't see anyone doing that here. LLM writing was brought up because of the writing style, not the dash. It just reinforces the suspicion.
(Replying to PARENT post)
I'm not sure I understand the question - all queue systems I've used separate delivery and acknowledgement, so if a process crashes during processing the messages will be redelivered once it restarts.
Do you have a concrete example of a flow you're curious about?
Maybe these could help:
- https://ferd.ca/the-zen-of-erlang.html
- https://jlouisramblings.blogspot.com/2010/11/on-erlang-state...
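To make the delivery/ack split concrete, here's a minimal sketch using RabbitMQ's Python client (pika); the queue name and handler are illustrative, not from any particular system in the thread. The broker only drops a message after the consumer explicitly acks it, so a crash mid-processing leaves the message unacked and it gets redelivered:

    import pika  # RabbitMQ's Python client

    def do_work(body: bytes) -> None:
        print("processing", body)  # stand-in for real processing

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="work", durable=True)

    def handle(ch, method, properties, body):
        do_work(body)
        # Ack only after processing succeeds. If the process crashes before
        # this line, the broker still holds the message as delivered-but-unacked
        # and redelivers it once a consumer reconnects.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    # auto_ack=False is what keeps delivery and acknowledgement separate
    channel.basic_consume(queue="work", on_message_callback=handle, auto_ack=False)
    channel.start_consuming()

With auto_ack=True the broker would consider the message handled the moment it was delivered, and a crash inside do_work would lose it.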