ramchip

✨ Tokyo

📅 Joined in 2008

🔼 3,829 Karma

โœ๏ธ 1,362 posts

15 latest posts

(Replying to PARENT post)

You can prototype by hand too. Personally, I find it might take me 10 min to try a change with an LLM that would have taken me 30 min to 1 hr by hand. It's a very nice gain, but given the other tasks that aren't sped up much by an LLM (thinking about the options, communicating with the team), it's not _that_ crazy.
👤ramchip🕑5d🔼0🗨️0

(Replying to PARENT post)

How does this differ from the KDF chain in Signal?

Looking at it naively - deriving a new key sounds similar to picking a new function within a family of possible functions?
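
For reference, a minimal Python sketch of the kind of KDF chain step I mean (the 0x01/0x02 constants follow the Double Ratchet spec's recommendation; the initial key is made up):

    import hmac
    import hashlib

    def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
        """Derive a one-off message key and the next chain key."""
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return message_key, next_chain_key

    ck = b"\x00" * 32  # illustrative initial chain key
    for _ in range(3):
        mk, ck = kdf_chain_step(ck)  # each step "picks" a fresh key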

👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

This is very similar to José Valim's "Mocks and explicit contracts" from 2015, down to using a Twitter client as the example.

https://dashbit.co/blog/mocks-and-explicit-contracts
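
The core idea, transposed to Python (the article does this in Elixir with behaviours; the names here are mine): the mock is a full implementation of an explicit, shared contract, injected as a dependency rather than patched in.

    from typing import Protocol

    class TwitterClient(Protocol):
        """The explicit contract every implementation must satisfy."""
        def get_tweets(self, username: str) -> list[str]: ...

    class HTTPTwitterClient:
        """Production implementation; talks to the real API."""
        def get_tweets(self, username: str) -> list[str]:
            raise NotImplementedError("real HTTP call goes here")

    class InMemoryTwitterClient:
        """Test implementation honoring the same contract."""
        def __init__(self, canned: dict[str, list[str]]):
            self.canned = canned

        def get_tweets(self, username: str) -> list[str]:
            return self.canned.get(username, [])

    def latest_tweet(client: TwitterClient, username: str) -> str | None:
        tweets = client.get_tweets(username)
        return tweets[0] if tweets else None

    # Tests inject the in-memory client explicitly, no monkey-patching:
    assert latest_tweet(InMemoryTwitterClient({"ann": ["hi"]}), "ann") == "hi"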

👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

I think a common opinion is that it's useful as a research and code generation tool, and that it has some really negative effects on the Internet and society in general. Since the discussion on HN is often focused on coding, the first aspect is just a bit more visible.
👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

I was very put off by his article "A knockout blow for LLMs?", especially all the fuss he was making about using his own name as a verb to mean debunking AI hype...
👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

Right. I don't treat the LLM like a colleague at all, it's just a text generator, so I partially agree with your earlier statement:

> it's like reviewing a PR with no trust possible, no opportunity to learn or to teach, and no possibility for insight that will lead to a better code base in the future

The first part is 100% true. There is no trust. I treat any LLM code as toxic waste and its explanations as lies until proven otherwise.

I disagree somewhat with the second part. I've learned plenty of things from AI output and analysis. You can't teach it to analyze allocations or code complexity, but you can feed it guidelines or samples of code in a certain style, and that can be quite effective at nudging it towards similar output. Sometimes that doesn't work, and that's fine; it can still be a big time saver to have the LLM output as a starting point and tweak it (manually, or by giving the agent additional instructions).
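
As a concrete sketch of the "feed it guidelines" part, using the OpenAI Python client (the file names and prompt wording are illustrative):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    style_guide = open("STYLE.md").read()        # house guidelines
    sample = open("examples/parser.py").read()   # code in the target style

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Follow these guidelines and mimic the sample's "
                        "style:\n" + style_guide + "\n\nSample:\n" + sample},
            {"role": "user", "content": "Write a CSV parser in this style."},
        ],
    )
    print(resp.choices[0].message.content)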

👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

I think that's right, and not a problem in practice. It's like asking a human why: "because it avoids an allocation" is a more useful response than "because Bob told me I should", even if the latter is the actual cause.
👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

Like the article says... I feel it's counter-productive to picture an LLM as "learning" or "thinking". It's just a text generator. If it's producing code that calls non-existent APIs, for instance, it's kind of a waste of time to try to explain to the LLM that such-and-such doesn't exist. Better to just try again and dump an OpenAPI doc or some sample code into it to steer the text generator towards correct output.
👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

It gives you more effective search keywords. "Fibers in feathers" isn't too bad, but when it's quite vague like "that movie from the 70s where the guy drank whiskey and then there was a firefight and..." getting the name from the LLM makes it much faster to google.
👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

Personally, I spend _more_ time thinking with Claude. I can focus on the design decisions while it does the mechanical work of turning that into code.

Sometimes I give the agent a vague design ("make XYZ configurable") and it implements it the wrong way, so I'll tell it to do it again with more precise instructions ("use a config file instead of a CLI argument"). The best thing is you can tell it after it wrote 500 lines of code and updated all the tests, and its feelings won't be hurt one bit :)

It can be useful as a research tool too. For instance, I was porting a library to a new language, and I told the agent to 1) find all the core types and 2) for each type, run a subtask to compare the implementation in each language and write a markdown file summarizing the differences with some code samples. 20 min later I had a neat collection of reports that I could refer to while designing the API in the new language.

👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

> assuming a single character immediately means they used an LLM is just plain wrong

I don't see anyone doing that here. LLM writing was brought up because of the writing style, not the dash. It just reinforces the suspicion.

👤ramchip🕑1mo🔼0🗨️0

(Replying to PARENT post)

The massive concurrency and fault isolation properties are very nice for telephony switches (and web servers etc.) but not usually super relevant for phone apps.
👤ramchip🕑2mo🔼0🗨️0

(Replying to PARENT post)

Fascism rewards loyalty above competence.
👤ramchip🕑3mo🔼0🗨️0

(Replying to PARENT post)

> For example, if a process has taken an item off a queue and then crashes before having fully processed it, how is that accounted for?

I'm not sure I understand the question - all queue systems I've used separate delivery and acknowledgement, so if a process crashes during processing, the messages will be redelivered once it restarts.
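
For instance, with RabbitMQ's pika client (queue name illustrative), the ack is sent only after processing succeeds; if the consumer dies first, the broker redelivers the unacked message:

    import pika

    def process(body: bytes) -> None:
        print("processing", body)  # stand-in for real work

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="tasks", durable=True)

    def handle(ch, method, properties, body):
        process(body)  # crash here => message stays unacked, gets redelivered
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack after success

    channel.basic_consume(queue="tasks", on_message_callback=handle)
    channel.start_consuming()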

Do you have a concrete example of a flow you're curious about?

Maybe these could help:

- https://ferd.ca/the-zen-of-erlang.html

- https://jlouisramblings.blogspot.com/2010/11/on-erlang-state...

👤ramchip🕑3mo🔼0🗨️0