codeptualize

✨ Twitter: @codeptualize Email: codeptualize[at]pm.me

πŸ“… Joined in 2021

πŸ”Ό 1,631 Karma

✍️ 433 posts

πŸŒ€
15 latest posts


(Replying to PARENT post)

Seeing this kinda stuff makes me want to keep my physical license and ID. No need for digital ones, I'm good with the cards.
πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Besides the obvious issues at hand, it's kinda ironic that they publish this on GitHub; EU tech independence is going great.
πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Haha, that's a good point. I guess it's another sign that they really have no clue what they are doing.
πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Maybe we should scan their communications for corruption and undue influence. I'm sure it's all above board, so it should be fine if we get an independent group to review them, right? (Just following their reasoning...)
πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

One interesting line in the proposal:

> Detection will not apply to accounts used by the State for national security purposes, maintaining law and order or military purposes;

If it's all very safe and accurate, why is this exception necessary? Doesn't this say either that it's not secure, or that there is a likelihood of false positives that will be reviewed?

If they have it all figured out, this exception should not be necessary. The reality is that it isn't secure, as they are creating backdoors in the encryption, and they will flag many communications incorrectly. That means a lot of legal private communications will leak, and/or will be reviewed by the EU, which has absolutely no business looking into them.

It's absurd that they keep trying this ridiculous plan over and over again.

I also wonder about the business implications. I don't think we can pass compliance if we communicate over channels that are not encrypted. We might not be able to do business internationally anymore as our communications will be scanned and reviewed by the EU.

πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

"the science" I don't agree with this part, and I think it's quite dangerous to rope that in.

Science is not one way of thinking; it's a methodology, it's seeking truth. There might be bad actors and idiots, and there is likely a lot wrong, but the beautiful thing about science is that facts matter. If someone publishes bullshit, you can repeat the study and prove them wrong.

That science is (wrongfully) taken as justification for stupid things is not on "the science" as a whole.

If anything makes me hopeful, it is science and the remarkable developments happening.

πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I'm not assuming anything; I work in software development. In this industry we spend ungodly amounts of time and resources attempting to keep data safe, and building systems like the ones proposed to flag and handle malicious activity of many kinds. I think I know quite well how hard it is, and how easy it is to get it wrong, with potentially very real consequences.

The only things being handwavingly dismissed are the collateral damage, side effects, very real risks, and concerns about the effectiveness of the proposed solutions.

πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This! Very well explained.
πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I've lost count of how many times the "let's get rid of encryption" plans have been tried and failed. It's truly ridiculous how these people don't understand anything about encryption and somehow still think this is a good idea.

How is it possible that after years of discussing plans like this, they still managed to not listen to anyone who knows anything about encryption and online safety?

Makes me really worried about the future. There is a lot going on in the world, and somehow they feel the need to focus on making our communications unsafe and basically getting rid of online privacy.

The goal they are trying to achieve is good, but the execution is just stupid and will make everyone, including and maybe especially the people they want to protect, less safe online.

The age verification thing is another example. All it does is send a lot of sensitive traffic over cheap or free VPNs (that might be controlled by foreign states). Great job, great win for safety!

πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I think there are two cases:

1. Self hosting

2. Running locally on device

I have tried both, and find myself not using either.

For both, the quality is lower than the top-performing models in my experience. Part of it is the models, part might be the application layer (ChatGPT/Claude). It would still work for a lot of use cases, but it certainly limits the possibilities.

The other issue is speed. You can run a lot of things even on fairly basic hardware, but the token speed is not great. Obviously you can get better hardware to mitigate that but then the cost goes up significantly.

For self-hosting, you need a certain amount of throughput to make it worth having GPUs running. If you have spiky usage, you are either paying a bunch for idle GPUs or you have horrible cold start times.

Privacy-wise: the business/enterprise terms of service of all the big model providers give enough privacy guarantees for all, or at least most, use cases. You can also get your own OpenAI infra on Azure, for example; I assume with enough scale you can get even more customized contracts and data controls.

Conclusion: quality, speed, and price favor the hosted models, and you are able to use them even in privacy-sensitive settings.
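
To make that concrete, here's a minimal sketch (mine, not anything official) of how the same client code can hit either a self-hosted OpenAI-compatible server or a hosted endpoint; it assumes something like Ollama listening locally, and the model names and port are placeholders:

```ts
// Sketch: the same client code targets a self-hosted, OpenAI-compatible
// server or a hosted endpoint just by swapping the base URL.
// Assumes a local server (e.g. Ollama) is running; model names are placeholders.
import OpenAI from "openai";

const local = new OpenAI({
  baseURL: "http://localhost:11434/v1", // hypothetical local server
  apiKey: "not-needed-locally",
});

const hosted = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function ask(client: OpenAI, model: string, prompt: string) {
  const res = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content;
}

console.log(await ask(local, "llama3", "Summarize the trade-offs of self-hosting."));
console.log(await ask(hosted, "gpt-4o", "Same question, hosted."));
```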

πŸ‘€codeptualizeπŸ•‘1moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Any decent-sized project will encounter breaking changes in dependencies.

The big frontend frameworks have great backward compatibility and usually provide codemods that automatically update your project.

If you install UI components and other libraries that might get abandoned or have breaking changes in major version updates, you might have to put in more effort, but that's no different in Go or Python.
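
For anyone who hasn't used them: a codemod is basically a scripted AST rewrite. A tiny sketch of the idea using jscodeshift (the hook names being renamed here are made up, not from any real framework migration):

```ts
// Sketch of a framework-upgrade codemod: mechanically rename a deprecated API
// across a codebase. Uses jscodeshift's transform signature; "oldHook" and
// "newHook" are hypothetical names.
import type { API, FileInfo } from "jscodeshift";

export default function transform(file: FileInfo, api: API): string {
  const j = api.jscodeshift;
  return j(file.source)
    .find(j.Identifier, { name: "oldHook" })
    .forEach((path) => {
      path.node.name = "newHook"; // same rewrite applied to every match
    })
    .toSource();
}
```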

πŸ‘€codeptualizeπŸ•‘3moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This is the answer. I thought this was fairly common knowledge: height is animated quite often (think dropdowns), so there's no need for the overcomplication.
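
For reference, a minimal sketch of what I mean (my own example, not from the parent; element ids and timing are hypothetical):

```ts
// Sketch: animate a dropdown open/closed by transitioning an explicit height.
// Transitions don't animate from height: auto, so measure scrollHeight and
// set a concrete pixel value instead.
const panel = document.getElementById("dropdown-panel") as HTMLElement;
panel.style.overflow = "hidden";
panel.style.height = "0px";
panel.style.transition = "height 200ms ease";

function toggle(open: boolean): void {
  panel.style.height = open ? `${panel.scrollHeight}px` : "0px";
}

document.getElementById("dropdown-button")?.addEventListener("click", () => {
  toggle(panel.style.height === "0px");
});
```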
πŸ‘€codeptualizeπŸ•‘3moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Whisper large-v3 from OpenAI, but we host it ourselves on Modal.com. It's easy, fast, has no rate limits, and is cheap as well.

If you want to run it locally, I'd still go with Whisper; I'd look at something like whisper.cpp (https://github.com/ggml-org/whisper.cpp). Runs quite well.
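
As a rough idea of how we call it, here's a sketch; the endpoint URL and response shape are placeholders, since the actual API depends on how you wire up the Modal or whisper.cpp server deployment:

```ts
// Rough sketch of posting audio to a self-hosted Whisper endpoint.
// The URL and JSON shape are hypothetical, not a real API.
import { readFile } from "node:fs/promises";

async function transcribe(path: string): Promise<string> {
  const audio = await readFile(path);
  const form = new FormData();
  form.append("file", new Blob([audio]), "audio.wav");

  const res = await fetch("https://your-whisper-app.example/transcribe", {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`Transcription failed: ${res.status}`);
  const { text } = (await res.json()) as { text: string };
  return text;
}

console.log(await transcribe("meeting.wav"));
```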

πŸ‘€codeptualizeπŸ•‘4moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Love this, definitely rooting for this to get big!

I think the goal is great. My dream language is something "in between Go and Rust", Go but with more expressive types, Rust-light, something along those lines. This seems like it is hitting that sweet spot.

Imo Go gets a lot right when it comes to productivity, but the type system always annoys me a bit. I understand the choice for simplicity, but my preference is different.

Rust is quite enjoyable, especially when it comes to the type system. But, kinda the opposite of Go, it's a lot, maybe too much for me, and I frequently don't know what I'm doing haha. I also don't really need Rust-level performance; most things I do will run totally fine with GC.

So Go with some extra types, same easy concurrency, compilation and portability sounds like a winner to me.

πŸ‘€codeptualizeπŸ•‘4moπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I kinda agree, but the article does not do a great job of defending the position. Who cares about docs?

Vibe coding, or just letting AI take the wheel, will work in some situations. It allows non-coders to do things they couldn't before, and that's great. Just like spreadsheets, no-code tools, and integration tools like Zapier, this will fill a bunch of gaps and push the threshold where you need to get software devs involved.

But as with all these solutions, there is that threshold where the complexity, error margin, and scale go beyond workable, and then you need to unfuck that situation and enforce correctness. And I think this will result in plenty of "oops my data is gone" types of problems.

If you know upfront that your project will get complex and/or needs to scale you might be best off skipping the vibe coding and just getting it right, but for prototypes, small internal tools, process "glue", why not.

It's not a replacement for software engineering as a whole (yet); it's just another tool in the toolbox, and imo that's great. Can I use it? No... I have tried and it just doesn't work at all for bigger, more complex projects.

πŸ‘€codeptualizeπŸ•‘6moπŸ”Ό0πŸ—¨οΈ0