(Replying to PARENT post)

What kind of basic programming does one have to do to feel threatened by ChatGPT? This tool does great at regurgitating basic coding, but anything a little more complex is a mix of nonsense and confidence. Do people actually write "leet code" on a daily basis?
👤1234throway🕑2y🔼0🗨️0

(Replying to PARENT post)

I just don’t feel threatened by AI at all. Maybe I’m not seeing the full picture, but the quality of your software necessarily depends on a constant re-evaluation of customer needs, business priorities, human values, etc. Lots of squishy stuff.

Programming is not just taking product requirements and spitting out the correct algorithms.

👤danielvaughn🕑2y🔼0🗨️0

(Replying to PARENT post)

> The kind of thing that you witness once in a generation. (The last two times were object-oriented programming and the World-Wide Web.)

Folks, let's get real.

I kind of snarked at this, but then I realized it's written by Bertrand Meyer and the conclusion is spot on: I agree that, should these tools proliferate, they will highlight the importance of formal methods and verification.

Even if ChatGPT-like systems get faster and gain deeper models of computer syntax and structure, I suspect the one problem that will be difficult to solve is elegance and abstraction. Often the abstractions we choose are based on laws and ways of thinking that help us manage complex phenomena, using notation which makes it easier for us to reason about them.

And even if that does get solved somehow, we're going to have to understand the results somehow. A stronger emphasis on proofs and model checking will be useful to anyone who wants to be sure that the program FutureGPT produced isn't simply "somewhat right" but is actually right.
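
To make that concrete, here's a minimal sketch of what I mean by mechanically checking generated code against its specification, using property-based testing (this assumes Python and the hypothesis library; gpt_isqrt is a hypothetical stand-in for whatever the model emits, not anything from the article):

    # Treat the generated function as untrusted and check the property we
    # actually care about. gpt_isqrt is a stand-in for model output.
    from hypothesis import given, strategies as st

    def gpt_isqrt(n: int) -> int:
        # pretend this body came back from FutureGPT
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        return r

    @given(st.integers(min_value=0, max_value=10**6))
    def test_isqrt_meets_spec(n):
        r = gpt_isqrt(n)
        assert r * r <= n < (r + 1) * (r + 1)

Run it with pytest. It's not a proof, but this kind of mechanical check gets a lot more valuable when you didn't write the code yourself.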

👤agentultra🕑2y🔼0🗨️0

(Replying to PARENT post)

I feel that in the short run, all programmers will simply become more productive. But in the medium and long run, the narrow SE roles will become obsolete, such as any SE role where there's a "certification process", e.g. Cloud (AWS, GCP, Kubernetes), iOS, Android, even CCNA, etc.

A generalist SE (side note: which is how I assume HN's readership skews) will be the one to benefit, as a good generalist SE can combine networking understanding (CCNA), Ops/DevOps (cloud), SE (backend, frontend, mobile) and more (embedded programming is perhaps where the line could be drawn, but ChatGPT is good at explaining hardware concepts too!), and thrive by depending on ChatGPT or an equivalent system for discernible assistance.

(Minor: I even tweeted my opinion with my personal note about it too: https://twitter.com/raj_nathani/status/1615709768487948292?s...)

👤rajnathani🕑2y🔼0🗨️0

(Replying to PARENT post)

In case anyone is interested, I have started on a natural language programming web app based on OpenAI's API.

https://aidev.codes. By default it is like the OpenAI JavaScript playground, except it immediately hosts the results in a web page and allows you to edit the accumulated prompt.

It can also edit specific files. Many improvements are planned. Tonight I got a lot done related to creating and searching knowledgebases using embeddings.
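
For the curious, the embeddings part is conceptually just nearest-neighbour search over vectors. A rough sketch below (illustrative only, not the actual aidev.codes code; embed() is a placeholder for whatever real embedding API you call):

    # Illustrative sketch only, not the actual aidev.codes implementation.
    # embed() is a placeholder for a real embedding API call; it just has
    # to return a fixed-length unit vector for a piece of text.
    import numpy as np

    def embed(text):
        vec = np.zeros(64)
        for i, b in enumerate(text.encode()):
            vec[i % 64] += b
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def search(query, docs, top_k=3):
        doc_vecs = np.stack([embed(d) for d in docs])
        scores = doc_vecs @ embed(query)  # cosine similarity (unit vectors)
        best = np.argsort(scores)[::-1][:top_k]
        return [docs[i] for i in best]

    print(search("host the result as a web page",
                 ["edit a specific file", "host results on a page", "search the knowledgebase"]))

In practice you store the document vectors once and only embed the query at search time.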

If anyone tries it, please consider writing any feedback in the Discord.

👤ilaksh🕑2y🔼0🗨️0

(Replying to PARENT post)

Knowing what you want is both the fun part and the hard part of most software development. That, and understanding the environment you are operating in. Neither of these is something I feel an AI has a strong advantage with, given that you need to provide this information.

Perhaps I can try to say, "Please decide what browser features I should use to maximize profit," and it will actually do some proper research and contract out studies and all that... but I don't see that happening yet. Even so, the point about validation remains.

I can't help but draw a parallel with automated theorem provers. Sure, they resolve "true", but then who validates the validators? It's a never-ending cycle.

At the end of the day, you must learn to love and trust your tools and then stand by the joint creations.

👤nixpulvis🕑2y🔼0🗨️0

(Replying to PARENT post)

I feel like there was some (informal?) theorem that a full specification of a program is at least as long as the program. Since you have to tell the AI what the specification is, the worst that could happen is that programming becomes more like Knuth's "literate programming", or, maybe even better, all programs become formally verified. But that will just mean chasing bugs in your specification. It could eliminate implementation bugs, but it's not going to help with figuring out what my problem actually is. It also isn't going to be able to choose among the solutions, so I'll just end up making a bunch of choices. So maybe programmers become like technical managers, but managing an AI bot instead of a person. Which is sort of how it is today, except the compiler is pretty stupid and pedantic.

But even something straightforward, like "write a hash function whose output evenly mixes the input" or "write a function to convert an RGB image to a dithered B&W image", is harder than it seems. How do I know the dithering algorithm chosen is good for my data? How do I know the hash function actually does what I asked and doesn't have some funnel somewhere? I'll have to write some tests, but I can't get the AI to write the tests, because how do I know the tests will be right?
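
For the hash case, the test I'd end up writing myself is something like this rough sketch (not a rigorous statistical test; my_hash stands in for whatever the AI produced):

    # Rough check for "does the output evenly mix the input": hash a pile of
    # keys into buckets and see how lopsided the fullest bucket is.
    # my_hash is a stand-in for whatever the AI produced.
    from collections import Counter

    def my_hash(s):
        h = 0
        for ch in s:
            h = (h * 31 + ord(ch)) & 0xFFFFFFFF
        return h

    def bucket_skew(keys, n_buckets=64):
        counts = Counter(my_hash(k) % n_buckets for k in keys)
        expected = len(keys) / n_buckets
        worst = max(counts.get(b, 0) for b in range(n_buckets))
        return worst / expected  # ~1.0 is even mixing; much larger means a funnel

    print(bucket_skew([f"user-{i}" for i in range(10_000)]))

Which of course just pushes the question down a level: now I have to trust that this check is the right one.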

👤prewett🕑2y🔼0🗨️0

(Replying to PARENT post)

So... if it's so revolutionary, why can't I get it to solve level 1 Advent of Code problems?

Like here is what it generates for the 2016 day 1 problem:

def find_distance(instructions):
    x, y = 0, 0
    direction = 0 # 0: North, 1: East, 2: South, 3: West
    visited = set()
    visited.add((0,0))
    instructions = instructions.split(", ")
    for instruction in instructions:
        turn = instruction[0]
        distance = int(instruction[1:])
        if turn == "R":
            direction = (direction + 1) % 4
        else:
            direction = (direction - 1) % 4
        for _ in range(distance):
            if direction == 0:
                y += 1
            elif direction == 1:
                x += 1
            elif direction == 2:
                y -= 1
            else:
                x -= 1
            if (x, y) in visited:
                return abs(x) + abs(y)
            visited.add((x, y))
    return abs(x) + abs(y)

This function returns 113 from my input for that day, which is actually the answer for part 2... For part 1 it should be 234.
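
(For comparison, part 1 just wants the Manhattan distance of the final position after following all the instructions, with no early return on a revisit; roughly:)

    # Part 1: follow every instruction, then report |x| + |y| at the end.
    def part1_distance(instructions):
        x, y = 0, 0
        dx, dy = 0, 1  # start facing North
        for instruction in instructions.split(", "):
            turn, distance = instruction[0], int(instruction[1:])
            if turn == "R":
                dx, dy = dy, -dx
            else:
                dx, dy = -dy, dx
            x, y = x + dx * distance, y + dy * distance
        return abs(x) + abs(y)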

When I tried in Rust, the solution didn't even compile, which is business as usual in my experience when trying to get ChatGPT to write anything practical (not a 'toy' example) in Rust.

I gave it another chance with day 2 in Python and it failed at that as well. These are VERY simple tasks; CHILDREN can solve the initial couple of days of Advent of Code.

In this article they give an example of a square root function. Maybe the authors could consider trying some more realistic tasks? So silly...

👤gptgpp🕑2y🔼0🗨️0

(Replying to PARENT post)

Maybe I'm overly optimistic, but I see these tools as empowering us all to be more productive and to become directors who specify requirements rather than spend time doing the work. ChatGPT's ability to write documents for certain things that would take a while to research myself is an obvious example.

For software engineering, I look forward to no longer writing the majority of my code and instead managing this tool, guiding it to create apps and websites faster than I can and outside the areas of my expertise.

👤robbomacrae🕑2y🔼0🗨️0

(Replying to PARENT post)

I think in its current state, ChatGPT or any GPT-3.5 or similarly trained transformer tool can empower someone who cannot code (kind of me) but who understands the logic and prompting that goes into it, to get a lot of new things done. I'm a Product Manager who has written some code in the past, but not for many years in a professional setting anyway. I'm quite technical, but not specifically in the code anymore.

In addition, it can help individuals and teams learn/debug/ship things quicker - which is unfortunately/fortunately something that every company wants and needs.

If it does what some think it might, UBI may well become necessary for some, especially if you consider diffusion tools like DALL-E and Stable Diffusion as well.

👤brianjking🕑2y🔼0🗨️0

(Replying to PARENT post)

Yeah, wake me up when it can do close-to-the-metal programming on an ARM to fulfill real-time requests from an industrial robot, for instance... also in safety-critical systems, as a bonus.

I'm not saying that it will never happen, but if your job is threatened by ChatGPT right now, you were not really doing real software engineering anyway.

👤nostromo123🕑2y🔼0🗨️0

(Replying to PARENT post)

Some years ago (perhaps 2015) I told a non-programmer that MOST programmers would be obsolete by the end of the century because of AI. Surprisingly, he scoffed, saying there is no way AI will be able to handle all those business edge cases. The stuff in the article is pretty basic and a long way from complex business logic, but we are on our way. I just hope business application developers like myself can survive until retirement, which for me is sometime around 2050. I am getting less optimistic.
👤systematical🕑2y🔼0🗨️0

(Replying to PARENT post)

See it as augmentation and not a threat. People who went 'into programming', learned the basics, and then stood still will have issues, but ChatGPT still needs senior or talented developers to do anything useful for now (like juniors, you need to tell it things step by step). But then it's a great tool, saving a lot of time and money. I just wish it were not owned by a few rich peeps and that training costs would drop fast.
👤tluyben2🕑2y🔼0🗨️0

(Replying to PARENT post)

This thing would make garbage-in-garbage-out a lot more widespread … accelerating the ruin of info tech …
👤quantum_state🕑2y🔼0🗨️0

(Replying to PARENT post)

This is part of what I'm working on for InventAI (https://inventai.xyz). It'd be great to get into YC, but I'm a solo founder! Any potential co-founders out there?
👤jasfi🕑2y🔼0🗨️0

(Replying to PARENT post)

Pretty sure the outsourcing shops will use it to inject more mountains of shit code into everything. But I guess that's not a bad thing for the employment of consultants who will be called in to fix things.
👤gsatic🕑2y🔼0🗨️0

(Replying to PARENT post)

It can't even solve, like, day 4 of Advent of Code, and it only solves the earlier ones because the problems can be found solved verbatim on the internet, with a different question attached.

So, nothing, I guess.

👤kaba0🕑2y🔼0🗨️0

(Replying to PARENT post)

I feel pretty secure in my job until ChatGPT can write itself. Once it can improve on itself, then I'll be really worried.
👤squidbot🕑2y🔼0🗨️0

(Replying to PARENT post)

Takes some skill to spot that n(n+1)(2n+1)/6 is not quadratic, doesn't it?
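
(To spell the arithmetic out: n(n+1)(2n+1)/6 expands to (2n^3 + 3n^2 + n)/6, which is cubic in n, not quadratic.)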
👤mathteddybear🕑2y🔼0🗨️0

(Replying to PARENT post)

I'll still have to leetcode anyway, ffs.
👤fizzbuzz69🕑2y🔼0🗨️0