Hi,
Just to put things into perspective.
Well, this example dates from some years ago, before LLMs and ChatGPT. But I agree that the principle is the same (and that was exactly my point).
If you analyse this, the error the person made was that he assumed an Arduino to be like a PC, while it is not. An Arduino is a microcontroller. The difference is that a microcontroller has limited resources: pins, hardware interrupts, timers, ... In addition, pins can be reconfigured for different functions (GPIO, UART, SPI, I2C, PWM, ...). Also, a microcontroller of the Arduino class does not run an RTOS, so it is coded "bare metal". And as there is no operating system that does resource management for you, you have to do it in the application.
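To make that concrete, here is a minimal bare-metal sketch, assuming an AVR chip like the ATmega328P on an Uno. The chip has exactly three hardware timers; claiming one means writing its registers yourself, and nothing stops another piece of code from reconfiguring the same timer behind your back.

#include <avr/io.h>
#include <avr/interrupt.h>

/* Claim Timer1 by hand: there is no OS to tell us whether
   anything else already depends on it. */
ISR(TIMER1_COMPA_vect) {
    PORTB ^= _BV(PB5);               /* toggle the on-board LED pin */
}

int main(void) {
    DDRB  |= _BV(PB5);               /* reconfigure the pin as a GPIO output */
    TCCR1A = 0;                      /* CTC mode: count up to OCR1A, then reset */
    TCCR1B = _BV(WGM12) | _BV(CS12) | _BV(CS10);  /* prescaler /1024 */
    OCR1A  = 15624;                  /* 16 MHz / 1024 / 15625 = 1 Hz */
    TIMSK1 = _BV(OCIE1A);            /* enable the compare-match interrupt */
    sei();
    for (;;) { }                     /* all the work happens in the ISR */
}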
And that was the problem: although resource management is the responsibility of the application programmer, the Arduino environment has largely pushed that off to the libraries. The libraries configure the ports in the correct mode, set up timers and interrupts, configure I/O devices, ... And in the end, this is where things went wrong. So, in essence, what happened is that the programmer made assumptions based on the illusion created by the libraries: that writing an application on an Arduino is just like using a library on a Unix box (which is not correct).
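A classic, well-documented case of exactly this on an AVR-based Uno: the Servo library quietly claims Timer1 for its pulse generation, and Timer1 also drives the PWM on pins 9 and 10, so analogWrite() on those pins stops working the moment you attach a servo. Nothing in the sketch below looks wrong, and the compiler will not complain:

#include <Servo.h>

Servo myServo;

void setup() {
    myServo.attach(7);    /* the library takes over Timer1, silently */

    /* Timer1 also generates the PWM on pins 9 and 10, so this
       no longer does what it appears to do: */
    pinMode(9, OUTPUT);
    analogWrite(9, 128);  /* broken: Timer1 is now in the Servo's mode */
}

void loop() {
    myServo.write(90);    /* the servo itself works fine */
    delay(1000);
}

The pin numbers are just an example; the point is that the resource conflict is invisible at the API level.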
That is why I have become careful about promoting tools that make things too easy, that are too good at hiding the complexity of things. Unless they are really dummy-proof after years and decades of use, you have to be very careful not to create assumptions that are simply not true.
I am not saying LLMs are by definition bad. I am just careful about the assumptions they can create.
Obtainium seems to have a very interesting take on this. Thanks for the link! I will check it out 🙂