CopilotX, Blackbox.ai and GitHub Copilot today (30-07-2024) find loops in code in under 5 seconds, giving you up to four possible fixes depending on what you want. That's faster than any human, even one with the fastest speed reading; an average human wouldn't finish reading the code before the AI had already corrected it. Spellbox handles all programming languages. Tabnine AI debugs errors and even offers better variations than you'd thought of. I don't just think you're outdated; frankly, even I'm outdated, because today, 30-07-2024, AI is worse than it will be tomorrow, and I literally mean tomorrow. Every person using these AIs contributes, hour by hour, to their learning.
I figured I might be a potential use-case for this, as I sometimes write JavaScript code to play a game called Bitburner (cool game btw), but I have no actual training in doing so, so I just bash my head against a wall until it kind of works. I'm no programmer, so maybe the AI could help me. I took one of my scripts, deleted a bracket in a loop right at the top, and asked the AI why it wasn't working, saying that I suspected there was something wrong with the flow of the script and providing the error message.
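For context, the kind of break I mean looks roughly like the snippet below. It's not my actual script, just a made-up stand-in with the same shape: a Bitburner script with a loop right at the top. Delete the marked bracket and the engine refuses to parse the file at all.

```javascript
// Made-up stand-in, not my real script: a typical Bitburner entry point
// with a loop near the top.
/** @param {NS} ns */
export async function main(ns) {
  const targets = ["n00dles", "foodnstuff"];
  for (const target of targets) {
    await ns.hack(target);
  } // <- this is the sort of closing bracket I deleted; without it the
}   //    file doesn't parse and you get something like
    //    "SyntaxError: Unexpected end of input".
```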
It provided me with five helpful points on how to make my code more readable. Sure, fair enough, my code is horrendous. But my IDE also flagged the missing bracket automatically in 0.00001 seconds. It did this by counting the brackets and checking that each one had a counterpart. LLMs can't do this, because that's a logical approach, and LLMs just generate something that looks like logic went into making it. It just looked at all its training data for "user asks someone else for help" and mixed together the usual responses. Hence why I got tips on making my code more human-readable when asking a machine how to fix it.
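That kind of check is trivial to do mechanically, by the way. Here's a rough sketch of the idea in JavaScript (not what my IDE literally runs, and it naively ignores strings and comments, but it's the same principle):

```javascript
// Rough sketch of a bracket-balance check: push openers on a stack,
// pop them when the matching closer shows up, complain otherwise.
function findUnbalancedBracket(source) {
  const closerToOpener = { ")": "(", "]": "[", "}": "{" };
  const openers = new Set(["(", "[", "{"]);
  const stack = [];
  for (let i = 0; i < source.length; i++) {
    const ch = source[i];
    if (openers.has(ch)) {
      stack.push({ ch, index: i });
    } else if (ch in closerToOpener) {
      const top = stack.pop();
      if (!top || top.ch !== closerToOpener[ch]) {
        return { problem: "mismatched closer", index: i };
      }
    }
  }
  // Anything left on the stack was opened but never closed.
  if (stack.length > 0) {
    return { problem: "unclosed opener", index: stack[stack.length - 1].index };
  }
  return null; // all brackets balance
}
```

Point it at a file's contents and it tells you, deterministically, where the mismatch is, which is more or less what the editor's squiggly line was doing.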
I then tried specifically prompting it that maybe there was a bracket missing. It suggested that I delete the part of my code that declares the main function, i.e., make it completely unusable. As that would make the code's structure significantly harder to interpret than a single missing bracket on line 4, I didn't bother asking it how I would fix that. Again, there was no actual logic behind this suggestion; it just misinterpreted the error message this time.
You're probably thinking that the future models will fix it or something, but they can't. The fundamental approach of LLMs creates these problems: there is no logic behind them, they're just very good at making word associations. They can make words that go together; they can't think about which words they should actually be. It's like how, if I put ever-increasing amounts of computing power and resources into my phone's autocorrect, it's not going to magically learn to think. There's nothing in it that could do that. It'll just get faster and maybe suggest more likely words.