Without skilled developers supervising AI coding assistants, the assistants are likely to break your code rather than write it. Right now, only people can fine-tune and evaluate AI.

In the rush to embrace coding assistants like Amazon CodeWhisperer to generate new code for developers, we haven’t spent much time asking if that code is any good. By some measures, the answer is clearly “no.” According to a GitClear analysis, “Code generated during 2023 … resembles [that of] an itinerant contributor,” likely caused by increased use of coding assistants.

This is not to say that coding assistants are bad. They can be incredibly helpful. The issue is that we need to invest more time figuring out how to apply generative AI to tasks like code refactoring, as covered in a recent Thoughtworks interview. The good news? AI can help, but perhaps not always in the ways we expect.

The wrong kind of race

Much of the focus on coding assistants has been on how they improve throughput for developers. Unfortunately, throughput is rarely the right metric. Developers, after all, spend relatively little time writing new code. As Adam Tornhill, founder and CTO of CodeScene, said in the Thoughtworks interview, up to 70% of a developer’s time is spent understanding an existing system rather than adding code to it (which might comprise 5% of her time). Not only is development speed the wrong metric, it also distracts developers from stepping back from their code to make fewer, better bets on which code to write in the first place, as I’ve noted. What matters more than development speed? Readability, for one.
As Martin Fowler, chief scientist at Thoughtworks, stresses in the same interview, “Readability of a codebase is key to being able to make changes quickly because you need to be able to understand the code in order to change it effectively.” Coding assistants, although helpful for increasing development speed, can be even more helpful in explaining code or rewriting it in a more familiar programming language, thereby giving a new spin on “readability.”

Refactoring matters, too. Refactoring reduces complexity and improves readability by making small changes to code without altering its external behavior. Here, unfortunately, AI has been less helpful, as Tornhill details. His company, CodeScene, used large language models (LLMs) from OpenAI, Google, Meta, and others to refactor code, but found that 30% of the time the AI failed to improve the code. Worse, two-thirds of the time, the AI actually broke the unit tests, an indication that instead of refactoring the code, it was changing its external behavior in subtle but critical ways (“really odd things like moving a ‘this’ reference to an extracted function, which would alter its meaning, [or removing] entire branches,” etc.). The best-performing AI in CodeScene’s testing correctly refactored the code just 37% of the time. The rest of the time, the AI got the refactoring wrong or simply didn’t improve the code. That’s not a hit rate developers can trust.

“AI now makes it so easy to write a lot of code that shouldn’t be written in the first place,” Tornhill notes. We can’t really rely on AI to write code for us or to improve existing code, especially legacy code with functions that run hundreds of lines: “You stuff that into a large language model, and it will break down, guaranteed,” declares Tornhill. Instead, we need to look for other ways to put AI to use.

People matter more than ever

The key is to align developers with AI, rather than try to replace them with AI.
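The failure mode Tornhill quotes, a `this` reference moved into an extracted function, is easiest to see in a small sketch. The class, names, and numbers below are invented for illustration, not drawn from CodeScene’s study; the point is that a real refactor keeps external behavior identical, and a unit test over that behavior is what separates a refactor from a subtle rewrite.

```typescript
// Hypothetical sketch of a behavior-preserving "extract function" refactor,
// plus the kind of unit test that catches the broken variants Tornhill describes.

class Cart {
  items: number[] = [];
  taxRate = 0.1;

  // Original method: subtotal and tax computed inline.
  total(): number {
    const subtotal = this.items.reduce((sum, price) => sum + price, 0);
    return subtotal * (1 + this.taxRate);
  }

  // A correct refactor extracts a helper while keeping every `this`
  // reference bound to the same object, so external behavior is unchanged.
  totalRefactored(): number {
    return this.subtotal() * (1 + this.taxRate);
  }

  private subtotal(): number {
    return this.items.reduce((sum, price) => sum + price, 0);
  }
}

// The safety net: had a "refactor" detached the `this` reference from the
// class (the failure mode quoted above), the extracted code would no longer
// see items or taxRate, and these checks would fail rather than silently
// shipping changed behavior.
const cart = new Cart();
cart.items = [10, 20];
if (Math.abs(cart.total() - 33) > 1e-9) throw new Error("total with tax is wrong");
if (cart.total() !== cart.totalRefactored()) throw new Error("refactor changed behavior");
```

This is also why broken unit tests were such a damning signal in CodeScene’s results: tests pin down the external behavior that a refactor, by definition, must not change.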
“The quicker we’re able to generate new code, the harder it is for the team to understand that code,” notes Tornhill. Throughout the interview, this theme kept coming up: the need to keep smart developers involved in the process to evaluate and tune AI.

However much developers may worry about their robot creations taking over, that’s not going to happen anytime soon. In fact, in many ways, people are more important than ever, given the increased use of AI. Though you may be tempted to let AI do your development for you, the reality is that it can’t. Strong developers, backed by traditional aids such as linters and code reviews (to maintain familiarity with the code), are essential to using AI effectively.

Given the propensity of AI tools to accelerate code production, what we need most of all is to slow things down a little. Now is a great time to figure out where AI can help improve discrete processes within code development, under the guidance of experienced developers.