Matt Asay
Contributor

Making generative AI work for you

opinion
Oct 14, 2024 | 5 mins
Development Tools | Emerging Technology | Generative AI

Find the sweet spot where genAI boosts your productivity but doesn’t get you so far in over your head that you can’t tell good output from bad.


It’s not often someone can talk about genAI in a “pragmatic and realistic” way, but those are exactly the accolades handed out to AWS Product Management Director Massimo Re Ferrè following his recent generative AI (genAI) talk. It’s not hard to find opposing sentiment: We continue to pile mountains of hype on generative AI even as the economics of training large language models (LLMs) remain insane. “The capex on foundation model training is the ‘fastest depreciating asset in history,’” says Michael Eisenberg. To really hit its stride, genAI is going to take time.

But that’s someone else’s problem to solve. For you, the question is how to use (or ignore) genAI in your work right now. For that, Re Ferrè introduces a useful framework for thinking about how and when to embrace genAI. As he says, you can rent a car and use that car to drive into a wall or get to the beach, just like you can use genAI to generate terrible hallucinations or to drive real productivity as a developer. It really is a choice.

The wow moment

Every so often, Re Ferrè said in his talk, you encounter a technology so disruptive that it creates a “wow moment” when “you realize something is really changing in the industry and … this is how things are going to work in the long run.” Other “wow moments” have been things like virtual machines, cloud, and, he argues, genAI.

At its most basic, genAI “statistically predict[s] what you want” out of a corpus of data. LLMs comb through data to find patterns and then surface those patterns in interesting ways. Traditionally, you had two alternatives: rely on your own knowledge to build an asset (an edge function if you’re a developer, a blog post if you’re a writer), or search for information online to guide you in building that asset. In either case, the onus is on you to create the asset. GenAI offers a third approach: You use natural language to prompt an LLM to create the asset for you.

The problem with this third method is trust. How do you trust genAI to return consistent, reliable results? This is the hardest thing about genAI, and it’s where Re Ferrè offers some very helpful guidance.

Learning zone good

As part of his talk (followed up by a detailed blog post), Re Ferrè utters an inconvenient truth about technology: Everything fails. As much as vendors may want to pitch their software or hardware as infallible tools to tackle an enterprise’s hardest problems, the reality is that most tech can be useful—within proper guardrails. Because everything, including genAI, fails, “it’s just a matter of how you approach that failure, how you mitigate that failure, [and how you’re] aware of the risk associated with potential failure,” he suggests.

The key, he says, is to figure out how to get value from genAI assistants despite their failings, by maintaining appropriate control. He frames this as being either in the “boost zone” or the “learning zone.”

The boost zone is “where you can leverage the assistant for tasks that are close to your skill levels and where you can still be in full control,” he stresses. In other words, you’re capable of doing all the work yourself, but you choose to have a genAI assistant complement that work (e.g., you write functions but then have an assistant document what each function does with a three-line description). Because you could do the work yourself, it’s easy for you to verify that the genAI bot is doing it well. It saves you time, but you’re still in control.
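A minimal sketch of that boost-zone pattern: the `median` function below is invented here to stand in for code you wrote yourself, and the three-line doc comment on top is the kind of description an assistant might draft. Because you wrote the function, checking the comment’s accuracy takes seconds.

```rust
/// Computes the median of a non-empty slice of integers.
/// Sorts a copy of the input and returns the middle element,
/// averaging the two middle elements when the length is even.
fn median(values: &[i64]) -> f64 {
    let mut sorted = values.to_vec();
    sorted.sort();
    let mid = sorted.len() / 2;
    if sorted.len() % 2 == 1 {
        sorted[mid] as f64
    } else {
        (sorted[mid - 1] + sorted[mid]) as f64 / 2.0
    }
}

fn main() {
    assert_eq!(median(&[3, 1, 2]), 2.0);
    assert_eq!(median(&[4, 1, 2, 3]), 2.5);
    println!("ok");
}
```

The work split is the point: you supply the logic, the assistant supplies the boilerplate description, and verification stays cheap.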

The learning zone pushes you a bit out of your comfort zone. This is where you “leverage the assistant to help you at a level of complexity you are not fully familiar with,” Re Ferrè says, though it’s not so far from your knowledge that you’re in totally uncharted territory. Maybe you know how to write a function in Java but you need it in Rust, so you ask the assistant to tell you what that would look like. As he suggests, this roughly equates to “the 2024 version of searching the Internet for something you don’t know.”
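As a hedged sketch of that learning-zone scenario, here is what an assistant’s Rust rendition of a function you might already know how to write in Java (a simple word counter, invented here for illustration) could look like:

```rust
use std::collections::HashMap;

// A Rust take on a function a Java developer would recognize
// (think HashMap<String, Integer> plus String.split):
// count how often each whitespace-separated word appears.
fn word_counts(text: &str) -> HashMap<String, usize> {
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        // entry(...).or_insert(0) fetches or creates the counter in place.
        *counts.entry(word.to_lowercase()).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = word_counts("to be or not to be");
    assert_eq!(counts["to"], 2);
    assert_eq!(counts["be"], 2);
    assert_eq!(counts["or"], 1);
    println!("ok");
}
```

Because this sits in the learning zone rather than the boost zone, you would still read unfamiliar idioms like `entry(...).or_insert(0)` critically instead of accepting them on faith.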

Danger zone bad

Where you don’t want to be, he posits, is the “danger zone” where you’re asking the assistant to, say, write a CRM system when you know nothing about CRM systems. In the danger zone, you have no good way of verifying whether the output is a “yeah!” or a “nope!” The ideal place to be, he writes, is “somewhere around the boost zone (thus improving your productivity in a very controlled manner), but you also want to stretch your comfort zone to explore how to do things you are not familiar with (without taking too much risk).”

So, can genAI be an exceptional asset for improving productivity? Yes. But that, as Re Ferrè notes, is really a matter of using it within guardrails, rather than blindly asking it to do too much.


Matt Asay runs developer relations at MongoDB. Previously, Asay was a Principal at Amazon Web Services and Head of Developer Ecosystem for Adobe. Prior to Adobe, Asay held a range of roles at open source companies: VP of business development, marketing, and community at MongoDB; VP of business development at real-time analytics company Nodeable (acquired by Appcelerator); VP of business development and interim CEO at mobile HTML5 startup Strobe (acquired by Facebook); COO at Canonical, the Ubuntu Linux company; and head of the Americas at Alfresco, a content management startup. Asay is an emeritus board member of the Open Source Initiative (OSI) and holds a J.D. from Stanford, where he focused on open source and other IP licensing issues.
