As more CIOs and devops teams embrace generative AI, QA teams must adapt their continuous testing practices to keep up. Generative AI chatbots such as ChatGPT and AI code-generation tools built on large language models (LLMs), such as GitHub Copilot, are changing software development practices and productivity. McKinsey reports that developers using generative AI tools are happier, more productive, and able to focus on more meaningful work. According to the report, AI can help developers speed up code documentation, generation, and refactoring by anywhere from 20% to 50%. This data suggests that more CIOs and devops teams will experiment with generative AI software development capabilities to improve developer productivity and accelerate application modernization.

If generative AI accelerates coding and software development, will testing and quality assurance keep pace with the higher velocity? Unfortunately, history suggests that testing practices lag behind improvements in development productivity and devops automation. Kent Beck defined test-driven development (TDD) in the late 1990s, and test automation has been around for decades, yet many companies continue to underinvest in software testing. Continuous testing lags behind investments in automating deployments with CI/CD, building infrastructure as code (IaC), and other devops practices. As more organizations use devops to increase deployment frequency, teams must adopt continuous testing, use feature flags, enable canary releases, and add AIops capabilities.

Here are three ways that developers and teams can adapt continuous testing for the new development landscape created by generative AI capabilities.

Increase test coverage

As a first step, quality assurance (QA) teams should expect more third-party code from generative AI and add the tools and automation to review and flag this code. “Generative AI tools will continue to grow more popular over the next year, and this will increase velocity drastically but also pose security risks,” says Meredith Bell, CEO of AutoRABIT. “Teams need to incorporate static code analysis and integration testing automation now to act as guardrails for this new technology.”

Static and dynamic code analysis, including SAST, DAST, and other code security testing, are key tools for devops teams looking to leverage AI-generated code or integrate open source and other coding examples suggested by LLMs. These tests can identify security vulnerabilities and code formatting issues, regardless of whether a developer or an AI generated the code.

Automate test cases

QA teams should also expect devops teams to build features faster, which will mean more test cases requiring automation. If software testing isn’t keeping up with development and coding velocity, how and where can generative AI tools close the gap?

Mush Honda, chief quality architect at Katalon, suggests, “AI-generated tests based on real user journeys should be combined with visual tests, accessibility verifications, and performance benchmarks across browsers and devices to ensure all releases meet a comprehensive user experience.”

Emily Arnott, content marketing manager at Blameless, believes QA must also consider using LLMs to generate and automate more test cases. “Testing automation can use AI tools like LLMs to become faster and more flexible,” she says. “LLMs allow you to request a script using natural language, so you can say, ‘Write me a script that tests this piece of code with every input from this log file’ and get something that works.”
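To make Arnott’s example concrete, here is a minimal sketch of the kind of test an LLM might produce from that prompt: a pytest suite parametrized over every input captured in a log file. The function under test (normalize_query), the log path, and the log format are hypothetical stand-ins, not part of any real product.

```python
# A minimal sketch of an LLM-generated, log-driven test suite.
# The function under test, the log path, and the log format are hypothetical.
from pathlib import Path

import pytest


# Hypothetical function under test; in practice this would be imported
# from the application code the LLM was asked to exercise.
def normalize_query(raw: str) -> str:
    return raw.strip().lower()


def load_inputs(log_path: str = "logs/search_queries.log") -> list[str]:
    """Read one captured user input per line from a log file."""
    path = Path(log_path)
    if not path.exists():  # keep the suite runnable when the log is absent
        return ["  Hello World  "]
    return [line.strip() for line in path.read_text().splitlines() if line.strip()]


@pytest.mark.parametrize("raw_input", load_inputs())
def test_normalize_query_handles_logged_input(raw_input):
    result = normalize_query(raw_input)
    # Invariants that should hold for every logged input, whatever its content:
    assert result == result.strip()
    assert result == result.lower()
```

Generated tests like this still need human review: the asserted invariants are assumptions about the function’s contract, and a log-driven suite is only as good as the inputs the logs happen to capture.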
Scale and manage test data

Something else to expect is an increase in test complexity. For example, generating test cases for a search engine could leverage the user journeys and popular keywords captured in log files and observability tools. But with more companies exploring LLMs and AI search, using natural language query interfaces and prompts, test cases will also need to become more open-ended. To meet this demand, QA will need a much larger and more dynamic test data set.

Devops teams should look for ways to automate the testing of applications developed with LLMs and natural language query interfaces. “In agile environments, time is of the essence, and a comprehensive, self-service test data management system is critical,” says Roman Golod, CTO and co-founder of Accelario. “Devops teams need to be able to automatically generate virtual databases from production to nonproduction environments.”

Increasing test capabilities, frequency, and the size of test data sets may require devops teams to review the architecture and capacity of their devops and testing infrastructure. Sunil Senan, SVP and global head of data, analytics, and AI at Infosys, adds, “Application teams should consider the migration of devsecops pipelines to hyperscalers with AI-driven test automation capabilities such as synthetic data generation, test script generation, and test anomaly detection to enhance ML operations.”
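One common building block for the self-service test data management Golod describes is synthetic data generation: producing realistic but fake records so nonproduction environments never hold real customer data. Here is a minimal sketch using the Python Faker library; the customers table schema and the row count are illustrative assumptions.

```python
# A minimal sketch of synthetic test data generation with Faker
# (pip install faker). The schema of the hypothetical "customers"
# table and the row count are illustrative assumptions.
import csv
import random

from faker import Faker

fake = Faker()
Faker.seed(42)  # deterministic data so test runs are reproducible
random.seed(42)


def synthetic_customers(count: int = 1000) -> list[dict]:
    """Generate fake-but-realistic customer rows for nonproduction use."""
    return [
        {
            "id": i,
            "name": fake.name(),
            "email": fake.unique.email(),
            "signup_date": fake.date_between(start_date="-3y").isoformat(),
            "plan": random.choice(["free", "pro", "enterprise"]),
        }
        for i in range(1, count + 1)
    ]


if __name__ == "__main__":
    with open("customers_test_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["id", "name", "email", "signup_date", "plan"]
        )
        writer.writeheader()
        writer.writerows(synthetic_customers())
```

Seeding the generators keeps the data set stable across runs, which matters when tests assert on specific rows. Masking or subsetting real production data is the usual alternative when referential integrity across many tables must be preserved.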
Conclusion

In sum, QA can increase the scope and depth of testing by increasing test automation, scaling continuous testing, using generative AI test generation capabilities, and centralizing large test data sets.

“Leading edge app development teams will adopt AI-driven exploratory testing and continuous regression testing,” says Esko Hannula, SVP of product management at Copado. “Testing will shift from reactive to proactive, with AI identifying edge cases and bugs before a feature is even built. This level of robotic continuous testing should not only accelerate development but drive app quality to a level we’ve been unable to achieve with basic test automation.”

Coty Rosenblath, CTO at Katalon, adds, “We are seeing more elaborate tests to validate production, where they might have had only relatively simple synthetics in the past. Teams are building dynamic test suites that can focus specifically on areas of change and risk and avoid delaying releases waiting for full regression suites.” A simple sketch of such change-focused test selection follows below.

Generative AI capabilities used in coding and software development should be the final wake-up call for devops and QA leaders to invest in continuous testing, centralize test data, improve test coverage, and increase test frequency. Look for testing platforms to add generative AI capabilities to meet these objectives.
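As a closing illustration of the change-focused suites Rosenblath describes, here is a minimal sketch that maps files changed in git to the test modules that cover them, so a release runs targeted tests first rather than waiting on a full regression pass. The module-to-test mapping, repository layout, and release tag name are all hypothetical.

```python
# A minimal sketch of change-focused test selection: run only the tests
# mapped to files changed since the last release tag. The mapping and the
# "last-release" tag are hypothetical; real tools often derive the mapping
# from code coverage data rather than a hand-maintained table.
import subprocess
import sys

# Hypothetical mapping from source paths to the test files that cover them.
CHANGE_MAP = {
    "app/search/": ["tests/test_search.py"],
    "app/billing/": ["tests/test_billing.py", "tests/test_invoices.py"],
    "app/auth/": ["tests/test_auth.py"],
}


def changed_files(base_ref: str = "last-release") -> list[str]:
    """List files changed between a base ref and the working tree."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def select_tests(files: list[str]) -> list[str]:
    """Collect the test files mapped to any changed source path."""
    selected = set()
    for path in files:
        for prefix, tests in CHANGE_MAP.items():
            if path.startswith(prefix):
                selected.update(tests)
    return sorted(selected)


if __name__ == "__main__":
    tests = select_tests(changed_files())
    if not tests:
        print("No mapped changes; falling back to the full suite.")
        sys.exit(subprocess.call(["pytest"]))
    sys.exit(subprocess.call(["pytest", *tests]))
```

As suites grow, test-impact analysis features in CI platforms can derive this mapping automatically from coverage data instead of a hand-maintained table.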