Design thinking is critical for developing data-driven business tools that surpass end-user expectations. Here's how to apply the five stages of design thinking in your data science projects.

What is the role of data scientists in your organization? Are they report generators, database query jockeys, machine learning model developers, or generative AI experimenters? Are they citizen data scientists and data analysts tasked with developing data visualizations, evaluating new data sets, or improving data quality for business departments?

Organizations looking to become more data-driven often start with a services mindset, where employees with data skills are tasked with developing reports, dashboards, machine learning models, and other analytics deliverables. Some will also have data integration, stewardship, and governance responsibilities, including analyzing new data sources, improving data quality, or enhancing data catalogs.

Digital trailblazers seeking to advance their organization's data-driven practices will go beyond the data service delivery model and seek to develop and support data and analytics as products. Instead of building many one-off data tools based on people's requests, these trailblazers see the benefits of defining and developing actionable data products and enhancing them based on end-user needs, strategic goals, and targeted business outcomes.

One way to transform from a service to a product mindset and delivery model is by instituting design thinking practices. These practices start by understanding end-users' needs and take an iterative, test-driven approach to validating assumptions and improving user experiences. Leaders can incorporate design thinking into agile and scrum, and it's a foundational practice for developing world-class customer experiences. Design thinking's five stages (empathize, define, ideate, prototype, and test) are similar to some aspects of data science methodologies.
However, design thinking and other highly human-centric approaches go further. This article looks at how to use design thinking to design experiences that support multiple departments in using data products for decision-making. For simplicity, we'll consider a data science team preparing to build a new product that will help the organization understand customer profitability.

The five stages of design thinking:

1. Empathize with end-users
2. Define the vision behind any data product
3. Ideate to identify non-functional requirements
4. Iterate to improve experiences and capture end-user feedback
5. Test to see where analytics drives business impacts

1. Empathize with end-users

Even a straightforward category like customer profitability brings a wide range of stakeholder needs, questions, and opportunities to use data for actionable results.

"Understanding the diverse needs of users' business processes and tailoring the layout to prioritize key relevant, personalized insights is critical to success," says Daniel Fallmann, founder and CEO of Mindbreeze.

Finance, marketing, customer service, product development, and other departments likely have different questions, opportunities, and pain points when it's hard to ascertain a customer's or segment's profitability. For example, marketing may want to shift campaign strategies toward more profitable customer segments, while customer service may offer incentives and upsells to more profitable customers.

One key way for data scientists to empathize with end-users is to observe how people currently use data and make decisions. For example, a customer service rep may have to look at several systems to understand customer size and profitability, losing precious minutes responding to customers and likely making mistakes when developing insights on the fly. A marketer may be looking at outdated information when optimizing campaigns, resulting in missed opportunities and higher advertising expenses.
Fallmann suggests, "Data scientists must start with a user-centric approach when building dashboards offering 360-degree views of information."

In our example, understanding the different stakeholder segments and the business impacts of how things are done today is a key first step.

2. Define the vision behind any data product

Observing end-users and recognizing different stakeholder needs is a learning process. Data scientists may feel the urge to dive right into problem-solving and prototyping, but design thinking principles require a problem-definition stage before jumping into any hands-on work.

"Design thinking was created to better solutions that address human needs in balance with business opportunities and technological capabilities," says Matthew Holloway, global head of design at SnapLogic.

To develop "better solutions," data science teams must collaborate with stakeholders to define a vision statement outlining their objectives, review the questions they want analytics tools to answer, and capture how to make answers actionable. Defining and documenting this vision up front is a way to share workflow observations with stakeholders and capture quantifiable goals, which supports closed-loop learning. Equally important is agreeing on priorities, especially when stakeholder groups have common objectives but seek to optimize department-specific business workflows.

In our example, let's say the customer service vision statement focuses on answering questions about a single customer and benchmarking their profitability against other customers in their segment. Marketing has a different vision, seeking a top-down view of profitability trends in leading customer segments to optimize its campaigns. The organization in this case chooses to prioritize the bottom-up customer service vision, which lets it see where access to better intelligence improves customer satisfaction and increases revenue.
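To make that customer service vision concrete, here is a minimal sketch of benchmarking one customer's profitability against their segment. Everything here is illustrative: the field names, the sample figures, and the assumption that profitability is simply revenue minus cost to serve are all hypothetical, not definitions from the article.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Customer:
    customer_id: str
    segment: str
    revenue: float
    cost_to_serve: float

    @property
    def profit(self) -> float:
        # Simplifying assumption: profitability = revenue - cost to serve
        return self.revenue - self.cost_to_serve

def benchmark_against_segment(customer: Customer, customers: list[Customer]) -> dict:
    """Compare one customer's profit to the average profit of their segment."""
    peer_profits = [c.profit for c in customers if c.segment == customer.segment]
    segment_avg = mean(peer_profits)
    return {
        "customer_id": customer.customer_id,
        "profit": customer.profit,
        "segment_avg_profit": segment_avg,
        "vs_segment": customer.profit - segment_avg,
    }

# Hypothetical sample data
customers = [
    Customer("C1", "enterprise", 120_000, 40_000),
    Customer("C2", "enterprise", 90_000, 50_000),
    Customer("C3", "smb", 20_000, 8_000),
]
print(benchmark_against_segment(customers[0], customers))
```

A real data product would pull these figures from governed data sources rather than in-memory records, but even a toy version like this makes the vision statement testable with stakeholders.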
3. Ideate to identify non-functional requirements

Design thinking institutes an ideate stage, which is an opportunity for agile data science teams working on solutions to discuss and debate approaches and their tradeoffs. The questions data science teams should consider during the ideate phase cover technology, compliance, and other non-functional requirements. Here are some examples:

Are there common stakeholder and end-user needs where the team can optimize solutions, and where are persona- or department-specific goals more important to consider?

Does the organization have the required data sets, or will new ones be needed to improve the product offering? What data quality issues need to be addressed as part of the solution?

What are the underlying data models and database architectures? Is there technical debt that needs addressing, or is an improved data architecture required to meet scalability, performance, or other operational requirements?

What data security, privacy, and other compliance factors must the team consider when developing solutions?

The goal is to understand the big picture of what the data product may require, then break the big boulder down into sprint-sized chunks so the team optimizes work across the entire solution's architecture.

4. Iterate to improve experiences and capture end-user feedback

When working with data, a picture may be worth a thousand words, but an actionable dashboard is worth much more. An agile data science team should implement back-end improvements in the data architecture, improve data quality, and evaluate data sets every sprint, but the goal should be to present a working tool to end-users as early as possible. Agile data science teams need early feedback, even if the capabilities and data improvements are works in progress.
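The per-sprint data-quality work described above can start small. Here is a minimal sketch of the kind of automated checks a team might run each sprint; the rules (missing fields, duplicate IDs, negative revenue) and the flat-dictionary row format are assumptions for illustration, not requirements from the article.

```python
def check_data_quality(rows: list[dict], required_fields: tuple[str, ...]) -> dict:
    """Count rows that fail a few illustrative data-quality rules."""
    issues = {"missing_fields": 0, "duplicate_ids": 0, "negative_revenue": 0}
    seen_ids = set()
    for row in rows:
        # Rule 1: every required field must be present and non-null
        if any(row.get(field) is None for field in required_fields):
            issues["missing_fields"] += 1
        # Rule 2: customer IDs should be unique
        cid = row.get("customer_id")
        if cid in seen_ids:
            issues["duplicate_ids"] += 1
        seen_ids.add(cid)
        # Rule 3: flag suspicious values such as negative revenue
        if (row.get("revenue") or 0) < 0:
            issues["negative_revenue"] += 1
    return issues

# Hypothetical sample rows with seeded problems
rows = [
    {"customer_id": "C1", "revenue": 1200.0},
    {"customer_id": "C1", "revenue": 900.0},   # duplicate ID
    {"customer_id": "C2", "revenue": None},    # missing revenue
    {"customer_id": "C3", "revenue": -50.0},   # suspicious value
]
print(check_data_quality(rows, ("customer_id", "revenue")))
```

Running checks like these in each sprint, and reporting the counts alongside the dashboard itself, gives end-users visibility into which data improvements are still works in progress.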
"The most effective dashboards see the highest level of usage rather than simply being the most visually appealing," says Krishnan Venkata, chief client officer of LatentView Analytics. "When creating dashboards, it's essential to adopt an iterative approach, continuously engaging with end-users, gathering their feedback, and making improvements. This iterative process is crucial for developing a dashboard that offers valuable insights, facilitates action, and has a meaningful impact."

Steven Devoe, director of data and analytics at SPR, adds, "When building a dashboard, data scientists should focus on the high-value questions they are trying to answer or problems they are trying to solve for their audience. People go to dashboards seeking information, and as data scientists, you must construct your dashboards logically to give them that information."

Other steps for smarter data visualizations include establishing design standards, leveraging visual elements to aid in storytelling, and improving data quality iteratively. But it's most important to reconnect with end-users and ensure the tools help answer questions and connect to actionable workflows.

"Too often, I see data scientists trying to build dashboards to answer all possible questions, and their dashboards become convoluted and lose a sense of direction," says Devoe.

In our example, trying to fulfill customer service and marketing needs in one dashboard will likely introduce design and functional complexities and ultimately deliver an analytics tool that is hard to use.

5. Test to see where analytics drives business impacts

While agile teams should iteratively improve data, models, and visualizations, a key objective should be to release data products and new versions into production frequently. Once in production, data science teams, end-users, and stakeholders should test and capture how the analytics drive business impacts and where improvements are needed.
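If usage is the signal of an effective dashboard, one simple way to capture it is to log view events and summarize adoption per dashboard. This is a minimal sketch under stated assumptions: the event shape, dashboard names, and user IDs are all hypothetical, and a production system would read from real telemetry rather than an in-memory list.

```python
from collections import Counter
from datetime import date

# Hypothetical dashboard-view events
events = [
    {"user": "csr_ana", "dashboard": "customer-profitability", "day": date(2024, 5, 6)},
    {"user": "csr_ana", "dashboard": "customer-profitability", "day": date(2024, 5, 7)},
    {"user": "csr_ben", "dashboard": "customer-profitability", "day": date(2024, 5, 7)},
    {"user": "mkt_cai", "dashboard": "campaign-trends", "day": date(2024, 5, 7)},
]

def adoption_summary(events: list[dict]) -> dict:
    """Summarize total views and unique users per dashboard, a rough
    proxy for the usage signal described above."""
    views = Counter(e["dashboard"] for e in events)
    return {
        dashboard: {
            "views": views[dashboard],
            "unique_users": len({e["user"] for e in events
                                 if e["dashboard"] == dashboard}),
        }
        for dashboard in views
    }

print(adoption_summary(events))
```

Tracking numbers like these across releases helps teams see whether a new version actually changed behavior, which is the point of testing in production rather than declaring the project done.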
Like most digital and technology products, a data product is not a one-and-done project. Iterations help improve experiences, but testing, including pilots, betas, and other release strategies, validates where further investments are needed to deliver on the targeted vision.

Becoming a data-driven organization is a critical goal for many companies, and there's a significant transformation opportunity for companies that use design thinking to improve data products iteratively.