In his book “AI Driven,” Wolf Ruzicka expands the traditional concept of an MVP into three new forms that help build effective, trusted AI deployments through agile development. The AI factory process retains the traditional minimum viable product (MVP-1), which focuses on the one or few key features that demonstrate the product’s value to a user. MVP-1 saves time and money by avoiding early-stage scope creep and the development of noncritical features. Users get a chance to experience the product while the concrete is still wet, so to speak, and developers have the opportunity to adjust development to match user needs.
The factory introduces two new MVPs that help users and developers co-create products. The minimum viable prediction (MVP-2) and the minimum viable process (MVP-3) validate the AI’s ability to deliver business value, not simply a functional product. MVP-2 tests whether the AI inference or prediction can deliver the intended business value. MVP-3 identifies the most efficient workflow from data science (source) to business production (outcome). Combining the prediction and process MVPs improves the probability of success through pre-release testing of the user product (MVP-1), the prediction (MVP-2), and the target business process (MVP-3).
The fourth MVP is the AI factory’s minimum viable AI (MVA), the development overlay onto the enterprise’s agile development process. The MVA allows the different contributing organizations (e.g., product management, data engineering) to synchronize development of AI functions and offerings in harmony with existing agile and even continuous lifecycle processes.
MVPs at Work
The operational heat map outlined in the factory ideation phase shows where the four MVPs intersect in the development process — it’s essentially the play that is run to create an AI product. The first step is to identify the core mission of the business that the AI is expected to support. For example, if you are a healthcare provider, your core is to provide health services to your clients. Within the core is a system, or a limited number of systems, that fulfills that mission — for example, the electronic medical records management system and its data storage. This is the initial data feed to the AI product, and it’s where the AI model(s) would likely be deployed.
From this core, the minimum viable process (MVP-3) defines a boundary where the organization interacts with third parties — customers, vendors, suppliers, etc. — and creates an optimized path to business processes at the core and edge. If there is no direct connection, the AI solution can be configured to provide that bridge.
The minimum viable prediction (MVP-2) is responsible for delivering the algorithm and inference value to the edge users and systems, making them more efficient. The job of the minimum viable product (MVP-1) is to deliver the optimal user experience — notebooks, dashboards, co-pilots, etc. — to translate the AI’s value into something actionable. The role of the minimum viable AI (MVA) is to define how the AI factory development process for this product interconnects efficiently and effectively with the organization’s agile development (e.g., merging with an existing platform development effort).
Do Try This at Home, but Handle With Care
Not all AI is created equal or interchangeable, so part of good AI factory management is matching the type of AI tools and approaches to the core business or mission problem. The factory is designed to provide a highly synchronized process that allows the organization to identify the right AI for the job, and making that selection requires experience. It may be helpful to remember that, with enough abstraction, any use case can be served by one of three branches of AI:
- AI based on structured data
- AI based on unstructured data
- Generative AI
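As a rough illustration (not from the book, and with hypothetical function and label names), the three-branch triage above could be sketched as a simple decision helper:

```python
def pick_ai_branch(has_structured_data: bool, needs_generation: bool) -> str:
    """Toy triage: map a use case to one of the three AI branches.

    This is a deliberate oversimplification for illustration; real
    selection depends on data quality, volume, risk tolerance and
    the business process being supported.
    """
    if needs_generation:
        # Use cases that must produce new text, images or code
        return "generative AI"
    if has_structured_data:
        # Tabular/relational data suits classical predictive models
        return "AI based on structured data"
    # Free text, images, audio, etc.
    return "AI based on unstructured data"


# Example: predicting no-shows from an appointments table
print(pick_ai_branch(has_structured_data=True, needs_generation=False))
```

In practice, experienced practitioners weigh far more than two booleans, but even a crude decision rule like this can anchor the early MVP-2 conversation about which branch of AI fits the problem.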
Having experienced personnel or consultants to guide the fundamental AI approach for any use case makes AI factory delivery even more successful and predictable.
MVPs in Public Sector
The expanded MVP concept is ideal for public-sector organizations because it surfaces risks and value long before a project or product is complete. A department or agency program could benefit immediately from the MVPs by including checkpoints in any AI development and delivery effort that let users and program managers see under the hood of the AI process and make any needed corrections.
Inspection of the MVPs will also reveal whether the process is properly defined, whether the optimal AI model or tool has been chosen, and whether the solution has the proper data feeds to be effective. Ultimately, leveraging these MVPs can help transform how the public sector harnesses technology for better governance and public service delivery.
Read Part 1 of this series here. Read Part 2 here.
Winston Chang serves as Snowflake’s Global Public Sector CTO. He supports global government and education ecosystems for modernizing data practices. He is an expert in organizational transformation derived from data, AI/ML and innovation. His personal mission is to help government and educational institutions leverage data for maximum societal impact.