Helen Orgis on AI and Minimum Viable Partnerships

    In the realm of artificial intelligence (AI) development, the quest for innovation often overshadows the need for responsible and ethical deployment. A pragmatic approach called Minimum Viable Partnerships (MVPs) is emerging as a way to advance AI applications responsibly while addressing the associated risks and ethical challenges. This article defines MVPs, offers guidance on how companies should approach partnerships for AI projects, highlights ethical issues and risks associated with AI, and emphasizes the crucial role of partnerships in mitigating these risks.

    Understanding Minimum Viable Partnerships 

    When we hear the term MVP, we might think of Minimum Viable Products, which involve developing a product with minimal features to gather customer feedback and iterate quickly. Minimum Viable Partnerships (MVPs) extend this concept to collaborations between different ecosystem players. Rather than embarking on comprehensive, long-term partnerships from the start, MVPs focus on forming alliances with minimal resources and commitments to enable rapid collaboration and validate mutual value. MVPs are collaborative relationships between entities with complementary skills, resources, and expertise, fostering innovation and responsible AI development. The terms Minimum Viable Collaboration (MVC) and Minimum Viable Ecosystem (MVE) are sometimes used interchangeably to describe this approach [2] [3].

    Characteristics of Minimum Viable Partnerships

    In today's dynamic environment, organizations must test, evaluate, and adapt rapidly. MVPs prove particularly valuable when entering new markets, exploring unfamiliar areas of expertise, or forging new partnerships, where potential impacts and risks are not yet fully understood.

    Clear Goals & Responsibilities: MVPs are built on a shared vision and understanding of the market. Aiming for the same goals ensures that the collaboration focuses on essential aspects during the pilot phase. By defining clear roles and responsibilities early on, conflicts are minimized, and decision-making, accountability, and governance structures are streamlined [1] [3].

    Limited Resources & Collaboration: In MVPs, partners invest limited resources, such as finances or premises, to implement the collaboration quickly and assess the potential advantages and risks of a partnership. Establishing mechanisms for sharing best practices and lessons learned enhances collective learning and accelerates AI innovation [1]. The resources invested in the pilot give both parties a clear picture of what expanding the partnership, for example to a global scale, would entail.

    Rapid Implementation: MVPs begin small, such as with limited projects or joint ideation sessions, enabling partners to quickly grasp the benefits and potential challenges of collaboration without heavy paperwork [1].

    Feedback & Evaluation: Successful partnerships thrive on open communication. It is therefore important to continuously exchange, document, and evaluate feedback. This enables fact-based decision-making and the early identification and resolution of ethical concerns, technical challenges, and emerging risks, ensuring the sustainability and effectiveness of the partnership [1] [3].

    Alignment of Values and Ethics: MVPs foster alignment in ethical principles, values, and long-term goals, ensuring a shared commitment to developing AI applications that prioritize societal well-being and responsible practices.

    By using an MVP approach, organizations and institutions can quickly validate whether a collaboration can succeed and then expand or terminate it as needed.

    Ethical Issues and Risks Associated with AI

    While AI holds remarkable potential to transform various aspects of society, it is essential for organizations to address the ethical issues and risks associated with its development and deployment at an early stage. By acknowledging biases, ensuring data privacy, promoting transparency, considering societal impacts, and establishing clear accountability frameworks, they can navigate the path of AI development responsibly and ethically [5] [11].

    A selection of potential risks [6] [7]:

    • Biases and discrimination in algorithms
    • Privacy and data-protection violations
    • Lack of transparency and explainability (opaque algorithms)
    • The spread of deep fakes and manipulation
    • Liability and accountability issues
    • Security concerns and a lack of regulation (at national and global levels)
    • Potential job displacement
    • Potential threat to human dignity
    • Lack of ethical decision-making and morality (including in robots)

    Left unaddressed, these risks could undermine public trust, hinder AI adoption, and lead to adverse consequences for individuals and society.

    The Importance of Partnerships in Mitigating Those Risks

    In the fast-paced world of AI, no single institution can tackle complex challenges alone without risking ethical, moral, or legal problems. Partnerships play a pivotal role in ensuring trustworthy AI applications by facilitating knowledge sharing, expertise pooling, and resource collaboration [8–10]. Minimum Viable AI Partnerships (MVAPs) bring together diverse stakeholders to drive responsible and ethical development and deployment of AI technologies before an AI application is launched or scaled globally.

    MVAPs create mutually beneficial environments for joint learning, cross-sector dialogue, sharing best practices and lessons from previous AI incidents, identifying biases and safety concerns, and rapid prototyping and experimentation. They enable proactive measures to navigate the ethical complexities of AI and promote collective intelligence. When considering stakeholders for a trustworthy AI solution, it is crucial to include a diverse range of actors to ensure comprehensive perspectives and expertise [10].

    Ecosystem Players to Consider for MVAPs

    The goal is to find the right teammates within an ecosystem and jointly build a solid foundation for ethical AI that can be tested and optimized quickly before the solution is further developed and brought to market [12]. Involving all of the players below would take too long, as an MVAP is primarily about speed and insight. However, the more perspectives that are combined in a product or application, the more likely it is to meet high ethical and security standards.

    Industry Leaders and Technology Companies: Collaborating with corporations and industry leaders not only offers access to technical expertise, AI tools, and other infrastructure capabilities but also helps promote responsible practices and encourages transparency. These partners could play a significant role in third-party auditing [5].

    Academic & Research Institutions: Academic institutions contribute vital research, theoretical knowledge, and scientific rigor to AI projects in areas such as machine learning, data analysis, and ethics. Collaboration with academia helps foster interdisciplinary research and may ensure a more comprehensive understanding of AI's impact.

    Domain Experts: Domain experts possess deep domain-specific knowledge and understand the practical implications and challenges within a particular industry or sector. Including experts from various domains such as healthcare, law, finance, or education ensures that AI solutions align with the specific requirements, regulations, and ethical considerations of those fields.

    End Users: End users, such as individuals or communities, provide valuable feedback and insights on the real-world impact and usability of AI applications. Create room for them to openly share feedback on whether your AI solutions meet their needs, respect their rights, and consider their concerns. Make sure to include end users from different communities and social groups to reduce biases.

    Policymakers and Regulators: Government representatives and policymakers are essential stakeholders. They shape policies, guidelines, and regulations to govern AI development, deployment, and use. Their involvement ensures ethical boundaries and alignment with societal values.

    Civil Society Organizations: NGOs and advocacy groups play a critical role in advocating for transparency, fairness, and accountability in AI development, representing societal interests and raising awareness, including on behalf of marginalized social groups.

    Ethical Advocates & Legal Experts: Ethicists and legal experts contribute to the development of ethical frameworks and guidelines, ensuring alignment with fundamental values, human rights, and legal requirements.

    By engaging diverse MVAPs, organizations can foster public trust, reduce biases, and mitigate risks associated with AI applications, paving the way for responsible, robust, and trustworthy AI systems.

    Embracing Partnerships for Responsible AI Practices

    In the ever-evolving AI landscape, partnerships become paramount for driving AI innovation, ensuring ethical development, and addressing societal challenges. This article highlights the importance of Minimum Viable Partnerships (MVPs) as a quick, easy-to-implement, and valuable tool for advancing AI applications. By forging collaborations and embracing an open innovation mindset, organizations can leverage combined resources, expertise, and diverse perspectives to navigate the complexities of AI. The article concludes that a diverse set of ecosystem players working together can shape a future where AI applications genuinely benefit individuals and society as a whole.


    [1] Weisenberger, J. (2019). Creating Minimum Viable Partnerships. Available online: https://www.linkedin.com/pulse/creating-minimum-viable-partnerships-john-weisenberger

    [2] Boyd, S. (2019). Minimum Viable Ecosystem. Available online: https://medium.com/work-futures/minimum-viable-ecosystem-53ae03d43cbf

    [3] Lewrick, M. (2018). A minimum viable ecosystem for collaborative success. Available online: https://inform.tmforum.org/features-and-opinion/a-minimum-viable-ecosystem-for-collaborative-success

    [4] OECD (2019). OECD AI Principles overview. Available online: https://oecd.ai/en/ai-principles

    [5] Brundage, M. et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. Available online: https://arxiv.org/pdf/2004.07213.pdf

    [6] Bossmann, J. (2016). Top 9 ethical issues in artificial intelligence. Available online: https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

    [7] Gordon, C. (2022). 2023 Will Be The Year Of AI Ethics Legislation Acceleration. Available online: https://www.forbes.com/sites/cindygordon/2022/12/28/2023-will-be-the-year-of-ai-ethics-legislation-acceleration/

    [8] Ebadi, B. (2018). Collaboration Is Necessary for Ethical Artificial Intelligence. Available online: https://www.cigionline.org/articles/collaboration-necessary-ethical-artificial-intelligence/

    [9] Boukherouaa, E. et al. (2021). Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance. Available online: https://www.elibrary.imf.org/view/journals/087/2021/024/article-A001-en.xml

    [10] Manyika, J. (2019). Tackling bias in artificial intelligence (and in humans). Available online: https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

    [11] Mittelstadt, B. D. et al. (2016). The ethics of algorithms: Mapping the debate. Available online: https://www.researchgate.net/publication/309322060_The_Ethics_of_Algorithms_Mapping_the_Debate

    [12] Murray, F. & Budden, P. (2022). Strategically Engaging With Innovation Ecosystems. Available online: https://sloanreview.mit.edu/article/strategically-engaging-with-innovation-ecosystems/