Nicolas Miailhe co-founded The Future Society (TFS) in 2014 and originally incubated it at the Harvard Kennedy School of Government. Incorporated as a nonprofit, TFS is an independent AI Policy think-and-do tank. Since its creation, TFS has pioneered institutional innovation work to help design and deploy Artificial Intelligence governance frameworks, principles and mechanisms. A recognized strategist, thought-leader, and implementer, Nicolas has lectured around the world, and advises governments, international organizations, philanthropies, and multinationals. He is an appointed expert to the Global Partnership on AI (GPAI), where he co-chairs the Committee on Climate Action & Biodiversity Preservation. He is also an invited expert to the OECD’s AI Group of experts (ONE.AI) and to UNESCO’s High Level Expert Group on AI Ethics, among others. Nicolas has taught AI Governance at the Paris School of International Affairs (Sciences Po), and at the IE School of Global & Public Affairs in Madrid. He is also a Future World Fellow with the Center for the Governance of Change at IE Business School in Madrid. An Arthur Sachs Scholar, Nicolas holds a Master in Public Administration from Harvard Kennedy School, a Master in Defense, Geostrategy & Industrial Dynamics from Panthéon-Assas University, and a Bachelor of Arts in European Affairs and International Relations from Sciences Po Strasbourg.
Interview with Nicolas Miailhe, founder of The Future Society: AI, Governance, and Philanthropy
Katherine Macdonald (KM): Could you introduce yourself and tell us what led you to found The Future Society?
Nicolas Miailhe (NM): I spent 15 years in the high-tech industry working on industrial cooperation and technology transfers. This experience gave me an acute understanding of AI, the pathway towards more powerful AI, and the challenges of rendering AI safe and responsible.
I was preoccupied with the advances in AI and the lack of governance frameworks, practices, and instruments around them, particularly internationally. I decided to move on from my job in the industry to attend the Harvard Kennedy School. There, I gained the knowledge and network to assemble a team and co-found The Future Society (TFS), a small nonprofit independent from the industry, which, since 2014, has worked on setting up the vision, principles, frameworks, and now the instruments of AI governance.
It’s been mainly about moving from principles to practice. There is a lag between the rate at which AI is evolving and the magnitude of institutional innovation required to operationalize the governance of AI and put it into practice. Imagine: it took only 45 days for ChatGPT to go from zero to 100 million users. Add onto that how difficult bureaucracies are to reform, how hard institutions are to build and, especially, how challenging cultural change is to push for. Since this was a unique challenge, and one that needed to be addressed at a transnational, if not global, level, I started an organization dedicated to that purpose.
KM: Are there certain definitions in AI that are more contested, or over which there is less consensus?
NM: AI, as a concept, is and remains extremely contested. It emerged in the 1950s as a socio-technical imaginary used to mobilize funds to advance a techno-scientific agenda which was never quite clear. Fast forward 70 years, and we remain essentially in a similar position: still grappling with this definition problem. The context is different in the sense that we now have a mature technology, which was not the case in the past, when AI remained a dream. Algorithms powered by data and compute now create enormous value in our economies. They exhibit increasing cognitive and agentic abilities in both the virtual and the physical worlds. Now there is the prospect of an artificial intelligence revolution, but we still lack an agreed-upon definition.
In fact, the definition itself remains an artifact of governance, hotly contested everywhere we see it, including within the OECD, the G20 principles, and the EU AI Act, a legally binding regulatory regime being developed in Europe. The way people have gone about solving the problem is to gather experts within these governance forums, break the concept down into smaller bits, and forge common understandings across sectors around which practitioners, investors, and developers can agree. But it remains a problem.
KM: Where are we right now regarding AI governance?
NM: We are now moving from principles to practice in the governance of AI. There are two main challenges. One is a capacity-building problem. Governments and regulatory agencies don’t understand these technologies very well. But let’s be clear: the designers of GPT-4 themselves do not understand how GPT-4 operates. They have intuitions and correlations, but they cannot precisely interpret and explain how the system operates; it remains a black box. If this is the case for the people who design these systems, imagine those who are supposed to control, administer, and regulate them. This is why we, at TFS, look at the governance of AI as the governance of the entire value chain: not only post-deployment, but how these things are researched, developed, and deployed.
As an example, look at how regulators have continued to struggle to regulate digital data and cybersecurity, which have been around for 20 years. How many hospitals have been hacked in Europe, and probably in Canada? What is shameful about the cybersecurity situation is that we knew, more or less, what to do.
With AI, you must audit, evaluate, and certify algorithms, their data sets, their documentation, and the competence frameworks of the people developing and deploying them. If we have not been able to do that for cybersecurity, where we knew, more or less, what was happening, it’s going to be tremendously more difficult to do it for AI.
KM: You mentioned the exponential speed at which AI is evolving; what challenges does this pose?
NM: This is the second main challenge: the current speed and magnitude of AI innovation versus institutional innovation. As we are trying to figure out the capacity required to regulate and administer the digital space, build self-, soft, and hard regulatory mechanisms, and achieve a fluid cultural deployment of the resulting good practices across the emerging AI industrial value chain, the technology keeps moving at lightning speed. The problem is thus compounded: the target is moving. We are injecting bigger, more powerful, but less interpretable AI black boxes into the core pillars of our digital economy: search engines, social media, and now jobs themselves. We need to engage in anticipatory regulation and institutional innovation, which is extremely difficult to do and takes time.
So, operationalizing AI governance is about defining the right practices and the right cocktails of self-, soft, and hard (backstop) regulatory mechanisms, and enforcing them so as to shape the behaviors of those who develop, deploy, and invest in AI technologies towards responsibility, safety, and robustness. Doing that, and readapting it for constantly evolving technologies, is a huge challenge.
KM: How is philanthropy getting involved in the governance and development of AI?
NM: Philanthropy, too, is interested in the hype around AI, its potential to do good, and its risks. But philanthropists also have the responsibility not to push the hype, and instead to mobilize and foster responsible innovation, including through governance.
The magnitude of institutional innovation required to operationalize the governance of AI and address the two challenges I mentioned cannot be achieved by bureaucracies alone. They usually don’t innovate well. Startups, nonprofits, and corporations are better suited to innovate around governance issues, proposing, for instance, regulatory sandboxes and certification mechanisms. But if we leave it only to corporations, which are too often driven by short-term profit and return on investment, are we sure we are going to align with the public interest? Not necessarily, especially given the current competitive pressure to recklessly race ahead. We have seen it with the first two waves of digitalization since the turn of the millennium.
The regulatory waves that we saw after the World Wars and in the 1970s were pushed in conjunction with a general pushback against monopolization by the private sector in the electricity, oil, telecom, and digital industries. A similar exercise must be done here. We need philanthropy to step in and provide impetus, and a connective tissue of innovation, to confront the challenges of digital and AI governance. Bureaucracies cannot do it alone, and the private sector cannot be trusted to do it alone.
KM: We’ve discussed the establishment of the governance structure, what about the enforcement of AI regulations?
NM: Let’s use the data privacy example again. We have not yet managed to appropriately regulate data and privacy. The General Data Protection Regulation (GDPR) in the EU came into force in May 2018. It’s been five years, and we’ve still not been able to enforce it appropriately. Again, there’s a capacity issue. How many investigators are there in data protection agencies to investigate not only Google and Facebook, but the hundreds of SMEs and local authorities? And there has been a lot of forum shopping undermining enforcement.
Also, data protection authorities not only need to investigate forbidden practices and abuses, but also to respond in a way that provides inspiration and capacity building, especially to smaller and less well-equipped actors for whom the cost of compliance can be really high. Otherwise, you de facto hinder digital innovation at the base of the economic pyramid.
If this is the case for data regulation, which has been enforced since 2018, imagine what it is going to be like for AI governance. Philanthropy has a moral obligation and a huge impact opportunity to step in along the entire value chain, through institutional innovation, education, and capacity building for the private, public, and NGO sectors. Vilas Dhar, the president of the Patrick J. McGovern Foundation, embodies that vision and has shown exceptional leadership on this, seeding the entire field.
KM: What has philanthropy been funding around AI?
NM: The public sector cannot do it alone. Philanthropy needs to bridge the gap between bureaucracy and private interests. There are opportunities, as I said, along the lines of regulatory sandboxes and algorithmic auditing for instance, where I think philanthropy can step in and help show the way towards responsible innovation for the public good.
KM: Would you be able to define what you mean by regulatory sandboxes?
NM: Good question, as it can mean many things and there is no agreed-upon definition. The concept initially emerged in the fintech industry. Basically, it’s a mechanism through which digital products are tested before their large-scale deployment on the market, within a privileged relationship with the regulator: companies get some form of pre-market access and testing, which helps ensure consumer safety.
It’s also a way for the regulator to learn about the technology and its risks, and thus grow its understanding and competence. It carries several challenges: it may push companies to disclose privileged information or expose their competitive position beyond what they would want, and it can be used as a form of non-tariff trade barrier, de facto preventing entry into a regulated market. So these regulatory sandboxes must be well calibrated to avoid being captured by industry or, conversely, turned into bureaucratic monsters.
KM: Do you see any potential issues with philanthropy’s involvement in the development or governance of AI?
NM: Of course, philanthropic money does not come without strings attached; let’s be clear about that. Philanthropy has its own heavy baggage; it’s no panacea. Philanthropists are not elected. If they are going to step into the public realm, they don’t necessarily have the legitimacy to fill the gap. It must be done humbly and cautiously, given the mounting civic distrust.
Never forget that OpenAI itself claims to be governed, ultimately, under a nonprofit regime. That claim is increasingly contested, in part due to a lack of transparency and active civic engagement over what exactly the parameters of its governance are, and how and why they have evolved quite radically over the past seven years.
This in and of itself embodies a lot of the debate that does and should surround the governance of AI. Oftentimes, philanthropy is the result of a debatable accumulation of wealth driven by market dynamics where “the winners take all”. We must celebrate tech billionaires’ vision and readiness to redistribute wealth to maximize social impact. But we also need to be able to question the dynamics at the core of digital capitalism that have enabled such a baffling accumulation of wealth and power in the first place, because they have toxic effects too. This is one of the great macro challenges that philanthropy must confront if it wants to step into the governance of AI with legitimacy, and thus maximum impact, to bridge the gap between the private and the public sectors. It must be self-reflexive and step into the realm cautiously.
KM: Do you see any situations in which philanthropy has a particular role to play?
NM: Philanthropy can play a role in ensuring equitable access. Here’s an example. The European Centre for Algorithmic Transparency is a much-needed capacity-building effort led by the European Commission to create the internal competence and tooling needed to enforce existing and forthcoming legislation, including the GDPR, the Digital Services Act, and the incoming EU AI Act. All three include, in one way or another, algorithmic auditing and evaluation requirements, and to execute them you need an abundance of public servants and experts backed by evidence-based policymaking. This is good, because it is a very significant capacity-building effort. However, whose interest will it serve? Not necessarily the small villages in rural Italy or Greece. While we need such a “center” to ensure regulatory enforcement, it is far from sufficient if we are to balance regulation and innovation to the benefit of the many.
Philanthropy could build upon that effort to ensure that it does not translate into a centralization of power away from the periphery. We also need to empower the legislative and judiciary branches, beyond executive branches like the European Commission. What about judges, members of parliament, and local councils? If we want to stay true to self-governance, we need to look at these questions and empower actors at the periphery. This is another area where philanthropy can get involved: making sure that AI governance is accessible and effectively distributed in a more equitable manner.
There is both an opportunity and a challenge, as this wave of AI, in terms of governance, also mandates immense efforts around education and dealing with inequalities. Philanthropy has a moral obligation and an immense impact opportunity to get involved on these fronts.
KM: Is there anything else you would like to add regarding philanthropy’s role in AI?
NM: With private philanthropy, funds often come with fewer and/or smarter strings attached, allowing for heavy risk-taking, which is not so easy for the private or public sectors. It works best when philanthropy is self-reflexive and leverages its unique position between private and public interests. For example, entities like the Omidyar Network, which combines impact investing, venture capital, and philanthropy, make for very interesting actors.
We need hybrid strategies. This was the intent behind OpenAI. It has backfired in so many ways, in my opinion, for want of transparency and accountability. But we should not throw the baby out with the bathwater. We will need more of these hybrid architectures to solve the immense institutional innovation and capacity-building challenges which lie ahead. There will never be a silver bullet, nor will these architectures replace our democratic pillars. But hybrid strategies led by philanthropic actors have a real role to play at the design, pilot, delivery, and scale-up stages.