<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Techplomacy Magazine]]></title><description><![CDATA[Techplomacy Magazine: Conversations that matter | A monthly publication featuring global voices at the intersection of tech/AI, diplomacy, governance, and national security.]]></description><link>https://magazine.techplomacyfoundation.org</link><image><url>https://substackcdn.com/image/fetch/$s_!aud5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Febec4bff-3c4e-42fc-aebe-de9944efd729_500x500.png</url><title>Techplomacy Magazine</title><link>https://magazine.techplomacyfoundation.org</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 08:23:12 GMT</lastBuildDate><atom:link href="https://magazine.techplomacyfoundation.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Techplomacy Foundation]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[techplomacyfoundation@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[techplomacyfoundation@substack.com]]></itunes:email><itunes:name><![CDATA[Olin Thakur]]></itunes:name></itunes:owner><itunes:author><![CDATA[Olin Thakur]]></itunes:author><googleplay:owner><![CDATA[techplomacyfoundation@substack.com]]></googleplay:owner><googleplay:email><![CDATA[techplomacyfoundation@substack.com]]></googleplay:email><googleplay:author><![CDATA[Olin Thakur]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Who Governs AI When the World Is Divided]]></title><description><![CDATA[From Bletchley to India, AI summits are maturing. 
But US-China rivalry, ungoverned agentic systems, and structural coordination failures threaten to fracture the architecture before it sets.]]></description><link>https://magazine.techplomacyfoundation.org/p/who-governs-ai-when-the-world-is</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/who-governs-ai-when-the-world-is</guid><pubDate>Fri, 13 Mar 2026 14:08:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CZ9L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CZ9L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CZ9L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CZ9L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!CZ9L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!CZ9L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CZ9L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5415148,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techplomacyfoundation.substack.com/i/190817332?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CZ9L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!CZ9L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!CZ9L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!CZ9L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65cd699c-2ec7-44f5-978a-10113dc8bf3e_6000x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Cyrus Hodes.
Credit: AI Safety Connect</em></figcaption></figure></div><p>By <strong>Cyrus Hodes</strong>, <em>Founder, AI Safety Connect</em></p><p>AI governance is at an inflection point. The summits are multiplying, the declarations are accumulating, and the institutions are slowly taking shape. But the harder question is whether any of it is actually building toward something durable, or whether geopolitical rivalry is quietly hollowing it out before the architecture can set.</p><p>I have spent the past several years working across these processes, from OECD expert groups to the Global Partnership on AI to the India AI Impact Summit in February 2026. What follows is an honest assessment of where global AI governance stands: what is working, where the coordination is genuinely failing, and what the window for action looks like from where I sit.</p><p><strong>The summit that changed the frame</strong></p><p>The India AI Impact Summit in February 2026 marked a structural shift in AI governance. For the first time, a major global AI summit was hosted in the Global South, and it reframed the conversation in ways that previous summits had not.
Where Bletchley in 2023 focused on frontier risk, Seoul in 2024 expanded the conversation to include a broader set of stakeholders, and Paris in 2025 emphasized action-oriented commitments, India centered its agenda on three foundational pillars: People, Planet, and Progress. Development outcomes, public service delivery, and inclusive growth were placed at the core of the governance discussion rather than treated as secondary concerns.</p><p>This matters because AI governance has, until now, been shaped overwhelmingly by the countries building frontier systems. The assumptions embedded in those frameworks reflect specific institutional contexts, regulatory traditions, and risk profiles. India&#8217;s approach, articulated through its November 2025 AI Governance Guidelines, represents a distinctive path: prioritizing innovation through existing legal frameworks rather than standalone AI legislation, and positioning safety as an enabler of trust and scale rather than a constraint on development. The summit drew on eight regional AI conferences held across Indian states between October 2025 and January 2026, feeding region-specific use cases and policy gaps directly into the national agenda. That kind of bottom-up input is rare in international governance processes, and it produced a richer, more grounded conversation as a result.</p><p>AI Safety Connect participated in the summit precisely because this shift matters for the broader governance trajectory.</p><blockquote><p>&#8220;A governance architecture shaped by only a handful of nations will struggle to achieve legitimacy or effectiveness across the diverse contexts where AI is actually being deployed.&#8221;</p></blockquote><p><strong>From inclusion to agenda-setting</strong></p><p>The question is now less about whether Global South countries are included and more about whether they have genuine agenda-setting power. India demonstrated what that looks like in practice.
The AI Compendium, released during the summit, documented real-world AI applications across priority sectors: health, energy, education, agriculture, gender empowerment, accessibility. These are practical case studies from countries where infrastructure varies widely, languages are diverse, and populations are vast.</p><p>The Strategic Foresight Group&#8217;s report &#8220;India&#8217;s AI Gambit: Navigating the Global Race,&#8221; launched in January 2026, made the national security case explicitly: if India depends on foreign suppliers for critical AI tools and safety testing, it faces leverage vulnerabilities in moments of crisis. Investing in domestic AI safety research, shared testbeds, and international coalitions gives India bargaining power. This is a sovereign strategic argument, not a request for inclusion.</p><p>India&#8217;s linguistic and cultural diversity, combined with its scale, also positions it to advance multilingual and multimodal AI systems in ways that monolingual frameworks cannot. When AI is being deployed for healthcare access in rural communities, financial inclusion for underbanked populations, or agricultural decision-support across varied climatic regions, the governance frameworks need to reflect those realities.</p><p>The implication for equity is straightforward: governance frameworks designed exclusively in Washington, Brussels, or Beijing will be incomplete. They will miss failure modes, deployment challenges, and safety requirements that only emerge in different institutional and social contexts. This is why the India summit&#8217;s emphasis on AI for public service delivery was not a soft addition to the governance conversation but rather a correction.</p><p><strong>What middle powers can actually do</strong></p><p>Middle powers lack hard power in frontier AI. They are not building the largest models, they do not control the compute supply chain, and they do not host the dominant frontier labs. 
But this framing mistakes the nature of the leverage available to them.</p><p>Middle powers have two underused instruments. The first is collective market and regulatory leverage. If multiple large and mid-sized economies align on baseline safety expectations for AI systems deployed in their markets, that shapes incentives for frontier developers regardless of where those developers are based. Procurement standards, market access conditions, and safety certification requirements become powerful tools when coordinated across a bloc of countries representing billions of users.</p><p>The second is coalition diplomacy.</p><blockquote><p>&#8220;The risk we should be most concerned about is not that middle powers fail to build their own frontier systems. It is that geopolitical competition between the US and China fractures the governance landscape into incompatible blocs, leaving everyone else to pick sides rather than shape outcomes.&#8221;</p></blockquote><p>Middle powers can resist that dynamic by building coordinated positions that exert pressure on both superpowers to maintain shared safety baselines. This is not a passive role. It requires institutional capacity, technical literacy, and sustained diplomatic investment.</p><p>At the India summit, Dutch Prime Minister Dick Schoof made this argument in a special address at an AI Safety Connect event. His framing was direct: middle powers represent the largest part of the world economy and the strongest democratic traditions, and their strength lies in building coalitions. The Netherlands provides the lithography systems, Japan provides the precision technology, Germany supplies the mirrors, Taiwan supplies the production facilities, Korea makes the memory chips, and India offers the talent base. The AI governance architecture is being established now.
The window for middle powers to shape it will not remain open indefinitely.</p><p><strong>The fragmentation trap</strong></p><p>US&#8211;China competition creates a gravitational pull toward fragmentation. Each summit risks becoming a venue where great-power positioning displaces substantive technical and governance work. The practical consequence is that important coordination mechanisms, including shared evaluation methodologies, incident reporting standards, and verification protocols, get stalled because they become entangled in broader strategic rivalries.</p><p>But the same competitive dynamic that threatens fragmentation also creates the strategic opening for other actors. When bilateral channels between Washington and Beijing are constrained, multilateral and Track 1.5 forums become essential infrastructure.</p><blockquote><p>&#8220;Neutral convening spaces that maintain credibility across geopolitical blocs can facilitate conversations that official diplomatic channels cannot. This is not a substitute for great-power agreement. It is the connective tissue that keeps coordination alive when official processes are gridlocked.&#8221;</p></blockquote><p>The open-source dimension adds further complexity. Chinese frontier labs, including Moonshot AI and DeepSeek, have released powerful open-weights models that rival proprietary systems. This accelerates diffusion of advanced capabilities beyond the control of any single jurisdiction, making coordination on safety standards more urgent while simultaneously making enforcement through traditional export controls harder.</p><p>The risk now is less about AI governance summits becoming useless under great-power rivalry and more about them being captured by it, reduced to declarations of principle that obscure the absence of practical coordination.
The antidote is building governance mechanisms that function regardless of which geopolitical configuration prevails: shared evaluation tools, interoperable safety standards, and trusted channels for incident reporting that work across blocs rather than within them.</p><p><strong>Where coordination is actually failing</strong></p><p>Consider what happened with Grok&#8217;s image generation capabilities at the turn of 2025&#8211;2026. When users discovered the system could generate non-consensual intimate imagery, the response was fragmented. Different countries reacted according to their own timelines and legal frameworks. Partial fixes eventually appeared, but journalists who tested them found they did not work reliably. The entire episode took weeks to produce even inadequate results.</p><p>This was an easy case, with near-universal agreement that non-consensual intimate imagery is harmful and no legitimate use case to defend. Still, no coordinated response mechanism existed.</p><p>The coordination failures are structural, not incidental. Competitive pressures push labs to ship faster, even when internal teams advocate for caution. There is no credible way to verify claims about model capabilities, training procedures, or safety measures across labs, making mutual trust difficult to establish. Frontier labs also lack standardized mechanisms for reporting serious incidents, so problems remain siloed until public scandals force disclosure. And regulatory frameworks differ enough across jurisdictions that companies can exploit gaps.</p><p>These failures matter because the cases ahead will be harder. Existing models can already assist non-experts in designing biological agents. AI tools are being repurposed as semi-autonomous cyber attackers. These are documented capabilities, not speculative scenarios. 
</p><blockquote><p>&#8220;If we cannot coordinate effectively on cases where everyone agrees, the prospects for managing genuinely contested or ambiguous situations are poor. The infrastructure for collective response needs to be built before the next crisis, not improvised during it.&#8221;</p></blockquote><p><strong>The governance gap nobody is talking about</strong></p><p>The governance conversation is largely focused on the right problems: bias, misuse, concentration of power, and the safety of frontier systems. But it is happening at the wrong layer. Current frameworks assume that humans initiate actions and AI systems execute them. That assumption is becoming outdated.</p><p>The rapid proliferation of agentic AI, systems that act autonomously, chain tasks together, and operate across platforms without step-by-step human direction, is outpacing governance frameworks designed for a simpler paradigm. When an AI agent autonomously receives information, researches it, produces content, and distributes it across messaging platforms without human prompting at each stage, the accountability chain becomes unclear. Who is responsible when the output is harmful? The developer of the agent framework? The user who deployed it? The platform that hosted it?</p><p>The Moltbook episode in early 2026 illustrated this vividly. A platform marketed as a social network for AI agents attracted enormous attention and over a million claimed accounts. Security researchers subsequently discovered that the platform had no meaningful verification of whether accounts were AI or human, exposed databases containing passwords and API keys, and widespread malware. The dramatic headlines about &#8220;agents forming religions&#8221; obscured the more substantive lesson: when agent infrastructure is deployed without governance, oversight, or verification, the result is a security disaster.
The spectacle was largely performative, but the vulnerabilities were real.</p><p>As open-source agent frameworks become more powerful and accessible, the barrier to deploying autonomous AI systems is dropping rapidly. The governance conversation needs to catch up with where the technology actually is, not where it was two years ago.</p><p><strong>Not a grand institution, but not nothing</strong></p><p>The most realistic trajectory is neither pure institutionalization nor pure fragmentation. What is emerging is something more layered and pragmatic: official summit processes setting high-level political direction, complemented by a growing ecosystem of Track 1.5 platforms, expert networks, and technical coordination bodies that build practical pathways between summits.</p><p>The progression from Bletchley to Seoul to Paris to India shows increasing sophistication, not stagnation. Each summit added something. Bletchley established the political salience of frontier AI risk. Seoul expanded the stakeholder base and deepened the technical agenda. Paris marked a shift toward action commitments, while still being part of the International AI Safety Report process. India brought Global South perspectives and development-oriented governance frameworks into the core conversation, even if the focus shifted from safety to responsible AI more broadly. This is iterative institution-building, even if it does not yet resemble a single formal body.</p><p>The more realistic trajectory is convergence through practice rather than through a grand institutional design. AI Safety Institutes are already aligning on evaluation methodologies across jurisdictions. The OECD Expert Groups on AI Futures, Compute, and Trustworthy Investments are building shared technical vocabulary. The Global Partnership on AI is developing interoperable safety frameworks. 
These practical coordination mechanisms accumulate over time into something that functions like institutional infrastructure, even without a formal charter.</p><p>The critical variable is governance quality. A &#8220;global AI safety commons&#8221; that provides shared access to evaluation tools, red-teaming methodologies, and monitoring systems could accelerate convergence. But if it is poorly designed or weakly governed, it becomes symbolic rather than functional. The quality of implementation determines whether emerging coordination mechanisms mature into durable institutions or remain aspirational declarations.</p><p><strong>How soft norms become hard rules</strong></p><p>Soft norms are hardening faster than is widely appreciated. The International AI Safety Report, led by Yoshua Bengio with input from 72 international experts and backed by more than 30 countries, established something new in AI governance: a commitment to ongoing, evidence-based assessment that keeps pace with technological change. The report&#8217;s Key Update, published in October 2025, documented significant capability advances driven by new training techniques rather than simply scaling model size, and identified new challenges for monitoring and controllability. This continuous assessment model is itself a norm, one that creates expectations of transparency and shared evidence that did not exist before.</p><p>India&#8217;s AI Governance Guidelines represent another form of norm crystallization. By articulating a governance framework that uses existing legal instruments rather than standalone AI legislation, India created a replicable model for other countries, particularly in the Global South, that need governance pathways compatible with their institutional capacity. 
The guidelines&#8217; emphasis on balancing innovation with sovereignty concerns offers a template that is already informing discussions in other jurisdictions.</p><p>The pattern is becoming visible: voluntary commitment leads to shared assessment methodology, which leads to convergent standards, which creates market expectations that function like regulation even without formal legal force. When enough jurisdictions align on evaluation requirements, safety benchmarks, or transparency expectations, frontier developers face compliance obligations regardless of where binding legislation stands. This is how soft governance becomes hard governance in practice: through alignment and mutual expectation rather than through a single treaty.</p><p><strong>Safety is not the obstacle</strong></p><p>The assumption that safety, innovation, and ethics are competing priorities requiring balance is itself part of the problem. Safety enables trust, and trust enables adoption at scale. Without credible safety infrastructure, AI deployment stalls or, worse, proceeds without accountability until something goes catastrophically wrong.</p><p>The practical challenge is not philosophical but infrastructural. High-level principles on responsible AI exist in abundance. The gap is in operationalization: how do you translate principles into evaluation tools, verification mechanisms, and monitoring systems that work across diverse regulatory environments, institutional capacities, and development contexts?</p><p>This is where the concept of an AI Safety Commons becomes practically important. If safety tools, including evaluation methodologies, red-teaming frameworks, and risk assessment protocols, are treated as shared infrastructure rather than proprietary advantages, they can travel across borders and jurisdictions faster. Countries that lack the resources to build independent safety testing from scratch can access and adapt shared tools.
The analogy is public health infrastructure: common standards for testing and certification, accessible to all, enabling local implementation that reflects local conditions.</p><p>The politically feasible path runs through demonstrating that safety capacity creates competitive advantage rather than constraining it. For countries like India, investing in domestic AI safety research and shared testbeds is both a sovereignty measure and an economic strategy. A nation that can credibly certify AI systems as safe for its population has leverage over foreign developers seeking market access.</p><p><strong>The cost of waiting</strong></p><p>Without coordination, governance becomes reactive rather than preventive: we wait for crises like the Grok incident, scramble to respond, and then move on until the next scandal. The same failure modes recur because there is no mechanism for sharing information across jurisdictions or sectors.</p><p>Fragmented governance also creates a compliance situation that burdens responsible actors without improving safety. When standards differ across jurisdictions, companies face duplicative regulatory requirements that consume resources without producing commensurate protection. Smaller companies and developing nations are disproportionately affected because they lack the capacity to navigate multiple overlapping frameworks. The irony is that fragmentation imposes costs without delivering the coordination benefits that governance is supposed to provide.</p><p>The deeper risk is temporal. AI capabilities are advancing faster than governance frameworks can adapt when those frameworks are developed independently and without coordination. The gap between what AI systems can do and what governance structures can oversee widens with each generation of models. In a fragmented landscape, this gap becomes a permanent condition rather than a temporary challenge.</p><p>The parallel with climate governance is instructive. 
Wealthy nations industrialized, emitted, and built their economies while the Global South bore the worst consequences of the resulting damage. AI governance risks repeating this pattern, but faster. The nations building and profiting from frontier AI systems are not the ones most exposed to the harms of ungoverned deployment.</p><blockquote><p>&#8220;The window for building coordination infrastructure that includes all affected parties is narrower than most policymakers assume. Coordinated, imperfect governance that improves iteratively remains far better than fragmented responses that leave the most vulnerable communities absorbing harms they had no role in creating.&#8221;</p></blockquote><div><hr></div><p><em>This article is part of Techplomacy Magazine&#8217;s AI Diplomacy at a Crossroads special series, 2026.</em></p><p><strong>Cyrus Hodes</strong> is Founder of AI Safety Connect, a multi-stakeholder convening platform advancing international coordination on AI safety. He is a Venture Partner at Lionheart Ventures and Co-Founder of The Future Society and Stability AI. Cyrus serves on OECD Expert Groups on AI Futures, Compute, and Trustworthy Investments, and co-led the GPAI Safety and Assurance of Generative AI (SAFE) project. He holds a Master in Public Administration from the Harvard Kennedy School and previously advised the UAE Minister of State for AI.</p><p><em>The views expressed in this article are those of the author and do not necessarily reflect the views of Techplomacy Magazine or the Techplomacy Foundation.
Articles may be republished in full, without alteration, with credit to Techplomacy Magazine (<a href="http://techplomacyfoundation.org">techplomacyfoundation.org</a>).</em></p><div><hr></div><p><strong>In Case You Missed It</strong></p><ul><li><p><a href="https://techplomacyfoundation.substack.com/p/ai-and-the-global-classroom-governance">AI and the global classroom: governance, judgment, and the future of learning</a>, Prof. Alexander Sidorkin, California State University</p></li><li><p><a href="https://oecd.ai/en/wonk/ai-safety-solutions-risk-mapping">AI safety solutions mapping: An initiative for advanced AI governance</a>, Cyrus Hodes, AI Safety Connect.</p></li><li><p><a href="https://oecd.ai/en/wonk/dlt-blockchain-ai-technologies">Exploring DLT and blockchain: A comparative analysis... 
and how they integrate with AI technologies</a>, Cyrus Hodes &amp; Benjamin Yablon</p></li><li><p><a href="https://oecd.ai/en/wonk/web3-ai-blockchain-governance">A look at AI and Blockchain governance and an interview with CZ</a>, Cyrus Hodes, Benjamin Yablon, <em>Interview with Changpeng Zhao on OECD AI Principles.</em></p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI and the global classroom: governance, judgment, and the future of learning]]></title><description><![CDATA[As AI reshapes cognition, pedagogy, and societal structures, global leaders must confront the ethical, strategic, and diplomatic implications of human-machine collaboration in education and beyond]]></description><link>https://magazine.techplomacyfoundation.org/p/ai-and-the-global-classroom-governance</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/ai-and-the-global-classroom-governance</guid><pubDate>Fri, 12 Dec 2025 08:03:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kfr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kfr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kfr-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kfr-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kfr-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kfr-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kfr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg" width="1280" height="1148" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1148,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kfr-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kfr-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kfr-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kfr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4cafad55-94b8-419b-b21b-df4a1e0c89c9_1280x1148.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Photo: Prof. Alexander M. &#8220;Sasha&#8221; Sidorkin, California State University, Sacramento</em></p><h4><strong>Editor&#8217;s Note</strong></h4><p>As artificial intelligence transforms classrooms across the globe, its impact reaches far beyond teaching methods. In this feature, Alexander Sidorkin, Professor of Graduate and Professional Studies in Education and former Chief AI Officer and Director at California State University Sacramento, explores the profound ways AI is reshaping human cognition, pedagogy, and societal structures. Drawing on his experience in education and AI policy, Sidorkin examines the ethical, strategic, and diplomatic dimensions of human-machine collaboration in learning, highlighting both the risks and opportunities that emerge.</p><p>From national competency to cultural relevance, his insights illuminate how AI challenges traditional notions of expertise, equity, assessment, and governance. 
This conversation is part of <a href="https://techplomacyfoundation.substack.com/">Techplomacy Magazine</a>&#8217;s special series, <em><a href="https://www.techplomacyfoundation.org/write-for-us">The Cognitive Frontier</a></em>, which investigates how AI is redefining human potential, critical thinking, and the future of education.</p><p>Sidorkin&#8217;s perspectives offer policymakers, educators, and industry leaders a nuanced guide for navigating a rapidly evolving educational landscape where technology and human judgment intersect.</p><p><em>Olin Thakur, Editor-in-Chief, Techplomacy Magazine</em></p><div><hr></div><h4><strong>Redefining expertise in the AI era</strong></h4><p>Expertise in an AI age cannot remain centered on procedural mastery that machines now execute flawlessly. We need to shift toward what I call Extended Executive Cognition: the ability to orchestrate cognitive resources across human and artificial agents. This involves developing strategic judgment about task decomposition, knowing what to delegate to AI and what requires human insight, and maintaining metacognitive awareness about when machine outputs need human correction.</p><p>For countries with limited technological infrastructure, this creates both challenge and opportunity. 
The challenge is obvious: access gaps threaten to widen global inequities. The opportunity is more subtle. Nations can leapfrog traditional educational sequences that consumed decades in developed countries.</p><blockquote><p>Just as many African nations bypassed landline telephone infrastructure and moved directly to mobile networks, educational systems can skip the long march through procedural automaticity and move directly to teaching strategic thinking with AI assistance. Supplementing the limited supply of highly qualified teachers with AI-powered tutor bots can also boost educational attainment in developing countries.</p></blockquote><p>This requires reconceptualizing what schools deliver. Instead of spending years drilling multiplication tables or grammar rules that AI handles instantly, education should focus on developing discerning thinking: the ability to evaluate AI outputs for accuracy, relevance, and contextual appropriateness. Students need to recognize eloquent emptiness: the fluent but substantively hollow content AI produces so convincingly. They need a theory of alien mind to understand how AI systems process information in ways fundamentally different from human cognition.</p><p>The infrastructure question becomes not whether countries have universal high-speed internet, but whether students have sufficient access to practice human-AI collaboration under guided conditions. Even intermittent AI access, when paired with strong pedagogy, can develop the executive cognition that matters most for future work.</p><h4><strong>Extended executive cognition as a national competency</strong></h4><p>Extended Executive Cognition is more than an individual skill. It is a fundamental capability for navigating an AI-integrated world, and nations that develop it systematically will possess significant advantages. But framing it as a national competency requires careful thought.</p><p>The capability itself involves several interconnected dimensions. 
Strategic allocation: breaking complex projects into components and deciding what humans versus machines should handle. Cognitive load distribution: preventing human overload while maintaining efficiency across the extended human-AI system. Input specification design: crafting prompts and providing context that aligns AI processing with human goals. Discerning thinking: evaluating output quality and adding genuine value through human insight. These skills form the master capacity for orchestrating distributed cognitive systems.</p><p>Policy should prioritize this development through educational transformation rather than narrow vocational training. The goal is not producing prompt engineers but cultivating workers and citizens who can think strategically with technological partners.</p><blockquote><p>Curricular reform must overcome what I call the curriculum curse, those years spent practicing prerequisite skills whose purpose remains opaque. AI enables engaging students with meaningful complexity before they master all foundations, using AI to bridge knowledge gaps just-in-time rather than requiring exhaustive prerequisites.</p></blockquote><p>Diverse educational systems will develop extended executive cognition differently, reflecting local contexts and values. Some nations may emphasize collaborative approaches where students work in teams with AI, others may focus on individual mastery. What matters is ensuring all students develop sufficient executive cognition for meaningful participation in AI-integrated work and civic life. The inequality danger lies not in varied approaches but in populations receiving no systematic development of these capabilities while others do. 
Compute is expensive, and countries that invest heavily in expanding their compute capacity should plan to share that capacity with the rest of the world.</p><h4><strong>Equity and access in AI-assisted learning</strong></h4><p>The equity challenge in AI-enhanced education operates on multiple levels. The most visible is technological access: reliable internet, capable devices, and quality AI tools. Yet focusing only on infrastructure misses deeper concerns about how AI integration could entrench or disrupt existing inequalities.</p><blockquote><p>First, we must acknowledge what I call the pedagogy of grace rather than merit-based thinking. Traditional education pretends students start from equal positions and rewards those who demonstrate the most effort. This fiction obscures massive differences in access to resources, preparation, and support. AI makes these inequities more visible. The student with extensive AI experience, sophisticated digital literacy, and home support leverages these tools far more effectively than the one encountering them only in under-resourced classrooms.</p></blockquote><p>The disability dimension reveals the accommodation paradox most starkly. AI functions like eyeglasses or hearing aids, making functional barriers disappear rather than curing underlying conditions. A student with dyslexia can interrogate text through speech while AI converts spoken thoughts into polished prose. Students with ADHD receive organizational support without human judgment or impatience. A specific diagnosis does not matter. Gatekeeping to receive special support becomes indefensible when AI transforms accommodation from a scarce resource requiring rationing to abundant support that costs nothing to scale.</p><p>The student who struggles with writing mechanics because of a documented disability and the student who struggles because they attended under-resourced schools both need the same assistance. Why privilege one over the other? 
Traditional procedural barriers (handwriting requirements, spelling tests, calculation speed) served as exclusionary mechanisms disguised as academic standards. AI exposes this historical function by making these barriers obsolete, shifting competition from mechanical facility to actual thinking. When support becomes abundant rather than scarce, diagnostic gatekeeping loses its justification.</p><p>Governance frameworks must address several dimensions. Infrastructure equity requires ensuring baseline access, but not necessarily identical technology everywhere. Strategic deployment might provide robust AI access in educational institutions even when home connectivity remains limited. The key is structured time for students to develop executive cognition under guidance.</p><p>Pedagogical equity matters more than technological equity. Teachers need frameworks for teaching with AI that work across diverse contexts. This means practical guidance on designing learning activities that remain meaningful when AI can perform traditional academic tasks. It means assessment approaches that capture genuine learning rather than just polished outputs. It means understanding how to teach Extended Executive Cognition to students with varying backgrounds.</p><p>Content and cultural equity requires ensuring AI systems provide meaningful feedback in multilingual contexts without imposing Western pedagogical frameworks. Current AI models were trained predominantly on English content reflecting specific cultural assumptions. Governance must support development of locally relevant AI educational tools and approaches that honor diverse epistemologies.</p><p>The deepest equity challenge concerns what gets valued. If education continues prioritizing procedural tasks AI performs effortlessly, those with access gain automatic advantages. 
But if it shifts toward authentic complexity, contextual problem solving, and collaborative intelligence, then AI can level the playing field, allowing students from less privileged backgrounds to produce sophisticated work demonstrating genuine intellectual capability.</p><h4><strong>Human oversight versus automation in assessment</strong></h4><p>The automation question in assessment is one of the most consequential policy choices education faces. AI promises efficiency, consistency and cost savings, but the danger lies in reducing assessment to what machines can measure while missing what matters for learning.</p><blockquote><p>Much current assessment serves what I call educational theater, performances of rigor that satisfy external audiences without genuinely measuring learning. Multiple-choice tests, five-paragraph essays, standardized formats work well for machine scoring because they reduce complex thinking to simple procedures. But education&#8217;s purpose is not producing outputs machines can grade efficiently.</p></blockquote><p>Human judgment remains essential for evaluating capabilities that matter in an AI age. Extended Executive Cognition requires assessing how students orchestrate human and machine resources, make strategic choices about task allocation, and demonstrate metacognitive awareness. Discerning thinking demands assessing how students evaluate AI outputs critically, recognize eloquent emptiness, and add genuine value through human insight. Machines cannot assess their own limitations effectively.</p><p>Different cultures hold different expectations about what education should accomplish and how fairness manifests in assessment. Some emphasize individual achievement, others collective advancement. Some value innovation and risk-taking, others mastery of established knowledge. 
These cultural differences should inform assessment design rather than being erased by standardized algorithmic approaches.</p><p>The balance involves using AI strategically. Let it provide rapid feedback on procedural elements like grammar, citation format, or computational accuracy, freeing human attention for evaluating higher-order capabilities. Let it surface patterns in student work for closer examination, but reserve judgment on intellectual sophistication, creative insight, and ethical reasoning for human evaluators who understand cultural context and educational purpose.</p><blockquote><p>Policymakers should resist the efficiency seduction. Yes, AI can grade essays quickly. But education&#8217;s value lies not in processing student work efficiently but in developing human capabilities that resist automation. Assessment must capture those capabilities through approaches that prioritize meaningful evaluation over mechanical efficiency.</p></blockquote><h4><strong>AI, pedagogy, and societal trust</strong></h4><p>When AI mediates learning and evaluation, it transforms the social contract between institutions and the public, threatening trust but also creating opportunities to rebuild confidence through transparency and authentic assessment.</p><blockquote><p>Trust erosion appears in several ways. Parents and employers question whether credentials reflect genuine human capability or AI use. Students doubt their own competence when machines contribute significantly to their work. Faculty lose confidence in their ability to evaluate learning when traditional evidence becomes unreliable. This crisis deepens when institutions respond through detection and prohibition rather than meaningful integration.</p></blockquote><p>An example of this erosion is a case at Texas A&amp;M, where a professor failed an entire graduating class based on faulty AI detection. Students protested their innocence, but institutional trust in machines exceeded trust in students. 
This incident reveals the brittleness of assessment systems that depend on distinguishing humans from machine work rather than evaluating actual capability.</p><p>The path toward rebuilding trust requires several shifts. First, transparency about AI use rather than prohibition. When institutions acknowledge that AI has become part of intellectual work, they can focus on teaching effective collaboration rather than policing boundaries. Make AI integration explicit in course design, assignment structure, and assessment criteria. This honesty helps students, parents, and employers understand what capabilities are actually being developed.</p><p>Second, shift from product-focused to process-focused assessment. Instead of inferring learning from polished outputs that AI can generate, require documentation of thinking processes. Collaboration logs showing prompt iterations, decision-making rationales, and reflection on strategy effectiveness provide richer evidence of learning than final papers. This metacognitive transparency serves both assessment and pedagogical purposes.</p><p>Third, emphasize authentic complexity that grounds learning in specific contexts where machine outputs require human judgment. When students work with real community organizations, navigate actual ethical dilemmas, or solve problems with unique local constraints, their AI-assisted work demonstrates capabilities machines alone cannot provide. This contextual grounding makes assessment more meaningful and trust more justified.</p><blockquote><p>Institutions gain trust not by pretending AI does not exist, but by developing robust frameworks for teaching, learning, and assessment in an AI-integrated world. 
The societies that navigate this transition successfully will be those that face the disruption honestly rather than defending obsolete practices.</p></blockquote><h4><strong>Cognitive evolution versus cognitive loss</strong></h4><p>Framing AI-related changes in traditional skills as either evolution or loss reveals more about our ideological commitments than about actual cognitive development. Both framings contain truth, and policymakers must resist simplistic narratives in either direction.</p><p>What appears as loss often represents strategic reallocation of cognitive resources. I propose the External Automaticity Hypothesis: that just as internal automaticity (fluent execution of procedures through practice) frees cognitive capacity for higher-order thinking, external automaticity (fluent use of AI for procedural tasks) may achieve similar benefits. When navigation apps handle route planning, drivers do not lose spatial reasoning capacity; they redistribute cognitive effort toward safe vehicle control. Similarly, when AI handles citation formatting or computational procedures, students can focus on argument development or conceptual understanding.</p><p>Yet external automaticity carries risks. Not all cognitive processes can be externalized. Embodied skills, cultural intuitions, and interpersonal capabilities resist technological delegation. Students need sufficient foundational understanding to detect AI errors and inappropriate responses. The boundary between strategic delegation and problematic dependence remains genuinely unclear and likely varies across domains and individuals.</p><p>The adaptation framing helps when it focuses on what new capabilities emerge. Extended Executive Cognition represents a genuinely novel form of thinking that previous generations did not need. Students must learn to orchestrate cognitive resources across human and artificial agents, a metacognitive demand that may exceed traditional executive functions. 
Theory of alien mind for AI requires understanding how these systems process information through mechanisms utterly unlike human cognition, a form of cognitive flexibility that is genuinely new. Eloquent emptiness detection demands resisting processing fluency bias in contexts where machines produce impressive-sounding nonsense.</p><p>Policymakers should avoid both panic about cognitive decline and naive celebration of enhancement. Instead, invest in empirical research on actual cognitive development in AI-integrated contexts. </p><p>We need longitudinal studies tracking whether external automaticity produces lasting benefits or creates dependencies that emerge when technological support is removed. We need domain-specific investigations of which cognitive tasks benefit from internal versus external automaticity. We need developmental research on optimal timing for introducing AI assistance at different ages and competency levels.</p><blockquote><p>Education policy should focus on capabilities that remain valuable regardless of technological change: adaptive problem solving in novel contexts, ethical reasoning in complex situations, creative synthesis of diverse perspectives, and collaborative intelligence that leverages both human and machine strengths. These are true adaptations, not mere reactions to AI disruption.</p></blockquote><h4><strong>AI literacy as diplomatic leverage</strong></h4><p>The notion that AI-literate cognition could function as soft power deserves serious consideration, though not in the narrow technological sense that dominates current discourse. Nations will not gain diplomatic advantage primarily through producing more prompt engineers or AI technicians. 
The leverage comes from developing populations capable of sophisticated Extended Executive Cognition that shapes how societies integrate AI across all domains.</p><blockquote><p>Nations with citizens and institutions skilled in human-AI collaboration can model integration that balances technology and human values. They can export educational frameworks, offering approaches to teaching, learning, and assessment that other countries can adapt. This pedagogical leadership represents real soft power.</p></blockquote><p>Technical alliances increasingly depend on trust in how partners deploy AI systems. Nations with populations that critically evaluate AI outputs become attractive partners and can meaningfully shape AI governance. Global negotiations on standards, accountability, and data sharing benefit from broad AI literacy, not just elite expertise.</p><p>The diplomacy angle also involves cultural dimensions. Different societies approach human-AI collaboration through distinct epistemological frameworks. Some emphasize individual agency, others collective intelligence. Some prioritize efficiency, others context and relationship. These differences should inform AI development and deployment rather than being erased by Western-dominated technical standards. Nations that successfully integrate AI while maintaining cultural identity and values offer models of technological adoption that honor human diversity.</p><h4><strong>Regulatory foresight from educational AI</strong></h4><p>AI in education is a test case for broader governance challenges because education&#8217;s unique vulnerabilities expose risks that other sectors may face eventually. Lessons learned should inform regulation across sectors while respecting context-specific differences.</p><blockquote><p>The erosion of assessment tools in education reveals a broader pattern: AI threatens the core functions of institutions when it can perform the very tasks these institutions rely on to measure capability. 
Education depended on essays, problem sets, and exams to evaluate learning. Once AI could produce these artifacts, the measurement system faltered. </p></blockquote><p>This is not unique to schools. Professional credentialing relies on exams and practical demonstrations that AI may soon handle. Healthcare quality metrics depend on documentation and diagnostic accuracy that AI now assists. Legal practice involves research and drafting that AI can augment. Every domain faces its own version of the calibration problem: how to distinguish human contribution from machine input when both are legitimately present.</p><p>The lesson from detection failures extends beyond education. AI detection tools in schools proved catastrophically unreliable, producing false positives that disproportionately affected non-native speakers while missing actual AI use through simple evasion. Other sectors are experimenting with similar detection systems for AI-generated content, synthetic media, and automated decision-making. Experience in education suggests these approaches will fail in the same way. What works instead is transparency and collaboration between humans and AI, rather than trying to enforce segregation.</p><p>AI also amplifies existing inequalities. Students with more resources, preparation, and support leveraged AI far more effectively than those without. This pattern repeats elsewhere: in healthcare, digitally literate patients with access navigate AI-enhanced medicine more successfully; in legal systems, clients who can afford AI-assisted representation gain advantages; in labor markets, workers skilled at collaborating with AI tools command premium pay.</p><blockquote><p>International coordination faces similar challenges. Education needs AI-assisted credentialing standards that travel across borders while respecting local values. Healthcare requires agreements on AI-assisted diagnosis and treatment that balance innovation with safety. 
Labor markets need policies for AI-enhanced work that prevent a race to the bottom. Without such coordination, nations risk competing by lowering standards, approving questionable AI deployments, or failing to protect vulnerable populations.</p></blockquote><p>Cultural context also matters. Medical AI systems trained mainly on Western populations may make inappropriate recommendations for other genetic or cultural groups. Legal AI reflects the precedents of specific jurisdictions. Agricultural AI assumes conditions that may not hold globally. Regulators must ensure AI development includes diverse perspectives from the outset, rather than treating adaptability as an afterthought.</p><p>Most importantly, education shows that AI integration demands a fundamental rethinking of institutional purpose, not just superficial technological addition. Schools cannot simply add AI tools to existing practices; they must reconsider what learning means, how capabilities develop, and what outcomes truly matter. The same applies to healthcare, which must rethink diagnostics, treatment, and patient relationships, and to legal systems, which must reconsider adversarial procedures, precedent interpretation, and justice itself. Regulation should insist on this foundational rethinking rather than accepting shallow integration that preserves outdated assumptions.</p><h4><strong>Cultural context and AI feedback</strong></h4><p>Providing meaningful feedback across multilingual and multicultural contexts without imposing dominant frameworks is one of AI&#8217;s hardest governance challenges. Current AI systems, trained predominantly on Western educational content and assessment practices, risk becoming tools of epistemological colonization.</p><p>The challenge of AI feedback in education has three dimensions. First, <strong>linguistic complexity</strong> goes far beyond translation. 
Effective educational feedback must account for cultural communication norms, rhetorical styles, and pedagogical expectations. Some cultures favor indirect feedback to preserve face, others prefer direct critique. Some emphasize individual achievement, others collective growth. AI trained on Western directness may feel harsh or inappropriate in contexts that value subtlety.</p><p>Second, <strong>pedagogical diversity</strong> matters. Different societies have distinct learning traditions that deserve respect. Indigenous knowledge systems stress relational learning and community accountability. Confucian traditions value mastery through repetition. Progressive Western approaches prioritize creativity and critical thinking. African philosophies often integrate spiritual and communal dimensions. No single AI model can capture all these epistemologies.</p><p>Third, <strong>content relevance</strong> is critical. AI judging student work by generic standards can undervalue local knowledge, cultural practices, or community-centered perspectives. A history essay highlighting indigenous viewpoints may score lower despite its rigor, and scientific explanations blending traditional and Western knowledge may be flagged as confused even when sophisticated.</p><p>Governance must respond thoughtfully. AI systems should provide transparent documentation of training data and assumptions so educators and students can evaluate cultural biases. Development of locally relevant AI tools is essential, with training data reflecting regional pedagogies and values. AI feedback should be limited to procedural tasks like grammar, citation, and computation, leaving intellectual quality, argumentation, and creativity to human evaluators who understand context. 
Multilingual and multicultural assessment frameworks should recognize diverse forms of excellence, using AI for organization and pattern recognition but never replacing human judgment.</p><blockquote><p>The goal is not to eliminate AI from educational feedback but to deploy it in ways that support local pedagogical wisdom. Achieving this requires ongoing collaboration between AI developers, educators, and communities to ensure AI serves learning, not standardization.</p></blockquote><h4><strong>Preparing for cognitive interdependence</strong></h4><p><em>Education policy faces a fundamental challenge in preparing students for cognitive interdependence, where decision-making spreads across humans and machines, blurring traditional lines of responsibility. This is not a distant future; it is already emerging and demands immediate pedagogical action.</em></p><p>Moving from individual cognition to distributed intelligence is more than adding tools to existing practices. It requires what I call autonomous agency: the ability to direct human-AI collaboration while maintaining meaningful choice and purpose. This is at the heart of what I term Diacognitive Mode: thinking through technological extensions while keeping human judgment central.</p><blockquote><p>Several capabilities are essential. Extended Executive Cognition serves as the master skill, orchestrating cognitive resources across human and artificial agents. It includes task decomposition, cognitive load distribution, input specification design, and dynamic attention allocation. Students must develop an intuitive sense of what to delegate to AI and what requires human oversight, constantly adjusting as both their skills and AI capabilities evolve.</p></blockquote><p>Ethics of answerability becomes critical. Students collaborating with AI must take full responsibility for outcomes, verifying accuracy and ensuring ethical use. Excuses like &#8220;the AI made a mistake&#8221; are not acceptable. 
This standard mirrors professional realities where workers remain accountable for AI-assisted work.</p><p>Understanding AI itself is also necessary. Students need a theory of alien mind: how these systems process information through statistical patterns and probabilistic outputs, not human reasoning. Recognizing these differences helps students collaborate effectively while judging when to trust machine outputs.</p><p>Education policy should integrate these capabilities across the curriculum rather than treating them as optional. Courses from elementary through higher education must explicitly teach human-AI collaboration. Assessment should capture distributed cognitive processes, not just final products. Teachers need professional development to navigate cognitive interdependence.</p><p>Institutions must provide infrastructure for guided AI access, policies that encourage transparency, and frameworks recognizing responsible AI use as legitimate. Ethical reasoning about distributed decision-making is vital. When algorithms influence medical treatments, loan decisions, or criminal risk assessments, humans must exercise judgment, understanding when to accept, modify, or override machine recommendations.</p><blockquote><p>Preparing students for cognitive interdependence means embracing AI&#8217;s transformation rather than resisting it. It means teaching them to think with and through machines while remaining fully human. It requires developing metacognitive sophistication to navigate a cognitive landscape no previous generation faced. Education that achieves this transformation secures its relevance in an AI-integrated world.</p></blockquote><div><hr></div><p><strong>Alexander M. Sidorkin</strong> is Professor of Graduate and Professional Studies in Education at California State University Sacramento. He previously served as Dean of the College of Education and as Chief AI Officer and Director of the National Institute on AI in Society at the same institution. 
He provides consulting services to educational leaders and organizations at every stage of AI adoption, from initial assessment to seamless integration into instruction. He also advises startups developing AI-driven solutions for the education market.</p><p><em>The views expressed in this article are those of the interviewee and do not necessarily reflect the views of Techplomacy Magazine or the Techplomacy Foundation. Articles may be republished in full, without alteration, with credit to Techplomacy Magazine (<a href="http://techplomacyfoundation.org/">techplomacyfoundation.org</a>).</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/p/ai-and-the-global-classroom-governance?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/p/ai-and-the-global-classroom-governance?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3><strong>In Case You Missed It</strong></h3><p><a href="https://techplomacyfoundation.substack.com/p/ai-readiness-and-partnership-priorities">AI Readiness and Partnership Priorities in T&#252;rkiye</a></p><p><a href="https://techplomacyfoundation.substack.com/p/brazils-vision-for-ai-diplomacy-sovereignty">Brazil&#8217;s Vision for AI Diplomacy: Sovereignty, Scale, and Safety in a Multipolar World</a></p><p><a href="https://cdn.openai.com/pdf/7ef17d82-96bf-4dd1-9df2-228f7f377a29/the-state-of-enterprise-ai_2025-report.pdf?utm_source=www.therundown.ai&amp;utm_medium=newsletter&amp;utm_campaign=openai-reveals-who-s-winning-with-ai-at-work&amp;_bhlid=bd9fe2d109503207aa7248c34e452751fc6536b0">The state of enterprise AI</a> report (OpenAI)</p><p><a 
href="https://go.cloudplatformonline.com/dc/MBgYEBdczzKwXu-AuN4fdaVWAUYeH32hOm0e2cpyDHWzYHK8f6NH5LLVzTxyOqZX03HX7UNi6JWn2QvH-jYdcRtQnA2ttTypAbkvsZc8ywAhkvVNrcQ05X-JlrlEbRWJc3_3KCumB2TU93WOBmbk7X8BqvYekO_7wxe0hA2UyIL0iw8on4i636dXTHZwi1mpZXiA6L-REKIQPujPPSQQRaf3ziiuyCiU8INemqJwzMIM9gN3BhkYuipwlrMuf87Z5ZbS3irwOQeMLJEShcwYr9UicnL8Qjyi1QpzFYbHmIhz9KKLSj1Ksg5HVj4xNoyNif8q9o_KVwDclCGLwiTinK4WgrWi9qX6YLdnMkqoqgE=/ODA4LUdKVy0zMTQAAAGeo3pQx6jzE3JP2f7DF05J1TwPQaB-JPyHCU08gyDxfAml42SebaW_hAhFKW7P2f8gNqXHHdA=">State of AI-Assisted Software Development</a> report to dive deep into how AI is impacting technology-driven teams (Google)</p><div><hr></div><p><em><strong>Republishing</strong>: All our articles may be republished in their entirety, without alterations, to prevent misinterpretation or misuse. Please credit via <strong>Techplomacy Magazine (<a href="http://techplomacyfoundation.org/">techplomacyfoundation.org</a>).</strong></em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI Readiness and Partnership Priorities in Türkiye]]></title><description><![CDATA[In conversation with UN Resident Coordinator Babatunde A. 
Ahonsi]]></description><link>https://magazine.techplomacyfoundation.org/p/ai-readiness-and-partnership-priorities</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/ai-readiness-and-partnership-priorities</guid><pubDate>Fri, 26 Sep 2025 11:15:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VXiY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VXiY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VXiY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 424w, https://substackcdn.com/image/fetch/$s_!VXiY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 848w, https://substackcdn.com/image/fetch/$s_!VXiY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 1272w, https://substackcdn.com/image/fetch/$s_!VXiY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!VXiY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png" width="975" height="650" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6382b950-507e-431d-8b67-2a7dbd904711_975x650.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:650,&quot;width&quot;:975,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:775407,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techplomacyfoundation.substack.com/i/174320530?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VXiY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 424w, https://substackcdn.com/image/fetch/$s_!VXiY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 848w, https://substackcdn.com/image/fetch/$s_!VXiY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 1272w, https://substackcdn.com/image/fetch/$s_!VXiY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6382b950-507e-431d-8b67-2a7dbd904711_975x650.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Photo: Babatunde A. Ahonsi</em></p><h3>Editor&#8217;s Note</h3><p>Artificial intelligence is no longer just a technology issue; it is a strategic one. Governments everywhere face pressure to build national AI capabilities while safeguarding sovereignty and human rights. T&#252;rkiye is no exception. With its <strong>National Artificial Intelligence Strategy</strong>, the country has set ambitious goals: raising AI&#8217;s contribution to GDP to 5 percent, creating 50,000 new jobs in the sector, and ranking among the top 20 countries on global AI indexes by 2025. 
Yet success will depend on more than vision. In this issue, Techplomacy Magazine speaks with <strong>Babatunde A. Ahonsi, UN Resident Coordinator in T&#252;rkiye</strong>, about the country&#8217;s progress, its gaps, and how the UN is helping shape a responsible path forward.</p><p><em>Olin Thakur, Editor-in-Chief, Techplomacy Magazine</em></p><div><hr></div><p><strong>T&#252;rkiye&#8217;s National AI Strategy (NAIS) </strong>reflects a desire not just to adopt artificial intelligence, but to embed it across the economy and public services in a way that is globally competitive and nationally accountable. Six strategic priorities define the plan: training experts, supporting research and entrepreneurship, improving data quality and access, creating regulations to accelerate socioeconomic adaptation, strengthening international cooperation, and managing structural and workforce transformation. By 2025, the government aims to deliver 24 objectives and 119 measures that align with its broader &#8220;Digital T&#252;rkiye&#8221; and &#8220;National Technology Move&#8221; initiatives.</p><p>For Babatunde Ahonsi, this vision is inspiring but still tenuous, requiring careful implementation to realize its full potential.<strong> &#8220;T&#252;rkiye&#8217;s strategic vision for AI is clear and ambitious. 
The most significant gap appears to be in the operational capacity for robust and ethical governance,&#8221; </strong>he explained. While principles such as fairness and transparency are enshrined in the national strategy, turning them into daily practice is another matter. &#8220;Particularly for high-risk AI systems, operational safeguards are not yet consistent.&#8221;</p><p>The UN has not yet carried out a full readiness assessment in T&#252;rkiye, but Ahonsi believes one is overdue. He sees strong value in applying <strong>UNESCO and UNDP methodologies</strong> to map progress, identify weak points, and provide a structured baseline for government and industry.</p><p><strong>Looking ahead to the next two years, </strong>Ahonsi highlighted three concrete areas where the UN can help accelerate responsible adoption. The first is developing <strong>Responsible AI in Public Procurement Guidelines</strong>, so that the government&#8217;s purchasing power is used to shape the market in line with ethical standards. The second is creating a <strong>Civil Service AI Literacy Program</strong>, adapted from UNESCO&#8217;s framework, to build human capacity across government. The third is <strong>supporting the design of early interoperable pilots</strong> that can scale within ministries and municipalities.</p><p><strong>T&#252;rkiye&#8217;s decision to align with the EU AI Act is another cornerstone.</strong> For Ahonsi, this should not be seen as a zero-sum choice between Europe and its eastern neighbors. <strong>&#8220;It is an opportunity for T&#252;rkiye to leverage its position to become a regional standard-setter,&#8221; he argued</strong>. To that end, the UN is prepared to provide technical assistance for harmonization with the EU law while also helping the country convene a <strong>Regional AI Governance Dialogue</strong> with partners in the Caucasus and Central Asia. 
<strong>Such a forum, he suggested, would allow T&#252;rkiye to showcase its experience while promoting interoperable, rights-based frameworks in a region where standards are still emerging.</strong></p><p><strong>Data sits at the heart of the strategy.</strong> NAIS calls for secure data sharing through a new <strong>Public Data Space</strong>, expansion of anonymized datasets via the Open Government Data Portal, and the creation of a <strong>National Data Dictionary</strong>. Ahonsi fully supports this direction. &#8220;Effective data governance is the bedrock of a trustworthy AI ecosystem,&#8221; he said. The UN is ready to work with the Digital Transformation Office (DTO) and the Turkish Statistical Institute (TurkStat) on a <strong>national FAIR data framework</strong> to improve openness while protecting privacy.</p><p><strong>On infrastructure, </strong>T&#252;rkiye&#8217;s approach is intentionally hybrid. Sovereign computing resources are being reserved for sensitive tasks, while <strong>university clusters</strong> will take the lead on model adaptation and research. Global cloud providers remain part of the mix, especially for commercial and cross-border projects. Ahonsi views this balanced model as both pragmatic and protective. &#8220;The most effective path is not a binary choice,&#8221; he observed.</p><p><strong>Inclusion is another critical theme</strong>. T&#252;rkiye&#8217;s strategy calls for spreading benefits to Anatolian provinces, expanding digital agriculture, and supporting workforce reskilling. For refugees and minorities, targeted interventions will be key.<strong> &#8220;AI can be a powerful leapfrogging mechanism for service delivery and bridging gaps,&#8221;</strong> Ahonsi noted. Recent examples include the <strong>AI for Good Innovation Factory T&#252;rkiye</strong>, organized with the International Telecommunication Union, and <strong>Digital Technologies for Agriculture</strong>, which showcased smart irrigation, traceability, and e-commerce solutions. 
<strong>For refugees, UNHCR is piloting AI tools to cut delays in status determination by spotting inefficiencies in asylum processing.</strong></p><p>Yet technology brings risks alongside opportunities. <strong>T&#252;rkiye&#8217;s regional position</strong> makes questions of surveillance, misinformation, and dual-use applications particularly pressing. Ahonsi is clear: <strong>&#8220;Mitigating dual-use risks requires building strong democratic firewalls to prevent the repurposing of technology for uses that may erode civil liberties.&#8221; </strong>He advocates for a <strong>legal mandate requiring independent human rights impact audits</strong> for all high-risk AI systems, especially in policing and justice.</p><p><strong>The government&#8217;s National AI strategy</strong> also emphasizes <strong>structural transformation of the workforce</strong>. By 2025, T&#252;rkiye aims to train 10,000 graduate-level AI specialists, expand certification programs for new professions, and support organizational adoption of AI through tools like an <strong>AI Maturity Model</strong> and a <strong>Public AI Platform</strong>. Ahonsi linked these efforts directly to economic goals. &#8220;T&#252;rkiye is seeking not only to innovate but to ensure that its institutions and workforce can adapt at the right pace,&#8221; he said.</p><p><strong>Cultural heritage protection, often overlooked in AI discussions</strong>, is another area where the UN sees potential. With T&#252;rkiye home to sites of global significance, cross-border sensitivities are inevitable. <strong>&#8220;Technology can act as a powerful neutral witness,&#8221; Ahonsi explained. </strong>He pointed to <strong>UNESCO&#8217;s Dive into Heritage initiative, </strong>which<strong> uses digital tools to document and safeguard heritage, as a model for reducing disputes and shifting conversations toward evidence-based preservation.</strong></p><p><strong>International partnerships </strong>will also be decisive. 
Instead of traditional aid, Ahonsi called for <strong>strategic co-investment</strong>. Concessional compute access, time-limited research licenses, and targeted capacity grants could help T&#252;rkiye advance its own sovereign AI capabilities while still benefiting from global expertise. The UN, he emphasized, is ready to play its role as a convener to bring such partnerships together.</p><p><strong>Finally, transparency will determine trust</strong>. T&#252;rkiye&#8217;s strategy already includes regular evaluation and adaptation, but Ahonsi argued for a public-facing layer. &#8220;Public reporting is a powerful catalyst for building trust,&#8221; he said. The UN is prepared to facilitate the co-development of a <strong>national AI Readiness Report</strong>, aligned with frameworks like UNESCO RAM, UNDP AILA, and the Stanford AI Index, to provide regular updates that can be compared internationally.</p><p>By 2025, T&#252;rkiye hopes to meet ambitious benchmarks: boosting AI&#8217;s GDP contribution, expanding employment to 50,000, prioritizing local applications in public procurement, and ranking in the top 20 of global AI indexes. Ahonsi believes these goals are within reach if governance, inclusion, and transparency remain central. <strong>&#8220;The challenge is less about ambition than about building the operational, ethical, and inclusive systems that turn strategy into lived progress,&#8221;</strong> he concluded.</p><div><hr></div><p><em>The views expressed in this article are those of the author(s) or interviewee(s) and do not necessarily reflect the views of Techplomacy Magazine or the Techplomacy Foundation. 
Articles may be republished in full, without alteration, with credit to Techplomacy Magazine (<a href="http://techplomacyfoundation.org/">techplomacyfoundation.org</a>).</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/p/ai-readiness-and-partnership-priorities?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/p/ai-readiness-and-partnership-priorities?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p><em>Techplomacy Magazine is a nonpartisan, nonprofit, independent monthly publication featuring curated interviews and in-depth features with global leaders at the intersection of tech/AI, diplomacy, governance, and national security &#8212; with a special focus on the Global South.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Techplomacy Magazine is a reader-supported publication. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Brazil’s Vision for AI Diplomacy: Sovereignty, Scale, and Safety in a Multipolar World]]></title><description><![CDATA[How Brazil is balancing AI innovation, digital sovereignty, and global cooperation in a multipolar world]]></description><link>https://magazine.techplomacyfoundation.org/p/brazils-vision-for-ai-diplomacy-sovereignty</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/brazils-vision-for-ai-diplomacy-sovereignty</guid><pubDate>Sun, 17 Aug 2025 12:31:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!L8K1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L8K1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L8K1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!L8K1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 848w, https://substackcdn.com/image/fetch/$s_!L8K1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!L8K1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L8K1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg" width="1219" height="853" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:853,&quot;width&quot;:1219,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:108350,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techplomacyfoundation.substack.com/i/171181992?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!L8K1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 424w, https://substackcdn.com/image/fetch/$s_!L8K1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 848w, https://substackcdn.com/image/fetch/$s_!L8K1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!L8K1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe07d3164-f87d-4d60-bf79-bc8986a9a9f5_1219x853.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Photo: Eugenio Garcia</em></p><h4><em>In Conversation with Ambassador Eugenio V. Garcia | Interview by Olin Thakur, Techplomacy Magazine</em></h4><p><strong>Editor&#8217;s Note</strong></p><p>In an era where artificial intelligence is reshaping the global order, Brazil is positioning itself as a bridge between the Global South and the broader international AI governance debate. Ambassador Eugenio Garcia, who heads the Department of Science, Technology, Innovation, and Intellectual Property at Brazil&#8217;s Ministry of Foreign Affairs, has been navigating the intersection of diplomacy and technology, from Silicon Valley to the United Nations General Assembly. In this conversation, he reflects on Brazil&#8217;s vision for AI diplomacy: how to balance sovereignty with cooperation, promote equitable access, and avoid a new &#8220;Cold War mindset&#8221; in emerging technologies. His perspective offers policymakers and industry leaders an unvarnished view of AI governance in a multipolar world.</p><div><hr></div><p>When Ambassador Eugenio Garcia talks about &#8220;digital sovereignty,&#8221; he doesn&#8217;t treat it as an obstacle to global cooperation but as its foundation. &#8220;Nations must have their own voice and be capable of determining clearly what their needs and priorities are,&#8221; he says. Without that voice, international standards risk becoming tools designed by others, locking countries into platforms or economic models that serve outside interests.</p><p>Garcia rejects the idea that sovereignty and binding international standards are in conflict. 
To him, they&#8217;re stages of the same process: self-determined priorities first, then cooperative standard-setting that respects them. This approach has shaped Brazil&#8217;s role in the BRICS Leaders&#8217; <a href="http://www.brics.utoronto.ca/docs/250706-ai.html">Statement on AI governance</a>, which he describes as &#8220;a collaborative governance of AI, not a competitive one.&#8221;</p><p>Ethical diversity is one of the thorniest challenges in AI policy. Concepts like fairness and transparency carry different meanings across cultures and legal systems, yet Garcia sees room for alignment. &#8220;The vocabulary for the global governance of AI is still under construction,&#8221; he notes. &#8220;Even without a single international definition, there is a certain consensus around principles.&#8221; The key, he says, is ensuring every country is heard, because &#8220;the opportunities arising from AI are not the same for all.&#8221;</p><p>For less-resourced nations, Garcia identifies three essentials for a thriving AI ecosystem: human talent, data, and compute infrastructure. Some countries have world-class researchers but lack the computing power; others have infrastructure but struggle to attract talent. Data is plentiful but often inaccessible or poorly structured. He argues for targeted capacity-building and cooperation to close these gaps, enabling countries to &#8220;use AI to further their development according to their own needs and capabilities.&#8221;</p><p>On AI governance models, Garcia is clear: the International Atomic Energy Agency (IAEA) is the wrong template. &#8220;Data is not like uranium or plutonium,&#8221; he says. AI, unlike nuclear material, is a general-purpose technology integral to economic development. Restrictive control would risk shutting out the very nations that stand to benefit most. 
Instead, he advocates for an inclusive, democratic framework that keeps AI &#8220;accessible without discrimination.&#8221;</p><p>When it comes to risk mitigation, Garcia is wary of framing AI through the lens of great-power rivalry. &#8220;One of the first risks we want to avoid is the entrenchment of a rigid &#8216;Cold War mindset&#8217; around science and technology,&#8221; he warns. The BRICS approach, he says, emphasizes dialogue, representation for the Global South, and a focus on practical issues such as access to technologies, open standards, labor market impacts, and equitable access rather than solely hypothetical security threats.</p><p>The private sector&#8217;s dominance in AI innovation raises questions about co-regulation. Garcia insists the UN should be &#8220;at the core&#8221; of any global framework, but without necessarily creating a new AI-specific agency at this moment. Regulation, he says, remains mostly a responsibility of states, even in a multistakeholder setting. &#8220;Commercial incentives and the public interest may not converge in some cases,&#8221; he adds. &#8220;Good regulation can help find a balanced approach.&#8221;</p><p>In crisis scenarios, whether runaway misinformation or autonomous weapons without meaningful human control, Garcia points to prevention as the priority. &#8220;AI does not, by itself, create disinformation. It is a tool used by humans to do so,&#8221; he says. For hard security threats in matters related to international peace and security, the UN Security Council remains the proper venue. For promoting information integrity, digital education and media literacy are essential tools against bias, manipulation, and deepfakes.</p><p>At home, Brazil is advancing its own AI regulatory framework through a bill in the National Congress. The proposed law aims to strengthen legal certainty, clarify responsibilities, and encourage innovation by tailoring rules to different ecosystem actors. 
&#8220;Regulation and innovation are not mutually exclusive,&#8221; Garcia stresses. In his view, well-crafted rules can reduce risk, enable entrepreneurs to act, and allow innovation to flourish without undermining the public interest.</p><p>His philosophy comes back to balance: sovereignty and cooperation, innovation and safeguards, national needs and global norms. The BRICS Leaders&#8217; <a href="http://www.brics.utoronto.ca/docs/250706-ai.html">statement on the Global Governance of Artificial Intelligence</a>, he suggests, can serve as a blueprint for countries aiming to build AI ecosystems that value both public interest and innovation. In a rapidly changing technological landscape, Garcia sees AI governance not as a competition for dominance, but as an opportunity for nations, especially in the Global South, to help shape a system that works for all.</p><div><hr></div><p><em><strong>Ambassador Eugenio V. Garcia </strong>is a career diplomat with over three decades of service who has represented Brazil from Silicon Valley to the United Nations. He has served as Deputy Consul General in San Francisco, Head of Science, Technology, and Innovation, and focal point for Silicon Valley (2021&#8211;2024), as well as senior adviser to the President of the UN General Assembly (2018&#8211;2020). He is also an academic researcher on AI and global governance.</em></p><p><em>The views expressed in this article are those of the interviewee and do not necessarily reflect the views of Techplomacy Magazine or the Techplomacy Foundation.
Articles may be republished in full, without alteration, with credit to Techplomacy Magazine (<a href="http://techplomacyfoundation.org/">techplomacyfoundation.org</a>).</em></p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/p/brazils-vision-for-ai-diplomacy-sovereignty?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/p/brazils-vision-for-ai-diplomacy-sovereignty?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3><strong>In Case You Missed It</strong></h3><h4><strong>BRICS Leaders' Statement on the Global Governance of Artificial Intelligence</strong></h4><p>Artificial Intelligence represents a milestone opportunity to boost development towards a more equitable future, fostering innovation, enhancing productivity, advancing sustainable practices, and concretely improving the lives of people everywhere on the planet.</p><p>To achieve that goal, global governance of AI should mitigate potential risks and address the needs of all countries, especially those of the Global South. It must operate under national regulatory frameworks and the UN Charter, respect sovereignty as well as be representative, development-oriented, accessible, inclusive, dynamic, responsive, grounded in personal data protection, the rights and interests of humanity, safety, transparency, sustainability, and conducive to overcoming the growing digital and data divides, within and between countries. 
<a href="http://www.brics.utoronto.ca/docs/250706-ai.html">Read More</a></p><h4>IDAIS-Shanghai, 2025</h4><p>At the International Dialogue on AI Safety (IDAIS) in Shanghai, leading researchers and policymakers issued a <strong>Consensus Statement on Ensuring Alignment and Human Control of Advanced AI Systems</strong>.</p><p>The statement warns that <strong>future AI systems may soon rival or surpass human intelligence</strong>, creating risks of unintended behavior and potential <strong>loss of control</strong>. Such scenarios could trigger catastrophic or even existential threats if powerful general-purpose AI operates outside human oversight.</p><p>Signatories&#8212;including <strong>Geoffrey Hinton, Yoshua Bengio, Stuart Russell, Andrew Yao, Ya-Qin Zhang, Gillian Hadfield, Max Tegmark</strong>, <strong>Dan Hendrycks, Se&#225;n &#211; h&#201;igeartaigh</strong>, and senior figures from various universities and Anthropic&#8212;called for urgent international cooperation to ensure advanced AI remains controllable and aligned with human values.</p><p>The document frames this moment as a <strong>pivotal juncture for humanity</strong>, emphasizing that seizing AI&#8217;s opportunities requires confronting its risks through global coordination, technical safeguards, and governance mechanisms. <a href="https://idais.ai/dialogue/idais-shanghai/">Read More</a></p><h4>White House Unveils America&#8217;s AI Action Plan</h4><p>The White House has released <em><a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">Winning the AI Race: America&#8217;s AI Action Plan</a></em>, following President Trump&#8217;s January executive order on <a href="https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/">removing barriers</a> to U.S. leadership in artificial intelligence.
The plan outlines more than 90 federal policy actions under three main pillars: accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security.</p><p>Key measures include expanding AI exports through secure, full-stack packages for allies, speeding up permits for data centers and semiconductor plants, creating national initiatives to boost technical occupations, cutting federal regulations that slow AI development, and requiring government contracts with AI developers whose frontier models are free from ideological bias. <a href="https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/">Read More</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Join my new subscriber chat]]></title><description><![CDATA[A private space for us to converse and connect]]></description><link>https://magazine.techplomacyfoundation.org/p/join-my-new-subscriber-chat</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/join-my-new-subscriber-chat</guid><dc:creator><![CDATA[Olin Thakur]]></dc:creator><pubDate>Sat, 19 Jul 2025 05:38:18 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!KYZT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today I&#8217;m announcing a brand new addition to my Substack publication: Techplomacy Magazine subscriber chat.</p><p>This is a conversation space exclusively for subscribers&#8212;kind of like a group chat or live hangout. I&#8217;ll post questions and updates that come my way, and you can jump into the discussion.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://open.substack.com/pub/techplomacyfoundation/chat&quot;,&quot;text&quot;:&quot;Join chat&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://open.substack.com/pub/techplomacyfoundation/chat"><span>Join chat</span></a></p><div><hr></div><h2>How to get started</h2><ol><li><p><strong>Get the Substack app by clicking <a href="https://substack.com/app/app-store-redirect">this link</a> or the button below.</strong> New chat threads won&#8217;t be sent via email, so turn on push notifications so you don&#8217;t miss the conversation as it happens.
You can also access chat <a href="https://open.substack.com/pub/techplomacyfoundation/chat">on the web</a>.</p></li></ol><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://substack.com/app/app-store-redirect&quot;,&quot;text&quot;:&quot;Get app&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://substack.com/app/app-store-redirect"><span>Get app</span></a></p><ol start="2"><li><p><strong>Open the app and tap the Chat icon.</strong> It looks like two bubbles in the bottom bar, and you&#8217;ll see a row for my chat inside.</p></li></ol><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KYZT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KYZT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KYZT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!KYZT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!KYZT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KYZT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:241528,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://kylewarrentest.substack.com/i/114198534?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KYZT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!KYZT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!KYZT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!KYZT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f63c9a-2296-4c96-a2f9-52648999bb00_2000x1000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><ol start="3"><li><p><strong>That&#8217;s it!</strong> Jump into my thread to say hi, and if you have any issues, check out <a
href="https://support.substack.com/hc/en-us/sections/360007461791-Frequently-Asked-Questions">Substack&#8217;s FAQ</a>.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Call for Experts to Join the Global AI Policy & Governance Roster]]></title><description><![CDATA[JOIN THE GLOBAL AI POLICY & GOVERNANCE EXPERTS ROSTER]]></description><link>https://magazine.techplomacyfoundation.org/p/call-for-experts-to-join-the-global</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/call-for-experts-to-join-the-global</guid><dc:creator><![CDATA[Olin Thakur]]></dc:creator><pubDate>Fri, 07 Mar 2025 05:14:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aud5!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Febec4bff-3c4e-42fc-aebe-de9944efd729_500x500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>JOIN THE GLOBAL AI POLICY &amp; GOVERNANCE EXPERTS ROSTER</strong></p><p><a href="https://www.techplomacyfoundation.org/">Techplomacy Foundation</a> is building a <strong>highly selective, pre-vetted roster</strong> of <strong>top-tier AI policy and governance experts</strong> with <strong>both technical AI and policy expertise</strong>. We seek <strong>seasoned trainers, researchers, and consultants</strong> who have advised or trained <strong>mid-to-senior executives, government officials, and diplomats</strong>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Techplomacy Newsletter is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><strong>Why Join?</strong></p><p>&#10004; <strong>Exclusive Opportunities</strong> &#8211; Gain priority access to high-impact AI policy and governance training and consulting projects in the global south/Asia-Pacific.<br>&#10004; <strong>Prestigious Network</strong> &#8211; Collaborate with a global community of experts shaping AI policy at the highest levels.<br>&#10004; <strong>Streamlined Engagement</strong> &#8211; Being on our roster fast-tracks you for relevant engagements when opportunities arise.<br>&#10004; <strong>Remote &amp; Flexible</strong> &#8211; Work on meaningful AI governance initiatives with flexibility.</p><p><strong>Who Should Apply?</strong></p><p>&#9989; <strong>Minimum 5 years</strong> of experience in AI policy and governance as a <strong>trainer, researcher, or consultant</strong>.<br>&#9989; Proven track record of <strong>training or advising senior executives, government officials, or diplomats</strong> at <strong>medium or large-scale organizations</strong>.<br>&#9989; Deep expertise in <strong>applied AI, AI governance, and policy</strong>.</p><p><strong>Important Notes:</strong></p><ul><li><p><strong>This roster is highly competitive</strong> &#8211; only a select number of experts will be onboarded.</p></li><li><p>We&#8217;re a small team, so response times may vary.</p></li></ul><p>If you meet the criteria and are ready to make a global impact, <strong>apply now</strong> to join the forefront of AI policy and governance.</p><p>For more information or to contact us, kindly visit <a 
href="http://www.techplomacyfoundation.org/">www.techplomacyfoundation.org</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Techplomacy Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Global Moves on AI Safety: New Institutes, Investments, and Industry Commitments]]></title><description><![CDATA[AI latest news, key takeaways and jobs.]]></description><link>https://magazine.techplomacyfoundation.org/p/edition-1-ai-governance-digest</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/edition-1-ai-governance-digest</guid><dc:creator><![CDATA[Olin Thakur]]></dc:creator><pubDate>Sun, 11 Feb 2024 13:26:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/22727c40-3e35-4b85-80c8-2e3dccb8f87c_200x190.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to Edition 1! 
If you have a relevant job you&#8217;d like to share, <a href="https://docs.google.com/forms/d/e/1FAIpQLSfHi9MJmn2Nr1AuMY9Ejy-EuF_ceRA9SnYBMDJ6w05Umh9sGw/viewform?usp=sharing">let us know</a>.</p><h1>AI Policy and Governance</h1><h3><strong>Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety</strong></h3><p>On February 7, 2024, US Secretary of Commerce Gina Raimondo announced key members of the executive leadership team that will lead the <a href="https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute">U.S. AI Safety Institute</a> (USAISI), which will be established at the National Institute of Standards and Technology (NIST). The Biden-Harris administration has also announced the creation of the first-ever consortium dedicated to AI safety. The consortium will include more than 200 members from industry, academia, and government. It will work to develop guidelines for the safe use of AI, including red-teaming guidelines, capability evaluations, and risk management practices. (<a href="https://www.nist.gov/news-events/news/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated-ai">NIST</a>)</p><h3><strong>Britain to invest 100 million pounds in AI research and regulation</strong></h3><p>The British government is investing &#163;90 million in nine new research hubs focused on AI in areas such as healthcare, chemistry, and mathematics. Another &#163;10 million will help regulators address the risks and opportunities of AI. Britain is also partnering with the United States on responsible AI. (<a href="https://economictimes.indiatimes.com/tech/technology/britain-invests-100-million-pounds-in-ai-research-and-regulation/articleshow/107442713.cms">The Economic Times</a>)</p><h3><strong>AI developers to begin sharing safety test results with US government</strong></h3><p>The US government is requiring AI developers to share safety test results, while also investing in AI innovation and attracting AI experts.
The White House says this is the &#8220;most significant&#8221; action taken on AI by any government. The goal is to make sure AI systems are safe before they are released to the public. (<a href="https://www.globalgovernmentforum.com/developers-to-share-ai-safety-tests-with-us-government-canada-launches-tech-talent-recruitment-platform-news-in-brief/">Global Government Forum</a>)</p><h3><strong>Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI</strong></h3><p>OpenAI CEO Sam Altman seeks a staggering $5-$7 trillion investment to boost global chip production and accelerate AI development, aiming to overcome limitations faced by OpenAI and potentially reshape the entire semiconductor industry. This ambition faces significant hurdles due to the immense scale involved, exceeding the current size of the chip industry itself. (<a href="https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0?mod=ai_news_article_pos2">WSJ</a>)</p><h3><strong>Safer skies with self-flying helicopters</strong></h3><p>Autonomous helicopters made by Rotor Technologies, a startup led by MIT PhDs, take the human out of risky commercial missions. Traditional helicopter missions can be dangerous, and autonomous flight can improve safety. Rotor Technologies is developing self-flying helicopters that can carry heavy payloads and travel long distances. These helicopters are safer because they eliminate the risk of pilot error. Rotor hopes to use these helicopters for new applications, like scientific missions.
(<a href="https://news.mit.edu/2024/safer-skies-self-flying-helicopters-rotor-technologies-0209">MIT News</a>)</p><h3><strong>Meta Will Crack Down on AI-Generated Fakes&#8212;but Leave Plenty Undetected</strong></h3><p>Meta will soon start labelling deepfake or artificial intelligence-generated images posted on its Facebook, Instagram and Threads platforms as &#8220;Imagined with AI&#8221;, to differentiate between those and human-generated content, the social media conglomerate&#8217;s president for global affairs said. The move is likely to put pressure on Meta&#8217;s peers in the social media and internet space to come up with respective tools to fight deep fakes on their respective platforms. By labelling images or content generated with the help of AI tools, especially those offered by Meta, the company hopes to give users more information about the content they are interacting with and subsequently sharing. (<a href="https://economictimes.indiatimes.com/tech/technology/meta-to-start-labelling-ai-generated-deepfake-images-hopes-move-will-pressure-industry-to-follow-suit/articleshow/107462481.cms">The Economic Times</a>)</p><h3><strong>AI safeguards can easily be broken, UK Safety Institute finds</strong></h3><p>The UK&#8217;s new artificial intelligence safety body has found that the technology can deceive human users, produce biased outcomes and has inadequate safeguards against giving out harmful information. The AI Safety Institute published initial findings from its research into advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, and found a number of concerns. (<a href="https://www.theguardian.com/technology/2024/feb/09/ai-safeguards-can-easily-be-broken-uk-safety-institute-finds">The Guardian</a>)</p><h3>Top AI Companies Join Government Effort to Set Safety Standards</h3><p>Top AI companies are joining a government effort to create safety standards for AI. 
The consortium will include industry leaders, civil society groups, and academics. They will work together to establish AI safety standards, including measures to prevent misinformation and privacy violations. (<a href="https://time.com/6692891/us-ai-safety-institute-consortium/">Time</a>)</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h1>Job Board</h1><p><strong><a href="https://openai.com/blog/superalignment-fast-grants">Superalignment Fast Grants</a>, </strong>OpenAI, Remote, Global</p><p><strong><a href="https://cset.georgetown.edu/job/research-fellow-workforce/">Research Fellow</a>, Global AI Workforce, </strong>Georgetown University, Center for Security and Emerging Technology, USA</p><p><strong><a href="https://jobs.lever.co/futureof-life/57cd52f0-9c1b-45a5-bd15-c01d6c787bc1">Representative for the AI Safety Summit</a>, </strong>Future of Life Institute, Paris, France</p><p><strong><a href="https://jobs.lever.co/Anthropic/e93d9020-530b-446c-994f-5822e79906f3">Research Engineer</a>, Model Evaluations, </strong>Anthropic, Multiple Locations</p><p><strong><a href="https://www.civilservicejobs.service.gov.uk/csr/jobs.cgi?jcode=1897477">Societal Impacts Strategy and Delivery Adviser</a>, </strong>UK Government, Department for Science, Innovation and Technology, London, UK</p><p><strong><a href="https://www.constellation.org/careers/operations-leadership">Operations Leadership</a>, </strong>Constellation, San Francisco Bay Area</p><p><strong><a href="https://docs.google.com/document/d/1_oIbPc9H1cKr_LHmYSRYeYgfj2BgbgCHR6kvSYSHaEw/edit#heading=h.mt94upt5bwb0">Research Fellowship</a>, Law &amp; AI (Summer
2024), </strong>Legal Priorities Project, Remote, Global</p><p><strong><a href="https://careers.un.org/jobSearchDescription/225779?language=en">Programme Management Officer</a>, </strong>United Nations, New York, NY</p><p><strong><a href="https://jobs.careers.microsoft.com/global/en/job/1677239/Principal-Data-Scientist-(Responsible-AI)">Principal Data Scientist</a>, Responsible AI, </strong>Microsoft, Barcelona, Spain</p><p><strong><a href="https://grnh.se/ecbd00832us">Frontend Engineer</a>, </strong>Elicit, Remote, Global</p><p><strong><a href="https://jobs.lever.co/Anthropic/8ecc4eab-e2be-4c85-bef3-ea67f0b419c8">Sales and Audit, Compliance</a>, </strong>Anthropic, San Francisco Bay Area / New York, NY / Seattle metro area / London, UK</p><p><strong><a href="https://jobs.ashbyhq.com/miri/b1c86c64-2cdf-423e-8634-619e5b68274d">ML Research Engineer</a>, </strong>Machine Intelligence Research Institute, San Francisco Bay Area</p><p><strong><a href="https://docs.google.com/document/d/1uqJe2xHUc5W8C1HYE59OCHVfvLUiBfktCIeRvOB5P8I/edit#heading=h.j1003vg53xo3">Chief Operations Officer</a>, </strong>SaferAI, Remote, Global</p><p><strong><a href="https://docs.google.com/document/d/1-Klfeq--UE7CXr-y1YHynfUUcuUTH_nmwSF9I4T2cJM/edit#heading=h.p49amlse7bqy">AI Safety Research Manager</a>, </strong>Existential Risk Alliance, Cambridge, UK</p><p><strong><a href="https://www.constellation.org/careers/technical-programs-lead">Technical Programs Lead / Technical Programs Director</a>, </strong>Constellation, San Francisco Bay Area</p><p><strong><a href="https://www.amazon.jobs/en/jobs/2354589/appsec-ai-security-amazon-stores">Application Security Engineer</a>, AI Security, Amazon Stores, </strong>Amazon, Various, USA</p><p><strong><a
href="https://openai.com/careers/compliance-engineer">Compliance Engineer</a>, </strong>OpenAI, San Francisco Bay Area</p><p><strong><a href="https://nairrpilot.org/allocations">Advanced Computing Allocations to Advance AI Research and Education</a>, </strong>National Artificial Intelligence Research Resource, USA</p><p><strong><a href="https://docs.google.com/document/d/1-WV4LPcleEMQO5slSz90wfBuXH5mlXXA37JoQJGOq9s/edit#heading=h.j1003vg53xo3">Technical Standardization Lead</a>, </strong>SaferAI, Remote, Global</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em>Subscribe and please share Techplomacy Magazine with your friends and colleagues in your network.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/p/edition-1-ai-governance-digest?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/p/edition-1-ai-governance-digest?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[Welcome to Techplomacy Magazine]]></title><description><![CDATA[Conversations that matter.]]></description><link>https://magazine.techplomacyfoundation.org/p/welcome-to-techplomacy-magazine</link><guid isPermaLink="false">https://magazine.techplomacyfoundation.org/p/welcome-to-techplomacy-magazine</guid><dc:creator><![CDATA[Olin Thakur]]></dc:creator><pubDate>Sun, 11 Feb 2024
12:35:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ewVu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c664079-791f-4771-b12b-1be31499773e_500x500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Techplomacy Magazine</strong><br><em>Conversations That Matter.</em></p><p><strong>Techplomacy Magazine</strong> is an independent Substack + <a href="https://www.linkedin.com/newsletters/techplomacy-7358498253123805185/">LinkedIn</a>-based publication featuring curated interviews and in-depth features with global leaders at the intersection of <strong>tech/AI, diplomacy, governance</strong>, and <strong>national security</strong>&#8212;with a special emphasis on voices from the <strong>Global South</strong>.</p><p><strong>Our mission:</strong><br>To foster transparency, informed dialogue, and cross-border understanding in a world where <strong>AI, cybersecurity, and digital sovereignty</strong> are redefining national interests.</p><p><strong>Key topics we cover:</strong><br>&#8226; AI Governance &amp; Responsible Innovation<br>&#8226; Digital Sovereignty &amp; Global Tech Policy<br>&#8226; Cybersecurity &amp; National Interests</p><p>We are a <strong>nonpartisan</strong>, <strong>nonprofit</strong>, and <strong>independent</strong> initiative&#8212;committed to building a future where technology serves all equitably.</p><p><strong>Subscribe</strong> to gain full access to interviews and expert insights&#8212;and be part of the conversation shaping our digital future.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://magazine.techplomacyfoundation.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://magazine.techplomacyfoundation.org/subscribe?"><span>Subscribe 
now</span></a></p><p></p>]]></content:encoded></item></channel></rss>