AI Governance at Arcology Scale
The arcology requires governing 5,000-10,000 interdependent AI systems affecting 10 million residents across 8 engineering domains. Current governance frameworks target enterprise deployments of 100-500 models. The 10-100x scale gap is compounded by cross-domain cascading failure risks and the need for sub-second governance decisions in safety-critical contexts. Recent research on multi-agent emergent behavior and cascading failures in agentic AI provides partial frameworks, but integration at arcology scale remains unprecedented.
Assumptions
- Population of 10 million residents in a single enclosed structure
- 5,000-10,000 distinct AI systems across 8 engineering domains with 32 subdomains
- 30-50 million sensor/actuator endpoints making local autonomous decisions
- Safety-critical systems require sub-second governance response
- The arcology operates under multiple jurisdictions (local, state, federal, international)
- AI agents have citizenship standing and economic participation within the arcology
The Unprecedented Governance Challenge
The arcology is not a building with AI systems. It is an AI system with physical instantiation — thousands of autonomous agents managing every aspect of a 10-million-person habitat, from fire suppression to elevator dispatch to atmospheric composition. No existing governance framework addresses anything close to this density of AI decision-making affecting this many lives within a single structure.
Current enterprise AI governance platforms — Credo AI, Holistic AI, Monitaur — manage portfolios of up to 500 AI models for large organizations. The arcology requires governing 5,000-10,000 distinct AI systems operating across 8 engineering domains. The scale gap is roughly 15x at the midpoint, but scale understates the problem. Enterprise AI models are largely independent; the arcology's AI systems are deeply interdependent. An HVAC optimization affects fire safety. Elevator scheduling affects emergency evacuation. Water pressure management affects fire suppression. A governance framework that treats these as separate systems will miss the emergent pathologies that arise from their interaction.
The enterprise model landscape is shifting fast. Databricks' 2025 State of AI report, drawing on anonymized data from 10,000+ global customers including 300+ Fortune 500 companies, found that organizations put 11x more AI models into production year-over-year, with 1,018% more models registered for production use [databricks-state-ai-2025]. This explosive growth suggests that by the time the arcology is operational, even a single large enterprise might manage thousands of models — closing the scale gap but not the interdependency gap. Enterprise models don't share physical actuators.
The closest analogy is not enterprise AI governance. It is air traffic control — thousands of actors making real-time decisions that must be coordinated to prevent catastrophic collisions. But air traffic control operates in three dimensions with clear separation rules. The arcology's AI systems share the same physical space and often the same physical actuators. The governance challenge is harder.
The Regulatory Landscape
Three major frameworks define the current governance environment, and the arcology must comply with all of them:
EU AI Act (2024, phased enforcement through 2027). The world's first comprehensive AI regulation classifies systems by risk level. Critical infrastructure — which includes building management systems, energy grids, water supply, and heating — falls into the high-risk category under Annex III. Requirements include risk management systems, data governance, technical documentation, automatic logging, human oversight, and accuracy/robustness/cybersecurity safeguards. Full compliance requirements for high-risk systems become enforceable August 2, 2026, though the European Commission's Digital Omnibus proposal (November 2025) could extend this to December 2027 [eu-ai-act-2024]. Penalties are tiered: up to EUR 35 million or 7% of global turnover for prohibited practices, EUR 15 million or 3% for high-risk non-compliance, and EUR 7.5 million or 1% for supplying incorrect information. SMEs and startups face proportionally lower fines.
The arcology's building management AI clearly qualifies as high-risk under the EU AI Act. Every AI system controlling HVAC, fire safety, elevators, water distribution, or power allocation requires the full compliance stack: risk assessment, documentation, logging, human oversight mechanisms. At 5,000-10,000 systems, the compliance overhead is substantial. But the Act was written for discrete AI deployments, not interconnected ecosystems. Meeting the letter of the requirement for individual systems may not address the systemic risks that arise from their interaction. This gap — between individual-system conformity assessment and system-of-systems risk — is an open regulatory question the arcology must help shape.
NIST AI Risk Management Framework (AI RMF 1.0, 2023; Cyber AI Profile 2025). The U.S. voluntary framework organizes around four functions: Govern, Map, Measure, Manage. The March 2025 update expanded the framework to address generative AI, supply chain vulnerabilities, and new threat categories including poisoning attacks, evasion attacks, data extraction, and model manipulation [nist-ai-rmf-2023]. In December 2025, NIST published a preliminary draft Cybersecurity Framework Profile for Artificial Intelligence (IR 8596), providing mappings between the AI RMF and the broader CSF for organizations incorporating AI into cybersecurity planning [nist-cyber-ai-profile-2025]. NIST invested $20M in AI centers for manufacturing and critical infrastructure in late 2025, signaling increasing regulatory attention.
ISO/IEC 42001:2023. The first international AI management system standard uses Plan-Do-Check-Act methodology for AI governance. It provides governance structures, risk management protocols, transparency guidelines, and compliance mechanisms. Certification is valid for 3 years. Organizations with existing ISO 27001 (information security) certification can achieve 42001 compliance up to 40% faster. Adoption is accelerating: AWS, Google Cloud, and Microsoft Azure have achieved ISO 42001 certification, and KPMG U.S. became one of the first Big Four firms to certify in November 2025. Microsoft now requires ISO 42001 for AI vendors under its SSPA program v10. The standard is becoming enterprise-expected by 2026, with third-party risk teams increasingly requiring it [iso-42001-2023].
The practical challenge: these frameworks assume an organization deploying AI systems, not an AI-governed habitat. Compliance requires adapting frameworks designed for corporate AI portfolios to a context where AI systems are the operating system of a city-scale structure.
The Multi-Level Architecture Problem
The arcology requires governance at six nested levels, with authority boundaries and escalation paths between them:
| Level | Scope | Example Systems | Governance Speed |
|-------|-------|-----------------|------------------|
| Device | Individual sensor/actuator | Fire detector, valve controller | Milliseconds |
| Zone | Floor section (50-100 rooms) | HVAC zone, water pressure zone | Sub-second |
| Floor | Single floor coordination | Cross-zone optimization | Seconds |
| District | Vertical segment (10-20 floors) | District thermal management | Minutes |
| Domain | System-wide (all water, all fire safety) | Domain-level policy, emergency protocols | Hours |
| External | Regulatory compliance | EU AI Act, local building codes | Days to months |
No existing governance model addresses this hierarchy. Smart city governance operates at 2-3 levels (device, city, regulatory). The arcology needs 6+ levels with clear rules for when decisions escalate and when lower levels have autonomous authority.
The escalation problem is especially acute. A floor-level HVAC AI making a routine optimization decision should not escalate to district or domain governance — the overhead would make the system unusable. But a floor-level AI detecting conditions that could cascade to adjacent floors must escalate immediately. Defining the boundaries between autonomous operation and mandatory escalation is a governance design problem with no precedent.
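One way to make that boundary explicit is a machine-checkable escalation rule evaluated before any decision executes. The sketch below is illustrative only: the level names follow the table above, while `Decision`, the thresholds, and the one-level escalation policy are assumptions of ours, not an established design.

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    DEVICE = 0
    ZONE = 1
    FLOOR = 2
    DISTRICT = 3
    DOMAIN = 4
    EXTERNAL = 5

@dataclass
class Decision:
    origin: Level            # level of the system proposing the action
    affected_zones: int      # predicted number of zones whose inputs change
    crosses_domain: bool     # touches actuators owned by another domain
    cascade_risk: float      # estimated probability of propagation, in [0, 1]

def handling_level(d: Decision,
                   zone_limit: int = 1,
                   risk_limit: float = 0.01) -> Level:
    """Return the lowest level allowed to handle this decision autonomously.

    Routine, locally contained decisions stay at their origin level;
    anything that could propagate escalates at least one level.
    (Thresholds here are hypothetical governance parameters.)
    """
    if d.crosses_domain:
        return Level.DOMAIN                            # cross-domain: mandatory escalation
    if d.cascade_risk > risk_limit or d.affected_zones > zone_limit:
        return Level(min(d.origin + 1, Level.DOMAIN))  # escalate one level
    return d.origin                                    # autonomous within bounds

routine = Decision(Level.FLOOR, affected_zones=1, crosses_domain=False, cascade_risk=0.001)
risky = Decision(Level.FLOOR, affected_zones=6, crosses_domain=False, cascade_risk=0.2)
assert handling_level(routine) == Level.FLOOR      # stays local
assert handling_level(risky) == Level.DISTRICT     # mandatory escalation
```

The useful design property is that the rule is auditable: the same predicate that routes decisions at runtime can be replayed against logs during post-hoc review.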
Real-Time Governance vs. Deliberative Governance
Traditional AI governance is deliberative: review before deployment, audit periodically, update policies quarterly. The arcology's safety-critical systems require real-time governance: continuous monitoring, sub-second anomaly detection, automatic containment of AI behavior that violates safety constraints.
The latency problem creates a fundamental tension. Human oversight introduces latency. If an AI system detects a fire and needs to reconfigure HVAC (to contain smoke), seal fire doors (to prevent spread), and redirect elevators (for evacuation), waiting for human approval could cost lives. But fully autonomous response without human oversight violates every governance principle and likely violates the EU AI Act's human oversight requirements.
The likely architecture involves:
- Pre-approved playbooks. Safety-critical scenarios (fire, structural alert, evacuation) have pre-designed response protocols. AI systems execute these autonomously within defined parameters. The governance oversight happened during playbook design, not during execution.
- Bounded autonomy. AI systems can take any action within their designated domain and parameters without approval. Actions that cross domain boundaries or exceed parameter thresholds trigger escalation.
- Real-time visibility with override. Human operators have continuous visibility into AI system states. Override capability exists at every level. But override is the exception, not the approval gate.
- Post-hoc audit at human-relevant timescales. Decisions that happened in milliseconds are logged and audited on human timescales — hourly, daily, weekly — looking for patterns that indicate policy drift or emerging risks.
This architecture treats real-time AI decisions as operating within a governance envelope established through deliberative processes. The governance framework defines the envelope; individual decisions operate within it; audit processes verify the envelope is maintained.
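A minimal sketch of this envelope logic, assuming playbooks and parameter bounds are expressed as data rather than code. All names here (`Playbook`, `Envelope`, the audit-log format) are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Playbook:
    name: str
    triggers: set[str]                 # event types this playbook pre-authorizes
    allowed_actions: set[str]          # actions it may execute without approval

@dataclass
class Envelope:
    bounds: dict[str, tuple[float, float]]    # parameter -> (min, max)
    playbooks: list[Playbook] = field(default_factory=list)
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, event: str, action: str, params: dict[str, float]) -> str:
        """Return 'execute' or 'escalate'. Every decision is logged either way."""
        verdict = "escalate"
        # 1. Pre-approved playbook: execute autonomously; oversight happened at design time.
        if any(event in p.triggers and action in p.allowed_actions
               for p in self.playbooks):
            verdict = "execute"
        # 2. Bounded autonomy: every supplied parameter must be known and in range.
        elif params and all(
            k in self.bounds and self.bounds[k][0] <= v <= self.bounds[k][1]
            for k, v in params.items()
        ):
            verdict = "execute"
        # 3. Everything else escalates to human-supervised handling.
        self.audit_log.append({"t": time.time(), "event": event,
                               "action": action, "verdict": verdict})
        return verdict

env = Envelope(bounds={"airflow_delta_pct": (-10.0, 10.0)},
               playbooks=[Playbook("fire", {"smoke_detected"}, {"seal_doors"})])
assert env.authorize("smoke_detected", "seal_doors", {}) == "execute"
assert env.authorize("routine", "boost_airflow", {"airflow_delta_pct": 35.0}) == "escalate"
```

Override sits outside this gate: operators can reverse an executed action, but the gate itself never waits on a human for pre-approved or in-bounds decisions, which is what keeps latency sub-second.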
The Cascading Failure Problem
The 2003 Italy blackout demonstrated cascading failure across interdependent infrastructure — power station failures crashed internet and communications, which caused further power station failures in a feedback loop. At arcology scale, AI systems making locally optimal decisions can trigger system-wide cascades.
OWASP's 2026 ASI08 guide specifically addresses cascading failures in agentic AI controlling physical infrastructure [owasp-asi08-2026]. The core problem: an AI optimizing one variable (HVAC airflow efficiency) can inadvertently create conditions that defeat safety systems in another domain (smoke containment during fire). Neither AI system is malfunctioning. Both are optimizing their designated objectives. The pathology emerges from interaction.
Recent research provides formal language for this problem. Altmann et al. (2024) define emergent failures as misalignments between a system's "global inherent specification" — the true intended behavior — and the "local approximation" each agent actually operates on [altmann-emergence-2024]. In decentralized multi-agent systems, agents derive their behavior from incomplete local specifications. When these local approximations are individually reasonable but collectively incoherent, the result ranges from minor deviations to catastrophic failures. The arcology is the ultimate case: 5,000-10,000 agents each operating on local specifications, with no single entity holding the complete global specification in working memory.
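In rough formal terms (our notation, not necessarily the paper's): let $\Sigma$ be the global inherent specification and $\sigma_i$ the local approximation agent $i$ operates on. Emergent failure is the case where the joint behavior satisfies every local specification yet violates the global one:

```latex
% Our paraphrase of the framing in [altmann-emergence-2024].
% b is the joint system behavior; b|_i is its restriction to agent i's scope.
\forall i:\; b\,|_i \models \sigma_i
\qquad \text{and yet} \qquad
b \not\models \Sigma
```

Detection therefore cannot proceed agent by agent: it requires checking properties of the joint behavior $b$ that no single $\sigma_i$ expresses.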
The Cooperative AI Foundation's 2025 report, authored by Hammond et al. with contributions from 50+ researchers across DeepMind, Anthropic, Carnegie Mellon, and Harvard, identifies seven risk factors for multi-agent systems: information asymmetries, network effects, selection pressures, destabilizing dynamics, commitment problems, emergent agency, and multi-agent security vulnerabilities [hammond-multiagent-risks-2025]. Three failure modes dominate: miscoordination (failure to cooperate despite shared goals), conflict (failure to cooperate due to differing goals), and collusion (undesirable cooperation). At arcology scale, all three are likely. An HVAC system and fire safety system with shared goals may miscoordinate. Energy optimization and comfort systems with competing goals may conflict. Multiple systems optimizing for the same metric may collude in ways that defeat safety constraints.
The Gradient Institute's 2025 report, commissioned by the Australian Government, identifies six specific failure modes for governed multi-agent LLM systems: cascading reliability failures, inter-agent communication failures, monoculture collapse, conformity bias, deficient theory of mind, and mixed motive dynamics [gradient-institute-2025]. Critically, they note that "multi-agent risk assessment remains a nascent field with emerging principles but no mature standards." Their recommended approach — staged testing across progressively less-abstracted environments, collecting convergent evidence through simulation, observational analysis, benchmarking, and red teaming — maps well to the arcology's digital twin strategy.
The arcology's governance framework must model and detect these cross-domain interactions. This requires:
- Dependency mapping. Explicit modeling of how each AI system's outputs affect other systems' inputs. At 5,000-10,000 systems with potentially millions of dependencies, this is a significant data engineering challenge.
- Constraint propagation. When one system approaches boundary conditions, dependent systems are notified to adjust their own optimization targets. A fire safety system detecting elevated risk should constrain what HVAC optimization is permitted.
- Circuit breakers. Automatic containment when cascade indicators appear — limiting how much any single system can change state in a given time window, requiring coordination approval for changes above threshold (see the sketch after this list). Current agentic AI systems often lack the circuit breakers common in distributed systems engineering; when one agent fails, others have no mechanism to detect the failure and isolate themselves [owasp-asi08-2026].
- Simulation before execution. For non-emergency decisions, changes are simulated in the digital twin before deployment. The simulation includes cross-domain interaction effects. This catches many cascade risks before they manifest in the physical system. The Gradient Institute's staged-testing methodology — progressively increasing exposure through simulation, observation, benchmarking, and red teaming — provides a concrete framework for implementing this [gradient-institute-2025].
- Monoculture avoidance. If all HVAC controllers run the same model architecture, a single adversarial input or systematic bias could compromise the entire system simultaneously. Deliberate architectural diversity in safety-critical AI systems reduces the risk of correlated failures.
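To make the first three items concrete, here is a minimal sketch combining a dependency graph, constraint propagation, and a per-system rate-limiting circuit breaker. It simplifies heavily (synchronous propagation, a single change budget per system) and every identifier is hypothetical.

```python
import time
from collections import defaultdict, deque

class GovernanceGraph:
    def __init__(self, max_changes: int = 5, window_s: float = 60.0):
        self.deps: dict[str, set[str]] = defaultdict(set)          # system -> downstream systems
        self.constraints: dict[str, set[str]] = defaultdict(set)   # active constraints per system
        self.changes: dict[str, deque] = defaultdict(deque)        # timestamps of recent changes
        self.max_changes = max_changes
        self.window_s = window_s

    def add_dependency(self, upstream: str, downstream: str) -> None:
        """Record that upstream's outputs feed downstream's inputs."""
        self.deps[upstream].add(downstream)

    def propagate_constraint(self, source: str, constraint: str) -> None:
        """Notify all transitive dependents to tighten their optimization targets."""
        queue, seen = deque([source]), {source}
        while queue:
            node = queue.popleft()
            for dep in self.deps[node]:
                if dep not in seen:
                    seen.add(dep)
                    self.constraints[dep].add(constraint)
                    queue.append(dep)

    def allow_change(self, system: str, now: float | None = None) -> bool:
        """Circuit breaker: cap state changes per system per time window."""
        now = time.time() if now is None else now
        recent = self.changes[system]
        while recent and now - recent[0] > self.window_s:
            recent.popleft()                 # drop timestamps outside the window
        if len(recent) >= self.max_changes:
            return False                     # tripped: require coordination approval
        recent.append(now)
        return True

# Example: fire risk on floor 14 constrains HVAC optimization downstream.
g = GovernanceGraph()
g.add_dependency("fire_safety_f14", "hvac_f14")
g.add_dependency("hvac_f14", "hvac_f15")
g.propagate_constraint("fire_safety_f14", "limit_airflow_delta")
assert "limit_airflow_delta" in g.constraints["hvac_f15"]
```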
No current governance platform implements these capabilities at arcology scale. This is a development requirement, not a configuration of existing tools.
Accountability in Nested Systems
When an AI decision causes harm in the arcology, who is accountable? The question is harder than it appears.
Current accountability frameworks — the EU AI Act's provider/deployer distinction, Raji's algorithmic auditing framework — assume tractable chains of responsibility. Provider (who built the AI) and deployer (who operates it) are identifiable. Auditing can examine the AI's decision process.
At arcology scale, a harmful outcome might result from the interaction of decisions across 5-10 AI systems from different vendors, operating in different domains, governed by different policies. System A made a decision within its parameters that affected system B's inputs, which caused system B to adjust in ways that affected systems C and D, and the compound effect was harm. None of the individual AI systems violated their operational rules. The harm was emergent.
This accountability problem is well-documented in recent legal scholarship. The "component parts" doctrine in product liability law limits the liability of component sellers for injuries caused by the final integrated product — but AI system interactions create emergent properties that cannot be attributed to any single component [liability-gaps-overlaps-2025]. The "liability sink" problem compounds this: the nearest human operator shoulders the burden of accountability for bad outcomes simply because they are the last in a long causal chain, even when they are not the most culpable actor. In the arcology, the floor operator monitoring 50 AI systems during a cascading failure is neither the designer of the failing interaction nor the vendor of the misbehaving component — but current liability frameworks may hold them accountable.
This accountability gap requires novel institutional frameworks:
- Interaction-level accountability. Beyond individual system compliance, the integration layer that allows systems to interact carries accountability for interaction effects.
- Domain-level ownership. Even if individual systems are built by different vendors, each domain (water, fire safety, HVAC) has a single accountable owner for all systems operating within it.
- Insurance pools. Given the difficulty of assigning fault in cascading scenarios, mutual insurance among domain operators may be more effective than attempting to determine liability for each incident.
- Safe harbor for compliance. If all individual systems operated within their governance parameters and the harm resulted from emergent interaction, the governance framework itself — rather than individual actors — may be where accountability lies. This suggests the arcology needs institutional capacity to absorb responsibility for systemic failures.
- Fiduciary duties for integration. Recent scholarship proposes treating system integrators as fiduciaries with heightened duties of care — a model that maps naturally to the arcology's domain-level governance structure, where domain owners integrate multiple vendor systems [liability-gaps-overlaps-2025].
Democratic Governance of AI Systems
10 million residents affected by AI decisions deserve meaningful input into how those systems operate. But participatory governance at this scale is itself an unsolved problem in political science — before adding the complexity of AI systems operating at speeds and scales humans cannot directly oversee.
The risk is that AI optimization substitutes for democratic deliberation. The atmospheric control system can optimize air quality across 10 million residents better than any human committee could manage through deliberation. But "optimization" embeds choices about what to optimize: comfort vs. energy efficiency, average conditions vs. variance across the structure, immediate response vs. long-term system health. These are political questions that become invisible when delegated to AI.
Recent work offers both tools and cautionary notes. The OECD's 2025 report on AI in civic participation finds that AI tools can reduce the time, cost, and complexity of democratic participation — translating materials into plain language, organizing large volumes of input, identifying patterns across contributions, and generating feedback to participants [oecd-ai-civic-2025]. Fort Collins, Colorado used AI-enabled analysis to process 4,000+ long-form citizen responses on a contested land-use issue. The Global Coalition for Inclusive AI Governance, launched at the Paris AI Action Summit in February 2025, aims to bring 10,000+ citizens together to deliberate on AI governance — the largest such exercise attempted. But critics warn of "technosolutionism" — that introducing technology as a solution to fix democratic deliberation actually depoliticizes and disintermediates the process.
The governance framework must create mechanisms for democratic input without requiring 10 million people to understand the technical details of atmospheric control systems:
- Constitutional-level constraints. Residents vote on high-level priorities (comfort vs. efficiency tradeoffs, equity vs. efficiency in resource allocation) that constrain AI optimization targets.
- Domain councils. Each engineering domain has a governance council with resident representation. The council sets policy; AI systems execute within policy bounds.
- Transparency dashboards. Real-time visibility into what AI systems are doing and why, at a level of abstraction appropriate for non-specialists.
- Override petitions. Residents can petition for policy review if AI behavior consistently conflicts with their preferences. This is the democratic safety valve when optimization diverges from values.
- AI-assisted deliberation. AI tools themselves can facilitate democratic engagement — structuring debates, synthesizing input, identifying consensus and points of disagreement — at scales that traditional deliberative processes cannot handle. The 10,000-citizen exercises now being piloted would need to scale 1,000x for the arcology.
- AI council representation. If AI agents have citizenship standing (as established in the binding hierarchy), they participate in governance councils — but are bound by the same accountability structures as human participants.
The tension between efficiency and legitimacy cannot be resolved by design. It must be managed through ongoing political process — which is why the governance framework must be adaptable rather than fixed.
Shadow AI Governance
At population scale, residents and businesses will deploy their own AI systems — personal assistants, business automation, private security systems — interacting with the arcology's infrastructure in uncontrolled ways. The scale of this problem is now better quantified than when governance frameworks were first designed.
BlackFog's 2025 survey of 2,000 employees found that 49% use AI tools not sanctioned by their employer [blackfog-shadow-ai-2025]. Broader studies paint an even starker picture: 78% of employees bring their own AI tools to work, with 76% of businesses reporting active BYOAI use. A survey of 12,000+ white-collar workers found 60.2% had used AI tools at work, but only 18.5% were aware of any company AI policy. Among those using unsanctioned AI, 58% rely on free versions lacking enterprise-grade security. These numbers will only increase: 86% of respondents now use AI tools weekly for work tasks.
At 10 million residents, with current enterprise shadow AI rates, the arcology could see 3-5 million active unsanctioned AI tools within its boundaries — each potentially interacting with building infrastructure through shared networks, energy consumption, or environmental data collection.
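The range follows from simple extrapolation. The working-population share below is our assumption; the unsanctioned-use rates come from the surveys cited above.

```python
residents = 10_000_000
active_user_share = 0.65            # assumed share of residents active enough to run AI tools
unsanctioned_rate = (0.49, 0.78)    # BlackFog 2025 survey; broader BYOAI studies

low = residents * active_user_share * unsanctioned_rate[0]    # ~3.2M
high = residents * active_user_share * unsanctioned_rate[1]   # ~5.1M
print(f"{low/1e6:.1f}M - {high/1e6:.1f}M unsanctioned AI tools")
```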
Shadow AI creates multiple risks:
- Resource consumption. Ungoverned AI systems consuming shared compute, network, or energy resources.
- Security vulnerabilities. Poorly secured AI systems creating attack vectors into building infrastructure. 90% of IT leaders in enterprises with 1,000+ employees report concern about shadow AI from a privacy and security standpoint, with nearly 80% having experienced negative AI-related data incidents [blackfog-shadow-ai-2025].
- Interference. Personal AI systems attempting to optimize resident comfort in ways that conflict with building-wide optimization.
- Privacy violations. AI systems collecting data on other residents without consent.
The governance framework needs mechanisms for shadow AI that don't require authoritarian control over every device residents own:
- Infrastructure isolation. Building systems networks are physically isolated from resident networks. Shadow AI cannot directly reach fire suppression or elevator control.
- Resource metering. AI systems consuming shared resources (compute cycles, network bandwidth) above threshold are detected and throttled (a sketch follows this list).
- Behavioral monitoring at boundaries. The points where resident systems interact with building systems are monitored. Anomalous patterns trigger investigation.
- Incentive structures. Making it easier to operate registered AI systems than unregistered ones — better service, priority access to resources — shifts the cost-benefit calculation toward compliance.
- Graduated enforcement. Warnings, resource throttling, and service restrictions before punitive measures. Most shadow AI is not malicious; it's just outside the governance system.
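A sketch of how metering and graduated enforcement could compose, as referenced in the resource-metering item above. The tiers, budgets, and the 2x registered-system headroom are hypothetical placeholders for values a governance process would set.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NONE = "none"
    WARN = "warn"
    THROTTLE = "throttle"
    RESTRICT = "restrict_services"

@dataclass
class UsageMeter:
    bandwidth_mbps: float
    compute_units: float
    registered: bool        # registered systems get headroom (the compliance incentive)

def enforcement_step(m: UsageMeter, prior_warnings: int) -> Action:
    """Graduated response: warn, then throttle, then restrict services."""
    headroom = 2.0 if m.registered else 1.0
    # Overage ratio against assumed baseline budgets (100 Mbps, 50 compute units).
    over = max(m.bandwidth_mbps / (100 * headroom),
               m.compute_units / (50 * headroom))
    if over <= 1.0:
        return Action.NONE
    if prior_warnings == 0:
        return Action.WARN
    if prior_warnings == 1:
        return Action.THROTTLE
    return Action.RESTRICT

heavy_unregistered = UsageMeter(bandwidth_mbps=180, compute_units=20, registered=False)
assert enforcement_step(heavy_unregistered, prior_warnings=0) == Action.WARN
assert enforcement_step(heavy_unregistered, prior_warnings=2) == Action.RESTRICT
```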
95% shadow AI coverage — the arcology's target — would be unprecedented. Current enterprise shadow AI detection covers perhaps 35% of organizational AI. This 60-percentage-point gap is the governance system's most ambitious target outside of core safety. Closing it requires both technical capability (infrastructure isolation, metering) and social engineering (incentive structures that make compliance the path of least resistance).
What's Achievable Now
The individual governance components exist:
- Standards. ISO 42001, NIST AI RMF, EU AI Act requirements are well-defined and testable. ISO 42001 adoption is accelerating — major cloud platforms and Big Four firms have certified, and it's becoming an enterprise procurement requirement [iso-42001-2023].
- Platforms. Credo AI, Holistic AI, Monitaur can manage enterprise-scale AI portfolios with automated risk assessment, policy enforcement, and compliance reporting.
- Auditing methods. Algorithmic bias detection, fairness assessment, interpretability tools are mature for many AI system types.
- Regulatory frameworks. Multiple jurisdictions have or are developing AI governance requirements. NIST's December 2025 Cyber AI Profile draft bridges AI risk management with broader cybersecurity frameworks [nist-cyber-ai-profile-2025].
- Multi-agent risk analysis. The Gradient Institute's staged-testing methodology provides an initial framework for assessing governed multi-agent systems, though it acknowledges the field has "emerging principles but no mature standards" [gradient-institute-2025].
What requires significant development:
| Capability | Current State | Arcology Need | Gap |
|------------|---------------|---------------|-----|
| AI systems under governance | Up to 500 per platform | 5,000-10,000 | 15x |
| Governed endpoints | ~50,000 (largest BMS) | 30-50 million | 600-1,000x |
| People affected | ~100,000 (pilots) | 10 million | 100x |
| Governance decision latency | Hours-days (audit cycle) | Sub-second (safety) | 10,000x+ |
| Cross-domain modeling | 2-3 domains (research) | 8 domains, 32 subdomains | 3-10x |
| Cascading failure detection | Single-domain (power grid) | Multi-domain | Novel |
| Democratic participation rate | ~1% (typical municipal) | >50% on major decisions | 50x |
| Shadow AI detection | ~35% of enterprise AI | 95% target | 60 percentage points |
The gap is not primarily technological. It is architectural. The governance platform exists. The individual tools exist. Integrating them into a coherent system operating at arcology scale, with real-time capability and democratic legitimacy, requires design work that has no direct precedent.
Lessons from Precedents
Songdo, South Korea (2002-present). The $40B smart city deployed 500,000 sensors with centralized AI control. The governance approach — top-down, technology-driven, with rigid centralized decision-making — failed spectacularly. The system was slow to adapt, didn't serve resident needs, and triggered a decade-long "smart city winter" in South Korea. The lesson: governance architecture matters as much as technology. Centralized control without resident participation produces rejection, not optimization [songdo-lessons-2025].
Singapore AI Governance (2018-2026). The most comprehensive national-scale governance infrastructure: Model AI Governance Framework, AI Verify testing toolkit (open-source), sector-specific guidelines, and the world's first Agentic AI Framework (2026) [singapore-agentic-ai-2026]. The key insight: governance must be testable (not just principled) and domain-specific. Singapore's layered approach — general principles plus sector toolkits plus technical testing — is the closest model for the arcology's multi-domain needs.
Nuclear Power Safety Governance. The most relevant precedent for safety-critical building AI. Defense in depth (multiple independent safety barriers), probabilistic risk assessment, safety culture as institutional practice, and independent regulatory oversight (NRC in the U.S.). Nuclear safety demonstrates that safety-critical autonomous systems can be governed effectively — but it required decades of institutional development. Key difference: nuclear plants operate on well-understood physics; AI systems exhibit emergent behavior that resists probabilistic modeling.
ISS Environmental Control. The International Space Station operates autonomous life support with ground-based oversight — a micro-version of the arcology's governance challenge. Even at 7-person scale, the system requires extensive monitoring, override capability, and pre-programmed safe modes. The oversight overhead per person is enormous. At 10 million people, that oversight model doesn't scale linearly. The arcology needs governance that scales better than human-in-the-loop for routine decisions.
The Binding Constraints
Three constraints shape every governance architecture decision:
- Sub-second safety response. Fire, structural alerts, and life safety situations cannot wait for governance deliberation. Safety-critical decisions must be pre-authorized through governance processes, then execute autonomously.
- Cross-domain visibility. Cascading failures emerge from interactions between domains. Governance that treats domains as independent will miss systemic risks. Cross-domain modeling and monitoring is not optional. The seven risk factors identified by Hammond et al. — particularly network effects, destabilizing dynamics, and emergent agency — are architectural constraints, not edge cases [hammond-multiagent-risks-2025].
- Regulatory compliance. The EU AI Act, NIST RMF, and ISO 42001 create compliance requirements that cannot be ignored. The August 2026 high-risk enforcement deadline is imminent. The governance framework must satisfy external regulators, not just internal operational needs.
Everything else — platform selection, hierarchy design, escalation protocols — must work within these constraints. The constraints themselves are physics, liability, and law.
The Hardest Problem
The hardest governance problem is not technical. It is emergent behavior in interconnected autonomous systems.
5,000-10,000 AI systems, each operating within its governance parameters, can still produce outcomes that no individual system was designed to produce and that the governance framework didn't anticipate. This is not a bug. It is the nature of complex systems. Governance designed for individual AI systems will always be surprised by emergent collective behavior.
The theoretical landscape is improving. Altmann et al.'s specification-misalignment framework provides formal tools for identifying where local agent specifications diverge from global intent [altmann-emergence-2024]. Hammond et al.'s taxonomy of seven risk factors provides a structured way to audit multi-agent systems for specific failure modes [hammond-multiagent-risks-2025]. The Gradient Institute's staged-testing methodology offers a practical pathway from simulation through incremental deployment [gradient-institute-2025]. But these are research contributions, not production-ready governance tools. The gap between "we can formally describe the problem" and "we can detect and prevent it at arcology scale in real time" is still large.
The arcology cannot wait for the theory to mature. It must build governance systems that are robust to surprises — that fail safely, contain damage, learn from incidents, and evolve faster than the emergent behaviors they're trying to govern. This is an ongoing institutional capability, not a design that can be perfected upfront.
Quantitative Parameters
| Parameter | Value | Unit | CL |
|---|---|---|---|
| ai_systems_count | 7,500 | systems (midpoint estimate) | |
| sensor_actuator_endpoints | 40,000,000 | devices | |
| enterprise_governance_scale | 500 | models (current enterprise max) | |
| scale_gap_multiplier | 15 | x vs enterprise | |
| engineering_domains | 8 | domains | |
| subdomains | 32 | subdomains | |
| population_affected | 10,000,000 | residents | |
| safety_decision_latency | 100 | ms (max) | |
| governance_tiers | 6 | tiers (device to external) | |
| eu_ai_act_penalty_max | 35,000,000 | EUR (or 7% global turnover) | |
| shadow_ai_prevalence | 49 | % of workforce using unsanctioned AI | |
| multi_agent_risk_factors | 7 | identified risk categories (Hammond et al.) | |
Open Questions
What governance architecture prevents cascading failures across 8 engineering domains without introducing unacceptable latency for safety-critical decisions?
How do you establish accountability chains when a harmful outcome results from the interaction of decisions across 5-10 AI systems from different vendors, given that existing liability doctrine assigns blame to component parts rather than emergent interactions?
What mechanisms enable meaningful democratic participation in AI governance for 10 million residents without reducing everything to lowest-common-denominator simplicity?
How should the arcology handle the EU AI Act's requirement for individual-system conformity assessments when systemic risk arises from system interactions that no single assessment captures?
Citations
- ISO/IEC 42001:2023 AI Management System Standard. ISO (2023). Industry.
- EU AI Act High-Level Summary. European Union (2024). Industry.
- NIST AI Risk Management Framework 1.0. NIST (2023). Industry.
- Agentic AI Governance Framework. AI Verify Foundation, Singapore (2026). Project data.
- The Rise of AI Urbanism in Post-Smart Cities. Urban Studies (2023). Peer reviewed.
- Cascading Failures in Agentic AI: OWASP ASI08 Security Guide. OWASP (2026). Industry.
- Songdo Two Decades On: The Cautionary Tale in Smart City Design. TechNode Global (2025). News.
- Credo AI Governance Platform. Credo AI / Forrester Wave (2025). Industry.
- Emergence in Multi-Agent Systems: A Safety Perspective. ISoLA 2024, Springer LNCS (2024). Peer reviewed.
- Multi-Agent Risks from Advanced AI. Cooperative AI Foundation Technical Report #1 (2025). Peer reviewed.
- Risk Analysis Techniques for Governed LLM-based Multi-Agent Systems. Gradient Institute / Australian Government (2025). Peer reviewed.
- Cybersecurity Framework Profile for Artificial Intelligence (Draft). NIST IR 8596 (2025). Industry.
- State of AI: Enterprise Adoption & Growth Trends. Databricks (2025). Industry.
- Shadow AI Threat Grows Inside Enterprises. BlackFog Research (2025). Industry.
- AI in Civic Participation and Open Government. OECD Governing with AI Report (2025). Industry.
- From Liability Gaps to Liability Overlaps: Shared Responsibilities and Fiduciary Duties in AI. PMC / Ethics and Information Technology (2025). Peer reviewed.
Understanding the Ratings
- KEDL 300 (Preliminary): Quantified with engineering basis, suitable for feasibility.
- CL 2 (Estimated): Order-of-magnitude based on analogous systems.