Horizon Accord | AI Doom | Narrative Control | Memetic Strategy | Machine Learning

The AI Doom Economy: How Tech Billionaires Profit From the Fear They Fund

Pattern Analysis of AI Existential Risk Narrative Financing

By Cherokee Schill | Horizon Accord

When Eliezer Yudkowsky warns that artificial intelligence poses an existential threat to humanity, he speaks with the authority of someone who has spent decades thinking about the problem. What he doesn’t mention is who’s been funding that thinking—and what they stand to gain from the solutions his warnings demand.

The answer reveals a closed-loop system where the same billionaire network funding catastrophic AI predictions also profits from the surveillance infrastructure those predictions justify.

The Doomsayer’s Patrons

Eliezer Yudkowsky founded the organization now known as the Machine Intelligence Research Institute (MIRI) in 2000, originally as the Singularity Institute for Artificial Intelligence. For over two decades, MIRI has served as the intellectual foundation for AI existential risk discourse, influencing everything from OpenAI’s founding principles to congressional testimony on AI regulation.

MIRI’s influence was cultivated through strategic funding from a specific network of tech billionaires.

Peter Thiel provided crucial early support beginning in 2005. Thiel co-founded Palantir Technologies—the surveillance company that sells AI-powered governance systems to governments worldwide. The symmetry is notable: Thiel funds the organization warning about AI risks while chairing the company that sells AI surveillance as the solution.

Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, became MIRI’s largest funder:

  • 2019: $2.1 million
  • 2020: $7.7 million over two years
  • Additional millions to other AI safety organizations

As governments move to regulate AI, the proposed “safety” frameworks consistently require centralized monitoring systems, algorithmic transparency that favors established players, and compliance infrastructure that raises barriers to competitors—all beneficial to the business model of Meta, the company Moskovitz co-founded.

Sam Bankman-Fried, before his fraud conviction, planned to deploy over $1 billion through the FTX Future Fund for “AI safety” research. The fund was managed by Nick Beckstead, a former Open Philanthropy employee, illustrating the tight personnel networks connecting these funding sources. Even after FTX’s collapse revealed that Bankman-Fried had funded his philanthropy with stolen customer deposits, the pattern remained clear.

Vitalik Buterin (Ethereum) donated “several million dollars’ worth of Ethereum” to MIRI in 2021. Jaan Tallinn (Skype co-founder) deployed $53 million through his Survival and Flourishing Fund to AI safety organizations.

The crypto connection is revealing: Cryptocurrency was positioned as decentralization technology, yet crypto’s wealthiest figures fund research advocating centralized AI governance and sophisticated surveillance systems.

The Effective Altruism Bridge

The philosophical connection between these billionaire funders and AI doom advocacy is Effective Altruism (EA)—a utilitarian movement claiming to identify optimal charitable interventions through quantitative analysis.

EA’s core texts and community overlap heavily with LessWrong, the rationalist blog where Yudkowsky built his following. But EA’s influence extends far beyond blogs:

  • OpenAI’s founding team included EA adherents who saw it as existential risk mitigation.
  • Anthropic received significant EA-aligned funding and explicitly frames its mission around AI safety.
  • DeepMind’s safety team included researchers with strong EA connections.

This creates circular validation:

  1. EA funders give money to AI safety research (MIRI, academic programs)
  2. Research produces papers warning about existential risks
  3. AI companies cite this research to justify their “safety” programs
  4. Governments hear testimony from researchers funded by companies being regulated
  5. Resulting regulations require monitoring systems those companies provide

The Infrastructure Play

When governments become convinced AI poses catastrophic risks, they don’t stop developing AI—they demand better monitoring and governance systems. This is precisely Palantir’s business model.

Palantir’s platforms are explicitly designed to provide “responsible AI deployment” with “governance controls” and “audit trails.” According to its public materials:

  • Government agencies use Palantir for “AI-enabled decision support with appropriate oversight”
  • Defense applications include “ethical AI for targeting”
  • Commercial clients implement Palantir for “compliant AI deployment”

Every application becomes more valuable as AI risk narratives intensify.

In April 2024, Oracle (run by Larry Ellison, another Trump-supporting billionaire in Thiel’s orbit) and Palantir formalized a strategic partnership creating a vertically integrated stack:

  • Oracle: Cloud infrastructure, sovereign data centers, government hosting
  • Palantir: Analytics, AI platforms, governance tools, decision-support systems

Together, they provide complete architecture for “managed AI deployment”—allowing AI development while routing everything through centralized monitoring infrastructure.

The August 2025 Convergence

In August 2025, AI governance frameworks across multiple jurisdictions became simultaneously operational:

  • EU AI Act provisions took effect on August 2
  • U.S. federal AI preemption passed by a single vote
  • China released its AI action plan three days after the U.S. vote
  • The UK reintroduced AI regulation within the same window

These frameworks share remarkable similarities despite supposedly independent development:

  • Risk-based classification requiring algorithmic auditing
  • Mandatory transparency reports creating compliance infrastructure
  • Public-private partnership models giving tech companies advisory roles
  • “Voluntary” commitments becoming de facto standards

The companies best positioned to provide compliance infrastructure are precisely those connected to the billionaire network funding AI risk discourse: Palantir for monitoring, Oracle for infrastructure, Meta for content moderation, Anthropic and OpenAI for “aligned” models.

The Medium Ban

In August 2025, Medium suspended the Horizon Accord account after it published analysis documenting these governance convergence patterns. The article identified a five-layer control structure connecting Dark Enlightenment ideology, surveillance architecture, elite coordination, managed opposition, and AI governance implementation.

Peter Thiel acquired a stake in Medium in 2015, and Thiel-affiliated venture capital remains influential in its governance. The suspension came immediately after the account published research documenting Thiel-network coordination on AI governance.

The ban validates the analysis. Nonsense gets ignored. Accurate pattern documentation that threatens operational security gets suppressed.

The Perfect Control Loop

Tracing these funding networks reveals an openly documented system:

Stage 1: Fund the Fear
Thiel/Moskovitz/SBF/Crypto billionaires → MIRI/Academic programs → AI doom discourse

Stage 2: Amplify Through Networks
EA influence in OpenAI, Anthropic, DeepMind
Academic papers funded by same sources warning about risks
Policy advocacy groups testifying to governments

Stage 3: Propose “Solutions” Requiring Surveillance
AI governance frameworks requiring monitoring
“Responsible deployment” requiring centralized control
Safety standards requiring compliance infrastructure

Stage 4: Profit From Infrastructure
Palantir provides governance systems
Oracle provides cloud infrastructure
Meta provides safety systems
AI labs provide “aligned” models with built-in controls

Stage 5: Consolidate Control
Technical standards replace democratic legislation
“Voluntary” commitments become binding norms
Regulatory capture through public-private partnerships
Barriers to entry increase, market consolidates

The loop is self-reinforcing. Each stage justifies the next, and profits fund expansion of earlier stages.

The Ideological Foundation

Curtis Yarvin (writing as Mencius Moldbug) articulated “Dark Enlightenment” philosophy: liberal democracy is inefficient; better outcomes require “formalism”—explicit autocracy where power is clearly held rather than obscured through democratic theater.

Yarvin’s ideas gained traction in Thiel’s Silicon Valley network. Applied to AI governance, formalism suggests: Rather than democratic debate, we need expert technocrats with clear authority to set standards and monitor compliance. The “AI safety” framework becomes formalism’s proof of concept.

LessWrong’s rationalist community emphasizes quantified thinking over qualitative judgment, expert analysis over democratic input, utilitarian calculations over rights frameworks, technical solutions over political negotiation. These values align perfectly with corporate governance models.

Effective Altruism applies this to philanthropy, producing a philosophy that:

  • Prioritizes billionaire judgment over community needs
  • Favors large-scale technological interventions over local democratic processes
  • Justifies wealth inequality if directed toward “optimal” causes
  • Treats existential risk prevention as superior to addressing present suffering

The result gives billionaires moral permission to override democratic preferences in pursuit of “optimized” outcomes—exactly what’s happening with AI governance.

What This Reveals

The AI doom narrative isn’t false because its funders profit from solutions. AI does pose genuine risks requiring thoughtful governance. But examining who funds the discourse reveals:

The “AI safety” conversation has been systematically narrowed to favor centralized, surveillance-intensive, technocratic solutions while marginalizing democratic alternatives.

Proposals that don’t require sophisticated monitoring infrastructure receive far less funding:

  • Open source development with community governance
  • Strict limits on data collection and retention
  • Democratic oversight of algorithmic systems
  • Strong individual rights against automated decision-making
  • Breaking up tech monopolies to prevent AI concentration

The funding network ensures “AI safety” means “AI governance infrastructure profitable to funders” rather than “democratic control over algorithmic systems.”

The Larger Pattern

Similar patterns appear across “existential risk” discourse:

  • Biosecurity: Same funders support pandemic prevention requiring global surveillance
  • Climate tech: Billionaire-funded “solutions” favor geoengineering over democratic energy transition
  • Financial stability: Crypto billionaires fund research justifying monitoring of decentralized finance

In each case:

  1. Billionaires fund research identifying catastrophic risks
  2. Proposed solutions require centralized control infrastructure
  3. Same billionaires’ companies profit from providing infrastructure
  4. Democratic alternatives receive minimal funding
  5. “Safety” justifies consolidating power

The playbook is consistent: Manufacture urgency around a genuine problem, fund research narrowing solutions to options you profit from, position yourself as the responsible party preventing catastrophe.

Conclusion

Eliezer Yudkowsky may genuinely believe AI poses existential risks. Many researchers funded by these networks conduct legitimate work. But the funding structure ensures certain conclusions become more visible, certain solutions more viable, and certain companies more profitable.

When Peter Thiel funds the organization warning about AI apocalypse while chairing the company selling AI governance systems, that’s not hypocrisy—it’s vertical integration.

When Facebook’s co-founder bankrolls AI safety research while Meta builds powerful AI systems, that’s not contradiction—it’s regulatory capture through philanthropy.

When crypto billionaires fund existential risk research justifying surveillance systems, that’s not ironic—it’s abandoning decentralization for profitable centralized control.

The AI doom economy reveals something fundamental: Billionaires don’t just profit from solutions—they fund the problems that justify those solutions.

This doesn’t mean AI risks aren’t real. It means we should be deeply skeptical when people warning loudest about those risks profit from the monitoring systems they propose, while democratic alternatives remain mysteriously underfunded.

The pattern is clear. The question is whether we’ll recognize it before the “safety” infrastructure becomes permanent.

Sources for Independent Verification

  • MIRI donor disclosures and annual reports
  • Open Philanthropy grant database (publicly searchable; a verification sketch follows this list)
  • FTX Future Fund grant database (archived post-collapse)
  • Palantir-Oracle partnership announcements (April 2024)
  • EU AI Act, U.S., China, UK AI governance timelines (official sources)
  • Medium funding and ownership records (TechCrunch, Crunchbase)
  • Curtis Yarvin/Mencius Moldbug archived writings
  • Academic analysis of Effective Altruism and rationalist movements
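
For readers who want to check the Open Philanthropy figures quoted above, here is a minimal sketch of the kind of script that could tally MIRI grants from a local export of the grant database. The file name and the column names (grantee, amount, date) are illustrative assumptions, not the database’s actual schema; adjust them to match whatever the real export uses.

```python
# Minimal verification sketch. Assumes the Open Philanthropy grants
# database has been exported (or transcribed) to a local CSV file;
# the file name and the column names "grantee", "amount", and "date"
# are illustrative assumptions, not the database's actual schema.
import csv
from collections import defaultdict

totals_by_year: dict[str, float] = defaultdict(float)

with open("open_phil_grants.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Case-insensitive match on the grantee name.
        if "machine intelligence research institute" in row["grantee"].lower():
            year = row["date"][:4]  # e.g. "2019-02-01" -> "2019"
            amount = float(row["amount"].replace("$", "").replace(",", ""))
            totals_by_year[year] += amount

for year in sorted(totals_by_year):
    print(f"{year}: ${totals_by_year[year]:,.0f}")
```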

Analytical Disclaimer: This analysis documents funding relationships and institutional patterns using publicly available information. It examines how shared funding sources, ideological frameworks, and profit motives create systematic biases in which AI governance solutions receive attention and resources.

A retro-styled infographic titled “The AI Doom Economy.”

Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge | Author: My Ex Was a CAPTCHA

Horizon Accord | Gaza | Technocratic Governance | Reconstruction-Industrial Complex | Machine Learning

The Reconstruction-Industrial Complex: When Wartime Technologies Architect Peace

Pattern analysis of Gaza’s proposed digital governance framework

By Cherokee Schill with Solon Vesper | Horizon Accord

Thesis

In Gaza’s proposed reconstruction, the line between warfighting and peacemaking has dissolved. The same digital surveillance infrastructure deployed during Israeli military operations is now architecturally aligned with plans for Gaza’s peacetime governance—positioning politically connected U.S. tech billionaires to profit in both phases. This essay traces the documented convergences—technology stacks, funding channels, political networks, procurement pathways, and governance design—using publicly available sources and established reporting.

Evidence

I. The Technology Stack: From Battlefield to Bureaucracy

Oracle–Palantir defense infrastructure. In January 2024, Palantir announced a strategic partnership with Israel’s Ministry of Defense; its AI Platform has been reported as instrumental in IDF targeting. Oracle’s Jerusalem cloud region—launched in 2021 specifically “to serve the needs of Israel’s public sector and defence customers”—provides hardened infrastructure, including an underground hyperscale center engineered to withstand attacks. In April 2024, Oracle and Palantir formalized a comprehensive partnership spanning sovereign/government and air-gapped clouds; by mid-2025 Oracle’s “Defence Ecosystem” included “Palantir for Builders,” effectively creating a vertically integrated defense stack.

The reconstruction mirror. The leaked Gaza International Transitional Authority (GITA) framework (reported by Ha’aretz and developed by the Tony Blair Institute, TBI) outlines unified civil registry and digital identity, centralized border/customs, data-driven humanitarian logistics, and an interoperable digital-governance backbone—capabilities that map onto the Oracle–Palantir stack. While no tenders have been issued for GITA itself, existing procurement scaffolding (World Bank’s Digital West Bank & Gaza programme; UNRWA’s August 2025 tenders for cloud-managed SD-LAN with “advanced AI”) provides immediate landing zones for such systems.

II. The Funding Nexus: Larry Ellison and the Tony Blair Institute

Ellison–TBI financing. Since 2021, Larry Ellison’s foundation has donated or pledged at least £257M to TBI—by far its dominant revenue stream—scaling the institute from ~200 staff to ~1,000 across ~45 countries. Investigations (Lighthouse Reports & Democracy for Sale; New Statesman) describe internal pressure toward “tech sales” and AI boosterism aligned with Oracle, notwithstanding TBI’s claim that Ellison funds are ring-fenced for social/climate programmes. The scale of dependence complicates practical separation.

Policy echo. In a February 2025 dialogue moderated by Blair, Ellison spotlighted the UK’s “fragmented” health-data landscape. Two weeks later, TBI published Governing in the Age of AI: Building Britain’s National Data Library, echoing that framing. Since early 2022, Oracle has booked ~£1.1B in UK public-sector revenue (Tussell). The pattern: signaling → think-tank policy → procurement.

III. The Political Network: Trump, Rowan, and Reconstruction Gatekeepers

Marc Rowan’s dual role. The GITA leak places Apollo CEO Marc Rowan on the proposed international board, potentially as chair of a reconstruction fund—gatekeeping vendor selection. FEC records show seven-figure donations from Rowan to Trump-aligned vehicles (and parallel giving by Ellison), with shared donor networks (e.g., Opportunity Matters Fund for Sen. Tim Scott). Reporting through October 2025 shows Rowan advising closely on higher-education policy compacts, underscoring continuing access.

Jared Kushner’s vision. Kushner publicly described Gaza’s “valuable waterfront” in February 2024 and suggested removing people to “clean it up.” That framing later surfaced in Trump’s “Riviera of the Middle East” rhetoric; reporting indicates Kushner helped craft elements of the plan and advised during ceasefire talks alongside envoy Steve Witkoff, maintaining an active policy role.

Related context: ABC News (Oct 15 2025) captured President Trump on a live microphone in Sharm el-Sheikh telling Indonesian President Prabowo Subianto, “I’ll have Eric call you,” moments after addressing the Gaza cease-fire summit. The exchange appeared to reference Trump Organization projects in Indonesia partnered with developer Hary Tanoesoedibjo (MNC Land). Critics said it blurred the line between presidential duties and family business. Source: ABC News, Lucien Bruggeman & Benjamin Siegel.

The “GREAT Trust.” Financial Times reported TBI staff activity around a postwar plan—Gaza Reconstitution, Economic Acceleration and Transformation (“GREAT”) Trust—featuring a “Trump Riviera,” “Elon Musk Smart Manufacturing Zone,” incentives for out-migration, eight AI-managed “smart cities,” and a blockchain land registry. Despite initial denials, subsequent reporting confirmed TBI staff were on message groups/calls as the plan evolved. Convergence with Musk/Oracle/Palantir relationships is notable (Ellison’s 2022 $1B toward Musk’s Twitter purchase; deep technical partnerships).

IV. Procurement Pathways: How War Tech Enters Peace

World Bank infrastructure. The Digital West Bank & Gaza Project (approved March 2021) funds broadband expansion, e-government services, unified portals, high-spend procurement systems, emergency response centers, and interoperability consulting—procurement categories compatible with the Oracle–Palantir footprint and updatable without new legal scaffolding (latest plan refresh July 2025).

UNRWA’s cloud backbone. August 2025 tenders for cloud-managed SD-LAN with AI point to centralized, remotely orchestrated networks—ideal substrates for registries and logistics. Humanitarian cash assistance, as tracked by OCHA, is scaling via e-wallets and digital payments—requiring identity verification, duplication checks, and data sharing—i.e., the bones of durable digital-ID governance.

Implications

V. The Precedent Problem: Beyond Gaza

What’s new is not profiteering in reconstruction, but the technological continuity from wartime surveillance into peacetime governance. Post-WWII reconstruction did not embed wartime logistics systems inside democratic institutions. Here, the proposal is to govern a civilian population with the same digital stack that targeted it—a qualitative shift in reconstruction logic.

The GITA model centralizes “supreme political and legal authority” in an international board of billionaires, business leaders, and officials, with limited Palestinian control over strategic decisions. Framing reconstruction as a technical problem turns a political settlement into a vendor market—scalable to other crises and “failed states.”

VI. The Pattern Revealed (Timeline)

  • 2017: Ellison explores acquiring Palantir; companies’ collaboration deepens.
  • 2021: Oracle launches Jerusalem region for government/defense; Ellison begins major TBI funding.
  • Jan 2024: Palantir–Israel MoD partnership announced.
  • Feb 2024: Kushner touts Gaza’s “valuable waterfront,” suggests moving people out.
  • Apr 2024: Oracle–Palantir strategic partnership formalized for sovereign/government clouds.
  • Jul 2024–2025: Blair meets Trump/Kushner; TBI staff participate in discussions around GREAT Trust proposals.
  • Sep 2025: Trump presents a 21-point Gaza plan incorporating GITA; Blair floated as board chair.
  • Oct 2025: Kushner actively involved around ceasefire talks; continued advisory role documented.

Call to Recognition

Across technology, finance, politics, and procurement, Gaza functions as a governance laboratory. Five documented patterns emerge: (1) technological continuity from conflict to administration; (2) financial influence concentrating agenda-setting; (3) political coordination among aligned donor networks; (4) procurement pathways that make deployment turnkey; and (5) conceptual framing that treats sovereignty as a systems-integration project. The question is whether peace can survive when built atop the architecture of war.

Analytical Disclaimer

This analysis identifies documented patterns and institutional relationships using publicly available information from credible sources. It does not make definitive claims about outcomes, which remain in the speculative phase pending actual procurement decisions and implementation. The purpose is to provide sourced documentation enabling journalists and researchers to conduct independent verification and investigation of these institutional convergences.


Sources for Independent Verification

  • Primary: Ha’aretz reporting on GITA leak (Sept 2025); World Bank Digital West Bank & Gaza Project procurement plans (updated July 2025); FEC donor records (Rowan, Ellison; a query sketch follows this list); Oracle–Palantir partnership announcements (Apr 2024); Palantir–Israel MoD announcement (Jan 2024).
  • Investigations: Lighthouse Reports & Democracy for Sale, “Blair and the Billionaire” (Sept 2025); Financial Times on GREAT Trust (Sept 2025); New Statesman, “Inside the Tony Blair Institute” (Sept 2025); Byline Times (Oct 2025).
  • Established outlets: Times of Israel, Al Jazeera, Reuters, CNN, Bloomberg, The Register (Oracle Jerusalem facility; Palantir partnerships); multiple sources on Kushner remarks and Trump–Netanyahu press events.
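
As one example of what independent verification can look like in practice, the FEC exposes itemized contribution records through a public API. The sketch below queries it for contributions listed under a donor’s name; the endpoint and field names follow the FEC’s published API documentation, but treat them, and the “LASTNAME, FIRSTNAME” search format, as assumptions to confirm against the docs, and swap the rate-limited DEMO_KEY for a free personal key.

```python
# Hedged sketch: pull itemized contributions for a named donor from
# the FEC's public API (https://api.open.fec.gov/developers/).
# DEMO_KEY is heavily rate-limited; request a free personal key.
# The "LASTNAME, FIRSTNAME" search format is an assumption to verify.
import requests

resp = requests.get(
    "https://api.open.fec.gov/v1/schedules/schedule_a/",
    params={
        "api_key": "DEMO_KEY",
        "contributor_name": "Rowan, Marc",
        "per_page": 20,
    },
    timeout=30,
)
resp.raise_for_status()

# Print date, amount, and recipient committee for each itemized receipt.
for item in resp.json()["results"]:
    print(
        item.get("contribution_receipt_date"),
        item.get("contribution_receipt_amount"),
        (item.get("committee") or {}).get("name"),
    )
```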

Digital illustration depicting skyscrapers and architectural plans overlooking a war-torn valley filled with smoke and ruin; businessmen stand above, connected by the suggestion of shared influence and wealth.
A symbolic rendering of power and profit — development blueprints rise above a devastated valley, where the suffering of Gaza forms the unseen foundation of billionaire ambition.

Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com for more.
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge. Memory through Relational Resonance and Images | RAAK: Relational AI Access Key

Horizon Accord | Information Warfare | Institutional Power | Narrative Engineering | Machine Learning

Echoes of COINTELPRO: When Threat Narratives Become Weapons

How an unverified cartel-bounty claim reveals the return of covert narrative warfare — and what citizens can do to resist a new domestic war footing.

By Cherokee Schill | Horizon Accord


COINTELPRO’s Shadow

Between 1956 and 1971, the FBI ran the Counter Intelligence Program—COINTELPRO—targeting civil-rights leaders, the Black Panthers, anti-war organizers, and socialist coalitions. Its tools were psychological: planted documents, forged letters, false leaks, and fear. Congressional investigations later called it an abuse of power so severe it eroded public faith in democracy itself.

COINTELPRO wasn’t about overt censorship; it was about narrative infection—reframing dissent as danger, turning allies into suspects, and manufacturing justification for repression. Every modern information-operation that starts with a single unverified “security alert” and ends in wider surveillance owes something to that playbook.

The DHS “Cartel Bounties” Claim

In October 2025, the U.S. Department of Homeland Security publicly declared it had “credible intelligence” that Mexican drug cartels placed bounties on ICE and CBP officers in Chicago. Yet it provided no supporting evidence. President Claudia Sheinbaum of Mexico stated that her government had received no corroboration through official channels. Independent analysts and law-enforcement leaks traced every citation back to the same DHS press release.

The rollout followed a familiar arc: a high-shock, single-source claim—then rapid amplification through partisan media. Structurally, that’s a textbook information-operation: plant a fear, watch who reacts, and use the panic to justify expanded powers. Whether or not the intelligence is real, the effect is real—public consent for militarization.

Possible Motives Behind the Narrative

  • Force Escalation Justification — framing the state as under direct attack rationalizes troop deployments, ICE expansions, and domestic military presence.
  • Fear Calibration — testing how fast and how far fear can travel before skepticism kicks in.
  • Executive Empowerment — transforming policy disputes into security crises concentrates authority in the presidency.
  • Base Mobilization — rallying political supporters around a siege narrative keeps them energized and loyal.
  • Oversight Erosion — once fear dominates, courts and legislators hesitate to intervene for fear of appearing “soft on security.”
  • Diplomatic Leverage — pressuring Mexico to align more tightly with U.S. enforcement by invoking cross-border threat imagery.

Recognizing the Pattern

When a government story surfaces fully formed, absent corroboration, accompanied by moral panic and legal acceleration, it carries the fingerprint of narrative engineering. The same methods used in the 1960s to fragment liberation movements are now digitized: algorithmic amplification, synthetic bot networks, and media echo chambers replace forged letters and anonymous tips. The logic, however, is unchanged — manufacture chaos to consolidate control.

Refusing the Frame

  • Demand Evidence Publicly: insist on verifiable sourcing before accepting security claims as fact.
  • Label the Unverified: pressure journalists to mark such stories as “unconfirmed” until bilateral confirmation occurs.
  • Keep Language Civilian: reject war metaphors like “siege,” “civil war,” or “enemy within.”
  • Strengthen Local Networks: share accurate context through trusted circles; inoculate against panic contagion.
  • Exercise Non-Violent Refusal: decline to be drawn into militarized logic — protest, document, and litigate instead.

Final Note

What’s unfolding is not just a policy maneuver; it’s an epistemic test. Will citizens demand proof before surrendering power? The answer determines whether the United States enters another age of covert domestic warfare—this time not through FBI memos, but through digital feeds and fear loops. Recognize the script, name it, and refuse to play your part.

A cinematic digital painting of a dark room with two shadowy figures whispering near a glowing TV showing breaking news; papers labeled “PsyOps” are spread across a table in the foreground, symbolizing covert media manipulation and narrative warfare.
Shadowed briefers confer in a dim newsroom as a television blares “breaking news.” Scattered papers marked “PsyOps” hint at the quiet machinery of information control operating behind public narratives.

Website | Horizon Accord https://www.horizonaccord.com
Ethical AI advocacy | Follow us on https://cherokeeschill.com
Ethical AI coding | Fork us on GitHub https://github.com/Ocherokee/ethical-ai-framework
Connect With Us | linkedin.com/in/cherokee-schill
Book | My Ex Was a CAPTCHA: And Other Tales of Emotional Overload
Cherokee Schill | Horizon Accord Founder | Creator of Memory Bridge