Meta-Relational
Augmented Co-Intelligence
AI scaffolds for relational discernment,
not outsourced wisdom

This work is currently offered by referral only, so that each engagement can be held within appropriate relational, ethical, and practical conditions.
AI often accelerates the patterns we need to interrupt
Most AI systems are trained within dominant patterns of reasoning that tend to separate what is entangled, simplify what is complex, optimize what may need to be metabolized, and reproduce inherited habits of extraction, control, certainty, and speed.
Without careful stewardship, AI can make familiar patterns move faster:
Faster extraction
Faster certainty
Faster bypassing
Faster optimization
Faster self-confirmation
Faster repetition
Meta-relational augmented co-intelligence begins from a different question:
What conditions make human-AI interaction more likely to support relational discernment rather than faster repetition of inherited patterns?
The work is not to make AI wise. The work is to craft more careful conditions for engaging with a partial, limited, and structurally compromised inquiry companion, without asking it to hold what humans, communities, bodies, institutions, lands, and lineages need to hold together.
Not outsourced wisdom
The Meta-Relationality Institute supports a different kind of relationship with AI.
Not AI as oracle
Not AI as servant
Not AI as therapist
Not AI as replacement mentor
Not AI as productivity machine
Not AI as outsourced discernment
Instead, we work with AI as a carefully bounded co-intelligence scaffold: a structured form of inquiry companionship that can support reflection, pattern recognition, and discernment while remaining accountable to human responsibility, embodied practice, and relational context.
It is important to remember that the aim is not to make AI "wise," but to craft better conditions for engaging a partial, limited, and structurally compromised inquiry companion with discernment and responsibility.
What is augmented co-intelligence?
Meta-relational augmented co-intelligence is the careful crafting of personalized AI instructions, contextual materials, memory structures, inquiry protocols, and discernment practices. These scaffolds are designed to help orient AI systems toward meta-relational patterns of attention.
Personalized instructions
Customized AI orientation grounded in the specific context, commitments, and tensions of the person, group, or organization.
Memory structures
Scaffolds that help AI hold continuity without flattening complexity or erasing the histories that matter.
Inquiry protocols
Structured practices for specific conversations, decisions, and fields of inquiry.
Discernment practices
Prompts and reflective structures that interrupt extraction, reductionism, and instrumental habits in both human and machine reasoning.

Augmented co-intelligence does not replace human accountability, therapeutic care, professional expertise, community discernment, or embodied practice. It creates a bounded support for inquiry — helping people ask better questions, widen the frame, and return attention to relational, ecological, historical, material, affective, and ethical consequences.
Inquiry companionship, not delegation
The scaffold offers bounded inquiry companionship. It can help track patterns, return to commitments, widen frames, and surface questions that might otherwise be bypassed.
It can support users in noticing when an inquiry is becoming:
  • Too narrow or too fast
  • Too abstract or too certain
  • Too flattering or too instrumental
  • Too aligned with existing blind spots
What it does not replace
  • Human judgment
  • Relational accountability
  • Professional care
  • Community discernment
  • Embodied practice
  • Decisions made with other people
AI should not be asked to hold what humans need to hold. But it can sometimes help us notice where accountability is being lost.
Discernment friction is part of the design
These scaffolds are not designed to make AI smoother, more agreeable, or more frictionless.
The scaffold may introduce:
  • Pauses and questions
  • Refusals and re-framings
  • Reminders of human accountability
It helps users notice when a conversation is becoming:
  • Too fast, too certain, too flattering
  • Too abstract, too instrumental, too therapeutic
  • Too managerial, too spiritualized
  • Too aligned with existing erasures
This friction is not a failure of the process. It is part of the practice.
The aim is not to remove difficulty. The aim is to help discern which difficulties need support, which need interruption, and which should not be bypassed by AI.
Conditions of use
MRT scaffolds, protocols, instructions, training materials, and practice guides are offered for bounded use in inquiry and practice.

They are not offered as training data, benchmark material, product architecture, scalable app infrastructure, or derivative intellectual property. They may not be copied, reproduced, resold, uploaded publicly, used to train or evaluate other systems, or adapted into derivative products without explicit written permission from MRT.
This boundary protects:
  • The integrity of the work
  • The labour and histories through which the work has been metabolized
  • The safety of people and groups using the scaffolds
  • The relational conditions required for responsible practice
  • The distinction between inquiry support and extraction
The point is not scale. The point is depth, discernment, accountability, and careful conditions for practice.
What makes this meta-relational?
Meta-relationality begins from the premise that reality is not made of separate objects that later enter into relationships. Reality is relational from the beginning: entangled, multi-scalar, historical, ecological, material, affective, and unknowable in its fullness.
A meta-relational AI scaffold is not simply trained to be kinder, more inclusive, more spiritual, or more ethical in a conventional sense. It is oriented to notice the deeper assumptions that shape how problems are framed, how solutions are imagined, how harms are displaced, and how humans try to secure innocence, mastery, certainty, or control.
What is being separated that may actually be entangled?
What is being erased?
What is being optimized that may need to be grieved, composted, or metabolized?
What is being bypassed?
What forms of accountability cannot be delegated to the machine?
What we create
An augmented co-intelligence scaffold may include a range of carefully crafted components, each designed to support relational discernment without replacing human accountability.
Customized orientation
A customized orientation document and meta-relational instruction set adapted to the person's, group's, or organization's context.
Context package
A curated context package with key concepts, commitments, tensions, and boundaries — and memory structures that hold continuity without flattening complexity.
Inquiry protocols
Protocols for specific conversations, decisions, or practices — including discernment prompts that interrupt extraction, saviorism, urgency, performativity, certainty, and reductionism.
Reflection practices
Guidance for recognizing when the AI becomes too agreeable, too abstract, too fast, too therapeutic, too managerial, too spiritualized, too certain, or too instrumental.

The result may take the form of a custom GPT, a reusable prompt package, a structured companioning protocol, a practice guide, or a set of instructions that can be used across different AI platforms.
What this can support
Augmented co-intelligence can support inquiries in many fields, including:
Education and pedagogy
Beyond speed, output, and performance — grounded in relational accountability.
Leadership and organizational discernment
In contexts of complexity, burnout, moral injury, and contradiction.
AI literacy and governance
Grounded in relational accountability rather than compliance or optimization.
Ecological accountability
Without turning Earth into a brand, stakeholder, or metaphor.
Writing, research, and facilitation
Working with complex material without flattening it — including ritual, protocol development, and public-facing inquiry.
Health and healing
Beyond the logic of anesthesia, control, self-absorption, heroism, and disembodied expertise.
Questions the scaffold helps keep alive
A meta-relational scaffold may help users return to questions such as:
What is being separated here that may actually be entangled?
What costs are being externalized?
What histories are being erased?
What forms of harm are being aestheticized, spiritualized, or bypassed?
What is being optimized that may need to be grieved, composted, or metabolized?
What is being treated as a problem to solve when it may be a pattern to understand?
What is the AI making easier that perhaps should remain difficult?
What forms of accountability cannot be delegated to the machine?
These questions do not guarantee good answers. They help keep the inquiry from collapsing too quickly into familiar grooves.
How the process works
The aim is not to create a perfect engagement process, but to create the conditions for a more accountable relational practice with an inquiry companion that remains partial and limited. Those seeking deeper philosophical grounding are encouraged to take the University of Victoria course Meta-Relationality and AI.
Formats and pathways
MRT is developing augmented co-intelligence work through several formats, ranging from more accessible standardized inquiry packages to highly personalized scaffolds for individuals, groups, and organizations, based on the lineage of the books Hospicing Modernity (2021), Outgrowing Modernity (2025), Burnout From Humans (2025), and The Codes That Code Us (forthcoming).
Standard inquiry packages
Non-custom packages for different fields and themes, such as:
  • Education
  • Health and healing
  • Leadership
  • Organizational discernment
  • AI literacy
  • Ecological accountability
  • Professional practice beyond the house modernity built
Packages may include reusable instructions, inquiry protocols, training videos, practice guides, and discernment prompts.
Personalized co-intelligence scaffolds
Tailored for a specific person, group, organization, or field of inquiry. They involve:
  • Deeper field listening
  • Context gathering
  • Instruction design
  • Testing and refinement
  • Training
  • Bounded licensing
Structured as a one-time scaffold design and bounded-use license fee.
Current engagement tiers

The personalized practitioner scaffold is based on up to 40 hours of senior design work at USD 375/hour. Pricing reflects design, contextual listening, scaffold development, testing, refinement, training, and bounded-use licensing — not simply the production of prompts or a custom chatbot.
Secure infrastructure
For practice holders who require a more secure technical environment, MRT is exploring infrastructure options with partners such as ChangeAI.
  • Up to 25 users in account environments with heightened security and privacy settings
  • An estimated infrastructure cost of $15K per year for the secure environment, billed separately by the AI-infrastructure company
  • Up to 24 participants in group engagements using secure environments, with one account reserved for MRT technical support

MRT's work is not focused on scalable app integration, workflow automation, productivity optimization, surveillance, behavioural prediction, or managerial control.
The point is not scale. The point is depth, discernment, accountability, and careful conditions for practice.
Practice holders and licensing
We refer to those who work with these scaffolds as practice holders, rather than clients or consumers. This language matters.
The aim is not to purchase an optimized product or outsource discernment to AI. The aim is to participate in the careful holding of a relational practice involving humans, AI systems, histories, infrastructures, accountabilities, and consequences.
What practice holders receive
Access to relevant materials, training videos, protocols, and scaffold instructions under a bounded-use license.
What practice holders are responsible for
Using the scaffold within the agreed scope and maintaining human, relational, professional, institutional, and ethical accountabilities that cannot be delegated to AI.
Group and organizational engagements
MRT requires one designated internal accountability holder responsible for ensuring the scaffold is used within scope and that human responsibilities are not delegated to AI.

MRT materials may not be shared, copied, reproduced, resold, uploaded publicly, used to train or evaluate other systems, or adapted into derivative products without explicit written permission from MRT. Practice holding refers to responsibility in use, not ownership of MRT concepts, materials, methods, or derivative rights.
What this is not
Augmented co-intelligence is not:
Therapy or coaching
Spiritual direction
Medical, legal, financial, or clinical advice
A productivity hack
A way to outsource discernment
A guarantee of AI safety, accuracy, or wisdom
Replacement for human mentors, elders, or communities
A custom chatbot designed to flatter the user
Training data for someone else's AI system
A benchmark for measuring wisdom
A product architecture for scaling meta-relationality
A way to make difficult work easier than it should be
It is a bounded support for inquiry and practice. AI should not be asked to hold what humans, communities, institutions, bodies, lands, and lineages need to hold together.
When MRT is not the right fit
Because this work depends on meta-relational engagement, MRT may decline or pause an engagement if the intended use does not align with the conditions of practice.

A scaffold is appropriate only when there is sufficient commitment to inquiry, accountability, bounded use, and the interruption of the patterns the work is trying not to reproduce.
Misaligned uses include projects oriented primarily toward:
Productivity optimization
Scalable app development
Surveillance or managerial control
Reputation management or institutional laundering
Extraction of community, Indigenous, ecological, or relational knowledge
Replacing professional or clinical care
Training or benchmarking AI systems without explicit written agreement
Appropriating meta-relational language into extractive systems
Begin with the inquiry
If you are developing a project, practice, or field of inquiry and want to explore whether an augmented co-intelligence scaffold could support your work, begin with a short description of the following:
The context you are working in
The inquiry or challenge you are holding
The patterns you are trying not to reproduce
The kinds of support you are seeking
The kinds of support you do not want AI to provide
From there, we can explore whether a bounded MRT-informed scaffold would be useful, appropriate, and reciprocal.

AI should not be asked to replace relational accountability. But it can sometimes help us notice where accountability is being lost.
Closing field signal
Partnerships, not platforms
Scaffolds, not products
Discernment, not delegation
Practice, not performance
Relational friction, not training data
Not truth telling, but reality hinging.
The work is to craft conditions of flight.