- conforms_to::[[Reference Form Contract]]
- serves_as::[[Foundational Framing]]
- authored_by::[[Christopher Allen]]↗
- has_lifecycle::[[Seed Stage]]
- has_curation::[[Working Draft]]
- in_practice_domain::[[Deep Context Architecture]]
AI Agency Musing (Christopher Allen, 2026)
URL: https://www.blockchaincommons.com/musings/ai-agency/
The Blockchain Commons Musing, published 2026-04-22, that extends the 2021 [[Principal Authority]]↗ framework to agentic AI. The post is the direct AI-specific companion to the Principal Authority article that already grounds DeepContext.com's [[Human Authority Over Augmentation Systems]] Conviction: where Principal Authority establishes agency as the legal substrate that makes collaboration distinguishable from coordination, the AI Agency Musing extends that substrate to the human-AI collaboration case and adds the strategic-oversight reframe that AI speeds require.
Adopted
- AI agents as delegated instruments without agency of their own. The Musing's framing -- AI agents do not have agency themselves; they function as extensions of principal authority -- carries forward as the substrate behind DeepContext's agent-posture stances. [[Agents Translate, Not Extract]] and [[Human Authority Over Augmentation Systems]] both rest on this distinction; the Musing is the current articulation of it for the AI-specific case.
- Strategic vs tactical oversight at AI speeds. The Musing's argument that "maintaining agency therefore requires a change from tactical overview...to strategic overview" supplies the operational handle for what human authority means when agents act faster than per-action review allows. DeepContext's commitment to author-declared edges, approval gates on commits, and authored_by:: provenance encodes this strategic-oversight discipline at the graph layer.
- Agent duties as the substrate for authentic collaboration. The Musing's enumeration of agent duties -- good faith, loyalty to the principal's interests, reasonable care, transparency, minimal data collection -- is the operational content of what makes human-agent collaboration authentic rather than extractive. The principal-agent framework gives the duty-bearing relationship its legal foundation; the Musing applies it to AI specifically.
- The Credit Issue. The Musing's framing -- "the ghost is visible now" -- names the problem of attribution in AI-supported work. The proposed role-based metadata approach (Author, Editor, Designer, etc.) composes with DeepContext's existing [[CreativeWork Role Predicates Paper (Blockchain Commons, 2026)]] Reference and with the project's authored_by:: convention. The graph's stance on authorship preservation across AI-supported work has its operational test in this framing.
- Connection to legal precedent. The Musing names Wyoming's 2021 personal-digital-identity statute and Utah's 2026 SEDI amendment (with its Duty of Loyalty) as the legal-foundation substrate. DeepContext does not cite legal precedent directly in its Convictions, but the agency-law framing is what makes Allen's principle of "agents must work for you in good faith" enforceable rather than merely aspirational; the project's commitment to author-declared edges and human approval gates is the practice-layer equivalent.
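The role-based metadata idea can be sketched in the graph's own predicate notation. This is a hypothetical composition, not the schema the CreativeWork Role Predicates Paper specifies -- the has_role:: predicate, the contributor name, and the parenthetical glosses are illustrative placeholders; only authored_by:: is an actual project convention:

```
- authored_by::[[Jane Contributor]]↗
- has_role::[[Author]]    (human principal originated the claims)
- has_role::[[Editor]]    (AI agent copy-edited under human review)
```

Read against the Musing's framing, a node carrying authored_by:: plus role predicates preserves both who the principal is and which duties the delegated instrument discharged.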
Not adopted
- The legal-foundation framing in detail. The Musing's specific arguments about Wyoming and Utah statutes inform the project's stance without DeepContext importing legal-statute citation as a graph convention. The agency-as-substrate concept is what carries forward; the specific legislative implementations remain external.
- The role-based metadata implementation specifics. The Musing proposes Author, Editor, Designer, etc. as candidate metadata roles; DeepContext already has [[CreativeWork Role Predicates Paper (Blockchain Commons, 2026)]] as a separate Reference covering this surface. The two compose; this Musing's Credit Issue framing motivates the role predicates rather than naming the specific schema DeepContext adopts.
- The AI-product-design framing. The Musing addresses agent-builders and agent-deployers as the primary audience for the duty-bearing argument. DeepContext's audience is contributors authoring graph nodes; the discipline carries forward but the specific product-design recommendations stay external.
Key moves to remember
- The AI Agency Musing is the current articulation of the agency-substrate that the 2021 [[Principal Authority]]↗ article first formalized. Both Allen articles ground DeepContext's agent-posture cluster -- Principal Authority for the legal-foundation substrate, AI Agency for the AI-specific extension.
- The "agents as delegated instruments without agency" framing is the substrate that distinguishes [[Agents Translate, Not Extract]]'s "translate" stance from any "agents as autonomous co-creators" framing; the Musing makes that distinction explicit at the AI layer.
- The strategic-oversight reframe is operationally useful for any future Decision about agent-mediated graph operations; per-action review is not the only form human authority takes, and naming the strategic-oversight tier explicitly avoids the failure mode where "we approve every commit" becomes "we rubber-stamp every commit because volume forces it."
- The Credit Issue framing motivates why DeepContext's authored_by:: convention is load-bearing for AI-supported work. A graph with no authorship convention loses the AI-supported vs human-authored distinction the Musing argues is increasingly necessary.
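A hypothetical node-level sketch of that distinction (the contributor name is a placeholder and drafted_with:: is an illustrative predicate, not an adopted convention; authored_by:: is the project's actual predicate):

```
Human-authored node:
- authored_by::[[Jane Contributor]]↗

AI-supported node, with the distinction the convention preserves:
- authored_by::[[Jane Contributor]]↗
- drafted_with::[[Agent Assistant]]   (illustrative only)
```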
Relations
- informs_downstream::[[Human Authority Over Augmentation Systems]]
  - The Conviction that names DeepContext's stance on human authority over AI-mediated systems. The Musing is the current public articulation of why that stance is load-bearing at AI speeds and what the operational duties (good faith, loyalty, care, transparency, minimal data) require. The Conviction is the standing stance; this Reference is the framing substrate the Conviction now also rests on alongside its existing Principal Authority grounding.
- informs_downstream::[[Agents Translate, Not Extract]]
  - The Conviction names the agent-posture stance DeepContext commits to. The Musing's "agents as delegated instruments without agency of their own" framing is the substrate behind why translation rather than extraction is the right posture: an instrument without agency cannot legitimately become the originator of new claims; it can only translate the principal's claims across surfaces.
- composes_with::[[CreativeWork Role Predicates Paper (Blockchain Commons, 2026)]]
  - The two References address the same Credit Issue from related angles. The CreativeWork Role Predicates work specifies the schema for role-based attribution metadata; this Musing names the underlying problem the schema solves (distinguishing human-authored, AI-supported, and AI-driven work as AI agency expands). Together they form a substrate for the project's authorship-preservation work.