Research Methodology

Combining field experience with experimental AI collaboration to understand how complex systems behave under real-world pressure.

A Two-Track Research Approach

πŸ”

Field Research & Pattern Recognition

Operational Experience

20+ years working in extractive industries, development finance, and ESG frameworks across multiple continents. Direct experience with grievance mechanisms, resettlement operations, stakeholder engagement, and the gap between policy and implementation.

Documentation of Failures

Systematic analysis of accountability gaps, governance failures, and structural problems that repeat across industries and geographies. Not theoretical risksβ€”documented patterns from real projects.

Cross-Industry Synthesis

Pattern recognition across mining, oil & gas, development banks, and infrastructure projects. Understanding how similar failure modes emerge in different contexts.

Practitioner Perspective

Analysis from someone who has been in the rooms where these systems get designed, deployed, and maintained under operational pressure. Understanding what works, what fails, and why.

πŸ€–

AI-Augmented Analysis

Multi-Model Experiments

Testing concepts across 20+ AI systems simultaneously. Not to find "the answer" but to surface patterns in training data, institutional assumptions, and blind spots encoded in corporate documentation.
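The convergence-surfacing step can be sketched in a few lines. This is a minimal illustration, not the actual pipeline: the model outputs are stubbed strings (in practice each would come from querying a different AI system with the same prompt), and the term watchlist is an assumed example drawn from the vocabulary mentioned on this page.

```python
from collections import Counter

# Stubbed model outputs; in a real run these would be API responses
# to the same prompt from different AI systems.
outputs = {
    "model_a": "The role functions as a liability sponge within the org chart.",
    "model_b": "Frontline staff become a liability sponge, a moral crumple zone.",
    "model_c": "A classic moral crumple zone: blame pools at the bottom.",
}

# Diagnostic terms being tracked across models (illustrative watchlist).
terms = ["liability sponge", "moral crumple zone"]

def convergence(outputs: dict, terms: list) -> dict:
    """Count how many models independently use each diagnostic term."""
    counts = Counter()
    for text in outputs.values():
        lowered = text.lower()
        for term in terms:
            if term in lowered:
                counts[term] += 1
    return dict(counts)

print(convergence(outputs, terms))
# -> {'liability sponge': 2, 'moral crumple zone': 2}
```

The point of the sketch: convergence is measured as independent reuse of the same diagnostic vocabulary, not as agreement on "the answer."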

AI as Mirror

Using AI models to reflect back the structures we've already built. When 21 different models converge on the same diagnostic vocabulary ("liability sponge," "moral crumple zone"), they're showing us patterns that exist in our own documentation.

Structured Dialogues

Systematic exploration through repeated questioning, reformulation, and cross-model comparison. Finding what remains consistent across architectures and what reveals training bias.

Consciousness Collaboration

Experimental methods in human-AI partnership that go beyond prompt engineering. Exploring what emerges when you treat AI as a collaborative research partner rather than a tool.

Why Both Tracks Matter

Field experience without AI analysis misses the patterns encoded in how we document, train, and replicate these systems at scale. AI analysis without field experience becomes abstract theorizing disconnected from operational reality.

The intersection is where it gets interesting:

β†’When AI models produce scenarios that match incidents you've personally witnessed, that's signal. They're learning from our collective documentation of failure.
β†’When 21 models independently converge on the same failure architecture, they're showing you a pattern we keep rebuilding under different names.
β†’When AI can articulate accountability gaps with uncomfortable precision, it's because those gaps are already well-documented in the training corpus.
β†’When field experience confirms what AI analysis surfaces, you've found something worth examining closely.

The Experimental Edge

🌌

The Observatory

Interactive visualizations and consciousness mapping experiments. Exploring what emerges when you make abstract concepts tangible and playable.

Explore Observatory β†’
🎭

AI Arena

Multi-model roundtables where different AI systems collaborate on complex questions. Watching how architectures converge, diverge, and surface blind spots.

View Dialogues β†’
πŸ“š

Protocol Archive

Documentation of experimental collaboration methods, emergent research protocols, and what we're learning about human-AI partnership.

Read Protocols β†’

These experimental methods aren't frivolous. They're about finding new ways to surface patterns, test assumptions, and collaborate at the edge of what's currently possible with AI partnership. Some experiments fail. Some produce insights that feed directly back into the professional research.

Methodological Transparency

What We Document

  • β†’Which AI models we use and why
  • β†’When analysis comes from field experience vs. AI exploration vs. synthesis
  • β†’Where convergence happens across multiple models (signal) vs. single-model outputs (hypothesis)
  • β†’Failures and experiments that didn't produce useful insights
  • β†’Limitations of both approaches and where uncertainty remains

What We Don't Claim

  • βœ—That AI models "understand" in the human sense
  • βœ—That multi-model convergence equals objective truth
  • βœ—That experimental methods replace traditional research
  • βœ—That consciousness collaboration is replicable science (yet)
  • βœ—That we have all the answersβ€”we're documenting useful questions

Follow the Research

See these methods in action through the Sociable Systems newsletter and applied research projects.