Research Methodology
Combining field experience with experimental AI collaboration to understand how complex systems behave under real-world pressure.
A Two-Track Research Approach
Field Research & Pattern Recognition
Operational Experience
20+ years working in extractive industries, development finance, and ESG frameworks across multiple continents. Direct experience with grievance mechanisms, resettlement operations, stakeholder engagement, and the gap between policy and implementation.
Documentation of Failures
Systematic analysis of accountability gaps, governance failures, and structural problems that repeat across industries and geographies. Not theoretical risks, but documented patterns from real projects.
Cross-Industry Synthesis
Pattern recognition across mining, oil & gas, development banks, and infrastructure projects. Understanding how similar failure modes emerge in different contexts.
Practitioner Perspective
Analysis from someone who has been in the rooms where these systems get designed, deployed, and maintained under operational pressure. Understanding what works, what fails, and why.
AI-Augmented Analysis
Multi-Model Experiments
Testing concepts across 20+ AI systems simultaneously. Not to find "the answer" but to surface patterns in training data, institutional assumptions, and blind spots encoded in corporate documentation.
AI as Mirror
Using AI models to reflect back the structures we've already built. When 21 different models converge on the same diagnostic vocabulary ("liability sponge," "moral crumple zone"), they're showing us patterns that exist in our own documentation.
Structured Dialogues
Systematic exploration through repeated questioning, reformulation, and cross-model comparison. Finding what remains consistent across architectures and what reveals training bias.
Consciousness Collaboration
Experimental methods in human-AI partnership that go beyond prompt engineering. Exploring what emerges when you treat AI as a collaborative research partner rather than a tool.
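To make the multi-model and structured-dialogue work concrete, here is a minimal sketch of a cross-model convergence check in Python. Everything in it is a stand-in: `convergent_terms`, `ModelFn`, and the fake model callables are hypothetical names, and in practice each entry would wrap a real provider's API client. Real dialogues also involve reformulation and qualitative reading, not just term counting.

```python
from collections import Counter
from typing import Callable

# Hypothetical model registry: each entry maps a label to a callable that
# sends a prompt to that provider and returns the reply text. In practice
# these would wrap the real API clients; here they are stand-ins.
ModelFn = Callable[[str], str]

def convergent_terms(models: dict[str, ModelFn], prompt: str,
                     min_share: float = 0.75) -> list[str]:
    """Return terms that appear in at least min_share of model responses."""
    counts: Counter[str] = Counter()
    for name, ask in models.items():
        reply = ask(prompt).lower()
        # Count each term once per model so one verbose reply can't dominate.
        counts.update(set(reply.split()))
    threshold = min_share * len(models)
    return sorted(t for t, n in counts.items() if n >= threshold)

# Toy demonstration with canned replies standing in for live models.
fakes = {
    "model-a": lambda p: "the mechanism acts as a liability sponge",
    "model-b": lambda p: "this is a liability sponge by design",
    "model-c": lambda p: "a moral crumple zone and liability sponge",
}
print(convergent_terms(fakes, "Describe the grievance mechanism."))
# -> ['a', 'liability', 'sponge'] (stopwords would be filtered in practice)
```

Counting each term once per model is the key design choice: convergence means independent architectures reaching for the same vocabulary, not one model repeating itself.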
Why Both Tracks Matter
Field experience without AI analysis misses the patterns encoded in how we document, train, and replicate these systems at scale. AI analysis without field experience becomes abstract theorizing disconnected from operational reality.
The intersection of the two tracks is where it gets interesting.
The Experimental Edge
The Observatory
Interactive visualizations and consciousness mapping experiments. Exploring what emerges when you make abstract concepts tangible and playable.
Explore Observatory →
AI Arena
Multi-model roundtables where different AI systems collaborate on complex questions. Watching how architectures converge, diverge, and surface blind spots.
View Dialogues →
Protocol Archive
Documentation of experimental collaboration methods, emergent research protocols, and what we're learning about human-AI partnership.
Read Protocols →
These experimental methods aren't frivolous. They're about finding new ways to surface patterns, test assumptions, and collaborate at the edge of what's currently possible with AI partnership. Some experiments fail. Some produce insights that feed directly back into the professional research.
Methodological Transparency
What We Document
- Which AI models we use and why
- When analysis comes from field experience vs. AI exploration vs. synthesis
- Where convergence happens across multiple models (signal) vs. single-model outputs (hypothesis), as in the record format sketched below
- Failures and experiments that didn't produce useful insights
- Limitations of both approaches and where uncertainty remains
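One way to keep these distinctions auditable is to attach a small provenance record to every published claim. A minimal sketch follows, assuming hypothetical names (`ResearchNote`, `Provenance`); the actual note-taking system may differ.

```python
from dataclasses import dataclass, field
from enum import Enum

class Provenance(Enum):
    FIELD_EXPERIENCE = "field"  # drawn from operational work
    AI_EXPLORATION = "ai"       # surfaced during model dialogues
    SYNTHESIS = "synthesis"     # combined human/AI analysis

@dataclass
class ResearchNote:
    claim: str
    provenance: Provenance
    models_consulted: list[str] = field(default_factory=list)
    convergent: bool = False  # True only when multiple models agree (signal)
    # A single-model output stays convergent=False: a hypothesis, not a finding.

note = ResearchNote(
    claim="Grievance mechanisms often function as liability sponges.",
    provenance=Provenance.SYNTHESIS,
    models_consulted=["model-a", "model-b", "model-c"],
    convergent=True,
)
```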
What We Don't Claim
- That AI models "understand" in the human sense
- That multi-model convergence equals objective truth
- That experimental methods replace traditional research
- That consciousness collaboration is replicable science (yet)
- That we have all the answers; we're documenting useful questions
Follow the Research
See these methods in action through the Sociable Systems newsletter and applied research projects.