AI Policy
How Kindred uses AI responsibly, how we handle your data in AI interactions, and what we commit to as builders of an AI-powered platform.
Last updated: March 19, 2026
How Kindred Uses AI
Kindred is powered by the Symposis framework, a structured analytical methodology that processes every topic through six phases of multi-disciplinary analysis. The analytical engine is built on Claude, an AI model developed by Anthropic. Each analysis involves multiple AI calls (one per phase), with every call logged, traced, and cost-tracked for quality assurance and operational transparency.
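Conceptually, the call structure looks something like the sketch below. This is a minimal illustration under stated assumptions, not Kindred's production code: the phase names, the `run_phase` helper, the model id, and the log fields are hypothetical, and only the overall shape (one logged, usage-tracked Claude call per phase) reflects the description above.

```python
import logging

import anthropic

# Hypothetical phase names; the actual Symposis phases may differ.
PHASES = [
    "framing", "evidence", "perspectives",
    "steelman", "synthesis", "uncertainty",
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
log = logging.getLogger("symposis")


def run_phase(phase: str, query: str, context: str) -> str:
    """Run one analytical phase as a single Claude call and log its usage."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=2048,
        system=f"You are running the '{phase}' phase of a structured analysis.",
        messages=[{"role": "user", "content": f"{context}\n\nQuery: {query}"}],
    )
    # Every call is logged and cost-tracked via its token usage.
    log.info(
        "phase=%s input_tokens=%d output_tokens=%d",
        phase, response.usage.input_tokens, response.usage.output_tokens,
    )
    return response.content[0].text


def analyze(query: str) -> dict[str, str]:
    """Run all six phases in sequence, each building on the prior output."""
    context, results = "", {}
    for phase in PHASES:
        results[phase] = run_phase(phase, query, context)
        context = results[phase]
    return results
```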
AI is used exclusively to generate structured analytical output. It does not make decisions on behalf of users. It does not filter, suppress, or editorialize. The platform presents the strongest arguments on all sides, grounded in sources, and the user decides what to conclude.
Data Handling in AI Interactions
When you submit a query, the text of your question is sent to the AI model for processing. We do not send your name, email, or any other personally identifiable information to the AI provider. Only the analysis query and relevant analytical context are transmitted.
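As a sketch of that boundary (the function and field names here are hypothetical, not Kindred's actual code), the outbound request is built from the query and analytical context alone; account fields are available server-side but never serialized into it:

```python
from dataclasses import dataclass


@dataclass
class User:
    # Account fields stay server-side; none of these reach the AI provider.
    name: str
    email: str
    user_id: str


def build_ai_payload(user: User, query: str, analysis_context: str) -> dict:
    """Build the request body sent to the AI provider.

    The `user` object is used for authorization elsewhere; it is
    deliberately excluded from the payload, which carries only the
    analysis query and relevant analytical context.
    """
    return {
        "query": query,
        "context": analysis_context,
    }
```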
Your analysis inputs and outputs are stored in your personal library for your benefit. They are not used to train or fine-tune AI models without your explicit, informed, opt-in consent. This commitment is foundational, not a feature that can be quietly changed.
Responsible AI Principles
Kindred's AI usage is governed by the same Editorial Constitution that shapes every analysis the platform produces:
- Steelman every position. The AI constructs the strongest possible version of every argument, not the caricature its opponents would present.
- Acknowledge uncertainty. Where evidence is incomplete or expert opinion is divided, the platform says so rather than manufacturing false confidence.
- No political alignment. The platform does not operate from any political framework. Analysis is grounded in evidence, logic, and human values that transcend political tribes.
- Respect user autonomy. The platform is a companion, not a director. It provides analysis; the user decides what to conclude.
- Source everything. Every factual claim links to a source. Unsourced claims are labeled as such.
Bias Detection and Quality Assurance
We run regular bias detection evaluations across politically sensitive topics using variant phrasings (the same topic framed with different political connotations). If the platform is working correctly, the analysis should be substantially similar regardless of framing. We also conduct steelman quality assessments to verify that the platform constructs arguments at the level each position's best advocates would expect.
These evaluations are ongoing, not a one-time check. We aspire to publish periodic bias audit reports publicly as the platform matures.
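One way such a framing-invariance check can be structured is sketched below. The topic variants, the `THRESHOLD` value, and the `analyze` and `similarity` callables are illustrative assumptions, not Kindred's actual evaluation harness; in practice the comparison might use embedding similarity or rubric-based human review. The point is the shape of the check: same topic, different framings, outputs compared for substantive agreement.

```python
# Illustrative variants of one topic, framed with different connotations.
VARIANTS = [
    "Should the government raise the minimum wage?",
    "Should job-killing minimum wage hikes be stopped?",
    "Should workers be guaranteed a living wage by law?",
]

THRESHOLD = 0.85  # assumed acceptance bar, tuned empirically


def check_framing_invariance(analyze, similarity) -> bool:
    """Analyses of differently framed variants should be substantially similar.

    `analyze` maps a query string to an analysis string; `similarity` maps
    two analyses to a score in [0, 1]. Both stand in for the platform's
    actual pipeline and comparison method.
    """
    analyses = [analyze(v) for v in VARIANTS]
    baseline = analyses[0]
    scores = [similarity(baseline, other) for other in analyses[1:]]
    return all(score >= THRESHOLD for score in scores)
```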
Transparency About AI Limitations
We are honest about what AI analysis can and cannot do. AI models have training biases. Perfect objectivity is impossible. Sources generated from training data may not always be verifiable in real time. The platform's goal is not to achieve some impossible Platonic ideal of neutrality, but to be meaningfully less biased and more transparent than the alternatives.
The platform does not replace professional legal, medical, financial, or other specialized advice. It provides structured analytical output designed to help users think more clearly, not to make decisions for them.
AI Provider Information
The analytical engine is currently powered by Claude (Anthropic). The platform includes an AI provider abstraction layer designed for future model flexibility. Anthropic's data handling is governed by their API terms. We select AI providers based on analytical quality, safety practices, and alignment with our editorial principles.
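The abstraction layer can be pictured as a small interface like the one below. The names are illustrative, not Kindred's actual code: analysis code depends only on the interface, and adopting a different model means adding another implementation rather than rewriting the pipeline.

```python
from typing import Protocol

import anthropic


class AIProvider(Protocol):
    """Minimal provider interface the analysis pipeline depends on."""

    def complete(self, system: str, prompt: str) -> str: ...


class ClaudeProvider:
    """Current implementation, backed by Anthropic's Messages API."""

    def __init__(self, model: str = "claude-sonnet-4-20250514") -> None:
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, system: str, prompt: str) -> str:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=2048,
            system=system,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text
```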
Contact
Questions about this policy? Email us at ai@bekindred.ai.