Built on Trust

A platform that asks you to trust its analysis must first earn that trust through how it handles your data. Security at Kindred was not bolted on after the fact; it was foundational from the first line of code. The same commitment to transparency that powers every analysis extends to how we protect every piece of information entrusted to us.

Core Principles

Zero Trust by Default

Every component assumes every other component could be compromised. Authentication is required at every boundary. No implicit trust between services, no shared credentials, no overprivileged service accounts.

Your Data is Your Data

Kindred does not sell, share, or monetize user data. Analysis inputs and outputs are stored for your benefit only. They are not used to train AI models without your explicit, informed, opt-in consent.

Encryption Everywhere

All data is encrypted at rest (AES-256) and in transit (TLS 1.3 minimum). API keys and secrets are managed through secure environment variables, never hardcoded, never in version control.
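As a concrete illustration of what a "TLS 1.3 minimum" policy looks like in code, here is a minimal sketch using Python's standard `ssl` module (the function name is illustrative, not Kindred's actual code):

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.3."""
    context = ssl.create_default_context()  # certificate verification stays on
    context.minimum_version = ssl.TLSVersion.TLSv1_3
    return context
```

Any connection attempt that cannot negotiate TLS 1.3 or better fails the handshake outright, rather than silently downgrading.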

Row-Level Security

Every database table enforces user isolation at the database level, not the application level. User A cannot see, edit, or delete User B's data. This is enforced by PostgreSQL, not application code.
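A minimal sketch of what database-level isolation looks like in PostgreSQL, with the DDL held as strings and a per-request helper that pins the session to one user (the table, column, and setting names here are hypothetical; the real schema may differ):

```python
# Hypothetical table and setting names, shown for illustration only.
ENABLE_RLS = """
ALTER TABLE analyses ENABLE ROW LEVEL SECURITY;
ALTER TABLE analyses FORCE ROW LEVEL SECURITY;  -- applies even to the table owner
"""

USER_ISOLATION_POLICY = """
CREATE POLICY user_isolation ON analyses
    USING (user_id = current_setting('app.current_user_id')::uuid);
"""

def bind_current_user(cursor, user_id: str) -> None:
    """Pin the database session to one user before any query runs.
    PostgreSQL then filters every SELECT, UPDATE, and DELETE on the
    table to that user's rows -- the application cannot forget to."""
    cursor.execute(
        "SELECT set_config('app.current_user_id', %s, false)", (user_id,)
    )
```

Because the policy lives in the database, a bug in a query (a missing `WHERE user_id = ...` clause, say) returns zero rows instead of another user's data.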

Defense in Depth

Multiple layers of protection across network, application, data, and user levels. No single point of failure in the security model. Every layer is designed to function independently.

Right to Deletion

You can delete your account and all associated data at any time. Deletion is permanent: a true hard delete of all analyses, preferences, feedback, and trace logs. When you say delete, we mean it.
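The shape of a true hard delete can be sketched as a single transaction that purges every associated table before the account row itself. This illustration uses SQLite in place of PostgreSQL and hypothetical table names:

```python
import sqlite3

# Hypothetical schema: each child table carries a user_id column.
TABLES_TO_PURGE = ["analyses", "preferences", "feedback", "trace_logs"]

def hard_delete_user(conn: sqlite3.Connection, user_id: int) -> None:
    """Permanently remove a user and every associated row, atomically."""
    with conn:  # commits on success, rolls back entirely on any error
        for table in TABLES_TO_PURGE:
            conn.execute(f"DELETE FROM {table} WHERE user_id = ?", (user_id,))
        conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
```

Running everything inside one transaction means deletion can never half-complete: either every trace of the account is gone, or nothing changed.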

AI-Specific Security

An AI-powered platform introduces security concerns that traditional applications do not face. We address each one directly.

Prompt Injection Defense

User inputs are sanitized before reaching the AI engine. The system prompt architecture uses structured XML tagging and instruction hierarchy to resist manipulation attempts.

Output Validation

AI responses are parsed and validated against expected schemas before being presented to users. Malformed, off-topic, or potentially harmful outputs are caught and handled gracefully.
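Schema validation can be as simple as parsing the response and rejecting anything that does not match the expected shape before it reaches a user. A stdlib-only sketch, with an illustrative (much simplified) schema:

```python
import json

# Expected shape is illustrative; the real schema is richer.
REQUIRED_FIELDS = {"verdict": str, "confidence": float, "reasoning": str}

def validate_ai_response(raw: str) -> dict:
    """Parse the model's output and reject anything off-schema, rather
    than passing malformed content through to the user."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed AI output: {exc}") from None
    if not isinstance(data, dict):
        raise ValueError("AI output must be a JSON object")
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field!r}")
    return data
```

A `ValueError` here is the "handled gracefully" branch: the caller can retry the model call or show a safe fallback instead of raw model output.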

API Key Isolation

The AI engine API key never touches the frontend. All AI calls route through the backend. Keys are stored in secure environment variables and rotated on a regular schedule.
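On the backend, the pattern is to read the key from the environment and fail fast at startup if it is missing. A sketch, with a hypothetical variable name:

```python
import os

def load_ai_api_key() -> str:
    """Read the AI engine key from the environment; fail fast if absent.
    The variable name is illustrative. This runs only on the backend --
    the key never appears in frontend bundles or browser-visible code."""
    key = os.environ.get("AI_ENGINE_API_KEY")
    if not key:
        raise RuntimeError("AI_ENGINE_API_KEY is not set; refusing to start")
    return key
```

Failing at startup, rather than on the first AI call, surfaces a misconfigured environment immediately instead of in production traffic.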

Cost Controls

Hard spending limits prevent runaway costs from bugs, abuse, or flooding. Per-user and per-IP rate limits enforce fair usage and protect both the platform and its users.
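A per-user or per-IP rate limit of this kind is commonly built as a sliding window over recent request timestamps. A minimal in-memory sketch (the real implementation would live behind the API gateway and share state across instances):

```python
import time
from collections import defaultdict, deque
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key,
    where a key is a user id or an IP address."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str, now: Optional[float] = None) -> bool:
        """Record the request and return True, or return False if the
        key has exhausted its budget for the current window."""
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop timestamps that have aged out of the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

The same structure caps spend: reject (or queue) requests past the limit, and runaway loops or abusive clients hit a hard ceiling instead of an open-ended bill.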

Application Security

  • Input validation and sanitization at every entry point
  • Rate limiting on all API endpoints (per-user and per-IP)
  • CSRF protection on all state-changing operations
  • Content Security Policy headers configured to prevent XSS
  • Automated dependency scanning for vulnerabilities
  • Secrets scanning in the development pipeline
  • CORS locked to known origins only
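The "known origins only" rule from the list above can be sketched as an exact-match allowlist check: the server echoes an `Access-Control-Allow-Origin` header only for recognized origins and sends no CORS headers otherwise (the origin values here are placeholders, not Kindred's real domains):

```python
from typing import Optional

# Illustrative placeholder origins; the real allowlist lives in configuration.
ALLOWED_ORIGINS = frozenset({
    "https://kindred.example.com",
    "https://app.kindred.example.com",
})

def cors_headers(request_origin: Optional[str]) -> dict:
    """Echo the Origin header back only when it is on the allowlist.
    Unknown origins get no CORS headers, so the browser blocks the response."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # keep caches from serving one origin's headers to another
        }
    return {}
```

Exact string matching (rather than wildcard or substring checks) avoids the classic bypass where `https://kindred.example.com.evil.net` slips past a prefix test.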

Infrastructure

  • Frontend hosted on Vercel with DDoS protection, CDN, and edge caching
  • Backend runs in isolated containers with no public SSH access
  • Database with network-level access controls (only the backend can reach it)
  • Environment separation: development, staging, and production with distinct credentials
  • Automated daily backups with point-in-time recovery capability

Privacy Principles

Data Minimization

We collect only what is needed to provide the service. Analysis inputs and outputs are stored for your benefit. We do not collect browsing behavior or sell data to third parties.

Compliance Ready

GDPR and CCPA readiness from day one. Retrofitting privacy is expensive and error-prone, so we designed for the strictest applicable regulations from the start.

We do not use third-party analytics services that track individual user behavior or leak personally identifiable information. The same “show the work” principle that governs every analysis governs how we handle your data.

Frequently Asked Questions