Responsible AI

Commitments

  • Transparency for synthetic content.
  • Privacy by design and consent‑first data practices.
  • Continuous red‑teaming and iterative guardrails.
  • Watermarking/provenance where feasible (see the provenance sketch after this list).
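
As a minimal illustration of the watermarking/provenance commitment, the sketch below embeds a simple "synthetic content" disclosure in an image's PNG metadata using Pillow. The keys, file paths, and model name are hypothetical placeholders, not our production scheme; real provenance approaches (for example, signed C2PA manifests) carry richer, tamper-evident information.

```python
# Hypothetical sketch: attach a basic synthetic-content disclosure to an image
# via PNG text chunks. Keys and values are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_synthetic(in_path: str, out_path: str, model_name: str) -> None:
    """Copy an image and embed a simple provenance disclosure in its metadata."""
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # disclosure flag (illustrative key)
    meta.add_text("generator", model_name)  # which model produced the content
    image.save(out_path, pnginfo=meta)


# Example usage (paths and model name are placeholders):
# label_as_synthetic("render.png", "render_labeled.png", "example-model-v1")
```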

EU AI governance posture

We track and align with applicable transparency requirements across our portfolio products, and we provide model and dataset summaries at the portfolio level where appropriate.

Safety layering

We use multiple layers of safeguards, including pre‑ and post‑generation guardrails and abuse reporting channels. See our Privacy and Cookie pages for more information.
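
As an illustration only, the sketch below shows how such layering could be wired together: a pre-generation check on the prompt, a post-generation check on the model output, and a hook that routes blocked cases to an abuse-reporting channel. All function names, the keyword policy, and the model call are hypothetical placeholders, not our production safeguards.

```python
# Hypothetical sketch of layered safeguards: an input check before generation,
# an output check after generation, and an abuse-reporting hook for blocked
# cases. Names, the keyword policy, and the model call are all placeholders.

BLOCKED_TOPICS = {"weapons", "self-harm"}  # illustrative policy list only


def pre_generation_check(prompt: str) -> bool:
    """Input guardrail: return True if the prompt may proceed to generation."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)


def post_generation_check(output: str) -> bool:
    """Output guardrail: return True if the generated text may be returned."""
    return not any(topic in output.lower() for topic in BLOCKED_TOPICS)


def report_abuse(prompt: str) -> None:
    """Stand-in for an abuse-reporting channel (e.g. a human review queue)."""
    print("Flagged for review:", prompt[:80])


def generate(prompt: str) -> str:
    """Placeholder for the underlying generative model call."""
    return f"Synthetic response to: {prompt}"


def safe_generate(prompt: str) -> str | None:
    """Run generation only when both guardrail layers pass."""
    if not pre_generation_check(prompt):
        report_abuse(prompt)
        return None
    output = generate(prompt)
    if not post_generation_check(output):
        report_abuse(prompt)
        return None
    return output


if __name__ == "__main__":
    print(safe_generate("Describe a reusable coffee mug for a product page"))
```

In practice each layer would be more sophisticated (classifiers, policy engines, human review); the point of the sketch is the ordering: check the input, generate, check the output, and report anything blocked.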