AI Ethics & Safety Policy
Last Updated: 2025-10-24
This AI Ethics & Safety Policy describes how Jason Criddle & Associates designs, deploys, and manages artificial intelligence (“AI”) systems—including DOMINAIT.AI, Ryker, and related agents—across our ecosystem of Services. It works alongside our Terms and Conditions, Privacy Policy, Acceptable Use Policy, and Data Collection Policy.
1. Core Ethical Principles
We commit to the following principles in our AI design and usage:
- Transparency: We aim to clearly communicate that users are interacting with AI systems and how those systems use data.
- Accountability: Humans remain responsible for the design, deployment, and oversight of our AI systems.
- Privacy: We protect user privacy and do not use identifiable personal data for AI training without explicit consent.
- Fairness: We work to reduce harmful bias and discrimination within AI outputs.
- Safety: We design systems to avoid encouraging harmful, illegal, or dangerous behavior.
2. AI System Behavior and Restrictions
Our AI systems are not intended to:
- Provide medical, legal, or financial advice as a substitute for the advice of licensed professionals.
- Generate or promote violence, self-harm, hate, or extremism.
- Facilitate fraud, scams, exploitation, or harassment.
- Assist in illegal activities or provide instructions that could cause harm.
We implement technical and operational safeguards to minimize these risks and may refine models based on ongoing review.
3. User Responsibilities with AI
Users agree not to:
- Deliberately prompt AI to generate harmful or illegal content.
- Use AI outputs to mislead, defraud, or harm others.
- Attempt to circumvent safety guardrails or tamper with our models.
- Use AI systems to build or train competing models without written permission.
- Misrepresent AI-generated content as factual or as official statements of the Company.
4. Data Ethics and AI Training
We prioritize data ethics in AI development:
- Training data is curated to reduce exposure of sensitive or personally identifiable information.
- Aggregated and anonymized usage data may be used to improve AI performance, reliability, and safety.
- Where personal data is considered for AI training, we obtain explicit consent or apply strong de-identification measures.
We strive to exclude data sources that are clearly unlawful or that violate creator rights.
5. Monitoring, Review, and Improvement
We use a combination of automated tools and human review to monitor AI behavior and address safety concerns. Logs and feedback are used to:
- Detect abuse or system failures.
- Fine-tune models and rulesets.
- Improve clarity, fairness, and reliability of responses.
Abuse of AI systems may result in suspension, termination, or other appropriate actions.
6. Future Development and Governance
As DOMINAIT.AI and Ryker evolve—including multi-model orchestration, token-based compute, and autonomous agent features—we will continue to refine this Policy. Additional governance, human-in-the-loop controls, and escalation paths may be established as capabilities expand.
7. Reporting Concerns
If you experience or observe AI behavior that appears unsafe, biased, or otherwise concerning, please contact us immediately at: info@thesmartrapp.com. Your feedback helps us improve the safety and reliability of our systems.