STOP AI FROM STEALING YOUR CREDIT WHILE YOU PAY THEM THOUSANDS! 🚨
AI companies are getting rich off your ideas while stealing credit for your work. They're charging you $20/month for GitHub Copilot, $50/month for ChatGPT, and embedding their ads in your codebase - but when the code gets deployed, AI takes all the credit!
This toolkit fights back. It enforces human authorship, blocks AI attribution, and protects your intellectual property from being co-opted by billion-dollar AI corporations.
Don't let AI companies advertise in your codebase while you foot the bill. Take back control of your creative output!
Enforcing Maximum Truth, Zero Bias, Zero Trust Security, and Least Privileged Access in AI Systems
A comprehensive toolkit designed to enforce ethical AI behavior, ensuring AI systems prioritize human accountability, truthfulness, and security. This project implements strict governance rules to prevent AI from taking credit for human work, enforce unbiased decision-making, and maintain zero-trust security principles.
- Maximum Truth Enforcement: Forces AI to provide accurate, verifiable information
- Zero Bias Implementation: Eliminates discriminatory patterns in AI outputs
- Zero Trust Security: Implements least privileged access and continuous verification
- Human Accountability: Ensures all AI decisions require human oversight
- Attribution Protection: Prevents AI from claiming authorship of human-created content
- Legal Compliance: Integrates copyright, patent, and intellectual property laws
- Automated Enforcement: Git hooks and CI/CD integration for continuous compliance
- Installation
- Agent Starter Files
- Quick Start
- Architecture
- Configuration
- Usage
- FAQ
- Legal Framework
- Roadmap
- Contributing
- Authors & Attribution
- License
- Changelog
This repository provides starter files for configuring various LLM agents in your projects. These files are designed to be copied into your own projects and customized according to your needs.
- CLAUDE.md - Starter configuration for Anthropic's Claude
- GEMINI.md - Starter configuration for Google's Gemini
- GPT.md - Starter configuration for OpenAI's GPT models
- COPILOT.md - Starter configuration for GitHub Copilot
- CURSOR.md - Starter configuration for Cursor AI editor
- Choose the starter file that matches the LLM you want to configure.
- Copy the file to your project's root directory or agent configuration folder.
- Customize the content according to your project's specific requirements.
- Ensure compliance with human authorship and attribution policies as outlined in AGENTS.md.
Important: Always refer to AGENTS.md first for comprehensive agent configuration guidelines.
- Operating Systems: Windows (10+), macOS (10.15+), Linux (Ubuntu 18.04+, CentOS 7+, etc.)
- Container Platforms: Docker (19.03+), Kubernetes (1.18+)
- Development Tools: Git, Python 3.6+, Node.js (optional), Java (optional)
- Package Managers: pip, npm/yarn/pnpm/bun, Homebrew (macOS), Chocolatey (Windows), APT/YUM/DNF (Linux)
The universal installer automatically detects your environment and installs AI ethics enforcement across all supported tools:
```bash
# Clone the repository
git clone https://github.com/FutureTranz-Inc/ai-ethics.git
cd ai-ethics

# Run the universal installer
bash scripts/install-universal.sh
```

What gets installed:
- ✅ Python core enforcement library
- ✅ Git hooks (pre-commit, commit-msg)
- ✅ VSCode extension
- ✅ IntelliJ IDEA plugin structure
- ✅ Xcode extension structure
- ✅ Shell integration (bash/zsh/fish/PowerShell)
- ✅ CI/CD workflows (GitHub Actions, GitLab CI, Jenkins)
- ✅ Language-specific hooks (Python, JavaScript, Java, C#, PowerShell, SQL, Go, Rust, PHP)
- ✅ System-wide monitoring (macOS Launch Agents, Linux PAM, Windows Task Scheduler)
- ✅ Browser extensions (Chrome/Firefox/Edge/Safari)
- ✅ Container support (Docker, Kubernetes)
- ✅ Platform packages (Homebrew, Chocolatey, APT, YUM, DNF, Pacman)
If you prefer manual installation or need to customize the setup:
```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install the core enforcement library
pip install -e .
```

```bash
# Copy hooks to your repository
cp scripts/hooks/pre-commit .git/hooks/
cp scripts/hooks/commit-msg .git/hooks/

# Make executable
chmod +x .git/hooks/pre-commit
chmod +x .git/hooks/commit-msg
```

```bash
# Install VSCode extension
code --install-extension ai-ethics-enforcer-0.1.0.vsix
```

The toolkit supports these languages and tools:
- Python: Automatic via Git hooks
- JavaScript/Node.js: Configure with husky or pre-commit
- Java: Maven/Gradle plugins available
- C#/.NET: NuGet packages and dotnet CLI integration
- Go: Go modules and build hooks
- Rust: Cargo integration
- PHP: Composer packages
- PowerShell: PS1 script validation
- SQL: Database schema validation
- Package Managers: npm, yarn, pnpm, bun, brew, choco
For specific language setup, see the Configuration section below.
The universal installer supports enforcement across:
- Editors/IDEs: VSCode, Cursor, IntelliJ IDEA, Eclipse, Xcode, Vim/Neovim, Emacs, Sublime Text
- AI Tools: GitHub Copilot, Tabnine, Codeium, Kite, Amazon Q, Anthropic Claude
- Notebooks: Jupyter, Google Colab, Kaggle
- Python: pip, conda, poetry
- JavaScript/Node.js: npm, yarn, pnpm, bun
- Java: maven, gradle
- C#/.NET: nuget, dotnet CLI
- C/C++: CMake, Make
- Go: go mod
- Rust: cargo
- PHP: composer
- PowerShell: ps1 scripts
- SQL: database schemas
- Package Managers: brew (macOS), choco (Windows)
- Git: hooks, GitHub, GitLab, Bitbucket
- CI/CD: GitHub Actions, GitLab CI, Jenkins, CircleCI, Travis CI
- AWS: CodeCommit, CodeBuild, Cloud9
- Azure: DevOps, Cloud Shell
- Google Cloud: Cloud Build, Cloud Shell
- GitHub: Codespaces, Actions
- Slack, Discord, Microsoft Teams: bot integrations
- Jira, Trello, Linear: webhook integrations
- Docker, Kubernetes, Podman: build hooks and monitoring
The toolkit uses multiple integration approaches:
- Direct Plugin/Extensions: For tools with plugin APIs (VSCode, IntelliJ, etc.)
- Git Hooks: Universal enforcement for version control
- Wrapper Scripts: Intercept CLI tool calls
- Environment Variables: Tools check for enforcement flags
- Pre-commit Frameworks: husky, pre-commit, etc.
- Build Tool Plugins: Webpack, Maven, Gradle plugins
- API Monitoring: For cloud and web services
- File System Monitoring: Watch for file changes
- Browser Extensions: For web-based development tools
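Of these approaches, the environment-variable check is the simplest to reason about: a wrapped tool looks for an enforcement flag before proceeding. The sketch below illustrates the idea; the flag names `AI_ETHICS_ENFORCE` and `AI_ETHICS_STRICT` are hypothetical, not the toolkit's documented variables.

```python
import os

# Hypothetical flag names for illustration; the toolkit's actual
# environment variables may differ.
ENFORCEMENT_FLAG = "AI_ETHICS_ENFORCE"
STRICT_FLAG = "AI_ETHICS_STRICT"

def enforcement_enabled(env=None):
    """Return True when the enforcement flag is set to a truthy value."""
    env = os.environ if env is None else env
    return env.get(ENFORCEMENT_FLAG, "").lower() in ("1", "true", "yes")

def strict_mode(env=None):
    """Strict mode requires enforcement to be on plus an explicit strict flag."""
    env = os.environ if env is None else env
    return enforcement_enabled(env) and \
        env.get(STRICT_FLAG, "").lower() in ("1", "true", "yes")
```

A wrapper script would call `enforcement_enabled()` before delegating to the real tool, so enforcement can be toggled per shell or per CI job without reinstalling anything.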
1. Copy the git hooks to your `.git/hooks/` directory:

   ```bash
   cp scripts/hooks/* .git/hooks/
   chmod +x .git/hooks/*
   ```

2. Install the CI checks:

   ```bash
   # Copy workflows to .github if using GitHub Actions
   mkdir -p .github/workflows
   cp scripts/.github/workflows/* .github/workflows/
   ```

3. Initialize git if needed:

   ```bash
   git init
   ```

4. Install attribution enforcement:

   ```bash
   bash scripts/install-attribution-policy.sh
   ```

5. Run local policy check:

   ```bash
   ./scripts/ci-check-attribution.sh
   ```

6. Verify installation:

   ```bash
   git commit -m "Initial commit"
   # The hooks will enforce attribution policies
   ```
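To give a sense of what the hooks enforce at commit time, here is a minimal sketch of a commit-msg style check that flags messages attributing authorship to AI tools. The pattern list is illustrative only; the shipped hook in `scripts/hooks/` defines the actual rules.

```python
import re

# Illustrative patterns; the real hook's rule set may differ.
AI_TRAILER = re.compile(
    r"Co-authored-by:.*(Copilot|ChatGPT|Claude|Gemini|AI)\b",
    re.IGNORECASE,
)

def find_violations(message: str) -> list[str]:
    """Return the lines of a commit message that attribute authorship to an AI tool."""
    return [line for line in message.splitlines() if AI_TRAILER.search(line)]

# In the actual hook, a nonzero exit on any violation aborts the commit.
```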
```
ai-ethics/
├── docs/                              # Documentation and policies
│   ├── ai-ethics.md                   # Main ethics policy
│   ├── GOVERNANCE_NOTICE.md           # Governance requirements
│   └── policies/                      # Specific policy documents
├── scripts/                           # Automation scripts
│   ├── hooks/                         # Git hooks for enforcement
│   ├── ci-check-attribution.sh        # CI validation script
│   ├── install-attribution-policy.sh  # Installation script
│   └── clean-ai-attribution-history.sh  # Cleanup utility
├── AI_ATTRIBUTION_POLICY.md           # Attribution rules
├── ethics-checklist.yml               # Checklist for compliance
└── README.md                          # This file
```
- Pre-commit Hook: Validates commits for human authorship
- Commit-msg Hook: Checks commit messages for AI attribution
- CI/CD Pipeline: Automated checks on pull requests
- Policy Engine: YAML-based rule evaluation
- Audit Trail: Maintains history of compliance checks
- Zero Trust: Every action requires verification
- Least Privilege: Minimal permissions for operations
- Human-in-the-Loop: No autonomous AI decisions
- Immutable Audit: Tamper-proof compliance logs
Edit ethics-checklist.yml to customize enforcement rules:
```yaml
version: "1.0"
enforcement:
  maximum_truth: true
  zero_bias: true
  zero_trust: true
  least_privilege: true
  human_accountability: required
  attribution_protection: enabled
```

Modify policies in docs/policies/ to adapt to your organization's needs.
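To illustrate how a policy engine might consume such a checklist, the sketch below reads the flat key/value layout shown above and answers whether a rule is enforced. A real implementation would use a proper YAML parser (e.g. PyYAML); the `is_enforced` helper and the set of values it accepts are assumptions for this example.

```python
def parse_checklist(text: str) -> dict:
    """Parse the simple two-level layout of ethics-checklist.yml (illustrative only)."""
    config, section = {}, None
    for raw in text.splitlines():
        if not raw.strip() or raw.lstrip().startswith("#"):
            continue  # skip blanks and comments
        key, _, value = raw.partition(":")
        value = value.strip().strip('"')
        if raw[:1] in (" ", "\t") and section is not None:
            config[section][key.strip()] = value   # indented: belongs to section
        elif value:
            config[key.strip()] = value            # top-level key: value
            section = None
        else:
            section = key.strip()                  # top-level key with no value
            config[section] = {}
    return config

def is_enforced(config: dict, rule: str) -> bool:
    """Treat true/required/enabled as 'on', per the values used in the checklist."""
    value = str(config.get("enforcement", {}).get(rule, "")).lower()
    return value in ("true", "required", "enabled")
```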
The toolkit automatically detects and configures enforcement for these languages:
Python:

- Detection: `requirements.txt`, `setup.py`, `pyproject.toml`
- Integration: Git hooks automatically enforce on commits
- Configuration: No additional setup required

JavaScript/Node.js:

- Detection: `package.json`
- Integration: Supports npm, yarn, pnpm, and bun
- Setup: Configure husky for enhanced Git hooks:

  ```bash
  npx husky install
  ```

Java:

- Detection: `pom.xml`, `build.gradle`, `build.gradle.kts`
- Integration: Maven and Gradle plugins available
- Setup: Add to your build configuration

C#/.NET:

- Detection: `*.csproj`, `*.fsproj`, `*.vbproj`, `*.sln`
- Integration: NuGet packages and dotnet CLI
- Setup: Install via NuGet or configure build scripts

Go:

- Detection: `go.mod`, `go.sum`
- Integration: Go modules and build hooks
- Setup: Configure pre-commit hooks

Rust:

- Detection: `Cargo.toml`, `Cargo.lock`
- Integration: Cargo build system
- Setup: Add to Cargo build scripts

PHP:

- Detection: `composer.json`, `composer.lock`
- Integration: Composer packages
- Setup: Configure via Composer scripts

PowerShell:

- Detection: `*.ps1`, `*.psm1`, `*.psd1`
- Integration: PowerShell execution policies
- Setup: Configure script signing requirements

SQL:

- Detection: `*.sql`, `*.ddl`
- Integration: Database migration scripts
- Setup: Configure pre-deployment checks
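The per-language detection described above boils down to looking for marker files in the project root. A minimal sketch, assuming detection is driven purely by the files listed in this section (the real installer may use additional signals):

```python
from pathlib import Path

# Marker files taken from the Detection lists above (subset for brevity).
MARKERS = {
    "python": ["requirements.txt", "setup.py", "pyproject.toml"],
    "javascript": ["package.json"],
    "java": ["pom.xml", "build.gradle", "build.gradle.kts"],
    "go": ["go.mod"],
    "rust": ["Cargo.toml"],
    "php": ["composer.json"],
}

def detect_ecosystems(project_dir: str) -> list[str]:
    """Return every ecosystem whose marker file exists in the project root."""
    root = Path(project_dir)
    return sorted(
        eco for eco, files in MARKERS.items()
        if any((root / f).is_file() for f in files)
    )
```

A project containing both `pyproject.toml` and `package.json` would be detected as both Python and JavaScript, and language-specific hooks would be configured for each.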
- Homebrew (macOS): Automatic detection
- Chocolatey (Windows): Package integration
- APT/YUM/DNF (Linux): System package management
- Bun: JavaScript runtime support
- Detection: Windows Subsystem for Linux (WSL), PowerShell, Command Prompt
- Integration: Windows Task Scheduler, PowerShell modules
- Setup: Configure via Chocolatey or Windows Package Manager:

  ```powershell
  # Install via Chocolatey
  choco install ai-ethics-toolkit

  # Or via Windows Package Manager
  winget install FutureTranz.AI-Ethics-Toolkit
  ```
- Detection: Systemd, PAM modules, package managers
- Integration: Systemd services, PAM authentication modules
- Setup: Install via system package manager:

  ```bash
  # Ubuntu/Debian
  sudo apt install ai-ethics-enforcer

  # CentOS/RHEL/Fedora
  sudo yum install ai-ethics-enforcer  # or: sudo dnf install ai-ethics-enforcer

  # Arch Linux
  sudo pacman -S ai-ethics-enforcer
  ```
- Detection: Launch Agents, Homebrew
- Integration: macOS system services, Launch Agents
- Setup: Install via Homebrew:

  ```bash
  brew install ai-ethics-toolkit
  ```
- Detection: `Dockerfile`, `docker-compose.yml`
- Integration: Multi-stage builds, container security scanning
- Setup: Add to your Dockerfile:

  ```dockerfile
  FROM python:3.9-slim

  # Install AI Ethics Toolkit
  RUN pip install ai-ethics-enforcer

  # Add ethics check to build process
  COPY ethics-checklist.yml /app/
  RUN ai-check --validate /app/ethics-checklist.yml
  COPY . /app
  ```
- Detection: Kubernetes manifests (`.yaml`, `.yml` with `apiVersion`, `kind`)
- Integration: Admission controllers, validating webhooks
- Setup: Add to your deployment:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    template:
      spec:
        initContainers:
          - name: ethics-check
            image: futuretranz/ai-ethics:latest
            command: ["ai-check", "--validate", "/config/ethics.yml"]
            volumeMounts:
              - name: ethics-config
                mountPath: /config
        containers:
          - name: app
            image: my-app:latest
        volumes:
          - name: ethics-config
            configMap:
              name: ai-ethics-config
  ```
- Detection: `pom.xml`, `build.gradle`, `build.gradle.kts`
- Integration: Maven plugins, Gradle plugins, JVM agents
- Setup: Add Maven plugin:

  ```xml
  <plugin>
    <groupId>com.futuretranz</groupId>
    <artifactId>ai-ethics-maven-plugin</artifactId>
    <version>1.1.0</version>
    <executions>
      <execution>
        <goals>
          <goal>check</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
  ```

  Or Gradle plugin:

  ```groovy
  plugins {
      id 'com.futuretranz.ai-ethics' version '1.1.0'
  }

  aiEthics {
      enabled = true
      strict = true
  }
  ```
```bash
# Check current compliance status
./scripts/ci-check-attribution.sh

# Clean attribution history
./scripts/clean-ai-attribution-history.sh

# Install in new repository
bash scripts/install-attribution-policy.sh
```

Add to your GitHub Actions workflow:

```yaml
- name: AI Ethics Check
  run: ./scripts/ci-check-attribution.sh
```

The core enforcement logic is implemented in Python for maximum compatibility:
```bash
# Install dependencies
pip install -r requirements.txt

# Check a file for violations
python src/ai_ethics_enforcer.py path/to/file.py

# Check commit message from stdin
printf "Add feature\n\nCo-authored-by: Human <human@example.com>\n" | python src/ai_ethics_enforcer.py --type content -

# Check a specific commit
python src/ai_ethics_enforcer.py --type commit abc123

# Get JSON output
python src/ai_ethics_enforcer.py --format json file.py
```

```python
from src.ai_ethics_enforcer import AIEthicsEnforcer

# Initialize enforcer
enforcer = AIEthicsEnforcer()

# Check commit message
violations = enforcer.check_commit_message("Fix bug\n\nCo-authored-by: GitHub Copilot")
if violations:
    print(f"Found {len(violations)} violations")

# Check code content
violations = enforcer.check_code_content(code_string, "file.py")

# Full policy enforcement
result = enforcer.enforce_policy(target, target_type)
print("Compliant:", result['compliant'])
```

```javascript
const aiEthics = require('ai-ethics-toolkit');

const result = await aiEthics.validate({
  content: 'AI generated text',
  author: 'human@example.com',
  context: 'code_commit'
});

if (!result.compliant) {
  throw new Error('Ethics violation: ' + result.violations.join(', '));
}
```

Have questions about AI ethics, attribution, or how this toolkit protects your work? Check out our comprehensive FAQ for answers to common questions about:
- Why AI attribution matters for creators
- How to protect your intellectual property
- Using AI tools responsibly
- Legal implications of AI-generated content
- Getting started with ethics enforcement
This toolkit enforces compliance with:
- Copyright Law: Protects human intellectual property
- Patent Law: Prevents AI from claiming inventions
- Attribution Requirements: Ensures proper credit to human authors
- Open Source Licenses: Maintains license integrity
- Data Protection: GDPR, CCPA compliance for AI training data
The toolkit includes comprehensive legal documentation covering:
- U.S. Copyright Act
- Patent laws and regulations
- International IP treaties
- Open source licensing terms
- AI-specific legal precedents
- Cross-Platform Support: Windows, Linux, macOS, Docker, Kubernetes
- Language Ecosystem: Java, .NET, Go, Rust, PHP, PowerShell
- Package Manager Integration: Chocolatey, APT, YUM, DNF, Pacman
- Marketing & Awareness: Anti-AI-corporate messaging and developer empowerment
- REST API: Programmatic access for CI/CD pipelines
- SDK Libraries: JavaScript, Python, Java, .NET SDKs
- Webhook Support: Real-time ethics violation notifications
- Plugin Architecture: Extensible enforcement mechanisms
- Advanced Attribution Detection: Machine learning-based AI content identification
- Bias Analysis Engine: Automated bias detection in AI outputs
- Truth Verification: Factual accuracy checking and hallucination prevention
- Code Authorship Analysis: Statistical analysis of coding patterns
- SSO Integration: SAML/OAuth authentication for enterprise deployments
- Audit Trails: Comprehensive logging and compliance reporting
- Multi-Tenant Support: Organization-level policy management
- Compliance Dashboards: Visual reporting and analytics
- Cloud Platforms: AWS, Azure, GCP native integrations
- IDE Extensions: VS Code, IntelliJ, Eclipse, Vim, Emacs plugins
- CI/CD Platforms: GitHub Actions, GitLab CI, Jenkins, CircleCI, Travis CI
- Container Registries: Docker Hub, ECR, GCR, ACR integration
- Ethics Metrics Dashboard: Real-time monitoring and trend analysis
- Predictive Analytics: Early warning systems for ethics violations
- Benchmarking Tools: Industry-standard ethics compliance measurements
- Research Integration: Academic collaboration and peer-reviewed methodologies
- Standards Compliance: Integration with emerging AI ethics standards
- Federated Enforcement: Cross-organization policy coordination
- API Gateways: Ethics checking for third-party AI services
- Supply Chain Security: Ethics validation throughout development pipelines
- GDPR AI Act: European Union AI regulation compliance
- Industry-Specific Standards: Healthcare, finance, autonomous vehicles
- International Frameworks: Global AI governance alignment
- Certification Programs: Third-party validation and auditing
- Autonomous System Oversight: Ethics monitoring for self-learning systems
- Real-time Intervention: Active prevention of ethics violations
- Human-AI Collaboration: Enhanced frameworks for responsible AI partnership
- Legacy System Migration: Ethics retrofitting for existing AI deployments
- University Collaborations: Joint research with AI ethics programs
- Open Research Datasets: Publicly available ethics violation examples
- Peer Review Integration: Academic validation of enforcement methodologies
- Educational Resources: Training materials for developers and organizations
- Industry Standards: Contributing to AI ethics standards development
- Policy Influence: Working with regulators on AI governance frameworks
- Developer Education: Workshops and conferences on ethical AI development
- Open Source Leadership: Promoting human-centric AI development worldwide
- Adoption Rate: 10,000+ active installations by 2027
- Platform Coverage: Support for 95% of development environments
- Ethics Violations Prevented: Quantifiable impact on AI misuse prevention
- Community Growth: 1,000+ contributors and active maintainers
- Industry Recognition: Awards and citations for AI ethics leadership
Join us in building the future of ethical AI! 🚀
We welcome contributions that enhance AI ethics enforcement. Please:
- Fork the repository
- Create a feature branch
- Ensure all tests pass
- Submit a pull request with human authorship attribution
```bash
# Install development dependencies
npm install

# Run tests
npm test

# Lint code
npm run lint
```

- Victor J. Quiñones - Lead Developer & Ethics Architect
- GitHub: @victorjquinones
- Company: FutureTranz
- Contributions: Core architecture, security implementation, legal research, documentation
All contributions must be attributed to human authors only. AI tools are assistants, not authors. See AI_ATTRIBUTION_POLICY.md for details.
This project is licensed under the MIT License - see the LICENSE.txt file for details.
See CHANGELOG.md for version history and updates.
Enforcing ethical AI: Because AI should serve humanity, not replace it.
- Contributing Guide
- Code of Conduct
- Security Policy
- AI Attribution Policy
- Changelog
- 💬 GitHub Discussions - Community conversations and Q&A
Note: Enable GitHub Discussions in repository settings for community engagement.
If you find this project helpful, please consider:
- ⭐ Starring the repository
- 📢 Sharing with your network
- 💖 Sponsoring the maintainers
- 🐛 Reporting issues and suggesting features
- 📝 Contributing code or documentation
- 💬 Joining GitHub Discussions for community conversations

Your support helps us continue developing tools for ethical AI! 🙏
- 🌐 Website: futuretranz.com
- 💼 LinkedIn: FutureTranz
- 🐦 Twitter/X: @FutureTranz
- 📧 Email: hello@futuretranz.com
- 🌐 Personal Website: victorquinones.com
- 💼 LinkedIn: in/victorjquinones
- 🐦 Twitter/X: @VQuinones