
LLM CLI - AI-Powered Command Line Toolkit

Command-line interface for AI/LLM operations with multi-provider support and test-driven development integration.

Current Status: Development - TypeScript CLI with basic LLM integration
Technology: Node.js 18+, TypeScript, Commander.js

Technical Overview

LLMCLI provides command-line access to LLM providers and AI operations. Built with TypeScript and Commander.js, it includes testing framework integration and basic provider management capabilities.

Architecture & Data Flow

[CLI Command] → [Commander.js Parser] → [Provider Router]
        ↓                 ↓                    ↓
[Argument Validation] [Command Registry]   [LLM Provider]
        ↓                 ↓                    ↓
[Configuration Load]  [Action Execution]   [API Request]
        ↓                 ↓                    ↓
[Provider Selection]  [Response Format]    [LLM Response]
        ↓                 ↓                    ↓
[Output Formatting]   [CLI Response]       [Exit Code]
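
The flow above can be sketched with Commander.js. The command name, options, and provider registry below are hypothetical and only illustrate how parsing, validation, provider selection, and exit codes fit together; they are not the actual LLMCLI source.

import { Command } from 'commander';

// Hypothetical provider interface and registry (real implementations are registered elsewhere).
type Provider = { complete(prompt: string): Promise<string> };
const providers: Record<string, Provider> = {};

const program = new Command();

program
  .name('llmcli')
  .option('--provider <name>', 'LLM provider to use', 'openai')
  .option('--verbose', 'enable verbose logging');

program
  .command('chat <prompt>')
  .description('send a prompt to the selected provider')
  .action(async (prompt: string) => {
    const { provider } = program.opts();          // argument validation + configuration
    const impl = providers[provider];             // provider selection
    if (!impl) {
      console.error(`Unknown provider: ${provider}`);
      process.exitCode = 1;                       // exit code
      return;
    }
    const response = await impl.complete(prompt); // API request
    console.log(response);                        // output formatting
  });

program.parseAsync(process.argv);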

Component Interactions

  1. Command Processing:

    • Commander.js argument parsing and validation
    • Configuration loading from environment/files
    • Provider selection and routing logic
  2. LLM Integration:

    • Multi-provider support (configurable)
    • HTTP client with axios for API calls (see the request sketch after this list)
    • Response formatting and error handling
  3. Development Tools:

    • TDDAI framework integration
    • Test execution and coverage reporting
    • TypeScript compilation and validation
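
The axios-based provider call referenced in item 2 might look roughly like the sketch below. The endpoint, payload shape, and environment variable follow the public OpenAI chat API and are illustrative assumptions, not the project's actual HTTP client.

import axios, { AxiosError } from 'axios';

// Illustrative provider request with basic error handling and response formatting.
export async function completeWithOpenAI(prompt: string, model = 'gpt-4'): Promise<string> {
  try {
    const res = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      { model, messages: [{ role: 'user', content: prompt }] },
      { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` }, timeout: 30_000 },
    );
    return res.data.choices[0].message.content;
  } catch (err) {
    const e = err as AxiosError;
    // Surface the HTTP status (when present) alongside the axios error message.
    throw new Error(`Provider request failed${e.response ? ` (${e.response.status})` : ''}: ${e.message}`);
  }
}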

Installation

# Install dependencies
npm install

# Build TypeScript
npm run build

# Link for global usage
npm link

Platform Installation Command

The llm platform:install command provides comprehensive LLM platform setup with Drupal integration, supporting multiple AI providers, vector databases, and development configurations.

Quick Start

# Basic installation
llmcli llm platform:install --working-dir ./my-project --project-name my-llm-app

# Enterprise setup with vector database
llmcli llm platform:install \
--template enterprise \
--working-dir ./enterprise-project \
--with-vector-db --vector-db weaviate \
--provider openai --ai-models "gpt-4,gpt-3.5-turbo"

# Development setup with local AI
llmcli llm platform:install \
--template development \
--working-dir ./dev-project \
--provider ollama --ai-models "llama2,mistral" \
--vector-db sqlite

Configuration Options

Basic Configuration

  • --project-name <name> - Project name (default: llm-platform)
  • --working-dir <path> - Installation directory (required)
  • --site-name <name> - Drupal site name (default: LLM Platform)
  • --admin-user <user> - Admin username (default: admin)
  • --admin-pass <pass> - Admin password (default: admin)
  • --profile <profile> - Installation profile (standard, minimal, full)
  • --drupal-version <version> - Drupal version (default: 10)

AI Provider Configuration

  • --provider <provider> - Primary AI provider:

    • openai - OpenAI GPT models
    • anthropic - Anthropic Claude models
    • google-gemini - Google Gemini models
    • mistral - Mistral AI models
    • fireworks - Fireworks AI models
    • huggingface - Hugging Face models
    • ollama - Local Ollama models
    • lmstudio - LM Studio models
  • --suggest-providers <providers> - Comma-separated list of additional providers

  • --ai-models <models> - Comma-separated list of AI models

  • --model-fallback <enabled|disabled> - Enable model fallback

  • --model-routing <type> - Model routing strategy (see the routing sketch after this list)

  • --model-caching <level> - Model caching level
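
As noted next to --model-routing, the routing and fallback flags can be sketched conceptually. The function below is a hypothetical illustration of round-robin routing with fallback across the configured models, not the installer's actual behavior.

// Hypothetical round-robin router that falls back to the next model on failure.
type CompleteFn = (model: string, prompt: string) => Promise<string>;

export function createModelRouter(models: string[], complete: CompleteFn, fallback = true) {
  let counter = 0;
  return async function route(prompt: string): Promise<string> {
    const start = counter++ % models.length;                       // rotate the starting model
    const order = [...models.slice(start), ...models.slice(0, start)];
    const candidates = fallback ? order : [order[0]];
    let lastError: unknown;
    for (const model of candidates) {
      try {
        return await complete(model, prompt);
      } catch (err) {
        lastError = err;                                           // try the next model
      }
    }
    throw lastError;
  };
}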

Vector Database Configuration

  • --with-vector-db - Include vector database

  • --vector-db <type> - Vector database type:

    • weaviate - Weaviate vector database
    • milvus - Milvus vector database
    • qdrant - Qdrant vector database
    • pinecone - Pinecone cloud service
    • postgres - PostgreSQL with pgvector
    • azure-ai-search - Azure AI Search
    • sqlite - SQLite with vector support
  • --embedding-models <models> - Comma-separated embedding models

  • --vector-dimensions <dimensions> - Vector dimensions (default: 1536)

  • --similarity-threshold <threshold> - Similarity threshold (default: 0.8)
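
To make --vector-dimensions and --similarity-threshold concrete, the sketch below shows how a cosine-similarity cutoff of 0.8 is typically applied to embedding vectors. It is a generic illustration, not code from the installer.

// Cosine similarity between two embedding vectors of equal dimension.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep only candidates at or above the configured similarity threshold (default 0.8).
function filterMatches(query: number[], candidates: number[][], threshold = 0.8): number[][] {
  return candidates.filter((c) => cosineSimilarity(query, c) >= threshold);
}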

Feature Flags

  • --with-ai-services - Include AI services
  • --with-ai-swarm - Include AI swarm orchestration
  • --with-secure - Include security hardening
  • --with-alternative-services - Include Alternative Services integration
  • --with-recipe-onboarding - Include Recipe Onboarding framework
  • --with-ddev-orchestration - Include DDEV service orchestration
  • --with-automated-testing - Include automated testing

DDEV Configuration

  • --ddev-services <services> - Comma-separated DDEV services:

    • mariadb, mysql, postgres - Database services
    • redis, memcached - Caching services
    • elasticsearch, opensearch - Search services
    • mailpit, mailhog - Email testing
    • minio - Object storage
    • weaviate, milvus, qdrant - Vector databases
  • --ddev-name <name> - Custom DDEV project name

Template Presets

  • --template <template> - Use predefined template:
    • enterprise - Full featured with security, vector DB, AI swarm
    • development - Development-friendly with guided setup
    • minimal - Basic installation with minimal features

Security Configuration

  • --security-level <level> - Security level:

    • basic - Basic security features
    • standard - Standard security configuration
    • maximum - Maximum security hardening
  • --encryption-at-rest <enabled|disabled> - Enable encryption at rest

  • --security-headers <headers> - Comma-separated security headers:

    • csp - Content Security Policy
    • hsts - HTTP Strict Transport Security
    • xss_protection - XSS Protection headers
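
For reference, those flags correspond to the following HTTP response headers; the example values are common defaults and may differ from what the installer writes.

// Mapping of --security-headers flags to the headers they enable (illustrative values).
const securityHeaders: Record<string, [string, string]> = {
  csp: ['Content-Security-Policy', "default-src 'self'"],
  hsts: ['Strict-Transport-Security', 'max-age=31536000; includeSubDomains'],
  xss_protection: ['X-XSS-Protection', '1; mode=block'],
};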

Performance Configuration

  • --performance-mode <mode> - Performance mode:

    • development - Development optimizations
    • balanced - Balanced performance
    • production - Production optimizations
  • --caching-strategy <strategy> - Caching strategy:

    • standard - Standard caching
    • aggressive - Aggressive caching
    • minimal - Minimal caching

Development Options

  • --env-detect - Auto-detect environment
  • --ui-wizard - Run interactive UI wizard
  • --wizard-suggest-additional - Suggest additional features in wizard
  • --development-mode <true|false> - Enable development mode
  • --interactive - Enable interactive setup mode

Recipe Configuration

  • --use-recipe <recipes> - Comma-separated list of recipes
  • --recipe-mode <mode> - Recipe mode:
    • guided - Guided setup with prompts
    • expert - Expert mode with minimal prompts
  • --recipe-development <true|false> - Recipe development mode
  • --recipe-validation <strict|loose> - Recipe validation mode

Testing Configuration

  • --test-runner <runner> - Test runner (jest, phpunit)
  • --coverage-threshold <percent> - Coverage threshold (default: 80)
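
When jest is the selected runner, a coverage threshold is usually enforced through Jest's coverageThreshold option. The generated config presumably resembles the minimal sketch below, though the exact file the installer writes may differ.

// jest.config.ts (illustrative)
import type { Config } from 'jest';

const config: Config = {
  testEnvironment: 'node',
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};

export default config;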

Path Configuration

  • --token-location <path> - Token storage location
  • --common-npm-path <path> - Common NPM packages path
  • --workspace-root <path> - Workspace root path
  • --recipes-path <path> - Recipes path

Build & Package Management

  • --npm-link <true|false> - Use npm link
  • --with-local-packages - Use local packages
  • --auto-npm-install - Auto install npm packages
  • --auto-composer-install - Auto install composer packages

Documentation Generation

  • --generate-docs <types> - Generate documentation:
    • api - API documentation
    • admin - Administrator guide
    • developer - Developer documentation
  • --docs-format <formats> - Documentation formats

Suggestion Options

  • --suggest-integrations <list> - Suggested integrations
  • --suggest-security <list> - Suggested security features
  • --suggest-performance <list> - Suggested performance features
  • --suggest-content <list> - Suggested content features
  • --suggest-development <list> - Suggested development tools

Configuration Examples

Enterprise Setup

llmcli llm platform:install \
--template enterprise \
--working-dir ./enterprise-app \
--project-name enterprise-llm \
--site-name "Enterprise LLM Platform" \
--with-vector-db --vector-db weaviate \
--provider openai --ai-models "gpt-4,gpt-3.5-turbo" \
--embedding-models "text-embedding-ada-002" \
--security-level maximum \
--performance-mode production \
--caching-strategy aggressive \
--with-ai-swarm \
--with-automated-testing \
--coverage-threshold 95 \
--generate-docs "api,admin,developer"

Development Setup

llmcli llm platform:install \
--template development \
--working-dir ./dev-app \
--project-name dev-llm \
--provider ollama --ai-models "llama2,codellama" \
--vector-db sqlite \
--security-level standard \
--development-mode true \
--ui-wizard \
--with-recipe-onboarding \
--recipe-mode guided

Multi-Provider Setup

llmcli llm platform:install \
--working-dir ./multi-provider-app \
--provider openai \
--suggest-providers "anthropic,google-gemini,ollama" \
--ai-models "gpt-4,claude-3-opus,gemini-pro,llama2" \
--model-routing "round-robin" \
--model-fallback enabled \
--with-vector-db --vector-db postgres \
--ddev-services "postgres,redis,elasticsearch"

Local AI Setup

llmcli llm platform:install \
--working-dir ./local-ai-app \
--provider ollama \
--ai-models "llama2,mistral,codellama" \
--vector-db sqlite \
--embedding-models "BAAI/bge-small-en-v1.5" \
--performance-mode development \
--security-level basic

Cloud-Based Setup

llmcli llm platform:install \
--working-dir ./cloud-app \
--provider openai \
--suggest-providers "anthropic,google-gemini" \
--vector-db pinecone \
--embedding-models "text-embedding-ada-002,text-embedding-3-small" \
--vector-dimensions 1536 \
--similarity-threshold 0.85 \
--security-level maximum \
--performance-mode production

Template Presets Details

Enterprise Template

  • Full vector database support with Weaviate
  • AI swarm orchestration
  • Maximum security configuration
  • Production performance mode
  • Aggressive caching strategy
  • 95% test coverage requirement
  • Comprehensive audit logging

Development Template

  • Guided recipe onboarding
  • Standard security level
  • Alternative Services integration
  • 80% test coverage requirement
  • Development-friendly defaults

Minimal Template

  • Basic features only
  • No vector database
  • No AI swarm
  • Basic security level
  • Minimal resource usage

Generated Project Structure

project-name/
├── config/
│   ├── sync/               # Drupal configuration
│   ├── security/           # Security settings
│   └── ai/                 # AI service configuration
├── web/
│   ├── modules/custom/     # Custom Drupal modules
│   ├── themes/custom/      # Custom themes
│   └── .htaccess           # Security headers (if enabled)
├── scripts/                # Build and deployment scripts
├── tests/                  # Test files
├── docs/                   # Generated documentation
├── recipes/                # Drupal recipes
├── composer.json           # PHP dependencies
├── package.json            # Node.js dependencies (if enabled)
├── .env                    # Environment configuration
└── README.md               # Project documentation

Integration with Drupal AI Module

The platform:install command is designed to work with the Drupal AI module, supporting:

  • Vector Database Providers: All providers supported by Drupal AI
  • AI Providers: OpenAI, Anthropic, Google Gemini, Mistral, Fireworks, Hugging Face, Ollama, LMStudio
  • Configuration Management: Automatic API key setup and service configuration
  • Module Integration: AI Core, AI Explorer, AI Automators, AI Search, AI Assistants API

TDDAI Worker Integration

The installation process integrates with TDDAI workers for:

  • Alternative Services integration (DDEV service management)
  • Recipe Onboarding framework (guided setup wizard)
  • Automated testing and coverage reporting
  • Interactive setup modes

Dry Run Mode

Test any configuration without making changes:

llmcli llm platform:install [options] --dry-run

This shows exactly what would be installed and configured without actually performing the installation.

Run the CLI (after build):

./bin/llmcli --help


All Available Commands

Core LLM Platform Commands

Platform Installation (Primary Command)

# Basic installation
llmcli llm platform:install --working-dir ./my-project --project-name my-app

# Enterprise setup
llmcli llm platform:install --template enterprise --working-dir ./enterprise-app

# Development setup with local AI
llmcli llm platform:install --template development --working-dir ./dev-app --provider ollama

Platform Management

llmcli llm init <name>                    # Initialize new LLM project
llmcli llm ecosystem <action> # Ecosystem management (status, health, start, test, deploy)
llmcli llm workers [action] # Worker management (list, orchestrate)
llmcli llm achieve-100 # Achieve 100% performance optimization
llmcli llm test-full # Complete platform reset and testing
llmcli llm test-quick # Quick testing without reset
llmcli llm test-run [target] # Targeted test execution
llmcli llm validate # Platform health validation

Platform Integration Commands

llmcli llm drupal <action>                # Drupal integration (discover, list, stats, validate)
llmcli llm config <action> # Configuration management (init, health, show, validate)
llmcli llm deps <action> # Dependencies management (check, plan, resolve, update)
llmcli llm publish <action> # Publishing management (list, check, all, status)
llmcli llm interactive <action> # Interactive mode (wizard, build, help)
llmcli llm path <action> # Path management (fix, cleanup, validate)

AI & LLM Integration Commands

AI Chat and Content Generation

llmcli ai chat "<prompt>"                 # Chat with AI
llmcli ai generate "<prompt>" # Generate content with AI
llmcli ai models # List available AI models
llmcli ai status # Check AI service status
llmcli ai embeddings "<text>" # Generate text embeddings
llmcli ai compare "<text1>" "<text2>" # Compare semantic similarity
llmcli ai analyze <path> # Analyze code files or directories

AI Prompt Management

llmcli ai prompt save <name> <content>    # Save a prompt template
llmcli ai prompt list # List saved prompts
llmcli ai prompt load <name> # Load a saved prompt

Worker Orchestration Commands

llmcli worker status                      # Show worker pool status
llmcli worker orchestrate # Orchestrate multiple workers for parallel processing
llmcli worker run <script> [data] # Run a worker script
llmcli worker list [--status <status>] # List all tasks
llmcli worker clear # Clear completed tasks
llmcli worker init # Initialize worker environment
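
As a rough picture of what "orchestrate multiple workers for parallel processing" involves, the sketch below fans tasks out over Node's built-in worker_threads and collects the results. It is a generic pattern, not the llmcli worker implementation.

import { Worker } from 'node:worker_threads';

// Run one worker script with its task data and resolve with whatever it posts back.
function runTask(script: string, data: unknown): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(script, { workerData: data });
    worker.once('message', resolve);
    worker.once('error', reject);
    worker.once('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker exited with code ${code}`));
    });
  });
}

// Orchestrate: run every task in parallel and wait for all results.
export function orchestrate(script: string, tasks: unknown[]): Promise<unknown[]> {
  return Promise.all(tasks.map((data) => runTask(script, data)));
}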

Token Management Commands

llmcli token set <provider> <token>       # Set an API token for a provider
llmcli token list [--verbose] # List all stored tokens
llmcli token get <provider> [--show] # Get token for a provider
llmcli token remove <provider> # Remove token for a provider
llmcli token validate [provider] # Validate token formats and check if active
llmcli token usage [--provider <name>] # View token usage statistics
llmcli token load [--export] # Load tokens into environment variables
llmcli token sync --target <path> # Sync tokens to another location
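
How stored tokens map onto environment variables when loaded is not specified here; a plausible mapping, assuming the variable names conventionally used by each provider's SDK, is sketched below.

// Assumed provider → environment variable mapping (conventional names; the CLI's actual mapping may differ).
const tokenEnvVars: Record<string, string> = {
  openai: 'OPENAI_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY',
  'google-gemini': 'GOOGLE_API_KEY',
  mistral: 'MISTRAL_API_KEY',
  huggingface: 'HF_TOKEN',
};

// Export stored tokens into the current process environment.
export function loadTokens(tokens: Record<string, string>): void {
  for (const [provider, token] of Object.entries(tokens)) {
    const envVar = tokenEnvVars[provider];
    if (envVar) process.env[envVar] = token;
  }
}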

Project Management Commands

llmcli project init <name>                # Initialize a new project
llmcli project validate [--fix] # Validate project structure and configuration
llmcli project build [--clean] [--watch] # Build the project
llmcli project publish [--tag <tag>] # Publish project packages
llmcli project info [--json] # Show project information
llmcli project list [--workspace <path>] # List projects in workspace

System Management Commands

System Diagnostics

llmcli system doctor [--fix] [--report]   # Run comprehensive system diagnostics
llmcli system health [--json] [--metrics] # Check system health status
llmcli system init [--config-dir <path>] # Initialize LLMCLI system configuration
llmcli system update-self [--check] # Update LLMCLI to the latest version

Configuration Management

llmcli system config list                 # List all configuration values
llmcli system config get <key> # Get configuration value
llmcli system config set <key> <value> # Set configuration value
llmcli system config reset # Reset configuration to defaults

Cache Management

llmcli system cache                       # Show cache information
llmcli system cache clear # Clear system cache
llmcli system cache size # Show cache size breakdown

Log Management

llmcli system logs [--tail <n>] [--level <level>] # Show system logs
llmcli system logs clear # Clear system logs

Git Workflow Commands

Git Flow

llmcli git flow feature start <name>      # Start a new feature branch
llmcli git flow feature finish <name> # Finish a feature branch
llmcli git flow release start <version> # Start a new release branch
llmcli git flow release finish <version> # Finish a release branch
llmcli git flow hotfix start <version> # Start a new hotfix branch
llmcli git flow hotfix finish <version> # Finish a hotfix branch

Smart Commits and PR Management

llmcli git smart-commit <message>         # Create conventional commits with smart formatting
llmcli git sync [branch] # Sync current or specified branch with remote
llmcli git cleanup # Clean up merged branches and stale references
llmcli git pr create <title> # Create a new pull request
llmcli git pr list # List pull requests
llmcli git pr view <number> # View pull request details
llmcli git pr merge <number> # Merge a pull request

Conventional Commits

llmcli git conventional validate          # Validate commit messages
llmcli git conventional changelog # Generate conventional changelog
llmcli git conventional version # Suggest next version
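
For context, the Conventional Commits format checked by git conventional validate uses a type(scope): description header. A minimal validator for that header line might look like the following; the CLI's actual rules may be stricter.

// Minimal Conventional Commits header check: type(scope)!: description
const CONVENTIONAL_HEADER =
  /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([\w./-]+\))?!?: .+/;

export function isConventionalCommit(message: string): boolean {
  const header = message.split('\n')[0];
  return CONVENTIONAL_HEADER.test(header);
}

// Example: isConventionalCommit('feat(cli): add platform:install dry-run') === true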

Git Hooks

llmcli git hooks list                     # List available git hooks
llmcli git hooks install # Install git hooks
llmcli git hooks uninstall # Uninstall git hooks
llmcli git hooks test <hook> # Test a specific git hook

CI/CD Commands

Pipeline Management

llmcli cicd pipeline generate             # Generate GitLab CI pipeline configuration
llmcli cicd pipeline validate # Validate existing pipeline configuration
llmcli cicd pipeline components # List available pipeline components
llmcli cicd pipeline run # Run pipeline locally for testing

Deployment Management

llmcli cicd deploy production             # Deploy to production environment
llmcli cicd deploy staging # Deploy to staging environment
llmcli cicd deploy list # List recent deployments
llmcli cicd deploy status <environment> # Show deployment status

Monitoring and Testing

llmcli cicd monitor start                 # Start monitoring CI/CD processes
llmcli cicd monitor pipelines # Show pipeline status and metrics
llmcli cicd monitor deployments # Show deployment metrics
llmcli cicd monitor alerts # Show CI/CD alerts and issues
llmcli cicd test run # Run tests in CI/CD environment
llmcli cicd test report # Generate test reports
llmcli cicd test validate # Validate test configuration

Security and Artifacts

llmcli cicd security scan                 # Run security scans
llmcli cicd security report # Generate security report
llmcli cicd security compliance # Check compliance status
llmcli cicd artifacts list # List available artifacts
llmcli cicd artifacts download # Download artifacts
llmcli cicd artifacts upload <path> # Upload artifacts
llmcli cicd artifacts clean # Clean old artifacts

Drupal Integration Commands

Discovery and Management

llmcli drupal discover                    # Discover Drupal modules and commands
llmcli drupal list # List all Drupal commands
llmcli drupal stats # Show Drupal statistics
llmcli drupal register # Register Drupal commands

Module Management

llmcli drupal module list                 # List all modules
llmcli drupal module enable <name> # Enable a module
llmcli drupal module disable <name> # Disable a module
llmcli drupal module info <name> # Show module information

Route Management

llmcli drupal route list                  # List all routes
llmcli drupal route info <name> # Show route information
llmcli drupal route test <name> # Test a route

Redis Cache Management

llmcli drupal redis status                # Show Redis status
llmcli drupal redis clear # Clear Redis cache
llmcli drupal redis info # Show Redis information
llmcli drupal redis test # Test Redis connection

TDDAI (AI-Enhanced TDD Framework)

llmcli drupal tddai validate-ai-tdd       # Validate AI-Enhanced TDD Compliance
llmcli drupal tddai generate-ai-tdd-structure # Generate AI-Enhanced Drupal Module Structure
llmcli drupal tddai help # Comprehensive TDDAI Help & Examples

Configuration Commands

llmcli config show [section]              # Show current configuration
llmcli config validate # Validate configuration
llmcli config template [filePath] # Generate configuration template
llmcli config export [filePath] # Export current configuration
llmcli config reload # Reload configuration
llmcli config init # Initialize LLMCLI configuration
llmcli config edit # Edit configuration file
llmcli config health # Configuration health check

Testing Framework Commands

llmcli test unit                          # Run unit tests for current package
llmcli test integration # Run integration tests
llmcli test all # Run all test suites
llmcli test drupal # Run Drupal module tests
llmcli test coverage # Generate test coverage report
llmcli test watch # Run tests in watch mode
llmcli test performance # Run performance tests

Additional Commands

llmcli apple-fm <command>                 # Apple Foundation Models integration
llmcli milvus <command> # Milvus vector database operations
llmcli reports <command> # Generate various reports
llmcli ui-ux <command> # UI/UX development tools
llmcli quick-fix <command> # Quick fixes and patches
llmcli deps <command> # Dependency management
llmcli path <command> # Path utilities
llmcli ecosystem <command> # Ecosystem management
llmcli publish <command> # Publishing utilities
llmcli achieve-100 <command> # Performance optimization
llmcli integration-test <command> # Integration testing
llmcli interactive <command> # Interactive commands
llmcli tddai-workers <command> # TDDAI worker management
llmcli tddai-roadmap <command> # TDDAI roadmap planning
llmcli tddai-experience <command> # TDDAI experience builder
llmcli completion <command> # Shell completion setup

Global Options

All commands support these global options:

--dry-run                                 # Show what would be done without executing
--verbose # Enable verbose logging
--config <path> # Path to configuration file
--help, -h # Display help for command

Development Commands

# Build and development
npm run build # Compile TypeScript
npm run dev # Watch mode development
npm run clean # Clean build artifacts

# Testing
npm run test # Run Jest tests
npm run test:watch # Watch mode testing
npm run test:coverage # Generate coverage report
npm run test:ci # CI mode testing

# Code quality
npm run lint # ESLint checking
npm run lint:fix # Auto-fix lint issues
npm run typecheck # TypeScript type checking

System Requirements

Runtime Requirements:

  • Node.js: 18.0.0+
  • Memory: 256MB+
  • Storage: 100MB+

Development Requirements:

  • TypeScript: 5.3.3+
  • Jest: 29.7.0+
  • ESLint: 8.54.0+

Dependencies

Core Dependencies

  • commander: CLI framework and argument parsing
  • axios: HTTP client for LLM API requests
  • chalk: Terminal output coloring
  • inquirer: Interactive command prompts
  • ora: Terminal spinners and progress indicators
  • zod: Runtime type validation
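
To show where zod's runtime validation fits, here is a hedged sketch of validating a loaded configuration object; the field names are illustrative, not the project's actual schema.

import { z } from 'zod';

// Illustrative configuration schema; real fields and defaults may differ.
const ConfigSchema = z.object({
  provider: z.enum(['openai', 'anthropic', 'google-gemini', 'ollama']),
  models: z.array(z.string()).min(1),
  vectorDimensions: z.number().int().positive().default(1536),
  similarityThreshold: z.number().min(0).max(1).default(0.8),
});

export type CliConfig = z.infer<typeof ConfigSchema>;

export function parseConfig(raw: unknown): CliConfig {
  // Throws a descriptive ZodError if the configuration is malformed.
  return ConfigSchema.parse(raw);
}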

Development Dependencies

  • typescript: TypeScript compiler and type checking
  • jest: Testing framework with coverage
  • eslint: Code linting and style enforcement
  • @types/*: TypeScript type definitions

Configuration

Configuration can be provided via:

  • Environment variables
  • Configuration files
  • Command line arguments
  • Interactive prompts
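
The precedence among these sources is not stated here. One common convention is that command-line arguments override environment variables, which override file values, which override defaults; a merge under that assumption looks like this.

// Merge configuration sources with later sources winning (assumed precedence, illustrative only).
type ConfigSource = Record<string, unknown>;

export function resolveConfig(
  defaults: ConfigSource,
  fileConfig: ConfigSource,
  envConfig: ConfigSource,
  cliArgs: ConfigSource,
): ConfigSource {
  return { ...defaults, ...fileConfig, ...envConfig, ...cliArgs };
}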

Development Status

Current Capabilities

  • CLI framework with Commander.js
  • TypeScript compilation and validation
  • Jest testing framework integration
  • ESLint code quality checking
  • Basic project structure and configuration

Limitations

  • LLM provider integration incomplete
  • Command implementations in development
  • Limited error handling and validation
  • No production deployment configuration

Contributing

This project follows test-driven development practices:

  1. Write tests first (Jest framework)
  2. Implement functionality to pass tests
  3. Maintain TypeScript strict mode
  4. Follow ESLint style guidelines
  5. Ensure test coverage meets requirements
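
As a small illustration of step 1, a test like the one below would be written against a module that does not exist yet (the import path and function are hypothetical), then the implementation is added until it passes.

// vector.test.ts — written first, before the implementation exists (illustrative).
import { describe, expect, it } from '@jest/globals';
import { cosineSimilarity } from '../src/vector'; // hypothetical module under test

describe('cosineSimilarity', () => {
  it('returns 1 for identical vectors', () => {
    expect(cosineSimilarity([1, 2, 3], [1, 2, 3])).toBeCloseTo(1);
  });

  it('returns 0 for orthogonal vectors', () => {
    expect(cosineSimilarity([1, 0], [0, 1])).toBeCloseTo(0);
  });
});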

Last Updated: June 2025
Created from package.json analysis and project structure