bluefly/llm_platform

Admin UI and integration management for Bluefly LLM platform.

Type: drupal-module

1.0.0 2025-05-16 05:23 UTC

README

Troubleshooting & Common Issues

  • Missing module errors:
    • If you see errors about missing modules (e.g., bluefly_ui_library, eca_core), install them via Composer or Drush as described below.
    • If a dependency is not available for Drupal 11, comment it out in llm_admin.info.yml and remove any code that references it.
  • Broken admin links:
    • If a menu link leads to a missing page, check llm_admin.links.menu.yml and comment out or remove the link.
  • Install errors:
    • Run composer install and drush cr after updating dependencies.
    • Use drush pm:list --status="not installed" to see which modules are still missing.
  • General advice:
    • Always check the compatibility of contrib modules with Drupal 11 before enabling.
    • Keep your info.yml and composer.json files in sync with your actual codebase and installed modules.

Requirements

Drupal Core: 11.x (compatible with 10.x)

Required contrib modules:

  • eca_core (install: composer require drupal/eca_core)
  • bluefly_ui_library (install: composer require bluefly/bluefly_ui_library, or provide it as a custom module)

Optional (if used):

  • insights_metrics
  • insights_recommendations
  • insights_workflow
  • jsonrpc_2_0
  • model_context_protocol

Install all required modules:

drush en eca_core bluefly_ui_library

Note:

  • If any modules are missing, install via Composer or Drush as above.
  • Remove or update any dependencies in llm_admin.info.yml that are not available or not needed.
  • Ensure all modules are compatible with Drupal 11+.

Central admin UI for managing LLM models, endpoints, and platform configuration. Integrates with both Drupal MCP server and external BFMCP for model orchestration.

Features

  • Unified dashboard for LLM endpoints, models, and MCP jobs
  • OpenAPI-first, ECA-integrated automation
  • Supports both local (Drupal) and external (BFMCP) MCP backends
  • Real-time metrics, logs, and observability
  • Extensible UI with feature flags and theming

Default Configuration & OpenAPI Alignment

On installation, this module provides default LLM models, MCP servers, agent profiles, ECA automations, and views. All configuration is OpenAPI-aligned and located in /config/install/.

  • LLM Models: Predefined (Mistral, Llama, Phi, etc.)
  • MCP Servers: Dev, staging, prod with env-based tokens
  • Agent Profiles: Summarizer, code assistant, evaluator
  • ECA Automations: Registration, audit, error handling, sync, etc.
  • Views: Activity, audit log, metrics
  • Permissions & Roles: LLM Admin role and granular permissions
  • REST Resources: OpenAPI-compliant endpoints for all entities
  • Settings & Templates: Thresholds, action/context templates

Customization

  • Edit YAML files in /config/install/ before install, or override via Drupal Config Management after install.
  • All config fields are documented and mapped to OpenAPI schemas (see API.md).
  • Add new models, servers, or automations by copying and editing the provided YAML templates.
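
Beyond editing the YAML directly, individual values can also be overridden per environment with Drupal's standard configuration override system in settings.php. A minimal sketch, using hypothetical config object and key names; check /config/install/ for the names actually shipped by this module:

// settings.php or settings.local.php: environment-specific override via
// Drupal's configuration override system. The object and key names below
// are placeholders, not the module's documented schema.
$config['llm_admin.settings']['metrics_threshold'] = 0.8;
$config['llm_admin.mcp_server.dev']['uri'] = 'https://mcp.dev.example.com';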

See API.md for OpenAPI schemas, endpoint usage, and authentication guidance.

Security

  • Never commit real API keys or sensitive credentials to version control.
  • Always review and restrict permissions for admin/config routes.

OpenAPI & ECA

  • All endpoints and automations are OpenAPI-documented and ECA-integrated for maximum interoperability and automation.

Dynamic OpenAPI Generation

  • Enumerates all LLM, MCP, and Agent entities and bundles, generating OpenAPI schemas for each.
  • All schemas are exported as reusable OpenAPI components.
  • Admin API contract is published at /admin/llm/openapi.json.
  • Extensible for LLM, SDK, and automation use cases.

Services

  • bluefly_llm_admin.openapi_builder: Builds the OpenAPI contract dynamically for admin APIs and entities.
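
If you need the generated contract in custom code rather than from /admin/llm/openapi.json, the service can be called directly. The sketch below assumes a buildSpecification() method returning an array; that method name is hypothetical, so check the service class for the real API:

// Hypothetical usage sketch: the service ID is documented above, but the
// method name and return type are assumptions.
$builder = \Drupal::service('bluefly_llm_admin.openapi_builder');
$spec = $builder->buildSpecification();
file_put_contents('/tmp/llm-admin-openapi.json', json_encode($spec, JSON_PRETTY_PRINT));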

Roadmap

  • Add admin UI for schema preview, diff/version compare, and download/export.
  • Add REST/JSON:API resource support and LLM/agent toolkit export.

Base Configuration Import

On module install, the following entities and configuration are created automatically from config/install/:

  • Example LLMModel: mistral-7b-v1
  • Example MCPServer: dev
  • LLM Admin role and permissions

To re-import config after changes:

drush config:import

This ensures a consistent starting point for all environments and enables rapid onboarding and GitOps workflows.

License

MIT

Getting Started

  1. Install the module and import the provided config:
    drush en llm_admin
    drush config:import
    
  2. Create or edit entities via the Drupal admin UI or by editing YAML in config/install/.
  3. API access: All entities are exposed via REST/JSON:API. See API.md for details.

Entity Field Documentation

LLMModel

Field              Type     Description
model_id           string   Unique slug for the model
label              string   Human-readable label
repo               string   GitLab repo for model/config
helm_chart_path    string   Relative path to Helm chart
deployment_target  string   Reference to DeploymentTarget (EKS cluster)
mcp_tool_name      string   MCP tool registry name
fine_tune_source   string   GitLab issue, dataset, or job reference
deployment_status  string   Current deployment state (active, error, pending)
description        string   Optional human-readable description
tags               array    Optional tags for search/filter
model_size         string   Optional model size (e.g., 7B, 13B)
uuid               string   UUID

Example YAML

model_id: mistral-7b-v1
label: 'Mistral 7B v1'
repo: 'gitlab.com/org/models/mistral'
helm_chart_path: './charts/mistral'
deployment_target: 'eks-dev'
mcp_tool_name: 'mistral-7b-v1'
fine_tune_source: 'issue-123'
deployment_status: 'active'
description: 'Production-ready Mistral 7B model for summarization and QA.'
tags: ['mistral', '7b', 'v1', 'summarization']
model_size: '7B'
uuid: 'b1e2c3d4-5678-90ab-cdef-1234567890ab'
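
The same record can also be created programmatically, for example from drush php:eval or an update hook. A minimal sketch, assuming the entity type ID is llm_model (matching the JSON:API paths used below) and the field names from the table above:

// Sketch only: entity type ID and field names mirror the YAML example above.
$storage = \Drupal::entityTypeManager()->getStorage('llm_model');
$model = $storage->create([
  'model_id' => 'mistral-7b-v1',
  'label' => 'Mistral 7B v1',
  'repo' => 'gitlab.com/org/models/mistral',
  'deployment_target' => 'eks-dev',
  'deployment_status' => 'active',
  'model_size' => '7B',
]);
$model->save();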

AgentProfile

Field           Type     Description
profile_id      string   Unique slug for the agent profile
label           string   Human-readable label
mcp_server      string   Linked MCP server
llm_model       string   Reference to LLMModel
workflow_graph  string   YAML or JSON representing agent flow
tooling_config  string   Optional runtime tool setup
description     string   Optional human-readable description
tags            array    Optional tags for search/filter
enabled         boolean  Whether the agent profile is enabled
uuid            string   UUID

Example YAML

profile_id: summarizer
label: 'Summarizer Agent'
mcp_server: 'bfmcp'
llm_model: 'mistral-7b-v1'
workflow_graph: |
  type: langchain
  steps:
    - name: summarize
      model: mistral-7b-v1
      prompt: "Summarize the following text:"
tooling_config: |
  retriever: default
  moderation: enabled
  eval: basic
description: 'Agent for summarization tasks using Mistral 7B.'
tags: ['summarizer', 'langchain', 'mistral']
enabled: true
uuid: 'd4e5f6a7-8901-23bc-def4-3456789012cd'

MCPServer

Field          Type     Description
server_id      string   Unique slug for the MCP server
label          string   Human-readable label
uri            string   BFMCP API endpoint
auth_token     string   Env var or secret for authentication
tools          string   JSON array of registered tools
health_status  string   Current health status
last_ping      integer  Last health check timestamp
description    string   Optional human-readable description
tags           array    Optional tags for search/filter
enabled        boolean  Whether the MCP server is enabled
uuid           string   UUID

Example YAML

server_id: bfmcp
label: 'BFMCP Server'
uri: 'https://mcp.bfmcp.io'
auth_token: 'env:BFMCP_TOKEN'
tools: '[]'
health_status: 'healthy'
last_ping: 0
description: 'Primary BFMCP instance for model orchestration.'
tags: ['bfmcp', 'production', 'mcp']
enabled: true
uuid: 'f6a7b890-1234-56de-f789-5678901234ef'

API Usage Examples

  • Get all LLM Models:
    GET /jsonapi/llm_model/llm_model
    
  • Create an AgentProfile:
    POST /jsonapi/agent_profile/agent_profile
    Content-Type: application/vnd.api+json
    ...
    
  • Get MCPServer details:
    GET /jsonapi/mcp_server/mcp_server/{uuid}
    

See API.md for more details and OpenAPI schemas.
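
As an illustration, the first GET call above can be issued from custom code with Drupal's built-in HTTP client; the base URL is a placeholder and authentication depends on your site's JSON:API setup:

// Sketch: list LLM models over JSON:API using the core Guzzle client.
$client = \Drupal::httpClient();
$response = $client->get('https://example.com/jsonapi/llm_model/llm_model', [
  'headers' => ['Accept' => 'application/vnd.api+json'],
]);
$payload = json_decode((string) $response->getBody(), TRUE);
foreach ($payload['data'] as $resource) {
  // Entity fields are exposed under 'attributes' in JSON:API resources.
  print $resource['attributes']['label'] . PHP_EOL;
}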

Advanced Entities & Automations (2024-06)

DatasetVersion

Tracks versioned datasets for provenance and reproducibility.

YAML Example:

dataset_version_id: wiki_2024_v2
dataset_id: wiki_2024
version: "2.0"
source_commit: "a1b2c3d4"
created: "2024-06-15T00:00:00Z"
description: "Second version of Wikipedia 2024 dataset, post-cleanup."
tags: [wikipedia, 2024, v2]
status: "active"
uuid: ""

OpenAPI Schema:

DatasetVersion:
  type: object
  required: [dataset_version_id, dataset_id, version, created, status]
  properties:
    dataset_version_id: { type: string }
    dataset_id: { type: string }
    version: { type: string }
    source_commit: { type: string }
    created: { type: string, format: date-time }
    description: { type: string }
    tags: { type: array, items: { type: string } }
    status: { type: string, enum: [active, archived, error] }
    uuid: { type: string, format: uuid }

ExperimentRun

Tracks LLM fine-tuning or evaluation experiments, parameters, and results.

YAML Example:

experiment_id: exp_20240615_01
label: "Mistral 7B Fine-tune Run 1"
llm_model: "mistral_7b_v1"
dataset_version: "wiki_2024_v2"
parameters:
  learning_rate: 0.0002
  epochs: 3
metrics:
  accuracy: 0.87
  loss: 0.32
status: "completed"
started: "2024-06-15T10:00:00Z"
ended: "2024-06-15T12:30:00Z"
tags: [mistral, fine-tune, experiment]
uuid: ""

OpenAPI Schema:

ExperimentRun:
  type: object
  required: [experiment_id, label, llm_model, dataset_version, status, started]
  properties:
    experiment_id: { type: string }
    label: { type: string }
    llm_model: { type: string }
    dataset_version: { type: string }
    parameters: { type: object, additionalProperties: true }
    metrics: { type: object, additionalProperties: true }
    status: { type: string, enum: [running, completed, failed, stopped] }
    started: { type: string, format: date-time }
    ended: { type: string, format: date-time }
    tags: { type: array, items: { type: string } }
    uuid: { type: string, format: uuid }

ECA Automations

  • Auto-register and health check tools on creation
  • Promote models on successful experiment runs
  • Notify external registries on model promotion

YAML Example: See /config/install/llm_admin.eca.*.yml for all automation templates.

Dashboards & Views

  • Model/Agent Failure Dashboard: /admin/structure/views/view/model_agent_failure_dashboard
  • LLMModel Deployment Status: /admin/structure/views/view/llm_model_deployment_status

OpenAPI Path Example: Experiment Run Resource

paths:
  /api/experiment_run:
    get:
      summary: List experiment runs
      responses:
        '200':
          description: List of experiment runs
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/ExperimentRun'
    post:
      summary: Create a new experiment run
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ExperimentRun'
      responses:
        '201':
          description: Experiment run created
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ExperimentRun'
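
As a usage sketch of the POST operation above, the following creates an experiment run with Drupal's HTTP client; the base URL and authentication are placeholders, and the payload mirrors the ExperimentRun schema:

// Sketch: create an ExperimentRun via the /api/experiment_run path above.
$client = \Drupal::httpClient();
$response = $client->post('https://example.com/api/experiment_run', [
  'headers' => ['Content-Type' => 'application/json'],
  'json' => [
    'experiment_id' => 'exp_20240615_02',
    'label' => 'Mistral 7B Fine-tune Run 2',
    'llm_model' => 'mistral_7b_v1',
    'dataset_version' => 'wiki_2024_v2',
    'status' => 'running',
    'started' => '2024-06-16T10:00:00Z',
  ],
]);
// 201 indicates the run was created (see the responses block above).
print $response->getStatusCode();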

ModelCard Entity

Documents model purpose, limitations, and metrics.

YAML Example:

model_card_id: mistral_7b_v1_card
llm_model: mistral_7b_v1
version: "1.0"
summary: "Mistral 7B v1, open-weight LLM for summarization and QA."
intended_use: "General-purpose text generation, summarization, and Q&A."
limitations: "Not suitable for legal or medical advice. May hallucinate facts."
ethical_considerations: "Biases may be present. Outputs should be reviewed."
metrics:
  accuracy: 0.87
  toxicity: 0.02
  factuality: 0.81
tags: [documentation, model_card, mistral]
created: "2024-06-15T00:00:00Z"
uuid: ""

OpenAPI Schema:

ModelCard:
  type: object
  required: [model_card_id, llm_model, version, summary, intended_use]
  properties:
    model_card_id: { type: string }
    llm_model: { type: string }
    version: { type: string }
    summary: { type: string }
    intended_use: { type: string }
    limitations: { type: string }
    ethical_considerations: { type: string }
    metrics: { type: object, additionalProperties: true }
    tags: { type: array, items: { type: string } }
    created: { type: string, format: date-time }
    uuid: { type: string, format: uuid }

ReviewQueue Entity

Defines a queue for human-in-the-loop review of flagged LLM outputs.

YAML Example:

review_queue_id: flagged_outputs
label: "Flagged LLM Outputs"
entity_type: "llm_output"
criteria:
  - field: "toxicity"
    value: ">0.05"
  - field: "confidence"
    value: "<0.7"
reviewers:
  - "alice"
  - "bob"
status: "active"
created: "2024-06-15T00:00:00Z"
uuid: ""

OpenAPI Schema:

ReviewQueue:
  type: object
  required: [review_queue_id, label, entity_type, criteria, reviewers, status]
  properties:
    review_queue_id: { type: string }
    label: { type: string }
    entity_type: { type: string }
    criteria:
      type: array
      items:
        $ref: '#/components/schemas/ReviewCriterion'
    reviewers:
      type: array
      items:
        type: string
    status: { type: string, enum: [active, archived] }
    created: { type: string, format: date-time }
    uuid: { type: string, format: uuid }

ReviewCriterion:
  type: object
  properties:
    field: { type: string }
    value: { type: string }

Drift Detection Automation

Detects model drift and triggers retraining if factuality drops below threshold.

YAML Example:

langcode: en
status: true
id: drift_detection_on_experiment
label: 'Drift Detection on Experiment Run'
event: "experiment_run_completed"
entity_type: "experiment_run"
condition:
  - field: 'metrics.factuality'
    value: '<0.75'
actions:
  - type: "notify"
    channel: "slack"
    recipients:
      - "#ml-alerts"
    message: "Drift detected: factuality dropped below 0.75 in experiment {{ entity.experiment_id }}."
  - type: "schedule_retrain"
    target: "llm_model"
    notify: true
description: "Detects model drift and triggers retraining if factuality drops."
enabled: true
tags: [drift, automation, retrain]

All new entities, automations, and dashboards are OpenAPI-aligned and ready for use. See /config/install/ for YAMLs and API.md for schema and endpoint details.

What's Included on Install

  • Multiple LLM models (Mistral, Llama, Falcon, etc.)
  • Agent profiles (summarizer, code assistant, evaluator, translator)
  • Datasets, experiment runs, evaluation runs, deployments, model cards, review queues, governance policies
  • ECA automations for retraining, audit, compliance, drift detection, review assignment, etc.
  • Views for activity, metrics, dashboards
  • REST resources and OpenAPI schemas for all entities

How to Extend

  • Add new YAMLs to config/install/ for any entity type (see examples)
  • Use the admin UI to create/edit entities and automations
  • Add new ECA automations by copying and editing YAMLs
  • Extend normalization and API output via normalization config and plugins
  • Add new fields or computed fields in entity forms and normalization config

Common Workflows

  • Add a new LLM model: Copy an example YAML, edit fields, and import config
  • Automate retraining: Use ECA YAMLs to trigger retrain on dataset update or drift
  • Review flagged outputs: Use ReviewQueue and ECA to assign flagged outputs to reviewers
  • Enforce compliance: Add GovernancePolicy YAMLs and ECA automations for quarantine/audit
  • Explore the API: Use the built-in OpenAPI/Swagger UI route for live API testing

Example YAMLs

Falcon LLM Model

model_id: falcon_40b
label: "Falcon 40B"
repo: "tiiuae/falcon-40b"
helm_chart_path: "helm/llm/falcon"
deployment_target: "prod"
mcp_tool_name: "falcon-40b"
fine_tune_source: "huggingface"
deployment_status: "ready"
description: "Falcon 40B, large open-weight LLM"
tags: [open-source, 40b, production]
model_size: "40B"
uuid: ""

Translation Agent Profile

profile_id: translator
label: "Translation Agent"
mcp_server: "prod"
llm_model: "falcon_40b"
workflow_graph: "translate"
tooling_config: {}
description: "Agent for language translation tasks"
tags: [translation, default]
enabled: true
uuid: ""

Governance Policy Example

policy_id: block_pii
label: "Block PII"
applies_to: "llm_model"
rules:
  - type: "block"
    pattern: "ssn|credit card|passport"
    description: "Block outputs containing PII."
enforced: true
created: "2024-06-15T00:00:00Z"
tags: [governance, compliance]
uuid: ""

See /config/install/ for all YAMLs and API.md for schema and endpoint details.

Monitoring & Observability

Prometheus Metrics:

  • The LLM Admin module exposes Prometheus-compatible metrics at /admin/metrics/bluefly-llm-admin.
  • Metrics include LLM model health/status and will be expanded to cover API usage, experiment runs, and audit logs.
  • Example metric:
    llm_model_status{model_id="mistral-7b-v1",status="active"} 1
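
For reference, the exposition format behind that example is plain text served with a text/plain content type. The snippet below only illustrates how such output is assembled; it is not the module's actual metrics controller:

// Illustration of the Prometheus exposition format shown above.
$models = [
  ['model_id' => 'mistral-7b-v1', 'status' => 'active'],
  ['model_id' => 'falcon_40b', 'status' => 'pending'],
];
$lines = ['# TYPE llm_model_status gauge'];
foreach ($models as $model) {
  $lines[] = sprintf('llm_model_status{model_id="%s",status="%s"} 1', $model['model_id'], $model['status']);
}
$response = new \Symfony\Component\HttpFoundation\Response(
  implode("\n", $lines) . "\n",
  200,
  ['Content-Type' => 'text/plain; version=0.0.4']
);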
    

Grafana Dashboards:

  • Import the provided dashboard JSON from grafana/llm-models-dashboard.json (coming soon) into your Grafana instance.
  • Visualize model health, API/GraphQL usage, experiment trends, and audit logs.
  • Example PromQL query:
    count by (status) (llm_model_status)
    

Alerts:

  • Set up Grafana alerts for model health degradation, high error rates, or audit anomalies.

Admin UI:

  • Access metrics from the LLM Admin menu or directly at /admin/metrics/bluefly-llm-admin.
  • If Grafana is deployed, link to dashboards from the admin menu.

Required Environment Variables

This module requires the following environment variables to be set for secure and production-ready operation:

  • GITLAB_API_URL — The base URL for the GitLab API (e.g., https://gitlab.bluefly.io/api/v4)
  • GITLAB_PROJECT_ID — The GitLab project ID for model registry operations
  • LLM_ADMIN_GITLAB_TOKEN — The GitLab API token for authentication

Example .env file

GITLAB_API_URL=https://gitlab.bluefly.io/api/v4
GITLAB_PROJECT_ID=your-real-project-id
LLM_ADMIN_GITLAB_TOKEN=your-real-token

Set these in your environment, CI/CD variables, or Docker secrets as appropriate. Never commit real secrets to version control.

🔑 Local Token Management UI

The LLM Admin module provides a secure administrative interface for managing local API tokens used by LLM integrations. This UI is intended for local development and testing environments only.

Features

  • List Tokens: View all tokens stored in ~/.tokens/ (values are masked for security).
  • Add Token: Add a new token by specifying a unique, uppercase alphanumeric/underscore name and a non-empty value.
  • Delete Token: Remove an existing token securely.
  • Input Validation: Enforces strict naming and value requirements.
  • Error Handling: All file operations are checked and user-friendly error messages are displayed.

Security Model

  • Access Control:
    Only users with the administer llm tokens permission can access this UI.
    Assign this permission via Admin > People > Permissions.

  • Token Storage:
    Tokens are stored as individual files in the user's ~/.tokens/ directory (e.g., OPENAI.token).

  • No Production Secrets:
    This UI is for local/dev use only.
    Production secrets must be managed via CI/CD variables and Kubernetes secrets.

  • No Full Token Exposure:
    Token values are always masked in the UI.
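
The convention above (one plain-text file per token under ~/.tokens/) can also be read from your own scripts. A minimal sketch, not the module's actual implementation, that loads a token and masks it the way the UI does:

// Sketch: read a token stored as ~/.tokens/OPENAI.token and mask its value.
$name = 'OPENAI';
$path = getenv('HOME') . '/.tokens/' . $name . '.token';
if (is_readable($path)) {
  $value = trim(file_get_contents($path));
  // Show only the last four characters; full values are never displayed.
  print str_repeat('*', max(strlen($value) - 4, 0)) . substr($value, -4) . PHP_EOL;
}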

Usage

  1. Assign Permission:
    Grant administer llm tokens to trusted admin roles.

  2. Access the UI:
    Navigate to the Local Token Manager form in the Drupal admin interface.

  3. Add/Delete Tokens:
    Use the form to manage tokens as needed for local development.

Best Practices

  • Never commit tokens or secrets to version control.
  • Document all required environment variables in .env.example and this README.
  • Enforce secret presence in CI/CD using pipeline validation jobs.
  • Use the admin UI only for local/dev tokens.
    For production, use GitLab CI/CD variables and Kubernetes secrets, injected via Helm.

Local Environment Setup: Environment Variables for CLI and Web

To ensure all required secrets and API endpoints are available for both CLI (Drush, Composer) and web requests (Drupal UI), you must set environment variables in both contexts. This ensures full local/CI/CD parity and prevents errors with %env()% parameters.

1. CLI/Drush (Shell Export Method)

Add these lines to your ~/.bashrc, ~/.zshrc, or shell profile:

export GITLAB_API_URL=https://gitlab.bluefly.io/api/v4
export GITLAB_PROJECT_ID=your-local-project-id
export LLM_ADMIN_GITLAB_TOKEN=your-local-token

After editing, run:

source ~/.zshrc  # or source ~/.bashrc

Now, all Drush and Composer commands will have access to these variables.

2. Web Requests (settings.local.php Bootstrapping)

Ensure your web/sites/default/settings.local.php contains:

// Bluefly LLM Admin: Local Environment Variable Bootstrapping
if (!getenv('GITLAB_API_URL')) {
  putenv('GITLAB_API_URL=https://gitlab.bluefly.io/api/v4');
  $_ENV['GITLAB_API_URL'] = 'https://gitlab.bluefly.io/api/v4';
  $_SERVER['GITLAB_API_URL'] = 'https://gitlab.bluefly.io/api/v4';
}
if (!getenv('GITLAB_PROJECT_ID')) {
  putenv('GITLAB_PROJECT_ID=your-local-project-id');
  $_ENV['GITLAB_PROJECT_ID'] = 'your-local-project-id';
  $_SERVER['GITLAB_PROJECT_ID'] = 'your-local-project-id';
}
if (!getenv('LLM_ADMIN_GITLAB_TOKEN')) {
  putenv('LLM_ADMIN_GITLAB_TOKEN=your-local-token');
  $_ENV['LLM_ADMIN_GITLAB_TOKEN'] = 'your-local-token';
  $_SERVER['LLM_ADMIN_GITLAB_TOKEN'] = 'your-local-token';
}

Note: Make sure settings.local.php is loaded by settings.php (see the end of settings.php).

3. Why Both?

  • CLI/Drush: environment variables must already exist in the process environment when Drush or Composer starts, so they have to be exported in the shell beforehand.
  • Web: PHP code in settings.local.php sets env vars for web requests.

4. CI/CD and Production

  • In CI/CD and production, set these variables in your pipeline or container environment, not in code.

5. Verification

  • Test CLI: drush php-eval "echo getenv('GITLAB_API_URL');"
  • Test Web: Add a temporary watchdog or dpm(getenv('GITLAB_API_URL')) in a controller.

Local Developer Environment: Canonical Setup

To ensure every developer can run the platform securely and without missing environment variable errors, follow this canonical local setup pattern:

1. settings.local.php (Web Requests)

  • Copy web/sites/default/default.settings.local.php to web/sites/default/settings.local.php.
  • Edit the placeholder values for your local environment (never commit real secrets).
  • This file:
    • Loads all tokens from ~/.tokens/*.token as environment variables for web requests.
    • Sets required LLM Admin variables (e.g., GITLAB_API_URL, GITLAB_PROJECT_ID, LLM_ADMIN_GITLAB_TOKEN) for web.
  • Ensure settings.local.php is loaded by settings.php (should be at the end of the file).
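
The token-loading behaviour described in step 1 might look like the sketch below; the distributed default.settings.local.php template is the canonical version, so treat this only as an illustration:

// Sketch of the ~/.tokens/*.token loading block in settings.local.php.
foreach (glob(getenv('HOME') . '/.tokens/*.token') ?: [] as $file) {
  $name = strtoupper(basename($file, '.token'));
  $value = trim(file_get_contents($file));
  putenv($name . '=' . $value);
  $_ENV[$name] = $value;
  $_SERVER[$name] = $value;
}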

2. settings.local.sh (CLI/Drush/Composer)

  • Copy web/sites/default/default.settings.local.sh to web/sites/default/settings.local.sh.
  • Edit the placeholder values for your local environment (never commit real secrets).
  • Add this line to your ~/.bashrc or ~/.zshrc:
    source /absolute/path/to/your/project/web/sites/default/settings.local.sh
    
  • This ensures all CLI/Drush/Composer commands have the required environment variables at process start.

3. Never Commit Real Secrets

  • Both settings.local.php and settings.local.sh should be in .gitignore.
  • Only distribute the default.settings.local.php and default.settings.local.sh templates.

4. Onboarding Steps for New Developers

  1. Copy the default templates to their local versions:
    • cp web/sites/default/default.settings.local.php web/sites/default/settings.local.php
    • cp web/sites/default/default.settings.local.sh web/sites/default/settings.local.sh
  2. Edit both files to set your local tokens and environment variables.
  3. Add source /absolute/path/to/your/project/web/sites/default/settings.local.sh to your shell profile.
  4. Run source ~/.bashrc or source ~/.zshrc to load the variables for CLI.
  5. Run drush cr and drush updb -y; everything should work with no missing environment variable errors.

5. Summary Table

File                        Committed?  Contains Secrets?  Used For                 Notes
settings.php                Yes         No                 Main config              Always include settings.local.php
settings.local.php          No          Yes (local only)   Local dev (web)          Loads tokens/env vars for web
default.settings.local.php  Yes         No                 Template for onboarding  Distribute to new devs
settings.local.sh           No          Yes (local only)   Local dev (CLI/Drush)    Exports env vars for CLI
default.settings.local.sh   Yes         No                 Template for onboarding  Distribute to new devs

6. CI/CD and Production

  • In CI/CD and production, set all required environment variables in your pipeline or container environment, not in code.

This pattern ensures full local/CI/CD parity, zero risk of leaking secrets, and a seamless onboarding experience for all developers.

Admin Listings (Models & Agents)

  • The LLM Model and Agent Profile admin listings now use Drupal Views for display, filtering, and search.
  • Bluefly UI styling is applied to all admin tables for a modern look.
  • Custom PHP list builders are deprecated and should not be used for new features.
  • To customize listings, use the Views UI or edit the YAML config in config/install.

Audit Logs & Activity

  • Use ECA Log and/or AI Logging for all audit/event tracking.
  • Views displays are used for log browsing and filtering.

Event-Driven Automation

  • Use ECA and BPMN modeller for all workflow automation.
  • Custom event/automation PHP is deprecated in favor of ECA workflows.

LLM Admin Module

Environment Variable & Secret Configuration

This module does not hardcode any secrets or environment-specific values.

GitLab Integration

The following settings must be provided via environment variables, your site's settings.php, or the Drupal admin UI:

  • llm_admin.gitlab_api_url
  • llm_admin.gitlab_project_id
  • llm_admin.gitlab_token

Do not hardcode these values in llm_admin.settings.yml.

Production/CI/CD

  • Set these as environment variables in your hosting or CI/CD environment.
  • In settings.php, inject them into the service container:
// After all includes (e.g., after DDEV's settings.ddev.php)
$settings['container_parameters']['llm_admin.gitlab_api_url'] = getenv('GITLAB_API_URL');
$settings['container_parameters']['llm_admin.gitlab_project_id'] = getenv('GITLAB_PROJECT_ID');
$settings['container_parameters']['llm_admin.gitlab_token'] = getenv('LLM_ADMIN_GITLAB_TOKEN');

Local Development (DDEV Example)

  1. Add your secrets to .env in your project root:
    GITLAB_API_URL=https://gitlab.com/api/v4
    GITLAB_PROJECT_ID=your_project_id
    LLM_ADMIN_GITLAB_TOKEN=your_gitlab_token
    
  2. Run ddev restart to load the new environment variables.
  3. Ensure your settings.php includes the above container parameter injection after the DDEV include.
  4. Run ddev drush cr to rebuild the Drupal cache.

Admin UI

  • You may also set these values via the Drupal admin UI at /admin/config/llm-admin/settings for non-secret, non-production use.

Security Note

  • Never commit secrets or production endpoints to version control.
  • Always use environment variables or secure configuration management for sensitive data.

Troubleshooting

  • If you see You have requested a non-existent parameter "llm_admin.gitlab_api_url", ensure:
    • The environment variable is set in your container (ddev ssh && printenv | grep GITLAB).
    • The parameter is injected in settings.php after all includes.
    • You have cleared the Drupal cache (drush cr).

Overview

Central admin and configuration interface for LLMs, including MCP, ECA, OpenAPI, and UI integration. Provides a unified dashboard, settings, and management for all AI & LLM features.

Features

  • LLM model management
  • MCP backend integration (via mcp, mcp_client)
  • ECA workflow orchestration (via eca)
  • OpenAPI contract management (via openapi)
  • Unified admin dashboard and menu
  • Observability integration (via monitoring)
  • Secure secret management (via key)
  • Modern UI widgets (via bluefly_ui_library)

Integration Points

  • MCP: Uses mcp, mcp_client for model registry and orchestration
  • ECA: Uses eca for event-driven automation
  • OpenAPI: Uses openapi for API contract management
  • Monitoring: Uses monitoring for observability
  • Key: Uses key for secret management
  • AI: Uses ai for AI/ML features
  • UI: Uses bluefly_ui_library for React/Chakra UI widgets

Permissions

  • administer llm admin: Configure and manage LLM Admin, MCP, ECA, and integration settings
  • access llm admin dashboard: View the LLM Admin dashboard and widgets
  • manage llm models: Add, edit, and delete LLM models
  • manage mcp integration: Configure and execute MCP commands
  • manage eca workflows: Create and manage ECA workflows for LLMs
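
In custom code, these permissions are checked with the standard Drupal access API, for example:

// Example: gate custom output on the dashboard permission listed above.
if (\Drupal::currentUser()->hasPermission('access llm admin dashboard')) {
  // Render LLM Admin dashboard widgets for this user.
}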

API/CLI/Drush Commands

  • See Drush help for available commands (e.g., drush llm-admin:export, drush llm-admin:mcp-execute)

Example Workflows

  • Register a new LLM model and deploy via MCP
  • Create an ECA workflow to trigger model evaluation
  • Monitor system health via the dashboard observability widget

Security

  • All secrets managed via the key module or environment variables
  • Least-privilege permissions by default

Observability

  • System health and metrics displayed via the monitoring module

UI/UX

  • Modern dashboard widgets using Bluefly UI Library (React/Chakra)