kanopi/quicksilver-scrubber

Trigger scrubber via CircleCI.


Type: quicksilver-script

1.0.0 2025-04-30 21:15 UTC



README

Quicksilver Scrubber (v1.x, Terminus v3.x compatible)

This Quicksilver project is used in conjunction with the Scrubber tool to help scrub the database of PII or other sensitive data. The goal is to automatically scrub sensitive data from a site's database whenever the database is cloned to another environment.

Requirements

  • Configured Scrubber Data Configuration File
    • For more information on the Scrubber (database obfuscation) tool, refer to the documentation.
    • There is a starter file in the examples directory of this repo.
  • Configured CircleCI Job

Installation

This project is designed to be included from a site's composer.json file, and placed in its appropriate installation directory by Composer Installers.

In order for this to work, you should have the following in your composer.json file:

{
  "require": {
    "composer/installers": "^1.0.20"
  },
  "extra": {
    "installer-paths": {
      "web/private/scripts/{$name}/": ["type:quicksilver-script"]
    }
  }
}

The project can be included by using the command:

composer require kanopi/quicksilver-scrubber:^1

Setting Secrets

The project uses the Terminus Secrets Manager Plugin to operate. The following secrets need to be set before the scripts will run; they are read within the scrubber script and used to trigger the CircleCI pipeline.

terminus secret:set site-id scrubber_processor 'circleci' --type=env --scope=web
terminus secret:set site-id token 'XXXXX' --type=env --scope=web
terminus secret:set site-id repo_source 'github' --type=env --scope=web
terminus secret:set site-id repo_owner 'XXXXX' --type=env --scope=web
terminus secret:set site-id repo_name 'XXXX' --type=env --scope=web
terminus secret:set site-id primary_branch 'XXXX' --type=env --scope=web

Example pantheon.yml

Here's an example of what your pantheon.yml would look like if this were the only Quicksilver operation you wanted to use.

api_version: 1

workflows:
  clone_database:
    after:
      - type: webphp
        description: Trigger Scrubber
        script: private/scripts/quicksilver-scrubber/scrubber.php

Example scrubber.yml

Look at scrubber.yml for the basic config that will work with CircleCI.

Place the scrubber.yml file at the root of your repo.

A key detail of the example file is that the Pantheon database connection values are read from environment variables, which are set in CircleCI.

Also make sure to tailor the configuration to your project by scrubbing the tables specific to it. For notes on how to set that up, refer to the documentation: https://github.com/kanopi/scrubber/blob/main/docs/datatables.md
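To illustrate how the CircleCI environment variables tie into the config, the database portion of a scrubber.yml might look like the sketch below. This is a hypothetical fragment, not the actual schema: the real keys are defined by the kanopi/scrubber tool, so start from the starter file in this repo's examples directory.

```yaml
# Hypothetical sketch only -- the actual keys are defined by the
# kanopi/scrubber tool; copy the starter file from examples/ instead.
# The ${...} values are the environment variables exported in the
# CircleCI job shown below.
database:
  host: ${MYSQL_HOST}
  port: ${MYSQL_PORT}
  username: ${MYSQL_USER}
  password: ${MYSQL_PASSWORD}
  database: ${MYSQL_DATABASE}
```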

Example CircleCI config

Here are the updates you'd need to make to your CircleCI config.yml:

Add the orbs

orbs:
  ci-tools: kanopi/ci-tools@2  
  scrubber: kanopi/scrubber@1

Add the parameters

after_db_clone: &after_db_clone << pipeline.parameters.after_db_clone >>
parameters:
  after_db_clone:
    type: boolean
    default: false
  target_url:
    type: string
    default: ''
  site_name:
    type: string
    default: ''
  site_env:
    type: string
    default: ''

Add the scrub data workflow.

workflows: 
  scrub-data:
    when: *after_db_clone
    jobs:
      - scrubber/scrub:
          config: scrubber.yml
          context: kanopi-code
          pre-steps:
              - run:
                  name: Setup quicksilver environment variables
                  command: |
                    echo "export SITE_URL=<< pipeline.parameters.target_url >>" >> "$BASH_ENV"
                    echo "export SITE_NAME=<< pipeline.parameters.site_name >>" >> "$BASH_ENV"
                    echo "export SITE_ENV=<< pipeline.parameters.site_env >>" >> "$BASH_ENV"
              - ci-tools/install-terminus
              - run:
                  name: Setup Pantheon database variables
                  command: |
                    terminus auth:login --machine-token="$TERMINUS_TOKEN"
                    # Capture the JSON
                    TERMINUS_DB_JSON=$(terminus connection:info \
                      --format=json \
                      --fields=mysql_username,mysql_host,mysql_password,mysql_port,mysql_database \
                      -- "$SITE_NAME.$SITE_ENV")

                    # Parse into shell vars
                    MYSQL_USER=$(echo "$TERMINUS_DB_JSON" | jq -r '.mysql_username')
                    MYSQL_HOST=$(echo "$TERMINUS_DB_JSON" | jq -r '.mysql_host')
                    MYSQL_PASSWORD=$(echo "$TERMINUS_DB_JSON" | jq -r '.mysql_password')
                    MYSQL_PORT=$(echo "$TERMINUS_DB_JSON" | jq -r '.mysql_port')
                    MYSQL_DATABASE=$(echo "$TERMINUS_DB_JSON" | jq -r '.mysql_database')

                    # Append to $BASH_ENV with proper quoting
                    echo "export MYSQL_USER=\"$MYSQL_USER\"" >> "$BASH_ENV"
                    echo "export MYSQL_HOST=\"$MYSQL_HOST\""       >> "$BASH_ENV"
                    echo "export MYSQL_PASSWORD=\"$MYSQL_PASSWORD\"" >> "$BASH_ENV"
                    echo "export MYSQL_PORT=\"$MYSQL_PORT\""       >> "$BASH_ENV"
                    echo "export MYSQL_DATABASE=\"$MYSQL_DATABASE\"" >> "$BASH_ENV"

If you have other workflows, they may be triggered by the API calls made by the hook. To make sure they are not triggered, you can do something like this:

  build-test:
    when:
      not:
        or:
          - equal: [ scheduled_pipeline, << pipeline.trigger_source >> ]
          - << pipeline.parameters.after_db_clone >>

or

  PHPcs:
    when:
      and:
        - not: << pipeline.parameters.after_db_clone >>