scrubber: Extend scrubber service with backend model management
(best reviewed commit by commit)
Like the other templates, it's now able to manage backend initialization (when starting from a scratch environment), migration (for future releases), or a plain backend version check (which prevents the container from starting when a migration is due).
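Schematically, those three modes boil down to the decision logic sketched below. This is an illustrative stand-in, not the actual template code: `manage_backend` and its arguments are hypothetical names, with `$2` the deployed backend version (empty when starting from scratch) and `$3` the version the code expects.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the three backend-management modes.
# $1 = mode, $2 = deployed version ("" if none), $3 = expected version.
manage_backend() {
    local mode="$1" current="$2" expected="$3"
    case "$mode" in
        initialize)  # scratch environment: create the backend model
            if [ -z "$current" ]; then echo "init to $expected"
            else echo "already initialized"; fi ;;
        migrate)     # future release: bring the model up to date
            if [ "$current" != "$expected" ]; then echo "migrate $current -> $expected"
            else echo "up to date"; fi ;;
        check)       # plain version check: block start when a migration is due
            if [ "$current" = "$expected" ]; then echo "version ok"
            else echo "migration due"; return 1; fi ;;
    esac
}
```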
Another commit allows declaring the (optional) scrubber configuration inside the "backend" management key, so that configuration gets initialized and the service can start without crashing in an empty environment.
The diff is large because the utilities are installed in the database-utils configmap, which is used all over the place. The actual impact is very limited.
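For context, the workaround for the non-idempotent registration (see the register-scrubber-configuration.sh script in the diff) is a plain check-then-create guard. A minimal self-contained sketch, where `list_configs` and `init_config` are stand-ins for the real `swh scrubber check list` / `swh scrubber check init` calls:

```shell
#!/usr/bin/env bash
# Sketch of the guard: `swh scrubber check init` is not idempotent, so
# only call it when the configuration name is absent from the list.
# list_configs/init_config are stand-ins backed by a scratch file.
REGISTRY="${TMPDIR:-/tmp}/scrubber-demo-registry"
: > "$REGISTRY"

list_configs() { cat "$REGISTRY"; }
init_config()  { echo "$1" >> "$REGISTRY"; }

register_once() {
    local name="$1"
    if list_configs | grep -qx "$name"; then
        echo "Configuration $name already exists in scrubber, do nothing"
    else
        init_config "$name"
        echo "Configuration $name registered"
    fi
}
```

Running `register_once` twice with the same name performs the registration once and turns the second run into a no-op, which is what lets the init container be re-executed on every pod start.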
helm diff
[swh] Comparing changes between branches production and make-next-version-independent (per environment)...
Your branch is up to date with 'origin/production'.
[swh] Generate config in production branch for environment staging, namespace swh...
[swh] Generate config in production branch for environment staging, namespace swh-cassandra...
[swh] Generate config in production branch for environment staging, namespace swh-cassandra-next-version...
[swh] Generate config in make-next-version-independent branch for environment staging...
[swh] Generate config in make-next-version-independent branch for environment staging...
[swh] Generate config in make-next-version-independent branch for environment staging...
Your branch is up to date with 'origin/production'.
[swh] Generate config in production branch for environment production, namespace swh...
[swh] Generate config in production branch for environment production, namespace swh-cassandra...
[swh] Generate config in production branch for environment production, namespace swh-cassandra-next-version...
[swh] Generate config in make-next-version-independent branch for environment production...
[swh] Generate config in make-next-version-independent branch for environment production...
[swh] Generate config in make-next-version-independent branch for environment production...
------------- diff for environment staging namespace swh -------------
_ __ __
_| |_ _ / _|/ _| between /tmp/swh-chart.swh.HO3grYti/staging-swh.before, 134 documents
/ _' | | | | |_| |_ and /tmp/swh-chart.swh.HO3grYti/staging-swh.after, 134 documents
| (_| | |_| | _| _|
\__,_|\__, |_| |_| returned 19 differences
|___/
data (v1/ConfigMap/swh/database-utils)
+ one map entry added:
register-scrubber-configuration.sh: |
#!/usr/bin/env bash
set -eux
# Note: The subcommand swh uses internally the environment variable
# SWH_CONFIG_FILENAME
# Usage: swh scrubber check init [OPTIONS] {storage|journal|objstorage}
#
# Initialise a scrubber check configuration for the datastore defined in the
# configuration file and given object_type.
#
# A checker configuration configuration consists simply in a set of:
#
# - backend: the datastore type being scrubbed (storage, objstorage or
# journal),
#
# - object-type: the type of object being checked,
#
# - nb-partitions: the number of partitions the hash space is divided in;
# must be a power of 2,
#
# - name: an unique name for easier reference,
#
# - check-hashes: flag (default to True) to select the hash validation step
# for this scrubbing configuration,
#
# - check-references: flag (default to True for storage and False for the
# journal backend) to select the reference validation step for this
# scrubbing configuration.
#
# Options:
# --object-type [snapshot|revision|release|directory|content]
# --nb-partitions INTEGER
# --name TEXT
# --check-hashes / --no-check-hashes
# --check-references / --no-check-references
# -h, --help Show this message and exit.
extra_cmd=""
[ ! -z "${NB_PARTITIONS}" ] && extra_cmd="${extra_cmd} --nb-partitions $NB_PARTITIONS"
[ "${CHECK_HASHES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-hashes"
[ "${CHECK_REFERENCES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-references"
# Check whether the configuration already exists (the subcommand script is not
# idempotent)
config_exists=$(swh scrubber check list | awk -v name=$NAME '/name/{print substr($2,1,length($2)-1)}')
if [ "${config_exists}" = "${NAME}" ]; then
echo "Configuration ${NAME} already exists in scrubber, do nothing"
exit 0
fi
swh scrubber check init \
--name $NAME \
--object-type $OBJECT_TYPE \
$extra_cmd \
$BACKEND
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-directory)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-release)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-revision)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-snapshot)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-content)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-directory)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-extid)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-metadata)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-origin)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-origin-visit)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-origin-visit-status)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-raw-extrinsic-metadata)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-release)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-revision)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-skipped-content)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh/storage-replayer-snapshot)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh/storage-postgresql-read-only)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh/storage-postgresql-read-write)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
------------- diff for environment staging namespace swh-cassandra -------------
_ __ __
_| |_ _ / _|/ _| between /tmp/swh-chart.swh.HO3grYti/staging-swh-cassandra.before, 422 documents
/ _' | | | | |_| |_ and /tmp/swh-chart.swh.HO3grYti/staging-swh-cassandra.after, 422 documents
| (_| | |_| | _| _|
\__,_|\__, |_| |_| returned 16 differences
|___/
data (v1/ConfigMap/swh-cassandra/database-utils)
+ one map entry added:
register-scrubber-configuration.sh: |
#!/usr/bin/env bash
set -eux
# Note: The subcommand swh uses internally the environment variable
# SWH_CONFIG_FILENAME
# Usage: swh scrubber check init [OPTIONS] {storage|journal|objstorage}
#
# Initialise a scrubber check configuration for the datastore defined in the
# configuration file and given object_type.
#
# A checker configuration configuration consists simply in a set of:
#
# - backend: the datastore type being scrubbed (storage, objstorage or
# journal),
#
# - object-type: the type of object being checked,
#
# - nb-partitions: the number of partitions the hash space is divided in;
# must be a power of 2,
#
# - name: an unique name for easier reference,
#
# - check-hashes: flag (default to True) to select the hash validation step
# for this scrubbing configuration,
#
# - check-references: flag (default to True for storage and False for the
# journal backend) to select the reference validation step for this
# scrubbing configuration.
#
# Options:
# --object-type [snapshot|revision|release|directory|content]
# --nb-partitions INTEGER
# --name TEXT
# --check-hashes / --no-check-hashes
# --check-references / --no-check-references
# -h, --help Show this message and exit.
extra_cmd=""
[ ! -z "${NB_PARTITIONS}" ] && extra_cmd="${extra_cmd} --nb-partitions $NB_PARTITIONS"
[ "${CHECK_HASHES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-hashes"
[ "${CHECK_REFERENCES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-references"
# Check whether the configuration already exists (the subcommand script is not
# idempotent)
config_exists=$(swh scrubber check list | awk -v name=$NAME '/name/{print substr($2,1,length($2)-1)}')
if [ "${config_exists}" = "${NAME}" ]; then
echo "Configuration ${NAME} already exists in scrubber, do nothing"
exit 0
fi
swh scrubber check init \
--name $NAME \
--object-type $OBJECT_TYPE \
$extra_cmd \
$BACKEND
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh-cassandra/indexer-storage-rpc)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-content)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-directory)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-extid)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-metadata)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-origin)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-origin-visit)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-origin-visit-status)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-raw-extrinsic-metadata)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-release)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-revision)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-skipped-content)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-snapshot)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh-cassandra/storage-cassandra)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh-cassandra/storage-cassandra-read-only)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
------------- diff for environment staging namespace swh-cassandra-next-version -------------
_ __ __
_| |_ _ / _|/ _| between /tmp/swh-chart.swh.HO3grYti/staging-swh-cassandra-next-version.before, 306 documents
/ _' | | | | |_| |_ and /tmp/swh-chart.swh.HO3grYti/staging-swh-cassandra-next-version.after, 306 documents
| (_| | |_| | _| _|
\__,_|\__, |_| |_| returned 14 differences
|___/
data (v1/ConfigMap/swh-cassandra-next-version/database-utils)
+ one map entry added:
register-scrubber-configuration.sh: |
#!/usr/bin/env bash
set -eux
# Note: The subcommand swh uses internally the environment variable
# SWH_CONFIG_FILENAME
# Usage: swh scrubber check init [OPTIONS] {storage|journal|objstorage}
#
# Initialise a scrubber check configuration for the datastore defined in the
# configuration file and given object_type.
#
# A checker configuration configuration consists simply in a set of:
#
# - backend: the datastore type being scrubbed (storage, objstorage or
# journal),
#
# - object-type: the type of object being checked,
#
# - nb-partitions: the number of partitions the hash space is divided in;
# must be a power of 2,
#
# - name: an unique name for easier reference,
#
# - check-hashes: flag (default to True) to select the hash validation step
# for this scrubbing configuration,
#
# - check-references: flag (default to True for storage and False for the
# journal backend) to select the reference validation step for this
# scrubbing configuration.
#
# Options:
# --object-type [snapshot|revision|release|directory|content]
# --nb-partitions INTEGER
# --name TEXT
# --check-hashes / --no-check-hashes
# --check-references / --no-check-references
# -h, --help Show this message and exit.
extra_cmd=""
[ ! -z "${NB_PARTITIONS}" ] && extra_cmd="${extra_cmd} --nb-partitions $NB_PARTITIONS"
[ "${CHECK_HASHES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-hashes"
[ "${CHECK_REFERENCES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-references"
# Check whether the configuration already exists (the subcommand script is not
# idempotent)
config_exists=$(swh scrubber check list | awk -v name=$NAME '/name/{print substr($2,1,length($2)-1)}')
if [ "${config_exists}" = "${NAME}" ]; then
echo "Configuration ${NAME} already exists in scrubber, do nothing"
exit 0
fi
swh scrubber check init \
--name $NAME \
--object-type $OBJECT_TYPE \
$extra_cmd \
$BACKEND
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-content)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-directory)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-extid)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-metadata)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-origin)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-origin-visit)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-origin-visit-status)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-raw-extrinsic-metadata)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-release)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-revision)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-skipped-content)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra-next-version/storage-replayer-snapshot)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh-cassandra-next-version/storage-postgresql-read-write)
± value change
- af34f341e68217e6b4a54ac0c90c7c7c6e4f1dc9bed8243dbedd057a7f43872b
+ 261c3b71d65c091fcbec252185d2b25d512db1d838b2695c837a4bfe48b1c5d8
------------- diff for environment production namespace swh -------------
_ __ __
_| |_ _ / _|/ _| between /tmp/swh-chart.swh.HO3grYti/production-swh.before, 448 documents
/ _' | | | | |_| |_ and /tmp/swh-chart.swh.HO3grYti/production-swh.after, 448 documents
| (_| | |_| | _| _|
\__,_|\__, |_| |_| returned 8 differences
|___/
data (v1/ConfigMap/swh/database-utils)
+ one map entry added:
register-scrubber-configuration.sh: |
#!/usr/bin/env bash
set -eux
# Note: The subcommand swh uses internally the environment variable
# SWH_CONFIG_FILENAME
# Usage: swh scrubber check init [OPTIONS] {storage|journal|objstorage}
#
# Initialise a scrubber check configuration for the datastore defined in the
# configuration file and given object_type.
#
# A checker configuration configuration consists simply in a set of:
#
# - backend: the datastore type being scrubbed (storage, objstorage or
# journal),
#
# - object-type: the type of object being checked,
#
# - nb-partitions: the number of partitions the hash space is divided in;
# must be a power of 2,
#
# - name: an unique name for easier reference,
#
# - check-hashes: flag (default to True) to select the hash validation step
# for this scrubbing configuration,
#
# - check-references: flag (default to True for storage and False for the
# journal backend) to select the reference validation step for this
# scrubbing configuration.
#
# Options:
# --object-type [snapshot|revision|release|directory|content]
# --nb-partitions INTEGER
# --name TEXT
# --check-hashes / --no-check-hashes
# --check-references / --no-check-references
# -h, --help Show this message and exit.
extra_cmd=""
[ ! -z "${NB_PARTITIONS}" ] && extra_cmd="${extra_cmd} --nb-partitions $NB_PARTITIONS"
[ "${CHECK_HASHES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-hashes"
[ "${CHECK_REFERENCES}" = "false" ] && extra_cmd="${extra_cmd} --no-check-references"
# Check whether the configuration already exists (the subcommand script is not
# idempotent)
config_exists=$(swh scrubber check list | awk -v name=$NAME '/name/{print substr($2,1,length($2)-1)}')
if [ "${config_exists}" = "${NAME}" ]; then
echo "Configuration ${NAME} already exists in scrubber, do nothing"
exit 0
fi
swh scrubber check init \
--name $NAME \
--object-type $OBJECT_TYPE \
$extra_cmd \
$BACKEND
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh/indexer-storage-read-only)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh/indexer-storage-read-write)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-release)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-revision)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.spec.initContainers (apps/v1/Deployment/swh/scrubber-journalchecker-snapshot)
- one list entry removed:
- name: check-migration
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
+ one list entry added:
- name: check-scrubber-migration
# TODO: Add the "datastore" registration
# A workaround is needed as the registration is not idempotent
# and can't be launched each time a scrubber is launched
image: "container-registry.softwareheritage.org/swh/infra/swh-apps/scrubber:20240618.2"
command:
- /entrypoints/check-backend-version.sh
env:
- name: MODULE
value: scrubber
- name: MODULE_CONFIG_KEY
value:
- name: SWH_CONFIG_FILENAME
value: /etc/swh/config.yml
volumeMounts:
- name: configuration
mountPath: /etc/swh
- name: database-utils
mountPath: /entrypoints
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh/storage-postgresql-azure-readonly)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh/storage-postgresql-winery)
± value change
- f9f2a0335479926eb95cc81db0380c75e02842906faba46d33671990aa3e26b5
+ c9119c215addb0f487d78b556bec6a671d863bf4c6d951a273a592f72d75b90a
------------- diff for environment production namespace swh-cassandra -------------
_ __ __
_| |_ _ / _|/ _| between /tmp/swh-chart.swh.HO3grYti/production-swh-cassandra.before, 117 documents
/ _' | | | | |_| |_ and /tmp/swh-chart.swh.HO3grYti/production-swh-cassandra.after, 117 documents
| (_| | |_| | _| _|
\__,_|\__, |_| |_| returned 15 differences
|___/
data (v1/ConfigMap/swh-cassandra/database-utils)
+ one map entry added:
register-scrubber-configuration.sh: |
#!/usr/bin/env bash
set -eux
# Note: The subcommand swh uses internally the environment variable
# SWH_CONFIG_FILENAME
# Usage: swh scrubber check init [OPTIONS] {storage|journal|objstorage}
#
# Initialise a scrubber check configuration for the datastore defined in the
# configuration file and given object_type.
#
# A checker configuration consists simply of a set of:
#
# - backend: the datastore type being scrubbed (storage, objstorage or
# journal),
#
# - object-type: the type of object being checked,
#
# - nb-partitions: the number of partitions the hash space is divided in;
# must be a power of 2,
#
# - name: a unique name for easier reference,
#
# - check-hashes: flag (defaults to True) to select the hash validation step
# for this scrubbing configuration,
#
# - check-references: flag (defaults to True for storage and False for the
# journal backend) to select the reference validation step for this
# scrubbing configuration.
#
# Options:
# --object-type [snapshot|revision|release|directory|content]
# --nb-partitions INTEGER
# --name TEXT
# --check-hashes / --no-check-hashes
# --check-references / --no-check-references
# -h, --help Show this message and exit.
extra_cmd=""
# Use ${VAR:-} so optional, unset variables do not abort the script under set -u
[ -n "${NB_PARTITIONS:-}" ] && extra_cmd="${extra_cmd} --nb-partitions $NB_PARTITIONS"
[ "${CHECK_HASHES:-}" = "false" ] && extra_cmd="${extra_cmd} --no-check-hashes"
[ "${CHECK_REFERENCES:-}" = "false" ] && extra_cmd="${extra_cmd} --no-check-references"
# Check whether the configuration already exists (the subcommand script is not
# idempotent)
config_exists=$(swh scrubber check list | awk -v name="$NAME" '/name/{v=substr($2,1,length($2)-1); if (v == name) print v}')
if [ "${config_exists}" = "${NAME}" ]; then
echo "Configuration ${NAME} already exists in scrubber, do nothing"
exit 0
fi
swh scrubber check init \
--name "$NAME" \
--object-type "$OBJECT_TYPE" \
$extra_cmd \
"$BACKEND"
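As a standalone illustration of the idempotency guard above: the script parses the `swh scrubber check list` output with awk, strips the trailing comma from the name field, and compares the result to `$NAME`. The sample line below is a hypothetical approximation of that output format, used only to exercise the extraction logic:

```shell
#!/usr/bin/env bash
set -eu
# Hypothetical sample line mimicking `swh scrubber check list` output;
# the real format may differ.
sample_output="name: storage_snapshot_64,"
NAME="storage_snapshot_64"
# Same extraction as register-scrubber-configuration.sh: take field 2
# of the matching line and drop the trailing comma.
config_exists=$(printf '%s\n' "$sample_output" \
  | awk -v name="$NAME" '/name/{print substr($2,1,length($2)-1)}')
if [ "$config_exists" = "$NAME" ]; then
  echo "Configuration ${NAME} already exists in scrubber, do nothing"
fi
```

If the name is already present, the guard exits early instead of re-running the non-idempotent `swh scrubber check init`.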
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-content)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-directory)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-extid)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-metadata)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-origin)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-origin-visit)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-origin-visit-status)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-raw-extrinsic-metadata)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-release)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-revision)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-skipped-content)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/config_utils (apps/v1/Deployment/swh-cassandra/storage-replayer-snapshot)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh-cassandra/storage-cassandra-readonly)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
spec.template.metadata.annotations.checksum/database-utils (apps/v1/Deployment/swh-cassandra/storage-cassandra-readonly-internal)
± value change
- 769fc7404b2ab1054abe4a7c2021abed585a8086ef44d3430a1864c40419992d
+ 36e12c3859589c91a735438604a56000650a56e5b3522276da3d4a3e89464a64
Edited by Antoine R. Dumont