Upgrade overview
The upgrade procedure for Project Quay depends on the type of installation you are using.
The Project Quay Operator provides a simple method to deploy and manage a Project Quay cluster. This is the preferred procedure for deploying Project Quay on OpenShift.
The Project Quay Operator should be upgraded using the Operator Lifecycle Manager (OLM) as described in the section "Upgrading Quay using the Quay Operator".
The procedure for upgrading a proof-of-concept or highly available installation of Project Quay and Clair is documented in the section "Standalone upgrade".
Upgrading the Project Quay Operator Overview
Note
Currently, upgrading the Project Quay Operator is not supported on IBM Power and IBM Z.
The Project Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Project Quay and the components that it manages. There is no field on the QuayRegistry custom resource that sets the version of Project Quay to deploy; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Project Quay on Kubernetes.
Operator Lifecycle Manager
The Project Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM). When creating a Subscription with the default approvalStrategy: Automatic, OLM automatically upgrades the Project Quay Operator whenever a new version becomes available.
Warning
When the Project Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the Operator Hub page for the Project Quay Operator during installation. It can also be found in the Project Quay Operator Subscription object in the approvalStrategy field.
Upgrading the Project Quay Operator
The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators.
In general, Project Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Project Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows:
- 3.0.5 → 3.1.3
- 3.1.3 → 3.2.2
- 3.2.2 → 3.3.4
- 3.3.4 → 3.4.z
- 3.4.z → 3.5.z
This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
In some cases, Project Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Project Quay 3.10:
- 3.7.z → 3.10.z
- 3.8.z → 3.10.z
- 3.9.z → 3.10.z
For users on standalone deployments of Project Quay wanting to upgrade to 3.9, see the Standalone upgrade guide.
Upgrading Project Quay
To update Project Quay from one minor version to the next, for example, 3.9 → 3.10, you must change the update channel for the Project Quay Operator.
For z-stream upgrades, for example, 3.9.1 → 3.9.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z-stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic, the Project Quay Operator upgrades automatically to the newest z-stream. This results in automatic, rolling Project Quay updates to newer z-streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.
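With a Manual approval strategy, a pending update can also be approved from the command line by patching its InstallPlan, as described in the OLM documentation. A minimal sketch; the namespace and install plan name are placeholders:

```shell
# List pending install plans in the namespace where the Operator is installed:
$ oc get installplan -n <quay_operator_namespace>

# Approve a specific plan; its APPROVED column shows "false" until then:
$ oc patch installplan <install_plan_name> -n <quay_operator_namespace> \
    --type merge -p '{"spec":{"approved":true}}'
```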
Removing config editor objects on Project Quay Operator
The config editor has been removed from the Project Quay Operator on OpenShift Container Platform deployments. As a result, the quay-config-editor pod no longer deploys, and users cannot check the status of the config editor route. Additionally, the Config Editor Endpoint no longer generates on the Project Quay Operator Details page.
Users with existing Project Quay Operators who are upgrading from 3.7, 3.8, or 3.9 to 3.10 must manually remove the Project Quay config editor by removing the pod, deployment, route, service, and secret objects.
To remove the deployment, route, service, and secret objects, use the following procedure.
- You have deployed Project Quay version 3.7, 3.8, or 3.9.
- You have a valid QuayRegistry object.
- Obtain the quayregistry-quay-config-editor route object by entering the following command:
$ oc get route
Example output
quayregistry-quay-config-editor
- Remove the quayregistry-quay-config-editor route object by entering the following command:
$ oc delete route quayregistry-quay-config-editor
- Obtain the quayregistry-quay-config-editor deployment object by entering the following command:
$ oc get deployment
Example output
quayregistry-quay-config-editor
- Remove the quayregistry-quay-config-editor deployment object by entering the following command:
$ oc delete deployment quayregistry-quay-config-editor
- Obtain the quayregistry-quay-config-editor service object by entering the following command:
$ oc get svc | grep config-editor
Example output
quayregistry-quay-config-editor   ClusterIP   172.30.219.194   <none>   80/TCP   6h15m
- Remove the quayregistry-quay-config-editor service object by entering the following command:
$ oc delete service quayregistry-quay-config-editor
- Obtain the quayregistry-quay-config-editor-credentials secret by entering the following command:
$ oc get secret | grep config-editor
Example output
quayregistry-quay-config-editor-credentials-mb8kchfg92   Opaque   2   52m
- Delete the quayregistry-quay-config-editor-credentials secret by entering the following command:
$ oc delete secret quayregistry-quay-config-editor-credentials-mb8kchfg92
- Obtain the quayregistry-quay-config-editor pod by entering the following command:
$ oc get pod
Example output
quayregistry-quay-config-editor-c866f64c4-68gtb   1/1   Running   0   49m
- Delete the quayregistry-quay-config-editor pod by entering the following command:
$ oc delete pod quayregistry-quay-config-editor-c866f64c4-68gtb
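Assuming the object names used in this procedure, the route, deployment, and service deletions can also be combined into a single command; the secret and pod names carry generated suffixes, so they still need to be looked up individually first. A sketch:

```shell
# Delete the route, deployment, and service in one call
# (all three share the same resource name):
$ oc delete route,deployment,service quayregistry-quay-config-editor

# The secret and pod names include generated suffixes; find them first,
# then delete them by the names shown:
$ oc get secret,pod | grep config-editor
```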
Upgrading with custom SSL/TLS certificate/key pairs without Subject Alternative Names
There is an issue for customers using their own SSL/TLS certificate/key pairs without Subject Alternative Names (SANs) when upgrading from Project Quay 3.3.4 to Project Quay 3.6 directly. During the upgrade to Project Quay 3.6, the deployment is blocked, with the error message from the Project Quay Operator pod logs indicating that the Project Quay SSL/TLS certificate must have SANs.
If possible, you should regenerate your SSL/TLS certificates with the correct hostname in the SANs. A possible workaround involves defining an environment variable in the quay-app, quay-upgrade, and quay-config-editor pods after the upgrade to enable CommonName matching:
GODEBUG=x509ignoreCN=0
The GODEBUG=x509ignoreCN=0 flag enables the legacy behavior of treating the CommonName field on X.509 certificates as a hostname when no SANs are present. However, this workaround is not recommended, as it will not persist across a redeployment.
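One way to define this variable on OpenShift is with oc set env; a sketch, assuming the default deployment names of an Operator-managed registry:

```shell
# Set the flag on the Quay application deployment (repeat for the
# quay-upgrade and quay-config-editor deployments as needed):
$ oc set env deployment/quayregistry-quay-app GODEBUG=x509ignoreCN=0
```

Note that the Operator may revert such overrides the next time it reconciles the deployment, which is why this workaround does not persist across a redeployment.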
Changing the update channel for the Project Quay Operator
The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Project Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Project Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
Manually approving a pending Operator upgrade
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Project Quay Operator has a pending upgrade, this status is displayed in the list of Installed Operators. In the Subscription tab for the Project Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade.
The following image shows the Subscription tab in the UI, including the update Channel, the Approval strategy, the Upgrade status, and the InstallPlan:
The list of Installed Operators provides a high-level summary of the current Quay installation:
Upgrading a QuayRegistry resource
When the Project Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:
- If status.currentVersion is unset, reconcile as normal.
- If status.currentVersion equals the Operator version, reconcile as normal.
- If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone.
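The version that the Operator has recorded for a given registry can be read directly from the status field described above; a sketch, assuming a QuayRegistry named registry:

```shell
# Print the version the Operator last reconciled this registry to:
$ oc get quayregistry registry -n <quay_namespace> \
    -o jsonpath='{.status.currentVersion}'
```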
Upgrading a QuayEcosystem
Upgrades are supported from previous versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry, use the following procedure.
- Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem:
$ oc edit quayecosystem <quayecosystemname>
metadata:
  labels:
    quay-operator/migrate: "true"
- Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem. The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true".
- After the status.registryEndpoint of the new QuayRegistry is set, access Project Quay and confirm that all data and settings were migrated successfully.
- If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources.
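The migration label from the first step can also be applied non-interactively, and the completion label can be watched from the command line; a sketch using the resource names from the procedure above:

```shell
# Apply the migration label without opening an editor
# (equivalent to the oc edit step above):
$ oc label quayecosystem <quayecosystemname> quay-operator/migrate=true

# Watch for the quay-operator/migration-complete=true label to appear:
$ oc get quayecosystem <quayecosystemname> --show-labels
```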
Reverting QuayEcosystem Upgrade
If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry, follow these steps to revert back to using the QuayEcosystem:
- Delete the QuayRegistry using either the UI or kubectl:
$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
- If external access was provided using a Route, change the Route to point back to the original Service using the UI or kubectl.
Supported QuayEcosystem Configurations for Upgrades
The Project Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Project Quay's config.yaml file.
Database
An ephemeral database is not supported (the volumeSize field must be set).
Redis
Nothing special needed.
External Access
Only passthrough Route access is supported for automatic migration. Manual migration is required for other methods.
- LoadBalancer without custom hostname: After the QuayEcosystem is marked with the label "quay-operator/migration-complete": "true", delete the metadata.ownerReferences field from the existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with the metadata.name format <QuayEcosystem-name>-quay-app. Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service; the Quay Operator will not manage it.
- LoadBalancer/NodePort/Ingress with custom hostname: A new Service of type LoadBalancer will be created with the metadata.name format <QuayEcosystem-name>-quay-app. Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service.
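For the LoadBalancer case above, the metadata.ownerReferences field can be removed with a JSON patch instead of editing the Service by hand; a sketch, where the Service name is a placeholder for your existing QuayEcosystem-managed Service:

```shell
# Remove ownerReferences so the Service survives deletion of the
# QuayEcosystem and is no longer garbage collected:
$ oc patch service <old_service_name> --type json \
    -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'
```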
Clair
Nothing special needed.
Object Storage
QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.
Repository Mirroring
Nothing special needed.
Standalone upgrade
In general, Project Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Project Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows:
- 3.0.5 → 3.1.3
- 3.1.3 → 3.2.2
- 3.2.2 → 3.3.4
- 3.3.4 → 3.4.z
- 3.4.z → 3.5.z
This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
In some cases, Project Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal prior-minor-version-only upgrade path simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Project Quay 3.10:
- 3.7.z → 3.10.z
- 3.8.z → 3.10.z
- 3.9.z → 3.10.z
For users wanting to upgrade the Project Quay Operator, see Upgrading the Project Quay Operator Overview.
This document describes the steps needed to perform each individual upgrade. Determine your current version and then follow the steps in sequential order, starting with your current version and working up to your desired target version.
See the Project Quay Release Notes for information on features for individual releases.
The general procedure for a manual upgrade consists of the following steps:
- Stop the Quay and Clair containers.
- Back up the database and image storage (optional but recommended).
- Start Clair using the new version of the image.
- Wait until Clair is ready to accept connections before starting the new version of Quay.
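For a standalone deployment run with Podman, the four steps above look roughly like the following. Container names, volume paths, image tags, and the backup approach are illustrative, not prescriptive:

```shell
# 1. Stop the running containers:
$ sudo podman stop quay clairv4

# 2. Back up the database and image storage (recommended):
$ sudo podman exec postgresql-quay pg_dumpall -U postgres > quay-db-backup.sql
$ sudo cp -a /home/<quay_user>/quay-poc/storage /home/<quay_user>/storage-backup

# 3. Start Clair with the new image and wait for it to accept connections:
$ sudo podman run -d --name clairv4-new -p 8081:8081 \
    -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo \
    quay.io/projectquay/clair:4.7.2

# 4. Start Quay with the new image against the same config and storage:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay-new \
    -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \
    -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \
    quay.io/projectquay/quay:v3.10.0
```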
Accessing images
Images for Quay 3.4.0 and later are available from registry.redhat.io and registry.access.redhat.com, with authentication set up as described in Red Hat Container Registry Authentication.
Images for Quay 3.3.4 and earlier are available from quay.io, with authentication set up as described in Accessing Project Quay without a CoreOS login.
Upgrade to 3.10.z from 3.9.z
Target images
- Quay: quay.io/projectquay/quay:v3.10.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.10.z from 3.8.z
Target images
- Quay: quay.io/projectquay/quay:v3.10.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.10.z from 3.7.z
Target images
- Quay: quay.io/projectquay/quay:v3.10.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.9.z from 3.8.z
If you are upgrading your standalone Project Quay deployment from 3.8.z → 3.9, it is highly recommended that you upgrade PostgreSQL from version 10 → 13. To upgrade PostgreSQL from 10 → 13, you must bring down your PostgreSQL 10 database and run a migration script to initiate the process.
Use the following procedure to upgrade PostgreSQL from 10 → 13 on a standalone Project Quay deployment.
- Enter the following command to stop the Project Quay container:
$ sudo podman stop <quay_container_name>
- Optional. If you are using Clair, enter the following command to stop the Clair container:
$ sudo podman stop <clair_container_id>
- Run the Podman process from SCLOrg's Data Migration procedure, which allows for data migration from a remote PostgreSQL server:
$ sudo podman run -d --name <migration_postgresql_database> \ (1)
    -e POSTGRESQL_MIGRATION_REMOTE_HOST=172.17.0.2 \ (2)
    -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \
    -v </host/data/directory:/var/lib/pgsql/data:Z> \ (3)
    [ OPTIONAL_CONFIGURATION_VARIABLES ] \
    rhel8/postgresql-13
(1) The name of your PostgreSQL 13 migration database.
(2) The IP address of your current Project Quay PostgreSQL 10 database container. It can be obtained by running the following command: sudo podman inspect -f "{{.NetworkSettings.IPAddress}}" postgresql-quay
(3) You must specify a different volume mount point than the one from your initial PostgreSQL 10 deployment, and modify the access control lists for that directory. For example:
$ mkdir -p /host/data/directory
$ setfacl -m u:26:-wx /host/data/directory
This prevents data from being overwritten by the new container.
- Optional. If you are using Clair, repeat the previous step for the Clair PostgreSQL database container.
- Stop the PostgreSQL 10 container:
$ sudo podman stop <postgresql_container_name>
- After completing the PostgreSQL migration, run the PostgreSQL 13 container, using the new data volume mount from Step 3, for example, </host/data/directory:/var/lib/pgsql/data>:
$ sudo podman run -d --rm --name postgresql-quay \
    -e POSTGRESQL_USER=<username> \
    -e POSTGRESQL_PASSWORD=<password> \
    -e POSTGRESQL_DATABASE=<quay_database_name> \
    -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> \
    -p 5432:5432 \
    -v </host/data/directory:/var/lib/pgsql/data:Z> \
    registry.redhat.io/rhel8/postgresql-13:1-109
- Optional. If you are using Clair, repeat the previous step for the Clair PostgreSQL database container.
- Start the Project Quay container:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \
    -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \
    -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \
    {productrepo}/{quayimage}:{productminv}
- Optional. Restart the Clair container, for example:
$ sudo podman run -d --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    registry.redhat.io/quay/clair-rhel8:v3.9.0
For more information, see Data Migration.
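After the migration completes, you can spot-check that the new container really is running PostgreSQL 13 and that the Quay database is present; a sketch, assuming the container and database names used in the procedure above:

```shell
# Confirm the server version and that the Quay database responds:
$ sudo podman exec postgresql-quay psql -U <username> \
    -d <quay_database_name> -c "SELECT version();"
```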
Target images
- Quay: quay.io/projectquay/quay:v3.9.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.9.z from 3.7.z
If you are upgrading your standalone Project Quay deployment from 3.7.z → 3.9, it is highly recommended that you upgrade PostgreSQL from version 10 → 13. To upgrade PostgreSQL from 10 → 13, you must bring down your PostgreSQL 10 database and run a migration script to initiate the process.
Target images
- Quay: quay.io/projectquay/quay:v3.9.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.8.z from 3.7.z
Target images
- Quay: quay.io/projectquay/quay:v3.8.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.7.z from 3.6.z
Target images
- Quay: quay.io/projectquay/quay:v3.7.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.7.z from 3.5.z
Target images
- Quay: quay.io/projectquay/quay:v3.7.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.7.z from 3.4.z
Target images
- Quay: quay.io/projectquay/quay:v3.7.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.7.z from 3.3.z
Upgrading to Project Quay 3.7 from 3.3 is not supported. Users must first upgrade from 3.3 to 3.6, and then upgrade to 3.7. For more information, see Upgrade to 3.6.z from 3.3.z.
Upgrade to 3.6.z from 3.5.z
Target images
- Quay: quay.io/projectquay/quay:v3.6.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.6.z from 3.4.z
Note
Project Quay 3.6 supports a direct, single-step upgrade from 3.4.z. This exception to the normal prior-minor-version-only upgrade path simplifies the upgrade procedure for customers on older releases.
Upgrading to Project Quay 3.6 from 3.4.z requires a database migration which does not support downgrading back to a prior version of Project Quay. Please back up your database before performing this migration.
Users will also need to configure a completely new Clair v4 instance to replace the old Clair v2 when upgrading from 3.4.z. For instructions on configuring Clair v4, see Setting up Clair on a non-OpenShift Project Quay deployment.
Target images
- Quay: quay.io/projectquay/quay:v3.6.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Upgrade to 3.6.z from 3.3.z
Note
Project Quay 3.6 supports a direct, single-step upgrade from 3.3.z. This exception to the normal prior-minor-version-only upgrade path simplifies the upgrade procedure for customers on older releases.
Upgrading to Project Quay 3.6.z from 3.3.z requires a database migration which does not support downgrading back to a prior version of Project Quay. Please back up your database before performing this migration.
Users will also need to configure a completely new Clair v4 instance to replace the old Clair v2 when upgrading from 3.3.z. For instructions on configuring Clair v4, see Setting up Clair on a non-OpenShift Project Quay deployment.
Target images
- Quay: quay.io/projectquay/quay:v3.6.0
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Swift configuration when upgrading from 3.3.z to 3.6
When upgrading from Project Quay 3.3.z to 3.6.z, some users might receive the following error: Switch auth v3 requires tenant_id (string) in os_options. As a workaround, you can manually update your DISTRIBUTED_STORAGE_CONFIG to add the os_options and tenant_id parameters:
DISTRIBUTED_STORAGE_CONFIG:
brscale:
- SwiftStorage
- auth_url: http://****/v3
auth_version: "3"
os_options:
tenant_id: ****
project_name: ocp-base
user_domain_name: Default
storage_path: /datastorage/registry
swift_container: ocp-svc-quay-ha
swift_password: *****
swift_user: *****
Upgrade to 3.5.7 from 3.4.z
Target images
- Quay: quay.io/projectquay/quay:v3.5.1
- Clair: quay.io/projectquay/clair:4.7.2
- PostgreSQL: registry.redhat.io/rhel8/postgresql-13:1-109
- Redis: registry.redhat.io/rhel8/redis-6:1-110
Upgrading a geo-replication deployment of Project Quay
Use the following procedure to upgrade your geo-replication Project Quay deployment.
- You have logged into registry.redhat.io.
Note
This procedure assumes that you are running Project Quay services on three (or more) systems. For more information, see Preparing for Project Quay high availability.
Procedure
- Obtain a list of all Project Quay instances on each system running a Project Quay instance.
- Enter the following command on System A to reveal the Project Quay instances:
$ sudo podman ps
Example output
CONTAINER ID  IMAGE                                      COMMAND   CREATED        STATUS            PORTS                                        NAMES
ec16ece208c0  registry.redhat.io/quay/quay-rhel8:v3.7.0  registry  6 minutes ago  Up 6 minutes ago  0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp  quay01
- Enter the following command on System B to reveal the Project Quay instances:
$ sudo podman ps
Example output
CONTAINER ID  IMAGE                                      COMMAND   CREATED        STATUS            PORTS                                        NAMES
7ae0c9a8b37d  registry.redhat.io/quay/quay-rhel8:v3.7.0  registry  5 minutes ago  Up 2 seconds ago  0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp  quay02
- Enter the following command on System C to reveal the Project Quay instances:
$ sudo podman ps
Example output
CONTAINER ID  IMAGE                                      COMMAND   CREATED        STATUS            PORTS                                        NAMES
e75c4aebfee9  registry.redhat.io/quay/quay-rhel8:v3.7.0  registry  4 seconds ago  Up 4 seconds ago  0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp  quay03
- Temporarily shut down all Project Quay instances on each system.
- Enter the following command on System A to shut down the Project Quay instance:
$ sudo podman stop ec16ece208c0
- Enter the following command on System B to shut down the Project Quay instance:
$ sudo podman stop 7ae0c9a8b37d
- Enter the following command on System C to shut down the Project Quay instance:
$ sudo podman stop e75c4aebfee9
- Obtain the latest Project Quay version, for example, Project Quay 3.10, on each system.
- Enter the following command on System A to obtain the latest Project Quay version:
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0
- Enter the following command on System B to obtain the latest Project Quay version:
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0
- Enter the following command on System C to obtain the latest Project Quay version:
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0
- On System A of your highly available Project Quay deployment, run the new image version, for example, Project Quay 3.10:
# sudo podman run --restart=always -p 443:8443 -p 80:8080 \
    --sysctl net.core.somaxconn=4096 \
    --name=quay01 \
    -v /mnt/quay/config:/conf/stack:Z \
    -v /mnt/quay/storage:/datastorage:Z \
    -d registry.redhat.io/quay/quay-rhel8:v3.8.0
- Wait for the new Project Quay container to become fully operational on System A. You can check the status of the container by entering the following command:
$ sudo podman ps
Example output
CONTAINER ID  IMAGE                                      COMMAND   CREATED        STATUS            PORTS                                        NAMES
70b9f38c3fb4  registry.redhat.io/quay/quay-rhel8:v3.8.0  registry  2 seconds ago  Up 2 seconds ago  0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp  quay01
- Optional. Ensure that Project Quay is fully operational by navigating to the Project Quay UI.
- After ensuring that Project Quay on System A is fully operational, run the new image versions on System B and on System C.
- On System B of your highly available Project Quay deployment, run the new image version, for example, Project Quay 3.10:
# sudo podman run --restart=always -p 443:8443 -p 80:8080 \
    --sysctl net.core.somaxconn=4096 \
    --name=quay02 \
    -v /mnt/quay/config:/conf/stack:Z \
    -v /mnt/quay/storage:/datastorage:Z \
    -d registry.redhat.io/quay/quay-rhel8:v3.8.0
- On System C of your highly available Project Quay deployment, run the new image version, for example, Project Quay 3.10:
# sudo podman run --restart=always -p 443:8443 -p 80:8080 \
    --sysctl net.core.somaxconn=4096 \
    --name=quay03 \
    -v /mnt/quay/config:/conf/stack:Z \
    -v /mnt/quay/storage:/datastorage:Z \
    -d registry.redhat.io/quay/quay-rhel8:v3.8.0
- You can check the status of the containers on System B and on System C by entering the following command:
$ sudo podman ps
Upgrade Quay Bridge Operator
To upgrade the Quay Bridge Operator (QBO), change the update channel in the Subscription tab to the desired channel.
When upgrading QBO from version 3.5 to 3.7, a number of extra steps are required:
- Create a new QuayIntegration custom resource. This can be completed in the Web Console or from the command line.
upgrade-quay-integration.yaml
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration-new
spec:
  clusterID: openshift (1)
  credentialsSecret:
    name: quay-integration
    namespace: openshift-operators
  insecureRegistry: false
  quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com
(1) Make sure that the clusterID matches the value for the existing QuayIntegration resource.
- Create the new QuayIntegration custom resource:
$ oc create -f upgrade-quay-integration.yaml
- Delete the old QuayIntegration custom resource.
- Delete the old mutatingwebhookconfigurations:
$ oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator
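After the steps above, you can confirm that the new custom resource was created and carries the expected clusterID; a sketch, using the resource name from the example manifest above:

```shell
# Print the clusterID of the new QuayIntegration; it should match
# the value from the old resource:
$ oc get quayintegration example-quayintegration-new \
    -o jsonpath='{.spec.clusterID}'
```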
Upgrading a geo-replication deployment of the Project Quay Operator
Use the following procedure to upgrade your geo-replicated Project Quay Operator.
Note
This procedure assumes that you are running the Project Quay Operator on three (or more) systems. For this procedure, the systems are named System A, System B, and System C.
Procedure
- On System B and System C, scale down your Project Quay Operator deployment. This is done by disabling auto scaling and overriding the replica count for Project Quay, mirror workers, and Clair (if it is managed). Use the following quayregistry.yaml file as a reference:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: registry
  namespace: ns
spec:
  components:
    …
    - kind: horizontalpodautoscaler
      managed: false (1)
    - kind: quay
      managed: true
      overrides: (2)
        replicas: 0
    - kind: clair
      managed: true
      overrides:
        replicas: 0
    - kind: mirror
      managed: true
      overrides:
        replicas: 0
    …
(1) Disables auto scaling of the Quay, Clair, and mirroring workers.
(2) Sets the replica count to 0 for components that access the database and object storage.
Note
You must keep the Project Quay Operator running on System A. Do not update the quayregistry.yaml file on System A.
- Wait for the registry-quay-app, registry-quay-mirror, and registry-clair-app pods to disappear. Enter the following command to check their status:
$ oc get pods -n <quay-namespace>
Example output
quay-operator.v3.7.1-6f9d859bd-p5ftc           1/1   Running     0             12m
quayregistry-clair-postgres-7487f5bd86-xnxpr   1/1   Running     1 (12m ago)   12m
quayregistry-quay-app-upgrade-xq2v6            0/1   Completed   0             12m
quayregistry-quay-redis-84f888776f-hhgms       1/1   Running     0             12m
- On System A, initiate a Project Quay Operator upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators. For more information about Project Quay upgrade paths, see Upgrading the Project Quay Operator.
- After the new Project Quay Operator is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Project Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started.
- Confirm that the update has properly worked by navigating to the Project Quay UI:
- In the OpenShift console, navigate to Operators → Installed Operators, and click the Registry Endpoint link.
Important
Do not execute the following step until the Project Quay UI is available. Do not upgrade the Project Quay Operator on System B and on System C until the UI is available on System A.
- After confirming that the update has properly worked on System A, initiate the Project Quay Operator upgrade on System B and on System C. The Operator upgrade results in an upgraded Project Quay installation, and the pods are restarted.
Note
Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should start quickly.
Downgrading Project Quay
Project Quay only supports rolling back, or downgrading, to previous z-stream versions, for example, 3.7.2 → 3.7.1. Rolling back to previous y-stream versions (3.7.0 → 3.6.0) is not supported. This is because Project Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Project Quay. Database schema upgrades are not considered backwards compatible.
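On a standalone deployment, rolling back one z-stream amounts to stopping the container and starting it again with the previous z-stream tag against the same configuration and storage volumes. A sketch; the container name, paths, and tags are illustrative:

```shell
# Stop the current container (name assumed to be "quay"):
$ sudo podman stop quay

# Restart with the previous z-stream tag, for example 3.7.2 -> 3.7.1,
# reusing the same config and storage volumes:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \
    -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \
    -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \
    quay.io/projectquay/quay:v3.7.1
```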
Important
Downgrading to previous z-streams is neither recommended nor supported by either Operator-based deployments or virtual machine-based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Project Quay deployment must be made in conjunction with the Project Quay support and development teams. For more information, contact Project Quay support.