Upgrade overview
The upgrade procedure for Project Quay depends on the type of installation you are using.
The Project Quay Operator provides a simple method to deploy and manage a Project Quay cluster. This is the preferred procedure for deploying Project Quay on OpenShift.
The Project Quay Operator should be upgraded using the Operator Lifecycle Manager (OLM) as described in the section "Upgrading Quay using the Quay Operator".
The procedure for upgrading a proof of concept or highly available installation of Project Quay and Clair is documented in the section "Standalone upgrade".
Upgrading the Project Quay Operator Overview
The Project Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Project Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Project Quay to deploy; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Project Quay on Kubernetes.
Operator Lifecycle Manager
The Project Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM). When creating a Subscription with the default approvalStrategy: Automatic, OLM will automatically upgrade the Project Quay Operator whenever a new version becomes available.
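For reference, a minimal Subscription sketch is shown below. In the Subscription spec, the approval strategy is expressed through the installPlanApproval field (the console surfaces this as the approval strategy); the package, catalog source, and namespace names here are assumptions that depend on your cluster:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator            # assumed subscription name
  namespace: openshift-operators
spec:
  channel: stable-3.13           # update channel to track
  installPlanApproval: Automatic # set to Manual to require approval of each InstallPlan
  name: quay-operator            # assumed package name in the catalog
  source: redhat-operators       # assumed catalog source
  sourceNamespace: openshift-marketplace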
Warning
When the Project Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the OperatorHub page for the Project Quay Operator during installation. It can also be found in the Project Quay Operator Subscription object under the approvalStrategy field.
Upgrading the Project Quay Operator
The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators.
In general, Project Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Project Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows:
- 3.0.5 → 3.1.3
- 3.1.3 → 3.2.2
- 3.2.2 → 3.3.4
- 3.3.4 → 3.4.z
- 3.4.z → 3.5.z
This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
In some cases, Project Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Project Quay 3.13:
- 3.11.z → 3.13.z
- 3.12.z → 3.13.z
For users on standalone deployments of Project Quay wanting to upgrade to 3.13, see the Standalone upgrade guide.
Upgrading Project Quay to version 3.13
To update Project Quay from one minor version to the next, for example, 3.12.z → 3.13, you must change the update channel for the Project Quay Operator.
- In the OpenShift Container Platform Web Console, navigate to Operators → Installed Operators.
- Click on the Project Quay Operator.
- Navigate to the Subscription tab.
- Under Subscription details, click Update channel.
- Select stable-3.13 → Save. If you prefer the command line, see the sketch after this procedure.
- Check the progress of the new installation under Upgrade status. Wait until the upgrade status changes to 1 installed before proceeding.
- In your OpenShift Container Platform cluster, navigate to Workloads → Pods. Existing pods should be terminated, or in the process of being terminated.
- Wait for the following pods, which are responsible for upgrading the database and running the alembic migration of existing data, to spin up: clair-postgres-upgrade, quay-postgres-upgrade, and quay-app-upgrade.
- After the clair-postgres-upgrade, quay-postgres-upgrade, and quay-app-upgrade pods are marked as Completed, the remaining pods for your Project Quay deployment spin up. This takes approximately ten minutes.
- Verify that the quay-database pod uses the postgresql-13 image, and that the clair-postgres pods now use the postgresql-15 image.
- After the quay-app pod is marked as Running, you can reach your Project Quay registry.
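The channel change can also be made from the command line. This is a sketch only; the subscription name and namespace are assumptions that depend on how the Operator was installed:

$ oc patch subscription quay-operator -n openshift-operators \
    --type merge -p '{"spec":{"channel":"stable-3.13"}}'
$ oc get pods -n <quay_namespace> -w    # watch the upgrade pods terminate and spin up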
Upgrading to the next minor release version
For z-stream upgrades, for example, 3.12.1 → 3.12.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z-stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic, the Project Quay Operator upgrades automatically to the newest z-stream. This results in automatic, rolling Project Quay updates to newer z-streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.
Upgrading from Project Quay 3.12 to 3.13
With Project Quay 3.13, the volumeSize parameter has been implemented for use with the clairpostgres component of the QuayRegistry custom resource definition (CRD). This replaces the volumeSize parameter that was previously used for the clair component of the same CRD.
If your Project Quay 3.12 QuayRegistry custom resource definition (CRD) implemented a volume override for the clair component, you must ensure that the volumeSize field is included under the clairpostgres component of the QuayRegistry CRD.
Important
Failure to move the volumeSize parameter from the clair component to the clairpostgres component of the QuayRegistry CRD will result in a failed upgrade.
For example:
spec:
components:
- kind: clair
managed: true
- kind: clairpostgres
managed: true
overrides:
volumeSize: <volume_size>
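You can apply this change before starting the upgrade by editing the QuayRegistry object directly; for example, with placeholder names:

$ oc edit quayregistry <registry_name> -n <quay_namespace>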
Changing the update channel for the Project Quay Operator
The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Project Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Project Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
Manually approving a pending Operator upgrade
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Project Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Project Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade.
The following image shows the Subscription tab in the UI, including the update Channel, the Approval strategy, the Upgrade status, and the InstallPlan:
The list of Installed Operators provides a high-level summary of the current Quay installation:
Upgrading a QuayRegistry resource
When the Project Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:
- If status.currentVersion is unset, reconcile as normal.
- If status.currentVersion equals the Operator version, reconcile as normal.
- If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator’s version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone.
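You can inspect the version the Operator last reconciled by reading status.currentVersion directly; a quick check, with placeholder names:

$ oc get quayregistry <registry_name> -n <quay_namespace> -o jsonpath='{.status.currentVersion}'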
Upgrading a QuayEcosystem
Upgrades are supported from previous versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry, use the following procedure.
- Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem (or apply the label in one step, as shown in the sketch after this procedure):
$ oc edit quayecosystem <quayecosystemname>
metadata:
  labels:
    quay-operator/migrate: "true"
- Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem. The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true".
- After the status.registryEndpoint of the new QuayRegistry is set, access Project Quay and confirm that all data and settings were migrated successfully.
- If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources.
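As an alternative to oc edit, the migration label can be applied directly; a sketch with placeholder names:

$ oc label quayecosystem <quayecosystemname> -n <namespace> quay-operator/migrate=true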
Reverting QuayEcosystem Upgrade
If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry, follow these steps to revert back to using the QuayEcosystem:
- Delete the QuayRegistry using either the UI or kubectl:
$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
- If external access was provided using a Route, change the Route to point back to the original Service using the UI or kubectl (see the patch sketch after the following note).
Note
If your QuayEcosystem used a managed PostgreSQL database, the migration copied your data to a new database managed by the Operator and left the original database in place, so reverting returns Project Quay to the original, unmodified data.
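The Route change in step 2 can be done with a merge patch; a sketch, with placeholder route and service names:

$ oc patch route <quay_route_name> -n <namespace> --type merge -p '{"spec":{"to":{"name":"<original_service_name>"}}}'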
Supported QuayEcosystem Configurations for Upgrades
The Project Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Project Quay’s config.yaml file.
Database
Ephemeral database not supported (the volumeSize field must be set).
Redis
Nothing special needed.
External Access
Only passthrough Route access is supported for automatic migration. Manual migration is required for other methods.
- LoadBalancer without custom hostname: After the QuayEcosystem is marked with the label "quay-operator/migration-complete": "true", delete the metadata.ownerReferences field from the existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer (see the patch sketch after this list). A new Service will be created with the metadata.name format <QuayEcosystem-name>-quay-app. Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service; the Quay Operator will not manage it.
- LoadBalancer/NodePort/Ingress with custom hostname: A new Service of type LoadBalancer will be created with the metadata.name format <QuayEcosystem-name>-quay-app. Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service.
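Removing the metadata.ownerReferences field can be done with a JSON patch; a sketch with placeholder names:

$ oc patch service <old_service_name> -n <namespace> --type json -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'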
Clair
Nothing special needed.
Object Storage
QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.
Repository Mirroring
Nothing special needed.
Standalone upgrade
In general, Project Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Project Quay 3.8 to the latest version of 3.13 is not supported. Instead, users would have to upgrade as follows:
- 3.8.z → 3.9.z
- 3.9.z → 3.10.z
- 3.10.z → 3.11.z
- 3.11.z → 3.12.z
- 3.12.z → 3.13.z
This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
In some cases, Project Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal N-1 rule simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Project Quay 3.13:
- 3.10.z → 3.13.z
- 3.11.z → 3.13.z
- 3.12.z → 3.13.z
For users wanting to upgrade the Project Quay Operator, see Upgrading the Project Quay Operator Overview.
This document describes the steps needed to perform each individual upgrade. Determine your current version and then follow the steps in sequential order, starting with your current version and working up to your desired target version.
See the Project Quay Release Notes for information on features for individual releases.
The general procedure for a manual upgrade consists of the following steps:
- Stop the Quay and Clair containers.
- Back up the database and image storage (optional but recommended).
- Start Clair using the new version of the image.
- Wait until Clair is ready to accept connections before starting the new version of Quay (see the sketch after this list).
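The following sketch illustrates that sequence for a simple deployment; the container names, backup paths, and credentials are assumptions and should be adapted to your installation:

$ sudo podman stop <quay_container_name> <clair_container_name>
$ sudo podman exec <postgresql_container_name> pg_dump -U <quay_user> <quay_database> > quay-backup.sql
$ sudo cp -a /path/to/quay/storage /path/to/quay-storage-backup
$ sudo podman pull registry.redhat.io/quay/clair-rhel8:{productminv}
$ sudo podman pull {productrepo}/{quayimage}:{productminv}

Then start Clair with the new image using your existing run flags, confirm it accepts connections, and start Quay with the new image.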
Accessing images
Project Quay images from version 3.4.0 and later are available from registry.redhat.io and registry.access.redhat.com, with authentication set up as described in Red Hat Container Registry Authentication.
Upgrading the Clair PostgreSQL database
If you are upgrading Project Quay to version 3.13, you must migrate your Clair PostgreSQL database from PostgreSQL version 13 to version 15. This requires bringing down your Clair PostgreSQL 13 database and running a migration script to initiate the process.
Use the following procedure to upgrade your Clair PostgreSQL database from version 13 to version 15.
Important
Clair security scans might become temporarily disrupted after the migration procedure has succeeded.
- Stop the Project Quay container by entering the following command:
$ sudo podman stop <quay_container_name>
- Stop the Clair container by running the following command:
$ sudo podman stop <clair_container_id>
- Run the following Podman process from SCLOrg’s Data Migration procedure, which allows for data migration from a remote PostgreSQL server:
$ sudo podman run -d --name <clair_migration_postgresql_database> \ (1)
  -e POSTGRESQL_MIGRATION_REMOTE_HOST=<container_ip_address> \ (2)
  -e POSTGRESQL_MIGRATION_ADMIN_PASSWORD=remoteAdminP@ssword \
  -v </host/data/directory:/var/lib/pgsql/data:Z> \ (3)
  [ OPTIONAL_CONFIGURATION_VARIABLES ] \
  registry.redhat.io/rhel8/postgresql-15
(1) Insert a name for your Clair PostgreSQL 15 migration database.
(2) Your new Clair PostgreSQL 15 database container IP address. It can be obtained by running the following command: sudo podman inspect -f "{{.NetworkSettings.IPAddress}}" postgresql-quay.
(3) You must specify a different volume mount point than the one from your initial Clair PostgreSQL 13 deployment, and modify the access control lists for said directory. For example:
$ mkdir -p /host/data/clair-postgresql15-directory
$ setfacl -m u:26:-wx /host/data/clair-postgresql15-directory
This prevents data from being overwritten by the new container.
- Stop the Clair PostgreSQL 13 container:
$ sudo podman stop <clair_postgresql13_container_name>
- After completing the PostgreSQL migration, run the Clair PostgreSQL 15 container, using the new data volume mount from Step 3, for example, </host/data/clair-postgresql15-directory:/var/lib/postgresql/data>:
$ sudo podman run -d --rm --name <postgresql15-clairv4> \
  -e POSTGRESQL_USER=<clair_username> \
  -e POSTGRESQL_PASSWORD=<clair_password> \
  -e POSTGRESQL_DATABASE=<clair_database_name> \
  -e POSTGRESQL_ADMIN_PASSWORD=<admin_password> \
  -p 5433:5432 \
  -v </host/data/clair-postgresql15-directory:/var/lib/postgresql/data:Z> \
  registry.redhat.io/rhel8/postgresql-15
- Start the Project Quay container by entering the following command:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \
  -v /home/<quay_user>/quay-poc/config:/conf/stack:Z \
  -v /home/<quay_user>/quay-poc/storage:/datastorage:Z \
  {productrepo}/{quayimage}:{productminv}
- Start the Clair container by entering the following command:
$ sudo podman run -d --name clairv4 \
  -p 8081:8081 -p 8088:8088 \
  -e CLAIR_CONF=/clair/config.yaml \
  -e CLAIR_MODE=combo \
  registry.redhat.io/quay/clair-rhel8:{productminv}
For more information, see Data Migration.
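After the new container is running, you can confirm the server version and that the Clair database is reachable; a quick check with placeholder names:

$ sudo podman exec -it <postgresql15-clairv4> psql -U <clair_username> -d <clair_database_name> -c "SELECT version();"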
Upgrade to 3.13.z from 3.12.z
Target images
- Quay: quay.io/projectquay/quay:v3.13.1
- Clair: quay.io/projectquay/clair:4.8
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
- Clair-PostgreSQL: registry.redhat.io/rhel8/postgresql-15
Upgrade to 3.13.z from 3.11.z
Target images
- Quay: quay.io/projectquay/quay:v3.13.1
- Clair: quay.io/projectquay/clair:4.8
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
- Clair-PostgreSQL: registry.redhat.io/rhel8/postgresql-15
Upgrade to 3.13.z from 3.10.z
Target images
- Quay: quay.io/projectquay/quay:v3.13.1
- Clair: quay.io/projectquay/clair:4.8
- PostgreSQL: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- Redis: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
- Clair-PostgreSQL: registry.redhat.io/rhel8/postgresql-15
Upgrading a geo-replication deployment of standalone Project Quay
Use the following procedure to upgrade your geo-replication Project Quay deployment.
- You have logged into registry.redhat.io.
Note
This procedure assumes that you are running Project Quay services on three (or more) systems. For more information, see Preparing for Project Quay high availability.
Procedure
- Obtain a list of all Project Quay instances on each system running a Project Quay instance.
- Enter the following command on System A to reveal the Project Quay instances:
$ sudo podman ps
Example output:
CONTAINER ID  IMAGE                                              COMMAND   CREATED        STATUS            PORTS                                        NAMES
ec16ece208c0  registry.redhat.io/quay/quay-rhel8:v{producty-n1}  registry  6 minutes ago  Up 6 minutes ago  0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp  quay01
- Enter the following command on System B to reveal the Project Quay instances:
$ sudo podman ps
Example output:
CONTAINER ID  IMAGE                                              COMMAND   CREATED        STATUS            PORTS                                        NAMES
7ae0c9a8b37d  registry.redhat.io/quay/quay-rhel8:v{producty-n1}  registry  5 minutes ago  Up 2 seconds ago  0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp  quay02
- Enter the following command on System C to reveal the Project Quay instances:
$ sudo podman ps
Example output:
CONTAINER ID  IMAGE                                              COMMAND   CREATED        STATUS            PORTS                                        NAMES
e75c4aebfee9  registry.redhat.io/quay/quay-rhel8:v{producty-n1}  registry  4 seconds ago  Up 4 seconds ago  0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp  quay03
Temporarily shut down all Project Quay instances on each system.
-
Enter the following command on System A to shut down the Project Quay instance:
$ sudo podman stop ec16ece208c0
-
Enter the following command on System B to shut down the Project Quay instance:
$ sudo podman stop 7ae0c9a8b37d
-
Enter the following command on System C to shut down the Project Quay instance:
$ sudo podman stop e75c4aebfee9
-
-
Obtain the latest Project Quay version, for example, Project Quay 3.13, on each system.
-
Enter the following command on System A to obtain the latest Project Quay version:
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}
-
Enter the following command on System B to obtain the latest Project Quay version:
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:v{producty}
-
Enter the following command on System C to obtain the latest Project Quay version:
$ sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}
-
-
On System A of your highly available Project Quay deployment, run the new image version, for example, Project Quay 3.13:
# sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay01 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv}
-
Wait for the new Project Quay container to become fully operational on System A. You can check the status of the container by entering the following command:
$ sudo podman ps
Example outputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v{producty} registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01
- Optional: Ensure that Project Quay is fully operational by navigating to the Project Quay UI.
- After ensuring that Project Quay on System A is fully operational, run the new image versions on System B and on System C.
- On System B of your highly available Project Quay deployment, run the new image version, for example, Project Quay 3.13:
$ sudo podman run --restart=always -p 443:8443 -p 80:8080 \
  --sysctl net.core.somaxconn=4096 \
  --name=quay02 \
  -v /mnt/quay/config:/conf/stack:Z \
  -v /mnt/quay/storage:/datastorage:Z \
  -d registry.redhat.io/quay/quay-rhel8:{productminv}
- On System C of your highly available Project Quay deployment, run the new image version, for example, Project Quay 3.13:
$ sudo podman run --restart=always -p 443:8443 -p 80:8080 \
  --sysctl net.core.somaxconn=4096 \
  --name=quay03 \
  -v /mnt/quay/config:/conf/stack:Z \
  -v /mnt/quay/storage:/datastorage:Z \
  -d registry.redhat.io/quay/quay-rhel8:{productminv}
- You can check the status of the containers on System B and on System C by entering the following command:
$ sudo podman ps
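Optionally, confirm that each upgraded instance responds before returning it to service; a quick check against Quay's health endpoint, with placeholder hostnames:

$ curl -k https://<quay01_hostname>/health/instance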
Upgrading a geo-replication deployment of Project Quay on OpenShift Container Platform
Use the following procedure to upgrade your geo-replicated Project Quay on OpenShift Container Platform deployment.
Note
This procedure assumes that you are running the Project Quay registry on three or more systems. For this procedure, we will assume three systems named System A, System B, and System C, with System A serving as the primary system on which the Project Quay Operator is deployed.
Procedure
- On System B and System C, scale down your Project Quay registry. This is done by disabling auto scaling and overriding the replica count for Project Quay, mirror workers, and Clair (if it is managed). Use the following quayregistry.yaml file as a reference:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: registry
  namespace: ns
spec:
  components:
    …
    - kind: horizontalpodautoscaler
      managed: false (1)
    - kind: quay
      managed: true
      overrides: (2)
        replicas: 0
    - kind: clair
      managed: true
      overrides:
        replicas: 0
    - kind: mirror
      managed: true
      overrides:
        replicas: 0
    …
(1) Disable auto scaling of Quay, Clair, and Mirroring workers.
(2) Set the replica count to 0 for components accessing the database and object storage.
Note: You must keep the Project Quay registry running on System A. Do not update the quayregistry.yaml file on System A.
- Wait for the registry-quay-app, registry-quay-mirror, and registry-clair-app pods to disappear. Enter the following command to check their status:
oc get pods -n <quay-namespace>
Example output:
quay-operator.v3.7.1-6f9d859bd-p5ftc            1/1   Running     0            12m
quayregistry-clair-postgres-7487f5bd86-xnxpr    1/1   Running     1 (12m ago)  12m
quayregistry-quay-app-upgrade-xq2v6             0/1   Completed   0            12m
quayregistry-quay-redis-84f888776f-hhgms        1/1   Running     0            12m
- On System A, initiate a Project Quay upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators. For more information about Project Quay upgrade paths, see Upgrading the Project Quay Operator.
- After the new Project Quay registry is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Project Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started.
- Confirm that the update has properly worked by navigating to the Project Quay UI:
- In the OpenShift console, navigate to Operators → Installed Operators, and click the Registry Endpoint link.
Important: Do not execute the following step until the Project Quay UI is available. Do not upgrade the Project Quay registry on System B and on System C until the UI is available on System A.
- After confirming that the update has properly worked on System A, initiate the Project Quay upgrade on System B and on System C. The Operator upgrade results in an upgraded Project Quay installation, and the pods are restarted.
Note: Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should start quickly.
After updating, revert the changes made in step 1 of this procedure by removing
overrides
for the components. For example:apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: … - kind: horizontalpodautoscaler managed: true (1) - kind: quay managed: true - kind: clair managed: true - kind: mirror managed: true …
-
If the
horizontalpodautoscaler
resource was set totrue
before the upgrade procedure, or if you want Project Quay to scale in case of a resource shortage, set it totrue
.
-
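After re-enabling the managed horizontalpodautoscaler, you can confirm that the autoscalers were recreated; a quick check with a placeholder namespace:

$ oc get hpa -n <quay_namespace>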
Upgrade Quay Bridge Operator
To upgrade the Quay Bridge Operator (QBO), change the update channel in the Subscription tab to the desired channel.
When upgrading QBO from version 3.5 to 3.7, a number of extra steps are required:
- Create a new QuayIntegration custom resource. This can be completed in the Web Console or from the command line.
upgrade-quay-integration.yaml:
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration-new
spec:
  clusterID: openshift (1)
  credentialsSecret:
    name: quay-integration
    namespace: openshift-operators
  insecureRegistry: false
  quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com
(1) Make sure that the clusterID matches the value for the existing QuayIntegration resource.
Create the new
QuayIntegration
custom resource:$ oc create -f upgrade-quay-integration.yaml
-
Delete the old
QuayIntegration
custom resource. -
Delete the old
mutatingwebhookconfigurations
:$ oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator
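To confirm the cleanup, list the remaining QuayIntegration resources; a quick check:

$ oc get quayintegration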
Downgrading Project Quay
Project Quay only supports rolling back, or downgrading, to previous z-stream versions, for example, 3.12.3 → 3.12.2. Rolling back to previous y-stream versions (3.13 → 3.12) is not supported. This is because Project Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Project Quay. Database schema upgrades are not considered backwards compatible.
Important
Downgrading to previous z-streams is neither recommended nor supported for either Operator-based deployments or virtual machine-based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Project Quay deployment must be made in conjunction with the Project Quay support and development teams. For more information, contact Project Quay support.