This document guides you through the process of deploying and configuring Project Quay in your environment using the Project Quay Operator. The Operator simplifies the installation, configuration, and maintenance of your registry, ensuring you have a production-ready container image repository for your enterprise.
Introduction to the Project Quay Operator
The Project Quay Operator is designed to simplify the installation, deployment, and management of the Project Quay container registry on OpenShift Container Platform. By leveraging the Operator framework, you can treat Quay as a native OpenShift Container Platform application, automating common tasks and managing its full lifecycle.
This chapter provides a conceptual overview of the Project Quay Operator’s architecture and configuration model. It covers the following information:
- A configuration overview of Project Quay when deployed on OpenShift Container Platform.
- How the Operator manages Quay’s components, or managed components.
- When and why to use external, or unmanaged, components for dependencies like the database and object storage.
- The function and structure of the configBundleSecret, which handles Quay’s configuration.
- The prerequisites required before installation.
Red Hat Quay on OpenShift Container Platform configuration overview
When deploying Red Hat Quay on OpenShift Container Platform, the registry configuration is managed declaratively through two primary mechanisms: the QuayRegistry custom resource (CR) and the configBundleSecret resource.
Understanding the QuayRegistry CR
The QuayRegistry custom resource (CR) is the interface for defining the desired state of your Quay deployment. This resource focuses on managing the core components of the registry, such as the database, cache, and storage.

The QuayRegistry CR determines whether a component is managed, meaning automatically handled by the Operator, or unmanaged, meaning provided externally by the user.

By default, the QuayRegistry CR contains the following key fields:
- configBundleSecret: The name of a Kubernetes Secret containing the config.yaml file, which defines additional configuration parameters.
- name: The name of your Project Quay registry.
- namespace: The namespace, or project, in which the registry was created.
- spec.components: A list of components that the Operator automatically manages. Each spec.components entry contains two fields:
  - kind: The name of the component.
  - managed: A boolean that indicates whether the component lifecycle is handled by the Project Quay Operator. Setting managed: true for a component in the QuayRegistry CR means that the Operator manages the component.
All QuayRegistry components are automatically managed and auto-filled upon reconciliation for visibility unless specified otherwise. The following sections highlight the major QuayRegistry components and provide an example YAML file that shows the default settings.
Managed components
By default, the Operator handles all required configuration and installation needed for Project Quay’s managed components.
Field | Type | Description |
---|---|---|
quay | Boolean | Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (managed: false). |
postgres | Boolean | Used for storing registry metadata. Currently, PostgreSQL version 13 is used. |
clair | Boolean | Provides image vulnerability scanning. |
redis | Boolean | Stores live builder logs and the locking mechanism that is required for garbage collection. |
horizontalpodautoscaler | Boolean | Adjusts the number of Quay pods depending on memory and CPU consumption. |
objectstorage | Boolean | Stores image layer blobs. When set to managed: false, you must provide your own object storage. |
route | Boolean | Provides an external entrypoint to the Project Quay registry from outside of OpenShift Container Platform. |
mirror | Boolean | Configures repository mirror workers to support optional repository mirroring. |
monitoring | Boolean | Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting Quay pods. |
tls | Boolean | Configures whether SSL/TLS is automatically handled. |
clairpostgres | Boolean | Configures a managed Clair database. This is a separate database from the PostgreSQL database that is used to deploy Project Quay. |
The following example shows you the default configuration for the QuayRegistry custom resource provided by the Project Quay Operator. It is available on the OpenShift Container Platform web console.

QuayRegistry custom resource

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: <example_registry>
namespace: <namespace>
spec:
configBundleSecret: config-bundle-secret
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
Using unmanaged components for dependencies
Although the Project Quay Operator provides an opinionated deployment by automatically managing all required dependencies, this approach might not be suitable for every environment. If you need to integrate existing infrastructure or require specific configurations, you can leverage the Operator to use external, or unmanaged, resources instead. An unmanaged component is any core dependency—such as PostgreSQL, Redis, or object storage—that you deploy and maintain outside of the Operator’s control.
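As an illustrative sketch only (not a prescribed configuration; the user, password, host, and database name in the connection string are hypothetical placeholders), running the database as an unmanaged component touches both the QuayRegistry CR and the config.yaml file:

```yaml
# QuayRegistry CR fragment: hand the database over to your own infrastructure
spec:
  components:
    - kind: postgres
      managed: false
---
# config.yaml fragment: point Quay at the external database
# (quayuser, quaypass, postgres.example.com, and quay are placeholders)
DB_URI: postgresql://quayuser:quaypass@postgres.example.com:5432/quay
```

When the database is unmanaged, the Operator skips deploying its own database pod and relies entirely on the connection details you supply in config.yaml.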
Note
|
If you are using an unmanaged PostgreSQL database, and the version is PostgreSQL 10, it is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy. |
For more information about unmanaged components, see "Advanced configurations".
Understanding the configBundleSecret
The spec.configBundleSecret field is an optional reference to the name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair, where the value is a Project Quay configuration file.

The configBundleSecret stores the config.yaml file. Project Quay administrators can define the following settings through the config.yaml file:
- Authentication backends (for example, OIDC, LDAP)
- External TLS termination settings
- Repository creation policies
- Feature flags
- Notification settings
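For example, a config.yaml fragment touching several of these areas might look like the following sketch (all values are hypothetical placeholders; consult the configuration field reference before using them):

```yaml
# Authentication backend (LDAP shown as an example)
AUTHENTICATION_TYPE: LDAP
LDAP_URI: ldap://ldap.example.com
# External TLS termination
EXTERNAL_TLS_TERMINATION: true
# Repository creation policy and feature flags
CREATE_PRIVATE_REPO_ON_PUSH: true
FEATURE_USER_CREATION: false
```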
Project Quay administrators might update this secret for the following reasons:
- Enable a new authentication method
- Add custom SSL/TLS certificates
- Enable features
- Modify security scanning settings
If this field is omitted, the Project Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay application pods.
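A minimal sketch of such a Secret follows (the name, namespace, and base64 payload are hypothetical; the data value is the base64-encoded contents of config.yaml):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
data:
  # base64-encoded contents of config.yaml (placeholder payload)
  config.yaml: RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUK
```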
Prerequisites for Project Quay on OpenShift Container Platform
Before deploying the Project Quay Operator, ensure that your environment meets the following prerequisites. These requirements cover the minimum cluster version, administrative access, resource capacity, and storage configuration necessary for a successful installation.
OpenShift Container Platform cluster
To deploy and manage the Project Quay Operator, you must meet the following requirements:
- An OpenShift Container Platform cluster running version 4.5 or later.
- An administrative account with sufficient permissions to perform cluster-scoped actions, including the ability to create namespaces.
Resource Requirements
Project Quay requires dedicated compute resources to function effectively. You must ensure that your OpenShift Container Platform cluster has sufficient capacity to accommodate the following requirements for each Project Quay application pod:
Resource type | Requirement |
---|---|
Memory | 8 Gi |
CPU | 2000 millicores (2 vCPUs) |
The Operator creates at least one main application pod per Project Quay deployment that it manages. Plan your cluster capacity accordingly.
Object Storage
Project Quay requires object storage to store all container image layer blobs. You have two options for providing this storage: managed (automated by the Operator) or unmanaged (using an existing external service).
Managed storage overview
If you want the Operator to manage object storage for Project Quay, your cluster must be capable of providing it through the ObjectBucketClaim API. There are multiple implementations of this API available, for instance, NooBaa in combination with Kubernetes PersistentVolumes or scalable storage backends like Ceph. Refer to the NooBaa documentation for more details on how to deploy this component.
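For reference, an ObjectBucketClaim request has roughly the following shape (the claim name, namespace, and storage class are hypothetical; when objectstorage is managed, the Operator creates a claim like this for you):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: quay-datastore
  namespace: quay-enterprise
spec:
  # The bucket name is generated from this prefix by the provisioner
  generateBucketName: quay-datastore
  # A storage class backed by an ObjectBucketClaim provisioner, for example NooBaa
  storageClassName: openshift-storage.noobaa.io
```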
Unmanaged storage overview
When your environment requires a connection to a storage provider that you manage, for example, AWS S3, Google Cloud Storage, or a self-hosted S3-compatible service, you can leverage unmanaged storage. Project Quay supports the following major cloud and on-premises object storage providers:
- Amazon Web Services (AWS) S3
- AWS STS S3 (Security Token Service)
- AWS CloudFront (CloudFront S3Storage)
- Google Cloud Storage
- Microsoft Azure Blob Storage
- Swift Storage
- Nutanix Object Storage
- IBM Cloud Object Storage
- NetApp ONTAP S3 Object Storage
- Hitachi Content Platform (HCP) Object Storage
For a complete list of object storage providers, see the Quay Enterprise 3.x support matrix.
For example configurations of external object storage, see Storage object configuration fields, which provides the required YAML configuration examples, credential formatting, and full field descriptions for all supported external storage providers.
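As one hedged example, an AWS S3 entry in config.yaml generally follows this shape (the bucket, keys, and region host are placeholders; see the storage configuration reference above for the authoritative field names for each provider):

```yaml
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      s3_bucket: <bucket_name>
      storage_path: /registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```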
StorageClass
The Project Quay Operator automatically deploys dedicated PostgreSQL databases for both the main Quay registry and the Clair vulnerability scanner. Both of these databases require persistent storage to ensure data integrity and availability.
To enable the Operator to provision this storage seamlessly, your cluster must have a default StorageClass configured. The Operator uses this default StorageClass to create the Persistent Volume Claims (PVCs) required by the Quay and Clair databases. These PVCs ensure that your registry metadata and vulnerability data persist across pod restarts, node failures, and upgrades.
Important
|
Before proceeding with the installation, verify that a default StorageClass is configured in your cluster. |
Installing the Project Quay Operator from the OperatorHub
To install the Project Quay Operator from the OpenShift Container Platform OperatorHub, configure the installation mode and update approval strategy. You should install the Operator cluster-wide to ensure the monitoring component is available; deploying to a specific namespace renders monitoring unavailable.
- On the OpenShift Container Platform web console, click Operators → OperatorHub.
- In the search box, type Project Quay and select the official Project Quay Operator provided by Red Hat.
- Select Install.
- Select the update channel, for example, stable-3.15, and the version.
- For the Installation mode, select one of the following:
  - All namespaces on the cluster. Select this option if you want the Project Quay Operator to be available cluster-wide. Installing the Project Quay Operator cluster-wide is recommended. If you choose a single namespace, the monitoring component is not available.
  - A specific namespace on the cluster. Select this option if you want Project Quay deployed within a single namespace. Note that selecting this option renders the monitoring component unavailable.
- Select an Approval Strategy. Choose to approve either automatic or manual updates. The automatic update strategy is recommended.
- Select Install.
Deploying the Project Quay registry
To deploy the Project Quay registry after installing the Operator, you must create an instance based on the QuayRegistry custom resource (CR). You can do this by using the OpenShift Container Platform web console or the oc command-line interface (CLI). For the registry to deploy successfully, you must have, or configure, an object storage provider.
The following sections provide you with the information necessary to configure managed or unmanaged object storage, and then deploy the Project Quay registry.
Note
|
The following procedures show you how to create a basic Project Quay registry in all namespaces of the OpenShift Container Platform deployment. Depending on your needs, advanced configuration might be necessary. For example, you might need to configure SSL/TLS for your deployment or disable certain components. Advanced configuration practices are covered in later chapters of this guide. |
Deploying the Project Quay registry by using the OpenShift Container Platform web console
Use the OpenShift Container Platform web console to create and deploy a basic Project Quay registry instance.
- You have installed the Project Quay Operator.
- You have administrative privileges to the cluster.
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- On the Red Hat Quay dashboard, click Create instance.
- On the Create QuayRegistry page, review the default settings of the QuayRegistry custom resource (CR). Here, you decide whether to use managed or unmanaged object storage.
  - If you are using the Multicloud Object Gateway or Red Hat OpenShift Data Foundation as your object storage, keep the following settings:

        - kind: objectstorage
          managed: true

  - If you are using a different storage provider, such as Google Cloud Platform, AWS S3, or Nutanix, set the objectstorage component as follows:

        - kind: objectstorage
          managed: false
- Click Create. You are redirected to the Quay Registry tab on the Operator page.
- Click the name of the Project Quay registry that you created, then click Events to view the status of creation. If you used managed storage and leveraged the Multicloud Object Gateway, the registry completes creation. If you are using Red Hat OpenShift Data Foundation or an unmanaged storage backend provider, complete the following steps:
  - Click the Details page of the Project Quay registry.
  - Click the name of the Config Bundle Secret resource, for example, <example_registry_name_config-bundle-secret-12345>.
  - Click Actions → Edit Secret, and pass in the following information from your backend storage provider:

        # ...
        DISTRIBUTED_STORAGE_CONFIG:
          <storage_provider>:
            - <storage_provider_name>
            - access_key: <access_key>
              bucket_name: <bucket_name>
              secret_key: <secret_key>
              storage_path: /datastorage/registry
        # ...

    Note: Depending on your storage provider, different information is required. For more information, see Storage object configuration fields.
  - Click Save, and then re-navigate to the Events page of the registry to ensure successful deployment.
Deploying the Project Quay registry by using the CLI
Use the oc command-line interface (CLI) to create and deploy a basic Project Quay registry instance.
Note
|
The following procedure includes configuration options for automating early setup tasks. For more information, see Automation configuration options. |
- You have logged into OpenShift Container Platform using the CLI.
- Create a namespace, for example, quay-enterprise, by entering the following command:

      $ oc new-project quay-enterprise
- Create the QuayRegistry custom resource (CR).
  - If the objectstorage component is set to managed: true, create the QuayRegistry CR by entering the following command:

        $ cat <<EOF | oc create -n quay-enterprise -f -
        apiVersion: quay.redhat.com/v1
        kind: QuayRegistry
        metadata:
          name: example-registry
          namespace: quay-enterprise
        EOF
  - If the objectstorage component is set to managed: false, complete the following steps:
    - Create the config.yaml file for Project Quay by entering the following command. You must include the information required for your backend storage provider. During this step, you can enable additional Project Quay features. The following example is a minimal configuration that includes the configuration options for automating early setup tasks:

          $ cat <<EOF > config.yaml
          ALLOW_PULLS_WITHOUT_STRICT_LOGGING: false
          AUTHENTICATION_TYPE: Database
          DEFAULT_TAG_EXPIRATION: 2w
          FEATURE_USER_INITIALIZE: true (1)
          SUPER_USERS: (2)
            - <username>
          BROWSER_API_CALLS_XHR_ONLY: false (3)
          FEATURE_USER_CREATION: false (4)
          DISTRIBUTED_STORAGE_CONFIG:
            <storage_provider>:
              - <storage_provider_name>
              - access_key: <access_key>
                bucket_name: <bucket_name>
                secret_key: <secret_key>
                storage_path: /datastorage/registry
          ENTERPRISE_LOGO_URL: /static/img/RH_Logo_Quay_Black_UX-horizontal.svg
          FEATURE_BUILD_SUPPORT: false
          FEATURE_DIRECT_LOGIN: true
          FEATURE_MAILING: false
          REGISTRY_TITLE: Red Hat Quay
          REGISTRY_TITLE_SHORT: Red Hat Quay
          SETUP_COMPLETE: true
          TAG_EXPIRATION_OPTIONS:
            - 2w
          TEAM_RESYNC_STALE_TIME: 60m
          TESTING: false
          EOF
      1. Set this field to true if you plan to create the first user by using the API.
      2. Include this field and the username that you plan to use as a Project Quay administrator.
      3. When set to false, allows general browser-based access to the API.
      4. When set to false, restricts the creation of new users to superusers only.
    - Create a secret for the configuration by entering the following command:

          $ oc create secret generic <quay_config_bundle_name> \
              --from-file=config.yaml=</path/to/config.yaml> \
              -n quay-enterprise \
              --dry-run=client -o yaml | oc apply -f -
    - Create the QuayRegistry CR by entering the following command:

          $ cat <<EOF | oc create -n quay-enterprise -f -
          apiVersion: quay.redhat.com/v1
          kind: QuayRegistry
          metadata:
            name: example-registry
            namespace: quay-enterprise
          spec:
            configBundleSecret: <quay_config_bundle_name>
            components:
              - kind: clair
                managed: true
              - kind: objectstorage
                managed: false (1)
              - kind: mirror
                managed: true
              - kind: monitoring
                managed: true
          EOF
      1. Must be set to false when providing your own storage backend.
Check the status of your registry by entering the following command:
$ oc describe quayregistry <registry_name> -n quay-enterprise
Example output... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ComponentsCreationSuccess 23s (x2458 over 42h) quayregistry-controller All objects created/updated successfully
- Alternatively, you can check the pod statuses for your registry deployment by entering the following command:

      $ oc get pods -n quay-enterprise

  Example output:

      NAME                                                READY   STATUS      RESTARTS   AGE
      example-registry-clair-app-5ffc9f77d6-jwr9s         1/1     Running     0          3m42s
      example-registry-clair-app-5ffc9f77d6-wgp7d         1/1     Running     0          3m41s
      example-registry-clair-postgres-54956d6d9c-rgs8l    1/1     Running     0          3m5s
      example-registry-quay-app-79c6b86c7b-8qnr2          1/1     Running     4          3m42s
      example-registry-quay-app-79c6b86c7b-xk85f          1/1     Running     4          3m41s
      example-registry-quay-app-upgrade-5kl5r             0/1     Completed   4          3m50s
      example-registry-quay-database-b466fc4d7-tfrnx      1/1     Running     2          3m42s
      example-registry-quay-mirror-6d9bd78756-6lj6p       1/1     Running     0          2m58s
      example-registry-quay-mirror-6d9bd78756-bv6gq       1/1     Running     0          2m58s
      example-registry-quay-postgres-init-dzbmx           0/1     Completed   0          3m43s
      example-registry-quay-redis-8bd67b647-skgqx         1/1     Running     0          3m42s
- For more information about how to track the progress of your Project Quay deployment, see Monitoring and debugging the deployment process.
Creating the first user
This section guides you through creating the initial administrative user for your Project Quay registry. Completing this step confirms that your deployment is fully operational and grants you the necessary credentials to begin using and managing your registry. This can be completed by using the Project Quay UI or by leveraging the API.
Creating the first user by using the UI
Creating the first user by using the UI offers a visual workflow and is often preferred after initial setup to ensure that the user interface is functional. For most users, the UI offers a simpler path to creating the first user, as it does not require additional configuration in the config.yaml file.
- You have deployed the Project Quay registry.
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Project Quay.
- On the Project Quay Operators page, click Quay Registry, and then the name of your registry.
- On the QuayRegistry details page, click the Registry Endpoint link, for example, example-registry-quay.username-cluster-new.gcp.quaydev.org. You are navigated to the registry’s main page.
- Click Create Account.
- Enter the details for Username, Password, Email, and then click Create Account. After creating the first user, you are automatically logged in to the Project Quay registry.
Using the API to create the first user
You can use the API to create the first user with administrative privileges for your registry.
- You have set FEATURE_USER_INITIALIZE: true and established a superuser in your config.yaml file. For example:

      # ...
      FEATURE_USER_INITIALIZE: true
      SUPER_USERS:
        - <username>
      # ...

  If you did not configure these settings upon registry creation, and need to re-configure your registry to enable these settings, see "Enabling features after deployment".
- You have not created a user by using the Project Quay UI.
- On the command-line interface, generate a new user with a username, password, email, and access token by entering the following curl command:

      $ curl -X POST -k http://<quay-server.example.com>/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "<username>", "password":"<password>", "email": "<email>@example.com", "access_token": true}'
If successful, the command returns an object with the username, email, and encrypted password. For example:
{"access_token":"123456789", "email":"quayadmin@example.com","encrypted_password":"<password>","username":"quayadmin"}
If a user already exists in the database, an error is returned. For example:
{"message":"Cannot initialize user in a non-empty database"}
If your password is not at least eight characters or contains whitespace, an error is returned. For example:
{"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}
- You can log in to your registry by navigating to the UI or by leveraging Podman on the CLI.
- Log in to the registry by running the following podman command:

      $ podman login -u <username> -p <password> http://<quay-server.example.com>

  Example output:

      Login Succeeded!
Modifying the QuayRegistry CR after deployment
After you have installed the Project Quay Operator and created an initial deployment, you can modify the QuayRegistry custom resource (CR) to customize or reconfigure aspects of the Red Hat Quay environment.
Project Quay administrators might modify the QuayRegistry CR for the following reasons:
- To change component management: Switch components from managed: true to managed: false in order to bring your own infrastructure. For example, you might set kind: objectstorage to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix.
- To apply custom configuration: Update or replace the configBundleSecret to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, or feature flags.
- To enable or disable features: Toggle features like repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the spec.components list.
- To scale the deployment: Adjust environment variables or replica counts for the Quay application.
- To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.
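For instance, a spec.components fragment that brings your own mirror infrastructure while keeping monitoring Operator-managed could be sketched as follows:

```yaml
spec:
  components:
    - kind: mirror
      managed: false   # disable the Operator-managed mirror workers
    - kind: monitoring
      managed: true    # keep the Operator-managed monitoring stack
```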
Modifying the QuayRegistry CR by using the OpenShift Container Platform web console
The QuayRegistry CR can be modified by using the OpenShift Container Platform web console. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.
- You are logged into OpenShift Container Platform as a user with admin privileges.
- You have installed the Project Quay Operator.
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Project Quay registry, for example, example-registry.
- Click YAML.
- Adjust the managed field of the desired component to either true or false.
- Click Save.

  Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
Modifying the QuayRegistry CR by using the CLI
The QuayRegistry CR can be modified by using the CLI. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.
- You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.
- Edit the QuayRegistry CR by entering the following command:

      $ oc edit quayregistry <registry_name> -n <namespace>

- Make the desired changes to the QuayRegistry CR.

  Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.

- Save the changes.
Enabling features after deployment
After deployment, you can customize the Project Quay registry to enable new features and better suit the needs of your organization. This entails editing the Project Quay configuration bundle secret (spec.configBundleSecret) resource. You can use the OpenShift Container Platform web console or the command-line interface to enable features after deployment. Using the OpenShift Container Platform web console is generally considered the simpler method.
Enabling features by using the OpenShift Container Platform web console
To enable features in the OpenShift Container Platform web console, you can edit the configBundleSecret resource.
- You have administrative privileges to the cluster.
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry and then the name of your registry.
- Under Config Bundle Secret, click the name of your secret, for example, quay-config-bundle.
- On the Secret details page, click Actions → Edit secret.
- In the Value text box, add the new configuration fields for the features that you want to enable. For a list of all configuration fields, see Configure Project Quay.
- Click Save. The Project Quay Operator automatically reconciles the changes by restarting all Quay-related pods. After all pods are restarted, the features are enabled.
Modifying the configuration file by using the CLI
You can modify the config.yaml file that is stored in the configBundleSecret resource by downloading the existing configuration using the CLI. After making changes, you can re-upload the configBundleSecret resource to apply the changes to your Project Quay registry.
Note
|
Modifying the config.yaml file by using the CLI is a multi-step process and is generally more involved than using the OpenShift Container Platform web console. |
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
- Describe the QuayRegistry resource by entering the following command:

      $ oc describe quayregistry -n <quay_namespace>

  Example output:

      # ...
      Config Bundle Secret:  example-registry-config-bundle-v123x
      # ...
- Obtain the secret data by entering the following command:

      $ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'

  Example output:

      { "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" }
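The jsonpath output above elides most of the base64 payload. As a self-contained illustration of the decode step that follows, this sketch uses a complete hypothetical value that encodes a one-line config.yaml:

```shell
# Hypothetical base64 payload encoding the line "FEATURE_USER_INITIALIZE: true"
encoded='RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUK'
# Decode it, as you would with the real value extracted from the secret
echo "$encoded" | base64 --decode
# prints: FEATURE_USER_INITIALIZE: true
```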
- Decode the data into a config.yaml file in the current directory by redirecting the output. For example:

      $ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
- Make the desired changes to your config.yaml file, and then save the file as config.yaml.
- Create a new configBundleSecret YAML file by entering the following command:

      $ touch <new_configBundleSecret_name>.yaml
- Generate the new configBundleSecret resource, passing in the config.yaml file, by entering the following command:

      $ oc -n <namespace> create secret generic <secret_name> \
          --from-file=config.yaml=</path/to/config.yaml> \ (1)
          --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml

  1. Where </path/to/config.yaml> is the path to your base64-decoded config.yaml file.
- Create the configBundleSecret resource by entering the following command:

      $ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml

  Example output:

      secret/config-bundle created
- Update the QuayRegistry CR to reference the new configBundleSecret object by entering the following command:

      $ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'

  Example output:

      quayregistry.quay.redhat.com/example-registry patched
- Verify that the QuayRegistry CR has been updated with the new configBundleSecret by entering the following command:

      $ oc describe quayregistry -n <quay_namespace>

  Example output:

      # ...
      Config Bundle Secret:  <new_configBundleSecret_name>
      # ...
After patching the registry, the Project Quay Operator automatically reconciles the changes.
Deploying Project Quay on infrastructure nodes
By default, all quay-related pods are scheduled on available worker nodes in your OpenShift Container Platform cluster. In some environments, you might want to dedicate certain nodes specifically for infrastructure workloads, such as registry, database, and monitoring pods, to improve performance, isolate critical components, or simplify maintenance.
OpenShift Container Platform supports this approach using infrastructure machine sets, which automatically create and manage nodes reserved for infrastructure.
As an OpenShift Container Platform administrator, you can achieve the same result by labeling and tainting worker nodes. This ensures that only infrastructure workloads, like quay pods, are scheduled on these nodes. After your infrastructure nodes are configured, you can control where quay pods run using node selectors and tolerations.
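Conceptually, the namespace annotations used in the procedures below inject scheduling constraints equivalent to this pod-spec fragment (shown only to illustrate the mechanism; you do not add this to the Quay pods yourself):

```yaml
spec:
  # Schedule only onto nodes carrying the infra role label
  nodeSelector:
    node-role.kubernetes.io/infra: ""
  # Tolerate the taint applied to the infra nodes
  tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Equal
      value: reserved
      effect: NoSchedule
```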
The following procedure is intended for new deployments that install the Project Quay Operator in a single namespace and provide their own backend storage. It shows you how to prepare nodes and deploy Project Quay on dedicated infrastructure nodes, so that all quay-related pods are placed on those nodes.
Labeling and tainting nodes for infrastructure use
Use the following procedure to label and taint nodes for infrastructure use.
Note
|
The following procedure labels three worker nodes with the node-role.kubernetes.io/infra= label. |
- Obtain a list of worker nodes in your deployment by entering the following command:

      $ oc get nodes | grep worker

  Example output:

      NAME                                                             STATUS   ROLES    AGE    VERSION
      example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal   Ready    worker   401d   v1.31.11
      example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal   Ready    worker   402d   v1.31.11
      example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   Ready    worker   401d   v1.31.11
- Add the node-role.kubernetes.io/infra= label to the worker nodes by entering the following command. The number of infrastructure nodes required depends on your environment. Production environments should provision enough infra nodes to ensure high availability and sufficient resources for all quay-related components. Monitor CPU, memory, and storage utilization to determine whether additional infra nodes are required.

      $ oc label node --overwrite <infra_node_one> <infra_node_two> <infra_node_three> node-role.kubernetes.io/infra=
- Confirm that the node-role.kubernetes.io/infra= label has been added to the proper nodes by entering the following command:

      $ oc get node | grep infra

  Example output:

      example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal   Ready   infra,worker   405d   v1.32.8
      example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal   Ready   infra,worker   406d   v1.32.8
      example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   Ready   infra,worker   405d   v1.32.8
-
When a worker node is assigned the
infra
role, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node, and then add tolerations for the pods that you want to control. Taint the worker nodes with theinfra
label by entering the following command:$ oc adm taint nodes -l node-role.kubernetes.io/infra \ node-role.kubernetes.io/infra=reserved:NoSchedule --overwrite
Example outputnode/example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal modified node/example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal modified node/example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal modified
Creating a project with node selector and tolerations
Use the following procedure to create a project with the node-selector
and tolerations
annotations.
-
Add the
node-selector
annotation to the namespace by entering the following command:$ oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra='
Example outputnamespace/<namespace> annotated
-
Add the
tolerations
annotation to the namespace by entering the following command:$ oc annotate namespace <namespace> scheduler.alpha.kubernetes.io/defaultTolerations='[{"operator":"Equal","value":"reserved","effect":"NoSchedule","key":"node-role.kubernetes.io/infra"},{"operator":"Equal","value":"reserved","effect":"NoExecute","key":"node-role.kubernetes.io/infra"}]' --overwrite
Example outputnamespace/<namespace> annotated
ImportantThe tolerations in this example are specific to two taints commonly applied to infra nodes. The taints configured in your environment might differ. You must set the tolerations accordingly to match the taints applied to your infra nodes.
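The defaultTolerations annotation above causes tolerations equivalent to the following pod-spec fragment to be applied to pods created in the namespace. This is a sketch for illustration; the exact injection behavior depends on your cluster's admission configuration.

```yaml
# Tolerations equivalent to the namespace annotation above,
# as they would appear in an individual pod spec.
tolerations:
- key: node-role.kubernetes.io/infra
  operator: Equal
  value: reserved
  effect: NoSchedule
- key: node-role.kubernetes.io/infra
  operator: Equal
  value: reserved
  effect: NoExecute
```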
Installing the Project Quay Operator on the annotated namespace
After you have added the node-role.kubernetes.io/infra=
label to worker nodes and added the node-selector
and tolerations
annotations to the namespace, you must install the Project Quay Operator in that namespace.
The following procedure shows you how to install the Project Quay Operator on the annotated namespace and how to update the subscription to ensure successful installation.
-
On the OpenShift Container Platform web console, click Operators → OperatorHub.
-
In the search box, type Project Quay.
-
Click Project Quay → Install.
-
Select the update channel, for example, stable-3.15, and the version.
-
Click A specific namespace on the cluster for the installation mode, and then select the namespace that you applied the
node-selector
andtolerations
annotations to. -
Click Install.
-
Confirm that the Operator is installed by entering the following command:
$ oc get pods -n <annotated_namespace> -o wide | grep quay-operator
Example outputquay-operator.v3.15.1-858b5c5fdc-lf5kj 1/1 Running 0 29m 10.130.6.18 example-cluster-new-c5qqp-worker-f-mhngl.c.quay-devel.internal <none> <none>
Creating the Project Quay registry
After you have installed the Project Quay Operator, you must create the Project Quay registry. The registry’s components, for example, clair
, postgres
, redis
, and so on, must be patched with the toleration
annotation so that they can schedule onto the infra
worker nodes.
The following procedure shows you how to create a Project Quay registry that runs on infrastructure nodes.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
On the Project Quay Operator details page, click Quay Registry → Create QuayRegistry.
-
On the Create QuayRegistry page, set the
monitoring
andobjectstorage
fields tofalse
. The monitoring component cannot be enabled when Project Quay is installed in a single namespace. For example:# ... - kind: monitoring managed: false - kind: objectstorage managed: false # ...
-
Click Create.
-
Optional: Confirm that the pods are running on infra nodes.
-
List all
Quay
-related pods along with the nodes that they are scheduled on by entering the following command:$ oc get pods -n <annotated_namespace> -o wide | grep example-registry
Example output... NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-registry-clair-app-5f95d685bd-dgjf6 1/1 Running 0 52m 10.128.4.12 example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal <none> <none> ...
-
Confirm that the nodes listed include only nodes labeled
infra
by running the following command:$ oc get nodes -l node-role.kubernetes.io/infra -o name
Example outputnode/example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal node/example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal node/example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal
NoteIf any pod appears on a non-infra node, revisit your namespace annotations and deployment patching.
-
-
Restart all pods for the Project Quay registry by entering the following command:
$ oc delete pod -n <annotated_namespace> --all
-
Check the status of the pods by entering the following command:
$ oc get pods -n <annotated_namespace>
Example output... NAME READY STATUS RESTARTS AGE example-registry-clair-app-5f95d685bd-dgjf6 1/1 Running 0 5m4s ...
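Because the objectstorage component is unmanaged in this deployment, the registry's config.yaml must also define the backend storage. The following is a hedged sketch of an S3-style backend; the hostname, bucket name, and credentials are placeholders that you must replace with values for your own storage provider.

```yaml
# Hypothetical S3 backend definition for config.yaml when
# objectstorage is set to managed: false. All values are placeholders.
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.example.com
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      s3_bucket: quay-storage
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```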
Advanced configuration
The following sections cover advanced configuration options for when the default deployment settings do not meet your organization’s needs for performance, security, or existing infrastructure integration.
Using an external PostgreSQL database
When using your own PostgreSQL database with Project Quay, you must ensure that the required configuration and extensions are in place before deployment.
Important
|
Do not share the same PostgreSQL database between Project Quay and Clair deployments. Each service must use its own database instance. Sharing databases with other workloads is also not supported, because connection-intensive components such as Project Quay and Clair can quickly exceed PostgreSQL’s connection limits. Connection poolers such as pgBouncer are not supported with Project Quay or Clair. |
When managing your own PostgreSQL database for use with Project Quay, the following best practices are recommended:
-
pg_trgm
extension: The pg_trgm
extension must be enabled on the database for a successful deployment. -
Backups: Perform regular database backups using PostgreSQL-native tools or your existing backup infrastructure. The Project Quay Operator does not manage database backups.
-
Restores: When restoring a backup, ensure that all Project Quay pods are stopped before beginning the restore process.
-
Storage sizing: When using the Operator-managed PostgreSQL database, the default storage allocation is 50 GiB. For external databases, you must ensure sufficient storage capacity for your environment, as the Operator does not handle volume resizing.
-
Monitoring: Monitor disk usage, connection limits, and query performance to prevent outages caused by resource exhaustion.
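The pg_trgm extension noted in the first item can typically be enabled by a PostgreSQL superuser with a single statement in a psql session. The database name below is a placeholder.

```sql
-- Run as a PostgreSQL superuser; "quay" is a placeholder database name.
\c quay
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```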
Integrating an existing PostgreSQL database
Configure Red Hat Quay on OpenShift Container Platform to use an existing PostgreSQL database to leverage your current data storage setup.
Note
|
The following procedure uses the OpenShift Container Platform web console to configure the Project Quay registry to use an external PostgreSQL database. For most users, using the web console is simpler. This procedure can also be done by using the oc CLI. |
-
On the OpenShift Container Platform web console, click Operators → Installed Operators.
-
Click Red Hat Quay.
-
Click Quay Registry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
Click YAML.
-
Set the
postgres
field of theQuayRegistry
CR tomanaged: false
. For example:- kind: postgres managed: false
-
Click Save.
-
Click Details → the name of your
Config Bundle Secret
resource. -
On the Secret Details page, click Actions → Edit Secret.
-
Add the
DB_URI
field to yourconfig.yaml
file. For example:DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
-
Optional: Add additional database configuration fields, such as
DB_CONNECTION_ARGS
or SSL/TLS connection arguments. For more information, see Database connection arguments. -
Click Save.
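The DB_URI value follows the standard PostgreSQL connection URI format. The credentials and hostname in this sketch are placeholders.

```yaml
# Format: postgresql://<username>:<password>@<hostname>:<port>/<database_name>
# All values below are placeholders.
DB_URI: postgresql://quayuser:quaypass@postgres.example.com:5432/quay
```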
Using an external Redis database
Redis is a critical component that supports several Project Quay features, such as build logs and user event tracking. When using an externally managed Redis database with Project Quay, you must ensure that it is properly configured and available before deployment.
Important
|
Do not share the same Redis instance between Project Quay and Clair deployments. Each service must use its own dedicated Redis instance. Sharing Redis with other workloads is not supported, because connection-intensive components such as Project Quay and Clair can quickly exhaust available Redis connections and degrade performance. |
Integrating an external Redis database
Configuring Project Quay to use an external Redis database
You can configure Red Hat Quay on OpenShift Container Platform to use an existing Redis deployment for build logs and user event processing.
Note
|
The following procedure uses the OpenShift Container Platform web console to configure Project Quay to use an external Redis database. For most users, using the web console is simpler. You can also complete this procedure by using the oc CLI. |
-
In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
-
Click Red Hat Quay.
-
Click QuayRegistry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
Click YAML.
-
Set the
redis
component to unmanaged by adding the following entry underspec.components
:- kind: redis managed: false
-
Click Save.
-
Click Details → the name of your
Config Bundle Secret
resource. -
On the Secret details page, click Actions → Edit Secret.
-
In the
config.yaml
section, add entries for your external Redis instance. For example:BUILDLOGS_REDIS: host: redis.example.com port: 6379 ssl: false USER_EVENTS_REDIS: host: redis.example.com port: 6379 ssl: false
ImportantIf both the
BUILDLOGS_REDIS
andUSER_EVENTS_REDIS
fields reference the same Redis deployment, ensure that your Redis service can handle the combined connection load. For large or high-throughput registries, use separate Redis databases or clusters for these components. -
Optional: Add additional Redis configuration fields, such as authentication or SSL/TLS settings. For more information, see Redis configuration fields.
-
Click Save.
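If your external Redis deployment requires authentication or TLS, the same fields accept a password and an ssl flag. The following is a sketch with placeholder values; adjust the host, port, and password to match your environment.

```yaml
# Placeholder values for a password-protected, TLS-enabled Redis instance.
BUILDLOGS_REDIS:
  host: redis.example.com
  port: 6380
  password: <redis_password>
  ssl: true
USER_EVENTS_REDIS:
  host: redis.example.com
  port: 6380
  password: <redis_password>
  ssl: true
```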
About Horizontal Pod Autoscaling (HPA)
By default, Project Quay deployments include managed Horizontal Pod Autoscalers (HPAs) for key components to ensure availability and performance during load spikes or maintenance events. HPAs automatically adjust the number of running pods based on observed CPU and memory utilization.
A typical Project Quay deployment includes the following pods:
-
Two pods for the Project Quay application (
example-registry-quay-app-*
) -
One Redis pod for Project Quay logging (
example-registry-quay-redis-*
) -
One PostgreSQL pod for metadata storage (
example-registry-quay-database-*
) -
Two
Quay
mirroring pods (example-registry-quay-mirror-*
) -
Two pods for Clair (
example-registry-clair-app-*
) -
One PostgreSQL pod for Clair (
example-registry-clair-postgres-*
)
HPAs are managed by default for the Quay
, Clair
, and Mirror
components, each starting with two replicas to prevent downtime during upgrades, reconfigurations, or pod rescheduling events.
Managing Horizontal Pod Autoscaling
Setting the HPA component to unmanaged (managed: false
) in the QuayRegistry
custom resource allows you to customize scaling thresholds or replica limits.
The following procedure shows you how to disable the horizontalpodautoscaler
component and explicitly set replicas: null
in the quay
, clair
, and mirror
component definitions.
Note
|
The following procedure uses the oc CLI to update the QuayRegistry CR and to create a custom HorizontalPodAutoscaler resource. You can also edit the QuayRegistry CR from the OpenShift Container Platform web console. |
-
Edit your
QuayRegistry
CR:$ oc edit quayregistry <quay_registry_name> -n <quay_namespace>
apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false - kind: quay managed: true overrides: replicas: null (1) - kind: clair managed: true overrides: replicas: null - kind: mirror managed: true overrides: replicas: null # ...
-
After setting
replicas: null
, a new replica set might be generated because theQuay
deployment manifest changes toreplicas: 1
.
-
-
Create a custom
HorizontalPodAutoscaler
resource with your desired configuration, for example:kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2 metadata: name: quay-registry-quay-app namespace: quay-enterprise spec: scaleTargetRef: kind: Deployment name: quay-registry-quay-app apiVersion: apps/v1 minReplicas: 3 maxReplicas: 20 metrics: - type: Resource resource: name: memory target: type: Utilization averageUtilization: 90 - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 90
-
Apply the new HPA configuration to your cluster:
$ oc apply -f <custom_hpa>.yaml
Example outputhorizontalpodautoscaler.autoscaling/quay-registry-quay-app created
-
Verify that your Project Quay application pods are running:
$ oc get pod | grep quay-app
Example outputquay-registry-quay-app-5b8fd49d6b-7wvbk 1/1 Running 0 34m quay-registry-quay-app-5b8fd49d6b-jslq9 1/1 Running 0 3m42s quay-registry-quay-app-5b8fd49d6b-pskpz 1/1 Running 0 43m
-
Verify that your custom HPA is active:
$ oc get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE quay-registry-quay-app Deployment/quay-registry-quay-app 67%/90%, 54%/90% 3 20 3 51m
Configuring custom ingress
You can configure custom ingress for Project Quay by disabling the Operator-managed route
component and managing your own routes or ingress controllers. This configuration is useful when your environment requires a custom SSL/TLS setup, specific DNS naming conventions, or when Project Quay is deployed behind a load balancer or proxy that handles TLS termination.
The Project Quay Operator separates route management from SSL/TLS configuration by introducing a distinct tls
component. You can therefore manage each independently, depending on whether Project Quay or the cluster should handle TLS termination. For more information about using SSL/TLS certificates with your deployment, see "Securing Project Quay".
Note
|
If you disable the managed |
Disabling the Route component
Use the following procedure to prevent the Project Quay Operator from creating a route.
-
In your
quayregistry.yaml
file, set theroute
component asmanaged: false
:apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false
-
In your
config.yaml
file, configure Project Quay to handle SSL/TLS. For example:# ... EXTERNAL_TLS_TERMINATION: false SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com PREFERRED_URL_SCHEME: https # ...
If the configuration is incomplete, the following error might appear:
{ "reason":"ConfigInvalid", "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields" }
Configuring SSL/TLS and routes
Support for OpenShift Container Platform edge termination routes is provided through the tls
component. This separation allows independent control of route management and TLS certificate handling.
EXTERNAL_TLS_TERMINATION: true
is the default, opinionated setting, which assumes the cluster manages TLS termination.
Multiple valid configurations are possible, as shown in the following table:
Option | Route | TLS | Certs provided | Result
---|---|---|---|---
My own load balancer handles TLS | Managed | Managed | No | Edge route using default cluster wildcard certificate
Project Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certificates mounted in the Project Quay pod
Project Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates set inside the Project Quay pod; user must manually create a route
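For the third option, where both the route and tls components are unmanaged, the manually created route might look like the following sketch. The hostname and service name are assumptions based on the default naming used elsewhere in this document; verify them against your own deployment.

```yaml
# Hypothetical manually created passthrough route for an
# unmanaged route and tls configuration. Names are assumptions.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-registry-quay
  namespace: quay-enterprise
spec:
  host: example-registry-quay-quay-enterprise.apps.user1.example.com
  to:
    kind: Service
    name: example-registry-quay-app
  port:
    targetPort: https
  tls:
    termination: passthrough
```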
Disabling the monitoring component
When installed in a single namespace, the monitoring
component of the Project Quay Operator must be set to managed: false
, because it does not have permission to create cluster-wide monitoring resources. You can also explicitly disable monitoring in a multi-namespace installation if you prefer to use your own monitoring stack.
Note
|
Monitoring cannot be enabled when the Project Quay Operator is installed in a single namespace. You might also disable monitoring in multi-namespace deployments if you use an external Prometheus or Grafana instance, want to reduce resource overhead, or require custom observability integration. |
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: monitoring
managed: false
Disabling the mirroring component
Repository mirroring in Project Quay allows you to automatically synchronize container images from remote registries into your local Project Quay instance. The Project Quay Operator deploys a separate mirroring worker component that handles these synchronization tasks.
You can disable the managed mirroring component by setting it to managed: false
in the QuayRegistry
custom resource.
Note
|
Disabling managed mirroring means that the Operator does not deploy or reconcile any mirroring pods. You are responsible for creating, scheduling, and maintaining mirroring jobs manually. For most production deployments, leaving mirroring as |
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: mirroring
managed: false
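With managed mirroring disabled, synchronization must be scheduled outside the Operator. One possible approach, shown here as a sketch only, is a Kubernetes CronJob that runs skopeo sync; the image reference, registries, schedule, and namespace below are all placeholders, and a real job would also need registry credentials.

```yaml
# Hypothetical replacement for the managed mirror worker:
# a CronJob that syncs one repository with skopeo.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: manual-mirror-sync
  namespace: quay-enterprise
spec:
  schedule: "0 */6 * * *"   # every six hours (placeholder)
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: skopeo-sync
            image: quay.io/skopeo/stable:latest
            args:
            - sync
            - --src
            - docker
            - --dest
            - docker
            - registry.example.com/team/app
            - example-registry-quay.example.com/team
```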
Configuring QuayRegistry CR resources
You can manually adjust the resources on Red Hat Quay on OpenShift Container Platform for the following components that have running pods:
-
quay
-
clair
-
mirroring
-
clairpostgres
-
postgres
This feature allows users to run smaller test clusters, or to request more resources upfront in order to avoid partially degraded Quay
pods. Limits and requests can be set in accordance with Kubernetes resource units.
The following components should not be set lower than their minimum requirements. Doing so can cause issues with your deployment and, in some cases, result in failure of the pod’s deployment.
-
quay
: Minimum of 6 GB memory, 2 vCPUs -
clair
: Recommended minimum of 2 GB memory, 2 vCPUs -
clairpostgres
: Minimum of 200 MB
You can configure resource requests on the OpenShift Container Platform UI or directly by updating the QuayRegistry
CR via the CLI.
Important
|
The default values set for these components are the suggested values. Setting resource requests too high or too low might lead to inefficient resource utilization, or performance degradation, respectively. |
Configuring resource requests by using the OpenShift Container Platform web console
Use the following procedure to configure resources by using the OpenShift Container Platform web console.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
Click QuayRegistry.
-
Click the name of your registry, for example, example-registry.
-
Click YAML.
-
In the
spec.components
field, you can override the resource of thequay
,clair
,mirroring
, clairpostgres
, andpostgres
resources by setting values for the.overrides.resources.limits
and theoverrides.resources.requests
fields. For example:spec: components: - kind: clair managed: true overrides: resources: limits: cpu: "5" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: "18Gi" # Limiting to 18 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} (1) requests: cpu: "700m" # Requesting 700 millicpu or 0.7 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: (2) requests: cpu: "800m" # Requesting 800 millicpu or 0.8 CPU memory: "1Gi" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: "4" # Limiting to 4 CPU memory: "10Gi" # Limiting to 10 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "10Gi" # Requesting 10 Gibi of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: "800m" # Limiting to 800 millicpu or 0.8 CPU memory: "3Gi" # Limiting to 3 Gibibytes of memory requests: {}
-
Setting the
limits
orrequests
fields to{}
uses the default values for these resources. -
Leaving the
limits
orrequests
field empty puts no limitations on these resources.
-
Configuring resource requests by using the CLI
You can reconfigure resource requests after you have already deployed a registry. This can be done by editing the QuayRegistry
YAML file directly and then re-deploying the registry.
-
Edit the
QuayRegistry
CR by entering the following command:$ oc edit quayregistry <registry_name> -n <namespace>
-
Make any desired changes. For example:
- kind: quay managed: true overrides: resources: limits: {} requests: cpu: "0.7" # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu) memory: "512Mi" # Requesting 512 Mebibytes of memory
-
Save the changes.
Troubleshooting the QuayRegistry CR
You can troubleshoot the QuayRegistry
CR to reveal issues during registry deployment by checking the Events page on the OpenShift Container Platform web console, or by using the oc
CLI.
Monitoring and debugging the QuayRegistry CR by using the OpenShift Container Platform web console
Lifecycle observability for a Project Quay registry is reported on the Events page of the registry. If leveraging the OpenShift Container Platform web console, this is the first place to look for any problems related to registry deployment.
-
You have deployed a Project Quay registry.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
On the Project Quay Operator details page, click Quay Registry.
-
Click the name of the registry → Events. On this page, events are streamed in real-time.
-
Optional: To reveal more information about deployment issues, you can click the name of the registry on the Events page to navigate to the QuayRegistry details page. On the QuayRegistry details page, you can view the condition of all
QuayRegistry
CR components.
Monitoring and debugging the QuayRegistry CR by using the CLI
The oc
CLI tool can be used to troubleshoot problems related to registry deployment. With the oc
CLI, you can obtain the following information about the QuayRegistry
CR:
-
The
conditions
field, which shows the status of all
components. -
The
currentVersion
field, which shows the version of Project Quay. -
The
registryEndpoint
field, which shows the publicly available hostname of the registry.
When troubleshooting deployment issues, you can check the Status
field of the QuayRegistry
custom resource (CR). This field reveals the health of the components during the deployment and can help you debug various problems with the deployment.
-
You have deployed a Project Quay registry by using the web console or the CLI.
-
View the state of deployed components by entering the following command:
$ oc get pods -n quay-enterprise
Example outputNAME READY STATUS RESTARTS AGE example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s
-
Return information about your deployment by entering the following command:
$ oc get quayregistry -n <namespace> -o yaml
Example outputapiVersion: v1 items: - apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: annotations: ... spec: components: - kind: clair managed: true - kind: objectstorage managed: false ... status: conditions: (1) - lastTransitionTime: "2025-10-01T18:46:13Z" lastUpdateTime: "2025-10-07T13:12:54Z" message: Horizontal pod autoscaler found reason: ComponentReady status: "True" type: ComponentHPAReady ... currentVersion: v3.15.2 (2) lastUpdated: 2025-10-07 13:12:54.48811705 +0000 UTC registryEndpoint: https://example-registry-quay-cluster-new.gcp.quaydev.org (3)
-
Shows information about the status of all
QuayRegistry
components. -
Shows the current version that the registry is using.
-
Shows the publicly available hostname of the registry.