To deploy and configure Project Quay in your OpenShift Container Platform environment, you can use the Project Quay Operator. The Operator provides automated installation, configuration, and maintenance for your container image registry.
Introduction to the Project Quay Operator
The Project Quay Operator simplifies installation, deployment, and management of the Project Quay container registry on OpenShift Container Platform. You can use the Operator to treat Quay as a native OpenShift Container Platform application and manage its full lifecycle.
This chapter provides a conceptual overview of the Project Quay Operator’s architecture and configuration model. It covers the following information:
- A configuration overview of Project Quay when deployed on OpenShift Container Platform.
- How the Operator manages Quay's components, or managed components.
- When and why to use external, or unmanaged, components for dependencies such as the database and object storage.
- The function and structure of the configBundleSecret, which holds Quay's configuration.
- The prerequisites required before installation.
Project Quay on OpenShift Container Platform configuration overview
When deploying Project Quay on OpenShift Container Platform, the registry configuration is managed declaratively through two primary mechanisms: the QuayRegistry custom resource (CR) and the configBundleSecret resource. You use these mechanisms to configure and manage your registry deployment.
Understanding the QuayRegistry CR
The QuayRegistry custom resource (CR) defines the desired state of your Quay deployment. You use this resource to specify which components the Operator manages and which components you provide externally.
Each component in the QuayRegistry CR is either managed, meaning automatically handled by the Operator, or unmanaged, meaning provided externally by the user.
By default, the QuayRegistry CR contains the following key fields:
- configBundleSecret: The name of a Kubernetes Secret containing the config.yaml file, which defines additional configuration parameters.
- name: The name of your Project Quay registry.
- namespace: The namespace, or project, in which the registry was created.
- spec.components: A list of components that the Operator automatically manages. Each spec.components entry contains two fields:
  - kind: The name of the component.
  - managed: A boolean that indicates whether the component lifecycle is handled by the Project Quay Operator. Setting managed: true for a component in the QuayRegistry CR means that the Operator manages the component.
All QuayRegistry components are automatically managed and auto-filled upon reconciliation for visibility unless specified otherwise. The following sections highlight the major QuayRegistry components and provide an example YAML file that shows the default settings.
Managed components
Managed components are Project Quay registry components that the Operator automatically configures and installs. By using managed components, you simplify deployment and reduce manual configuration tasks.
| Field | Type | Description |
|---|---|---|
| quay | Boolean | Holds overrides for deployment of Project Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (managed: false). |
| postgres | Boolean | Used for storing registry metadata. Currently, PostgreSQL version 13 is used. |
| clair | Boolean | Provides image vulnerability scanning. |
| redis | Boolean | Stores live builder logs and the locking mechanism that is required for garbage collection. |
| horizontalpodautoscaler | Boolean | Adjusts the number of Quay pods depending on load. |
| objectstorage | Boolean | Stores image layer blobs. When set to managed: false, you must provide your own object storage. |
| route | Boolean | Provides an external entrypoint to the Project Quay registry from outside of OpenShift Container Platform. |
| mirror | Boolean | Configures repository mirror workers to support optional repository mirroring. |
| monitoring | Boolean | Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting Quay pods. |
| tls | Boolean | Configures whether SSL/TLS is automatically handled. |
| clairpostgres | Boolean | Configures a managed Clair database. This is a separate database from the PostgreSQL database that is used to deploy Project Quay. |
The following example shows you the default configuration for the QuayRegistry custom resource provided by the Project Quay Operator. It is available on the OpenShift Container Platform web console.
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: <example_registry>
namespace: <namespace>
spec:
configBundleSecret: config-bundle-secret
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
Using unmanaged components for dependencies
Unmanaged components are Project Quay dependencies such as PostgreSQL, Redis, or object storage that you deploy and maintain outside of the Operator’s control. You use unmanaged components to integrate existing infrastructure or meet specific configuration requirements.
Note
If you are using an unmanaged PostgreSQL database, and the version is PostgreSQL 10, it is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy.
For more information about unmanaged components, see "Advanced configurations".
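As an illustrative sketch, handing the database over to your own infrastructure involves two pieces: marking the component unmanaged in the QuayRegistry CR, and supplying the connection string through the standard DB_URI field in the config.yaml held by the configBundleSecret. All values below are placeholders:

```yaml
# QuayRegistry excerpt: the Operator stops managing PostgreSQL.
spec:
  components:
    - kind: postgres
      managed: false
---
# config.yaml excerpt (stored in the configBundleSecret):
DB_URI: postgresql://<user>:<password>@<db_host>:5432/<database>
```

The same pattern applies to other dependencies such as Redis or object storage: set managed: false for the component, then provide the corresponding connection settings in config.yaml.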
Understanding the configBundleSecret
The configBundleSecret is a Kubernetes Secret that stores the config.yaml file for Project Quay. You use this secret to configure authentication backends, feature flags, and other registry settings. For example:
- Authentication backends (for example, OIDC, LDAP)
- External TLS termination settings
- Repository creation policies
- Feature flags
- Notification settings
Project Quay administrators might update this secret for the following reasons:
- Enable a new authentication method
- Add custom SSL/TLS certificates
- Enable features
- Modify security scanning settings
If this field is omitted, the Project Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay application pods.
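For illustration, a user-provided config.yaml stored in the configBundleSecret can be quite small; the Operator merges it with the settings generated for managed components to produce the final configuration. The field names below are standard Quay configuration fields, and the values are placeholders:

```yaml
# Minimal config.yaml excerpt supplied via the configBundleSecret.
# The Operator layers these user-provided values on top of the
# defaults it generates for managed components.
AUTHENTICATION_TYPE: Database
FEATURE_USER_CREATION: false
REGISTRY_TITLE: <registry_title>
```

Settings that a managed component controls (for example, routing or TLS handling when those components are managed) should be left to the Operator rather than duplicated here.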
Prerequisites for Project Quay on OpenShift Container Platform
The prerequisites for Project Quay on OpenShift Container Platform include minimum cluster version, administrative access, resource capacity, and storage configuration. Ensure your environment meets these requirements before deployment.
OpenShift Container Platform cluster
To deploy and manage the Project Quay Operator, you need an OpenShift Container Platform cluster running version 4.5 or later and an administrative account with sufficient permissions to perform cluster-scoped actions.
Resource Requirements
The Project Quay Operator requires dedicated compute resources for each application pod. Ensure that your OpenShift Container Platform cluster meets the following minimum requirements for sufficient capacity.
| Resource type | Requirement |
|---|---|
| Memory | 8 Gi |
| CPU | 2000 millicores (2 vCPUs) |
The Operator creates at least one main application pod per Project Quay deployment that it manages. Plan your cluster capacity accordingly.
Object Storage
Project Quay requires object storage to store all container image layer blobs. You can provide this storage through managed storage that the Operator configures automatically, or through unmanaged storage using an existing external service.
Managed storage overview
If you want the Operator to manage object storage for Project Quay, your cluster needs to be capable of providing it through the ObjectBucketClaim API. There are multiple implementations of this API available, for instance, NooBaa in combination with Kubernetes PersistentVolumes or scalable storage backends like Ceph. Refer to the NooBaa documentation for more details on how to deploy this component.
Unmanaged storage overview
Unmanaged storage is Project Quay object storage that you provide and manage externally, such as AWS S3, Google Cloud Storage, or self-hosted S3-compatible services. You use unmanaged storage when you need to connect to a specific storage provider that you manage yourself.
Project Quay supports the following major cloud and on-premises object storage providers:
- Amazon Web Services (AWS) S3
- AWS STS S3 (Security Token Service)
- AWS CloudFront (CloudFront S3Storage)
- Google Cloud Storage
- Microsoft Azure Blob Storage
- Swift Storage
- Nutanix Object Storage
- IBM Cloud Object Storage
- NetApp ONTAP S3 Object Storage
- Hitachi Content Platform (HCP) Object Storage
For a complete list of object storage providers, see the Quay Enterprise 3.x support matrix.
For example configurations of external object storage, see Storage object configuration fields, which provides the required YAML configuration examples, credential formatting, and full field descriptions for all supported external storage providers.
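As one concrete sketch, an AWS S3 entry in DISTRIBUTED_STORAGE_CONFIG uses a two-element list per storage location: the driver name followed by its parameters. The field names below follow Quay's S3Storage driver, but confirm the exact set against Storage object configuration fields for your provider; values are placeholders:

```yaml
# config.yaml excerpt for unmanaged AWS S3 storage (placeholder values)
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: <bucket_name>
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
```

Other providers follow the same shape with a different driver name and parameter set.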
StorageClass
The Project Quay Operator uses the default StorageClass in your cluster to provision persistent storage for the Quay and Clair PostgreSQL databases. Ensure that your cluster has a default StorageClass configured before installation so that the Operator can create the required Persistent Volume Claims.
Important
Before proceeding with the installation, verify that a default StorageClass is configured for your cluster.
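Kubernetes marks the default StorageClass with a well-known annotation, which is what the Operator's Persistent Volume Claims rely on. The sketch below shows that annotation on a minimal manifest; the class name and provisioner are placeholders for whatever your cluster uses:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage_class_name>
  annotations:
    # Marks this class as the cluster default, so PVCs created
    # without an explicit storageClassName bind to it.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: <provisioner>  # for example, a CSI driver for your platform
```

You can check which class, if any, carries this annotation with `oc get storageclass`.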
Installing the Project Quay Operator from the OperatorHub
To install the Project Quay Operator from the OpenShift Container Platform OperatorHub, you can configure the installation mode and update approval strategy. Install the Operator cluster-wide to ensure the monitoring component is available.
- On the OpenShift Container Platform web console, click Operators → OperatorHub.
- In the search box, type Project Quay and select the official Project Quay Operator provided by Red Hat.
- Select Install.
- Select the update channel, for example, stable-3.16, and the version.
- For the Installation mode, select one of the following:
  - All namespaces on the cluster. Select this option if you want the Project Quay Operator to be available cluster-wide. It is recommended that you install the Project Quay Operator cluster-wide. If you choose a single namespace, the monitoring component is not available.
  - A specific namespace on the cluster. Select this option if you want Project Quay deployed within a single namespace. Note that selecting this option renders the monitoring component unavailable.
- Select an Approval Strategy. Choose to approve either automatic or manual updates. The automatic update strategy is recommended.
- Select Install.
Deploying the Project Quay registry
To deploy the Project Quay registry after installing the Operator, you can create a QuayRegistry custom resource using the OpenShift Container Platform web console or the oc CLI. Ensure you have an object storage provider configured before deployment.
The following sections provide you with the information necessary to configure managed or unmanaged object storage, and then deploy the Project Quay registry.
Note
The following procedures show you how to create a basic Project Quay registry in all namespaces of the OpenShift Container Platform deployment. Depending on your needs, advanced configuration might be necessary. For example, you might need to configure SSL/TLS for your deployment or disable certain components. Advanced configuration practices are covered in later chapters of this guide.
Deploying the Project Quay registry by using the OpenShift Container Platform web console
To deploy a basic Project Quay registry instance, you can use the OpenShift Container Platform web console to create a QuayRegistry custom resource. You configure managed or unmanaged object storage during the deployment process.
- You have installed the Project Quay Operator.
- You have administrative privileges to the cluster.
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- On the Red Hat Quay dashboard, click Create instance.
- On the Create QuayRegistry page, review the default settings of the QuayRegistry custom resource (CR). Here, you decide whether to use managed or unmanaged object storage.
  - If you are using the Multicloud Object Gateway or Red Hat OpenShift Data Foundation as your object storage, keep the following settings:

      - kind: objectstorage
        managed: true

  - If you are using a different storage provider, such as Google Cloud Platform, AWS S3, or Nutanix, set the objectstorage component as follows:

      - kind: objectstorage
        managed: false

- Click Create. You are redirected to the Quay Registry tab on the Operator page.
- Click the name of the Project Quay registry that you created, then click Events to view the status of creation. If you used managed storage and leveraged the Multicloud Object Gateway, the registry completes creation. If you are using Red Hat OpenShift Data Foundation or an unmanaged storage backend provider, complete the following steps:
  - Click the Details page of the Project Quay registry.
  - Click the name of the Config Bundle Secret resource, for example, <example_registry_name_config-bundle-secret-12345>.
  - Click Actions → Edit Secret, and pass in the following information from your backend storage provider:

      # ...
      DISTRIBUTED_STORAGE_CONFIG:
        <storage_provider>:
          - <storage_provider_name>
          - access_key: <access_key>
            bucket_name: <bucket_name>
            secret_key: <secret_key>
            storage_path: /datastorage/registry
      # ...

    Note: Depending on your storage provider, different information is required. For more information, see Storage object configuration fields.
  - Click Save, and then re-navigate to the Events page of the registry to ensure successful deployment.
Deploying the Project Quay registry by using the CLI
To deploy a basic Project Quay registry instance, you can use the oc CLI to create a QuayRegistry custom resource. You configure managed or unmanaged object storage during the deployment process.
Note
For more information, see Automation configuration options.
- You have logged into OpenShift Container Platform using the CLI.
- Create a namespace, for example, quay-enterprise, by entering the following command:

    $ oc new-project quay-enterprise

- Create the QuayRegistry custom resource (CR).
  - If the objectstorage component is set to managed: true, create the QuayRegistry CR by entering the following command:

      $ cat <<EOF | oc create -n quay-enterprise -f -
      apiVersion: quay.redhat.com/v1
      kind: QuayRegistry
      metadata:
        name: example-registry
        namespace: quay-enterprise
      EOF

  - If the objectstorage component is set to managed: false, complete the following steps:
    - Create the config.yaml file for Project Quay by entering the following command. You must include the information required for your backend storage provider. During this step, you can enable additional Project Quay features. The following example is a minimal configuration that includes the configuration options for automating early setup tasks:

        $ cat <<EOF > config.yaml
        ALLOW_PULLS_WITHOUT_STRICT_LOGGING: false
        AUTHENTICATION_TYPE: Database
        DEFAULT_TAG_EXPIRATION: 2w
        FEATURE_USER_INITIALIZE: true
        SUPER_USERS:
        - <username>
        BROWSER_API_CALLS_XHR_ONLY: false
        FEATURE_USER_CREATION: false
        DISTRIBUTED_STORAGE_CONFIG:
          <storage_provider>:
            - <storage_provider_name>
            - access_key: <access_key>
              bucket_name: <bucket_name>
              secret_key: <secret_key>
              storage_path: /datastorage/registry
        ENTERPRISE_LOGO_URL: /static/img/RH_Logo_Quay_Black_UX-horizontal.svg
        FEATURE_BUILD_SUPPORT: false
        FEATURE_DIRECT_LOGIN: true
        FEATURE_MAILING: false
        REGISTRY_TITLE: Red Hat Quay
        REGISTRY_TITLE_SHORT: Red Hat Quay
        SETUP_COMPLETE: true
        TAG_EXPIRATION_OPTIONS:
        - 2w
        TEAM_RESYNC_STALE_TIME: 60m
        TESTING: false
        EOF

      - FEATURE_USER_INITIALIZE: Set this field to true if you plan to create the first user by using the API.
      - SUPER_USERS: Include this field and the username that you plan to use as a Project Quay administrator.
      - BROWSER_API_CALLS_XHR_ONLY: Set this field to false to allow general browser-based access to the API.
      - FEATURE_USER_CREATION: Set this field to false to restrict the creation of new users to superusers.
    - Create a secret for the configuration by entering the following command:

        $ oc create secret generic <quay_config_bundle_name> \
          --from-file=config.yaml=</path/to/config.yaml> \
          -n quay-enterprise \
          --dry-run=client -o yaml | oc apply -f -

    - Create the QuayRegistry CR by entering the following command:

        $ cat <<EOF | oc create -n quay-enterprise -f -
        apiVersion: quay.redhat.com/v1
        kind: QuayRegistry
        metadata:
          name: example-registry
          namespace: quay-enterprise
        spec:
          configBundleSecret: <quay_config_bundle_name>
          components:
            - kind: clair
              managed: true
            - kind: objectstorage
              managed: false
            - kind: mirror
              managed: true
            - kind: monitoring
              managed: true
        EOF

      - objectstorage: Set this field to false when providing your own storage backend.
- Check the status of your registry by entering the following command:

    $ oc describe quayregistry <registry_name> -n quay-enterprise

    ...
    Events:
      Type    Reason                     Age                   From                     Message
      ----    ------                     ----                  ----                     -------
      Normal  ComponentsCreationSuccess  23s (x2458 over 42h)  quayregistry-controller  All objects created/updated successfully

- Alternatively, you can check pod statuses for your registry deployment by entering the following command:

    $ oc get pods -n quay-enterprise

    NAME                                               READY   STATUS      RESTARTS   AGE
    example-registry-clair-app-5ffc9f77d6-jwr9s        1/1     Running     0          3m42s
    example-registry-clair-app-5ffc9f77d6-wgp7d        1/1     Running     0          3m41s
    example-registry-clair-postgres-54956d6d9c-rgs8l   1/1     Running     0          3m5s
    example-registry-quay-app-79c6b86c7b-8qnr2         1/1     Running     4          3m42s
    example-registry-quay-app-79c6b86c7b-xk85f         1/1     Running     4          3m41s
    example-registry-quay-app-upgrade-5kl5r            0/1     Completed   4          3m50s
    example-registry-quay-database-b466fc4d7-tfrnx     1/1     Running     2          3m42s
    example-registry-quay-mirror-6d9bd78756-6lj6p      1/1     Running     0          2m58s
    example-registry-quay-mirror-6d9bd78756-bv6gq      1/1     Running     0          2m58s
    example-registry-quay-postgres-init-dzbmx          0/1     Completed   0          3m43s
    example-registry-quay-redis-8bd67b647-skgqx        1/1     Running     0          3m42s
Creating the first user
Creating the first user establishes the initial administrative account for your Project Quay registry. This step confirms that your deployment is operational and provides the credentials needed to use and manage your registry.
You can create the first user by using the Project Quay UI or the API.
Creating the first user by using the UI
To create the first user for your Project Quay registry, you can use the UI for a visual workflow. The UI method is simpler because it does not require additional configuration in the config.yaml file.
- You have deployed the Project Quay registry.
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Project Quay.
- On the Project Quay Operators page, click Quay Registry, and then the name of your registry.
- On the QuayRegistry details page, click the Registry Endpoint link, for example, example-registry-quay.username-cluster-new.gcp.quaydev.org. You are navigated to the registry's main page.
- Click Create Account.
- Enter the details for Username, Password, Email, and then click Create Account. After creating the first user, you are automatically logged in to the Project Quay registry.
Using the API to create the first user
To create the first user with administrative privileges for your Project Quay registry, you can use the API. This method requires configuring FEATURE_USER_INITIALIZE and SUPER_USERS in your config.yaml file before deployment.
- You have set FEATURE_USER_INITIALIZE: true and established a superuser in your config.yaml file. For example:

    # ...
    FEATURE_USER_INITIALIZE: true
    SUPER_USERS:
    - <username>
    # ...

  If you did not configure these settings upon registry creation, and need to re-configure your registry to enable these settings, see "Enabling features after deployment".
- You have not created a user by using the Project Quay UI.
- On the command-line interface, generate a new user with a username, password, email, and access token by entering the following curl command:

    $ curl -X POST -k https://<quay-server.example.com>/api/v1/user/initialize \
      --header 'Content-Type: application/json' \
      --data '{ "username": "<username>", "password": "<password>", "email": "<email>@example.com", "access_token": true }'

  If successful, the command returns an object with the username, email, and encrypted password. For example:

    {"access_token":"123456789", "email":"quayadmin@example.com","encrypted_password":"<password>","username":"quayadmin"}

  If a user already exists in the database, an error is returned. For example:

    {"message":"Cannot initialize user in a non-empty database"}

  If your password is not at least eight characters or contains whitespace, an error is returned. For example:

    {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}

- You can log in to your registry by navigating to the UI or by using Podman on the CLI. Log in to the registry by running the following podman command:

    $ podman login -u <username> -p <password> http://<quay-server.example.com>
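The password error above encodes a simple rule: at least eight characters and no whitespace. A hypothetical local pre-check you could run before calling the initialize endpoint might look like this; it is a sketch mirroring the documented rule, not part of Quay itself:

```shell
pw='quaypass123'   # candidate password (hypothetical value)
# Reject passwords that contain whitespace or are shorter than
# 8 characters, mirroring the initialize-endpoint validation.
case "$pw" in
  *[[:space:]]*) echo "invalid: contains whitespace" ;;
  ????????*)     echo "password ok" ;;
  *)             echo "invalid: too short" ;;
esac
```

Running this with the sample value prints "password ok"; a seven-character or space-containing value prints the matching error.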
Modifying the QuayRegistry CR after deployment
Modifying the QuayRegistry custom resource (CR) in Project Quay after deployment lets you customize or reconfigure aspects of your Project Quay environment.
Project Quay administrators might modify the QuayRegistry CR for the following reasons:
- To change component management: Switch components from managed: true to managed: false in order to bring your own infrastructure. For example, you might set kind: objectstorage to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix.
- To apply custom configuration: Update or replace the configBundleSecret to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, or feature flags.
- To enable or disable features: Toggle features like repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the spec.components list.
- To scale the deployment: Adjust environment variables or replica counts for the Quay application.
- To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.
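For the scaling case, the quay component accepts an overrides block in the QuayRegistry CR; as noted earlier, this component holds overrides such as environment variables and number of replicas. The following sketch uses placeholder values, and the DEBUGLOG variable is shown only as an example of an environment override:

```yaml
# QuayRegistry excerpt: scale the Quay application pods and
# inject an environment variable via component overrides.
spec:
  components:
    - kind: quay
      managed: true
      overrides:
        replicas: 3
        env:
          - name: DEBUGLOG
            value: "true"
```

Confirm the available override fields for your Operator version before relying on them.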
Modifying the QuayRegistry CR by using the OpenShift Container Platform web console
To modify the QuayRegistry custom resource in Project Quay, you can use the OpenShift Container Platform web console to change component management settings. You can set managed components to unmanaged and use your own infrastructure.
- You are logged into OpenShift Container Platform as a user with admin privileges.
- You have installed the Project Quay Operator.
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Project Quay registry, for example, example-registry.
- Click YAML.
- Adjust the managed field of the desired component to either true or false.
- Click Save.

  Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
Modifying the QuayRegistry CR by using the CLI
To modify the QuayRegistry custom resource in Project Quay, you can use the CLI to change component management settings. You can set managed components to unmanaged and use your own infrastructure.
- You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.
- Edit the QuayRegistry CR by entering the following command:

    $ oc edit quayregistry <registry_name> -n <namespace>

- Make the desired changes to the QuayRegistry CR.

  Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
- Save the changes.
Enabling features after deployment
To enable new features for your Project Quay registry after deployment, you can edit the configBundleSecret resource. You can use the OpenShift Container Platform web console or the CLI to make these changes.
Note
Using the OpenShift Container Platform web console to enable features is generally considered a simpler method.
Enabling features by using the OpenShift Container Platform web console
To enable features for your Project Quay registry, you can edit the configBundleSecret resource using the OpenShift Container Platform web console. The Operator automatically reconciles changes by restarting Quay-related pods.
- You have administrative privileges to the cluster.
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry and then the name of your registry.
- Under Config Bundle Secret, click the name of your secret, for example, quay-config-bundle.
- On the Secret details page, click Actions → Edit secret.
- In the Value text box, add the new configuration fields for the features that you want to enable. For a list of all configuration fields, see Configure Project Quay.
- Click Save. The Project Quay Operator automatically reconciles the changes by restarting all Quay-related pods. After all pods are restarted, the features are enabled.
Modifying the configuration file by using the CLI
To modify the config.yaml file for your Project Quay registry and enable new features, you can download the existing configuration from the configBundleSecret by using the CLI. After making changes, you can re-upload the configBundleSecret resource to apply the changes.
Note
Modifying the config.yaml file by using the CLI is a more involved process than using the OpenShift Container Platform web console.
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
- Describe the QuayRegistry resource by entering the following command:

    $ oc describe quayregistry -n <quay_namespace>

    # ...
    Config Bundle Secret: example-registry-config-bundle-v123x
    # ...

- Obtain the secret data by entering the following command:

    $ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'

    { "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" }

- Decode the data into a YAML file in the current directory by redirecting the output to config.yaml. For example:

    $ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml

- Make the desired changes to your config.yaml file, and then save the file as config.yaml.
- Generate a new configBundleSecret YAML manifest from the updated config.yaml file by entering the following command:

    $ oc -n <namespace> create secret generic <secret_name> \
      --from-file=config.yaml=</path/to/config.yaml> \
      --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml

  where:
  - </path/to/config.yaml>: Specifies the path to your decoded config.yaml file.
- Create the configBundleSecret resource by entering the following command:

    $ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml

    secret/config-bundle created

- Update the QuayRegistry CR to reference the new configBundleSecret object by entering the following command:

    $ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'

    quayregistry.quay.redhat.com/example-registry patched

- Verify that the QuayRegistry CR has been updated with the new configBundleSecret:

    $ oc describe quayregistry -n <quay_namespace>

    # ...
    Config Bundle Secret: <new_configBundleSecret_name>
    # ...

  After patching the registry, the Project Quay Operator automatically reconciles the changes.
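The decode step in the procedure above is a plain base64 round trip, which you can sanity-check locally with a hypothetical one-line payload before touching the real secret:

```shell
# Encode a one-line config fragment, then decode it back.
encoded=$(printf 'FEATURE_USER_INITIALIZE: true\n' | base64)
printf '%s' "$encoded" | base64 --decode
```

The decoded output matches the original fragment exactly, which is the same guarantee you rely on when round-tripping the real config.yaml through the secret.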
Deploying Project Quay on infrastructure nodes
Deploying Project Quay on infrastructure nodes dedicates specific nodes for registry workloads to improve performance and isolate critical components. You can use infrastructure machine sets or label and taint worker nodes to control where quay pods are scheduled.
By default, all quay-related pods are scheduled on available worker nodes in your OpenShift Container Platform cluster. In some environments, you might want to dedicate certain nodes specifically for infrastructure workloads, such as registry, database, and monitoring pods, to improve performance, isolate critical components, or simplify maintenance.
OpenShift Container Platform supports this approach using infrastructure machine sets, which automatically create and manage nodes reserved for infrastructure.
As an OpenShift Container Platform administrator, you can achieve the same result by labeling and tainting worker nodes. This ensures that only infrastructure workloads, like quay pods, are scheduled on these nodes. After your infrastructure nodes are configured, you can control where quay pods run using node selectors and tolerations.
The following procedure is intended for new deployments that install the Project Quay Operator in a single namespace and provide their own backend storage. The procedure shows you how to prepare nodes and deploy Project Quay on dedicated infrastructure nodes. In this procedure, all quay-related pods are placed on dedicated infrastructure nodes.
Labeling and tainting nodes for infrastructure use
To dedicate nodes for infrastructure workloads like quay pods, you can label and taint worker nodes with the infra role. This prevents user workloads from being scheduled on infrastructure nodes and ensures only infrastructure pods run on these dedicated nodes.
Note
The following procedure labels three worker nodes with the node-role.kubernetes.io/infra= label.
- Obtain a list of worker nodes in your deployment by entering the following command:

    $ oc get nodes | grep worker

    NAME                                                             STATUS   ROLES    AGE    VERSION
    example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal   Ready    worker   401d   v1.31.11
    example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal   Ready    worker   402d   v1.31.11
    example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   Ready    worker   401d   v1.31.11

- Add the node-role.kubernetes.io/infra= label to the worker nodes by entering the following command. The number of infrastructure nodes required depends on your environment. Production environments should provision enough infra nodes to ensure high availability and sufficient resources for all quay-related components. Monitor CPU, memory, and storage utilization to determine whether additional infra nodes are required.

    $ oc label node --overwrite <infra_node_one> <infra_node_two> <infra_node_three> node-role.kubernetes.io/infra=

- Confirm that the node-role.kubernetes.io/infra= label has been added to the proper nodes by entering the following command:

    $ oc get node | grep infra

    example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal   Ready    infra,worker   405d   v1.32.8
    example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal   Ready    infra,worker   406d   v1.32.8
    example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   Ready    infra,worker   405d   v1.32.8

- When a worker node is assigned the infra role, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node, and then add tolerations for the pods that you want to control. Taint the worker nodes that carry the infra label by entering the following command:

    $ oc adm taint nodes -l node-role.kubernetes.io/infra \
      node-role.kubernetes.io/infra=reserved:NoSchedule --overwrite

    node/example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal modified
    node/example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal modified
    node/example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal modified
Creating a project with node selector and tolerations
To ensure that Project Quay pods run on infrastructure nodes, you can create a project with node selector and tolerations annotations. These annotations direct pods to infrastructure nodes and allow them to tolerate the node taints.
-
Add the
node-selector annotation to the namespace by entering the following command:

$ oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra='
 -
Add the
tolerations annotation to the namespace by entering the following command:

$ oc annotate namespace <namespace> scheduler.alpha.kubernetes.io/defaultTolerations='[{"operator":"Equal","value":"reserved","effect":"NoSchedule","key":"node-role.kubernetes.io/infra"},{"operator":"Equal","value":"reserved","effect":"NoExecute","key":"node-role.kubernetes.io/infra"}]' --overwrite

namespace/<namespace> annotated

Important

The tolerations in this example are specific to two taints commonly applied to infra nodes. The taints configured in your environment might differ. You must set the tolerations accordingly to match the taints applied to your infra nodes.
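The two oc annotate commands above are equivalent to declaring the annotations on the Namespace object itself. A sketch of the resulting manifest, assuming a namespace named quay-enterprise (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: quay-enterprise   # illustrative namespace name
  annotations:
    # Directs all pods in this namespace to infra-labeled nodes
    openshift.io/node-selector: 'node-role.kubernetes.io/infra='
    # Default tolerations applied to pods created in this namespace
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator":"Equal","value":"reserved","effect":"NoSchedule","key":"node-role.kubernetes.io/infra"},
       {"operator":"Equal","value":"reserved","effect":"NoExecute","key":"node-role.kubernetes.io/infra"}]
```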
Installing the Project Quay Operator on the annotated namespace
To install the Project Quay Operator on infrastructure nodes, you can install it in the namespace that has node-selector and tolerations annotations. This ensures the Operator and its pods run on the dedicated infrastructure nodes.
-
On the OpenShift Container Platform web console, click Operators → OperatorHub.
-
In the search box, type Project Quay.
-
Click Project Quay → Install.
-
Select the update channel, for example, stable-3.16 and the version.
-
Click A specific namespace on the cluster for the installation mode, and then select the namespace that you applied the
node-selector and tolerations annotations to. -
Click Install.
-
Confirm that the Operator is installed by entering the following command:
$ oc get pods -n <annotated_namespace> -o wide | grep quay-operator

quay-operator.v3.15.1-858b5c5fdc-lf5kj   1/1   Running   0   29m   10.130.6.18   example-cluster-new-c5qqp-worker-f-mhngl.c.quay-devel.internal   <none>   <none>
Creating the Project Quay registry
To create a Project Quay registry that runs on infrastructure nodes, you can create a QuayRegistry custom resource in the annotated namespace. You must patch the registry components (clair, postgres, redis, and so on) with toleration annotations so they can schedule onto the infra worker nodes.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
On the Project Quay Operator details page, click Quay Registry → Create QuayRegistry.
-
On the Create QuayRegistry page, set the
monitoring and objectstorage fields to false. The monitoring component cannot be enabled when Project Quay is installed in a single namespace. For example:

# ...
- kind: monitoring
  managed: false
- kind: objectstorage
  managed: false
# ...
 -
Click Create.
-
Optional: Confirm that the pods are running on infra nodes.
-
List all
Quay-related pods along with the nodes that they are scheduled on by entering the following command:

$ oc get pods -n <annotated_namespace> -o wide | grep example-registry

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
example-registry-clair-app-5f95d685bd-dgjf6 1/1 Running 0 52m 10.128.4.12 example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal <none> <none>
...
 -
Confirm that the nodes listed include only nodes labeled
infra by running the following command:

$ oc get nodes -l node-role.kubernetes.io/infra -o name

node/example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal
node/example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal
node/example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal
NoteIf any pod appears on a non-infra node, revisit your namespace annotations and deployment patching.
-
-
Restart all pods for the Project Quay registry by entering the following command:
$ oc delete pod -n <annotated_namespace> --all -
Check the status of the pods by entering the following command:
$ oc get pods -n <annotated_namespace>

NAME READY STATUS RESTARTS AGE
example-registry-clair-app-5f95d685bd-dgjf6 1/1 Running 0 5m4s
...
Advanced configuration
Advanced configuration options let you customize your Project Quay deployment when default settings do not meet your needs. You can configure external databases, custom ingress, monitoring, and other components to integrate with existing infrastructure.
Using an external PostgreSQL database
Using an external PostgreSQL database with Project Quay lets you manage your own database infrastructure instead of using the Operator-managed database. You must ensure that required configuration and extensions, such as pg_trgm, are in place before deployment.
|
Important
|
Do not share the same PostgreSQL database between Project Quay and Clair deployments. Each service must use its own database instance. Sharing databases with other workloads is also not supported, because connection-intensive components such as Project Quay and Clair can quickly exceed PostgreSQL’s connection limits. Connection poolers such as pgBouncer are not supported with Project Quay or Clair. |
When managing your own PostgreSQL database for use with Project Quay, the following best practices are recommended:
-
pg_trgm extension: The pg_trgm extension must be enabled on the database for a successful deployment.
 -
Backups: Perform regular database backups using PostgreSQL-native tools or your existing backup infrastructure. The Project Quay Operator does not manage database backups.
-
Restores: When restoring a backup, ensure that all Project Quay pods are stopped before beginning the restore process.
-
Storage sizing: When using the Operator-managed PostgreSQL database, the default storage allocation is 50 GiB. For external databases, you must ensure sufficient storage capacity for your environment, as the Operator does not handle volume resizing.
-
Monitoring: Monitor disk usage, connection limits, and query performance to prevent outages caused by resource exhaustion.
Integrating an existing PostgreSQL database
To integrate an existing PostgreSQL database with your Project Quay registry, you can set the postgres component to unmanaged and configure the DB_URI in the configBundleSecret. This lets you leverage your current database infrastructure instead of using the Operator-managed database.
|
Note
|
The following procedure uses the OpenShift Container Platform web console to configure the Project Quay registry to use an external PostgreSQL database. For most users, using the web console is simpler. This procedure can also be done by using the oc CLI. |
-
On the OpenShift Container Platform web console, click Operators → Installed Operators.
-
Click Red Hat Quay.
-
Click Quay Registry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
Click YAML.
-
Set the
postgresfield of theQuayRegistryCR tomanaged: false. For example:- kind: postgres managed: false -
Click Save.
-
Click Details → the name of your
Config Bundle Secret resource. -
On the Secret Details page, click Actions → Edit Secret.
-
Add the
DB_URI field to your config.yaml file. For example:

DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
 -
Optional: Add additional database configuration fields, such as
DB_CONNECTION_ARGS or SSL/TLS connection arguments. For more information, see Database connection arguments. -
Click Save.
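For TLS-protected database connections, the DB_URI entry is typically combined with DB_CONNECTION_ARGS. The following is a hedged sketch of such a config.yaml fragment; the hostname, credentials, and certificate path are placeholders, and you should confirm the exact field names against the database connection arguments documentation:

```yaml
DB_URI: postgresql://quayuser:quaypass@db.example.com:5432/quay  # placeholder credentials and host
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
  sslmode: verify-ca                       # require a verified TLS connection
  sslrootcert: /conf/stack/postgres.pem    # CA certificate mounted into the Quay pod (illustrative path)
```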
Using an external Redis database
Using an external Redis database with Project Quay lets you manage your own Redis infrastructure instead of using the Operator-managed Redis. You must ensure that Redis is properly configured and available before deployment, and use a dedicated instance separate from Clair.
|
Important
|
Do not share the same Redis instance between Project Quay and Clair deployments. Each service must use its own dedicated Redis instance. Sharing Redis with other workloads is not supported, because connection-intensive components such as Project Quay and Clair can quickly exhaust available Redis connections and degrade performance. |
Integrating an external Redis database
To integrate an existing Redis database with your Project Quay registry, you can set the redis component to unmanaged and configure BUILDLOGS_REDIS and USER_EVENTS_REDIS in the configBundleSecret. This lets you use your own Redis infrastructure for build logs and user event processing.
|
Note
|
The following procedure uses the OpenShift Container Platform web console to configure Project Quay to use an external Redis database. For most users, using the web console is simpler. You can also complete this procedure by using the oc CLI. |
-
In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
-
Click Red Hat Quay.
-
Click QuayRegistry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
Click YAML.
-
Set the
rediscomponent to unmanaged by adding the following entry underspec.components:- kind: redis managed: false -
Click Save.
-
Click Details → the name of your
Config Bundle Secret resource. -
On the Secret details page, click Actions → Edit Secret.
-
In the
config.yamlsection, add entries for your external Redis instance. For example:BUILDLOGS_REDIS: host: redis.example.com port: 6379 ssl: false USER_EVENTS_REDIS: host: redis.example.com port: 6379 ssl: falseImportantIf both the
BUILDLOGS_REDIS and USER_EVENTS_REDIS fields reference the same Redis deployment, ensure that your Redis service can handle the combined connection load. For large or high-throughput registries, use separate Redis databases or clusters for these components. -
Optional: Add additional Redis configuration fields, such as SSL/TLS connection arguments. For more information, see Redis configuration fields.
 -
Click Save.
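As the Important note above suggests, large or high-throughput registries benefit from separating build-log and user-event traffic. A sketch of a config.yaml fragment that points the two fields at distinct Redis instances (the hostnames are illustrative):

```yaml
BUILDLOGS_REDIS:
  host: redis-buildlogs.example.com   # dedicated instance for build logs (illustrative host)
  port: 6379
  ssl: true
USER_EVENTS_REDIS:
  host: redis-events.example.com      # dedicated instance for user events (illustrative host)
  port: 6379
  ssl: true
```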
About Horizontal Pod Autoscaling (HPA)
Horizontal Pod Autoscalers (HPAs) automatically adjust the number of running pods based on CPU and memory utilization. Project Quay deployments include managed HPAs for key components to ensure availability and performance during load spikes or maintenance events.
A typical Project Quay deployment includes the following pods:
-
Two pods for the Project Quay application (
example-registry-quay-app-*) -
One Redis pod for Project Quay logging (
example-registry-quay-redis-*) -
One PostgreSQL pod for metadata storage (
example-registry-quay-database-*) -
Two
Quay mirroring pods (example-registry-quay-mirror-*) -
Two pods for Clair (
example-registry-clair-app-*) -
One PostgreSQL pod for Clair (
example-registry-clair-postgres-*)
HPAs are managed by default for the Quay, Clair, and Mirror components, each starting with two replicas to prevent downtime during upgrades, reconfigurations, or pod rescheduling events.
Managing Horizontal Pod Autoscaling
To customize scaling thresholds or replica limits for your Project Quay registry, you can set the horizontalpodautoscaler component to unmanaged in the QuayRegistry custom resource. You can then explicitly set replica counts for the quay, clair, and mirror components.
|
Note
|
The following procedure uses the OpenShift Container Platform web console to manage Horizontal Pod Autoscaling for the Project Quay registry. For most users, using the web console is simpler. This procedure can also be done by using the oc CLI. |
-
Edit your
QuayRegistry CR:

$ oc edit quayregistry <quay_registry_name> -n <quay_namespace>

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
    - kind: quay
      managed: true
      overrides:
        replicas: null
    - kind: clair
      managed: true
      overrides:
        replicas: null
    - kind: mirror
      managed: true
      overrides:
        replicas: null
# ...
 -
Create a custom
HorizontalPodAutoscaler resource with your desired configuration, for example:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: quay-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    kind: Deployment
    name: quay-registry-quay-app
    apiVersion: apps/v1
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 90
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90
 -
Apply the new HPA configuration to your cluster:
$ oc apply -f <custom_hpa>.yaml

horizontalpodautoscaler.autoscaling/quay-registry-quay-app created
-
Verify that your Project Quay application pods are running:
$ oc get pod | grep quay-app

quay-registry-quay-app-5b8fd49d6b-7wvbk   1/1   Running   0   34m
quay-registry-quay-app-5b8fd49d6b-jslq9   1/1   Running   0   3m42s
quay-registry-quay-app-5b8fd49d6b-pskpz   1/1   Running   0   43m
 -
Verify that your custom HPA is active:
$ oc get hpa

NAME                     REFERENCE                           TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
quay-registry-quay-app   Deployment/quay-registry-quay-app   67%/90%, 54%/90%   3         20        3          51m
Configuring custom ingress
You can configure custom ingress for Project Quay by disabling the Operator-managed route component and managing your own routes or ingress controllers. This configuration is useful when your environment requires a custom SSL/TLS setup, specific DNS naming conventions, or when Project Quay is deployed behind a load balancer or proxy that handles TLS termination.
The Project Quay Operator separates route management from SSL/TLS configuration by introducing a distinct tls component. You can therefore manage each independently, depending on whether Project Quay or the cluster should handle TLS termination. For more information about using SSL/TLS certificates with your deployment, see "Securing Project Quay".
|
Note
|
If you disable the managed route component, you must configure SSL/TLS for your deployment in the config.yaml file. |
Disabling the Route component
To prevent the Project Quay Operator from creating a route, you can set the route component to unmanaged in the QuayRegistry custom resource. You must then configure SSL/TLS handling in your config.yaml file.
-
In your
quayregistry.yamlfile, set theroutecomponent asmanaged: false:apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false -
In your
config.yaml file, configure Project Quay to handle SSL/TLS. For example:

# ...
EXTERNAL_TLS_TERMINATION: false
SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
PREFERRED_URL_SCHEME: https
# ...

If the configuration is incomplete, the following error might appear:
{
  "reason": "ConfigInvalid",
  "message": "required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields"
}
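With the route component unmanaged and Project Quay terminating TLS itself, you create the route manually. The following is a hedged sketch of a passthrough route; the route name is illustrative, and the service name example-registry-quay-app reflects the Operator's default naming, which you should verify in your cluster:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-registry-quay        # illustrative route name
  namespace: quay-enterprise
spec:
  host: example-registry-quay-quay-enterprise.apps.user1.example.com
  to:
    kind: Service
    name: example-registry-quay-app  # default Quay service name (verify in your cluster)
  port:
    targetPort: https
  tls:
    termination: passthrough         # TLS is terminated inside the Quay pod
```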
Configuring SSL/TLS and routes
Configuring SSL/TLS and routes for Project Quay lets you control how TLS termination and route management work together. The tls component provides support for OpenShift Container Platform edge termination routes and enables independent control of route management and TLS certificate handling.
EXTERNAL_TLS_TERMINATION: true is the default, opinionated setting, which assumes the cluster manages TLS termination.
Multiple valid configurations are possible, as shown in the following table:
| Option | Route | TLS | Certs provided | Result |
|---|---|---|---|---|
| My own load balancer handles TLS | Managed | Managed | No | Edge route using default cluster wildcard certificate |
| Project Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certificates mounted in the Project Quay pod |
| Project Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates set inside the Project Quay pod; user must manually create a route |
Disabling the monitoring component
To disable the monitoring component, set it to unmanaged in the QuayRegistry custom resource. You must disable monitoring when you install the Project Quay Operator in a single namespace. You can also disable it in multi-namespace installations to use your own monitoring stack.
|
Note
|
Monitoring cannot be enabled when the Project Quay Operator is installed in a single namespace. You might also disable monitoring in multi-namespace deployments if you use an external Prometheus or Grafana instance, want to reduce resource overhead, or require custom observability integration. |
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: monitoring
managed: false
Disabling the mirroring component
Repository mirroring in Project Quay allows you to automatically synchronize container images from remote registries into your local Project Quay instance. The Project Quay Operator deploys a separate mirroring worker component that handles these synchronization tasks.
You can disable the managed mirroring component by setting it to managed: false in the QuayRegistry custom resource.
|
Note
|
Disabling managed mirroring means that the Operator does not deploy or reconcile any mirroring pods. You are responsible for creating, scheduling, and maintaining mirroring jobs manually. For most production deployments, leaving mirroring as |
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: mirroring
managed: false
Configuring QuayRegistry CR resources
Configuring resources for managed components lets you adjust CPU and memory requests for quay, clair, mirroring, and database pods. You can configure resources to run smaller test clusters or request more resources upfront to avoid performance issues.
The following components should not be set lower than their minimum requirements. Setting resources too low can cause issues with your deployment and, in some cases, result in failure of the pod’s deployment.
-
quay: Minimum of 6 GB memory, 2 vCPUs
 -
clair: Recommended 2 GB memory, 2 vCPUs
 -
clairpostgres: Minimum of 200 MB memory
You can configure resource requests on the OpenShift Container Platform UI or directly by updating the QuayRegistry CR via the CLI.
|
Important
|
The default values set for these components are the suggested values. Setting resource requests too high or too low might lead to inefficient resource utilization, or performance degradation, respectively. |
Configuring resource requests by using the OpenShift Container Platform web console
To configure resource requests for your Project Quay registry components, you can use the OpenShift Container Platform web console to edit the QuayRegistry custom resource. You can set CPU and memory limits and requests for quay, clair, mirroring, and database pods.
-
On the OpenShift Container Platform developer console, click Operators → Installed Operators → Red Hat Quay.
-
Click QuayRegistry.
-
Click the name of your registry, for example, example-registry.
-
Click YAML.
-
In the
spec.components field, you can override the resources of all components by setting values for the .overrides.resources.limits and the .overrides.resources.requests fields. You can also specify a storageClassName for postgres and clairpostgres resources; however, these fields must be defined during initial installation of the component. For example:

spec:
  components:
    - kind: clair
      managed: true
      overrides:
        resources:
          limits:
            cpu: "5"          # Limiting to 5 CPUs (equivalent to 5000m or 5000 millicpu)
            memory: "18Gi"    # Limiting to 18 Gibibytes of memory
          requests:
            cpu: "4"          # Requesting 4 CPUs
            memory: "4Gi"     # Requesting 4 Gibibytes of memory
    - kind: postgres
      managed: true
      overrides:
        storageClassName: "local-path"
        resources:
          limits: {}
          requests:
            cpu: "700m"       # Requesting 700 millicpu or 0.7 CPU
            memory: "4Gi"     # Requesting 4 Gibibytes of memory
    - kind: mirror
      managed: true
      overrides:
        resources:
          limits: {}
          requests:
            cpu: "800m"       # Requesting 800 millicpu or 0.8 CPU
            memory: "1Gi"     # Requesting 1 Gibibyte of memory
    - kind: quay
      managed: true
      overrides:
        resources:
          limits:
            cpu: "4"          # Limiting to 4 CPUs
            memory: "10Gi"    # Limiting to 10 Gibibytes of memory
          requests:
            cpu: "4"          # Requesting 4 CPUs
            memory: "10Gi"    # Requesting 10 Gibibytes of memory
    - kind: clairpostgres
      managed: true
      overrides:
        storageClassName: "local-path"
        resources:
          limits:
            cpu: "800m"       # Limiting to 800 millicpu or 0.8 CPU
            memory: "3Gi"     # Limiting to 3 Gibibytes of memory
          requests: {}
-
limits: Setting the limits or requests fields to {} uses the default values for these resources.
 -
limits: Leaving the limits or requests field empty puts no limitations on these resources.
-
Configuring resource requests by using the CLI
To configure resource requests for your Project Quay registry components after deployment, you can edit the QuayRegistry custom resource using the CLI. You can set CPU and memory limits and requests for quay, clair, mirroring, and database pods.
-
Edit the
QuayRegistry CR by entering the following command:

$ oc edit quayregistry <registry_name> -n <namespace>
 -
Make any desired changes. For example:
- kind: quay
  managed: true
  overrides:
    resources:
      limits: {}
      requests:
        cpu: "0.7"        # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu)
        memory: "512Mi"   # Requesting 512 Mebibytes of memory
 -
Save the changes.
Troubleshooting the QuayRegistry CR
To troubleshoot issues during Project Quay registry deployment, you can check the QuayRegistry custom resource status using the OpenShift Container Platform web console Events page or the oc CLI. These methods help you identify and resolve deployment problems.
Monitoring and debugging the QuayRegistry CR by using the OpenShift Container Platform web console
To monitor and debug your Project Quay registry deployment, you can check the Events page on the OpenShift Container Platform web console. The Events page shows lifecycle observability and helps you identify problems related to registry deployment.
-
You have deployed a Project Quay registry.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
On the Project Quay Operator details page, click Quay Registry.
-
Click the name of the registry → Events. On this page, events are streamed in real time.
-
Optional: To reveal more information about deployment issues, you can click the name of the registry on the Events page to navigate to the QuayRegistry details page. On the QuayRegistry details page, you can view the condition of all
QuayRegistry CR components.
Monitoring and debugging the QuayRegistry CR by using the CLI
To monitor and debug your Project Quay registry deployment, you can use the oc CLI to check the QuayRegistry custom resource status. The Status field shows component health and helps you troubleshoot deployment issues.
With the oc CLI, you can obtain the following information about the QuayRegistry CR:
-
The
conditions field, which shows the status of all QuayRegistry components. -
The
currentVersion field, which shows the version of Project Quay. -
The
registryEndpoint field, which shows the publicly available hostname of the registry.
-
You have deployed a Project Quay registry by using the web console or the CLI.
-
View the state of deployed components by entering the following command:
$ oc get pods -n quay-enterprise

NAME                                               READY   STATUS              RESTARTS   AGE
example-registry-clair-app-86554c6b49-ds7bl        0/1     ContainerCreating   0          2s
example-registry-clair-app-86554c6b49-hxp5s        0/1     Running             1          17s
example-registry-clair-postgres-68d8857899-lbc5n   0/1     ContainerCreating   0          17s
example-registry-quay-app-upgrade-h2v7h            0/1     ContainerCreating   0          9s
example-registry-quay-database-66f495c9bc-wqsjf    0/1     ContainerCreating   0          17s
example-registry-quay-mirror-854c88457b-d845g      0/1     Init:0/1            0          2s
example-registry-quay-mirror-854c88457b-fghxv      0/1     Init:0/1            0          17s
example-registry-quay-postgres-init-bktdt          0/1     Terminating         0          17s
example-registry-quay-redis-f9b9d44bf-4htpz        0/1     ContainerCreating   0          17s
 -
Return information about your deployment by entering the following command:
$ oc get quayregistry -n <namespace> -o yaml

apiVersion: v1
items:
  - apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      annotations:
        ...
    spec:
      components:
        - kind: clair
          managed: true
        - kind: objectstorage
          managed: false
      ...
    status:
      conditions:
        - lastTransitionTime: "2025-10-01T18:46:13Z"
          lastUpdateTime: "2025-10-07T13:12:54Z"
          message: Horizontal pod autoscaler found
          reason: ComponentReady
          status: "True"
          type: ComponentHPAReady
      ...
      currentVersion: v3.15.2
      lastUpdated: 2025-10-07 13:12:54.48811705 +0000 UTC
      registryEndpoint: https://example-registry-quay-cluster-new.gcp.quaydev.org
conditions shows information about the status of all QuayRegistry components.
 -
currentVersion shows the current version that the registry is using.
 -
registryEndpoint shows the publicly available hostname of the registry.
-