Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise.

The Project Quay Operator provides a simple method to deploy and manage a Project Quay cluster. This is the preferred procedure for deploying Project Quay on OpenShift and is covered in this guide.

Note that this version of the Project Quay Operator has been completely rewritten and differs substantially from earlier versions. Please review this documentation carefully. If you are looking for documentation for prior versions of the Project Quay Operator, refer to the documentation published for those releases.

Prerequisites for Project Quay on OpenShift

Here are a few things you need to know before you begin deploying the Project Quay Operator on OpenShift:

  • OpenShift cluster: You need a privileged account to an OpenShift 4.5 or later cluster on which to deploy the Project Quay Operator. That account must have the ability to create namespaces at the cluster scope.

  • Resource Requirements: Each Project Quay application pod has the following resource requirements:

    • 8Gi of memory

    • 2000 millicores (2 cores) of CPU

The Project Quay Operator will create at least one application pod per Project Quay deployment it manages. Ensure your OpenShift cluster has sufficient compute resources for these requirements.
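
You can check the allocatable capacity of your nodes before deploying; this is a generic OpenShift query, not specific to Quay:

$ oc get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory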

  • Object Storage: By default, the Project Quay Operator uses the ObjectBucketClaim Kubernetes API to provision object storage. Consuming this API decouples the Operator from any vendor-specific implementation. OpenShift Container Storage provides this API via its NooBaa component, which will be used in this example; a quick availability check for this API is shown after this list. Otherwise, Project Quay can be manually configured to use any of the following supported cloud storage options:

    • Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Project Quay)

    • Azure Blob Storage

    • Google Cloud Storage

    • Ceph Object Gateway (RADOS)

    • OpenStack Swift

    • CloudFront + S3
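
If you intend to rely on the ObjectBucketClaim API described above, you can confirm that the corresponding CustomResourceDefinition is registered on your cluster before proceeding (if it is missing, install OpenShift Container Storage as described later in this guide):

$ oc get crd objectbucketclaims.objectbucket.io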

Installing the Quay Operator

Differences from Earlier Versions

As of Project Quay 3.4.0, the Operator has been completely rewritten to provide an improved out-of-the-box experience as well as support for more Day 2 operations. As a result, the new Operator is simpler to use and is more opinionated. The key differences from earlier versions of the Operator are:

  • The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource

  • The default installation options produce a fully supported Quay environment with all managed dependencies (database, object storage, etc.) ready for production use

  • A new robust validation library for Quay’s configuration which is shared by the Quay application and config tool for consistency

  • Registry object storage can now be managed by the Operator using the ObjectBucketClaim Kubernetes API (the NooBaa component of Red Hat OpenShift Container Storage (RHOCS) is one implementation of this API)

  • Customization of the container images used by deployed pods for testing and development scenarios

Before Installing the Quay Operator

Deciding On a Storage Solution

If you want the Operator to manage its own object storage, you will first need to ensure that RHOCS is available on your OpenShift cluster to provide the ObjectBucketClaim API. If you already have object storage ready to be used by the Operator, skip to Installing the Operator.

Enabling OpenShift Container Storage

To install the RHOCS Operator and configure a lightweight NooBaa (S3-compatible) object storage:

  1. Open the OpenShift console and select Operators → OperatorHub, then select the OpenShift Container Storage Operator.

  2. Select Install. Accept all default options and select Install again.

  3. After a minute or so, the Operator will install and create the openshift-storage namespace. You can confirm the installation is complete when the Status column is marked Succeeded.
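
    You can also verify this from the CLI; the ClusterServiceVersion name varies by release, so look for the entry whose PHASE column reads Succeeded:

    $ oc get csv -n openshift-storage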

  4. Create NooBaa object storage. Save the following YAML to a file called noobaa.yml.

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: noobaa
      namespace: openshift-storage
    spec:
      dbResources:
        requests:
          cpu: '0.1'
          memory: 1Gi
      coreResources:
        requests:
          cpu: '0.1'
          memory: 1Gi

    Then run the following:

    $ oc create -n openshift-storage -f noobaa.yml
    noobaa.noobaa.io/noobaa created
  5. After a minute or so, you should see the object storage ready for use (the PHASE column shows Ready):

    $ oc get -n openshift-storage noobaas noobaa -w
    NAME     MGMT-ENDPOINTS              S3-ENDPOINTS                IMAGE                                                                                                            PHASE   AGE
    noobaa   [https://10.0.32.3:30318]   [https://10.0.32.3:31958]   registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d   Ready   3d18h

Installing the Operator from OperatorHub

  1. Using the OpenShift console, select Operators → OperatorHub, then select the Quay Operator. If there is more than one, be sure to use the Red Hat certified Operator and not the community version.

  2. Select Install. The Operator Subscription page appears.

  3. Choose the following then select Subscribe:

    • Installation Mode: Choose either 'All namespaces' or 'A specific namespace' depending on whether you want the Operator to be available cluster-wide or only within a single namespace (all-namespaces recommended)

    • Update Channel: Choose the update channel (only one may be available)

    • Approval Strategy: Choose to approve automatic or manual updates

  4. Select Install.

  5. After a minute you will see the Operator installed successfully in the Installed Operators page.

High Level Concepts

QuayRegistry API

The Quay Operator provides the QuayRegistry custom resource API to declaratively manage Quay container registries on the cluster. Use either the OpenShift UI or a command-line tool to interact with this API.

  • Creating a QuayRegistry will result in the Operator deploying and configuring all necessary resources needed to run Quay on the cluster.

  • Editing a QuayRegistry will result in the Operator reconciling the changes and creating/updating/deleting objects to match the desired configuration.

  • Deleting a QuayRegistry will result in garbage collection of all previously created resources and the Quay container registry will no longer be available.

The QuayRegistry API is fairly simple, and the fields are outlined in the following sections.
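
For illustration, the create/edit/delete lifecycle described above maps directly onto the standard CLI verbs (the namespace and registry name here are placeholders):

$ oc get quayregistries -n <your-namespace>              # list registries
$ oc edit quayregistry my-registry -n <your-namespace>   # changes are reconciled
$ oc delete quayregistry my-registry -n <your-namespace> # resources are garbage collected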

Components

Quay is a powerful container registry platform and as a result, requires a decent number of dependencies. These include a database, object storage, Redis, and others. The Quay Operator manages an opinionated deployment of Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry API.

In the QuayRegistry custom resource, the spec.components field configures components. Each component contains two fields: kind, the name of the component, and managed, a boolean indicating whether the component’s lifecycle is handled by the Operator. By default (omitting this field), all components are managed and will be autofilled upon reconciliation for visibility:

spec:
  components:
    - kind: postgres
      managed: true
    ...

Unless your QuayRegistry custom resource specifies otherwise, the Operator will use defaults for the following managed components (a complete example follows this list):

  • postgres Stores the registry metadata. Uses an upstream (CentOS) version of Postgres 10.

  • redis Handles Quay builder coordination and some internal logging.

  • objectstorage Stores image layer blobs. Utilizes the ObjectBucketClaim Kubernetes API which is provided by Noobaa/RHOCS.

  • clair Provides image vulnerability scanning.

  • horizontalpodautoscaler Adjusts the number of Quay pods depending on memory/cpu consumption.

  • mirror Configures a repository mirror worker (to support optional repository mirroring).

  • route Provides an external entrypoint to the Quay registry from outside of OpenShift.
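
For reference, a QuayRegistry that spells out every component explicitly would look like the following; all of the managed values shown match the defaults, so this is purely illustrative:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    - kind: postgres
      managed: true
    - kind: redis
      managed: true
    - kind: objectstorage
      managed: true
    - kind: clair
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: mirror
      managed: true
    - kind: route
      managed: true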

Considerations For Managed Components

While the Operator will handle any required configuration and installation work needed for Project Quay to use the managed components, there are several considerations to keep in mind.

  • Database backups should be performed regularly using either the supplied tools on the Postgres image or your own backup infrastructure. The Operator does not currently ensure the Postgres database is backed up.

  • Restoring the Postgres database from a backup must be done using Postgres tools and procedures. Be aware that your Quay Pods should not be running while the database restore is in progress.

  • Database disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the database volume is currently not handled by the Operator.

  • Object storage disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the RHOCS volume is currently not handled by the Operator. See the section below on resizing managed storage for more details.

  • The Operator will deploy an OpenShift Route as the default entrypoint to the registry. If you prefer a different entrypoint (for example, Ingress or direct Service access), that configuration will need to be done manually.

If any of these considerations are unacceptable for your environment, consider providing the Operator with unmanaged resources or overrides as described in the following sections.

Using Existing (Un-Managed) Components With the Quay Operator

If you have existing components such as Postgres, Redis or object storage that you would like to use with Quay, you first configure them within the Quay configuration bundle (config.yaml) and then reference the bundle in your QuayRegistry (as a Kubernetes Secret) while indicating which components are unmanaged.

For example, to use an existing Postgres database:

  1. Create a Secret with the necessary database fields in a config.yaml file:

    config.yaml:
    DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
    $ kubectl create secret generic --from-file config.yaml=./config.yaml test-config-bundle
  2. Create a QuayRegistry which marks postgres component as unmanaged and references the created Secret:

    quayregistry.yaml
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: test
    spec:
      configBundleSecret: test-config-bundle
      components:
        - kind: postgres
          managed: false

    The deployed Quay application will now use the external database.

Note

The Quay config editor can also be used to create or modify an existing config bundle and simplify the process of updating the Kubernetes Secret, especially for multiple changes. When Quay’s configuration is changed via the config editor and sent to the Operator, the Quay deployment will be updated to reflect the new configuration.

Config Bundle Secret

The spec.configBundleSecret field is a reference to the metadata.name of a Secret in the same namespace as the QuayRegistry. This Secret must contain a config.yaml key whose value is a Quay configuration file. This field is optional, and will be auto-filled by the Operator if not provided. If provided, it serves as the base set of config fields which are later merged with other fields from any managed components to form a final output Secret, which is then mounted into the Quay application pods.

QuayRegistry Status

Lifecycle observability for a given Quay deployment is reported in the status section of the corresponding QuayRegistry object. The Operator constantly updates this section, and this should be the first place to look for any problems or state changes in Quay or its managed dependencies.

Registry Endpoint

Once Quay is ready to be used, the status.registryEndpoint field will be populated with the publicly available hostname of the registry.

Config Editor Endpoint

Access Quay’s UI-based config editor using status.configEditorEndpoint.

Config Editor Credentials Secret

The username/password for the config editor UI will be stored in a Secret in the same namespace as the QuayRegistry referenced by status.configEditorCredentialsSecret.

Current Version

The current version of Quay that is running will be reported in status.currentVersion.

Conditions

Certain conditions will be reported in status.conditions.
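
All of the status fields above can be read with jsonpath queries, for example (the registry and namespace names are placeholders):

$ oc get quayregistry my-registry -n <your-namespace> -o jsonpath='{.status.registryEndpoint}'
$ oc get quayregistry my-registry -n <your-namespace> -o jsonpath='{.status.currentVersion}'
$ oc get quayregistry my-registry -n <your-namespace> -o jsonpath='{.status.conditions}'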

Deploying Quay using the Quay Operator

Creating a Quay Registry

The default configuration tells the Operator to manage all of Quay’s dependencies (database, Redis, object storage, etc).

OpenShift Console

  1. Select Operators → Installed Operators, then select the Quay Operator to navigate to the Operator detail view.

  2. Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'.

  3. Optionally change the 'Name' of the QuayRegistry. This will affect the hostname of the registry. All other fields have been populated with defaults.

  4. Click 'Create' to submit the QuayRegistry to be deployed by the Quay Operator.

  5. You should be redirected to the QuayRegistry list view. Click on the QuayRegistry you just created to see the detail view.

  6. Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.

Command Line

The same result can be achieved using the CLI.

  1. Create the following QuayRegistry custom resource in a file called quay.yaml.

quay.yaml:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: my-registry
  2. Create the QuayRegistry in your namespace:

$ oc create -n <your-namespace> -f quay.yaml
  3. Wait until the status.registryEndpoint is populated.

$ oc get -n <your-namespace> quayregistry my-registry -o jsonpath="{.status.registryEndpoint}" -w
  4. Once the status.registryEndpoint has a value, navigate to it using your web browser to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.

Upgrading Quay using the Quay Operator

The Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Quay and its components which it manages. There is no field on the QuayRegistry custom resource which sets the version of Quay to deploy; the Operator only knows how to deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce complexity of the Operator needing to know how to manage the lifecycles of many different versions of Quay on Kubernetes.

Operator Lifecycle Manager

The Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM). This powerful and complex piece of software takes care of the full lifecycle of Operators, including installation, configuration, automatic upgrades, UX enhancements, etc. When creating a Subscription with the default approvalStrategy: Automatic, OLM will automatically upgrade the Quay Operator whenever a new version becomes available.

Warning

When the Quay Operator is installed via Operator Lifecycle Manager it may be configured to support automatic or manual upgrades. This option is shown on the Operator Hub page for the Quay Operator during installation. It can also be found in the Quay Operator Subscription object via the approvalStrategy field. Choosing Automatic means that your Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected.

Upgrading a QuayRegistry

When the Quay Operator starts up, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:

  • If status.currentVersion is unset, reconcile as normal.

  • If status.currentVersion equals the Operator version, reconcile as normal.

  • If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator’s version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone.
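
To see which case applies to a given deployment, compare the version of the installed Operator with the version reported on the QuayRegistry (names below are placeholders):

$ oc get csv -n <operator-namespace> | grep quay-operator
$ oc get quayregistry my-registry -n <your-namespace> -o jsonpath='{.status.currentVersion}'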

Upgrading a QuayEcosystem

Upgrades are supported from previous versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry, follow these steps:

  1. Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem (a command sketch follows these steps).

  2. Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem. The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true".

  3. Once the status.registryEndpoint of the new QuayRegistry is set, access Quay and confirm all data and settings were migrated successfully.

  4. When you are confident everything worked correctly, you may delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources.
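
As a sketch, step 1 can be performed with oc label and step 2 observed with oc get (the QuayEcosystem name and namespace are placeholders):

$ oc label quayecosystem <quayecosystem-name> -n <namespace> quay-operator/migrate=true
$ oc get quayregistry <quayecosystem-name> -n <namespace> -w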

Reverting QuayEcosystem Upgrade

If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry, follow these steps to revert to using the QuayEcosystem:

  • Delete the QuayRegistry using either the UI or kubectl:

$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
  • If external access was provided using a Route, change the Route to point back to the original Service using the UI or kubectl.

Note

If your QuayEcosystem was managing the Postgres database, the upgrade process will migrate your data to a new Postgres database managed by the upgraded Operator. Your old database will not be changed or removed but Quay will no longer use it once the migration is completed. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component.

Supported QuayEcosystem Configurations for Upgrades

The Quay Operator will report errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Quay’s config.yaml.

Database

Ephemeral database not supported (volumeSize field must be set).

Redis

Nothing special needed.

External Access

Only Route access supported for automatic migration. Manual migration required for other methods.

  • LoadBalancer without custom hostname: After the QuayEcosystem is marked with the label "quay-operator/migration-complete": "true", delete the metadata.ownerReferences field from the existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app. Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service; the Quay Operator will not manage it. A command sketch follows this list.

  • LoadBalancer/NodePort/Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app. Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service.
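
A sketch of the ownerReferences removal and selector edit described in the first bullet (all names are placeholders; inspect the new Service's selector on your cluster before copying it):

# Prevent the old Service from being garbage collected with the QuayEcosystem:
$ oc patch service <old-service-name> -n <namespace> --type=json \
    -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'
# Inspect the new Service's selector, then copy it onto the old Service:
$ oc get service <QuayEcosystem-name>-quay-app -n <namespace> -o jsonpath='{.spec.selector}'
$ oc patch service <old-service-name> -n <namespace> \
    -p '{"spec": {"selector": <copied-selector>}}'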

Clair

Nothing special needed.

Object Storage

QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.

Repository Mirroring

Nothing special needed.

Advanced Concepts

Customizing the Quay Deployment

The Quay Operator takes an opinionated strategy towards deploying Quay and its dependencies; however, there are places where the Quay deployment can be customized.

Quay Application Configuration

Once deployed, the Quay application itself can be configured as normal using the config editor UI or by modifying the Secret containing the Quay configuration bundle. The Operator uses the Secret named in the spec.configBundleSecret field but does not watch this resource for changes. It is recommended that configuration changes be made to a new Secret resource and the spec.configBundleSecret field be updated to reflect the change. In the event there are issues with the new configuration, it is simple to revert the value of spec.configBundleSecret to the older Secret.
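
For example, a configuration rotation following this recommendation might look like the following (the Secret and registry names are placeholders):

$ oc create secret generic quay-config-v2 -n <namespace> --from-file config.yaml=./config.yaml
$ oc patch quayregistry my-registry -n <namespace> --type=merge \
    -p '{"spec": {"configBundleSecret": "quay-config-v2"}}'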

Customizing External Access to the Registry

When running on OpenShift, the Routes API is available and will automatically be used as a managed component. After creating the QuayRegistry, the external access point can be found in the status block of the QuayRegistry:

status:
  registryEndpoint: some-quay.my-namespace.apps.mycluster.com

The Operator creates a Service of type LoadBalancer for your registry. You can configure your DNS provider to point the SERVER_HOSTNAME to the external IP address of the service.

$ oc get services -n <namespace>
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP          PORT(S)             AGE
some-quay               ClusterIP   172.30.143.199   34.123.133.39        443/TCP,9091/TCP    23h

Using a Custom Hostname and TLS

By default, a Route will be created with the default generated hostname and a certificate/key pair will be generated for TLS. If you want to access Project Quay using a custom hostname and bring your own TLS certificate/key pair, first create a Secret which contains the following:

apiVersion: v1
kind: Secret
metadata:
  name: my-config-bundle
data:
  config.yaml: <must include SERVER_HOSTNAME field with your custom hostname>
  ssl.cert: <your TLS certificate>
  ssl.key: <your TLS key>
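
Note that values under a Secret's data field must be base64-encoded. If you prefer to build the same Secret from local files and let the CLI handle the encoding, an equivalent sketch is:

$ oc create secret generic my-config-bundle -n <namespace> \
    --from-file config.yaml=./config.yaml \
    --from-file ssl.cert=./ssl.cert \
    --from-file ssl.key=./ssl.key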

Then, create a QuayRegistry which references the created Secret:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: some-quay
spec:
  configBundleSecret: my-config-bundle

Disabling Route Component

To prevent the Operator from creating a Route, mark the component as unmanaged in the QuayRegistry:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: some-quay
spec:
  components:
    - kind: route
      managed: false

Note

Disabling the default Route means you are now responsible for creating a Route, Service, or Ingress in order to access the Quay instance, and the DNS entry you use must match the SERVER_HOSTNAME in the Quay config.
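
As a rough sketch, a manually created Route might look like the following; the Service name and target port assume the Operator's <name>-quay-app naming convention and should be verified against your cluster:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: some-quay-custom
spec:
  host: quay.example.com        # must match SERVER_HOSTNAME in the Quay config
  to:
    kind: Service
    name: some-quay-quay-app    # assumption: the Quay application Service
  port:
    targetPort: https           # assumption: verify the Service's port name
  tls:
    termination: passthrough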

Resizing Managed Storage

The Quay Operator creates default object storage using the defaults provided by RHOCS when creating a NooBaa object (50 GiB). There are two ways to extend this storage: resize an existing PVC or add more PVCs in a new storage pool.

Resize Noobaa PVC

  1. Log into the OpenShift console and select Storage → Persistent Volume Claims.

  2. Select the PersistentVolumeClaim named like noobaa-default-backing-store-noobaa-pvc-*.

  3. From the Action menu, select Expand PVC.

  4. Enter the new size of the Persistent Volume Claim and select Expand.

After a few minutes (depending on the size of the PVC), the expanded size should reflect in the PVC’s Capacity field.
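
The same expansion can be performed from the CLI by patching the PVC's storage request (the PVC name is a placeholder; the StorageClass must also allow volume expansion):

$ oc patch pvc noobaa-default-backing-store-noobaa-pvc-<suffix> -n openshift-storage \
    -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'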

Note

This feature is still considered Technology Preview and not recommended for production deployments.

Add Another Storage Pool

  1. Log into the OpenShift console and select Networking → Routes. Make sure the openshift-storage project is selected.

  2. Click on the Location field for the noobaa-mgmt Route.

  3. Log into the Noobaa Management Console.

  4. On the main dashboard, under Storage Resources, select Add Storage Resources.

  5. Select Deploy Kubernetes Pool.

  6. Enter a new pool name. Click Next.

  7. Choose the number of Pods to manage the pool and set the size per node. Click Next.

  8. Click Deploy.

After a few minutes, the additional storage pool will be added to the Noobaa resources and available for use by Project Quay.

Disabling the Horizontal Pod Autoscaler

If you wish to disable autoscaling or create your own HorizontalPodAutoscaler, simply specify the component as unmanaged in the QuayRegistry instance:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: some-quay
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
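
If you then want to run your own autoscaler, a minimal sketch targeting the Quay application Deployment might look like this; the Deployment name assumes the Operator's <name>-quay-app convention, so verify it with oc get deployments first:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: some-quay-custom-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: some-quay-quay-app   # assumption: the Quay application Deployment
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90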

Customizing Default Operator Images

Note

Using this mechanism is not supported for production Quay environments and is strongly encouraged only for development/testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Quay Operator.

In certain circumstances, it may be useful to override the default images used by the Operator. This can be done by setting one or more environment variables in the Quay Operator ClusterServiceVersion.

Environment Variables

The following environment variables are used in the Operator to override component images:

Environment Variable                Component

RELATED_IMAGE_COMPONENT_QUAY        base
RELATED_IMAGE_COMPONENT_CLAIR       clair
RELATED_IMAGE_COMPONENT_POSTGRES    postgres and clair databases
RELATED_IMAGE_COMPONENT_REDIS       redis

Note

Override images must be referenced by manifest (@sha256:), not by tag (:latest).
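
One way to obtain the digest for a given tag is to inspect the image with skopeo (assuming it is installed; the tag is a placeholder):

$ skopeo inspect docker://quay.io/projectquay/quay:<tag> | jq -r .Digest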

Applying Overrides to a Running Operator

When the Quay Operator is installed in a cluster via the Operator Lifecycle Manager (OLM), the managed component container images can be easily overridden by modifying the ClusterServiceVersion object, which is OLM’s representation of a running Operator in the cluster. Find the Quay Operator’s ClusterServiceVersion either by using a Kubernetes UI or kubectl/oc:

$ oc get clusterserviceversions -n <your-namespace>

Using the UI, oc edit, or any other method, modify the Quay ClusterServiceVersion to include the environment variables outlined above to point to the override images:

JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env

- name: RELATED_IMAGE_COMPONENT_QUAY
  value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
- name: RELATED_IMAGE_COMPONENT_CLAIR
  value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
- name: RELATED_IMAGE_COMPONENT_POSTGRES
  value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- name: RELATED_IMAGE_COMPONENT_REDIS
  value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542

Note that this is done at the Operator level, so every QuayRegistry will be deployed using these same overrides.

Additional resources

  • For more details on the Project Quay Operator, see the upstream quay-operator project.