Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise.

The Quay Operator provides a simple method to deploy and manage a Project Quay cluster. This is the preferred procedure for deploying Project Quay on OpenShift and is covered in this guide.

Note that this version of the Quay Operator has been completely rewritten and differs substantially from earlier versions. Please review this documentation carefully. If you are looking for documentation for prior versions of the Quay Operator, please check here.

Overview

Features of Project Quay include:

  • High availability

  • Geo-replication

  • Repository mirroring

  • Docker v2, schema 2 (multiarch) support

  • Continuous integration

  • Security scanning with Clair

  • Custom log rotation

  • Zero downtime garbage collection

  • 24/7 support

Project Quay provides support for:

  • Multiple authentication and access methods

  • Multiple storage backends

  • Custom certificates for Quay, Clair, and storage backends

  • Application registries

  • Different container image types

Architecture

Project Quay is made up of several core components.

  • Database: Used by Project Quay as its primary metadata storage (not for image storage).

  • Redis (key-value store): Stores live builder logs and the Project Quay tutorial.

  • Quay (container registry): Runs the quay container as a service, consisting of several components in the pod.

  • Clair: Scans container images for vulnerabilities and suggests fixes.

For supported deployments, you need to use one of the following types of storage:

  • Public cloud storage: In public cloud environments, you should use the cloud provider’s object storage, such as Amazon S3 (for AWS) or Google Cloud Storage (for Google Cloud).

  • Private cloud storage: In private clouds, an S3- or Swift-compliant object store is needed, such as Ceph RADOS or OpenStack Swift.

Warning

Do not use the "Locally mounted directory" storage engine for any production configuration. Mounted NFS volumes are not supported. Local storage is meant for Project Quay test-only installations.

Prerequisites for Project Quay on OpenShift

Here are a few things you need to know before you begin the Project Quay on OpenShift deployment:

  • OpenShift cluster: You need a privileged account on an OpenShift 4.2 or later cluster on which to deploy Project Quay. That account must have the ability to create namespaces at the cluster scope.

  • Storage: AWS cloud storage is used as an example in the following procedure. As an alternative, you can create Ceph cloud storage using steps from the Set up Ceph section of the high availability Project Quay deployment guide.

  • Services: The OpenShift cluster must have enough capacity to run the following containerized services:

    • Database: We recommend you use an enterprise-quality database for production use of Project Quay. PostgreSQL is used as an example in this document. Other options include:

      • Crunchy Data PostgreSQL Operator: Although not supported directly by Red Hat, the Crunchy Data PostgreSQL Operator is available from Crunchy Data for use with Project Quay. If you take this route, you should have a support contract with Crunchy Data and work directly with them for usage guidance or issues relating to the operator and their database.

      • If your organization already has a high-availability (HA) database, you can use that database with Project Quay. See the Project Quay Support Policy for details on support for third-party databases and other components.

    • Key-value database: Redis is used to serve live builder logs and Project Quay tutorial content to your Project Quay configuration.

    • Project Quay: The quay container provides the features to manage the Project Quay registry.

    • Clair: The clair-jwt container provides Clair scanning services for the registry.

Using the Quay Operator

This procedure:

  • Installs the Operator on OpenShift from the OperatorHub

  • Deploys a Project Quay cluster on OpenShift via the Operator

The Operator automates the entire start-up process, bypassing the need to use the Project Quay config tool. With the Operator, you can create a full Quay installation, complete with database, storage, and security scanning, with a single button click.

Prerequisites
  • An OpenShift 4.2 (or later) cluster

  • Cluster-scope admin privilege to the OpenShift cluster

Differences from Earlier Versions

As of Project Quay 3.4.0, the Operator has been completely rewritten to provide an improved out-of-the-box experience as well as support for more Day 2 operations. As a result, the new Operator is simpler to use and handles more of the Quay installation decisions for you. The key differences from earlier versions of the Operator are:

  • The default installation options produce a fully supported Quay environment ready for production use.

  • The Operator provides sensible default values for a managed database, object storage, and Clair out of the box.

  • The QuayEcosystem Custom Resource has been replaced with the QuayRegistry Custom Resource.

  • The Operator shares the same internal configuration validation logic as the Config Tool and Quay itself, ensuring its results are consistent.

  • The Operator now relies on Red Hat OpenShift Container Storage (RHOCS) to supply lightweight object storage as the default managed storage choice. It can also use an existing RHOCS installation on your OpenShift cluster if desired.

  • The Operator’s image references can be customized for testing and development scenarios as needed.

Install the Quay Operator

Deciding On a Storage Solution

If you want the Operator to manage its own object storage, you will first need to ensure RHOCS is available on your OpenShift cluster. If you already have object storage ready to be used by the Operator, skip to Installing the Operator.

To install the RHOCS Operator and configure a lightweight NooBaa (S3-compatible) object storage:

  1. Select Operators → OperatorHub, then select the OpenShift Container Storage Operator.

  2. Select Install. Accept all default options and select Install again.

  3. After a minute or so, the Operator will install and create the openshift-storage namespace. You can confirm the installation is complete when the Status column shows Succeeded.

  4. Create NooBaa object storage. Save the following YAML to a file called noobaa.yml.

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: noobaa
      namespace: openshift-storage
    spec:
      dbResources:
        requests:
          cpu: '0.1'
          memory: 1Gi
      coreResources:
        requests:
          cpu: '0.1'
          memory: 1Gi

    Then run the following:

    $ oc create -n openshift-storage -f noobaa.yml
    noobaa.noobaa.io/noobaa created
  5. After a minute or so, you should see the object storage ready for use (the PHASE column shows Ready):

    $ oc get -n openshift-storage noobaas noobaa -w
    NAME     MGMT-ENDPOINTS              S3-ENDPOINTS                IMAGE                                                                                                            PHASE   AGE
    noobaa   [https://10.0.32.3:30318]   [https://10.0.32.3:31958]   registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d   Ready   3d18h

Installing the Operator

  1. Select Operators → OperatorHub, then select the Quay Operator. If there is more than one, be sure to use the Red Hat certified Operator and not the community version.

  2. Select Install. The Operator Subscription page appears.

  3. Choose the following then select Subscribe:

    • Installation Mode: Choose either 'All namespaces' or 'A specific namespace' depending on whether you want the Operator to be available cluster-wide or only within a single namespace.

    • Update Channel: Choose the update channel (only one may be available)

    • Approval Strategy: Choose to approve automatic or manual updates

  4. Select Install.

  5. After a minute you will see the Operator installed successfully in the Installed Operators page.

Basic Quay Installation Via The Operator

The simplest installation choice allows the Operator to manage all of Quay’s dependent resources (database, redis, storage). Create the following QuayRegistry Custom Resource in a file called quay.yaml.

quay.yaml:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: skynet

Then create the CR in your project as follows:

$ oc create -n <your-namespace> -f quay.yaml

Navigate to your project page in the OpenShift console, or use the CLI to watch the deployments. You should eventually see the Deployment called skynet-quay-app completed with a single Pod running. Navigate to the Networking tab on the OpenShift console and select Routes. Click on the Location entry for the skynet-quay Route. This should bring up the Project Quay login page. You can select 'Create Account' to create a user and sign in.
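If you prefer the CLI, one way to follow the rollout (assuming the namespace used above) is to watch the Pods until they all report Running:

$ oc get pods -n <your-namespace> -w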

Managed Components

In the QuayRegistry Custom Resource, the spec.components field configures components. Each component entry contains two fields: kind, the name of the component, and managed, a boolean indicating whether the component's lifecycle is handled by the Operator. By default (when this field is omitted), all components are managed and will be autofilled upon reconciliation for visibility:

spec:
  components:
    - kind: postgres
      managed: true
    ...

Unless your QuayRegistry Custom Resource specifies otherwise, the Operator will use defaults for the following managed components (a complete example follows the list):

  • postgres Stores the registry metadata. Uses an upstream (CentOS) version of Postgres 10.

  • redis Handles Quay builder coordination and some internal logging.

  • objectstorage Stores image layer blobs. Uses the NooBaa BackingStore and BucketClass created prior to installing the Quay Operator.

  • clair Provides image vulnerability scanning.

  • horizontalpodautoscaler Adjusts the number of Quay pods depending on memory/cpu consumption.

  • mirror Configures a repository mirror worker (to support optional repository mirroring).

  • route Provides an external entrypoint to the Quay registry from outside of OpenShift.
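For reference, a sketch of a QuayRegistry with every component listed explicitly, in the form the Operator autofills, looks like this:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: skynet
spec:
  components:
    - kind: postgres
      managed: true
    - kind: redis
      managed: true
    - kind: objectstorage
      managed: true
    - kind: clair
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: mirror
      managed: true
    - kind: route
      managed: true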

Considerations For Managed Components

While the Operator will handle any required configuration and installation work needed for Project Quay to use the managed components, there are several considerations to keep in mind.

  • Database backups should be performed regularly using either the supplied tools on the Postgres image or your own backup infrastructure. The Operator does not currently ensure the Postgres database is backed up.

  • Restoring the Postgres database from a backup must be done using Postgres tools and procedures. Be aware that your Quay pods should not be running while the database restore is in progress (a minimal example of stopping the pods follows this list).

  • Database disk space is allocated automatically by the Operator at 50 GiB. This represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the database volume is currently not handled by the Operator.

  • Object storage disk space is allocated automatically by the Operator at 50 GiB. This represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the RHOCS volume is currently not handled by the Operator. See the section below on resizing managed storage for more details.

  • The Operator will deploy an OpenShift Route as the default entrypoint to the registry. If you prefer a different entrypoint (for example, Ingress or direct Service access), that configuration will need to be done manually.

If any of these considerations are unacceptable for your environment, provide the Operator with unmanaged resources or overrides as described in the following sections.
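As referenced in the restore consideration above, a minimal sketch for stopping the Quay pods during a database restore is to scale the Quay Deployment to zero. The Deployment name here assumes the <name>-quay-app pattern seen in the basic installation; note that a workload scaled to zero is ignored by the horizontal pod autoscaler:

$ oc scale deployment skynet-quay-app -n <your-namespace> --replicas=0

Scale the Deployment back up after the restore completes.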

Using Existing (Un-Managed) Components With the Operator

If you have existing components such as Postgres, Redis, or object storage that you would like to use with the Operator, first configure them within the Quay configuration bundle (config.yaml), then reference the bundle in your QuayRegistry (as a Kubernetes Secret) while indicating which components are unmanaged.

For example, to use an existing Postgres database:

  1. Create a Secret with the necessary database fields in a config.yaml:

    config.yaml:
    DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
    $ kubectl create secret generic --from-file config.yaml=./config.yaml test-config-bundle
  2. Create a QuayRegistry which marks postgres component as unmanaged and references the created Secret:

    quayregistry.yaml
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: test
    spec:
      configBundleSecret: test-config-bundle
      components:
        - kind: postgres
          managed: false

    The deployed Quay application will now use the external database.

Note

The Quay ConfigTool can also be used to create or modify an existing config bundle and simplify the process of updating the Kubernetes secret, especially for multiple changes. When Quay’s configuration is changed via the ConfigTool and sent to the Operator, the Quay deployment will be updated to reflect the new configuration.

Using Existing Object Storage With the Operator

Follow the instructions above to register the objectstorage component as unmanaged and provide the necessary configuration values for your object storage solution.

Most S3-compatible object storage solutions work well with Project Quay. If possible, use a cloud-hosted service such as Amazon S3 or Azure Blob Storage. For on-premise deployments, RHOCS or MinIO should work.
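As an illustration, a config.yaml fragment for an unmanaged Amazon S3 backend might look like the following sketch; the host, bucket, and credentials are placeholders, and the full set of storage options is covered in the Project Quay configuration documentation:

DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: <access key>
      s3_secret_key: <secret key>
      s3_bucket: <bucket name>
      storage_path: /registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default

Package this into the config bundle Secret as shown in the Postgres example above.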

Quay Operator and Upgrades

The Quay Operator follows a synchronized versioning scheme with Quay itself. This means that the version of the Quay Operator must match the version of Quay running. This not only greatly simplifies the complexity of the Operator, it also helps to ensure the Operator being run is fully tested with the version of Quay in use.

A consequence of this, however, is that once the Operator is upgraded the Quay installation that it manages must be upgraded as well. Please review Quay release notes prior to upgrading your Quay Operator to ensure that the new Quay version is desired as well.

Warning

When the Quay Operator is installed via the Operator Lifecycle Manager, it may be configured to support automatic or manual upgrades. This option is shown on the OperatorHub page for the Quay Operator during installation. It can also be found in the Quay Operator Subscription object via the approvalStrategy field. Choosing Automatic means that your Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, select the Manual approval strategy.

How the Quay Operator Performs Upgrades

When the Quay Operator performs an upgrade, the exact actions it takes depend on which version it is upgrading from. This is determined by whether you are upgrading from a QuayEcosystem resource (Project Quay 3.3 and prior) or a QuayRegistry resource (Project Quay 3.4 and higher).

Upgrading a QuayRegistry

When the Quay Operator starts up, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, it consults the status.currentVersion field and takes the following actions:

  • If status.currentVersion is empty or the same as the Operator's version:

    • do nothing

  • If status.currentVersion is not the same as the Operator's version:

    • if Quay can be upgraded:

      • Perform Quay upgrade. Set status.currentVersion to Operator version

    • if Quay cannot be upgraded:

      • Set an error status in QuayRegistry and do nothing

Upgrading a QuayEcosystem

Earlier versions of the Quay Operator used the QuayEcosystem resource. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry, follow these steps:

  1. Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem (see the example command after these steps).

  2. Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem.

  3. Once the status.registryEndpoint of the new QuayRegistry is set, access Quay and confirm all data and settings were migrated successfully.

  4. When you are confident everything worked correctly, you may delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources.
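The label from step 1 can also be applied from the CLI; for example, for a QuayEcosystem named example (the name is illustrative):

$ oc label quayecosystem example quay-operator/migrate=true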

Note

If your QuayEcosystem was managing the Project Quay Postgres database, the upgrade process will migrate your data to a new Postgres database managed by the upgraded Operator. Your old database will not be changed or removed but Quay will no longer use it once the migration is completed. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component.

Customizing the Quay Deployment

The Quay Operator takes an opinionated strategy towards deploying Quay and Clair; however, there are places where the Quay installation can be customized.

Customizing Access to the Registry

By default, the Operator creates a Service of type: LoadBalancer for your registry. You can configure your DNS provider to point the SERVER_HOSTNAME to the external IP address of the Service.

$ oc get services -n <namespace>
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP          PORT(S)             AGE
some-quay               LoadBalancer   172.30.143.199   34.123.133.39        443/TCP,9091/TCP    23h

When running on OpenShift, the Routes API is available and will automatically be used as a managed component. After creating the QuayRegistry, the external access point can be found in the status block of the QuayRegistry:

status:
  registryEndpoint: some-quay.my-namespace.apps.mycluster.com
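The same endpoint can be read from the CLI; for example, for the QuayRegistry named some-quay:

$ oc get quayregistry some-quay -n <namespace> -o jsonpath='{.status.registryEndpoint}'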

Using a Custom Hostname and TLS

By default, a Route will be created with the default generated hostname and a certificate/key pair will be generated for TLS. If you want to access Project Quay using a custom hostname and bring your own TLS certificate/key pair, first create a Secret which contains the following:

apiVersion: v1
kind: Secret
metadata:
  name: my-config-bundle
data:
  config.yaml: <must include SERVER_HOSTNAME field with your custom hostname>
  ssl.cert: <your TLS certificate>
  ssl.key: <your TLS key>
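Rather than hand-encoding the Secret's data fields, you can build the same Secret from files; this sketch assumes you saved your configuration as config.yaml and your certificate pair as ssl.cert and ssl.key in the current directory (oc handles the base64 encoding that data requires):

$ oc create secret generic my-config-bundle \
    --from-file=config.yaml=./config.yaml \
    --from-file=ssl.cert=./ssl.cert \
    --from-file=ssl.key=./ssl.key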

Then, create a QuayRegistry which references the created Secret:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: some-quay
spec:
  configBundleSecret: my-config-bundle

Disabling the Default Route to the Registry

To instruct the Operator not to deploy a Route with your registry, add the following to your QuayRegistry Custom Resource.

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: some-quay
spec:
  components:
    - kind: route
      managed: false
Note

Disabling the default Route means you are now responsible for creating a Route, Service, or Ingress in order to access the Quay instance and that whatever DNS you use must match the SERVER_HOSTNAME in the Quay config.
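For illustration, a manually created Route might look like the following sketch. The Service name and port are assumptions based on the naming seen earlier (for example, skynet-quay-app); verify them with oc get services in your namespace before applying:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: some-quay
spec:
  host: quay.example.com          # must match SERVER_HOSTNAME in config.yaml
  to:
    kind: Service
    name: some-quay-quay-app      # hypothetical; check the actual Service name
  port:
    targetPort: 443               # hypothetical; check the Service's port list
  tls:
    termination: passthrough      # Quay serves its own TLS when given a cert/key pair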

Resizing Managed Storage

The Quay Operator creates default object storage using the defaults provided by RHOCS when creating a NooBaa object (50 GiB). There are two ways to extend this storage: you can resize an existing PVC or add more PVCs to a new storage pool.

Resize Noobaa PVC
  1. Log into the OpenShift console and select Storage → Persistent Volume Claims.

  2. Select the PersistentVolumeClaim named like noobaa-default-backing-store-noobaa-pvc-*.

  3. From the Action menu, select Expand PVC.

  4. Enter the new size of the Persistent Volume Claim and select Expand.

After a few minutes (depending on the size of the PVC), the expanded size should be reflected in the PVC’s Capacity field.
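The same expansion can be requested from the CLI by patching the claim, provided the storage class allows volume expansion. The claim name below is a placeholder for the noobaa-default-backing-store-noobaa-pvc-* name from step 2, and 100Gi is an example target size:

$ oc patch pvc noobaa-default-backing-store-noobaa-pvc-<id> -n openshift-storage \
    --type merge -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'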

Note

This feature is still considered Technology Preview and not recommended for production deployments.

Add Another Storage Pool
  1. Log into the OpenShift console and select Networking → Routes. Make sure the openshift-storage project is selected.

  2. Click on the Location field for the noobaa-mgmt Route.

  3. Log into the Noobaa Management Console.

  4. On the main dashboard, under Storage Resources, select Add Storage Resources.

  5. Select Deploy Kubernetes Pool.

  6. Enter a new pool name. Click Next.

  7. Choose the number of Pods to manage the pool and set the size per node. Click Next.

  8. Click Deploy.

After a few minutes, the additional storage pool will be added to the Noobaa resources and available for use by Project Quay.

Disabling the Horizontal Pod Autoscaler

To instruct the Operator not to deploy a HorizontalPodAutoscaler with your registry, add the following to your QuayRegistry Custom Resource.

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: some-quay
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false

Customizing Default Operator Images

Note

Using this mechanism is not supported for production Quay environments and is intended only for development and testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Quay Operator.

In certain circumstances, it may be useful to override the default images used by the Operator. This can be done by setting one or more environment variables in the Quay Operator ClusterServiceVersion.

Environment Variables

The following environment variables are used in the Operator to override component images:

Environment Variable                Component

RELATED_IMAGE_COMPONENT_QUAY        base
RELATED_IMAGE_COMPONENT_CLAIR       clair
RELATED_IMAGE_COMPONENT_POSTGRES    postgres and clair databases
RELATED_IMAGE_COMPONENT_REDIS       redis

Note

Override images must be referenced by manifest digest (@sha256:), not by tag (:latest).
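If you only know an image by tag, one way to resolve its digest is with skopeo (the image reference here is illustrative):

$ skopeo inspect docker://quay.io/projectquay/quay:latest | jq -r '.Digest'

Combine the printed sha256 value with the repository name to form the @sha256: reference used below.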

Applying Overrides to a Running Operator

When the Quay Operator is installed in a cluster via the Operator Lifecycle Manager (OLM), the managed component container images can be easily overridden by modifying the ClusterServiceVersion object, which is OLM’s representation of a running Operator in the cluster. Find the Quay Operator’s ClusterServiceVersion either by using a Kubernetes UI or kubectl/oc:

$ oc get clusterserviceversions -n <your-namespace>

Using the UI, oc edit, or any other method, modify the Quay ClusterServiceVersion to include the environment variables outlined above to point to the override images:

JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env

- name: RELATED_IMAGE_COMPONENT_QUAY
  value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
- name: RELATED_IMAGE_COMPONENT_CLAIR
  value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
- name: RELATED_IMAGE_COMPONENT_POSTGRES
  value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- name: RELATED_IMAGE_COMPONENT_REDIS
  value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542

Note that this is done at the Operator level, so every QuayRegistry will be deployed using these same overrides.

Additional resources

  • For more details on the Project Quay Operator, see the upstream quay-operator project.