Warning

This is a draft version of the Project Quay 3.6 documentation and is NOT finalized or approved

Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise.

The Project Quay Operator provides a simple method to deploy and manage Project Quay on an OpenShift cluster.

As of Project Quay 3.4.0, the Operator has been completely rewritten to provide an improved out-of-the-box experience as well as support for more Day 2 operations. As a result, the new Operator is simpler to use and more opinionated. The key differences from earlier versions of the Operator are:

  • The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource

  • The default installation options produce a fully supported Quay environment, with all managed dependencies (database, caches, object storage, and so on) supported for production use (some components may not be highly available)

  • A new robust validation library for Quay’s configuration which is shared by the Quay application and config tool for consistency

  • Object storage can now be provided by the Operator using the ObjectBucketClaim Kubernetes API (for example, the NooBaa Operator from OperatorHub.io can be used to provide an implementation of that API)

  • Customization of the container images used by deployed pods for testing and development scenarios

Introduction to the Project Quay Operator

This document outlines the steps for configuring, deploying, managing and upgrading Project Quay on OpenShift using the Project Quay Operator.

It shows you how to:

  • Install the Project Quay Operator

  • Configure object storage, either managed or unmanaged

  • Configure other unmanaged components, if required, including database, Redis, routes, TLS, etc.

  • Deploy the Project Quay registry on OpenShift using the Operator

  • Use advanced features supported by the Operator

  • Upgrade the registry by upgrading the Operator

QuayRegistry API

The Quay Operator provides the QuayRegistry custom resource API to declaratively manage Quay container registries on the cluster. Use either the OpenShift UI or a command-line tool to interact with this API.

  • Creating a QuayRegistry will result in the Operator deploying and configuring all necessary resources needed to run Quay on the cluster.

  • Editing a QuayRegistry will result in the Operator reconciling the changes and creating/updating/deleting objects to match the desired configuration.

  • Deleting a QuayRegistry will result in garbage collection of all previously created resources and the Quay container registry will no longer be available.
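
For example, once a registry exists you can list, edit, or delete it with standard commands; the registry name example-registry and the namespace placeholder are simply the values used in the examples later in this guide:

$ oc -n <your-namespace> get quayregistry
$ oc -n <your-namespace> edit quayregistry example-registry
$ oc -n <your-namespace> delete quayregistry example-registry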

The QuayRegistry API is fairly simple, and the fields are outlined in the following sections.

Quay components

Quay is a powerful container registry platform and as a result, has a significant number of dependencies. These include a database, object storage, Redis, and others. The Quay Operator manages an opinionated deployment of Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry API.

In the QuayRegistry custom resource, the spec.components field configures components. Each component contains two fields: kind, the name of the component, and managed, a boolean indicating whether the component lifecycle is handled by the Operator. By default (when this field is omitted), all components are managed and will be auto-filled upon reconciliation for visibility:

spec:
  components:
    - managed: true
      kind: clair
    - managed: true
      kind: postgres
    - managed: true
      kind: objectstorage
    - managed: true
      kind: redis
    - managed: true
      kind: horizontalpodautoscaler
    - managed: true
      kind: route
    - managed: true
      kind: mirror
    - managed: true
      kind: monitoring
    - managed: true
      kind: tls

Using managed components

Unless your QuayRegistry custom resource specifies otherwise, the Operator will use defaults for the following managed components:

  • postgres: For storing the registry metadata, uses an upstream (CentOS) version of Postgres 10

  • redis: Handles Quay builder coordination and some internal logging

  • objectstorage: For storing image layer blobs, utilizes the ObjectBucketClaim Kubernetes API which is provided by Noobaa/RHOCS

  • clair: Provides image vulnerability scanning

  • horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption

  • mirror: Configures a repository mirror worker (to support optional repository mirroring)

  • route: Provides an external entrypoint to the Quay registry from outside OpenShift

  • monitoring: Features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods

  • tls: Configures whether Project Quay or OpenShift handles TLS

The Operator will handle any required configuration and installation work needed for Project Quay to use the managed components. If the opinionated deployment performed by the Quay Operator is unsuitable for your environment, you can provide the Operator with unmanaged resources (overrides) as described in the following sections.

Using unmanaged components for dependencies

If you have existing components such as Postgres, Redis or object storage that you would like to use with Quay, you first configure them within the Quay configuration bundle (config.yaml) and then reference the bundle in your QuayRegistry (as a Kubernetes Secret) while indicating which components are unmanaged.

Note

The Quay config editor can also be used to create or modify an existing config bundle and simplifies the process of updating the Kubernetes Secret, especially for multiple changes. When Quay’s configuration is changed via the config editor and sent to the Operator, the Quay deployment will be updated to reflect the new configuration.

Config bundle secret

The spec.configBundleSecret field is a reference to the metadata.name of a Secret in the same namespace as the QuayRegistry. This Secret must contain a config.yaml key/value pair. This config.yaml file is a Quay config YAML file. This field is optional, and will be auto-filled by the Operator if not provided. If provided, it serves as the base set of config fields which are later merged with other fields from any managed components to form a final output Secret, which is then mounted into the Quay application pods.
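
For illustration, a config bundle Secret of this shape might look like the following; the Secret name is arbitrary (it only needs to match spec.configBundleSecret), and the SERVER_HOSTNAME value is a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
stringData:
  config.yaml: |
    SERVER_HOSTNAME: quay-server.example.com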

Prerequisites for Project Quay on OpenShift

Before you begin the deployment of Project Quay Operator on OpenShift, you should consider the following.

OpenShift cluster

You need a privileged account on an OpenShift 4.5 or later cluster on which to deploy the Project Quay Operator. That account must have the ability to create namespaces at the cluster scope.

Resource Requirements

Each Project Quay application pod has the following resource requirements:

  • 8Gi of memory

  • 2000 millicores of CPU.

The Project Quay Operator will create at least one application pod per Project Quay deployment it manages. Ensure your OpenShift cluster has sufficient compute resources for these requirements.

Object Storage

By default, the Project Quay Operator uses the ObjectBucketClaim Kubernetes API to provision object storage. Consuming this API decouples the Operator from any vendor-specific implementation. OpenShift Container Storage provides this API via its NooBaa component, which will be used in this example.

Project Quay can be manually configured to use any of the following supported cloud storage options:

  • Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Project Quay)

  • Azure Blob Storage

  • Google Cloud Storage

  • Ceph Object Gateway (RADOS)

  • OpenStack Swift

  • CloudFront + S3

Installing the Quay Operator from OperatorHub

  1. Using the OpenShift console, select Operators → OperatorHub, then select the Quay Operator. If there is more than one, be sure to use the Red Hat certified Operator and not the community version.

  2. Select Install. The Operator Subscription page appears.

  3. Choose the following then select Subscribe:

    • Installation Mode: Choose either 'All namespaces' or 'A specific namespace' depending on whether you want the Operator to be available cluster-wide or only within a single namespace (all-namespaces recommended)

    • Update Channel: Choose the update channel (only one may be available)

    • Approval Strategy: Choose to approve automatic or manual updates

  4. Select Install.

  5. After a minute you will see the Operator installed successfully in the Installed Operators page.

Configuring components before deployment

You can allow the Operator to manage all of the Project Quay components when deploying on OpenShift; this is the default configuration. Alternatively, you can manage one or more components externally yourself, where you want more control over the setup, and then allow the Operator to manage the remaining components.

The standard pattern for configuring unmanaged components is:

  1. Create a config.yaml configuration file with the appropriate settings

  2. Create a Secret using the configuration file

    $ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
  3. Create a QuayRegistry YAML file quayregistry.yaml, identifying the unmanaged components and also referencing the created Secret, for example:

    quayregistry.yaml
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: objectstorage
          managed: false
  4. Deploy the registry using the YAML file

    oc create -f quayregistry.yaml

Configuring object storage

You need to configure object storage before installing Project Quay, irrespective of whether you are allowing the Operator to manage the storage or managing it yourself.

Unmanaged storage

The configuration guide for Project Quay provides details for setting up object storage. Some examples are provided below for convenience.

AWS S3 storage
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - s3Storage
Google cloud storage
DISTRIBUTED_STORAGE_CONFIG:
    default:
        - GoogleCloudStorage
        - access_key: GOOGQIMFB3ABCDEFGHIJKLMN
          bucket_name: quay-bucket
          secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
          storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - default
NooBaa unmanaged storage
  1. Create a NooBaa Object Bucket Claim in the console at Storage → Object Bucket Claims.

  2. Retrieve the Object Bucket Claim Data details including the Access Key, Bucket Name, Endpoint (hostname) and Secret Key.

  3. Create a config.yaml configuration file, using the information for the Object Bucket Claim:

    DISTRIBUTED_STORAGE_CONFIG:
      default:
        - RHOCSStorage
        - access_key: WmrXtSGk8B3nABCDEFGH
          bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
          hostname: s3.openshift-storage.svc
          is_secure: true
          port: "443"
          secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
          storage_path: /datastorage/registry
    DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
    DISTRIBUTED_STORAGE_PREFERENCE:
      - default

Managed storage

If you want the Operator to manage object storage for Quay, your cluster needs to be capable of providing it via the ObjectBucketClaim API. There are multiple implementations of this API available, for instance, NooBaa in combination with Kubernetes PersistentVolumes or scalable storage backends like Ceph. Refer to the NooBaa documentation for more details on how to deploy this component.
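
A quick way to check whether your cluster already offers this API is to look for the ObjectBucketClaim custom resource definition; the CRD name below assumes the standard objectbucket.io API group used by NooBaa:

$ oc get crd objectbucketclaims.objectbucket.io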

Note

Object storage disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the RHOCS volume is currently not handled by the Operator. See the section below on resizing managed storage for more details.

Configuring the database

Using an existing Postgres database

  1. Create a configuration file config.yaml with the necessary database fields:

    config.yaml:
    DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
  2. Create a Secret using the configuration file:

    $ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
  3. Create a QuayRegistry YAML file quayregistry.yaml which marks postgres component as unmanaged and references the created Secret:

    quayregistry.yaml
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: test
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: postgres
          managed: false
  4. Deploy the registry
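
    For example, reusing the command shown in the earlier object storage example:

    oc create -f quayregistry.yaml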

Database configuration

You configure the connection to the database using the required DB_URI field and optional connection arguments in the DB_CONNECTION_ARGS structure. Some key-value pairs defined under DB_CONNECTION_ARGS are generic while others are database-specific. In particular, SSL configuration depends on the database you are deploying, and examples for PostgreSQL and MySQL are given below.

Database URI
Table 1. Database URI
Field Type Description

DB_URI
(Required)

String

The URI for accessing the database, including any credentials

Example:

postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
Database connection arguments
Table 2. Database connection arguments
Field Type Description

DB_CONNECTION_ARGS

Object

Optional connection arguments for the database, such as timeouts and SSL

   .autorollback

Boolean

Whether to use auto-rollback connections
 
Should ALWAYS be true

   .threadlocals

Boolean

Whether to use thread-local connections
 
Should ALWAYS be true
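
Putting the URI and connection arguments together, the database section of config.yaml might look like the following sketch; the credentials and hostname are the illustrative values from Table 1:

DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true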

PostgreSQL SSL connection arguments

A sample PostgreSQL SSL configuration is given below:

DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert

The sslmode option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:

  • disable: only try a non-SSL connection

  • allow: first try a non-SSL connection; if that fails, try an SSL connection

  • prefer: (default) first try an SSL connection; if that fails, try a non-SSL connection

  • require: only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified

  • verify-ca: only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)

  • verify-full: only try an SSL connection, verify that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate

More information on the valid arguments for PostgreSQL is available at https://www.postgresql.org/docs/current/libpq-connect.html.

MySQL SSL connection arguments

A sample MySQL SSL configuration follows:

DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert

Information on the valid connection arguments for MySQL is available at https://dev.mysql.com/doc/refman/8.0/en/connecting-using-uri-or-key-value-pairs.html.

Using the managed PostgreSQL

  • Database backups should be performed regularly using either the supplied tools on the Postgres image or your own backup infrastructure (a minimal example is sketched after this list). The Operator does not currently ensure the Postgres database is backed up.

  • Restoring the Postgres database from a backup must be done using Postgres tools and procedures. Be aware that your Quay Pods should not be running while the database restore is in progress.

  • Database disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the database volume is currently not handled by the Operator.
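
As a minimal sketch of a manual backup, assuming the managed database Deployment is named example-registry-quay-database (as in the pod listings later in this guide), and substituting your own database name and user:

$ oc -n quay-enterprise exec deployment/example-registry-quay-database -- pg_dump -U <database-user> <database-name> > quay-db-backup.sql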

Configuring TLS and routes

Multiple permutations are possible when configuring TLS and routes, but the following rules apply:

  • If TLS is managed, then route must also be managed

  • In general, if TLS is unmanaged then you must supply certs, either with the config tool or directly in the config bundle

  • However, it is possible to have both TLS and route unmanaged and not supply certs.

The following table outlines the valid options:

Table 3. Valid configuration options for TLS and routes

Option: My own load balancer handles TLS
  Route: Managed
  TLS: Managed
  Certs provided: No
  Result: Edge Route with default wildcard cert

Option: Project Quay handles TLS
  Route: Managed
  TLS: Unmanaged
  Certs provided: Yes
  Result: Passthrough route with certs mounted inside the pod

Option: Project Quay handles TLS
  Route: Unmanaged
  TLS: Unmanaged
  Certs provided: Yes
  Result: Certificates are set inside the quay pod but route must be created manually

Option: None (Not for production)
  Route: Unmanaged
  TLS: Unmanaged
  Certs provided: No
  Result: Sets a passthrough route, allows HTTP traffic directly from the route and into the Pod
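
For example, to configure the second row of the table (Project Quay handles TLS, with a managed passthrough route), you could mark only the tls component as unmanaged and include your certificate and key in the config bundle Secret. The ssl.cert and ssl.key file names used here are an assumption based on the standalone Quay convention:

$ oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: tls
      managed: false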

Configuring other components

  • Monitoring

  • Redis TODO

  • Clair TODO

  • Mirror TODO

  • HPA

Using external Redis

If you wish to use an external Redis database, set the component as unmanaged in the QuayRegistry instance:

  1. Create a configuration file config.yaml with the necessary redis fields:

    BUILDLOGS_REDIS:
        host: quay-server.example.com
        password: strongpassword
        port: 6379
    
    USER_EVENTS_REDIS:
        host: quay-server.example.com
        password: strongpassword
        port: 6379
  2. Create a Secret using the configuration file

    $ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
  3. Create a QuayRegistry YAML file quayregistry.yaml which marks redis component as unmanaged and references the created Secret:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: redis
          managed: false
  4. Deploy the registry

Redis configuration fields

Build logs
Table 4. Build logs configuration
Field Type Description

BUILDLOGS_REDIS
(Required)

Object

Redis connection details for build logs caching

   .host
   (Required)

String

The hostname at which Redis is accessible
 
Example:
quay-server.example.com

   .port
   (Required)

Number

The port at which Redis is accessible
 
Example:
6379

   .password

String

The password for connecting to the Redis instance
 
Example:
strongpassword

User events
Table 5. User events config
Field Type Description

USER_EVENTS_REDIS
(Required)

Object

Redis connection details for user event handling

   .host
   (Required)

String

The hostname at which Redis is accessible
 
Example:
quay-server.example.com

   .port
   (Required)

Number

The port at which Redis is accessible
 
Example:
6379

   .password

String

The password for connecting to the Redis instance
 
Example:
strongpassword

Example redis configuration
BUILDLOGS_REDIS:
    host: quay-server.example.com
    password: strongpassword
    port: 6379

USER_EVENTS_REDIS:
    host: quay-server.example.com
    password: strongpassword
    port: 6379

Disabling the Horizontal Pod Autoscaler

Because the HPA component is managed by default, the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Quay via the Operator, or during rescheduling events.

If you wish to disable autoscaling or create your own HorizontalPodAutoscaler, simply specify the component as unmanaged in the QuayRegistry instance:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
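
If you instead create your own HorizontalPodAutoscaler, a minimal sketch might look like the following; it assumes the Quay application Deployment is named example-registry-quay-app in the quay-enterprise namespace (matching the pod names shown later in this guide), and the replica bounds and CPU target are purely illustrative:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-registry-quay-app
  minReplicas: 2
  maxReplicas: 8
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90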

Deploying Quay using the Quay Operator

Creating a Quay Registry

The default configuration tells the Operator to manage all of Quay’s dependencies (database, Redis, object storage, etc).

OpenShift Console

  1. Select Operators → Installed Operators, then select the Quay Operator to navigate to the Operator detail view.

  2. Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'.

  3. Optionally change the 'Name' of the QuayRegistry. This will affect the hostname of the registry. All other fields have been populated with defaults.

  4. Click 'Create' to submit the QuayRegistry to be deployed by the Quay Operator.

  5. You should be redirected to the QuayRegistry list view. Click on the QuayRegistry you just created to see the detail view.

  6. Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.

Command Line

The same result can be achieved using the CLI.

  1. Create the following QuayRegistry custom resource in a file called quay.yaml.

    quay.yaml:
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
  2. Create the QuayRegistry in your namespace:

    $ oc create -n <your-namespace> -f quay.yaml
  3. Wait until the status.registryEndpoint is populated.

    $ oc get -n <your-namespace> quayregistry example-registry -o jsonpath="{.status.registryEndpoint}" -w
  4. Once the status.registryEndpoint has a value, navigate to it using your web browser to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.

Monitoring and debugging the deployment process

Project Quay 3.6 provides new functionality to troubleshoot problems during the deployment phase. The status in the QuayRegistry object can help you monitor the health of the components during the deployment and help you debug any problems that may arise:

$ oc get quayregistry -n quay-enterprise -o yaml

Immediately after deployment, the QuayRegistry object will show the basic configuration:

apiVersion: v1
items:
- apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    creationTimestamp: "2021-09-14T10:51:22Z"
    generation: 3
    name: example-registry
    namespace: quay-enterprise
    resourceVersion: "50147"
    selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry
    uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e
  spec:
    components:
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    configBundleSecret: example-registry-config-bundle-kt55s
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Use the oc get pods command to view the current state of the deployed components:

$ oc get pods -n quay-enterprise

NAME                                                   READY   STATUS              RESTARTS   AGE
example-registry-clair-app-86554c6b49-ds7bl            0/1     ContainerCreating   0          2s
example-registry-clair-app-86554c6b49-hxp5s            0/1     Running             1          17s
example-registry-clair-postgres-68d8857899-lbc5n       0/1     ContainerCreating   0          17s
example-registry-quay-app-upgrade-h2v7h                0/1     ContainerCreating   0          9s
example-registry-quay-config-editor-5f646cbcb7-lbnc2   0/1     ContainerCreating   0          17s
example-registry-quay-database-66f495c9bc-wqsjf        0/1     ContainerCreating   0          17s
example-registry-quay-mirror-854c88457b-d845g          0/1     Init:0/1            0          2s
example-registry-quay-mirror-854c88457b-fghxv          0/1     Init:0/1            0          17s
example-registry-quay-postgres-init-bktdt              0/1     Terminating         0          17s
example-registry-quay-redis-f9b9d44bf-4htpz            0/1     ContainerCreating   0          17s

While the deployment is in progress, the QuayRegistry object will show the current status. In this instance, database migrations are taking place, and other components are waiting until this completes.

  status:
    conditions:
    - lastTransitionTime: "2021-09-14T10:52:04Z"
      lastUpdateTime: "2021-09-14T10:52:04Z"
      message: all objects created/updated successfully
      reason: ComponentsCreationSuccess
      status: "False"
      type: RolloutBlocked
    - lastTransitionTime: "2021-09-14T10:52:05Z"
      lastUpdateTime: "2021-09-14T10:52:05Z"
      message: running database migrations
      reason: MigrationsInProgress
      status: "False"
      type: Available
    configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-btbkcg8dc9
    configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
    lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC
    unhealthyComponents:
      clair:
      - lastTransitionTime: "2021-09-14T10:51:32Z"
        lastUpdateTime: "2021-09-14T10:51:32Z"
        message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.'
        reason: MinimumReplicasUnavailable
        status: "False"
        type: Available
      - lastTransitionTime: "2021-09-14T10:51:32Z"
        lastUpdateTime: "2021-09-14T10:51:32Z"
        message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.'
        reason: MinimumReplicasUnavailable
        status: "False"
        type: Available
      mirror:
      - lastTransitionTime: "2021-09-14T10:51:32Z"
        lastUpdateTime: "2021-09-14T10:51:32Z"
        message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.'
        reason: MinimumReplicasUnavailable
        status: "False"
        type: Available

When the deployment process finishes successfully, the status in the QuayRegistry object shows no unhealthy components:

  status:
    conditions:
    - lastTransitionTime: "2021-09-14T10:52:36Z"
      lastUpdateTime: "2021-09-14T10:52:36Z"
      message: all registry component healthchecks passing
      reason: HealthChecksPassing
      status: "True"
      type: Available
    - lastTransitionTime: "2021-09-14T10:52:46Z"
      lastUpdateTime: "2021-09-14T10:52:46Z"
      message: all objects created/updated successfully
      reason: ComponentsCreationSuccess
      status: "False"
      type: RolloutBlocked
    configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-hg7gg7h57m
    configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
    currentVersion: 3.6.0
    lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC
    registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org
    unhealthyComponents: {}

Viewing created components

Use the oc get pods command to view the deployed components:

$ oc get pods -n quay-enterprise

NAME                                                   READY   STATUS      RESTARTS   AGE
example-registry-clair-app-5ffc9f77d6-jwr9s            1/1     Running     0          3m42s
example-registry-clair-app-5ffc9f77d6-wgp7d            1/1     Running     0          3m41s
example-registry-clair-postgres-54956d6d9c-rgs8l       1/1     Running     0          3m5s
example-registry-quay-app-79c6b86c7b-8qnr2             1/1     Running     4          3m42s
example-registry-quay-app-79c6b86c7b-xk85f             1/1     Running     4          3m41s
example-registry-quay-app-upgrade-5kl5r                0/1     Completed   4          3m50s
example-registry-quay-config-editor-597b47c995-svqrl   1/1     Running     0          3m42s
example-registry-quay-database-b466fc4d7-tfrnx         1/1     Running     2          3m42s
example-registry-quay-mirror-6d9bd78756-6lj6p          1/1     Running     0          2m58s
example-registry-quay-mirror-6d9bd78756-bv6gq          1/1     Running     0          2m58s
example-registry-quay-postgres-init-dzbmx              0/1     Completed   0          3m43s
example-registry-quay-redis-8bd67b647-skgqx            1/1     Running     0          3m42s

A default deployment shows the following running pods:

  • Two pods for the Quay application itself (example-registry-quay-app-*)

  • One Redis pod for Quay logging (example-registry-quay-redis-*)

  • One database pod for PostgreSQL used by Quay for metadata storage (example-registry-quay-database-*)

  • One pod for the Quay config editor (example-registry-quay-config-editor-*)

  • Two Quay mirroring pods (example-registry-quay-mirror-*)

  • Two pods for the Clair application (example-registry-clair-app-*)

  • One PostgreSQL pod for Clair (example-registry-clair-postgres-*)

Because the HPA component is managed by default, the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Quay via the Operator, or during rescheduling events.

Customizing Quay after deployment

The Quay Operator takes an opinionated strategy towards deploying Quay and its dependencies, however there are places where the Quay deployment can be customized.

Quay Application Configuration

Once deployed, the Quay application itself can be configured as normal using the config editor UI or by modifying the Secret containing the Quay configuration bundle. The Operator uses the Secret named in the spec.configBundleSecret field but does not watch this resource for changes. It is recommended that configuration changes be made to a new Secret resource and the spec.configBundleSecret field be updated to reflect the change. In the event there are issues with the new configuration, it is simple to revert the value of spec.configBundleSecret to the older Secret.
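
For example, a sketch of this workflow from the command line, assuming the registry and namespace from the earlier examples and a hypothetical new Secret named config-bundle-secret-v2:

$ oc -n quay-enterprise create secret generic config-bundle-secret-v2 --from-file config.yaml=./config.yaml
$ oc -n quay-enterprise patch quayregistry example-registry --type=merge -p '{"spec":{"configBundleSecret":"config-bundle-secret-v2"}}'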

Config editor details

In the Details section of the QuayRegistry screen, the endpoint for the config editor is available, along with a link to the secret containing the credentials for logging into the config editor:

Config editor details

Retrieving the config editor credentials

  1. Click on the link for the config editor secret:

    Config editor secret

  2. In the Data section of the Secret details screen, click Reveal values to see the credentials for logging in to the config editor:

    Config editor secret reveal

Logging in to the config editor

Browse to the config editor endpoint and then enter the username, typically quayconfig, and the corresponding password to access the config tool:

Config editor user interface
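
If you prefer the command line, the same credentials can be read from the Secret referenced in the QuayRegistry status; the namespace and registry name below are the ones used in the earlier examples:

$ oc -n quay-enterprise get quayregistry example-registry -o jsonpath="{.status.configEditorCredentialsSecret}"
$ oc -n quay-enterprise extract secret/<config-editor-credentials-secret> --to=-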

Reconfiguring Quay on OpenShift using config tool UI

Before making a change to the Quay configuration, list the pods that are currently running:

$ oc get pods

Updating configuration

In this example of updating the configuration, a superuser is added via the config editor tool:

  1. Add a super user to Quay:

    Add super user

  2. Validate the new configuration and the apply the changes by pressing the Reconfigure Quay button:

    Reconfigure

  3. The config tool notifies you that the change has been submitted to Quay:

    Reconfigured

Monitoring reconfiguration

  1. Use the oc get pods command to watch Quay reconciling. The existing pods are terminated and new pods are created using the updated configuration:

    $ oc get pods -n quay-enterprise
    
    NAME                                                   READY   STATUS              RESTARTS   AGE
    example-registry-clair-app-5f896c5cb7-2t8sl            1/1     Running             0          1d
    example-registry-clair-app-5f896c5cb7-826sg            1/1     Terminating         0          1d
    example-registry-clair-app-666d4dd49-xblsz             0/1     ContainerCreating   0          0s
    example-registry-clair-postgres-54956d6d9c-g9q2l       1/1     Terminating         0          1d
    example-registry-quay-app-64b8db6fb8-8brtg             0/1     ContainerCreating   0          0s
    example-registry-quay-app-6984d7fdc6-55rft             1/1     Terminating         0          1d
    example-registry-quay-app-6984d7fdc6-zh77k             1/1     Running             0          1d
    example-registry-quay-app-upgrade-p8fzp                0/1     ContainerCreating   0          7s
    example-registry-quay-config-editor-5f79d55b74-ll6k8   1/1     Running             0          1d
    example-registry-quay-config-editor-85cc6b4c48-m94tf   0/1     ContainerCreating   0          0s
    example-registry-quay-database-56b9cddb47-bnksb        0/1     ContainerCreating   0          0s
    example-registry-quay-database-78bf6868c9-mxlvk        1/1     Running             0          1d
    example-registry-quay-mirror-84bd968858-n8jk7          1/1     Terminating         0          1d
    example-registry-quay-mirror-84bd968858-tmswk          1/1     Terminating         0          1d
    example-registry-quay-postgres-init-qbkn2              0/1     ContainerCreating   0          0s
    example-registry-quay-redis-8bd67b647-jwpg9            1/1     Running             0          1d
  2. You can also monitor the health of the components during reconciliation:

    $ oc get quayregistry -n quay-enterprise -o yaml
    
    apiVersion: v1
    items:
    - apiVersion: quay.redhat.com/v1
      kind: QuayRegistry
    ...
        unhealthyComponents:
          base:
          - lastTransitionTime: "2021-09-13T14:52:23Z"
            lastUpdateTime: "2021-09-13T14:52:23Z"
            message: 'Deployment example-registry-quay-app: Deployment does not have minimum availability.'
            reason: MinimumReplicasUnavailable
            status: "False"
            type: Available
          clair:
          - lastTransitionTime: "2021-09-13T14:52:23Z"
            lastUpdateTime: "2021-09-13T14:52:23Z"
            message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.'
            reason: MinimumReplicasUnavailable
            status: "False"
            type: Available
          mirror:
          - lastTransitionTime: "2021-09-13T14:52:16Z"
            lastUpdateTime: "2021-09-13T14:52:16Z"
            message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.'
            reason: MinimumReplicasUnavailable
            status: "False"
            type: Available
    ...
  3. Once Quay has reconciled the changes, the pods should show as Running:

    $ oc get pods
    
    NAME                                                   READY   STATUS             RESTARTS   AGE
    example-registry-clair-app-666d4dd49-xblsz             0/1     Running            0          89s
    example-registry-clair-app-666d4dd49-zv5ls             0/1     Running            0          82s
    example-registry-clair-postgres-54956d6d9c-g9q2l       0/1     Running            0
    example-registry-quay-app-64b8db6fb8-8brtg             1/1     Running            0          89s
    example-registry-quay-app-64b8db6fb8-lpfjw             1/1     Running            0          82s
    example-registry-quay-app-upgrade-p8fzp                0/1     Completed          0          96s
    example-registry-quay-config-editor-85cc6b4c48-m94tf   1/1     Running            0          89s
    example-registry-quay-database-56b9cddb47-bnksb        1/1     Running            0          89s
    example-registry-quay-mirror-6cb99dc448-gkpjr          1/1     Running            0          52s
    example-registry-quay-mirror-6cb99dc448-s92tp          1/1     Running            0          52s
    example-registry-quay-postgres-init-qbkn2              0/1     Completed          0          89s
    example-registry-quay-redis-8bd67b647-jwpg9            1/1     Running            0          1d
    
    
Accessing the updated secret

Since a new pod has been created for the config tool, a new secret will have been created, and you will need to use the updated password when you next attempt to log in:

Config editor secret updated

Accessing the updated config.yaml

Use the config bundle to access the updated config.yaml file.

  1. On the QuayRegistry details screen, click on the Config Bundle Secret.

  2. In the Data section of the Secret details screen, click Reveal values to see the config.yaml file.

  3. Check that the change has been applied. In this case, quayadmin should be in the list of superusers:

    ...
    SERVER_HOSTNAME: example-quay-openshift-operators.apps.docs.quayteam.org
    SETUP_COMPLETE: true
    SUPER_USERS:
      - quayadmin
    TAG_EXPIRATION_OPTIONS:
      - 2w
    ...

:leveloffset!:



== Quay Operator features

:leveloffset: +2

[[operator-console-monitoring-alerting]]
= Console monitoring and alerting

{productname} {producty} provides support for monitoring Quay instances that were deployed using the Operator, from inside the OpenShift console. The new monitoring features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods.

[NOTE]
====
To enable the monitoring features, the Operator must be installed in "all namespaces" mode.
====

== Dashboard

In the OpenShift console, navigate to Monitoring -> Dashboards and search for the dashboard of your desired Quay registry instance:

image:choose-dashboard.png[Choose Quay dashboard]

The dashboard shows various statistics including:

* The number of Organizations, Repositories, Users and Robot accounts
* CPU Usage and Max Memory Usage
* Rates of Image Pulls and Pushes, and Authentication requests
* API request rate
* Latencies

image:console-dashboard-1.png[Console dashboard]

== Metrics

You can see the underlying metrics behind the Quay dashboard, by accessing Monitoring -> Metrics in the UI. In the Expression field, enter the text `quay_` to see the list of metrics available:

image:quay-metrics.png[Quay metrics]

Select a sample metric, for example, `quay_org_rows`:

image:quay-metrics-org-rows.png[Number of Quay organizations]

This metric shows the number of organizations in the registry, and it is directly surfaced in the dashboard as well.

== Alerting

An alert is raised if the Quay pods restart too often. The alert can be configured by accessing the Alerting rules tab from Monitoring -> Alerting in the console UI and searching for the Quay-specific alert:

image:alerting-rules.png[Alerting rules]

Select the QuayPodFrequentlyRestarting rule detail to configure the alert:

image:quay-pod-frequently-restarting.png[Alerting rule details]


:leveloffset: 2


:leveloffset: +2

[[clair-openshift-airgap-update]]
=  Manually updating the vulnerability databases for Clair in an air-gapped OpenShift cluster

Clair utilizes packages called `updaters` that encapsulate the logic of fetching and parsing different vulnerability databases. Clair supports running updaters in a different environment and importing the results. This is aimed at supporting installations that disallow the Clair cluster from talking to the Internet directly.

To manually update the vulnerability databases for Clair in an air-gapped OpenShift cluster, use the following steps:

* Obtain the `clairctl` program
* Retrieve the Clair config
* Use `clairctl` to export the updaters bundle from a Clair instance that has access to the internet
* Update the Clair config in the air-gapped OpenShift cluster to allow access to the Clair database
* Transfer the updaters bundle from the system with internet access, to make it available inside the air-gapped environment
* Use `clairctl` to import the updaters bundle into the Clair instance for the air-gapped OpenShift cluster

:leveloffset: 2
:leveloffset: +3

[[clair-clairctl]]
= Obtaining clairctl

To obtain the `clairctl` program from a Clair deployment in an OpenShift cluster, use the `oc cp` command, for example:

$ oc -n quay-enterprise cp example-registry-clair-app-64dd48f866-6ptgw:/usr/bin/clairctl ./clairctl
$ chmod u+x ./clairctl

For a standalone Clair deployment, use the `podman cp` command, for example:

$ sudo podman cp clairv4:/usr/bin/clairctl ./clairctl
$ chmod u+x ./clairctl

:leveloffset: 2
==== Retrieving the Clair config
:leveloffset: +4

[[clair-openshift-config]]
= Clair on OpenShift config

To retrieve the configuration file for a Clair instance deployed using the OpenShift Operator, retrieve and decode the config secret using the appropriate namespace, and save it to file, for example:

$ kubectl get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={$.data['config\.yaml']}" | base64 -d > clair-config.yaml

An excerpt from a Clair configuration file is shown below:

.clair-config.yaml
[source,yaml]

http_listen_addr: :8080
introspection_addr: ""
log_level: info
indexer:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  scanner:
    package: {}
    dist: {}
    repo: {}
  airgap: false
matcher:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  max_conn_pool: 100
  indexer_addr: ""
  migrations: true
  period: null
  disable_updaters: false
notifier:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  migrations: true
  indexer_addr: ""
  matcher_addr: ""
  poll_interval: 5m
  delivery_interval: 1m
...

:leveloffset: 2
:leveloffset: +4

[[clair-standalone-config-location]]
= Standalone Clair config

For standalone Clair deployments, the config file is the one specified in CLAIR_CONF environment variable in the `podman run` command, for example:

[subs="verbatim,attributes"]
....
sudo podman run -d --rm --name clairv4 \
  -p 8081:8081 -p 8089:8089 \
  -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo \
  -v /etc/clairv4/config:/clair:Z \
  {productrepo}/{clairimage}:{productminv}
....

:leveloffset: 2
:leveloffset: +3

[[clair-export-bundle]]
= Exporting the updaters bundle

From a Clair instance that has access to the internet, use `clairctl` with the appropriate configuration file to export the updaters bundle:

$ ./clairctl --config ./config.yaml export-updaters updates.gz

:leveloffset: 2
:leveloffset: +3

[[clair-openshift-airgap-database]]
= Configuring access to the Clair database in the air-gapped OpenShift cluster

* Use `kubectl` to determine the Clair database service:
+

$ kubectl get svc -n quay-enterprise

NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
example-registry-clair-app       ClusterIP   172.30.224.93   <none>        80/TCP,8089/TCP   4d21h
example-registry-clair-postgres  ClusterIP   172.30.246.88   <none>        5432/TCP          4d21h
...

* Forward the Clair database port so that it is accessible from the local machine, for example:
+

$ kubectl port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432

* Update the Clair configuration file, replacing the value of the `host` in the multiple `connstring` fields with `localhost`, for example:
+
.clair-config.yaml
[source,yaml]
...
connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
...
[NOTE]
====
As an alternative to using `kubectl port-forward`, you can use `kubefwd` instead. With this method, there is no need to modify the `connstring` field in the Clair configuration file to use `localhost`.
====

:leveloffset: 2
:leveloffset: +3

[[clair-openshift-airgap-import-bundle]]
= Importing the updaters bundle into the air-gapped environment

After transferring the updaters bundle to the air-gapped environment, use `clairctl` to import the bundle into the Clair database deployed by the OpenShift Operator:

$ ./clairctl --config ./clair-config.yaml import-updaters updates.gz

:leveloffset: 2


:leveloffset: +2

[[fips-overview]]
= FIPS readiness and compliance

FIPS (the Federal Information Processing Standard developed by the National Institute of Standards and Technology, NIST) is regarded as the gold standard for securing and encrypting sensitive data, particularly in heavily regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux and Red Hat OpenShift Container Platform support this standard by providing a FIPS mode, in which the system only allows the use of certain FIPS-validated cryptographic modules, like `openssl`. This ensures FIPS compliance.


{productname} supports running on RHEL and OCP in FIPS mode in production since version 3.5. Furthermore, {productname} itself also commits to exclusively using cryptography libraries that are validated or are in the process of being validated by NIST. {productname} 3.5 has pending FIPS 140-2 validation based on the RHEL 8.3 cryptography libraries. As soon as that validation is finalized, {productname} will be officially FIPS compliant.



:leveloffset: 2


:leveloffset: +1

= Advanced Concepts



:leveloffset: 2
:leveloffset: +2

[[operator-deploy-infrastructure]]
= Deploying Quay on infrastructure nodes

By default, Quay-related pods are placed on arbitrary worker nodes when using the Operator to deploy the registry. The OpenShift Container Platform documentation shows how to use machine sets to configure nodes to only host infrastructure components (see link:https://docs.openshift.com/container-platform/4.7/machine_management/creating-infrastructure-machinesets.html[]).


If you are not using OCP MachineSet resources to deploy infra nodes, this section shows you how to manually label and taint nodes for infrastructure purposes.

Once you have configured your infrastructure nodes, either manually or using machine sets, you can then control the placement of Quay pods on these nodes using node selectors and tolerations.

== Label and taint nodes for infrastructure use

In the cluster used in this example, there are three master nodes and six worker nodes:

$ oc get nodes

NAME                                               STATUS   ROLES    AGE     VERSION
user1-jcnp6-master-0.c.quay-devel.internal         Ready    master   3h30m   v1.20.0+ba45583
user1-jcnp6-master-1.c.quay-devel.internal         Ready    master   3h30m   v1.20.0+ba45583
user1-jcnp6-master-2.c.quay-devel.internal         Ready    master   3h30m   v1.20.0+ba45583
user1-jcnp6-worker-b-65plj.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal   Ready    worker   3h22m   v1.20.0+ba45583
user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583

Label the final three worker nodes for infrastructure use:

$ oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=

Now, when you list the nodes in the cluster, the last 3 worker nodes will have an added role of `infra`:

$ oc get nodes

NAME                                               STATUS   ROLES          AGE     VERSION
user1-jcnp6-master-0.c.quay-devel.internal         Ready    master         4h14m   v1.20.0+ba45583
user1-jcnp6-master-1.c.quay-devel.internal         Ready    master         4h15m   v1.20.0+ba45583
user1-jcnp6-master-2.c.quay-devel.internal         Ready    master         4h14m   v1.20.0+ba45583
user1-jcnp6-worker-b-65plj.c.quay-devel.internal   Ready    worker         4h6m    v1.20.0+ba45583
user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal   Ready    worker         4h5m    v1.20.0+ba45583
user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal   Ready    worker         4h5m    v1.20.0+ba45583
user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal   Ready    infra,worker   4h6m    v1.20.0+ba45583
user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal   Ready    infra,worker   4h6m    v1.20.0+ba45583
user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal   Ready    infra,worker   4h6m    v1.20.0+ba45583

With an infra node being assigned as a worker, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and then add tolerations for the pods you want to control.

$ oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule

== Create a Project with node selector and toleration

If you have already deployed Quay using the Quay Operator, remove the installed operator and any specific namespace(s) you created for the deployment.

Create a Project resource, specifying a node selector and toleration as shown in the following example:

.quay-registry.yaml

kind: Project
apiVersion: project.openshift.io/v1
metadata:
  name: quay-registry
  annotations:
    openshift.io/node-selector: 'node-role.kubernetes.io/infra='
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Exists", "effect": "NoSchedule", "key":
      "node-role.kubernetes.io/infra"}
      ]

Use the `oc apply` command to create the project:

$ oc apply -f quay-registry.yaml

project.project.openshift.io/quay-registry created

Any subsequent resources created in the `quay-registry` namespace should now be scheduled on the dedicated infrastructure nodes.


== Install the Quay Operator in the namespace

When installing the Quay Operator, specify the appropriate project namespace explicitly, in this case `quay-registry`. This will result in the operator pod itself landing on one of the three infrastructure nodes:

$ oc get pods -n quay-registry -o wide

NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE
quay-operator.v3.4.1-6f6597d8d8-bd4dp   1/1     Running   0          30s   10.131.0.16   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal

== Create the registry

Create the registry as explained earlier, and then wait for the deployment to be ready. When you list the Quay pods, you should now see that they have only been scheduled on the three nodes that you have labelled for infrastructure purposes:

$ oc get pods -n quay-registry -o wide

NAME                                                   READY   STATUS      RESTARTS   AGE     IP            NODE
example-registry-clair-app-789d6d984d-gpbwd            1/1     Running     1          5m57s   10.130.2.80   user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
example-registry-clair-postgres-7c8697f5-zkzht         1/1     Running     0          4m53s   10.129.2.19   user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-app-56dd755b6d-glbf7             1/1     Running     1          5m57s   10.129.2.17   user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-config-editor-7bf9bccc7b-dpc6d   1/1     Running     0          5m57s   10.131.0.23   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
example-registry-quay-database-8dc7cfd69-dr2cc         1/1     Running     0          5m43s   10.129.2.18   user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-mirror-78df886bcc-v75p9          1/1     Running     0          5m16s   10.131.0.24   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
example-registry-quay-postgres-init-8s8g9              0/1     Completed   0          5m54s   10.130.2.79   user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
example-registry-quay-redis-5688ddcdb6-ndp4t           1/1     Running     0          5m56s   10.130.2.78   user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
quay-operator.v3.4.1-6f6597d8d8-bd4dp                  1/1     Running     0          22m     10.131.0.16   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal

:leveloffset: 2
:leveloffset: +2

[[monitoring-single-namespace]]
= Enabling monitoring when Operator is installed in a single namespace

When {productname} Operator is installed in a single namespace, the monitoring component is unmanaged. To configure monitoring, you need to enable it for user-defined namespaces in OpenShift Container Platform. For more information, see the OCP documentation for link:https://docs.openshift.com/container-platform/4.7/monitoring/configuring-the-monitoring-stack.html[Configuring the monitoring stack] and link:https://docs.openshift.com/container-platform/4.7/monitoring/enabling-monitoring-for-user-defined-projects.html[Enabling monitoring for user-defined projects].

The following steps show you how to configure monitoring for Quay, based on the OCP documentation.

== Creating a cluster monitoring config map

. Check whether the `cluster-monitoring-config` ConfigMap object exists:
+
```
$ oc -n openshift-monitoring get configmap cluster-monitoring-config

Error from server (NotFound): configmaps "cluster-monitoring-config" not found
```

. If the ConfigMap object does not exist: 
.. Create the following YAML manifest. In this example, the file is called `cluster-monitoring-config.yaml`:
+

$ cat cluster-monitoring-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |

.. Create the ConfigMap object:
+

$ oc apply -f cluster-monitoring-config.yaml

configmap/cluster-monitoring-config created

+

$ oc -n openshift-monitoring get configmap cluster-monitoring-config

NAME                        DATA   AGE
cluster-monitoring-config   1      12s

== Creating a user-defined workload monitoring config map

. Check whether the `user-workload-monitoring-config` ConfigMap object exists:
+

$ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config

Error from server (NotFound): configmaps "user-workload-monitoring-config" not found

. If the ConfigMap object does not exist:
.. Create the following YAML manifest. In this example, the file is called `user-workload-monitoring-config.yaml`:
+

$ cat user-workload-monitoring-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |

.. Create the ConfigMap object:
+

$ oc apply -f user-workload-monitoring-config.yaml

configmap/user-workload-monitoring-config created

== Enable monitoring for user-defined projects

. Check whether monitoring for user-defined projects is running:
+

$ oc get pods -n openshift-user-workload-monitoring

No resources found in openshift-user-workload-monitoring namespace.

. Edit the  `cluster-monitoring-config` ConfigMap:
+

$ oc -n openshift-monitoring edit configmap cluster-monitoring-config

 
. Set `enableUserWorkload: true` to enable monitoring for user-defined projects on the cluster:
+
[source,yaml]

apiVersion: v1
data:
  config.yaml: |
    enableUserWorkload: true
kind: ConfigMap
metadata:
  annotations:

. Save the file to apply the changes and then check that the appropriate pods are running:
+

$ oc get pods -n openshift-user-workload-monitoring

NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-6f96b4b8f8-gq6rl   2/2     Running   0          15s
prometheus-user-workload-0             5/5     Running   1          12s
prometheus-user-workload-1             5/5     Running   1          12s
thanos-ruler-user-workload-0           3/3     Running   0          8s
thanos-ruler-user-workload-1           3/3     Running   0          8s

 

== Create a Service object to expose Quay metrics

. Create a YAML file for the Service object:
+
```
$ cat quay-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    quay-component: monitoring
    quay-operator/quayregistry: example-registry
  name: example-registry-quay-metrics
  namespace: quay-enterprise
spec:
  ports:
  - name: quay-metrics
    port: 9091
    protocol: TCP
    targetPort: 9091
  selector:
    quay-component: quay-app
    quay-operator/quayregistry: example-registry
  type: ClusterIP
```

. Create the Service object:
+
```
$ oc apply -f quay-service.yaml

service/example-registry-quay-metrics created
```
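
Before wiring up the `ServiceMonitor`, you can optionally check that the metrics endpoint responds. The following is a minimal sketch that runs a temporary pod; the container image is an assumption, and the service name and namespace must match the `Service` created above:

```
$ oc run metrics-check --rm -i --restart=Never \
    --image=registry.access.redhat.com/ubi8/ubi -- \
    curl -s http://example-registry-quay-metrics.quay-enterprise.svc.cluster.local:9091/metrics
```

If the endpoint is reachable, the command prints Prometheus-formatted output, including metrics with the `quay_` prefix.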

== Create a ServiceMonitor object

Configure OpenShift Monitoring to scrape the metrics by creating a ServiceMonitor resource.


. Create a YAML file for the ServiceMonitor resource:
+
```
$ cat quay-service-monitor.yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    quay-operator/quayregistry: example-registry
  name: example-registry-quay-metrics-monitor
  namespace: quay-enterprise
spec:
  endpoints:
  - port: quay-metrics
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      quay-component: monitoring
```

. Create the ServiceMonitor:
+
```
$ oc apply -f quay-service-monitor.yaml

servicemonitor.monitoring.coreos.com/example-registry-quay-metrics-monitor created
```

== View the metrics in OpenShift

You can access the metrics in the OpenShift console under `Monitoring` -> `Metrics`. In the Expression field, enter the text `quay_` to see the list of metrics available:

image:metrics-single-namespace.png[Quay metrics]


For example, if you have added users to your registry, select the `quay_users_rows` metric:

image:metrics-single-namespace-users.png[Quay metrics]

:leveloffset: 2
:leveloffset: +2

[[operator-resize-storage]]
= Resizing Managed Storage

The Quay Operator creates default object storage using the defaults provided by RHOCS when creating a `NooBaa` object (50 GiB). There are two ways to extend this storage: resize the existing PVC, or add more PVCs in a new storage pool.

== Resize Noobaa PVC

. Log into the OpenShift console and select `Storage` -> `Persistent Volume Claims`.
. Select the `PersistentVolumeClaim` named like `noobaa-default-backing-store-noobaa-pvc-*`.
. From the Action menu, select `Expand PVC`.
. Enter the new size of the Persistent Volume Claim and select `Expand`.

After a few minutes (depending on the size of the PVC), the expanded size should reflect in the PVC's `Capacity` field.
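
If you prefer the command line, the same expansion can be requested by patching the claim's storage request. This is a minimal sketch; the claim name and the new size (`100Gi`) are placeholders, and the underlying `StorageClass` must allow volume expansion:

```
$ oc -n openshift-storage patch pvc <noobaa-backing-store-pvc-name> \
    --type merge -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
```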

[NOTE]
====
Expanding CSI volumes is a Technology Preview feature only. For more information, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/storage/expanding-persistent-volumes[].
====

== Add Another Storage Pool

. Log into the OpenShift console and select `Networking` -> `Routes`.  Make sure the `openshift-storage` project is selected.
. Click on the `Location` field for the `noobaa-mgmt` Route.
. Log into the Noobaa Management Console.
. On the main dashboard, under `Storage Resources`, select `Add Storage Resources`.
. Select `Deploy Kubernetes Pool`.
. Enter a new pool name.  Click `Next`.
. Choose the number of Pods to manage the pool and set the size per node.  Click `Next`.
. Click `Deploy`.

After a few minutes, the additional storage pool will be added to the Noobaa resources and available for use by {productname}.


:leveloffset: 2
:leveloffset: +2

[[operator-customize-images]]
= Customizing Default Operator Images

[NOTE]
====
Using this mechanism is not supported for production Quay environments and is intended only for development and testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Quay Operator.
====

In certain circumstances, it may be useful to override the default images used by the Operator.  This can be done by setting one or more environment variables in the Quay Operator `ClusterServiceVersion`.

== Environment Variables
The following environment variables are used in the Operator to override component images:

[cols=2*]
|===
|Environment Variable
|Component

|`RELATED_IMAGE_COMPONENT_QUAY`
|`base`

|`RELATED_IMAGE_COMPONENT_CLAIR`
|`clair`

|`RELATED_IMAGE_COMPONENT_POSTGRES`
|`postgres` and `clair` databases

|`RELATED_IMAGE_COMPONENT_REDIS`
|`redis`
|===

[NOTE]
====
Override images *must* be referenced by manifest digest (`@sha256:`), not by tag (for example, `:latest`).
====

== Applying Overrides to a Running Operator

When the Quay Operator is installed in a cluster via the link:https://docs.openshift.com/container-platform/4.6/operators/understanding/olm/olm-understanding-olm.html[Operator Lifecycle Manager (OLM)], the managed component container images can be easily overridden by modifying the `ClusterServiceVersion` object, which is OLM's representation of a running Operator in the cluster. Find the Quay Operator's `ClusterServiceVersion` either by using a Kubernetes UI or `kubectl`/`oc`:

```
$ oc get clusterserviceversions -n <your-namespace>
```

Using the UI, `oc edit`, or any other method, modify the Quay `ClusterServiceVersion` to include the environment variables outlined above to point to the override images:

*JSONPath*: `spec.install.spec.deployments[0].spec.template.spec.containers[0].env`

[source,yaml]
----
- name: RELATED_IMAGE_COMPONENT_QUAY
  value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
- name: RELATED_IMAGE_COMPONENT_CLAIR
  value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
- name: RELATED_IMAGE_COMPONENT_POSTGRES
  value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- name: RELATED_IMAGE_COMPONENT_REDIS
  value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
----

Note that this is done at the Operator level, so every QuayRegistry will be deployed using these same overrides.




:leveloffset: 2
:leveloffset: +2

[[operator-cloudfront]]
= AWS S3 CloudFront

If you use AWS CloudFront in front of S3 for backend registry storage, include the CloudFront signing private key when you create the config bundle secret, as shown in the following example:
....
$ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
....
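
For reference, the `config_awss3cloudfront.yaml` file referenced above contains a `DISTRIBUTED_STORAGE_CONFIG` entry that points Quay at the CloudFront distribution and the mounted signing key. The snippet below is an illustrative sketch only; all values are placeholders and the exact field layout should be verified against the Project Quay configuration documentation:

[source,yaml]
----
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - CloudFrontedS3Storage
    - cloudfront_distribution_domain: <distribution>.cloudfront.net
      cloudfront_key_id: <cloudfront-key-id>
      cloudfront_privatekey_filename: default-cloudfront-signing-key.pem
      host: s3.<region>.amazonaws.com
      s3_access_key: <access-key>
      s3_secret_key: <secret-key>
      s3_bucket: <bucket-name>
      storage_path: /registry
----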

:leveloffset: 2



:leveloffset: +1

[[operator-upgrade]]
= Upgrading Quay using the Quay Operator

The Quay Operator follows a _synchronized versioning_ scheme, which means that each version of the Operator is tied to the version of Quay and its components which it manages. There is no field on the `QuayRegistry` custom resource which sets the version of Quay to deploy; the Operator only knows how to deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Quay on Kubernetes.

== Operator Lifecycle Manager

The Quay Operator should be installed and upgraded using the link:https://docs.openshift.com/container-platform/4.6/operators/understanding/olm/olm-understanding-olm.html[Operator Lifecycle Manager (OLM)]. When you create a `Subscription` with the default approval strategy (`installPlanApproval: Automatic`), OLM automatically upgrades the Quay Operator whenever a new version becomes available.

[WARNING]
====
When the Quay Operator is installed via Operator Lifecycle Manager, it can be configured for automatic or manual upgrades. This option is shown on the OperatorHub page for the Quay Operator during installation, and it is also reflected in the Quay Operator `Subscription` object through the `installPlanApproval` field. Choosing `Automatic` means that the Quay Operator is upgraded automatically whenever a new Operator version is released. If this is not desirable, select the `Manual` approval strategy.
====
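
For reference, a `Subscription` created from the command line might look like the following sketch. The channel, package, namespace, and catalog source names are assumptions that must match your cluster; the approval behavior is controlled by the `installPlanApproval` field:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators
spec:
  channel: stable-3.6          # assumed channel name
  name: quay-operator          # assumed package name
  source: community-operators  # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
----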


== Upgrading Quay by upgrading the Quay Operator

The general approach for upgrading installed Operators on OpenShift is documented at link:https://docs.openshift.com/container-platform/4.7/operators/admin/olm-upgrading-operators.html[Upgrading installed Operators].

[NOTE]
====
In general, {productname} only supports upgrading from one minor version to the next, for example, 3.4 -> 3.5.

However, for v3.6, multiple upgrade paths are supported:

* 3.3 -> 3.6
* 3.4 -> 3.6
* 3.5 -> 3.6

TODO more details here
====


=== Upgrading Quay
From a {productname} point of view, to update from one minor version to the next, for example, 3.4 -> 3.5, you need to actively change the update channel for the Quay Operator.

For `z` stream upgrades, for example, 3.4.2 -> 3.4.3, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a `z` stream upgrade depends on the `approvalStrategy` as outlined above. If the approval strategy is set to `Automatic`, the Operator will upgrade automatically to the newest `z` stream, resulting in automatic, rolling Quay updates to newer `z` streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.
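
To see which Operator version is currently installed and which version the channel is currently offering, inspect the `Subscription` status. A minimal sketch, assuming the `Subscription` is named `quay-operator` in the `openshift-operators` namespace:

```
$ oc -n openshift-operators get subscription quay-operator \
    -o jsonpath='{.status.installedCSV}{" -> "}{.status.currentCSV}{"\n"}'
```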


=== Changing the update channel for an Operator

The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the *Subscription* tab for the installed Quay Operator. For subscriptions with an `Automatic` approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
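
The channel can also be switched from the command line by patching the `Subscription`. This is a sketch only; the `Subscription` name, namespace, and target channel are assumptions that must match your installation:

```
$ oc -n openshift-operators patch subscription quay-operator \
    --type merge -p '{"spec":{"channel":"stable-3.6"}}'
```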



=== Manually approving a pending Operator upgrade

If an installed Operator has the approval strategy in its subscription set to `Manual`, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the `Subscription` tab for the Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click `Approve` and return to the page that lists Installed Operators to monitor the progress of the upgrade.
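
Pending upgrades can also be reviewed and approved from the command line. The following sketch lists install plans and approves one that is awaiting manual approval; the namespace and install plan name are placeholders:

```
$ oc -n openshift-operators get installplans

$ oc -n openshift-operators patch installplan <install-plan-name> \
    --type merge -p '{"spec":{"approved":true}}'
```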

The following image shows the *Subscription* tab in the UI, including the update `Channel`, the `Approval` strategy, the `Upgrade status` and the `InstallPlan`:

image:update-channel-approval-strategy.png[Subscription tab including upgrade Channel and Approval strategy]

The list of Installed Operators provides a high-level summary of the current Quay installation:

image:installed-operators-list.png[Installed Operators]


== Upgrading a QuayRegistry

When the Quay Operator starts up, it immediately looks for any `QuayRegistries` it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:

* If `status.currentVersion` is unset, reconcile as normal.
* If `status.currentVersion` equals the Operator version, reconcile as normal.
* If `status.currentVersion` does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the `status.currentVersion` to the Operator's version once complete. If it cannot be upgraded, return an error and leave the `QuayRegistry` and its deployed Kubernetes objects alone.
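
You can check the version a given registry is currently running by reading `status.currentVersion` directly. A minimal sketch, assuming a registry named `example-registry` in the `quay-enterprise` namespace:

```
$ oc -n quay-enterprise get quayregistry example-registry \
    -o jsonpath='{.status.currentVersion}{"\n"}'
```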

== Enabling features in Quay 3.6

=== Console monitoring and alerting

Support for monitoring Quay 3.6 in the OpenShift console requires that the Operator is installed in all namespaces. If you previously installed the Operator in a specific namespace, delete the Operator and re-install it for all namespaces once the upgrade has taken place.

=== OCI and Helm support

Support for Helm and some OCI artifacts is now enabled by default in {productname} {producty}. If you want to explicitly enable the feature, for example, if you are upgrading from a version where it is not enabled by default, you need to reconfigure your Quay deployment to enable the use of OCI artifacts using the following properties:

TODO

////
[source,yaml]
----
FEATURE_GENERAL_OCI_SUPPORT: true
FEATURE_HELM_OCI_SUPPORT: true
----

////

== Upgrading a QuayEcosystem

Upgrades are supported from previous versions of the Operator which used the `QuayEcosystem` API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the `QuayEcosystem` for it to be migrated. A new `QuayRegistry` will be created for the Operator to manage, but the old `QuayEcosystem` will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing `QuayEcosystem` to a new `QuayRegistry`, follow these steps:

. Add `"quay-operator/migrate": "true"` to the `metadata.labels` of the `QuayEcosystem`.
+
```
$ oc edit quayecosystem <quayecosystemname>
```
+
[source,yaml]
----
metadata:
  labels:
    quay-operator/migrate: "true"
----

. Wait for a `QuayRegistry` to be created with the same `metadata.name` as your `QuayEcosystem`. The `QuayEcosystem` will be marked with the label `"quay-operator/migration-complete": "true"`.

. Once the `status.registryEndpoint` of the new `QuayRegistry` is set, access Quay and confirm all data and settings were migrated successfully.

. When you are confident everything worked correctly, you may delete the `QuayEcosystem` and Kubernetes garbage collection will clean up all old resources.

=== Reverting QuayEcosystem Upgrade

If something goes wrong during the automatic upgrade from `QuayEcosystem` to `QuayRegistry`, follow these steps to revert back to using the `QuayEcosystem`:

* Delete the `QuayRegistry` using either the UI or `kubectl`:
+
```sh
$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
```

* If external access was provided using a `Route`, change the `Route` to point back to the original `Service` using the UI or `kubectl`.

[NOTE]
====
If your `QuayEcosystem` was managing the Postgres database, the upgrade process will migrate your data to a new Postgres database managed by the upgraded Operator.  Your old database will not be changed or removed but Quay will no longer use it once the migration is complete.  If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component.
====

=== Supported QuayEcosystem Configurations for Upgrades

The Quay Operator will report errors in its logs and in `status.conditions` if migrating a `QuayEcosystem` component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Quay's `config.yaml`.

*Database*

Ephemeral database not supported (`volumeSize` field must be set).

*Redis*

Nothing special needed.

*External Access*

Only passthrough `Route` access supported for automatic migration. Manual migration required for other methods.

* `LoadBalancer` without custom hostname:
After the `QuayEcosystem` is marked with the label `"quay-operator/migration-complete": "true"`, delete the `metadata.ownerReferences` field from the existing `Service` _before_ deleting the `QuayEcosystem`, to prevent Kubernetes from garbage collecting the `Service` and removing the load balancer (a patch sketch follows this list). A new `Service` will be created with `metadata.name` format `<QuayEcosystem-name>-quay-app`. Edit the `spec.selector` of the existing `Service` to match the `spec.selector` of the new `Service` so that traffic to the old load balancer endpoint is directed to the new pods. You are now responsible for the old `Service`; the Quay Operator will not manage it.

* `LoadBalancer`/`NodePort`/`Ingress` with custom hostname:
A new `Service` of type `LoadBalancer` will be created with `metadata.name` format `<QuayEcosystem-name>-quay-app`. Change your DNS settings to point to the `status.loadBalancer` endpoint provided by the new `Service`.
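
For the `LoadBalancer` case without a custom hostname described above, the owner reference can be removed with a JSON patch before the `QuayEcosystem` is deleted. This is a sketch only; the `Service` name and namespace are placeholders:

```
$ oc -n <namespace> patch service <old-quay-service-name> \
    --type json -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'
```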

*Clair*

Nothing special needed.

*Object Storage*

`QuayEcosystem` did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.

*Repository Mirroring*

Nothing special needed.

:leveloffset: 2


[discrete]
== Additional resources
* For more details on the {productname} Operator, see the upstream
link:https://github.com/quay/quay-operator/[quay-operator] project.