Getting started with Project Quay configuration

Project Quay is a secure artifact registry that can be deployed as a self-managed installation or through the Red Hat Quay on OpenShift Container Platform Operator. Each deployment type offers a different approach to configuration and management, but both rely on the same set of configuration parameters to control registry behavior. Common configuration parameters allow administrators to define how their registry interacts with users, storage backends, authentication providers, security policies, and other integrated services.

There are two ways to configure Project Quay, depending on your deployment type:

  • On prem Project Quay: With an on prem Project Quay deployment, a registry administrator provides a config.yaml file that includes all required parameters. For this deployment type, the registry is unable to start without a valid configuration.

  • Project Quay Operator: By default, the Project Quay Operator automatically configures your Project Quay deployment by generating the minimal required values and deploying the necessary components for you. After the initial deployment, you can customize your registry’s behavior by modifying the QuayRegistry custom resource, or by using the OpenShift Container Platform Web Console.

This guide offers an overview of the following configuration concepts:

  • How to retrieve, inspect, and modify your current configuration for both on prem and Operator-based Project Quay deployment types.

  • The minimal configuration fields required for startup.

  • An overview of all available Project Quay configuration fields and YAML examples for those fields.

Project Quay configuration disclaimer

In both self-managed and Operator-based deployments of Project Quay, certain features and configuration parameters are not actively used or implemented. As a result, some feature flags, such as those that enable or disable specific functionality, and configuration parameters that are not explicitly documented, supported, or requested for documentation by Red Hat Support, should only be modified with caution.

Unused or undocumented features might not be fully tested, supported, or compatible with Project Quay. Modifying these settings could result in unexpected behavior or disruptions to your deployment.

Understanding the Project Quay configuration file

Whether deployed on premise or by the Red Hat Quay on OpenShift Container Platform Operator, the registry’s behavior is defined by the config.yaml file. The config.yaml file must include all required configuration fields for the registry to start. Project Quay administrators can also define optional parameters that customize their registry, such as authentication parameters, storage parameters, proxy cache parameters, and so on.

The config.yaml file must be written using valid YAML ("YAML Ain’t Markup Language") syntax, and Project Quay cannot start if the file contains formatting errors or is missing required fields. Regardless of deployment type, whether on premise or Red Hat Quay on OpenShift Container Platform configured by the Operator, the YAML principles stay the same, even if the required configuration fields differ slightly.

The following section outlines basic YAML syntax relevant to creating and editing the Project Quay config.yaml file. For a more complete overview of YAML, see What is YAML.

Key-value pairs

Configuration fields within a config.yaml file are written as key-value pairs in the following form:

# ... (1)
EXAMPLE_FIELD_NAME: <value>
# ... (1)
  1. Denotes that there are fields before and after this specific field. Note that by supplying the #, or hash symbol, comments can be provided within the YAML file.

Each line within a config.yaml file contains a field name, followed by a colon, a space, and then an appropriate value that matches the key. The following example shows you how the AUTHENTICATION_TYPE configuration field must be formatted in your config.yaml file.

AUTHENTICATION_TYPE: Database (1)
# ...
  1. The authentication engine to use for credential authentication.

In the previous example, AUTHENTICATION_TYPE is set to Database; however, other authentication backends require a different value. The following example shows you how your config.yaml file might look if LDAP, or Lightweight Directory Access Protocol, was used for authentication:

AUTHENTICATION_TYPE: LDAP
# ...

Indentation and nesting

Many Project Quay configuration fields require indentation to indicate nested structures. Indentation must be done using literal space characters; tab characters are not allowed by design. Indentation must also be consistent across the file. The following YAML snippet shows you how the BUILDLOGS_REDIS field uses indentation for the required host, password, and port fields:

# ...
BUILDLOGS_REDIS:
    host: quay-server.example.com
    password: example-password
    port: 6379
# ...

Lists

In some cases, the Project Quay configuration field relies on lists to define certain values. Lists are formatted by using a hyphen (-) followed by a space. The following example shows you how the SUPER_USERS configuration field uses a list to define superusers:

# ...
SUPER_USERS:
- quayadmin
# ...

Quoted values

Some Project Quay configuration fields require the use of quotation marks ("") to properly define a variable. This is generally not required. The following example shows you how the FOOTER_LINKS configuration field uses quotation marks to define the TERMS_OF_SERVICE_URL, PRIVACY_POLICY_URL, SECURITY_URL, and ABOUT_URL:

FOOTER_LINKS:
  "TERMS_OF_SERVICE_URL": "https://www.index.hr"
  "PRIVACY_POLICY_URL": "https://www.jutarnji.hr"
  "SECURITY_URL": "https://www.bug.hr"
  "ABOUT_URL": "https://www.zagreb.hr"

Comments

The hash symbol, or #, can be placed at the beginning of a line to add a comment or to temporarily disable a configuration field. Commented lines are ignored by the configuration parser and do not affect the behavior of the registry. For example:

# ...
# FEATURE_UI_V2: true
# ...

In this example, the FEATURE_UI_V2 configuration field is ignored by the parser, meaning that the option to use the v2 UI is disabled. Commenting out a required configuration field prevents the registry from starting.

On prem Project Quay configuration overview

For on premise deployments of Project Quay, the config.yaml file that is managed by the administrator is mounted into the container at startup and read by Project Quay during initialization. The config.yaml file is not dynamically reloaded, meaning that any changes made to the file require restarting the registry container to take effect.

This chapter provides an overview of the following concepts:

  • The minimal required configuration fields.

  • How to edit and manage your configuration after deployment.

This section applies specifically to on premise Project Quay deployment types. For information about configuring Red Hat Quay on OpenShift Container Platform, see "Red Hat Quay on OpenShift Container Platform configuration overview".

Required configuration fields

The following configuration fields are required for an on premise deployment of Project Quay:

Field

Type

Description

AUTHENTICATION_TYPE
(Required)

String

The authentication engine to use for credential authentication.

Values:
One of Database, LDAP, JWT, Keystone, OIDC

Default: Database

BUILDLOGS_REDIS
(Required)

Object

Redis connection details for build logs caching.

.host
(Required)

String

The hostname at which Redis is accessible.

.port
(Required)

Number

The port at which Redis is accessible.

.password

String

The password to connect to the Redis instance.

DATABASE_SECRET_KEY
(Required)

String

Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated.
This value is set automatically by the Project Quay Operator for Operator-based deployments. For standalone deployments, administrators can provide their own key using OpenSSL or a similar tool, as shown in the example after this table. Key length should not exceed 63 characters.

DB_URI
(Required)

String

The URI for accessing the database, including any credentials.

DISTRIBUTED_STORAGE_CONFIG
(Required)

Object

Configuration for storage engine(s) to use in Project Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters.

Default: []

SECRET_KEY
(Required)

String

Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Project Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur.

SERVER_HOSTNAME
(Required)

String

The URL at which Project Quay is accessible, without the scheme.

SETUP_COMPLETE
(Required)

Boolean

This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true.

USER_EVENTS_REDIS
(Required)

Object

Redis connection details for user event handling.

.host
(Required)

String

The hostname at which Redis is accessible.

.port
(Required)

Number

The port at which Redis is accessible.

.password

String

The password to connect to the Redis instance.
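
As noted in the DATABASE_SECRET_KEY description, administrators of standalone deployments can generate their own key. One way to do this, assuming OpenSSL is installed, is to generate a random hexadecimal string. Generating 31 random bytes yields a 62-character string, which stays under the 63-character limit:

$ openssl rand -hex 31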

Minimal configuration file examples

This section provides two examples of a minimal configuration file: one example that uses local storage, and another example that uses cloud-based storage with Google Cloud Platform.

Minimal configuration using local storage

The following example shows a sample minimal configuration file that uses local storage for images.

Important

Only use local storage when deploying a registry for proof of concept purposes. It is not intended for production purposes. When using local storage, you must map a local directory to the datastorage path in the container when starting the registry, as shown in the sketch after the following example. For more information, see Proof of Concept - Deploying Project Quay.

Local storage minimal configuration
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
  host: <quay-server.example.com>
  password: <password>
  port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
  host: <redis_events_url>
  password: <password>
  port: <port>
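
The following sketch shows how a registry using this local storage configuration might be started with Podman. The directory layout mirrors the commands later in this guide, and the image tag is illustrative; substitute the values for your own deployment:

$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z \
   -v /home/<username>/<quay-deployment-directory>/storage:/datastorage:Z \
   registry.redhat.io/quay/quay-rhel8:v3.14.0
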
Minimal configuration using cloud-based storage

In most production environments, Project Quay administrators use cloud or enterprise-grade storage backends provided by supported vendors. The following example shows you how to configure Project Quay to use Google Cloud Platform for image storage. For a complete list of supported storage providers, see Image storage.

Note

When using a cloud or enterprise-grade storage backend, additional configuration, such as mapping the registry to a local directory, is not required.

Cloud storage minimal configuration
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
  host: <quay-server.example.com>
  password: <password>
  port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
    default:
        - GoogleCloudStorage
        - access_key: <access_key>
          bucket_name: <bucket_name>
          secret_key: <secret_key>
          storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - default
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
  host: <redis_events_url>
  password: <password>
  port: <port>

Modifying your configuration file after deployment

After deploying a Project Quay registry with an initial config.yaml file, Project Quay administrators can update the configuration file to enable or disable features as needed. This flexibility allows administrators to tailor the registry to fit their specific environment needs, or to meet certain security policies.

Note

Because the config.yaml file is not dynamically reloaded, you must restart the Project Quay container after making changes for them to take effect.

The following procedure shows you how to retrieve the config.yaml file from the quay-registry container, how to enable a new feature by adding that feature’s configuration field to the file, and how to restart the quay-registry container using Podman.

Prerequisites
  • You have deployed Project Quay.

  • You are a registry administrator.

Procedure
  1. If you have access to the config.yaml file:

    1. Navigate to the directory that is storing the config.yaml file. For example:

      $ cd /home/<username>/<quay-deployment-directory>/config
    2. Make changes to the config.yaml file by adding a new feature flag. The following example enables the v2 UI:

      # ...
      FEATURE_UI_V2: true
      # ...
    3. Save the changes made to the config.yaml file.

    4. Restart the quay-registry container by entering the following command:

      $ podman restart <container_id>
  2. If you do not have access to the config.yaml file and need to create a new file while keeping the same credentials:

    1. Retrieve the container ID of your quay-registry pod by entering the following command:

      $ podman ps
      Example output
      CONTAINER ID  IMAGE                                                                     COMMAND         CREATED       STATUS       PORTS                                                                       NAMES
      5f2297ef53ff  registry.redhat.io/rhel8/postgresql-13:1-109                              run-postgresql  20 hours ago  Up 20 hours  0.0.0.0:5432->5432/tcp                                                      postgresql-quay
      3b40fb83bead  registry.redhat.io/rhel8/redis-5:1                                        run-redis       20 hours ago  Up 20 hours  0.0.0.0:6379->6379/tcp                                                      redis
      0b4b8fbfca6d  registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.14.0-14  registry        20 hours ago  Up 20 hours  0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, 7443/tcp, 9091/tcp, 55443/tcp  quay
    2. Copy the config.yaml file from the quay-registry pod to a directory by entering the following command:

      $ podman cp <container_id>:/quay-registry/conf/stack/config.yaml ./config.yaml
    3. Make changes to the config.yaml file by adding a new feature flag. The following example sets AUTHENTICATION_TYPE to LDAP:

      # ...
      AUTHENTICATION_TYPE: LDAP
      # ...
    4. Re-deploy the registry, mounting the config.yaml file into the quay-registry configuration volume by entering the following command:

      $ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
         --name=quay \
         -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z \
         registry.redhat.io/quay/quay-rhel8:v3.14.0

Troubleshooting the configuration file

Failure to add all of the required configuration fields, or to provide the proper information for some parameters, might result in the quay-registry container failing to deploy. Use the following procedure to view and troubleshoot a failed on premise deployment.

Prerequisites
  • You have created a minimal configuration file.

Procedure
  • Attempt to deploy the quay-registry container by entering the following command. Note that this command uses the -it option, which shows you debugging information:

    $ podman run -it --rm -p 80:8080 -p 443:8443 \
       --name=quay \
       -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z \
       -v /home/<username>/<quay-deployment-directory>/storage:/datastorage:Z \
       33f1c3dc86be
    Example output
    ---
    +--------------------------------+-------+--------+
    | LDAP                           | -     | X      |
    +--------------------------------+-------+--------+
    | LDAP_ADMIN_DN is required      |       | X      |
    +--------------------------------+-------+--------+
    | LDAP_ADMIN_PASSWD is required  |       | X      |
    +--------------------------------+-------+--------+
    | . . . Connection refused       |       | X      |
    +--------------------------------+-------+--------+
    ---

    In this example, the quay-registry container failed to deploy because improper LDAP credentials were provided.
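
    In this case, one way to resolve the failure is to supply the missing LDAP fields in the config.yaml file before redeploying the registry. The distinguished name and password shown here are placeholders for your own LDAP server details:

    # ...
    AUTHENTICATION_TYPE: LDAP
    LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization>,dc=<example>,dc=<org>
    LDAP_ADMIN_PASSWD: <password>
    # ...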

Red Hat Quay on OpenShift Container Platform configuration overview

When deploying Project Quay using the Operator on OpenShift Container Platform, configuration is managed declaratively through the QuayRegistry custom resource (CR). This model allows cluster administrators to define the desired state of the Project Quay deployment, including which components are enabled, storage backends, SSL/TLS configuration, and other core features.

After deploying Red Hat Quay on OpenShift Container Platform with the Operator, administrators can further customize their registry by updating the config.yaml file and referencing it in a Kubernetes secret. This configuration bundle is linked to the QuayRegistry CR through the configBundleSecret field.

The Operator reconciles the state defined in the QuayRegistry CR and its associated configuration, automatically deploying or updating registry components as needed.

This guide covers the basic concepts behind the QuayRegistry CR and modifying your config.yaml file on Red Hat Quay on OpenShift Container Platform deployments. More advanced topics, such as using unmanaged components within the QuayRegistry CR, can be found in Deploying Project Quay Operator on OpenShift Container Platform.

Understanding the QuayRegistry CR

By default, the QuayRegistry CR contains the following key fields:

  • configBundleSecret: The name of a Kubernetes Secret containing the config.yaml file which defines additional configuration parameters.

  • name: The name of your Project Quay registry.

  • namespace: The namespace, or project, in which the registry was created.

  • spec.components: A list of components that the Operator automatically manages. Each spec.component field contains two fields:

    • kind: The name of the component

    • managed: A boolean that addresses whether the component lifecycle is handled by the Project Quay Operator. Setting managed: true for a component in the QuayRegistry CR means that the Operator manages the component.

All QuayRegistry components are automatically managed by default and are auto-filled upon reconciliation for visibility, unless specified otherwise. The following sections highlight the major QuayRegistry components and provide an example YAML file that shows the default settings.

Managed components

By default, the Operator handles all required configuration and installation needed for Project Quay’s managed components.

If the opinionated deployment performed by the Project Quay Operator is unsuitable for your environment, you can provide the Project Quay Operator with unmanaged resources, or overrides, as described in Using unmanaged components.

Table 1. QuayRegistry required fields
Field Type Description

quay

Boolean

Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (managed: false).

postgres

Boolean

Used for storing registry metadata. Currently, PostgreSQL version 13 is used.

clair

Boolean

Provides image vulnerability scanning.

redis

Boolean

Stores live builder logs and the locking mechanism that is required for garbage collection.

horizontalpodautoscaler

Boolean

Adjusts the number of quay pods depending on your memory and CPU consumption.

objectstorage

Boolean

Stores image layer blobs. When set to managed: true, utilizes the ObjectBucketClaim Kubernetes API which is provided by NooBaa or Red Hat OpenShift Data Foundation. Setting this field to managed: false requires you to provide your own object storage.

route

Boolean

Provides an external entrypoint to the Project Quay registry from outside of OpenShift Container Platform.

mirror

Boolean

Configures repository mirror workers to support optional repository mirroring.

monitoring

Boolean

Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting quay pods.

tls

Boolean

Configures whether SSL/TLS is automatically handled.

clairpostgres

Boolean

Configures a managed Clair database. This is a separate database from the PostgreSQL database that is used to deploy Project Quay.

The following example shows you the default configuration for the QuayRegistry custom resource provided by the Project Quay Operator. It is available on the OpenShift Container Platform web console.

Example QuayRegistry custom resource
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: <example_registry>
  namespace: <namespace>
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: quay
      managed: true
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    - kind: clairpostgres
      managed: true

Modifying the QuayRegistry CR after deployment

After you have installed the Project Quay Operator and created an initial deployment, you can modify the QuayRegistry custom resource (CR) to customize or reconfigure aspects of the Red Hat Quay environment.

Project Quay administrators might modify the QuayRegistry CR for the following reasons:

  • To change component management: Switch components from managed: true to managed: false in order to bring your own infrastructure. For example, you might set kind: objectstorage to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix.

  • To apply custom configuration: Update or replace the configBundleSecret to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, feature flags.

  • To enable or disable features: Toggle features like repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the spec.components list.

  • To scale the deployment: Adjust environment variables or replica counts for the Quay application.

  • To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.

Modifying the QuayRegistry CR by using the OpenShift Container Platform web console

The QuayRegistry CR can be modified by using the OpenShift Container Platform web console. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.

Prerequisites
  • You are logged into OpenShift Container Platform as a user with admin privileges.

  • You have installed the Project Quay Operator.

Procedure
  1. On the OpenShift Container Platform web console, click Operators → Installed Operators.

  2. Click Red Hat Quay.

  3. Click Quay Registry.

  4. Click the name of your Project Quay registry, for example, example-registry.

  5. Click YAML.

  6. Adjust the managed field of the desired component to either true or false.

  7. Click Save.

    Note

    Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.

Modifying the QuayRegistry CR by using the CLI

The QuayRegistry CR can be modified by using the CLI. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.

Prerequisites
  • You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.

Procedure
  1. Edit the QuayRegistry CR by entering the following command:

    $ oc edit quayregistry <registry_name> -n <namespace>
  2. Make the desired changes to the QuayRegistry CR.

    Note

    Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.

  3. Save the changes.

Understanding the configBundleSecret

The spec.configBundleSecret field is an optional reference to the name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair, where the value is a Project Quay configuration file.

The configBundleSecret stores the config.yaml file. Project Quay administrators can define the following settings through the config.yaml file:

  • Authentication backends (for example, OIDC, LDAP)

  • External TLS termination settings

  • Repository creation policies

  • Feature flags

  • Notification settings

Project Quay administrators might update this secret for the following reasons:

  • Enable a new authentication method

  • Add custom SSL/TLS certificates

  • Enable features

  • Modify security scanning settings

If this field is omitted, the Project Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay application pods.
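
For reference, the configBundleSecret is an ordinary Kubernetes Secret whose data contains a config.yaml key. The following is a minimal sketch with placeholder names and a placeholder base64-encoded value, not a manifest generated by the Operator:

Example configBundleSecret manifest
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: <namespace>
data:
  config.yaml: <base64_encoded_config_yaml>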

How the QuayRegistry CR is configured determines which fields must be included in the configBundleSecret’s config.yaml file for Red Hat Quay on OpenShift Container Platform. The following example shows you a default config.yaml file when all components are managed by the Operator. Note that this example looks different depending on whether components are managed or unmanaged (managed: false).

Example YAML with all components managed by the Operator
ALLOW_PULLS_WITHOUT_STRICT_LOGGING: false
AUTHENTICATION_TYPE: Database
DEFAULT_TAG_EXPIRATION: 2w
ENTERPRISE_LOGO_URL: /static/img/RH_Logo_Quay_Black_UX-horizontal.svg
FEATURE_BUILD_SUPPORT: false
FEATURE_DIRECT_LOGIN: true
FEATURE_MAILING: false
REGISTRY_TITLE: Red Hat Quay
REGISTRY_TITLE_SHORT: Red Hat Quay
SETUP_COMPLETE: true
TAG_EXPIRATION_OPTIONS:
- 2w
TEAM_RESYNC_STALE_TIME: 60m
TESTING: false

In some cases, you might opt to manage certain components yourself, for example, object storage. In that scenario, you would modify the QuayRegistry CR as follows:

Unmanaged objectstorage component
# ...
    - kind: objectstorage
      managed: false
# ...

If you are managing your own components, your deployment must be configured to include the necessary information or resources for that component. For example, if the objectstorage component is set to managed: false, you would include the relevant information depending on your storage provider inside of the config.yaml file. The following example shows you a distributed storage configuration using Google Cloud Storage:

Required information when objectstorage is unmanaged
# ...
DISTRIBUTED_STORAGE_CONFIG:
    default:
        - GoogleCloudStorage
        - access_key: <access_key>
          bucket_name: <bucket_name>
          secret_key: <secret_key>
          storage_path: /datastorage/registry
# ...

Similarly, if you manage the horizontalpodautoscaler component yourself (managed: false), you must create an accompanying HorizontalPodAutoscaler custom resource, as shown in the sketch after this paragraph.
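
The following is a minimal sketch of such a HorizontalPodAutoscaler custom resource. The Deployment name and scaling thresholds are illustrative assumptions, not values mandated by Project Quay; adjust them to your environment:

Example HorizontalPodAutoscaler custom resource
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: <example_registry>-quay-app
  namespace: <namespace>
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <example_registry>-quay-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90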

Modifying the configuration file by using the OpenShift Container Platform web console

Use the following procedure to modify the config.yaml file that is stored by the configBundleSecret by using the OpenShift Container Platform web console.

Prerequisites
  • You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.

Procedure
  1. On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.

  2. Click Quay Registry.

  3. Click the name of your Project Quay registry, for example, example-registry.

  4. On the QuayRegistry details page, click the name of your Config Bundle Secret, for example, example-registry-config-bundle.

  5. Click Actions → Edit Secret.

  6. In the Value box, add the desired key/value pair. For example, to add a superuser to your Red Hat Quay on OpenShift Container Platform deployment, add the following reference:

    SUPER_USERS:
    - quayadmin
  7. Click Save.

Verification
  1. Verify that the changes have been accepted:

    1. On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.

    2. Click Quay Registry.

    3. Click the name of your Project Quay registry, for example, example-registry.

    4. Click Events. If successful, the following message is displayed:

      All objects created/updated successfully
Note
You must base64 encode any updated config.yaml before placing it in the Secret. Ensure the Secret name matches the value specified in spec.configBundleSecret. Once the Secret is updated, the Operator detects the change and automatically rolls out updates to the Red Hat Quay pods.
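
For example, with GNU coreutils, the updated config.yaml file can be base64 encoded without line wrapping by entering the following command:

$ base64 -w 0 config.yaml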

For detailed steps, see "Updating configuration secrets through the Project Quay UI."

Modifying the configuration file by using the CLI

You can modify the config.yaml file that is stored by the configBundleSecret by downloading the existing configuration using the CLI. After making changes, you can re-upload the configBundleSecret resource to make changes to the Project Quay registry.

Note

Modifying the config.yaml file that is stored by the configBundleSecret resource is a multi-step procedure that requires base64 decoding the existing configuration file and then uploading the changes. For most cases, using the OpenShift Container Platform web console to make changes to the config.yaml file is simpler.

Prerequisites
  • You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.

Procedure
  1. Describe the QuayRegistry resource by entering the following command:

    $ oc describe quayregistry -n <quay_namespace>
    # ...
      Config Bundle Secret:  example-registry-config-bundle-v123x
    # ...
  2. Obtain the secret data by entering the following command:

    $ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'
    Example output
    {
        "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo="
    }
  3. Decode the data into a YAML file in the current directory by redirecting the output to config.yaml. For example:

    $ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
  4. Make the desired changes to your config.yaml file, and then save the file as config.yaml.

  5. Create a new configBundleSecret YAML by entering the following command.

    $ touch <new_configBundleSecret_name>.yaml
  6. Create the new configBundleSecret resource, passing in the config.yaml file, by entering the following command:

    $ oc -n <namespace> create secret generic <secret_name> \
      --from-file=config.yaml=</path/to/config.yaml> \ (1)
      --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml
    1. Where </path/to/config.yaml> is the path to your base64 decoded config.yaml file.

  7. Create the configBundleSecret resource by entering the following command:

    $ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml
    Example output
    secret/config-bundle created
  8. Update the QuayRegistry YAML file to reference the new configBundleSecret object by entering the following command:

    $ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'
    Example output
    quayregistry.quay.redhat.com/example-registry patched
Verification
  1. Verify that the QuayRegistry CR has been updated with the new configBundleSecret:

    $ oc describe quayregistry -n <quay_namespace>
    Example output
    # ...
      Config Bundle Secret: <new_configBundleSecret_name>
    # ...

    After patching the registry, the Project Quay Operator automatically reconciles the changes.

New configuration fields with Project Quay 3.14

The following sections detail new configuration fields added in Project Quay 3.14.

Model card rendering configuration fields

The following configuration fields have been added to support model card rendering on the v2 UI.

Field Type Description

FEATURE_UI_MODELCARD

Boolean

Enables Model card image tab in UI. Defaults to true.

UI_MODELCARD_ARTIFACT_TYPE

String

Defines the model card artifact type.

UI_MODELCARD_ANNOTATION

Object

This optional field defines the manifest-level annotation of the model card stored in an OCI image.

UI_MODELCARD_LAYER_ANNOTATION

Object

This optional field defines the layer annotation of the model card stored in an OCI image.

Example model card YAML
FEATURE_UI_MODELCARD: true (1)
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel (2)
UI_MODELCARD_ANNOTATION: (3)
  org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION: (4)
  org.opencontainers.image.title: README.md
  1. Enables the Model Card image tab in the UI.

  2. Defines the model card artifact type. In this example, the artifact type is application/x-mlmodel.

  3. Optional. If an image does not have an artifactType defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matching UI_MODELCARD_LAYER_ANNOTATION.

  4. Optional. If an image has an artifactType defined and multiple layers, this field is used to locate the specific layer containing the model card.

The following configuration fields have been added to the original (v1) UI. You can use these fields to customize the footer of your on-prem v1 UI.

Note

These fields are currently unavailable on the Project Quay v2 UI.

Field Type Description

FOOTER_LINKS

Object

Enable customization of footer links in Project Quay’s UI for on-prem installations.

.TERMS_OF_SERVICE_URL

String

Custom terms of service for on-prem installations.

Example:
https://index.hr

.PRIVACY_POLICY_URL

String

Custom privacy policy for on-prem installations.

Example:
https://index.hr

.SECURITY_URL

String

Custom security page for on-prem installations.

Example:
https://index.hr

.ABOUT_URL

String

Custom about page for on-prem installations.

Example:
https://index.hr

Example footer links YAML
FOOTER_LINKS:
  "TERMS_OF_SERVICE_URL": "https://www.index.hr"
  "PRIVACY_POLICY_URL": "https://www.example.hr"
  "SECURITY_URL": "https://www.example.hr"
  "ABOUT_URL": "https://www.example.hr"

Required configuration fields

Project Quay requires a minimal set of configuration fields to operate correctly. These fields define essential aspects of your deployment, such as how the registry is accessed, where image content is stored, how metadata is persisted, and how background services such as logs are managed.

The required configuration fields fall into four main categories:

  • General required configuration fields. Core fields such as the authentication type, URL scheme, server hostname, database secret key, and secret key are covered in this section.

  • Database configuration fields. Project Quay requires a PostgreSQL relational database to store metadata about repositories, users, teams, and tags.

  • Object storage configuration fields. Object storage fields define the backend where container image blobs and manifests are stored. Your storage backend must be supported by Project Quay, such as Ceph/RadosGW, AWS S3 storage, Google Cloud Storage, Nutanix, and so on.

  • Redis configuration fields. Redis is used as a backend for data such as push logs, user notifications, and other operations.

General required configuration fields

The following table describes the required configuration fields for a Project Quay deployment:

Table 2. General required fields
Field Type Description

AUTHENTICATION_TYPE
(Required)

String

The authentication engine to use for credential authentication.

Values:
One of Database, LDAP, JWT, Keystone, OIDC

Default: Database

PREFERRED_URL_SCHEME
(Required)

String

The URL scheme to use when accessing Project Quay.

Values:
One of http, https

Default: http

SERVER_HOSTNAME
(Required)

String

The URL at which Project Quay is accessible, without the scheme.

Example:
quay-server.example.com

DATABASE_SECRET_KEY
(Required)

String

Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated.
This value is set automatically by the Project Quay Operator for Operator-based deployments. For standalone deployments, administrators can provide their own key using OpenSSL or a similar tool. Key length should not exceed 63 characters.

SECRET_KEY
(Required)

String

Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Project Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur.

SETUP_COMPLETE
(Required)

Boolean

This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true.

General required fields example
AUTHENTICATION_TYPE: Database
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: <quay-server.example.com>
SECRET_KEY: <secret_key_value>
DATABASE_SECRET_KEY: <database_secret_key_value>
SETUP_COMPLETE: true
# ...

Database configuration fields

This section describes the database configuration fields available for Project Quay deployments.

Database URI

With Project Quay, connection to the database is configured by using the required DB_URI field.

The following table describes the DB_URI configuration field:

Table 3. Database URI
Field Type Description

DB_URI
(Required)

String

The URI for accessing the database, including any credentials.

Example DB_URI field:

postgresql://quayuser:quaypass@quay-server.example.com:5432/quay

Database URI example
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
# ...

Database connection arguments

Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific.

Table 4. Database connection arguments
Field Type Description

DB_CONNECTION_ARGS

Object

Optional connection arguments for the database, such as timeouts and SSL/TLS.

.autorollback

Boolean

Whether to use auto-rollback connections.
Should always be true.

.threadlocals

Boolean

Whether to use thread-local connections.
Should always be true.

Database connection arguments example
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
# ...
SSL/TLS connection arguments

With SSL/TLS, configuration depends on the database you are deploying.

The sslmode option determines whether, or with what priority, a secure SSL/TLS TCP/IP connection is negotiated with the server. There are six modes:

Table 5. sslmode options
Mode Description

sslmode

Determines whether, or with what priority, a secure SSL/TLS or TCP/IP connection is negotiated with the server.

disable

Your configuration only tries non-SSL/TLS connections.

allow

Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection.

prefer
(Default)

Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection.

require

Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified.

verify-ca

Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA).

verify-full

Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches the hostname in the certificate.

For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.

PostgreSQL SSL/TLS configuration
# ...
DB_CONNECTION_ARGS:
  sslmode: <value>
  sslrootcert: path/to/.postgresql/root.crt
# ...
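
For example, to require full certificate verification against a trusted root CA, sslmode can be set to verify-full. The certificate path shown is a placeholder:

# ...
DB_CONNECTION_ARGS:
  sslmode: verify-full
  sslrootcert: /path/to/root.crt
# ...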

Storage object configuration fields

Storage fields define the backend where container image blobs and manifests are stored. The following storage providers are supported by Project Quay:

  • Amazon Web Services (AWS) S3

  • AWS STS S3 (Security Token Service)

  • AWS CloudFront (CloudFront S3Storage)

  • Google Cloud Storage

  • Microsoft Azure Blob Storage

  • Swift Storage

  • Nutanix Object Storage

  • IBM Cloud Object Storage

  • NetApp ONTAP S3 Object Storage

  • Hitachi Content Platform (HCP) Object Storage

Note

Many of the supported storage providers use the RadosGWStorage driver due to their S3-compatible APIs.

Storage configuration fields

The following table describes the storage configuration fields for Project Quay. These fields are required when configuring backend storage.

Table 6. Storage configuration fields
Field Type Description

DISTRIBUTED_STORAGE_CONFIG
(Required)

Object

Configuration for storage engine(s) to use in Project Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters.

Default: []

DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS
(Required)

Array of string

The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) whose images should be fully replicated, by default, to all other storage engines.

DISTRIBUTED_STORAGE_PREFERENCE
(Required)

Array of string

The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. A preferred engine means it is first checked for pulling and images are pushed to it.

Default: false

MAXIMUM_LAYER_SIZE
(Optional)

String

Maximum allowed size of an image layer.

Pattern: ^[0-9]+(G|M)$

Example: 100G

Default: 20G

Storage configuration example
DISTRIBUTED_STORAGE_CONFIG:
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - default
MAXIMUM_LAYER_SIZE: 100G

Local storage

The following YAML shows an example configuration using local storage.

Important

Only use local storage when deploying a registry for proof of concept purposes. It is not intended for production purposes. When using local storage, you must map the registry to a local directory to the datastorage path in the container when starting the registry. For more information, see Proof of Concept - Deploying Project Quay

Local storage example
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - default

Red Hat OpenShift Data Foundation

The following YAML shows a sample configuration using an Red Hat OpenShift Data Foundation:

DISTRIBUTED_STORAGE_CONFIG:
  rhocsStorage:
    - RHOCSStorage
    - access_key: <access_key_here>
      secret_key: <secret_key_here>
      bucket_name: <bucket_name>
      hostname: <hostname>
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
      maximum_chunk_size_mb: 100 (1)
      server_side_assembly: true (2)
  1. Defines the maximum chunk size, in MB, for the final copy. Has no effect if server_side_assembly is set to false.

  2. Optional. Whether Project Quay should try and use server side assembly and the final chunked copy instead of client assembly. Defaults to true.

Ceph Object Gateway (RadosGW) storage example

Project Quay supports using Ceph Object Gateway (RadosGW) as an object storage backend. RadosGW is a component of Red Hat Ceph Storage, a storage platform engineered for private cloud architectures. Red Hat Ceph Storage provides an S3-compatible REST API for interacting with Ceph.

Note

RadosGW is an on-premise S3-compatible storage solution. It implements the S3 API and requires the same authentication fields, such as access_key, secret_key, and bucket_name. For more information about Ceph Object Gateway and the S3 API, see Ceph Object Gateway.

The following YAML shows an example configuration using RadosGW.

RadosGW with general s3 access example
DISTRIBUTED_STORAGE_CONFIG:
  radosGWStorage: (1)
    - RadosGWStorage
    - access_key: <access_key_here>
      bucket_name: <bucket_name_here>
      hostname: <hostname_here>
      is_secure: true
      port: '443'
      secret_key: <secret_key_here>
      storage_path: /datastorage/registry
      maximum_chunk_size_mb: 100 (2)
      server_side_assembly: true (3)
  1. Used for general s3 access. Note that general s3 access is not strictly limited to Amazon Web Services (AWS) s3, and can be used with RadosGW or other storage services. For an example of general s3 access using the AWS S3 driver, see "AWS S3 storage".

  2. Optional. Defines the maximum chunk size in MB for the final copy. Has no effect if server_side_assembly is set to false.

  3. Optional. Whether Project Quay should try and use server side assembly and the final chunked copy instead of client assembly. Defaults to true.

Supported AWS storage backends

Project Quay supports multiple Amazon Web Services (AWS) storage backends:

  • S3 storage: Standard support for AWS S3 buckets that uses AWS’s native object storage service.

  • STS S3 storage: Support for AWS Security Token Service (STS) to assume IAM roles, allowing for more secure S3 operations.

  • CloudFront S3 storage: Integrates with AWS CloudFront to enable high-availability distribution of content while still using AWS S3 as the origin.

The following sections provide example YAMLs and additional information about each AWS storage backend.

Amazon Web Services S3 storage

Project Quay supports using AWS S3 as an object storage backend. AWS S3 is an object storage service designed for data availability, scalability, security, and performance. The following YAML shows an example configuration using AWS S3.

AWS S3 example
# ...
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage (1)
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      s3_region: <region> (2)
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - default
# ...
  1. The S3Storage storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access".

  2. Optional. The Amazon Web Services region. Defaults to us-east-1.

Amazon Web Services STS S3 storage

AWS Security Token Service (STS) provides temporary, limited-privilege credentials for accessing AWS resources, improving security by avoiding the need to store long-term access keys. This is useful in environments such as OpenShift Container Platform where credentials can be rotated or managed through IAM roles.

The following YAML shows an example configuration for using AWS STS with Red Hat Quay on OpenShift Container Platform configurations.

AWS STS S3 storage example
# ...
DISTRIBUTED_STORAGE_CONFIG:
   default:
    - STSS3Storage
    - sts_role_arn: <role_arn> (1)
      s3_bucket: <s3_bucket_name>
      storage_path: <storage_path>
      sts_user_access_key: <s3_user_access_key> (2)
      sts_user_secret_key: <s3_user_secret_key> (3)
      s3_region: <region> (4)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - default
# ...
  1. The unique Amazon Resource Name (ARN).

  2. The generated AWS S3 user access key.

  3. The generated AWS S3 user secret key.

  4. Optional. The Amazon Web Services region. Defaults to us-east-1.

AWS CloudFront storage

AWS CloudFront is a content delivery network (CDN) service that caches and distributes content closer to users for improved performance and lower latency. Project Quay supports CloudFront through the CloudFrontedS3Storage driver, which enables secure, signed access to S3 buckets via CloudFront distributions.

Use the following example when configuring AWS CloudFront for your Project Quay deployment.

Note
  • When configuring AWS Cloudfront storage, the following conditions must be met for proper use with Project Quay:

    • You must set an Origin path that is consistent with Project Quay’s storage path as defined in your config.yaml file. Failure to meet this require results in a 403 error when pulling an image. For more information, see Origin path.

    • You must configure a Bucket policy and a Cross-origin resource sharing (CORS) policy.

Cloudfront S3 example YAML
DISTRIBUTED_STORAGE_CONFIG:
    default:
      - CloudFrontedS3Storage
      - cloudfront_distribution_domain: <CLOUDFRONT_DISTRIBUTION_DOMAIN>
        cloudfront_key_id: <CLOUDFRONT_KEY_ID>
        cloudfront_privatekey_filename: <CLOUDFRONT_PRIVATE_KEY_FILENAME>
        host: <S3_HOST>
        s3_access_key: <S3_ACCESS_KEY>
        s3_bucket: <S3_BUCKET_NAME>
        s3_secret_key: <S3_SECRET_KEY>
        storage_path: <STORAGE_PATH>
        s3_region: <S3_REGION>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - default
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
Bucket policy example
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>" (1) (2)
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<S3_BUCKET_NAME>/*" (3)
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>" (1) (2)
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<S3_BUCKET_NAME>"
        }
    ]
}
  1. The identifier, or account ID, of the AWS account that owns the CloudFront OAI and S3 bucket.

  2. The CloudFront Origin Access Identity (OAI) that accesses the S3 bucket.

  3. Specifies that CloudFront can access all objects (/*) inside of the S3 bucket.
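
The CORS policy mentioned in the note above is configured separately on the S3 bucket. The following is a minimal sketch that assumes the registry is served from <quay-server.example.com>; consult the AWS documentation for the exact rules that your deployment requires:

CORS policy example
[
    {
        "AllowedOrigins": [
            "https://<quay-server.example.com>"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedHeaders": [
            "*"
        ],
        "MaxAgeSeconds": 3000
    }
]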

Google Cloud Storage

Project Quay supports using Google Cloud Storage (GCS) as an object storage backend. When used with Project Quay, it provides a cloud-native solution for storing container images and artifacts.

The following YAML shows a sample configuration using Google Cloud Storage.

Google Cloud Storage example
DISTRIBUTED_STORAGE_CONFIG:
    googleCloudStorage:
        - GoogleCloudStorage
        - access_key: <access_key>
          bucket_name: <bucket_name>
          secret_key: <secret_key>
          storage_path: /datastorage/registry
          boto_timeout: 120 (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - googleCloudStorage
  1. Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds. Also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.

Microsoft Azure Blob Storage

Project Quay supports using Microsoft Azure Blob Storage as an object storage backend. Azure Blob Storage can be used to persist container images, metadata, and other artifacts in a secure and cloud-native manner.

The following YAML shows a sample configuration using Azure Storage.

Microsoft Azure Blob Storage example
DISTRIBUTED_STORAGE_CONFIG:
  azureStorage:
    - AzureStorage
    - azure_account_name: <azure_account_name>
      azure_container: <azure_container_name>
      storage_path: /datastorage/registry
      azure_account_key: <azure_account_key>
      sas_token: some/path/
      endpoint_url: https://[account-name].blob.core.usgovcloudapi.net (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - azureStorage
  1. The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Azure region.

    As of Project Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.

Swift object storage

Project Quay supports using Red Hat OpenStack Platform (RHOSP) Object Storage service, or Swift, as an object storage backend. Swift offers S3-like functionality with its own API and authentication mechanisms.

The following YAML shows a sample configuration using Swift storage.

Swift object storage example
DISTRIBUTED_STORAGE_CONFIG:
  swiftStorage:
    - SwiftStorage
    - swift_user: <swift_username>
      swift_password: <swift_password>
      swift_container: <swift_container>
      auth_url: https://example.org/swift/v1/quay
      auth_version: 3
      os_options:
        tenant_id: <osp_tenant_id>
        user_domain_name: <osp_domain_name>
      ca_cert_path: /conf/stack/swift.cert
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
    - swiftStorage

Nutanix Objects Storage

Project Quay supports Nutanix Objects Storage as an object storage backend. Nutanix Object Storage is suitable for organizations running private cloud infrastructure using Nutanix.

The following YAML shows a sample configuration using Nutanix Object Storage.

Nutanix Objects Storage example
DISTRIBUTED_STORAGE_CONFIG:
  nutanixStorage: # storage config name
    - RadosGWStorage # actual driver
    - access_key: <access_key>
      secret_key: <secret_key>
      bucket_name: <bucket_name>
      hostname: <hostname>
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: # must contain name of the storage config
    - nutanixStorage

IBM Cloud Object Storage

Project Quay supports IBM Cloud Object Storage as an object storage backend. IBM Cloud Object Storage is suitable for cloud-native applications requiring scalable and secure storage on IBM Cloud.

The following YAML shows a sample configuration using IBM Cloud Object Storage.

IBM Cloud Object Storage
DISTRIBUTED_STORAGE_CONFIG:
  default:
  - IBMCloudStorage # actual driver
  - access_key: <access_key> # parameters
    secret_key: <secret_key>
    bucket_name: <bucket_name>
    hostname: <hostname>
    is_secure: 'true'
    port: '443'
    storage_path: /datastorage/registry
    maximum_chunk_size_mb: 100 (1)
    minimum_chunk_size_mb: 5 (2)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- default
DISTRIBUTED_STORAGE_PREFERENCE:
- default
  1. Optional. Recommended to be set to 100.

  2. Optional. Defaults to 5. Do not adjust this field without consulting Red Hat Support, because it can have unintended consequences.

NetApp ONTAP S3 object storage

Project Quay supports using NetApp ONTAP S3 as an object storage backend.

The following YAML shows a sample configuration using NetApp ONTAP S3.

Netapp ONTAP S3 example
DISTRIBUTED_STORAGE_CONFIG:
  local_us:
  - RadosGWStorage
  - access_key: <access_key>
    bucket_name: <bucket_name>
    hostname: <host_url_address>
    is_secure: true
    port: <port>
    secret_key: <secret_key>
    storage_path: /datastorage/registry
    signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- local_us
DISTRIBUTED_STORAGE_PREFERENCE:
- local_us

Hitachi Content Platform object storage

Project Quay supports using Hitachi Content Platform (HCP) as an object storage backend.

The following YAML shows a sample configuration using HCP for object storage.

HCP storage configuration example
DISTRIBUTED_STORAGE_CONFIG:
  hcp_us:
  - RadosGWStorage
  - access_key: <access_key>
    bucket_name: <bucket_name>
    hostname: <hitachi_hostname_example>
    is_secure: true
    secret_key: <secret_key>
    storage_path: /datastorage/registry
    signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- hcp_us
DISTRIBUTED_STORAGE_PREFERENCE:
- hcp_us

Redis configuration fields

Redis is used by Project Quay to support backend tasks and services, such as build triggers and notifications. There are two configuration types related to Redis: build logs and user events. The following sections detail the configuration fields available for each type.

Build logs

Build logs are generated during the image build process and provide insights for debugging and auditing. Project Quay uses Redis to temporarily store these logs before they are accessed through the user interface or API.

The following build logs configuration fields are available for Redis deployments.

Table 7. Build logs configuration fields
Field Type Description

BUILDLOGS_REDIS
(Required)

Object

Redis connection details for build logs caching.

.host
(Required)

String

The hostname at which Redis is accessible.
Example:
quay-server.example.com

.port
(Required)

Number

The port at which Redis is accessible.
Example:
6379

.password

String

The password to connect to the Redis instance.
Example:
strongpassword

.ssl
(Optional)

Boolean

Whether to enable TLS communication between Redis and Quay. Defaults to false.

Build logs configuration example
# ...
BUILDLOGS_REDIS:
  host: <quay-server.example.com>
  password: <example_password>
  port: 6379 (1)
  ssl: true (1)
# ...
  1. If your deployment uses Azure Cache for Redis and ssl is set to true, the port defaults to 6380.

User events

User events track activity across Project Quay, such as repository pushes, tag creations, deletions, and permission changes. These events are recorded in Redis as part of the activity stream and can be accessed through the API or web interface.

The following user event fields are available for Redis deployments.

Table 8. User events config
Field Type Description

USER_EVENTS_REDIS
(Required)

Object

Redis connection details for user event handling.

.host
(Required)

String

The hostname at which Redis is accessible.
Example:
quay-server.example.com

.port
(Required)

Number

The port at which Redis is accessible.
Example:
6379

.password

String

The password to connect to the Redis instance.
Example:
strongpassword

.ssl
(Optional)

Boolean

Whether to enable TLS communication between Redis and Quay. Defaults to false.

.ssl_keyfile
(Optional)

String

The name of the key database file, which houses the client certificate to be used.
Example:
ssl_keyfile: /path/to/server/privatekey.pem

.ssl_certfile
(Optional)

String

Used for specifying the file path of the SSL certificate.
Example:
ssl_certfile: /path/to/server/certificate.pem

.ssl_cert_reqs
(Optional)

String

Used to specify the level of certificate validation to be performed during the SSL/TLS handshake.
Example:
ssl_cert_reqs: CERT_REQUIRED

.ssl_ca_certs
(Optional)

String

Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates.
Example:
ssl_ca_certs: /path/to/ca_certs.pem

.ssl_ca_data
(Optional)

String

Used to specify a string containing the trusted CA certificates in PEM format.
Example:
ssl_ca_data: <certificate>

.ssl_check_hostname
(Optional)

Boolean

Used when setting up an SSL/TLS connection to a server. It specifies whether the client should check that the hostname in the server’s SSL/TLS certificate matches the hostname of the server it is connecting to.
Example:
ssl_check_hostname: true

Redis user events example
# ...
USER_EVENTS_REDIS:
  host: <quay-redis.example.com>
  port: 6379
  password: <example_password>
  ssl: true
  ssl_keyfile: /etc/ssl/private/redis-client.key
  ssl_certfile: /etc/ssl/certs/redis-client.crt
  ssl_cert_reqs: CERT_REQUIRED
  ssl_ca_certs: /etc/ssl/certs/ca-bundle.crt
  ssl_check_hostname: true
# ...

Automation configuration options

Project Quay supports various mechanisms for automating deployment and configuration, which allows the integration of Project Quay into GitOps and CI/CD pipelines. By defining these options and leveraging the API, Project Quay can be initialized and managed without using the UI.

Note

Because the Project Quay Operator manages the config.yaml file through the configBundleSecret custom resource (CR), pre-configuring Red Hat Quay on OpenShift Container Platform requires an administrator to manually create a valid config.yaml file with the desired configuration. This file must then be bundled into a new Kubernetes Secret and used to replace the default configBundleSecret CR referenced by the QuayRegistry CR. This allows Red Hat Quay on OpenShift Container Platform to be deployed in a fully automated manner, bypassing the web-based configuration UI. For more information, see Modifying the QuayRegistry CR after deployment.
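
To illustrate this workflow, the following is a minimal sketch of a Kubernetes Secret that bundles a config.yaml file, together with a QuayRegistry CR that references it. The namespace quay-enterprise, the Secret name custom-config-bundle, and the registry name example-registry are assumed examples; substitute the values used by your cluster.

Pre-configured configBundleSecret example YAML
apiVersion: v1
kind: Secret
metadata:
  name: custom-config-bundle # assumed Secret name
  namespace: quay-enterprise # assumed namespace
stringData:
  config.yaml: |
    FEATURE_USER_INITIALIZE: true
    SUPER_USERS:
    - quayadmin
---
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: custom-config-bundle # must match the Secret name above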

For on-premise Project Quay deployments, pre-configuration is done by manually creating a valid config.yaml file and then deploying the registry.

Automation options are ideal for environments that require declarative Project Quay deployments, such as disconnected or air-gapped clusters.

Pre-configuration options for automation

Project Quay provides configuration options that enable registry administrators to automate early setup tasks and API accessibility. These options are useful for new deployments and controlling how API calls can be made. The following options support automation and administrative control.

Table 9. Automation configuration fields
Field Type Description

FEATURE_USER_INITIALIZE

Boolean

Enables initial user bootstrapping in a newly deployed Project Quay registry. When this field is set to true in the config.yaml file prior to deployment, it allows an administrator to create the first user by calling the api/v1/user/initialize endpoint.

Note

Unlike all other registry API calls that require an OAuth 2 access token generated by an OAuth application in an existing organization, the api/v1/user/initialize endpoint does not require authentication.

BROWSER_API_CALLS_XHR_ONLY

Boolean

Controls whether the registry API accepts only XHR calls made from browsers. To allow API access from outside a browser, for example from scripts or CI pipelines, administrators must set this field to false. If set to true, API calls that do not originate from a browser XHR request are blocked, preventing both administrators and users from interacting with the API programmatically.

SUPER_USERS

Array of String

Defines a list of administrative users, or superusers, who have full privileges and unrestricted access to the registry. Project Quay administrators should configure SUPER_USERS in the config.yaml before deployment to ensure immediate administrative access without requiring a redeploy. Setting this field post-deployment requires restarting the registry to take effect.

FEATURE_USER_CREATION

Boolean

Restricts the creation of new users to superusers when this field is set to false. This setting is useful in controlled environments where user access must be provisioned manually by administrators.

The following YAML shows you the suggested configuration for automation:

Suggested configuration for automation
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
# ...

Component and feature configuration fields

The Component and Feature Configuration section describes the configurable fields available for fine-tuning Project Quay across its various subsystems. These fields allow administrators to customize registry behavior, enable or disable specific features, and integrate with external services and infrastructure. While not required for a basic deployment, these options support advanced use cases related to security, automation, scalability, compliance, and performance.

Core configuration overview

Use these core fields to configure the registry’s basic behavior, including hostname, protocol, authentication settings, and more.

Registry branding and identity fields

The following configuration fields allow you to modify the branding, identity, and contact information displayed in your Project Quay deployment. With these fields, you can customize how the registry appears to users by specifying titles, headers, footers, and organizational contact links shown throughout the UI.

Note

Some of the following fields are not available on the Project Quay v2 UI.

Table 10. Registry branding and identity configuration fields
Field Type Description

REGISTRY_TITLE

String

If specified, the long-form title for the registry. Displayed in the frontend of your Project Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters.
Default:
Red Hat Quay

REGISTRY_TITLE_SHORT

String

If specified, the short-form title for the registry. Title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization’s Tutorial page.
Default:
Red Hat Quay

CONTACT_INFO

Array of String

If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly to it.

[0]

String

Adds a link to send an e-mail.

Pattern:
^mailto:(.)+$
Example:
mailto:support@quay.io

[1]

String

Adds a link to visit an IRC chat room.

Pattern:
^irc://(.)+$
Example:
irc://chat.freenode.net:6665/quay

[2]

String

Adds a link to call a phone number.

Pattern:
^tel:(.)+$
Example:
tel:+1-888-930-3475

[3]

String

Adds a link to a defined URL.

Pattern:
^http(s)?://(.)+$
Example:
https://twitter.com/quayio

Table 11. Branding configuration fields
Field Type Description

BRANDING

Object

Custom branding for logos and URLs in the Project Quay UI.

.logo
(Required)

String

Main logo image URL.

The header logo defaults to 205x30 PX. The form logo on the Project Quay sign in screen of the web UI defaults to 356.5x39.7 PX.
Example:
/static/img/quay-horizontal-color.svg

.footer_img

String

Logo for UI footer. Defaults to 144x34 PX.

Example:
/static/img/RedHat.svg

.footer_url

String

Link for footer image.

Example:
https://redhat.com

Table 12. Footer links configuration fields
Field Type Description

FOOTER_LINKS

Object

Enable customization of footer links in Project Quay’s UI for on-prem installations.

.TERMS_OF_SERVICE_URL

String

Custom terms of service for on-prem installations.

Example:
https://index.hr

.PRIVACY_POLICY_URL

String

Custom privacy policy for on-prem installations.

Example:
https://example.hr

.SECURITY_URL

String

Custom security page for on-prem installations.

Example:
https://example.hr

.ABOUT_URL

String

Custom about page for on-prem installations.

Example:
https://example.hr

Registry branding and identity example YAML
# ...
REGISTRY_TITLE: "Example Container Registry"
REGISTRY_TITLE_SHORT: "Example Quay"
CONTACT_INFO:
  - mailto:support@example.io
  - irc://chat.freenode.net:6665/examplequay
  - tel:+1-800-555-1234
  - https://support.example.io
BRANDING:
    logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
    footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
    footer_url: https://opensourceworld.org/
FOOTER_LINKS:
  "TERMS_OF_SERVICE_URL": "https://www.index.hr"
  "PRIVACY_POLICY_URL": "https://www.example.hr"
  "SECURITY_URL": "https://www.example.hr"
  "ABOUT_URL": "https://www.example.hr"
# ...

SSL/TLS configuration fields

This section describes the available configuration fields for enabling and managing SSL/TLS encryption in your Project Quay deployment.

Table 13. SSL configuration fields
Field Type Description

PREFERRED_URL_SCHEME

String

One of http or https. Note that users only set their PREFERRED_URL_SCHEME to http when there is no TLS encryption in the communication path from the client to Quay.
Users must set their PREFERRED_URL_SCHEME to https when using a TLS-terminating load balancer, a reverse proxy (for example, Nginx), or when using Quay with custom SSL certificates directly. In most cases, the PREFERRED_URL_SCHEME should be https.
Default: http

SERVER_HOSTNAME
(Required)

String

The URL at which Project Quay is accessible, without the scheme

Example:
quay-server.example.com

SSL_CIPHERS

Array of String

If specified, the nginx-defined list of SSL ciphers to enable and disable.

Example:
[ECDHE-RSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-GCM-SHA384, DHE-RSA-AES128-GCM-SHA256, DHE-DSS-AES128-GCM-SHA256, kEDH+AESGCM, ECDHE-RSA-AES128-SHA256, ECDHE-ECDSA-AES128-SHA256, ECDHE-RSA-AES128-SHA, ECDHE-ECDSA-AES128-SHA, ECDHE-RSA-AES256-SHA384, ECDHE-ECDSA-AES256-SHA384, ECDHE-RSA-AES256-SHA, ECDHE-ECDSA-AES256-SHA, DHE-RSA-AES128-SHA256, DHE-RSA-AES128-SHA, DHE-DSS-AES128-SHA256, DHE-RSA-AES256-SHA256, DHE-DSS-AES256-SHA, DHE-RSA-AES256-SHA, AES128-GCM-SHA256, AES256-GCM-SHA384, AES128-SHA256, AES256-SHA256, AES128-SHA, AES256-SHA, AES, !3DES, !aNULL, !eNULL, !EXPORT, !DES, !RC4, !MD5, !PSK, !aECDH, !EDH-DSS-DES-CBC3-SHA, !EDH-RSA-DES-CBC3-SHA, !KRB5-DES-CBC3-SHA]

SSL_PROTOCOLS

Array of String

If specified, nginx is configured to enable the SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Project Quay startup.

Example:
['TLSv1','TLSv1.1','TLSv1.2','TLSv1.3']

SESSION_COOKIE_SECURE

Boolean

Whether the secure property should be set on session cookies

Default:
False

Recommendation:
Set to True for all installations using SSL

EXTERNAL_TLS_TERMINATION

Boolean

Set to true if TLS is supported, but terminated at a layer before Quay. Set to false when Quay is running with its own SSL certificates and receiving TLS traffic directly.

SSL configuration example YAML
# ...
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: quay-server.example.com
SSL_CIPHERS:
  - ECDHE-RSA-AES128-GCM-SHA256
SSL_PROTOCOLS:
  - TLSv1.3
SESSION_COOKIE_SECURE: true
EXTERNAL_TLS_TERMINATION: true
# ...
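
For contrast with the example above, the following sketch shows a deployment in which Quay serves its own SSL/TLS certificates and receives TLS traffic directly, with no external termination. The hostname is an assumed example.

Direct TLS termination example YAML
# ...
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: quay-server.example.com
SESSION_COOKIE_SECURE: true
EXTERNAL_TLS_TERMINATION: false # Quay terminates TLS itself
# ...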

IPv6 configuration field

You can use the FEATURE_LISTEN_IP_VERSION configuration field to specify which IP protocol family Project Quay should listen on: IPv4, IPv6, or both (dual-stack). This field is critical in environments where the registry must operate on IPv6-only or dual-stack networks.

Table 14. IPv6 configuration field
Field Type Description

FEATURE_LISTEN_IP_VERSION

String

Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Project Quay fails to start.
Default: IPv4
Additional configurations: IPv6, dual-stack

IPv6 example YAML
# ...
FEATURE_LISTEN_IP_VERSION: dual-stack
# ...

Logging and debugging variables

The following variables control how Project Quay logs events, exposes debugging information, and interacts with system health checks. These settings are useful for troubleshooting and monitoring your registry.

Table 15. Logging and debug configuration variables
Variable Type Description

DEBUGLOG

Boolean

Whether to enable or disable debug logs.

USERS_DEBUG

Integer. Either 0 or 1.

Used to debug LDAP operations in clear text, including passwords. Must be used with DEBUGLOG=TRUE.

Important

Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Project Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution.

ALLOW_PULLS_WITHOUT_STRICT_LOGGING

String

If true, pulls will still succeed even if the pull audit log entry cannot be written. This is useful if the database is in a read-only state and pulls should continue during that time.

Default: False

ENABLE_HEALTH_DEBUG_SECRET

String

If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser

HEALTH_CHECKER

String

The configured health check

Example: ('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'})

FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL

Boolean

Whether to allow retrieval of aggregated log counts

Default: True

Logging and debugging example YAML
#...
DEBUGLOG: true
USERS_DEBUG: 1
ALLOW_PULLS_WITHOUT_STRICT_LOGGING: "true"
ENABLE_HEALTH_DEBUG_SECRET: "<secret_value>"
HEALTH_CHECKER: "('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'})"
FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL: true
# ...

Registry state and system behavior configuration fields

The following configuration fields control the operational state of the Project Quay registry and how it interacts with external systems. These settings allow administrators to place the registry into a restricted read-only mode for maintenance purposes, and to enforce additional security by blocking specific hostnames from being targeted by webhooks.

Table 16. Registry state and system behavior configuration fields
Field Type Description

REGISTRY_STATE

String

The state of the registry

Values: normal or read-only

WEBHOOK_HOSTNAME_BLACKLIST

Array of String

The set of hostnames to disallow from webhooks when validating, beyond localhost

Registry state and system behavior example YAML
# ...
REGISTRY_STATE: normal
WEBHOOK_HOSTNAME_BLACKLIST:
  - "169.254.169.254"
  - "internal.example.com"
  - "127.0.0.2"
# ...

User Experience and Interface

These fields configure how users interact with the UI, including branding, pagination, browser behavior, and accessibility options like recaptcha. This also covers user-facing performance and display settings.

Web UI and user experience configuration fields

These configuration fields control the behavior and appearance of the Project Quay web interface and overall user experience. Options in this section allow administrators to customize login behavior, avatar display, user autocomplete, session handling, and catalog visibility.

Table 17. Web UI and UX configuration fields
Field Type Description

AVATAR_KIND

String

The types of avatars to display, either generated inline (local) or Gravatar (gravatar)

Values: local, gravatar

FRESH_LOGIN_TIMEOUT

String

The time after which a fresh login requires users to re-enter their password

Example: 5m

FEATURE_UI_V2

Boolean

When set, allows users to try the v2 beta UI environment.

Default: True

FEATURE_UI_V2_REPO_SETTINGS

Boolean

When set to True, enables repository settings in the Project Quay v2 UI.

Default: False

FEATURE_DIRECT_LOGIN

Boolean

Whether users can directly login to the UI

Default: True

FEATURE_PARTIAL_USER_AUTOCOMPLETE

Boolean

If set to true, autocompletion will apply to partial usernames.

Default: True

FEATURE_LIBRARY_SUPPORT

Boolean

Whether to allow for "namespace-less" repositories when pulling and pushing from Docker

Default: True

FEATURE_PERMANENT_SESSIONS

Boolean

Whether sessions are permanent

Default: True

FEATURE_PUBLIC_CATALOG

Boolean

If set to true, the _catalog endpoint returns public repositories. Otherwise, only private repositories can be returned.

Default: False

Web UI and UX example YAML
# ...
AVATAR_KIND: local
FRESH_LOGIN_TIMEOUT: 5m
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: false
FEATURE_DIRECT_LOGIN: true
FEATURE_PARTIAL_USER_AUTOCOMPLETE: true
FEATURE_LIBRARY_SUPPORT: true
FEATURE_PERMANENT_SESSIONS: true
FEATURE_PUBLIC_CATALOG: false
# ...

v2 user interface configuration

With FEATURE_UI_V2 enabled, you can toggle between the current version of the user interface and the new version of the user interface.

Important
  • This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags.

  • When running Project Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI.

  • There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Project Quay uses the standard definition of megabyte (MB) to report image manifest sizes.

Session timeout configuration field

The following configuration field relies on the Flask API configuration field of the same name.

Important

Altering session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If the timeout is set too low, it might interrupt your workflow.

Table 18. Session logout configuration field
Field Type Description

PERMANENT_SESSION_LIFETIME

Integer

A timedelta which is used to set the expiration date of a permanent session. The default is 31 days, which makes a permanent session survive for roughly one month.

Default: 2678400

Session timeout example YAML
# ...
PERMANENT_SESSION_LIFETIME: 3000 # 3000 seconds (50 minutes)
# ...

User and Access Management

Use these fields to configure how users are created, authenticated, and managed. This includes settings for superusers, account recovery, app-specific tokens, login behavior, and external identity providers like LDAP, OAuth, and OIDC.

User configuration fields

The user configuration fields define how user accounts behave in your Project Quay deployment. These fields enable control over user creation, access levels, metadata tracking, recovery options, and namespace management. You can also enforce restrictions, such as invite-only creation or superuser privileges, to match your organization’s governance and security policies.

Table 19. User configuration fields
Field Type Description

FEATURE_SUPER_USERS

Boolean

Whether superusers are supported

Default: true

FEATURE_USER_CREATION

Boolean

Whether users can be created (by non-superusers)

Default: true

FEATURE_USER_LAST_ACCESSED

Boolean

Whether to record the last time a user was accessed

Default: true

FEATURE_USER_LOG_ACCESS

Boolean

If set to true, users will have access to audit logs for their namespace

Default: false

FEATURE_USER_METADATA

Boolean

Whether to collect and support user metadata

Default: false

FEATURE_USERNAME_CONFIRMATION

Boolean

If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP.
Default: true

FEATURE_USER_RENAME

Boolean

If set to true, users can rename their own namespace

Default: false

FEATURE_INVITE_ONLY_USER_CREATION

Boolean

Whether users being created must be invited by another user

Default: false

FRESH_LOGIN_TIMEOUT

String

The time after which a fresh login requires users to re-enter their password

Example: 5m

USERFILES_LOCATION

String

ID of the storage engine in which to place user-uploaded files

Example: s3_us_east

USERFILES_PATH

String

Path under storage in which to place user-uploaded files

Example: userfiles

USER_RECOVERY_TOKEN_LIFETIME

String

The length of time a token for recovering a user account is valid

Pattern: ^[0-9]+(w|m|d|h|s)$
Default: 30m

FEATURE_SUPERUSERS_FULL_ACCESS

Boolean

Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for.

Default: False

FEATURE_SUPERUSERS_ORG_CREATION_ONLY

Boolean

Whether to only allow superusers to create organizations.

Default: False

FEATURE_RESTRICTED_USERS

Boolean

When set to True with RESTRICTED_USERS_WHITELIST:

  • All normal users and superusers are restricted from creating organizations or content in their own namespace unless they are allowlisted via RESTRICTED_USERS_WHITELIST.

  • Restricted users retain their normal permissions within organizations based on team memberships.

Default: False

RESTRICTED_USERS_WHITELIST

String

When set with FEATURE_RESTRICTED_USERS: true, specific users are excluded from the FEATURE_RESTRICTED_USERS setting.

GLOBAL_READONLY_SUPER_USERS

String

When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the SUPER_USERS configuration field.

User example YAML
# ...
FEATURE_SUPER_USERS: true
FEATURE_USER_CREATION: true
FEATURE_INVITE_ONLY_USER_CREATION: false
FEATURE_USER_RENAME: true
FEATURE_SUPERUSERS_FULL_ACCESS: true
FEATURE_SUPERUSERS_ORG_CREATION_ONLY: false
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST: (1)
      - user1
GLOBAL_READONLY_SUPER_USERS:
      - quayadmin
FRESH_LOGIN_TIMEOUT: "5m"
USER_RECOVERY_TOKEN_LIFETIME: "30m"
USERFILES_LOCATION: "s3_us_east"
USERFILES_PATH: "userfiles"
# ...
  1. When the RESTRICTED_USERS_WHITELIST field is set, whitelisted users can create organizations and read and write content even if FEATURE_RESTRICTED_USERS is set to true. Other users, for example, user2, user3, and user4, are restricted from creating organizations, reading, or writing content.

Robot account configuration fields

The following configuration field allows for globally disallowing robot account creation and interaction.

Table 20. Robot account configuration fields
Field Type Description

ROBOTS_DISALLOW

Boolean

When set to true, robot accounts are prevented from all interactions, as well as from being created
Default: False

Robot account disallow example YAML
# ...
ROBOTS_DISALLOW: true
# ...

LDAP configuration fields

The following configuration fields allow administrators to integrate Project Quay with an LDAP-based authentication system. When AUTHENTICATION_TYPE is set to LDAP, Project Quay can authenticate users against an LDAP directory and support additional, optional features such as team synchronization, superuser access control, restricted user roles, and secure connection parameters.

This section provides YAML examples for the following LDAP scenarios:

  • Basic LDAP configuration

  • LDAP restricted user configuration

  • LDAP superuser configuration

Table 21. LDAP configuration
Field Type Description

AUTHENTICATION_TYPE
(Required)

String

Must be set to LDAP.

FEATURE_TEAM_SYNCING

Boolean

Whether to allow for team membership to be synced from a backing group in the authentication engine (OIDC, LDAP, or Keystone).

Default: true

FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP

Boolean

If enabled, non-superusers can set up team synchronization.

Default: false

LDAP_ADMIN_DN

String

The admin DN for LDAP authentication.

LDAP_ADMIN_PASSWD

String

The admin password for LDAP authentication.

LDAP_ALLOW_INSECURE_FALLBACK

Boolean

Whether or not to allow SSL insecure fallback for LDAP authentication.

LDAP_BASE_DN

Array of String

The base DN for LDAP authentication.

LDAP_EMAIL_ATTR

String

The email attribute for LDAP authentication.

LDAP_UID_ATTR

String

The uid attribute for LDAP authentication.

LDAP_URI

String

The LDAP URI.

LDAP_USER_FILTER

String

The user filter for LDAP authentication.

LDAP_USER_RDN

Array of String

The user RDN for LDAP authentication.

LDAP_SECONDARY_USER_RDNS

Array of String

Provides secondary user relative DNs if there are multiple organizational units where user objects are located.

TEAM_RESYNC_STALE_TIME

String

If team syncing is enabled for a team, how often to check its membership and resync if necessary.

Pattern:
^[0-9]+(w|m|d|h|s)$
Example:
2h
Default:
30m

LDAP_SUPERUSER_FILTER

String

Subset of the LDAP_USER_FILTER configuration field. When configured, allows Project Quay administrators to configure Lightweight Directory Access Protocol (LDAP) users as superusers when Project Quay uses LDAP as its authentication provider.

With this field, administrators can add or remove superusers without having to update the Project Quay configuration file and restart their deployment.

This field requires that your AUTHENTICATION_TYPE is set to LDAP.

LDAP_GLOBAL_READONLY_SUPERUSER_FILTER

String

When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the LDAP_SUPERUSER_FILTER configuration field.

LDAP_RESTRICTED_USER_FILTER

String

Subset of the LDAP_USER_FILTER configuration field. When configured, allows Project Quay administrators to configure Lightweight Directory Access Protocol (LDAP) users as restricted users when Project Quay uses LDAP as its authentication provider.

This field requires that your AUTHENTICATION_TYPE is set to LDAP.

FEATURE_RESTRICTED_USERS

Boolean

When set to True with LDAP_RESTRICTED_USER_FILTER active, only the listed users in the defined LDAP group are restricted.

Default: False

LDAP_TIMEOUT

Integer

Specifies the time limit, in seconds, for LDAP operations. This limits the amount of time an LDAP search, bind, or other operation can take. Similar to the -l option in ldapsearch, it sets a client-side operation timeout.

Default: 10

LDAP_NETWORK_TIMEOUT

Integer

Specifies the time limit, in seconds, for establishing a connection to the LDAP server. This is the maximum time Project Quay waits for a response during network operations, similar to the -o nettimeout option in ldapsearch.

Default: 10
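
The team synchronization and timeout fields above are not exercised in the scenario examples that follow, so a minimal sketch is included here. The 2h resync interval and 15-second timeouts are illustrative assumptions only.

LDAP team synchronization and timeouts example YAML
# ...
AUTHENTICATION_TYPE: LDAP
FEATURE_TEAM_SYNCING: true
TEAM_RESYNC_STALE_TIME: 2h # how often to check and resync team membership
LDAP_TIMEOUT: 15 # seconds; illustrative value
LDAP_NETWORK_TIMEOUT: 15 # seconds; illustrative value
# ...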

Basic LDAP configuration example YAML
# ...
AUTHENTICATION_TYPE: LDAP (1)
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com (2)
LDAP_ADMIN_PASSWD: ABC123 (3)
LDAP_ALLOW_INSECURE_FALLBACK: false (4)
LDAP_BASE_DN: (5)
  - dc=example
  - dc=com
LDAP_EMAIL_ATTR: mail (6)
LDAP_UID_ATTR: uid (7)
LDAP_URI: ldap://<example_url>.com (8)
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) (9)
LDAP_USER_RDN: (10)
  - ou=people
LDAP_SECONDARY_USER_RDNS: (11)
    - ou=<example_organization_unit_one>
    - ou=<example_organization_unit_two>
    - ou=<example_organization_unit_three>
    - ou=<example_organization_unit_four>
  1. Required. Must be set to LDAP.

  2. Required. The admin DN for LDAP authentication.

  3. Required. The admin password for LDAP authentication.

  4. Required. Whether to allow SSL/TLS insecure fallback for LDAP authentication.

  5. Required. The base DN for LDAP authentication.

  6. Required. The email attribute for LDAP authentication.

  7. Required. The UID attribute for LDAP authentication.

  8. Required. The LDAP URI.

  9. Required. The user filter for LDAP authentication.

  10. Required. The user RDN for LDAP authentication.

  11. Optional. Secondary User Relative DNs if there are multiple Organizational Units where user objects are located.

LDAP restricted user configuration example YAML
# ...
AUTHENTICATION_TYPE: LDAP
# ...
FEATURE_RESTRICTED_USERS: true (1)
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
    - o=<organization_id>
    - dc=<example_domain_component>
    - dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) (2)
LDAP_USER_RDN:
    - ou=<example_organization_unit>
    - o=<organization_id>
    - dc=<example_domain_component>
    - dc=com
# ...
  1. Must be set to true when configuring an LDAP restricted user.

  2. Configures specified users as restricted users.

LDAP superuser configuration reference example YAML
# ...
AUTHENTICATION_TYPE: LDAP
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
    - o=<organization_id>
    - dc=<example_domain_component>
    - dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_SUPERUSER_FILTER: (<filterField>=<value>) (1)
LDAP_USER_RDN:
    - ou=<example_organization_unit>
    - o=<organization_id>
    - dc=<example_domain_component>
    - dc=com
# ...
  1. Configures specified users as superusers.

OAuth configuration fields

The following fields define the behavior of Project Quay when handling authentication through external identity providers using OAuth. You can configure global OAuth options such as token assignment and whitelisted client IDs, as well as provider-specific settings for GitHub and Google.

Table 22. OAuth fields
Field Type Description

DIRECT_OAUTH_CLIENTID_WHITELIST

Array of String

A list of client IDs for Quay-managed applications that can perform the direct OAuth flow without prompting the user for approval.

FEATURE_ASSIGN_OAUTH_TOKEN

Boolean

Allows organization administrators to assign OAuth tokens to other users.

Global OAuth example YAML
# ...
DIRECT_OAUTH_CLIENTID_WHITELIST:
  - <quay_robot_client>
  - <quay_app_token_issuer>
FEATURE_ASSIGN_OAUTH_TOKEN: true
# ...

Table 23. GitHub OAuth configuration fields
Field Type Description

FEATURE_GITHUB_LOGIN

Boolean

Whether GitHub login is supported

Default: False

GITHUB_LOGIN_CONFIG

Object

Configuration for using GitHub (Enterprise) as an external login provider.

   .ALLOWED_ORGANIZATIONS

Array of String

The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option.

   .API_ENDPOINT

String

The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com.

Example: https://api.github.com/

   .CLIENT_ID
   (Required)

String

The registered client ID for this Project Quay instance; cannot be shared with GITHUB_TRIGGER_CONFIG.

Example: <client_id>

   .CLIENT_SECRET
   (Required)

String

The registered client secret for this Project Quay instance.

Example: <client_secret>

   .GITHUB_ENDPOINT
   (Required)

String

The endpoint for GitHub (Enterprise).

Example: https://github.com/

   .ORG_RESTRICT

Boolean

If true, only users within the organization whitelist can login using this provider.

GitHub OAuth example YAML
# ...
FEATURE_GITHUB_LOGIN: true
GITHUB_LOGIN_CONFIG:
  ALLOWED_ORGANIZATIONS:
    - <myorg>
    - <dev-team>
  API_ENDPOINT: https://api.github.com/
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
  GITHUB_ENDPOINT: https://github.com/
  ORG_RESTRICT: true
# ...

Table 24. Google OAuth configuration fields
Field Type Description

FEATURE_GOOGLE_LOGIN

Boolean

Whether Google login is supported.

Default: False

GOOGLE_LOGIN_CONFIG

Object

Configuration for using Google for external authentication.

   .CLIENT_ID
   (Required)

String

The registered client ID for this Project Quay instance.

Example: <client_id>

   .CLIENT_SECRET
   (Required)

String

The registered client secret for this Project Quay instance.

Example: <client_secret>

Google OAuth example YAML
# ...
FEATURE_GOOGLE_LOGIN: true
GOOGLE_LOGIN_CONFIG:
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
# ...

OIDC configuration fields

You can configure Project Quay to authenticate users through any OpenID Connect (OIDC)-compatible identity provider, including Azure Entra ID (formerly Azure AD), Okta, Keycloak, and others. These fields define the necessary client credentials, endpoints, and token behavior used during the OIDC login flow.

Table 25. OIDC fields
Field Type Description

<string>_LOGIN_CONFIG
(Required)

String

The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, AZURE_LOGIN_CONFIG, however any arbitrary string is accepted.

   .CLIENT_ID
(Required)

String

The registered client ID for this Project Quay instance.

Example: 0e8dbe15c4c7630b6780

   .CLIENT_SECRET
(Required)

String

The registered client secret for this Project Quay instance.

Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846

   .DEBUGLOG

Boolean

Whether to enable debugging.

   .LOGIN_BINDING_FIELD

String

Used when the internal authentication type is set to LDAP. Project Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account.

   .LOGIN_SCOPES

Object

Adds additional scopes that Project Quay uses to communicate with the OIDC provider.

   .OIDC_ENDPOINT_CUSTOM_PARAMS

String

Support for custom query parameters on OIDC endpoints. The following endpoints are supported: authorization_endpoint, token_endpoint, and user_endpoint.

   .OIDC_ISSUER

String

Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as iss, which defines who issued the token. By default, this is read from the .well-known/openid-configuration endpoint, which is exposed by every OIDC provider. If this verification fails, there is no login.

   .OIDC_SERVER
(Required)

String

The address of the OIDC server that is being used for authentication.

Example: https://sts.windows.net/6c878…/

   .PREFERRED_USERNAME_CLAIM_NAME

String

Sets the preferred username to a parameter from the token.

   .SERVICE_ICON

String

Changes the icon on the login screen.

   .SERVICE_NAME
(Required)

String

The name of the service that is being authenticated.

Example: Microsoft Entra ID

   .VERIFIED_EMAIL_CLAIM_NAME

String

The name of the claim that is used to verify the email address of the user.

   .PREFERRED_GROUP_CLAIM_NAME

String

The key name within the OIDC token payload that holds information about the user’s group memberships.

   .OIDC_DISABLE_USER_ENDPOINT

Boolean

Whether to allow or disable the /userinfo endpoint. If using Azure Entra ID, this field must be set to true because Azure obtains the user’s information from the token instead of calling the /userinfo endpoint.

Default: false

OIDC example YAML
AUTHENTICATION_TYPE: OIDC
# ...
<oidc_provider>_LOGIN_CONFIG:
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
  DEBUGLOG: true
  LOGIN_BINDING_FIELD: <login_binding_field>
  LOGIN_SCOPES:
    - openid
    - email
    - profile
  OIDC_ENDPOINT_CUSTOM_PARAMS:
    authorization_endpoint:
      some: "param"
    token_endpoint:
      some: "param"
    user_endpoint:
      some: "param"
  OIDC_ISSUER: <oidc_issuer_url>
  OIDC_SERVER: <oidc_server_address>
  PREFERRED_USERNAME_CLAIM_NAME: <preferred_username_claim>
  SERVICE_ICON: <service_icon_url>
  SERVICE_NAME: <service_name>
  VERIFIED_EMAIL_CLAIM_NAME: <verified_email_claim>
  PREFERRED_GROUP_CLAIM_NAME: <preferred_group_claim>
  OIDC_DISABLE_USER_ENDPOINT: true
# ...

Recaptcha configuration fields

You can enable Recaptcha support in your Project Quay instance to help protect user login and account recovery forms from abuse by automated systems.

Table 26. Recaptcha configuration fields
Field Type Description

FEATURE_RECAPTCHA

Boolean

Whether Recaptcha is necessary for user login and recovery

Default: False

RECAPTCHA_SECRET_KEY

String

If recaptcha is enabled, the secret key for the Recaptcha service

RECAPTCHA_SITE_KEY

String

If recaptcha is enabled, the site key for the Recaptcha service

Recaptcha example YAML
# ...
FEATURE_RECAPTCHA: true
RECAPTCHA_SITE_KEY: "<site_key>"
RECAPTCHA_SECRET_KEY: "<secret_key>"
# ...

JWT configuration fields

Project Quay can be configured to support external authentication using JSON Web Tokens (JWT). This integration allows third-party identity providers or token issuers to authenticate and authorize users by calling specific endpoints that handle token verification, user lookup, and permission queries.

Table 27. JWT configuration fields
Field Type Description

JWT_AUTH_ISSUER

String

The URL of the JWT token issuer

Pattern: ^http(s)?://(.)+$
Example: http://192.168.99.101:6060

JWT_GETUSER_ENDPOINT

String

The endpoint for JWT users
Pattern: ^http(s)?://(.)+$
Example: http://192.168.99.101:6060

JWT_QUERY_ENDPOINT

String

The endpoint for JWT queries

Pattern: ^http(s)?://(.)+$
Example: http://192.168.99.101:6060

JWT_VERIFY_ENDPOINT

String

The endpoint for JWT verification

Pattern: ^http(s)?://(.)+$
Example: http://192.168.99.101:6060

JWT example YAML
# ...
JWT_AUTH_ISSUER: "http://192.168.99.101:6060"
JWT_GETUSER_ENDPOINT: "http://192.168.99.101:6060/getuser"
JWT_QUERY_ENDPOINT: "http://192.168.99.101:6060/query"
JWT_VERIFY_ENDPOINT: "http://192.168.99.101:6060/verify"
# ...

App tokens configuration fields

App-specific tokens allow users to authenticate with Project Quay using token-based credentials. These fields might be useful for CLI tools like Docker.

Table 28. App tokens configuration fields
Field Type Description

FEATURE_APP_SPECIFIC_TOKENS

Boolean

If enabled, users can create tokens for use by the Docker CLI

Default: True

APP_SPECIFIC_TOKEN_EXPIRATION

String

The expiration for external app tokens.

Default: None
Pattern: ^[0-9]+(w|m|d|h|s)$

EXPIRED_APP_SPECIFIC_TOKEN_GC

String

The duration that expired external app tokens remain before being garbage collected

Default: 1d

App tokens example YAML
# ...
FEATURE_APP_SPECIFIC_TOKENS: true
APP_SPECIFIC_TOKEN_EXPIRATION: "30d"
EXPIRED_APP_SPECIFIC_TOKEN_GC: "1d"
# ...

Security and Permissions

This section describes configuration fields that govern core security behaviors and access policies within Project Quay.

Namespace and repository management configuration fields

The following configuration fields govern how Project Quay manages namespaces and repositories, including behavior during automated image pushes, visibility defaults, and rate limiting exceptions.

Table 29. Namespace and repository management configuration fields
Field Type Description

DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT

Number

The default maximum number of builds that can be queued in a namespace.

Default: None

CREATE_PRIVATE_REPO_ON_PUSH

Boolean

Whether new repositories created by push are set to private visibility

Default: True

CREATE_NAMESPACE_ON_PUSH

Boolean

Whether a push to a non-existent organization creates it

Default: False

PUBLIC_NAMESPACES

Array of String

If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces.

NON_RATE_LIMITED_NAMESPACES

Array of String

If rate limiting has been enabled using FEATURE_RATE_LIMITS, you can override it for specific namespaces that require unlimited access.

DISABLE_PUSHES

Boolean

Disables pushes of new content to the registry while retaining all other functionality. Differs from read-only mode because the database is not set as read-only. When DISABLE_PUSHES is set to true, the Project Quay garbage collector is disabled. As a result, when PERMANENTLY_DELETE_TAGS is enabled, using the Project Quay UI to permanently delete a tag does not result in the immediate deletion of the tag. Instead, the image stays in the backend storage until DISABLE_PUSHES is set to false, which re-enables the garbage collector. Project Quay administrators should be aware of this caveat when using DISABLE_PUSHES and PERMANENTLY_DELETE_TAGS together, as illustrated in the sketch after the example below.

Default: False

Namespace and repository management example YAML
# ...
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT: 10
CREATE_PRIVATE_REPO_ON_PUSH: true
CREATE_NAMESPACE_ON_PUSH: false
PUBLIC_NAMESPACES:
  - redhat
  - opensource
  - infra-tools
NON_RATE_LIMITED_NAMESPACES:
  - ci-pipeline
  - trusted-partners
DISABLE_PUSHES: false
# ...
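
The following sketch illustrates the garbage collection caveat described in Table 29. It assumes a deployment where the PERMANENTLY_DELETE_TAGS field is enabled; with both fields set to true, permanent tag deletions are deferred until DISABLE_PUSHES is set back to false.

Deferred tag deletion example YAML
# ...
DISABLE_PUSHES: true # garbage collector is disabled while this is true
PERMANENTLY_DELETE_TAGS: true # permanent deletions wait for garbage collection
# ...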

Nested repositories configuration fields

Support for nested repository path names is provided by the FEATURE_EXTENDED_REPOSITORY_NAMES property. This optional configuration is added to the config.yaml by default. When enabled, the / character can be used in repository names.

Table 30. Nested repositories configuration fields
Field Type Description

FEATURE_EXTENDED_REPOSITORY_NAMES

Boolean

Enable support for nested repositories

Default: True

Nested repositories example YAML
# ...
FEATURE_EXTENDED_REPOSITORY_NAMES: true
# ...

Additional security configuration fields

The following configuration fields provide additional security controls for your Project Quay deployment. These options allow administrators to enforce authentication practices, control anonymous access to content, require team invitations, and enable FIPS-compliant cryptographic functions for environments with enhanced security requirements.

Table 31. Additional security configuration fields
Feature Type Description

FEATURE_REQUIRE_TEAM_INVITE

Boolean

Whether to require invitations when adding a user to a team

Default: True

FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH

Boolean

Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth

Default: False

FEATURE_ANONYMOUS_ACCESS

Boolean

Whether to allow anonymous users to browse and pull public repositories

Default: True

FEATURE_FIPS

Boolean

If set to true, Project Quay will run using FIPS-compliant hash functions

Default: False

Additional security example YAML
# ...
FEATURE_REQUIRE_TEAM_INVITE: true
FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH: false
FEATURE_ANONYMOUS_ACCESS: true
FEATURE_FIPS: false
# ...

Rate limiting and performance configuration fields

The following fields control rate limiting and performance-related behavior for your Project Quay deployment.

Table 32. Rate limiting and performance configuration fields
Field Type Description

FEATURE_RATE_LIMITS

Boolean

Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to true causes nginx to limit certain API calls to 30 per second. If that feature is not set, API calls are limited to 300 per second (effectively unlimited).

Default: false

PROMETHEUS_NAMESPACE

String

The prefix applied to all exposed Prometheus metrics

Default: quay

Rate limiting and performance example YAML
# ...
FEATURE_RATE_LIMITS: false
PROMETHEUS_NAMESPACE: quay
# ...

Search configuration fields

The following configuration fields define how search results are paginated in the Project Quay user interface.

Table 33. Search configuration fields
Field Type Description

SEARCH_MAX_RESULT_PAGE_COUNT

Number

Maximum number of pages the user can paginate in search before they are limited

Default: 10

SEARCH_RESULTS_PER_PAGE

Number

Number of results returned per page by search page

Default: 10

Search example YAML
# ...
SEARCH_MAX_RESULT_PAGE_COUNT: 10
SEARCH_RESULTS_PER_PAGE: 10
# ...

Storage and Data Management

This section describes the configuration fields that govern how Project Quay stores, manages, and audits data.

Image storage features

Project Quay supports image storage features that enhance scalability, resilience, and flexibility in managing container image data. These features allow Project Quay to mirror repositories, proxy storage access through NGINX, and replicate data across multiple storage engines.

Table 34. Storage configuration features
Field Type Description

FEATURE_REPO_MIRROR

Boolean

If set to true, enables repository mirroring.

Default: false

FEATURE_PROXY_STORAGE

Boolean

Whether to proxy all direct download URLs in storage through NGINX.

Default: false

FEATURE_STORAGE_REPLICATION

Boolean

Whether to automatically replicate between storage engines.

Default: false

Image storage example YAML
# ...
FEATURE_REPO_MIRROR: true
FEATURE_PROXY_STORAGE: false
FEATURE_STORAGE_REPLICATION: true
# ...

Action log storage configuration fields

Project Quay maintains a detailed action log to track user and system activity, including repository events, authentication actions, and image operations. By default, this log data is stored in the database, but administrators can configure their deployment to export or forward logs to external systems like Elasticsearch or Splunk for advanced analysis, auditing, or compliance.

Table 35. Action log storage configuration fields
Field Type Description

FEATURE_LOG_EXPORT

Boolean

Whether to allow exporting of action logs.

Default: True

LOGS_MODEL

String

Specifies the preferred method for handling log data.

Values: One of database, transition_reads_both_writes_es, elasticsearch, splunk
Default: database

LOGS_MODEL_CONFIG

Object

Logs model config for action logs.

ALLOW_WITHOUT_STRICT_LOGGING

Boolean

When set to True, if an external log system such as Splunk or Elasticsearch is intermittently unavailable, users can push images normally. Events are logged to stdout instead. Overrides ALLOW_PULLS_WITHOUT_STRICT_LOGGING if set.

Default: False

Action log storage example YAML
# ...
FEATURE_LOG_EXPORT: true
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    host: elasticsearch.example.com
    port: 9200
    access_key: elastic
    secret_key: changeme
    index_prefix: quay-logs
ALLOW_WITHOUT_STRICT_LOGGING: true
# ...

Action log rotation and archiving configuration

This section describes configuration fields related to action log rotation and archiving in Project Quay. When enabled, older logs can be automatically rotated and archived to designated storage locations, helping to manage log retention and storage utilization efficiently.

Table 36. Action log rotation and archiving configuration
Field Type Description

FEATURE_ACTION_LOG_ROTATION

Boolean

Enabling log rotation and archival will move all logs older than 30 days to storage.

Default: false

ACTION_LOG_ARCHIVE_LOCATION

String

If action log archiving is enabled, the storage engine in which to place the archived data.

Example: s3_us_east

ACTION_LOG_ARCHIVE_PATH

String

If action log archiving is enabled, the path in storage in which to place the archived data.

Example: archives/actionlogs

ACTION_LOG_ROTATION_THRESHOLD

String

The time interval after which to rotate logs.

Example: 30d

Action log rotation and archiving example YAML
# ...
FEATURE_ACTION_LOG_ROTATION: true
ACTION_LOG_ARCHIVE_LOCATION: s3_us_east
ACTION_LOG_ARCHIVE_PATH: archives/actionlogs
ACTION_LOG_ROTATION_THRESHOLD: 30d
# ...

Action log audit configuration

This section covers the configuration fields for audit logging within Project Quay. When enabled, audit logging tracks detailed user activity such as UI logins, logouts, and Docker logins for regular users, robot accounts, and token-based accounts.

Table 37. Audit logs configuration field
Field Type Description

ACTION_LOG_AUDIT_LOGINS

Boolean

When set to True, tracks advanced events such as logging into, and out of, the UI, and logging in using Docker for regular users, robot accounts, and for application-specific token accounts.

Default: True

Audit logs configuration example YAML
# ...
ACTION_LOG_AUDIT_LOGINS: true
# ...

Elasticsearch configuration fields

Use the following configuration fields to integrate Project Quay with an external Elasticsearch service. This enables storing and querying structured data such as action logs, repository events, and other operational records outside of the internal database.

Table 38. Logs model configuration (LOGS_MODEL_CONFIG) fields
Field Type Description

LOGS_MODEL_CONFIG.elasticsearch_config.access_key

String

Elasticsearch user (or IAM key for AWS ES).
Example: some_string

.elasticsearch_config.host

String

Elasticsearch cluster endpoint.
Example: host.elasticsearch.example

.elasticsearch_config.index_prefix

String

Prefix for Elasticsearch indexes.
Example: logentry_

.elasticsearch_config.index_settings

Object

Index settings for Elasticsearch.

LOGS_MODEL_CONFIG.elasticsearch_config.use_ssl

Boolean

Whether to use SSL for Elasticsearch.
Default: True
Example: True

.elasticsearch_config.secret_key

String

Elasticsearch password (or IAM secret for AWS ES).
Example: some_secret_string

.elasticsearch_config.aws_region

String

AWS region.
Example: us-east-1

.elasticsearch_config.port

Number

Port of the Elasticsearch cluster.
Example: 1234

.kinesis_stream_config.aws_secret_key

String

AWS secret key.
Example: some_secret_key

.kinesis_stream_config.stream_name

String

AWS Kinesis stream to send action logs to.
Example: logentry-kinesis-stream

.kinesis_stream_config.aws_access_key

String

AWS access key.
Example: some_access_key

.kinesis_stream_config.retries

Number

Max number of retry attempts for a single request.
Example: 5

.kinesis_stream_config.read_timeout

Number

Read timeout in seconds.
Example: 5

.kinesis_stream_config.max_pool_connections

Number

Max number of connections in the pool.
Example: 10

.kinesis_stream_config.aws_region

String

AWS region.
Example: us-east-1

.kinesis_stream_config.connect_timeout

Number

Connection timeout in seconds.
Example: 5

.producer

String

Logs producer type.
Accepted values: kafka, elasticsearch, kinesis_stream
Example: kafka

.kafka_config.topic

String

Kafka topic used to publish log entries.
Example: logentry

.kafka_config.bootstrap_servers

Array

List of Kafka brokers used to bootstrap the client.

.kafka_config.max_block_seconds

Number

Max seconds to block during a send() operation.
Example: 10

Elasticsearch example YAML
# ...
FEATURE_LOG_EXPORT: true
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    access_key: elastic_user
    secret_key: elastic_password
    host: es.example.com
    port: 9200
    use_ssl: true
    aws_region: us-east-1
    index_prefix: logentry_
    index_settings:
      number_of_shards: 3
      number_of_replicas: 1
ALLOW_WITHOUT_STRICT_LOGGING: true
# ...
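
Table 38 also lists Kafka producer fields that the example above does not exercise. The following is a sketch of a Kafka-based log producer under the elasticsearch logs model; the broker addresses are assumed examples.

Kafka log producer example YAML
# ...
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: kafka
  kafka_config:
    topic: logentry
    bootstrap_servers:
      - kafka1.example.com:9092 # assumed broker addresses
      - kafka2.example.com:9092
    max_block_seconds: 10 # max seconds to block during send()
# ...
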
Splunk configuration fields

Use the following fields to configure Project Quay to export action logs to a Splunk endpoint. This configuration allows audit and event logs to be sent to an external Splunk server for centralized analysis, search, and long-term storage.

Table 39. Splunk configuration fields
Field Type Description

producer

String

Must be set to splunk when configuring Splunk as the log exporter.

splunk_config

Object

Logs model configuration for Splunk action logs or Splunk cluster configuration.

.host

String

The Splunk cluster endpoint.

.port

Integer

The port number for the Splunk management cluster endpoint.

.bearer_token

String

The bearer token used for authentication with Splunk.

.url_scheme

String

The URL scheme to use when accessing the Splunk service. If Splunk is behind SSL/TLS, this must be https.

.verify_ssl

Boolean

Enable (True) or disable (False) TLS/SSL verification for HTTPS connections.

.index_prefix

String

The index prefix used by Splunk.

.ssl_ca_path

String

The relative container path to a .pem file containing the certificate authority (CA) for SSL validation.

Splunk configuration example YAML
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
    producer: splunk
    splunk_config:
        host: http://<user_name>.remote.csb
        port: 8089
        bearer_token: <bearer_token>
        url_scheme: <http/https>
        verify_ssl: False
        index_prefix: <splunk_log_index_name>
        ssl_ca_path: <location_to_ssl-ca-cert.pem>
# ...

Splunk HEC configuration fields

The following fields are available when configuring Splunk HTTP Event Collector (HEC) for Project Quay.

Table 40. Splunk HEC configuration fields
Field Type Description

producer

String

Must be set to splunk_hec when configuring Splunk HTTP Event Collector (HEC).

splunk_hec_config

Object

Logs model configuration for Splunk HTTP Event Collector action logs.

.host

String

Splunk cluster endpoint.

.port

Integer

Splunk management cluster endpoint port.

.hec_token

String

HEC token used for authenticating with Splunk.

.url_scheme

String

URL scheme to access the Splunk service. Use https if Splunk is behind SSL/TLS.

.verify_ssl

Boolean

Enable (true) or disable (false) SSL/TLS verification for HTTPS connections.

.index

String

The Splunk index to use for log storage.

.splunk_host

String

The hostname to assign to the logged event.

.splunk_sourcetype

String

The Splunk sourcetype to associate with the event.

Splunk HEC example YAML
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
  producer: splunk_hec
  splunk_hec_config:
    host: prd-p-aaaaaq.splunkcloud.com
    port: 8088
    hec_token: 12345678-1234-1234-1234-1234567890ab
    url_scheme: https
    verify_ssl: False
    index: quay
    splunk_host: quay-dev
    splunk_sourcetype: quay_logs
# ...

Builds and Automation

This section outlines the configuration options available for managing automated builds within Project Quay. These settings control how Dockerfile builds are triggered, processed, and stored, and how build logs are managed and accessed.

You can use these fields to:

  • Enable or disable automated builds from source repositories.

  • Configure the behavior and resource management of the build manager.

  • Control access to and retention of build logs for auditing or debugging purposes.

These options help you streamline your CI/CD pipeline, enforce build policies, and retain visibility into your build history across the registry.

Dockerfile build triggers fields

This section describes the configuration fields used to enable and manage automated builds in Project Quay from Dockerfiles and source code repositories. These fields allow you to define build behavior, enable or disable support for GitHub, GitLab, and Bitbucket triggers, and provide OAuth credentials and endpoints for each SCM provider.

Table 41. Dockerfile build support
Field Type Description

FEATURE_BUILD_SUPPORT

Boolean

Whether to support Dockerfile build.

Default: False

SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD

Number

If not set to None, the number of successive failures that can occur before a build trigger is automatically disabled.

Default: 100

SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD

Number

If not set to None, the number of successive internal errors that can occur before a build trigger is automatically disabled.

Default: 5

Dockerfile build support example YAML
# ...
FEATURE_BUILD_SUPPORT: true
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD: 100
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD: 5
# ...
Table 42. GitHub build triggers
Field Type Description

FEATURE_GITHUB_BUILD

Boolean

Whether to support GitHub build triggers.

Default: False

GITHUB_TRIGGER_CONFIG

Object

Configuration for using GitHub Enterprise for build triggers.

   .GITHUB_ENDPOINT
   (Required)

String

The endpoint for GitHub Enterprise.

Example: https://github.com/

   .API_ENDPOINT

String

The endpoint of the GitHub Enterprise API to use. Must be overridden for github.com.

Example: https://api.github.com/

   .CLIENT_ID
   (Required)

String

The registered client ID for this Project Quay instance; this cannot be shared with GITHUB_LOGIN_CONFIG.

   .CLIENT_SECRET
   (Required)

String

The registered client secret for this Project Quay instance.

GitHub build triggers example YAML
# ...
FEATURE_GITHUB_BUILD: true
GITHUB_TRIGGER_CONFIG:
  GITHUB_ENDPOINT: https://github.com/
  API_ENDPOINT: https://api.github.com/
  CLIENT_ID: your-client-id
  CLIENT_SECRET: your-client-secret
# ...
Table 43. Bitbucket build triggers
Field Type Description

FEATURE_BITBUCKET_BUILD

Boolean

Whether to support Bitbucket build triggers.

Default: False

BITBUCKET_TRIGGER_CONFIG

Object

Configuration for using Bitbucket for build triggers.

   .CONSUMER_KEY
   (Required)

String

The registered consumer key (client ID) for this Project Quay instance.

   .CONSUMER_SECRET
   (Required)

String

The registered consumer secret (client secret) for this Project Quay instance.

Bitbucket build triggers example YAML
# ...
FEATURE_BITBUCKET_BUILD: true
BITBUCKET_TRIGGER_CONFIG:
  CONSUMER_KEY: <your_consumer_key>
  CONSUMER_SECRET: <your_consumer_secret>
# ...
Table 44. GitLab build triggers
Field Type Description

FEATURE_GITLAB_BUILD

Boolean

Whether to support GitLab build triggers.

Default: False

GITLAB_TRIGGER_CONFIG

Object

Configuration for using GitLab for build triggers.

   .GITLAB_ENDPOINT
   (Required)

String

The endpoint at which GitLab Enterprise is running.

   .CLIENT_ID
   (Required)

String

The registered client ID for this Project Quay instance.

   .CLIENT_SECRET
   (Required)

String

The registered client secret for this Project Quay instance.

GitLab build triggers example YAML
# ...
FEATURE_GITLAB_BUILD: true
GITLAB_TRIGGER_CONFIG:
  GITLAB_ENDPOINT: https://gitlab.example.com/
  CLIENT_ID: <your_gitlab_client_id>
  CLIENT_SECRET: <your_gitlab_client_secret>
# ...

Build manager configuration fields

The following configuration fields control how the build manager component of Project Quay orchestrates and manages container image builds. This includes settings for Redis coordination, executor backends such as Kubernetes or EC2, builder image configuration, and advanced scheduling and retry policies.

These fields must be configured to align with your infrastructure environment and workload requirements.

Table 45. Build manager configuration fields
Field Type Description

ALLOWED_WORKER_COUNT

String

Defines how many Build Workers are instantiated per Project Quay pod. Typically set to 1.

ORCHESTRATOR_PREFIX

String

Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys.

REDIS_HOST

Object

The hostname for your Redis service.

REDIS_PASSWORD

String

The password to authenticate into your Redis service.

REDIS_SSL

Boolean

Defines whether or not your Redis connection uses SSL/TLS.

REDIS_SKIP_KEYSPACE_EVENT_SETUP

Boolean

By default, Project Quay does not set up the keyspace events required for key events at runtime. To do so, set REDIS_SKIP_KEYSPACE_EVENT_SETUP to false.

EXECUTOR

String

Starts a definition of an Executor of this type. Valid values are kubernetes and ec2.

BUILDER_NAMESPACE

String

Kubernetes namespace where Project Quay Builds will take place.

K8S_API_SERVER

Object

Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place.

K8S_API_TLS_CA

Object

The filepath in the Quay container of the Build cluster’s CA certificate for the Quay application to trust when making API calls.

KUBERNETES_DISTRIBUTION

String

Indicates which type of Kubernetes is being used. Valid values are openshift and k8s.

CONTAINER_*

Object

Define the resource requests and limits for each build pod.

NODE_SELECTOR_*

Object

Defines the node selector label name-value pair where build Pods should be scheduled.

CONTAINER_RUNTIME

Object

Specifies whether the Builder should run docker or podman. Customers using Red Hat’s quay-builder image should set this to podman.

SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN

Object

Defines the Service Account name or token that will be used by build pods.

QUAY_USERNAME/QUAY_PASSWORD

Object

Defines the registry credentials needed to pull the Project Quay build worker image that is specified in the WORKER_IMAGE field. This is useful if pulling a non-public quay-builder image from quay.io.

WORKER_IMAGE

Object

Image reference for the Project Quay Builder image. Example: quay.io/quay/quay-builder

WORKER_TAG

Object

Tag for the desired Builder image. The latest version is 3.14.

BUILDER_VM_CONTAINER_IMAGE

Object

The full reference to the container image holding the internal VM needed to run each Project Quay Build. (quay.io/quay/quay-builder-qemu-fedoracoreos:latest).

SETUP_TIME

String

Specifies the number of seconds after which a Build times out if it has not yet registered itself with the Build Manager. Defaults to 500 seconds. Builds that time out are restarted up to three times. If the Build does not register itself after three attempts, it is considered failed.

MINIMUM_RETRY_THRESHOLD

String

This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting it to 0 places no restriction on how many attempts the build job can make on that Executor. This value should be kept intentionally small (three or less) to ensure failovers happen quickly during infrastructure failures. You must specify a value for this setting. For example, suppose Kubernetes is set as the first Executor and EC2 as the second. If you want the last attempt to run a job to always be executed on EC2 and not Kubernetes, set the Kubernetes Executor's MINIMUM_RETRY_THRESHOLD to 1 and EC2's MINIMUM_RETRY_THRESHOLD to 0 (the default if not set). In this case, the Kubernetes Executor's retries_remaining(1) check would evaluate to False, causing fallback to the second configured Executor. See the multi-executor sketch after the example YAML below.

SSH_AUTHORIZED_KEYS

Object

List of SSH keys to bootstrap in the ignition config. This allows other keys to be used to SSH into the EC2 instance or QEMU virtual machine (VM).

Build manager example YAML
# ...
ALLOWED_WORKER_COUNT: "1"
ORCHESTRATOR_PREFIX: "quaybuild:"
REDIS_HOST: redis.example.com
REDIS_PASSWORD: examplepassword
REDIS_SSL: true
REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
EXECUTOR: kubernetes
BUILDER_NAMESPACE: quay-builder
K8S_API_SERVER: https://api.openshift.example.com:6443
K8S_API_TLS_CA: /etc/ssl/certs/ca.crt
KUBERNETES_DISTRIBUTION: openshift
CONTAINER_RUNTIME: podman
CONTAINER_MEMORY_LIMITS: 2Gi
NODE_SELECTOR_ROLE: quay-build-node
SERVICE_ACCOUNT_NAME: quay-builder-sa
QUAY_USERNAME: quayuser
QUAY_PASSWORD: quaypassword
WORKER_IMAGE: quay.io/quay/quay-builder
WORKER_TAG: latest
BUILDER_VM_CONTAINER_IMAGE: quay.io/quay/quay-builder-qemu-fedoracoreos:latest
SETUP_TIME: "500"
MINIMUM_RETRY_THRESHOLD: "1"
SSH_AUTHORIZED_KEYS:
  - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsomekey user@example.com
  - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnotherkey user2@example.com
# ...
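
The failover behavior of MINIMUM_RETRY_THRESHOLD only applies when more than one Executor is defined. The following sketch assumes the EXECUTORS list form of the build manager configuration, where each EXECUTOR entry starts a new Executor definition as described in the table above; the namespace shown is illustrative:

# ...
EXECUTORS:
  - EXECUTOR: kubernetes
    BUILDER_NAMESPACE: quay-builder
    MINIMUM_RETRY_THRESHOLD: 1
  - EXECUTOR: ec2
    MINIMUM_RETRY_THRESHOLD: 0
# ...

With this ordering, builds run on Kubernetes until only one retry remains, at which point the final attempt falls back to the EC2 Executor.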

Build logs configuration fields

This section describes the available configuration fields for managing build logs in Project Quay. These settings determine where build logs are archived, who can access them, and how they are stored.

Table 46. Build logs configuration fields
Field Type Description

FEATURE_READER_BUILD_LOGS

Boolean

If set to true, build logs can be read by those with read access to the repository, rather than only write access or admin access.

Default: False

LOG_ARCHIVE_LOCATION

String

The storage location, defined in DISTRIBUTED_STORAGE_CONFIG, in which to place the archived build logs.

Example: s3_us_east

LOG_ARCHIVE_PATH

String

The path under the configured storage engine in which to place the archived build logs in JSON format.

Example: archives/buildlogs

Build logs example YAML
# ...
FEATURE_READER_BUILD_LOGS: true
LOG_ARCHIVE_LOCATION: s3_us_east
LOG_ARCHIVE_PATH: archives/buildlogs
# ...

Tag and image management

This section describes the configuration fields that control how tags and images are managed within Project Quay. These settings help automate image cleanup, manage repository mirrors, and enhance performance through caching.

You can use these fields to:

  • Define expiration policies for untagged or outdated images.

  • Enable and schedule mirroring of external repositories into your registry.

  • Leverage model caching to optimize performance for tag and repository operations.

These options help maintain an up-to-date image registry environment.

Tag expiration configuration fields

The following configuration options are available to automate tag expiration and garbage collection. These features help manage storage usage by enabling cleanup of unused or expired tags based on defined policies.

Table 47. Tag expiration configuration fields
Field Type Description

FEATURE_GARBAGE_COLLECTION

Boolean

Whether garbage collection of repositories is enabled.

Default: True

TAG_EXPIRATION_OPTIONS
(Required)

Array of string

If enabled, the options that users can select for expiration of tags in their namespace.

Pattern:
^[0-9]+(y|w|m|d|h|s)$

DEFAULT_TAG_EXPIRATION
(Required)

String

The default, configurable tag expiration time for time machine.

Pattern:
^[0-9]+(y|w|m|d|h|s)$
Default: 2w

FEATURE_CHANGE_TAG_EXPIRATION

Boolean

Whether users and organizations are allowed to change the tag expiration for tags in their namespace.

Default: True

FEATURE_AUTO_PRUNE

Boolean

When set to True, enables functionality related to the auto-pruning of tags.
Default: False

NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES

Integer

The interval, in minutes, that defines the frequency to re-run notifications for expiring images.

Default: 300

DEFAULT_NAMESPACE_AUTOPRUNE_POLICY

Object

The default organization-wide auto-prune policy.

    .method: number_of_tags

Object

The option specifying the number of tags to keep.

    .value: <integer>

Integer

When used with method: number_of_tags, denotes the number of tags to keep.

For example, to keep two tags, specify 2.

    .creation_date

Object

The option specifying how long to keep tags.

    .value: <integer>

Integer

When used with creation_date, denotes how long to keep tags.

Can be set to seconds (s), days (d), months (m), weeks (w), or years (y). Must include a valid integer. For example, to keep tags for one year, specify 1y.

AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD

Integer

The period in which the auto-pruner worker runs at the registry level. By default, it is set to run one time per day (one time per 24 hours). Value must be in seconds.

Tag expiration example YAML
# ...
FEATURE_GARBAGE_COLLECTION: true
TAG_EXPIRATION_OPTIONS:
  - 1w
  - 2w
  - 1m
  - 90d
DEFAULT_TAG_EXPIRATION: 2w
FEATURE_CHANGE_TAG_EXPIRATION: true
FEATURE_AUTO_PRUNE: true
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
  method: number_of_tags
  value: 10 (1)
AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD: 86400
# ...
  1. Specifies that ten tags are kept.

Registry auto-prune policy by creation date example YAML
# ...
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
  method: creation_date
  value: 1y (1)
# ...
  1. Specifies tags to be pruned one year after their creation date.

Mirroring configuration fields

Mirroring in Project Quay enables automatic synchronization of repositories with upstream sources. This feature is useful for maintaining local mirrors of remote container images, ensuring availability in disconnected environments or improving performance through caching.

Table 48. Mirroring configuration
Field Type Description

FEATURE_REPO_MIRROR

Boolean

Enable or disable repository mirroring

Default: false

REPO_MIRROR_INTERVAL

Number

The number of seconds between checking for repository mirror candidates

Default: 30

REPO_MIRROR_SERVER_HOSTNAME

String

Replaces the SERVER_HOSTNAME as the destination for mirroring.

Default: None

Example:
openshift-quay-service

REPO_MIRROR_TLS_VERIFY

Boolean

Require HTTPS and verify certificates of Quay registry during mirror.

Default: true

REPO_MIRROR_ROLLBACK

Boolean

When set to true, the repository rolls back after a failed mirror attempt.

Default: false

Mirroring configuration example YAML
# ...
FEATURE_REPO_MIRROR: true
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_SERVER_HOSTNAME: "openshift-quay-service"
REPO_MIRROR_TLS_VERIFY: true
REPO_MIRROR_ROLLBACK: false
# ...

ModelCache configuration fields

ModelCache is a caching mechanism used by Project Quay to store accessed data and reduce database load. Project Quay supports multiple backends for caching, including the default Memcache, as well as Redis and Redis Cluster.

  • Memcache (default): requires no additional configuration.

  • Redis: can be configured as a single instance or with a read-only replica.

  • Redis Cluster: provides high availability and sharding for larger deployments.

Table 49. ModelCache configuration fields
Field Type Description

DATA_MODEL_CACHE_CONFIG.engine

String

The cache backend engine.
Values: memcache, redis, rediscluster
Default: memcache

.redis_config.primary.host

String

The hostname of the primary Redis instance when using the redis engine.

.redis_config.primary.port

Number

The port used by the primary Redis instance.

.redis_config.primary.password

String

The password for authenticating with the primary Redis instance. Only required if ssl is set to true.

.redis_config.primary.ssl

Boolean

Whether to use SSL/TLS for the primary Redis connection.

.redis_config.startup_nodes

Array of Map

For rediscluster engine. The list of initial Redis cluster nodes with host and port.

.redis_config.password

String

Password used for authentication with the Redis cluster. Required if ssl is true.

.redis_config.read_from_replicas

Boolean

Whether to allow read operations from Redis cluster replicas.

.redis_config.skip_full_coverage_check

Boolean

If set to true, skips the Redis cluster full coverage check.

.redis_config.ssl

Boolean

Whether to use SSL/TLS for Redis cluster communication.

.replica.host

String

The hostname of the Redis replica instance. Optional.

.replica.port

Number

The port used by the Redis replica instance.

.replica.password

String

The password for the Redis replica. Required if ssl is true.

.replica.ssl

Boolean

Whether to use SSL/TLS for the Redis replica connection.

Single Redis with optional replica example YAML
# ...
DATA_MODEL_CACHE_CONFIG:
  engine: redis
  redis_config:
    primary:
      host: <redis-primary.example.com>
      port: 6379
      password: <redis_password>
      ssl: true
    replica:
      host: <redis-replica.example.com>
      port: 6379
      password: <redis_password>
      ssl: true
# ...
Clustered Redis example YAML
# ...
DATA_MODEL_CACHE_CONFIG:
  engine: rediscluster
  redis_config:
    startup_nodes:
      - host: <redis-node-1.example.com>
        port: 6379
      - host: <redis-node-2.example.com>
        port: 6379
    password: <cluster_password>
    read_from_replicas: true
    skip_full_coverage_check: true
    ssl: true
# ...

Scanner and Metadata

This section describes configuration fields related to security scanning, metadata presentation, and artifact relationships within Project Quay.

These settings enable enhanced visibility and security by allowing Project Quay to:

  • Integrate with a vulnerability scanner to assess container images for known CVEs.

  • Render AI/ML model metadata through model cards stored in the registry.

  • Expose relationships between container artifacts using the Referrers API, aligning with the OCI artifact specification.

Together, these features help improve software supply chain transparency, enforce security policies, and support emerging metadata-driven workflows.

Clair security scanner configuration fields

Project Quay can leverage Clair security scanner to detect vulnerabilities in container images. These configuration fields control how the scanner is enabled, how frequently it indexes new content, which endpoints are used, and how notifications are handled.

Table 50. Security scanner configuration
Field Type Description

FEATURE_SECURITY_SCANNER

Boolean

Enable or disable the security scanner

Default: false

FEATURE_SECURITY_NOTIFICATIONS

Boolean

If the security scanner is enabled, turn on or turn off security notifications

Default: false

SECURITY_SCANNER_V4_REINDEX_THRESHOLD

String

This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the last_indexed datetime in the manifestsecuritystatus table. This parameter is used to avoid trying to re-index every failed manifest on every indexing run. The default time to re-index is 300 seconds.

SECURITY_SCANNER_V4_ENDPOINT

String

The endpoint for the V4 security scanner

Pattern:
^http(s)?://(.)+$

Example:
http://192.168.99.101:6060

SECURITY_SCANNER_V4_PSK

String

The generated pre-shared key (PSK) for Clair

SECURITY_SCANNER_ENDPOINT

String

The endpoint for the V2 security scanner

Pattern:
^http(s)?://(.)+$

Example:
http://192.168.99.100:6060

SECURITY_SCANNER_INDEXING_INTERVAL

Integer

This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Project Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing.

Default: 30

FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX

Boolean

Whether to allow sending notifications about vulnerabilities for new pushes.
Default: True

SECURITY_SCANNER_V4_MANIFEST_CLEANUP

Boolean

Whether the Project Quay garbage collector removes manifests that are not referenced by other tags or manifests.
Default: True

NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX

String

Sets the minimal security level for new notifications on detected vulnerabilities. Avoids the creation of a large number of notifications after the first index. If not defined, defaults to High. Available options include Critical, High, Medium, Low, Negligible, and Unknown.

SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE

String

The maximum layer size allowed for indexing. If the layer size exceeds the configured size, the Project Quay UI returns the following message: The manifest for this tag has layer(s) that are too large to index by the Quay Security Scanner. The default is 8G, and the maximum recommended is 10G. Accepted values are B, K, M, T, and G.
Default: 8G

Security scanner YAML configuration
# ...
FEATURE_SECURITY_NOTIFICATIONS: true
FEATURE_SECURITY_SCANNER: true
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true
# ...
SECURITY_SCANNER_INDEXING_INTERVAL: 30
SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true
SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081
SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ==
SERVER_HOSTNAME: quay-server.example.com
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE: 8G (1)
# ...
  1. Recommended maximum is 10G.

Re-indexing with Clair v4

When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This holds until the scanners are changed, because different scanners produce different information for a given manifest in the report. Because of this, Clair v4 exposes a state representation of the indexing engine (/indexer/api/v1/index_state) to determine whether the scanner configuration has been changed.

Project Quay leverages this index state by saving it to the index report when parsing to Quay’s database. If this state has changed since the manifest was previously scanned, Project Quay will attempt to re-index that manifest during the periodic indexing process.

By default, the SECURITY_SCANNER_INDEXING_INTERVAL parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they do not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Project Quay database.
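
For example, to see security scan results sooner after pushing a new tag, the interval can be lowered in config.yaml. The following is a minimal sketch; the value shown is illustrative:

# ...
SECURITY_SCANNER_INDEXING_INTERVAL: 10
# ...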

Model card rendering configuration fields

Project Quay supports the rendering of Model Cards, a form of metadata documentation commonly used in machine learning workflows, to improve the visibility and management of model-related content within OCI-compliant images.

Table 51. Model card rendering configuration fields
Field Type Description

FEATURE_UI_MODELCARD

Boolean

Enables Model Card image tab in UI. Defaults to true.

UI_MODELCARD_ARTIFACT_TYPE

String

Defines the model card artifact type.

UI_MODELCARD_ANNOTATION

Object

This optional field defines the manifest-level annotation of the model card stored in an OCI image.

UI_MODELCARD_LAYER_ANNOTATION

Object

This optional field defines the layer annotation of the model card stored in an OCI image.

Model card example YAML
FEATURE_UI_MODELCARD: true (1)
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel (2)
UI_MODELCARD_ANNOTATION: (3)
  org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION: (4)
  org.opencontainers.image.title: README.md
  1. Enables the Model Card image tab in the UI.

  2. Defines the model card artifact type. In this example, the artifact type is application/x-mlmodel.

  3. Optional. If an image does not have an artifactType defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matching UI_MODELCARD_LAYER_ANNOTATION.

  4. Optional. If an image has an artifactType defined and multiple layers, this field is used to locate the specific layer containing the model card.

Open Container Initiative referrers API configuration field

The Open Container Initiative (OCI) referrers API aids in the retrieval and management of referrers, which helps improve container image management.

Table 52. Referrers API configuration field
Field Type Description

FEATURE_REFERRERS_API

Boolean

Enables OCI 1.1’s referrers API.

OCI referrers enablement example YAML
# ...
FEATURE_REFERRERS_API: True
# ...

Quota management and proxy cache features

This section outlines configuration fields related to enforcing storage limits and improving image availability through proxy caching.

These features help registry administrators:

  • Control how much storage organizations and users consume with configurable quotas.

  • Improve access to upstream images by caching remote content locally via proxy cache.

  • Monitor and manage resource consumption and availability across distributed environments.

Collectively, these capabilities ensure better performance, governance, and resiliency in managing container image workflows.

Quota management configuration fields

The following configuration fields enable and customize quota management functionality in Project Quay. Quota management helps administrators enforce storage usage policies at the organization level by allowing them to set usage limits, calculate blob sizes, and control tag deletion behavior.

Table 53. Quota management configuration
Field Type Description

FEATURE_QUOTA_MANAGEMENT

Boolean

Enables configuration, caching, and validation for the quota management feature.

Default: False

DEFAULT_SYSTEM_REJECT_QUOTA_BYTES

String

Enables system default quota reject byte allowance for all organizations.

By default, no limit is set.

QUOTA_BACKFILL

Boolean

Enables the quota backfill worker to calculate the size of pre-existing blobs.

Default: True

QUOTA_TOTAL_DELAY_SECONDS

String

The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete.

Default: 1800

PERMANENTLY_DELETE_TAGS

Boolean

Enables functionality related to the removal of tags from the time machine window.

Default: False

RESET_CHILD_MANIFEST_EXPIRATION

Boolean

Resets the expirations of temporary tags targeting the child manifests. With this feature set to True, child manifests are immediately garbage collected.

Default: False

Quota management example YAML
# ...
FEATURE_QUOTA_MANAGEMENT: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: "100gb"
QUOTA_BACKFILL: true
QUOTA_TOTAL_DELAY_SECONDS: "3600"
PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
# ...

Proxy cache configuration fields

The proxy cache configuration in Project Quay enables Project Quay to act as a pull-through cache for upstream container registries. When FEATURE_PROXY_CACHE is enabled, Project Quay can cache images that are pulled from external registries, reducing bandwidth consumption and improving image retrieval speed on subsequent requests.

Table 54. Proxy cache configuration fields
Field Type Description

FEATURE_PROXY_CACHE

Boolean

Enables Project Quay to act as a pull-through cache for upstream registries.

Default: false

Proxy cache example YAML
# ...
FEATURE_PROXY_CACHE: true
# ...

QuayIntegration configuration fields

The QuayIntegration custom resource enables integration between your OpenShift Container Platform cluster and a Project Quay registry instance.

Table 55. QuayIntegration configuration fields
Name Description Schema

allowlistNamespaces
(Optional)

A list of namespaces to include.

Array

clusterID
(Required)

The ID associated with this cluster.

String

credentialsSecret.key
(Required)

The secret containing credentials to communicate with the Quay registry.

Object

denylistNamespaces
(Optional)

A list of namespaces to exclude.

Array

insecureRegistry
(Optional)

Whether to skip TLS verification to the Quay registry

Boolean

quayHostname
(Required)

The hostname of the Quay registry.

String

scheduledImageStreamImport
(Optional)

Whether to enable image stream importing.

Boolean

QuayIntegration example CR
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration
spec:
  clusterID: 1df512fc-bf70-11ee-bb31-001a4a160100
  quayHostname: quay.example.com
  credentialsSecret:
    name: quay-creds-secret
    key: token
  allowlistNamespaces:
    - dev-team
    - prod-team
  denylistNamespaces:
    - test
  insecureRegistry: false
  scheduledImageStreamImport: true

Mail configuration fields

Use the following fields to enable email notifications from your Project Quay instance, such as account confirmation, password reset, and security alerts. These settings allow Project Quay to connect to your SMTP server and send outbound messages on behalf of your registry.

Table 56. Mail configuration fields
Field Type Description

FEATURE_MAILING

Boolean

Whether emails are enabled

Default: False

MAIL_DEFAULT_SENDER

String

If specified, the e-mail address used as the sender (from address) when Project Quay sends e-mails. If not specified, defaults to support@quay.io

Example: support@example.com

MAIL_PASSWORD

String

The SMTP password to use when sending e-mails

MAIL_PORT

Number

The SMTP port to use. If not specified, defaults to 587.

MAIL_SERVER

String

The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true.

Example: smtp.example.com

MAIL_USERNAME

String

The SMTP username to use when sending e-mails

MAIL_USE_TLS

Boolean

If specified, whether to use TLS for sending e-mails

Default: True

Mail example YAML
# ...
FEATURE_MAILING: true
MAIL_DEFAULT_SENDER: "support@example.com"
MAIL_SERVER: "smtp.example.com"
MAIL_PORT: 587
MAIL_USERNAME: "smtp-user@example.com"
MAIL_PASSWORD: "your-smtp-password"
MAIL_USE_TLS: true
# ...

Environment variable configuration

Project Quay supports a limited set of environment variables that control runtime behavior and performance tuning. These values provide flexibility in specific scenarios where per-process behavior, connection counts, or regional configuration must be adjusted dynamically.

Use environment variables cautiously. These options typically override or augment existing configuration mechanisms.

This section documents environment variables related to the following components:

  • Geo-replication preferences

  • Database connection pooling

  • HTTP connection concurrency

  • Worker process scaling

Geo-replication

Project Quay supports multi-region deployments where multiple instances operate across geographically distributed sites. In these scenarios, each site shares the same configuration and metadata, but storage backends might vary between regions.

To accommodate this, Project Quay allows specifying a preferred storage engine for each deployment using an environment variable. This ensures that while metadata remains synchronized across all regions, each region can use its own optimized storage backend without requiring separate configuration files.

Use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable to explicitly set the preferred storage engine by its ID, as defined in DISTRIBUTED_STORAGE_CONFIG.

Table 57. Geo-replication configuration
Variable Type Description

QUAY_DISTRIBUTED_STORAGE_PREFERENCE

String

The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use.
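
For example, in a Red Hat Quay on OpenShift Container Platform deployment, the variable can be set in an env block, in the same way as the other environment variables documented later in this section. The following is a minimal sketch; the storage engine ID usstorage is hypothetical and must match an ID defined in your DISTRIBUTED_STORAGE_CONFIG:

env:
  - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
    value: usstorage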

Database connection pooling

Project Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.

Database connection pooling is enabled by default, and each process that interacts with the database contains a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Project Quay container. Under certain deployments and loads, this might require analysis to ensure that Project Quay does not exceed the configured database’s maximum connection count.

Over time, the connection pools release idle connections. To release all connections immediately, Project Quay requires a restart.

Table 58. Database connection pooling configuration
Variable Type Description

DB_CONNECTION_POOLING

String

Whether to enable or disable database connection pooling. Defaults to true. Accepted values are "true" or "false".

If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml option:

Database connection pooling example YAML
# ...
DB_CONNECTION_ARGS:
  max_connections: 10
# ...

Disabling database pooling in standalone deployments

For standalone Project Quay deployments, database connection pooling can be toggled off when starting your deployment. For example:

$ sudo podman run -d --rm -p 80:8080 -p 443:8443  \
   --name=quay \
   -v $QUAY/config:/conf/stack:Z \
   -v $QUAY/storage:/datastorage:Z \
   -e DB_CONNECTION_POOLING=false \
   registry.redhat.io/quay/quay-rhel8:v3.12.1

Disabling database pooling for Red Hat Quay on OpenShift Container Platform

For Red Hat Quay on OpenShift Container Platform, database connection pooling can be configured by modifying the QuayRegistry custom resource definition (CRD). For example:

Example QuayRegistry CRD
spec:
  components:
  - kind: quay
    managed: true
    overrides:
      env:
      - name: DB_CONNECTION_POOLING
        value: "false"

HTTP connection counts

You can control the number of simultaneous HTTP connections handled by Project Quay using environment variables. These limits apply either globally or can be scoped to individual components (registry, web UI, or security scanning). By default, each worker process allows up to 50 parallel connections.

This setting is distinct from the number of worker processes.

These connection-related environment variables can be configured differently depending on your deployment type:

  • In standalone deployments, configure connection counts in the config.yaml file.

  • In Red Hat Quay on OpenShift Container Platform deployments, define the values in the env block of the QuayRegistry CR.

Table 59. HTTP connection count configuration variables
Variable Type Description

WORKER_CONNECTION_COUNT

Number

Global default for the maximum number of HTTP connections per worker process.

Default: 50

WORKER_CONNECTION_COUNT_REGISTRY

Number

HTTP connections per registry worker.

Default: WORKER_CONNECTION_COUNT

WORKER_CONNECTION_COUNT_WEB

Number

HTTP connections per web UI worker.

Default: WORKER_CONNECTION_COUNT

WORKER_CONNECTION_COUNT_SECSCAN

Number

HTTP connections per Clair security scanner worker.

Default: WORKER_CONNECTION_COUNT

HTTP connection configuration for standalone Project Quay deployments
# config.yaml
WORKER_CONNECTION_COUNT: 10
WORKER_CONNECTION_COUNT_REGISTRY: 10
WORKER_CONNECTION_COUNT_WEB: 10
WORKER_CONNECTION_COUNT_SECSCAN: 10
HTTP connection configuration for Red Hat Quay on OpenShift Container Platform
env:
  - name: WORKER_CONNECTION_COUNT
    value: "10"
  - name: WORKER_CONNECTION_COUNT_REGISTRY
    value: "10"
  - name: WORKER_CONNECTION_COUNT_WEB
    value: "10"
  - name: WORKER_CONNECTION_COUNT_SECSCAN
    value: "10"

Worker process counts

You can control the number of worker processes that handle incoming requests in Project Quay using environment variables. These values define how many parallel processes are started to handle tasks for different components of the system, such as the registry, the web UI, and security scanning.

If not explicitly set, Project Quay calculates the number of worker processes automatically based on the number of available CPU cores. While this dynamic scaling can optimize performance on larger machines, it may also lead to unnecessary resource usage in smaller or more controlled environments.

In Red Hat Quay on OpenShift Container Platform deployments, the Operator sets the following default values:

  • WORKER_COUNT_REGISTRY: 8

  • WORKER_COUNT_WEB: 4

  • WORKER_COUNT_SECSCAN: 2

Table 60. Worker count configuration variables
Variable Type Description

WORKER_COUNT

Number

Global override for the number of worker processes across all components.

If set, this value applies to all component-specific worker counts unless they are explicitly overridden.

WORKER_COUNT_REGISTRY

Number

Number of worker processes assigned to handle registry API traffic.

Recommended range: 8–64

WORKER_COUNT_WEB

Number

Number of worker processes assigned to handle web UI and user interface requests.

Recommended range: 2–32

WORKER_COUNT_SECSCAN

Number

Number of worker processes assigned to handle security scanning operations (e.g., Clair integration).

Because the Operator requests 2 vCPUs by default for this component, setting this value between 2 and 4 is generally safe. Higher values (e.g., 16) can be used in performance-sensitive environments.

Worker count configuration for standalone Project Quay deployments
WORKER_COUNT: 10
WORKER_COUNT_REGISTRY: 16
WORKER_COUNT_WEB: 8
WORKER_COUNT_SECSCAN: 4
Worker count configuration for Red Hat Quay on OpenShift Container Platform
env:
  - name: WORKER_COUNT
    value: "10"
  - name: WORKER_COUNT_REGISTRY
    value: "16"
  - name: WORKER_COUNT_WEB
    value: "8"
  - name: WORKER_COUNT_SECSCAN
    value: "4"

Clair security scanner

Configuration fields for Clair have been moved to Clair configuration overview. This chapter will be removed in a future version of Project Quay.

Project Quay Security Scanning with Clair V2

Project Quay supports scanning container images for known vulnerabilities with a scanning engine such as Clair. This document explains how to configure Clair with Project Quay.

Note

With the release of Project Quay 3.4, the default version of Clair is V4. Clair V4 is no longer released as a Technology Preview and is supported for production use. Customers are strongly encouraged to use Clair V4 with Project Quay 3.4. It is possible to run both Clair V4 and Clair V2 simultaneously if so desired. In future versions of Project Quay, Clair V2 will eventually be removed.

Set up Clair V2 in the Project Quay config tool

Enabling Clair V2 in Project Quay consists of:

  • Starting the Project Quay config tool. See the Project Quay deployment guide for the type of deployment you are doing (OpenShift, Basic, or HA) for how to start the config tool for that environment.

  • Enabling security scanning, then generating a private key and PEM file in the config tool

  • Including the key and PEM file in the Clair config file

  • Starting the Clair container

The procedure varies, based on whether you are running Project Quay on OpenShift or directly on a host.

Enabling Clair V2 on a Project Quay OpenShift deployment

To set up Clair V2 on Project Quay in OpenShift, see Add Clair image scanning to Project Quay.

Enabling Clair V2 on a Project Quay Basic or HA deployment

To set up Clair V2 on a Project Quay deployment where the container is running directly on the host system, do the following:

  1. Restart the Project Quay config tool: Run the Quay container again in config mode, open the configuration UI in a browser, then select Modify an existing configuration. When prompted, upload the quay-config.tar.gz file that was originally created for the deployment.

  2. Enable Security Scanning: Scroll to the Security Scanner section and select the "Enable Security Scanning" checkbox. From the fields that appear you need to create an authentication key and enter the security scanner endpoint. Here’s how:

    • Generate key: Click Create Key, then from the pop-up window type a name for the Clair private key and an optional expiration date (if blank, the key never expires). Then select Generate Key.

    • Copy the Clair key and PEM file: Save the Key ID (to a notepad or similar) and download a copy of the Private Key PEM file (named security_scanner.pem) by selecting "Download Private Key" (if you lose the key, you need to generate a new one). You will need the key and PEM file when you start the Clair container later.

      Close the pop-up when you are done. Here is an example of a completed Security Scanner config:

      Create authentication key and set scan endpoint

  3. Save the configuration: Click Save Configuration Changes and then select Download Configuration to save it to your local system.

  4. Deploy the configuration: To pick up the changes enabling scanning, as well as other changes you may have made to the configuration, unpack the quay-config.tar.gz and copy the resulting files to the config directory. For example:

    $ tar xvf quay-config.tar.gz
    config.yaml  ssl.cert  ssl.key
    $ cp config.yaml ssl* /mnt/quay/config

Next, start the Clair V2 container and associated database, as described in the following sections.

Setting Up Clair V2 Security Scanning

Once you have created the necessary key and PEM files from the Project Quay config UI, you are ready to start up the Clair V2 container and associated database. Once that is done, you can restart your Project Quay cluster to have those changes take effect.

Procedures for running the Clair V2 container and associated database are different on OpenShift than they are for running those containers directly on a host.

Run Clair V2 on a Project Quay OpenShift deployment

To run the Clair V2 image scanning container and its associated database on an OpenShift environment with your Project Quay cluster, see Add Clair image scanning to Project Quay.

Run Clair V2 on a Project Quay Basic or HA deployment

To run Clair V2 and its associated database on non-OpenShift environments (directly on a host), you need to:

  • Start up a database

  • Configure and start Clair V2

Get Postgres and Clair

In order to run Clair, a database is required. MySQL is not supported for production deployments. For production, we recommend PostgreSQL or another supported database:

  • Running on machines other than those running Project Quay

  • Ideally with automatic replication and failover

For testing purposes, a single PostgreSQL instance can be started locally:

  1. To start Postgres locally, do the following:

    # sudo podman run --name postgres -p 5432:5432 -d postgres
    # sleep 5
    # sudo podman run --rm --link postgres:postgres postgres \
       sh -c 'echo "create database clairtest" | psql -h \
       "$POSTGRES_PORT_5432_TCP_ADDR" -p  \
       "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

    The configuration string for this test database is:

    postgresql://postgres@{DOCKER HOST GOES HERE}:5432/clairtest?sslmode=disable
  2. Pull the security-enabled Clair image:

You will need to build your own Clair container and pull it during this step. Instructions for building the Clair container are not yet available.

  3. Make a configuration directory for Clair:

    # mkdir clair-config
    # cd clair-config

Configure Clair V2

Clair V2 can run either as a single instance or in high-availability mode. It is recommended to run more than a single instance of Clair, ideally in an auto-scaling group with automatic healing.

  1. Create a config.yaml file to be used in the Clair V2 config directory (/clair/config) from one of the two Clair configuration files shown here.

  2. If you are doing a high-availability installation, go through the procedure in Authentication for high-availability scanners to create a Key ID and Private Key (PEM).

  3. Save the Private Key (PEM) to a file (such as, $HOME/config/security_scanner.pem).

  4. Replace the value of key_id (CLAIR_SERVICE_KEY_ID) with the Key ID you generated and the value of private_key_path with the location of the PEM file (for example, /config/security_scanner.pem).

    For example, those two values might now appear as:

    key_id: { 4fb9063a7cac00b567ee921065ed16fed7227afd806b4d67cc82de67d8c781b1 }
    private_key_path: /clair/config/security_scanner.pem
  5. Change other values in the configuration file as needed.

Clair V2 configuration: High availability
clair:
  database:
    type: pgsql
    options:
      # A PostgreSQL Connection string pointing to the Clair Postgres database.
      # Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
      source: { POSTGRES_CONNECTION_STRING }
      cachesize: 16384
  api:
    # The port at which Clair will report its health status. For example, if Clair is running at
    # https://clair.mycompany.com, the health will be reported at
    # http://clair.mycompany.com:6061/health.
    healthport: 6061

    port: 6062
    timeout: 900s

    # paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
    paginationkey: "XxoPtCUzrUv4JV5dS+yQ+MdW7yLEJnRMwigVY/bpgtQ="

  updater:
    # interval defines how often Clair will check for updates from its upstream vulnerability databases.
    interval: 6h
  notifier:
    attempts: 3
    renotifyinterval: 1h
    http:
      # QUAY_ENDPOINT defines the endpoint at which Quay is running.
      # For example: https://myregistry.mycompany.com
      endpoint: { QUAY_ENDPOINT }/secscan/notify
      proxy: http://localhost:6063

jwtproxy:
  signer_proxy:
    enabled: true
    listen_addr: :6063
    ca_key_file: /certificates/mitm.key # Generated internally, do not change.
    ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
    signer:
      issuer: security_scanner
      expiration_time: 5m
      max_skew: 1m
      nonce_length: 32
      private_key:
        type: preshared
        options:
          # The ID of the service key generated for Clair. The ID is returned when setting up
          # the key in [Quay Setup](security-scanning.md)
          key_id: { CLAIR_SERVICE_KEY_ID }
          private_key_path: /clair/config/security_scanner.pem

  verifier_proxies:
  - enabled: true
    # The port at which Clair will listen.
    listen_addr: :6060

    # If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
    # section below for more information.
    # key_file: /clair/config/clair.key
    # crt_file: /clair/config/clair.crt

    verifier:
      # CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
      # specified here must match the listen_addr port a few lines above this.
      # Example: https://myclair.mycompany.com:6060
      audience: { CLAIR_ENDPOINT }

      upstream: http://localhost:6062
      key_server:
        type: keyregistry
        options:
          # QUAY_ENDPOINT defines the endpoint at which Quay is running.
          # Example: https://myregistry.mycompany.com
          registry: { QUAY_ENDPOINT }/keys/
Clair V2 configuration: Single instance
clair:
  database:
    type: pgsql
    options:
      # A PostgreSQL Connection string pointing to the Clair Postgres database.
      # Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
      source: { POSTGRES_CONNECTION_STRING }
      cachesize: 16384
  api:
    # The port at which Clair will report its health status. For example, if Clair is running at
    # https://clair.mycompany.com, the health will be reported at
    # http://clair.mycompany.com:6061/health.
    healthport: 6061

    port: 6062
    timeout: 900s

    # paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
    paginationkey:

  updater:
    # interval defines how often Clair will check for updates from its upstream vulnerability databases.
    interval: 6h
  notifier:
    attempts: 3
    renotifyinterval: 1h
    http:
      # QUAY_ENDPOINT defines the endpoint at which Quay is running.
      # For example: https://myregistry.mycompany.com
      endpoint: { QUAY_ENDPOINT }/secscan/notify
      proxy: http://localhost:6063

jwtproxy:
  signer_proxy:
    enabled: true
    listen_addr: :6063
    ca_key_file: /certificates/mitm.key # Generated internally, do not change.
    ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
    signer:
      issuer: security_scanner
      expiration_time: 5m
      max_skew: 1m
      nonce_length: 32
      private_key:
        type: autogenerated
        options:
          rotate_every: 12h
          key_folder: /clair/config/
          key_server:
            type: keyregistry
            options:
              # QUAY_ENDPOINT defines the endpoint at which Quay is running.
              # For example: https://myregistry.mycompany.com
              registry: { QUAY_ENDPOINT }/keys/


  verifier_proxies:
  - enabled: true
    # The port at which Clair will listen.
    listen_addr: :6060

    # If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
    # section below for more information.
    # key_file: /clair/config/clair.key
    # crt_file: /clair/config/clair.crt

    verifier:
      # CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
      # specified here must match the listen_addr port a few lines above this.
      # Example: https://myclair.mycompany.com:6060
      audience: { CLAIR_ENDPOINT }

      upstream: http://localhost:6062
      key_server:
        type: keyregistry
        options:
          # QUAY_ENDPOINT defines the endpoint at which Quay is running.
          # Example: https://myregistry.mycompany.com
          registry: { QUAY_ENDPOINT }/keys/

Configuring Clair V2 for TLS

To configure Clair to run with TLS, a few additional steps are required.

Using certificates from a public CA

For certificates that come from a public certificate authority, follow these steps:

  1. Generate a TLS certificate and key pair for the DNS name at which Clair will be accessed

  2. Place these files as clair.crt and clair.key in your Clair configuration directory

  3. Uncomment the key_file and crt_file lines under verifier_proxies in your Clair config.yaml

If your certificates use a public CA, you are now ready to run Clair. If you are using your own certificate authority, configure Clair to trust it below.
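
For reference, after completing step 3 the verifier_proxies section of the Clair config.yaml resembles the following sketch, which simply uncomments the key_file and crt_file lines shown in the configuration examples above:

verifier_proxies:
- enabled: true
  # The port at which Clair will listen.
  listen_addr: :6060
  key_file: /clair/config/clair.key
  crt_file: /clair/config/clair.crt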

Configuring trust of self-signed SSL

Similar to the process for setting up Docker to trust your self-signed certificates, Clair must also be configured to trust your certificates. Using the same CA certificate bundle used to configure Docker, complete the following steps:

  1. Rename the same CA certificate bundle used to set up Quay Registry to ca.crt

  2. Make sure the ca.crt file is mounted inside the Clair container under /etc/pki/ca-trust/source/anchors/. You will need to build your own Clair container and run it during this step. Instructions for building the Clair container are not yet available.

Now Clair will be able to trust the source of your TLS certificates and use them to secure communication between Clair and Quay.

Using Clair V2 data sources

Before scanning container images, Clair tries to figure out the operating system on which the container was built. It does this by looking for specific filenames inside that image (see Table 1). Once Clair knows the operating system, it uses specific security databases to check for vulnerabilities (see Table 2).

Table 61. Container files that identify its operating system
Operating system Files identifying OS type

Red Hat/CentOS/Oracle

etc/oracle-release

etc/centos-release

etc/redhat-release

etc/system-release

Alpine

etc/alpine-release

Debian/Ubuntu

etc/os-release

usr/lib/os-release

etc/apt/sources.list

Ubuntu

etc/lsb-release

The data sources that Clair uses to scan containers are shown in Table 2.

Note

You must be sure that Clair has access to all listed data sources by whitelisting access to each data source's location. You might need to add a wildcard character (*) at the end of some URLs that might not be fully complete because they are dynamically built by code.

Table 62. Clair V2 data sources and data collected
Data source Data collected Format License

Debian Security Bug Tracker

Debian 6, 7, 8, unstable namespaces

dpkg

Ubuntu CVE Tracker

Ubuntu 12.04, 12.10, 13.04, 14.04, 14.10, 15.04, 15.10, 16.04 namespaces

dpkg

Red Hat Security Data

CentOS 5, 6, 7 namespaces

rpm

Oracle Linux Security Data

Oracle Linux 5, 6, 7 namespaces

rpm

Alpine SecDB

Alpine 3.3, 3.4, 3.5 namespaces

apk

MIT

NIST NVD

Generic vulnerability metadata

N/A

Amazon Linux Security Advisories

Amazon Linux 2018.03, 2 namespaces

rpm

Run Clair V2

Execute the following command to run Clair V2:

You will need to build your own Clair container and run it during this step. Instructions for building the Clair container are not yet available.

Output similar to the following will be seen on success:

2016-05-04 20:01:05,658 CRIT Supervisor running as root (no user in config file)
2016-05-04 20:01:05,662 INFO supervisord started with pid 1
2016-05-04 20:01:06,664 INFO spawned: 'jwtproxy' with pid 8
2016-05-04 20:01:06,666 INFO spawned: 'clair' with pid 9
2016-05-04 20:01:06,669 INFO spawned: 'generate_mitm_ca' with pid 10
time="2016-05-04T20:01:06Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:06Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
2016-05-04 20:01:06.715037 I | pgsql: running database migrations
time="2016-05-04T20:01:06Z" level=error msg="Failed to create forward proxy: open /certificates/mitm.crt: no such file or directory"
goose: no migrations to run. current version: 20151222113213
2016-05-04 20:01:06.730291 I | pgsql: database migration ran successfully
2016-05-04 20:01:06.730657 I | notifier: notifier service is disabled
2016-05-04 20:01:06.731110 I | api: starting main API on port 6062.
2016-05-04 20:01:06.736558 I | api: starting health API on port 6061.
2016-05-04 20:01:06.736649 I | updater: updater service is disabled.
2016-05-04 20:01:06,740 INFO exited: jwtproxy (exit status 0; not expected)
2016-05-04 20:01:08,004 INFO spawned: 'jwtproxy' with pid 1278
2016-05-04 20:01:08,004 INFO success: clair entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-05-04 20:01:08,004 INFO success: generate_mitm_ca entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
time="2016-05-04T20:01:08Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:08Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
time="2016-05-04T20:01:08Z" level=info msg="Starting forward proxy (Listening on ':6063')"
2016-05-04 20:01:08,541 INFO exited: generate_mitm_ca (exit status 0; expected)
2016-05-04 20:01:09,543 INFO success: jwtproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

To verify Clair V2 is running, execute the following command:

curl -X GET -I http://path/to/clair/here:6061/health

If a 200 OK code is returned, Clair is running:

HTTP/1.1 200 OK
Server: clair
Date: Wed, 04 May 2016 20:02:16 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8

Once Clair V2 and its associated database are running, you may need to restart your Project Quay application for the changes to take effect.