Getting started with Project Quay configuration
Project Quay is a secure artifact registry that can be deployed as a self-managed installation, or through the Red Hat Quay on OpenShift Container Platform Operator. Each deployment type offers a different approach to configuration and management, but both rely on the same set of configuration parameters to control registry behavior. Common configuration parameters allow administrators to define how their registry interacts with users, storage backends, authentication providers, security policies, and other integrated services.
There are two ways to configure Project Quay, depending on your deployment type:
-
On prem Project Quay: With an on prem Project Quay deployment, a registry administrator provides a
config.yaml
file that includes all required parameters. For this deployment type, the registry is unable to start without a valid configuration. -
Project Quay Operator: By default, the Project Quay Operator automatically configures your Project Quay deployment by generating the minimal required values and deploying the necessary components for you. After the initial deployment, you can customize your registry’s behavior by modifying the
QuayRegistry
custom resource, or by using the OpenShift Container Platform Web Console.
This guide offers an overview of the following configuration concepts:
-
How to retrieve, inspect, and modify your current configuration for both on prem and Operator-based Project Quay deployment types.
-
The minimal configuration fields required for startup.
-
An overview of all available Project Quay configuration fields and YAML examples for those fields.
Project Quay configuration disclaimer
In both self-managed and Operator-based deployments of Project Quay, certain features and configuration parameters are not actively used or implemented. As a result, feature flags that enable or disable specific functionality, and configuration parameters that are not explicitly documented or supported by Red Hat Support, should only be modified with caution.
Unused or undocumented features might not be fully tested, supported, or compatible with Project Quay. Modifying these settings could result in unexpected behavior or disruptions to your deployment.
Understanding the Project Quay configuration file
Whether deployed on premise or by the Red Hat Quay on OpenShift Container Platform Operator, the registry’s behavior is defined by the config.yaml
file. The config.yaml
file must include all required configuration fields for the registry to start. Project Quay administrators can also define optional parameters that customize their registry, such as authentication parameters, storage parameters, proxy cache parameters, and so on.
The config.yaml
file must be written using valid YAML ("YAML Ain’t Markup Language") syntax, and Project Quay cannot start if the file contains formatting errors or is missing required fields. Regardless of deployment type, whether on premise or Red Hat Quay on OpenShift Container Platform configured by the Operator, the same YAML principles apply, even if the required configuration fields differ slightly.
The following section outlines basic YAML syntax relevant to creating and editing the Project Quay config.yaml
file. For a more complete overview of YAML, see What is YAML.
Key-value pairs
Configuration fields within a config.yaml
file are written as key-value pairs in the following form:
# ... (1)
EXAMPLE_FIELD_NAME: <value>
# ... (1)
-
Denotes that there are fields before and after this specific field. Note that by supplying the
#
, or hash symbol, comments can be provided within the YAML file.
Each line within a config.yaml
file contains a field name, followed by a colon, a space, and then an appropriate value that matches with the key. The following example shows you how the AUTHENTICATION_TYPE
configuration field must be formatted in your config.yaml
file.
AUTHENTICATION_TYPE: Database (1)
# ...
-
The authentication engine to use for credential authentication.
In the previous example, the AUTHENTICATION_TYPE is set to Database; however, other authentication backends require a different value. The following example shows you how your config.yaml file might look if LDAP, or Lightweight Directory Access Protocol, was used for authentication:
AUTHENTICATION_TYPE: LDAP
# ...
Indentation and nesting
Many Project Quay configuration fields require indentation to indicate nested structures. Indentation must use literal space characters; tab characters are not allowed by design, and indentation must be consistent across the file. The following YAML snippet shows you how the BUILDLOGS_REDIS field uses indentation for the required host, password, and port fields:
# ...
BUILDLOGS_REDIS:
host: quay-server.example.com
password: example-password
port: 6379
# ...
Lists
In some cases, the Project Quay configuration field relies on lists to define certain values. Lists are formatted by using a hyphen (-
) followed by a space. The following example shows you how the SUPER_USERS
configuration field uses a list to define superusers:
# ...
SUPER_USERS:
- quayadmin
# ...
Quoted values
Some Project Quay configuration fields require the use of quotation marks (""
) to properly define a variable. This is generally not required. The following example shows you how the FOOTER_LINKS
configuration field uses quotation marks to define the TERMS_OF_SERVICE_URL
, PRIVACY_POLICY_URL
, SECURITY_URL
, and ABOUT_URL
:
FOOTER_LINKS:
"TERMS_OF_SERVICE_URL": "https://www.index.hr"
"PRIVACY_POLICY_URL": "https://www.jutarnji.hr"
"SECURITY_URL": "https://www.bug.hr"
"ABOUT_URL": "https://www.zagreb.hr"
Comments
The hash symbol, or #
, can be placed at the beginning of a line to add comments or to temporarily disable a configuration field. They are ignored by the configuration parser and will not affect the behavior of the registry. For example:
# ...
# FEATURE_UI_V2: true
# ...
In this example, the FEATURE_UI_V2 configuration field is ignored by the parser, meaning that the option to use the v2 UI is disabled. Commenting out a required configuration field prevents the registry from starting.
On prem Project Quay configuration overview
For on premise deployments of Project Quay, the config.yaml
file that is managed by the administrator is mounted into the container at startup and read by Project Quay during initialization. The config.yaml
file is not dynamically reloaded, meaning that any changes made to the file require restarting the registry container to take effect.
This chapter provides an overview of the following concepts:
-
The minimal required configuration fields.
-
How to edit and manage your configuration after deployment.
This section applies specifically to on premise Project Quay deployment types. For information about configuring Red Hat Quay on OpenShift Container Platform, see "Red Hat Quay on OpenShift Container Platform configuration overview".
Required configuration fields
The following configuration fields are required for an on premise deployment of Project Quay:
Field | Type | Description
---|---|---|
AUTHENTICATION_TYPE | String | The authentication engine to use for credential authentication.
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
DATABASE_SECRET_KEY | String | Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated.
DB_URI | String | The URI for accessing the database, including any credentials.
DISTRIBUTED_STORAGE_CONFIG | Object | Configuration for storage engine(s) to use in Project Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters.
SECRET_KEY | String | Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Project Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur.
SERVER_HOSTNAME | String | The URL at which Project Quay is accessible, without the scheme.
SETUP_COMPLETE | Boolean | This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true.
USER_EVENTS_REDIS | Object | Redis connection details for user event handling.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
Minimal configuration file examples
This section provides two examples of a minimal configuration file: one example that uses local storage, and another example that uses cloud-based storage with Google Cloud Platform.
Minimal configuration using local storage
The following example shows a sample minimal configuration file that uses local storage for images.
Important
|
Only use local storage when deploying a registry for proof of concept purposes. It is not intended for production purposes. When using local storage, you must map a local directory to the /datastorage path in the registry container when starting it, for example, by passing -v /home/<username>/<quay-deployment-directory>/storage:/datastorage:Z to Podman. |
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <password>
port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
default:
- LocalStorage
- storage_path: /datastorage/registry
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
host: <redis_events_url>
password: <password>
port: <port>
Minimal configuration using cloud-based storage
In most production environments, Project Quay administrators use cloud or enterprise-grade storage backends provided by supported vendors. The following example shows you how to configure Project Quay to use Google Cloud Platform for image storage. For a complete list of supported storage providers, see Image storage.
Note
|
When using a cloud or enterprise-grade storage backend, additional configuration, such as mapping the registry to a local directory, is not required. |
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <password>
port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
default:
- GoogleCloudStorage
- access_key: <access_key>
bucket_name: <bucket_name>
secret_key: <secret_key>
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
host: <redis_events_url>
password: <password>
port: <port>
Modifying your configuration file after deployment
After deploying a Project Quay registry with an initial config.yaml
file, Project Quay administrators can update the configuration file to enable or disable features as needed. This flexibility allows administrators to tailor the registry to fit their specific environment needs, or to meet certain security policies.
Note
|
Because the config.yaml file is not dynamically reloaded, you must restart the registry container for configuration changes to take effect. |
The following procedure shows you how to retrieve the config.yaml
file from the quay-registry
container, how to enable a new feature by adding that feature’s configuration field to the file, and how to restart the quay-registry
container using Podman.
-
You have deployed Project Quay.
-
You are a registry administrator.
-
If you have access to the
config.yaml
file:
-
Navigate to the directory that is storing the
config.yaml
file. For example:
$ cd /home/<username>/<quay-deployment-directory>/config
-
Make changes to the
config.yaml
file by adding a new feature flag. The following example enables the v2 UI:
# ...
FEATURE_UI_V2: true
# ...
-
Save the changes made to the
config.yaml
file. -
Restart the
quay-registry
container by entering the following command:
$ podman restart <container_id>
-
-
If you do not have access to the
config.yaml
file and need to create a new file while keeping the same credentials:
-
Retrieve the container ID of your
quay-registry
pod by entering the following command:
$ podman ps
Example output
CONTAINER ID  IMAGE                                                                      COMMAND         CREATED       STATUS       PORTS                                                                       NAMES
5f2297ef53ff  registry.redhat.io/rhel8/postgresql-13:1-109                               run-postgresql  20 hours ago  Up 20 hours  0.0.0.0:5432->5432/tcp                                                      postgresql-quay
3b40fb83bead  registry.redhat.io/rhel8/redis-5:1                                         run-redis       20 hours ago  Up 20 hours  0.0.0.0:6379->6379/tcp                                                      redis
0b4b8fbfca6d  registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.14.0-14   registry        20 hours ago  Up 20 hours  0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, 7443/tcp, 9091/tcp, 55443/tcp  quay
-
Copy the
config.yaml
file from the quay-registry
pod to a directory by entering the following command:
$ podman cp <container_id>:/quay-registry/conf/stack/config.yaml ./config.yaml
-
Make changes to the
config.yaml
file by adding a new feature flag. The following example sets the AUTHENTICATION_TYPE to LDAP:
# ...
AUTHENTICATION_TYPE: LDAP
# ...
-
Re-deploy the registry, mounting the
config.yaml
file into the quay-registry
configuration volume by entering the following command:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
  --name=quay \
  -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z \
  registry.redhat.io/quay/quay-rhel8:v3.14.0
-
Troubleshooting the configuration file
Failure to add all of the required configuration fields, or to provide the proper information for some parameters, might result in the quay-registry
container failing to deploy. Use the following procedure to view and troubleshoot a failed on premise deployment type.
-
You have created a minimal configuration file.
-
Attempt to deploy the
quay-registry
container by entering the following command. Note that this command uses the -it option, which shows you debugging information:
$ podman run -it --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z -v /home/<username>/<quay-deployment-directory>/storage:/datastorage:Z 33f1c3dc86be
Example output
---
+------------------------+-------+--------+
| LDAP                   |   -   |   X    |
+------------------------+-------+--------+
| LDAP_ADMIN_DN is required           | X |
+-----------------------------------------+
| LDAP_ADMIN_PSSWD is required        | X |
+-----------------------------------------+
| . . . Connection refused            | X |
+-----------------------------------------+
---
In this example, the
quay-registry
container failed to deploy because improper LDAP credentials were provided.
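To recover from this class of failure, supply the LDAP fields that the validator reports as missing, then redeploy the container. The following config.yaml sketch is illustrative only: the connection fields are taken from the Project Quay LDAP documentation, and the server URI, DNs, and password are placeholder assumptions:
# ...
AUTHENTICATION_TYPE: LDAP
LDAP_URI: ldaps://ldap.example.com # assumed LDAP server address
LDAP_BASE_DN:
- dc=example
- dc=com
LDAP_ADMIN_DN: uid=<admin_user>,ou=users,dc=example,dc=com
LDAP_ADMIN_PASSWD: <ldap_admin_password> # the validator output abbreviates this field name
# ...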
Red Hat Quay on OpenShift Container Platform configuration overview
When deploying Project Quay using the Operator on OpenShift Container Platform, configuration is managed declaratively through the QuayRegistry
custom resource (CR). This model allows cluster administrators to define the desired state of the Project Quay deployment, including which components are enabled, storage backends, SSL/TLS configuration, and other core features.
After deploying Red Hat Quay on OpenShift Container Platform with the Operator, administrators can further customize their registry by updating the config.yaml
file and referencing it in a Kubernetes secret. This configuration bundle is linked to the QuayRegistry
CR through the configBundleSecret
field.
The Operator reconciles the state defined in the QuayRegistry
CR and its associated configuration, automatically deploying or updating registry components as needed.
This guide covers the basic concepts behind the QuayRegistry
CR and modifying your config.yaml
file on Red Hat Quay on OpenShift Container Platform deployments. More advanced topics, such as using unmanaged components within the QuayRegistry
CR, can be found in Deploying Project Quay Operator on OpenShift Container Platform.
Understanding the QuayRegistry CR
By default, the QuayRegistry
CR contains the following key fields:
-
configBundleSecret
: The name of a Kubernetes Secret containing theconfig.yaml
file which defines additional configuration parameters. -
name
: The name of your Project Quay registry. -
namespace
: The namespace, or project, in which the registry was created. -
spec.components
: A list of components that the Operator automatically manages. Each spec.component
field contains two fields:
-
kind
: The name of the component. -
managed
: A boolean that addresses whether the component lifecycle is handled by the Project Quay Operator. Setting managed: true
for a component in the
CR means that the Operator manages the component.
-
All QuayRegistry
components are automatically managed and auto-filled upon reconciliation for visibility unless specified otherwise. The following sections highlight the major QuayRegistry
components and provide an example YAML file that shows the default settings.
Managed components
By default, the Operator handles all required configuration and installation needed for Project Quay’s managed components.
If the opinionated deployment performed by the Project Quay Operator is unsuitable for your environment, you can provide the Project Quay Operator with unmanaged
resources, or overrides, as described in Using unmanaged components.
Field | Type | Description
---|---|---|
quay | Boolean | Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (managed: false).
postgres | Boolean | Used for storing registry metadata. Currently, PostgreSQL version 13 is used.
clair | Boolean | Provides image vulnerability scanning.
redis | Boolean | Stores live builder logs and the locking mechanism that is required for garbage collection.
horizontalpodautoscaler | Boolean | Adjusts the number of Quay pods depending on memory and CPU consumption.
objectstorage | Boolean | Stores image layer blobs. When set to unmanaged (managed: false), you must provide and configure your own object storage backend.
route | Boolean | Provides an external entrypoint to the Project Quay registry from outside of OpenShift Container Platform.
mirror | Boolean | Configures repository mirror workers to support optional repository mirroring.
monitoring | Boolean | Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting Quay pods.
tls | Boolean | Configures whether SSL/TLS is automatically handled.
clairpostgres | Boolean | Configures a managed Clair database. This is a separate database than the PostgreSQL database that is used to deploy Project Quay.
The following example shows you the default configuration for the QuayRegistry
custom resource provided by the Project Quay Operator. It is available on the OpenShift Container Platform web console.
QuayRegistry
custom resource
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: <example_registry>
namespace: <namespace>
spec:
configBundleSecret: config-bundle-secret
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
Modifying the QuayRegistry CR after deployment
After you have installed the Project Quay Operator and created an initial deployment, you can modify the QuayRegistry
custom resource (CR) to customize or reconfigure aspects of the Red Hat Quay environment.
Project Quay administrators might modify the QuayRegistry CR for the following reasons:
-
To change component management: Switch components from
managed: true
tomanaged: false
in order to bring your own infrastructure. For example, you might setkind: objectstorage
to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix. -
To apply custom configuration: Update or replace the
configBundleSecret
to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, feature flags. -
To enable or disable features: Toggle features like repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the
spec.components
list. -
To scale the deployment: Adjust environment variables or replica counts for the Quay application, as shown in the example after this list.
-
To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.
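For example, the scaling case can be expressed directly in the CR. The following sketch assumes the Operator's overrides mechanism for the quay component, which the managed components table above describes as holding environment variables and the number of replicas; the registry name, namespace, and replica count are placeholders:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: <example_registry>
  namespace: <namespace>
spec:
  components:
    - kind: quay
      managed: true
      overrides: # assumed override mechanism; see the managed components table
        replicas: 3 # illustrative replica count for the Quay application pods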
Modifying the QuayRegistry CR by using the OpenShift Container Platform web console
The QuayRegistry
can be modified by using the OpenShift Container Platform web console. This allows you to set managed components to unmanaged (managed: false
) and use your own infrastructure.
-
You are logged into OpenShift Container Platform as a user with admin privileges.
-
You have installed the Project Quay Operator.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators.
-
Click Red Hat Quay.
-
Click Quay Registry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
Click YAML.
-
Adjust the
managed
field of the desired component to eithertrue
orfalse
. -
Click Save.
Note
Setting a component to unmanaged (
managed: false
) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry
CR, see Using unmanaged components for dependencies.
Modifying the QuayRegistry CR by using the CLI
The QuayRegistry
CR can be modified by using the CLI. This allows you to set managed components to unmanaged (managed: false
) and use your own infrastructure.
-
You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.
-
Edit the
QuayRegistry
CR by entering the following command:
$ oc edit quayregistry <registry_name> -n <namespace>
-
Make the desired changes to the
QuayRegistry
CR.
Note
Setting a component to unmanaged (
managed: false
) might require additional configuration. For more information about setting unmanaged components in theQuayRegistry
CR, see Using unmanaged components for dependencies. -
Save the changes.
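For example, to hand SSL/TLS handling over to your own infrastructure, the components list in the editor session might be changed as follows. The tls component is only an illustration; the same managed: false pattern applies to any component that supports it:
# ...
  components:
    - kind: tls
      managed: false
# ...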
Understanding the configBundleSecret
The spec.configBundleSecret
field is an optional reference to the name of a Secret in the same namespace as the QuayRegistry
resource. This Secret must contain a config.yaml
key/value pair, where the value is a Project Quay configuration file.
The configBundleSecret
stores the config.yaml
file. Project Quay administrators can define the following settings through the config.yaml
file:
-
Authentication backends (for example, OIDC, LDAP)
-
External TLS termination settings
-
Repository creation policies
-
Feature flags
-
Notification settings
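Taken together, the Secret itself has a simple shape. The following sketch shows the expected structure; the name, namespace, and base64 payload are placeholders:
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret # must match spec.configBundleSecret in the QuayRegistry CR
  namespace: <namespace> # the same namespace as the QuayRegistry resource
data:
  config.yaml: RkVBVFVSRV9VU0... # base64-encoded config.yaml contents (truncated placeholder)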
Project Quay administrators might update this secret for the following reasons:
-
Enable a new authentication method
-
Add custom SSL/TLS certificates
-
Enable features
-
Modify security scanning settings
If this field is omitted, the Project Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml
are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay
application pods.
How the QuayRegistry
CR is configured determines which fields must be included in the configBundleSecret’s config.yaml
file for Red Hat Quay on OpenShift Container Platform. The following example shows you a default config.yaml
file when all components are managed by the Operator. Note that this example looks different depending on whether components are managed or unmanaged (managed: false
).
ALLOW_PULLS_WITHOUT_STRICT_LOGGING: false
AUTHENTICATION_TYPE: Database
DEFAULT_TAG_EXPIRATION: 2w
ENTERPRISE_LOGO_URL: /static/img/RH_Logo_Quay_Black_UX-horizontal.svg
FEATURE_BUILD_SUPPORT: false
FEATURE_DIRECT_LOGIN: true
FEATURE_MAILING: false
REGISTRY_TITLE: Red Hat Quay
REGISTRY_TITLE_SHORT: Red Hat Quay
SETUP_COMPLETE: true
TAG_EXPIRATION_OPTIONS:
- 2w
TEAM_RESYNC_STALE_TIME: 60m
TESTING: false
In some cases, you might opt to manage certain components yourself, for example, object storage. In that scenario, you would modify the QuayRegistry
CR as follows:
# ...
- kind: objectstorage
managed: false
# ...
If you are managing your own components, your deployment must be configured to include the necessary information or resources for that component. For example, if the objectstorage
component is set to managed: false
, you would include the relevant information depending on your storage provider inside of the config.yaml
file. The following example shows you a distributed storage configuration using Google Cloud Storage:
# ...
DISTRIBUTED_STORAGE_CONFIG:
default:
- GoogleCloudStorage
- access_key: <access_key>
bucket_name: <bucket_name>
secret_key: <secret_key>
storage_path: /datastorage/registry
# ...
Similarly, if you are managing the horizontalpodautoscaler
component, you must create an accompanying HorizontalPodAutoscaler
custom resource.
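A minimal sketch of such a resource follows. The deployment name, namespace, and scaling thresholds are assumptions for illustration; match them to your own Quay application deployment:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: <example_registry>-quay-app # hypothetical name; target your Quay application deployment
  namespace: <namespace>
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <example_registry>-quay-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90 # illustrative CPU utilization target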
Modifying the configuration file by using the OpenShift Container Platform web console
Use the following procedure to modify the config.yaml
file that is stored by the configBundleSecret
by using the OpenShift Container Platform web console.
-
You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
Click Quay Registry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
On the QuayRegistry details page, click the name of your Config Bundle Secret, for example, example-registry-config-bundle.
-
Click Actions → Edit Secret.
-
In the Value box, add the desired key/value pair. For example, to add a superuser to your Red Hat Quay on OpenShift Container Platform deployment, add the following reference:
SUPER_USERS:
- quayadmin
-
Click Save.
-
Verify that the changes have been accepted:
-
On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
-
Click Quay Registry.
-
Click the name of your Project Quay registry, for example, example-registry.
-
Click Events. If successful, the following message is displayed:
All objects created/updated successfully
-
Note
|
You must base64 encode any updated config.yaml before placing it in the Secret. Ensure the Secret name matches the value specified in spec.configBundleSecret. Once the Secret is updated, the Operator detects the change and automatically rolls out updates to the Red Hat Quay pods. |
For detailed steps, see "Updating configuration secrets through the Project Quay UI."
Modifying the configuration file by using the CLI
You can modify the config.yaml
file that is stored by the configBundleSecret
by downloading the existing configuration using the CLI. After making changes, you can re-upload the configBundleSecret
resource to make changes to the Project Quay registry.
Note
|
Modifying the configBundleSecret resource triggers an automatic redeployment of your Project Quay pods. |
-
You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
-
Describe the
QuayRegistry
resource by entering the following command:$ oc describe quayregistry -n <quay_namespace>
Example output
# ...
Config Bundle Secret: example-registry-config-bundle-v123x
# ...
-
Obtain the secret data by entering the following command:
$ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'
Example output
{
  "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo="
}
-
Decode the data into a YAML file in the current directory by redirecting the output to config.yaml. For example:
$ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
-
Make the desired changes to your
config.yaml
file, and then save the file asconfig.yaml
. -
Create a new
configBundleSecret
YAML file by entering the following command:
$ touch <new_configBundleSecret_name>.yaml
-
Create the new
configBundleSecret
resource, passing in the config.yaml
file, by entering the following command:
$ oc -n <namespace> create secret generic <secret_name> \
  --from-file=config.yaml=</path/to/config.yaml> \ (1)
  --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml
-
Where
<config.yaml>
is your base64-decoded
config.yaml
file.
-
-
Create the
configBundleSecret
resource by entering the following command:
$ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml
Example output
secret/config-bundle created
-
Update the
QuayRegistry
YAML file to reference the newconfigBundleSecret
object by entering the following command:
$ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'
Example output
quayregistry.quay.redhat.com/example-registry patched
-
Verify that the
QuayRegistry
CR has been updated with the newconfigBundleSecret
:$ oc describe quayregistry -n <quay_namespace>
Example output
# ...
Config Bundle Secret: <new_configBundleSecret_name>
# ...
After patching the registry, the Project Quay Operator automatically reconciles the changes.
New configuration fields with Project Quay 3.14
The following sections detail new configuration fields added in Project Quay 3.14.
Model card rendering configuration fields
The following configuration fields have been added to support model card rendering on the v2 UI.
Field | Type | Description
---|---|---|
FEATURE_UI_MODELCARD | Boolean | Enables the model card image tab in the UI. Defaults to true.
UI_MODELCARD_ARTIFACT_TYPE | String | Defines the model card artifact type.
UI_MODELCARD_ANNOTATION | Object | This optional field defines the annotation of the model card stored in an OCI image.
UI_MODELCARD_LAYER_ANNOTATION | Object | This optional field defines the layer annotation of the model card stored in an OCI image.
FEATURE_UI_MODELCARD: true (1)
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel (2)
UI_MODELCARD_ANNOTATION: (3)
org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION: (4)
org.opencontainers.image.title: README.md
-
Enables the Model Card image tab in the UI.
-
Defines the model card artifact type. In this example, the artifact type is
application/x-mlmodel
. -
Optional. If an image does not have an
artifactType
defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matchingUI_MODELCARD_LAYER_ANNOTATION
. -
Optional. If an image has an
artifactType
defined and multiple layers, this field is used to locate the specific layer containing the model card.
Footer configuration fields
The following configuration fields have been added to the original (v1) UI. You can use these fields to customize the footer of your on-prem v1 UI.
Note
|
These fields are currently unavailable on the Project Quay v2 UI. |
Field | Type | Description
---|---|---|
FOOTER_LINKS | Object | Enable customization of footer links in Project Quay’s UI for on-prem installations.
.TERMS_OF_SERVICE_URL | String | Custom terms of service for on-prem installations.
.PRIVACY_POLICY_URL | String | Custom privacy policy for on-prem installations.
.SECURITY_URL | String | Custom security page for on-prem installations.
.ABOUT_URL | String | Custom about page for on-prem installations.
FOOTER_LINKS:
"TERMS_OF_SERVICE_URL": "https://www.index.hr"
"PRIVACY_POLICY_URL": "https://www.example.hr"
"SECURITY_URL": "https://www.example.hr"
"ABOUT_URL": "https://www.example.hr"
Required configuration fields
Project Quay requires a minimal set of configuration fields to operate correctly. These fields define essential aspects of your deployment, such as how the registry is accessed, where image content is stored, how metadata is persisted, and how background services such as logs are managed.
The required configuration fields fall into four main categories:
-
General required configuration fields. Core fields such as the authentication type, URL scheme, server hostname, database secret key, and secret key are covered in this section.
-
Database configuration fields. Project Quay requires a PostgreSQL relational database to store metadata about repositories, users, teams, and tags.
-
Object storage configuration fields. Object storage fields define the backend where container image blobs and manifests are stored. Your storage backend must be supported by Project Quay, such as Ceph/RadosGW, AWS S3, Google Cloud Storage, Nutanix, and so on.
-
Redis configuration fields. Redis is used as a backend for data such as push logs, user notifications, and other operations.
General required configuration fields
The following table describes the required configuration fields for a Project Quay deployment:
Field | Type | Description
---|---|---|
AUTHENTICATION_TYPE | String | The authentication engine to use for credential authentication.
PREFERRED_URL_SCHEME | String | The URL scheme to use when accessing Project Quay.
SERVER_HOSTNAME | String | The URL at which Project Quay is accessible, without the scheme.
DATABASE_SECRET_KEY | String | Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated.
SECRET_KEY | String | Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Project Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur.
SETUP_COMPLETE | Boolean | This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true.
AUTHENTICATION_TYPE: Database
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: <quay-server.example.com>
SECRET_KEY: <secret_key_value>
DATABASE_SECRET_KEY: <database_secret_key_value>
SETUP_COMPLETE: true
# ...
Database configuration fields
This section describes the database configuration fields available for Project Quay deployments.
Database URI
With Project Quay, connection to the database is configured by using the required DB_URI
field.
The following table describes the DB_URI
configuration field:
Field | Type | Description
---|---|---|
DB_URI | String | The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
# ...
Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS
parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS
are generic, while others are database specific.
Field | Type | Description
---|---|---|
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL/TLS.
.autorollback | Boolean | Whether to use auto-rollback connections.
.threadlocals | Boolean | Whether to use thread-local connections.
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
autorollback: true
threadlocals: true
# ...
SSL/TLS connection arguments
With SSL/TLS, configuration depends on the database you are deploying.
The sslmode
option determines whether, or with what priority, a secure SSL/TLS TCP/IP connection is negotiated with the server. There are six modes:
Mode | Description
---|---|
disable | Your configuration only tries non-SSL/TLS connections.
allow | Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection.
prefer | Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection.
require | Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified.
verify-ca | Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA).
verify-full | Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate.
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
# ...
DB_CONNECTION_ARGS:
sslmode: <value>
sslrootcert: path/to/.postgresql/root.crt
# ...
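For example, a hedged sketch of a full-verification connection, reusing the DB_URI from the earlier example; the CA certificate path is an assumption and should point at wherever you mount your trusted root certificate:
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
  sslmode: verify-full # reject the connection unless the server certificate verifies
  sslrootcert: /conf/stack/postgres-ca.crt # assumed mount path for the root CA file
# ...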
Storage object configuration fields
Storage fields define the backend where container image blobs and manifests are stored. The following storage providers are supported by Project Quay:
-
Amazon Web Services (AWS) S3
-
AWS STS S3 (Security Token Service)
-
AWS CloudFront (CloudFront S3Storage)
-
Google Cloud Storage
-
Microsoft Azure Blob Storage
-
Swift Storage
-
Nutanix Object Storage
-
IBM Cloud Object Storage
-
NetApp ONTAP S3 Object Storage
-
Hitachi Content Platform (HCP) Object Storage
Note
|
Many of the supported storage providers, for example Nutanix, NetApp ONTAP S3, and Hitachi Content Platform, use the RadosGWStorage driver because of its S3 compatibility, as shown in the examples that follow. |
Storage configuration fields
The following table describes the storage configuration fields for Project Quay. These fields are required when configuring backend storage.
Field | Type | Description
---|---|---|
DISTRIBUTED_STORAGE_CONFIG | Object | Configuration for storage engine(s) to use in Project Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters.
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS | Array of String | The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) whose images should be fully replicated, by default, to all other storage engines.
DISTRIBUTED_STORAGE_PREFERENCE | Array of String | The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. A preferred engine means it is first checked for pulling and images are pushed to it.
MAXIMUM_LAYER_SIZE | String | Maximum allowed size of an image layer.
DISTRIBUTED_STORAGE_CONFIG:
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
MAXIMUM_LAYER_SIZE: 100G
Local storage
The following YAML shows an example configuration using local storage.
Important
|
Only use local storage when deploying a registry for proof of concept purposes. It is not intended for production purposes. When using local storage, you must map a local directory to the /datastorage path in the registry container when starting it. |
DISTRIBUTED_STORAGE_CONFIG:
default:
- LocalStorage
- storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
Red Hat OpenShift Data Foundation
The following YAML shows a sample configuration using Red Hat OpenShift Data Foundation:
DISTRIBUTED_STORAGE_CONFIG:
rhocsStorage:
- RHOCSStorage
- access_key: <access_key_here>
secret_key: <secret_key_here>
bucket_name: <bucket_name>
hostname: <hostname>
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
maximum_chunk_size_mb: 100 (1)
server_side_assembly: true (2)
-
Defines the maximum chunk size, in MB, for the final copy. Has no effect if
server_side_assembly
is set tofalse
. -
Optional. Whether Project Quay should try to use server-side assembly and the final chunked copy instead of client assembly. Defaults to
true
.
Ceph Object Gateway (RadosGW) storage example
Project Quay supports using Ceph Object Gateway (RadosGW) as an object storage backend. RadosGW is a component of Red Hat Ceph Storage, which is a storage platform engineered for private architecture. Red Hat Ceph Storage provides an S3-compatible REST API for interacting with Ceph.
Note
|
RadosGW is an on-premise S3-compatible storage solution. It implements the S3 API and requires the same authentication fields, such as access_key and secret_key, as other S3-compatible backends. |
The following YAML shows an example configuration using RadosGW.
DISTRIBUTED_STORAGE_CONFIG:
radosGWStorage: (1)
- RadosGWStorage
- access_key: <access_key_here>
bucket_name: <bucket_name_here>
hostname: <hostname_here>
is_secure: true
port: '443'
secret_key: <secret_key_here>
storage_path: /datastorage/registry
maximum_chunk_size_mb: 100 (2)
server_side_assembly: true (3)
-
Used for general S3 access. Note that general S3 access is not strictly limited to Amazon Web Services (AWS) S3, and can be used with RadosGW or other storage services. For an example of general S3 access using the AWS S3 driver, see "AWS S3 storage".
-
Optional. Defines the maximum chunk size in MB for the final copy. Has no effect if
server_side_assembly
is set tofalse
. -
Optional. Whether Project Quay should try to use server-side assembly and the final chunked copy instead of client assembly. Defaults to
true
.
Supported AWS storage backends
Project Quay supports multiple Amazon Web Services (AWS) storage backends:
-
S3 storage: Standard support for AWS S3 buckets that uses AWS’s native object storage service.
-
STS S3 storage: Support for AWS Security Token Service (STS) to assume IAM roles, allowing for more secure S3 operations.
-
CloudFront S3 storage: Integrates with AWS CloudFront to enable high-availability distribution of content while still using AWS S3 as the origin.
The following sections provide example YAMLs and additional information about each AWS storage backend.
Amazon Web Services S3 storage
Project Quay supports using AWS S3 as an object storage backend. AWS S3 is an object storage service designed for data availability, scalability, security, and performance. The following YAML shows an example configuration using AWS S3.
# ...
DISTRIBUTED_STORAGE_CONFIG:
default:
- S3Storage (1)
- host: s3.us-east-2.amazonaws.com
s3_access_key: ABCDEFGHIJKLMN
s3_secret_key: OL3ABCDEFGHIJKLMN
s3_bucket: quay_bucket
s3_region: <region> (2)
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
# ...
-
The
S3Storage
storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access". -
Optional. The Amazon Web Services region. Defaults to
us-east-1
.
Amazon Web Services STS S3 storage
AWS Security Token Service (STS) provides temporary, limited-privilege credentials for accessing AWS resources, improving security by avoiding the need to store long-term access keys. This is useful in environments such as OpenShift Container Platform where credentials can be rotated or managed through IAM roles.
The following YAML shows an example configuration for using AWS STS with Red Hat Quay on OpenShift Container Platform configurations.
# ...
DISTRIBUTED_STORAGE_CONFIG:
default:
- STSS3Storage
- sts_role_arn: <role_arn> (1)
s3_bucket: <s3_bucket_name>
storage_path: <storage_path>
sts_user_access_key: <s3_user_access_key> (2)
sts_user_secret_key: <s3_user_secret_key> (3)
s3_region: <region> (4)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
# ...
-
The unique Amazon Resource Name (ARN).
-
The generated AWS S3 user access key.
-
The generated AWS S3 user secret key.
-
Optional. The Amazon Web Services region. Defaults to
us-east-1
.
AWS CloudFront storage
AWS CloudFront is a content delivery network (CDN) service that caches and distributes content closer to users for improved performance and lower latency. Project Quay supports CloudFront through the CloudFrontedS3Storage
driver, which enables secure, signed access to S3 buckets via CloudFront distributions.
Use the following example when configuring AWS CloudFront for your Project Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
default:
- CloudFrontedS3Storage
- cloudfront_distribution_domain: <CLOUDFRONT_DISTRIBUTION_DOMAIN>
cloudfront_key_id: <CLOUDFRONT_KEY_ID>
cloudfront_privatekey_filename: <CLOUDFRONT_PRIVATE_KEY_FILENAME>
host: <S3_HOST>
s3_access_key: <S3_ACCESS_KEY>
s3_bucket: <S3_BUCKET_NAME>
s3_secret_key: <S3_SECRET_KEY>
storage_path: <STORAGE_PATH>
s3_region: <S3_REGION>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- default
DISTRIBUTED_STORAGE_PREFERENCE:
- default
The following example shows an Amazon S3 bucket policy that grants the CloudFront Origin Access Identity (OAI) access to the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>" (1) (2)
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<S3_BUCKET_NAME>/*" (3)
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>" (1) (2)
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<S3_BUCKET_NAME>"
}
]
}
-
The identifier, or account ID, of the AWS account that owns the CloudFront OAI and S3 bucket.
-
The CloudFront Origin Access Identity (OAI) that accesses the S3 bucket.
-
Specifies that CloudFront can access all objects (
/*
) inside of the S3 bucket.
Google Cloud Storage
Project Quay supports using Google Cloud Storage (GCS) as an object storage backend. When used with Project Quay, it provides a cloud-native solution for storing container images and artifacts.
The following YAML shows a sample configuration using Google Cloud Storage.
DISTRIBUTED_STORAGE_CONFIG:
googleCloudStorage:
- GoogleCloudStorage
- access_key: <access_key>
bucket_name: <bucket_name>
secret_key: <secret_key>
storage_path: /datastorage/registry
boto_timeout: 120 (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- googleCloudStorage
-
Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. The default is
60
seconds. Also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default is60
seconds.
Microsoft Azure Blob Storage
Project Quay supports using Microsoft Azure Blob Storage as an object storage backend. Azure Blob Storage can be used to persist container images, metadata, and other artifacts in a secure and cloud-native manner.
The following YAML shows a sample configuration using Azure Storage.
DISTRIBUTED_STORAGE_CONFIG:
azureStorage:
- AzureStorage
- azure_account_name: <azure_account_name>
azure_container: <azure_container_name>
storage_path: /datastorage/registry
azure_account_key: <azure_account_key>
sas_token: some/path/
endpoint_url: https://[account-name].blob.core.usgovcloudapi.net (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- azureStorage
-
The
endpoint_url
parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, theendpoint_url
will connect to the normal Azure region.As of Project Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service will result in the following error:
AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary
.
Swift object storage
Project Quay supports using Red Hat OpenStack Platform (RHOSP) Object Storage service, or Swift, as an object storage backend. Swift offers S3-like functionality with its own API and authentication mechanisms.
The following YAML shows a sample configuration using Swift storage.
DISTRIBUTED_STORAGE_CONFIG:
swiftStorage:
- SwiftStorage
- swift_user: <swift_username>
swift_password: <swift_password>
swift_container: <swift_container>
auth_url: https://example.org/swift/v1/quay
auth_version: 3
os_options:
tenant_id: <osp_tenant_id>
user_domain_name: <osp_domain_name>
ca_cert_path: /conf/stack/swift.cert"
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- swiftStorage
Nutanix Objects Storage
Project Quay supports Nutanix Objects Storage as an object storage backend. Nutanix Object Storage is suitable for organizations running private cloud infrastructure using Nutanix.
The following YAML shows a sample configuration using Nutanix Object Storage.
DISTRIBUTED_STORAGE_CONFIG:
nutanixStorage: # storage config name
- RadosGWStorage # actual driver
- access_key: <access_key>
secret_key: <secret_key>
bucket_name: <bucket_name>
hostname: <hostname>
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: # must contain name of the storage config
- nutanixStorage
IBM Cloud Object Storage
Project Quay supports IBM Cloud Object Storage as an object storage backend. IBM Cloud Object Storage is suitable for cloud-native applications requiring scalable and secure storage on IBM Cloud.
The following YAML shows a sample configuration using IBM Cloud Object Storage.
DISTRIBUTED_STORAGE_CONFIG:
default:
- IBMCloudStorage # actual driver
- access_key: <access_key> # parameters
secret_key: <secret_key>
bucket_name: <bucket_name>
hostname: <hostname>
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
maximum_chunk_size_mb: 100mb (1)
minimum_chunk_size_mb: 5mb (2)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- default
DISTRIBUTED_STORAGE_PREFERENCE:
- default
-
Optional. Recommended to be set to
100mb
. -
Optional. Defaults to
5mb
. Do not adjust this field without consulting Red Hat Support, because it can have unintended consequences.
NetApp ONTAP S3 object storage
Project Quay supports using NetApp ONTAP S3 as an object storage backend.
The following YAML shows a sample configuration using NetApp ONTAP S3.
DISTRIBUTED_STORAGE_CONFIG:
local_us:
- RadosGWStorage
- access_key: <access_key>
bucket_name: <bucket_name>
hostname: <host_url_address>
is_secure: true
port: <port>
secret_key: <secret_key>
storage_path: /datastorage/registry
signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- local_us
DISTRIBUTED_STORAGE_PREFERENCE:
- local_us
Hitachi Content Platform object storage
Project Quay supports using Hitachi Content Platform (HCP) as an object storage backend.
The following YAML shows a sample configuration using HCP for object storage.
DISTRIBUTED_STORAGE_CONFIG:
hcp_us:
- RadosGWStorage
- access_key: <access_key>
bucket_name: <bucket_name>
hostname: <hitachi_hostname_example>
is_secure: true
secret_key: <secret_key>
storage_path: /datastorage/registry
signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- hcp_us
DISTRIBUTED_STORAGE_PREFERENCE:
- hcp_us
Redis configuration fields
Redis is used by Project Quay to support backend tasks and services, such as build triggers and notifications. There are two configuration types related to Redis: build logs and user events. The following sections detail the configuration fields available for each type.
Build logs
Build logs are generated during the image build process and provide insights for debugging and auditing. Project Quay uses Redis to temporarily store these logs before they are accessed through the user interface or API.
The following build logs configuration fields are available for Redis deployments.
Field | Type | Description
---|---|---|
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false.
# ...
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <example_password>
port: 6379 (1)
ssl: true (1)
# ...
-
If your deployment uses Azure Cache for Redis and
ssl
is set totrue
, the port defaults to6380
.
User events
User events track activity across Project Quay, such as repository pushes, tag creations, deletions, and permission changes. These events are recorded in Redis as part of the activity stream and can be accessed through the API or web interface.
The following user event fields are available for Redis deployments.
Field | Type | Description
---|---|---|
USER_EVENTS_REDIS | Object | Redis connection details for user event handling.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false.
.ssl_keyfile | String | The name of the key database file, which houses the client certificate to be used.
.ssl_certfile | String | Used for specifying the file path of the SSL certificate.
.ssl_cert_reqs | String | Used to specify the level of certificate validation to be performed during the SSL/TLS handshake.
.ssl_ca_certs | String | Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates.
.ssl_ca_data | String | Used to specify a string containing the trusted CA certificates in PEM format.
.ssl_check_hostname | Boolean | Used when setting up an SSL/TLS connection to a server. It specifies whether the client should check that the hostname in the server’s SSL/TLS certificate matches the hostname of the server it is connecting to.
# ...
USER_EVENTS_REDIS:
host: <quay-redis.example.com>
port: 6379
password: <example_password>
ssl: true
ssl_keyfile: /etc/ssl/private/redis-client.key
ssl_certfile: /etc/ssl/certs/redis-client.crt
ssl_cert_reqs: <required_certificate>
ssl_ca_certs: /etc/ssl/certs/ca-bundle.crt
ssl_check_hostname: true
# ...
Automation configuration options
Project Quay supports various mechanisms for automating deployment and configuration, which allows the integration of Project Quay into GitOps and CI/CD pipelines. By defining these options and leveraging the API, Project Quay can be initialized and managed without using the UI.
Note
|
Because the Project Quay Operator manages the config.yaml file through the configBundleSecret resource, pre-configuration of Operator-based deployments is done by updating that secret. For on-premise Project Quay deployments, pre-configuration is done by manually creating a valid config.yaml file before starting the registry. |
Automation options are ideal for environments that require declarative Project Quay deployments, such as disconnected or air-gapped clusters.
Pre-configuration options for automation
Project Quay provides configuration options that enable registry administrators to automate early setup tasks and API accessibility. These options are useful for new deployments and controlling how API calls can be made. The following options support automation and administrative control.
Field | Type | Description
---|---|---|
FEATURE_USER_INITIALIZE | Boolean | Enables initial user bootstrapping in a newly deployed Project Quay registry. When this field is set to true, the first user can be created through the API rather than the UI.
BROWSER_API_CALLS_XHR_ONLY | Boolean | Controls whether the registry API only accepts calls from browsers. To allow general browser-based access to the API, administrators must set this field to false.
SUPER_USERS | String | Defines a list of administrative users, or superusers, who have full privileges and unrestricted access to the registry.
FEATURE_USER_CREATION | Boolean | Relegates the creation of new users to only superusers when this field is set to false.
The following YAML shows you the suggested configuration for automation:
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
# ...
Component and feature configuration fields
The Component and Feature Configuration section describes the configurable fields available for fine-tuning Project Quay across its various subsystems. These fields allow administrators to customize registry behavior, enable or disable specific features, and integrate with external services and infrastructure. While not required for a basic deployment, these options support advanced use cases related to security, automation, scalability, compliance, and performance.
Core configuration overview
Use these core fields to configure the registry’s basic behavior, including hostname, protocol, authentication settings, and more.
Registry branding and identity fields
The following configuration fields allow you to modify the branding, identity, and contact information displayed in your Project Quay deployment. With these fields, you can customize how the registry appears to users by specifying titles, headers, footers, and organizational contact links shown throughout the UI.
Note
|
Some of the following fields are not available on the Project Quay v2 UI. |
Field | Type | Description
---|---|---|
REGISTRY_TITLE | String | If specified, the long-form title for the registry. Displayed in the frontend of your Project Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters.
REGISTRY_TITLE_SHORT | String | If specified, the short-form title for the registry. The title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization’s Tutorial page.
CONTACT_INFO | Array of String | If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly.
[0] | String | Adds a link to send an e-mail.
[1] | String | Adds a link to visit an IRC chat room.
[2] | String | Adds a link to call a phone number.
[3] | String | Adds a link to a defined URL.
Field | Type | Description |
---|---|---|
BRANDING |
Object |
Custom branding for logos and URLs in the Project Quay UI. |
.logo |
String |
Main logo image URL. The header logo defaults to 205x30 PX. The form logo on the Project Quay sign in screen of the web UI defaults to 356.5x39.7 PX.
|
.footer_img |
String |
Logo for UI footer. Defaults to 144x34 PX. |
.footer_url |
String |
Link for footer image. |
Field | Type | Description |
---|---|---|
FOOTER_LINKS |
Object |
Enable customization of footer links in Project Quay’s UI for on-prem installations. |
.TERMS_OF_SERVICE_URL |
String |
Custom terms of service for on-prem installations. |
.PRIVACY_POLICY_URL |
String |
Custom privacy policy for on-prem installations. |
.SECURITY_URL |
String |
Custom security page for on-prem installations. |
.ABOUT_URL |
String |
Custom about page for on-prem installations. |
# ...
REGISTRY_TITLE: "Example Container Registry"
REGISTRY_TITLE_SHORT: "Example Quay"
CONTACT_INFO:
- mailto:support@example.io
- irc://chat.freenode.net:6665/examplequay
- tel:+1-800-555-1234
- https://support.example.io
BRANDING:
logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
footer_url: https://opensourceworld.org/
FOOTER_LINKS:
"TERMS_OF_SERVICE_URL": "https://www.index.hr"
"PRIVACY_POLICY_URL": "https://www.example.hr"
"SECURITY_URL": "https://www.example.hr"
"ABOUT_URL": "https://www.example.hr"
# ...
SSL/TLS configuration fields
This section describes the available configuration fields for enabling and managing SSL/TLS encryption in your Project Quay deployment.
Field | Type | Description |
---|---|---|
PREFERRED_URL_SCHEME |
String |
One of `http` or `https`. Defaults to `http`. |
SERVER_HOSTNAME |
String |
The URL at which Project Quay is accessible, without the scheme |
SSL_CIPHERS |
Array of String |
If specified, the nginx-defined list of SSL ciphers to enable and disable. |
SSL_PROTOCOLS |
Array of String |
If specified, nginx is configured to enable the SSL protocols defined in this list. Removing an SSL protocol from the list disables the protocol during Project Quay startup. |
SESSION_COOKIE_SECURE |
Boolean |
Whether the `secure` property should be set on session cookies. Recommended for all installations using SSL/TLS. |
EXTERNAL_TLS_TERMINATION |
Boolean |
Set to `true` if TLS is supported, but terminated at a layer before Project Quay. |
# ...
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: quay-server.example.com
SSL_CIPHERS:
- ECDHE-RSA-AES128-GCM-SHA256
SSL_PROTOCOLS:
- TLSv1.3
SESSION_COOKIE_SECURE: true
EXTERNAL_TLS_TERMINATION: true
# ...
IPv6 configuration field
You can use the FEATURE_LISTEN_IP_VERSION
configuration field to specify which IP protocol family Project Quay should listen on: IPv4, IPv6, or both (dual-stack). This field is critical in environments where the registry must operate on IPv6-only or dual-stack networks.
Field | Type | Description |
---|---|---|
FEATURE_LISTEN_IP_VERSION |
String |
Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Project Quay fails to start.
Default: `IPv4` |
# ...
FEATURE_LISTEN_IP_VERSION: dual-stack
# ...
Logging and debugging variables
The following variables control how Project Quay logs events, exposes debugging information, and interacts with system health checks. These settings are useful for troubleshooting and monitoring your registry.
Variable | Type | Description | ||
---|---|---|---|---|
DEBUGLOG |
Boolean |
Whether to enable or disable debug logs. |
||
USERS_DEBUG |
Integer. Either `0` or `1`. |
Used to debug LDAP operations in clear text, including passwords. Must be used with `DEBUGLOG: true`. Because passwords are exposed in clear text, this field should not be used in production environments. |
||
ALLOW_PULLS_WITHOUT_STRICT_LOGGING |
String |
If true, pulls will still succeed even if the pull audit log entry cannot be written. This is useful if the database is in a read-only state and it is desired for pulls to continue during that time. |
||
ENABLE_HEALTH_DEBUG_SECRET |
String |
If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser |
||
HEALTH_CHECKER |
String |
The configured health check |
||
FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL |
Boolean |
Whether to allow retrieval of aggregated log counts |
#...
DEBUGLOG: true
USERS_DEBUG: 1
ALLOW_PULLS_WITHOUT_STRICT_LOGGING: "true"
ENABLE_HEALTH_DEBUG_SECRET: "<secret_value>"
HEALTH_CHECKER: "('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'})"
FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL: true
# ...
Registry state and system behavior configuration fields
The following configuration fields control the operational state of the Project Quay registry and how it interacts with external systems. These settings allow administrators to place the registry into a restricted read-only mode for maintenance purposes, and to enforce additional security by blocking specific hostnames from being targeted by webhooks.
Field | Type | Description |
---|---|---|
REGISTRY_STATE |
String |
The state of the registry, either `normal` or `readonly`. |
WEBHOOK_HOSTNAME_BLACKLIST |
Array of String |
The set of hostnames to disallow from webhooks when validating, beyond localhost |
# ...
REGISTRY_STATE: normal
WEBHOOK_HOSTNAME_BLACKLIST:
- "169.254.169.254"
- "internal.example.com"
- "127.0.0.2"
# ...
User Experience and Interface
These fields configure how users interact with the UI, including branding, pagination, browser behavior, and accessibility options like recaptcha. This also covers user-facing performance and display settings.
Web UI and user experience configuration fields
These configuration fields control the behavior and appearance of the Project Quay web interface and overall user experience. Options in this section allow administrators to customize login behavior, avatar display, user autocomplete, session handling, and catalog visibility.
Field | Type | Description |
---|---|---|
AVATAR_KIND |
String |
The types of avatars to display, either generated inline (local) or Gravatar (gravatar) |
FRESH_LOGIN_TIMEOUT |
String |
The time after which a fresh login requires users to re-enter their password |
FEATURE_UI_V2 |
Boolean |
When set, allows users to try the v2 beta UI environment. Default: `false` |
FEATURE_UI_V2_REPO_SETTINGS |
Boolean |
When set to `true`, enables repository settings in the Project Quay v2 UI. Default: `false` |
FEATURE_DIRECT_LOGIN |
Boolean |
Whether users can directly log in to the UI. |
FEATURE_PARTIAL_USER_AUTOCOMPLETE |
Boolean |
If set to true, autocompletion will apply to partial usernames. |
FEATURE_LIBRARY_SUPPORT |
Boolean |
Whether to allow for "namespace-less" repositories when pulling and pushing from Docker |
FEATURE_PERMANENT_SESSIONS |
Boolean |
Whether sessions are permanent |
FEATURE_PUBLIC_CATALOG |
Boolean |
If set to true, the `_catalog` endpoint returns public repositories. Otherwise, only private repositories are returned. |
# ...
AVATAR_KIND: local
FRESH_LOGIN_TIMEOUT: 5m
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: false
FEATURE_DIRECT_LOGIN: true
FEATURE_PARTIAL_USER_AUTOCOMPLETE: true
FEATURE_LIBRARY_SUPPORT: true
FEATURE_PERMANENT_SESSIONS: true
FEATURE_PUBLIC_CATALOG: false
# ...
v2 user interface configuration
With FEATURE_UI_V2
enabled, you can toggle between the current version of the user interface and the new version of the user interface.
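The following snippet is a minimal sketch that enables the v2 UI together with the optional repository settings page described above:
# ...
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: true
# ...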
Session timeout configuration field
The following configuration field relies on the Flask API configuration field of the same name.
Important
|
Altering session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If you set the timeout too short, it might interrupt your workflow. |
Field | Type | Description |
---|---|---|
PERMANENT_SESSION_LIFETIME |
Integer |
A `timedelta`, in seconds, that is used to set the expiration date of a permanent session. Default: `2678400` (31 days). |
# ...
PERMANENT_SESSION_LIFETIME: 3000
# ...
User and Access Management
Use these fields to configure how users are created, authenticated, and managed. This includes settings for superusers, account recovery, app-specific tokens, login behavior, and external identity providers like LDAP, OAuth, and OIDC.
User configuration fields
The user configuration fields define how user accounts behave in your Project Quay deployment. These fields enable control over user creation, access levels, metadata tracking, recovery options, and namespace management. You can also enforce restrictions, such as invite-only creation or superuser privileges, to match your organization’s governance and security policies.
Field | Type | Description |
---|---|---|
FEATURE_SUPER_USERS |
Boolean |
Whether superusers are supported |
FEATURE_USER_CREATION |
Boolean |
Whether users can be created (by non-superusers) |
FEATURE_USER_LAST_ACCESSED |
Boolean |
Whether to record the last time a user was accessed |
FEATURE_USER_LOG_ACCESS |
Boolean |
If set to true, users will have access to audit logs for their namespace |
FEATURE_USER_METADATA |
Boolean |
Whether to collect and support user metadata |
FEATURE_USERNAME_CONFIRMATION |
Boolean |
If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP.
|
FEATURE_USER_RENAME |
Boolean |
If set to true, users can rename their own namespace |
FEATURE_INVITE_ONLY_USER_CREATION |
Boolean |
Whether users being created must be invited by another user |
FRESH_LOGIN_TIMEOUT |
String |
The time after which a fresh login requires users to re-enter their password |
USERFILES_LOCATION |
String |
ID of the storage engine in which to place user-uploaded files |
USERFILES_PATH |
String |
Path under storage in which to place user-uploaded files |
USER_RECOVERY_TOKEN_LIFETIME |
String |
The length of time a token for recovering a user account is valid. |
FEATURE_SUPERUSERS_FULL_ACCESS |
Boolean |
Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: |
FEATURE_SUPERUSERS_ORG_CREATION_ONLY |
Boolean |
Whether to only allow superusers to create organizations. Default: |
FEATURE_RESTRICTED_USERS |
Boolean |
When set to `true`, restricted users cannot create organizations or push content to their own namespace.
Default: `false` |
RESTRICTED_USERS_WHITELIST |
String |
When set with `FEATURE_RESTRICTED_USERS: true`, specific users are excluded from the `FEATURE_RESTRICTED_USERS` setting. |
GLOBAL_READONLY_SUPER_USERS |
String |
When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the `SUPER_USERS` configuration field. |
# ...
FEATURE_SUPER_USERS: true
FEATURE_USER_CREATION: true
FEATURE_INVITE_ONLY_USER_CREATION: false
FEATURE_USER_RENAME: true
FEATURE_SUPERUSERS_FULL_ACCESS: true
FEATURE_SUPERUSERS_ORG_CREATION_ONLY: false
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST: (1)
- user1
GLOBAL_READONLY_SUPER_USERS:
- quayadmin
FRESH_LOGIN_TIMEOUT: "5m"
USER_RECOVERY_TOKEN_LIFETIME: "30m"
USERFILES_LOCATION: "s3_us_east"
USERFILES_PATH: "userfiles"
# ...
-
When the
RESTRICTED_USERS_WHITELIST
field is set, whitelisted users can create organizations, or read or write content from the repository even ifFEATURE_RESTRICTED_USERS
is set totrue
. Other users, for example,user2
,user3
, anduser4
are restricted from creating organizations, reading, or writing content.
Robot account configuration fields
The following configuration field allows for globally disallowing robot account creation and interaction.
Field | Type | Description |
---|---|---|
ROBOTS_DISALLOW |
Boolean |
When set to `true`, robot accounts are prevented from all interactions, as well as from being created. |
# ...
ROBOTS_DISALLOW: true
# ...
LDAP configuration fields
The following configuration fields allow administrators to integrate Project Quay with an LDAP-based authentication system. When AUTHENTICATION_TYPE
is set to LDAP
, Project Quay can authenticate users against an LDAP directory and support additional, optional features such as team synchronization, superuser access control, restricted user roles, and secure connection parameters.
This section provides YAML examples for the following LDAP scenarios:
-
Basic LDAP configuration
-
LDAP restricted user configuration
-
LDAP superuser configuration
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE |
String |
Must be set to `LDAP`. |
FEATURE_TEAM_SYNCING |
Boolean |
Whether to allow for team membership to be synced from a backing group in the authentication engine (OIDC, LDAP, or Keystone). |
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP |
Boolean |
If enabled, non-superusers can set up team synchronization. |
LDAP_ADMIN_DN |
String |
The admin DN for LDAP authentication. |
LDAP_ADMIN_PASSWD |
String |
The admin password for LDAP authentication. |
LDAP_ALLOW_INSECURE_FALLBACK |
Boolean |
Whether or not to allow SSL insecure fallback for LDAP authentication. |
LDAP_BASE_DN |
Array of String |
The base DN for LDAP authentication. |
LDAP_EMAIL_ATTR |
String |
The email attribute for LDAP authentication. |
LDAP_UID_ATTR |
String |
The uid attribute for LDAP authentication. |
LDAP_URI |
String |
The LDAP URI. |
LDAP_USER_FILTER |
String |
The user filter for LDAP authentication. |
LDAP_USER_RDN |
Array of String |
The user RDN for LDAP authentication. |
LDAP_SECONDARY_USER_RDNS |
Array of String |
Provide Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. |
TEAM_RESYNC_STALE_TIME |
String |
If team syncing is enabled for a team, how often to check its membership and resync if necessary. |
LDAP_SUPERUSER_FILTER |
String |
Subset of the `LDAP_USER_FILTER` configuration field. When configured, allows Project Quay administrators to designate LDAP users as superusers. With this field, administrators can add or remove superusers without having to update the Project Quay configuration file and restart their deployment. This field requires that your `AUTHENTICATION_TYPE` is set to `LDAP`. |
LDAP_GLOBAL_READONLY_SUPERUSER_FILTER |
String |
When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the `LDAP_SUPERUSER_FILTER` configuration field. |
LDAP_RESTRICTED_USER_FILTER |
String |
Subset of the `LDAP_USER_FILTER` configuration field. When configured, allows Project Quay administrators to designate LDAP users as restricted users. This field requires that your `AUTHENTICATION_TYPE` is set to `LDAP`. |
FEATURE_RESTRICTED_USERS |
Boolean |
When set to `true`, restricted users cannot create organizations or push content to their own namespace. Default: `false` |
LDAP_TIMEOUT |
Integer |
Specifies the time limit, in seconds, for LDAP operations. This limits the amount of time an LDAP search, bind, or other operation can take. Similar to the `-l` option of the `ldapsearch` tool, it sets a client-side operation timeout. |
LDAP_NETWORK_TIMEOUT |
Integer |
Specifies the time limit, in seconds, for establishing a connection to the LDAP server. This is the maximum time Project Quay waits for a response during network operations, similar to the `-o nettimeout` option of the `ldapsearch` tool. |
# ...
AUTHENTICATION_TYPE: LDAP (1)
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com (2)
LDAP_ADMIN_PASSWD: ABC123 (3)
LDAP_ALLOW_INSECURE_FALLBACK: false (4)
LDAP_BASE_DN: (5)
- dc=example
- dc=com
LDAP_EMAIL_ATTR: mail (6)
LDAP_UID_ATTR: uid (7)
LDAP_URI: ldap://<example_url>.com (8)
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) (9)
LDAP_USER_RDN: (10)
- ou=people
LDAP_SECONDARY_USER_RDNS: (11)
- ou=<example_organization_unit_one>
- ou=<example_organization_unit_two>
- ou=<example_organization_unit_three>
- ou=<example_organization_unit_four>
-
Required. Must be set to
LDAP
. -
Required. The admin DN for LDAP authentication.
-
Required. The admin password for LDAP authentication.
-
Required. Whether to allow SSL/TLS insecure fallback for LDAP authentication.
-
Required. The base DN for LDAP authentication.
-
Required. The email attribute for LDAP authentication.
-
Required. The UID attribute for LDAP authentication.
-
Required. The LDAP URI.
-
Required. The user filter for LDAP authentication.
-
Required. The user RDN for LDAP authentication.
-
Optional. Secondary User Relative DNs if there are multiple Organizational Units where user objects are located.
# ...
AUTHENTICATION_TYPE: LDAP
# ...
FEATURE_RESTRICTED_USERS: true (1)
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) (2)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
# ...
-
Must be set to
true
when configuring an LDAP restricted user. -
Configures specified users as restricted users.
# ...
AUTHENTICATION_TYPE: LDAP
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_SUPERUSER_FILTER: (<filterField>=<value>) (1)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
# ...
-
Configures specified users as superusers.
OAuth configuration fields
The following fields define the behavior of Project Quay when handling authentication through external identity providers using OAuth. You can configure global OAuth options such as token assignment and whitelisted client IDs, as well as provider-specific settings for GitHub and Google.
Field | Type | Description |
---|---|---|
DIRECT_OAUTH_CLIENTID_WHITELIST |
Array of String |
A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. |
FEATURE_ASSIGN_OAUTH_TOKEN |
Boolean |
Allows organization administrators to assign OAuth tokens to other users. |
# ...
DIRECT_OAUTH_CLIENTID_WHITELIST:
- <quay_robot_client>
- <quay_app_token_issuer>
FEATURE_ASSIGN_OAUTH_TOKEN: true
# ...
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_LOGIN |
Boolean |
Whether GitHub login is supported |
GITHUB_LOGIN_CONFIG |
Object |
Configuration for using GitHub (Enterprise) as an external login provider. |
.ALLOWED_ORGANIZATIONS |
Array of String |
The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. |
.API_ENDPOINT |
String |
The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com |
.CLIENT_ID |
String |
The registered client ID for this Project Quay instance; cannot be shared with `GITHUB_TRIGGER_CONFIG`. |
.CLIENT_SECRET |
String |
The registered client secret for this Project Quay instance. |
.GITHUB_ENDPOINT |
String |
The endpoint for GitHub (Enterprise). |
.ORG_RESTRICT |
Boolean |
If true, only users within the organization whitelist can login using this provider. |
# ...
FEATURE_GITHUB_LOGIN: true
GITHUB_LOGIN_CONFIG:
ALLOWED_ORGANIZATIONS:
- <myorg>
- <dev-team>
API_ENDPOINT: https://api.github.com/
CLIENT_ID: <client_id>
CLIENT_SECRET: <client_secret>
GITHUB_ENDPOINT: https://github.com/
ORG_RESTRICT: true
# ...
Field | Type | Description |
---|---|---|
FEATURE_GOOGLE_LOGIN |
Boolean |
Whether Google login is supported. |
GOOGLE_LOGIN_CONFIG |
Object |
Configuration for using Google for external authentication. |
.CLIENT_ID |
String |
The registered client ID for this Project Quay instance. |
.CLIENT_SECRET |
String |
The registered client secret for this Project Quay instance. |
# ...
FEATURE_GOOGLE_LOGIN: true
GOOGLE_LOGIN_CONFIG:
CLIENT_ID: <client_id>
CLIENT_SECRET: <client_secret>
# ...
OIDC configuration fields
You can configure Project Quay to authenticate users through any OpenID Connect (OIDC)-compatible identity provider, including Azure Entra ID (formerly Azure AD), Okta, Keycloak, and others. These fields define the necessary client credentials, endpoints, and token behavior used during the OIDC login flow.
Field | Type | Description |
---|---|---|
<string>_LOGIN_CONFIG |
String |
The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, `AZURE_LOGIN_CONFIG`, however any arbitrary string is accepted. |
.CLIENT_ID |
String |
The registered client ID for this Project Quay instance. |
.CLIENT_SECRET |
String |
The registered client secret for this Project Quay instance. |
.DEBUGLOG |
Boolean |
Whether to enable debugging. |
.LOGIN_BINDING_FIELD |
String |
Used when the internal authorization is set to LDAP. Project Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account. |
.LOGIN_SCOPES |
Object |
Adds additional scopes that Project Quay uses to communicate with the OIDC provider. |
.OIDC_ENDPOINT_CUSTOM_PARAMS |
String |
Support for custom query parameters on OIDC endpoints. The following endpoints are supported: `authorization_endpoint`, `token_endpoint`, and `user_endpoint`. |
.OIDC_ISSUER |
String |
Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as `iss`, which defines who issued the token. By default, this is read from the `.well-known/openid-configuration` endpoint. |
.OIDC_SERVER |
String |
The address of the OIDC server that is being used for authentication. |
.PREFERRED_USERNAME_CLAIM_NAME |
String |
Sets the preferred username to a parameter from the token. |
.SERVICE_ICON |
String |
Changes the icon on the login screen. |
.SERVICE_NAME |
String |
The name of the service that is being authenticated. |
.VERIFIED_EMAIL_CLAIM_NAME |
String |
The name of the claim that is used to verify the email address of the user. |
.PREFERRED_GROUP_CLAIM_NAME |
String |
The key name within the OIDC token payload that holds information about the user’s group memberships. |
.OIDC_DISABLE_USER_ENDPOINT |
Boolean |
Whether to allow or disable the `/userinfo` endpoint. |
AUTHENTICATION_TYPE: OIDC
# ...
<oidc_provider>_LOGIN_CONFIG:
CLIENT_ID: <client_id>
CLIENT_SECRET: <client_secret>
DEBUGLOG: true
LOGIN_BINDING_FIELD: <login_binding_field>
LOGIN_SCOPES:
- openid
- email
- profile
OIDC_ENDPOINT_CUSTOM_PARAMS:
authorization_endpoint:
some: "param"
token_endpoint:
some: "param"
user_endpoint:
some: "param"
OIDC_ISSUER: <oidc_issuer_url>
OIDC_SERVER: <oidc_server_address>
PREFERRED_USERNAME_CLAIM_NAME: <preferred_username_claim>
SERVICE_ICON: <service_icon_url>
SERVICE_NAME: <service_name>
VERIFIED_EMAIL_CLAIM_NAME: <verified_email_claim>
PREFERRED_GROUP_CLAIM_NAME: <preferred_group_claim>
OIDC_DISABLE_USER_ENDPOINT: true
# ...
Recaptcha configuration fields
You can enable Recaptcha support in your Project Quay instance to help protect user login and account recovery forms from abuse by automated systems.
Field | Type | Description |
---|---|---|
FEATURE_RECAPTCHA |
Boolean |
Whether Recaptcha is necessary for user login and recovery |
RECAPTCHA_SECRET_KEY |
String |
If recaptcha is enabled, the secret key for the Recaptcha service |
RECAPTCHA_SITE_KEY |
String |
If recaptcha is enabled, the site key for the Recaptcha service |
# ...
FEATURE_RECAPTCHA: true
RECAPTCHA_SITE_KEY: "<site_key>"
RECAPTCHA_SECRET_KEY: "<secret_key>"
# ...
JWT configuration fields
Project Quay can be configured to support external authentication using JSON Web Tokens (JWT). This integration allows third-party identity providers or token issuers to authenticate and authorize users by calling specific endpoints that handle token verification, user lookup, and permission queries.
Field | Type | Description |
---|---|---|
JWT_AUTH_ISSUER |
String |
The issuer of the JWT tokens used for authentication. |
JWT_GETUSER_ENDPOINT |
String |
The endpoint for JWT users |
JWT_QUERY_ENDPOINT |
String |
The endpoint for JWT queries |
JWT_VERIFY_ENDPOINT |
String |
The endpoint for JWT verification |
# ...
JWT_AUTH_ISSUER: "http://192.168.99.101:6060"
JWT_GETUSER_ENDPOINT: "http://192.168.99.101:6060/getuser"
JWT_QUERY_ENDPOINT: "http://192.168.99.101:6060/query"
JWT_VERIFY_ENDPOINT: "http://192.168.99.101:6060/verify"
# ...
App tokens configuration fields
App-specific tokens allow users to authenticate with Project Quay using token-based credentials. These fields might be useful for CLI tools like Docker.
Field | Type | Description |
---|---|---|
FEATURE_APP_SPECIFIC_TOKENS |
Boolean |
If enabled, users can create tokens for use by the Docker CLI |
APP_SPECIFIC_TOKEN_EXPIRATION |
String |
The expiration for external app tokens. |
EXPIRED_APP_SPECIFIC_TOKEN_GC |
String |
Duration of time expired external app tokens will remain before being garbage collected |
# ...
FEATURE_APP_SPECIFIC_TOKENS: true
APP_SPECIFIC_TOKEN_EXPIRATION: "30d"
EXPIRED_APP_SPECIFIC_TOKEN_GC: "1d"
# ...
Security and Permissions
This section describes configuration fields that govern core security behaviors and access policies within Project Quay.
Namespace and repository management configuration fields
The following configuration fields govern how Project Quay manages namespaces and repositories, including behavior during automated image pushes, visibility defaults, and rate limiting exceptions.
Field | Type | Description |
---|---|---|
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT |
Number |
The default maximum number of builds that can be queued in a namespace. |
CREATE_PRIVATE_REPO_ON_PUSH |
Boolean |
Whether new repositories created by push are set to private visibility |
CREATE_NAMESPACE_ON_PUSH |
Boolean |
Whether a push to a non-existent organization creates it. |
PUBLIC_NAMESPACES |
Array of String |
If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces. |
NON_RATE_LIMITED_NAMESPACES |
Array of String |
If rate limiting has been enabled using `FEATURE_RATE_LIMITS`, the namespaces in this list are exempt from rate limiting. |
DISABLE_PUSHES |
Boolean |
Disables pushes of new content to the registry while retaining all other functionality. Differs from `readonly` mode because the database is not set to read-only when this field is enabled. |
# ...
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT: 10
CREATE_PRIVATE_REPO_ON_PUSH: true
CREATE_NAMESPACE_ON_PUSH: false
PUBLIC_NAMESPACES:
- redhat
- opensource
- infra-tools
NON_RATE_LIMITED_NAMESPACES:
- ci-pipeline
- trusted-partners
DISABLE_PUSHES: false
# ...
Nested repositories configuration fields
Support for nested repository path names has been added by the FEATURE_EXTENDED_REPOSITORY_NAMES
property. This optional configuration is added to the config.yaml
by default. Enablement allows the use of /
in repository names.
Field | Type | Description |
---|---|---|
FEATURE_EXTENDED_REPOSITORY_NAMES |
Boolean |
Enable support for nested repositories |
# ...
FEATURE_EXTENDED_REPOSITORY_NAMES: true
# ...
Additional security configuration fields
The following configuration fields provide additional security controls for your Project Quay deployment. These options allow administrators to enforce authentication practices, control anonymous access to content, require team invitations, and enable FIPS-compliant cryptographic functions for environments with enhanced security requirements.
Feature | Type | Description |
---|---|---|
FEATURE_REQUIRE_TEAM_INVITE |
Boolean |
Whether to require invitations when adding a user to a team |
FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH |
Boolean |
Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth |
FEATURE_ANONYMOUS_ACCESS |
Boolean |
Whether to allow anonymous users to browse and pull public repositories |
FEATURE_FIPS |
Boolean |
If set to true, Project Quay will run using FIPS-compliant hash functions |
# ...
FEATURE_REQUIRE_TEAM_INVITE: true
FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH: false
FEATURE_ANONYMOUS_ACCESS: true
FEATURE_FIPS: false
# ...
Rate limiting and performance configuration fields
The following fields control rate limiting and performance-related behavior for your Project Quay deployment.
Field | Type | Description |
---|---|---|
FEATURE_RATE_LIMITS |
Boolean |
Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to `true` causes `nginx` to limit certain API calls to 30 per second. If this feature is not set, API calls are limited to 300 per second (effectively unlimited). |
PROMETHEUS_NAMESPACE |
String |
The prefix applied to all exposed Prometheus metrics |
# ...
FEATURE_RATE_LIMITS: false
PROMETHEUS_NAMESPACE: quay
# ...
Search configuration fields
The following configuration fields define how search results are paginated in the Project Quay user interface.
Field | Type | Description |
---|---|---|
SEARCH_MAX_RESULT_PAGE_COUNT |
Number |
Maximum number of pages the user can paginate in search before they are limited |
SEARCH_RESULTS_PER_PAGE |
Number |
Number of results returned per page by search page |
# ...
SEARCH_MAX_RESULT_PAGE_COUNT: 10
SEARCH_RESULTS_PER_PAGE: 10
# ...
Storage and Data Management
This section describes the configuration fields that govern how Project Quay stores, manages, and audits data.
Image storage features
Project Quay supports image storage features that enhance scalability, resilience, and flexibility in managing container image data. These features allow Project Quay to mirror repositories, proxy storage access through NGINX, and replicate data across multiple storage engines.
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR |
Boolean |
If set to true, enables repository mirroring. |
FEATURE_PROXY_STORAGE |
Boolean |
Whether to proxy all direct download URLs in storage through NGINX. |
FEATURE_STORAGE_REPLICATION |
Boolean |
Whether to automatically replicate between storage engines. |
# ...
FEATURE_REPO_MIRROR: true
FEATURE_PROXY_STORAGE: false
FEATURE_STORAGE_REPLICATION: true
# ...
Action log storage configuration fields
Project Quay maintains a detailed action log to track user and system activity, including repository events, authentication actions, and image operations. By default, this log data is stored in the database, but administrators can configure their deployment to export or forward logs to external systems like Elasticsearch or Splunk for advanced analysis, auditing, or compliance.
Field | Type | Description |
---|---|---|
FEATURE_LOG_EXPORT |
Boolean |
Whether to allow exporting of action logs. |
LOGS_MODEL |
String |
Specifies the preferred method for handling log data. One of `database`, `transition_reads_both_writes_es`, `elasticsearch`, or `splunk`. |
LOGS_MODEL_CONFIG |
Object |
Logs model config for action logs. |
ALLOW_WITHOUT_STRICT_LOGGING |
Boolean |
When set to `true`, if the external log system is intermittently unavailable, actions such as pushes are still allowed and the log entry is written to standard output instead. |
# ...
FEATURE_LOG_EXPORT: true
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
elasticsearch:
endpoint: http://elasticsearch.example.com:9200
index_prefix: quay-logs
username: elastic
password: changeme
ALLOW_WITHOUT_STRICT_LOGGING: true
# ...
Action log rotation and archiving configuration
This section describes configuration fields related to action log rotation and archiving in Project Quay. When enabled, older logs can be automatically rotated and archived to designated storage locations, helping to manage log retention and storage utilization efficiently.
Field | Type | Description |
---|---|---|
FEATURE_ACTION_LOG_ROTATION |
Boolean |
Enabling log rotation and archival will move all logs older than 30 days to storage. |
ACTION_LOG_ARCHIVE_LOCATION |
String |
If action log archiving is enabled, the storage engine in which to place the archived data. |
ACTION_LOG_ARCHIVE_PATH |
String |
If action log archiving is enabled, the path in storage in which to place the archived data. |
ACTION_LOG_ROTATION_THRESHOLD |
String |
The time interval after which to rotate logs. |
# ...
FEATURE_ACTION_LOG_ROTATION: true
ACTION_LOG_ARCHIVE_LOCATION: s3_us_east
ACTION_LOG_ARCHIVE_PATH: archives/actionlogs
ACTION_LOG_ROTATION_THRESHOLD: 30d
# ...
Action log audit configuration
This section covers the configuration fields for audit logging within Project Quay. When enabled, audit logging tracks detailed user activity such as UI logins, logouts, and Docker logins for regular users, robot accounts, and token-based accounts.
Field | Type | Description |
---|---|---|
ACTION_LOG_AUDIT_LOGINS |
Boolean |
When set to `true`, tracks advanced events such as logging into, and out of, the UI, and logging in using Docker, for regular users, robot accounts, and token-based accounts. |
# ...
ACTION_LOG_AUDIT_LOGINS: true
# ...
Elasticsearch configuration fields
Use the following configuration fields to integrate Project Quay with an external Elasticsearch service. This enables storing and querying structured data such as action logs, repository events, and other operational records outside of the internal database.
Field | Type | Description |
---|---|---|
LOGS_MODEL_CONFIG.elasticsearch_config.access_key |
String |
Elasticsearch user (or IAM key for AWS ES). |
.elasticsearch_config.host |
String |
Elasticsearch cluster endpoint. |
.elasticsearch_config.index_prefix |
String |
Prefix for Elasticsearch indexes. |
.elasticsearch_config.index_settings |
Object |
Index settings for Elasticsearch. |
LOGS_MODEL_CONFIG.elasticsearch_config.use_ssl |
Boolean |
Whether to use SSL for Elasticsearch. |
.elasticsearch_config.secret_key |
String |
Elasticsearch password (or IAM secret for AWS ES). |
.elasticsearch_config.aws_region |
String |
AWS region. |
.elasticsearch_config.port |
Number |
Port of the Elasticsearch cluster. |
.kinesis_stream_config.aws_secret_key |
String |
AWS secret key. |
.kinesis_stream_config.stream_name |
String |
AWS Kinesis stream to send action logs to. |
.kinesis_stream_config.aws_access_key |
String |
AWS access key. |
.kinesis_stream_config.retries |
Number |
Max number of retry attempts for a single request. |
.kinesis_stream_config.read_timeout |
Number |
Read timeout in seconds. |
.kinesis_stream_config.max_pool_connections |
Number |
Max number of connections in the pool. |
.kinesis_stream_config.aws_region |
String |
AWS region. |
.kinesis_stream_config.connect_timeout |
Number |
Connection timeout in seconds. |
.producer |
String |
Logs producer type. One of `kafka`, `elasticsearch`, or `kinesis_stream`. |
.kafka_config.topic |
String |
Kafka topic used to publish log entries. |
.kafka_config.bootstrap_servers |
Array |
List of Kafka brokers used to bootstrap the client. |
.kafka_config.max_block_seconds |
Number |
Max seconds to block during a `send()` call, due to either the buffer being full or metadata being unavailable. |
# ...
FEATURE_LOG_EXPORT: true
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
producer: elasticsearch
elasticsearch_config:
access_key: elastic_user
secret_key: elastic_password
host: es.example.com
port: 9200
use_ssl: true
aws_region: us-east-1
index_prefix: logentry_
index_settings:
number_of_shards: 3
number_of_replicas: 1
ALLOW_WITHOUT_STRICT_LOGGING: true
# ...
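The `kinesis_stream_config` fields listed in the table above combine in the same way. The following sketch forwards action logs to an AWS Kinesis stream; the placeholder values and timeout settings are illustrative only:
# ...
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: kinesis_stream
  kinesis_stream_config:
    stream_name: <kinesis_stream_name>
    aws_access_key: <aws_access_key>
    aws_secret_key: <aws_secret_key>
    aws_region: <aws_region>
    connect_timeout: 5
    read_timeout: 5
    retries: 3
    max_pool_connections: 10
# ...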
Splunk configuration fields
Use the following fields to configure Project Quay to export action logs to a Splunk endpoint. This configuration allows audit and event logs to be sent to an external Splunk server for centralized analysis, search, and long-term storage.
Field | Type | Description |
---|---|---|
producer |
String |
Must be set to `splunk`. |
splunk_config |
Object |
Logs model configuration for Splunk action logs or Splunk cluster configuration. |
.host |
String |
The Splunk cluster endpoint. |
.port |
Integer |
The port number for the Splunk management cluster endpoint. |
.bearer_token |
String |
The bearer token used for authentication with Splunk. |
.verify_ssl |
Boolean |
Enable (`true`) or disable (`false`) SSL/TLS verification for HTTPS connections. |
.index_prefix |
String |
The index prefix used by Splunk. |
.ssl_ca_path |
String |
The relative container path to a single `.pem` file containing a certificate authority (CA) for SSL/TLS validation. |
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
producer: splunk
splunk_config:
host: http://<user_name>.remote.csb
port: 8089
bearer_token: <bearer_token>
url_scheme: <http/https>
verify_ssl: False
index_prefix: <splunk_log_index_name>
ssl_ca_path: <location_to_ssl-ca-cert.pem>
# ...
Splunk HEC configuration fields
The following fields are available when configuring Splunk HTTP Event Collector (HEC) for Project Quay.
Field | Type | Description |
---|---|---|
producer |
String |
Must be set to `splunk_hec`. |
splunk_hec_config |
Object |
Logs model configuration for Splunk HTTP Event Collector action logs. |
.host |
String |
Splunk cluster endpoint. |
.port |
Integer |
Splunk management cluster endpoint port. |
.hec_token |
String |
HEC token used for authenticating with Splunk. |
.url_scheme |
String |
URL scheme to access the Splunk service. Use `https` if Splunk is served behind SSL/TLS. |
.verify_ssl |
Boolean |
Enable (`true`) or disable (`false`) SSL/TLS verification for HTTPS connections. |
.index |
String |
The Splunk index to use for log storage. |
.splunk_host |
String |
The hostname to assign to the logged event. |
.splunk_sourcetype |
String |
The Splunk `sourcetype` to assign to the logged event. |
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
producer: splunk_hec
splunk_hec_config:
host: prd-p-aaaaaq.splunkcloud.com
port: 8088
hec_token: 12345678-1234-1234-1234-1234567890ab
url_scheme: https
verify_ssl: False
index: quay
splunk_host: quay-dev
splunk_sourcetype: quay_logs
# ...
Builds and Automation
This section outlines the configuration options available for managing automated builds within Project Quay. These settings control how Dockerfile builds are triggered, processed, and stored, and how build logs are managed and accessed.
You can use these fields to:
-
Enable or disable automated builds from source repositories.
-
Configure the behavior and resource management of the build manager.
-
Control access to and retention of build logs for auditing or debugging purposes.
These options help you streamline your CI/CD pipeline, enforce build policies, and retain visibility into your build history across the registry.
Dockerfile build triggers fields
This section describes the configuration fields used to enable and manage automated builds in Project Quay from Dockerfiles and source code repositories. These fields allow you to define build behavior, enable or disable support for GitHub, GitLab, and Bitbucket triggers, and provide OAuth credentials and endpoints for each SCM provider.
Field | Type | Description |
---|---|---|
FEATURE_BUILD_SUPPORT |
Boolean |
Whether to support Dockerfile build. |
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD |
Number |
If not set to `None`, the number of successive failures that can occur before a build trigger is automatically disabled. |
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD |
Number |
If not set to `None`, the number of successive internal errors that can occur before a build trigger is automatically disabled. |
# ...
FEATURE_BUILD_SUPPORT: true
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD: 100
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD: 5
# ...
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_BUILD |
Boolean |
Whether to support GitHub build triggers. |
GITHUB_TRIGGER_CONFIG |
Object |
Configuration for using GitHub Enterprise for build triggers. |
.GITHUB_ENDPOINT |
String |
The endpoint for GitHub Enterprise. |
.API_ENDPOINT |
String |
The endpoint of the GitHub Enterprise API to use. Must be overridden for `github.com`. |
.CLIENT_ID |
String |
The registered client ID for this Project Quay instance; this cannot be shared with `GITHUB_LOGIN_CONFIG`. |
.CLIENT_SECRET |
String |
The registered client secret for this Project Quay instance. |
# ...
FEATURE_GITHUB_BUILD: true
GITHUB_TRIGGER_CONFIG:
GITHUB_ENDPOINT: https://github.com/
API_ENDPOINT: https://api.github.com/
CLIENT_ID: your-client-id
CLIENT_SECRET: your-client-secret
# ...
Field | Type | Description |
---|---|---|
FEATURE_BITBUCKET_BUILD |
Boolean |
Whether to support Bitbucket build triggers. |
BITBUCKET_TRIGGER_CONFIG |
Object |
Configuration for using Bitbucket for build triggers. |
.CONSUMER_KEY |
String |
The registered consumer key (client ID) for this Project Quay instance. |
.CONSUMER_SECRET |
String |
The registered consumer secret (client secret) for this Project Quay instance. |
# ...
FEATURE_BITBUCKET_BUILD: true
BITBUCKET_TRIGGER_CONFIG:
CONSUMER_KEY: <your_consumer_key>
CONSUMER_SECRET: <your-consumer-secret>
# ...
Field | Type | Description |
---|---|---|
FEATURE_GITLAB_BUILD |
Boolean |
Whether to support GitLab build triggers. |
GITLAB_TRIGGER_CONFIG |
Object |
Configuration for using GitLab for build triggers. |
.GITLAB_ENDPOINT |
String |
The endpoint at which GitLab Enterprise is running. |
.CLIENT_ID |
String |
The registered client ID for this Project Quay instance. |
.CLIENT_SECRET |
String |
The registered client secret for this Project Quay instance. |
# ...
FEATURE_GITLAB_BUILD: true
GITLAB_TRIGGER_CONFIG:
GITLAB_ENDPOINT: https://gitlab.example.com/
CLIENT_ID: <your_gitlab_client_id>
CLIENT_SECRET: <your_gitlab_client_secret>
# ...
Build manager configuration fields
The following configuration fields control how the build manager component of Project Quay orchestrates and manages container image builds. This includes settings for Redis coordination, executor backends such as Kubernetes or EC2, builder image configuration, and advanced scheduling and retry policies.
These fields must be configured to align with your infrastructure environment and workload requirements.
Field | Type | Description |
---|---|---|
ALLOWED_WORKER_COUNT |
String |
Defines how many Build Workers are instantiated per Project Quay pod. Typically set to `1`. |
ORCHESTRATOR_PREFIX |
String |
Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys. |
REDIS_HOST |
Object |
The hostname for your Redis service. |
REDIS_PASSWORD |
String |
The password to authenticate into your Redis service. |
REDIS_SSL |
Boolean |
Defines whether or not your Redis connection uses SSL/TLS. |
REDIS_SKIP_KEYSPACE_EVENT_SETUP |
Boolean |
By default, Project Quay does not set up the keyspace events required for key events at runtime. To do so, set `REDIS_SKIP_KEYSPACE_EVENT_SETUP` to `false`. |
EXECUTOR |
String |
Starts a definition of an Executor of this type. Valid values are `kubernetes` and `ec2`. |
BUILDER_NAMESPACE |
String |
Kubernetes namespace where Project Quay Builds will take place. |
K8S_API_SERVER |
Object |
Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place. |
K8S_API_TLS_CA |
Object |
The filepath in the Quay container of the CA certificate that the Build Manager uses when connecting to the Kubernetes API server. |
KUBERNETES_DISTRIBUTION |
String |
Indicates which type of Kubernetes is being used. Valid values are `openshift` and `k8s`. |
CONTAINER_* |
Object |
Define the resource requests and limits for each build pod. |
NODE_SELECTOR_* |
Object |
Defines the node selector label name-value pair where build pods should be scheduled. |
CONTAINER_RUNTIME |
Object |
Specifies whether the Builder should run `docker` or `podman`. |
SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN |
Object |
Defines the Service Account name or token that will be used by the build pods. |
QUAY_USERNAME/QUAY_PASSWORD |
Object |
Defines the registry credentials needed to pull the Project Quay build worker image that is specified in the `WORKER_IMAGE` field. |
WORKER_IMAGE |
Object |
Image reference for the Project Quay Builder image, for example, `quay.io/quay/quay-builder`. |
WORKER_TAG |
Object |
The desired tag for the Builder image. The latest version is 3.14. |
BUILDER_VM_CONTAINER_IMAGE |
Object |
The full reference to the container image holding the internal VM needed to run each Project Quay Build. |
SETUP_TIME |
String |
Specifies the number of seconds at which a Build times out if it has not yet registered itself with the Build Manager. Defaults to `500` seconds. |
MINIMUM_RETRY_THRESHOLD |
String |
This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting to `0` means there is no limit on the number of attempts before another Executor is chosen. |
SSH_AUTHORIZED_KEYS |
Object |
List of SSH keys to bootstrap in the ignition config, which allows SSH access to the build VM for debugging. |
# ...
ALLOWED_WORKER_COUNT: "1"
ORCHESTRATOR_PREFIX: "quaybuild:"
REDIS_HOST: redis.example.com
REDIS_PASSWORD: examplepassword
REDIS_SSL: true
REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
EXECUTOR: kubernetes
BUILDER_NAMESPACE: quay-builder
K8S_API_SERVER: https://api.openshift.example.com:6443
K8S_API_TLS_CA: /etc/ssl/certs/ca.crt
KUBERNETES_DISTRIBUTION: openshift
CONTAINER_RUNTIME: podman
CONTAINER_MEMORY_LIMITS: 2Gi
NODE_SELECTOR_ROLE: quay-build-node
SERVICE_ACCOUNT_NAME: quay-builder-sa
QUAY_USERNAME: quayuser
QUAY_PASSWORD: quaypassword
WORKER_IMAGE: quay.io/quay/quay-builder
WORKER_TAG: latest
BUILDER_VM_CONTAINER_IMAGE: quay.io/quay/vm-builder:latest
SETUP_TIME: "500"
MINIMUM_RETRY_THRESHOLD: "1"
SSH_AUTHORIZED_KEYS:
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsomekey user@example.com
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnotherkey user2@example.com
# ...
Build logs configuration fields
This section describes the available configuration fields for managing build logs in Project Quay. These settings determine where build logs are archived, who can access them, and how they are stored.
Field | Type | Description |
---|---|---|
FEATURE_READER_BUILD_LOGS |
Boolean |
If set to true, build logs can be read by those with `read` access to the repository, rather than only `write` or `admin` access. |
LOG_ARCHIVE_LOCATION |
String |
The storage location, defined in `DISTRIBUTED_STORAGE_CONFIG`, in which to place the archived build logs. |
LOG_ARCHIVE_PATH |
String |
The path under the configured storage engine in which to place the archived build logs, in JSON form. |
# ...
FEATURE_READER_BUILD_LOGS: true
LOG_ARCHIVE_LOCATION: s3_us_east
LOG_ARCHIVE_PATH: archives/buildlogs
# ...
Tag and image management
This section describes the configuration fields that control how tags and images are managed within Project Quay. These settings help automate image cleanup, manage repository mirrors, and enhance performance through caching.
You can use these fields to:
-
Define expiration policies for untagged or outdated images.
-
Enable and schedule mirroring of external repositories into your registry.
-
Leverage model caching to optimize performance for tag and repository operations.
These options help maintain an up-to-date image registry environment.
Tag expiration configuration fields
The following configuration options are available to automate tag expiration and garbage collection. These features help manage storage usage by enabling cleanup of unused or expired tags based on defined policies.
Field | Type | Description |
---|---|---|
FEATURE_GARBAGE_COLLECTION |
Boolean |
Whether garbage collection of repositories is enabled. |
TAG_EXPIRATION_OPTIONS |
Array of string |
If enabled, the options that users can select for expiration of tags in their namespace. |
DEFAULT_TAG_EXPIRATION |
String |
The default, configurable tag expiration time for time machine. |
FEATURE_CHANGE_TAG_EXPIRATION |
Boolean |
Whether users and organizations are allowed to change the tag expiration for tags in their namespace. |
FEATURE_AUTO_PRUNE |
Boolean |
When set to `true`, enables functionality related to the auto-pruning of tags. |
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES |
Integer |
The interval, in minutes, that defines the frequency to re-run notifications for expiring images. |
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY |
Object |
The default organization-wide auto-prune policy. |
.method: number_of_tags |
Object |
The option specifying the number of tags to keep. |
.value: <integer> |
Integer |
When used with method: number_of_tags, denotes the number of tags to keep. For example, to keep two tags, specify `2`. |
.creation_date |
Object |
The option specifying the duration of which to keep tags. |
.value: <integer> |
Integer |
When used with creation_date, denotes how long to keep tags. Can be set to seconds (`s`), days (`d`), weeks (`w`), months (`m`), or years (`y`). |
AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD |
Integer |
The period in which the auto-pruner worker runs at the registry level. By default, it is set to run one time per day (one time per 24 hours). Value must be in seconds. |
# ...
FEATURE_GARBAGE_COLLECTION: true
TAG_EXPIRATION_OPTIONS:
- 1w
- 2w
- 1m
- 90d
DEFAULT_TAG_EXPIRATION: 2w
FEATURE_CHANGE_TAG_EXPIRATION: true
FEATURE_AUTO_PRUNE: true
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
method: number_of_tags
value: 10 (1)
AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD: 86400
# ...
-
Specifies that ten tags are kept.
# ...
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
method: creation_date
value: 1y (1)
# ...
-
Specifies tags to be pruned one year after their creation date.
Mirroring configuration fields
Mirroring in Project Quay enables automatic synchronization of repositories with upstream sources. This feature is useful for maintaining local mirrors of remote container images, ensuring availability in disconnected environments or improving performance through caching.
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR |
Boolean |
Enable or disable repository mirroring |
REPO_MIRROR_INTERVAL |
Number |
The number of seconds between checking for repository mirror candidates |
REPO_MIRROR_SERVER_HOSTNAME |
String |
Replaces the `SERVER_HOSTNAME` as the destination for mirroring. |
REPO_MIRROR_TLS_VERIFY |
Boolean |
Require HTTPS and verify certificates of Quay registry during mirror. |
REPO_MIRROR_ROLLBACK |
Boolean |
When set to `true`, the repository rolls back after a failed mirror attempt.
Default: `false` |
# ...
FEATURE_REPO_MIRROR: true
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_SERVER_HOSTNAME: "openshift-quay-service"
REPO_MIRROR_TLS_VERIFY: true
REPO_MIRROR_ROLLBACK: false
# ...
ModelCache configuration fields
ModelCache is a caching mechanism used by Project Quay to store accessed data and reduce database load. Quay supports multiple backends for caching, including the default Memcache, as well as Redis and Redis Cluster.
-
Memcache (default): requires no additional configuration.
-
Redis: can be configured as a single instance or with a read-only replica.
-
Redis Cluster: provides high availability and sharding for larger deployments.
Field | Type | Description |
---|---|---|
DATA_MODEL_CACHE_CONFIG.engine |
String |
The cache backend engine. One of `memcached` (default), `redis`, or `rediscluster`. |
.redis_config.primary.host |
String |
The hostname of the primary Redis instance when using the `redis` engine. |
.redis_config.primary.port |
Number |
The port used by the primary Redis instance. |
.redis_config.primary.password |
String |
The password for authenticating with the primary Redis instance. Only required if the Redis instance has authentication enabled. |
.redis_config.primary.ssl |
Boolean |
Whether to use SSL/TLS for the primary Redis connection. |
.redis_config.startup_nodes |
Array of Map |
For the `rediscluster` engine, the list of startup nodes, given as host and port mappings. |
redis_config.password |
String |
Password used for authentication with the Redis cluster. Required if the cluster has authentication enabled. |
.redis_config.read_from_replicas |
Boolean |
Whether to allow read operations from Redis cluster replicas. |
.redis_config.skip_full_coverage_check |
Boolean |
If set to true, skips the Redis cluster full coverage check. |
.redis_config.ssl |
Boolean |
Whether to use SSL/TLS for Redis cluster communication. |
.replica.host |
String |
The hostname of the Redis replica instance. Optional. |
.replica.port |
Number |
The port used by the Redis replica instance. |
.replica.password |
String |
The password for the Redis replica. Required if the replica has authentication enabled. |
.replica.ssl |
Boolean |
Whether to use SSL/TLS for the Redis replica connection. |
# ...
DATA_MODEL_CACHE_CONFIG:
engine: redis
redis_config:
primary:
host: <redis-primary.example.com>
port: 6379
password: <redis_password>
ssl: true
replica:
host: <redis-replica.example.com>
port: 6379
password: <redis_password>
ssl: true
# ...
# ...
DATA_MODEL_CACHE_CONFIG:
engine: rediscluster
redis_config:
startup_nodes:
- host: <redis-node-1.example.com>
port: 6379
- host: <redis-node-2.example.com>
port: 6379
password: <cluster_password>
read_from_replicas: true
skip_full_coverage_check: true
ssl: true
# ...
Scanner and Metadata
This section describes configuration fields related to security scanning, metadata presentation, and artifact relationships within Project Quay.
These settings enable enhanced visibility and security by allowing Project Quay to:
-
Integrate with a vulnerability scanner to assess container images for known CVEs.
-
Render AI/ML model metadata through model cards stored in the registry.
-
Expose relationships between container artifacts using the Referrers API, aligning with the OCI artifact specification.
Together, these features help improve software supply chain transparency, enforce security policies, and support emerging metadata-driven workflows.
Clair security scanner configuration fields
Project Quay can leverage Clair security scanner to detect vulnerabilities in container images. These configuration fields control how the scanner is enabled, how frequently it indexes new content, which endpoints are used, and how notifications are handled.
Field | Type | Description |
---|---|---|
FEATURE_SECURITY_SCANNER |
Boolean |
Enable or disable the security scanner |
FEATURE_SECURITY_NOTIFICATIONS |
Boolean |
If the security scanner is enabled, turn on or turn off security notifications |
SECURITY_SCANNER_V4_REINDEX_THRESHOLD |
String |
This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the `last_indexed` datetime in the `manifestsecuritystatus` table. |
SECURITY_SCANNER_V4_ENDPOINT |
String |
The endpoint for the V4 security scanner |
SECURITY_SCANNER_V4_PSK |
String |
The generated pre-shared key (PSK) for Clair |
SECURITY_SCANNER_ENDPOINT |
String |
The endpoint for the V2 security scanner |
SECURITY_SCANNER_INDEXING_INTERVAL |
Integer |
This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Project Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. |
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX |
Boolean |
Whether to allow sending notifications about vulnerabilities for new pushes.
|
SECURITY_SCANNER_V4_MANIFEST_CLEANUP |
Boolean |
Whether the Project Quay garbage collector removes manifests that are not referenced by other tags or manifests.
|
NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX |
String |
Sets the minimal security level for new notifications on detected vulnerabilities, which avoids the creation of a large number of notifications after the first index. If not defined, defaults to `High`. Available options include `Critical`, `High`, `Medium`, `Low`, `Negligible`, and `Unknown`. |
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE |
String |
The maximum layer size allowed for indexing. If the layer size exceeds the configured size, the Project Quay UI returns the following message: `The manifest for this tag has layer(s) that are too large to index by the Quay Security Scanner`. |
# ...
FEATURE_SECURITY_NOTIFICATIONS: true
FEATURE_SECURITY_SCANNER: true
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true
# ...
SECURITY_SCANNER_INDEXING_INTERVAL: 30
SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true
SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081
SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ==
SERVER_HOSTNAME: quay-server.example.com
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE: 8G (1)
# ...
-
Recommended maximum is
10G
.
Re-indexing with Clair v4
When Clair v4 indexes a manifest, the result should be deterministic: the same manifest should produce the same index report. This holds until the scanners are changed, because different scanners return different information about a given manifest in the report. For this reason, Clair v4 exposes a state representation of the indexing engine (/indexer/api/v1/index_state
) to determine whether the scanner configuration has been changed.
Project Quay leverages this index state by saving it to the index report when parsing to Quay’s database. If this state has changed since the manifest was previously scanned, Project Quay will attempt to re-index that manifest during the periodic indexing process.
By default, the `SECURITY_SCANNER_INDEXING_INTERVAL` parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, to avoid waiting 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Project Quay database.
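For example, the following sketch shortens the indexing interval so that scan results appear sooner after a push; the value shown is illustrative, not a recommendation:
# ...
SECURITY_SCANNER_INDEXING_INTERVAL: 10
# ...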
Model card rendering configuration fields
Project Quay supports the rendering of Model Cards—a form of metadata documentation commonly used in machine learning workflows—to improve the visibility and management of model-related content within OCI-compliant images.
Field | Type | Description |
---|---|---|
FEATURE_UI_MODELCARD |
Boolean |
Enables the Model Card image tab in the UI. Defaults to `true`. |
UI_MODELCARD_ARTIFACT_TYPE |
String |
Defines the model card artifact type. |
UI_MODELCARD_ANNOTATION |
Object |
This optional field defines the manifest-level annotation of the model card stored in an OCI image. |
UI_MODELCARD_LAYER_ANNOTATION |
Object |
This optional field defines the layer annotation of the model card stored in an OCI image. |
FEATURE_UI_MODELCARD: true (1)
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel (2)
UI_MODELCARD_ANNOTATION: (3)
org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION: (4)
org.opencontainers.image.title: README.md
-
Enables the Model Card image tab in the UI.
-
Defines the model card artifact type. In this example, the artifact type is
application/x-mlmodel
. -
Optional. If an image does not have an
artifactType
defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matchingUI_MODELCARD_LAYER_ANNOTATION
. -
Optional. If an image has an
artifactType
defined and multiple layers, this field is used to locate the specific layer containing the model card.
Open Container Initiative referrers API configuration field
The Open Container Initiative (OCI) referrers API aids in the retrieval and management of referrers, which helps improve container image management.
Field | Type | Description |
---|---|---|
FEATURE_REFERRERS_API |
Boolean |
Enables OCI 1.1’s referrers API. |
# ...
FEATURE_REFERRERS_API: True
# ...
Quota management and proxy cache features
This section outlines configuration fields related to enforcing storage limits and improving image availability through proxy caching.
These features help registry administrators:
-
Control how much storage organizations and users consume with configurable quotas.
-
Improve access to upstream images by caching remote content locally via proxy cache.
-
Monitor and manage resource consumption and availability across distributed environments.
Collectively, these capabilities support better performance, governance, and resiliency when managing container image workflows.
Quota management configuration fields
The following configuration fields enable and customize quota management functionality in Project Quay. Quota management helps administrators enforce storage usage policies at the organization level by allowing them to set usage limits, calculate blob sizes, and control tag deletion behavior.
Field | Type | Description |
---|---|---|
FEATURE_QUOTA_MANAGEMENT | Boolean | Enables configuration, caching, and validation for the quota management feature. Default: False |
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES | String | Enables a system default quota reject byte allowance for all organizations. By default, no limit is set. |
QUOTA_BACKFILL | Boolean | Enables the quota backfill worker to calculate the size of pre-existing blobs. |
QUOTA_TOTAL_DELAY_SECONDS | String | The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals, so this field must be set to a time longer than it takes for the rolling deployment to complete. |
PERMANENTLY_DELETE_TAGS | Boolean | Enables functionality related to the removal of tags from the time machine window. |
RESET_CHILD_MANIFEST_EXPIRATION | Boolean | Resets the expirations of temporary tags targeting the child manifests. With this feature set to true, child manifests are immediately garbage collected. |
# ...
FEATURE_QUOTA_MANAGEMENT: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: "100gb"
QUOTA_BACKFILL: true
QUOTA_TOTAL_DELAY_SECONDS: "3600"
PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
# ...
Proxy cache configuration fields
The proxy cache configuration in Project Quay enables Project Quay to act as a pull-through cache for upstream container registries. When FEATURE_PROXY_CACHE is enabled, Project Quay can cache images that are pulled from external registries, reducing bandwidth consumption and improving image retrieval speed on subsequent requests.
Field | Type | Description |
---|---|---|
FEATURE_PROXY_CACHE | Boolean | Enables Project Quay to act as a pull-through cache for upstream registries. |
# ...
FEATURE_PROXY_CACHE: true
# ...
QuayIntegration configuration fields
The QuayIntegration custom resource enables integration between your OpenShift Container Platform cluster and a Project Quay registry instance.
Name | Description | Schema |
---|---|---|
allowlistNamespaces | A list of namespaces to include. | Array |
clusterID | The ID associated with this cluster. | String |
credentialsSecret.key | The secret containing credentials to communicate with the Quay registry. | Object |
denylistNamespaces | A list of namespaces to exclude. | Array |
insecureRegistry | Whether to skip TLS verification to the Quay registry. | Boolean |
quayHostname | The hostname of the Quay registry. | String |
scheduledImageStreamImport | Whether to enable image stream importing. | Boolean |
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
name: example-quayintegration
spec:
clusterID: 1df512fc-bf70-11ee-bb31-001a4a160100
quayHostname: quay.example.com
credentialsSecret:
name: quay-creds-secret
key: token
allowlistNamespaces:
- dev-team
- prod-team
denylistNamespaces:
- test
insecureRegistry: false
scheduledImageStreamImport: true
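The credentialsSecret referenced above must already exist on the cluster. As an illustrative sketch (the namespace is a placeholder, the token value must be replaced with real credentials, and the exact layout is an assumption rather than a verbatim requirement), such a secret might look like:
apiVersion: v1
kind: Secret
metadata:
  name: quay-creds-secret
  namespace: openshift-operators
type: Opaque
stringData:
  # The key name here matches credentialsSecret.key in the QuayIntegration example.
  token: <quay_oauth_access_token>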
Mail configuration fields
Mail configuration enables email notifications from your Project Quay instance, such as account confirmations, password resets, and security alerts. These settings allow Project Quay to connect to your SMTP server and send outbound messages on behalf of your registry.
Field | Type | Description |
---|---|---|
FEATURE_MAILING | Boolean | Whether emails are enabled. |
MAIL_DEFAULT_SENDER | String | If specified, the e-mail address used as the from address when Project Quay sends e-mails. |
MAIL_PASSWORD | String | The SMTP password to use when sending e-mails. |
MAIL_PORT | Number | The SMTP port to use. If not specified, defaults to 587. |
MAIL_SERVER | String | The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true. |
MAIL_USERNAME | String | The SMTP username to use when sending e-mails. |
MAIL_USE_TLS | Boolean | If specified, whether to use TLS for sending e-mails. |
# ...
FEATURE_MAILING: true
MAIL_DEFAULT_SENDER: "support@example.com"
MAIL_SERVER: "smtp.example.com"
MAIL_PORT: 587
MAIL_USERNAME: "smtp-user@example.com"
MAIL_PASSWORD: "your-smtp-password"
MAIL_USE_TLS: true
# ...
Environment variable configuration
Project Quay supports a limited set of environment variables that control runtime behavior and performance tuning. These values provide flexibility in specific scenarios where per-process behavior, connection counts, or regional configuration must be adjusted dynamically.
Use environment variables cautiously. These options typically override or augment existing configuration mechanisms.
This section documents environment variables related to the following components:
-
Geo-replication preferences
-
Database connection pooling
-
HTTP connection concurrency
-
Worker process scaling
Geo-replication
Project Quay supports multi-region deployments where multiple instances operate across geographically distributed sites. In these scenarios, each site shares the same configuration and metadata, but storage backends might vary between regions.
To accommodate this, Project Quay allows specifying a preferred storage engine for each deployment using an environment variable. This ensures that while metadata remains synchronized across all regions, each region can use its own optimized storage backend without requiring separate configuration files.
Use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable to explicitly set the preferred storage engine by its ID, as defined in DISTRIBUTED_STORAGE_CONFIG.
Variable | Type | Description |
---|---|---|
QUAY_DISTRIBUTED_STORAGE_PREFERENCE | String | The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. |
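For example, two regions might share the following DISTRIBUTED_STORAGE_CONFIG while each site prefers its local engine. This is a minimal sketch: the engine IDs usstorage and eustorage, the bucket names, and the credential placeholders are hypothetical:
# config.yaml shared by both regions
DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - S3Storage
    - s3_bucket: quay-us
      storage_path: /datastorage/registry
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      host: s3.us-east-1.amazonaws.com
  eustorage:
    - S3Storage
    - s3_bucket: quay-eu
      storage_path: /datastorage/registry
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      host: s3.eu-west-1.amazonaws.com
The US site would then start with QUAY_DISTRIBUTED_STORAGE_PREFERENCE=usstorage in its environment, and the European site with QUAY_DISTRIBUTED_STORAGE_PREFERENCE=eustorage, with no other difference in configuration.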
Database connection pooling
Project Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.
Database connection pooling is enabled by default, and each process that interacts with the database contains a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Project Quay container. Under certain deployments and loads, this might require analysis to ensure that Project Quay does not exceed the configured database’s maximum connection count.
Over time, the connection pools release idle connections. To release all connections immediately, Project Quay requires a restart.
Variable | Type | Description |
---|---|---|
DB_CONNECTION_POOLING | String | Whether to enable or disable database connection pooling. Defaults to true. Accepted values are "true" or "false". |
If database connection pooling is enabled, it is possible to change the maximum size of the connection pool through the following config.yaml option:
# ...
DB_CONNECTION_ARGS:
max_connections: 10
# ...
Disabling database pooling in standalone deployments
For standalone Project Quay deployments, database connection pooling can be toggled off when starting your deployment. For example:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -v $QUAY/config:/conf/stack:Z \
   -v $QUAY/storage:/datastorage:Z \
   -e DB_CONNECTION_POOLING=false \
   registry.redhat.io/quay/quay-rhel8:v3.12.1
Disabling database pooling for Red Hat Quay on OpenShift Container Platform
For Red Hat Quay on OpenShift Container Platform, database connection pooling can be configured by modifying the QuayRegistry
custom resource definition (CRD). For example:
spec:
components:
- kind: quay
managed: true
overrides:
env:
- name: DB_CONNECTION_POOLING
value: "false"
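For context, a more complete QuayRegistry resource containing this override might look like the following sketch; the metadata values are placeholders:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: quay
      managed: true
      overrides:
        env:
          # Disable per-process database connection pooling.
          - name: DB_CONNECTION_POOLING
            value: "false"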
HTTP connection counts
You can control the number of simultaneous HTTP connections handled by Project Quay using environment variables. These limits can be applied globally or scoped to individual components (registry, web UI, or security scanning). By default, each worker process allows up to 50 parallel connections.
This setting is distinct from the number of worker processes.
These connection-related environment variables can be configured differently depending on your deployment type:
-
In standalone deployments, configure connection counts in the config.yaml file.
-
In Red Hat Quay on OpenShift Container Platform deployments, define the values in the env block of the QuayRegistry CR.
Variable | Type | Description |
---|---|---|
WORKER_CONNECTION_COUNT | Number | Global default for the maximum number of HTTP connections per worker process. |
WORKER_CONNECTION_COUNT_REGISTRY | Number | HTTP connections per registry worker. |
WORKER_CONNECTION_COUNT_WEB | Number | HTTP connections per web UI worker. |
WORKER_CONNECTION_COUNT_SECSCAN | Number | HTTP connections per Clair security scanner worker. |
# config.yaml
WORKER_CONNECTION_COUNT: 10
WORKER_CONNECTION_COUNT_REGISTRY: 10
WORKER_CONNECTION_COUNT_WEB: 10
WORKER_CONNECTION_COUNT_SECSCAN: 10
# QuayRegistry CR
env:
- name: WORKER_CONNECTION_COUNT
value: "10"
- name: WORKER_CONNECTION_COUNT_REGISTRY
value: "10"
- name: WORKER_CONNECTION_COUNT_WEB
value: "10"
- name: WORKER_CONNECTION_COUNT_SECSCAN
value: "10"
Worker process counts
You can control the number of worker processes that handle incoming requests in Project Quay using environment variables. These values define how many parallel processes are started to handle tasks for different components of the system, such as the registry, the web UI, and security scanning.
If not explicitly set, Project Quay calculates the number of worker processes automatically based on the number of available CPU cores. While this dynamic scaling can optimize performance on larger machines, it may also lead to unnecessary resource usage in smaller or more controlled environments.
In Red Hat Quay on OpenShift Container Platform deployments, the Operator sets the following default values:
-
WORKER_COUNT_REGISTRY: 8
-
WORKER_COUNT_WEB: 4
-
WORKER_COUNT_SECSCAN: 2
Variable | Type | Description |
---|---|---|
WORKER_COUNT | Number | Global override for the number of worker processes across all components. |
WORKER_COUNT_REGISTRY | Number | Number of worker processes assigned to handle registry API traffic. |
WORKER_COUNT_WEB | Number | Number of worker processes assigned to handle web UI requests. |
WORKER_COUNT_SECSCAN | Number | Number of worker processes assigned to handle security scanning operations, for example, Clair integration. |
# config.yaml
WORKER_COUNT: 10
WORKER_COUNT_REGISTRY: 16
WORKER_COUNT_WEB: 8
WORKER_COUNT_SECSCAN: 4
# QuayRegistry CR
env:
- name: WORKER_COUNT
value: "10"
- name: WORKER_COUNT_REGISTRY
value: "16"
- name: WORKER_COUNT_WEB
value: "8"
- name: WORKER_COUNT_SECSCAN
value: "4"
Clair security scanner
Configuration fields for Clair have been moved to Clair configuration overview. This chapter will be removed in a future version of Project Quay.
Project Quay Security Scanning with Clair V2
Project Quay supports scanning container images for known vulnerabilities with a scanning engine such as Clair. This document explains how to configure Clair with Project Quay.
Note
|
With the release of Project Quay 3.4, the default version of Clair is V4. Clair V4 is no longer released as a Technology Preview and is supported for production use. Customers are strongly encouraged to use Clair V4 with Project Quay 3.4. It is possible to run both Clair V4 and Clair V2 simultaneously if desired. Clair V2 will eventually be removed in a future version of Project Quay. |
Set up Clair V2 in the Project Quay config tool
Enabling Clair V2 in Project Quay consists of:
-
Starting the Project Quay config tool. See the Project Quay deployment guide for the type of deployment you are doing (OpenShift, Basic, or HA) for how to start the config tool for that environment.
-
Enabling security scanning, then generating a private key and PEM file in the config tool
-
Including the key and PEM file in the Clair config file
-
Starting the Clair container
The procedure varies, based on whether you are running Project Quay on OpenShift or directly on a host.
Enabling Clair V2 on a Project Quay OpenShift deployment
To set up Clair V2 on Project Quay in OpenShift, see Add Clair image scanning to Project Quay.
Enabling Clair V2 on a Project Quay Basic or HA deployment
To set up Clair V2 on a Project Quay deployment where the container is running directly on the host system, do the following:
-
Restart the Project Quay config tool: Run the Quay container again in config mode, open the configuration UI in a browser, then select Modify an existing configuration. When prompted, upload the quay-config.tar.gz file that was originally created for the deployment.
-
Enable Security Scanning: Scroll to the Security Scanner section and select the "Enable Security Scanning" checkbox. From the fields that appear you need to create an authentication key and enter the security scanner endpoint. Here’s how:
-
Generate key: Click Create Key, then from the pop-up window type a name for the Clair private key and an optional expiration date (if blank, the key never expires). Then select Generate Key.
-
Copy the Clair key and PEM file: Save the Key ID (to a notepad or similar) and download a copy of the Private Key PEM file (named security_scanner.pem) by selecting "Download Private Key". If you lose the key, you need to generate a new one. You will need the key and PEM file when you start the Clair container later. Close the pop-up when you are done.
-
Save the configuration: Click Save Configuration Changes and then select Download Configuration to save it to your local system.
-
Deploy the configuration: To pick up the changes enabling scanning, as well as other changes you may have made to the configuration, unpack the quay-config.tar.gz file and copy the resulting files to the config directory. For example:
$ tar xvf quay-config.tar.gz
config.yaml ssl.cert ssl.key
$ cp config.yaml ssl* /mnt/quay/config
Next, start the Clair V2 container and associated database, as described in the following sections.
Setting Up Clair V2 Security Scanning
Once you have created the necessary key and PEM files from the Project Quay config UI, you are ready to start the Clair V2 container and its associated database. Once that is done, you can restart your Project Quay cluster to have those changes take effect.
Procedures for running the Clair V2 container and associated database are different on OpenShift than they are for running those containers directly on a host.
Run Clair V2 on a Project Quay OpenShift deployment
To run the Clair V2 image scanning container and its associated database on an OpenShift environment with your Project Quay cluster, see Add Clair image scanning to Project Quay.
Run Clair V2 on a Project Quay Basic or HA deployment
To run Clair V2 and its associated database on non-OpenShift environments (directly on a host), you need to:
-
Start up a database
-
Configure and start Clair V2
Get Postgres and Clair
In order to run Clair, a database is required. MySQL is not supported for production deployments. For production, we recommend PostgreSQL or another supported database:
-
Running on machines other than those running Project Quay
-
Ideally with automatic replication and failover
For testing purposes, a single PostgreSQL instance can be started locally:
-
To start Postgres locally, do the following:
# sudo podman run --name postgres -p 5432:5432 -d postgres
# sleep 5
# sudo podman run --rm --link postgres:postgres postgres \
  sh -c 'echo "create database clairtest" | psql -h \
  "$POSTGRES_PORT_5432_TCP_ADDR" -p \
  "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
The configuration string for this test database is:
postgresql://postgres@{DOCKER HOST GOES HERE}:5432/clairtest?sslmode=disable
-
Pull the security-enabled Clair image:
You will need to build your own Clair container and pull it during this step. Instructions for building the Clair container are not yet available.
-
Make a configuration directory for Clair
# mkdir clair-config
# cd clair-config
Configure Clair V2
Clair V2 can run either as a single instance or in high-availability mode. It is recommended to run more than a single instance of Clair, ideally in an auto-scaling group with automatic healing.
-
Create a config.yaml file to be used in the Clair V2 config directory (/clair/config) from one of the two Clair configuration files shown here.
-
If you are doing a high-availability installation, go through the procedure in Authentication for high-availability scanners to create a Key ID and Private Key (PEM).
-
Save the Private Key (PEM) to a file (such as $HOME/config/security_scanner.pem).
-
Replace the value of key_id (CLAIR_SERVICE_KEY_ID) with the Key ID you generated and the value of private_key_path with the location of the PEM file (for example, /config/security_scanner.pem).
For example, those two values might now appear as:
key_id: 4fb9063a7cac00b567ee921065ed16fed7227afd806b4d67cc82de67d8c781b1
private_key_path: /clair/config/security_scanner.pem
-
Change other values in the configuration file as needed.
Clair V2 configuration: High availability
clair:
database:
type: pgsql
options:
# A PostgreSQL Connection string pointing to the Clair Postgres database.
# Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
source: { POSTGRES_CONNECTION_STRING }
cachesize: 16384
api:
# The port at which Clair will report its health status. For example, if Clair is running at
# https://clair.mycompany.com, the health will be reported at
# http://clair.mycompany.com:6061/health.
healthport: 6061
port: 6062
timeout: 900s
# paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
paginationkey: "XxoPtCUzrUv4JV5dS+yQ+MdW7yLEJnRMwigVY/bpgtQ="
updater:
# interval defines how often Clair will check for updates from its upstream vulnerability databases.
interval: 6h
notifier:
attempts: 3
renotifyinterval: 1h
http:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# For example: https://myregistry.mycompany.com
endpoint: { QUAY_ENDPOINT }/secscan/notify
proxy: http://localhost:6063
jwtproxy:
signer_proxy:
enabled: true
listen_addr: :6063
ca_key_file: /certificates/mitm.key # Generated internally, do not change.
ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
signer:
issuer: security_scanner
expiration_time: 5m
max_skew: 1m
nonce_length: 32
private_key:
type: preshared
options:
# The ID of the service key generated for Clair. The ID is returned when setting up
# the key in [Quay Setup](security-scanning.md)
key_id: { CLAIR_SERVICE_KEY_ID }
private_key_path: /clair/config/security_scanner.pem
verifier_proxies:
- enabled: true
# The port at which Clair will listen.
listen_addr: :6060
# If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
# section below for more information.
# key_file: /clair/config/clair.key
# crt_file: /clair/config/clair.crt
verifier:
# CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
# specified here must match the listen_addr port a few lines above this.
# Example: https://myclair.mycompany.com:6060
audience: { CLAIR_ENDPOINT }
upstream: http://localhost:6062
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# Example: https://myregistry.mycompany.com
registry: { QUAY_ENDPOINT }/keys/
Clair V2 configuration: Single instance
clair:
database:
type: pgsql
options:
# A PostgreSQL Connection string pointing to the Clair Postgres database.
# Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
source: { POSTGRES_CONNECTION_STRING }
cachesize: 16384
api:
# The port at which Clair will report its health status. For example, if Clair is running at
# https://clair.mycompany.com, the health will be reported at
# http://clair.mycompany.com:6061/health.
healthport: 6061
port: 6062
timeout: 900s
# paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
paginationkey:
updater:
# interval defines how often Clair will check for updates from its upstream vulnerability databases.
interval: 6h
notifier:
attempts: 3
renotifyinterval: 1h
http:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# For example: https://myregistry.mycompany.com
endpoint: { QUAY_ENDPOINT }/secscan/notify
proxy: http://localhost:6063
jwtproxy:
signer_proxy:
enabled: true
listen_addr: :6063
ca_key_file: /certificates/mitm.key # Generated internally, do not change.
ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
signer:
issuer: security_scanner
expiration_time: 5m
max_skew: 1m
nonce_length: 32
private_key:
type: autogenerated
options:
rotate_every: 12h
key_folder: /clair/config/
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# For example: https://myregistry.mycompany.com
registry: { QUAY_ENDPOINT }/keys/
verifier_proxies:
- enabled: true
# The port at which Clair will listen.
listen_addr: :6060
# If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
# section below for more information.
# key_file: /clair/config/clair.key
# crt_file: /clair/config/clair.crt
verifier:
# CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
# specified here must match the listen_addr port a few lines above this.
# Example: https://myclair.mycompany.com:6060
audience: { CLAIR_ENDPOINT }
upstream: http://localhost:6062
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# Example: https://myregistry.mycompany.com
registry: { QUAY_ENDPOINT }/keys/
Configuring Clair V2 for TLS
To configure Clair to run with TLS, a few additional steps are required.
Using certificates from a public CA
For certificates that come from a public certificate authority, follow these steps:
-
Generate a TLS certificate and key pair for the DNS name at which Clair will be accessed
-
Place these files as clair.crt and clair.key in your Clair configuration directory
-
Uncomment the key_file and crt_file lines under verifier_proxies in your Clair config.yaml
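With those files in place, the relevant portion of the Clair config.yaml (assuming the configuration directory is mounted at /clair/config, as in the examples above) looks like:
verifier_proxies:
  - enabled: true
    listen_addr: :6060
    key_file: /clair/config/clair.key
    crt_file: /clair/config/clair.crt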
If your certificates use a public CA, you are now ready to run Clair. If you are using your own certificate authority, configure Clair to trust it as described in the next section.
Configuring trust of self-signed SSL
Similar to the process for setting up Docker to trust your self-signed certificates, Clair must also be configured to trust your certificates. Using the same CA certificate bundle used to configure Docker, complete the following steps:
-
Rename the same CA certificate bundle used to set up Quay Registry to ca.crt
-
Make sure the ca.crt file is mounted inside the Clair container under /etc/pki/ca-trust/source/anchors/. You will need to build your own Clair container and run it during this step; instructions for building the Clair container are not yet available.
Now Clair will be able to trust the source of your TLS certificates and use them to secure communication between Clair and Quay.
Using Clair V2 data sources
Before scanning container images, Clair tries to figure out the operating system on which the container was built. It does this by looking for specific filenames inside that image (see Table 1). Once Clair knows the operating system, it uses specific security databases to check for vulnerabilities (see Table 2).
Operating system | Files identifying OS type |
---|---|
Red Hat/CentOS/Oracle | etc/oracle-release, etc/centos-release, etc/redhat-release, etc/system-release |
Alpine | etc/alpine-release |
Debian/Ubuntu | etc/os-release, usr/lib/os-release, etc/apt/sources.list |
Ubuntu | etc/lsb-release |
The data sources that Clair uses to scan containers are shown in Table 2.
Note
|
You must be sure that Clair has access to all listed data sources by whitelisting access to each data source's location. You might need to add a wildcard character (*) at the end of some URLs that might not be fully complete because they are dynamically built by code. |
Data source | Data collected | Whitelist links | Format | License |
---|---|---|---|---|
Debian 6, 7, 8, unstable namespaces | | | | |
Ubuntu 12.04, 12.10, 13.04, 14.04, 14.10, 15.04, 15.10, 16.04 namespaces | | | | |
CentOS 5, 6, 7 namespace | | | | |
Oracle Linux 5, 6, 7 namespaces | | | | |
Alpine 3.3, 3.4, 3.5 namespaces | | | | |
Generic vulnerability metadata | N/A | | | |
Amazon Linux 2018.03, 2 namespaces | | | | |
Run Clair V2
To run Clair V2, you will need to build your own Clair container and run it during this step. Instructions for building the Clair container are not yet available.
Output similar to the following will be seen on success:
2016-05-04 20:01:05,658 CRIT Supervisor running as root (no user in config file)
2016-05-04 20:01:05,662 INFO supervisord started with pid 1
2016-05-04 20:01:06,664 INFO spawned: 'jwtproxy' with pid 8
2016-05-04 20:01:06,666 INFO spawned: 'clair' with pid 9
2016-05-04 20:01:06,669 INFO spawned: 'generate_mitm_ca' with pid 10
time="2016-05-04T20:01:06Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:06Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
2016-05-04 20:01:06.715037 I | pgsql: running database migrations
time="2016-05-04T20:01:06Z" level=error msg="Failed to create forward proxy: open /certificates/mitm.crt: no such file or directory"
goose: no migrations to run. current version: 20151222113213
2016-05-04 20:01:06.730291 I | pgsql: database migration ran successfully
2016-05-04 20:01:06.730657 I | notifier: notifier service is disabled
2016-05-04 20:01:06.731110 I | api: starting main API on port 6062.
2016-05-04 20:01:06.736558 I | api: starting health API on port 6061.
2016-05-04 20:01:06.736649 I | updater: updater service is disabled.
2016-05-04 20:01:06,740 INFO exited: jwtproxy (exit status 0; not expected)
2016-05-04 20:01:08,004 INFO spawned: 'jwtproxy' with pid 1278
2016-05-04 20:01:08,004 INFO success: clair entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-05-04 20:01:08,004 INFO success: generate_mitm_ca entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
time="2016-05-04T20:01:08Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:08Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
time="2016-05-04T20:01:08Z" level=info msg="Starting forward proxy (Listening on ':6063')"
2016-05-04 20:01:08,541 INFO exited: generate_mitm_ca (exit status 0; expected)
2016-05-04 20:01:09,543 INFO success: jwtproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
To verify Clair V2 is running, execute the following command:
curl -X GET -I http://path/to/clair/here:6061/health
If a 200 OK code is returned, Clair is running:
HTTP/1.1 200 OK
Server: clair
Date: Wed, 04 May 2016 20:02:16 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Once Clair V2 and its associated database are running, you may need to restart your Project Quay application for the changes to take effect.