Getting started with Project Quay configuration
Project Quay can be deployed as a standalone registry or by using the Project Quay Operator on OpenShift Container Platform.
How you create, retrieve, update, and validate the Project Quay configuration varies depending on the type of deployment you are using. However, the core configuration options are the same for either deployment type. Core configuration can be set by one of the following options:
- Directly, by editing the config.yaml file. See Editing the configuration file for more information.
- Programmatically, by using the configuration API. See Using the configuration API for more information.
- Visually, by using the configuration tool UI. See Using the configuration tool for more information.
For standalone deployments of Project Quay, you must supply the minimum required configuration parameters before the registry can be started. The minimum requirements to start a Project Quay registry can be found in the Retrieving the current configuration section.
If you install Project Quay on OpenShift Container Platform using the Project Quay Operator, you do not need to supply configuration parameters because the Project Quay Operator supplies default information to deploy the registry.
After you have deployed Project Quay with the desired configuration, you should retrieve and save the full configuration from your deployment. The full configuration contains additional generated values that you might need when restarting or upgrading your system.
Configuration updates for Quay 3.8
Project Quay 3.8 configuration fields
The following configuration fields have been introduced with Project Quay 3.8:
Field | Type | Description |
---|---|---|
FEATURE_UI_V2 |
Boolean |
When set, allows users to try the beta UI environment. Default: false |
FEATURE_LISTEN_IP_VERSION |
String |
Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Project Quay fails to start. Default: IPv4. Additional configurations: IPv6, dual-stack |
LDAP_SUPERUSER_FILTER |
String |
Subset of the LDAP_USER_FILTER configuration field. With this field, administrators can add or remove superusers without having to update the Project Quay configuration file and restart their deployment. |
LDAP_RESTRICTED_USER_FILTER |
String |
Subset of the LDAP_USER_FILTER configuration field. When configured, Lightweight Directory Access Protocol (LDAP) users that match the filter are treated as restricted users. |
FEATURE_SUPERUSERS_FULL_ACCESS |
Boolean |
Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: false |
GLOBAL_READONLY_SUPER_USERS |
String |
When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. |
FEATURE_RESTRICTED_USERS |
Boolean |
When set with RESTRICTED_USERS_WHITELIST, users are restricted from creating content in their own namespace unless they are included in the whitelist. Default: false |
RESTRICTED_USERS_WHITELIST |
String |
When set with FEATURE_RESTRICTED_USERS, specific users are excluded from the FEATURE_RESTRICTED_USERS setting. |
Configuration updates for Quay 3.7
New configuration fields for Project Quay 3.7.7
Field | Type | Description |
---|---|---|
REPO_MIRROR_ROLLBACK |
Boolean |
When set to true, the repository rolls back after a failed mirror attempt. Default: false |
New configuration fields
The following configuration fields have been introduced with Project Quay 3.7:
Parameter | Description |
---|---|
FEATURE_QUOTA_MANAGEMENT |
Quota management is now supported. With this feature, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. For more information about quota management, see Project Quay Quota management and enforcement. |
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES |
The quota size to apply to all organizations and users. For more information about quota management, see Project Quay Quota management and enforcement. |
FEATURE_PROXY_CACHE |
Using Project Quay to proxy a remote organization is now supported. With this feature, Project Quay acts as a proxy cache to circumvent pull-rate limitations from upstream registries. For more information, see Project Quay as proxy cache for upstream registries. |
Configuration updates for Project Quay 3.6
New configuration fields
The following configuration fields have been introduced with Project Quay 3.6:
Parameter | Description |
---|---|
FEATURE_EXTENDED_REPOSITORY_NAMES |
Support for nested repositories and extended repository names has been added. This change allows the use of / in repository names, which is needed for certain OpenShift Container Platform use cases. |
FEATURE_USER_INITIALIZE |
If set to true, the first user account can be created by using the /api/v1/user/initialize API endpoint. |
ALLOWED_OCI_ARTIFACT_TYPES |
Helm, cosign, and zstd compression scheme artifacts are built into Project Quay 3.6 by default. For any other Open Container Initiative (OCI) media types that are not supported by default, you can add them to the ALLOWED_OCI_ARTIFACT_TYPES configuration. |
CREATE_PRIVATE_REPO_ON_PUSH |
Registry users now have the option to set CREATE_PRIVATE_REPO_ON_PUSH to true or false in their config.yaml, depending on their security needs. |
CREATE_NAMESPACE_ON_PUSH |
Pushing to a non-existent organization can now be configured to automatically create the organization. |
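Taken together, the push-related fields above can be sketched in config.yaml as follows. This is an illustrative fragment, not a set of defaults:

```yaml
# Illustrative sketch: create a missing organization automatically on push,
# keep push-created repositories public, and allow nested repository names.
CREATE_NAMESPACE_ON_PUSH: true
CREATE_PRIVATE_REPO_ON_PUSH: false
FEATURE_EXTENDED_REPOSITORY_NAMES: true
```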
Deprecated configuration fields
The following configuration fields have been deprecated with Project Quay 3.6:
Parameter | Description |
---|---|
FEATURE_HELM_OCI_SUPPORT |
This option has been deprecated and will be removed in a future version of Project Quay. In Project Quay 3.6, Helm artifacts are supported by default and included under the FEATURE_GENERAL_OCI_SUPPORT configuration field. |
Editing the configuration file
To deploy a standalone instance of Project Quay, you must provide the minimal configuration information. The requirements for a minimal configuration can be found in Project Quay minimal configuration.
After supplying the required fields, you can validate your configuration. If there are any issues, they will be highlighted.
Note: It is possible to use the configuration API to validate the configuration, but this requires starting the Quay container in configuration mode. For more information, see Using the configuration tool.
For changes to take effect, the registry must be restarted.
Location of configuration file in a standalone deployment
For standalone deployments of Project Quay, the config.yaml file must be specified when starting the Project Quay registry. This file is located in the configuration volume. For example, the configuration file is located at $QUAY/config/config.yaml when deploying Project Quay by the following command:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
--name=quay \
-v $QUAY/config:/conf/stack:Z \
-v $QUAY/storage:/datastorage:Z \
{productrepo}/{quayimage}:{productminv}
Minimal configuration
The following configuration options are required for a standalone deployment of Project Quay:
- Server hostname
- HTTP or HTTPS
- Authentication type, for example, Database or Lightweight Directory Access Protocol (LDAP)
- Secret keys for encrypting data
- Storage for images
- Database for metadata
- Redis for build logs and user events
- Tag expiration options
Sample minimal configuration file
The following example shows a sample minimal configuration file that uses local storage for images:
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
  ssl: false
DATABASE_SECRET_KEY: 0ce4f796-c295-415b-bf9d-b315114704b8
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DEFAULT_TAG_EXPIRATION: 2w
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
PREFERRED_URL_SCHEME: http
SECRET_KEY: e8f9fe68-1f84-48a8-a05f-02d72e6eccba
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
TAG_EXPIRATION_OPTIONS:
- 0s
- 1d
- 1w
- 2w
- 4w
USER_EVENTS_REDIS:
  host: quay-server.example.com
  port: 6379
  ssl: false
Note
|
The |
Local storage
Using local storage for images is only recommended when deploying a registry for proof of concept purposes.
When configuring local storage, storage is specified on the command line when starting the registry. The following command maps a local directory, $QUAY/storage, to the datastorage path in the container:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
--name=quay \
-v $QUAY/config:/conf/stack:Z \
-v $QUAY/storage:/datastorage:Z \
{productrepo}/{quayimage}:{productminv}
Cloud storage
Storage configuration is detailed in the Image storage section. For some users, it might be useful to compare the difference between Google Cloud Platform and local storage configurations. For example, the following YAML presents a Google Cloud Platform storage configuration:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - GoogleCloudStorage
    - access_key: GOOGQIMFB3ABCDEFGHIJKLMN
      bucket_name: quay_bucket
      secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
When starting the registry using cloud storage, no configuration is required on the command line. For example:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
--name=quay \
-v $QUAY/config:/conf/stack:Z \
{productrepo}/{quayimage}:{productminv}
Configuration fields
This section describes both the required and optional configuration fields for deploying Project Quay.
Required configuration fields
The fields required to configure Project Quay are covered in the following sections:
Automation options
The following sections describe the available automation options for Project Quay deployments:
Optional configuration fields
Optional fields for Project Quay can be found in the following sections:
General required fields
The following table describes the required configuration fields for a Project Quay deployment:
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE |
String |
The authentication engine to use for credential authentication. |
PREFERRED_URL_SCHEME |
String |
The URL scheme to use when accessing Project Quay. |
SERVER_HOSTNAME |
String |
The URL at which Project Quay is accessible, without the scheme. |
DATABASE_SECRET_KEY |
String |
Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated. |
SECRET_KEY |
String |
Key used to encrypt sensitive fields within the database and at run time. This value should never be changed once set, otherwise all reliant fields, for example, encrypted password credentials, are invalidated. |
SETUP_COMPLETE |
Boolean |
This is an artifact left over from earlier versions of the software, and currently it must be specified with a value of true. |
Database configuration
This section describes the database configuration fields available for Project Quay deployments.
Database URI
With Project Quay, connection to the database is configured by using the required DB_URI field.
The following table describes the DB_URI configuration field:
Field | Type | Description |
---|---|---|
DB_URI |
String |
The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay |
Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific.
The following table describes database connection arguments:
Field | Type | Description |
---|---|---|
DB_CONNECTION_ARGS |
Object |
Optional connection arguments for the database, such as timeouts and SSL. |
.autorollback |
Boolean |
Whether to use auto-rollback connections. |
.threadlocals |
Boolean |
Whether to use thread-local connections. |
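A minimal sketch combining the generic connection arguments above; the values shown are illustrative:

```yaml
DB_CONNECTION_ARGS:
  # Enable auto-rollback connections
  autorollback: true
  # Enable thread-local connections
  threadlocals: true
```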
PostgreSQL SSL connection arguments
With SSL, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL configuration:
DB_CONNECTION_ARGS:
sslmode: verify-ca
sslrootcert: /path/to/cacert
The sslmode option determines whether, or with what priority, a secure SSL TCP/IP connection is negotiated with the server. There are six modes:
Mode | Description |
---|---|
disable |
Your configuration only tries non-SSL connections. |
allow |
Your configuration first tries a non-SSL connection. Upon failure, tries an SSL connection. |
prefer |
Your configuration first tries an SSL connection. Upon failure, tries a non-SSL connection. |
require |
Your configuration only tries an SSL connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. |
verify-ca |
Your configuration only tries an SSL connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). |
verify-full |
Only tries an SSL connection, and verifies that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate. |
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
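For the strictest mode above, a hedged sketch of a verify-full configuration looks like the following; the certificate path is illustrative:

```yaml
DB_CONNECTION_ARGS:
  # Require SSL, verify the CA, and check that the server host name
  # matches the certificate
  sslmode: verify-full
  sslrootcert: /path/to/cacert
```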
MySQL SSL connection arguments
The following example shows a sample MySQL SSL configuration:
DB_CONNECTION_ARGS:
ssl:
ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
Image storage
This section details the image storage features and configuration fields that are available with Project Quay.
Image storage features
The following table describes the image storage features for Project Quay:
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR |
Boolean |
If set to true, enables repository mirroring. |
FEATURE_PROXY_STORAGE |
Boolean |
Whether to proxy all direct download URLs in storage through NGINX. |
FEATURE_STORAGE_REPLICATION |
Boolean |
Whether to automatically replicate between storage engines. |
Image storage configuration fields
The following table describes the image storage configuration fields for Project Quay:
Field | Type | Description |
---|---|---|
DISTRIBUTED_STORAGE_CONFIG |
Object |
Configuration for storage engine(s) to use in Project Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters. |
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS |
Array of string |
The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) whose images should be fully replicated, by default, to all other storage engines. |
DISTRIBUTED_STORAGE_PREFERENCE |
Array of string |
The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. A preferred engine means that it is first checked for pulling, and images are pushed to it. |
MAXIMUM_LAYER_SIZE |
String |
Maximum allowed size of an image layer. |
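For example, MAXIMUM_LAYER_SIZE takes a number suffixed with G or M; the value below is illustrative:

```yaml
# Raise the maximum image layer size to 100 GB (illustrative value)
MAXIMUM_LAYER_SIZE: 100G
```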
Local storage
The following YAML shows a sample configuration using local storage:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
OCS/NooBaa
The following YAML shows a sample configuration using an OpenShift Container Storage/NooBaa instance:
DISTRIBUTED_STORAGE_CONFIG:
  rhocsStorage:
    - RHOCSStorage
    - access_key: access_key_here
      secret_key: secret_key_here
      bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56
      hostname: s3.openshift-storage.svc.cluster.local
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
Ceph / RadosGW Storage / Hitachi HCP
The following YAML shows a sample configuration using Ceph/RadosGW and Hitachi HCP storage:
DISTRIBUTED_STORAGE_CONFIG:
  radosGWStorage:
    - RadosGWStorage
    - access_key: access_key_here
      secret_key: secret_key_here
      bucket_name: bucket_name_here
      hostname: hostname_here
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
AWS S3 storage
The following YAML shows a sample configuration using AWS S3 storage:
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- s3Storage
Google Cloud Storage
The following YAML shows a sample configuration using Google Cloud Storage:
DISTRIBUTED_STORAGE_CONFIG:
  googleCloudStorage:
    - GoogleCloudStorage
    - access_key: GOOGQIMFB3ABCDEFGHIJKLMN
      bucket_name: quay-bucket
      secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- googleCloudStorage
Azure Storage
The following YAML shows a sample configuration using Azure Storage:
DISTRIBUTED_STORAGE_CONFIG:
  azureStorage:
    - AzureStorage
    - azure_account_name: azure_account_name_here
      azure_container: azure_container_here
      azure_account_key: azure_account_key_here
      sas_token: some/path/
      storage_path: /datastorage/registry
      endpoint_url: https://[account-name].blob.core.usgovcloudapi.net (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- azureStorage
1. The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, endpoint_url connects to the normal Azure region.
As of Project Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service results in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
Swift storage
The following YAML shows a sample configuration using Swift storage:
DISTRIBUTED_STORAGE_CONFIG:
  swiftStorage:
    - SwiftStorage
    - swift_user: swift_user_here
      swift_password: swift_password_here
      swift_container: swift_container_here
      auth_url: https://example.org/swift/v1/quay
      auth_version: 1
      ca_cert_path: /conf/stack/swift.cert
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- swiftStorage
Redis configuration fields
This section details the configuration fields available for Redis deployments.
Build logs
The following build logs configuration fields are available for Redis deployments:
Field | Type | Description |
---|---|---|
BUILDLOGS_REDIS |
Object |
Redis connection details for build logs caching. |
.host |
String |
The hostname at which Redis is accessible. |
.port |
Number |
The port at which Redis is accessible. |
.password |
String |
The password to connect to the Redis instance. |
ssl |
Boolean |
Whether to enable TLS communication between Redis and Quay. Defaults to false. |
User events
The following user event fields are available for Redis deployments:
Field | Type | Description |
---|---|---|
USER_EVENTS_REDIS |
Object |
Redis connection details for user event handling. |
.host |
String |
The hostname at which Redis is accessible. |
.port |
Number |
The port at which Redis is accessible. |
.password |
String |
The password to connect to the Redis instance. |
ssl |
Boolean |
Whether to enable TLS communication between Redis and Quay. Defaults to false. |
Example Redis configuration
The following YAML shows a sample configuration using Redis:
BUILDLOGS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
  ssl: true
USER_EVENTS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
  ssl: true
Note: If your deployment uses Azure Cache for Redis and ssl is set to true, the port defaults to 6380.
ModelCache configuration options
The following options are available on Project Quay for configuring ModelCache.
Memcache configuration option
Memcache is the default ModelCache configuration option. With Memcache, no additional configuration is necessary.
Single Redis configuration option
The following configuration is for a single Redis instance with optional read-only replicas:
DATA_MODEL_CACHE_CONFIG:
  engine: redis
  redis_config:
    primary:
      host: <host>
      port: <port>
      password: <password if ssl is true>
      ssl: <true | false >
    replica:
      host: <host>
      port: <port>
      password: <password if ssl is true>
      ssl: <true | false >
Clustered Redis configuration option
Use the following configuration for a clustered Redis instance:
DATA_MODEL_CACHE_CONFIG:
  engine: rediscluster
  redis_config:
    startup_nodes:
      - host: <cluster-host>
        port: <port>
    password: <password if ssl: true>
    read_from_replicas: <true|false>
    skip_full_coverage_check: <true | false>
    ssl: <true | false >
Tag expiration configuration fields
The following tag expiration configuration fields are available with Project Quay:
Field | Type | Description |
---|---|---|
FEATURE_GARBAGE_COLLECTION |
Boolean |
Whether garbage collection of repositories is enabled. |
TAG_EXPIRATION_OPTIONS |
Array of string |
If enabled, the options that users can select for expiration of tags in their namespace. |
DEFAULT_TAG_EXPIRATION |
String |
The default, configurable tag expiration time for time machine. |
FEATURE_CHANGE_TAG_EXPIRATION |
Boolean |
Whether users and organizations are allowed to change the tag expiration for tags in their namespace. |
Example tag expiration configuration
The following YAML shows a sample tag expiration configuration:
DEFAULT_TAG_EXPIRATION: 2w
TAG_EXPIRATION_OPTIONS:
- 0s
- 1d
- 1w
- 2w
- 4w
Pre-configuring Project Quay for automation
Project Quay has several configuration options that support automation. These options can be set before deployment to minimize the need to interact with the user interface.
Allowing the API to create the first user
To create the first user using the /api/v1/user/initialize API, set the FEATURE_USER_INITIALIZE parameter to true. Unlike all other registry API calls, which require an OAuth token that is generated by an OAuth application in an existing organization, this API endpoint does not require authentication.
After you have deployed Project Quay, you can use the API to create a user, for example, quayadmin, assuming that no other users have already been created. For more information, see Using the API to create the first user.
Enabling general API access
Set the config option BROWSER_API_CALLS_XHR_ONLY to false to allow general access to the Project Quay registry API.
Adding a super user
After deploying Project Quay, you can create a user. We advise that the first user be given administrator privileges with full permissions. Full permissions can be configured in advance by using the SUPER_USERS configuration object. For example:
...
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
...
Restricting user creation
After you have configured a super user, you can restrict the ability to create new users to the super user group. Set FEATURE_USER_CREATION to false to restrict user creation. For example:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Enabling new functionality
To use new Project Quay 3.8 functionality, enable some or all of the following features:
...
FEATURE_UI_V2: true
FEATURE_LISTEN_IP_VERSION: <IPv4|IPv6|dual-stack>
FEATURE_SUPERUSERS_FULL_ACCESS: true
GLOBAL_READONLY_SUPER_USERS:
- <username>
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST:
- <username>
...
Enabling new functionality
To use new Project Quay 3.7 functionality, enable some or all of the following features:
...
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: true
FEATURE_PROXY_CACHE: true
FEATURE_STORAGE_REPLICATION: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
...
Suggested configuration for automation
The following config.yaml
parameters are suggested for automation:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Deploying the Project Quay Operator using the initial configuration
Use the following procedure to deploy Project Quay on OpenShift Container Platform using the initial configuration.
- You have installed the oc CLI.
- Create a secret using the configuration file:
  $ oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret
- Create a quayregistry.yaml file. Identify the unmanaged components and reference the created secret, for example:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    configBundleSecret: init-config-bundle-secret
- Deploy the Project Quay registry:
  $ oc create -n quay-enterprise -f quayregistry.yaml
Using the API to deploy Project Quay
This section introduces using the API to deploy Project Quay.
- The config option FEATURE_USER_INITIALIZE must be set to true.
- No users can already exist in the database.
For more information on pre-configuring your Project Quay deployment, see the section Pre-configuring Project Quay for automation.
Using the API to create the first user
Use the following procedure to create the first user in your Project Quay organization.
Note: This procedure requests an OAuth token by specifying "access_token": true in the request body.
- Using the status.registryEndpoint URL, invoke the /api/v1/user/initialize API, passing in the username, password, and email address by entering the following command:
  $ curl -X POST -k https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass123", "email": "quayadmin@example.com", "access_token": true}'
If successful, the command returns an object with the username, email, and encrypted password. For example:
{"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"quayadmin@example.com","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"}
If a user already exists in the database, an error is returned:
{"message":"Cannot initialize user in a non-empty database"}
If your password is not at least eight characters or contains whitespace, an error is returned:
{"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}
Using the OAuth token
After invoking the API, you can use the returned OAuth code to call the rest of the Project Quay API.
- You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address.
- Obtain the list of current users by entering the following command:
  $ curl -X GET -k -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/
Example output:
{
  "users": [
    {
      "kind": "user",
      "name": "quayadmin",
      "username": "quayadmin",
      "email": "quayadmin@example.com",
      "verified": true,
      "avatar": {
        "name": "quayadmin",
        "hash": "3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c",
        "color": "#e7ba52",
        "kind": "user"
      },
      "super_user": true,
      "enabled": true
    }
  ]
}
In this instance, the details for the quayadmin user are returned because it is the only user that has been created so far.
Using the API to create an organization
The following procedure details how to use the API to create a Project Quay organization.
- You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address.
- You have called the rest of the Project Quay API by specifying the returned OAuth code.
- To create an organization, use a POST call to the api/v1/organization/ endpoint:
  $ curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{"name": "testorg", "email": "testorg@example.com"}'
Example output:
"Created"
- You can retrieve the details of the organization you created by entering the following command:
  $ curl -X GET -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg
Example output:
{
  "name": "testorg",
  "email": "testorg@example.com",
  "avatar": {
    "name": "testorg",
    "hash": "5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8",
    "color": "#a55194",
    "kind": "user"
  },
  "is_admin": true,
  "is_member": true,
  "teams": {
    "owners": {
      "name": "owners",
      "description": "",
      "role": "admin",
      "avatar": {
        "name": "owners",
        "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90",
        "color": "#c7c7c7",
        "kind": "team"
      },
      "can_view": true,
      "repo_count": 0,
      "member_count": 1,
      "is_synced": false
    }
  },
  "ordered_teams": [
    "owners"
  ],
  "invoice_email": false,
  "invoice_email_address": null,
  "tag_expiration_s": 1209600,
  "is_free_account": true
}
Basic configuration fields
Field | Type | Description |
---|---|---|
REGISTRY_TITLE |
String |
If specified, the long-form title for the registry. It should not exceed 35 characters. It is displayed in the frontend of your Project Quay deployment, for example, in your browser tab. |
REGISTRY_TITLE_SHORT |
String |
If specified, the short-form title for the registry. It is displayed in the frontend of your Project Quay deployment, for example, in your browser tab. |
BRANDING |
Object |
Custom branding for logos and URLs in the Project Quay UI. |
.logo |
String |
Main logo image URL. |
.footer_img |
String |
Logo for UI footer. |
.footer_url |
String |
Link for footer image. |
CONTACT_INFO |
Array of String |
If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly to it. |
[0] |
String |
Adds a link to send an e-mail. |
[1] |
String |
Adds a link to visit an IRC chat room. |
[2] |
String |
Adds a link to call a phone number. |
[3] |
String |
Adds a link to a defined URL. |
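A hedged sketch combining the branding and contact fields above; every URL and contact entry here is an illustrative placeholder:

```yaml
REGISTRY_TITLE: Project Quay
REGISTRY_TITLE_SHORT: Quay
BRANDING:
  logo: https://www.example.com/images/logo.svg
  footer_img: https://www.example.com/images/footer.svg
  footer_url: https://www.example.com
CONTACT_INFO:
- mailto:support@example.com
- irc://chat.example.com:6665/quay
- tel:+1-888-555-0100
- https://www.example.com/contact
```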
SSL configuration fields
Field | Type | Description |
---|---|---|
PREFERRED_URL_SCHEME |
String |
One of http or https. Users must set their PREFERRED_URL_SCHEME to https when using TLS. |
SERVER_HOSTNAME |
String |
The URL at which Project Quay is accessible, without the scheme. |
SSL_CIPHERS |
Array of String |
If specified, the nginx-defined list of SSL ciphers to enable and disable. |
SSL_PROTOCOLS |
Array of String |
If specified, nginx is configured to enable a list of SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Project Quay startup. |
SESSION_COOKIE_SECURE |
Boolean |
Whether the secure property should be set on session cookies. Recommended to be true for all installations using SSL. |
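A hedged sketch combining the SSL fields above; the protocol list and hostname are illustrative:

```yaml
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: quay-server.example.com
SSL_PROTOCOLS:
- TLSv1.2
- TLSv1.3
SESSION_COOKIE_SECURE: true
```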
Configuring SSL
- Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively:
  $ cp ~/ssl.cert $QUAY/config
  $ cp ~/ssl.key $QUAY/config
  $ cd $QUAY/config
- Edit the config.yaml file and specify that you want Quay to handle TLS:
  ...
  SERVER_HOSTNAME: quay-server.example.com
  ...
  PREFERRED_URL_SCHEME: https
  ...
- Stop the Quay container and restart the registry.
Adding TLS Certificates to the Project Quay Container
To add custom TLS certificates to Project Quay, create a new directory named extra_ca_certs/ beneath the Project Quay config directory. Copy any required site-specific TLS certificates to this new directory.
Add TLS certificates to Project Quay
- View the certificate to be added to the container:
  $ cat storage.crt
  -----BEGIN CERTIFICATE-----
  MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
  [...]
  -----END CERTIFICATE-----
- Create a certs directory and copy the certificate there:
  $ mkdir -p quay/config/extra_ca_certs
  $ cp storage.crt quay/config/extra_ca_certs/
  $ tree quay/config/
  ├── config.yaml
  ├── extra_ca_certs
  │   ├── storage.crt
- Obtain the Quay container's CONTAINER ID with podman ps:
  $ sudo podman ps
  CONTAINER ID    IMAGE                           COMMAND         CREATED       STATUS        PORTS
  5a3e82c4a75f    <registry>/<repo>/quay:v3.8.0   "/sbin/my_init" 24 hours ago  Up 18 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp   grave_keller
- Restart the container with that ID:
  $ sudo podman restart 5a3e82c4a75f
- Examine the certificate copied into the container namespace:
  $ sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem
  -----BEGIN CERTIFICATE-----
  MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
LDAP configuration fields
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE |
String |
Must be set to LDAP. |
FEATURE_TEAM_SYNCING |
Boolean |
Whether to allow for team membership to be synced from a backing group in the authentication engine (LDAP or Keystone). |
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP |
Boolean |
If enabled, non-superusers can set up syncing on teams using LDAP. |
LDAP_ADMIN_DN |
String |
The admin DN for LDAP authentication. |
LDAP_ADMIN_PASSWD |
String |
The admin password for LDAP authentication. |
LDAP_ALLOW_INSECURE_FALLBACK |
Boolean |
Whether or not to allow SSL insecure fallback for LDAP authentication. |
LDAP_BASE_DN |
Array of String |
The base DN for LDAP authentication. |
LDAP_EMAIL_ATTR |
String |
The email attribute for LDAP authentication. |
LDAP_UID_ATTR |
String |
The uid attribute for LDAP authentication. |
LDAP_URI |
String |
The LDAP URI. |
LDAP_USER_FILTER |
String |
The user filter for LDAP authentication. |
LDAP_USER_RDN |
Array of String |
The user RDN for LDAP authentication. |
TEAM_RESYNC_STALE_TIME |
String |
If team syncing is enabled for a team, how often to check its membership and resync if necessary |
LDAP_SUPERUSER_FILTER |
String |
Subset of the With this field, administrators can add or remove superusers without having to update the Project Quay configuration file and restart their deployment. This field requires that your |
LDAP_RESTRICTED_USER_FILTER |
String |
Subset of the This field requires that your |
LDAP configuration field references
Use the following references to update your config.yaml
file with the desired configuration field.
Basic LDAP user configuration
---
AUTHENTICATION_TYPE: LDAP
---
LDAP_ADMIN_DN: uid=testuser,ou=Users,o=orgid,dc=jumpexamplecloud,dc=com
LDAP_ADMIN_PASSWD: samplepassword
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=orgid
- dc=example
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://ldap.example.com:389
LDAP_USER_RDN:
- ou=Users
LDAP restricted user configuration
---
AUTHENTICATION_TYPE: LDAP
---
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
---
LDAP superuser configuration reference
---
AUTHENTICATION_TYPE: LDAP
---
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_SUPERUSER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
Mirroring configuration fields
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR | Boolean | Enable or disable repository mirroring |
REPO_MIRROR_INTERVAL | Number | The number of seconds between checking for repository mirror candidates |
REPO_MIRROR_SERVER_HOSTNAME | String | Replaces the SERVER_HOSTNAME as the destination for mirroring. Default: None |
REPO_MIRROR_TLS_VERIFY | Boolean | Require HTTPS and verify certificates of the Quay registry during mirror operations. |
REPO_MIRROR_ROLLBACK | Boolean | When set to true, the repository rolls back after a failed mirror attempt. Default: false |
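The fields above can be combined in a config.yaml fragment. The following is a minimal sketch; the interval and rollback values shown are illustrative choices, not defaults taken from this document:

```yaml
# Hypothetical repository mirroring sketch; values are illustrative.
FEATURE_REPO_MIRROR: true
# Check for repository mirror candidates every 30 seconds.
REPO_MIRROR_INTERVAL: 30
# Require HTTPS and certificate verification during mirror operations.
REPO_MIRROR_TLS_VERIFY: true
# Do not roll back the repository after a failed mirror attempt.
REPO_MIRROR_ROLLBACK: false
```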
Security scanner configuration fields
Field | Type | Description |
---|---|---|
FEATURE_SECURITY_SCANNER | Boolean | Enable or disable the security scanner |
FEATURE_SECURITY_NOTIFICATIONS | Boolean | If the security scanner is enabled, turn on or turn off security notifications |
SECURITY_SCANNER_V4_REINDEX_THRESHOLD | String | This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the last_indexed datetime in the manifestsecuritystatus table. This parameter is used to avoid trying to re-index every failed manifest on every indexing run. |
SECURITY_SCANNER_V4_ENDPOINT | String | The endpoint for the V4 security scanner |
SECURITY_SCANNER_V4_PSK | String | The generated pre-shared key (PSK) for Clair |
SECURITY_SCANNER_ENDPOINT | String | The endpoint for the V2 security scanner |
SECURITY_SCANNER_INDEXING_INTERVAL | Number | The number of seconds between indexing intervals in the security scanner. When indexing is triggered, Project Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. |
The following is a special case for re-indexing:
When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This is true until the scanners are changed, as using different scanners will produce different information relating to a specific manifest to be returned in the report. Because of this, Clair v4 exposes a state representation of the indexing engine (/indexer/api/v1/index_state
) to determine whether the scanner configuration has been changed.
Project Quay leverages this index state by saving it to the index report when parsing to Quay’s database. If this state has changed since the manifest was previously scanned, Quay will attempt to re-index that manifest during the periodic indexing process.
By default this parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they did not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Quay database.
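As a sketch, a config.yaml fragment enabling the Clair v4 scanner might look like the following; the endpoint URL and PSK value are placeholders, not values from this document:

```yaml
# Hypothetical Clair v4 security scanner sketch; endpoint and PSK are placeholders.
FEATURE_SECURITY_SCANNER: true
FEATURE_SECURITY_NOTIFICATIONS: true
SECURITY_SCANNER_V4_ENDPOINT: http://clair-v4.example.com:6060
SECURITY_SCANNER_V4_PSK: <generated_preshared_key>
# Query for manifests to index every 30 seconds (the default).
SECURITY_SCANNER_INDEXING_INTERVAL: 30
```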
OCI and Helm configuration fields
Helm is now supported under the FEATURE_GENERAL_OCI_SUPPORT
property. If you need to explicitly enable the feature, for example, if it has previously been disabled or if you have upgraded from a version where it is not enabled by default, add the following two properties to the Quay configuration to enable the use of OCI artifacts:
FEATURE_GENERAL_OCI_SUPPORT: true
FEATURE_HELM_OCI_SUPPORT: true
Field | Type | Description |
---|---|---|
FEATURE_GENERAL_OCI_SUPPORT | Boolean | Enable support for OCI artifacts |
FEATURE_HELM_OCI_SUPPORT | Boolean | Enable support for Helm artifacts |
Important
|
As of Project Quay 3.6, FEATURE_HELM_OCI_SUPPORT has been deprecated and will be removed in a future version of Project Quay. Helm artifacts are supported by default under the FEATURE_GENERAL_OCI_SUPPORT property. |
Action log configuration fields
Action log storage configuration
Field | Type | Description |
---|---|---|
FEATURE_LOG_EXPORT | Boolean | Whether to allow exporting of action logs |
LOGS_MODEL | String | Specifies the preferred method for handling log data |
LOGS_MODEL_CONFIG | Object | Logs model config for action logs |
- LOGS_MODEL_CONFIG [object]: Logs model config for action logs
  - elasticsearch_config [object]: Elasticsearch cluster configuration
    - access_key [string]: Elasticsearch user (or IAM key for AWS ES). Example: some_string
    - host [string]: Elasticsearch cluster endpoint. Example: host.elasticsearch.example
    - index_prefix [string]: Elasticsearch's index prefix. Example: logentry_
    - index_settings [object]: Elasticsearch's index settings
    - use_ssl [boolean]: Use SSL for Elasticsearch. Defaults to True. Example: True
    - secret_key [string]: Elasticsearch password (or IAM secret for AWS ES). Example: some_secret_string
    - aws_region [string]: Amazon web service region. Example: us-east-1
    - port [number]: Elasticsearch cluster endpoint port. Example: 1234
  - kinesis_stream_config [object]: AWS Kinesis Stream configuration
    - aws_secret_key [string]: AWS secret key. Example: some_secret_key
    - stream_name [string]: Kinesis stream to send action logs to. Example: logentry-kinesis-stream
    - aws_access_key [string]: AWS access key. Example: some_access_key
    - retries [number]: Max number of attempts made on a single request. Example: 5
    - read_timeout [number]: Number of seconds before timeout when reading from a connection. Example: 5
    - max_pool_connections [number]: The maximum number of connections to keep in a connection pool. Example: 10
    - aws_region [string]: AWS region. Example: us-east-1
    - connect_timeout [number]: Number of seconds before timeout when attempting to make a connection. Example: 5
  - producer [string]: Logs producer if logging to Elasticsearch. Enum: kafka, elasticsearch, kinesis_stream. Example: kafka
  - kafka_config [object]: Kafka cluster configuration
    - topic [string]: Kafka topic to publish log entries to. Example: logentry
    - bootstrap_servers [array]: List of Kafka brokers to bootstrap the client from
    - max_block_seconds [number]: Max number of seconds to block during a send(), either because the buffer is full or metadata unavailable. Example: 10
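Putting the schema above together, a config.yaml fragment that sends action logs to Elasticsearch might look like the following sketch; the host, port, and credentials are placeholders:

```yaml
# Hypothetical Elasticsearch action-log sketch; host and credentials are placeholders.
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    host: host.elasticsearch.example
    port: 9200
    access_key: <elasticsearch_user>
    secret_key: <elasticsearch_password>
    use_ssl: true
    index_prefix: logentry_
```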
Action log rotation and archiving configuration
Field | Type | Description |
---|---|---|
FEATURE_ACTION_LOG_ROTATION | Boolean | Enabling log rotation and archival will move all logs older than 30 days to storage |
ACTION_LOG_ARCHIVE_LOCATION | String | If action log archiving is enabled, the storage engine in which to place the archived data |
ACTION_LOG_ARCHIVE_PATH | String | If action log archiving is enabled, the path in storage in which to place the archived data |
ACTION_LOG_ROTATION_THRESHOLD | String | The time interval after which to rotate logs |
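As a sketch, log rotation and archiving could be enabled with a fragment like the following; the storage engine ID, archive path, and threshold are illustrative placeholders:

```yaml
# Hypothetical log rotation/archiving sketch; engine ID and path are placeholders.
FEATURE_ACTION_LOG_ROTATION: true
# A storage engine ID defined in DISTRIBUTED_STORAGE_CONFIG.
ACTION_LOG_ARCHIVE_LOCATION: local_us
ACTION_LOG_ARCHIVE_PATH: archives/actionlogs
ACTION_LOG_ROTATION_THRESHOLD: 30d
```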
Build logs configuration fields
Field | Type | Description |
---|---|---|
FEATURE_READER_BUILD_LOGS | Boolean | If set to true, build logs may be read by those with read access to the repo, rather than only write access or admin access. |
LOG_ARCHIVE_LOCATION | String | The storage location, defined in DISTRIBUTED_STORAGE_CONFIG, in which to place the archived build logs |
LOG_ARCHIVE_PATH | String | The path under the configured storage engine in which to place the archived build logs in JSON form |
Dockerfile build triggers fields
Field | Type | Description |
---|---|---|
FEATURE_BUILD_SUPPORT | Boolean | Whether to support Dockerfile build. |
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD | Number | If not None, the number of successive failures that can occur before a build trigger is automatically disabled |
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD | Number | If not None, the number of successive internal errors that can occur before a build trigger is automatically disabled |
GitHub build triggers
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_BUILD | Boolean | Whether to support GitHub build triggers |
GITHUB_TRIGGER_CONFIG | Object | Configuration for using GitHub (Enterprise) for build triggers |
.GITHUB_ENDPOINT | String | The endpoint for GitHub (Enterprise) |
.API_ENDPOINT | String | The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com |
.CLIENT_ID | String | The registered client ID for this Project Quay instance; this cannot be shared with GITHUB_LOGIN_CONFIG. |
.CLIENT_SECRET | String | The registered client secret for this Project Quay instance. |
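As a sketch, a GitHub Enterprise build-trigger configuration might look like the following; the endpoints and credentials are placeholders:

```yaml
# Hypothetical GitHub (Enterprise) build-trigger sketch; all values are placeholders.
FEATURE_GITHUB_BUILD: true
GITHUB_TRIGGER_CONFIG:
  GITHUB_ENDPOINT: https://github.example.com
  API_ENDPOINT: https://github.example.com/api/v3
  CLIENT_ID: <registered_client_id>
  CLIENT_SECRET: <registered_client_secret>
```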
BitBucket build triggers
Field | Type | Description |
---|---|---|
FEATURE_BITBUCKET_BUILD | Boolean | Whether to support Bitbucket build triggers |
BITBUCKET_TRIGGER_CONFIG | Object | Configuration for using BitBucket for build triggers |
.CONSUMER_KEY | String | The registered consumer key (client ID) for this Quay instance |
.CONSUMER_SECRET | String | The registered consumer secret (client secret) for this Quay instance |
GitLab build triggers
Field | Type | Description |
---|---|---|
FEATURE_GITLAB_BUILD | Boolean | Whether to support GitLab build triggers |
GITLAB_TRIGGER_CONFIG | Object | Configuration for using GitLab for build triggers |
.GITLAB_ENDPOINT | String | The endpoint at which GitLab (Enterprise) is running |
.CLIENT_ID | String | The registered client ID for this Quay instance |
.CLIENT_SECRET | String | The registered client secret for this Quay instance |
OAuth configuration fields
Field | Type | Description |
---|---|---|
DIRECT_OAUTH_CLIENTID_WHITELIST | Array of String | A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. |
GitHub OAuth configuration fields
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_LOGIN | Boolean | Whether GitHub login is supported |
GITHUB_LOGIN_CONFIG | Object | Configuration for using GitHub (Enterprise) as an external login provider. |
.ALLOWED_ORGANIZATIONS | Array of String | The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. |
.API_ENDPOINT | String | The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com |
.CLIENT_ID | String | The registered client ID for this Project Quay instance; cannot be shared with GITHUB_TRIGGER_CONFIG |
.CLIENT_SECRET | String | The registered client secret for this Project Quay instance |
.GITHUB_ENDPOINT | String | The endpoint for GitHub (Enterprise) |
.ORG_RESTRICT | Boolean | If true, only users within the organization whitelist can login using this provider. |
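As a sketch, a GitHub external login configuration restricted to a single organization might look like the following; the endpoint, organization name, and credentials are placeholders:

```yaml
# Hypothetical GitHub login sketch; all values are placeholders.
FEATURE_GITHUB_LOGIN: true
GITHUB_LOGIN_CONFIG:
  GITHUB_ENDPOINT: https://github.example.com
  CLIENT_ID: <registered_client_id>
  CLIENT_SECRET: <registered_client_secret>
  # Only allow users from the whitelisted organization to log in.
  ORG_RESTRICT: true
  ALLOWED_ORGANIZATIONS:
  - exampleorg
```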
Google OAuth configuration fields
Field | Type | Description |
---|---|---|
FEATURE_GOOGLE_LOGIN | Boolean | Whether Google login is supported |
GOOGLE_LOGIN_CONFIG | Object | Configuration for using Google for external authentication |
.CLIENT_ID | String | The registered client ID for this Project Quay instance |
.CLIENT_SECRET | String | The registered client secret for this Project Quay instance |
Nested repositories configuration fields
With Project Quay 3.6, support for nested repository path names was added under the FEATURE_EXTENDED_REPOSITORY_NAMES
property. This optional configuration is added to the config.yaml by default. Enabling it allows the use of /
in repository names.
FEATURE_EXTENDED_REPOSITORY_NAMES: true
Field | Type | Description |
---|---|---|
FEATURE_EXTENDED_REPOSITORY_NAMES | Boolean | Enable support for nested repositories |
Adding other OCI media types to Quay
Helm, cosign, and zstd compression scheme artifacts are built into Project Quay 3.6 by default. For any other OCI media type that is not supported by default, you can add it to the ALLOWED_OCI_ARTIFACT_TYPES
configuration in Quay's config.yaml using the following format:

ALLOWED_OCI_ARTIFACT_TYPES:
  <oci config type 1>:
  - <oci layer type 1>
  - <oci layer type 2>
  <oci config type 2>:
  - <oci layer type 3>
  - <oci layer type 4>
For example, you can add Singularity (SIF) support by adding the following to your config.yaml:

...
ALLOWED_OCI_ARTIFACT_TYPES:
  application/vnd.oci.image.config.v1+json:
  - application/vnd.dev.cosign.simplesigning.v1+json
  application/vnd.cncf.helm.config.v1+json:
  - application/tar+gzip
  application/vnd.sylabs.sif.config.v1+json:
  - application/vnd.sylabs.sif.layer.v1+tar
...
Note
|
When adding OCI media types that are not configured by default, users will also need to manually add support for cosign and Helm if desired. The zstd compression scheme is supported by default, so users will not need to add that OCI media type to their config.yaml to enable support. |
Mail configuration fields
Field | Type | Description |
---|---|---|
FEATURE_MAILING | Boolean | Whether emails are enabled |
MAIL_DEFAULT_SENDER | String | If specified, the e-mail address used as the from address when Project Quay sends e-mails. |
MAIL_PASSWORD | String | The SMTP password to use when sending e-mails |
MAIL_PORT | Number | The SMTP port to use. If not specified, defaults to 587. |
MAIL_SERVER | String | The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true. |
MAIL_USERNAME | String | The SMTP username to use when sending e-mails |
MAIL_USE_TLS | Boolean | If specified, whether to use TLS for sending e-mails |
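The mail fields above can be combined as in the following sketch; the server, credentials, and sender address are placeholders:

```yaml
# Hypothetical SMTP mail sketch; server, credentials, and sender are placeholders.
FEATURE_MAILING: true
MAIL_SERVER: smtp.example.com
MAIL_PORT: 587
MAIL_USE_TLS: true
MAIL_USERNAME: <smtp_username>
MAIL_PASSWORD: <smtp_password>
MAIL_DEFAULT_SENDER: quay@example.com
```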
User configuration fields
Field | Type | Description |
---|---|---|
FEATURE_SUPER_USERS | Boolean | Whether superusers are supported |
FEATURE_USER_CREATION | Boolean | Whether users can be created (by non-superusers) |
FEATURE_USER_LAST_ACCESSED | Boolean | Whether to record the last time a user was accessed |
FEATURE_USER_LOG_ACCESS | Boolean | If set to true, users will have access to audit logs for their namespace |
FEATURE_USER_METADATA | Boolean | Whether to collect and support user metadata |
FEATURE_USERNAME_CONFIRMATION | Boolean | If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP. |
FEATURE_USER_RENAME | Boolean | If set to true, users can rename their own namespace |
FEATURE_INVITE_ONLY_USER_CREATION | Boolean | Whether users being created must be invited by another user |
FRESH_LOGIN_TIMEOUT | String | The time after which a fresh login requires users to re-enter their password |
USERFILES_LOCATION | String | ID of the storage engine in which to place user-uploaded files |
USERFILES_PATH | String | Path under storage in which to place user-uploaded files |
USER_RECOVERY_TOKEN_LIFETIME | String | The length of time a token for recovering a user account is valid |
FEATURE_SUPERUSERS_FULL_ACCESS | Boolean | Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: False |
FEATURE_RESTRICTED_USERS | Boolean | When set to true with RESTRICTED_USERS_WHITELIST, restricted users cannot create organizations or content in their own namespace. Default: False |
RESTRICTED_USERS_WHITELIST | String | When set with FEATURE_RESTRICTED_USERS: true, specific users are excluded from the FEATURE_RESTRICTED_USERS setting. |
GLOBAL_READONLY_SUPER_USERS | String | When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. |
User configuration fields references
Use the following references to update your config.yaml
file with the desired configuration field.
FEATURE_SUPERUSERS_FULL_ACCESS configuration reference
---
SUPER_USERS:
- quayadmin
FEATURE_SUPERUSERS_FULL_ACCESS: True
---
GLOBAL_READONLY_SUPER_USERS configuration reference
---
GLOBAL_READONLY_SUPER_USERS:
- user1
---
FEATURE_RESTRICTED_USERS configuration reference
---
AUTHENTICATION_TYPE: Database
---
---
FEATURE_RESTRICTED_USERS: true
---
RESTRICTED_USERS_WHITELIST configuration reference
-
FEATURE_RESTRICTED_USERS
is set totrue
in yourconfig.yaml
file.
---
AUTHENTICATION_TYPE: Database
---
---
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST:
- user1
---
Note
|
When this field is set, whitelisted users can create organizations, or read or write content from the repository even if FEATURE_RESTRICTED_USERS is set to true. |
Recaptcha configuration fields
Field | Type | Description |
---|---|---|
FEATURE_RECAPTCHA | Boolean | Whether Recaptcha is necessary for user login and recovery |
RECAPTCHA_SECRET_KEY | String | If Recaptcha is enabled, the secret key for the Recaptcha service |
RECAPTCHA_SITE_KEY | String | If Recaptcha is enabled, the site key for the Recaptcha service |
ACI configuration fields
Field | Type | Description |
---|---|---|
FEATURE_ACI_CONVERSION | Boolean | Whether to enable conversion to ACIs |
GPG2_PRIVATE_KEY_FILENAME | String | The filename of the private key used to decrypt ACIs |
GPG2_PRIVATE_KEY_NAME | String | The name of the private key used to sign ACIs |
GPG2_PUBLIC_KEY_FILENAME | String | The filename of the public key used to encrypt ACIs |
JWT configuration fields
Field | Type | Description |
---|---|---|
JWT_AUTH_ISSUER | String | The endpoint for JWT users |
JWT_GETUSER_ENDPOINT | String | The endpoint for JWT users |
JWT_QUERY_ENDPOINT | String | The endpoint for JWT queries |
JWT_VERIFY_ENDPOINT | String | The endpoint for JWT verification |
App tokens configuration fields
Field | Type | Description |
---|---|---|
FEATURE_APP_SPECIFIC_TOKENS | Boolean | If enabled, users can create tokens for use by the Docker CLI |
APP_SPECIFIC_TOKEN_EXPIRATION | String | The expiration for external app tokens. |
EXPIRED_APP_SPECIFIC_TOKEN_GC | String | Duration of time expired external app tokens will remain before being garbage collected |
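As a sketch, app-specific tokens could be enabled with a fragment like the following; the expiration and garbage-collection windows are illustrative values, not documented defaults:

```yaml
# Hypothetical app-token sketch; durations are illustrative.
FEATURE_APP_SPECIFIC_TOKENS: true
# Tokens expire after four weeks.
APP_SPECIFIC_TOKEN_EXPIRATION: 4w
# Expired tokens are kept for one day before garbage collection.
EXPIRED_APP_SPECIFIC_TOKEN_GC: 1d
```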
Miscellaneous configuration fields
Field | Type | Description |
---|---|---|
ALLOW_PULLS_WITHOUT_STRICT_LOGGING | String | If true, pulls will still succeed even if the pull audit log entry cannot be written. This is useful if the database is in a read-only state and it is desired for pulls to continue during that time. |
AVATAR_KIND | String | The types of avatars to display, either generated inline (local) or Gravatar (gravatar) |
BROWSER_API_CALLS_XHR_ONLY | Boolean | If enabled, only API calls marked as being made by an XHR will be allowed from browsers |
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT | Number | The default maximum number of builds that can be queued in a namespace. |
ENABLE_HEALTH_DEBUG_SECRET | String | If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser |
EXTERNAL_TLS_TERMINATION | Boolean | Set to true if TLS is supported, but terminated at a layer before Quay |
FRESH_LOGIN_TIMEOUT | String | The time after which a fresh login requires users to re-enter their password |
HEALTH_CHECKER | String | The configured health check |
PROMETHEUS_NAMESPACE | String | The prefix applied to all exposed Prometheus metrics |
PUBLIC_NAMESPACES | Array of String | If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces. |
REGISTRY_STATE | String | The state of the registry |
SEARCH_MAX_RESULT_PAGE_COUNT | Number | Maximum number of pages the user can paginate in search before they are limited |
SEARCH_RESULTS_PER_PAGE | Number | Number of results returned per page by search page |
V2_PAGINATION_SIZE | Number | The number of results returned per page in V2 registry APIs |
WEBHOOK_HOSTNAME_BLACKLIST | Array of String | The set of hostnames to disallow from webhooks when validating, beyond localhost |
CREATE_PRIVATE_REPO_ON_PUSH | Boolean | Whether new repositories created by push are set to private visibility |
CREATE_NAMESPACE_ON_PUSH | Boolean | Whether a new push to a non-existent organization creates it |
NON_RATE_LIMITED_NAMESPACES | Array of String | If rate limiting has been enabled using FEATURE_RATE_LIMITS, you can override it for specific namespaces that require unlimited access |
FEATURE_UI_V2 | Boolean | When set, allows users to try the beta UI environment. Default: False |
Miscellaneous configuration field references
Use the following references to update your config.yaml
file with the desired configuration field.
v2 user interface configuration
With FEATURE_UI_V2
enabled, you can toggle between the current version of the user interface and the new version of the user interface.
Important
|
|
-
In your deployment's config.yaml file, add the FEATURE_UI_V2 parameter and set it to true, for example:

---
FEATURE_TEAM_SYNCING: false
FEATURE_UI_V2: true
FEATURE_USER_CREATION: true
---
-
Log in to your Project Quay deployment.
-
In the navigation pane of your Project Quay deployment, you are given the option to toggle between Current UI and New UI. Click the toggle button to set it to new UI, and then click Use Beta Environment, for example:
Creating a new organization in the Project Quay 3.8 beta UI
-
You have toggled your Project Quay deployment to use the 3.8 beta UI.
Use the following procedure to create an organization using the Project Quay 3.8 beta UI.
-
Click Organization in the navigation pane.
-
Click Create Organization.
-
Enter an Organization Name, for example,
testorg
. -
Click Create.
Now, your example organization should populate under the Organizations page.
Deleting an organization using the Project Quay 3.8 beta UI
Use the following procedure to delete an organization using the Project Quay 3.8 beta UI.
-
On the Organizations page, select the name of the organization you want to delete, for example,
testorg
. -
Click the More Actions drop down menu.
-
Click Delete.
Note: On the Delete page, there is a Search input box. With this box, users can search for specific organizations to ensure that they are properly scheduled for deletion. For example, if a user is deleting 10 organizations and they want to ensure that a specific organization was deleted, they can use the Search input box to confirm said organization is marked for deletion.
-
Confirm that you want to permanently delete the organization by typing confirm in the box.
-
Click Delete.
After deletion, you are returned to the Organizations page.
Note
|
You can delete more than one organization at a time by selecting multiple organizations, and then clicking More Actions → Delete. |
Creating a new repository using the Project Quay 3.8 beta UI
Use the following procedure to create a repository using the Project Quay 3.8 beta UI.
-
Click Repositories on the navigation pane.
-
Click Create Repository.
-
Select a namespace, for example, quayadmin, and then enter a Repository name, for example,
testrepo
. -
Click Create.
Now, your example repository should populate under the Repositories page.
Deleting a repository using the Project Quay 3.8 beta UI
-
You have created a repository.
-
On the Repositories page of the Project Quay 3.8 beta UI, click the name of the image you want to delete, for example,
quay/admin/busybox
. -
Click the More Actions drop-down menu.
-
Click Delete.
Note: If desired, you can click Make Public or Make Private.
-
Type confirm in the box, and then click Delete.
-
After deletion, you are returned to the Repositories page.
Pushing an image to the Project Quay 3.8 beta UI
Use the following procedure to push an image to the Project Quay 3.8 beta UI.
-
Pull a sample image from an external registry:
$ podman pull busybox
-
Tag the image:
$ podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test
-
Push the image to your Project Quay registry:
$ podman push quay-server.example.com/quayadmin/busybox:test
-
Navigate to the Repositories page on the Project Quay UI and ensure that your image has been properly pushed.
-
You can check the security details by selecting your image tag, and then navigating to the Security Report page.
Deleting an image using the Project Quay 3.8 beta UI
Use the following procedure to delete an image using the Project Quay 3.8 beta UI.
-
You have pushed an image to your Project Quay registry.
-
On the Repositories page of the Project Quay 3.8 beta UI, click the name of the image you want to delete, for example,
quay/admin/busybox
. -
Click the More Actions drop-down menu.
-
Click Delete.
Note: If desired, you can click Make Public or Make Private.
-
Type confirm in the box, and then click Delete.
-
After deletion, you are returned to the Repositories page.
Enabling the Project Quay legacy UI
-
In the navigation pane of your Project Quay deployment, you are given the option to toggle between Current UI and New UI. Click the toggle button to set it to Current UI.
Legacy configuration fields
Some fields are deprecated or obsolete:
Field | Type | Description |
---|---|---|
FEATURE_BLACKLISTED_EMAILS | Boolean | If set to true, no new User accounts may be created if their email domain is blacklisted |
BLACKLISTED_EMAIL_DOMAINS | Array of String | The list of email-address domains that is used if FEATURE_BLACKLISTED_EMAILS is set to true |
BLACKLIST_V2_SPEC | String | The Docker CLI versions to which Project Quay will respond that V2 is unsupported |
DOCUMENTATION_ROOT | String | Root URL for documentation links |
SECURITY_SCANNER_V4_NAMESPACE_WHITELIST | String | The namespaces for which the security scanner should be enabled |
FEATURE_RESTRICTED_V1_PUSH | Boolean | If set to true, only namespaces listed in V1_PUSH_WHITELIST support V1 push |
V1_PUSH_WHITELIST | Array of String | The array of namespace names that support V1 push if FEATURE_RESTRICTED_V1_PUSH is set to true |
User interface v2 configuration field
Field | Type | Description |
---|---|---|
FEATURE_UI_V2 | Boolean | When set, allows users to try the beta UI environment. Default: False |
IPv6 configuration field
Field | Type | Description |
---|---|---|
FEATURE_LISTEN_IP_VERSION | String | Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Project Quay fails to start. Default: IPv4. Additional configurations: IPv6, dual-stack |
Environment variables
Project Quay supports a limited number of environment variables for dynamic configuration.
Geo-replication
The exact same configuration should be used across all regions, with exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable.
Variable | Type | Description |
---|---|---|
QUAY_DISTRIBUTED_STORAGE_PREFERENCE | String | The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. |
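As a sketch, a region's container could be pinned to a specific storage engine at startup; the engine ID, port mappings, and image reference below are placeholders, not values from this document:

```shell
# Hypothetical example: prefer the "usstorage" engine defined in
# DISTRIBUTED_STORAGE_CONFIG for this region's container.
$ sudo podman run -d -p 80:8080 -p 443:8443 \
    -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=usstorage \
    -v $QUAY/config:/conf/stack:Z \
    <registry>/<repo>/quay:v3.8.0
```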
Database connection pooling
Project Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.
If enabled, each process that interacts with the database will contain a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Project Quay container. Under certain deployments and loads, this may require analysis to ensure Project Quay does not exceed the database’s configured maximum connection count.
Over time, the connection pools will release idle connections. To release all connections immediately, Project Quay requires a restart.
Database connection pooling can be toggled by setting the environment variable DB_CONNECTION_POOLING={true|false}.
Variable | Type | Description |
---|---|---|
DB_CONNECTION_POOLING | Boolean | Enable or disable database connection pooling |
If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml option:
...
DB_CONNECTION_ARGS:
max_connections: 10
...
HTTP connection counts
It is possible to specify the number of simultaneous HTTP connections using environment variables, either globally or for a specific component. The default for each is 50 parallel connections per process.
Variable | Type | Description |
---|---|---|
WORKER_CONNECTION_COUNT | Number | Simultaneous HTTP connections |
WORKER_CONNECTION_COUNT_REGISTRY | Number | Simultaneous HTTP connections for registry |
WORKER_CONNECTION_COUNT_WEB | Number | Simultaneous HTTP connections for web UI |
WORKER_CONNECTION_COUNT_SECSCAN | Number | Simultaneous HTTP connections for Clair |
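As a sketch, the per-process connection limits could be lowered for a resource-constrained host; the values, port mappings, and image reference below are illustrative placeholders:

```shell
# Hypothetical example: reduce per-process HTTP connection limits.
$ sudo podman run -d -p 80:8080 -p 443:8443 \
    -e WORKER_CONNECTION_COUNT_REGISTRY=10 \
    -e WORKER_CONNECTION_COUNT_WEB=10 \
    -v $QUAY/config:/conf/stack:Z \
    <registry>/<repo>/quay:v3.8.0
```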
Worker count variables
Variable | Type | Description |
---|---|---|
WORKER_COUNT | Number | Generic override for number of processes |
WORKER_COUNT_REGISTRY | Number | Specifies the number of processes to handle Registry requests within the Quay container |
WORKER_COUNT_WEB | Number | Specifies the number of processes to handle UI/Web requests within the container |
WORKER_COUNT_SECSCAN | Number | Specifies the number of processes to handle Security Scanning (e.g. Clair) integration within the container |
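As a sketch, the worker counts could be overridden at startup to shrink the container's footprint on a small deployment; the value, port mappings, and image reference are illustrative placeholders:

```shell
# Hypothetical example: run a single registry worker process.
$ sudo podman run -d -p 80:8080 -p 443:8443 \
    -e WORKER_COUNT_REGISTRY=1 \
    -v $QUAY/config:/conf/stack:Z \
    <registry>/<repo>/quay:v3.8.0
```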
Using the config tool to reconfigure Quay on OpenShift
Accessing the config editor
In the Details section of the QuayRegistry screen, the endpoint for the config editor is available, along with a link to the secret containing the credentials for logging into the config editor:
Retrieving the config editor credentials
-
Click on the link for the config editor secret:
-
In the Data section of the Secret details screen, click
Reveal values
to see the credentials for logging in to the config editor:
Logging in to the config editor
Browse to the config editor endpoint and then enter the username, typically quayconfig
, and the corresponding password to access the config tool:
Changing configuration
In this example of updating the configuration, a superuser is added via the config editor tool:
-
Add an expiration period, for example
4w
, for the time machine functionality: -
Select
Validate Configuration Changes
to ensure that the changes are valid -
Apply the changes by pressing the
Reconfigure Quay
button: -
The config tool notifies you that the change has been submitted to Quay:
Note
|
Reconfiguring Project Quay using the config tool UI can lead to the registry being unavailable for a short time, while the updated configuration is applied. |
Monitoring reconfiguration in the UI
QuayRegistry resource
After reconfiguring the Operator, you can track the progress of the redeployment in the YAML tab for the specific instance of QuayRegistry, in this case, example-registry
:
Each time the status changes, you will be prompted to reload the data to see the updated version. Eventually, the Operator will reconcile the changes, and there will be no unhealthy components reported.
Events
The Events tab for the QuayRegistry shows some events related to the redeployment:
Streaming events, for all resources in the namespace that are affected by the reconfiguration, are available in the OpenShift console under Home → Events:
Accessing updated information after reconfiguration
Accessing the updated config tool credentials in the UI
With Project Quay 3.7, reconfiguring Quay through the UI no longer generates a new login password. The password now generates only once, and remains the same after reconciling QuayRegistry
objects.
Accessing the updated config.yaml in the UI
Use the config bundle to access the updated config.yaml
file.
-
On the QuayRegistry details screen, click on the Config Bundle Secret
-
In the Data section of the Secret details screen, click Reveal values to see the
config.yaml
file -
Check that the change has been applied. In this case,
4w
should be in the list of
TAG_EXPIRATION_OPTIONS
:
...
SERVER_HOSTNAME: example-quay-openshift-operators.apps.docs.quayteam.org
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
TAG_EXPIRATION_OPTIONS:
- 2w
- 4w
...
Quay Operator components
Quay is a powerful container registry platform and, as a result, has a significant number of dependencies. These include a database, object storage, Redis, and others. The Quay Operator manages an opinionated deployment of Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry
API.
In the QuayRegistry
custom resource, the spec.components
field configures components. Each component contains two fields: kind
- the name of the component, and managed
- a boolean indicating whether the component's lifecycle is handled by the Operator. By default (when this field is omitted), all components are managed and are autofilled upon reconciliation for visibility:
spec:
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
Using managed components
Unless your QuayRegistry
custom resource specifies otherwise, the Operator will use defaults for the following managed components:
-
quay: Holds overrides for the Quay deployment, for example, environment variables and number of replicas. This component is new in Project Quay 3.7 and cannot be set to unmanaged.
-
postgres: For storing the registry metadata, uses an upstream (CentOS) version of Postgres 10
-
clair: Provides image vulnerability scanning
-
redis: Handles Quay builder coordination and some internal logging
-
horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption
-
objectstorage: For storing image layer blobs, utilizes the
ObjectBucketClaim
Kubernetes API which is provided by Noobaa/RHOCS -
route: Provides an external entrypoint to the Quay registry from outside OpenShift
-
mirror: Configures repository mirror workers (to support optional repository mirroring)
-
monitoring: Features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods
-
tls: Configures whether Project Quay or OpenShift handles TLS
-
clairpostgres: Configures a managed Clair database
The Operator will handle any required configuration and installation work needed for Project Quay to use the managed components. If the opinionated deployment performed by the Quay Operator is unsuitable for your environment, you can provide the Operator with unmanaged
resources (overrides) as described in the following sections.
Using unmanaged components for dependencies
If you have existing components such as Postgres, Redis, or object storage that you would like to use with Quay, you first configure them within the Quay configuration bundle (config.yaml
) and then reference the bundle in your QuayRegistry
(as a Kubernetes Secret
) while indicating which components are unmanaged.
Note
|
The Quay config editor can also be used to create or modify an existing config bundle and simplifies the process of updating the Kubernetes |
Using an existing Postgres database
Requirements:
If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm extension for a successful deployment.
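For example, assuming superuser access to the external database, the extension can be enabled with a single statement; this is a sketch, and your database name and connection details will differ:

```sql
-- Run once against the Quay database as a PostgreSQL superuser
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```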
-
Create a configuration file
config.yaml
with the necessary database fields:
config.yaml:
DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
-
Create a Secret using the configuration file:
$ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
-
Create a QuayRegistry YAML file
quayregistry.yaml
which marks the
postgres
component as unmanaged and references the created Secret:
quayregistry.yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: postgres
      managed: false
-
Deploy the registry as detailed in the following sections.
NooBaa unmanaged storage
-
Create a NooBaa Object Bucket Claim in the console at Storage → Object Bucket Claims.
-
Retrieve the Object Bucket Claim Data details including the Access Key, Bucket Name, Endpoint (hostname) and Secret Key.
-
Create a
config.yaml
configuration file, using the information for the Object Bucket Claim:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RHOCSStorage
    - access_key: WmrXtSGk8B3nABCDEFGH
      bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
      hostname: s3.openshift-storage.svc.cluster.local
      is_secure: true
      port: "443"
      secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
Disabling the Horizontal Pod Autoscaler
HorizontalPodAutoscalers
have been added to the Clair, Quay, and Mirror pods, so that they now automatically scale during load spikes.
As HPA is configured by default to be managed
, the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Quay through the Operator, or during rescheduling events.
If you wish to disable autoscaling or create your own HorizontalPodAutoscaler
, specify the component as unmanaged in the QuayRegistry
instance:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: horizontalpodautoscaler
managed: false
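With the component unmanaged, you can then define your own autoscaler against the standard Kubernetes autoscaling/v2 API. The following is a hypothetical sketch; the target Deployment name example-registry-quay-app is an assumption based on the Operator's usual naming, and the thresholds are examples only:

```yaml
# Hypothetical custom HPA; the target Deployment name is an assumption.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-registry-quay-app
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```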
Add certs when deployed on Kubernetes
When deployed on Kubernetes, Project Quay mounts a secret as a volume to store config assets. Unfortunately, this currently breaks the upload certificate function of the superuser panel.
To work around this limitation, a base64-encoded certificate can be added to the secret after Project Quay has been deployed. Here’s how:
-
Begin by base64 encoding the contents of the certificate:
$ cat ca.crt
-----BEGIN CERTIFICATE-----
MIIDljCCAn6gAwIBAgIBATANBgkqhkiG9w0BAQsFADA5MRcwFQYDVQQKDA5MQUIu
TElCQ09SRS5TTzEeMBwGA1UEAwwVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTE2
MDExMjA2NTkxMFoXDTM2MDExMjA2NTkxMFowOTEXMBUGA1UECgwOTEFCLkxJQkNP
UkUuU08xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTCCASIwDQYJKoZI
[...]
-----END CERTIFICATE-----
$ cat ca.crt | base64 -w 0
[...]
c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
-
Use the
kubectl
tool to edit the quay-enterprise-config-secret.$ kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret
-
Add an entry for the cert and paste the full base64 encoded string under the entry:
custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
-
Finally, recycle all Project Quay pods. Use
kubectl delete
to remove them. The Project Quay Deployment automatically schedules replacement pods with the new certificate data.
Configuring OCI and Helm with the Operator
Customizations to the configuration of Quay can be provided in a secret containing the configuration bundle. The following manifest defines a new secret called quay-config-bundle
, in the appropriate namespace, containing the necessary properties to enable OCI support.
apiVersion: v1
stringData:
config.yaml: |
FEATURE_GENERAL_OCI_SUPPORT: true
FEATURE_HELM_OCI_SUPPORT: true
kind: Secret
metadata:
name: quay-config-bundle
namespace: quay-enterprise
type: Opaque
Important
|
As of Project Quay 3.8, |
Create the secret in the appropriate namespace, in this example quay-enterprise
:
$ oc create -n quay-enterprise -f quay-config-bundle.yaml
Specify the secret for the spec.configBundleSecret
field:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
configBundleSecret: quay-config-bundle
Create the registry with the specified configuration:
$ oc create -n quay-enterprise -f quay-registry.yaml
Volume size overrides
As of Project Quay v3.6.2, you can specify the desired size of storage resources provisioned for managed components. The default size for Clair and Quay PostgreSQL databases is 50Gi
. You can now choose a large enough capacity upfront, either for performance reasons or in the case where your storage backend does not have resize capability.
In the following example, the volume size for the Clair and the Quay PostgreSQL databases has been set to 70Gi
:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: quay-example
namespace: quay-enterprise
spec:
configBundleSecret: config-bundle-secret
components:
- kind: objectstorage
managed: false
- kind: route
managed: true
- kind: tls
managed: false
- kind: clair
managed: true
overrides:
volumeSize: 70Gi
- kind: postgres
managed: true
overrides:
volumeSize: 70Gi
Using the configuration API
The configuration tool exposes four endpoints that can be used to build, validate, bundle, and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md. In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make.
Retrieving the default configuration
If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode:
$ sudo podman run --rm -it --name quay_config \
  -p 8080:8080 \
  quay.io/projectquay/quay:v3.8.0 config secret
Use the config
endpoint of the configuration API to get the default:
$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq
The value returned is the default configuration in JSON format:
{
"config.yaml": {
"AUTHENTICATION_TYPE": "Database",
"AVATAR_KIND": "local",
"DB_CONNECTION_ARGS": {
"autorollback": true,
"threadlocals": true
},
"DEFAULT_TAG_EXPIRATION": "2w",
"EXTERNAL_TLS_TERMINATION": false,
"FEATURE_ACTION_LOG_ROTATION": false,
"FEATURE_ANONYMOUS_ACCESS": true,
"FEATURE_APP_SPECIFIC_TOKENS": true,
....
}
}
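Because the response is ordinary JSON, it can be post-processed with any JSON tooling rather than only jq. A minimal Python sketch, using an abbreviated sample of the response body rather than a live API call:

```python
import json

# Abbreviated sample of the configuration API response (not a live call).
response_body = '''
{
  "config.yaml": {
    "AUTHENTICATION_TYPE": "Database",
    "DEFAULT_TAG_EXPIRATION": "2w",
    "FEATURE_ANONYMOUS_ACCESS": true
  }
}
'''

# The top-level key is the literal string "config.yaml".
config = json.loads(response_body)["config.yaml"]
print(config["DEFAULT_TAG_EXPIRATION"])  # → 2w
```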
Retrieving the current configuration
If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume:
$ sudo podman run --rm -it --name quay_config \
  -p 8080:8080 \
  -v $QUAY/config:/conf/stack:Z \
  quay.io/projectquay/quay:v3.8.0 config secret
Use the config
endpoint of the API to get the current configuration:
$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq
The value returned is the current configuration in JSON format, including database and Redis configuration data:
{
"config.yaml": {
....
"BROWSER_API_CALLS_XHR_ONLY": false,
"BUILDLOGS_REDIS": {
"host": "quay-server",
"password": "strongpassword",
"port": 6379
},
"DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
"DB_CONNECTION_ARGS": {
"autorollback": true,
"threadlocals": true
},
"DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
"DEFAULT_TAG_EXPIRATION": "2w",
....
}
}
Validating configuration using the API
You can validate a configuration by posting it to the config/validate
endpoint:
curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
    ....
    "BROWSER_API_CALLS_XHR_ONLY": false,
    "BUILDLOGS_REDIS": {
      "host": "quay-server",
      "password": "strongpassword",
      "port": 6379
    },
    "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
    "DB_CONNECTION_ARGS": {
      "autorollback": true,
      "threadlocals": true
    },
    "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
    "DEFAULT_TAG_EXPIRATION": "2w",
    ....
  }
}' http://quay-server:8080/api/v1/config/validate | jq
The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array []
is returned.
Determining the required fields
You can determine the required fields by posting an empty configuration structure to the config/validate
endpoint:
curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
  }
}' http://quay-server:8080/api/v1/config/validate | jq
The value returned is an array indicating which fields are required:
[
{
"FieldGroup": "Database",
"Tags": [
"DB_URI"
],
"Message": "DB_URI is required."
},
{
"FieldGroup": "DistributedStorage",
"Tags": [
"DISTRIBUTED_STORAGE_CONFIG"
],
"Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location."
},
{
"FieldGroup": "HostSettings",
"Tags": [
"SERVER_HOSTNAME"
],
"Message": "SERVER_HOSTNAME is required"
},
{
"FieldGroup": "HostSettings",
"Tags": [
"SERVER_HOSTNAME"
],
"Message": "SERVER_HOSTNAME must be of type Hostname"
},
{
"FieldGroup": "Redis",
"Tags": [
"BUILDLOGS_REDIS"
],
"Message": "BUILDLOGS_REDIS is required"
}
]
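Based on a response like the one above, a client could pre-check a candidate config.yaml locally before posting it to the validation endpoint. A minimal sketch; the field list is taken from the sample output above and may differ between Quay versions:

```python
# Minimal local pre-check mirroring the validator's required fields.
# The field list comes from the sample response above and may vary by version.
REQUIRED_FIELDS = ["DB_URI", "DISTRIBUTED_STORAGE_CONFIG",
                   "SERVER_HOSTNAME", "BUILDLOGS_REDIS"]

def missing_fields(config: dict) -> list:
    """Return the required fields absent from a candidate config dict."""
    return [f for f in REQUIRED_FIELDS if f not in config]

candidate = {"DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay"}
print(missing_fields(candidate))
# → ['DISTRIBUTED_STORAGE_CONFIG', 'SERVER_HOSTNAME', 'BUILDLOGS_REDIS']
```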
Using the configuration tool
Custom SSL certificates UI
The config tool can be used to load custom certificates to facilitate access to resources such as external databases. Select the custom certs to be uploaded, ensuring that they are in PEM format, with an extension .crt
.
The config tool also displays a list of any uploaded certificates. Once you upload your custom SSL cert, it will appear in the list:
Basic configuration
Contact information
Server configuration
Server configuration choice
TLS configuration
Database configuration
You can choose between PostgreSQL and MySQL:
Note
|
The MySQL and MariaDB databases have been deprecated as of Project Quay 3.6. Support for these databases will be removed in a future version of Project Quay. If starting a new Project Quay installation, it is strongly recommended to use PostgreSQL. |
PostgreSQL configuration
Enter the details for connecting to the database:
This will generate a DB_URI field of the form postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
.
If you need finer-grained control of the connection arguments, see the section "Database connection arguments" in the Configuration Guide.
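The generated DB_URI is a standard connection URL, so its parts can be inspected programmatically. A minimal sketch using Python's urllib.parse:

```python
from urllib.parse import urlsplit

# Example DB_URI of the form the config tool generates.
db_uri = "postgresql://quayuser:quaypass@quay-server.example.com:5432/quay"

parts = urlsplit(db_uri)
print(parts.scheme)            # → postgresql
print(parts.username)          # → quayuser
print(parts.hostname)          # → quay-server.example.com
print(parts.port)              # → 5432
print(parts.path.lstrip("/"))  # → quay (the database name)
```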
Data consistency
Time machine configuration
Redis configuration
Repository mirroring configuration
Registry storage configuration
-
Proxy storage
-
Storage georeplication
-
Storage engines
Enable storage replication - standalone Quay
Use the following procedure to enable storage replication on Project Quay.
-
In your Project Quay config editor, locate the Registry Storage section.
-
Click Enable Storage Replication.
-
Add each of the storage engines to which data will be replicated. All storage engines to be used must be listed.
-
If complete replication of all images to all storage engines is required, click Replicate to storage engine by default under each storage engine configuration. This ensures that all images are replicated to that storage engine.
Note: To enable per-namespace replication, contact Project Quay support.
-
When finished, click Save Configuration Changes. The configuration changes will take effect after Project Quay restarts.
-
After adding storage and enabling Replicate to storage engine by default for geo-replication, you must sync existing image data across all storage. To do this, you must
oc exec
(alternatively,docker exec
orkubectl exec
) into the container and enter the following commands:# scl enable python27 bash # python -m util.backfillreplication
Note: This is a one-time operation to sync content after adding new storage.