Once you have deployed a Project Quay registry, there are many ways you can further configure and manage that deployment. Topics covered here include:
-
Advanced Project Quay configuration
-
Setting notifications to alert you of a new Project Quay release
-
Securing connections with SSL and TLS certificates
-
Directing action logs storage to Elasticsearch
-
Configuring image security scanning with Clair
-
Scan pod images with the Container Security Operator
-
Integrate Project Quay into OpenShift with the Quay Bridge Operator
-
Mirroring images with repository mirroring
-
Sharing Quay images with a BitTorrent service
-
Authenticating users with LDAP
-
Enabling Quay for Prometheus and Grafana metrics
-
Setting up geo-replication
-
Troubleshooting Quay
Advanced Project Quay configuration
You can configure your Project Quay after initial deployment using several different interfaces:
-
The Project Quay Config Tool: Running the
Quay
container in config
mode presents a web-based interface for configuring the Project Quay cluster. This is the recommended method for most configuration of the Project Quay service itself. -
Editing the
config.yaml
: The config.yaml
file holds most of the configuration information for the Project Quay cluster. Editing that file directly is possible, but it is only recommended for advanced tuning and performance features that are not available through the Config Tool. -
Project Quay API: Some Project Quay configuration can be done through the API.
While configuration for specific features is covered in separate sections, this section describes how to use each of those interfaces and perform some more advanced configuration.
Using Project Quay Config Tool to modify Project Quay
The Project Quay Config Tool is made available by running a Quay
container
in config
mode alongside the regular Project Quay service. Running the
Config Tool is different for Project Quay clusters running on OpenShift than
it is for those running directly on host systems.
Running the Config Tool from the Project Quay Operator
If you are running the Project Quay Operator from OpenShift, the Config Tool is probably already available for you to use. To access the Config Tool, do the following:
-
From the OpenShift console, select the project in which Project Quay is running. For example, quay-enterprise.
-
From the left column, select Networking → Routes. You should see routes to both the Project Quay application and Config Tool, as shown in the following image:
-
Select the route to the Config Tool (for example, example-quayecosystem-quay-config). The Config Tool web UI should open in your browser.
-
Select
Modify configuration for this cluster
. You should see the Config Tool, ready for you to change features of your Project Quay cluster, as shown in the following image: -
When you have made the changes you want, select
Save Configuration Changes
. The Config Tool will validate your changes. -
Make any corrections as needed by selecting
Continue Editing
or select Next
to continue. -
When prompted, it is recommended that you select
Download Configuration
. That will download a tarball of your new config.yaml
, as well as any certificates and keys used with your Project Quay setup. -
Select
Go to deployment rollout
, then Populate the configuration to deployments
. The Project Quay pods will be restarted and the changes will take effect.
The config.yaml
file you saved can be used to make advanced
changes to your configuration or just kept for future reference.
Running the Config Tool from the command line
If you are running Project Quay directly from a host system,
using tools such as the podman
or docker
commands,
after the initial Project Quay deployment, you can restart the
Config Tool to modify your Project Quay cluster. Here’s how:
-
Start quay in config mode: On the first
quay
node, run the following command, replacing my-secret-password
with your password. If you would like to modify an existing config bundle, you can simply mount your configuration directory into the Quay
container as you would in registry mode.
# podman run --rm -it --name quay_config -p 8080:8080 \
  -v path/to/config-bundle:/conf/stack \
  quay.io/projectquay/quay:qui-gon config my-secret-password
-
Open browser: When the quay configuration tool starts up, open a browser to the URL and port 8080 of the system you are running the configuration tool on (for example https://myquay.example.com:8080). You are prompted for a username and password.
At this point, you can begin modifying your Project Quay cluster as described earlier.
Using the API to modify Project Quay
See the Project Quay API Guide for information on how to access the Project Quay API.
Editing the config.yaml
file to modify Project Quay
Some advanced Project Quay configuration that is not available through
the Config Tool can be achieved by editing the config.yaml
file directly.
Available settings are described in the
Schema for Red Hat Quay configuration.
The following are examples of settings you can change directly in the config.yaml
file.
Add name and company to Project Quay sign-in
Setting the following will cause users to be prompted for their name and company when they first sign in. Although this is optional, it can provide you with extra data about your Project Quay users:
FEATURE_USER_METADATA: true
Disable TLS Protocols
You can change the SSL_PROTOCOLS setting to remove SSL protocols that you do not want to support in your Project Quay instance. For example, to remove TLS v1 support from the default SSL_PROTOCOLS : ['TLSv1','TLSv1.1','TLSv1.2'], change it as follows:
SSL_PROTOCOLS : ['TLSv1.1','TLSv1.2']
Rate limit API calls
Adding the FEATURE_RATE_LIMITS parameter to the config.yaml
causes nginx
to
limit certain API calls to 30 per second. If that feature is not set, API calls
are limited to 300 per second (effectively unlimited).
Rate limiting can be an important feature if you need to make sure the available
resources are not overwhelmed with traffic.
Some namespaces may require unlimited access (for example, they may be important to CI/CD
and take priority). In this case, those namespaces may be placed in
a list in config.yaml
under NON_RATE_LIMITED_NAMESPACES.
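The following is a minimal sketch of how these two settings might look together in config.yaml; the namespace names are placeholders for illustration:
FEATURE_RATE_LIMITS: true
NON_RATE_LIMITED_NAMESPACES:
  - ci-cd-org
  - build-automation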
Adjust database connection pooling
Project Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.
If enabled, each process that interacts with the database will contain a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Project Quay container. Under certain deployments and loads, this may require analysis to ensure Project Quay does not exceed the database’s configured maximum connection count.
Over time, the connection pools will release idle connections. To release all connections immediately, Project Quay requires a restart.
Database connection pooling may be toggled by setting the environment
variable DB_CONNECTION_POOLING={true|false}.
If database connection pooling is enabled, it is possible to change the
maximum size of the connection pool. This can be done through the following
config.yaml
option:
DB_CONNECTION_ARGS:
  max_connections: 10
Database connection arguments
You can customize Project Quay database connection settings within the
config.yaml
file. These are entirely dependent upon the underlying
database driver, such as psycopg2
for Postgres and pymysql
for MySQL.
It is also possible to pass in arguments used by Peewee’s Connection Pooling
mechanism as seen below.
DB_CONNECTION_ARGS:
  max_connections: n  # Max Connection Pool size. (Connection Pooling only)
  timeout: n  # Time to hold on to connections. (Connection Pooling only)
  stale_timeout: n  # Number of seconds to block when the pool is full. (Connection Pooling only)
Database SSL configuration
Some key-value pairs defined under DB_CONNECTION_ARGS are generic while others are database-specific. In particular, SSL configuration depends on the database you are deploying.
PostgreSQL SSL connection arguments
A sample PostgreSQL SSL configuration is given below:
DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert
The sslmode
option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:
-
disable: only try a non-SSL connection
-
allow: first try a non-SSL connection; if that fails, try an SSL connection
-
prefer: (default) first try an SSL connection; if that fails, try a non-SSL connection
-
require: only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified
-
verify-ca: only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)
-
verify-full: only try an SSL connection, verify that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate
More information on the valid arguments for PostgreSQL is available at https://www.postgresql.org/docs/current/libpq-connect.html.
MySQL SSL connection arguments
A sample MySQL SSL configuration follows:
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at https://dev.mysql.com/doc/refman/8.0/en/connecting-using-uri-or-key-value-pairs.html.
HTTP connection counts
It is possible to specify the quantity of simultaneous HTTP connections using environment variables. These can be specified as a whole, or for a specific component. The default for each is 50 parallel connections per process.
Environment variables:
WORKER_CONNECTION_COUNT_REGISTRY=n
WORKER_CONNECTION_COUNT_WEB=n
WORKER_CONNECTION_COUNT_SECSCAN=n
WORKER_CONNECTION_COUNT=n
Note
Specifying a count for a specific component will override any value set in WORKER_CONNECTION_COUNT.
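As a sketch of one way to apply these variables, they can be passed to the container at startup with -e flags, mirroring the podman commands used elsewhere in this guide (the values shown are illustrative only):
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -e WORKER_CONNECTION_COUNT_REGISTRY=100 \
   -e WORKER_CONNECTION_COUNT_WEB=50 \
   -v $QUAY/config:/conf/stack:Z \
   -v $QUAY/storage:/datastorage:Z \
   quay.io/projectquay/quay:qui-gon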
Dynamic process counts
To estimate the quantity of dynamically sized processes, the following calculation is used by default.
Note
Project Quay queries the available CPU count from the entire machine. Any limits applied using Kubernetes or other non-virtualized mechanisms will not affect this behavior; Project Quay makes its calculation based on the total number of processors on the node. The default values listed are targets that will not exceed the maximum or fall below the minimum.
Each of the following process quantities can be overridden using the environment variable specified below.
-
registry - Provides HTTP endpoints to handle registry actions
-
minimum: 8
-
maximum: 64
-
default: $CPU_COUNT x 4
-
environment variable: WORKER_COUNT_REGISTRY
-
web - Provides HTTP endpoints for the web-based interface
-
minimum: 2
-
maximum: 32
-
default: $CPU_COUNT x 2
-
environment variable: WORKER_COUNT_WEB
-
secscan - Interacts with Clair
-
minimum: 2
-
maximum: 4
-
default: $CPU_COUNT x 2
-
environment variable: WORKER_COUNT_SECSCAN
Environment variables
Project Quay allows overriding default behavior using environment variables. The following table lists each variable, what it does, and the values it accepts.
Variable | Description | Values
---|---|---
WORKER_COUNT_REGISTRY | Specifies the number of processes to handle Registry requests within the Quay container. | Integer between 8 and 64
WORKER_COUNT_WEB | Specifies the number of processes to handle UI/Web requests within the container. | Integer between 2 and 32
WORKER_COUNT_SECSCAN | Specifies the number of processes to handle Security Scanning (e.g. Clair) integration within the container. | Integer between 2 and 4
DB_CONNECTION_POOLING | Toggle database connection pooling. In 3.4, it is disabled by default. | "true" or "false"
Turning off connection pooling
Project Quay deployments with a large amount of user activity can regularly hit the 2k maximum database connection limit. In these cases, connection pooling, which is enabled by default for Project Quay, can cause database connection count to rise exponentially and require you to turn off connection pooling.
If turning off connection pooling is not enough to prevent hitting that 2k database connection limit, you need to take additional steps to deal with the problem. In this case you might need to increase the maximum database connections to better suit your workload.
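As a sketch, connection pooling can be disabled by passing the environment variable described earlier when starting the container (the rest of the command mirrors the podman invocations used elsewhere in this guide):
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -e DB_CONNECTION_POOLING=false \
   -v $QUAY/config:/conf/stack:Z \
   -v $QUAY/storage:/datastorage:Z \
   quay.io/projectquay/quay:qui-gon
Increasing the maximum number of database connections is done on the database server itself (for example, the max_connections setting in postgresql.conf), not in the Quay configuration.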
Using the configuration API
The configuration tool exposes four endpoints that can be used to build, validate, bundle, and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md. In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make.
Retrieving the default configuration
If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode:
$ sudo podman run --rm -it --name quay_config \
  -p 8080:8080 \
  quay.io/projectquay/quay:qui-gon config secret
Use the config
endpoint of the configuration API to get the default:
$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq
The value returned is the default configuration in JSON format:
{
"config.yaml": {
"AUTHENTICATION_TYPE": "Database",
"AVATAR_KIND": "local",
"DB_CONNECTION_ARGS": {
"autorollback": true,
"threadlocals": true
},
"DEFAULT_TAG_EXPIRATION": "2w",
"EXTERNAL_TLS_TERMINATION": false,
"FEATURE_ACTION_LOG_ROTATION": false,
"FEATURE_ANONYMOUS_ACCESS": true,
"FEATURE_APP_SPECIFIC_TOKENS": true,
....
}
}
Retrieving the current configuration
If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume:
$ sudo podman run --rm -it --name quay_config \
  -p 8080:8080 \
  -v $QUAY/config:/conf/stack:Z \
  quay.io/projectquay/quay:qui-gon config secret
Use the config
endpoint of the API to get the current configuration:
$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq
The value returned is the current configuration in JSON format, including database and Redis configuration data:
{
"config.yaml": {
....
"BROWSER_API_CALLS_XHR_ONLY": false,
"BUILDLOGS_REDIS": {
"host": "quay-server",
"password": "strongpassword",
"port": 6379
},
"DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
"DB_CONNECTION_ARGS": {
"autorollback": true,
"threadlocals": true
},
"DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
"DEFAULT_TAG_EXPIRATION": "2w",
....
}
}
Validating configuration using the API
You can validate a configuration by posting it to the config/validate
endpoint:
curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
    ....
    "BROWSER_API_CALLS_XHR_ONLY": false,
    "BUILDLOGS_REDIS": {
      "host": "quay-server",
      "password": "strongpassword",
      "port": 6379
    },
    "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
    "DB_CONNECTION_ARGS": {
      "autorollback": true,
      "threadlocals": true
    },
    "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
    "DEFAULT_TAG_EXPIRATION": "2w",
    ....
  }
}' http://quay-server:8080/api/v1/config/validate | jq
The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array []
is returned.
Determining the required fields
You can determine the required fields by posting an empty configuration structure to the config/validate
endpoint:
curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
  }
}' http://quay-server:8080/api/v1/config/validate | jq
The value returned is an array indicating which fields are required:
[
{
"FieldGroup": "Database",
"Tags": [
"DB_URI"
],
"Message": "DB_URI is required."
},
{
"FieldGroup": "DistributedStorage",
"Tags": [
"DISTRIBUTED_STORAGE_CONFIG"
],
"Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location."
},
{
"FieldGroup": "HostSettings",
"Tags": [
"SERVER_HOSTNAME"
],
"Message": "SERVER_HOSTNAME is required"
},
{
"FieldGroup": "HostSettings",
"Tags": [
"SERVER_HOSTNAME"
],
"Message": "SERVER_HOSTNAME must be of type Hostname"
},
{
"FieldGroup": "Redis",
"Tags": [
"BUILDLOGS_REDIS"
],
"Message": "BUILDLOGS_REDIS is required"
}
]
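Putting the required field groups together, a minimal starting point for config.yaml might look like the following sketch; the hostname, credentials, and storage layout are placeholders, and the DISTRIBUTED_STORAGE_CONFIG shape assumes simple local storage:
SERVER_HOSTNAME: quay-server.example.com
DB_URI: postgresql://quayuser:quaypass@quay-server:5432/quay
BUILDLOGS_REDIS:
  host: quay-server
  port: 6379
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry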
Getting Project Quay release notifications
To keep up with the latest Project Quay releases and other changes related to Project Quay, you can sign up for update notifications on the Red Hat Customer Portal. After signing up for notifications, you will receive notifications letting you know when there is a new Project Quay version, updated documentation, or other Project Quay news.
-
Log into the Red Hat Customer Portal with your Red Hat customer account credentials.
-
Select your user name (upper-right corner) to see Red Hat Account and Customer Portal selections:
-
Select Notifications. Your profile activity page appears.
-
Select the Notifications tab.
-
Select Manage Notifications.
-
Select Follow, then choose Products from the drop-down box.
-
From the drop-down box next to the Products, search for and select Project Quay:
-
Select the SAVE NOTIFICATION button. Going forward, you will receive notifications when there are changes to the Project Quay product, such as a new release.
Using SSL to protect connections to Project Quay
Introduction to using SSL
To configure Project Quay with a self-signed certificate, you need to create a Certificate Authority (CA) and then generate the required key and certificate files.
The following examples assume you have configured the server hostname quay-server.example.com
using DNS or another naming mechanism, such as adding an entry in your /etc/hosts
file:
$ cat /etc/hosts
...
192.168.1.112   quay-server.example.com
Create a Certificate Authority and sign a certificate
At the end of this procedure, you will have a certificate file and a private key file named ssl.cert
and ssl.key
, respectively.
Create a Certificate Authority
-
Generate the root CA key:
$ openssl genrsa -out rootCA.key 2048
-
Generate the root CA cert:
$ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
-
Enter the information that will be incorporated into your certificate request, including the server hostname, for example:
Country Name (2 letter code) [XX]:IE
State or Province Name (full name) []:GALWAY
Locality Name (eg, city) [Default City]:GALWAY
Organization Name (eg, company) [Default Company Ltd]:QUAY
Organizational Unit Name (eg, section) []:DOCS
Common Name (eg, your name or your server's hostname) []:quay-server.example.com
Sign a certificate
-
Generate the server key:
$ openssl genrsa -out ssl.key 2048
-
Generate a signing request:
$ openssl req -new -key ssl.key -out ssl.csr
-
Enter the information that will be incorporated into your certificate request, including the server hostname, for example:
Country Name (2 letter code) [XX]:IE
State or Province Name (full name) []:GALWAY
Locality Name (eg, city) [Default City]:GALWAY
Organization Name (eg, company) [Default Company Ltd]:QUAY
Organizational Unit Name (eg, section) []:DOCS
Common Name (eg, your name or your server's hostname) []:quay-server.example.com
-
Create a configuration file
openssl.cnf
, specifying the server hostname, for example:
openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = quay-server.example.com
IP.1 = 192.168.1.112
-
Use the configuration file to generate the certificate
ssl.cert
:
$ openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf
Configuring SSL using the command line
Another option when configuring SSL is to use the command line interface.
-
Copy the certificate file and private key file to your configuration directory, ensuring they are named
ssl.cert
andssl.key
respectively:
$ cp ~/ssl.cert $QUAY/config
$ cp ~/ssl.key $QUAY/config
$ cd $QUAY/config
-
Edit the
config.yaml
file and specify that you want Quay to handle TLS:
config.yaml
...
SERVER_HOSTNAME: quay-server.example.com
...
PREFERRED_URL_SCHEME: https
...
-
Stop the
Quay
container and restart the registry:
$ sudo podman rm -f quay
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -v $QUAY/config:/conf/stack:Z \
   -v $QUAY/storage:/datastorage:Z \
   quay.io/projectquay/quay:qui-gon
Configuring SSL using the UI
This section describes how to configure SSL using the Quay UI. To configure SSL using the command line interface, see the preceding section.
-
Start the
Quay
container in configuration mode:
$ sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 quay.io/projectquay/quay:qui-gon config secret
-
In the Server Configuration section, select
Red Hat Quay handles TLS
for TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when creating the certs. Validate and download the updated configuration. -
Stop the
Quay
container and then restart the registry:
$ sudo podman rm -f quay
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -v $QUAY/config:/conf/stack:Z \
   -v $QUAY/storage:/datastorage:Z \
   quay.io/projectquay/quay:qui-gon
Testing SSL configuration using the command line
-
Use the
podman login
command to attempt to log in to the Quay registry with SSL enabled:
$ sudo podman login quay-server.example.com
Username: quayadmin
Password:
Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority
-
Podman does not trust self-signed certificates. As a workaround, use the
--tls-verify
option and set it to false:
$ sudo podman login --tls-verify=false quay-server.example.com
Username: quayadmin
Password:
Login Succeeded!
Configuring Podman to trust the root Certificate Authority (CA) is covered in a subsequent section.
Testing SSL configuration using the browser
When you attempt to access the Quay registry, in this case, https://quay-server.example.com
, the browser warns of the potential risk:
Proceed to the login screen, and the browser will notify you that the connection is not secure:
Configuring the system to trust the root Certificate Authority (CA) is covered in the subsequent section.
Configuring podman to trust the Certificate Authority
Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/
and /etc/docker/certs.d/
.
-
Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and name the file
ca.crt
:
$ sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt
-
Alternatively, if you are using Docker, you can copy the root CA file to the equivalent Docker directory:
$ sudo cp rootCA.pem /etc/docker/certs.d/quay-server.example.com/ca.crt
You should no longer need to use the --tls-verify=false
option when logging in to the registry:
$ sudo podman login quay-server.example.com
Username: quayadmin
Password:
Login Succeeded!
Configuring the system to trust the certificate authority
-
Copy the root CA file to the consolidated system-wide trust store:
$ sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/
-
Update the system-wide trust store configuration:
$ sudo update-ca-trust extract
-
You can use the
trust list
command to ensure that the Quay server has been configured:
$ trust list | grep quay
    label: quay-server.example.com
Now, when you browse to the registry at
https://quay-server.example.com
, the lock icon shows that the connection is secure: -
To remove the root CA from system-wide trust, delete the file and update the configuration:
$ sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem
$ sudo update-ca-trust extract
$ trust list | grep quay
$
More information can be found in the RHEL 8 documentation in the chapter Using shared system certificates.
Adding TLS Certificates to the Project Quay Container
To add custom TLS certificates to Project Quay, create a new
directory named extra_ca_certs/
beneath the Project Quay config
directory. Copy any required site-specific TLS certificates to this new
directory.
Add TLS certificates to Project Quay
-
View certificate to be added to the container
$ cat storage.crt
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
[...]
-----END CERTIFICATE-----
-
Create certs directory and copy certificate there
$ mkdir -p quay/config/extra_ca_certs
$ cp storage.crt quay/config/extra_ca_certs/
$ tree quay/config/
├── config.yaml
├── extra_ca_certs
│   ├── storage.crt
-
Obtain the
Quay
container’sCONTAINER ID
withpodman ps
:
$ sudo podman ps
CONTAINER ID  IMAGE                           COMMAND          CREATED       STATUS       PORTS
5a3e82c4a75f  <registry>/<repo>/quay:qui-gon  "/sbin/my_init"  24 hours ago  Up 18 hours  0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp  grave_keller
-
Restart the container with that ID:
$ sudo podman restart 5a3e82c4a75f
-
Examine the certificate copied into the container namespace:
$ sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
Add certs when deployed on Kubernetes
When deployed on Kubernetes, Project Quay mounts in a secret as a volume to store config assets. Unfortunately, this currently breaks the upload certificate function of the superuser panel.
To get around this error, a base64 encoded certificate can be added to the secret after Project Quay has been deployed. Here’s how:
-
Begin by base64 encoding the contents of the certificate:
$ cat ca.crt
-----BEGIN CERTIFICATE-----
MIIDljCCAn6gAwIBAgIBATANBgkqhkiG9w0BAQsFADA5MRcwFQYDVQQKDA5MQUIu
TElCQ09SRS5TTzEeMBwGA1UEAwwVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTE2
MDExMjA2NTkxMFoXDTM2MDExMjA2NTkxMFowOTEXMBUGA1UECgwOTEFCLkxJQkNP
UkUuU08xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTCCASIwDQYJKoZI
[...]
-----END CERTIFICATE-----
$ cat ca.crt | base64 -w 0
[...]
c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
-
Use the
kubectl
tool to edit the quay-enterprise-config-secret.
$ kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret
-
Add an entry for the cert and paste the full base64 encoded string under the entry:
custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
-
Finally, recycle all Project Quay pods. Use
kubectl delete
to remove all Project Quay pods. The Project Quay Deployment will automatically schedule replacement pods with the new certificate data.
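A sketch of what that might look like; the pod name shown is a placeholder, so list the pods first and delete the ones belonging to Project Quay:
$ kubectl --namespace quay-enterprise get pods
$ kubectl --namespace quay-enterprise delete pod quay-enterprise-app-5f8b9c6d7-abcde
The Deployment controller then recreates the pods, which pick up the updated secret, including the new certificate entry.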
Configuring action log storage for Elasticsearch
By default, the past three months of usage logs are stored in the Project Quay database and exposed via the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large amount of logged operations, you can now store the usage logs in Elasticsearch instead of the Project Quay database backend. To do this, you need to provide your own Elasticsearch stack, as it is not included with Project Quay as a customizable component.
Enabling Elasticsearch logging can be done during Project Quay deployment
or post-deployment using the Project Quay Config Tool. The resulting
configuration is stored in the config.yaml
file.
Once configured, usage log access continues to be provided the same way, via the web UI
for repositories and organizations.
Here’s how to configure action log storage to change it from the default Project Quay database to use Elasticsearch:
-
Obtain an Elasticsearch account.
-
Open the Project Quay Config Tool (either during or after Project Quay deployment).
-
Scroll to the Action Log Storage Configuration setting and select Elasticsearch instead of Database. The following figure shows the Elasticsearch settings that appear:
-
Fill in the following information for your Elasticsearch instance:
-
Elasticsearch hostname: The hostname or IP address of the system providing the Elasticsearch service.
-
Elasticsearch port: The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Project Quay registry. The default is TCP port 9200.
-
Elasticsearch access key: The access key needed to gain access to the Elasticsearch service, if required.
-
Elasticsearch secret key: The secret key needed to gain access to the Elasticsearch service, if required.
-
AWS region: If you are running on AWS, set the AWS region (otherwise, leave it blank).
-
Index prefix: Choose a prefix to attach to log entries.
-
Logs Producer: Choose either Elasticsearch (default) or Kinesis to direct logs to an intermediate Kinesis stream on AWS. You need to set up your own pipeline to send logs from Kinesis to Elasticsearch (for example, Logstash). The following figure shows additional fields you would need to fill in for Kinesis:
-
-
If you chose Elasticsearch as the Logs Producer, no further configuration is needed. If you chose Kinesis, fill in the following:
-
Stream name: The name of the Kinesis stream.
-
AWS access key: The name of the AWS access key needed to gain access to the Kinesis stream, if required.
-
AWS secret key: The name of the AWS secret key needed to gain access to the Kinesis stream, if required.
-
AWS region: The AWS region.
-
-
When you are done, save the configuration. The Config Tool checks your settings. If there is a problem connecting to the Elasticsearch or Kinesis services, you will see an error and have the opportunity to continue editing. Otherwise, logging will begin to be directed to your Elasticsearch configuration after the cluster restarts with the new configuration.
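For reference, the Config Tool writes these choices into config.yaml. A sketch of what the resulting entries might look like when Elasticsearch is the logs producer (the field names follow the Quay configuration schema; all values are placeholders):
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    host: es.example.com
    port: 9200
    access_key: <access key, if required>
    secret_key: <secret key, if required>
    index_prefix: quay_logs
    aws_region: <AWS region, if applicable>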
Clair Security Scanning
Clair is a set of microservices that can be used with Project Quay to perform vulnerability scanning of container images associated with a set of Linux operating systems. The microservices design of Clair makes it appropriate to run in a highly scalable configuration, where components can be scaled separately as appropriate for enterprise environments.
Clair uses the following vulnerability databases to scan for issues in your images:
-
Alpine SecDB database
-
AWS UpdateInfo
-
Debian Oval database
-
Oracle Oval database
-
RHEL Oval database
-
SUSE Oval database
-
Ubuntu Oval database
-
Pyup.io (python) database
For information on how Clair does security mapping with the different databases, see ClairCore Severity Mapping.
Note
With the release of Clair V4 (image clair), the previously used Clair V2 (image clair-jwt) is no longer used. See below for how to run V2 in read-only mode while V4 is updating.
Setting Up Clair on a Project Quay OpenShift deployment
Deploying Via the Quay Operator
To set up Clair V4 on a new Project Quay deployment on OpenShift, it is highly recommended to use the Quay Operator. By default, the Quay Operator will install or upgrade a Clair deployment along with your Project Quay deployment and configure Clair security scanning automatically.
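In other words, no extra manifest is usually required. If you want to be explicit, the Clair component can be declared in the QuayRegistry resource; a minimal sketch (the registry name is a placeholder, and managed: true is the default behavior):
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    - kind: clair
      managed: true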
Manually Deploying Clair
To configure Clair V4 on an existing Project Quay OpenShift deployment running Clair V2, first ensure Project Quay has been upgraded to at least version 3.4.0. Then use the following steps to manually set up Clair V4 alongside Clair V2.
-
Set your current project to the name of the project in which Project Quay is running. For example:
$ oc project quay-enterprise
-
Create a Postgres deployment file for Clair v4 (for example,
clairv4-postgres.yaml
) as follows.
clairv4-postgres.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clairv4-postgres
  namespace: quay-enterprise
  labels:
    quay-component: clairv4-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      quay-component: clairv4-postgres
  template:
    metadata:
      labels:
        quay-component: clairv4-postgres
    spec:
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: clairv4-postgres
      containers:
        - name: postgres
          image: postgres:11.5
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_DB
              value: "clair"
            - name: POSTGRES_PASSWORD
              value: "postgres"
            - name: PGDATA
              value: "/etc/postgres/data"
          volumeMounts:
            - name: postgres-data
              mountPath: "/etc/postgres"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clairv4-postgres
  labels:
    quay-component: clairv4-postgres
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "5Gi"
  volumeName: "clairv4-postgres"
---
apiVersion: v1
kind: Service
metadata:
  name: clairv4-postgres
  labels:
    quay-component: clairv4-postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
      protocol: TCP
      name: postgres
      targetPort: 5432
  selector:
    quay-component: clairv4-postgres
-
Deploy the postgres database as follows:
$ oc create -f ./clairv4-postgres.yaml
-
Create a Clair
config.yaml
file to use for Clair v4. For example:
config.yaml
introspection_addr: :8089
http_listen_addr: :8080
log_level: debug
indexer:
  connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
matcher:
  connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
  max_conn_pool: 100
  run: ""
  migrations: true
  indexer_addr: clair-indexer
notifier:
  connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
  delivery: 1m
  poll_interval: 5m
  migrations: true
auth:
  psk:
    key: MTU5YzA4Y2ZkNzJoMQ== (1)
    iss: ["quay"]
# tracing and metrics
trace:
  name: "jaeger"
  probability: 1
  jaeger:
    agent_endpoint: "localhost:6831"
    service_name: "clair"
metrics:
  name: "prometheus"
-
To generate a Clair pre-shared key (PSK), enable
scanning
in the Security Scanner section of the User Interface and click Generate PSK
.
-
More information about Clair’s configuration format can be found in upstream Clair documentation.
-
Create a secret from the Clair
config.yaml
:
$ oc create secret generic clairv4-config-secret --from-file=./config.yaml
-
Create the Clair v4 deployment file (for example,
clair-combo.yaml
) and modify it as necessary:
clair-combo.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    quay-component: clair-combo
  name: clair-combo
spec:
  replicas: 1
  selector:
    matchLabels:
      quay-component: clair-combo
  template:
    metadata:
      labels:
        quay-component: clair-combo
    spec:
      containers:
        - image: quay.io/projectquay/clair:qui-gon (1)
          imagePullPolicy: IfNotPresent
          name: clair-combo
          env:
            - name: CLAIR_CONF
              value: /clair/config.yaml
            - name: CLAIR_MODE
              value: combo
          ports:
            - containerPort: 8080
              name: clair-http
              protocol: TCP
            - containerPort: 8089
              name: clair-intro
              protocol: TCP
          volumeMounts:
            - mountPath: /clair/
              name: config
      imagePullSecrets:
        - name: redhat-pull-secret
      restartPolicy: Always
      volumes:
        - name: config
          secret:
            secretName: clairv4-config-secret
---
apiVersion: v1
kind: Service
metadata:
  name: clairv4 (2)
  labels:
    quay-component: clair-combo
spec:
  ports:
    - name: clair-http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: clair-introspection
      port: 8089
      protocol: TCP
      targetPort: 8089
  selector:
    quay-component: clair-combo
  type: ClusterIP
-
Change the image to the latest Clair image name and version.
-
With the Service set to clairv4, the scanner endpoint for Clair v4 is entered later into the Project Quay config.yaml in the
SECURITY_SCANNER_V4_ENDPOINT
as http://clairv4
.
-
-
Create the Clair v4 deployment as follows:
$ oc create -f ./clair-combo.yaml
-
Modify the
config.yaml
file for your Project Quay deployment to add the following entries at the end:
FEATURE_SECURITY_NOTIFICATIONS: true
FEATURE_SECURITY_SCANNER: true
SECURITY_SCANNER_V4_ENDPOINT: http://clairv4 (1)
-
Identify the Clair v4 service endpoint
-
-
Redeploy the modified
config.yaml
to the secret containing that file (for example,quay-enterprise-config-secret
):
$ oc delete secret quay-enterprise-config-secret
$ oc create secret generic quay-enterprise-config-secret --from-file=./config.yaml
-
For the new
config.yaml
to take effect, you need to restart the Project Quay pods. Simply deleting the quay-app
pods causes pods with the updated configuration to be deployed.
At this point, images in any of the organizations identified in the namespace whitelist will be scanned by Clair v4.
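The whitelist itself is a config.yaml setting. A sketch of what it might look like, assuming the SECURITY_SCANNER_V4_NAMESPACE_WHITELIST field used during the V2-to-V4 transition and placeholder organization names:
SECURITY_SCANNER_V4_NAMESPACE_WHITELIST:
  - org1
  - org2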
Setting up Clair on a non-OpenShift Project Quay deployment
For Project Quay deployments not running on OpenShift, it is possible to configure Clair security scanning manually. Project Quay deployments already running Clair V2 can use the instructions below to add Clair V4 to their deployment.
-
Deploy a (preferably fault-tolerant) Postgres database server. Note that Clair requires the
uuid-ossp
extension to be added to its Postgres database. If the user supplied in Clair’sconfig.yaml
has the necessary privileges to create the extension then it will be added automatically by Clair itself. If not, then the extension must be added before starting Clair. If the extension is not present, the following error will be displayed when Clair attempts to start.
ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501)
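If the extension has to be added manually, it can be created ahead of time with a database superuser session against the Clair database; a sketch, with the host, database, and user as placeholders matching the connstring examples below:
$ psql -h clairv4-postgres -U postgres -d clair -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'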
-
Create a Clair config file in a specific folder (for example,
/etc/clairv4/config/config.yaml
).
config.yaml
introspection_addr: :8089
http_listen_addr: :8080
log_level: debug
indexer:
  connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
matcher:
  connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
  max_conn_pool: 100
  run: ""
  migrations: true
  indexer_addr: clair-indexer
notifier:
  connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
  delivery_interval: 1m
  poll_interval: 5m
  migrations: true
# tracing and metrics
trace:
  name: "jaeger"
  probability: 1
  jaeger:
    agent_endpoint: "localhost:6831"
    service_name: "clair"
metrics:
  name: "prometheus"
More information about Clair’s configuration format can be found in upstream Clair documentation.
-
Run Clair via the container image, mounting in the configuration from the file you created.
$ podman run -p 8080:8080 -p 8089:8089 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clair4/config:/clair -d quay.io/projectquay/clair:qui-gon
-
Follow the remaining instructions from the previous section for configuring Project Quay to use the new Clair V4 endpoint.
Running multiple Clair containers in this fashion is also possible, but for deployment scenarios beyond a single container the use of a container orchestrator like Kubernetes or OpenShift is strongly recommended.
Advanced Clair configuration
Unmanaged Clair configuration
With Project Quay 3.7, users can run an unmanaged Clair configuration on the Project Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database.
Unmanaging a Clair database
An unmanaged Clair database allows the Project Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster.
-
In the Quay Operator, set the
clairpostgres
component of the QuayRegistry custom resource to unmanaged:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay370
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: clairpostgres
      managed: false
Configuring a custom Clair database
The Project Quay Operator for OpenShift Container Platform allows users to provide their own Clair configuration by editing the configBundleSecret
parameter.
-
Create a Quay config bundle secret that includes the
clair-config.yaml
:
$ oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
Example
clair-config.yaml
configuration:
indexer:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=clairuser password=clairpass sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
log_level: debug
matcher:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=clairuser password=clairpass sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
metrics:
  name: prometheus
notifier:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=clairuser password=clairpass sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
Note-
The database certificate is mounted under
/run/certs/rds-ca-2019-root.pem
on the Clair application pod in theclair-config.yaml
. It must be specified when configuring yourclair-config.yaml
. -
An example
clair-config.yaml
can be found at Clair on OpenShift config.
-
-
Add the
clair-config.yaml
to your bundle secret, namedconfigBundleSecret
:
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
data:
  config.yaml: <base64 encoded Quay config>
  clair-config.yaml: <base64 encoded Clair config>
  extra_ca_cert_<name>: <base64 encoded ca cert>
  clair-ssl.crt: >-
  clair-ssl.key: >-
Note
When updated, the provided
clair-config.yaml
is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.
After proper configuration, the Clair application pod should return to a Ready
state.
Running a custom Clair configuration with a managed
database
In some cases, users might want to run a custom Clair configuration with a managed
database. This is useful in the following scenarios:
-
When a user wants to disable an updater.
-
When a user is running in an air-gapped environment.
Note-
If you are running Quay in an air-gapped environment, the
airgap
parameter of yourclair-config.yaml
must be set totrue
. -
If you are running Quay in an air-gapped environment, you should disable all updaters.
-
Use the steps in "Configuring a custom Clair database" to configure your database when clairpostgres
is set to managed
.
For more information about running Clair in an air-gapped environment, see Configuring access to the Clair database in the air-gapped OpenShift cluster.
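For reference, a sketch of the clair-config.yaml entries involved in these scenarios; the placement of the airgap flag under the indexer section is an assumption, so check the Clair configuration reference for your release:
indexer:
  airgap: true
matcher:
  disable_updaters: true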
Using Clair
-
Log in to your Project Quay cluster and select an organization for which you have configured Clair scanning.
-
Select a repository from that organization that holds some images and select Tags from the left navigation. The following figure shows an example of a repository with two images that have been scanned:
-
If vulnerabilities are found, select to under the Security Scan column for the image to see either all vulnerabilities or those that are fixable. The following figure shows information on all vulnerabilities found:
CVE ratings from the National Vulnerability Database
With Clair v4.2, enrichment data is now viewable in the Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities.
With this change, if the vulnerability has a CVSS score that is within 2 levels of the distro's score, the Quay UI presents the distro's score by default. For example:
This differs from the previous interface, which would only display the following information:
Configuring Clair for Disconnected Environments
Clair utilizes a set of components called Updaters to handle the fetching and parsing of data from various vulnerability databases. These Updaters are set up by default to pull vulnerability data directly from the internet and work out of the box. For customers in disconnected environments without direct access to the internet, this poses a problem. Clair supports these environments through the ability to work with different types of update workflows that take into account network isolation. Using the clairctl
command line utility, you can fetch Updater data from the internet on a connected host, securely transfer the data to the isolated host, and then import the Updater data on the isolated host into Clair itself.
The steps are as follows.
-
First ensure that your Clair configuration has disabled automated Updaters from running.
config.yaml
matcher:
  disable_updaters: true
-
Export out the latest Updater data to a local archive. This requires the
clairctl
tool which can be run directly as a binary, or via the Clair container image. Assuming your Clair configuration is in/etc/clairv4/config/config.yaml
, to run via the container image:$ podman run -it --rm -v /etc/clairv4/config:/cfg:Z -v /path/to/output/directory:/updaters:Z --entrypoint /bin/clairctl quay.io/projectquay/clair:qui-gon --config /cfg/config.yaml export-updaters /updaters/updaters.gz
Note that you need to explicitly reference the Clair configuration. This will create the Updater archive in
/etc/clairv4/updaters/updaters.gz
. If you want to ensure the archive was created without any errors from the source databases, you can supply the --strict
flag to clairctl
. The archive file should be copied over to a volume that is accessible from the disconnected host running Clair. From the disconnected host, use the same procedure now to import the archive into Clair.
$ podman run -it --rm -v /etc/clairv4/config:/cfg:Z -v /path/to/output/directory:/updaters:Z --entrypoint /bin/clairctl quay.io/projectquay/clair:qui-gon --config /cfg/config.yaml import-updaters /updaters/updaters.gz
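If you prefer to run clairctl directly as a binary rather than through the container image, the equivalent invocations would follow the same pattern (paths shown are placeholders):
$ clairctl --config /etc/clairv4/config/config.yaml export-updaters /path/to/updaters.gz
$ clairctl --config /etc/clairv4/config/config.yaml import-updaters /path/to/updaters.gz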
Clair updater URLs
The following are the HTTP hosts and paths that Clair will attempt to talk to in a default configuration. This list is non-exhaustive, as some servers will issue redirects and some request URLs are constructed dynamically.
-
https://secdb.alpinelinux.org/
-
http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list
-
https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list
-
https://www.debian.org/security/oval/
-
https://linux.oracle.com/security/oval/
-
https://packages.vmware.com/photon/photon_oval_definitions/
-
https://github.com/pyupio/safety-db/archive/
-
https://catalog.redhat.com/api/containers/
-
https://www.redhat.com/security/data/
-
https://support.novell.com/security/oval/
-
https://people.canonical.com/~ubuntu-security/oval/
Additional Information
For detailed documentation on the internals of Clair, including how the microservices are structured, please see the Upstream Clair and ClairCore documentation.
Scan pod images with the Container Security Operator
Using the Container Security Operator (CSO), you can scan container images associated with active pods running on OpenShift (4.2 or later) and other Kubernetes platforms for known vulnerabilities. The CSO:
-
Watches containers associated with pods on all or specified namespaces
-
Queries the container registry where the containers came from for vulnerability information, provided an image's registry supports image scanning (such as a Quay registry with Clair scanning)
-
Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API
Using the instructions here, the CSO is installed in the marketplace-operators
namespace,
so it is available to all namespaces on your OpenShift cluster.
Note
To see instructions on installing the CSO on Kubernetes, select the Install button from the Container Security page on OperatorHub.io.
Run the CSO in OpenShift
To start using the CSO in OpenShift, do the following:
-
Go to Operators → OperatorHub (select Security) to see the available
Container Security
Operator. -
Select the
Container Security
Operator, then selectInstall
to go to the Create Operator Subscription page. -
Check the settings (all namespaces and automatic approval strategy, by default), and select
Subscribe
. TheContainer Security
appears after a few moments on theInstalled Operators
screen. -
Optionally, you can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then run the following command to add the cert to the CSO (restart the Operator pod for the new certs to take effect):
$ oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators
-
Open the OpenShift Dashboard (Home → Dashboards). A link to Image Security appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a Security breakdown, as shown in the following figure:
-
You can do one of two things at this point to follow up on any detected vulnerabilities:
-
Select the link to the vulnerability. You are taken to the container registry, Project Quay or other registry where the container came from, where you can see information about the vulnerability. The following figure shows an example of detected vulnerabilities from a Quay.io registry:
-
Select the namespaces link to go to the ImageManifestVuln screen, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in two namespaces:
-
At this point, you know what images are vulnerable, what you need to do to fix those vulnerabilities, and every namespace that the image was run in. So you can:
-
Alert anyone running the image that they need to correct the vulnerability
-
Stop the images from running (by deleting the deployment or other object that started the pod the image is in)
Note that if you do delete the pod, it may take a few minutes for the vulnerability to reset on the dashboard.
Query image vulnerabilities from the CLI
You can query information on security from the command line. To query for detected vulnerabilities, type:
$ oc get vuln --all-namespaces
NAMESPACE NAME AGE
default sha256.ca90... 6m56s
skynet sha256.ca90... 9m37s
To display details for a particular vulnerability, identify one of the
vulnerabilities, along with its namespace, and use the describe
option.
This example shows an active container whose image includes an RPM package with a vulnerability:
$ oc describe vuln --namespace mynamespace sha256.ac50e3752...
Name: sha256.ac50e3752...
Namespace: quay-enterprise
...
Spec:
Features:
Name: nss-util
Namespace Name: centos:7
Version: 3.44.0-3.el7
Versionformat: rpm
Vulnerabilities:
Description: Network Security Services (NSS) is a set of libraries...
Project Quay Security Scanning with Clair V2
Project Quay supports scanning container images for known vulnerabilities with a scanning engine such as Clair. This document explains how to configure Clair with Project Quay.
Note
With the release of Red Hat Quay 3.4, the default version of Clair is V4. This new version V4 is no longer being released as Technology Preview and is supported for production use. Customers are strongly encouraged to use Clair V4 with Red Hat Quay 3.4. It is possible to run both Clair V4 and Clair V2 simultaneously if so desired. In future versions of Red Hat Quay, Clair V2 will eventually be removed.
Set up Clair V2 in the Project Quay config tool
Enabling Clair V2 in Project Quay consists of:
-
Starting the Project Quay config tool. See the Project Quay deployment guide for the type of deployment you are doing (OpenShift, Basic, or HA) for how to start the config tool for that environment.
-
Enabling security scanning, then generating a private key and PEM file in the config tool
-
Including the key and PEM file in the Clair config file
-
Starting the Clair container
The procedure varies, based on whether you are running Project Quay on OpenShift or directly on a host.
Enabling Clair V2 on a Project Quay OpenShift deployment
To set up Clair V2 on Project Quay in OpenShift, see Add Clair image scanning to Project Quay.
Enabling Clair V2 on a Project Quay Basic or HA deployment
To set up Clair V2 on a Project Quay deployment where the container is running directly on the host system, do the following:
-
Restart the Project Quay config tool: Run the
Quay
container again in config mode, open the configuration UI in a browser, then selectModify an existing configuration
. When prompted, upload thequay-config.tar.gz
file that was originally created for the deployment. -
Enable Security Scanning: Scroll to the Security Scanner section and select the "Enable Security Scanning" checkbox. From the fields that appear you need to create an authentication key and enter the security scanner endpoint. Here’s how:
-
Generate key: Click
Create Key
, then from the pop-up window type a name for the Clair private key and an optional expiration date (if blank, the key never expires). Then select Generate Key. -
Copy the Clair key and PEM file: Save the Key ID (to a notepad or similar) and download a copy of the Private Key PEM file (
named security_scanner.pem
) by selecting "Download Private Key" (if you lose the key, you need to generate a new one). You will need the key and PEM file when you start the Clair container later.
Close the pop-up when you are done. Here is an example of a completed Security Scanner config:
-
-
Save the configuration: Click
Save Configuration Changes
and then selectDownload Configuration
to save it to your local system. -
Deploy the configuration: To pick up the changes enabling scanning, as well as other changes you may have made to the configuration, unpack the
quay-config.tar.gz
and copy the resulting files to the config directory. For example:
$ tar xvf quay-config.tar.gz
config.yaml ssl.cert ssl.key
$ cp config.yaml ssl* /mnt/quay/config
Next, start the Clair V2 container and associated database, as described in the following sections.
Setting Up Clair V2 Security Scanning
Once you have created the necessary key and pem files from the Project Quay config UI, you are ready to start up the Clair V2 container and associated database. Once that is done, you can restart your Project Quay cluster to have those changes take effect.
Procedures for running the Clair V2 container and associated database are different on OpenShift than they are for running those containers directly on a host.
Run Clair V2 on a Project Quay OpenShift deployment
To run the Clair V2 image scanning container and its associated database on an OpenShift environment with your Project Quay cluster, see Add Clair image scanning to Project Quay.
Run Clair V2 on a Project Quay Basic or HA deployment
To run Clair V2 and its associated database on non-OpenShift environments (directly on a host), you need to:
-
Start up a database
-
Configure and start Clair V2
Get Postgres and Clair
In order to run Clair, a database is required. MySQL is not supported for production deployments. For production, we recommend PostgreSQL or another supported database:
-
Running on machines other than those running Project Quay
-
Ideally with automatic replication and failover
For testing purposes, a single PostgreSQL instance can be started locally:
-
To start Postgres locally, do the following:
# sudo podman run --name postgres -p 5432:5432 -d postgres
# sleep 5
# sudo podman run --rm --link postgres:postgres postgres \
    sh -c 'echo "create database clairtest" | psql -h \
    "$POSTGRES_PORT_5432_TCP_ADDR" -p \
    "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
The configuration string for this test database is:
postgresql://postgres@{DOCKER HOST GOES HERE}:5432/clairtest?sslmode=disable
-
Pull the security-enabled Clair image:
You will need to build your own Clair container and pull it during this step. Instructions for building the Clair container are not yet available.
-
Make a configuration directory for Clair
# mkdir clair-config
# cd clair-config
Configure Clair V2
Clair V2 can run either as a single instance or in high-availability mode. It is recommended to run more than a single instance of Clair, ideally in an auto-scaling group with automatic healing.
-
Create a
config.yaml
file to be used in the Clair V2 config directory (/clair/config
) from one of the two Clair configuration files shown here. -
If you are doing a high-availability installation, go through the procedure in Authentication for high-availability scanners to create a Key ID and Private Key (PEM).
-
Save the Private Key (PEM) to a file (such as $HOME/config/security_scanner.pem).
-
Replace the value of key_id (CLAIR_SERVICE_KEY_ID) with the Key ID you generated and the value of private_key_path with the location of the PEM file (for example, /config/security_scanner.pem).
For example, those two values might now appear as:
key_id: { 4fb9063a7cac00b567ee921065ed16fed7227afd806b4d67cc82de67d8c781b1 }
private_key_path: /clair/config/security_scanner.pem
-
Change other values in the configuration file as needed.
Clair V2 configuration: High availability
clair:
database:
type: pgsql
options:
# A PostgreSQL Connection string pointing to the Clair Postgres database.
# Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
source: { POSTGRES_CONNECTION_STRING }
cachesize: 16384
api:
# The port at which Clair will report its health status. For example, if Clair is running at
# https://clair.mycompany.com, the health will be reported at
# http://clair.mycompany.com:6061/health.
healthport: 6061
port: 6062
timeout: 900s
# paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
paginationkey: "XxoPtCUzrUv4JV5dS+yQ+MdW7yLEJnRMwigVY/bpgtQ="
updater:
# interval defines how often Clair will check for updates from its upstream vulnerability databases.
interval: 6h
notifier:
attempts: 3
renotifyinterval: 1h
http:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# For example: https://myregistry.mycompany.com
endpoint: { QUAY_ENDPOINT }/secscan/notify
proxy: http://localhost:6063
jwtproxy:
signer_proxy:
enabled: true
listen_addr: :6063
ca_key_file: /certificates/mitm.key # Generated internally, do not change.
ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
signer:
issuer: security_scanner
expiration_time: 5m
max_skew: 1m
nonce_length: 32
private_key:
type: preshared
options:
# The ID of the service key generated for Clair. The ID is returned when setting up
# the key in [Quay Setup](security-scanning.md)
key_id: { CLAIR_SERVICE_KEY_ID }
private_key_path: /clair/config/security_scanner.pem
verifier_proxies:
- enabled: true
# The port at which Clair will listen.
listen_addr: :6060
# If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
# section below for more information.
# key_file: /clair/config/clair.key
# crt_file: /clair/config/clair.crt
verifier:
# CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
# specified here must match the listen_addr port a few lines above this.
# Example: https://myclair.mycompany.com:6060
audience: { CLAIR_ENDPOINT }
upstream: http://localhost:6062
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# Example: https://myregistry.mycompany.com
registry: { QUAY_ENDPOINT }/keys/
Clair V2 configuration: Single instance
clair:
database:
type: pgsql
options:
# A PostgreSQL Connection string pointing to the Clair Postgres database.
# Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
source: { POSTGRES_CONNECTION_STRING }
cachesize: 16384
api:
# The port at which Clair will report its health status. For example, if Clair is running at
# https://clair.mycompany.com, the health will be reported at
# http://clair.mycompany.com:6061/health.
healthport: 6061
port: 6062
timeout: 900s
# paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
paginationkey:
updater:
# interval defines how often Clair will check for updates from its upstream vulnerability databases.
interval: 6h
notifier:
attempts: 3
renotifyinterval: 1h
http:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# For example: https://myregistry.mycompany.com
endpoint: { QUAY_ENDPOINT }/secscan/notify
proxy: http://localhost:6063
jwtproxy:
signer_proxy:
enabled: true
listen_addr: :6063
ca_key_file: /certificates/mitm.key # Generated internally, do not change.
ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
signer:
issuer: security_scanner
expiration_time: 5m
max_skew: 1m
nonce_length: 32
private_key:
type: autogenerated
options:
rotate_every: 12h
key_folder: /clair/config/
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# For example: https://myregistry.mycompany.com
registry: { QUAY_ENDPOINT }/keys/
verifier_proxies:
- enabled: true
# The port at which Clair will listen.
listen_addr: :6060
# If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
# section below for more information.
# key_file: /clair/config/clair.key
# crt_file: /clair/config/clair.crt
verifier:
# CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
# specified here must match the listen_addr port a few lines above this.
# Example: https://myclair.mycompany.com:6060
audience: { CLAIR_ENDPOINT }
upstream: http://localhost:6062
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay is running.
# Example: https://myregistry.mycompany.com
registry: { QUAY_ENDPOINT }/keys/
Configuring Clair V2 for TLS
To configure Clair to run with TLS, a few additional steps are required.
Using certificates from a public CA
For certificates that come from a public certificate authority, follow these steps:
-
Generate a TLS certificate and key pair for the DNS name at which Clair will be accessed
-
Place these files as
clair.crt
andclair.key
in your Clair configuration directory -
Uncomment the
key_file
andcrt_file
lines underverifier_proxies
in your Clairconfig.yaml
If your certificates use a public CA, you are now ready to run Clair. If you are using your own certificate authority, configure Clair to trust it as described in the next section.
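For reference, a private key and certificate signing request (CSR) for a placeholder DNS name (clair.example.com) could be generated with openssl as follows; submit the CSR to your certificate authority and save the returned certificate as clair.crt:
$ openssl req -new -newkey rsa:4096 -nodes \
    -keyout clair.key -out clair.csr \
    -subj "/CN=clair.example.com"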
Configuring trust of self-signed SSL
Similar to the process for setting up Docker to trust your self-signed certificates, Clair must also be configured to trust your certificates. Using the same CA certificate bundle used to configure Docker, complete the following steps:
-
Rename the same CA certificate bundle used to set up Quay Registry to
ca.crt
-
Make sure the
ca.crt
file is mounted inside the Clair container under/etc/pki/ca-trust/source/anchors/
as in the example below. Note that you will need to build your own Clair container and run it during this step; instructions for building the Clair container are not yet available.
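A minimal sketch, assuming a locally built Clair V2 image tagged clair:local, that mounts both the configuration directory and the CA certificate bundle:
$ sudo podman run -d --name clair \
    -p 6060:6060 -p 6061:6061 \
    -v /path/to/clair-config:/clair/config:Z \
    -v /path/to/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt:Z \
    clair:local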
Now Clair will be able to trust the source of your TLS certificates and use them to secure communication between Clair and Quay.
Using Clair V2 data sources
Before scanning container images, Clair tries to figure out the operating system on which the container was built. It does this by looking for specific filenames inside that image (see Table 1). Once Clair knows the operating system, it uses specific security databases to check for vulnerabilities (see Table 2).
Operating system | Files identifying OS type
---|---
Redhat/CentOS/Oracle | etc/oracle-release, etc/centos-release, etc/redhat-release, etc/system-release
Alpine | etc/alpine-release
Debian/Ubuntu | etc/os-release, usr/lib/os-release, etc/apt/sources.list
Ubuntu | etc/lsb-release
The data sources that Clair uses to scan containers are shown in Table 2.
Note
|
You must be sure that Clair has access to all listed data sources by whitelisting access to each data source’s location. You might need to add a wildcard character (*) at the end of some URLs that are not complete because they are built dynamically by code. |
Data source | Data collected | Whitelist links | Format | License
---|---|---|---|---
Debian 6, 7, 8, unstable namespaces | | | |
Ubuntu 12.04, 12.10, 13.04, 14.04, 14.10, 15.04, 15.10, 16.04 namespaces | | | |
CentOS 5, 6, 7 namespace | | | |
Oracle Linux 5, 6, 7 namespaces | | | |
Alpine 3.3, 3.4, 3.5 namespaces | | | |
Generic vulnerability metadata | N/A | | |
Amazon Linux 2018.03, 2 namespaces | | | |
Run Clair V2
Execute the following command to run Clair V2:
You will need to build your own Clair container and run it during this step. Instructions for building the Clair container are not yet available.
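A minimal sketch of such a command, assuming you have built a local Clair V2 image tagged clair:local and populated the configuration directory described above, might be:
$ sudo podman run -d --name clair \
    -p 6060:6060 -p 6061:6061 \
    -v /path/to/clair-config:/clair/config:Z \
    clair:local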
Output similar to the following will be seen on success:
2016-05-04 20:01:05,658 CRIT Supervisor running as root (no user in config file)
2016-05-04 20:01:05,662 INFO supervisord started with pid 1
2016-05-04 20:01:06,664 INFO spawned: 'jwtproxy' with pid 8
2016-05-04 20:01:06,666 INFO spawned: 'clair' with pid 9
2016-05-04 20:01:06,669 INFO spawned: 'generate_mitm_ca' with pid 10
time="2016-05-04T20:01:06Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:06Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
2016-05-04 20:01:06.715037 I | pgsql: running database migrations
time="2016-05-04T20:01:06Z" level=error msg="Failed to create forward proxy: open /certificates/mitm.crt: no such file or directory"
goose: no migrations to run. current version: 20151222113213
2016-05-04 20:01:06.730291 I | pgsql: database migration ran successfully
2016-05-04 20:01:06.730657 I | notifier: notifier service is disabled
2016-05-04 20:01:06.731110 I | api: starting main API on port 6062.
2016-05-04 20:01:06.736558 I | api: starting health API on port 6061.
2016-05-04 20:01:06.736649 I | updater: updater service is disabled.
2016-05-04 20:01:06,740 INFO exited: jwtproxy (exit status 0; not expected)
2016-05-04 20:01:08,004 INFO spawned: 'jwtproxy' with pid 1278
2016-05-04 20:01:08,004 INFO success: clair entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-05-04 20:01:08,004 INFO success: generate_mitm_ca entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
time="2016-05-04T20:01:08Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:08Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
time="2016-05-04T20:01:08Z" level=info msg="Starting forward proxy (Listening on ':6063')"
2016-05-04 20:01:08,541 INFO exited: generate_mitm_ca (exit status 0; expected)
2016-05-04 20:01:09,543 INFO success: jwtproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
To verify Clair V2 is running, execute the following command:
curl -X GET -I http://path/to/clair/here:6061/health
If a 200 OK
code is returned, Clair is running:
HTTP/1.1 200 OK
Server: clair
Date: Wed, 04 May 2016 20:02:16 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Once Clair V2 and its associated database are running, you may need to restart your Project Quay application for the changes to take effect.
Integrate Project Quay into OpenShift with the Bridge Operator
Using the Quay Bridge Operator, you can replace the integrated container registry in OpenShift with a Project Quay registry. By doing this, your integrated OpenShift registry becomes a highly available, enterprise-grade Project Quay registry with enhanced role based access control (RBAC) features.
The primary goal of the Bridge Operator is to duplicate the features of the integrated OpenShift registry in the new Project Quay registry. The features enabled with this Operator include:
-
Synchronizing OpenShift namespaces as Project Quay organizations.
-
Creating Robot accounts for each default namespace service account
-
Creating Secrets for each created Robot Account (associating each Robot Secret to a Service Account as Mountable and Image Pull Secret)
-
Synchronizing OpenShift ImageStreams as Quay Repositories
-
-
Automatically rewriting new Builds making use of ImageStreams to output to Project Quay
-
Automatically importing an ImageStream tag once a build completes
Using this procedure with the Quay Bridge Operator, you enable bi-directional communication between your Project Quay and OpenShift clusters.
Running the Quay Bridge Operator
Prerequisites
Before setting up the Bridge Operator, have the following in place:
-
An existing Project Quay environment for which you have superuser permissions
-
A Red Hat OpenShift Container Platform environment (4.2 or later is recommended) for which you have cluster administrator permissions
-
An OpenShift command line tool (
oc
command)
Setting up and configuring OpenShift and Project Quay
Both Project Quay and OpenShift configuration is required:
Project Quay setup
Create a dedicated Project Quay organization and, from a new application you create within that organization, generate an OAuth token to be used with the Quay Bridge Operator in OpenShift.
-
Log in to Project Quay as a user with superuser access and select the organization for which the external application will be configured.
-
In the left navigation, select Applications.
-
Select
Create New Application
and enter a name for the new application (for example, openshift
). -
With the new application displayed, select it.
-
In the left navigation, select
Generate Token
to create a new OAuth2 token. -
Select all checkboxes to grant the access needed for the integration.
-
Review the assigned permissions and then select
Authorize Application
, then confirm it. -
Copy and save the generated Access Token that appears to use in the next section.
OpenShift Setup
Setting up OpenShift for the Quay Bridge Operator requires several steps, including:
Deploying the Operator
The fastest method for deploying the operator is to deploy from OperatorHub. From the Administrator perspective in the OpenShift Web Console, navigate to the Operators tab, and then select OperatorHub.
Search for Quay Bridge Operator and then select Install.
Select an Approval Strategy and then select Install, which will deploy the operator to the cluster.
Creating an OpenShift secret for the OAuth token
The Operator will use the previously obtained Access Token to communicate with Quay. Store this token within OpenShift as a secret.
Execute the following command to create a secret called quay-integration
in the openshift-operators
namespace with a key called token
containing the access token:
$ oc create secret -n openshift-operators generic quay-integration --from-literal=token=<access_token>
Create the QuayIntegration Custom Resource
Finally, to complete the integration between OpenShift and Quay, a QuayIntegration
custom resource needs to be created. This can be completed in the Web Console or from the command line.
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
name: example-quayintegration
spec:
clusterID: openshift (1)
credentialsSecret:
namespace: openshift-operators
name: quay-integration (2)
quayHostname: https://<QUAY_URL> (3)
insecureRegistry: false (4)
-
The clusterID value should be unique across the entire ecosystem. This value is optional and defaults to
openshift
. -
The
credentialsSecret
property refers to the namespace and name of the secret containing the token that was previously created. -
Replace QUAY_URL with the hostname of your Project Quay instance.
-
If Quay is using self signed certificates, set the property
insecureRegistry: true
.
Create the QuayIntegration
Custom Resource:
$ oc create -f quay-integration.yaml
At this point a Quay integration resource is created, linking the OpenShift cluster to the Project Quay instance. Organizations within Quay should be created for the related namespaces from the OpenShift environment.
Repository mirroring
Repository mirroring
Project Quay repository mirroring lets you mirror images from external container registries (or another local registry) into your Project Quay cluster. Using repository mirroring, you can synchronize images to Project Quay based on repository names and tags.
From your Project Quay cluster with repository mirroring enabled, you can:
-
Choose a repository from an external registry to mirror
-
Add credentials to access the external registry
-
Identify specific container image repository names and tags to sync
-
Set intervals at which a repository is synced
-
Check the current state of synchronization
To use the mirroring functionality, you need to:
-
Enable Repository Mirroring in the Project Quay configuration
-
Run a repository mirroring worker
-
Create mirrored repositories
All repository mirroring configuration can be performed using the configuration tool UI or via the Quay API
Repository mirroring versus geo-replication
Quay geo-replication mirrors the entire image storage backend data between 2 or more different storage backends while the database is shared (one Quay registry with two different blob storage endpoints). The primary use cases for geo-replication are:
-
Speeding up access to the binary blobs for geographically dispersed setups
-
Guaranteeing that the image content is the same across regions
Repository mirroring synchronizes selected repositories (or subsets of repositories) from one registry to another. The registries are distinct, each with its own database and image storage. The primary use cases for mirroring are:
-
Independent registry deployments in different datacenters or regions, where a certain subset of the overall content is supposed to be shared across the datacenters / regions
-
Automatic synchronization or mirroring of selected (whitelisted) upstream repositories from external registries into a local Quay deployment
Note
|
Repository mirroring and geo-replication can be used simultaneously. |
Feature / Capability | Geo-replication | Repository mirroring
---|---|---
What is the feature designed to do? | A shared, global registry | Distinct, different registries
What happens if replication or mirroring hasn’t been completed yet? | The remote copy is used (slower) | No image is served
Is access to all storage backends in both regions required? | Yes (all Project Quay nodes) | No (distinct storage)
Can users push images from both sites to the same repository? | Yes | No
Is all registry content and configuration identical across all regions (shared database)? | Yes | No
Can users select individual namespaces or repositories to be mirrored? | No, by default | Yes
Can users apply filters to synchronization rules? | No | Yes
Using repository mirroring
Here are some features and limitations of Project Quay repository mirroring:
-
With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through regular expressions.
-
Once a repository is set as mirrored, you cannot manually add other images to that repository.
-
Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repo/tag pair. In other words, if you change the tag so that some images in the repository no longer match, those images will be deleted.
-
Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository.
-
With a mirrored repository, a user can pull images (given read permission) from the repository but not push images to the repository.
-
Changing settings on your mirrored repository is done from the Mirrors tab on the Repositories page for the mirrored repository you create.
-
Images are synced at set intervals, but can also be synced on demand.
Mirroring configuration UI
-
Start the
Quay
container in configuration mode and select the Enable Repository Mirroring check box. If you want to require HTTPS communications and verify certificates during mirroring, select the HTTPS and cert verification check box. -
Validate and download the
configuration
file, and then restart Quay in registry mode using the updated config file.
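For example, on a standalone host you might restart the registry container with a command like the following (paths and image tag follow the other examples in this guide; adjust them to your deployment):
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    quay.io/projectquay/quay:qui-gon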
Mirroring configuration fields
Field | Type | Description
---|---|---
FEATURE_REPO_MIRROR | Boolean | Enable or disable repository mirroring
REPO_MIRROR_INTERVAL | Number | The number of seconds between checking for repository mirror candidates
REPO_MIRROR_SERVER_HOSTNAME | String | Replaces the SERVER_HOSTNAME as the destination for mirroring
REPO_MIRROR_TLS_VERIFY | Boolean | Require HTTPS and verify certificates of Quay registry during mirror.
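An illustrative config.yaml fragment using these fields might look like the following (the values shown are examples only):
FEATURE_REPO_MIRROR: true
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_SERVER_HOSTNAME: "quay.example.com"
REPO_MIRROR_TLS_VERIFY: true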
Mirroring worker
-
To run the repository mirroring worker, start by running a
Quay
pod with the repomirror option:
$ sudo podman run -d --name mirroring-worker \
    -v $QUAY/config:/conf/stack:Z \
    quay.io/projectquay/quay:qui-gon repomirror
-
If you have configured TLS communications using a certificate
/root/ca.crt
, then the following example shows how to start the mirroring worker:
$ sudo podman run -d --name mirroring-worker \
    -v $QUAY/config:/conf/stack:Z \
    -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt \
    quay.io/projectquay/quay:qui-gon repomirror
Creating a mirrored repository
The steps shown in this section assume you have already enabled repository mirroring in the configuration for your Project Quay cluster and that you have deployed a mirroring worker.
When mirroring a repository from an external container registry, create a new private repository. Typically the same name is used as the target repository, for example, quay-rhel8
:
Repository mirroring settings
-
In the Settings tab, set the Repository State to
Mirror
: -
In the Mirror tab, enter the details for connecting to the external registry, along with the tags, scheduling and access information:
-
Enter the details as required in the following fields:
-
Registry Location: The external repository you want to mirror, for example,
registry.redhat.io/quay/quay-rhel8
-
Tags: This field is required. You may enter a comma-separated list of individual tags or tag patterns. (See Tag Patterns section for details.)
Note: In order for Quay to get the list of tags in the remote repository, one of the following requirements must be met:
-
An image with the "latest" tag must exist in the remote repository OR
-
At least one explicit tag, without pattern matching, must exist in the list of tags that you specify
-
-
Start Date: The date on which mirroring begins. The current date and time is used by default.
-
Sync Interval: Defaults to syncing every 24 hours. You can change that based on hours or days.
-
Robot User: Create a new robot account or choose an existing robot account to do the mirroring.
-
Username: The username for accessing the external registry holding the repository you are mirroring.
-
Password: The password associated with the Username. Note that the password cannot include characters that require an escape character (\).
-
Advanced settings
-
In the Advanced Settings section, configure TLS and proxy, if required:
-
Verify TLS: Check this box if you want to require HTTPS and to verify certificates, when communicating with the target remote registry.
-
HTTP Proxy: Identify the HTTP proxy server needed to access the remote site, if one is required.
-
HTTPS Proxy: Identify the HTTPS proxy server needed to access the remote site, if one is required.
-
No Proxy: List of locations that do not require proxy
Synchronize now
-
To perform an immediate mirroring operation, press the Sync Now button on the repository’s Mirroring tab. The logs are available on the Usage Logs tab:
When the mirroring is complete, the images will appear in the Tags tab:
Below is an example of a completed Repository Mirroring screen:
Event notifications for mirroring
There are three notification events for repository mirroring:
-
Repository Mirror Started
-
Repository Mirror Success
-
Repository Mirror Unsuccessful
The events can be configured inside the Settings tab for each repository, and all existing notification methods such as email, slack, Quay UI and webhooks are supported.
Mirroring tag patterns
As noted above, at least one tag must be explicitly entered (that is, not a tag pattern), or the tag "latest" must exist in the remote repository. (The tag "latest" will not be synced unless it is specified in the tag list.) This is required for Quay to get the list of tags in the remote repository to compare to the specified list of tags to mirror.
Pattern syntax
Pattern | Description
---|---
* | Matches all characters
? | Matches any single character
[seq] | Matches any character in seq
[!seq] | Matches any character not in seq
Example tag patterns
Example Pattern | Example Matches
---|---
v3* | v32, v3.1, v3.2, v3.2-4beta, v3.3
v3.* | v3.1, v3.2, v3.2-4beta
v3.? | v3.1, v3.2, v3.3
v3.[12] | v3.1, v3.2
v3.[12]* | v3.1, v3.2, v3.2-4beta
v3.[!1]* | v3.2, v3.2-4beta, v3.3
Working with mirrored repositories
Once you have created a mirrored repository, there are several ways you can work with that repository. Select your mirrored repository from the Repositories page and do any of the following:
-
Enable/disable the repository: Select the Mirroring button in the left column, then toggle the Enabled check box to enable or disable the repository temporarily.
-
Check mirror logs: To make sure the mirrored repository is working properly, you can check the mirror logs. To do that, select the Usage Logs button in the left column. Here’s an example:
-
Sync mirror now: To immediately sync the images in your repository, select the Sync Now button.
-
Change credentials: To change the username and password, select DELETE from the Credentials line. Then select None and add the username and password needed to log into the external registry when prompted.
-
Cancel mirroring: To stop mirroring, which keeps the current images available but stops new ones from being synced, select the CANCEL button.
-
Set robot permissions: Project Quay robot accounts are named tokens that hold credentials for accessing external repositories. By assigning credentials to a robot, that robot can be used across multiple mirrored repositories that need to access the same external registry.
You can assign an existing robot to a repository by going to Account Settings, then selecting the Robot Accounts icon in the left column. For the robot account, choose the link under the REPOSITORIES column. From the pop-up window, you can:
-
Check which repositories are assigned to that robot.
-
Assign read, write or Admin privileges to that robot from the PERMISSION field shown in this figure:
-
-
Change robot credentials: Robots can hold credentials such as Kubernetes secrets, Docker login information, and Mesos bundles. To change robot credentials, select the Options gear on the robot’s account line on the Robot Accounts window and choose View Credentials. Add the appropriate credentials for the external repository the robot needs to access.
-
Check and change general settings: Select the Settings button (gear icon) from the left column on the mirrored repository page. On the resulting page, you can change settings associated with the mirrored repository. In particular, you can change User and Robot Permissions, to specify exactly which users and robots can read from or write to the repo.
Repository mirroring recommendations
-
Repository mirroring pods can run on any node including other nodes where Quay is already running
-
Repository mirroring is scheduled in the database and run in batches. As a result, more workers could mean faster mirroring, since more batches will be processed.
-
The optimal number of mirroring pods depends on:
-
The total number of repositories to be mirrored
-
The number of images and tags in the repositories and the frequency of changes
-
Parallel batches
-
-
You should balance your mirroring schedule across all mirrored repositories, so that they do not all start up at the same time.
-
For a mid-size deployment, with approximately 1000 users and 1000 repositories, and with roughly 100 mirrored repositories, it is expected that you would use 3-5 mirroring pods, scaling up to 10 if required.
LDAP Authentication Setup for Project Quay
The Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Project Quay supports using LDAP as an identity provider.
Considerations prior to enabling LDAP
Existing Quay deployments
Conflicts between user names can arise when you enable LDAP for an existing Quay deployment that already has users configured. Consider the scenario where a particular user, alice
, was manually created in Quay prior to enabling LDAP. If the user name alice
also exists in the LDAP directory, Quay will create a new user alice-1
when alice
logs in for the first time using LDAP, and will map the LDAP credentials to this account. This might not be what you want; for consistency reasons, it is recommended that you remove any potentially conflicting local account names from Quay prior to enabling LDAP.
Manual User Creation and LDAP authentication
When Quay is configured for LDAP, LDAP-authenticated users are automatically created in Quay’s database on first log in, if the configuration option FEATURE_USER_CREATION
is set to true
. If this option is set to false
, the automatic user creation for LDAP users will fail and the user is not allowed to log in. In this scenario, the superuser needs to create the desired user account first.
Conversely, if FEATURE_USER_CREATION
is set to true
, this also means that a user can still create an account from the Quay login screen, even if there is an equivalent user in LDAP.
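For example, to require that LDAP-backed accounts be created by a superuser rather than automatically or from the login screen, the config.yaml might contain the following (illustrative values):
AUTHENTICATION_TYPE: LDAP
FEATURE_USER_CREATION: false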
Set Up LDAP Configuration
In the config tool, locate the Authentication section and select “LDAP” from the drop-down menu. Update LDAP configuration fields as required.
-
Here is an example of the resulting entry in the config.yaml file:
AUTHENTICATION_TYPE: LDAP
Full LDAP URI
-
The full LDAP URI, including the ldap:// or ldaps:// prefix.
-
A URI beginning with ldaps:// will make use of the provided SSL certificate(s) for TLS setup.
-
Here is an example of the resulting entry in the config.yaml file:
LDAP_URI: ldaps://ldap.example.org
Team Synchronization
-
If enabled, organization administrators who are also superusers can set teams to have their membership synchronized with a backing group in LDAP.
-
The resynchronization duration is the period at which a team must be re-synchronized. Must be expressed in a duration string form: 30m, 1h, 1d.
-
Optionally allow non-superusers to enable and manage team syncing under organizations in which they are administrators.
-
Here is an example of the resulting entries in the config.yaml file:
FEATURE_TEAM_SYNCING: true
TEAM_RESYNC_STALE_TIME: 60m
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: true
Base and Relative Distinguished Names
-
A Distinguished Name path which forms the base path for looking up all LDAP records. Example: dc=my,dc=domain,dc=com
-
Optional list of Distinguished Name path(s) which form the secondary base path(s) for looking up all user LDAP records, relative to the Base DN defined above. These path(s) will be tried if the user is not found via the primary relative DN.
-
User Relative DN is relative to BaseDN. Example: ou=NYC not ou=NYC,dc=example,dc=org
-
Multiple “Secondary User Relative DNs” may be entered if there are multiple Organizational Units where User objects are located. Type in the Organizational Units and click the Add button to add multiple RDNs. Example: ou=Users,ou=NYC and ou=Users,ou=SFO
-
The "User Relative DN" searches with subtree scope. For example, if your Organization has Organizational Units NYC and SFO under the Users OU (ou=SFO,ou=Users and ou=NYC,ou=Users), Project Quay can authenticate users from both the NYC and SFO Organizational Units if the User Relative DN is set to Users (ou=Users).
-
Here is an example of the resulting entries in the config.yaml file:
LDAP_BASE_DN:
  - dc=example
  - dc=com
LDAP_USER_RDN:
  - ou=users
LDAP_SECONDARY_USER_RDNS:
  - ou=bots
  - ou=external
Additional User Filters
-
If specified, the additional filter used for all user lookup queries. Note that all Distinguished Names used in the filter must be full paths; the Base DN is not added automatically here. Must be wrapped in parens. Example: (&(someFirstField=someValue)(someOtherField=someOtherValue))
-
Here is an example of the resulting entry in the config.yaml file:
LDAP_USER_FILTER: (memberof=cn=developers,ou=groups,dc=example,dc=com)
Administrator DN
-
The Distinguished Name and password for the administrator account. This account must be able to login and view the records for all user accounts. Example: uid=admin,ou=employees,dc=my,dc=domain,dc=com
-
The password will be stored in plaintext inside the config.yaml, so setting up a dedicated account or using a password hash is highly recommended.
-
Here is an example of the resulting entries in the config.yaml file:
LDAP_ADMIN_DN: cn=admin,dc=example,dc=com
LDAP_ADMIN_PASSWD: changeme
UID and Mail attributes
-
The UID attribute is the name of the property field in the LDAP user record to use as the username. Typically "uid".
-
The Mail attribute is the name of the property field in the LDAP user record that stores the user’s e-mail address(es). Typically "mail".
-
Either of these may be used during login.
-
The logged in username must exist in User Relative DN.
-
sAMAccountName is typically the UID attribute for Microsoft Active Directory setups.
-
Here is an example of the resulting entries in the config.yaml file:
LDAP_UID_ATTR: uid
LDAP_EMAIL_ATTR: mail
Validation
Once the configuration is completed, click the “Save Configuration Changes” button to validate the configuration.
All validation must succeed before proceeding. If needed, additional configuration can be performed by selecting the "Continue Editing" button.
Common Issues
Invalid credentials
The Administrator DN or Administrator DN Password values are incorrect.
Verification of superuser %USERNAME% failed: Username not found
The user either does not exist in the remote authentication system, or LDAP auth is misconfigured. Project Quay can connect to the LDAP server with the Username/Password specified in the Administrator DN fields, but cannot find the current logged-in user with the UID Attribute or Mail Attribute fields in the User Relative DN Path. Either the current logged-in user does not exist in the User Relative DN Path, or the Administrator DN user does not have rights to search/read this LDAP path.
Configure an LDAP user as superuser
Once LDAP is configured, you can log in to your Project Quay instance with a valid LDAP username and password. You are prompted to confirm your Project Quay username as shown in the following figure:
To attach superuser privilege to an LDAP user, modify the config.yaml file with the username. For example:
SUPER_USERS:
  - testadmin
Restart the Project Quay
container with the updated config.yaml file.
The next time you log in, the user will have superuser privileges.
Prometheus and Grafana metrics under Project Quay
Project Quay exports a Prometheus- and Grafana-compatible endpoint on each instance to allow for easy monitoring and alerting.
Exposing the Prometheus endpoint
Standalone Project Quay
When using podman run
to start the Quay
container, expose the metrics port 9091
:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 -p 9091:9091 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    quay.io/projectquay/quay:qui-gon
The metrics will now be available:
$ curl quay.example.com:9091/metrics
See Monitoring Quay with Prometheus and Grafana for details on configuring Prometheus and Grafana to monitor Quay repository counts.
Project Quay Operator
Determine the cluster IP for the quay-metrics
service:
$ oc get services -n quay-enterprise
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-registry-clair-app ClusterIP 172.30.61.161 <none> 80/TCP,8089/TCP 18h
example-registry-clair-postgres ClusterIP 172.30.122.136 <none> 5432/TCP 18h
example-registry-quay-app ClusterIP 172.30.72.79 <none> 443/TCP,80/TCP,8081/TCP,55443/TCP 18h
example-registry-quay-config-editor ClusterIP 172.30.185.61 <none> 80/TCP 18h
example-registry-quay-database ClusterIP 172.30.114.192 <none> 5432/TCP 18h
example-registry-quay-metrics ClusterIP 172.30.37.76 <none> 9091/TCP 18h
example-registry-quay-redis ClusterIP 172.30.157.248 <none> 6379/TCP 18h
Connect to your cluster and access the metrics using the cluster IP and port for the quay-metrics
service:
$ oc debug node/master-0
sh-4.4# curl 172.30.37.76:9091/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.0447e-05
go_gc_duration_seconds{quantile="0.25"} 6.2203e-05
...
Setting up Prometheus to consume metrics
Prometheus needs a way to access all Project Quay instances running in a cluster. In the typical setup, this is done by listing all the Project Quay instances in a single named DNS entry, which is then given to Prometheus.
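As an illustration, a Prometheus scrape configuration using DNS-based service discovery against such an entry might look like the following (quay-metrics.example.com is a placeholder for your DNS entry):
scrape_configs:
  - job_name: quay
    dns_sd_configs:
      - names:
          - quay-metrics.example.com
        type: A
        port: 9091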
DNS configuration under Kubernetes
A simple Kubernetes service can be configured to provide the DNS entry for Prometheus.
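A minimal sketch of such a service, assuming your Quay pods carry a quay-component: quay-app label (adjust the selector to match your deployment): a headless service returns one DNS A record per pod, which Prometheus can then discover.
apiVersion: v1
kind: Service
metadata:
  name: quay-metrics
  namespace: quay-enterprise
spec:
  clusterIP: None              # headless: DNS resolves to every matching pod
  selector:
    quay-component: quay-app   # placeholder; match your Quay pod labels
  ports:
    - name: metrics
      port: 9091
      targetPort: 9091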
DNS configuration for a manual cluster
SkyDNS is a simple solution for managing this DNS record when not using Kubernetes. SkyDNS can run on an etcd cluster. Entries for each Project Quay instance in the cluster can be added and removed in the etcd store. SkyDNS will regularly read them from there and update the list of Quay instances in the DNS record accordingly.
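A hypothetical etcd entry for one Quay instance, assuming SkyDNS's default /skydns key prefix and a DNS entry of quay-metrics.example.com, might look like the following (the key layout and value format depend on your SkyDNS version):
$ etcdctl set /skydns/com/example/quay-metrics/quay01 '{"host":"192.0.2.10","port":9091}'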
Introduction to metrics
Project Quay provides metrics to help monitor the registry, including metrics for general registry usage, uploads, downloads, garbage collection, and authentication.
General registry statistics
General registry statistics can indicate how large the registry has grown.
Metric name | Description |
---|---|
quay_user_rows |
Number of users in the database |
quay_robot_rows |
Number of robot accounts in the database |
quay_org_rows |
Number of organizations in the database |
quay_repository_rows |
Number of repositories in the database |
quay_security_scanning_unscanned_images_remaining_total |
Number of images that are not scanned by the latest security scanner |
# HELP quay_user_rows number of users in the database
# TYPE quay_user_rows gauge
quay_user_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 3
# HELP quay_robot_rows number of robot accounts in the database
# TYPE quay_robot_rows gauge
quay_robot_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2
# HELP quay_org_rows number of organizations in the database
# TYPE quay_org_rows gauge
quay_org_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2
# HELP quay_repository_rows number of repositories in the database
# TYPE quay_repository_rows gauge
quay_repository_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 4
# HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner
# TYPE quay_security_scanning_unscanned_images_remaining gauge
quay_security_scanning_unscanned_images_remaining{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 5
Queue items
The queue items metrics provide information on the multiple queues used by Quay for managing work.
Metric name | Description |
---|---|
quay_queue_items_available |
Number of items in a specific queue |
quay_queue_items_locked |
Number of items that are running |
quay_queue_items_available_unlocked |
Number of items that are waiting to be processed |
-
queue_name: The name of the queue. One of:
-
exportactionlogs: Queued requests to export action logs. These logs are then processed and put in storage. A link is then sent to the requester via email.
-
namespacegc: Queued namespaces to be garbage collected
-
notification: Queue for repository notifications to be sent out
-
repositorygc: Queued repositories to be garbage collected
-
secscanv4: Notification queue specific for Clair V4
-
dockerfilebuild: Queue for Quay docker builds
-
imagestoragereplication: Queued blob to be replicated across multiple storages
-
chunk_cleanup: Queued blob segments that need to be deleted. This is only used by some storage implementations, for example, Swift.
-
For example, the queue labelled repositorygc contains the repositories marked for deletion by the repository garbage collection worker. For metrics with a queue_name label of repositorygc:
-
quay_queue_items_locked is the number of repositories currently being deleted.
-
quay_queue_items_available_unlocked is the number of repositories waiting to get processed by the worker.
# HELP quay_queue_items_available number of queue items that have not expired
# TYPE quay_queue_items_available gauge
quay_queue_items_available{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0
...
# HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked
# TYPE quay_queue_items_available_unlocked gauge
quay_queue_items_available_unlocked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0
...
# HELP quay_queue_items_locked number of queue items that have been acquired
# TYPE quay_queue_items_locked gauge
quay_queue_items_locked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0
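Assuming Prometheus is scraping these metrics, illustrative PromQL queries such as the following could be used to watch the repository garbage collection backlog:
# Repositories currently being deleted
sum(quay_queue_items_locked{queue_name="repositorygc"})
# Repositories waiting to be processed by the worker
sum(quay_queue_items_available_unlocked{queue_name="repositorygc"})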
Garbage collection metrics
These metrics show you how many resources have been removed by garbage collection (gc). They show how many times the gc workers have run and how many namespaces, repositories, and blobs were removed.
Metric name | Description |
---|---|
quay_gc_iterations_total |
Number of iterations by the GCWorker |
quay_gc_namespaces_purged_total |
Number of namespaces purged by the NamespaceGCWorker |
quay_gc_repos_purged_total |
Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker |
quay_gc_storage_blobs_deleted_total |
Number of storage blobs deleted |
# TYPE quay_gc_iterations_created gauge
quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09
...
# HELP quay_gc_iterations_total number of iterations by the GCWorker
# TYPE quay_gc_iterations_total counter
quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
# TYPE quay_gc_namespaces_purged_created gauge
quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09
...
# HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker
# TYPE quay_gc_namespaces_purged_total counter
quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
....
# TYPE quay_gc_repos_purged_created gauge
quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09
...
# HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker
# TYPE quay_gc_repos_purged_total counter
quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
# TYPE quay_gc_storage_blobs_deleted_created gauge
quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09
...
# HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted
# TYPE quay_gc_storage_blobs_deleted_total counter
quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
Multipart uploads metrics
The multipart uploads metrics show the number of blob uploads to storage (S3, Rados, GoogleCloudStorage, RHOCS). These can help identify issues when Quay is unable to correctly upload blobs to storage.
Metric name | Description |
---|---|
quay_multipart_uploads_started_total |
Number of multipart uploads to Quay storage that started |
quay_multipart_uploads_completed_total |
Number of multipart uploads to Quay storage that completed |
# TYPE quay_multipart_uploads_completed_created gauge
quay_multipart_uploads_completed_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284895e+09
...
# HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed
# TYPE quay_multipart_uploads_completed_total counter
quay_multipart_uploads_completed_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
# TYPE quay_multipart_uploads_started_created gauge
quay_multipart_uploads_started_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284352e+09
...
# HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started
# TYPE quay_multipart_uploads_started_total counter
quay_multipart_uploads_started_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
Image push / pull metrics
A number of metrics are available related to pushing and pulling images.
Image pulls total
Metric name | Description |
---|---|
quay_registry_image_pulls_total |
The number of images downloaded from the registry. |
-
protocol: the registry protocol used (should always be v2)
-
ref: ref used to pull - tag, manifest
-
status: http return code of the request
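For example, an illustrative PromQL query using these labels to chart the pull rate, broken down by HTTP status, might be:
sum(rate(quay_registry_image_pulls_total{protocol="v2"}[5m])) by (status)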
Image bytes pulled
Metric name | Description |
---|---|
quay_registry_image_pulled_estimated_bytes_total |
The number of bytes downloaded from the registry |
-
protocol: the registry protocol used (should always be v2)
Image pushes total
Metric name | Description |
---|---|
quay_registry_image_pushes_total |
The number of images uploaded to the registry. |
-
protocol: the registry protocol used (should always be v2)
-
pstatus: http return code of the request
-
pmedia_type: the uploaded manifest type
Image bytes pushed
Metric name | Description |
---|---|
quay_registry_image_pushed_bytes_total |
The number of bytes uploaded to the registry |
# HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry
# TYPE quay_registry_image_pushed_bytes_total counter
quay_registry_image_pushed_bytes_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application"} 0
...
Authentication metrics
The authentication metrics provide the number of authentication requests, labeled by type and whether it succeeded or not. For example, this metric could be used to monitor failed basic authentication requests.
Metric name | Description |
---|---|
quay_authentication_attempts_total |
Number of authentication attempts across the registry and API |
-
auth_kind: The type of auth used, including:
-
basic
-
oauth
-
credentials
-
-
success: true or false
# TYPE quay_authentication_attempts_created gauge
quay_authentication_attempts_created{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 1.6317843039374158e+09
...
# HELP quay_authentication_attempts_total number of authentication attempts across the registry and API
# TYPE quay_authentication_attempts_total counter
quay_authentication_attempts_total{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 2
...
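For example, an illustrative PromQL expression to monitor failed basic authentication attempts (label values follow the sample output above) could be:
sum(rate(quay_authentication_attempts_total{auth_kind="basic",success="False"}[5m]))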
Project Quay quota management and enforcement
With Project Quay 3.7, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Quay users are now equipped with the following capabilities to manage the capacity limits of their environment:
-
Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization.
-
Quota management: With this feature, a superuser can define soft and hard checks for Project Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit.
Together, these features allow service owners of a Quay registry to define service level agreements and support a healthy resource budget.
Quota management configuration
Quota management is now supported under the FEATURE_QUOTA_MANAGEMENT
property and is turned off by default. To enable quota management, set the feature flag in your config.yaml
to true
:
FEATURE_QUOTA_MANAGEMENT: true
Note
|
In Project Quay 3.7, superuser privileges are required to create, update and delete quotas. While quotas can be set for users as well as organizations, you cannot reconfigure the user quota using the Project Quay UI and you must use the API instead. |
Default quota
To specify a system-wide default storage quota that is applied to every organization and user, use the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES configuration flag.
Field | Type | Description
---|---|---
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES | String | The quota size to apply to all organizations and users.
If you configure a specific quota for an organization or user, and then delete that quota, the system-wide default quota will apply if one has been set. Similarly, if you have configured a specific quota for an organization or user, and then modify the system-wide default quota, the updated system-wide default will override any specific settings.
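For example, a config.yaml fragment applying a system-wide default quota might look like the following (the value shown is illustrative; confirm the accepted value format for your Project Quay version):
FEATURE_QUOTA_MANAGEMENT: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 104857600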
Quota management architecture
The RepositorySize
database table holds the storage consumption, in bytes, of a Project Quay repository within an organization. The sum of all repository sizes for an organization defines the current storage size of a Project Quay organization. When an image push is initialized, the user’s organization storage is validated to check if it is beyond the configured quota limits. If an image push exceeds defined quota limitations, a soft or hard check occurs:
-
For a soft check, users are notified.
-
For a hard check, the push is stopped.
If storage consumption is within configured quota limits, the push is allowed to proceed.
Image manifest deletion follows a similar flow, whereby the links between associated image tags and the manifest are deleted. Additionally, after the image manifest is deleted, the repository size is recalculated and updated in the RepositorySize
table.
Establishing quota in Project Quay UI
The following procedure describes how you can report storage consumption and establish storage quota limits.
-
A Project Quay registry.
-
A superuser account.
-
Enough storage to meet the demands of quota limitations.
-
Create a new organization or choose an existing one. Initially, no quota is configured, as can be seen on the Organization Settings tab:
-
Log in to the registry as a superuser and navigate to the Manage Organizations tab on the Super User Admin Panel. Click the Options icon of the organization for which you want to create storage quota limits:
-
Click Configure Quota and enter the initial quota, for example, 10 MB. Then click Apply and Close:
-
Check that the quota consumed shows 0 of 10 MB on the Manage Organizations tab of the superuser panel:
The consumed quota information is also available directly on the Organization page:
Initial consumed quota -
To increase the quota to 100MB, navigate to the Manage Organizations tab on the superuser panel. Click the Options icon and select Configure Quota, setting the quota to 100 MB. Click Apply and then Close:
-
Push a sample image to the organization from the command line:
Sample commands
$ podman pull ubuntu:18.04
$ podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
-
On the superuser panel, the quota consumed per organization is displayed:
-
The Organization page shows the total proportion of the quota used by the image:
Total Quota Consumed for first image -
Pull, tag, and push a second image, for example,
nginx
:
Sample commands
$ podman pull nginx
$ podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
-
The Organization page shows the total proportion of the quota used by each repository in that organization:
Total Quota Consumed for each repository -
Create reject and warning limits:
From the superuser panel, navigate to the Manage Organizations tab. Click the Options icon for the organization and select Configure Quota. In the Quota Policy section, with the Action type set to Reject, set the Quota Threshold to 80 and click Add Limit:
-
To create a warning limit, select Warning as the Action type, set the Quota Threshold to 70 and click Add Limit:
-
Click Close on the quota popup. The limits are viewable, but not editable, on the Settings tab of the Organization page:
-
Push an image where the reject limit is exceeded:
Because the reject limit (80%) has been set to below the current repository size (~83%), the next push is rejected automatically.
Sample image push
$ podman pull ubuntu:20.04
$ podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
Sample output when quota exceeded
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
-
When limits are exceeded, notifications are displayed in the UI:
Quota notifications
Establishing quota with the Project Quay API
When an organization is first created, it does not have a quota applied. Use the /api/v1/organization/{organization}/quota endpoint:
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq
[]
Setting the quota
To set a quota for an organization, POST data to the /api/v1/organization/{orgname}/quota endpoint:
Sample command
$ curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/namespacequota/testorg/quota | jq
"Created"
Viewing the quota
To see the applied quota, GET data from the /api/v1/organization/{orgname}/quota endpoint:
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq
[
{
"id": 1,
"limit_bytes": 10485760,
"default_config": false,
"limits": [],
"default_config_exists": false
}
]
Modifying the quota
To change the existing quota, in this instance from 10 MB to 100 MB, PUT data to the /api/v1/organization/{orgname}/quota/{quota_id} endpoint:
$ curl -k -X PUT -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq
{
"id": 1,
"limit_bytes": 104857600,
"default_config": false,
"limits": [],
"default_config_exists": false
}
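The limit_bytes values above are plain byte counts: 10485760 bytes is 10 MiB and 104857600 bytes is 100 MiB. If you prefer to compute the value rather than type it out, a quick shell calculation such as the following can be used (the 100 MiB target here is just an example):
$ echo $((100 * 1024 * 1024))
104857600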
Pushing images
To see the storage consumed, push various images to the organization.
Pushing ubuntu:18.04
Push ubuntu:18.04 to the organization from the command line:
$ podman pull ubuntu:18.04
$ podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
Using the API to view quota usage
To view the storage consumed, GET data from the /api/v1/repository endpoint:
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true&quota=true' | jq
{
"repositories": [
{
"namespace": "testorg",
"name": "ubuntu",
"description": null,
"is_public": false,
"kind": "image",
"state": "NORMAL",
"quota_report": {
"quota_bytes": 27959066,
"configured_quota": 104857600
},
"last_modified": 1651225630,
"popularity": 0,
"is_starred": false
}
]
}
Pushing another image
-
Pull, tag, and push a second image, for example, nginx:
Sample commands:
$ podman pull nginx
$ podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
-
To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint:
Sample command:
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true&quota=true'
Sample output{ "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] }
-
To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint:
Sample command:
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq
Sample output{ "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } }
Rejecting pushes using quota limits
If an image push exceeds defined quota limitations, a soft or hard check occurs:
-
For a soft check, or warning, users are notified.
-
For a hard check, or reject, the push is terminated.
Setting reject and warning limits
To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint:
$ curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit
$ curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit
Viewing reject and warning limits
To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint:
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq
[
{
"id": 1,
"limit_bytes": 104857600,
"default_config": false,
"limits": [
{
"id": 2,
"type": "Warning",
"limit_percent": 50
},
{
"id": 1,
"type": "Reject",
"limit_percent": 80
}
],
"default_config_exists": false
}
]
Pushing an image when the reject limit is exceeded
In this example, the reject limit (80%) has been set below the current repository size (~83%), so the next push is automatically rejected.
Push a sample image to the organization from the command line:
$ podman pull ubuntu:20.04
$ podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Notifications for limits exceeded
When limits are exceeded, a notification appears in the UI.
Quota management limitations
Quota management helps organizations to maintain resource consumption. One limitation of quota management is that resource consumption is calculated at push time, which places the calculation in the push’s critical path. Without this, however, usage data might drift.
The maximum storage quota size is dependent on the selected database:
Database | Maximum storage quota size
---|---
Postgres | 8388608 TB
MySQL | 8388608 TB
SQL Server | 16777216 TB
Geo-replication
Geo-replication allows multiple, geographically distributed Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Quay setup. Image data is asynchronously replicated in the background with transparent failover / redirect for clients.
With Project Quay 3.7, geo-replication is supported in both standalone and Operator deployments.
Geo-replication features
-
When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Project Quay instance (typically the nearest storage backend within the region).
-
After the initial push, image data will be replicated in the background to other storage engines.
-
The list of replication locations is configurable and those can be different storage backends.
-
An image pull will always use the closest available storage engine to maximize pull performance.
-
If replication hasn’t been completed yet, the pull will use the source storage backend instead.
Geo-replication requirements and constraints
-
A single database, and therefore all metadata and Quay configuration, is shared across all regions.
-
A single Redis cache is shared across the entire Quay setup and needs to be accessible by all Quay pods.
-
The exact same configuration should be used across all regions, with exception of the storage backend, which can be configured explicitly using the
QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable. -
Geo-replication requires object storage in each region. It does not work with local storage or NFS.
-
Each region must be able to access every storage engine in each region (requires a network path).
-
Alternatively, the storage proxy option can be used; see the configuration sketch after this list.
-
The entire storage backend (all blobs) is replicated. This is in contrast to repository mirroring, which can be limited to an organization, repository, or image.
-
All Quay instances must share the same entrypoint, typically via a load balancer.
-
All Quay instances must have the same set of superusers, as they are defined inside the common configuration file.
-
Geo-replication requires SSL/TLS certificates and keys. For more information, see Using SSL to protect connections to Red Hat Quay.
If the above requirements cannot be met, you should instead use two or more distinct Quay deployments and take advantage of repository mirroring functionality.
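For the storage proxy option mentioned above, Quay can serve blob uploads and downloads itself instead of redirecting clients directly to the storage backend, so clients and peer regions do not need a network path to every storage engine. A minimal config.yaml sketch, assuming the FEATURE_PROXY_STORAGE flag; verify the exact flag name against your Quay version's configuration guide:
# config.yaml fragment (sketch): proxy storage traffic through Quay so that
# clients do not need direct network access to every storage backend.
FEATURE_PROXY_STORAGE: true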
Geo-replication - standalone Quay
Geo-replication architecture - standalone Quay
In the example shown above, Quay is running standalone in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines.
Enable storage replication - standalone Quay
-
Scroll down to the section entitled
Registry Storage
. -
Click
Enable Storage Replication
. -
Add each of the storage engines to which data will be replicated. All storage engines to be used must be listed.
-
If complete replication of all images to all storage engines is required, under each storage engine configuration click
Replicate to storage engine by default
. This will ensure that all images are replicated to that storage engine. To instead enable per-namespace replication, please contact support. -
When you are done, click
Save Configuration Changes
. Configuration changes will take effect the next time Project Quay restarts. -
After adding storage and enabling “Replicate to storage engine by default” for geo-replication, you need to sync existing image data across all storage engines. To do this, you need to oc exec (or docker/kubectl exec) into the container and run:
# scl enable python27 bash
# python -m util.backfillreplication
This is a one-time operation to sync content after adding new storage.
Run Project Quay with storage preferences
-
Copy the config.yaml to all machines running Project Quay
-
For each machine in each region, add a
QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable with the preferred storage engine for the region in which the machine is running. For example, for a machine running in Europe with the config directory on the host available from
$QUAY/config
:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage \
    quay.io/projectquay/quay:qui-gon
A matching example for a machine in another region is shown after this procedure.
Note: The value of the environment variable specified must match the name of a Location ID as defined in the config panel.
-
Restart all Project Quay containers
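As referenced above, a machine running in a US region would use the identical command with only the preference value changed; the usstorage name here matches the storage location names used in the Operator example later in this document:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=usstorage \
    quay.io/projectquay/quay:qui-gon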
Geo-replication - Quay Operator
Geo-replication architecture - Quay Operator
In the example shown above, Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines.
Setting up geo-replication on OpenShift
-
Deploy a Quay PostgreSQL instance:
-
Log in to the database
-
Create a database for Quay
CREATE DATABASE quay;
-
Enable the pg_trgm extension inside the database
\c quay;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-
-
Deploy a Redis instance:
Note:
-
Deploying a Redis instance might be unnecessary if your cloud provider has its own service.
-
Deploying a Redis instance is required if you are leveraging Builders.
-
Deploy a VM for Redis
-
Make sure that it is accessible from the clusters where Quay is running
-
Port 6379/TCP must be open
-
Run Redis inside the instance
sudo dnf install -y podman
podman run -d --name redis -p 6379:6379 redis
-
-
Create two object storage backends, one for each cluster
Ideally, one object storage bucket will be located close to the first (primary) cluster, while the other will be located closer to the second (secondary) cluster.
-
Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster
-
Configure a load balancer to provide a single entry point to the clusters
Configuration
The config.yaml
file is shared between clusters and contains the details for the common PostgreSQL, Redis, and storage backends:
DB_CONNECTION_ARGS:
autorollback: true
threadlocals: true
DB_URI: postgresql://postgres:password@10.19.0.1:5432/quay (1)
BUILDLOGS_REDIS:
host: 10.19.0.2
port: 6379
USER_EVENTS_REDIS:
host: 10.19.0.2
port: 6379
DISTRIBUTED_STORAGE_CONFIG:
usstorage:
- GoogleCloudStorage
- access_key: GOOGQGPGVMASAAMQABCDEFG
bucket_name: georep-test-bucket-0
secret_key: AYWfEaxX/u84XRA2vUX5C987654321
storage_path: /quaygcp
eustorage:
- GoogleCloudStorage
- access_key: GOOGQGPGVMASAAMQWERTYUIOP
bucket_name: georep-test-bucket-1
secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678
storage_path: /quaygcp
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- usstorage
- eustorage
DISTRIBUTED_STORAGE_PREFERENCE:
- usstorage
- eustorage
-
The PostgreSQL DB_URI must also be included in the Clair configuration file. For more information about retrieving the Clair configuration file on OpenShift, see Retrieving the Clair config.
Create the configBundleSecret
:
$ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle
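Optionally, verify that the secret exists in the namespace where Quay will be deployed. The quay-enterprise namespace in this check is an assumption taken from the manifests below; switch to that project (for example with oc project quay-enterprise) before creating the secret if needed:
$ oc get secret georep-config-bundle -n quay-enterprise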
In each of the clusters, set the configBundleSecret
and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable override to configure the appropriate storage for that cluster:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: usstorage
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: eustorage
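Each manifest is then applied to its own cluster. A minimal sketch, assuming the two manifests above are saved locally as quayregistry-us.yaml and quayregistry-eu.yaml (hypothetical file names), with each command run while logged in to the corresponding cluster:
$ oc apply -f quayregistry-us.yaml   # run against the US cluster
$ oc apply -f quayregistry-eu.yaml   # run against the EU cluster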
Mixed storage for geo-replication
Quay geo-replication supports the use of different, and multiple, replication targets, for example, using AWS S3 storage in the public cloud and Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Quay pods and cluster nodes. As a result, it is recommended that you:
-
Use a VPN to prevent visibility of the internal storage or
-
Use a token pair that only allows access to the specified bucket used by Quay
This results in the public cloud instance of Quay having access to the on-premise storage, but the network will be encrypted and protected, and will use ACLs, thereby meeting security requirements.
If you cannot implement these security measures, it may be preferable to deploy two distinct Quay registries and to use repository mirroring as an alternative to geo-replication.
Project Quay Troubleshooting
Common failure modes and best practices for recovery.
Schema for Project Quay configuration
Most Project Quay configuration information is stored in the config.yaml
file that is created
using the browser-based config tool when Project Quay is first deployed.
The configuration options are described in the Project Quay Configuration Guide.