Once you have deployed a Project Quay registry, there are many ways you can further configure and manage that deployment. Topics covered here include:

  • Advanced Project Quay configuration

  • Setting notifications to alert you of a new Project Quay release

  • Securing connections with SSL and TLS certificates

  • Directing action logs storage to Elasticsearch

  • Configuring image security scanning with Clair

  • Scanning pod images with the Container Security Operator

  • Integrating Project Quay into OpenShift with the Quay Bridge Operator

  • Mirroring images with repository mirroring

  • Sharing Quay images with a BitTorrent service

  • Authenticating users with LDAP

  • Enabling Quay for Prometheus and Grafana metrics

  • Setting up geo-replication

  • Troubleshooting Quay

Advanced Project Quay configuration

You can configure your Project Quay after initial deployment using several different interfaces:

  • The Project Quay Config Tool: Running the Quay container in config mode presents a Web-based interface for configuring the Project Quay cluster. This is the recommended method for most configuration of the Project Quay service itself.

  • Editing the config.yaml: The config.yaml file holds most of the configuration information for the Project Quay cluster. Editing that file directly is possible, but it is only recommended for advanced tuning and performance features that are not available through the Config Tool.

  • Project Quay API: Some Project Quay configuration can be done through the API.

While configuration for specific features is covered in separate sections, this section describes how to use each of those interfaces and perform some more advanced configuration.

Using Project Quay Config Tool to modify Project Quay

The Project Quay Config Tool is made available by running a Quay container in config mode alongside the regular Project Quay service. Running the Config Tool is different for Project Quay clusters running on OpenShift than it is for those running directly on host systems.

Running the Config Tool from the Project Quay Operator

If you deployed Project Quay using the Project Quay Operator on OpenShift, the Config Tool is probably already available for you to use. To access the Config Tool, do the following:

  1. From the OpenShift console, select the project in which Project Quay is running. For example, quay-enterprise.

  2. From the left column, select Networking → Routes. You should see routes to both the Project Quay application and Config Tool, as shown in the following image:

    View the route to the Project Quay Config Tool

  3. Select the route to the Config Tool (for example, example-quayecosystem-quay-config). The Config Tool web UI should open in your browser.

  4. Select Modify configuration for this cluster. You should see the Config Tool, ready for you to change features of your Project Quay cluster, as shown in the following image:

    Modify Project Quay cluster settings from the Config Tool

  5. When you have made the changes you want, select Save Configuration Changes. The Config Tool will validate your changes.

  6. Make any corrections as needed by selecting Continue Editing or select Next to continue on.

  7. When prompted, it is recommended that you select Download Configuration. That will download a tarball of your new config.yaml, as well as any certificates and keys used with your Project Quay setup.

  8. Select Go to deployment rollout, then Populate the configuration to deployments. The Project Quay pods will be restarted and the changes will take effect.

The config.yaml file you saved can be used to make advanced changes to your configuration or just kept for future reference.

Running the Config Tool from the command line

If you are running Project Quay directly from a host system, using tools such as the podman or docker commands, after the initial Project Quay deployment, you can restart the Config Tool to modify your Project Quay cluster. Here’s how:

  1. Start quay in config mode: On the first quay node run the following, replacing my-secret-password with your password. If you would like to modify an existing config bundle, you can simply mount your configuration directory into the Quay container as you would in registry mode.

    # podman run --rm -it --name quay_config -p 8080:8080 \
        -v path/to/config-bundle:/conf/stack \
        quay.io/projectquay/quay:qui-gon config my-secret-password
  2. Open browser: When the quay configuration tool starts up, open a browser to the URL and port 8080 of the system you are running the configuration tool on (for example https://myquay.example.com:8080). You are prompted for a username and password.

At this point, you can begin modifying your Project Quay cluster as described earlier.

Using the API to modify Project Quay

See the Project Quay API Guide for information on how to access the Project Quay API.
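
As a minimal, hedged example (the hostname and token are placeholders, and the endpoint is one of the standard API routes described in the API Guide), you could retrieve details about the currently authenticated user with a request such as:

$ curl -s -H "Authorization: Bearer <oauth-access-token>" \
    https://quay-server.example.com/api/v1/user/ | jq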

Editing the config.yaml file to modify Project Quay

Some advanced Project Quay configuration that is not available through the Config Tool can be achieved by editing the config.yaml file directly. Available settings are described in the Schema for Red Hat Quay configuration. The following are examples of settings you can change directly in the config.yaml file.

Add name and company to Project Quay sign-in

Setting the following will cause users to be prompted for their name and company when they first sign in. Although this is optional, it can provide you with extra data about your Project Quay users:

FEATURE_USER_METADATA: true

Disable TLS Protocols

You can change the SSL_PROTOCOLS setting to remove SSL protocols that you do not want to support in your Project Quay instance. For example, to remove TLS v1 support from the default SSL_PROTOCOLS: ['TLSv1','TLSv1.1','TLSv1.2'], change it as follows:

SSL_PROTOCOLS: ['TLSv1.1','TLSv1.2']

Rate limit API calls

Adding the FEATURE_RATE_LIMITS parameter to the config.yaml causes nginx to limit certain API calls to 30 per second. If that feature is not set, API calls are limited to 300 per second (effectively unlimited). Rate limiting can be an important feature if you need to make sure that the available resources are not overwhelmed with traffic.

Some namespaces may require unlimited access (perhaps they are important to CI/CD and take priority, for example). In that case, those namespaces can be placed in a list in config.yaml under NON_RATE_LIMITED_NAMESPACES, as shown in the example below.
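
A minimal config.yaml sketch (the namespace names here are placeholders):

FEATURE_RATE_LIMITS: true
NON_RATE_LIMITED_NAMESPACES:
  - buildbot
  - releng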

Adjust database connection pooling

Project Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.

If enabled, each process that interacts with the database will contain a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Project Quay container. Under certain deployments and loads, this may require analysis to ensure Project Quay does not exceed the database’s configured maximum connection count.

Over time, the connection pools will release idle connections. To release all connections immediately, restart Project Quay.

Database connection pooling can be toggled by setting the environment variable DB_CONNECTION_POOLING={true|false}.
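
For example, to enable connection pooling when running the registry container directly with podman, you could pass the variable at startup (a sketch that mirrors the run commands used elsewhere in this guide):

$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
  --name=quay \
  -e DB_CONNECTION_POOLING=true \
  -v $QUAY/config:/conf/stack:Z \
  -v $QUAY/storage:/datastorage:Z \
  quay.io/projectquay/quay:qui-gon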

If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml option:

DB_CONNECTION_ARGS:
  max_connections: 10
Database connection arguments

You can customize Project Quay database connection settings within the config.yaml file. These are entirely dependent upon the underlying database driver, such as psycopg2 for Postgres and pymysql for MySQL. It is also possible to pass in arguments used by Peewee’s Connection Pooling mechanism as seen below.

DB_CONNECTION_ARGS:
  max_connections: n  # Max Connection Pool size. (Connection Pooling only)
  timeout: n  # Number of seconds to block when the pool is full. (Connection Pooling only)
  stale_timeout: n  # Time to hold on to connections before they are considered stale. (Connection Pooling only)
Database SSL configuration

Some key-value pairs defined under DB_CONNECTION_ARGS are generic while others are database-specific. In particular, SSL configuration depends on the database you are deploying.

PostgreSQL SSL connection arguments

A sample PostgreSQL SSL configuration is given below:

DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert

The sslmode option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:

  • disable: only try a non-SSL connection

  • allow: first try a non-SSL connection; if that fails, try an SSL connection

  • prefer: (default) first try an SSL connection; if that fails, try a non-SSL connection

  • require: only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified

  • verify-ca: only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)

  • verify-full: only try an SSL connection, verify that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate

More information on the valid arguments for PostgreSQL is available at https://www.postgresql.org/docs/current/libpq-connect.html.

MySQL SSL connection arguments

A sample MySQL SSL configuration follows:

DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert

Information on the valid connection arguments for MySQL is available at https://dev.mysql.com/doc/refman/8.0/en/connecting-using-uri-or-key-value-pairs.html.

HTTP connection counts

It is possible to specify the quantity of simultaneous HTTP connections using environment variables. These can be specified as a whole, or for a specific component. The default for each is 50 parallel connections per process.

Environment variables:

WORKER_CONNECTION_COUNT_REGISTRY=n
WORKER_CONNECTION_COUNT_WEB=n
WORKER_CONNECTION_COUNT_SECSCAN=n
WORKER_CONNECTION_COUNT=n

Specifying a count for a specific component will override any value set in WORKER_CONNECTION_COUNT.
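
For example, the following sketch (the values are arbitrary) sets a general limit of 20 connections per process but raises the registry workers to 40; the web and secscan workers fall back to the general value:

$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
  --name=quay \
  -e WORKER_CONNECTION_COUNT=20 \
  -e WORKER_CONNECTION_COUNT_REGISTRY=40 \
  -v $QUAY/config:/conf/stack:Z \
  -v $QUAY/storage:/datastorage:Z \
  quay.io/projectquay/quay:qui-gon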

Dynamic process counts

To estimate the quantity of dynamically sized processes, the following calculation is used by default.

Note
Project Quay queries the available CPU count from the entire machine. Any limits applied using Kubernetes or other non-virtualized mechanisms will not affect this behavior; Project Quay will make its calculation based on the total number of processors on the node. The default values listed are simply targets and will not exceed the maximum or fall below the minimum.

Each of the following process quantities can be overridden using the environment variable specified below.

  • registry - Provides HTTP endpoints to handle registry action

    • minimum: 8

    • maximum: 64

    • default: $CPU_COUNT x 4

    • environment variable: WORKER_COUNT_REGISTRY

  • web - Provides HTTP endpoints for the web-based interface

    • minimum: 2

    • maximum: 32

    • default: $CPU_COUNT x 2

    • environment_variable: WORKER_COUNT_WEB

  • secscan - Interacts with Clair

    • minimum: 2

    • maximum: 4

    • default: $CPU_COUNT x 2

    • environment variable: WORKER_COUNT_SECSCAN
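
The sizing rule above can be sketched in shell for the registry workers (this is only an illustration of the documented defaults, not Quay source code):

CPU_COUNT=$(nproc)                     # Quay sees every processor on the node
count=$(( CPU_COUNT * 4 ))             # registry default: $CPU_COUNT x 4
(( count < 8 ))  && count=8            # never below the documented minimum
(( count > 64 )) && count=64           # never above the documented maximum
echo "WORKER_COUNT_REGISTRY=$count"    # the value you could also set explicitly to override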

Environment variables

Project Quay allows overriding default behavior using environment variables. This table lists and describes each variable and the values they can expect.

Table 1. Worker count environment variables

  • WORKER_COUNT_REGISTRY: Specifies the number of processes to handle registry requests within the Quay container. Value: integer between 8 and 64.

  • WORKER_COUNT_WEB: Specifies the number of processes to handle UI/Web requests within the container. Value: integer between 2 and 32.

  • WORKER_COUNT_SECSCAN: Specifies the number of processes to handle Security Scanning (e.g. Clair) integration within the container. Value: integer between 2 and 4.

  • DB_CONNECTION_POOLING: Toggles database connection pooling. In 3.4, it is disabled by default. Value: "true" or "false".

Turning off connection pooling

Project Quay deployments with a large amount of user activity can regularly hit the 2k maximum database connection limit. In these cases, connection pooling, which is enabled by default for Project Quay, can cause database connection count to rise exponentially and require you to turn off connection pooling.

If turning off connection pooling is not enough to prevent hitting that 2k database connection limit, you need to take additional steps to deal with the problem. In this case you might need to increase the maximum database connections to better suit your workload.

Using the configuration API

The configuration tool exposes 4 endpoints that can be used to build, validate, bundle and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md. In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make.

Retrieving the default configuration

If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode:

$ sudo podman run --rm -it --name quay_config \
  -p 8080:8080 \
  quay.io/projectquay/quay:qui-gon config secret

Use the config endpoint of the configuration API to get the default:

$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config  | jq

The value returned is the default configuration in JSON format:

{
  "config.yaml": {
    "AUTHENTICATION_TYPE": "Database",
    "AVATAR_KIND": "local",
    "DB_CONNECTION_ARGS": {
      "autorollback": true,
      "threadlocals": true
    },
    "DEFAULT_TAG_EXPIRATION": "2w",
    "EXTERNAL_TLS_TERMINATION": false,
    "FEATURE_ACTION_LOG_ROTATION": false,
    "FEATURE_ANONYMOUS_ACCESS": true,
    "FEATURE_APP_SPECIFIC_TOKENS": true,
    ....
  }

}

Retrieving the current configuration

If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume:

$ sudo podman run --rm -it --name quay_config \
  -p 8080:8080 \
  -v $QUAY/config:/conf/stack:Z \
  quay.io/projectquay/quay:qui-gon config secret

Use the config endpoint of the API to get the current configuration:

$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config  | jq

The value returned is the current configuration in JSON format, including database and Redis configuration data:

{
  "config.yaml": {
    ....
    "BROWSER_API_CALLS_XHR_ONLY": false,
    "BUILDLOGS_REDIS": {
      "host": "quay-server",
      "password": "strongpassword",
      "port": 6379
    },
    "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
    "DB_CONNECTION_ARGS": {
      "autorollback": true,
      "threadlocals": true
    },
    "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
    "DEFAULT_TAG_EXPIRATION": "2w",
    ....


  }

}

Validating configuration using the API

You can validate a configuration by posting it to the config/validate endpoint:

curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
    ....
    "BROWSER_API_CALLS_XHR_ONLY": false,
    "BUILDLOGS_REDIS": {
      "host": "quay-server",
      "password": "strongpassword",
      "port": 6379
    },
    "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
    "DB_CONNECTION_ARGS": {
      "autorollback": true,
      "threadlocals": true
    },
    "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
    "DEFAULT_TAG_EXPIRATION": "2w",
    ....

  }

}' http://quay-server:8080/api/v1/config/validate | jq

The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array [] is returned.

Determining the required fields

You can determine the required fields by posting an empty configuration structure to the config/validate endpoint:

curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
  }

}' http://quay-server:8080/api/v1/config/validate | jq

The value returned is an array indicating which fields are required:

[
  {
    "FieldGroup": "Database",
    "Tags": [
      "DB_URI"
    ],
    "Message": "DB_URI is required."
  },
  {
    "FieldGroup": "DistributedStorage",
    "Tags": [
      "DISTRIBUTED_STORAGE_CONFIG"
    ],
    "Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location."
  },
  {
    "FieldGroup": "HostSettings",
    "Tags": [
      "SERVER_HOSTNAME"
    ],
    "Message": "SERVER_HOSTNAME is required"
  },
  {
    "FieldGroup": "HostSettings",
    "Tags": [
      "SERVER_HOSTNAME"
    ],
    "Message": "SERVER_HOSTNAME must be of type Hostname"
  },
  {
    "FieldGroup": "Redis",
    "Tags": [
      "BUILDLOGS_REDIS"
    ],
    "Message": "BUILDLOGS_REDIS is required"
  }
]

Getting Project Quay release notifications

To keep up with the latest Project Quay releases and other changes related to Project Quay, you can sign up for update notifications on the Red Hat Customer Portal. After signing up for notifications, you will receive notifications letting you know when there is a new Project Quay version, updated documentation, or other Project Quay news.

  1. Log into the Red Hat Customer Portal with your Red Hat customer account credentials.

  2. Select your user name (upper-right corner) to see Red Hat Account and Customer Portal selections: View account and portal selections

  3. Select Notifications. Your profile activity page appears.

  4. Select the Notifications tab.

  5. Select Manage Notifications.

  6. Select Follow, then choose Products from the drop-down box.

  7. From the drop-down box next to Products, search for and select Project Quay: Select Products from notifications box

  8. Select the SAVE NOTIFICATION button. Going forward, you will receive notifications when there are changes to the Project Quay product, such as a new release.

Using SSL to protect connections to Project Quay

Introduction to using SSL

To configure Project Quay with a self-signed certificate, you need to create a Certificate Authority (CA) and then generate the required key and certificate files.

The following examples assume you have configured the server hostname quay-server.example.com using DNS or another naming mechanism, such as adding an entry in your /etc/hosts file:

$ cat /etc/hosts
...
192.168.1.112   quay-server.example.com

Create a Certificate Authority and sign a certificate

At the end of this procedure, you will have a certificate file and a private key file named ssl.cert and ssl.key, respectively.

Create a Certificate Authority

  1. Generate the root CA key:

    $ openssl genrsa -out rootCA.key 2048
  2. Generate the root CA cert:

    $ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
  3. Enter the information that will be incorporated into your certificate request, including the server hostname, for example:

    Country Name (2 letter code) [XX]:IE
    State or Province Name (full name) []:GALWAY
    Locality Name (eg, city) [Default City]:GALWAY
    Organization Name (eg, company) [Default Company Ltd]:QUAY
    Organizational Unit Name (eg, section) []:DOCS
    Common Name (eg, your name or your server's hostname) []:quay-server.example.com

Sign a certificate

  1. Generate the server key:

    $ openssl genrsa -out ssl.key 2048
  2. Generate a signing request:

    $ openssl req -new -key ssl.key -out ssl.csr
  3. Enter the information that will be incorporated into your certificate request, including the server hostname, for example:

    Country Name (2 letter code) [XX]:IE
    State or Province Name (full name) []:GALWAY
    Locality Name (eg, city) [Default City]:GALWAY
    Organization Name (eg, company) [Default Company Ltd]:QUAY
    Organizational Unit Name (eg, section) []:DOCS
    Common Name (eg, your name or your server's hostname) []:quay-server.example.com
  4. Create a configuration file openssl.cnf, specifying the server hostname, for example:

    openssl.cnf
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    [req_distinguished_name]
    [ v3_req ]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    subjectAltName = @alt_names
    [alt_names]
    DNS.1 = quay-server.example.com
    IP.1 = 192.168.1.112
  5. Use the configuration file to generate the certificate ssl.cert:

    $ openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 365 -extensions v3_req -extfile openssl.cnf

Configuring SSL using the command line

One option for configuring SSL is to use the command line interface.

  1. Copy the certificate file and private key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively:

    $ cp ~/ssl.cert $QUAY/config
    $ cp ~/ssl.key $QUAY/config
    $ cd $QUAY/config
  2. Edit the config.yaml file and specify that you want Quay to handle TLS:

    config.yaml
    ...
    SERVER_HOSTNAME: quay-server.example.com
    ...
    PREFERRED_URL_SCHEME: https
    ...
  3. Stop the Quay container and restart the registry:

    $ sudo podman rm -f quay
    $ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
      --name=quay \
      -v $QUAY/config:/conf/stack:Z \
      -v $QUAY/storage:/datastorage:Z \
      quay.io/projectquay/quay:qui-gon

Configuring SSL using the UI

This section configures SSL using the Quay UI. To configure SSL using the command line interface, see the preceding section.

  1. Start the Quay container in configuration mode:

    $ sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 quay.io/projectquay/quay:qui-gon config secret
  2. In the Server Configuration section, for TLS, select Red Hat Quay handles TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when creating the certs. Validate and download the updated configuration.

  3. Stop the Quay container and then restart the registry:

    $ sudo podman rm -f quay
    $ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    quay.io/projectquay/quay:qui-gon

Testing SSL configuration using the command line

  • Use the podman login command to attempt to log in to the Quay registry with SSL enabled:

    $ sudo podman login quay-server.example.com
    Username: quayadmin
    Password:
    
    Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority
  • Podman does not trust self-signed certificates by default. As a workaround, pass the --tls-verify=false option:

    $ sudo podman login --tls-verify=false quay-server.example.com
    Username: quayadmin
    Password:
    
    Login Succeeded!

Configuring Podman to trust the root Certificate Authority (CA) is covered in a subsequent section.

Testing SSL configuration using the browser

When you attempt to access the Quay registry, in this case, https://quay-server.example.com, the browser warns of the potential risk:

Potential risk

Proceed to the login screen, and the browser will notify you that the connection is not secure:

Connection not secure

Configuring the system to trust the root Certificate Authority (CA) is covered in the subsequent section.

Configuring podman to trust the Certificate Authority

Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/ and /etc/docker/certs.d/.

  • Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and naming the file ca.crt:

    $ sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt
  • Alternatively, if you are using Docker, you can copy the root CA file to the equivalent Docker directory:

    $ sudo cp rootCA.pem /etc/docker/certs.d/quay-server.example.com/ca.crt

You should no longer need to use the --tls-verify=false option when logging in to the registry:

$ sudo podman login quay-server.example.com

Username: quayadmin
Password:
Login Succeeded!

Configuring the system to trust the certificate authority

  1. Copy the root CA file to the consolidated system-wide trust store:

    $ sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/
  2. Update the system-wide trust store configuration:

    $ sudo update-ca-trust extract
  3. You can use the trust list command to ensure that the Quay server has been configured:

    $ trust list | grep quay
        label: quay-server.example.com

    Now, when you browse to the registry at https://quay-server.example.com, the lock icon shows that the connection is secure:

    Connection is secure

  4. To remove the root CA from system-wide trust, delete the file and update the configuration:

    $ sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem
    $ sudo update-ca-trust extract
    $ trust list | grep quay
    $

More information can be found in the RHEL 8 documentation in the chapter Using shared system certificates.

Adding TLS Certificates to the Project Quay Container

To add custom TLS certificates to Project Quay, create a new directory named extra_ca_certs/ beneath the Project Quay config directory. Copy any required site-specific TLS certificates to this new directory.

Add TLS certificates to Project Quay

  1. View certificate to be added to the container

    $ cat storage.crt
    -----BEGIN CERTIFICATE-----
    MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
    [...]
    -----END CERTIFICATE-----
  2. Create certs directory and copy certificate there

    $ mkdir -p quay/config/extra_ca_certs
    $ cp storage.crt quay/config/extra_ca_certs/
    $ tree quay/config/
    ├── config.yaml
    ├── extra_ca_certs
    │   ├── storage.crt
  3. Obtain the Quay container’s CONTAINER ID with podman ps:

    $ sudo podman ps
    CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                                               NAMES
    5a3e82c4a75f        <registry>/<repo>/quay:qui-gon "/sbin/my_init"          24 hours ago        Up 18 hours         0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp   grave_keller
  4. Restart the container with that ID:

    $ sudo podman restart 5a3e82c4a75f
  5. Examine the certificate copied into the container namespace:

    $ sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem
    -----BEGIN CERTIFICATE-----
    MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV

Add certs when deployed on Kubernetes

When deployed on Kubernetes, Project Quay mounts in a secret as a volume to store config assets. Unfortunately, this currently breaks the upload certificate function of the superuser panel.

To get around this error, a base64 encoded certificate can be added to the secret after Project Quay has been deployed. Here’s how:

  1. Begin by base64 encoding the contents of the certificate:

    $ cat ca.crt
    -----BEGIN CERTIFICATE-----
    MIIDljCCAn6gAwIBAgIBATANBgkqhkiG9w0BAQsFADA5MRcwFQYDVQQKDA5MQUIu
    TElCQ09SRS5TTzEeMBwGA1UEAwwVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTE2
    MDExMjA2NTkxMFoXDTM2MDExMjA2NTkxMFowOTEXMBUGA1UECgwOTEFCLkxJQkNP
    UkUuU08xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTCCASIwDQYJKoZI
    [...]
    -----END CERTIFICATE-----
    
    $ cat ca.crt | base64 -w 0
    [...]
    c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  2. Use the kubectl tool to edit the quay-enterprise-config-secret.

    $ kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret
  3. Add an entry for the cert and paste the full base64 encoded string under the entry:

      custom-cert.crt:
    c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  4. Finally, recycle all Project Quay pods. Use kubectl delete to remove all Project Quay pods. The Project Quay Deployment will automatically schedule replacement pods with the new certificate data.

Configuring action log storage for Elasticsearch

By default, the past three months of usage logs are stored in the Project Quay database and exposed via the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large amount of logged operations, you can now store the usage logs in Elasticsearch instead of the Project Quay database backend. To do this, you need to provide your own Elasticsearch stack, as it is not included with Project Quay as a customizable component.

Enabling Elasticsearch logging can be done during Project Quay deployment or post-deployment using the Project Quay Config Tool. The resulting configuration is stored in the config.yaml file. Once configured, usage log access continues to be provided the same way, via the web UI for repositories and organizations.

Here’s how to configure action log storage to change it from the default Project Quay database to use Elasticsearch:

  1. Obtain an Elasticsearch account.

  2. Open the Project Quay Config Tool (either during or after Project Quay deployment).

  3. Scroll to the Action Log Storage Configuration setting and select Elasticsearch instead of Database. The following figure shows the Elasticsearch settings that appear:

    Choose Elasticsearch to view settings to store logs

  4. Fill in the following information for your Elasticsearch instance:

    • Elasticsearch hostname: The hostname or IP address of the system providing the Elasticsearch service.

    • Elasticsearch port: The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Project Quay registry. The default is TCP port 9200.

    • Elasticsearch access key: The access key needed to gain access to the Elasticsearch service, if required.

    • Elasticsearch secret key: The secret key needed to gain access to the Elasticsearch service, if required.

    • AWS region: If you are running on AWS, set the AWS region (otherwise, leave it blank).

    • Index prefix: Choose a prefix to attach to log entries.

    • Logs Producer: Choose either Elasticsearch (default) or Kinesis to direct logs to an intermediate Kinesis stream on AWS. You need to set up your own pipeline to send logs from Kinesis to Elasticsearch (for example, Logstash). The following figure shows additional fields you would need to fill in for Kinesis:

      On AWS optionally set up an intermediate Kinesis stream

  5. If you chose Elasticsearch as the Logs Producer, no further configuration is needed. If you chose Kinesis, fill in the following:

    • Stream name: The name of the Kinesis stream.

    • AWS access key: The name of the AWS access key needed to gain access to the Kinesis stream, if required.

    • AWS secret key: The name of the AWS secret key needed to gain access to the Kinesis stream, if required.

    • AWS region: The AWS region.

  6. When you are done, save the configuration. The Config Tool checks your settings. If there is a problem connecting to the Elasticsearch or Kinesis services, you will see an error and have the opportunity to continue editing. Otherwise, logging will begin to be directed to your Elasticsearch configuration after the cluster restarts with the new configuration.
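
For reference, the Config Tool writes these choices into config.yaml. A hedged sketch of the resulting block, assuming an Elasticsearch producer (field names follow the Project Quay configuration schema; the host, credentials, and prefix are placeholders, so compare against the config.yaml your Config Tool actually generates):

LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    host: es.example.com
    port: 9200
    access_key: <access key, if required>
    secret_key: <secret key, if required>
    index_prefix: quay_logs_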

Clair Security Scanning

Clair is a set of microservices that can be used with Project Quay to perform vulnerability scanning of container images associated with a set of Linux operating systems. The microservices design of Clair makes it appropriate to run in a highly scalable configuration, where components can be scaled separately as appropriate for enterprise environments.

Clair uses the following vulnerability databases to scan for issues in your images:

  • Alpine SecDB database

  • AWS UpdateInfo

  • Debian Oval database

  • Oracle Oval database

  • RHEL Oval database

  • SUSE Oval database

  • Ubuntu Oval database

  • Pyup.io (python) database

For information on how Clair does security mapping with the different databases, see ClairCore Severity Mapping.

Note

With the release of Clair V4 (image clair), the previously used Clair V2 (image clair-jwt) is no longer used. See below for how to run V2 in read-only mode while V4 is updating.

Setting Up Clair on a Project Quay OpenShift deployment

Deploying Via the Quay Operator

To set up Clair V4 on a new Project Quay deployment on OpenShift, it is highly recommended to use the Quay Operator. By default, the Quay Operator will install or upgrade a Clair deployment along with your Project Quay deployment and configure Clair security scanning automatically.
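
For example, with the QuayRegistry-based Operator, leaving the clair component managed is enough for the Operator to deploy and wire up Clair. A minimal sketch (the resource name and namespace are placeholders, and the exact API version depends on your Operator release):

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: clair
      managed: true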

Manually Deploying Clair

To configure Clair V4 on an existing Project Quay OpenShift deployment running Clair V2, first ensure Project Quay has been upgraded to at least version 3.4.0. Then use the following steps to manually set up Clair V4 alongside Clair V2.

  1. Set your current project to the name of the project in which Project Quay is running. For example:

    $ oc project quay-enterprise
  2. Create a Postgres deployment file for Clair v4 (for example, clairv4-postgres.yaml) as follows.

    clairv4-postgres.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: clairv4-postgres
      namespace: quay-enterprise
      labels:
        quay-component: clairv4-postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          quay-component: clairv4-postgres
      template:
        metadata:
          labels:
            quay-component: clairv4-postgres
        spec:
          volumes:
            - name: postgres-data
              persistentVolumeClaim:
                claimName: clairv4-postgres
          containers:
            - name: postgres
              image: postgres:11.5
              imagePullPolicy: "IfNotPresent"
              ports:
                - containerPort: 5432
              env:
                - name: POSTGRES_USER
                  value: "postgres"
                - name: POSTGRES_DB
                  value: "clair"
                - name: POSTGRES_PASSWORD
                  value: "postgres"
                - name: PGDATA
                  value: "/etc/postgres/data"
              volumeMounts:
                - name: postgres-data
                  mountPath: "/etc/postgres"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: clairv4-postgres
      labels:
        quay-component: clairv4-postgres
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "5Gi"
      volumeName: "clairv4-postgres"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: clairv4-postgres
      labels:
        quay-component: clairv4-postgres
    spec:
      type: ClusterIP
      ports:
        - port: 5432
          protocol: TCP
          name: postgres
          targetPort: 5432
      selector:
        quay-component: clairv4-postgres
  3. Deploy the postgres database as follows:

    $ oc create -f ./clairv4-postgres.yaml
  4. Create a Clair config.yaml file to use for Clair v4. For example:

    config.yaml
    introspection_addr: :8089
    http_listen_addr: :8080
    log_level: debug
    indexer:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      scanlock_retry: 10
      layer_scan_concurrency: 5
      migrations: true
    matcher:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      max_conn_pool: 100
      run: ""
      migrations: true
      indexer_addr: clair-indexer
    notifier:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      delivery_interval: 1m
      poll_interval: 5m
      migrations: true
    auth:
      psk:
        key: MTU5YzA4Y2ZkNzJoMQ== (1)
        iss: ["quay"]
    # tracing and metrics
    trace:
      name: "jaeger"
      probability: 1
      jaeger:
        agent_endpoint: "localhost:6831"
        service_name: "clair"
    metrics:
      name: "prometheus"
    1. To generate a Clair pre-shared key (PSK), enable scanning in the Security Scanner section of the User Interface and click Generate PSK.

More information about Clair’s configuration format can be found in upstream Clair documentation.

  5. Create a secret from the Clair config.yaml:

    $ oc create secret generic clairv4-config-secret --from-file=./config.yaml
  6. Create the Clair v4 deployment file (for example, clair-combo.yaml) and modify it as necessary:

    clair-combo.yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        quay-component: clair-combo
      name: clair-combo
    spec:
      replicas: 1
      selector:
        matchLabels:
          quay-component: clair-combo
      template:
        metadata:
          labels:
            quay-component: clair-combo
        spec:
          containers:
            - image: quay.io/projectquay/clair:qui-gon  (1)
              imagePullPolicy: IfNotPresent
              name: clair-combo
              env:
                - name: CLAIR_CONF
                  value: /clair/config.yaml
                - name: CLAIR_MODE
                  value: combo
              ports:
                - containerPort: 8080
                  name: clair-http
                  protocol: TCP
                - containerPort: 8089
                  name: clair-intro
                  protocol: TCP
              volumeMounts:
                - mountPath: /clair/
                  name: config
          imagePullSecrets:
            - name: redhat-pull-secret
          restartPolicy: Always
          volumes:
            - name: config
              secret:
                secretName: clairv4-config-secret
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: clairv4 (2)
      labels:
        quay-component: clair-combo
    spec:
      ports:
        - name: clair-http
          port: 80
          protocol: TCP
          targetPort: 8080
        - name: clair-introspection
          port: 8089
          protocol: TCP
          targetPort: 8089
      selector:
        quay-component: clair-combo
      type: ClusterIP
    1. Change image to latest clair image name and version.

    2. With the Service set to clairv4, the scanner endpoint for Clair v4 is entered later into the Project Quay config.yaml in the SECURITY_SCANNER_V4_ENDPOINT as http://clairv4.

  7. Create the Clair v4 deployment as follows:

    $ oc create -f ./clair-combo.yaml
  8. Modify the config.yaml file for your Project Quay deployment to add the following entries at the end:

    FEATURE_SECURITY_SCANNER: true
    SECURITY_SCANNER_V4_ENDPOINT: http://clairv4 (1)
    1. Identify the Clair v4 service endpoint

  9. Redeploy the modified config.yaml to the secret containing that file (for example, quay-enterprise-config-secret):

    $ oc delete secret quay-enterprise-config-secret
    $ oc create secret generic quay-enterprise-config-secret --from-file=./config.yaml
  10. For the new config.yaml to take effect, you need to restart the Project Quay pods. Simply deleting the quay-app pods causes pods with the updated configuration to be deployed.

At this point, images in any of the organizations identified in the namespace whitelist will be scanned by Clair v4.
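
The organizations to be scanned are listed in the Project Quay config.yaml. A hedged sketch, with the option name taken from the Red Hat Quay configuration schema and placeholder organization names:

SECURITY_SCANNER_V4_NAMESPACE_WHITELIST:
  - org1
  - org2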

Setting up Clair on a non-OpenShift Project Quay deployment

For Project Quay deployments not running on OpenShift, it is possible to configure Clair security scanning manually. Project Quay deployments already running Clair V2 can use the instructions below to add Clair V4 to their deployment.

  1. Deploy a (preferably fault-tolerant) Postgres database server. Note that Clair requires the uuid-ossp extension to be added to its Postgres database. If the user supplied in Clair’s config.yaml has the necessary privileges to create the extension, then it will be added automatically by Clair itself. If not, then the extension must be added before starting Clair (a minimal example of adding it manually is shown after this procedure). If the extension is not present, the following error will be displayed when Clair attempts to start.

    ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501)
  2. Create a Clair config file in a specific folder, for example, /etc/clairv4/config/config.yaml.

    config.yaml
    introspection_addr: :8089
    http_listen_addr: :8080
    log_level: debug
    indexer:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      scanlock_retry: 10
      layer_scan_concurrency: 5
      migrations: true
    matcher:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      max_conn_pool: 100
      run: ""
      migrations: true
      indexer_addr: clair-indexer
    notifier:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      delivery_interval: 1m
      poll_interval: 5m
      migrations: true
    
    # tracing and metrics
    trace:
      name: "jaeger"
      probability: 1
      jaeger:
        agent_endpoint: "localhost:6831"
        service_name: "clair"
    metrics:
      name: "prometheus"

More information about Clair’s configuration format can be found in upstream Clair documentation.

  3. Run Clair via the container image, mounting in the configuration from the file you created.

    $ podman run -p 8080:8080 -p 8089:8089 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair -d quay.io/projectquay/clair:qui-gon
  4. Follow the remaining instructions from the previous section for configuring Project Quay to use the new Clair V4 endpoint.

Running multiple Clair containers in this fashion is also possible, but for deployment scenarios beyond a single container the use of a container orchestrator like Kubernetes or OpenShift is strongly recommended.
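
As mentioned in step 1 above, if the Clair database user lacks the privileges to create the uuid-ossp extension, you can add it manually before starting Clair. A minimal example, run as a PostgreSQL superuser against the Clair database (the host and database name here match the sample connection strings; adjust them to your environment):

$ psql -h clairv4-postgres -U postgres -d clair \
    -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'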

Using Clair

  1. Log in to your Project Quay cluster and select an organization for which you have configured Clair scanning.

  2. Select a repository from that organization that holds some images and select Tags from the left navigation. The following figure shows an example of a repository with two images that have been scanned:

    Security scan information appears for scanned repository images

  3. If vulnerabilities are found, select the link under the Security Scan column for the image to see either all vulnerabilities or those that are fixable. The following figure shows information on all vulnerabilities found:

    See all vulnerabilities or only those that are fixable

CVE ratings from the National Vulnerability Database

With Clair v4.2, enrichment data is now viewable in the Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities.

With this change, if the vulnerability has a CVSS score that is within 2 levels of the distro’s score, the Quay UI presents the distro’s score by default. For example:

Clair v4.2 data display

This differs from the previous interface, which would only display the following information:

Clair v4 data display

Configuring Clair for Disconnected Environments

Clair utilizes a set of components called Updaters to handle the fetching and parsing of data from various vulnerability databases. These Updaters are set up by default to pull vulnerability data directly from the internet and work out of the box. For customers in disconnected environments without direct access to the internet this poses a problem. Clair supports these environments through the ability to work with different types of update workflows that take into account network isolation. Using the clairctl command line utility, any process can easily fetch Updater data from the internet via an open host, securely transfer the data to an isolated host, and then import the Updater data on the isolated host into Clair itself.

The steps are as follows.

  1. First ensure that your Clair configuration has disabled automated Updaters from running.

    config.yaml
    matcher:
      disable_updaters: true
  2. Export the latest Updater data to a local archive. This requires the clairctl tool, which can be run directly as a binary or via the Clair container image. Assuming your Clair configuration is in /etc/clairv4/config/config.yaml, to run via the container image:

    $ podman run -it --rm -v /etc/clairv4/config:/cfg:Z -v /path/to/output/directory:/updaters:Z --entrypoint /bin/clairctl quay.io/projectquay/clair:qui-gon --config /cfg/config.yaml export-updaters  /updaters/updaters.gz

    Note that you need to explicitly reference the Clair configuration. This will create the Updater archive as updaters.gz in the output directory you mounted (in this example, /path/to/output/directory). If you want to ensure the archive was created without any errors from the source databases, you can supply the --strict flag to clairctl. The archive file should be copied over to a volume that is accessible from the disconnected host running Clair. From the disconnected host, use the same procedure now to import the archive into Clair.

    $ podman run -it --rm -v /etc/clairv4/config:/cfg:Z -v /path/to/output/directory:/updaters:Z --entrypoint /bin/clairctl quay.io/projectquay/clair:qui-gon --config /cfg/config.yaml import-updaters /updaters/updaters.gz

Clair updater URLs

The following are the HTTP hosts and paths that Clair will attempt to talk to in a default configuration. This list is non-exhaustive, as some servers will issue redirects and some request URLs are constructed dynamically.

  • https://secdb.alpinelinux.org/

  • http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list

  • https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list

  • https://www.debian.org/security/oval/

  • https://linux.oracle.com/security/oval/

  • https://packages.vmware.com/photon/photon_oval_definitions/

  • https://github.com/pyupio/safety-db/archive/

  • https://catalog.redhat.com/api/containers/

  • https://www.redhat.com/security/data/

  • https://support.novell.com/security/oval/

  • https://people.canonical.com/~ubuntu-security/oval/

Additional Information

For detailed documentation on the internals of Clair, including how the microservices are structured, please see the Upstream Clair and ClairCore documentation.

Scan pod images with the Container Security Operator

Using the Container Security Operator (CSO), you can scan container images associated with active pods running on OpenShift (4.2 or later) and other Kubernetes platforms for known vulnerabilities. The CSO:

  • Watches containers associated with pods on all or specified namespaces

  • Queries the container registry where the containers came from for vulnerability information provided an image’s registry supports image scanning (such as a Quay registry with Clair scanning)

  • Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API

Using the instructions here, the CSO is installed in the marketplace-operators namespace, so it is available to all namespaces on your OpenShift cluster.

Note

To see instructions on installing the CSO on Kubernetes, select the Install button on the Container Security page at OperatorHub.io.

Run the CSO in OpenShift

To start using the CSO in OpenShift, do the following:

  1. Go to Operators → OperatorHub (select Security) to see the available Container Security Operator.

  2. Select the Container Security Operator, then select Install to go to the Create Operator Subscription page.

  3. Check the settings (all namespaces and automatic approval strategy, by default), and select Subscribe. The Container Security Operator appears after a few moments on the Installed Operators screen.

  4. Optionally, you can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then run the following command to add the cert to the CSO (restart the Operator pod for the new certs to take effect):

    $ oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators
  5. Open the OpenShift Dashboard (Home → Dashboards). A link to Image Security appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a Security breakdown, as shown in the following figure:

    Access CSO scanning data from the OpenShift dashboard

  6. You can do one of two things at this point to follow up on any detected vulnerabilities:

    • Select the link to the vulnerability. You are taken to the container registry, Project Quay or other registry where the container came from, where you can see information about the vulnerability. The following figure shows an example of detected vulnerabilities from a Quay.io registry:

      The CSO points you to a registry containing the vulnerable image

    • Select the namespaces link to go to the ImageManifestVuln screen, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in two namespaces:

      View namespaces a vulnerable image is running in

At this point, you know what images are vulnerable, what you need to do to fix those vulnerabilities, and every namespace that the image was run in. So you can:

  • Alert anyone running the image that they need to correct the vulnerability

  • Stop the images from running (by deleting the deployment or other object that started the pod the image is in)

Note that if you do delete the pod, it may take a few minutes for the vulnerability to reset on the dashboard.

Query image vulnerabilities from the CLI

You can query information on security from the command line. To query for detected vulnerabilities, type:

$ oc get vuln --all-namespaces
NAMESPACE     NAME              AGE
default       sha256.ca90...    6m56s
skynet        sha256.ca90...    9m37s

To display details for a particular vulnerability, pass the name of one of the vulnerabilities and its namespace to the oc describe command. This example shows an active container whose image includes an RPM package with a vulnerability:

$ oc describe vuln --namespace mynamespace sha256.ac50e3752...
Name:         sha256.ac50e3752...
Namespace:    quay-enterprise
...
Spec:
  Features:
    Name:            nss-util
    Namespace Name:  centos:7
    Version:         3.44.0-3.el7
    Versionformat:   rpm
    Vulnerabilities:
      Description: Network Security Services (NSS) is a set of libraries...

Project Quay Security Scanning with Clair V2

Project Quay supports scanning container images for known vulnerabilities with a scanning engine such as Clair. This document explains how to configure Clair with Project Quay.

Note

With the release of Red Hat Quay 3.4, the default version of Clair is V4. Clair V4 is no longer released as a Technology Preview and is supported for production use. Customers are strongly encouraged to use Clair V4 with Red Hat Quay 3.4. It is possible to run both Clair V4 and Clair V2 simultaneously if so desired. In future versions of Red Hat Quay, Clair V2 will eventually be removed.

Set up Clair V2 in the Project Quay config tool

Enabling Clair V2 in Project Quay consists of:

  • Starting the Project Quay config tool. See the Project Quay deployment guide for the type of deployment you are doing (OpenShift, Basic, or HA) for how to start the config tool for that environment.

  • Enabling security scanning, then generating a private key and PEM file in the config tool

  • Including the key and PEM file in the Clair config file

  • Starting the Clair container

The procedure varies, based on whether you are running Project Quay on OpenShift or directly on a host.

Enabling Clair V2 on a Project Quay OpenShift deployment

To set up Clair V2 on Project Quay in OpenShift, see Add Clair image scanning to Project Quay.

Enabling Clair V2 on a Project Quay Basic or HA deployment

To set up Clair V2 on a Project Quay deployment where the container is running directly on the host system, do the following:

  1. Restart the Project Quay config tool: Run the Quay container again in config mode, open the configuration UI in a browser, then select Modify an existing configuration. When prompted, upload the quay-config.tar.gz file that was originally created for the deployment.

  2. Enable Security Scanning: Scroll to the Security Scanner section and select the "Enable Security Scanning" checkbox. From the fields that appear you need to create an authentication key and enter the security scanner endpoint. Here’s how:

    • Generate key: Click Create Key, then from the pop-up window type a name for the Clair private key and an optional expiration date (if blank, the key never expires). Then select Generate Key.

    • Copy the Clair key and PEM file: Save the Key ID (to a notepad or similar) and download a copy of the Private Key PEM file (named security_scanner.pem) by selecting "Download Private Key" (if you lose the key, you need to generate a new one). You will need the key and PEM file when you start the Clair container later.

      Close the pop-up when you are done. Here is an example of a completed Security Scanner config:

      Create authentication key and set scan endpoint

  3. Save the configuration: Click Save Configuration Changes and then select Download Configuration to save it to your local system.

  4. Deploy the configuration: To pick up the changes enabling scanning, as well as other changes you may have made to the configuration, unpack the quay-config.tar.gz and copy the resulting files to the config directory. For example:

    $ tar xvf quay-config.tar.gz
    config.yaml  ssl.cert  ssl.key
    $ cp config.yaml ssl* /mnt/quay/config

Next, start the Clair V2 container and associated database, as described in the following sections.

Setting Up Clair V2 Security Scanning

Once you have created the necessary key and PEM files from the Project Quay config UI, you are ready to start up the Clair V2 container and associated database. Once that is done, you can restart your Project Quay cluster to have those changes take effect.

Procedures for running the Clair V2 container and associated database are different on OpenShift than they are for running those containers directly on a host.

Run Clair V2 on a Project Quay OpenShift deployment

To run the Clair V2 image scanning container and its associated database on an OpenShift environment with your Project Quay cluster, see Add Clair image scanning to Project Quay.

Run Clair V2 on a Project Quay Basic or HA deployment

To run Clair V2 and its associated database on non-OpenShift environments (directly on a host), you need to:

  • Start up a database

  • Configure and start Clair V2

Get Postgres and Clair

In order to run Clair, a database is required. MySQL is not supported for production deployments. For production, we recommend that you use PostgreSQL or another supported database:

  • Running on machines other than those running Project Quay

  • Ideally with automatic replication and failover

For testing purposes, a single PostgreSQL instance can be started locally:

  1. To start Postgres locally, do the following:

    # sudo podman run --name postgres -p 5432:5432 -d postgres
    # sleep 5
    # sudo podman run --rm --link postgres:postgres postgres \
       sh -c 'echo "create database clairtest" | psql -h \
       "$POSTGRES_PORT_5432_TCP_ADDR" -p  \
       "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'

    The configuration string for this test database is:

    postgresql://postgres@{DOCKER HOST GOES HERE}:5432/clairtest?sslmode=disable
  2. Pull the security-enabled Clair image:

You will need to build your own Clair container and pull it during this step. Instructions for building the Clair container are not yet available.

  3. Make a configuration directory for Clair

    # mkdir clair-config
    # cd clair-config

Configure Clair V2

Clair V2 can run either as a single instance or in high-availability mode. It is recommended to run more than a single instance of Clair, ideally in an auto-scaling group with automatic healing.

  1. Create a config.yaml file to be used in the Clair V2 config directory (/clair/config) from one of the two Clair configuration files shown here.

  2. If you are doing a high-availability installation, go through the procedure in Authentication for high-availability scanners to create a Key ID and Private Key (PEM).

  3. Save the Private Key (PEM) to a file (such as, $HOME/config/security_scanner.pem).

  4. Replace the value of key_id (CLAIR_SERVICE_KEY_ID) with the Key ID you generated and the value of private_key_path with the location of the PEM file (for example, /config/security_scanner.pem).

    For example, those two values might now appear as:

    key_id: 4fb9063a7cac00b567ee921065ed16fed7227afd806b4d67cc82de67d8c781b1
    private_key_path: /clair/config/security_scanner.pem
  5. Change other values in the configuration file as needed.

Clair V2 configuration: High availability
clair:
  database:
    type: pgsql
    options:
      # A PostgreSQL Connection string pointing to the Clair Postgres database.
      # Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
      source: { POSTGRES_CONNECTION_STRING }
      cachesize: 16384
  api:
    # The port at which Clair will report its health status. For example, if Clair is running at
    # https://clair.mycompany.com, the health will be reported at
    # http://clair.mycompany.com:6061/health.
    healthport: 6061

    port: 6062
    timeout: 900s

    # paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
    paginationkey: "XxoPtCUzrUv4JV5dS+yQ+MdW7yLEJnRMwigVY/bpgtQ="

  updater:
    # interval defines how often Clair will check for updates from its upstream vulnerability databases.
    interval: 6h
  notifier:
    attempts: 3
    renotifyinterval: 1h
    http:
      # QUAY_ENDPOINT defines the endpoint at which Quay is running.
      # For example: https://myregistry.mycompany.com
      endpoint: { QUAY_ENDPOINT }/secscan/notify
      proxy: http://localhost:6063

jwtproxy:
  signer_proxy:
    enabled: true
    listen_addr: :6063
    ca_key_file: /certificates/mitm.key # Generated internally, do not change.
    ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
    signer:
      issuer: security_scanner
      expiration_time: 5m
      max_skew: 1m
      nonce_length: 32
      private_key:
        type: preshared
        options:
          # The ID of the service key generated for Clair. The ID is returned when setting up
          # the key in [Quay Setup](security-scanning.md)
          key_id: { CLAIR_SERVICE_KEY_ID }
          private_key_path: /clair/config/security_scanner.pem

  verifier_proxies:
  - enabled: true
    # The port at which Clair will listen.
    listen_addr: :6060

    # If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
    # section below for more information.
    # key_file: /clair/config/clair.key
    # crt_file: /clair/config/clair.crt

    verifier:
      # CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
      # specified here must match the listen_addr port a few lines above this.
      # Example: https://myclair.mycompany.com:6060
      audience: { CLAIR_ENDPOINT }

      upstream: http://localhost:6062
      key_server:
        type: keyregistry
        options:
          # QUAY_ENDPOINT defines the endpoint at which Quay is running.
          # Example: https://myregistry.mycompany.com
          registry: { QUAY_ENDPOINT }/keys/

Clair V2 configuration: Single instance
clair:
  database:
    type: pgsql
    options:
      # A PostgreSQL Connection string pointing to the Clair Postgres database.
      # Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
      source: { POSTGRES_CONNECTION_STRING }
      cachesize: 16384
  api:
    # The port at which Clair will report its health status. For example, if Clair is running at
    # https://clair.mycompany.com, the health will be reported at
    # http://clair.mycompany.com:6061/health.
    healthport: 6061

    port: 6062
    timeout: 900s

    # paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
    paginationkey:

  updater:
    # interval defines how often Clair will check for updates from its upstream vulnerability databases.
    interval: 6h
  notifier:
    attempts: 3
    renotifyinterval: 1h
    http:
      # QUAY_ENDPOINT defines the endpoint at which Quay is running.
      # For example: https://myregistry.mycompany.com
      endpoint: { QUAY_ENDPOINT }/secscan/notify
      proxy: http://localhost:6063

jwtproxy:
  signer_proxy:
    enabled: true
    listen_addr: :6063
    ca_key_file: /certificates/mitm.key # Generated internally, do not change.
    ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
    signer:
      issuer: security_scanner
      expiration_time: 5m
      max_skew: 1m
      nonce_length: 32
      private_key:
        type: autogenerated
        options:
          rotate_every: 12h
          key_folder: /clair/config/
          key_server:
            type: keyregistry
            options:
              # QUAY_ENDPOINT defines the endpoint at which Quay is running.
              # For example: https://myregistry.mycompany.com
              registry: { QUAY_ENDPOINT }/keys/


  verifier_proxies:
  - enabled: true
    # The port at which Clair will listen.
    listen_addr: :6060

    # If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
    # section below for more information.
    # key_file: /clair/config/clair.key
    # crt_file: /clair/config/clair.crt

    verifier:
      # CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
      # specified here must match the listen_addr port a few lines above this.
      # Example: https://myclair.mycompany.com:6060
      audience: { CLAIR_ENDPOINT }

      upstream: http://localhost:6062
      key_server:
        type: keyregistry
        options:
          # QUAY_ENDPOINT defines the endpoint at which Quay is running.
          # Example: https://myregistry.mycompany.com
          registry: { QUAY_ENDPOINT }/keys/

Configuring Clair V2 for TLS

To configure Clair to run with TLS, a few additional steps are required.

Using certificates from a public CA

For certificates that come from a public certificate authority, follow these steps:

  1. Generate a TLS certificate and key pair for the DNS name at which Clair will be accessed

  2. Place these files as clair.crt and clair.key in your Clair configuration directory

  3. Uncomment the key_file and crt_file lines under verifier_proxies in your Clair config.yaml

If your certificates use a public CA, you are now ready to run Clair. If you are using your own certificate authority, configure Clair to trust it as described in the next section.

Configuring trust of self-signed SSL

Similar to the process for setting up Docker to trust your self-signed certificates, Clair must also be configured to trust your certificates. Using the same CA certificate bundle used to configure Docker, complete the following steps:

  1. Rename the same CA certificate bundle used to set up Quay Registry to ca.crt

  2. Make sure the ca.crt file is mounted inside the Clair container under /etc/pki/ca-trust/source/anchors/, as in the example below. Note that you will need to build your own Clair container and run it during this step; instructions for building the Clair container are not yet available.
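
    A minimal sketch of such a run command, assuming you have built a local Clair V2 image tagged clair:v2 and keep your Clair configuration in /path/to/clair-config (both the image tag and the paths are placeholders for your own build):

    # sudo podman run -d --name clair \
       -v /path/to/clair-config:/clair/config:Z \
       -v /path/to/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt:Z \
       -p 6060:6060 -p 6061:6061 \
       clair:v2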

Now Clair will be able to trust the source of your TLS certificates and use them to secure communication between Clair and Quay.

Using Clair V2 data sources

Before scanning container images, Clair tries to figure out the operating system on which the container was built. It does this by looking for specific filenames inside that image (see Table 2). Once Clair knows the operating system, it uses specific security databases to check for vulnerabilities (see Table 3).

Table 2. Container files that identify its operating system
Operating system       Files identifying OS type

Redhat/CentOS/Oracle   etc/oracle-release
                       etc/centos-release
                       etc/redhat-release
                       etc/system-release

Alpine                 etc/alpine-release

Debian/Ubuntu          etc/os-release
                       usr/lib/os-release
                       etc/apt/sources.list

Ubuntu                 etc/lsb-release

The data sources that Clair uses to scan containers are shown in Table 3.

Note

You must be sure that Clair has access to all listed data sources by whitelisting access to each data source’s location. You might need to add a wild-card character (*) at the end of some URLs that are not fully complete because they are dynamically built by code.

Table 3. Clair V2 data sources and data collected
Data source                                                       Data collected   Format   License

Debian 6, 7, 8, unstable                                          namespaces
Ubuntu 12.04, 12.10, 13.04, 14.04, 14.10, 15.04, 15.10, 16.04     namespaces
CentOS 5, 6, 7                                                    namespaces       rpm
Oracle Linux 5, 6, 7                                              namespaces       rpm
Alpine 3.3, 3.4, 3.5                                              namespaces       apk      MIT
Generic vulnerability metadata                                    N/A
Amazon Linux 2018.03, 2                                           namespaces       rpm

Run Clair V2

To run Clair V2, you will need to build your own Clair container and run it. Instructions for building the Clair container are not yet available.
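
Once you have such an image, a minimal run command might look like the following sketch, where the image tag clair:v2 and the configuration path are placeholders for your own build (the config directory should contain the config.yaml and security_scanner.pem created earlier):

# sudo podman run -d --name clair \
   -p 6060:6060 -p 6061:6061 \
   -v /path/to/clair-config:/clair/config:Z \
   clair:v2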

Output similar to the following will be seen on success:

2016-05-04 20:01:05,658 CRIT Supervisor running as root (no user in config file)
2016-05-04 20:01:05,662 INFO supervisord started with pid 1
2016-05-04 20:01:06,664 INFO spawned: 'jwtproxy' with pid 8
2016-05-04 20:01:06,666 INFO spawned: 'clair' with pid 9
2016-05-04 20:01:06,669 INFO spawned: 'generate_mitm_ca' with pid 10
time="2016-05-04T20:01:06Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:06Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
2016-05-04 20:01:06.715037 I | pgsql: running database migrations
time="2016-05-04T20:01:06Z" level=error msg="Failed to create forward proxy: open /certificates/mitm.crt: no such file or directory"
goose: no migrations to run. current version: 20151222113213
2016-05-04 20:01:06.730291 I | pgsql: database migration ran successfully
2016-05-04 20:01:06.730657 I | notifier: notifier service is disabled
2016-05-04 20:01:06.731110 I | api: starting main API on port 6062.
2016-05-04 20:01:06.736558 I | api: starting health API on port 6061.
2016-05-04 20:01:06.736649 I | updater: updater service is disabled.
2016-05-04 20:01:06,740 INFO exited: jwtproxy (exit status 0; not expected)
2016-05-04 20:01:08,004 INFO spawned: 'jwtproxy' with pid 1278
2016-05-04 20:01:08,004 INFO success: clair entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-05-04 20:01:08,004 INFO success: generate_mitm_ca entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
time="2016-05-04T20:01:08Z" level=info msg="No claims verifiers specified, upstream should be configured to verify authorization"
time="2016-05-04T20:01:08Z" level=info msg="Starting reverse proxy (Listening on ':6060')"
time="2016-05-04T20:01:08Z" level=info msg="Starting forward proxy (Listening on ':6063')"
2016-05-04 20:01:08,541 INFO exited: generate_mitm_ca (exit status 0; expected)
2016-05-04 20:01:09,543 INFO success: jwtproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

To verify Clair V2 is running, execute the following command:

curl -X GET -I http://path/to/clair/here:6061/health

If a 200 OK code is returned, Clair is running:

HTTP/1.1 200 OK
Server: clair
Date: Wed, 04 May 2016 20:02:16 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8

Once Clair V2 and its associated database are running, you may need to restart your Project Quay application for the changes to take effect.

Integrate Project Quay into OpenShift with the Bridge Operator

Using the Quay Bridge Operator, you can replace the integrated container registry in OpenShift with a Project Quay registry. By doing this, your integrated OpenShift registry becomes a highly available, enterprise-grade Project Quay registry with enhanced role based access control (RBAC) features.

The primary goal of the Bridge Operator is to duplicate the features of the integrated OpenShift registry in the new Project Quay registry. The features enabled with this Operator include:

  • Synchronizing OpenShift namespaces as Project Quay organizations.

    • Creating Robot accounts for each default namespace service account

    • Creating Secrets for each created Robot Account (associating each Robot Secret to a Service Account as Mountable and Image Pull Secret)

    • Synchronizing OpenShift ImageStreams as Quay Repositories

  • Automatically rewriting new Builds making use of ImageStreams to output to Project Quay

  • Automatically importing an ImageStream tag once a build completes

Using this procedure with the Quay Bridge Operator, you enable bi-directional communication between your Project Quay and OpenShift clusters.

Running the Quay Bridge Operator

Prerequisites

Before setting up the Bridge Operator, have the following in place:

  • An existing Project Quay environment for which you have superuser permissions

  • A Red Hat OpenShift Container Platform environment (4.2 or later is recommended) for which you have cluster administrator permissions

  • An OpenShift command line tool (oc command)

Setting up and configuring OpenShift and Project Quay

Both Project Quay and OpenShift configuration is required:

Project Quay setup

Create a dedicated Project Quay organization, and from a new application you create within that organization, generate an OAuth token to be used with the Quay Bridge Operator in OpenShift.

  1. Log in to Project Quay as a user with superuser access and select the organization for which the external application will be configured.

  2. In the left navigation, select Applications.

  3. Select Create New Application and enter a name for the new application (for example, openshift).

  4. With the new application displayed, select it.

  5. In the left navigation, select Generate Token to create a new OAuth2 token.

  6. Select all checkboxes to grant the access needed for the integration.

  7. Review the assigned permissions and then select Authorize Application, then confirm it.

  8. Copy and save the generated Access Token that appears; you will use it in the next section.

OpenShift Setup

Setting up OpenShift for the Quay Bridge Operator requires several steps, which are described in the following sections.

Deploying the Operator

The fastest method for deploying the operator is to deploy from OperatorHub. From the Administrator perspective in the OpenShift Web Console, navigate to the Operators tab, and then select OperatorHub.

Search for Quay Bridge Operator and then select Install.

Select an Approval Strategy and then select Install which will deploy the operator to the cluster.

Creating an OpenShift secret for the OAuth token

The Operator will use the previously obtained Access Token to communicate with Quay. Store this token within OpenShift as a secret.

Execute the following command to create a secret called quay-integration in the openshift-operators namespace with a key called token containing the access token:

$ oc create secret -n openshift-operators generic quay-integration --from-literal=token=<access_token>

Create the QuayIntegration Custom Resource

Finally, to complete the integration between OpenShift and Quay, a QuayIntegration custom resource needs to be created. This can be completed in the Web Console or from the command line.

quay-integration.yaml
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration
spec:
  clusterID: openshift  (1)
  credentialsSecret:
    namespace: openshift-operators
    name: quay-integration (2)
  quayHostname: https://<QUAY_URL>   (3)
  insecureRegistry: false (4)
  1. The clusterID value should be unique across the entire ecosystem. This value is optional and defaults to openshift.

  2. The credentialsSecret property refers to the namespace and name of the secret containing the token that was previously created.

  3. Replace QUAY_URL with the hostname of your Project Quay instance.

  4. If Quay is using self-signed certificates, set the property insecureRegistry: true.

Create the QuayIntegration Custom Resource:

$ oc create -f quay-integration.yaml

At this point a QuayIntegration resource is created, linking the OpenShift cluster to the Project Quay instance. Organizations within Quay should be created for the related namespaces from the OpenShift environment.
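
To confirm the integration resource exists, you can list the QuayIntegration resources and then check the Project Quay UI for organizations that correspond to your OpenShift namespaces. For example:

$ oc get quayintegration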

Repository mirroring

Project Quay repository mirroring lets you mirror images from external container registries (or another local registry) into your Project Quay cluster. Using repository mirroring, you can synchronize images to Project Quay based on repository names and tags.

From your Project Quay cluster with repository mirroring enabled, you can:

  • Choose a repository from an external registry to mirror

  • Add credentials to access the external registry

  • Identify specific container image repository names and tags to sync

  • Set intervals at which a repository is synced

  • Check the current state of synchronization

To use the mirroring functionality, you need to:

  • Enable Repository Mirroring in the Project Quay configuration

  • Run a repository mirroring worker

  • Create mirrored repositories

All repository mirroring configuration can be performed using the configuration tool UI or via the Quay API.

Repository mirroring versus geo-replication

Quay geo-replication mirrors the entire image storage backend data between two or more different storage backends while the database is shared (one Quay registry with two different blob storage endpoints). The primary use cases for geo-replication are:

  • Speeding up access to the binary blobs for geographically dispersed setups

  • Guaranteeing that the image content is the same across regions

Repository mirroring synchronizes selected repositories (or subsets of repositories) from one registry to another. The registries are distinct, each with its own separate database and image storage. The primary use cases for mirroring are:

  • Independent registry deployments in different datacenters or regions, where a certain subset of the overall content is supposed to be shared across the datacenters / regions

  • Automatic synchronization or mirroring of selected (whitelisted) upstream repositories from external registries into a local Quay deployment

Note

Repository mirroring and geo-replication can be used simultaneously.

Table 4. Project Quay Repository mirroring versus geo-replication

  • What is the feature designed to do?
    Geo-replication: A shared, global registry
    Repository mirroring: Distinct, different registries

  • What happens if replication or mirroring hasn’t been completed yet?
    Geo-replication: The remote copy is used (slower)
    Repository mirroring: No image is served

  • Is access to all storage backends in both regions required?
    Geo-replication: Yes (all Project Quay nodes)
    Repository mirroring: No (distinct storage)

  • Can users push images from both sites to the same repository?
    Geo-replication: Yes
    Repository mirroring: No

  • Is all registry content and configuration identical across all regions (shared database)?
    Geo-replication: Yes
    Repository mirroring: No

  • Can users select individual namespaces or repositories to be mirrored?
    Geo-replication: No, by default
    Repository mirroring: Yes

  • Can users apply filters to synchronization rules?
    Geo-replication: No
    Repository mirroring: Yes

Using repository mirroring

Here are some features and limitations of Project Quay repository mirroring:

  • With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through regular expressions.

  • Once a repository is set as mirrored, you cannot manually add other images to that repository.

  • Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repo/tag pair. In other words, if you change the tag so that some images in the repository no longer match, those images will be deleted.

  • Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository.

  • With a mirrored repository, a user can pull images (given read permission) from the repository but not push images to the repository.

  • Changing settings on your mirrored repository is done from the Mirrors tab on the Repositories page for the mirrored repository you create.

  • Images are synced at set intervals, but can also be synced on demand.

Mirroring configuration UI

  1. Start the Quay container in configuration mode and select the Enable Repository Mirroring check box. If you want to require HTTPS communications and verify certificates during mirroring, select the HTTPS and cert verification check box.

    Enable mirroring and require HTTPS and verified certificates

  2. Validate and download the configuration file, and then restart Quay in registry mode using the updated config file.

Mirroring configuration fields

Table 5. Mirroring configuration
Field                          Type      Description

FEATURE_REPO_MIRROR            Boolean   Enable or disable repository mirroring.
                                         Default: false

REPO_MIRROR_INTERVAL           Number    The number of seconds between checking for repository mirror candidates.
                                         Default: 30

REPO_MIRROR_SERVER_HOSTNAME    String    Replaces the SERVER_HOSTNAME as the destination for mirroring.
                                         Default: None
                                         Example: openshift-quay-service

REPO_MIRROR_TLS_VERIFY         Boolean   Require HTTPS and verify certificates of Quay registry during mirror.
                                         Default: false
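
Taken together, these fields might appear in config.yaml as follows. This is an illustrative sketch; REPO_MIRROR_SERVER_HOSTNAME is only needed when the mirror destination differs from SERVER_HOSTNAME:

FEATURE_REPO_MIRROR: true
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_SERVER_HOSTNAME: openshift-quay-service
REPO_MIRROR_TLS_VERIFY: true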

Mirroring worker

  • To run the repository mirroring worker, start by running a Quay pod with the repomirror option:

    $ sudo podman run -d --name mirroring-worker \
      -v $QUAY/config:/conf/stack:Z \
      quay.io/projectquay/quay:qui-gon repomirror
  • If you have configured TLS communications using a certificate /root/ca.crt, then the following example shows how to start the mirroring worker:

    $ sudo podman run -d --name mirroring-worker \
      -v $QUAY/config:/conf/stack:Z \
      -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt \
      quay.io/projectquay/quay:qui-gon repomirror

Creating a mirrored repository

The steps shown in this section assume you already have enabled repository mirroring in the configuration for your Project Quay cluster and that you have deployed a mirroring worker.

When mirroring a repository from an external container registry, create a new private repository. Typically the same name is used as the target repository, for example, quay-rhel8:

Create new Project Quay repo

Repository mirroring settings

  1. In the Settings tab, set the Repository State to Mirror:

    Create a new Project Quay repo mirror

  2. In the Mirror tab, enter the details for connecting to the external registry, along with the tags, scheduling and access information:

    Repository mirroring

  3. Enter the details as required in the following fields:

    • Registry Location: The external repository you want to mirror, for example, registry.redhat.io/quay/quay-rhel8

    • Tags: This field is required. You may enter a comma-separated list of individual tags or tag patterns. (See Tag Patterns section for details.)

      Note

      In order for Quay to get the list of tags in the remote repository, one of the following requirements must be met:

      • An image with the "latest" tag must exist in the remote repository OR

      • At least one explicit tag, without pattern matching, must exist in the list of tags that you specify

    • Start Date: The date on which mirroring begins. The current date and time is used by default.

    • Sync Interval: Defaults to syncing every 24 hours. You can change that based on hours or days.

    • Robot User: Create a new robot account or choose an existing robot account to do the mirroring.

    • Username: The username for accessing the external registry holding the repository you are mirroring.

    • Password: The password associated with the Username. Note that the password cannot include characters that require an escape character (\).

Advanced settings

  • In the Advanced Settings section, configure TLS and proxy, if required:

  • Verify TLS: Check this box if you want to require HTTPS and to verify certificates, when communicating with the target remote registry.

  • HTTP Proxy: Identify the HTTP proxy server needed to access the remote site, if one is required.

  • HTTPS Proxy: Identify the HTTPS proxy server needed to access the remote site, if one is required.

  • No Proxy: List of locations that do not require proxy

Synchronize now

  • To perform an immediate mirroring operation, press the Sync Now button on the repository’s Mirroring tab. The logs are available on the Usage Logs tab:

    Usage logs

    When the mirroring is complete, the images will appear in the Tags tab:

    Repository mirroring tags

    Below is an example of a completed Repository Mirroring screen:

    Repository mirroring details

Event notifications for mirroring

There are three notification events for repository mirroring:

  • Repository Mirror Started

  • Repository Mirror Success

  • Repository Mirror Unsuccessful

The events can be configured inside the Settings tab for each repository, and all existing notification methods such as email, Slack, the Quay UI, and webhooks are supported.

Mirroring tag patterns

As noted above, at least one Tag must be explicitly entered (i.e., not a tag pattern) or the tag "latest" must exist in the remote repository. (The tag "latest" will not be synced unless it is specified in the tag list.) This is required for Quay to get the list of tags in the remote repository to compare to the specified list to mirror.

Pattern syntax

Pattern    Description

*          Matches all characters
?          Matches any single character
[seq]      Matches any character in seq
[!seq]     Matches any character not in seq

Example tag patterns

Example Pattern    Example Matches

v3*                v32, v3.1, v3.2, v3.2-4beta, v3.3
v3.*               v3.1, v3.2, v3.2-4beta
v3.?               v3.1, v3.2, v3.3
v3.[12]            v3.1, v3.2
v3.[12]*           v3.1, v3.2, v3.2-4beta
v3.[!1]*           v3.2, v3.2-4beta, v3.3

Working with mirrored repositories

Once you have created a mirrored repository, there are several ways you can work with that repository. Select your mirrored repository from the Repositories page and do any of the following:

  • Enable/disable the repository: Select the Mirroring button in the left column, then toggle the Enabled check box to enable or disable the repository temporarily.

  • Check mirror logs: To make sure the mirrored repository is working properly, you can check the mirror logs. To do that, select the Usage Logs button in the left column. Here’s an example:

    View logs for your Project Quay repo mirror

  • Sync mirror now: To immediately sync the images in your repository, select the Sync Now button.

  • Change credentials: To change the username and password, select DELETE from the Credentials line. Then select None and add the username and password needed to log into the external registry when prompted.

  • Cancel mirroring: To stop mirroring, which keeps the current images available but stops new ones from being synced, select the CANCEL button.

  • Set robot permissions: Project Quay robot accounts are named tokens that hold credentials for accessing external repositories. By assigning credentials to a robot, that robot can be used across multiple mirrored repositories that need to access the same external registry.

    You can assign an existing robot to a repository by going to Account Settings, then selecting the Robot Accounts icon in the left column. For the robot account, choose the link under the REPOSITORIES column. From the pop-up window, you can:

    • Check which repositories are assigned to that robot.

    • Assign read, write or Admin privileges to that robot from the PERMISSION field shown in this figure: Assign a robot to mirrored repo

  • Change robot credentials: Robots can hold credentials such as Kubernetes secrets, Docker login information, and Mesos bundles. To change robot credentials, select the Options gear on the robot’s account line on the Robot Accounts window and choose View Credentials. Add the appropriate credentials for the external repository the robot needs to access.

    Assign permission to a robot

  • Check and change general settings: Select the Settings button (gear icon) from the left column on the mirrored repository page. On the resulting page, you can change settings associated with the mirrored repository. In particular, you can change User and Robot Permissions, to specify exactly which users and robots can read from or write to the repo.

Repository mirroring recommendations

  • Repository mirroring pods can run on any node including other nodes where Quay is already running

  • Repository mirroring is scheduled in the database and run in batches. As a result, more workers could mean faster mirroring, since more batches will be processed.

  • The optimal number of mirroring pods depends on:

    • The total number of repositories to be mirrored

    • The number of images and tags in the repositories and the frequency of changes

    • Parallel batches

  • You should balance your mirroring schedule across all mirrored repositories, so that they do not all start up at the same time.

  • For a mid-size deployment, with approximately 1000 users and 1000 repositories, and with roughly 100 mirrored repositories, it is expected that you would use 3-5 mirroring pods, scaling up to 10 if required.

LDAP Authentication Setup for Project Quay

The Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Project Quay supports using LDAP as an identity provider.

Considerations prior to enabling LDAP

Existing Quay deployments

Conflicts between user names can arise when you enable LDAP for an existing Quay deployment that already has users configured. Consider the scenario where a particular user, alice, was manually created in Quay prior to enabling LDAP. If the user name alice also exists in the LDAP directory, Quay will create a new user alice-1 when alice logs in for the first time using LDAP, and will map the LDAP credentials to this account. This might not be what you want, and for consistency it is recommended that you remove any potentially conflicting local account names from Quay prior to enabling LDAP.

Manual User Creation and LDAP authentication

When Quay is configured for LDAP, LDAP-authenticated users are automatically created in Quay’s database on first log in, if the configuration option FEATURE_USER_CREATION is set to true. If this option is set to false, the automatic user creation for LDAP users will fail and the user is not allowed to log in. In this scenario, the superuser needs to create the desired user account first. Conversely, if FEATURE_USER_CREATION is set to true, this also means that a user can still create an account from the Quay login screen, even if there is an equivalent user in LDAP.
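
For example, if you want the superuser to pre-create all LDAP user accounts rather than have them created automatically at first login, the relevant config.yaml entry (shown here only to illustrate the option discussed above) would be:

FEATURE_USER_CREATION: false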

Set Up LDAP Configuration

In the config tool, locate the Authentication section and select “LDAP” from the drop-down menu. Update LDAP configuration fields as required.

Fill in LDAP information

  • Here is an example of the resulting entry in the config.yaml file:

AUTHENTICATION_TYPE: LDAP

Full LDAP URI

LDAP server URI

LDAP server SSL

  • The full LDAP URI, including the ldap:// or ldaps:// prefix.

  • A URI beginning with ldaps:// will make use of the provided SSL certificate(s) for TLS setup.

  • Here is an example of the resulting entry in the config.yaml file:

LDAP_URI: ldaps://ldap.example.org

Team Synchronization

Team synchronization

  • If enabled, organization administrators who are also superusers can set teams to have their membership synchronized with a backing group in LDAP.

Team synchronization

  • The resynchronization duration is the period at which a team must be re-synchronized. Must be expressed in a duration string form: 30m, 1h, 1d.

  • Optionally allow non-superusers to enable and manage team syncing under organizations in which they are administrators.

  • Here is an example of the resulting entries in the config.yaml file:

FEATURE_TEAM_SYNCING: true
TEAM_RESYNC_STALE_TIME: 60m
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: true

Base and Relative Distinguished Names

Distinguished Names

  • A Distinguished Name path which forms the base path for looking up all LDAP records. Example: dc=my,dc=domain,dc=com

  • Optional list of Distinguished Name path(s) which form the secondary base path(s) for looking up all user LDAP records, relative to the Base DN defined above. These path(s) will be tried if the user is not found via the primary relative DN.

  • User Relative DN is relative to BaseDN. Example: ou=NYC not ou=NYC,dc=example,dc=org

  • Multiple “Secondary User Relative DNs” may be entered if there are multiple Organizational Units where User objects are located. Type in each Organizational Unit and click the Add button to add multiple RDNs. Example: ou=Users,ou=NYC and ou=Users,ou=SFO

  • The "User Relative DN" searches with subtree scope. For example, if your Organization has Organizational Units NYC and SFO under the Users OU (ou=SFO,ou=Users and ou=NYC,ou=Users), Project Quay can authenticate users from both the NYC and SFO Organizational Units if the User Relative DN is set to Users (ou=Users).

  • Here is an example of the resulting entries in the config.yaml file:

LDAP_BASE_DN:
- dc=example
- dc=com
LDAP_USER_RDN:
- ou=users
LDAP_SECONDARY_USER_RDNS:
- ou=bots
- ou=external

Additional User Filters

User filters

  • If specified, the additional filter used for all user lookup queries. Note that all Distinguished Names used in the filter must be full paths; the Base DN is not added automatically here. Must be wrapped in parens. Example: (&(someFirstField=someValue)(someOtherField=someOtherValue))

  • Here is an example of the resulting entry in the config.yaml file:

LDAP_USER_FILTER: (memberof=cn=developers,ou=groups,dc=example,dc=com)

Administrator DN

Administrator DN

  • The Distinguished Name and password for the administrator account. This account must be able to login and view the records for all user accounts. Example: uid=admin,ou=employees,dc=my,dc=domain,dc=com

  • The password will be stored in plaintext inside the config.yaml, so setting up a dedicated account or using a password hash is highly recommended.

  • Here is an example of the resulting entries in the config.yaml file:

LDAP_ADMIN_DN: cn=admin,dc=example,dc=com
LDAP_ADMIN_PASSWD: changeme

UID and Mail attributes

UID and Mail

  • The UID attribute is the name of the property field in the LDAP user record to use as the username. Typically "uid".

  • The Mail attribute is the name of the property field in the LDAP user record that stores user e-mail address(es). Typically "mail".

  • Either of these may be used during login.

  • The logged in username must exist in the User Relative DN.

  • sAMAccountName is typically used as the UID attribute for Microsoft Active Directory setups.

  • Here is an example of the resulting entries in the config.yaml file:

LDAP_UID_ATTR: uid
LDAP_EMAIL_ATTR: mail

Validation

Once the configuration is completed, click the “Save Configuration Changes” button to validate the configuration.

Fill in LDAP information

All validation must succeed before proceeding, or additional configuration may be performed by selecting the "Continue Editing" button.

Common Issues

Invalid credentials

Administrator DN or Administrator DN Password values are incorrect

Verification of superuser %USERNAME% failed: Username not found

The user either does not exist in the remote authentication system, or LDAP auth is misconfigured. Project Quay can connect to the LDAP server using the Username/Password specified in the Administrator DN fields, but cannot find the current logged-in user with the UID Attribute or Mail Attribute fields in the User Relative DN Path. Either the current logged-in user does not exist in the User Relative DN Path, or the Administrator DN user does not have rights to search/read this LDAP path.

Configure an LDAP user as superuser

Once LDAP is configured, you can log in to your Project Quay instance with a valid LDAP username and password. You are prompted to confirm your Project Quay username as shown in the following figure:

Confirm LDAP username for Project Quay

To attach superuser privilege to an LDAP user, modify the config.yaml file with the username. For example:

SUPER_USERS:
- testadmin

Restart the Project Quay container with the updated config.yaml file. The next time you log in, the user will have superuser privileges.
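
For a deployment running directly on a host, the restart might look like the following, assuming the Quay container is named quay and mounts the updated config directory:

$ sudo podman restart quay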

Prometheus and Grafana metrics under Project Quay

Project Quay exports a Prometheus- and Grafana-compatible endpoint on each instance to allow for easy monitoring and alerting.

Exposing the Prometheus endpoint

The Prometheus- and Grafana-compatible endpoint on the Project Quay instance can be found at port 9091. See Monitoring Quay with Prometheus and Grafana for details on configuring Prometheus and Grafana to monitor Quay repository counts.
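
To confirm that an instance is exporting metrics, you can query the endpoint directly from any host that can reach it. A quick check, assuming metrics are served on the conventional /metrics path:

$ curl http://quay.example.com:9091/metrics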

Setting up Prometheus to consume metrics

Prometheus needs a way to access all Project Quay instances running in a cluster. In the typical setup, this is done by listing all the Project Quay instances in a single named DNS entry, which is then given to Prometheus.
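
For example, a Prometheus scrape configuration can use DNS service discovery against that single name. A minimal sketch, assuming quay.example.com resolves to the address of every Project Quay instance:

scrape_configs:
  - job_name: quay
    dns_sd_configs:
      - names: ['quay.example.com']
        type: A
        port: 9091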

DNS configuration under Kubernetes

A simple Kubernetes service can be configured to provide the DNS entry for Prometheus. Details on running Prometheus under Kubernetes can be found at Prometheus and Kubernetes and Monitoring Kubernetes with Prometheus.
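
One way to provide that DNS entry is a headless Service, which returns one A record per Quay pod and can be consumed by the DNS-based scrape configuration shown earlier. A sketch, assuming the Quay pods run in the quay-enterprise namespace and carry a hypothetical app: quay label:

apiVersion: v1
kind: Service
metadata:
  name: quay-metrics
  namespace: quay-enterprise
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: quay            # hypothetical label; match your Quay pod labels
  ports:
  - name: metrics
    port: 9091
    targetPort: 9091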

DNS configuration for a manual cluster

SkyDNS is a simple solution for managing this DNS record when not using Kubernetes. SkyDNS can run on an etcd cluster. Entries for each Project Quay instance in the cluster can be added and removed in the etcd store. SkyDNS will regularly read them from there and update the list of Quay instances in the DNS record accordingly.

Geo-replication

Geo-replication allows multiple, geographically distributed Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Quay setup. Image data is asynchronously replicated in the background with transparent failover / redirect for clients.

Note

Deploying Project Quay with geo-replication on OpenShift is not supported by the Operator.

Geo-replication features

  • When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance (typically the nearest storage backend within the region).

  • After the initial push, image data will be replicated in the background to other storage engines.

  • The list of replication locations is configurable and those can be different storage backends.

  • An image pull will always use the closest available storage engine, to maximize pull performance.

  • If replication hasn’t been completed yet, the pull will use the source storage backend instead.

Geo-replication requirements and constraints

  • A single database, and therefore all metadata and Quay configuration, is shared across all regions.

  • A single Redis cache is shared across the entire Quay setup and needs to be accessible by all Quay pods.

  • The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable.

  • Geo-Replication requires object storage in each region. It does not work with local storage or NFS.

  • Each region must be able to access every storage engine in each region (requires a network path).

  • Alternatively, the storage proxy option can be used.

  • The entire storage backend (all blobs) is replicated. This is in contrast to repository mirroring, which can be limited to an organization or repository or image.

  • All Quay instances must share the same entrypoint, typically via load balancer.

  • All Quay instances must have the same set of superusers, as they are defined inside the common configuration file.

If the above requirements cannot be met, you should instead use two or more distinct Quay deployments and take advantage of repository mirroring functionality.

Geo-replication architecture

Georeplication

In the example shown above, Quay is running in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines.
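
The storage engines themselves are defined in config.yaml. The following is a rough, unverified sketch of what two named storage locations might look like, reusing the europestorage location name referenced later in this section; the driver parameters and bucket names are illustrative only:

DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: quay-us
      storage_path: /quay
      s3_access_key: <access key>
      s3_secret_key: <secret key>
  europestorage:
    - S3Storage
    - host: s3.eu-west-1.amazonaws.com
      s3_bucket: quay-eu
      storage_path: /quay
      s3_access_key: <access key>
      s3_secret_key: <secret key>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - usstorage
  - europestorage
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage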

Enable storage replication

  1. Scroll down to the section entitled Registry Storage.

  2. Click Enable Storage Replication.

  3. Add each of the storage engines to which data will be replicated. All storage engines to be used must be listed.

  4. If complete replication of all images to all storage engines is required, under each storage engine configuration click Replicate to storage engine by default. This will ensure that all images are replicated to that storage engine. To instead enable per-namespace replication, please contact support.

  5. When you are done, click Save Configuration Changes. Configuration changes will take effect the next time Project Quay restarts.

  6. After adding storage and enabling “Replicate to storage engine by default” for geo-replication, you need to sync existing image data across all storage engines. To do this, oc exec (or docker/kubectl exec) into the container and run:

    # scl enable python27 bash
    # python -m util.backfillreplication

    This is a one time operation to sync content after adding new storage.

Run Project Quay with storage preferences

  1. Copy the config.yaml to all machines running Project Quay

  2. For each machine in each region, add a QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable with the preferred storage engine for the region in which the machine is running.

    For example, for a machine running in Europe with the config directory on the host available from $QUAY/config:

    $ sudo podman run -d --rm -p 80:8080 -p 443:8443  \
       --name=quay \
       -v $QUAY/config:/conf/stack:Z \
       -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage \
       quay.io/projectquay/quay:qui-gon
    Note

    The value of the environment variable specified must match the name of a Location ID as defined in the config panel.

  3. Restart all Project Quay containers

Project Quay Troubleshooting

Schema for Project Quay configuration

Most Project Quay configuration information is stored in the config.yaml file that is created using the browser-based config tool when Project Quay is first deployed.

The configuration options are described in the Project Quay Configuration Guide.

Additional resources