Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise. Red Hat is working on two approaches to deploying Project Quay on OpenShift:

  • Deploy Project Quay with an Operator: The Project Quay Setup Operator was developed to provide a simpler method to deploy and manage a Project Quay cluster. The Project Quay Setup Operator is now fully supported and recommended for use in deploying Project Quay on OpenShift.

  • Deploy Project Quay objects individually: The current procedure in this guide provides a set of yaml files that you deploy individually to set up your Project Quay cluster. Although this procedure is still fully supported, expect it to be deprecated in the near future.

Overview

Features of Project Quay include:

  • High availability

  • Geo-replication

  • Repository mirroring

  • Docker v2, schema 2 (multiarch) support

  • Continuous integration

  • Security scanning with Clair

  • Custom log rotation

  • Zero downtime garbage collection

  • 24/7 support

Project Quay provides support for:

  • Multiple authentication and access methods

  • Multiple storage backends

  • Custom certificates for Quay, Clair, and storage backends

  • Application registries

  • Different container image types

Architecture

Project Quay is made up of several core components.

  • Database: Used by Project Quay as its primary metadata storage (not for image storage).

  • Redis (key-value store): Stores live builder logs and Project Quay tutorial content.

  • Quay (container registry): Runs the quay container as a service, consisting of several components in the pod.

  • Clair: Scans container images for vulnerabilities and suggests fixes.

For supported deployments, you need to use one of the following types of storage:

  • Public cloud storage: In public cloud environments, you should use the cloud provider’s object storage, such as Amazon S3 (for AWS) or Google Cloud Storage (for Google Cloud).

  • Private cloud storage: In private clouds, an S3- or Swift-compatible object store is needed, such as Ceph RADOS or OpenStack Swift.

Warning

Do not use the "Locally mounted directory" storage engine for any production configuration. Mounted NFS volumes are not supported. Local storage is meant for test-only installations of Project Quay.
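
For reference, when you select Amazon S3 storage in the config tool later in this procedure, the storage stanza written to the generated config.yaml looks along these lines. This is a sketch only; the region host, bucket name, and keys are placeholders to replace with your own values:

  DISTRIBUTED_STORAGE_CONFIG:
    default:
      - S3Storage
      - host: s3.us-east-1.amazonaws.com
        s3_access_key: <access key>
        s3_secret_key: <secret key>
        s3_bucket: <bucket name>
        storage_path: /datastorage/registry
  DISTRIBUTED_STORAGE_PREFERENCE:
    - default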

Prerequisites for Project Quay on OpenShift

Here are a few things you need to know before you begin the Project Quay on OpenShift deployment:

  • OpenShift cluster: You need a privileged account on an OpenShift 4.2 or later cluster on which to deploy Project Quay. That account must have the ability to create namespaces at the cluster scope.

  • Storage: AWS cloud storage is used as an example in the following procedure. As an alternative, you can create Ceph cloud storage using steps from the Set up Ceph section of the high availability Project Quay deployment guide.

  • Services: The OpenShift cluster must have enough capacity to run the following containerized services:

    • Database: We recommend you use an enterprise-quality database for production use of Project Quay. PostgreSQL is used as an example in this document. Other options include:

      • Crunchy Data PostgreSQL Operator: Although not supported directly by Red Hat, the Crunchy Data PostgreSQL Operator is available from Crunchy Data for use with Project Quay. If you take this route, you should have a support contract with Crunchy Data and work directly with them for usage guidance or issues relating to the operator and their database.

      • If your organization already has a high-availability (HA) database, you can use that database with Project Quay. See the Project Quay Support Policy for details on support for third-party databases and other components.

    • Key-value database: Redis is used to serve live builder logs and Project Quay tutorial content for your Project Quay deployment.

    • Project Quay: The quay container provides the features to manage the Project Quay registry.

    • Clair: The clair-jwt container provides Clair scanning services for the registry.

Set up Project Quay services

Deploying Project Quay on OpenShift requires you to create a set of yaml files. Although the oc command is used to configure the Project Quay registry here, you could use the OpenShift web UI instead, if you prefer.

Refer to Appendix A for the contents of these yaml files.

Here are a few things to keep in mind:

  • Your OpenShift account must have permission to create namespaces at the cluster scope.

  • Project Quay runs under its own namespace inside a Kubernetes cluster, so that needs to be created first. You can create it through New Project in the OpenShift web console or by using quay-enterprise-namespace.yaml (as described here).

  • You need a working enterprise-quality database. In our example, we illustrate PostgreSQL (version 9.4 or above is required, although we recommend 9.6).

  • You can use an existing Redis service (needed for build logs and the Project Quay tutorial) or start one as described in this procedure.
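
Before you start, you can confirm that your account has the required cluster-scope permission; oc auth can-i should answer yes:

    $ oc auth can-i create namespaces
    yes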

Here are the major steps, detailed below, to complete a Project Quay deployment on OpenShift:

  1. Set up the Project Quay namespace and secrets

  2. Create the Project Quay database

  3. Create Project Quay roles and privileges

  4. Create the Redis deployment

  5. Prepare to configure Project Quay

  6. Start the Project Quay configuration user interface

  7. Deploy the Project Quay configuration

  8. Add Clair image scanning

  9. Add repository mirroring

Set up Project Quay namespaces and secrets

  1. Get Project Quay yaml files: Create a set of yaml files in a directory on your local system from the contents shown in Appendix A. Study each file to determine where you might need to make modifications. You will use oc create to create the needed resources from those files.

  2. Log in with the oc CLI. Log in as a user with cluster-scope permissions on the OpenShift cluster. For example:

    $ oc login -u system:admin
  3. Create namespace. Run oc create -f quay-enterprise-namespace.yaml and then make quay-enterprise the current project. All objects will be deployed to this namespace/project:

    $ oc create -f quay-enterprise-namespace.yaml
    namespace "quay-enterprise" created
    $ oc project quay-enterprise
  4. Create the secret for the Project Quay configuration and app: Create the following secret. During Project Quay configuration, the config.yaml and, optionally, the ssl.cert and ssl.key files are added to the application’s secret so they can be included with the resulting Project Quay application:

    $ oc create -f quay-enterprise-config-secret.yaml
    secret/quay-enterprise-config-secret created
  5. Create the secret for quay.io. This pull secret provides credentials to pull containers from the Quay.io registry. Refer to Accessing Red Hat Project Quay to get the credentials you need to add to the quay-enterprise-redhat-pull-secret.yaml file, then run oc create:

    $ oc create -f quay-enterprise-redhat-pull-secret.yaml
    secret/redhat-pull-secret created
  6. Create the database. If you are not using your own enterprise-quality database (recommended), this procedure illustrates how to set up a PostgreSQL database on an OpenShift cluster. This entails creating AWS storage, a postgres deployment, and a postgres service, then adding the pg_trgm extension to the database (see the description of quay-storageclass.yaml in Appendix A for information on adding encryption to your volumes):

    $ oc create -f quay-storageclass.yaml
    storageclass.storage.k8s.io/quay-storageclass created
    $ oc create -f db-pvc.yaml
    persistentvolumeclaim/postgres-storage created
    $ oc create -f postgres-deployment.yaml
    deployment.extensions/postgres-new created
    $ oc create -f postgres-service.yaml
    service/postgres created
    $ oc get pods -n quay-enterprise
    NAME                        READY   STATUS    RESTARTS   AGE
    postgres-xxxxxxxxxx-xxxxx   1/1     Running   0          3m26s

    Run the following command, replacing the name of the postgres pod with your pod:

    $ oc exec -it postgres-xxxxxxxxxx-xxxxx -n quay-enterprise -- /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm" | /opt/rh/rh-postgresql10/root/usr/bin/psql -d quay'
    Note

    The -d database_name must not be omitted. If it is, the extension will be created on the default PostgreSQL database.
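
    To confirm that the extension was created on the quay database, you can list the installed extensions with psql's \dx command (replace the pod name with yours):

    $ oc exec -it postgres-xxxxxxxxxx-xxxxx -n quay-enterprise -- /bin/bash -c 'echo "\dx" | /opt/rh/rh-postgresql10/root/usr/bin/psql -d quay'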

  7. Create a serviceaccount for the database: Create a serviceaccount and grant it the anyuid privilege. Running the PostgreSQL deployment under anyuid lets you add persistent storage to the deployment and allows it to store database metadata.

    # oc create serviceaccount postgres -n quay-enterprise
    serviceaccount/postgres created
    # oc adm policy add-scc-to-user anyuid -z postgres -n quay-enterprise
    scc "anyuid" added to: ["system:serviceaccount:quay-enterprise:postgres"]
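
    To confirm the grant, check the SCC's users list; it should include system:serviceaccount:quay-enterprise:postgres:

    # oc get scc anyuid -o jsonpath='{.users}'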
  8. Create the role and the role binding: Project Quay has native Kubernetes integrations. These integrations require a service account with access to the Kubernetes API. When Kubernetes RBAC is enabled, role-based access control policy manifests also have to be deployed. This role will be used to run Project Quay and to write the config.yaml file that Project Quay creates at the end of the web interface setup:

    $ oc create -f quay-servicetoken-role-k8s1-6.yaml
    $ oc create -f quay-servicetoken-role-binding-k8s1-6.yaml
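
    To verify that the role and role binding now exist in the quay-enterprise namespace:

    $ oc get role,rolebinding -n quay-enterprise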
  9. Create Redis deployment: If you haven’t already deployed Redis, create a quay-enterprise-redis.yaml file and deploy it:

    $ oc create -f quay-enterprise-redis.yaml
  10. Set up to configure Project Quay: Project Quay V3 added a tool for configuring the Project Quay service before deploying it. Although the config tool is in the same container as the full Project Quay service, it is deployed in a different way, as follows:

    $ oc create -f quay-enterprise-config.yaml
    $ oc create -f quay-enterprise-config-service-clusterip.yaml
    $ oc create -f quay-enterprise-config-route.yaml

    The quay configuration container is now set up to be accessed on port 443 from your web browser. Before creating the configuration, however, you need to create a route to the permanent Project Quay service, because the Project Quay service’s publicly available FQDN is needed when setting up the application.

  11. Start the Project Quay application: Identify the Project Quay Kubernetes service and create a route for it, then start the Project Quay application as follows:

    $ oc create -f quay-enterprise-service-clusterip.yaml
    service/quay-enterprise-clusterip created
    $ oc create -f quay-enterprise-app-route.yaml
    route.route.openshift.io/quay-enterprise created
    $ oc create -f quay-enterprise-app-rc.yaml
    deployment.extensions/quay-enterprise-app created
    Note

    The creation of the Project Quay application (quay-enterprise-app pod) will not complete until you have finished configuring the application. So don’t worry if you see that pod remain in "ContainerCreating" status until the configuration is done. At that point, the new configuration is fed to the application and it will change to the "Running" state.

    You will need to know the route to the Project Quay application when you do the configuration step.
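
    If you want to watch the pod work through its states as the configuration completes, leave a watch running:

    $ oc get pods -n quay-enterprise -w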

  12. Begin to configure Project Quay: Open the public route to the Project Quay configuration container in a Web browser. To see the route to the quay configuration service, type the following:

    $ oc get route -n quay-enterprise quay-enterprise-config
    NAME                   HOST/PORT                                                                          PATH   SERVICES                    PORT    TERMINATION   WILDCARD
    quay-enterprise-config quay-enterprise-config-quay-enterprise.apps.test.example.com quay-enterprise-config    <all> passthrough  None

    For this example, you would open this URL in your web browser: https://quay-enterprise-config-quay-enterprise.apps.test.example.com

  13. Log in as quayconfig: When prompted, enter the username and password (the password was set as an argument to the quay config container in quay-enterprise-config.yaml):

    • User Name: quayconfig

    • Password: secret

    You are prompted to select a configuration mode.

  14. Choose configuration mode: Select "Start new configuration for this cluster". This selection creates a new configuration file (config.yaml) that you will use later for your Project Quay deployment.

  15. Identify the database: For the initial setup, add the following information about the type and location of the database to be used by Project Quay:

    • Database Type: Choose MySQL or PostgreSQL. PostgreSQL is used with the example shown here.

    • Database Server: Identify the IP address or hostname of the database, along with the port number if it differs from the default (3306 for MySQL, 5432 for PostgreSQL).

    • Username: Identify a user with full access to the database.

    • Password: Enter the password you assigned to the selected user.

    • Database Name: Enter the database name you assigned when you started the database server.

    • SSL Certificate: For production environments, you should provide an SSL certificate to connect to the database.

      To verify the NAME of the service (postgres), type the following:

      $ oc get services -n quay-enterprise postgres
      NAME      TYPE      CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
      postgres  NodePort  172.30.127.41  <none>        5432:32212/TCP   19h

      The following figure shows an example of the screen for identifying the database used by Project Quay. The example yaml file sets the database server to postgres, the user name to username, the password to password, and the database to quay:

      Identifying the database Project Quay will use
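
      With these example values, the database connection line the config tool writes to config.yaml would look along these lines (a sketch; substitute your own credentials):

      DB_URI: postgresql://username:password@postgres/quay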

  16. Validate database: Select Validate Database Settings and proceed to the next screen.

  17. Create Project Quay superuser: You need to set up an account with superuser privileges in Project Quay to use for editing Project Quay configuration settings. That information includes a Username, Email address, and Password (entered twice).

    The following figure shows an example of the Project Quay Setup screen for setting up a Project Quay superuser account:

    Set up a Project Quay superuser account to do Project Quay configuration

    Select Create Super User, and proceed to the next screen.

  18. Identify settings: Go through each of the following settings. The minimum you must enter includes:

    • Server hostname: The URL to the Project Quay service is required.

    • Redis hostname: The URL or IP address to the Redis service is required.

      Here are all the settings you need to consider:

    • Custom SSL Certificates: Upload custom or self-signed SSL certificates for use by Project Quay. See Using SSL to protect connections to Project Quay for details. Recommended for high availability.

      Important

      Using SSL certificates is recommended for both basic and high availability deployments. If you decide to not use SSL, you must configure your container clients to use your new Project Quay setup as an insecure registry as described in Test an Insecure Registry.

    • Basic Configuration: Upload a company logo to rebrand your Project Quay registry.

    • Server Configuration: Hostname or IP address to reach the Project Quay service, along with TLS indication (recommended for production installations). To get the route to the permanent Project Quay service, type the following:

      $ oc get route -n quay-enterprise quay-enterprise
      NAME            HOST/PORT                                                               PATH SERVICES                  PORT TERMINATION WILDCARD
      quay-enterprise quay-enterprise-quay-enterprise.apps.cnegus-ocp.devcluster.openshift.com     quay-enterprise-clusterip <all>            None

      See Using SSL to protect connections to Project Quay. TLS termination can be done in two different ways:

      • On the instance itself, with all TLS traffic governed by the nginx server in the quay container (recommended).

      • On the load balancer. This is not recommended. Access to Project Quay could be lost if the TLS setup is not done correctly on the load balancer.

    • Data Consistency Settings: Select to relax logging consistency guarantees to improve performance and availability.

    • Time Machine: Allow older image tags to remain in the repository for set periods of time and allow users to select their own tag expiration times.

    • redis: Identify the hostname or IP address (and optional password) to connect to the redis service used by Project Quay. To find the address of the redis service, type the following:

      $ oc get services -n quay-enterprise quay-enterprise-redis
      NAME                  TYPE       CLUSTER-IP    EXTERNAL-IP PORT(S)  AGE
      quay-enterprise-redis ClusterIP  172.30.207.35 <none>      6379/TCP 40m
    • Repository Mirroring: Choose the checkbox to Enable Repository Mirroring. With this enabled, you can create repositories in your Project Quay cluster that mirror selected repositories from remote registries. Before you can enable repository mirroring, start the repository mirroring worker as described later in this procedure.

    • Registry Storage: Identify the location of storage. A variety of cloud and local storage options are available. Remote storage is required for high availability. Identify the Ceph storage location if you are following the example for Project Quay high availability storage. On OpenShift, the example uses Amazon S3 storage.

      • Action Log Storage Configuration: Action logs are stored in the Project Quay database by default. If you have a large amount of action logs, you can have those logs directed to Elasticsearch for later search and analysis. To do this, change the value of Action Logs Storage to Elasticsearch and configure related settings as described in Configure action log storage.

    • Action Log Rotation and Archiving: Select to enable log rotation, which moves logs older than 30 days into storage, then indicate storage area.

    • Security Scanner: We recommend setting up the Clair security scanner after you have completed the initial Project Quay deployment. Clair setup is described after the end of this procedure.

    • Application Registry: Enable an additional application registry that includes things like Kubernetes manifests or Helm charts (see the App Registry specification).

    • rkt Conversion: Allow rkt fetch to be used to fetch images from the Project Quay registry. Public and private GPG2 keys are needed. This field is deprecated.

    • E-mail: Enable e-mail to use for notifications and user password resets.

    • Internal Authentication: Change default authentication for the registry from Local Database to LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token.

    • External Authorization (OAuth): Enable to allow GitHub or GitHub Enterprise to authenticate to the registry.

    • Google Authentication: Enable to allow Google to authenticate to the registry.

    • Access settings: Basic username/password authentication is enabled by default. Other authentication types that can be enabled include: external application tokens (user-generated tokens used with docker or rkt commands), anonymous access (enable for public access to anyone who can get to the registry), user creation (let users create their own accounts), encrypted client password (require command-line user access to include encrypted passwords), and prefix username autocompletion (disable to require exact username matches on autocompletion).

      • Registry Protocol Settings: Leave the Restrict V1 Push Support checkbox enabled to restrict access to Docker V1 protocol pushes. Although Red Hat recommends against enabling Docker V1 push protocol, if you do allow it, you must explicitly whitelist the namespaces for which it is enabled.

    • Dockerfile Build Support: Enable to allow users to submit Dockerfiles to be built and pushed to Project Quay. This is not recommended for multitenant environments.
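
    Once saved, these settings are written to config.yaml. For example, the Server hostname and redis entries above would produce lines similar to the following sketch (using the example route and service names):

    SERVER_HOSTNAME: quay-enterprise-quay-enterprise.apps.test.example.com
    BUILDLOGS_REDIS:
      host: quay-enterprise-redis
      port: 6379
    USER_EVENTS_REDIS:
      host: quay-enterprise-redis
      port: 6379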

  19. Save the changes: Select Save Configuration Changes. You are presented with the following Download Configuration screen:

    Download the Project Quay configuration tarball to the local system

  20. Download configuration: Select the Download Configuration button and save the tarball (quay-config.tar.gz) to a local directory. Save this file in case you want to deploy the enclosed config files manually or want a record of what you deployed.

  21. Deploy configuration: Unpack the configuration files (tar xvf quay-config.tar.gz) and add them manually to the secret:

    $ oc create secret generic quay-enterprise-config-secret -n quay-enterprise \
         --from-file=config.yaml=/path/to/config.yaml \
         --from-file=ssl.key=/path/to/ssl.key \
         --from-file=ssl.cert=/path/to/ssl.cert
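
    To confirm that the secret now holds the deployed files, inspect it; the Data section should list config.yaml and, if you uploaded certificates, ssl.cert and ssl.key:

    $ oc describe secret quay-enterprise-config-secret -n quay-enterprise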
  22. Check pods: In a couple of minutes (depending on your connection speed), Project Quay should be up and running and the following pods should be visible in the quay-enterprise namespace. You might get a mount error at first, but that should resolve itself:

    $ oc get pods -n quay-enterprise
    NAME                                        READY STATUS  RESTARTS AGE
    postgres-5b4c5d7dd9-f8tqz                   1/1   Running 0        46h
    quay-enterprise-app-7899c7c77f-jrsrc        1/1   Running 0        45h
    quay-enterprise-config-app-86bbbcd446-mwmmg 1/1   Running 0        46h
    quay-enterprise-redis-684b9d6f55-tx6w9      1/1   Running 0        46h
  23. Get the URL for Project Quay: Type the following to get the hostname of the new Project Quay installation:

    $ oc get routes -n quay-enterprise quay-enterprise
    NAME            HOST/PORT                                             PATH SERVICES                  PORT  TERMINATION WILDCARD
    quay-enterprise quay-enterprise-quay-enterprise.apps.test.example.com      quay-enterprise-clusterip <all>             None
  24. Start using Project Quay: Open the hostname in a web browser to start using Project Quay.

Add Clair image scanning to Project Quay

Setting up and deploying Clair image scanning for your Project Quay deployment requires the following basic steps:

  • Setting up a database for Clair

  • Creating authentication keys for Clair

  • Deploying Clair

The following procedure assumes you already have a running Project Quay cluster on an OpenShift platform with the Project Quay Setup container running in your browser:

  1. Create a serviceaccount for clair-jwt: Create a serviceaccount and grant it the anyuid privilege. Running the clair deployment under anyuid lets you generate certificates for jwt proxy and add them to the default ca-bundle.

    # oc create serviceaccount clair-jwt -n quay-enterprise
    serviceaccount/clair-jwt created
    # oc adm policy add-scc-to-user anyuid -z clair-jwt -n quay-enterprise
    scc "anyuid" added to: ["system:serviceaccount:quay-enterprise:clair-jwt"]
  2. Create the Clair database: This example configures a PostgreSQL database to use with the Clair image scanner. With the yaml files in the current directory, review those files for possible modifications, then run the following:

    $ oc create -f postgres-clair-storage.yaml
    $ oc create -f postgres-clair-deployment.yaml
    $ oc create -f postgres-clair-service.yaml
  3. Check Clair database objects: To view the Clair database objects, type:

    $ oc get all | grep -i clair
    pod/postgres-clair-xxxxxxxxx-xxxx 1/1      Running       0                     3m45s
    deployment.apps/postgres-clair    1/1      1             1                     3m45s
    service/postgres-clair            NodePort 172.30.193.64 <none> 5432:30680/TCP 159m
    replicaset.apps/postgres-clair-xx 1        1             1                     3m45s

    The output shows that the postgres-clair pod is running, postgres-clair was successfully deployed, the postgres-clair service is available on the address and port shown, and 1 replica set of postgres-clair is active.

  4. Open the Project Quay Setup UI: Reload the Project Quay Setup UI and select "Modify configuration for this cluster."

  5. Enable Security Scanning: Scroll to the Security Scanner section and select the "Enable Security Scanning" checkbox. From the fields that appear you need to create an authentication key and enter the security scanner endpoint. Here’s how:

    • Generate key: Click "Create Key" and then type a name for the Clair private key and an optional expiration date (if blank, the key never expires). Then select Generate Key.

    • Copy the Clair key and PEM file: Save the Key ID (to a notepad or similar) and download a copy of the Private Key PEM file (named security_scanner.pem) by selecting "Download Private Key" (if you lose this key, you will need to generate a new one).

  6. Modify clair-config.yaml: Return to the shell and the directory holding your yaml files. Edit the clair-config.yaml file and modify the following values:

    • database.options.source: Make sure the host, port, dbname, user, password, and sslmode match the values you set when you created the postgres database for Clair.

    • key_id: Search for KEY_ID_HERE in this file and replace it with the Key ID you saved from the Project Quay Setup screen in the Security Scanner section.

    • private_key_path: Identify the full path to the security_scanner.pem file you saved earlier.
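
    For example, with the values used in postgres-clair-deployment.yaml and the postgres-clair service, the source line would look like this (adjust the host to your service name or ClusterIP):

    source: host=postgres-clair port=5432 dbname=clair user=clair password=test123 sslmode=disable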

  7. Create the Clair config secret and service: Run the following commands, identifying the paths to your clair-config.yaml and security_scanner.pem files.

    $ oc create secret generic clair-scanner-config-secret \
       --from-file=config.yaml=/path/to/clair-config.yaml \
       --from-file=security_scanner.pem=/path/to/security_scanner.pem
    $ oc create -f clair-service.yaml
    $ oc create -f clair-deployment.yaml
  8. Get the clair-service endpoint: In this example, the endpoint of clair-service would be http://172.30.133.227:6060:

    $ oc get service clair-service
    NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    clair-service   ClusterIP   172.30.133.227   <none>        6060/TCP,6061/TCP   76s
  9. Enter Security Scanner Endpoint: Return to the Project Quay Setup screen and fill in the clair-service endpoint. For example, http://clair-service:6060

  10. Deploy configuration: Select to save the configuration, then deploy it when prompted.

A green check mark will appear on the screen when the deployment is done. You can now start using Clair image scanning with Project Quay. For information on the data sources available with the Clair image scanner, see Using Clair data sources.

Add repository mirroring to Project Quay

Enabling repository mirroring allows you to create container image repositories on your Project Quay cluster that exactly match the content of a selected external registry, then sync the contents of those repositories on a regular schedule and on demand.

To add the repository mirroring feature to your Project Quay cluster:

  • Run the repository mirroring worker. To do this, you start a quay pod with the repomirror option.

  • Select "Enable Repository Mirroring" in the Project Quay Setup tool.

  • Log into your Project Quay Web UI and begin creating mirrored repositories as described in Repository Mirroring in Red Hat Quay.

The following procedure assumes you already have a running Project Quay cluster on an OpenShift platform, with the Project Quay Setup container running in your browser:

Note

Instead of running repository mirroring in its own container, you could start the quay application pod with the environment variable QUAY_OVERRIDE_SERVICES=repomirrorworker=true. This causes the repomirror worker to run inside the quay application pod instead of as a separate container.
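
If you take that approach, the variable is added to the quay application deployment (quay-enterprise-app-rc.yaml) instead of creating a separate mirror deployment. A minimal sketch of the container spec addition:

      containers:
      - name: quay-enterprise-app
        image: quay.io/projectquay/quay:qui-gon
        env:
        - name: QUAY_OVERRIDE_SERVICES
          value: repomirrorworker=true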

  1. Start the repo mirroring worker: Start the quay container in repomirror mode as follows:

    $ oc create -f quay-enterprise-mirror.yaml
  2. Log into config tool: Log into the Project Quay Setup Web UI (config tool).

  3. Enable repository mirroring: Scroll down to the Repository Mirroring section and select the Enable Repository Mirroring check box.

  4. Select HTTPS and cert verification: If you want to require HTTPS communications and verify certificates during mirroring, select this check box.

  5. Save configuration: Select the Save Configuration Changes button. Repository mirroring should now be enabled on your Project Quay cluster. Refer to Repository Mirroring in Project Quay for details on setting up your own mirrored container image repositories.

The server hostname you set with the config tool may not represent an endpoint that can be used to copy images to a mirror configured for that server. In that case, you can set a REPO_MIRROR_SERVER_HOSTNAME environment variable to identify the server’s URL in a way that can be reached by a skopeo copy command.
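
For example, the variable can be added to the container spec in quay-enterprise-mirror.yaml. A minimal sketch, assuming https://quay-registry.example.com is a placeholder for a URL reachable by skopeo:

      containers:
      - name: quay-enterprise-mirror-app
        image: quay.io/projectquay/quay:qui-gon
        env:
        - name: REPO_MIRROR_SERVER_HOSTNAME
          value: https://quay-registry.example.com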

Starting to use Project Quay

With Project Quay now running, you can:

  • Select Tutorial from the Quay home page to try the 15-minute tutorial. In the tutorial, you learn to log into Quay, start a container, create images, push repositories, view repositories, and change repository permissions with Quay.

  • Refer to Use Project Quay for information on working with Project Quay repositories.

Appendix A: Project Quay on OpenShift configuration files

The following yaml files were created to deploy Project Quay on OpenShift. They are used throughout the deployment procedure in this document. We recommend you copy the files from this document into a directory, review the contents, and make any changes necessary for your deployment.

Project Quay namespaces and secrets

quay-enterprise-namespace.yaml
apiVersion: v1
kind: Namespace (1)
metadata:
  name: quay-enterprise (2)
  1. Identifies the Kind as Namespace

  2. Namespace is set to quay-enterprise throughout the yaml files

quay-enterprise-config-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: quay-enterprise
  name: quay-enterprise-config-secret
quay-enterprise-redhat-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: quay-enterprise
  name: redhat-pull-secret
data:
  .dockerconfigjson: <Add credentials> (1)
type: kubernetes.io/dockerconfigjson
  1. Change <Add credentials> to include the credentials obtained from Accessing Red Hat Quay

Project Quay storage

quay-storageclass.yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: quay-storageclass
  parameters: (1)
    type: gp2
  provisioner: kubernetes.io/aws-ebs
  reclaimPolicy: Delete
  1. To encrypt the volume, add the following to the parameters section, optionally replacing xfs with another filesystem type and supplying a kmsKeyId:

     encrypted: "true"
     fsType: xfs
     kmsKeyId:
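
With encryption enabled, the complete StorageClass would look like the following sketch (kmsKeyId may be left empty to use the default AWS key):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: quay-storageclass
  parameters:
    type: gp2
    encrypted: "true"
    fsType: xfs
    kmsKeyId:
  provisioner: kubernetes.io/aws-ebs
  reclaimPolicy: Delete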

Project Quay database

db-pvc.yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: postgres-storage
    namespace: quay-enterprise
  spec:
    accessModes:
      - ReadWriteOnce
    volumeMode: Filesystem
    resources:
      requests:
        storage: 5Gi (1)
    storageClassName: quay-storageclass
  1. The 5Gi creates 5 gigabytes of storage for use by the Postgres database.

postgres-deployment.yaml
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: postgres
    namespace: quay-enterprise
  spec:
    replicas: 1 (1)
    template:
      metadata:
        labels:
          app: postgres
      spec:
        containers:
          - name: postgres
            image: registry.access.redhat.com/rhscl/postgresql-10-rhel7:1-35
            imagePullPolicy: "IfNotPresent"
            ports:
              - containerPort: 5432
            env:
            - name: POSTGRESQL_USER
              value: "username" (2)
            - name: POSTGRESQL_DATABASE
              value: "quay"
            - name: POSTGRESQL_PASSWORD
              value: "password" (3)
            volumeMounts:
              - mountPath: /var/lib/pgsql/data
                name: postgredb
            serviceAccount: postgres
            serviceAccountName: postgres
        volumes:
          - name: postgredb
            persistentVolumeClaim:
              claimName: postgres-storage
  1. Only one instance of the postgres database is defined here. Adjust replicas based on demand.

  2. Replace "username" with a name for your Postgres user

  3. Replace "password" with a password for your Postgres user

postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: quay-enterprise
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
   - port: 5432
  selector:
   app: postgres

Project Quay authorization

quay-servicetoken-role-k8s1-6.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: quay-enterprise-serviceaccount
  namespace: quay-enterprise
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
  - patch
  - update
  - watch
quay-servicetoken-role-binding-k8s1-6.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: quay-enterprise-secret-writer
  namespace: quay-enterprise
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: quay-enterprise-serviceaccount
subjects:
- kind: ServiceAccount
  name: default

Redis database

quay-enterprise-redis.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: quay-enterprise
  name: quay-enterprise-redis
  labels:
    quay-enterprise-component: redis
spec:
  replicas: 1 (1)
  selector:
    matchLabels:
      quay-enterprise-component: redis
  template:
    metadata:
      namespace: quay-enterprise
      labels:
        quay-enterprise-component: redis
    spec:
      containers:
      - name: redis-master
        image: registry.access.redhat.com/rhscl/redis-32-rhel7
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  namespace: quay-enterprise
  name: quay-enterprise-redis
  labels:
    quay-enterprise-component: redis
spec:
  ports:
    - port: 6379
  selector:
    quay-enterprise-component: redis
  1. Only one instance of the redis database is defined here. Adjust replicas based on demand.

Project Quay configuration pod

quay-enterprise-config.yaml
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    namespace: quay-enterprise
    name: quay-enterprise-config-app
    labels:
      quay-enterprise-component: config-app
  spec:
    replicas: 1
    selector:
      matchLabels:
        quay-enterprise-component: config-app
    template:
      metadata:
        namespace: quay-enterprise
        labels:
          quay-enterprise-component: config-app
      spec:
        containers:
        - name: quay-enterprise-config-app
          image: quay.io/projectquay/quay:qui-gon
          ports:
          - containerPort: 8443
          command: ["/quay-registry/quay-entrypoint.sh"]
          args: ["config", "secret"] (1)
        imagePullSecrets:
          - name: redhat-pull-secret
  1. The second argument ("secret") sets the password for the quayconfig user that you enter when logging in to the config tool.
quay-enterprise-config-service-clusterip.yaml
  apiVersion: v1
  kind: Service
  metadata:
    namespace: quay-enterprise
    name: quay-enterprise-config
  spec:
    type: ClusterIP
    ports:
      - protocol: TCP
        name: https
        port: 443
        targetPort: 8443
    selector:
      quay-enterprise-component: config-app
quay-enterprise-config-route.yaml
  apiVersion: v1
  kind: Route
  metadata:
    name: quay-enterprise-config
    namespace: quay-enterprise
  spec:
    to:
      kind: Service
      name: quay-enterprise-config
    tls:
      termination: passthrough

Project Quay application container

quay-enterprise-service-clusterip.yaml
  apiVersion: v1
  kind: Service
  metadata:
    namespace: quay-enterprise
    name: quay-enterprise-clusterip
  spec:
    type: ClusterIP
    ports:
      - protocol: TCP
        name: https
        port: 443
        targetPort: 8443
    selector:
      quay-enterprise-component: app
quay-enterprise-app-route.yaml
apiVersion: v1
kind: Route
metadata:
  name: quay-enterprise
  namespace: quay-enterprise
spec:
  to:
    kind: Service
    name: quay-enterprise-clusterip
  tls:
    termination: passthrough
quay-enterprise-app-rc.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: quay-enterprise
  name: quay-enterprise-app
  labels:
    quay-enterprise-component: app
spec:
  replicas: 1 (1)
  selector:
    matchLabels:
      quay-enterprise-component: app
  template:
    metadata:
      namespace: quay-enterprise
      labels:
        quay-enterprise-component: app
    spec:
      volumes:
        - name: configvolume
          secret:
            secretName: quay-enterprise-config-secret
      containers:
      - name: quay-enterprise-app
        image: quay.io/projectquay/quay:qui-gon
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: configvolume
          readOnly: false
          mountPath: /conf/stack
        resources:
          limits:
             memory: "4Gi"
          requests:
            memory: "2Gi"
      imagePullSecrets:
        - name: redhat-pull-secret
  1. Only one instance of the quay container is defined here. Adjust replicas based on demand.

Clair image scanning

postgres-clair-storage.yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: postgres-clair-storage
    namespace: quay-enterprise
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
    storageClassName: quay-storageclass
postgres-clair-deployment.yaml
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      app: postgres-clair
    name: postgres-clair
    namespace: quay-enterprise
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: postgres-clair
    template:
      metadata:
        labels:
          app: postgres-clair
      spec:
        containers:
        - env:
          - name: POSTGRESQL_USER
            value: clair (1)
          - name: POSTGRESQL_DATABASE
            value: clair (2)
          - name: POSTGRESQL_PASSWORD
            value: test123 (3)
          image: registry.access.redhat.com/rhscl/postgresql-10-rhel7:1-35
          imagePullPolicy: IfNotPresent
          name: postgres-clair
          ports:
          - containerPort: 5432
            protocol: TCP
          volumeMounts:
          - mountPath: /var/lib/pgsql/data
            name: postgredb
          serviceAccount: postgres
          serviceAccountName: postgres
        volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-clair-storage
  1. Set the username for the Clair postgres database (clair by default)

  2. Set the name of the Clair postgres database

  3. Set the password for the Clair postgres user

postgres-clair-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: postgres-clair
    name: postgres-clair
    namespace: quay-enterprise
  spec:
    ports:
    - nodePort: 30680
      port: 5432
      protocol: TCP
      targetPort: 5432
    selector:
      app: postgres-clair
    type: NodePort
clair-config.yaml

Modify source, endpoint, key_id, and registry settings to match your environment.

  clair:
    database:
      type: pgsql
      options:
        source: host=172.30.87.93 port=5432 dbname=clair user=clair password=test123 sslmode=disable (1)
        cachesize: 16384
    api:
      # The port at which Clair will report its health status. For example, if Clair is running at
      # https://clair.mycompany.com, the health will be reported at
      # http://clair.mycompany.com:6061/health.
      healthport: 6061

      port: 6062
      timeout: 900s

      # paginationkey can be any random set of characters. *Must be the same across all Clair
      # instances*.
      paginationkey: "XxoPtCUzrUv4JV5dS+yQ+MdW7yLEJnRMwigVY/bpgtQ="
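      # One way to generate a suitable random value (an assumption; any
      # sufficiently long random string works): openssl rand -base64 32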

    updater:
      # interval defines how often Clair will check for updates from its upstream vulnerability databases.
      interval: 6h
    notifier:
      attempts: 3
      renotifyinterval: 1h
      http:
        # QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
        # For example: https://myregistry.mycompany.com
        endpoint: https://quay-enterprise.apps.lzha0413.qe.devcluster.openshift.com/secscan/notify
        proxy: http://localhost:6063

  jwtproxy:
    signer_proxy:
      enabled: true
      listen_addr: :6063
      ca_key_file: /certificates/mitm.key # Generated internally, do not change.
      ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
      signer:
        issuer: security_scanner
        expiration_time: 5m
        max_skew: 1m
        nonce_length: 32
        private_key:
          type: preshared
          options:
            # The ID of the service key generated for Clair. The ID is returned when setting up
            # the key in [Quay Enterprise Setup](security-scanning.md)
            key_id: fc6c2b02c495c9b8fc674fcdbfdd2058f2f559d6bdd19d0ba70af26c0cb66a48 (2)
            private_key_path: /clair/config/security_scanner.pem

    verifier_proxies:
    - enabled: true
      # The port at which Clair will listen.
      listen_addr: :6060

      # If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
      # section below for more information.
      # key_file: /config/clair.key
      # crt_file: /config/clair.crt

      verifier:
        # CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
        # specified here must match the listen_addr port a few lines above this.
        # Example: https://myclair.mycompany.com:6060
        audience: http://clair-service:6060

        upstream: http://localhost:6062
        key_server:
          type: keyregistry
          options:
            # QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
            # Example: https://myregistry.mycompany.com
            registry: https://quay-enterprise.apps.lzha0413.qe.devcluster.openshift.com/keys/
  1. Check that the database options match those set earlier in postgres-clair-deployment.yaml.

  2. Insert the Key ID that matches the value from the key generated on the Project Quay Setup screen.

clair-service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: clair-service
    namespace: quay-enterprise
  spec:
    ports:
    - name: clair-api
      port: 6060
      protocol: TCP
      targetPort: 6060
    - name: clair-health
      port: 6061
      protocol: TCP
      targetPort: 6061
    selector:
      quay-enterprise-component: clair-scanner
    type: ClusterIP
clair-deployment.yaml
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      quay-enterprise-component: clair-scanner
    name: clair-scanner
    namespace: quay-enterprise
  spec:
    replicas: 1
    selector:
      matchLabels:
        quay-enterprise-component: clair-scanner
    template:
      metadata:
        labels:
          quay-enterprise-component: clair-scanner
        namespace: quay-enterprise
      spec:
        containers:
        - image: quay.io/projectquay/clair-jwt:qui-gon
          imagePullPolicy: IfNotPresent
          name: clair-scanner
          ports:
          - containerPort: 6060
            name: clair-api
            protocol: TCP
          - containerPort: 6061
            name: clair-health
            protocol: TCP
          volumeMounts:
          - mountPath: /clair/config
            name: configvolume
          - mountPath: /etc/pki/ca-trust/source/anchors/ca.crt
            name: quay-ssl
            subPath: ca.crt
        imagePullSecrets:
        - name: redhat-pull-secret
        restartPolicy: Always
        volumes:
        - name: configvolume
          secret:
            secretName: clair-scanner-config-secret
        - name: quay-ssl
          secret:
            defaultMode: 420
            items:
            - key: ssl.cert
              path: ca.crt
            secretName: quay-enterprise-config-secret
        serviceAccount: clair-jwt
        serviceAccountName: clair-jwt

Repository mirroring

quay-enterprise-mirror.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: quay-enterprise
  name: quay-enterprise-mirror
  labels:
    quay-enterprise-component: mirror-app
spec:
  replicas: 1
  selector:
    matchLabels:
      quay-enterprise-component: mirror-app
  template:
    metadata:
      namespace: quay-enterprise
      labels:
        quay-enterprise-component: mirror-app
    spec:
      volumes:
      - name: configvolume
        secret:
          secretName: quay-enterprise-config-secret
      containers:
      - name: quay-enterprise-mirror-app
        image: quay.io/projectquay/quay:qui-gon
        ports:
        - containerPort: 8443
        volumeMounts:
        - name: configvolume
          readOnly: false
          mountPath: /conf/stack
        command: ["/quay-registry/quay-entrypoint.sh"]
        args: ["repomirror"]
      imagePullSecrets:
        - name: redhat-pull-secret

Additional resources