Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise. Red Hat is working on two approaches to deploying Project Quay on OpenShift:
-
Deploy Project Quay with an Operator: The Project Quay Setup Operator was developed to provide a simpler method to deploy and manage a Project Quay cluster. The Project Quay Setup Operator is now fully supported and recommended for use in deploying Project Quay on OpenShift.
-
Deploy Project Quay objects individually: The current procedure in this guide provides a set of yaml files that you deploy individually to set up your Project Quay cluster. Although this procedure is still fully supported, expect it to be deprecated in the near future.
Overview
Features of Project Quay include:
-
High availability
-
Geo-replication
-
Repository mirroring
-
Docker v2, schema 2 (multiarch) support
-
Continuous integration
-
Security scanning with Clair
-
Custom log rotation
-
Zero downtime garbage collection
-
24/7 support
Project Quay provides support for:
-
Multiple authentication and access methods
-
Multiple storage backends
-
Custom certificates for Quay, Clair, and storage backends
-
Application registries
-
Different container image types
Architecture
Project Quay is made up of several core components.
-
Database: Used by Project Quay as its primary metadata storage (not for image storage).
-
Redis (key/value store): Stores live builder logs and the Project Quay tutorial.
-
Quay (container registry): Runs the quay container as a service, consisting of several components in the pod.
-
Clair: Scans container images for vulnerabilities and suggests fixes.
For supported deployments, you need to use one of the following types of storage:
-
Public cloud storage: In public cloud environments, you should use the cloud provider’s object storage, such as Amazon S3 (for AWS) or Google Cloud Storage (for Google Cloud).
-
Private cloud storage: In private clouds, an S3- or Swift-compatible object store is needed, such as Ceph RADOS or OpenStack Swift.
Warning
Do not use the "Locally mounted directory" storage engine for any production configuration. Mounted NFS volumes are not supported. Local storage is meant only for Project Quay test installations.
Prerequisites for Project Quay on OpenShift
Here are a few things you need to know before you begin deploying the Project Quay Operator on OpenShift:
-
OpenShift cluster: You need a privileged account on an OpenShift 4.5 or later cluster on which to deploy the Project Quay Operator. That account must have the ability to create namespaces at the cluster scope.
-
Resource Requirements: Each Project Quay application pod has the following resource requirements:
-
8Gi of memory
-
2000 millicores of CPU.
-
The Project Quay Operator will create at least one application pod per Project Quay deployment it manages. Ensure your OpenShift cluster has sufficient compute resources for these requirements.
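To sanity-check that your nodes can satisfy these requests, you can inspect what is allocatable on a node (the node name here is a placeholder):
$ oc describe node <node-name> | grep -A 5 Allocatable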
-
Object Storage: By default, the Project Quay Operator uses the
ObjectBucketClaim
Kubernetes API to provision object storage. Consuming this API decouples the Operator from any vendor-specific implementation. OpenShift Container Storage provides this API via its NooBaa component, which will be used in this example. Otherwise, Project Quay can be manually configured to use any of the following supported cloud storage options:
-
Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Project Quay)
-
Azure Blob Storage
-
Google Cloud Storage
-
Ceph Object Gateway (RADOS)
-
OpenStack Swift
-
CloudFront + S3
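If you take the ObjectBucketClaim route, a minimal claim looks like the following sketch. The objectbucket.io/v1alpha1 API is provided by OpenShift Container Storage; the claim name and storage class shown are illustrative assumptions, so substitute the values from your cluster:
$ oc create -n quay-enterprise -f - <<EOF
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: quay-object-bucket    # illustrative name
spec:
  generateBucketName: quay
  storageClassName: openshift-storage.noobaa.io    # assumes the NooBaa bucket class
EOF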
Set up Project Quay services
Deploying Project Quay on OpenShift requires you to create a set of yaml files.
Although the oc
command is used to configure the Project Quay registry here,
you could use the OpenShift web UI instead, if you prefer.
Refer to Appendix A for the contents of these yaml files.
Here are a few things to keep in mind:
-
Your OpenShift account must have permission to create namespaces at the cluster scope.
-
Project Quay runs under its own namespace inside a Kubernetes cluster, so that namespace needs to be created first. You can create it with New Project in the OpenShift web console or by using quay-enterprise-namespace.yaml (as described here).
-
You need a working enterprise-quality database. In our example, we illustrate PostgreSQL (version 9.4 or above is required, although we recommend 9.6).
-
You can use an existing Redis service (needed for build logs and the Project Quay tutorial) or start one as described in this procedure.
Here are the major steps, detailed below, to complete a Project Quay deployment on OpenShift:
-
Set up the Project Quay namespace and secrets
-
Create the Project Quay database
-
Create Project Quay roles and privileges
-
Create the Redis deployment
-
Prepare to configure Project Quay
-
Start the Project Quay configuration user interface
-
Deploy the Project Quay configuration
-
Add Clair image scanning
-
Add repository mirroring
Set up Project Quay namespaces and secrets
-
Get Project Quay yaml files: Create a set of yaml files in a directory on your local system from the contents shown in Appendix A. Study each file to determine where you might need to make modifications. You will use oc create to create the needed resources from those files.
-
Log in with the oc CLI: Log in as a user with cluster scope permissions to the OpenShift cluster. For example:
$ oc login -u system:admin
-
Create namespace: Run oc create on quay-enterprise-namespace.yaml and then make quay-enterprise the current project. All objects will be deployed to this namespace/project:
$ oc create -f quay-enterprise-namespace.yaml
namespace "quay-enterprise" created
$ oc project quay-enterprise
-
Create the secret for the Project Quay configuration and app: Create the following secrets. During Project Quay configuration, the config.yaml file, and optionally the ssl.cert and ssl.key files, are added to the application’s secret so they can be included with the resulting Project Quay application:
$ oc create -f quay-enterprise-config-secret.yaml
secret/quay-enterprise-config-secret created
-
Create the secret for quay.io: This pull secret provides credentials to pull containers from the Quay.io registry. Refer to Accessing Red Hat Project Quay to get the credentials you need to add to the quay-enterprise-redhat-pull-secret.yaml file, then run oc create:
$ oc create -f quay-enterprise-redhat-pull-secret.yaml
secret/redhat-pull-secret created
-
Create the database: If you are not using your own enterprise-quality database (which is recommended), this procedure illustrates how to set up a PostgreSQL database on an OpenShift cluster. This entails creating AWS storage, a postgres deployment, and a postgres service, then adding an extension to the database (see the description of quay-storageclass.yaml in Appendix A for information on adding encryption to your volumes):
$ oc create -f quay-storageclass.yaml
storageclass.storage.k8s.io/quay-storageclass created
$ oc create -f db-pvc.yaml
persistentvolumeclaim/postgres-storage created
$ oc create -f postgres-deployment.yaml
deployment.extensions/postgres created
$ oc create -f postgres-service.yaml
service/postgres created
$ oc get pods -n quay-enterprise
NAME                        READY   STATUS    RESTARTS   AGE
postgres-xxxxxxxxxx-xxxxx   1/1     Running   0          3m26s
Run the following command, replacing the name of the postgres pod with your pod:
$ oc exec -it postgres-xxxxxxxxxx-xxxxx -n quay-enterprise -- /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm" | /opt/rh/rh-postgresql10/root/usr/bin/psql -d quay'
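To confirm that the extension was created, you can list the installed extensions and look for pg_trgm (the path to psql depends on the PostgreSQL image; in some images it is simply /usr/bin/psql):
$ oc exec -it postgres-xxxxxxxxxx-xxxxx -n quay-enterprise -- /bin/bash -c 'echo "\dx" | /opt/rh/rh-postgresql10/root/usr/bin/psql -d quay'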
Note
The -d database_name argument must not be omitted. If it is, the extension will be created on the default PostgreSQL database.
-
Create a serviceaccount for the database: Create a serviceaccount and grant it the anyuid privilege. Running the PostgreSQL deployment under anyuid lets you add persistent storage to the deployment and allows it to store database metadata.
# oc create serviceaccount postgres -n quay-enterprise
serviceaccount/postgres created
# oc adm policy add-scc-to-user anyuid -z postgres -n quay-enterprise
scc "anyuid" added to: ["system:serviceaccount:quay-enterprise:postgres"]
-
Create the role and the role binding: Project Quay has native Kubernetes integrations. These integrations require the service account to have access to the Kubernetes API. When Kubernetes RBAC is enabled, Role-Based Access Control policy manifests also have to be deployed. This role will be used to run Project Quay and to write the config.yaml file that Project Quay creates at the end of the web interface setup:
$ oc create -f quay-servicetoken-role-k8s1-6.yaml
$ oc create -f quay-servicetoken-role-binding-k8s1-6.yaml
-
Create Redis deployment: If you haven’t already deployed Redis, create a quay-enterprise-redis.yaml file and deploy it:
$ oc create -f quay-enterprise-redis.yaml
-
Set up to configure Project Quay: Project Quay V3 added a tool for configuring the Project Quay service before deploying it. Although the config tool is in the same container as the full Project Quay service, it is deployed in a different way, as follows:
$ oc create -f quay-enterprise-config.yaml
$ oc create -f quay-enterprise-config-service-clusterip.yaml
$ oc create -f quay-enterprise-config-route.yaml
The quay configuration container is now set up to be accessed on port 443 from your web browser. Before creating the configuration, however, you need to create a route to the permanent Project Quay service, because the Project Quay service's publicly available FQDN is needed when setting up the application.
-
Start the Project Quay application: Identify the Project Quay Kubernetes service and create a route for it, then start the Project Quay application as follows:
$ oc create -f quay-enterprise-service-clusterip.yaml
service/quay-enterprise-clusterip created
$ oc create -f quay-enterprise-app-route.yaml
route.route.openshift.io/quay-enterprise created
$ oc create -f quay-enterprise-app-rc.yaml
deployment.extensions/quay-enterprise-app created
Note
The creation of the Project Quay application (quay-enterprise-app pod) will not complete until you have finished configuring the application, so don’t worry if you see that pod remain in "ContainerCreating" status until the configuration is done. At that point, the new configuration is fed to the application and it changes to the "Running" state.
You will need to know the route to the Project Quay application when you do the configuration step.
-
Begin to configure Project Quay: Open the public route to the Project Quay configuration container in a Web browser. To see the route to the quay configuration service, type the following:
$ oc get route -n quay-enterprise quay-enterprise-config
NAME                     HOST/PORT                                                       PATH   SERVICES                 PORT    TERMINATION   WILDCARD
quay-enterprise-config   quay-enterprise-config-quay-enterprise.apps.test.example.com           quay-enterprise-config   <all>   passthrough   None
For this example, you would open this URL in your web browser: https://quay-enterprise-config-quay-enterprise.apps.test.example.com
-
Log in as quayconfig: When prompted, enter the username and password (the password was set as an argument to the quay config container in quay-enterprise-config.yaml):
-
User Name: quayconfig
-
Password: secret
-
Fill in the required fields: When you start the config tool without mounting an existing configuration bundle, you are taken into an initial setup session in which default values are filled in automatically. The following steps walk through how to fill out the remaining required fields.
-
Identify the database: For the initial setup, add the following information about the type and location of the database to be used by Project Quay:
-
Database Type: Choose MySQL or PostgreSQL. PostgreSQL is used with the example shown here.
-
Database Server: Identify the IP address or hostname of the database, along with the port number if it differs from the default (3306 for MySQL, 5432 for PostgreSQL).
-
Username: Identify a user with full access to the database.
-
Password: Enter the password you assigned to the selected user.
-
Database Name: Enter the database name you assigned when you started the database server.
-
SSL Certificate: For production environments, you should provide an SSL certificate to connect to the database.
To verify the NAME of the service (postgres), type the following:
$ oc get services -n quay-enterprise postgres
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   172.30.127.41   <none>        5432:32212/TCP   19h
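If you want to verify database connectivity before continuing, one option is a throwaway client pod against the service. This is a sketch: adjust the user and database names to the values from postgres-deployment.yaml and enter the password when prompted:
$ oc run -it --rm pg-check -n quay-enterprise --restart=Never \
    --image=registry.redhat.io/rhel8/postgresql-10 -- \
    psql -h postgres.quay-enterprise.svc.cluster.local -U username -d quay -c 'SELECT 1'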
The following figure shows an example of the screen for identifying the database used by Project Quay:
-
Identify settings: Go through each of the following settings. The minimum you must enter includes:
-
Server hostname: The URL to the Project Quay service is required.
-
Redis hostname: The URL or IP address to the Redis service is required.
Here are all the settings you need to consider:
-
Custom SSL Certificates: Upload custom or self-signed SSL certificates for use by Project Quay. See Using SSL to protect connections to Project Quay for details. Recommended for high availability.
Important
Using SSL certificates is recommended for both basic and high availability deployments. If you decide not to use SSL, you must configure your container clients to use your new Project Quay setup as an insecure registry, as described in Test an Insecure Registry.
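For example, with podman, logging in to an insecure (non-SSL) registry looks like this (the hostname is illustrative):
$ podman login --tls-verify=false quay-enterprise-quay-enterprise.apps.test.example.com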
-
Basic Configuration: Upload a company logo to rebrand your Project Quay registry.
-
Server Configuration: Hostname or IP address to reach the Project Quay service, along with TLS indication (recommended for production installations). To get the route to the permanent Project Quay service, type the following:
$ oc get route -n quay-enterprise quay-enterprise
NAME              HOST/PORT                                                                   PATH   SERVICES                    PORT    TERMINATION   WILDCARD
quay-enterprise   quay-enterprise-quay-enterprise.apps.cnegus-ocp.devcluster.openshift.com          quay-enterprise-clusterip   <all>                 None
See Using SSL to protect connections to Project Quay. TLS termination can be done in two different ways:
-
On the instance itself, with all TLS traffic governed by the nginx server in the quay container (recommended).
-
On the load balancer. This is not recommended. Access to Project Quay could be lost if the TLS setup is not done correctly on the load balancer.
-
Data Consistency Settings: Select to relax logging consistency guarantees to improve performance and availability.
-
Time Machine: Allow older image tags to remain in the repository for set periods of time and allow users to select their own tag expiration times.
-
redis: Identify the hostname or IP address (and optional password) to connect to the redis service used by Project Quay. To find the address of the redis service, type the following:
$ oc get services -n quay-enterprise quay-enterprise-redis
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quay-enterprise-redis   ClusterIP   172.30.207.35   <none>        6379/TCP   40m
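You can confirm that Redis is responding by pinging it from the Redis pod (replace the pod name with your own; redis-cli ships in the rhel8/redis-5 image):
$ oc exec -it quay-enterprise-redis-xxxxxxxxxx-xxxxx -n quay-enterprise -- redis-cli ping
PONG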
-
Repository Mirroring: Choose the checkbox to Enable Repository Mirroring. With this enabled, you can create repositories in your Project Quay cluster that mirror selected repositories from remote registries. Before you can enable repository mirroring, start the repository mirroring worker as described later in this procedure.
-
Registry Storage: Identify the location of storage. A variety of cloud and local storage options are available. Remote storage is required for high availability. Identify the Ceph storage location if you are following the example for Project Quay high availability storage. On OpenShift, the example uses Amazon S3 storage.
-
Action Log Storage Configuration: Action logs are stored in the Project Quay database by default. If you have a large amount of action logs, you can have those logs directed to Elasticsearch for later search and analysis. To do this, change the value of Action Logs Storage to Elasticsearch and configure related settings as described in Configure action log storage.
-
Action Log Rotation and Archiving: Select to enable log rotation, which moves logs older than 30 days into storage, then indicate storage area.
-
Security Scanner: We recommend setting up the Clair security scanner after you have completed the initial Project Quay deployment. Clair setup is described following this procedure.
-
Application Registry: Enable an additional application registry that includes things like Kubernetes manifests or Helm charts (see the App Registry specification).
-
rkt Conversion: Allow rkt fetch to be used to fetch images from the Project Quay registry. Public and private GPG2 keys are needed. This field is deprecated.
-
E-mail: Enable e-mail for notifications and user password resets.
-
Internal Authentication: Change default authentication for the registry from Local Database to LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token.
-
External Authorization (OAuth): Enable to allow GitHub or GitHub Enterprise to authenticate to the registry.
-
Google Authentication: Enable to allow Google to authenticate to the registry.
-
Access settings: Basic username/password authentication is enabled by default. Other authentication types that can be enabled include: external application tokens (user-generated tokens used with docker or rkt commands), anonymous access (enable for public access to anyone who can get to the registry), user creation (let users create their own accounts), encrypted client password (require command-line user access to include encrypted passwords), and prefix username autocompletion (disable to require exact username matches on autocompletion).
-
Registry Protocol Settings: Leave the
Restrict V1 Push Support
checkbox enabled to restrict access to Docker V1 protocol pushes. Although Red Hat recommends against enabling Docker V1 push protocol, if you do allow it, you must explicitly whitelist the namespaces for which it is enabled.
-
Dockerfile Build Support: Enable to allow users to submit Dockerfiles to be built and pushed to Project Quay. This is not recommended for multitenant environments.
-
Validate the changes: Select Validate Configuration Changes. If validation is successful, you will be presented with the following Download Configuration modal:
-
Download configuration: Select the Download Configuration button and save the tarball (quay-config.tar.gz) to a local directory. Save this file in case you want to deploy the config files it contains manually or just want a record of what you deployed.
-
Deploy configuration: Unpack the configuration files
(tar xvf quay-config.tar.gz
) and add them manually to the secret:
$ oc create secret generic quay-enterprise-config-secret -n quay-enterprise \
--from-file=config.yaml=/path/to/config.yaml \
--from-file=ssl.key=/path/to/ssl.key \
--from-file=ssl.cert=/path/to/ssl.cert
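Because an empty quay-enterprise-config-secret was already created earlier in this procedure, oc create may fail with AlreadyExists; in that case, delete the existing secret first and rerun the command:
$ oc delete secret quay-enterprise-config-secret -n quay-enterprise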
-
Check pods: In a couple of minutes (depending on your connection speed), Project Quay should be up and running and the following pods should be visible in the quay-enterprise namespace. You might get a mount error at first, but that should resolve itself:
$ oc get pods -n quay-enterprise
NAME                                          READY   STATUS    RESTARTS   AGE
postgres-5b4c5d7dd9-f8tqz                     1/1     Running   0          46h
quay-enterprise-app-7899c7c77f-jrsrc          1/1     Running   0          45h
quay-enterprise-config-app-86bbbcd446-mwmmg   1/1     Running   0          46h
quay-enterprise-redis-684b9d6f55-tx6w9        1/1     Running   0          46h
-
Get the URL for Project Quay: Type the following to get the hostname of the new Project Quay installation:
$ oc get routes -n quay-enterprise quay-enterprise
NAME              HOST/PORT                                                PATH   SERVICES                    PORT    TERMINATION   WILDCARD
quay-enterprise   quay-enterprise-quay-enterprise.apps.test.example.com           quay-enterprise-clusterip   <all>                 None
-
Start using Project Quay: Open the hostname in a web browser to start using Project Quay.
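For example, after creating a user in the web UI, you could log in and push a first image as follows (the hostname and user name are illustrative):
$ podman login quay-enterprise-quay-enterprise.apps.test.example.com
$ podman pull registry.access.redhat.com/ubi8/ubi
$ podman tag registry.access.redhat.com/ubi8/ubi quay-enterprise-quay-enterprise.apps.test.example.com/myuser/ubi8:latest
$ podman push quay-enterprise-quay-enterprise.apps.test.example.com/myuser/ubi8:latest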
Add Clair image scanning to Project Quay
Setting up and deploying Clair image scanning for your Project Quay deployment requires the following basic steps:
-
Setting up a database for Clair
-
Creating authentication keys for Clair
-
Deploying Clair
The following procedure assumes you already have a running Project Quay cluster on an OpenShift platform with the Project Quay Setup container running in your browser:
-
Create a serviceaccount for clair-jwt: Create a serviceaccount and grant it the anyuid privilege. Running the clair deployment under anyuid lets you generate certificates for jwt proxy and add them to the default ca-bundle.
# oc create serviceaccount clair-jwt -n quay-enterprise
serviceaccount/clair-jwt created
# oc adm policy add-scc-to-user anyuid -z clair-jwt -n quay-enterprise
scc "anyuid" added to: ["system:serviceaccount:quay-enterprise:clair-jwt"]
-
Create the Clair database: This example configures a postgresql database to use with the Clair image scanner. With the yaml files in the current directory, review those files for possible modifications, then run the following:
$ oc create -f postgres-clair-storage.yaml
$ oc create -f postgres-clair-deployment.yaml
$ oc create -f postgres-clair-service.yaml
-
Check Clair database objects: To view the Clair database objects, type:
$ oc get all | grep -i clair
pod/postgres-clair-xxxxxxxxx-xxxx     1/1        Running         0        3m45s
deployment.apps/postgres-clair        1/1        1               1        3m45s
service/postgres-clair                NodePort   172.30.193.64   <none>   5432:30680/TCP   159m
replicaset.apps/postgres-clair-xx     1          1               1        3m45s
The output shows that the postgres-clair pod is running, postgres-clair was successfully deployed, the postgres-clair service is available on the address and port shown, and 1 replica set of postgres-clair is active.
-
Open the Project Quay Setup UI: Reload the Project Quay Setup UI and select "Modify configuration for this cluster."
-
Enable Security Scanning: Scroll to the Security Scanner section and select the "Enable Security Scanning" checkbox. From the fields that appear you need to create an authentication key and enter the security scanner endpoint. Here’s how:
-
Generate key: Click "Create Key" and then type a name for the Clair private key and an optional expiration date (if blank, the key never expires). Then select Generate Key.
-
Copy the Clair key and PEM file: Save the Key ID (to a notepad or similar) and download a copy of the Private Key PEM file (named security_scanner.pem) by selecting "Download Private Key" (if you lose this key, you will need to generate a new one).
-
Modify clair-config.yaml: Return to the shell and the directory holding your yaml files. Edit the clair-config.yaml file and modify the following values:
-
database.options.source: Make sure the host, port, dbname, user, password, and sslmode match the values you set when you created the postgres database for Clair.
-
key_id: Search for KEY_ID_HERE in this file and replace it with the Key ID of the key you generated from the Project Quay Setup screen in the Security Scanner section.
-
private_key_path: Identify the full path to the security_scanner.pem file you saved earlier.
-
Create the Clair config secret and service: Run the following commands, identifying the paths to your clair-config.yaml and security_scanner.pem files:
$ oc create secret generic clair-scanner-config-secret \
    --from-file=config.yaml=/path/to/clair-config.yaml \
    --from-file=security_scanner.pem=/path/to/security_scanner.pem
$ oc create -f clair-service.yaml
$ oc create -f clair-deployment.yaml
-
Get the clair-service endpoint: In this example, the endpoint of clair-service would be http://172.30.133.227:6060:
$ oc get service clair-service
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
clair-service   ClusterIP   172.30.133.227   <none>        6060/TCP,6061/TCP   76s
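You can also check Clair's health endpoint on port 6061 from inside the cluster; one sketch, assuming curl is available in the image, is a throwaway pod:
$ oc run -it --rm clair-health -n quay-enterprise --restart=Never \
    --image=registry.access.redhat.com/ubi8/ubi -- curl -s http://clair-service:6061/health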
-
Enter Security Scanner Endpoint: Return to the Project Quay Setup screen and fill in the clair-service endpoint. For example, http://clair-service:6060
-
Deploy configuration: Select to save the configuration, then deploy it when prompted.
A green check mark will appear on the screen when the deployment is done. You can now start using Clair image scanning with Project Quay. For information on the data sources available with the Clair image scanner, see Using Clair data sources.
Add repository mirroring to Project Quay
Enabling repository mirroring allows you to create container image repositories on your Project Quay cluster that exactly match the content of a selected external registry, then sync the contents of those repositories on a regular schedule and on demand.
To add the repository mirroring feature to your Project Quay cluster:
-
Run the repository mirroring worker. To do this, you start a quay pod with the repomirror option.
-
Select "Enable Repository Mirroring in the Project Quay Setup tool.
-
Log into your Project Quay Web UI and begin creating mirrored repositories as described in Repository Mirroring in Red Hat Quay.
The following procedure assumes you already have a running Project Quay cluster on an OpenShift platform, with the Project Quay Setup container running in your browser:
Note
Instead of running repository mirroring in its own container, you could start the quay application pod with an environment variable that enables the repository mirroring worker within it.
-
Start the repo mirroring worker: Start the quay container in repomirror mode as follows:
$ oc create -f quay-enterprise-mirror.yaml
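You can verify that the mirroring worker pod started (the pod name and timing shown are illustrative):
$ oc get pods -n quay-enterprise | grep mirror
quay-enterprise-mirror-xxxxxxxxxx-xxxxx   1/1   Running   0   2m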
-
Log into config tool: Log into the Project Quay Setup Web UI (config tool).
-
Enable repository mirroring: Scroll down to the Repository Mirroring section and select the Enable Repository Mirroring check box, as shown here:
-
Select HTTPS and cert verification: If you want to require HTTPS communications and verify certificates during mirroring, select this check box.
-
Save configuration: Select the Save Configuration Changes button. Repository mirroring should now be enabled on your Project Quay cluster. Refer to Repository Mirroring in Project Quay for details on setting up your own mirrored container image repositories.
The server hostname you set with the config tool may not represent an endpoint
that can be used to copy images to a mirror configured for that server. In that case,
you can set a REPO_MIRROR_SERVER_HOSTNAME environment variable to identify the server's
URL in a way that it can be reached by a skopeo copy command, as shown in the example below.
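For example, a mirror sync performs the equivalent of the following skopeo copy (all names are illustrative; pass --dest-tls-verify=false only if the registry is running without trusted certificates):
$ skopeo copy --dest-tls-verify=false \
    docker://registry.example.com/myorg/myimage:latest \
    docker://quay-enterprise-quay-enterprise.apps.test.example.com/myorg/myimage:latest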
Starting to use Project Quay
With Project Quay now running, you can:
-
Select Tutorial from the Quay home page to try the 15-minute tutorial. In the tutorial, you learn to log into Quay, start a container, create images, push repositories, view repositories, and change repository permissions with Quay.
-
Refer to Use Project Quay for information on working with Project Quay repositories.
Appendix A: Project Quay on OpenShift configuration files
The following yaml files were created to deploy Project Quay on OpenShift. They are used throughout the deployment procedure in this document. We recommend you copy the files from this document into a directory, review the contents, and make any changes necessary for your deployment.
Project Quay namespaces and secrets
apiVersion: v1
kind: Namespace (1)
metadata:
name: quay-enterprise (2)
-
Identifies the Kind as Namespace
-
Namespace is set to quay-enterprise throughout the yaml files
apiVersion: v1
kind: Secret
metadata:
namespace: quay-enterprise
name: quay-enterprise-config-secret
apiVersion: v1
kind: Secret
metadata:
namespace: quay-enterprise
name: redhat-pull-secret
data:
.dockerconfigjson: <Add credentials> (1)
type: kubernetes.io/dockerconfigjson
-
Change <Add credentials> to include the credentials shown from Accessing Red Hat Quay
Project Quay storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: quay-storageclass
parameters: (1)
type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
-
To encrypt the volume, add this to the parameters section (optionally replacing xfs with another filesystem type):
encrypted: "true" fsType: xfs (or other fs) kmsKeyId:
Project Quay database
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-storage
namespace: quay-enterprise
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 5Gi (1)
storageClassName: quay-storageclass
-
The 5Gi value requests 5 gigabytes of storage for use by the Postgres database.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
namespace: quay-enterprise
spec:
replicas: 1 (1)
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: registry.redhat.io/rhel8/postgresql-10:latest
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
env:
- name: POSTGRESQL_USER
value: "username" (2)
- name: POSTGRESQL_DATABASE
value: "quay"
- name: POSTGRESQL_PASSWORD
value: "password" (3)
volumeMounts:
- mountPath: /var/lib/pgsql/data
name: postgredb
serviceAccount: postgres
serviceAccountName: postgres
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-storage
-
Only one instance of the postgres database is defined here. Adjust replicas based on demand.
-
Replace "username" with a name for your Postgres user
-
Replace "password" with a password for your Postgres user
apiVersion: v1
kind: Service
metadata:
name: postgres
namespace: quay-enterprise
labels:
app: postgres
spec:
type: NodePort
ports:
- port: 5432
selector:
app: postgres
Project Quay authorization
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: quay-enterprise-serviceaccount
namespace: quay-enterprise
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- extensions
- apps
resources:
- deployments
verbs:
- get
- list
- patch
- update
- watch
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: quay-enterprise-secret-writer
namespace: quay-enterprise
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: quay-enterprise-serviceaccount
subjects:
- kind: ServiceAccount
name: default
Redis database
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: quay-enterprise
name: quay-enterprise-redis
labels:
quay-enterprise-component: redis
spec:
replicas: 1 (1)
selector:
matchLabels:
quay-enterprise-component: redis
template:
metadata:
namespace: quay-enterprise
labels:
quay-enterprise-component: redis
spec:
containers:
- name: redis-master
image: registry.redhat.io/rhel8/redis-5
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
namespace: quay-enterprise
name: quay-enterprise-redis
labels:
quay-enterprise-component: redis
spec:
ports:
- port: 6379
selector:
quay-enterprise-component: redis
-
Only one instance of the redis database is defined here. Adjust replicas based on demand.
Project Quay configuration pod
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: quay-enterprise
name: quay-enterprise-config-app
labels:
quay-enterprise-component: config-app
spec:
replicas: 1
selector:
matchLabels:
quay-enterprise-component: config-app
template:
metadata:
namespace: quay-enterprise
labels:
quay-enterprise-component: config-app
spec:
containers:
- name: quay-enterprise-config-app
image: quay.io/projectquay/quay:qui-gon
ports:
- containerPort: 8443
command: ["/quay-registry/quay-entrypoint.sh"]
args: ["config", "secret"]
imagePullSecrets:
- name: redhat-pull-secret
apiVersion: v1
kind: Service
metadata:
namespace: quay-enterprise
name: quay-enterprise-config
spec:
type: ClusterIP
ports:
- protocol: TCP
name: https
port: 443
targetPort: 8443
selector:
quay-enterprise-component: config-app
apiVersion: v1
kind: Route
metadata:
name: quay-enterprise-config
namespace: quay-enterprise
spec:
to:
kind: Service
name: quay-enterprise-config
tls:
termination: passthrough
Project Quay application container
apiVersion: v1
kind: Service
metadata:
namespace: quay-enterprise
name: quay-enterprise-clusterip
spec:
type: ClusterIP
ports:
- protocol: TCP
name: https
port: 443
targetPort: 8443
selector:
quay-enterprise-component: app
apiVersion: v1
kind: Route
metadata:
name: quay-enterprise
namespace: quay-enterprise
spec:
to:
kind: Service
name: quay-enterprise-clusterip
tls:
termination: passthrough
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: quay-enterprise
name: quay-enterprise-app
labels:
quay-enterprise-component: app
spec:
replicas: 1 (1)
selector:
matchLabels:
quay-enterprise-component: app
template:
metadata:
namespace: quay-enterprise
labels:
quay-enterprise-component: app
spec:
volumes:
- name: configvolume
secret:
secretName: quay-enterprise-config-secret
containers:
- name: quay-enterprise-app
image: quay.io/projectquay/quay:qui-gon
ports:
- containerPort: 8443
volumeMounts:
- name: configvolume
readOnly: false
mountPath: /conf/stack
resources:
limits:
memory: "4Gi"
requests:
memory: "2Gi"
imagePullSecrets:
- name: redhat-pull-secret
-
Only one instance of the quay container is defined here. Adjust replicas based on demand.
Clair image scanning
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-clair-storage
namespace: quay-enterprise
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: quay-storageclass
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: postgres-clair
name: postgres-clair
namespace: quay-enterprise
spec:
replicas: 1
selector:
matchLabels:
app: postgres-clair
template:
metadata:
labels:
app: postgres-clair
spec:
containers:
- env:
- name: POSTGRESQL_USER
value: clair (1)
- name: POSTGRESQL_DATABASE
value: clair (2)
- name: POSTGRESQL_PASSWORD
value: test123 (3)
image: registry.redhat.io/rhel8/postgresql-10:latest
imagePullPolicy: IfNotPresent
name: postgres-clair
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- mountPath: /var/lib/pgsql/data
name: postgredb
serviceAccount: postgres
serviceAccountName: postgres
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-clair-storage
-
Set the username for the Clair postgres database (clair by default)
-
Set the name of the Clair postgres database
-
Set the password for the Clair postgres user
apiVersion: v1
kind: Service
metadata:
labels:
app: postgres-clair
name: postgres-clair
namespace: quay-enterprise
spec:
ports:
- nodePort: 30680
port: 5432
protocol: TCP
targetPort: 5432
selector:
app: postgres-clair
type: NodePort
Modify source, endpoint, key_id, and registry settings to match your environment.
clair:
database:
type: pgsql
options:
source: host=172.30.87.93 port=5432 dbname=clair user=clair password=test123 sslmode=disable
cachesize: 16384
api:
# The port at which Clair will report its health status. For example, if Clair is running at
# https://clair.mycompany.com, the health will be reported at
# http://clair.mycompany.com:6061/health.
healthport: 6061
port: 6062
timeout: 900s
# paginationkey can be any random set of characters. *Must be the same across all Clair
# instances*.
paginationkey: "XxoPtCUzrUv4JV5dS+yQ+MdW7yLEJnRMwigVY/bpgtQ="
updater:
# interval defines how often Clair will check for updates from its upstream vulnerability databases.
interval: 6h
notifier:
attempts: 3
renotifyinterval: 1h
http:
# QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
# For example: https://myregistry.mycompany.com
endpoint: https://quay-enterprise.apps.lzha0413.qe.devcluster.openshift.com/secscan/notify (1)
proxy: http://localhost:6063
jwtproxy:
signer_proxy:
enabled: true
listen_addr: :6063
ca_key_file: /certificates/mitm.key # Generated internally, do not change.
ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
signer:
issuer: security_scanner
expiration_time: 5m
max_skew: 1m
nonce_length: 32
private_key:
type: preshared
options:
# The ID of the service key generated for Clair. The ID is returned when setting up
# the key in [Quay Enterprise Setup](security-scanning.md)
key_id: fc6c2b02c495c9b8fc674fcdbfdd2058f2f559d6bdd19d0ba70af26c0cb66a48 (2)
private_key_path: /clair/config/security_scanner.pem
verifier_proxies:
- enabled: true
# The port at which Clair will listen.
listen_addr: :6060
# If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
# section below for more information.
# key_file: /config/clair.key
# crt_file: /config/clair.crt
verifier:
# CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
# specified here must match the listen_addr port a few lines above this.
# Example: https://myclair.mycompany.com:6060
audience: http://clair-service:6060
upstream: http://localhost:6062
key_server:
type: keyregistry
options:
# QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
# Example: https://myregistry.mycompany.com
registry: https://quay-enterprise.apps.lzha0413.qe.devcluster.openshift.com/keys/
-
Check that the database options match those set earlier in postgres-clair-deployment.yaml.
-
Ensure the Key ID matches the value of the key generated from the Project Quay Setup screen.
apiVersion: v1
kind: Service
metadata:
name: clair-service
namespace: quay-enterprise
spec:
ports:
- name: clair-api
port: 6060
protocol: TCP
targetPort: 6060
- name: clair-health
port: 6061
protocol: TCP
targetPort: 6061
selector:
quay-enterprise-component: clair-scanner
type: ClusterIP
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
quay-enterprise-component: clair-scanner
name: clair-scanner
namespace: quay-enterprise
spec:
replicas: 1
selector:
matchLabels:
quay-enterprise-component: clair-scanner
template:
metadata:
labels:
quay-enterprise-component: clair-scanner
namespace: quay-enterprise
spec:
containers:
- image: quay.io/projectquay/clair-jwt:qui-gon
imagePullPolicy: IfNotPresent
name: clair-scanner
ports:
- containerPort: 6060
name: clair-api
protocol: TCP
- containerPort: 6061
name: clair-health
protocol: TCP
volumeMounts:
- mountPath: /clair/config
name: configvolume
- mountPath: /etc/pki/ca-trust/source/anchors/ca.crt
name: quay-ssl
subPath: ca.crt
imagePullSecrets:
- name: redhat-pull-secret
restartPolicy: Always
volumes:
- name: configvolume
secret:
secretName: clair-scanner-config-secret
- name: quay-ssl
secret:
defaultMode: 420
items:
- key: ssl.cert
path: ca.crt
secretName: quay-enterprise-config-secret
serviceAccount: clair-jwt
serviceAccountName: clair-jwt
Repository mirroring
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: quay-enterprise
name: quay-enterprise-mirror
labels:
quay-enterprise-component: mirror-app
spec:
replicas: 1
selector:
matchLabels:
quay-enterprise-component: mirror-app
template:
metadata:
namespace: quay-enterprise
labels:
quay-enterprise-component: mirror-app
spec:
volumes:
- name: configvolume
secret:
secretName: quay-enterprise-config-secret
containers:
- name: quay-enterprise-mirror-app
image: quay.io/projectquay/quay:qui-gon
ports:
- containerPort: 8443
volumeMounts:
- name: configvolume
readOnly: false
mountPath: /conf/stack
command: ["/quay-registry/quay-entrypoint.sh"]
args: ["repomirror"]
imagePullSecrets:
- name: redhat-pull-secret