Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise.
The Project Quay Operator provides a simple method to deploy and manage Project Quay on an OpenShift cluster.
As of Project Quay 3.4.0, the Operator has been completely rewritten to provide an improved out-of-the-box experience as well as support for more Day 2 operations. As a result, the new Operator is simpler to use and is more opinionated. The key differences from earlier versions of the Operator are:
- The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource
- The default installation options produce a fully supported Quay environment, with all managed dependencies (database, caches, object storage, etc.) supported for production use (some components may not be highly available)
- A new, robust validation library for Quay's configuration, which is shared by the Quay application and the config tool for consistency
- Object storage can now be provided by the Operator using the ObjectBucketClaim Kubernetes API (for example, the NooBaa Operator from OperatorHub.io can be used to provide an implementation of that API)
- Customization of the container images used by deployed pods for testing and development scenarios
Introduction to the Project Quay Operator
This document outlines the steps for configuring, deploying, managing and upgrading Project Quay on OpenShift using the Project Quay Operator.
It shows you how to:
- Install the Project Quay Operator
- Configure object storage, either managed or unmanaged
- Configure other unmanaged components, if required, including database, Redis, routes, TLS, etc.
- Deploy the Project Quay registry on OpenShift using the Operator
- Use advanced features supported by the Operator
- Upgrade the registry by upgrading the Operator
QuayRegistry API
The Quay Operator provides the QuayRegistry custom resource API to declaratively manage Quay container registries on the cluster. Use either the OpenShift UI or a command-line tool to interact with this API.
- Creating a QuayRegistry will result in the Operator deploying and configuring all necessary resources needed to run Quay on the cluster.
- Editing a QuayRegistry will result in the Operator reconciling the changes and creating, updating, or deleting objects to match the desired configuration.
- Deleting a QuayRegistry will result in garbage collection of all previously created resources, and the Quay container registry will no longer be available.
The QuayRegistry API is fairly simple, and the fields are outlined in the following sections.
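For example, once a QuayRegistry named example-registry exists in the quay-enterprise namespace (the names used throughout this guide), it can be inspected from the command line:
$ oc get quayregistry -n quay-enterprise
$ oc get quayregistry example-registry -n quay-enterprise -o yaml
$ oc describe quayregistry example-registry -n quay-enterprise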
Quay Operator components
Quay is a powerful container registry platform and, as a result, has a significant number of dependencies. These include a database, object storage, Redis, and others. The Quay Operator manages an opinionated deployment of Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry API.
In the QuayRegistry custom resource, the spec.components field configures components. Each component contains two fields: kind, the name of the component, and managed, a boolean indicating whether the component's lifecycle is handled by the Operator. By default (when this field is omitted), all components are managed and the field is autofilled upon reconciliation for visibility:
spec:
  components:
    - kind: quay
      managed: true
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    - kind: clairpostgres
      managed: true
Using managed components
Unless your QuayRegistry custom resource specifies otherwise, the Operator will use defaults for the following managed components:
- quay: Holds overrides for the Quay deployment, for example, environment variables and number of replicas. This component is new in Project Quay 3.7 and cannot be set to unmanaged.
- postgres: For storing the registry metadata, uses an upstream (CentOS) version of Postgres 10
- clair: Provides image vulnerability scanning
- redis: Handles Quay builder coordination and some internal logging
- horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption
- objectstorage: For storing image layer blobs, utilizes the ObjectBucketClaim Kubernetes API, which is provided by NooBaa/RHOCS
- route: Provides an external entrypoint to the Quay registry from outside OpenShift
- mirror: Configures repository mirror workers (to support optional repository mirroring)
- monitoring: Features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods
- tls: Configures whether Project Quay or OpenShift handles TLS
- clairpostgres: Configures a managed Clair database
The Operator will handle any required configuration and installation work needed for Project Quay to use the managed components. If the opinionated deployment performed by the Quay Operator is unsuitable for your environment, you can provide the Operator with unmanaged resources (overrides) as described in the following sections.
Using unmanaged components for dependencies
If you have existing components such as Postgres, Redis, or object storage that you would like to use with Quay, you first configure them within the Quay configuration bundle (config.yaml) and then reference the bundle in your QuayRegistry (as a Kubernetes Secret) while indicating which components are unmanaged.
Note
|
The Quay config editor can also be used to create or modify an existing config bundle, and it simplifies the process of updating the Kubernetes Secret. |
Config bundle secret
The spec.configBundleSecret field is a reference to the metadata.name of a Secret in the same namespace as the QuayRegistry. This Secret must contain a config.yaml key/value pair, where config.yaml is a Quay config YAML file. This field is optional, and will be auto-filled by the Operator if not provided. If provided, it serves as the base set of config fields, which are later merged with fields from any managed components to form a final output Secret that is mounted into the Quay application pods.
Prerequisites for Project Quay on OpenShift
Before you begin the deployment of Project Quay Operator on OpenShift, you should consider the following.
OpenShift cluster
You need a privileged account on an OpenShift 4.5 or later cluster on which to deploy the Project Quay Operator. That account must have the ability to create namespaces at the cluster scope.
Resource Requirements
Each Project Quay application pod has the following resource requirements:
- 8Gi of memory
- 2000 millicores of CPU
The Project Quay Operator will create at least one application pod per Project Quay deployment it manages. Ensure your OpenShift cluster has sufficient compute resources for these requirements.
Object Storage
By default, the Project Quay Operator uses the ObjectBucketClaim Kubernetes API to provision object storage. Consuming this API decouples the Operator from any vendor-specific implementation. Red Hat OpenShift Data Foundation provides this API via its NooBaa component, which will be used in this example.
Project Quay can be manually configured to use any of the following supported cloud storage options:
- Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Project Quay)
- Azure Blob Storage
- Google Cloud Storage
- Ceph Object Gateway (RADOS)
- OpenStack Swift
- CloudFront + S3
Installing the Quay Operator from OperatorHub
- Using the OpenShift console, select Operators → OperatorHub, then select the Red Hat Quay Operator. If there is more than one, be sure to use the Red Hat certified Operator and not the community version.
- The Installation page outlines the features and prerequisites.
- Select Install. The Operator Installation page appears.
- The following choices are available for customizing the installation:
  - Update Channel: Choose the update channel, for example, stable-3.7 for the latest release.
  - Installation Mode: Choose All namespaces on the cluster if you want the Operator to be available cluster-wide. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. It is recommended that you install the Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default.
  - Approval Strategy: Choose to approve either automatic or manual updates. The automatic update strategy is recommended.
- Select Install.
- After a short time, you will see the Operator installed successfully on the Installed Operators page.
Configuring Quay before deployment
The Operator can manage all the Project Quay components when deploying on OpenShift, and this is the default configuration. Alternatively, you can manage one or more components externally yourself, where you want more control over the setup, and then allow the Operator to manage the remaining components.
The standard pattern for configuring unmanaged components is:
- Create a config.yaml configuration file with the appropriate settings
- Create a Secret using the configuration file:
  $ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
- Create a QuayRegistry YAML file quayregistry.yaml, identifying the unmanaged components and also referencing the created Secret, for example:
  quayregistry.yaml:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    configBundleSecret: config-bundle-secret
    components:
      - kind: objectstorage
        managed: false
- Deploy the registry using the YAML file:
  $ oc create -n quay-enterprise -f quayregistry.yaml
Pre-configuring Quay for automation
Quay has a number of configuration options that support automation. These options can be set before deployment, to minimize the need to interact with the user interface.
Allowing the API to create the first user
Set the config option FEATURE_USER_INITIALIZE to true, so that you can use the API /api/v1/user/initialize to create the first user. This API endpoint does not require authentication, unlike all other registry API calls, which require an OAuth token generated by an OAuth application in an existing organization.
Once you have deployed Quay, you can use the API to create a user, for example, quayadmin, provided no other users have already been created. For more information, see the section on Creating the first user using the API.
Enabling general API access
Set the config option BROWSER_API_CALLS_XHR_ONLY to false to allow general access to the Quay registry API.
Adding a super user
While you cannot create a user until after deployment, it is convenient to ensure that the first user is an administrator with full permissions. It is easier to configure this in advance, using the SUPER_USERS configuration option.
Restricting user creation
Once you have configured a super user, you can restrict the ability to create new users to the super user group. Set FEATURE_USER_CREATION to false to restrict user creation.
Enabling new functionality
If you want to take advantage of the new functionality in Project Quay 3.7, enable some or all of the following features:
...
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: true
FEATURE_PROXY_CACHE: true
FEATURE_STORAGE_REPLICATION: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
...
Suggested configuration for automation
Create a config.yaml configuration file that includes the appropriate settings:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Deploying the Operator using the initial configuration
- Create a Secret using the configuration file:
  $ oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret
- Create a QuayRegistry YAML file quayregistry.yaml, referencing the created Secret, for example:
  quayregistry.yaml:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    configBundleSecret: init-config-bundle-secret
- Deploy the registry:
  $ oc create -n quay-enterprise -f quayregistry.yaml
- Create the first user, quayadmin, using the API
Configuring object storage
You need to configure object storage before installing Project Quay, irrespective of whether you are allowing the Operator to manage the storage or managing it yourself.
If you want the Operator to be responsible for managing storage, see the section on Managed storage for information on installing and configuring the NooBaa / RHOCS Operator.
If you are using a separate storage solution, set objectstorage as unmanaged when configuring the Operator. See the following section, Unmanaged storage, for details of configuring existing storage.
Unmanaged storage
Some configuration examples for unmanaged storage are provided in this section for convenience. See the Project Quay configuration guide for full details for setting up object storage.
AWS S3 storage
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
Google cloud storage
DISTRIBUTED_STORAGE_CONFIG:
  googleCloudStorage:
    - GoogleCloudStorage
    - access_key: GOOGQIMFB3ABCDEFGHIJKLMN
      bucket_name: quay-bucket
      secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - googleCloudStorage
Azure storage
DISTRIBUTED_STORAGE_CONFIG:
  azureStorage:
    - AzureStorage
    - azure_account_name: azure_account_name_here
      azure_container: azure_container_here
      storage_path: /datastorage/registry
      azure_account_key: azure_account_key_here
      sas_token: some/path/
      endpoint_url: https://[account-name].blob.core.usgovcloudapi.net (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - azureStorage
- The endpoint_url parameter for Azure storage is optional. If left blank, the endpoint_url will connect to the normal Azure region.
NooBaa unmanaged storage
- Create a NooBaa Object Bucket Claim in the console at Storage → Object Bucket Claims.
- Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key.
- Create a config.yaml configuration file, using the information for the Object Bucket Claim:
  DISTRIBUTED_STORAGE_CONFIG:
    default:
      - RHOCSStorage
      - access_key: WmrXtSGk8B3nABCDEFGH
        bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
        hostname: s3.openshift-storage.svc.cluster.local
        is_secure: true
        port: "443"
        secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
        storage_path: /datastorage/registry
  DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
  DISTRIBUTED_STORAGE_PREFERENCE:
    - default
Managed storage
If you want the Operator to manage object storage for Quay, your cluster needs to be capable of providing it via the ObjectBucketClaim API. There are multiple implementations of this API available, for instance, NooBaa in combination with Kubernetes PersistentVolumes or scalable storage backends like Ceph. Refer to the NooBaa documentation for more details on how to deploy this component.
Note
|
Object storage disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the RHOCS volume is currently not handled by the Operator. See the section below on resizing managed storage for more details. |
Configuring the database
Using an existing Postgres database
- Create a configuration file config.yaml with the necessary database fields:
  config.yaml:
  DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
- Create a Secret using the configuration file:
  $ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
- Create a QuayRegistry YAML file quayregistry.yaml which marks the postgres component as unmanaged and references the created Secret:
  quayregistry.yaml:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    configBundleSecret: config-bundle-secret
    components:
      - kind: postgres
        managed: false
- Deploy the registry as detailed in the following sections.
Database configuration
You configure the connection to the database using the required DB_URI field and optional connection arguments in the DB_CONNECTION_ARGS structure. Some key-value pairs defined under DB_CONNECTION_ARGS are generic while others are database-specific. In particular, SSL configuration depends on the database you are deploying, and examples for PostgreSQL and MySQL are given below.
Database URI
Field | Type | Description
---|---|---
DB_URI | String | The URI for accessing the database, including any credentials
Example:
postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
Database connection arguments
Field | Type | Description
---|---|---
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL
.autorollback | Boolean | Whether to use auto-rollback connections
.threadlocals | Boolean | Whether to use thread-local connections
PostgreSQL SSL connection arguments
A sample PostgreSQL SSL configuration is given below:
DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert
The sslmode option determines whether, or with what priority, a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:
- disable: only try a non-SSL connection
- allow: first try a non-SSL connection; if that fails, try an SSL connection
- prefer: (default) first try an SSL connection; if that fails, try a non-SSL connection
- require: only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified
- verify-ca: only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)
- verify-full: only try an SSL connection, verify that the server certificate is issued by a trusted CA, and verify that the requested server host name matches that in the certificate
More information on the valid arguments for PostgreSQL is available at https://www.postgresql.org/docs/current/libpq-connect.html.
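Putting the pieces together, a config.yaml fragment for an unmanaged PostgreSQL database with certificate verification might look like the following (the hostname, credentials, and certificate path are illustrative):
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert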
MySQL SSL connection arguments
A sample MySQL SSL configuration follows:
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at https://dev.mysql.com/doc/refman/8.0/en/connecting-using-uri-or-key-value-pairs.html.
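A comparable fragment for MySQL, assuming the mysql+pymysql URI scheme used by Quay (the hostname, credentials, and certificate path are illustrative):
DB_URI: mysql+pymysql://quayuser:quaypass@quay-server.example.com:3306/quay
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert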
Using the managed PostgreSQL
Recommendations:
- Database backups should be performed regularly using either the supplied tools on the Postgres image or your own backup infrastructure (see the sketch after this list). The Operator does not currently ensure the Postgres database is backed up.
- Restoring the Postgres database from a backup must be done using Postgres tools and procedures. Be aware that your Quay Pods should not be running while the database restore is in progress.
- Database disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the database volume is currently not handled by the Operator.
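As a minimal sketch of a backup, assuming the default pod naming shown later in this guide, you can run pg_dump inside the managed database pod and redirect the output to a local file. The database name, user, and password must match the DB_URI generated for your deployment, <pod-id> stands for the generated pod suffix, and depending on the image's authentication settings you may need to supply the password from the DB_URI:
$ oc -n quay-enterprise get pods | grep quay-database
$ oc -n quay-enterprise exec example-registry-quay-database-<pod-id> -- pg_dump -U <database-user> <database-name> > quay-backup.sql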
Configuring TLS and routes
Support for OpenShift Container Platform Edge-Termination Routes has been added by way of a new managed component, tls. This separates the route component from TLS and allows users to configure both separately. EXTERNAL_TLS_TERMINATION: true is the opinionated setting. Managed tls means that the default cluster wildcard cert is used. Unmanaged tls means that the user-provided cert/key pair will be injected into the Route.
The ssl.cert and ssl.key files are now moved to a separate, persistent Secret, which ensures that the cert/key pair is not regenerated upon every reconcile. They are now formatted as edge routes and mounted to the same directory in the Quay container.
Multiple permutations are possible when configuring TLS and Routes, but the following rules apply:
- If TLS is managed, then the route must also be managed
- If TLS is unmanaged, then you must supply certs, either with the config tool or directly in the config bundle
The following table outlines the valid options:
Option | Route | TLS | Certs provided | Result
---|---|---|---|---
My own load balancer handles TLS | Managed | Managed | No | Edge Route with default wildcard cert
Project Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certs mounted inside the pod
Project Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates are set inside the quay pod but route must be created manually
Note
|
Project Quay 3.6 does not support builders when TLS is managed by the Operator. |
Creating the config bundle secret with TLS cert, key pair:
To add your own TLS cert and key, include them in the config bundle secret as follows:
$ oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
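If you do not already have a certificate and key pair, a self-signed pair for testing can be generated with openssl (version 1.1.1 or later for the -addext flag); the hostname below is illustrative and must match the SERVER_HOSTNAME of your registry:
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout ssl.key -out ssl.cert \
    -subj "/CN=example-registry-quay-quay-enterprise.apps.user1.example.com" \
    -addext "subjectAltName=DNS:example-registry-quay-quay-enterprise.apps.user1.example.com"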
Configuring other components
Using external Redis
If you wish to use an external Redis database, set the component as unmanaged in the QuayRegistry instance:
- Create a configuration file config.yaml with the necessary Redis fields:
  BUILDLOGS_REDIS:
    host: quay-server.example.com
    password: strongpassword
    port: 6379
  USER_EVENTS_REDIS:
    host: quay-server.example.com
    password: strongpassword
    port: 6379
- Create a Secret using the configuration file:
  $ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
- Create a QuayRegistry YAML file quayregistry.yaml which marks the redis component as unmanaged and references the created Secret:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    configBundleSecret: config-bundle-secret
    components:
      - kind: redis
        managed: false
- Deploy the registry
Redis configuration fields
Build logs
Field | Type | Description
---|---|---
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching
.host | String | The hostname at which Redis is accessible
.port | Number | The port at which Redis is accessible
.password | String | The password for connecting to Redis
User events
Field | Type | Description
---|---|---
USER_EVENTS_REDIS | Object | Redis connection details for user event handling
.host | String | The hostname at which Redis is accessible
.port | Number | The port at which Redis is accessible
.password | String | The password for connecting to Redis
Example redis configuration
BUILDLOGS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
USER_EVENTS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
Disabling the Horizontal Pod Autoscaler
HorizontalPodAutoscalers have been added to the Clair, Quay, and Mirror pods, so that they now automatically scale during load spikes.
As HPA is configured by default to be managed, the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Quay via the Operator, or during rescheduling events.
If you wish to disable autoscaling or create your own HorizontalPodAutoscaler, simply specify the component as unmanaged in the QuayRegistry instance:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
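With the component unmanaged, you can then supply your own autoscaler. The following is a minimal sketch using the Kubernetes autoscaling/v2 API (use autoscaling/v2beta2 on older clusters) and the Quay deployment name created by the Operator; the replica bounds and CPU threshold are illustrative:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-registry-quay-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90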
Disabling Route Component
To prevent the Operator from creating a Route:
- Mark the component as unmanaged in the QuayRegistry:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    components:
      - kind: route
        managed: false
- Specify that you want Quay to handle TLS in the configuration, by editing the config.yaml file:
  config.yaml:
  ...
  EXTERNAL_TLS_TERMINATION: false
  ...
  SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
  ...
  PREFERRED_URL_SCHEME: https
  ...
If you do not configure the unmanaged Route correctly, you will see an error similar to the following:
{
  {
    "kind":"QuayRegistry",
    "namespace":"quay-enterprise",
    "name":"example-registry",
    "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa",
    "apiVersion":"quay.redhat.com/v1",
    "resourceVersion":"2418527"
  },
  "reason":"ConfigInvalid",
  "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields"
}
Note
|
Disabling the default route means you are responsible for creating a Route, Service, or Ingress in order to access the Quay instance, and whatever DNS you use must match the SERVER_HOSTNAME in the Quay config. |
Unmanaged monitoring
If you install the Quay Operator in a single namespace, the monitoring component is automatically set to 'unmanaged'. To enable monitoring in this scenario, see the section Enabling monitoring when Operator is installed in a single namespace.
To disable monitoring explicitly:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: monitoring
      managed: false
Unmanaged mirroring
To disable mirroring explicitly:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirror
      managed: false
Unmanaged Clair
For more information, see Unmanaged Clair configuration
Deploying Quay using the Quay Operator
The Operator can be deployed from the command line or from the OpenShift console, but the fundamental steps are the same.
Deploying Project Quay from the command line
- Create a namespace, for example, quay-enterprise.
- Create a secret for the config bundle, if you want to pre-configure any aspects of the deployment.
- Create a QuayRegistry custom resource in a file called quayregistry.yaml:
  - For a minimal deployment, using all the defaults:
    quayregistry.yaml:
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
  - If you want to have some components unmanaged, add this information in the spec field. For example, a minimal deployment might look like:
    quayregistry.yaml:
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    spec:
      components:
        - kind: clair
          managed: false
        - kind: horizontalpodautoscaler
          managed: false
        - kind: mirror
          managed: false
        - kind: monitoring
          managed: false
  - If you have created a config bundle, for example, init-config-bundle-secret, reference it in the quayregistry.yaml file:
    quayregistry.yaml:
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    spec:
      configBundleSecret: init-config-bundle-secret
- Create the QuayRegistry in the specified namespace:
  $ oc create -n quay-enterprise -f quayregistry.yaml
- See the section Monitoring and debugging the deployment process for information on how to track the progress of the deployment.
- Wait until the status.registryEndpoint is populated:
  $ oc get quayregistry -n quay-enterprise example-registry -o jsonpath="{.status.registryEndpoint}" -w
Viewing created components using the command line
Use the oc get pods command to view the deployed components:
$ oc get pods -n quay-enterprise
NAME READY STATUS RESTARTS AGE
example-registry-clair-app-5ffc9f77d6-jwr9s 1/1 Running 0 3m42s
example-registry-clair-app-5ffc9f77d6-wgp7d 1/1 Running 0 3m41s
example-registry-clair-postgres-54956d6d9c-rgs8l 1/1 Running 0 3m5s
example-registry-quay-app-79c6b86c7b-8qnr2 1/1 Running 4 3m42s
example-registry-quay-app-79c6b86c7b-xk85f 1/1 Running 4 3m41s
example-registry-quay-app-upgrade-5kl5r 0/1 Completed 4 3m50s
example-registry-quay-config-editor-597b47c995-svqrl 1/1 Running 0 3m42s
example-registry-quay-database-b466fc4d7-tfrnx 1/1 Running 2 3m42s
example-registry-quay-mirror-6d9bd78756-6lj6p 1/1 Running 0 2m58s
example-registry-quay-mirror-6d9bd78756-bv6gq 1/1 Running 0 2m58s
example-registry-quay-postgres-init-dzbmx 0/1 Completed 0 3m43s
example-registry-quay-redis-8bd67b647-skgqx 1/1 Running 0 3m42s
Horizontal Pod Autoscaling (HPA)
A default deployment shows the following running pods:
- Two pods for the Quay application itself (example-registry-quay-app-*)
- One Redis pod for Quay logging (example-registry-quay-redis-*)
- One database pod for PostgreSQL used by Quay for metadata storage (example-registry-quay-database-*)
- One pod for the Quay config editor (example-registry-quay-config-editor-*)
- Two Quay mirroring pods (example-registry-quay-mirror-*)
- Two pods for the Clair application (example-registry-clair-app-*)
- One PostgreSQL pod for Clair (example-registry-clair-postgres-*)
As HPA is configured by default to be managed, the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Quay via the Operator, or during rescheduling events.
$ oc get hpa -n quay-enterprise
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d
example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d
example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d
Using the API to create the first user
When using the API to create the first user, the following conditions must be met:
- The config option FEATURE_USER_INITIALIZE must be set to true
- No users can already exist in the database
For more information on pre-configuring the deployment, see the section Pre-configuring Quay for automation.
Invoking the API
Using the status.registryEndpoint URL, invoke the /api/v1/user/initialize API, passing in the username, password, and email address. You can also request an OAuth token by specifying "access_token": true.
$ curl -X POST -k https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass123", "email": "quayadmin@example.com", "access_token": true}'
{"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"quayadmin@example.com","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"}
If successful, the method returns an object with the username, email and encrypted password. If a user already exists in the database, an error is returned:
$ curl -X POST -k https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayuser2", "password":"quaypass123", "email": "quayuser2@example.com"}'
{"message":"Cannot initialize user in a non-empty database"}
The password must be at least 8 characters and contain no whitespace:
$ curl -X POST -k https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"pass123", "email": "quayadmin@example.com"}'
{"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}
Using the OAuth token
You can now invoke the rest of the Quay API specifying the returned OAuth code. For example, to get a list of the current users:
$ curl -X GET -k -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/
{
"users": [
{
"kind": "user",
"name": "quayadmin",
"username": "quayadmin",
"email": "quayadmin@example.com",
"verified": true,
"avatar": {
"name": "quayadmin",
"hash": "3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c",
"color": "#e7ba52",
"kind": "user"
},
"super_user": true,
"enabled": true
}
]
}
In this instance, the details for the quayadmin user are returned, as it is the only user that has been created so far.
Create organization
To create an organization, use a POST call to the api/v1/organization/ endpoint:
$ curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{"name": "testorg", "email": "testorg@example.com"}'
"Created"
Get organization details
To retrieve the details of the organization you created:
$ curl -X GET -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg
{
"name": "testorg",
"email": "testorg@example.com",
"avatar": {
"name": "testorg",
"hash": "5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8",
"color": "#a55194",
"kind": "user"
},
"is_admin": true,
"is_member": true,
"teams": {
"owners": {
"name": "owners",
"description": "",
"role": "admin",
"avatar": {
"name": "owners",
"hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90",
"color": "#c7c7c7",
"kind": "team"
},
"can_view": true,
"repo_count": 0,
"member_count": 1,
"is_synced": false
}
},
"ordered_teams": [
"owners"
],
"invoice_email": false,
"invoice_email_address": null,
"tag_expiration_s": 1209600,
"is_free_account": true
}
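You can continue to drive the registry with the same OAuth token. For example, assuming the standard Quay repository-creation endpoint, a new repository can be created under the organization (the repository name and visibility are illustrative):
$ curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/repository --data '{"namespace": "testorg", "repository": "testrepo", "visibility": "private", "description": "Test repository"}'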
Monitoring and debugging the deployment process
Project Quay 3.6 provides new functionality to troubleshoot problems during the deployment phase. The status in the QuayRegistry object can help you monitor the health of the components during the deployment and help you debug any problems that may arise:
$ oc get quayregistry -n quay-enterprise -o yaml
Immediately after deployment, the QuayRegistry object will show the basic configuration:
apiVersion: v1
items:
- apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    creationTimestamp: "2021-09-14T10:51:22Z"
    generation: 3
    name: example-registry
    namespace: quay-enterprise
    resourceVersion: "50147"
    selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry
    uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e
  spec:
    components:
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    configBundleSecret: example-registry-config-bundle-kt55s
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Use the oc get pods command to view the current state of the deployed components:
$ oc get pods -n quay-enterprise
NAME READY STATUS RESTARTS AGE
example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s
example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s
example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s
example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s
example-registry-quay-config-editor-5f646cbcb7-lbnc2 0/1 ContainerCreating 0 17s
example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s
example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s
example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s
example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s
example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s
While the deployment is in progress, the QuayRegistry object will show the current status. In this instance, database migrations are taking place, and other components are waiting until this completes.
status:
  conditions:
  - lastTransitionTime: "2021-09-14T10:52:04Z"
    lastUpdateTime: "2021-09-14T10:52:04Z"
    message: all objects created/updated successfully
    reason: ComponentsCreationSuccess
    status: "False"
    type: RolloutBlocked
  - lastTransitionTime: "2021-09-14T10:52:05Z"
    lastUpdateTime: "2021-09-14T10:52:05Z"
    message: running database migrations
    reason: MigrationsInProgress
    status: "False"
    type: Available
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-btbkcg8dc9
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
  lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC
  unhealthyComponents:
    clair:
    - lastTransitionTime: "2021-09-14T10:51:32Z"
      lastUpdateTime: "2021-09-14T10:51:32Z"
      message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.'
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    - lastTransitionTime: "2021-09-14T10:51:32Z"
      lastUpdateTime: "2021-09-14T10:51:32Z"
      message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.'
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    mirror:
    - lastTransitionTime: "2021-09-14T10:51:32Z"
      lastUpdateTime: "2021-09-14T10:51:32Z"
      message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.'
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
When the deployment process finishes successfully, the status in the QuayRegistry object shows no unhealthy components:
status:
  conditions:
  - lastTransitionTime: "2021-09-14T10:52:36Z"
    lastUpdateTime: "2021-09-14T10:52:36Z"
    message: all registry component healthchecks passing
    reason: HealthChecksPassing
    status: "True"
    type: Available
  - lastTransitionTime: "2021-09-14T10:52:46Z"
    lastUpdateTime: "2021-09-14T10:52:46Z"
    message: all objects created/updated successfully
    reason: ComponentsCreationSuccess
    status: "False"
    type: RolloutBlocked
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-hg7gg7h57m
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
  currentVersion: 3.6.0
  lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC
  registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org
  unhealthyComponents: {}
Deploying Project Quay from the OpenShift console
- Create a namespace, for example, quay-enterprise.
- Select Operators → Installed Operators, then select the Quay Operator to navigate to the Operator detail view.
- Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'.
- Optionally change the 'Name' of the QuayRegistry. This will affect the hostname of the registry. All other fields have been populated with defaults.
- Click 'Create' to submit the QuayRegistry to be deployed by the Quay Operator.
- You should be redirected to the QuayRegistry list view. Click on the QuayRegistry you just created to see the details view.
- Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.
Using the Quay UI to create the first user
Note
|
This procedure assumes that the QuayRegistry has already been deployed. |
- In the OpenShift console, navigate to Operators → Installed Operators, with the appropriate namespace / project.
- Click on the newly installed QuayRegistry to view the details.
- Once the Registry Endpoint has a value, navigate to this URL in your browser.
- Select 'Create Account' in the Quay registry UI to create a user.
- Enter details for username, password, and email, and click Create Account.
- You are automatically logged in to the Quay registry.
Configuring Quay on OpenShift
Once deployed, you can configure the Quay application by editing the Quay configuration bundle secret spec.configBundleSecret, and you can also change the managed status of components in the spec.components object of the QuayRegistry resource.
Alternatively, you can use the config editor UI to configure the Quay application, as described in the section Using the config tool to reconfigure Quay on OpenShift.
Editing the config bundle secret in the OpenShift console
- On the Quay Registry overview screen, click the link for the Config Bundle Secret.
- To edit the secret, click Actions → Edit Secret.
- Modify the configuration and save the changes.
- Monitor the deployment to ensure successful completion and that the configuration changes have taken effect.
Determining QuayRegistry endpoints and secrets
You can examine the QuayRegistry resource, using oc describe quayregistry or oc get quayregistry -o yaml, to determine the current endpoints and secrets:
$ oc get quayregistry example-registry -n quay-enterprise -o yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  ...
  name: example-registry
  namespace: quay-enterprise
  ...
spec:
  components:
  - kind: quay
    managed: true
  ...
  - kind: clairpostgres
    managed: true
  configBundleSecret: init-config-bundle-secret
status:
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-fg2gdgtm24
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.gcp.quaydev.org
  currentVersion: 3.7.0
  lastUpdated: 2022-05-11 13:28:38.199476938 +0000 UTC
  registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org
The relevant fields are:
- registryEndpoint: The URL for your registry, for browser access to the registry UI, and for the registry API endpoint
- configBundleSecret: The config bundle secret, containing the config.yaml file and any SSL certs
- configEditorEndpoint: The URL for the config editor tool, for browser access to the config tool, and for the configuration API
- configEditorCredentialsSecret: The secret containing the username (typically quayconfig) and the password for the config editor tool
To determine the username and password for the config editor tool:
- Retrieve the secret:
  $ oc get secret -n quay-enterprise example-registry-quay-config-editor-credentials-fg2gdgtm24 -o yaml
  apiVersion: v1
  data:
    password: SkZwQkVKTUN0a1BUZmp4dA==
    username: cXVheWNvbmZpZw==
  kind: Secret
- Decode the username:
  $ echo 'cXVheWNvbmZpZw==' | base64 --decode
  quayconfig
- Decode the password:
  $ echo 'SkZwQkVKTUN0a1BUZmp4dA==' | base64 --decode
  JFpBEJMCtkPTfjxt
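The same values can be retrieved and decoded in one step with a jsonpath query, for example:
$ oc get secret -n quay-enterprise example-registry-quay-config-editor-credentials-fg2gdgtm24 -o jsonpath='{.data.password}' | base64 --decode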
Downloading the existing configuration
There are a number of methods for accessing the current configuration:
- Using the config editor endpoint, specifying the username and password for the config editor:
  $ curl -k -u quayconfig:JFpBEJMCtkPTfjxt https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org/api/v1/config
{ "config.yaml": { "ALLOW_PULLS_WITHOUT_STRICT_LOGGING": false, "AUTHENTICATION_TYPE": "Database", ... "USER_RECOVERY_TOKEN_LIFETIME": "30m" }, "certs": { "extra_ca_certs/service-ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURVVENDQWptZ0F3SUJBZ0lJRE9kWFhuUXFjMUF3RFFZSktvWklodmNOQVFFTEJRQXdOakUwTURJR0ExVUUKQXd3cmIzQmxibk5vYVdaMExYTmxjblpwWTJVdGMyVnlkbWx1WnkxemFXZHVaWEpBTVRZek1UYzNPREV3TXpBZQpGdzB5TVRBNU1UWXdOelF4TkRKYUZ..." } }
- Using the config bundle secret:
  - Get the secret data:
    $ oc get secret -n quay-enterprise init-config-bundle-secret -o jsonpath='{.data}'
    Sample output:
    { "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" }
  - Decode the data:
    $ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode
    FEATURE_USER_INITIALIZE: true
    BROWSER_API_CALLS_XHR_ONLY: false
    SUPER_USERS:
    - quayadmin
    FEATURE_USER_CREATION: false
    FEATURE_QUOTA_MANAGEMENT: true
    FEATURE_PROXY_CACHE: true
    FEATURE_BUILD_SUPPORT: true
    DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
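Alternatively, the config.yaml can be extracted and decoded in a single command, writing it straight to a local file:
$ oc get secret -n quay-enterprise init-config-bundle-secret -o "jsonpath={$.data['config\.yaml']}" | base64 --decode > config.yaml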
Using the config bundle to configure custom SSL certs
You can configure custom SSL certs either before initial deployment or after Project Quay is deployed on OpenShift, by creating or updating the config bundle secret. If you are adding the cert(s) to an existing deployment, you must include the existing config.yaml in the new config bundle secret, even if you are not making any configuration changes.
Set TLS to unmanaged
In your QuayRegistry YAML, set the tls component to managed: false:
- kind: tls
  managed: false
In the events, you should see that the change is blocked until you set up the appropriate config:
- lastTransitionTime: '2022-03-28T12:56:49Z'
  lastUpdateTime: '2022-03-28T12:56:49Z'
  message: >-
    required component `tls` marked as unmanaged, but `configBundleSecret`
    is missing necessary fields
  reason: ConfigInvalid
  status: 'True'
Add certs to config bundle
- Create the secret using embedded data or using files:
  - Embed the configuration details directly in the Secret resource YAML file, for example:
    custom-ssl-config-bundle.yaml:
    apiVersion: v1
    kind: Secret
    metadata:
      name: custom-ssl-config-bundle-secret
      namespace: quay-enterprise
    stringData:
      config.yaml: |
        FEATURE_USER_INITIALIZE: true
        BROWSER_API_CALLS_XHR_ONLY: false
        SUPER_USERS:
        - quayadmin
        FEATURE_USER_CREATION: false
        FEATURE_QUOTA_MANAGEMENT: true
        FEATURE_PROXY_CACHE: true
        FEATURE_BUILD_SUPPORT: true
        DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
      extra_ca_cert_my-custom-ssl.crt: |
        -----BEGIN CERTIFICATE-----
        MIIDsDCCApigAwIBAgIUCqlzkHjF5i5TXLFy+sepFrZr/UswDQYJKoZIhvcNAQEL
        BQAwbzELMAkGA1UEBhMCSUUxDzANBgNVBAgMBkdBTFdBWTEPMA0GA1UEBwwGR0FM
        ....
        -----END CERTIFICATE-----
    Next, create the secret from the YAML file:
    $ oc create -f custom-ssl-config-bundle.yaml
  - Alternatively, you can create files containing the desired information, and then create the secret from those files:
    $ oc create secret generic custom-ssl-config-bundle-secret \
        --from-file=config.yaml \
        --from-file=extra_ca_cert_my-custom-ssl.crt=my-custom-ssl.crt
- Create or update the QuayRegistry YAML file quayregistry.yaml, referencing the created Secret, for example:
  quayregistry.yaml:
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    configBundleSecret: custom-ssl-config-bundle-secret
- Deploy or update the registry using the YAML file:
  $ oc apply -f quayregistry.yaml
Using the config tool to reconfigure Quay on OpenShift
Accessing the config editor
In the Details section of the QuayRegistry screen, the endpoint for the config editor is available, along with a link to the secret containing the credentials for logging into the config editor:
Retrieving the config editor credentials
- Click on the link for the config editor secret.
- In the Data section of the Secret details screen, click Reveal values to see the credentials for logging in to the config editor.
Logging in to the config editor
Browse to the config editor endpoint and then enter the username, typically quayconfig, and the corresponding password to access the config tool:
Changing configuration
In this example of updating the configuration, an expiration period for the time machine functionality is added via the config editor tool:
- Add an expiration period, for example 4w, for the time machine functionality.
- Select Validate Configuration Changes to ensure that the changes are valid.
- Apply the changes by pressing the Reconfigure Quay button.
- The config tool notifies you that the change has been submitted to Quay.
Note
|
Reconfiguring Project Quay using the config tool UI can lead to the registry being unavailable for a short time, while the updated configuration is applied. |
Monitoring reconfiguration in the UI
QuayRegistry resource
After reconfiguring the Operator, you can track the progress of the redeployment in the YAML tab for the specific instance of QuayRegistry, in this case, example-registry:
Each time the status changes, you will be prompted to reload the data to see the updated version. Eventually, the Operator will reconcile the changes, and there will be no unhealthy components reported.
Events
The Events tab for the QuayRegistry shows some events related to the redeployment:
Streaming events, for all resources in the namespace that are affected by the reconfiguration, are available in the OpenShift console under Home → Events:
Accessing updated information after reconfiguration
Accessing the updated config tool credentials in the UI
Since a new pod has been created for the config tool, a new secret will have been created, and you will need to use the updated password when you next attempt to log in:
Accessing the updated config.yaml in the UI
Use the config bundle to access the updated config.yaml file.
- On the QuayRegistry details screen, click on the Config Bundle Secret.
- In the Data section of the Secret details screen, click Reveal values to see the config.yaml file.
- Check that the change has been applied. In this case, 4w should be in the list of TAG_EXPIRATION_OPTIONS:
  ...
  SERVER_HOSTNAME: example-quay-openshift-operators.apps.docs.quayteam.org
  SETUP_COMPLETE: true
  SUPER_USERS:
  - quayadmin
  TAG_EXPIRATION_OPTIONS:
  - 2w
  - 4w
  ...
Custom SSL certificates UI
The config tool can be used to load custom certificates to facilitate access to resources such as external databases. Select the custom certs to be uploaded, ensuring that they are in PEM format, with an extension .crt.
The config tool also displays a list of any uploaded certificates. Once you upload your custom SSL cert, it will appear in the list:
External Access to the Registry
When running on OpenShift, the Routes API is available and will automatically be used as a managed component. After creating the QuayRegistry, the external access point can be found in the status block of the QuayRegistry:
status:
  registryEndpoint: some-quay.my-namespace.apps.mycluster.com
Quay Operator features
Console monitoring and alerting
Project Quay provides support for monitoring Quay instances that were deployed using the Operator, from inside the OpenShift console. The new monitoring features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods.
Note
|
To enable the monitoring features, the Operator must be installed in "all namespaces" mode. |
Dashboard
In the OpenShift console, navigate to Monitoring → Dashboards and search for the dashboard of your desired Quay registry instance:
The dashboard shows various statistics including:
- The number of Organizations, Repositories, Users, and Robot accounts
- CPU Usage and Max Memory Usage
- Rates of Image Pulls and Pushes, and Authentication requests
- API request rate
- Latencies
Metrics
You can see the underlying metrics behind the Quay dashboard by accessing Monitoring → Metrics in the UI. In the Expression field, enter the text quay_ to see the list of metrics available:
Select a sample metric, for example, quay_org_rows:
This metric shows the number of organizations in the registry, and it is directly surfaced in the dashboard as well.
Alerting
An alert is raised if the Quay pods restart too often. The alert can be configured by accessing the Alerting rules tab from Monitoring → Alerting in the console UI and searching for the Quay-specific alert:
Select the QuayPodFrequentlyRestarting rule detail to configure the alert:
Manually updating the vulnerability databases for Clair in an air-gapped OpenShift cluster
Clair utilizes packages called updaters that encapsulate the logic of fetching and parsing different vulnerability databases. Clair supports running updaters in a different environment and importing the results. This is aimed at supporting installations that disallow the Clair cluster from talking to the Internet directly.
To manually update the vulnerability databases for Clair in an air-gapped OpenShift cluster, use the following steps:
- Obtain the clairctl program
- Retrieve the Clair config
- Use clairctl to export the updaters bundle from a Clair instance that has access to the internet
- Update the Clair config in the air-gapped OpenShift cluster to allow access to the Clair database
- Transfer the updaters bundle from the system with internet access, to make it available inside the air-gapped environment
- Use clairctl to import the updaters bundle into the Clair instance for the air-gapped OpenShift cluster
Obtaining clairctl
To obtain the clairctl program from a Clair deployment in an OpenShift cluster, use the oc cp command, for example:
$ oc -n quay-enterprise cp example-registry-clair-app-64dd48f866-6ptgw:/usr/bin/clairctl ./clairctl
$ chmod u+x ./clairctl
For a standalone Clair deployment, use the podman cp command, for example:
$ sudo podman cp clairv4:/usr/bin/clairctl ./clairctl
$ chmod u+x ./clairctl
Retrieving the Clair config
Clair on OpenShift config
To retrieve the configuration file for a Clair instance deployed using the OpenShift Operator, retrieve and decode the config secret using the appropriate namespace, and save it to file, for example:
$ kubectl get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={$.data['config\.yaml']}" | base64 -d > clair-config.yaml
An excerpt from a Clair configuration file is shown below:
http_listen_addr: :8080
introspection_addr: ""
log_level: info
indexer:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  scanner:
    package: {}
    dist: {}
    repo: {}
  airgap: false
matcher:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  max_conn_pool: 100
  indexer_addr: ""
  migrations: true
  period: null
  disable_updaters: false
notifier:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  migrations: true
  indexer_addr: ""
  matcher_addr: ""
  poll_interval: 5m
  delivery_interval: 1m
...
Standalone Clair config
For standalone Clair deployments, the config file is the one specified in the CLAIR_CONF environment variable in the podman run command, for example:
$ sudo podman run -d --rm --name clairv4 \
    -p 8081:8081 -p 8089:8089 \
    -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo \
    -v /etc/clairv4/config:/clair:Z \
    quay.io/projectquay/clair:qui-gon
Exporting the updaters bundle
From a Clair instance that has access to the internet, use clairctl with the appropriate configuration file to export the updaters bundle:
$ ./clairctl --config ./config.yaml export-updaters updates.gz
Configuring access to the Clair database in the air-gapped OpenShift cluster
- Use kubectl to determine the Clair database service:
  $ kubectl get svc -n quay-enterprise
  NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
  example-registry-clair-app        ClusterIP   172.30.224.93   <none>        80/TCP,8089/TCP   4d21h
  example-registry-clair-postgres   ClusterIP   172.30.246.88   <none>        5432/TCP          4d21h
  ...
- Forward the Clair database port so that it is accessible from the local machine, for example:
  $ kubectl port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
- Update the Clair configuration file, replacing the value of the host in the multiple connstring fields with localhost, for example:
  clair-config.yaml:
  ...
  connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  ...
Note
|
As an alternative to using kubectl port-forward, you can use kubefwd instead. With this method, there is no need to modify the connstring fields in the Clair configuration file to use localhost. |
Importing the updaters bundle into the air-gapped environment
After transferring the updaters bundle to the air-gapped environment, use clairctl to import the bundle into the Clair database deployed by the OpenShift Operator:
$ ./clairctl --config ./clair-config.yaml import-updaters updates.gz
FIPS readiness and compliance
FIPS (the Federal Information Processing Standard developed by the National Institute of Standards and Technology, NIST) is regarded as the gold standard for securing and encrypting sensitive data, particularly in heavily regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux and Red Hat OpenShift Container Platform support this standard by providing a FIPS mode, in which the system only allows usage of certain, FIPS-validated cryptographic modules, like openssl. This ensures FIPS compliance.
Project Quay supports running on RHEL and OCP in FIPS mode in production since version 3.5. Furthermore, Project Quay itself also commits to exclusively using cryptography libraries that are validated or are in the process of being validated by NIST. Project Quay 3.5 has pending FIPS 140-2 validation based on the RHEL 8.3 cryptography libraries. As soon as that validation is finalized, Project Quay will be officially FIPS compliant.
Advanced Concepts
Deploying Quay on infrastructure nodes
By default, Quay-related pods are placed on arbitrary worker nodes when using the Operator to deploy the registry. The OpenShift Container Platform documentation shows how to use machine sets to configure nodes to only host infrastructure components (see https://docs.openshift.com/container-platform/4.7/machine_management/creating-infrastructure-machinesets.html).
If you are not using OCP MachineSet resources to deploy infra nodes, this section shows you how to manually label and taint nodes for infrastructure purposes.
Once you have configured your infrastructure nodes, either manually or using machine sets, you can then control the placement of Quay pods on these nodes using node selectors and tolerations.
Label and taint nodes for infrastructure use
In the cluster used in this example, there are three master nodes and six worker nodes:
$ oc get nodes
NAME                                               STATUS   ROLES    AGE     VERSION
user1-jcnp6-master-0.c.quay-devel.internal         Ready    master   3h30m   v1.20.0+ba45583
user1-jcnp6-master-1.c.quay-devel.internal         Ready    master   3h30m   v1.20.0+ba45583
user1-jcnp6-master-2.c.quay-devel.internal         Ready    master   3h30m   v1.20.0+ba45583
user1-jcnp6-worker-b-65plj.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal   Ready    worker   3h22m   v1.20.0+ba45583
user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal   Ready    worker   3h21m   v1.20.0+ba45583
Label the final three worker nodes for infrastructure use:
$ oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=
Now, when you list the nodes in the cluster, the last 3 worker nodes will have an added role of infra
:
$ oc get nodes
NAME                                               STATUS   ROLES          AGE     VERSION
user1-jcnp6-master-0.c.quay-devel.internal         Ready    master         4h14m   v1.20.0+ba45583
user1-jcnp6-master-1.c.quay-devel.internal         Ready    master         4h15m   v1.20.0+ba45583
user1-jcnp6-master-2.c.quay-devel.internal         Ready    master         4h14m   v1.20.0+ba45583
user1-jcnp6-worker-b-65plj.c.quay-devel.internal   Ready    worker         4h6m    v1.20.0+ba45583
user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal   Ready    worker         4h5m    v1.20.0+ba45583
user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal   Ready    worker         4h5m    v1.20.0+ba45583
user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal   Ready    infra,worker   4h6m    v1.20.0+ba45583
user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal   Ready    infra,worker   4h6m    v1.20.0+ba45583
user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal   Ready    infra,worker   4h6m    v1.20.0+ba45583
With an infra node being assigned as a worker, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and then add tolerations for the pods you want to control.
$ oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
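To verify that the labels and taints were applied as expected, you can inspect the nodes directly. The following commands are illustrative and use one of the example node names from above:
$ oc get nodes -l node-role.kubernetes.io/infra
$ oc describe node user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal | grep Taints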
Create a Project with node selector and toleration
If you have already deployed Quay using the Quay Operator, remove the installed operator and any specific namespace(s) you created for the deployment.
Create a Project resource, specifying a node selector and toleration as shown in the following example:
kind: Project
apiVersion: project.openshift.io/v1
metadata:
  name: quay-registry
  annotations:
    openshift.io/node-selector: 'node-role.kubernetes.io/infra='
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Exists", "effect": "NoSchedule", "key": "node-role.kubernetes.io/infra"}]
Use the oc apply
command to create the project:
$ oc apply -f quay-registry.yaml
project.project.openshift.io/quay-registry created
Any subsequent resources created in the quay-registry
namespace should now be scheduled on the dedicated infrastructure nodes.
Install the Quay Operator in the namespace
When installing the Quay Operator, specify the appropriate project namespace explicitly, in this case quay-registry
. This will result in the operator pod itself landing on one of the three infrastructure nodes:
$ oc get pods -n quay-registry -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE
quay-operator.v3.4.1-6f6597d8d8-bd4dp   1/1     Running   0          30s   10.131.0.16   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
Create the registry
Create the registry as explained earlier, and then wait for the deployment to be ready. When you list the Quay pods, you should now see that they have only been scheduled on the three nodes that you have labelled for infrastructure purposes:
$ oc get pods -n quay-registry -o wide
NAME                                                    READY   STATUS      RESTARTS   AGE     IP            NODE
example-registry-clair-app-789d6d984d-gpbwd             1/1     Running     1          5m57s   10.130.2.80   user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
example-registry-clair-postgres-7c8697f5-zkzht          1/1     Running     0          4m53s   10.129.2.19   user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-app-56dd755b6d-glbf7              1/1     Running     1          5m57s   10.129.2.17   user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-config-editor-7bf9bccc7b-dpc6d    1/1     Running     0          5m57s   10.131.0.23   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
example-registry-quay-database-8dc7cfd69-dr2cc          1/1     Running     0          5m43s   10.129.2.18   user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-mirror-78df886bcc-v75p9           1/1     Running     0          5m16s   10.131.0.24   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
example-registry-quay-postgres-init-8s8g9               0/1     Completed   0          5m54s   10.130.2.79   user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
example-registry-quay-redis-5688ddcdb6-ndp4t            1/1     Running     0          5m56s   10.130.2.78   user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
quay-operator.v3.4.1-6f6597d8d8-bd4dp                   1/1     Running     0          22m     10.131.0.16   user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
Enabling monitoring when Operator is installed in a single namespace
When Project Quay Operator is installed in a single namespace, the monitoring component is unmanaged. To configure monitoring, you need to enable it for user-defined namespaces in OpenShift Container Platform. For more information, see the OCP documentation for Configuring the monitoring stack and Enabling monitoring for user-defined projects.
The following steps show you how to configure monitoring for Quay, based on the OCP documentation.
Creating a cluster monitoring config map
-
Check whether the
cluster-monitoring-config
ConfigMap object exists:
$ oc -n openshift-monitoring get configmap cluster-monitoring-config
Error from server (NotFound): configmaps "cluster-monitoring-config" not found
-
If the ConfigMap object does not exist:
-
Create the following YAML manifest. In this example, the file is called
cluster-monitoring-config.yaml
:
$ cat cluster-monitoring-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
-
Create the ConfigMap object:
$ oc apply -f cluster-monitoring-config.yaml
configmap/cluster-monitoring-config created

$ oc -n openshift-monitoring get configmap cluster-monitoring-config
NAME                        DATA   AGE
cluster-monitoring-config   1      12s
-
Creating a user-defined workload monitoring config map
-
Check whether the
user-workload-monitoring-config
ConfigMap object exists:
$ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
Error from server (NotFound): configmaps "user-workload-monitoring-config" not found
-
If the ConfigMap object does not exist:
-
Create the following YAML manifest. In this example, the file is called
user-workload-monitoring-config.yaml
:
$ cat user-workload-monitoring-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
-
Create the ConfigMap object:
$ oc apply -f user-workload-monitoring-config.yaml
configmap/user-workload-monitoring-config created
-
Enable monitoring for user-defined projects
-
Check whether monitoring for user-defined projects is running:
$ oc get pods -n openshift-user-workload-monitoring
No resources found in openshift-user-workload-monitoring namespace.
-
Edit the
cluster-monitoring-config
ConfigMap:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
-
Set
enableUserWorkload: true
to enable monitoring for user-defined projects on the cluster:
apiVersion: v1
data:
  config.yaml: |
    enableUserWorkload: true
kind: ConfigMap
metadata:
  annotations:
-
Save the file to apply the changes and then check that the appropriate pods are running:
$ oc get pods -n openshift-user-workload-monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-6f96b4b8f8-gq6rl   2/2     Running   0          15s
prometheus-user-workload-0             5/5     Running   1          12s
prometheus-user-workload-1             5/5     Running   1          12s
thanos-ruler-user-workload-0           3/3     Running   0          8s
thanos-ruler-user-workload-1           3/3     Running   0          8s
Create a Service object to expose Quay metrics
-
Create a YAML file for the Service object:
$ cat quay-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    quay-component: monitoring
    quay-operator/quayregistry: example-registry
  name: example-registry-quay-metrics
  namespace: quay-enterprise
spec:
  ports:
  - name: quay-metrics
    port: 9091
    protocol: TCP
    targetPort: 9091
  selector:
    quay-component: quay-app
    quay-operator/quayregistry: example-registry
  type: ClusterIP
-
Create the Service object:
$ oc apply -f quay-service.yaml
service/example-registry-quay-metrics created
Create a ServiceMonitor object
Configure OpenShift Monitoring to scrape the metrics by creating a ServiceMonitor resource.
-
Create a YAML file for the ServiceMonitor resource:
$ cat quay-service-monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    quay-operator/quayregistry: example-registry
  name: example-registry-quay-metrics-monitor
  namespace: quay-enterprise
spec:
  endpoints:
  - port: quay-metrics
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      quay-component: monitoring
-
Create the ServiceMonitor:
$ oc apply -f quay-service-monitor.yaml
servicemonitor.monitoring.coreos.com/example-registry-quay-metrics-monitor created
View the metrics in OpenShift
You can access the metrics in the OpenShift console under Monitoring → Metrics. In the Expression field, enter the text quay_
to see the list of metrics available:
For example, if you have added users to your registry, select the quay_users_rows
metric:
Resizing Managed Storage
The Quay Operator creates default object storage using the defaults provided by RHOCS when creating a NooBaa
object (50 GiB). There are two ways to extend this storage: you can resize an existing PVC or add more PVCs to a new storage pool.
Resize Noobaa PVC
-
Log into the OpenShift console and select
Storage
→Persistent Volume Claims
. -
Select the
PersistentVolumeClaim
named likenoobaa-default-backing-store-noobaa-pvc-*
. -
From the Action menu, select
Expand PVC
. -
Enter the new size of the Persistent Volume Claim and select
Expand
.
After a few minutes (depending on the size of the PVC), the expanded size should be reflected in the PVC’s Capacity
field.
Note
|
Expanding CSI volumes is a Technology Preview feature only. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/storage/expanding-persistent-volumes. |
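If you prefer the command line to the console, the same expansion can be requested by patching the PVC’s requested storage. This is a sketch only: the PVC name below is a placeholder for the noobaa-default-backing-store-noobaa-pvc-* claim in your cluster, and the underlying storage class must support volume expansion.
$ oc -n openshift-storage patch pvc noobaa-default-backing-store-noobaa-pvc-<suffix> \
    -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'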
Add Another Storage Pool
-
Log into the OpenShift console and select
Networking
→Routes
. Make sure theopenshift-storage
project is selected. -
Click on the
Location
field for thenoobaa-mgmt
Route. -
Log into the Noobaa Management Console.
-
On the main dashboard, under
Storage Resources
, selectAdd Storage Resources
. -
Select
Deploy Kubernetes Pool
-
Enter a new pool name. Click
Next
. -
Choose the number of Pods to manage the pool and set the size per node. Click
Next
. -
Click
Deploy
.
After a few minutes, the additional storage pool will be added to the Noobaa resources and available for use by Project Quay.
Customizing Default Operator Images
Note
|
Using this mechanism is not supported for production Quay environments and is strongly encouraged only for development/testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Quay Operator. |
In certain circumstances, it may be useful to override the default images used by the Operator. This can be done by setting one or more environment variables in the Quay Operator ClusterServiceVersion
.
Environment Variables
The following environment variables are used in the Operator to override component images:
Environment Variable               | Component
RELATED_IMAGE_COMPONENT_QUAY       | quay
RELATED_IMAGE_COMPONENT_CLAIR      | clair
RELATED_IMAGE_COMPONENT_POSTGRES   | postgres
RELATED_IMAGE_COMPONENT_REDIS      | redis
Note
|
Override images must be referenced by manifest digest (@sha256:…), not by tag (for example, :latest). |
Applying Overrides to a Running Operator
When the Quay Operator is installed in a cluster via the Operator Lifecycle Manager (OLM), the managed component container images can be easily overridden by modifying the ClusterServiceVersion
object, which is OLM’s representation of a running Operator in the cluster. Find the Quay Operator’s ClusterServiceVersion
either by using a Kubernetes UI or kubectl
/oc
:
$ oc get clusterserviceversions -n <your-namespace>
Using the UI, oc edit
, or any other method, modify the Quay ClusterServiceVersion
to include the environment variables outlined above to point to the override images:
JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env
- name: RELATED_IMAGE_COMPONENT_QUAY
value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
- name: RELATED_IMAGE_COMPONENT_CLAIR
value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
- name: RELATED_IMAGE_COMPONENT_POSTGRES
value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- name: RELATED_IMAGE_COMPONENT_REDIS
value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Note that this is done at the Operator level, so every QuayRegistry will be deployed using these same overrides.
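As an illustration of one way to apply an override without opening an editor, the environment variable can be appended with a JSON patch against the path shown above. The CSV name and image digest below are placeholders; substitute the values from oc get clusterserviceversions and your own image reference:
$ oc -n <your-namespace> patch clusterserviceversion <quay-operator-csv-name> --type=json \
    -p='[{"op": "add", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-", "value": {"name": "RELATED_IMAGE_COMPONENT_CLAIR", "value": "quay.io/projectquay/clair@sha256:<digest>"}}]'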
AWS S3 CloudFront
If you use AWS S3 CloudFront for backend registry storage, specify the private key as shown in the following example:
$ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
Project Quay build enhancements
Prior to Project Quay 3.7, Quay ran podman
commands in virtual machines launched by pods. Running builds on virtual platforms requires enabling nested virtualization, which is not featured in Red Hat Enterprise Linux or OpenShift Container Platform. As a result, builds had to run on bare-metal clusters, which is an inefficient use of resources.
With Project Quay 3.7, the bare-metal constraint required to run builds has been removed by adding an additional build option that does not contain the virtual machine layer. As a result, builds can run on virtualized platforms. Backwards compatibility for running previous build configurations is also maintained.
Project Quay enhanced build architecture
The preceding image shows the expected design flow and architecture of the enhanced build features:
With this enhancement, the build manager first creates the Job Object
. Then, the Job Object
then creates a pod using the quay-builder-image
. The quay-builder-image
will contain the quay-builder binary
and the Podman
service. The created pod runs as unprivileged
. The quay-builder binary
then builds the image while communicating status and retrieving build information from the Build Manager.
Project Quay build limitations
Running builds in Project Quay in an unprivileged context might cause some commands that worked under the previous build strategy to fail. Attempts to change the build strategy could potentially cause performance and reliability issues with builds.
Running builds directly in a container does not provide the same isolation as using virtual machines. Changing the build environment might also cause builds that previously worked to fail.
Creating a Project Quay builders environment with OpenShift
OpenShift TLS component
The Project Quay 3.6 Operator has introduced the tls
component which allows you to control TLS configuration.
Note
|
Project Quay 3.6 does not support builders when the TLS component is managed by the Operator. |
If you set tls
to unmanaged
, you supply your own ssl.cert
and ssl.key
files. In this instance, if you want your cluster to support builders, you must add both the Quay route and the builder route name to the SAN list in the cert, or alternatively use a wildcard. To add the builder route, use the following format:
[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443
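Before handing the unmanaged certificate to the Operator, it can save a troubleshooting cycle to confirm that both the Quay route and the builder route appear in the SAN list. A minimal check, assuming your certificate file is named ssl.cert:
$ openssl x509 -in ssl.cert -noout -text | grep -A1 "Subject Alternative Name"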
Using OpenShift Container Platform for Project Quay builders
The following procedure describes how you can implement the builders feature in Project Quay.
-
Builders require SSL certificates. For more information, see Adding TLS certificates to the Project Quay container.
-
If you are using AWS S3 storage, you must modify your storage bucket in the AWS console, prior to running builders. See "Modifying your AWS S3 storage bucket" in the following section for the required parameters.
Procedure
Preparing OpenShift Container Platform for virtual builders
-
Log in to your Project Quay cluster using a cluster admin account.
-
Create a new project where your virtual builders will be run (e.g.,
virtual-builders
).$ oc new-project virtual-builders
-
Create a
ServiceAccount
in thisProject
that will be used to run builds.$ oc create sa -n virtual-builders quay-builder
-
Provide the created service account with editing permissions so that it can run the build:
$ oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder
-
Grant the Quay builder
anyuid scc
permissions:$ oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder
Note
This action requires cluster admin privileges. This is required because builders must run as the Podman user for unprivileged or rootless builds to work.
-
Obtain the token for the Quay builder service account:
$ oc sa get-token -n virtual-builders quay-builder
Sample output
eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ...
-
Determine the builder route:
$ oc get route -n quay-enterprise
Sample output
NAME                            HOST/PORT                                                               PATH   SERVICES                     PORT   TERMINATION     WILDCARD
...
example-registry-quay-builder   example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org          example-registry-quay-app   grpc   edge/Redirect   None
...
-
Generate a self-signed SSL certificate with the .crt extension:
$ SECRET=$(oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq -r '.secrets[] | select(.name | contains("openshift-apiserver-sa-token"))'.name)
$ oc get secret $SECRET -n openshift-apiserver -o json | jq '.data."ca.crt"' -r | base64 -d > extra_ca_cert_build_cluster.crt
-
Locate the secret for your config bundle in the Console, choose Actions → Edit Secret, and add the appropriate builder configuration:
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- <superusername>
FEATURE_USER_CREATION: false
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: True
BUILDMAN_HOSTNAME: <sample_build_route> (1)
BUILD_MANAGER:
- ephemeral
- ALLOWED_WORKER_COUNT: 1
  ORCHESTRATOR_PREFIX: buildman/production/
  ORCHESTRATOR:
    REDIS_HOST: <sample_redis_hostname> (2)
    REDIS_PASSWORD: ""
    REDIS_SSL: false
    REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
  EXECUTORS:
  - EXECUTOR: kubernetesPodman
    NAME: openshift
    BUILDER_NAMESPACE: <sample_builder_namespace> (3)
    SETUP_TIME: 180
    MINIMUM_RETRY_THRESHOLD:
    BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> (4)
    # Kubernetes resource options
    K8S_API_SERVER: <sample_k8s_api_server> (5)
    K8S_API_TLS_CA: <sample_crt_file> (6)
    VOLUME_SIZE: 8G
    KUBERNETES_DISTRIBUTION: openshift
    CONTAINER_MEMORY_LIMITS: 300Mi
    CONTAINER_CPU_LIMITS: 1G (7)
    CONTAINER_MEMORY_REQUEST: 300Mi
    CONTAINER_CPU_REQUEST: 1G
    NODE_SELECTOR_LABEL_KEY: ""
    NODE_SELECTOR_LABEL_VALUE: ""
    SERVICE_ACCOUNT_NAME: <sample_service_account_name>
    SERVICE_ACCOUNT_TOKEN: <sample_account_token> (8)
-
The build route is obtained by running
oc get route -n
with the name of your OpenShift Operators namespace. A port must be provided at the end of the route, and it should follow this format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443
. -
If your Redis host has a password or SSL certificates, you must update accordingly.
-
Set to match the name of your virtual builders namespace, for example,
virtual-builders
. -
For early access, the
BUILDER_CONTAINER_IMAGE
is currentlyquay.io/projectquay/quay-builder:3.7.0-rc.2
. Note that this might change during the early access window. In the event this happens, customers will be alerted. -
Obtained by running
oc cluster-info
. -
You must manually create and add your custom CA cert, for example,
K8S_API_TLS_CA: extra_ca_cert_build_cluster.crt
-
For virtual builds, you must ensure that there are enough resources in your cluster.
-
Obtained when running
oc create sa
.
Sample config
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: True
BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443
BUILD_MANAGER:
- ephemeral
- ALLOWED_WORKER_COUNT: 1
  ORCHESTRATOR_PREFIX: buildman/production/
  ORCHESTRATOR:
    REDIS_HOST: example-registry-quay-redis
    REDIS_PASSWORD: ""
    REDIS_SSL: false
    REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
  EXECUTORS:
  - EXECUTOR: kubernetesPodman
    NAME: openshift
    BUILDER_NAMESPACE: virtual-builders
    SETUP_TIME: 180
    MINIMUM_RETRY_THRESHOLD:
    BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2
    # Kubernetes resource options
    K8S_API_SERVER: api.docs.quayteam.org:6443
    K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt
    VOLUME_SIZE: 8G
    KUBERNETES_DISTRIBUTION: openshift
    CONTAINER_MEMORY_LIMITS: 1Gi
    CONTAINER_CPU_LIMITS: 1080m
    CONTAINER_MEMORY_REQUEST: 1Gi
    CONTAINER_CPU_REQUEST: 580m
    NODE_SELECTOR_LABEL_KEY: ""
    NODE_SELECTOR_LABEL_VALUE: ""
    SERVICE_ACCOUNT_NAME: quay-builder
    SERVICE_ACCOUNT_TOKEN: "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ"
-
Manually adding SSL certificates.
Create and sign certs
-
Create a certificate authority and sign a certificate. For more information, see Create a Certificate Authority and sign a certificate.
Note-
Add an
alt_name
for the URL of your Quay registry. -
Add an
alt_name
for theBUILDMAN_HOSTNAME
that is specified in your config.yaml.
openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org
DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org
Sample commands
$ openssl genrsa -out rootCA.key 2048
$ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
$ openssl genrsa -out ssl.key 2048
$ openssl req -new -key ssl.key -out ssl.csr
$ openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf
-
Set TLS to unmanaged
In your QuayRegistry YAML, set the tls component to managed: false:
- kind: tls
managed: false
In the events, you should see that the change is blocked until you set up the appropriate config:
- lastTransitionTime: '2022-03-28T12:56:49Z'
lastUpdateTime: '2022-03-28T12:56:49Z'
message: >-
required component `tls` marked as unmanaged, but `configBundleSecret`
is missing necessary fields
reason: ConfigInvalid
status: 'True'
Create temporary secrets
-
Create a secret in your default namespace for the CA cert:
$ oc create secret generic -n quay-enterprise temp-crt --from-file extra_ca_cert_build_cluster.crt
-
Create a secret in your default namespace for the ssl.key and ssl.cert files:
$ oc create secret generic -n quay-enterprise quay-config-ssl --from-file ssl.cert --from-file ssl.key
Copy secret data to config.yaml
-
Locate the new secrets in the console UI at Workloads → Secrets.
-
For each secret, locate the YAML view:
kind: Secret
apiVersion: v1
metadata:
  name: temp-crt
  namespace: quay-enterprise
  uid: a4818adb-8e21-443a-a8db-f334ace9f6d0
  resourceVersion: '9087855'
  creationTimestamp: '2022-03-28T13:05:30Z'
  ...
data:
  extra_ca_cert_build_cluster.crt: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l....
type: Opaque

kind: Secret
apiVersion: v1
metadata:
  name: quay-config-ssl
  namespace: quay-enterprise
  uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c
  resourceVersion: '9090567'
  creationTimestamp: '2022-03-28T13:10:34Z'
  ...
data:
  ssl.cert: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT...
  ssl.key: >-
    LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc...
type: Opaque
-
Locate the secret for your Quay Registry configuration bundle in the UI, or via the command line by running a command such as:
$ oc get quayregistries.quay.redhat.com -o jsonpath="{.items[0].spec.configBundleSecret}{'\n'}" -n quay-enterprise
-
In the OpenShift console, select the YAML tab for your config bundle secret, and add the data from the two secrets you created:
kind: Secret
apiVersion: v1
metadata:
  name: init-config-bundle-secret
  namespace: quay-enterprise
  uid: 4724aca5-bff0-406a-9162-ccb1972a27c1
  resourceVersion: '4383160'
  creationTimestamp: '2022-03-22T12:35:59Z'
  ...
data:
  config.yaml: >-
    RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ...
  extra_ca_cert_build_cluster.crt: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw....
  ssl.cert: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT...
  ssl.key: >-
    LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc...
type: Opaque
-
Click Save. You should see the pods being re-started:
$ oc get pods -n quay-enterprise
Sample outputNAME READY STATUS RESTARTS AGE ... example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h
-
After your Quay registry has reconfigured, check that your Quay app pods are running:
$ oc get pods -n quay-enterprise
Sample outputexample-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h
-
In your browser, access the registry endpoint and validate that the certificate has been updated appropriately:
Common Name (CN)            example-registry-quay-quay-enterprise.apps.docs.quayteam.org
Organisation (O)            DOCS
Organisational Unit (OU)    QUAY
Using the UI to create a build trigger
-
Log in to your Quay repository.
-
Click Create New Repository and create a new repository, for example,
testrepo
. -
On the Repositories page, click Builds tab on the left hand pane. Alternatively, use the corresponding URL directly, for example:
https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/repository/quayadmin/testrepo?tab=builds
ImportantIn some cases, the builder might have issues resolving hostnames. This issue might be related to the
dnsPolicy
being set todefault
on the job object. Currently, there is no workaround for this issue. It will be resolved in a future version of Red Hat Quay. -
Click Create Build Trigger → Custom Git Repository Push.
-
Enter the HTTPS or SSH style URL used to clone your Git repository, then click Continue. For example:
https://github.com/gabriel-rh/actions_test.git
-
Check Tag manifest with the branch or tag name and then click Continue.
-
Enter the location of the Dockerfile to build when the trigger is invoked, for example,
/Dockerfile
and click Continue. -
Enter the location of the context for the Docker build, for example,
/
, and click Continue. -
If warranted, create a Robot Account. Otherwise, click Continue.
-
Click Continue to verify the parameters.
-
On the Builds page, click Options icon of your Trigger Name, and then click Run Trigger Now.
-
Enter a commit SHA from the Git repository and click Start Build.
-
You can check the status of your build by clicking the commit in the Build History page, or by running
oc get pods -n virtual-builders
.$ oc get pods -n virtual-builders NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s
$ oc get pods -n virtual-builders NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Terminating 0 9s
$ oc get pods -n virtual-builders No resources found in virtual-builders namespace.
-
When the build is finished, you can check the status of the tag under Tags on the left hand pane.
NoteWith early access, full build logs and timestamps of builds are currently unavailable.
Modifying your AWS S3 storage bucket
If you are using AWS S3 storage, you must modify your storage bucket in the AWS console, prior to running builders.
-
Log in to your AWS console at s3.console.aws.amazon.com.
-
In the search bar, search for
S3
and then click S3. -
Click the name of your bucket, for example,
myawsbucket
. -
Click the Permissions tab.
-
Under Cross-origin resource sharing (CORS), include the following parameters:
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    },
    {
        "AllowedHeaders": [
            "Content-Type",
            "x-amz-acl",
            "origin"
        ],
        "AllowedMethods": [
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
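If you manage the bucket from the command line rather than the AWS console, the same CORS rules can be applied with the AWS CLI. This sketch assumes the JSON above has been saved locally as cors.json and that your bucket is named myawsbucket:
$ aws s3api put-bucket-cors --bucket myawsbucket --cors-configuration file://cors.json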
Geo-replication
Geo-replication allows multiple, geographically distributed Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Quay setup. Image data is asynchronously replicated in the background with transparent failover / redirect for clients.
With Project Quay 3.7, geo-replication is supported for both standalone and Operator deployments of Project Quay.
Geo-replication features
-
When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance (typically the nearest storage backend within the region).
-
After the initial push, image data will be replicated in the background to other storage engines.
-
The list of replication locations is configurable and those can be different storage backends.
-
An image pull will always use the closest available storage engine, to maximize pull performance.
-
If replication hasn’t been completed yet, the pull will use the source storage backend instead.
Geo-replication requirements and constraints
-
A single database, and therefore all metadata and Quay configuration, is shared across all regions.
-
A single Redis cache is shared across the entire Quay setup and needs to be accessible by all Quay pods.
-
The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the
QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable. -
Geo-Replication requires object storage in each region. It does not work with local storage or NFS.
-
Each region must be able to access every storage engine in each region (requires a network path).
-
Alternatively, the storage proxy option can be used.
-
The entire storage backend (all blobs) is replicated. This is in contrast to repository mirroring, which can be limited to an organization or repository or image.
-
All Quay instances must share the same entrypoint, typically via load balancer.
-
All Quay instances must have the same set of superusers, as they are defined inside the common configuration file.
-
Geo-Replication requires SSL/TLS certificates and keys. For more information, see Using SSL to protect connections to Red Hat Quay.
If the above requirements cannot be met, you should instead use two or more distinct Quay deployments and take advantage of repository mirroring functionality.
Geo-replication - Quay Operator
Geo-replication architecture - Quay Operator
In the example shown above, Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines.
Setting up geo-replication on Openshift
-
Deploy Quay postgres instance:
-
Login to the database
-
Create a database for Quay
CREATE DATABASE quay;
-
Enable the pg_trgm extension inside the database
\c quay;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-
-
Deploy a Redis instance:
Note-
Deploying a Redis instance might be unnecessary if your cloud provider has its own service.
-
Deploying a Redis instance is required if you are leveraging Builders.
-
Deploy a VM for Redis
-
Make sure that it is accessible from the clusters where Quay is running
-
Port 6379/TCP must be open
-
Run Redis inside the instance
sudo dnf install -y podman
podman run -d --name redis -p 6379:6379 redis
-
-
Create two object storage backends, one for each cluster
Ideally one object storage bucket will be close to the 1st cluster (primary) while the other will run closer to the 2nd cluster (secondary).
-
Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster
-
Configure a load balancer, to provide a single entry point to the clusters
Configuration
The config.yaml
file is shared between clusters, and will contain the details for the common PostgreSQL, Redis and storage backends:
DB_CONNECTION_ARGS:
autorollback: true
threadlocals: true
DB_URI: postgresql://postgres:password@10.19.0.1:5432/quay (1)
BUILDLOGS_REDIS:
host: 10.19.0.2
port: 6379
USER_EVENTS_REDIS:
host: 10.19.0.2
port: 6379
DISTRIBUTED_STORAGE_CONFIG:
usstorage:
- GoogleCloudStorage
- access_key: GOOGQGPGVMASAAMQABCDEFG
bucket_name: georep-test-bucket-0
secret_key: AYWfEaxX/u84XRA2vUX5C987654321
storage_path: /quaygcp
eustorage:
- GoogleCloudStorage
- access_key: GOOGQGPGVMASAAMQWERTYUIOP
bucket_name: georep-test-bucket-1
secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678
storage_path: /quaygcp
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- usstorage
- eustorage
DISTRIBUTED_STORAGE_PREFERENCE:
- usstorage
- eustorage
-
The PostgreSQL DB_URI must also be included in the Clair configuration file. For more information about retrieving the Clair configuration file on OpenShift, see Retrieving the Clair config.
Create the configBundleSecret
:
$ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle
In each of the clusters, set the configBundleSecret
and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environmental variable override to configure the appropriate storage for that cluster:
Note
|
The |
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
        - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
          value: usstorage

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
        - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
          value: eustorage
Mixed storage for geo-replication
Quay geo-replication supports the use of different, and multiple, replication targets, for example, AWS S3 storage on a public cloud and Ceph storage on-premise. This complicates the key requirement of granting access to all storage backends from all Quay pods and cluster nodes. As a result, it is recommended that you:
-
Use a VPN to prevent visibility of the internal storage or
-
Use a token pair that only allows access to the specified bucket used by Quay
This will result in the public cloud instance of Quay having access to on-prem storage but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements.
If you cannot implement these security measures, it may be preferable to deploy two distinct Quay registries and to use repository mirroring as an alternative to geo-replication.
Upgrading the Quay Operator Overview
The Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Quay and the components that it manages. There is no field on the QuayRegistry
custom resource which sets the version of Quay to deploy; the Operator only knows how to deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Quay on Kubernetes.
Operator Lifecycle Manager
The Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM). When creating a Subscription
with the default approvalStrategy: Automatic
, OLM will automatically upgrade the Quay Operator whenever a new version becomes available.
Warning
|
When the Quay Operator is installed via Operator Lifecycle Manager, it may be configured to support automatic or manual upgrades. This option is shown on the Operator Hub page for the Quay Operator during installation. It can also be found in the Quay Operator |
Upgrading the Quay Operator
The standard approach for upgrading installed Operators on OpenShift is documented at Upgrading installed Operators.
In general, Project Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Project Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows:
-
3.0.5 → 3.1.3
-
3.1.3 → 3.2.2
-
3.2.2 → 3.3.4
-
3.3.4 → 3.4.z
-
3.4.z → 3.5.z
This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
In some cases, Project Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported:
-
3.3.z → 3.6.z
-
3.4.z → 3.6.z
-
3.4.z → 3.7.z
-
3.5.z → 3.7.z
For users on standalone deployments of Quay wanting to upgrade to 3.6, see the Standalone upgrade guide.
Upgrading Quay
To update Quay from one minor version to the next, for example, 3.4 → 3.5, you need to change the update channel for the Quay Operator.
For z
stream upgrades, for example, 3.4.2 → 3.4.3, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z
stream upgrade depends on the approvalStrategy
as outlined above. If the approval strategy is set to Automatic
, the Quay Operator will upgrade automatically to the newest z
stream. This results in automatic, rolling Quay updates to newer z
streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.
Notes on upgrading directly from 3.3.z or 3.4.z to 3.6
Upgrading with edge routing enabled
-
Previously, when running a 3.3.z version of Project Quay with edge routing enabled, users were unable to upgrade to 3.4.z versions of Project Quay. This has been resolved with the release of Project Quay 3.6.
-
When upgrading from 3.3.z to 3.6, if
tls.termination
is set tonone
in your Project Quay 3.3.z deployment, it will change to HTTPS with TLS edge termination and use the default cluster wildcard certificate. For example:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: quay33
spec:
  quay:
    imagePullSecretName: redhat-pull-secret
    enableRepoMirroring: true
    image: quay.io/quay/quay:v3.3.4-2
    ...
    externalAccess:
      hostname: quayv33.apps.devcluster.openshift.com
      tls:
        termination: none
    database:
      ...
Upgrading with custom TLS certificate/key pairs without Subject Alternative Names
There is an issue for customers using their own TLS certificate/key pairs without Subject Alternative Names (SANs) when upgrading from Project Quay 3.3.4 to Project Quay 3.6 directly. During the upgrade to Project Quay 3.6, the deployment is blocked, with the error message from the Quay Operator pod logs indicating that the Quay TLS certificate must have SANs.
If possible, you should regenerate your TLS certificates with the correct hostname in the SANs. A possible workaround involves defining an environment variable in the quay-app
, quay-upgrade
and quay-config-editor
pods after upgrade to enable CommonName matching:
GODEBUG=x509ignoreCN=0
The GODEBUG=x509ignoreCN=0
flag enables the legacy behavior of treating the CommonName field on X.509 certificates as a host name when no SANs are present. However, this workaround is not recommended, as it will not persist across a redeployment.
Configuring Clair v4 when upgrading from 3.3.z or 3.4.z to 3.6 using the Quay Operator
To set up Clair v4 on a new Project Quay deployment on OpenShift, it is highly recommended to use the Quay Operator. By default, the Quay Operator will install or upgrade a Clair deployment along with your Project Quay deployment and configure Clair security scanning automatically.
For instructions on setting up Clair v4 on OpenShift, see Setting Up Clair on a Red Hat Quay OpenShift deployment.
Changing the update channel for an Operator
The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Quay Operator. For subscriptions with an Automatic
approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
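If you prefer to change the channel from the command line rather than the console, the Subscription can be patched directly. The subscription name, namespace, and channel value below are placeholders; check oc get subscription -n <operator-namespace> for the actual names in your cluster:
$ oc -n <operator-namespace> patch subscription <quay-operator-subscription> --type=merge \
    -p '{"spec": {"channel": "<new-channel>"}}'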
Manually approving a pending Operator upgrade
If an installed Operator has the approval strategy in its subscription set to Manual
, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription
tab for the Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve
and return to the page that lists Installed Operators to monitor the progress of the upgrade.
The following image shows the Subscription tab in the UI, including the update Channel
, the Approval
strategy, the Upgrade status
and the InstallPlan
:
The list of Installed Operators provides a high-level summary of the current Quay installation:
Upgrading a QuayRegistry
When the Quay Operator starts, it immediately looks for any QuayRegistries
it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:
-
If
status.currentVersion
is unset, reconcile as normal. -
If
status.currentVersion
equals the Operator version, reconcile as normal. -
If
status.currentVersion
does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set thestatus.currentVersion
to the Operator’s version once complete. If it cannot be upgraded, return an error and leave theQuayRegistry
and its deployed Kubernetes objects alone.
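To check which version a given registry has been reconciled to, the status.currentVersion field can be read directly. The registry name and namespace below match the examples used elsewhere in this document:
$ oc -n quay-enterprise get quayregistry example-registry -o jsonpath='{.status.currentVersion}{"\n"}'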
Enabling features in Quay 3.7
Quota management configuration
Quota management is now supported under the FEATURE_QUOTA_MANAGEMENT
property and is turned off by default. To enable quota management, set the feature flag in your config.yaml
to true
:
FEATURE_QUOTA_MANAGEMENT: true
Using Red Hat Quay to proxy a remote organization configuration
Using Red Hat Quay to proxy a remote organization is now supported under the FEATURE_PROXY_CACHE
property. To enable proxy cache, set the feature flag in your config.yaml
to true
:
FEATURE_PROXY_CACHE: true
Project Quay build enhancements
Builds can be run on virtualized platforms. Backwards compatibility for running previous build configurations is also available. To enable virtual builds, set the feature flag in your config.yaml
to true
:
FEATURE_BUILD_SUPPORT: true
Geo-replication using the Project Quay Operator
Geo-replication is now supported for Operator deployments of Project Quay. To enable geo-replication, set the feature flag in your config.yaml
to true
:
FEATURE_STORAGE_REPLICATION: true
Enabling features in Quay 3.6
Console monitoring and alerting
The support for monitoring Quay 3.6 in the OpenShift console requires that the Operator is installed in all namespaces. If you previously installed the Operator in a specific namespace, delete the Operator itself and reinstall it for all namespaces once the upgrade has taken place.
OCI and Helm support
Support for Helm and some OCI artifacts is now enabled by default in Project Quay 3.6. If you want to explicitly enable the feature, for example, if you are upgrading from a version where it is not enabled by default, you need to reconfigure your Quay deployment to enable the use of OCI artifacts using the following property:
FEATURE_GENERAL_OCI_SUPPORT: true
Upgrading a QuayEcosystem
Upgrades are supported from previous versions of the Operator which used the QuayEcosystem
API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem
for it to be migrated. A new QuayRegistry
will be created for the Operator to manage, but the old QuayEcosystem
will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem
to a new QuayRegistry
, follow these steps:
-
Add
"quay-operator/migrate": "true"
to themetadata.labels
of theQuayEcosystem
.$ oc edit quayecosystem <quayecosystemname>
metadata: labels: quay-operator/migrate: "true"
-
Wait for a
QuayRegistry
to be created with the samemetadata.name
as yourQuayEcosystem
. TheQuayEcosystem
will be marked with the label"quay-operator/migration-complete": "true"
. -
Once the
status.registryEndpoint
of the newQuayRegistry
is set, access Quay and confirm all data and settings were migrated successfully. -
When you are confident everything worked correctly, you may delete the
QuayEcosystem
and Kubernetes garbage collection will clean up all old resources.
Reverting QuayEcosystem Upgrade
If something goes wrong during the automatic upgrade from QuayEcosystem
to QuayRegistry
, follow these steps to revert back to using the QuayEcosystem
:
-
Delete the
QuayRegistry
using either the UI orkubectl
:$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
-
If external access was provided using a
Route
, change theRoute
to point back to the originalService
using the UI orkubectl
.
Note
|
If your |
Supported QuayEcosystem Configurations for Upgrades
The Quay Operator will report errors in its logs and in status.conditions
if migrating a QuayEcosystem
component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Quay’s config.yaml
.
Database
Ephemeral database not supported (volumeSize
field must be set).
Redis
Nothing special needed.
External Access
Only passthrough Route
access is supported for automatic migration. Manual migration required for other methods.
-
LoadBalancer
without custom hostname: After theQuayEcosystem
is marked with label"quay-operator/migration-complete": "true"
, delete themetadata.ownerReferences
field from existingService
before deleting theQuayEcosystem
to prevent Kubernetes from garbage collecting theService
and removing the load balancer. A newService
will be created withmetadata.name
format<QuayEcosystem-name>-quay-app
. Edit thespec.selector
of the existingService
to match thespec.selector
of the newService
so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the oldService
; the Quay Operator will not manage it. One way to remove the ownerReferences field is sketched after this list.
LoadBalancer
/NodePort
/Ingress
with custom hostname: A newService
of typeLoadBalancer
will be created withmetadata.name
format<QuayEcosystem-name>-quay-app
. Change your DNS settings to point to thestatus.loadBalancer
endpoint provided by the newService
.
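One way to remove the ownerReferences field before deleting the QuayEcosystem is a JSON patch against the old Service; the Service name and namespace below are placeholders for your existing objects:
$ oc -n <namespace> patch service <old-quay-service> --type=json \
    -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'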
Clair
Nothing special needed.
Object Storage
QuayEcosystem
did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.
Repository Mirroring
Nothing special needed.
Additional resources
-
For more details on the Project Quay Operator, see the upstream quay-operator project.