Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise.
The Project Quay Operator provides a simple method to deploy and manage Project Quay on an OpenShift cluster.
As of Project Quay 3.4.0, the Operator has been completely rewritten to provide an improved out-of-the-box experience as well as support for more Day 2 operations. As a result, the new Operator is simpler to use and is more opinionated. The key differences from earlier versions of the Operator are:
- The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource
- The default installation options produce a fully supported Quay environment, with all managed dependencies (database, caches, object storage, etc.) supported for production use (some components may not be highly available)
- A new, robust validation library for Quay's configuration, which is shared by the Quay application and the config tool for consistency
- Object storage can now be provided by the Operator using the ObjectBucketClaim Kubernetes API (for example, the NooBaa Operator from OperatorHub.io can be used to provide an implementation of that API)
- Customization of the container images used by deployed pods for testing and development scenarios
Introduction to the Project Quay Operator
This document outlines the steps for configuring, deploying, managing and upgrading Project Quay on OpenShift using the Project Quay Operator.
It shows you how to:
- Install the Project Quay Operator
- Configure object storage, either managed or unmanaged
- Configure other unmanaged components, if required, including database, Redis, routes, TLS, etc.
- Deploy the Project Quay registry on OpenShift using the Operator
- Use advanced features supported by the Operator
- Upgrade the registry by upgrading the Operator
QuayRegistry API
The Quay Operator provides the QuayRegistry custom resource API to declaratively manage Quay container registries on the cluster. Use either the OpenShift UI or a command-line tool to interact with this API.
- Creating a QuayRegistry will result in the Operator deploying and configuring all necessary resources needed to run Quay on the cluster.
- Editing a QuayRegistry will result in the Operator reconciling the changes and creating, updating, or deleting objects to match the desired configuration.
- Deleting a QuayRegistry will result in garbage collection of all previously created resources, and the Quay container registry will no longer be available.
The QuayRegistry API is fairly simple, and the fields are outlined in the following sections.
Quay Operator components
Quay is a powerful container registry platform and, as a result, has a significant number of dependencies. These include a database, object storage, Redis, and others. The Quay Operator manages an opinionated deployment of Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry API.
In the QuayRegistry custom resource, the spec.components field configures components. Each component contains two fields: kind, the name of the component, and managed, a boolean indicating whether the component lifecycle is handled by the Operator. By default (omitting this field), all components are managed and will be autofilled upon reconciliation for visibility:
spec:
  components:
    - kind: quay
      managed: true
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    - kind: clairpostgres
      managed: true
Using managed components
Unless your QuayRegistry custom resource specifies otherwise, the Operator will use defaults for the following managed components:
- quay: Holds overrides for the Quay deployment, for example, environment variables and number of replicas. This component is new in Project Quay 3.7 and cannot be set to unmanaged.
- postgres: For storing the registry metadata, uses an upstream (CentOS) version of Postgres 10
- clair: Provides image vulnerability scanning
- redis: Handles Quay builder coordination and some internal logging
- horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption
- objectstorage: For storing image layer blobs, uses the ObjectBucketClaim Kubernetes API, which is provided by NooBaa/RHOCS
- route: Provides an external entrypoint to the Quay registry from outside OpenShift
- mirror: Configures repository mirror workers (to support optional repository mirroring)
- monitoring: Features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods
- tls: Configures whether Project Quay or OpenShift handles TLS
- clairpostgres: Configures a managed Clair database
The Operator will handle any required configuration and installation work needed for Project Quay to use the managed components. If the opinionated deployment performed by the Quay Operator is unsuitable for your environment, you can provide the Operator with unmanaged resources (overrides) as described in the following sections.
Using unmanaged components for dependencies
If you have existing components such as PostgreSQL, Redis, or object storage that you would like to use with Quay, you first configure them within the Quay configuration bundle (config.yaml) and then reference the bundle in your QuayRegistry (as a Kubernetes Secret) while indicating which components are unmanaged.
Note
|
The Quay config editor can also be used to create or modify an existing config bundle, and simplifies the process of updating the Kubernetes Secret. |
Config bundle secret
The spec.configBundleSecret field is a reference to the metadata.name of a Secret in the same namespace as the QuayRegistry. This Secret must contain a config.yaml key/value pair, where config.yaml is a Quay configuration file. This field is optional and will be auto-filled by the Operator if not provided. If provided, it serves as the base set of config fields, which are later merged with fields from any managed components to form a final output Secret that is then mounted into the Quay application pods.
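For illustration only, a minimal config bundle Secret might look like the following sketch; the Secret name and the config.yaml contents are placeholders, not required values:

apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
stringData:
  config.yaml: |
    SERVER_HOSTNAME: quay.example.com
    FEATURE_USER_INITIALIZE: true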
Prerequisites for Project Quay on OpenShift
Before you begin the deployment of Project Quay Operator on OpenShift, you should consider the following.
OpenShift cluster
You need a privileged account on an OpenShift 4.5 or later cluster on which to deploy the Project Quay Operator. That account must have the ability to create namespaces at the cluster scope.
Resource Requirements
Each Project Quay application pod has the following resource requirements:
- 8Gi of memory
- 2000 millicores of CPU
The Project Quay Operator will create at least one application pod per Project Quay deployment it manages. Ensure your OpenShift cluster has sufficient compute resources for these requirements.
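As a quick sanity check of available capacity (assuming the cluster metrics API is available), you can review current node usage before deploying:

$ oc adm top nodes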
Object Storage
By default, the Project Quay Operator uses the ObjectBucketClaim Kubernetes API to provision object storage. Consuming this API decouples the Operator from any vendor-specific implementation. Red Hat OpenShift Data Foundation provides this API via its NooBaa component, which is used in this example.
Project Quay can be manually configured to use any of the following supported cloud storage options:
- Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Project Quay)
- Azure Blob Storage
- Google Cloud Storage
- Ceph Object Gateway (RADOS)
- OpenStack Swift
- CloudFront + S3
Installing the Quay Operator from OperatorHub
- Using the OpenShift console, select Operators → OperatorHub, then select the Project Quay Operator. If there is more than one, be sure to use the Red Hat certified Operator and not the community version.
- The Installation page outlines the features and prerequisites.
- Select Install. The Operator Installation page appears.
- The following choices are available for customizing the installation:
- Update Channel: Choose the update channel, for example, stable-3.7 for the latest release.
- Installation Mode: Choose All namespaces on the cluster if you want the Operator to be available cluster-wide. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. It is recommended that you install the Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default.
- Approval Strategy: Choose to approve either automatic or manual updates. The automatic update strategy is recommended.
- Select Install.
- After a short time, you will see that the Operator has installed successfully on the Installed Operators page.
Configuring Quay before deployment
The Operator can manage all the Project Quay components when deploying on OpenShift, and this is the default configuration. Alternatively, you can manage one or more components externally yourself, where you want more control over the setup, and then allow the Operator to manage the remaining components.
The standard pattern for configuring unmanaged components is:
- Create a config.yaml configuration file with the appropriate settings
- Create a Secret using the configuration file:

$ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret

- Create a QuayRegistry YAML file quayregistry.yaml, identifying the unmanaged components and also referencing the created Secret, for example:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false

- Deploy the registry using the YAML file:
$ oc create -n quay-enterprise -f quayregistry.yaml
Pre-configuring Project Quay for automation
Project Quay has several configuration options that support automation. These options can be set before deployment to minimize the need to interact with the user interface.
Allowing the API to create the first user
To create the first user using the /api/v1/user/initialize API, set the FEATURE_USER_INITIALIZE parameter to true. Unlike all other registry API calls, which require an OAuth token generated by an OAuth application in an existing organization, this API endpoint does not require authentication.
After you have deployed Project Quay, you can use the API to create a user, for example, quayadmin, assuming that no other users have already been created. For more information, see Using the API to create the first user.
Enabling general API access
Set the config option BROWSER_API_CALLS_XHR_ONLY to false to allow general access to the Project Quay registry API.
Adding a super user
After deploying Project Quay, you can create a user. We advise that the first user be given administrator privileges with full permissions. Full permissions can be configured in advance by using the SUPER_USERS configuration object. For example:
...
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
...
Restricting user creation
After you have configured a super user, you can restrict the ability to create new users to the super user group. Set the FEATURE_USER_CREATION parameter to false to restrict user creation. For example:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Enabling new functionality
To use new Project Quay 3.8 functionality, enable some or all of the following features:
...
FEATURE_UI_V2: true
FEATURE_LISTEN_IP_VERSION:
FEATURE_SUPERUSERS_FULL_ACCESS: true
GLOBAL_READONLY_SUPER_USERS:
  -
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST:
  -
...
Enabling new functionality
To use new Project Quay 3.7 functionality, enable some or all of the following features:
...
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: true
FEATURE_PROXY_CACHE: true
FEATURE_STORAGE_REPLICATION: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
...
Suggested configuration for automation
The following config.yaml
parameters are suggested for automation:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Configuring object storage
You need to configure object storage before installing Project Quay, irrespective of whether you are allowing the Operator to manage the storage or managing it yourself.
If you want the Operator to be responsible for managing storage, see the section on Managed storage for information on installing and configuring the NooBaa / RHOCS Operator.
If you are using a separate storage solution, set objectstorage as unmanaged when configuring the Operator. See the following section, Unmanaged storage, for details of configuring existing storage.
Unmanaged storage
Some configuration examples for unmanaged storage are provided in this section for convenience. See the Project Quay configuration guide for full details on setting up object storage.
AWS S3 storage
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
Google Cloud storage
DISTRIBUTED_STORAGE_CONFIG:
  googleCloudStorage:
    - GoogleCloudStorage
    - access_key: GOOGQIMFB3ABCDEFGHIJKLMN
      bucket_name: quay-bucket
      secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - googleCloudStorage
Azure storage
DISTRIBUTED_STORAGE_CONFIG:
  azureStorage:
    - AzureStorage
    - azure_account_name: azure_account_name_here
      azure_account_key: azure_account_key_here
      azure_container: azure_container_here
      sas_token: some/path/
      endpoint_url: https://[account-name].blob.core.usgovcloudapi.net (1)
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - azureStorage
(1) The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Azure region.

As of Project Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
Ceph / RadosGW Storage / Hitachi HCP storage
DISTRIBUTED_STORAGE_CONFIG:
  radosGWStorage:
    - RadosGWStorage
    - access_key: access_key_here
      secret_key: secret_key_here
      bucket_name: bucket_name_here
      hostname: hostname_here
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - radosGWStorage
Swift storage
DISTRIBUTED_STORAGE_CONFIG:
  swiftStorage:
    - SwiftStorage
    - swift_user: swift_user_here
      swift_password: swift_password_here
      swift_container: swift_container_here
      auth_url: https://example.org/swift/v1/quay
      auth_version: 1
      ca_cert_path: /conf/stack/swift.cert
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - swiftStorage
NooBaa unmanaged storage
Use the following procedure to deploy NooBaa as your unmanaged storage configuration.
- Create a NooBaa Object Bucket Claim in the OpenShift Container Platform console by navigating to Storage → Object Bucket Claims.
- Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key.
- Create a config.yaml configuration file using the information from the Object Bucket Claim:

DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RHOCSStorage
    - access_key: WmrXtSGk8B3nABCDEFGH
      bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
      hostname: s3.openshift-storage.svc.cluster.local
      is_secure: true
      port: "443"
      secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
For more information about configuring an Object Bucket Claim, see Object Bucket Claim.
Managed storage
If you want the Operator to manage object storage for Quay, your cluster needs to be capable of providing it via the ObjectBucketClaim API. There are multiple implementations of this API available, for instance, NooBaa in combination with Kubernetes PersistentVolumes or scalable storage backends like Ceph. Refer to the NooBaa documentation for more details on how to deploy this component.
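For reference, an ObjectBucketClaim created against a NooBaa-backed storage class might look like the following sketch; the claim name and storage class name are assumptions that depend on your storage installation:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: quay-datastore
  namespace: quay-enterprise
spec:
  generateBucketName: quay-datastore
  storageClassName: openshift-storage.noobaa.io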
Note
|
Object storage disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the RHOCS volume is currently not handled by the Operator. See the section below on resizing managed storage for more details. |
Configuring the database
Using an existing Postgres database
Requirements:
If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm extension for a successful deployment.
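A minimal sketch of enabling the extension, assuming psql access to the Quay database as a privileged database user (host and database names are placeholders):

$ psql -h quay-database.example.com -U postgres -d quay -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"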
- Create a configuration file config.yaml with the necessary database fields:

config.yaml:

DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
- Create a Secret using the configuration file:
$ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
- Create a QuayRegistry YAML file quayregistry.yaml which marks the postgres component as unmanaged and references the created Secret:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: postgres
      managed: false
- Deploy the registry as detailed in the following sections.
Database configuration
This section describes the database configuration fields available for Project Quay deployments.
Database URI
With Project Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field:
Field | Type | Description
---|---|---
DB_URI | String | The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific.
The following table describes database connection arguments:
Field | Type | Description
---|---|---
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL.
.autorollback | Boolean | Whether to use auto-rollback connections.
.threadlocals | Boolean | Whether to use thread-local connections.
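As an illustrative sketch (the values shown are assumptions, not required settings), these arguments are added under DB_CONNECTION_ARGS in config.yaml:

DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true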
PostgreSQL SSL connection arguments
With SSL, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL configuration:
DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert
The sslmode option determines whether, or with what priority, a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:
Mode | Description
---|---
disable | Your configuration only tries non-SSL connections.
allow | Your configuration first tries a non-SSL connection. Upon failure, tries an SSL connection.
prefer | Your configuration first tries an SSL connection. Upon failure, tries a non-SSL connection.
require | Your configuration only tries an SSL connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified.
verify-ca | Your configuration only tries an SSL connection, and verifies that the server certificate is issued by a trusted certificate authority (CA).
verify-full | Your configuration only tries an SSL connection, and verifies that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate.
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
MySQL SSL connection arguments
The following example shows a sample MySQL SSL configuration:
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
Using the managed PostgreSQL
Recommendations:
- Database backups should be performed regularly using either the supplied tools on the Postgres image or your own backup infrastructure (see the sketch after this list). The Operator does not currently ensure that the Postgres database is backed up.
- Restoring the Postgres database from a backup must be done using Postgres tools and procedures. Be aware that your Quay Pods should not be running while the database restore is in progress.
- Database disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but may not be sufficient for your use cases. Resizing the database volume is currently not handled by the Operator.
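As a minimal sketch of a manual logical backup taken from the managed database pod, assuming the deployment name from the examples in this document and a postgres superuser inside the image (both are assumptions to verify in your environment):

$ oc -n quay-enterprise exec deployment/example-registry-quay-database -- pg_dumpall -U postgres > quay-db-backup.sql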
Configuring TLS and routes
Support for OpenShift Container Platform edge-termination routes has been added by way of a new managed component, tls. This separates the route component from TLS and allows users to configure both separately. EXTERNAL_TLS_TERMINATION: true is the opinionated setting. Managed tls means that the default cluster wildcard certificate is used. Unmanaged tls means that the user-provided cert/key pair will be injected into the Route.
ssl.cert and ssl.key are now moved to a separate, persistent Secret, which ensures that the cert/key pair is not regenerated upon every reconcile. These are now formatted as edge routes and mounted to the same directory in the Quay container.
Multiple permutations are possible when configuring TLS and Routes, but the following rules apply:
- If TLS is managed, then route must also be managed
- If TLS is unmanaged, then you must supply certs, either with the config tool or directly in the config bundle
The following table describes the valid options:
Option | Route | TLS | Certs provided | Result
---|---|---|---|---
My own load balancer handles TLS | Managed | Managed | No | Edge Route with default wildcard cert
Project Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certs mounted inside the pod
Project Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates are set inside the Quay pod, but the route must be created manually
Note
|
Project Quay 3.7 does not support builders when TLS is managed by the Operator. |
Creating the config bundle secret with TLS cert, key pair:
To add your own TLS cert and key, include them in the config bundle secret as follows:
$ oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
Configuring other components
Using external Redis
If you wish to use an external Redis database, set the component as unmanaged in the QuayRegistry instance:
- Create a configuration file config.yaml with the necessary Redis fields:

BUILDLOGS_REDIS:
  host: quay-server.example.com
  port: 6379
  ssl: false
USER_EVENTS_REDIS:
  host: quay-server.example.com
  port: 6379
  ssl: false
- Create a Secret using the configuration file:
$ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
- Create a QuayRegistry YAML file quayregistry.yaml which marks the redis component as unmanaged and references the created Secret:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: redis
      managed: false
- Deploy the registry
Redis configuration fields
This section details the configuration fields available for Redis deployments.
Build logs
The following build logs configuration fields are available for Redis deployments:
Field | Type | Description
---|---|---
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false.
User events
The following user event fields are available for Redis deployments:
Field | Type | Description
---|---|---
USER_EVENTS_REDIS | Object | Redis connection details for user event handling.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false.
Example Redis configuration
The following YAML shows a sample configuration using Redis:
BUILDLOGS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
  ssl: true
USER_EVENTS_REDIS:
  host: quay-server.example.com
  password: strongpassword
  port: 6379
  ssl: true
Note
|
If your deployment uses Azure Cache for Redis and ssl is set to true, the port defaults to 6380. |
Disabling the Horizontal Pod Autoscaler
HorizontalPodAutoscalers have been added to the Clair, Quay, and Mirror pods, so that they now automatically scale during load spikes.
As HPA is configured by default to be managed, the number of pods for Quay, Clair, and repository mirroring is set to two. This facilitates the avoidance of downtime when updating or reconfiguring Quay via the Operator or during rescheduling events.
If you wish to disable autoscaling or create your own HorizontalPodAutoscaler, specify the component as unmanaged in the QuayRegistry instance:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
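If you then manage autoscaling yourself, a custom HorizontalPodAutoscaler might look like the following sketch; the deployment name, replica bounds, and utilization target are assumptions to adapt to your environment:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-registry-quay-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90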
Disabling Route Component
To prevent the Operator from creating a Route:
- Mark the component as unmanaged in the QuayRegistry:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: route
      managed: false
- Specify that you want Quay to handle TLS in the configuration, by editing the config.yaml file:

config.yaml:

...
EXTERNAL_TLS_TERMINATION: false
...
SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
...
PREFERRED_URL_SCHEME: https
...
If you do not configure the unmanaged Route correctly, you will see an error similar to the following:
{ { "kind":"QuayRegistry", "namespace":"quay-enterprise", "name":"example-registry", "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa", "apiVersion":"quay.redhat.com/v1", "resourceVersion":"2418527" }, "reason":"ConfigInvalid", "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields" }
Note
|
Disabling the default Route means you are now responsible for creating a Route, Service, or Ingress to access the Quay instance, and whatever DNS you use must match the SERVER_HOSTNAME in the Quay config. |
Unmanaged monitoring
If you install the Quay Operator in a single namespace, the monitoring component is automatically set to 'unmanaged'. To enable monitoring in this scenario, see the section Enabling monitoring when the Operator is installed in a single namespace.
To disable monitoring explicitly:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: monitoring
      managed: false
Unmanaged mirroring
To disable mirroring explicitly:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirror
      managed: false
Deploying Quay using the Quay Operator
The Operator can be deployed from the command line or from the OpenShift console, but the fundamental steps are the same.
Deploying Project Quay from the command line
- Create a namespace, for example, quay-enterprise.
- Create a secret for the config bundle, if you want to pre-configure any aspects of the deployment.
- Create a QuayRegistry custom resource in a file called quayregistry.yaml
- For a minimal deployment, using all the defaults:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
- If you want to have some components unmanaged, add this information in the spec field. For example, a minimal deployment might look like:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: clair
      managed: false
    - kind: horizontalpodautoscaler
      managed: false
    - kind: mirror
      managed: false
    - kind: monitoring
      managed: false
- If you have created a config bundle, for example, init-config-bundle-secret, reference it in the quayregistry.yaml file:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: init-config-bundle-secret
- If you have a proxy configured, you can add the information using overrides for Quay, Clair, and mirroring:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay37
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: mirror
      managed: true
      overrides:
        env:
          - name: DEBUGLOG
            value: "true"
          - name: HTTP_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: HTTPS_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: NO_PROXY
            value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com
    - kind: tls
      managed: false
    - kind: clair
      managed: true
      overrides:
        env:
          - name: HTTP_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: HTTPS_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: NO_PROXY
            value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com
    - kind: quay
      managed: true
      overrides:
        env:
          - name: DEBUGLOG
            value: "true"
          - name: NO_PROXY
            value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com
          - name: HTTP_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: HTTPS_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
- Create the QuayRegistry in the specified namespace:

$ oc create -n quay-enterprise -f quayregistry.yaml
- See the section Monitoring and debugging the deployment process for information on how to track the progress of the deployment.
- Wait until the status.registryEndpoint is populated:

$ oc get quayregistry -n quay-enterprise example-registry -o jsonpath="{.status.registryEndpoint}" -w
Viewing created components using the command line
Use the oc get pods command to view the deployed components:
$ oc get pods -n quay-enterprise
NAME READY STATUS RESTARTS AGE
example-registry-clair-app-5ffc9f77d6-jwr9s 1/1 Running 0 3m42s
example-registry-clair-app-5ffc9f77d6-wgp7d 1/1 Running 0 3m41s
example-registry-clair-postgres-54956d6d9c-rgs8l 1/1 Running 0 3m5s
example-registry-quay-app-79c6b86c7b-8qnr2 1/1 Running 4 3m42s
example-registry-quay-app-79c6b86c7b-xk85f 1/1 Running 4 3m41s
example-registry-quay-app-upgrade-5kl5r 0/1 Completed 4 3m50s
example-registry-quay-config-editor-597b47c995-svqrl 1/1 Running 0 3m42s
example-registry-quay-database-b466fc4d7-tfrnx 1/1 Running 2 3m42s
example-registry-quay-mirror-6d9bd78756-6lj6p 1/1 Running 0 2m58s
example-registry-quay-mirror-6d9bd78756-bv6gq 1/1 Running 0 2m58s
example-registry-quay-postgres-init-dzbmx 0/1 Completed 0 3m43s
example-registry-quay-redis-8bd67b647-skgqx 1/1 Running 0 3m42s
Horizontal Pod Autoscaling (HPA)
A default deployment shows the following running pods:
- Two pods for the Quay application itself (example-registry-quay-app-*)
- One Redis pod for Quay logging (example-registry-quay-redis-*)
- One database pod for PostgreSQL used by Quay for metadata storage (example-registry-quay-database-*)
- One pod for the Quay config editor (example-registry-quay-config-editor-*)
- Two Quay mirroring pods (example-registry-quay-mirror-*)
- Two pods for the Clair application (example-registry-clair-app-*)
- One PostgreSQL pod for Clair (example-registry-clair-postgres-*)
As HPA is configured by default to be managed, the number of pods for Quay, Clair, and repository mirroring is set to two. This facilitates the avoidance of downtime when updating or reconfiguring Quay via the Operator or during rescheduling events.
$ oc get hpa -n quay-enterprise
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d
example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d
example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d
Using the API to deploy Project Quay
This section introduces using the API to deploy Project Quay.
- The config option FEATURE_USER_INITIALIZE must be set to true.
- No users can already exist in the database.

For more information on pre-configuring your Project Quay deployment, see the section Pre-configuring Project Quay for automation.
Using the API to create the first user
Use the following procedure to create the first user in your Project Quay organization.
Note
|
This procedure requests an OAuth token by specifying "access_token": true. |
- Using the status.registryEndpoint URL, invoke the /api/v1/user/initialize API, passing in the username, password, and email address, by entering the following command:

$ curl -X POST -k https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass123", "email": "quayadmin@example.com", "access_token": true}'
If successful, the command returns an object with the username, email, and encrypted password. For example:
{"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"quayadmin@example.com","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"}
If a user already exists in the database, an error is returned:
{"message":"Cannot initialize user in a non-empty database"}
If your password is not at least eight characters or contains whitespace, an error is returned:
{"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}
Monitoring and debugging the deployment process
Users can now troubleshoot problems during the deployment phase. The status in the QuayRegistry object can help you monitor the health of the components during the deployment and help you debug any problems that may arise:
$ oc get quayregistry -n quay-enterprise -o yaml
Immediately after deployment, the QuayRegistry object will show the basic configuration:
apiVersion: v1
items:
- apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    creationTimestamp: "2021-09-14T10:51:22Z"
    generation: 3
    name: example-registry
    namespace: quay-enterprise
    resourceVersion: "50147"
    selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry
    uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e
  spec:
    components:
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    configBundleSecret: example-registry-config-bundle-kt55s
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Use the oc get pods command to view the current state of the deployed components:
$ oc get pods -n quay-enterprise
NAME READY STATUS RESTARTS AGE
example-registry-clair-app-86554c6b49-ds7bl 0/1 ContainerCreating 0 2s
example-registry-clair-app-86554c6b49-hxp5s 0/1 Running 1 17s
example-registry-clair-postgres-68d8857899-lbc5n 0/1 ContainerCreating 0 17s
example-registry-quay-app-upgrade-h2v7h 0/1 ContainerCreating 0 9s
example-registry-quay-config-editor-5f646cbcb7-lbnc2 0/1 ContainerCreating 0 17s
example-registry-quay-database-66f495c9bc-wqsjf 0/1 ContainerCreating 0 17s
example-registry-quay-mirror-854c88457b-d845g 0/1 Init:0/1 0 2s
example-registry-quay-mirror-854c88457b-fghxv 0/1 Init:0/1 0 17s
example-registry-quay-postgres-init-bktdt 0/1 Terminating 0 17s
example-registry-quay-redis-f9b9d44bf-4htpz 0/1 ContainerCreating 0 17s
While the deployment is in progress, the QuayRegistry object will show the current status. In this instance, database migrations are taking place, and other components are waiting until this completes.
status:
  conditions:
  - lastTransitionTime: "2021-09-14T10:52:04Z"
    lastUpdateTime: "2021-09-14T10:52:04Z"
    message: all objects created/updated successfully
    reason: ComponentsCreationSuccess
    status: "False"
    type: RolloutBlocked
  - lastTransitionTime: "2021-09-14T10:52:05Z"
    lastUpdateTime: "2021-09-14T10:52:05Z"
    message: running database migrations
    reason: MigrationsInProgress
    status: "False"
    type: Available
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-btbkcg8dc9
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
  lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC
  unhealthyComponents:
    clair:
    - lastTransitionTime: "2021-09-14T10:51:32Z"
      lastUpdateTime: "2021-09-14T10:51:32Z"
      message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.'
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    - lastTransitionTime: "2021-09-14T10:51:32Z"
      lastUpdateTime: "2021-09-14T10:51:32Z"
      message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.'
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    mirror:
    - lastTransitionTime: "2021-09-14T10:51:32Z"
      lastUpdateTime: "2021-09-14T10:51:32Z"
      message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.'
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
When the deployment process finishes successfully, the status in the QuayRegistry object shows no unhealthy components:
status:
  conditions:
  - lastTransitionTime: "2021-09-14T10:52:36Z"
    lastUpdateTime: "2021-09-14T10:52:36Z"
    message: all registry component healthchecks passing
    reason: HealthChecksPassing
    status: "True"
    type: Available
  - lastTransitionTime: "2021-09-14T10:52:46Z"
    lastUpdateTime: "2021-09-14T10:52:46Z"
    message: all objects created/updated successfully
    reason: ComponentsCreationSuccess
    status: "False"
    type: RolloutBlocked
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-hg7gg7h57m
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
  currentVersion: 3.7.0
  lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC
  registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org
  unhealthyComponents: {}
Deploying Project Quay from the OpenShift console
- Create a namespace, for example, quay-enterprise.
- Select Operators → Installed Operators, then select the Quay Operator to navigate to the Operator detail view.
- Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'.
- Optionally change the 'Name' of the QuayRegistry. This will affect the hostname of the registry. All other fields have been populated with defaults.
- Click 'Create' to submit the QuayRegistry to be deployed by the Quay Operator.
- You should be redirected to the QuayRegistry list view. Click on the QuayRegistry you just created to see the details view.
- Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.
Using the Quay UI to create the first user
Note
|
This procedure assumes that the FEATURE_USER_CREATION config option has not been set to false. If it is false, the Create Account functionality on the UI is disabled, and you must use the API to create the first user instead. |
- In the OpenShift console, navigate to Operators → Installed Operators, with the appropriate namespace/project.
- Click on the newly installed QuayRegistry to view the details.
- Once the Registry Endpoint has a value, navigate to this URL in your browser.
- Select 'Create Account' in the Quay registry UI to create a user.
- Enter details for username, password, and email, and click Create Account.
- You are automatically logged in to the Quay registry.
Configuring Quay on OpenShift
Once deployed, you can configure the Quay application by editing the Quay configuration bundle secret spec.configBundleSecret, and you can also change the managed status of components in the spec.components object of the QuayRegistry resource.
Alternatively, you can use the config editor UI to configure the Quay application, as described in the section Using the config tool to reconfigure Quay on OpenShift.
Editing the config bundle secret in the OpenShift console
- On the Quay Registry overview screen, click the link for the Config Bundle Secret.
- To edit the secret, click Actions → Edit Secret.
- Modify the configuration and save the changes.
- Monitor the deployment to ensure successful completion and that the configuration changes have taken effect.
Determining QuayRegistry endpoints and secrets
You can examine the QuayRegistry resource, using oc describe quayregistry or oc get quayregistry -o yaml, to determine the current endpoints and secrets:
$ oc get quayregistry example-registry -n quay-enterprise -o yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  ...
  name: example-registry
  namespace: quay-enterprise
  ...
spec:
  components:
  - kind: quay
    managed: true
  ...
  - kind: clairpostgres
    managed: true
  configBundleSecret: init-config-bundle-secret
status:
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-fg2gdgtm24
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.gcp.quaydev.org
  currentVersion: 3.7.0
  lastUpdated: 2022-05-11 13:28:38.199476938 +0000 UTC
  registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org
The relevant fields are:
- registryEndpoint: The URL for your registry, for browser access to the registry UI, and for the registry API endpoint
- configBundleSecret: The config bundle secret, containing the config.yaml file and any SSL certs
- configEditorEndpoint: The URL for the config editor tool, for browser access to the config tool, and for the configuration API
- configEditorCredentialsSecret: The secret containing the username (typically quayconfig) and the password for the config editor tool
To determine the username and password for the config editor tool:
- Retrieve the secret:

$ oc get secret -n quay-enterprise example-registry-quay-config-editor-credentials-fg2gdgtm24 -o yaml

apiVersion: v1
data:
  password: SkZwQkVKTUN0a1BUZmp4dA==
  username: cXVheWNvbmZpZw==
kind: Secret

- Decode the username:

$ echo 'cXVheWNvbmZpZw==' | base64 --decode
quayconfig

- Decode the password:

$ echo 'SkZwQkVKTUN0a1BUZmp4dA==' | base64 --decode
JFpBEJMCtkPTfjxt
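As a convenience, the same value can be extracted and decoded in one step; this is just an alternative sketch using standard jsonpath output:

$ oc get secret -n quay-enterprise example-registry-quay-config-editor-credentials-fg2gdgtm24 -o jsonpath='{.data.password}' | base64 --decode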
Downloading the existing configuration
There are a number of methods for accessing the current configuration:
- Using the config editor endpoint, specifying the username and password for the config editor:
$ curl -k -u quayconfig:JFpBEJMCtkPTfjxt https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org/api/v1/config
{ "config.yaml": { "ALLOW_PULLS_WITHOUT_STRICT_LOGGING": false, "AUTHENTICATION_TYPE": "Database", ... "USER_RECOVERY_TOKEN_LIFETIME": "30m" }, "certs": { "extra_ca_certs/service-ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURVVENDQWptZ0F3SUJBZ0lJRE9kWFhuUXFjMUF3RFFZSktvWklodmNOQVFFTEJRQXdOakUwTURJR0ExVUUKQXd3cmIzQmxibk5vYVdaMExYTmxjblpwWTJVdGMyVnlkbWx1WnkxemFXZHVaWEpBTVRZek1UYzNPREV3TXpBZQpGdzB5TVRBNU1UWXdOelF4TkRKYUZ..." } }
- Using the config bundle secret:
- Get the secret data:

$ oc get secret -n quay-enterprise init-config-bundle-secret -o jsonpath='{.data}'

Sample output:

{
  "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo="
}

- Decode the data:

$ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode

FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_PROXY_CACHE: true
FEATURE_BUILD_SUPPORT: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
Using the config bundle to configure custom SSL certs
You can configure custom SSL certs either before initial deployment or after Project Quay is deployed on OpenShift, by creating or updating the config bundle secret. If you are adding the cert(s) to an existing deployment, you must include the existing config.yaml in the new config bundle secret, even if you are not making any configuration changes.
Set TLS to unmanaged
In your QuayRegistry YAML, set the tls component to managed: false:

- kind: tls
  managed: false
In the events, you should see that the change is blocked until you set up the appropriate config:
- lastTransitionTime: '2022-03-28T12:56:49Z'
  lastUpdateTime: '2022-03-28T12:56:49Z'
  message: >-
    required component `tls` marked as unmanaged, but `configBundleSecret`
    is missing necessary fields
  reason: ConfigInvalid
  status: 'True'
Add certs to config bundle
- Create the secret using embedded data or using files:
- Embed the configuration details directly in the Secret resource YAML file, for example:

custom-ssl-config-bundle.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: custom-ssl-config-bundle-secret
  namespace: quay-enterprise
stringData:
  config.yaml: |
    FEATURE_USER_INITIALIZE: true
    BROWSER_API_CALLS_XHR_ONLY: false
    SUPER_USERS:
    - quayadmin
    FEATURE_USER_CREATION: false
    FEATURE_QUOTA_MANAGEMENT: true
    FEATURE_PROXY_CACHE: true
    FEATURE_BUILD_SUPPORT: true
    DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
  extra_ca_cert_my-custom-ssl.crt: |
    -----BEGIN CERTIFICATE-----
    MIIDsDCCApigAwIBAgIUCqlzkHjF5i5TXLFy+sepFrZr/UswDQYJKoZIhvcNAQEL
    BQAwbzELMAkGA1UEBhMCSUUxDzANBgNVBAgMBkdBTFdBWTEPMA0GA1UEBwwGR0FM
    ....
    -----END CERTIFICATE-----

Next, create the secret from the YAML file:

$ oc create -f custom-ssl-config-bundle.yaml

- Alternatively, you can create files containing the desired information, and then create the secret from those files:

$ oc create secret generic custom-ssl-config-bundle-secret \
  --from-file=config.yaml \
  --from-file=extra_ca_cert_my-custom-ssl.crt=my-custom-ssl.crt
- Create or update the QuayRegistry YAML file quayregistry.yaml, referencing the created Secret, for example:

quayregistry.yaml:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: custom-ssl-config-bundle-secret
- Deploy or update the registry using the YAML file:

$ oc apply -f quayregistry.yaml
Using the config tool to reconfigure Quay on OpenShift
Accessing the config editor
In the Details section of the QuayRegistry screen, the endpoint for the config editor is available, along with a link to the secret containing the credentials for logging into the config editor:
Retrieving the config editor credentials
- Click on the link for the config editor secret.
- In the Data section of the Secret details screen, click Reveal values to see the credentials for logging in to the config editor.
Logging in to the config editor
Browse to the config editor endpoint and then enter the username, typically quayconfig, and the corresponding password to access the config tool:
Changing configuration
In this example of updating the configuration, an expiration period for the time machine functionality is added via the config editor tool:
- Add an expiration period, for example 4w, for the time machine functionality.
- Select Validate Configuration Changes to ensure that the changes are valid.
- Apply the changes by pressing the Reconfigure Quay button.
- The config tool notifies you that the change has been submitted to Quay.
Note
|
Reconfiguring Project Quay using the config tool UI can lead to the registry being unavailable for a short time, while the updated configuration is applied. |
Monitoring reconfiguration in the UI
QuayRegistry resource
After reconfiguring the Operator, you can track the progress of the redeployment in the YAML tab for the specific instance of QuayRegistry, in this case, example-registry:
Each time the status changes, you will be prompted to reload the data to see the updated version. Eventually, the Operator will reconcile the changes, and there will be no unhealthy components reported.
Events
The Events tab for the QuayRegistry shows some events related to the redeployment:
Streaming events, for all resources in the namespace that are affected by the reconfiguration, are available in the OpenShift console under Home → Events:
Accessing updated information after reconfiguration
Accessing the updated config tool credentials in the UI
With Project Quay 3.7, reconfiguring Quay through the UI no longer generates a new login password. The password is now generated only once and remains the same after reconciling QuayRegistry objects.
Accessing the updated config.yaml in the UI
Use the config bundle to access the updated config.yaml file.
- On the QuayRegistry details screen, click on the Config Bundle Secret.
- In the Data section of the Secret details screen, click Reveal values to see the config.yaml file.
- Check that the change has been applied. In this case, 4w should be in the list of TAG_EXPIRATION_OPTIONS:

...
SERVER_HOSTNAME: example-quay-openshift-operators.apps.docs.quayteam.org
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
TAG_EXPIRATION_OPTIONS:
- 2w
- 4w
...
Custom SSL certificates UI
The config tool can be used to load custom certificates to facilitate access to resources such as external databases. Select the custom certs to be uploaded, ensuring that they are in PEM format, with an extension of .crt.
The config tool also displays a list of any uploaded certificates. Once you upload your custom SSL cert, it will appear in the list:
External Access to the Registry
When running on OpenShift, the Routes API is available and will automatically be used as a managed component. After creating the QuayRegistry, the external access point can be found in the status block of the QuayRegistry:

status:
  registryEndpoint: some-quay.my-namespace.apps.mycluster.com
Quay Operator features
Console monitoring and alerting
Project Quay provides support for monitoring Quay instances that were deployed using the Operator, from inside the OpenShift console. The new monitoring features include a Grafana dashboard, access to individual metrics, and alerting to notify for frequently restarting Quay pods.
Note
|
To enable the monitoring features, the Operator must be installed in "all namespaces" mode. |
Dashboard
In the OpenShift console, navigate to Monitoring → Dashboards and search for the dashboard of your desired Quay registry instance:
The dashboard shows various statistics including:
- The number of Organizations, Repositories, Users, and Robot accounts
- CPU Usage and Max Memory Usage
- Rates of Image Pulls and Pushes, and Authentication requests
- API request rate
- Latencies
Metrics
You can see the underlying metrics behind the Quay dashboard by accessing Monitoring → Metrics in the UI. In the Expression field, enter the text quay_ to see the list of metrics available:
Select a sample metric, for example, quay_org_rows:
This metric shows the number of organizations in the registry, and it is directly surfaced in the dashboard as well.
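For example, the following PromQL expression is a sketch you could enter in the Expression field; it assumes each Quay pod reports the same gauge value, so taking the maximum avoids double counting across replicas:

max(quay_org_rows)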
Alerting
An alert is raised if the Quay pods restart too often. The alert can be configured by accessing the Alerting rules tab from Monitoring → Alerting in the console UI and searching for the Quay-specific alert:
Select the QuayPodFrequentlyRestarting rule detail to configure the alert:
Manually updating the vulnerability databases for Clair in an air-gapped OpenShift cluster
Clair utilizes packages called updaters that encapsulate the logic of fetching and parsing different vulnerability databases. Clair supports running updaters in a different environment and importing the results. This is aimed at supporting installations that disallow the Clair cluster from talking to the Internet directly.
To manually update the vulnerability databases for Clair in an air-gapped OpenShift cluster, use the following steps:
- Obtain the clairctl program
- Retrieve the Clair config
- Use clairctl to export the updaters bundle from a Clair instance that has access to the internet
- Update the Clair config in the air-gapped OpenShift cluster to allow access to the Clair database
- Transfer the updaters bundle from the system with internet access, to make it available inside the air-gapped environment
- Use clairctl to import the updaters bundle into the Clair instance for the air-gapped OpenShift cluster
Obtaining clairctl
To obtain the clairctl program from a Clair deployment in an OpenShift cluster, use the oc cp command, for example:

$ oc -n quay-enterprise cp example-registry-clair-app-64dd48f866-6ptgw:/usr/bin/clairctl ./clairctl
$ chmod u+x ./clairctl

For a standalone Clair deployment, use the podman cp command, for example:

$ sudo podman cp clairv4:/usr/bin/clairctl ./clairctl
$ chmod u+x ./clairctl
Retrieving the Clair config
Clair on OpenShift config
To retrieve the configuration file for a Clair instance deployed using the OpenShift Operator, retrieve and decode the config secret using the appropriate namespace, and save it to file, for example:
$ kubectl get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={$.data['config\.yaml']}" | base64 -d > clair-config.yaml
An excerpt from a Clair configuration file is shown below:
http_listen_addr: :8080
introspection_addr: ""
log_level: info
indexer:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  scanner:
    package: {}
    dist: {}
    repo: {}
  airgap: false
matcher:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  max_conn_pool: 100
  indexer_addr: ""
  migrations: true
  period: null
  disable_updaters: false
notifier:
  connstring: host=example-registry-clair-postgres port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  migrations: true
  indexer_addr: ""
  matcher_addr: ""
  poll_interval: 5m
  delivery_interval: 1m
...
Standalone Clair config
For standalone Clair deployments, the config file is the one specified in the CLAIR_CONF environment variable in the podman run command, for example:

sudo podman run -d --rm --name clairv4 \
  -p 8081:8081 -p 8089:8089 \
  -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo \
  -v /etc/clairv4/config:/clair:Z \
  quay.io/projectquay/clair:v3.8.0
Exporting the updaters bundle
From a Clair instance that has access to the internet, use clairctl with the appropriate configuration file to export the updaters bundle:
$ ./clairctl --config ./config.yaml export-updaters updates.gz
Configuring access to the Clair database in the air-gapped OpenShift cluster
- Use kubectl to determine the Clair database service:

$ kubectl get svc -n quay-enterprise

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
example-registry-clair-app        ClusterIP   172.30.224.93   <none>        80/TCP,8089/TCP   4d21h
example-registry-clair-postgres   ClusterIP   172.30.246.88   <none>        5432/TCP          4d21h
...
- Forward the Clair database port so that it is accessible from the local machine, for example:
$ kubectl port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
- Update the Clair configuration file:

clair-config.yaml:

indexer:
  connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  scanner:
    repo:
      rhel-repository-scanner:
        repo2cpe_mapping_file: /data/cpe-map.json
    package:
      rhel_containerscanner:
        name2repos_mapping_file: /data/repo-map.json
Note
- Replace the value of the host in the multiple connstring fields with localhost.
- As an alternative to using kubectl port-forward, you can use kubefwd instead. With this method, there is no need to modify the connstring field in the Clair configuration file to use localhost.
Importing the updaters bundle into the air-gapped environment
After transferring the updaters bundle to the air-gapped environment, use clairctl to import the bundle into the Clair database deployed by the OpenShift Operator:
$ ./clairctl --config ./clair-config.yaml import-updaters updates.gz
FIPS readiness and compliance
FIPS (the Federal Information Processing Standard developed by the National Institute of Standards and Technology, NIST) is regarded as the gold standard for securing and encrypting sensitive data, particularly in heavily regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux and Red Hat OpenShift Container Platform support this standard by providing a FIPS mode, in which the system only allows the use of certain FIPS-validated cryptographic modules, such as openssl, ensuring FIPS compliance.
Project Quay supports running on FIPS-enabled RHEL and Red Hat OpenShift Container Platform from version 3.5.
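If you want to confirm that a host is actually running in FIPS mode before deploying Project Quay on it, a quick check on a RHEL 8 or later machine (a general RHEL verification step, not something specific to Quay) looks like the following; for an OpenShift node, the same command can be run through oc debug. The <node-name> value is a placeholder for one of your cluster nodes:
$ fips-mode-setup --check
FIPS mode is enabled.
$ oc debug node/<node-name> -- chroot /host fips-mode-setup --check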
Advanced Concepts
Deploying Quay on infrastructure nodes
By default, Quay-related pods are placed on arbitrary worker nodes when using the Operator to deploy the registry. The OpenShift Container Platform documentation shows how to use machine sets to configure nodes to only host infrastructure components (see https://docs.openshift.com/container-platform/4.7/machine_management/creating-infrastructure-machinesets.html).
If you are not using OCP MachineSet resources to deploy infra nodes, this section shows you how to manually label and taint nodes for infrastructure purposes.
Once you have configured your infrastructure nodes, either manually or using machine sets, you can then control the placement of Quay pods on these nodes using node selectors and tolerations.
Label and taint nodes for infrastructure use
In the cluster used in this example, there are three master nodes and six worker nodes:
$ oc get nodes NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583
Label the final three worker nodes for infrastructure use:
$ oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=
Now, when you list the nodes in the cluster, the last three worker nodes have an added role of infra:
$ oc get nodes NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583
With an infra node being assigned as a worker, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and then add tolerations for the pods you want to control.
$ oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
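To confirm that the taints were applied, describe one of the nodes and check its Taints field. This is only a verification step and uses one of the node names from the example above:
$ oc describe node user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal | grep Taints
Taints:             node-role.kubernetes.io/infra:NoSchedule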
Create a Project with node selector and toleration
If you have already deployed Quay using the Quay Operator, remove the installed operator and any specific namespace(s) you created for the deployment.
Create a Project resource, specifying a node selector and toleration as shown in the following example:
kind: Project
apiVersion: project.openshift.io/v1
metadata:
  name: quay-registry
  annotations:
    openshift.io/node-selector: 'node-role.kubernetes.io/infra='
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Exists", "effect": "NoSchedule", "key": "node-role.kubernetes.io/infra"}]
Use the oc apply
command to create the project:
$ oc apply -f quay-registry.yaml project.project.openshift.io/quay-registry created
Any subsequent resources created in the quay-registry
namespace should now be scheduled on the dedicated infrastructure nodes.
Install the Quay Operator in the namespace
When installing the Quay Operator, specify the appropriate project namespace explicitly, in this case quay-registry
. This will result in the operator pod itself landing on one of the three infrastructure nodes:
$ oc get pods -n quay-registry -o wide NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
Create the registry
Create the registry as explained earlier, and then wait for the deployment to be ready. When you list the Quay pods, you should now see that they have only been scheduled on the three nodes that you have labelled for infrastructure purposes:
$ oc get pods -n quay-registry -o wide NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-config-editor-7bf9bccc7b-dpc6d 1/1 Running 0 5m57s 10.131.0.23 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
Enabling monitoring when Operator is installed in a single namespace
When Project Quay Operator is installed in a single namespace, the monitoring component is unmanaged. To configure monitoring, you need to enable it for user-defined namespaces in OpenShift Container Platform. For more information, see the OCP documentation for Configuring the monitoring stack and Enabling monitoring for user-defined projects.
The following steps show you how to configure monitoring for Quay, based on the OCP documentation.
Creating a cluster monitoring config map
-
Check whether the
cluster-monitoring-config
ConfigMap object exists:$ oc -n openshift-monitoring get configmap cluster-monitoring-config Error from server (NotFound): configmaps "cluster-monitoring-config" not found
-
If the ConfigMap object does not exist:
-
Create the following YAML manifest. In this example, the file is called
cluster-monitoring-config.yaml
:$ cat cluster-monitoring-config.yaml apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |
-
Create the ConfigMap object:
$ oc apply -f cluster-monitoring-config.yaml configmap/cluster-monitoring-config created
$ oc -n openshift-monitoring get configmap cluster-monitoring-config NAME DATA AGE cluster-monitoring-config 1 12s
-
Creating a user-defined workload monitoring config map
-
Check whether the
user-workload-monitoring-config
ConfigMap object exists:$ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config Error from server (NotFound): configmaps "user-workload-monitoring-config" not found
-
If the ConfigMap object does not exist:
-
Create the following YAML manifest. In this example, the file is called
user-workload-monitoring-config.yaml
:$ cat user-workload-monitoring-config.yaml apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |
-
Create the ConfigMap object:
$ oc apply -f user-workload-monitoring-config.yaml configmap/user-workload-monitoring-config created
-
Enable monitoring for user-defined projects
-
Check whether monitoring for user-defined projects is running:
$ oc get pods -n openshift-user-workload-monitoring No resources found in openshift-user-workload-monitoring namespace.
-
Edit the
cluster-monitoring-config
ConfigMap:$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
-
Set
enableUserWorkload: true
to enable monitoring for user-defined projects on the cluster:apiVersion: v1 data: config.yaml: | enableUserWorkload: true kind: ConfigMap metadata: annotations:
-
Save the file to apply the changes and then check that the appropriate pods are running:
$ oc get pods -n openshift-user-workload-monitoring NAME READY STATUS RESTARTS AGE prometheus-operator-6f96b4b8f8-gq6rl 2/2 Running 0 15s prometheus-user-workload-0 5/5 Running 1 12s prometheus-user-workload-1 5/5 Running 1 12s thanos-ruler-user-workload-0 3/3 Running 0 8s thanos-ruler-user-workload-1 3/3 Running 0 8s
Create a Service object to expose Quay metrics
-
Create a YAML file for the Service object:
$ cat quay-service.yaml apiVersion: v1 kind: Service metadata: annotations: labels: quay-component: monitoring quay-operator/quayregistry: example-registry name: example-registry-quay-metrics namespace: quay-enterprise spec: ports: - name: quay-metrics port: 9091 protocol: TCP targetPort: 9091 selector: quay-component: quay-app quay-operator/quayregistry: example-registry type: ClusterIP
-
Create the Service object:
$ oc apply -f quay-service.yaml service/example-registry-quay-metrics created
Create a ServiceMonitor object
Configure OpenShift Monitoring to scrape the metrics by creating a ServiceMonitor resource.
-
Create a YAML file for the ServiceMonitor resource:
$ cat quay-service-monitor.yaml apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: quay-operator/quayregistry: example-registry name: example-registry-quay-metrics-monitor namespace: quay-enterprise spec: endpoints: - port: quay-metrics namespaceSelector: any: true selector: matchLabels: quay-component: monitoring
-
Create the ServiceMonitor:
$ oc apply -f quay-service-monitor.yaml servicemonitor.monitoring.coreos.com/example-registry-quay-metrics-monitor created
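To spot-check that Quay is serving metrics on the port targeted by the Service, you can port-forward to the Service in one terminal and query the endpoint from another. This is an optional sanity check; the Service name and namespace are taken from the examples above:
$ oc -n quay-enterprise port-forward svc/example-registry-quay-metrics 9091:9091
$ curl -s http://localhost:9091/metrics | grep '^quay_' | head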
View the metrics in OpenShift
You can access the metrics in the OpenShift console under Monitoring → Metrics. In the Expression field, enter the text quay_
to see the list of metrics available:
For example, if you have added users to your registry, select the quay_user_rows metric:
Resizing Managed Storage
The Quay Operator creates default object storage using the defaults provided by RHOCS when creating a NooBaa object (50 GiB). There are two ways to extend this storage: you can resize the existing PVC or add more PVCs to a new storage pool.
Resize Noobaa PVC
-
Log into the OpenShift console and select
Storage
→Persistent Volume Claims
. -
Select the
PersistentVolumeClaim
named likenoobaa-default-backing-store-noobaa-pvc-*
. -
From the Action menu, select
Expand PVC
. -
Enter the new size of the Persistent Volume Claim and select
Expand
.
After a few minutes (depending on the size of the PVC), the expanded size should be reflected in the PVC's Capacity field.
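If you prefer the command line, the same expansion can be requested by patching the PVC's storage request. This is a sketch only: the PVC name suffix and the 100Gi size are placeholders, and the expansion succeeds only if the backing storage class has allowVolumeExpansion enabled:
$ oc -n openshift-storage patch pvc noobaa-default-backing-store-noobaa-pvc-<suffix> \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'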
Note
|
Expanding CSI volumes is a Technology Preview feature only. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/storage/expanding-persistent-volumes. |
Add Another Storage Pool
-
Log into the OpenShift console and select
Networking
→Routes
. Make sure theopenshift-storage
project is selected. -
Click on the
Location
field for thenoobaa-mgmt
Route. -
Log into the Noobaa Management Console.
-
On the main dashboard, under
Storage Resources
, selectAdd Storage Resources
. -
Select
Deploy Kubernetes Pool
-
Enter a new pool name. Click
Next
. -
Choose the number of Pods to manage the pool and set the size per node. Click
Next
. -
Click
Deploy
.
After a few minutes, the additional storage pool will be added to the Noobaa resources and available for use by Project Quay.
Customizing Default Operator Images
Note
|
Using this mechanism is not supported for production Quay environments and is intended only for development and testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Quay Operator. |
In certain circumstances, it may be useful to override the default images used by the Operator. This can be done by setting one or more environment variables in the Quay Operator ClusterServiceVersion
.
Environment Variables
The following environment variables are used in the Operator to override component images:
Environment Variable | Component
RELATED_IMAGE_COMPONENT_QUAY | base
RELATED_IMAGE_COMPONENT_CLAIR | clair
RELATED_IMAGE_COMPONENT_POSTGRES | postgres and clair databases
RELATED_IMAGE_COMPONENT_REDIS | redis
Note
|
Override images must be referenced by manifest (@sha256:), not by tag (:latest). |
Applying Overrides to a Running Operator
When the Quay Operator is installed in a cluster via the Operator Lifecycle Manager (OLM), the managed component container images can be easily overridden by modifying the ClusterServiceVersion
object, which is OLM’s representation of a running Operator in the cluster. Find the Quay Operator’s ClusterServiceVersion
either by using a Kubernetes UI or kubectl
/oc
:
$ oc get clusterserviceversions -n <your-namespace>
Using the UI, oc edit
, or any other method, modify the Quay ClusterServiceVersion
to include the environment variables outlined above to point to the override images:
JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env
- name: RELATED_IMAGE_COMPONENT_QUAY
value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
- name: RELATED_IMAGE_COMPONENT_CLAIR
value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
- name: RELATED_IMAGE_COMPONENT_POSTGRES
value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- name: RELATED_IMAGE_COMPONENT_REDIS
value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Note that this is done at the Operator level, so every QuayRegistry will be deployed using these same overrides.
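As an alternative to editing the ClusterServiceVersion interactively, the same change can be applied as a JSON patch. The sketch below is an assumption-based example: <csv-name>, <your-namespace>, and the digest are placeholders, and it appends a single override to the env list at the JSONPath shown above (the env list must already exist in the CSV):
$ oc -n <your-namespace> patch clusterserviceversion <csv-name> --type=json \
  -p '[{"op": "add", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-", "value": {"name": "RELATED_IMAGE_COMPONENT_CLAIR", "value": "quay.io/projectquay/clair@sha256:<digest>"}}]'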
AWS S3 CloudFront
If you use AWS S3 CloudFront for backend registry storage, specify the private key as shown in the following example:
$ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
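For context, the config_awss3cloudfront.yaml referenced above carries the storage section that points Quay at the CloudFront-fronted bucket. The excerpt below is illustrative only: it assumes the CloudFrontedS3Storage driver and uses placeholder values, so verify the field names against the Project Quay configuration guide for your version before using it:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - CloudFrontedS3Storage
    - cloudfront_distribution_domain: <distribution>.cloudfront.net
      cloudfront_key_id: <cloudfront-key-id>
      cloudfront_privatekey_filename: default-cloudfront-signing-key.pem
      host: s3.<region>.amazonaws.com
      s3_access_key: <access-key>
      s3_secret_key: <secret-key>
      s3_bucket: <bucket-name>
      storage_path: /datastorage/registry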
Advanced Clair configuration
Unmanaged Clair configuration
With Project Quay 3.7, users can run an unmanaged Clair configuration on the Project Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or to run a custom Clair configuration while keeping the managed Clair database.
Unmanaging a Clair database
An unmanaged Clair database allows the Project Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster.
-
In the Quay Operator, set the
clairpostgres
component of the QuayRegistry custom resource to unmanaged:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay370
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: clairpostgres
      managed: false
Configuring a custom Clair database
The Project Quay Operator for OpenShift Container Platform allows users to provide their own Clair configuration by editing the configBundleSecret
parameter.
-
Create a Quay config bundle secret that includes the
clair-config.yaml
:$ oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
Example
clair-config.yaml
configuration:
indexer:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  layer_scan_concurrency: 6
  migrations: true
  scanlock_retry: 11
log_level: debug
matcher:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
metrics:
  name: prometheus
notifier:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
Note-
The database certificate is mounted under
/run/certs/rds-ca-2019-root.pem
on the Clair application pod in theclair-config.yaml
. It must be specified when configuring yourclair-config.yaml
. -
An example
clair-config.yaml
can be found at Clair on OpenShift config.
-
-
Add the
clair-config.yaml
to your bundle secret, named configBundleSecret:
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
data:
  config.yaml: <base64 encoded Quay config>
  clair-config.yaml: <base64 encoded Clair config>
  extra_ca_cert_<name>: <base64 encoded ca cert>
  clair-ssl.crt: >-
  clair-ssl.key: >-
NoteWhen updated, the provided
clair-config.yaml
is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.
After proper configuration, the Clair application pod should return to a Ready
state.
Running a custom Clair configuration with a managed
database
In some cases, users might want to run a custom Clair configuration with a managed
database. This is useful in the following scenarios:
-
When a user wants to disable an updater.
-
When a user is running in an air-gapped environment.
Note-
If you are running Quay in an air-gapped environment, the
airgap
parameter of yourclair-config.yaml
must be set totrue
. -
If you are running Quay in an air-gapped environment, you should disable all updaters.
-
Use the steps in "Configuring a custom Clair database" to configure your database when clairpostgres
is set to managed
.
For more information about running Clair in an air-gapped environment, see Configuring access to the Clair database in the air-gapped OpenShift cluster.
Project Quay build enhancements
Prior to Project Quay 3.7, Quay ran podman
commands in virtual machines launched by pods. Running builds on virtual platforms requires enabling nested virtualization, which is not featured in Red Hat Enterprise Linux or OpenShift Container Platform. As a result, builds had to run on bare-metal clusters, which is an inefficient use of resources.
With Project Quay 3.7, the bare-metal constraint required to run builds has been removed by adding an additional build option which does not contain the virtual machine layer. As a result, builds can be run on virtualized platforms. Backwards compatibility to run previous build configurations is also available.
Project Quay enhanced build architecture
The preceding image shows the expected design flow and architecture of the enhanced build features:
With this enhancement, the build manager first creates the Job Object. The Job Object then creates a pod using the quay-builder-image, which contains the quay-builder binary and the Podman service. The created pod runs as unprivileged. The quay-builder binary then builds the image while communicating status and retrieving build information from the Build Manager.
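If you want to confirm that a builder pod really runs unprivileged while a build is in progress, you can inspect its security context. The namespace and pod name below are placeholders; the builder namespace is created later in this chapter (for example, virtual-builders):
$ oc get pod -n virtual-builders <builder-pod-name> \
  -o jsonpath='{.spec.containers[0].securityContext}{"\n"}'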
Project Quay build limitations
Running builds in Project Quay in an unprivileged context might cause some commands that were working under the previous build strategy to fail. Attempts to change the build strategy could cause performance and reliability issues with builds.
Running builds directly in a container does not have the same isolation as using virtual machines. Changing the build environment might also cause builds that were previously working to fail.
Creating a Project Quay builders environment with OpenShift
OpenShift TLS component
The tls
component allows you to control TLS configuration.
Note
|
Project Quay 3.7 does not support builders when the TLS component is managed by the Operator. |
If you set tls
to unmanaged
, you supply your own ssl.cert
and ssl.key
files. In this instance, if you want your cluster to support builders, you must add both the Quay route and the builder route name to the SAN list in the cert, or alternatively use a wildcard. To add the builder route, use the following format:
[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443
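A quick way to confirm that both the Quay route and the builder route are present in the certificate's SAN list is to inspect the certificate with openssl; ssl.cert is the certificate file you supply when tls is unmanaged:
$ openssl x509 -in ssl.cert -noout -text | grep -A1 "Subject Alternative Name"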
Using OpenShift Container Platform for Project Quay builders
The following procedure describes how you can implement the builders feature in Project Quay.
-
Builders require SSL certificates. For more information, see Adding TLS certificates to the Project Quay container.
-
If you are using AWS S3 storage, you must modify your storage bucket in the AWS console, prior to running builders. See "Modifying your AWS S3 storage bucket" in the following section for the required parameters.
Procedure
Preparing OpenShift Container Platform for virtual builders
-
Log in to your Project Quay cluster using a cluster admin account.
-
Create a new project where your virtual builders will be run (e.g.,
virtual-builders
).$ oc new-project virtual-builders
-
Create a
ServiceAccount
in thisProject
that will be used to run builds.$ oc create sa -n virtual-builders quay-builder
-
Provide the created service account with editing permissions so that it can run the build:
$ oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder
-
Grant the Quay builder
anyuid scc
permissions:$ oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder
NoteThis action requires cluster admin privileges. This is required because builders must run as the Podman user for unprivileged or rootless builds to work.
-
Obtain the token for the Quay builder service account.
-
If using OpenShift Container Platform 4.10 or an earlier version, enter the following command:
$ oc sa get-token -n virtual-builders quay-builder
-
If using OpenShift Container Platform 4.11 or later, enter the following command:
$ oc create token quay-builder -n virtual-builders
Sample outputeyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ...
-
-
Determine the builder route:
$ oc get route -n quay-enterprise
Sample outputNAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD ... example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None ...
-
Generate a self-signed SSL certificate with the .crt extension:
$ oc extract cm/kube-root-ca.crt -n openshift-apiserver ca.crt
$ mv ca.crt extra_ca_cert_build_cluster.crt
-
Locate the secret for your config bundle in the Console, then choose Actions → Edit Secret and add the appropriate builder configuration:
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- <superusername>
FEATURE_USER_CREATION: false
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: True
BUILDMAN_HOSTNAME: <sample_build_route> (1)
BUILD_MANAGER:
- ephemeral
- ALLOWED_WORKER_COUNT: 1
  ORCHESTRATOR_PREFIX: buildman/production/
  JOB_REGISTRATION_TIMEOUT: 3600 (2)
  ORCHESTRATOR:
    REDIS_HOST: <sample_redis_hostname> (3)
    REDIS_PASSWORD: ""
    REDIS_SSL: false
    REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
  EXECUTORS:
  - EXECUTOR: kubernetesPodman
    NAME: openshift
    BUILDER_NAMESPACE: <sample_builder_namespace> (4)
    SETUP_TIME: 180
    MINIMUM_RETRY_THRESHOLD:
    BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> (5)
    # Kubernetes resource options
    K8S_API_SERVER: <sample_k8s_api_server> (6)
    K8S_API_TLS_CA: <sample_crt_file> (7)
    VOLUME_SIZE: 8G
    KUBERNETES_DISTRIBUTION: openshift
    CONTAINER_MEMORY_LIMITS: 300m (8)
    CONTAINER_CPU_LIMITS: 1G (9)
    CONTAINER_MEMORY_REQUEST: 300m (10)
    CONTAINER_CPU_REQUEST: 1G (11)
    NODE_SELECTOR_LABEL_KEY: ""
    NODE_SELECTOR_LABEL_VALUE: ""
    SERVICE_ACCOUNT_NAME: <sample_service_account_name>
    SERVICE_ACCOUNT_TOKEN: <sample_account_token> (12)
-
The build route is obtained by running
oc get route -n
with the name of your OpenShift Operators namespace. A port must be provided at the end of the route, for example, and it should follow the following format:[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443
. -
If the
JOB_REGISTRATION_TIMEOUT
parameter is set too low, you might receive the following error:failed to register job to build manager: rpc error: code = Unauthenticated desc = Invalid build token: Signature has expired
. It is suggested that this parameter be set to at least 240. -
If your Redis host has a password or SSL certificates, you must update accordingly.
-
Set to match the name of your virtual builders namespace, for example,
virtual-builders
. -
For early access, the
BUILDER_CONTAINER_IMAGE
is currentlyquay.io/projectquay/quay-builder:3.7.0-rc.2
. Note that this might change during the early access window. In the event this happens, customers will be alerted. -
Obtained by running
oc cluster-info
. -
You must manually create and add your custom CA cert, for example,
K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt
. -
Defaults to 5120Mi if left unspecified.
-
For virtual builds, you must ensure that there are enough resources in your cluster. Defaults to 1000m if left unspecified.
-
Defaults to 3968Mi if left unspecified.
-
Defaults to 500m if left unspecified.
-
Obtained when running
oc create sa
.
Sample configFEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ"
-
Manually adding SSL certificates.
Create and sign certs
-
Create a certificate authority and sign a certificate. For more information, see Create a Certificate Authority and sign a certificate.
Note-
Add an
alt_name
for the URL of your Quay registry. -
Add an
alt_name
for theBUILDMAN_HOSTNAME
that is specified in your config.yaml.
openssl.cnf[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org
Sample commands$ openssl genrsa -out rootCA.key 2048 $ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem $ openssl genrsa -out ssl.key 2048 $ openssl req -new -key ssl.key -out ssl.csr $ openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf
-
Set TLS to unmanaged
In your Quay Registry yaml, set kind: tls
to managed: false
:
- kind: tls
managed: false
In the events, you should see that the change is blocked until you set up the appropriate config:
- lastTransitionTime: '2022-03-28T12:56:49Z'
lastUpdateTime: '2022-03-28T12:56:49Z'
message: >-
required component `tls` marked as unmanaged, but `configBundleSecret`
is missing necessary fields
reason: ConfigInvalid
status: 'True'
Create temporary secrets
-
Create a secret in your default namespace for the CA cert:
$ oc create secret generic -n quay-enterprise temp-crt --from-file extra_ca_cert_build_cluster.crt
-
Create a secret in your default namespace for the ssl.key and ssl.cert files:
$ oc create secret generic -n quay-enterprise quay-config-ssl --from-file ssl.cert --from-file ssl.key
Copy secret data to config.yaml
-
Locate the new secrets in the console UI at Workloads → Secrets.
-
For each secret, locate the YAML view:
kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' ... data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l.... type: Opaque
kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' ... data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque
-
Locate the secret for your Quay Registry configuration bundle in the UI, or via the command line by running a command such as:
$ oc get quayregistries.quay.redhat.com -o jsonpath="{.items[0].spec.configBundleSecret}{'\n'}" -n quay-enterprise
-
In the OpenShift console, select the YAML tab for your config bundle secret, and add the data from the two secrets you created:
kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' ... data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ... extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw.... ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque
-
Click Save. You should see the pods being re-started:
$ oc get pods -n quay-enterprise
Sample outputNAME READY STATUS RESTARTS AGE ... example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h
-
After your Quay registry has reconfigured, check that your Quay app pods are running:
$ oc get pods -n quay-enterprise
Sample outputexample-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h
-
In your browser, access the registry endpoint and validate that the certificate has been updated appropriately:
Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY
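If you prefer to verify the served certificate from the command line instead of the browser, openssl s_client can print the subject and issuer of the certificate presented by the registry endpoint; the hostname is taken from the example above:
$ openssl s_client -connect example-registry-quay-quay-enterprise.apps.docs.quayteam.org:443 \
  -servername example-registry-quay-quay-enterprise.apps.docs.quayteam.org </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer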
Using the UI to create a build trigger
-
Log in to your Quay repository.
-
Click Create New Repository and create a new registry, for example,
testrepo
. -
On the Repositories page, click the Builds tab on the left-hand pane. Alternatively, use the corresponding URL directly, for example:
https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/repository/quayadmin/testrepo?tab=builds
ImportantIn some cases, the builder might have issues resolving hostnames. This issue might be related to the
dnsPolicy
being set todefault
on the job object. Currently, there is no workaround for this issue. It will be resolved in a future version of Project Quay. -
Click Create Build Trigger → Custom Git Repository Push.
-
Enter the HTTPS or SSH style URL used to clone your Git repository, then click Continue. For example:
https://github.com/gabriel-rh/actions_test.git
-
Check Tag manifest with the branch or tag name and then click Continue.
-
Enter the location of the Dockerfile to build when the trigger is invoked, for example,
/Dockerfile
and click Continue. -
Enter the location of the context for the Docker build, for example,
/
, and click Continue. -
If warranted, create a Robot Account. Otherwise, click Continue.
-
Click Continue to verify the parameters.
-
On the Builds page, click Options icon of your Trigger Name, and then click Run Trigger Now.
-
Enter a commit SHA from the Git repository and click Start Build.
-
You can check the status of your build by clicking the commit in the Build History page, or by running
oc get pods -n virtual-builders
.$ oc get pods -n virtual-builders NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s
$ oc get pods -n virtual-builders NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Terminating 0 9s
$ oc get pods -n virtual-builders No resources found in virtual-builders namespace.
-
When the build is finished, you can check the status of the tag under Tags on the left hand pane.
NoteWith early access, full build logs and timestamps of builds are currently unavailable.
Modifying your AWS S3 storage bucket
If you are using AWS S3 storage, you must modify your storage bucket in the AWS console, prior to running builders.
-
Log in to your AWS console at s3.console.aws.amazon.com.
-
In the search bar, search for
S3
and then click S3. -
Click the name of your bucket, for example,
myawsbucket
. -
Click the Permissions tab.
-
Under Cross-origin resource sharing (CORS), include the following parameters:
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    },
    {
        "AllowedHeaders": [
            "Content-Type",
            "x-amz-acl",
            "origin"
        ],
        "AllowedMethods": [
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
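If you manage the bucket from the command line rather than the AWS console, the same rules can be applied with the AWS CLI. This assumes the rules have been saved locally as cors.json and that the bucket is named myawsbucket as in the example above; note that the CLI expects the rule array wrapped in a CORSRules object, unlike the console:
$ aws s3api put-bucket-cors --bucket myawsbucket --cors-configuration file://cors.json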
Geo-replication
Geo-replication allows multiple, geographically distributed Project Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Project Quay setup. Image data is asynchronously replicated in the background with transparent failover / redirect for clients.
With Project Quay 3.7, geo-replication is supported in both standalone and Operator deployments of Project Quay.
Geo-replication features
-
When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Project Quay instance (typically the nearest storage backend within the region).
-
After the initial push, image data will be replicated in the background to other storage engines.
-
The list of replication locations is configurable and those can be different storage backends.
-
An image pull will always use the closest available storage engine, to maximize pull performance.
-
If replication hasn’t been completed yet, the pull will use the source storage backend instead.
Geo-replication requirements and constraints
-
A single database, and therefore all metadata and Quay configuration, is shared across all regions.
-
A single Redis cache is shared across the entire Quay setup and needs to be accessible by all Quay pods.
-
The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the
QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable. -
Geo-Replication requires object storage in each region. It does not work with local storage or NFS.
-
Each region must be able to access every storage engine in each region (requires a network path).
-
Alternatively, the storage proxy option can be used.
-
The entire storage backend (all blobs) is replicated. This is in contrast to repository mirroring, which can be limited to an organization, repository, or image.
-
All Quay instances must share the same entrypoint, typically via load balancer.
-
All Quay instances must have the same set of superusers, as they are defined inside the common configuration file.
-
Geo-replication requires your Clair configuration to be set to
unmanaged
. An unmanaged Clair database allows the Project Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. For more information, see Advanced Clair configuration. -
Geo-Replication requires SSL/TLS certificates and keys. For more information, see Using SSL to protect connections to Project Quay.
If the above requirements cannot be met, you should instead use two or more distinct Quay deployments and take advantage of repository mirroring functionality.
Geo-replication using the Project Quay Operator
In the example shown above, the Project Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines.
Because the Operator now manages the Clair security scanner and its database separately, geo-replication setups can be leveraged so that they do not manage the Clair database. Instead, an external shared database would be used. Project Quay and Clair support several providers and vendors of PostgreSQL, which can be found in the Project Quay 3.x test matrix. Additionally, the Operator also supports custom Clair configurations that can be injected into the deployment, which allows users to configure Clair with the connection credentials for the external database.
Setting up geo-replication on OpenShift
-
Deploy Quay postgres instance:
-
Login to the database
-
Create a database for Quay
CREATE DATABASE quay;
-
Enable the pg_trgm extension inside the database
\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;
-
-
Deploy a Redis instance:
Note-
Deploying a Redis instance might be unnecessary if your cloud provider has its own service.
-
Deploying a Redis instance is required if you are leveraging Builders.
-
Deploy a VM for Redis
-
Make sure that it is accessible from the clusters where Quay is running
-
Port 6379/TCP must be open
-
Run Redis inside the instance
sudo dnf install -y podman
podman run -d --name redis -p 6379:6379 redis
-
-
Create two object storage backends, one for each cluster
Ideally, one object storage bucket will be close to the first (primary) cluster while the other will be located closer to the second (secondary) cluster.
-
Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster
-
Configure a load balancer, to provide a single entry point to the clusters
Configuration
The config.yaml
file is shared between clusters, and will contain the details for the common PostgreSQL, Redis and storage backends:
SERVER_HOSTNAME: <georep.quayteam.org or any other name> (1)
DB_CONNECTION_ARGS:
autorollback: true
threadlocals: true
DB_URI: postgresql://postgres:password@10.19.0.1:5432/quay (2)
BUILDLOGS_REDIS:
host: 10.19.0.2
port: 6379
USER_EVENTS_REDIS:
host: 10.19.0.2
port: 6379
DISTRIBUTED_STORAGE_CONFIG:
usstorage:
- GoogleCloudStorage
- access_key: GOOGQGPGVMASAAMQABCDEFG
bucket_name: georep-test-bucket-0
secret_key: AYWfEaxX/u84XRA2vUX5C987654321
storage_path: /quaygcp
eustorage:
- GoogleCloudStorage
- access_key: GOOGQGPGVMASAAMQWERTYUIOP
bucket_name: georep-test-bucket-1
secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678
storage_path: /quaygcp
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- usstorage
- eustorage
DISTRIBUTED_STORAGE_PREFERENCE:
- usstorage
- eustorage
FEATURE_STORAGE_REPLICATION: true
-
A proper
SERVER_HOSTNAME
must be used for the route and must match the hostname of the global load balancer. -
To retrieve the configuration file for a Clair instance deployed using the OpenShift Operator, see Retrieving the Clair config.
Create the configBundleSecret
:
$ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle
In each of the clusters, set the configBundleSecret
and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environmental variable override to configure the appropriate storage for that cluster:
Note
|
The |
apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage
+
Note
|
Because TLS is unmanaged and the route is managed, you must supply the certificates either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes. |
apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage
+
Note
|
Because TLS is unmanaged and the route is managed, you must supply the certificates either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes. |
Mixed storage for geo-replication
Project Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on-prem. This complicates the key requirement of granting access to all storage backends from all Project Quay pods and cluster nodes. As a result, it is recommended that you:
-
Use a VPN to prevent visibility of the internal storage or
-
Use a token pair that only allows access to the specified bucket used by Quay
This will result in the public cloud instance of Project Quay having access to on-prem storage but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements.
If you cannot implement these security measures, it may be preferable to deploy two distinct Project Quay registries and to use repository mirroring as an alternative to geo-replication.
Backing up and restoring Project Quay managed by the Project Quay Operator
Use the content within this section to back up and restore Project Quay when managed by the Project Quay Operator on OpenShift Container Platform.
Backing up Project Quay
This procedure describes how to create a backup of Project Quay deployed on OpenShift Container Platform using the Project Quay Operator.
-
A healthy Project Quay deployment on OpenShift Container Platform using the Project Quay Operator (status condition
Available
is set totrue
) -
The components
quay
,postgres
andobjectstorage
are set tomanaged: true
-
If the component
clair
is set tomanaged: true
the componentclairpostgres
is also set tomanaged: true
(starting with Project Quay Operator v3.7 or later)
Note
|
If your deployment contains partially unmanaged database or storage components and you are using external services for Postgres or S3-compatible object storage to run your Project Quay deployment, you must refer to the service provider or vendor documentation to create a backup of the data. You can refer to the tools described in this guide as a starting point on how to back up your external Postgres database or object storage. |
Project Quay configuration backup
-
Backup the
QuayRegistry
custom resource by exporting it:$ oc get quayregistry <quay-registry-name> -n <quay-namespace> -o yaml > quay-registry.yaml
-
Edit the resulting quay-registry.yaml and remove the status section and the following metadata fields (a yq sketch for this cleanup is shown after this procedure):
metadata.creationTimestamp
metadata.finalizers
metadata.generation
metadata.resourceVersion
metadata.uid
-
Backup the managed keys secret:
NoteIf you are running a version older than Project Quay 3.7.0, this step can be skipped. Some secrets are automatically generated while deploying Quay for the first time. These are stored in a secret called
<quay-registry-name>-quay-registry-managed-secret-keys
in the namespace of theQuayRegistry
resource.$ oc get secret -n <quay-namespace> <quay-registry-name>-quay-registry-managed-secret-keys -o yaml > managed-secret-keys.yaml
-
Edit the resulting
managed-secret-keys.yaml
file and remove the entrymetadata.ownerReferences
. Yourmanaged-secret-keys.yaml
file should look similar to the following:apiVersion: v1 kind: Secret type: Opaque metadata: name: <quayname>-quay-registry-managed-secret-keys namespace: <quay-namespace> data: CONFIG_EDITOR_PW: <redacted> DATABASE_SECRET_KEY: <redacted> DB_ROOT_PW: <redacted> DB_URI: <redacted> SECRET_KEY: <redacted> SECURITY_SCANNER_V4_PSK: <redacted>
All information under the
data
property should remain the same. -
Backup the current Quay configuration:
$ oc get secret -n <quay-namespace> $(oc get quayregistry <quay-registry-name> -n <quay-namespace> -o jsonpath='{.spec.configBundleSecret}') -o yaml > config-bundle.yaml
-
Backup the
/conf/stack/config.yaml
file mounted inside of the Quay pods:$ oc exec -it quay-pod-name -- cat /conf/stack/config.yaml > quay-config.yaml
Scale down your Project Quay deployment
Important
|
This step is needed to create a consistent backup of the state of your Project Quay deployment. Do not omit this step, including in setups where Postgres databases and/or S3-compatible object storage are provided by external services (unmanaged by the Operator). |
-
For Operator version 3.7 and newer: Scale down the Project Quay deployment by disabling auto scaling and overriding the replica count for Project Quay, mirror workers, and Clair (if managed). Your
QuayRegistry
resource should look similar to the following:apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: … - kind: horizontalpodautoscaler managed: false (1) - kind: quay managed: true overrides: (2) replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 …
-
Disable auto scaling of Quay, Clair and Mirroring workers
-
Set the replica count to 0 for components accessing the database and objectstorage
-
-
For Operator version 3.6 and earlier: Scale down the Project Quay deployment by scaling down the Project Quay Operator first and then the managed Project Quay resources:
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-operator-namespace>|awk '/^quay-operator/ {print $1}') -n <quay-operator-namespace>
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-namespace>|awk '/quay-app/ {print $1}') -n <quay-namespace>
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-namespace>|awk '/quay-mirror/ {print $1}') -n <quay-namespace>
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-namespace>|awk '/clair-app/ {print $1}') -n <quay-namespace>
-
Wait for the
registry-quay-app
,registry-quay-mirror
andregistry-clair-app
pods (depending on which components you set to be managed by the Project Quay Operator) to disappear. You can check their status by running the following command:$ oc get pods -n <quay-namespace>
Example output:
$ oc get pod quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-config-editor-6dfdcfc44f-hlvwm 1/1 Running 0 73s quayregistry-quay-database-859d5445ff-cqthr 1/1 Running 0 12m quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m
Project Quay managed database backup
Note
|
If your Project Quay deployment is configured with external (unmanaged) Postgres database(s), refer to your vendor’s documentation on how to create a consistent backup of these databases. |
-
Identify the Quay PostgreSQL pod name:
$ oc get pod -l quay-component=postgres -n <quay-namespace> -o jsonpath='{.items[0].metadata.name}'
Example output:
quayregistry-quay-database-59f54bb7-58xs7
-
Obtain the Quay database name:
$ oc -n <quay-namespace> rsh $(oc get pod -l app=quay -o NAME -n <quay-namespace> |head -n 1) cat /conf/stack/config.yaml|awk -F"/" '/^DB_URI/ {print $4}' quayregistry-quay-database
-
Download a backup database:
$ oc exec quayregistry-quay-database-59f54bb7-58xs7 -- /usr/bin/pg_dump -C quayregistry-quay-database > backup.sql
Project Quay managed object storage backup
The instructions in this section apply to the following configurations:
-
Standalone, multi-cloud object gateway configurations
-
OpenShift Data Foundation storage, where the Project Quay Operator provisioned an S3 object storage bucket through the ObjectBucketClaim API
Note
|
If your Project Quay deployment is configured with external (unmanaged) object storage, refer to your vendor’s documentation on how to create a copy of the content of Quay’s storage bucket. |
-
Decode and export the
AWS_ACCESS_KEY_ID
:$ export AWS_ACCESS_KEY_ID=$(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d)
-
Decode and export the
AWS_SECRET_ACCESS_KEY_ID
:$ export AWS_SECRET_ACCESS_KEY=$(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d)
-
Create a new directory and copy all blobs to it:
$ mkdir blobs
$ aws s3 sync --no-verify-ssl --endpoint https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') s3://$(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}') ./blobs
Scale the Project Quay deployment back up
-
For Operator version 3.7 and newer: Scale up the Project Quay deployment by re-enabling auto scaling, if desired, and removing the replica overrides for Quay, mirror workers and Clair as applicable. Your
QuayRegistry
resource should look similar to the following:apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: … - kind: horizontalpodautoscaler managed: true (1) - kind: quay (2) managed: true - kind: clair managed: true - kind: mirror managed: true …
-
Re-enables auto scaling of Quay, Clair and Mirroring workers again (if desired)
-
Replica overrides are removed again to scale the Quay components back up
-
-
For Operator version 3.6 and earlier: Scale up the Project Quay deployment by scaling up the Project Quay Operator again:
$ oc scale --replicas=1 deployment $(oc get deployment -n <quay-operator-namespace> | awk '/^quay-operator/ {print $1}') -n <quay-operator-namespace>
-
Check the status of the Project Quay deployment:
$ oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace>
Example output:
apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: ... name: registry namespace: <quay-namespace> ... spec: ... status: - lastTransitionTime: '2022-06-20T05:31:17Z' lastUpdateTime: '2022-06-20T17:31:13Z' message: All components reporting as healthy reason: HealthChecksPassing status: 'True' type: Available
Restoring Project Quay
This procedure is used to restore Project Quay when the Project Quay Operator manages the database. It should be performed after a backup of your Project Quay registry has been created. See Backing up Project Quay for more information.
-
Project Quay is deployed on OpenShift Container Platform using the Project Quay Operator.
-
A backup of the Project Quay configuration managed by the Project Quay Operator has been created following the instructions in the Backing up Project Quay section
-
Your Project Quay database has been backed up.
-
The object storage bucket used by Project Quay has been backed up.
-
The components
quay
,postgres
andobjectstorage
are set tomanaged: true
-
If the component
clair
is set tomanaged: true
, the componentclairpostgres
is also set tomanaged: true
(starting with Project Quay Operator v3.7 or later) -
There is no running Project Quay deployment managed by the Project Quay Operator in the target namespace on your OpenShift Container Platform cluster
Note
|
If your deployment contains partially unmanaged database or storage components and you are using external services for Postgres or S3-compatible object storage to run your Project Quay deployment, you must refer to the service provider or vendor documentation to restore their data from a backup prior to restoring Project Quay. |
Restoring Project Quay and its configuration from a backup
Note
|
These instructions assume you have followed the process in the Backing up Project Quay guide and created the backup files with the same names. |
-
Restore the backed up Project Quay configuration and the generated keys from the backup:
$ oc create -f ./config-bundle.yaml
$ oc create -f ./managed-secret-keys.yaml
ImportantIf you receive the error
Error from server (AlreadyExists): error when creating "./config-bundle.yaml": secrets "config-bundle-secret" already exists
, you must delete your existing resource with$ oc delete Secret config-bundle-secret -n <quay-namespace>
and recreate it with$ oc create -f ./config-bundle.yaml
. -
Restore the
QuayRegistry
custom resource:$ oc create -f ./quay-registry.yaml
-
Check the status of the Project Quay deployment and wait for it to be available:
$ oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace>
Scale down your Project Quay deployment
-
For Operator version 3.7 and newer: Scale down the Project Quay deployment by disabling auto scaling and overriding the replica count for Quay, mirror workers and Clair (if managed). Your
QuayRegistry
resource should look similar to the following:apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: … - kind: horizontalpodautoscaler managed: false (1) - kind: quay managed: true overrides: (2) replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 …
-
Disable auto scaling of Quay, Clair and Mirroring workers
-
Set the replica count to 0 for components accessing the database and objectstorage
-
-
For Operator version 3.6 and earlier: Scale down the Project Quay deployment by scaling down the Project Quay Operator first and then the managed Project Quay resources:
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-operator-namespace> | awk '/^quay-operator/ {print $1}') -n <quay-operator-namespace>
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-namespace> | awk '/quay-app/ {print $1}') -n <quay-namespace>
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-namespace> | awk '/quay-mirror/ {print $1}') -n <quay-namespace>
$ oc scale --replicas=0 deployment $(oc get deployment -n <quay-namespace> | awk '/clair-app/ {print $1}') -n <quay-namespace>
-
Wait for the registry-quay-app, registry-quay-mirror and registry-clair-app pods (depending on which components you set to be managed by the Operator) to disappear. You can check their status by running the following command:
$ oc get pods -n <quay-namespace>
Example output:
registry-quay-config-editor-77847fc4f5-nsbbv   1/1   Running   0   9m1s
registry-quay-database-66969cd859-n2ssm        1/1   Running   0   6d1h
registry-quay-redis-7cc5f6c977-956g8           1/1   Running   0   5d21h
Restore your Project Quay database
-
Identify your Quay database pod:
$ oc get pod -l quay-component=postgres -n <quay-namespace> -o jsonpath='{.items[0].metadata.name}'
Example output:
quayregistry-quay-database-59f54bb7-58xs7
-
Upload the backup by copying it from the local environment and into the pod:
$ oc cp ./backup.sql -n <quay-namespace> registry-quay-database-66969cd859-n2ssm:/tmp/backup.sql
-
Open a remote terminal to the database:
$ oc rsh -n <quay-namespace> registry-quay-database-66969cd859-n2ssm
-
Enter psql:
bash-4.4$ psql
-
You can list the databases by running the following command:
postgres=# \l
Example output:
                                           List of databases
            Name             |            Owner            | Encoding |  Collate   |   Ctype    | Access privileges
-----------------------------+-----------------------------+----------+------------+------------+-------------------
 postgres                    | postgres                    | UTF8     | en_US.utf8 | en_US.utf8 |
 quayregistry-quay-database  | quayregistry-quay-database  | UTF8     | en_US.utf8 | en_US.utf8 |
-
Drop the database:
postgres=# DROP DATABASE "quayregistry-quay-database";
Example output:
DROP DATABASE
-
Exit the postgres CLI to re-enter bash-4.4:
\q
-
Restore your PostgreSQL database from the backup file:
sh-4.4$ psql < /tmp/backup.sql
-
Exit bash:
sh-4.4$ exit
Restore your Project Quay object storage data
-
Export the AWS_ACCESS_KEY_ID:
$ export AWS_ACCESS_KEY_ID=$(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' | base64 -d)
-
Export the AWS_SECRET_ACCESS_KEY:
$ export AWS_SECRET_ACCESS_KEY=$(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
-
Upload all blobs to the bucket by running the following command:
$ aws s3 sync --no-verify-ssl --endpoint https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') ./blobs s3://$(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}')
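Optionally, you can confirm that the blobs were uploaded by listing the bucket contents. The following sketch reuses the same endpoint, credentials, and bucket lookup as the sync command above; it is illustrative only:
$ aws s3 ls --no-verify-ssl --endpoint https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}') s3://$(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}') --recursive --summarize | tail -n 2
The final two lines of the output report the total number of objects and their combined size, which you can compare against your local ./blobs directory.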
Scale up your Project Quay deployment
-
For Operator version 3.7 and newer: Scale up the Project Quay deployment by re-enabling auto scaling, if desired, and removing the replica overrides for Quay, mirror workers and Clair as applicable. Your QuayRegistry resource should look similar to the following:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: registry
  namespace: ns
spec:
  components:
    …
    - kind: horizontalpodautoscaler
      managed: true (1)
    - kind: quay (2)
      managed: true
    - kind: clair
      managed: true
    - kind: mirror
      managed: true
    …
-
Re-enables auto scaling of Project Quay, Clair and mirroring workers (if desired)
-
Replica overrides are removed to scale the Project Quay components back up
-
-
For Operator version 3.6 and earlier: Scale up the Project Quay deployment by scaling up the Project Quay Operator again:
$ oc scale --replicas=1 deployment $(oc get deployment -n <quay-operator-namespace> | awk '/^quay-operator/ {print $1}') -n <quay-operator-namespace>
-
Check the status of the Project Quay deployment:
$ oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace>
Example output:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  ...
  name: registry
  namespace: <quay-namespace>
  ...
spec:
  ...
status:
  conditions:
  - lastTransitionTime: '2022-06-20T05:31:17Z'
    lastUpdateTime: '2022-06-20T17:31:13Z'
    message: All components reporting as healthy
    reason: HealthChecksPassing
    status: 'True'
    type: Available
Deploying IPv6 on the Project Quay Operator
Your Project Quay Operator deployment can now be served in locations that only support IPv6, such as Telco and Edge environments.
For a list of known limitations, see IPv6 limitations
Enabling the IPv6 protocol family
Use the following procedure to enable IPv6 support on your standalone Project Quay deployment.
-
You have updated Project Quay to 3.8.
-
Your host and container software platform (Docker, Podman) must be configured to support IPv6.
-
In your deployment’s config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6, for example:
---
FEATURE_GOOGLE_LOGIN: false
FEATURE_INVITE_ONLY_USER_CREATION: false
FEATURE_LISTEN_IP_VERSION: IPv6
FEATURE_MAILING: false
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false
---
-
Start, or restart, your Project Quay deployment.
-
Check that your deployment is listening on IPv6 by entering the following command:
$ curl <quay_endpoint>/health/instance
{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}
After enabling IPv6 in your deployment’s config.yaml, all Project Quay features can be used as normal, so long as your environment is configured to use IPv6 and is not hindered by the IPv6 and dual-stack limitations.
Warning
|
If your environment is configured to IPv4, but the FEATURE_LISTEN_IP_VERSION configuration field is set to IPv6, Project Quay will fail to deploy. |
IPv6 limitations
-
Currently, attempting to configure your Project Quay deployment with the common Azure Blob Storage configuration will not work on IPv6 single stack environments. Because the endpoint of Azure Blob Storage does not support IPv6, there is no workaround in place for this issue.
For more information, see PROJQUAY-4433.
-
Currently, attempting to configure your Project Quay deployment with Amazon S3 CloudFront will not work on IPv6 single stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue.
For more information, see PROJQUAY-4470.
-
Currently, OpenShift Data Foundation (ODF) is unsupported when Project Quay is deployed on IPv6 single stack environments. As a result, ODF cannot be used in IPv6 environments. This limitation is scheduled to be fixed in a future version of OpenShift Data Foundation.
-
Currently, dual-stack (IPv4 and IPv6) support does not work on Project Quay OpenShift Container Platform deployments. When Project Quay 3.8 is deployed on OpenShift Container Platform with dual-stack support enabled, the Quay Route generated by the Project Quay Operator only generates an IPv4 address, and not an IPv6 address. As a result, clients with an IPv6 address cannot access the Project Quay application on OpenShift Container Platform. This limitation is scheduled to be fixed in a future version of OpenShift Container Platform.
Upgrading the Quay Operator Overview
The Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Quay and the components that it manages. There is no field on the QuayRegistry
custom resource which sets the version of Quay to deploy; the Operator only knows how to deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Quay on Kubernetes.
Operator Lifecycle Manager
The Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM). When creating a Subscription with the default approvalStrategy: Automatic, OLM will automatically upgrade the Quay Operator whenever a new version becomes available.
Warning
|
When the Quay Operator is installed via Operator Lifecycle Manager, it may be configured to support automatic or manual upgrades. This option is shown on the Operator Hub page for the Quay Operator during installation. It can also be found in the Quay Operator Subscription object via the approvalStrategy field. |
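For reference, a Subscription created by OLM for the Quay Operator looks roughly like the following sketch. The metadata name, namespace, channel, package name, and catalog source shown here are illustrative assumptions and will differ depending on how and where you installed the Operator:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator            # example Subscription name
  namespace: openshift-operators # namespace where the Operator is installed (assumed)
spec:
  channel: stable-3.8            # update channel to track (example value)
  installPlanApproval: Automatic # set to Manual to require approval before upgrades
  name: project-quay             # package name in the catalog (assumed)
  source: community-operators    # catalog source (example; varies by installation)
  sourceNamespace: openshift-marketplace
The installPlanApproval field is where the automatic or manual approval strategy described in the warning above is recorded.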
Upgrading the Quay Operator
The standard approach for upgrading installed Operators on OpenShift is documented at Upgrading installed Operators.
In general, Project Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Project Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows:
-
3.0.5 → 3.1.3
-
3.1.3 → 3.2.2
-
3.2.2 → 3.3.4
-
3.3.4 → 3.4.z
-
3.4.z → 3.5.z
This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
In some cases, Project Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This exception to the normal, prior minor version-only, upgrade simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported:
-
3.3.z → 3.6.z
-
3.4.z → 3.6.z
-
3.4.z → 3.7.z
-
3.5.z → 3.7.z
-
3.7.z → 3.8.z
For users on standalone deployments of Quay wanting to upgrade to 3.8, see the Standalone upgrade guide.
Upgrading Quay
To update Quay from one minor version to the next, for example, 3.4 → 3.5, you need to change the update channel for the Quay Operator.
For z stream upgrades, for example, 3.4.2 → 3.4.3, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic, the Quay Operator will upgrade automatically to the newest z stream. This results in automatic, rolling Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.
Notes on upgrading directly from 3.3.z or 3.4.z to 3.6
Upgrading with edge routing enabled
-
Previously, when running a 3.3.z version of Project Quay with edge routing enabled, users were unable to upgrade to 3.4.z versions of Project Quay. This has been resolved with the release of Project Quay 3.6.
-
When upgrading from 3.3.z to 3.6, if tls.termination is set to none in your Project Quay 3.3.z deployment, it will change to HTTPS with TLS edge termination and use the default cluster wildcard certificate. For example:
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: quay33
spec:
  quay:
    imagePullSecretName: redhat-pull-secret
    enableRepoMirroring: true
    image: quay.io/quay/quay:v3.3.4-2
    ...
    externalAccess:
      hostname: quayv33.apps.devcluster.openshift.com
      tls:
        termination: none
    database:
      ...
Upgrading with custom TLS certificate/key pairs without Subject Alternative Names
There is an issue for customers using their own TLS certificate/key pairs without Subject Alternative Names (SANs) when upgrading from Project Quay 3.3.4 to Project Quay 3.6 directly. During the upgrade to Project Quay 3.6, the deployment is blocked, with the error message from the Quay Operator pod logs indicating that the Quay TLS certificate must have SANs.
If possible, you should regenerate your TLS certificates with the correct hostname in the SANs. A possible workaround involves defining an environment variable in the quay-app, quay-upgrade and quay-config-editor pods after upgrade to enable CommonName matching:
GODEBUG=x509ignoreCN=0
The GODEBUG=x509ignoreCN=0 flag enables the legacy behavior of treating the CommonName field on X.509 certificates as a host name when no SANs are present. However, this workaround is not recommended, as it will not persist across a redeployment.
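As an illustrative sketch only: the certificate regeneration assumes OpenSSL 1.1.1 or later (for the -addext option), and the hostname and deployment names shown are placeholders that depend on your registry name:
# Regenerate a self-signed certificate whose SAN matches the Quay hostname (example hostname)
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout ssl.key -out ssl.cert \
    -subj "/CN=quay.example.com" \
    -addext "subjectAltName=DNS:quay.example.com"

# Temporary workaround: set GODEBUG on the affected deployments (deployment names are examples)
$ oc set env deployment/registry-quay-app GODEBUG=x509ignoreCN=0 -n <quay-namespace>
$ oc set env deployment/registry-quay-config-editor GODEBUG=x509ignoreCN=0 -n <quay-namespace>
As noted above, environment variables set directly on the Operator-managed deployments are lost on the next redeployment, so regenerating certificates with SANs remains the recommended fix.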
Configuring Clair v4 when upgrading from 3.3.z or 3.4.z to 3.6 using the Quay Operator
To set up Clair v4 on a new Project Quay deployment on OpenShift, it is highly recommended to use the Quay Operator. By default, the Quay Operator will install or upgrade a Clair deployment along with your Project Quay deployment and configure Clair security scanning automatically.
For instructions on setting up Clair v4 on OpenShift, see Setting Up Clair on a Project Quay OpenShift deployment.
Swift configuration when upgrading from 3.3.z to 3.6
When upgrading from Project Quay 3.3.z to 3.6.z, some users might receive the following error: Switch auth v3 requires tenant_id (string) in os_options. As a workaround, you can manually update your DISTRIBUTED_STORAGE_CONFIG to add the os_options and tenant_id parameters:
DISTRIBUTED_STORAGE_CONFIG:
  brscale:
    - SwiftStorage
    - auth_url: http://****/v3
      auth_version: "3"
      os_options:
        tenant_id: ****
        project_name: ocp-base
        user_domain_name: Default
      storage_path: /datastorage/registry
      swift_container: ocp-svc-quay-ha
      swift_password: *****
      swift_user: *****
Changing the update channel for an Operator
The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Quay Operator. For subscriptions with an Automatic
approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
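If you prefer the command line, the same channel change can be made by patching the Subscription directly. The subscription name, namespace, and channel in this sketch are placeholders:
$ oc patch subscription <quay-subscription-name> -n <quay-operator-namespace> \
    --type merge -p '{"spec": {"channel": "stable-3.8"}}'
After the patch, OLM begins tracking the new channel and, with an Automatic approval strategy, starts the upgrade on its own.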
Manually approving a pending Operator upgrade
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade.
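Approval can also be done from the command line by setting the approved field on the pending InstallPlan. The commands below are a sketch and assume a single pending plan in the Operator namespace; substitute the real InstallPlan name:
# Find the pending InstallPlan (the APPROVED column shows false)
$ oc get installplan -n <quay-operator-namespace>

# Approve it so OLM can proceed with the upgrade
$ oc patch installplan <install-plan-name> -n <quay-operator-namespace> \
    --type merge -p '{"spec": {"approved": true}}'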
The following image shows the Subscription tab in the UI, including the update Channel, the Approval strategy, the Upgrade status, and the InstallPlan:
The list of Installed Operators provides a high-level summary of the current Quay installation:
Upgrading a QuayRegistry
When the Quay Operator starts, it immediately looks for any QuayRegistries
it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:
-
If status.currentVersion is unset, reconcile as normal.
-
If status.currentVersion equals the Operator version, reconcile as normal.
-
If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator’s version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone.
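To see which version the Operator has recorded for an existing registry, you can read the status field directly. The registry name and namespace in this sketch are placeholders:
$ oc get quayregistry registry -n <quay-namespace> -o jsonpath='{.status.currentVersion}'
An empty result indicates the field is unset and the Operator will simply reconcile the registry as normal on its next pass.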
Upgrading a QuayEcosystem
Upgrades are supported from previous versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry, follow these steps:
-
Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem:
$ oc edit quayecosystem <quayecosystemname>
metadata:
  labels:
    quay-operator/migrate: "true"
-
Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem. The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true".
-
Once the status.registryEndpoint of the new QuayRegistry is set, access Quay and confirm all data and settings were migrated successfully.
-
When you are confident everything worked correctly, you may delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources.
Reverting QuayEcosystem Upgrade
If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry, follow these steps to revert back to using the QuayEcosystem:
-
Delete the QuayRegistry using either the UI or kubectl:
$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
-
If external access was provided using a Route, change the Route to point back to the original Service using the UI or kubectl.
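A minimal sketch of that change with the CLI, assuming placeholder names for the Route and the original QuayEcosystem Service:
$ oc patch route <quay-route-name> -n <namespace> --type merge \
    -p '{"spec": {"to": {"kind": "Service", "name": "<original-quayecosystem-service>"}}}'
This only redirects traffic; verify in the UI that the Route's target Service now matches the Service your QuayEcosystem deployment uses.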
Note
|
If your |
Supported QuayEcosystem Configurations for Upgrades
The Quay Operator will report errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Quay’s config.yaml.
Database
Ephemeral database not supported (volumeSize field must be set).
Redis
Nothing special needed.
External Access
Only passthrough Route access is supported for automatic migration. Manual migration required for other methods.
-
LoadBalancer without custom hostname: After the QuayEcosystem is marked with the label "quay-operator/migration-complete": "true", delete the metadata.ownerReferences field from the existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app. Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods (see the example after this list). You are now responsible for the old Service; the Quay Operator will not manage it.
-
LoadBalancer/NodePort/Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app. Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service.
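The selector edit described in the first case can be done with a patch similar to the following sketch. The Service names are placeholders, and the quay-component: quay-app label is only an assumed example; copy the actual label values returned for the new Service on your cluster:
# Inspect the selector on the new Service created by the Operator
$ oc get service <QuayEcosystem-name>-quay-app -n <namespace> -o jsonpath='{.spec.selector}'

# Apply the same selector to the old Service (use the values returned above)
$ oc patch service <old-quayecosystem-service> -n <namespace> --type merge \
    -p '{"spec": {"selector": {"quay-component": "quay-app"}}}'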
Clair
Nothing special needed.
Object Storage
QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.
Repository Mirroring
Nothing special needed.
Additional resources
-
For more details on the Project Quay Operator, see the upstream quay-operator project.