Project Quay is an enterprise-quality container registry. Use Project Quay to build and store container images, then make them available to deploy across your enterprise.
The Project Quay Operator provides a simple method to deploy and manage Project Quay on an OpenShift cluster.
With the release of Project Quay 3.4.0, the Project Quay Operator was re-written to offer an enhanced experience and to add more support for Day 2 operations. As a result, the Project Quay Operator is now simpler to use and is more opinionated. The key differences from versions prior to Project Quay 3.4.0 include the following:
-
The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource.
-
The default installation options produce a fully supported Project Quay environment, with all managed dependencies (database, caches, object storage, and so on) supported for production use.
Note: Some components might not be highly available.
-
A new validation library for Project Quay's configuration, which is shared by the Project Quay application and config tool for consistency.
-
Object storage can now be provided by the Project Quay Operator using the ObjectBucketClaim Kubernetes API. For example, the NooBaa Operator from OperatorHub.io can be used to provide an implementation of that API.
-
Customization of the container images used by deployed pods for testing and development scenarios.
Introduction to the Project Quay Operator
Use the content in this chapter to complete the following tasks:
-
Install the Project Quay Operator
-
Configure managed, or unmanaged, object storage
-
Configure unmanaged components, such as the database, Redis, routes, TLS, and so on.
-
Deploy the Project Quay registry on OpenShift Container Platform using the Project Quay Operator
-
Use advanced features supported by the Project Quay Operator
-
Upgrade the registry by upgrading the Project Quay Operator
Project Quay Operator components
Project Quay has a significant number of dependencies. These include a database, object storage, Redis, and others. The Project Quay Operator manages an opinionated deployment of Project Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry
API.
In the QuayRegistry
custom resource, the spec.components
field configures components. Each component contains two fields: kind
(the name of the component), and managed
(a boolean that indicates whether the component lifecycle is handled by the Project Quay Operator). By default, all components are managed and auto-filled upon reconciliation for visibility:
spec:
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
Using managed components
Unless your QuayRegistry
custom resource specifies otherwise, the Project Quay Operator uses defaults for the following managed components:
-
quay: Holds overrides for the Project Quay deployment. For example, environment variables and number of replicas. This component is new as of Project Quay 3.7 and cannot be set to unmanaged.
-
postgres: For storing the registry metadata. As of Project Quay 3.9, Project Quay uses an upstream (CentOS) version of PostgreSQL 13.
-
clair: Provides image vulnerability scanning.
-
redis: Stores live builder logs and the Project Quay tutorial. Also includes the locking mechanism that is required for garbage collection.
-
horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption.
-
objectstorage: For storing image layer blobs, utilizes the ObjectBucketClaim Kubernetes API which is provided by NooBaa or RHOCS.
-
route: Provides an external entrypoint to the Project Quay registry from outside of OpenShift Container Platform.
-
mirror: Configures repository mirror workers to support optional repository mirroring.
-
monitoring: Features include a Grafana dashboard, access to individual metrics, and alerting that notifies you when Quay pods restart frequently.
-
tls: Configures whether Project Quay or OpenShift Container Platform handles SSL/TLS.
-
clairpostgres: Configures a managed Clair database. This is a separate database from the PostgreSQL database used to deploy Project Quay.
The Project Quay Operator handles any required configuration and installation work needed for Project Quay to use the managed components. If the opinionated deployment performed by the Project Quay Operator is unsuitable for your environment, you can provide the Project Quay Operator with unmanaged
resources (overrides) as described in the following sections.
Using unmanaged components for dependencies
If you have existing components such as PostgreSQL, Redis, or object storage that you want to use with Project Quay, you first configure them within the Project Quay configuration bundle (config.yaml
). Then, they must be referenced in your QuayRegistry
bundle as a Kubernetes Secret
while indicating which components are unmanaged.
See the following sections for configuring unmanaged components:
Config bundle secret
The spec.configBundleSecret
field is a reference to the metadata.name
of a Secret
in the same namespace as the QuayRegistry
. This Secret
must contain a config.yaml
key/value pair. This config.yaml
file is a Project Quay config.yaml
file. This field is optional, and is auto-filled by the Project Quay Operator if not provided. If provided, it serves as the base set of config fields which are later merged with other fields from any managed components to form a final output Secret
, which is then mounted into the Project Quay application pods.
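For reference, a minimal config bundle Secret might look similar to the following sketch. The Secret name and the config.yaml contents shown here are illustrative only; your own values will differ:
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
stringData:
  config.yaml: |
    SERVER_HOSTNAME: quay-server.example.com
    FEATURE_USER_INITIALIZE: true
The QuayRegistry then points at this Secret through spec.configBundleSecret: config-bundle-secret.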
Prerequisites for Project Quay on OpenShift Container Platform
Before you begin the deployment of the Project Quay Operator on OpenShift Container Platform, you should consider the following.
OpenShift cluster
You need a privileged account on an OpenShift Container Platform 4.5 or later cluster on which to deploy the Project Quay Operator. That account must have the ability to create namespaces at the cluster scope.
Resource Requirements
Each Project Quay application pod has the following resource requirements:
-
8 Gi of memory
-
2000 millicores of CPU.
The Project Quay Operator creates at least one application pod per Project Quay deployment it manages. Ensure your OpenShift Container Platform cluster has sufficient compute resources for these requirements.
Object Storage
By default, the Project Quay Operator uses the ObjectBucketClaim
Kubernetes API to provision object storage. Consuming this API decouples the Project Quay Operator from any vendor-specific implementation. Red Hat OpenShift Data Foundation provides this API through its NooBaa component, which will be used in this example.
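When the objectstorage component is managed, the Project Quay Operator creates such claims for you; for reference only, a minimal ObjectBucketClaim is a short piece of YAML similar to the following sketch. The claim name, namespace, and storage class below are assumptions, not values created by the Project Quay Operator:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: quay-object-bucket-claim
  namespace: quay-enterprise
spec:
  generateBucketName: quay-bucket
  storageClassName: openshift-storage.noobaa.io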
Project Quay can be manually configured to use any of the following supported cloud storage options:
-
Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Project Quay)
-
Microsoft Azure Blob Storage
-
Google Cloud Storage
-
Ceph Object Gateway (RADOS)
-
OpenStack Swift
-
CloudFront + S3
Installing the Project Quay Operator from the OperatorHub
Use the following procedure to install the Project Quay Operator from the OpenShift Container Platform OperatorHub.
-
Using the OpenShift Container Platform console, select Operators → OperatorHub.
-
In the search box, type Project Quay and select the official Project Quay Operator provided by Red Hat. This directs you to the Installation page, which outlines the features, prerequisites, and deployment information.
-
Select Install. This directs you to the Operator Installation page.
-
The following choices are available for customizing the installation:
-
Update Channel: Choose the update channel, for example, stable-3.7 for the latest release.
-
Installation Mode: Choose All namespaces on the cluster if you want the Project Quay Operator to be available cluster-wide. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. It is recommended that you install the Project Quay Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default.
-
Approval Strategy: Choose to approve either automatic or manual updates. The automatic update strategy is recommended.
-
-
-
Select Install.
Configuring Project Quay before deployment
The Project Quay Operator can manage all of the Project Quay components when deployed on OpenShift Container Platform. This is the default configuration; however, you can manage one or more components externally when you want more control over the setup.
Use the following pattern to configure unmanaged Project Quay components.
-
Create a config.yaml configuration file with the appropriate settings.
-
Create a Secret using the configuration file by entering the following command:
$ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
-
Create a quayregistry.yaml file, identifying the unmanaged components and also referencing the created Secret, for example:
Example QuayRegistry YAML file
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
-
Deploy the registry by using the quayregistry.yaml file:
$ oc create -n quay-enterprise -f quayregistry.yaml
Pre-configuring Project Quay for automation
Project Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface.
Allowing the API to create the first user
To create the first user, set the FEATURE_USER_INITIALIZE parameter to true and call the /api/v1/user/initialize API. Unlike all other registry API calls, which require an OAuth token generated by an OAuth application in an existing organization, this API endpoint does not require authentication.
Users can use the API to create a user such as quayadmin
after deploying Project Quay, provided no other users have been created. For more information, see Using the API to create the first user.
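A minimal sketch of that call is shown below; the full procedure, including generating a password and starting the registry, appears later in "Using the API to create the first user". The hostname and credentials here are illustrative:
$ curl -X POST -k http://quay-server.example.com/api/v1/user/initialize \
    --header 'Content-Type: application/json' \
    --data '{"username": "quayadmin", "password": "quaypass12345", "email": "quayadmin@example.com", "access_token": true}'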
Enabling general API access
Users should set the BROWSER_API_CALLS_XHR_ONLY config option to false to allow general access to the Project Quay registry API.
Adding a superuser
After deploying Project Quay, users can create a user and give the first user administrator privileges with full permissions. Users can configure full permissions in advance by using the SUPER_USERS configuration object. For example:
...
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
...
Restricting user creation
After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting the FEATURE_USER_CREATION
to false
. For example:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Enabling new functionality in Project Quay 3.10
To use new Project Quay 3.10 functions, enable some or all of the following features:
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: true
FEATURE_AUTO_PRUNE: true
ROBOTS_DISALLOW: false
Suggested configuration for automation
The following config.yaml
parameters are suggested for automation:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
Configuring object storage
You need to configure object storage before installing Project Quay, irrespective of whether you are allowing the Project Quay Operator to manage the storage or managing it yourself.
If you want the Project Quay Operator to be responsible for managing storage, see the section on Managed storage for information on installing and configuring NooBaa and the Red Hat OpenShift Data Foundation Operator.
If you are using a separate storage solution, set objectstorage
as unmanaged
when configuring the Operator. See the following section, "Using unmanaged storage", for details on configuring existing storage.
Using unmanaged storage
This section provides configuration examples for unmanaged storage for your convenience. Refer to the Project Quay configuration guide for complete instructions on how to set up object storage.
AWS S3 storage
Use the following example when configuring AWS S3 storage for your Project Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
s3Storage:
- S3Storage
- host: s3.us-east-2.amazonaws.com
s3_access_key: ABCDEFGHIJKLMN
s3_secret_key: OL3ABCDEFGHIJKLMN
s3_bucket: quay_bucket
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- s3Storage
Google Cloud storage
Use the following example when configuring Google Cloud storage for your Project Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
googleCloudStorage:
- GoogleCloudStorage
- access_key: GOOGQIMFB3ABCDEFGHIJKLMN
bucket_name: quay-bucket
secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- googleCloudStorage
Microsoft Azure storage
Use the following example when configuring Microsoft Azure storage for your Project Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
azureStorage:
- AzureStorage
- azure_account_name: azure_account_name_here
azure_container: azure_container_here
storage_path: /datastorage/registry
azure_account_key: azure_account_key_here
sas_token: some/path/
endpoint_url: https://[account-name].blob.core.usgovcloudapi.net (1)
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- azureStorage
-
The endpoint_url parameter for Microsoft Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Microsoft Azure region.
As of Project Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
Ceph/RadosGW Storage
Use the following example when configuring Ceph/RadosGW storage for your Project Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
radosGWStorage: #storage config name
- RadosGWStorage #actual driver
- access_key: access_key_here #parameters
secret_key: secret_key_here
bucket_name: bucket_name_here
hostname: hostname_here
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config
- radosGWStorage
Swift storage
Use the following example when configuring Swift storage for your Project Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
swiftStorage:
- SwiftStorage
- swift_user: swift_user_here
swift_password: swift_password_here
swift_container: swift_container_here
auth_url: https://example.org/swift/v1/quay
auth_version: 1
ca_cert_path: /conf/stack/swift.cert
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- swiftStorage
NooBaa unmanaged storage
Use the following procedure to deploy NooBaa as your unmanaged storage configuration.
-
Create a NooBaa Object Bucket Claim in the OpenShift Container Platform console by navigating to Storage → Object Bucket Claims.
-
Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key.
-
Create a
config.yaml
configuration file that uses the information for the Object Bucket Claim:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RHOCSStorage
    - access_key: WmrXtSGk8B3nABCDEFGH
      bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
      hostname: s3.openshift-storage.svc.cluster.local
      is_secure: true
      port: "443"
      secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
For more information about configuring an Object Bucket Claim, see Object Bucket Claim.
Managed storage
If you want the Operator to manage object storage for Project Quay, your cluster needs to be capable of providing it through the ObjectBucketClaim
API. There are multiple implementations of this API available, for instance, NooBaa in combination with Kubernetes PersistentVolumes
or scalable storage backends like Ceph. Refer to the NooBaa documentation for more details on how to deploy this component.
Note
|
Object storage disk space is allocated automatically by the Project Quay Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but might not be sufficient for your use cases. Resizing the Red Hat OpenShift Data Foundation volume is currently not handled by the Project Quay Operator. See the section below about resizing managed storage for more details. |
Leveraging the Multicloud Object Gateway Component in the Red Hat OpenShift Data Foundation Operator for Project Quay
As part of a Project Quay subscription, users are entitled to use the Multicloud Object Gateway component of the Red Hat OpenShift Data Foundation Operator (formerly known as OpenShift Container Storage Operator). This gateway component allows you to provide an S3-compatible object storage interface to Project Quay backed by Kubernetes PersistentVolume
-based block storage. The usage is limited to a Project Quay deployment managed by the Operator and to the exact specifications of the Multicloud Object Gateway instance as documented below.
Since Project Quay does not support local filesystem storage, users can leverage the gateway in combination with Kubernetes PersistentVolume
storage instead, to provide a supported deployment. A PersistentVolume
is directly mounted on the gateway instance as a backing store for object storage and any block-based StorageClass
is supported.
By the nature of PersistentVolume
, this is not a scale-out, highly available solution and does not replace a scale-out storage system like Red Hat OpenShift Data Foundation. Only a single instance of the gateway is running. If the pod running the gateway becomes unavailable due to rescheduling, updates, or unplanned downtime, the connected Project Quay instances are temporarily degraded.
Using the following procedures, you will install the Local Storage Operator, Red Hat OpenShift Data Foundation, and create a standalone Multicloud Object Gateway to deploy Project Quay on OpenShift Container Platform.
Note
|
The following documentation shares commonality with the official Red Hat OpenShift Data Foundation documentation. |
Installing the Local Storage Operator on OpenShift Container Platform
Use the following procedure to install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
-
Log in to the OpenShift Web Console.
-
Click Operators → OperatorHub.
-
Type local storage into the search box to find the Local Storage Operator from the list of Operators. Click Local Storage.
-
Click Install.
-
Set the following options on the Install Operator page:
-
For Update channel, select stable.
-
For Installation mode, select A specific namespace on the cluster.
-
For Installed Namespace, select Operator recommended namespace openshift-local-storage.
-
For Update approval, select Automatic.
-
-
Click Install.
Installing Red Hat OpenShift Data Foundation on OpenShift Container Platform
Use the following procedure to install Red Hat OpenShift Data Foundation on OpenShift Container Platform.
-
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
and Operator installation permissions. -
You must have at least three worker nodes in the OpenShift Container Platform cluster.
-
For additional resource requirements, see the Planning your deployment guide.
-
Log in to the OpenShift Web Console.
-
Click Operators → OperatorHub.
-
Type OpenShift Data Foundation in the search box. Click OpenShift Data Foundation.
-
Click Install.
-
Set the following options on the Install Operator page:
-
For Update channel, select the most recent stable version.
-
For Installation mode, select A specific namespace on the cluster.
-
For Installed Namespace, select Operator recommended Namespace: openshift-storage.
-
For Update approval, select Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
-
For Console plugin, select Enable.
-
-
Click Install.
After the Operator is installed, a pop-up with a message,
Web console update is available
appears on the user interface. Click Refresh web console from this pop-up so that the console changes are reflected.
-
Continue to the following section, "Creating a standalone Multicloud Object Gateway", to leverage the Multicloud Object Gateway Component for Project Quay.
Creating a standalone Multicloud Object Gateway using the OpenShift Container Platform UI
Use the following procedure to create a standalone Multicloud Object Gateway.
-
You have installed the Local Storage Operator.
-
You have installed the Red Hat OpenShift Data Foundation Operator.
-
In the OpenShift Web Console, click Operators → Installed Operators to view all installed Operators.
Ensure that the namespace is
openshift-storage
.
-
Click Create StorageSystem.
-
On the Backing storage page, select the following:
-
Select Multicloud Object Gateway for Deployment type.
-
Select the Create a new StorageClass using the local storage devices option.
-
Click Next.
Note: You are prompted to install the Local Storage Operator if it is not already installed. Click Install, and follow the procedure as described in "Installing the Local Storage Operator on OpenShift Container Platform".
-
-
On the Create local volume set page, provide the following information:
-
Enter a name for the LocalVolumeSet and the StorageClass. By default, the local volume set name appears for the storage class name. You can change the name.
-
Choose one of the following:
-
Disk on all nodes
Uses the available disks that match the selected filters on all the nodes.
-
Disk on selected nodes
Uses the available disks that match the selected filters only on the selected nodes.
-
-
From the available list of Disk Type, select SSD/NVMe.
-
Expand the Advanced section and set the following options:
Volume Mode
Filesystem is selected by default. Always ensure that Filesystem is selected for Volume Mode.
Device Type
Select one or more device type from the dropdown list.
Disk Size
Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included.
Maximum Disks Limit
This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
-
Click Next.
A pop-up to confirm the creation of the LocalVolumeSet is displayed.
-
Click Yes to continue.
-
-
In the Capacity and nodes page, configure the following:
-
Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class.
-
Click Next to continue.
-
-
Optional. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
-
From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
-
Select an Authentication Method.
Using Token Authentication method
-
Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
-
Expand Advanced Settings to enter additional settings and certificate details based on your
Vault
configuration:-
Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
-
Optional: Enter TLS Server Name and Vault Enterprise Namespace.
-
Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
-
Click Save and skip to step iv.
Using Kubernetes authentication method
-
-
Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
-
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
-
Enter the Key Value secret path in Backend Path that is dedicated and unique to Red Hat OpenShift Data Foundation.
-
Optional: Enter TLS Server Name and Authentication Path if applicable.
-
Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
-
Click Save and skip to step iv.
-
-
-
To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:
-
Enter a unique Connection Name for the Key Management service within the project.
-
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
-
Address: 123.34.3.2
-
Port: 5696
-
-
Upload the Client Certificate, CA certificate, and Client Private Key.
-
If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above.
-
The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example,
kmip_all_<port>.ciphertrustmanager.local
.
-
-
Select a Network.
-
Click Next.
-
-
In the Review and create page, review the configuration details. To modify any configuration settings, click Back.
-
Click Create StorageSystem.
Creating a standalone Multicloud Object Gateway using the CLI
Use the following procedure to install the Red Hat OpenShift Data Foundation (formerly known as OpenShift Container Storage) Operator and configure a single-instance Multicloud Object Gateway service.
Note
|
The following configuration cannot be run in parallel on a cluster with Red Hat OpenShift Data Foundation installed. |
-
In the OpenShift Web Console, select Operators → OperatorHub.
-
Search for Red Hat OpenShift Data Foundation, and then select Install.
-
Accept all default options, and then select Install.
-
Confirm that the Operator has installed by viewing the Status column, which should be marked as Succeeded.
Warning: When the installation of the Red Hat OpenShift Data Foundation Operator is finished, you are prompted to create a storage system. Do not follow this instruction. Instead, create NooBaa object storage as outlined in the following steps.
-
On your machine, create a file named
noobaa.yaml
with the following information:
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
  dbType: postgres
  coreResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
This creates a single instance deployment of the Multi-cloud Object Gateway.
-
Apply the configuration with the following command:
$ oc create -n openshift-storage -f noobaa.yaml
Example output
noobaa.noobaa.io/noobaa created
-
After a few minutes, the Multi-cloud Object Gateway should finish provisioning. You can enter the following command to check its status:
$ oc get -n openshift-storage noobaas noobaa -w
Example output
NAME     MGMT-ENDPOINTS              S3-ENDPOINTS                IMAGE                                                                                                            PHASE   AGE
noobaa   [https://10.0.32.3:30318]   [https://10.0.32.3:31958]   registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d   Ready   3d18h
-
Configure a backing store for the gateway by creating the following YAML file, named noobaa-pv-backing-store.yaml:
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
    - noobaa.io/finalizer
  labels:
    app: noobaa
  name: noobaa-pv-backing-store
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: 1
    resources:
      requests:
        storage: 50Gi (1)
    storageClass: STORAGE-CLASS-NAME (2)
  type: pv-pool
-
The overall capacity of the object storage service. Adjust as needed.
-
The
StorageClass
to use for thePersistentVolumes
requested. Delete this property to use the cluster default.
-
-
Enter the following command to apply the configuration:
$ oc create -f noobaa-pv-backing-store.yaml
Example output
backingstore.noobaa.io/noobaa-pv-backing-store created
This creates the backing store configuration for the gateway. All images in Project Quay will be stored as objects through the gateway in a
PersistentVolume
created by the above configuration. -
Run the following command to make the
PersistentVolume
backing store the default for allObjectBucketClaims
issued by the Project Quay Operator:
$ oc patch bucketclass noobaa-default-bucket-class --patch '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["noobaa-pv-backing-store"]}]}}}' --type merge -n openshift-storage
Configuring traffic ingress
Configuring SSL/TLS and Routes
Support for OpenShift Container Platform Edge-Termination Routes has been added by way of a new managed component, tls
. This separates the route
component from SSL/TLS and allows users to configure both separately.
EXTERNAL_TLS_TERMINATION: true
is the opinionated setting.
-
Managed
tls
means that the default cluster wildcard certificate is used. -
Unmanaged
tls
means that the user-provided key and certificate pair is injected into the Route.
The ssl.cert
and ssl.key
are now moved to a separate, persistent secret, which ensures that the key and certificate pair are not re-generated upon every reconcile. The key and certificate pair are now formatted as edge
routes and mounted to the same directory in the Quay
container.
Multiple permutations are possible when configuring SSL/TLS and Routes, but the following rules apply:
-
If SSL/TLS is
managed
, then your route must also bemanaged
-
If SSL/TLS is
unmanaged
then you must supply certificates, either with the config tool or directly in the config bundle
The following table describes the valid options:
Option | Route | TLS | Certs provided | Result |
---|---|---|---|---|
My own load balancer handles TLS | Managed | Managed | No | Edge Route with default wildcard cert |
Project Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certs mounted inside the pod |
Project Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates are set inside the quay pod but route must be created manually |
Note
|
Project Quay 3.7 does not support builders when TLS is managed by the Operator. |
Creating the config bundle secret with the SSL/TLS cert and key pair
Use the following procedure to create a config bundle secret that includes your own SSL/TLS certificate and key pair.
-
Enter the following command to create config bundle secret that includes your own SSL/TLS certificate and key pair:
$ oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
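For example, when Project Quay itself handles TLS (the second row in the table above), the QuayRegistry can mark the tls component as unmanaged while referencing the secret created by the previous command. The following is a sketch that reuses the example names from this document:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: tls
      managed: false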
Configuring the database
Using an existing PostgreSQL database
If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm
extension for a successful deployment.
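For example, the extension can usually be enabled by running a single SQL statement against the Project Quay database with an account that has permission to create extensions; the connection details below are illustrative only:
$ psql "postgresql://postgres:password@quay-server.example.com:5432/quay" \
    -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"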
Use the following procedure to deploy an existing PostgreSQL database.
-
Create a config.yaml file with the necessary database fields. For example:
Example config.yaml file:
DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
-
Create a Secret using the configuration file:
$ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
-
Create a QuayRegistry YAML file which marks the postgres component as unmanaged and references the created Secret. For example:
Example quayregistry.yaml file
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: postgres
      managed: false
-
Deploy the registry as detailed in the following sections.
Database configuration
This section describes the database configuration fields available for Project Quay deployments.
Database URI
With Project Quay, connection to the database is configured by using the required DB_URI
field.
The following table describes the DB_URI
configuration field:
Field | Type | Description |
---|---|---|
DB_URI | String | The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay |
Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS
parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS
are generic, while others are database specific.
The following table describes database connection arguments:
Field | Type | Description |
---|---|---|
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL/TLS. |
.autorollback | Boolean | Whether to use auto-rollback connections. |
.threadlocals | Boolean | Whether to use thread-local connections. |
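A sketch of how these fields fit together in config.yaml is shown below; the URI and values are illustrative only:
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true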
PostgreSQL SSL/TLS connection arguments
With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration:
DB_CONNECTION_ARGS:
sslmode: verify-ca
sslrootcert: /path/to/cacert
The sslmode
option determines whether, or with what priority, a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes:
Mode | Description |
---|---|
disable | Your configuration only tries non-SSL/TLS connections. |
allow | Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. |
prefer | Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. |
require | Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. |
verify-ca | Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). |
verify-full | Only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. |
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
MySQL SSL/TLS connection arguments
The following example shows a sample MySQL SSL/TLS configuration:
DB_CONNECTION_ARGS:
ssl:
ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
Using the managed PostgreSQL database
With Project Quay 3.9, if your database is managed by the Project Quay Operator, updating from Project Quay 3.8 → 3.9 automatically handles upgrading PostgreSQL 10 to PostgreSQL 13.
If you do not want the Project Quay Operator to upgrade your PostgreSQL deployment from PostgreSQL 10 → 13, you must set the PostgreSQL parameter to managed: false
in your quayregistry.yaml
file. For more information about setting your database to unmanaged, see Using an existing Postgres database.
If you want your PostgreSQL database to match the same version as your Red Hat Enterprise Linux (RHEL) system, see Migrating to a RHEL 8 version of PostgreSQL for RHEL 8 or Migrating to a RHEL 9 version of PostgreSQL for RHEL 9.
For more information about the Project Quay 3.8 → 3.9 procedure, see "Updating Project Quay and the Project Quay and Clair PostgreSQL databases on OpenShift Container Platform".
PostgreSQL database recommendations
The Project Quay team recommends the following for managing your PostgreSQL database.
-
Database backups should be performed regularly using either the supplied tools on the PostgreSQL image or your own backup infrastructure; a minimal example is sketched after this list. The Project Quay Operator does not currently ensure that the PostgreSQL database is backed up.
-
Restoring the PostgreSQL database from a backup must be done using PostgreSQL tools and procedures. Be aware that your
Quay
pods should not be running while the database restore is in progress. -
Database disk space is allocated automatically by the Project Quay Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Project Quay installations but might not be sufficient for your use cases. Resizing the database volume is currently not handled by the Project Quay Operator.
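As one minimal sketch of the backup recommendation above, assuming the managed database Deployment is named example-registry-quay-database (as in the examples throughout this document) and that the database container provides the standard PostgreSQL client tools, a logical dump could be taken with:
$ oc exec -n quay-enterprise deployment/example-registry-quay-database -- \
    pg_dumpall --clean > quay-database-backup.sql
Depending on how authentication is configured in the database image, you might need to pass explicit connection options (for example, a username and password) to pg_dumpall.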
Configuring external Redis
Use the content in this section to set up an external Redis deployment.
Using an unmanaged Redis database
Use the following procedure to set up an external Redis database.
-
Create a config.yaml file using the following Redis fields:
BUILDLOGS_REDIS:
  host: quay-server.example.com
  port: 6379
  ssl: false
USER_EVENTS_REDIS:
  host: quay-server.example.com
  port: 6379
  ssl: false
-
Enter the following command to create a secret using the configuration file:
$ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
-
Create a quayregistry.yaml file that sets the Redis component to unmanaged and references the created secret:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: redis
      managed: false
-
Deploy the Project Quay registry.
Using unmanaged Horizontal Pod Autoscalers
Horizontal Pod Autoscalers (HPAs) are now included with the Clair
, Quay
, and Mirror
pods, so that they now automatically scale during load spikes.
As HPA is configured by default to be managed, the number of Clair
, Quay
, and Mirror
pods is set to two. This helps avoid downtime when the Operator updates or reconfigures Project Quay, and during rescheduling events.
Disabling the Horizontal Pod Autoscaler
To disable autoscaling or create your own HorizontalPodAutoscaler
, specify the component as unmanaged
in the QuayRegistry
instance. For example:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: horizontalpodautoscaler
managed: false
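If you set horizontalpodautoscaler to unmanaged and still want autoscaling, you can create your own HorizontalPodAutoscaler. The following is a minimal sketch; the target Deployment name (example-registry-quay-app) and the thresholds are assumptions based on the example registry used throughout this document:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-registry-quay-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90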
Disabling the Route component
Use the following procedure to prevent the Project Quay Operator from creating a route.
-
Set the component as managed: false in the quayregistry.yaml file:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: route
      managed: false
-
Edit the config.yaml file to specify that Project Quay handles SSL/TLS. For example:
...
EXTERNAL_TLS_TERMINATION: false
...
SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
...
PREFERRED_URL_SCHEME: https
...
If you do not configure the unmanaged route correctly, the following error is returned:
{ { "kind":"QuayRegistry", "namespace":"quay-enterprise", "name":"example-registry", "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa", "apiVersion":"quay.redhat.com/v1", "resourceVersion":"2418527" }, "reason":"ConfigInvalid", "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields" }
Note
|
Disabling the default route means you are now responsible for creating a Route, Service, or Ingress in order to access the Project Quay instance. |
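If you take this approach, a passthrough Route pointing at the Quay application Service is one option. The following is a sketch only; the Service name and target port are assumptions based on the example registry name used in this document, so verify them with oc get svc before applying:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-registry-quay
  namespace: quay-enterprise
spec:
  host: example-registry-quay-quay-enterprise.apps.user1.example.com
  to:
    kind: Service
    name: example-registry-quay-app
  port:
    targetPort: https
  tls:
    termination: passthrough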
Disabling the monitoring component
If you install the Project Quay Operator in a single namespace, the monitoring component is automatically set to managed: false
. Use the following reference to explicitly disable monitoring.
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: example-registry
namespace: quay-enterprise
spec:
components:
- kind: monitoring
managed: false
To enable monitoring in this scenario, see Enabling monitoring when the Project Quay Operator is installed in a single namespace.
Disabling the mirroring component
To disable mirroring explicitly, use the following YAML configuration:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirror
      managed: false
Deploying the Project Quay Operator
The Project Quay Operator can be deployed from the command line or from the OpenShift Container Platform console; however, the steps are fundamentally the same.
Deploying Project Quay from the command line
Use the following procedure to deploy Project Quay by using the command-line interface (CLI).
-
You have logged into OpenShift Container Platform using the CLI.
-
Create a namespace, for example, quay-enterprise, by entering the following command:
$ oc new-project quay-enterprise
-
Optional. If you want to pre-configure any aspects of your Project Quay deployment, create a
Secret
for the config bundle:
$ oc create secret generic quay-enterprise-config-bundle --from-file=config-bundle.tar.gz=/path/to/config-bundle.tar.gz
-
Create a QuayRegistry custom resource in a file called quayregistry.yaml.
-
For a minimal deployment, using all the defaults:
quayregistry.yaml:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
-
Optional. If you want to have some components unmanaged, add this information in the
spec
field. A minimal deployment might look like the following example:
Example quayregistry.yaml with unmanaged components
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: clair
      managed: false
    - kind: horizontalpodautoscaler
      managed: false
    - kind: mirror
      managed: false
    - kind: monitoring
      managed: false
-
Optional. If you have created a config bundle, for example,
init-config-bundle-secret
, reference it in thequayregistry.yaml
file:
Example quayregistry.yaml with a config bundle
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: init-config-bundle-secret
-
Optional. If you have a proxy configured, you can add the information using overrides for Project Quay, Clair, and mirroring:
Example quayregistry.yaml with proxy configured
kind: QuayRegistry
metadata:
  name: quay37
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: mirror
      managed: true
      overrides:
        env:
          - name: DEBUGLOG
            value: "true"
          - name: HTTP_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: HTTPS_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: NO_PROXY
            value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com
    - kind: tls
      managed: false
    - kind: clair
      managed: true
      overrides:
        env:
          - name: HTTP_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: HTTPS_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: NO_PROXY
            value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com
    - kind: quay
      managed: true
      overrides:
        env:
          - name: DEBUGLOG
            value: "true"
          - name: NO_PROXY
            value: svc.cluster.local,localhost,quay370.apps.quayperf370.perfscale.devcluster.openshift.com
          - name: HTTP_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
          - name: HTTPS_PROXY
            value: quayproxy.qe.devcluster.openshift.com:3128
-
-
Create the
QuayRegistry
in the specified namespace:
$ oc create -n quay-enterprise -f quayregistry.yaml
-
Enter the following command to see when the
status.registryEndpoint
is populated:
$ oc get quayregistry -n quay-enterprise example-registry -o jsonpath="{.status.registryEndpoint}" -w
-
For more information about how to track the progress of your Project Quay deployment, see Monitoring and debugging the deployment process.
Viewing created components using the command line
Use the following procedure to view deployed Project Quay components.
-
You have deployed the Project Quay Operator on OpenShift Container Platform.
-
Enter the following command to view the deployed components:
$ oc get pods -n quay-enterprise
Example output
NAME                                                    READY   STATUS      RESTARTS   AGE
example-registry-clair-app-5ffc9f77d6-jwr9s             1/1     Running     0          3m42s
example-registry-clair-app-5ffc9f77d6-wgp7d             1/1     Running     0          3m41s
example-registry-clair-postgres-54956d6d9c-rgs8l        1/1     Running     0          3m5s
example-registry-quay-app-79c6b86c7b-8qnr2              1/1     Running     4          3m42s
example-registry-quay-app-79c6b86c7b-xk85f              1/1     Running     4          3m41s
example-registry-quay-app-upgrade-5kl5r                 0/1     Completed   4          3m50s
example-registry-quay-config-editor-597b47c995-svqrl    1/1     Running     0          3m42s
example-registry-quay-database-b466fc4d7-tfrnx          1/1     Running     2          3m42s
example-registry-quay-mirror-6d9bd78756-6lj6p           1/1     Running     0          2m58s
example-registry-quay-mirror-6d9bd78756-bv6gq           1/1     Running     0          2m58s
example-registry-quay-postgres-init-dzbmx               0/1     Completed   0          3m43s
example-registry-quay-redis-8bd67b647-skgqx             1/1     Running     0          3m42s
Horizontal Pod Autoscaling
A default deployment shows the following running pods:
-
Two pods for the Project Quay application itself (
example-registry-quay-app-*
) -
One Redis pod for Project Quay logging (
example-registry-quay-redis-*
) -
One database pod for PostgreSQL used by Project Quay for metadata storage (
example-registry-quay-database-*
) -
One pod for the Project Quay config editor (
example-registry-quay-config-editor-*
) -
Two
Quay
mirroring pods (example-registry-quay-mirror-*
) -
Two pods for the Clair application (
example-registry-clair-app-*
) -
One PostgreSQL pod for Clair (
example-registry-clair-postgres-*
)
Horizontal Pod Autoscaling is configured by default to be managed, and the number of pods for Quay, Clair, and repository mirroring is set to two. This helps avoid downtime when updating or reconfiguring Project Quay through the Project Quay Operator or during rescheduling events. You can enter the following command to view information about HPA objects:
$ oc get hpa -n quay-enterprise
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example-registry-clair-app Deployment/example-registry-clair-app 16%/90%, 0%/90% 2 10 2 13d
example-registry-quay-app Deployment/example-registry-quay-app 31%/90%, 1%/90% 2 20 2 13d
example-registry-quay-mirror Deployment/example-registry-quay-mirror 27%/90%, 0%/90% 2 20 2 13d
Using the API to create the first user
Use the following procedure to create the first user in your Project Quay organization.
-
The config option
FEATURE_USER_INITIALIZE
must be set totrue
. -
No users can already exist in the database.
Note
|
This procedure requests an OAuth token by specifying "access_token": true. |
Procedure
-
As the root user, install
python39
by entering the following command:$ sudo yum install python39
-
Upgrade the
pip
package manager for Python 3.9:$ python3.9 -m pip install --upgrade pip
-
Use the
pip
package manager to install thebcrypt
package:$ pip install bcrypt
-
Generate a secure, hashed password using the
bcrypt
package in Python 3.9 by entering the following command:$ python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b"subquay12345", bcrypt.gensalt(12)).decode("utf-8"))'
-
Open your Project Quay configuration file and update the following configuration fields:
FEATURE_USER_INITIALIZE: true
SUPER_USERS:
- quayadmin
-
Stop the Project Quay service by entering the following command:
$ sudo podman stop quay
-
Start the Project Quay service by entering the following command:
$ sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v $QUAY/config:/conf/stack:Z -v $QUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}
-
Run the following
CURL
command to generate a new user with a username, password, email, and access token:$ curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "quayadmin@example.com", "access_token": true}'
If successful, the command returns an object with the username, email, and encrypted password. For example:
{"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"quayadmin@example.com","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"} # gitleaks:allow
If a user already exists in the database, an error is returned:
{"message":"Cannot initialize user in a non-empty database"}
If your password is not at least eight characters or contains whitespace, an error is returned:
{"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}
-
Log in to your Project Quay deployment by entering the following command:
$ sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false
Example outputLogin Succeeded!
For more information on pre-configuring your Project Quay deployment, see the section "Pre-configuring Project Quay for automation".
Monitoring and debugging the deployment process
Users can now troubleshoot problems during the deployment phase. The status in the QuayRegistry
object can help you monitor the health of the components during the deployment and help you debug any problems that may arise.
-
Enter the following command to check the status of your deployment:
$ oc get quayregistry -n quay-enterprise -o yaml
Example output
Immediately after deployment, the QuayRegistry object will show the basic configuration:
apiVersion: v1
items:
  - apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      creationTimestamp: "2021-09-14T10:51:22Z"
      generation: 3
      name: example-registry
      namespace: quay-enterprise
      resourceVersion: "50147"
      selfLink: /apis/quay.redhat.com/v1/namespaces/quay-enterprise/quayregistries/example-registry
      uid: e3fc82ba-e716-4646-bb0f-63c26d05e00e
    spec:
      components:
        - kind: postgres
          managed: true
        - kind: clair
          managed: true
        - kind: redis
          managed: true
        - kind: horizontalpodautoscaler
          managed: true
        - kind: objectstorage
          managed: true
        - kind: route
          managed: true
        - kind: mirror
          managed: true
        - kind: monitoring
          managed: true
        - kind: tls
          managed: true
      configBundleSecret: example-registry-config-bundle-kt55s
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
-
Use the
oc get pods
command to view the current state of the deployed components:
$ oc get pods -n quay-enterprise
Example output
NAME                                                    READY   STATUS              RESTARTS   AGE
example-registry-clair-app-86554c6b49-ds7bl             0/1     ContainerCreating   0          2s
example-registry-clair-app-86554c6b49-hxp5s             0/1     Running             1          17s
example-registry-clair-postgres-68d8857899-lbc5n        0/1     ContainerCreating   0          17s
example-registry-quay-app-upgrade-h2v7h                 0/1     ContainerCreating   0          9s
example-registry-quay-config-editor-5f646cbcb7-lbnc2    0/1     ContainerCreating   0          17s
example-registry-quay-database-66f495c9bc-wqsjf         0/1     ContainerCreating   0          17s
example-registry-quay-mirror-854c88457b-d845g           0/1     Init:0/1            0          2s
example-registry-quay-mirror-854c88457b-fghxv           0/1     Init:0/1            0          17s
example-registry-quay-postgres-init-bktdt               0/1     Terminating         0          17s
example-registry-quay-redis-f9b9d44bf-4htpz             0/1     ContainerCreating   0          17s
-
While the deployment is in progress, the
QuayRegistry
object will show the current status. In this instance, database migrations are taking place, and other components are waiting until completion:
status:
  conditions:
    - lastTransitionTime: "2021-09-14T10:52:04Z"
      lastUpdateTime: "2021-09-14T10:52:04Z"
      message: all objects created/updated successfully
      reason: ComponentsCreationSuccess
      status: "False"
      type: RolloutBlocked
    - lastTransitionTime: "2021-09-14T10:52:05Z"
      lastUpdateTime: "2021-09-14T10:52:05Z"
      message: running database migrations
      reason: MigrationsInProgress
      status: "False"
      type: Available
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-btbkcg8dc9
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
  lastUpdated: 2021-09-14 10:52:05.371425635 +0000 UTC
  unhealthyComponents:
    clair:
      - lastTransitionTime: "2021-09-14T10:51:32Z"
        lastUpdateTime: "2021-09-14T10:51:32Z"
        message: 'Deployment example-registry-clair-postgres: Deployment does not have minimum availability.'
        reason: MinimumReplicasUnavailable
        status: "False"
        type: Available
      - lastTransitionTime: "2021-09-14T10:51:32Z"
        lastUpdateTime: "2021-09-14T10:51:32Z"
        message: 'Deployment example-registry-clair-app: Deployment does not have minimum availability.'
        reason: MinimumReplicasUnavailable
        status: "False"
        type: Available
    mirror:
      - lastTransitionTime: "2021-09-14T10:51:32Z"
        lastUpdateTime: "2021-09-14T10:51:32Z"
        message: 'Deployment example-registry-quay-mirror: Deployment does not have minimum availability.'
        reason: MinimumReplicasUnavailable
        status: "False"
        type: Available
-
When the deployment process finishes successfully, the status in the
QuayRegistry
object shows no unhealthy components:
status:
  conditions:
    - lastTransitionTime: "2021-09-14T10:52:36Z"
      lastUpdateTime: "2021-09-14T10:52:36Z"
      message: all registry component healthchecks passing
      reason: HealthChecksPassing
      status: "True"
      type: Available
    - lastTransitionTime: "2021-09-14T10:52:46Z"
      lastUpdateTime: "2021-09-14T10:52:46Z"
      message: all objects created/updated successfully
      reason: ComponentsCreationSuccess
      status: "False"
      type: RolloutBlocked
  configEditorCredentialsSecret: example-registry-quay-config-editor-credentials-hg7gg7h57m
  configEditorEndpoint: https://example-registry-quay-config-editor-quay-enterprise.apps.docs.quayteam.org
  currentVersion: {producty}
  lastUpdated: 2021-09-14 10:52:46.104181633 +0000 UTC
  registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org
  unhealthyComponents: {}
Deploying Project Quay from the OpenShift Container Platform console
-
Create a namespace, for example,
quay-enterprise
. -
Select Operators → Installed Operators, then select the Quay Operator to navigate to the Operator detail view.
-
Click 'Create Instance' on the 'Quay Registry' tile under 'Provided APIs'.
-
Optionally change the 'Name' of the
QuayRegistry
. This will affect the hostname of the registry. All other fields have been populated with defaults. -
Click 'Create' to submit the
QuayRegistry
to be deployed by the Quay Operator. -
You should be redirected to the
QuayRegistry
list view. Click on theQuayRegistry
you just created to see the details view. -
Once the 'Registry Endpoint' has a value, click it to access your new Quay registry via the UI. You can now select 'Create Account' to create a user and sign in.
Using the Project Quay UI to create the first user
Use the following procedure to create the first user by using the Project Quay UI.
Note
|
This procedure assumes that the |
-
In the OpenShift Container Platform console, navigate to Operators → Installed Operators, with the appropriate namespace / project.
-
Click on the newly installed
QuayRegistry
object to view the details. -
After the
Registry Endpoint
has a value, navigate to this URL in your browser. -
Select Create Account in the Project Quay registry UI to create a user.
-
Enter the details for Username, Password, Email, and then click Create Account.
After creating the first user, you are automatically logged in to the Project Quay registry.
Viewing the status of the QuayRegistry object
Lifecycle observability for a given Project Quay deployment is reported in the status
section of the corresponding QuayRegistry
object. The Project Quay Operator constantly updates this section, and this should be the first place to look for any problems or state changes in Project Quay or its managed dependencies.
Viewing the registry endpoint
Once Project Quay is ready to be used, the status.registryEndpoint
field will be populated with the publicly available hostname of the registry.
Viewing the config editor endpoint
Access Project Quay’s UI-based config editor using status.configEditorEndpoint
.
Viewing the config editor credentials secret
The username and password for the config editor UI will be stored in a Secret
in the same namespace as the QuayRegistry
referenced by status.configEditorCredentialsSecret
.
Viewing the version of Project Quay in use
The current version of Project Quay that is running will be reported in status.currentVersion
.
Viewing the conditions of your Project Quay deployment
Certain conditions will be reported in status.conditions
.
Updating Project Quay and the Project Quay and Clair PostgreSQL databases on OpenShift Container Platform
Important
|
If your Project Quay deployment is upgrading from one y-stream to the next, for example, from 3.8.10 → 3.8.11, you must not switch the upgrade channel from stable-3.8 to stable-3.9. |
When updating Project Quay 3.8 → 3.9, the Operator automatically upgrades the existing PostgreSQL databases for Clair and Project Quay from version 10 to version 13.
You can update Project Quay and the Project Quay and Clair PostgreSQL databases on OpenShift Container Platform by using the Web Console UI, or by using the CLI.
Updating Project Quay and the Project Quay and Clair PostgreSQL databases using the OpenShift Container Platform web console
Use the following procedure to update Project Quay and the Project Quay and Clair PostgreSQL databases using the OpenShift Container Platform web console.
-
You have installed Project Quay 3.6, 3.7, or 3.8 on OpenShift Container Platform.
-
100 GB of free, additional storage.
During the upgrade process, additional persistent volume claims (PVCs) are provisioned to store the migrated data. This helps prevent a destructive operation on user data. The upgrade process rolls out PVCs for 50 GB for both the Project Quay database upgrade, and the Clair database upgrade.
-
Optional. Back up your old PVCs from PostgreSQL 10 by setting POSTGRES_UPGRADE_RETAIN_BACKUP to True in your quay-operator Subscription object. For example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay-enterprise
spec:
  channel: stable-3.8
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: POSTGRES_UPGRADE_RETAIN_BACKUP
        value: "true"
-
In the OpenShift Container Platform Web Console, navigate to Operators → Installed Operators.
-
Click on the Project Quay Operator.
-
Navigate to the Subscription tab.
-
Under Subscription details click Update channel.
-
Select stable-3.9 and save the changes.
-
Check the progress of the new installation under Upgrade status. Wait until the upgrade status changes to 1 installed before proceeding.
-
In your OpenShift Container Platform cluster, navigate to Workloads → Pods. Existing pods should be terminated, or in the process of being terminated.
-
Wait for the following pods, which are responsible for upgrading the database and alembic migration of existing data, to spin up:
clair-postgres-upgrade, quay-postgres-upgrade, and quay-app-upgrade.
After the
clair-postgres-upgrade, quay-postgres-upgrade, and quay-app-upgrade pods are marked as Completed, the remaining pods for your Project Quay deployment spin up. This takes approximately ten minutes.
Verify that the
quay-database and clair-postgres pods now use the postgresql-13 image.
After the
quay-app
pod is marked as Running, you can reach your Project Quay registry.
Updating Project Quay and the Project Quay and Clair PostgreSQL databases using the CLI
Use the following procedure to update Project Quay and the Project Quay and Clair PostgreSQL databases using the command-line interface (CLI).
-
You have installed Project Quay 3.6, 3.7, or 3.8 on OpenShift Container Platform.
-
100 GB of free, additional storage.
During the upgrade process, additional persistent volume claims (PVCs) are provisioned to store the migrated data. This helps prevent a destructive operation on user data. The upgrade process rolls out PVCs for 50 GB for both the Project Quay database upgrade, and the Clair database upgrade.
-
Retrieve your
quay-operator
configuration file by entering the following oc get command:
$ oc get subscription quay-operator -n quay-enterprise -o yaml > quay-operator.yaml
-
Retrieve the latest version of the Project Quay Operator and its channel by entering the following command:
oc get packagemanifests quay-operator \
  -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{"\n"}{end}' \
  | awk '{print "STARTING_CSV=" $1 " CHANNEL=" $2 }' \
  | sort -nr \
  | head -1
Example output
STARTING_CSV=quay-operator.v3.9.0 CHANNEL=stable-3.9
-
Using the output from the previous command, update your
Subscription
custom resource for the Project Quay Operator and save it as quay-operator.yaml. For example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay-enterprise
spec:
  channel: stable-3.9 (1)
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: POSTGRES_UPGRADE_RETAIN_BACKUP (2)
        value: "true"
-
Specify the value you obtained in the previous step for the
spec.channel
parameter. -
Optional. Back up your old PVCs from PostgreSQL 10 by setting POSTGRES_UPGRADE_RETAIN_BACKUP to True in your quay-operator Subscription object.
-
-
Enter the following command to apply the configuration:
$ oc apply -f quay-operator.yaml
Example output
subscription.operators.coreos.com/quay-operator created