80 changes: 0 additions & 80 deletions CRC.adoc

This file was deleted.

2 changes: 1 addition & 1 deletion README.adoc
@@ -13,7 +13,7 @@ This repository uses https://github.com/golang/go/wiki/Modules[Go modules].

== Step by step guide - running in CodeReady Containers

-Refer to link:CRC.adoc[this guide] for detailed instructions on running the e2e tests in a local CodeReady Containers cluster.
+Refer to link:openshift_local.adoc[this guide] for detailed instructions on running the e2e tests in a local CodeReady Containers cluster.

== End-to-End Tests

Binary file removed doc/images/crc_oc_login.png
Binary file removed doc/images/crc_start_output.png
Binary file removed doc/images/download.png
Binary file removed doc/images/extract_crc.png
Binary file added doc/images/openshift_local_download.png
Binary file removed doc/images/quay_repo.png
Binary file removed doc/images/quay_repo_detail.png
Binary file removed doc/images/quay_repo_visibility.png
226 changes: 226 additions & 0 deletions openshift_local.adoc
@@ -0,0 +1,226 @@
:imagesdir: doc/images

== Setting up OpenShift Local (formerly CRC, CodeReady Containers) — Step-by-step guide

IMPORTANT: OpenShift Local includes an embedded system bundle that contains certificates which expire 30 days
after the release. Because of this, it is important to always run the latest release of OpenShift Local.

OpenShift Local is a distribution of OpenShift designed to run on a development PC. Although some features are
disabled by default, it is still quite demanding in terms of system resources, so it is recommended to install it
on a machine with at least 32GB of memory.

This guide walks through downloading and installing OpenShift Local, and running the e2e tests against local
CodeReady Toolchain `host-operator` and `member-operator` repositories.

=== Install the required tools

Please check the xref:required_tools.adoc[Required Tools] page and install those tools and utilities before
proceeding; otherwise you will run into test failures and have to restart the tests from the beginning.

=== Download and install OpenShift Local
Download OpenShift Local from https://developers.redhat.com/products/openshift-local/overview[developers.redhat.com].
You will need to log in with your Red Hat SSO account, after which you can click the `Install OpenShift on your
laptop` button to reach the download page for OpenShift Local. There, select your OS and click
`Download OpenShift Local`. You will also need to download your pull secret; keep it in a safe place.

image::openshift_local_download.png[align="center"]
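
If you prefer the command line, the release archives are also published on the public OpenShift mirror. A minimal
sketch for Linux (the URL is an assumption based on the mirror layout at the time of writing; the pull secret still
has to be downloaded from the website):

[source,bash]
----
# Download the latest Linux build of OpenShift Local from the public mirror.
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
----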

Extract the downloaded file into a directory of your choice:

[source,bash]
----
tar -xvf crc-linux-amd64.tar.xz
----

Give the binary execution permissions and move it to a directory on your `PATH`, for example `/usr/local/bin`:

[source,bash]
----
chmod u+x crc
sudo mv crc /usr/local/bin
----
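
You can verify the installation by printing the version; the exact output will vary by release:

[source,bash]
----
# Prints the crc CLI version and the version of the bundled OpenShift.
crc version
----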

=== Set up the cluster and enable or tweak the cluster's settings

You need to set up the OpenShift Local cluster (the daemons, configuration and basic settings it needs to run) by
running the command below. You only need to do this the first time you set up the cluster, or after running
`crc cleanup`:

[source,bash]
----
crc setup
----

To run the tests smoothly and without problems, you are also advised to change the following settings:

[source,bash]
----
# Cluster monitoring is required for the tests to pass
crc config set enable-cluster-monitoring true

# The tests need at least 14GB of virtual machine memory, which is more than the default, so increase that
# setting. Tweaking the CPUs and disk size is optional, but also recommended to avoid issues down the line.
crc config set cpus 6
crc config set disk-size 50
crc config set memory 14500 # in MB. You can also use 20000 for 20GB, to be safer.
----
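
You can confirm the settings took effect with the `crc config view` subcommand:

[source,bash]
----
# Lists all configuration properties that have been explicitly set.
crc config view
----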

Now you can go ahead and start the cluster. The first time, you will need to provide the pull secret shown on the
OpenShift Local download page; if you didn't save it, you can go back and copy it. Run the following command to
start the cluster:

[source,bash]
----
crc start
----
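
To avoid the interactive prompt, you can also point `crc start` at a pull secret file via the
`-p`/`--pull-secret-file` flag (the file path below is just an example):

[source,bash]
----
# Start the cluster, reading the pull secret from a file instead of prompting for it.
crc start -p ~/Downloads/pull-secret.txt
----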

While your local OpenShift cluster boots, you can go ahead and prepare the Quay repositories to be able to run the
tests.

=== Final configurations
==== Creating the Quay repositories and making them public

Please follow the steps in the xref:quay.adoc["Configure your Quay account for dev deployment"] document to set up the
Quay repositories, and then come back to this guide.

==== Logging in to your cluster and Quay

After some time, the local OpenShift cluster should be ready to work with. The terminal should show output similar
to the following:

[source,text]
----
INFO Using bundle path /home/${USER}/.crc/cache/crc_libvirt_4.19.8_amd64.crcbundle
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking if running on a supported CPU architecture
INFO Checking if crc executable symlink exists
INFO Checking minimum RAM requirements
INFO Check if Podman binary exists in: /home/${USER}/.crc/bin/oc
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking crc daemon systemd socket units
INFO Checking if vsock is correctly configured
INFO Loading bundle: crc_libvirt_4.19.8_amd64...
CRC requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://console.redhat.com/openshift/create/local.
? Please enter the pull secret ***********************************************************************************************************************************************************************************************************************************************************************************************************

INFO Creating CRC VM for OpenShift 4.19.8...
INFO Generating new SSH key pair...
INFO Generating new password for the kubeadmin user
INFO Starting CRC VM for openshift 4.19.8...
INFO CRC instance is running with IP 127.0.0.1
INFO CRC VM is running
INFO Updating authorized keys...
INFO Resizing /dev/vda4 filesystem
INFO Configuring shared directories
INFO Check internal and public DNS query...
INFO Check DNS query from host...
INFO Verifying validity of the kubelet certificates...
INFO Starting kubelet service
INFO Waiting for kube-apiserver availability... [takes around 2min]
INFO Adding user's pull secret to the cluster...
INFO Updating SSH key to machine config resource...
INFO Waiting until the user's pull secret is written to the instance disk...
INFO Overriding password for developer user
INFO Changing the password for the users
INFO Updating cluster ID...
INFO Enabling cluster monitoring operator...
INFO Updating root CA cert to admin-kubeconfig-client-ca configmap...
INFO Starting openshift instance... [waiting for the cluster to stabilize]
INFO 2 operators are progressing: authentication, console
INFO 2 operators are progressing: authentication, console
INFO 2 operators are progressing: console, monitoring
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO Operator monitoring is progressing
INFO All operators are available. Ensuring stability...
INFO Operators are stable (2/3)...
INFO 2 operators are progressing: console, openshift-controller-manager
INFO All operators are available. Ensuring stability...
INFO Operators are stable (2/3)...
WARN Cluster is not ready: cluster operators are still not stable after 10m0.700913862s
INFO Adding crc-admin and crc-developer contexts to kubeconfig...
Started the OpenShift cluster.

The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing

Log in as administrator:
Username: kubeadmin
Password: ${KUBEADMIN_PASSWORD}

Log in as user:
Username: developer
Password: ${DEVELOPER_PASSWORD}

Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
----

Add the `oc` executable to your current path by running the following command:

[source,bash]
----
eval $(crc oc-env)
----
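
You can confirm that `oc` is now on your path, for example by printing the client version:

[source,bash]
----
# Prints the version of the oc client bundled with crc.
oc version --client
----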

Now, log in as the *kubeadmin* user, since you need privileges to manage namespaces, install operators and clean up
resources. Your login command should look similar to this:

[source,bash]
----
oc login -u kubeadmin -p ${KUBEADMIN_PASSWORD} https://api.crc.testing:6443
----
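
If you didn't note down the generated passwords, `crc console --credentials` prints ready-to-use login commands for
both users:

[source,bash]
----
# Prints the oc login commands for the developer and kubeadmin users.
crc console --credentials
----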

Also, make sure you're logged in to Quay with Podman:

[source,bash]
----
podman login quay.io
----
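
You can check whether a valid Quay login already exists with the `--get-login` flag, which prints the logged-in
user for a registry (or an error if there is none):

[source,bash]
----
# Prints your Quay username if you are already logged in to quay.io.
podman login --get-login quay.io
----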

==== Running the tests

Please follow the steps in xref:README.adoc#_running_end_to_end_tests["README § Running End-to-End tests"] to set up
the local operator repositories, if any, and run the tests.

=== Cleaning up

After a run, whether it was successful or not, it is recommended to run the following target to clean up the
resources in the local OpenShift cluster:

[source,bash]
----
make clean-e2e-resources
----

If the resource cleanup gets stuck, you can run the following target to remove the finalizers that block the
cleanup, and then run the clean target again:

[source,bash]
----
make force-remove-finalizers-from-e2e-resources

# Rerun the cleanup.
make clean-e2e-resources
----
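
When you are done with the cluster itself, you can shut it down or remove it entirely; `crc stop`, `crc delete` and
`crc cleanup` are standard `crc` subcommands:

[source,bash]
----
# Shut down the cluster VM, keeping its state for the next `crc start`.
crc stop

# Delete the cluster VM entirely.
crc delete

# Optionally undo the host-level configuration performed by `crc setup`.
crc cleanup
----
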
38 changes: 23 additions & 15 deletions quay.adoc
@@ -1,22 +1,30 @@
== Configure your Quay account for dev deployment

-There is a set of images that is built and pushed to quay repositories while deploying local versions of Toolchain (Sandbox) operators to OpenShift cluster. Please make sure that the repositories exist in your quay.io account.
+There is a set of images that are built and pushed to quay repositories while deploying local versions of Toolchain
+(Sandbox) operators to OpenShift cluster. Please make sure that the repositories exist in your Quay.io account.

=== Repositories
-. Register for a quay.io account if you don't have one
-. Make sure you have set the _QUAY_NAMESPACE_ variable: +
-`export QUAY_NAMESPACE=<quay-username>`
-. Log in to quay.io using +
-`podman login quay.io`
-* Make sure that these repositories exist on quay.io and the visibility is set to `public` for all of them:
-* https://quay.io/repository/<quay-username>/host-operator
-* https://quay.io/repository/<quay-username>/host-operator-bundle
-* https://quay.io/repository/<quay-username>/host-operator-index
-* https://quay.io/repository/<quay-username>/member-operator
-* https://quay.io/repository/<quay-username>/member-operator-webhook
-* https://quay.io/repository/<quay-username>/member-operator-bundle
-* https://quay.io/repository/<quay-username>/member-operator-index
-* https://quay.io/repository/<quay-username>/registration-service
+. Register for a quay.io account if you don't have one and log in to the account.
+. Go to the repository section, or click on the following link: https://quay.io/repository.
+. Click on the "Create new repository" button.
+. Select your personal namespace if it is not already selected for you, give the repository an appropriate name, and
+choose "Public" as the repository's visibility.
+. Click on "Create".
+
+Also, make sure that:
+
+. You have set the `QUAY_NAMESPACE` environment variable so that any commands or tests you run use your personal Quay
+repositories: `export QUAY_NAMESPACE=<quay-username>`
+. You are also logged in to your Quay.io account in Podman with `podman login quay.io`, for the same reason.
+. You end up with the following *public* repositories in Quay:
+.. https://quay.io/repository/<quay-username>/host-operator
+.. https://quay.io/repository/<quay-username>/host-operator-bundle
+.. https://quay.io/repository/<quay-username>/host-operator-index
+.. https://quay.io/repository/<quay-username>/member-operator
+.. https://quay.io/repository/<quay-username>/member-operator-webhook
+.. https://quay.io/repository/<quay-username>/member-operator-bundle
+.. https://quay.io/repository/<quay-username>/member-operator-index
+.. https://quay.io/repository/<quay-username>/registration-service


=== Public visibility