diff --git a/asciidoc/components/upgrade-controller.adoc b/asciidoc/components/upgrade-controller.adoc
index 49ea67e9..7c8143aa 100644
--- a/asciidoc/components/upgrade-controller.adoc
+++ b/asciidoc/components/upgrade-controller.adoc
@@ -296,7 +296,9 @@ The Upgrade Plan resource's status can be viewed in the following way:
 kubectl get upgradeplan -n upgrade-controller-system -o yaml
 ----
 
+[#ex-running-upgrade-plan]
 .Running Upgrade Plan example:
+====
 [,yaml,subs="attributes"]
 ----
 apiVersion: lifecycle.suse.com/v1alpha1
@@ -376,6 +378,7 @@ status:
   observedGeneration: 1
   sucNameSuffix: 90315a2b6d
 ----
+====
 
 Here you can view every component that the Upgrade Controller will try to schedule an upgrade for. Each condition follows the below template:
@@ -412,7 +415,9 @@ An Upgrade Plan scheduled by the Upgrade Controller can be marked as `successful
 . The `lastSuccessfulReleaseVersion` property points to the `releaseVersion` that is specified in the Upgrade Plan's configuration. _This property is added to the Upgrade Plan's status by the Upgrade Controller once the upgrade process is successful._
 
+[#ex-successful-upgrade-plan]
 .Successful `UpgradePlan` example:
+====
 [,yaml,subs="attributes"]
 ----
 apiVersion: lifecycle.suse.com/v1alpha1
@@ -493,6 +498,7 @@ status:
   observedGeneration: 1
   sucNameSuffix: 90315a2b6d
 ----
+====
 
 [#components-upgrade-controller-how-track-helm]
 === Helm Controller
diff --git a/asciidoc/components/virtualization.adoc b/asciidoc/components/virtualization.adoc
index aef44ae2..f3f577a6 100644
--- a/asciidoc/components/virtualization.adoc
+++ b/asciidoc/components/virtualization.adoc
@@ -187,17 +187,33 @@ DESCRIPTION:
 Now that KubeVirt and CDI are deployed, let us define a simple virtual machine based on https://get.opensuse.org/tumbleweed/[openSUSE Tumbleweed]. This virtual machine has the most simple of configurations, using standard "pod networking" for a networking configuration identical to any other pod.
 It also employs non-persistent storage, ensuring the storage is ephemeral, just like in any container that does not have a https://kubernetes.io/docs/concepts/storage/persistent-volumes/[PVC].
 
-[,shell]
-----
+[,shell, literal]
+----
+$ cat <<EOF > user-data.yaml
+#cloud-config
+disable_root: false
+ssh_pwauth: True
+users:
+  - default
+  - name: suse
+    groups: sudo
+    shell: /bin/bash
+    sudo: ALL=(ALL) NOPASSWD:ALL
+    lock_passwd: False
+    plain_text_passwd: 'suse'
+EOF
 $ kubectl apply -f - <<EOF
+$ cat <<EOF > user-data.yaml
+#cloud-config
+disable_root: false
+ssh_pwauth: True
+users:
+  - default
+  - name: suse
+    groups: sudo
+    shell: /bin/bash
+    sudo: ALL=(ALL) NOPASSWD:ALL
+    lock_passwd: False
+    plain_text_passwd: 'suse'
+EOF
 $ kubectl apply -f - <<EOF
 Virtual Machines* page and click `Create from YAML` in the upper right of the screen.
 3. Fill in or paste a virtual machine definition and press `Create`. Use virtual machine definition from Deploying Virtual Machines section as an inspiration.
 
-image::virtual-machines-page.png[]
+image::virtual-machines-page.png[scaledwidth=100%]
 
 ==== Virtual Machine Actions
 
@@ -538,7 +571,7 @@ The "Virtual machines" list provides a `Console` drop-down list that allows to c
 
 In some cases, it takes a short while before the console is accessible on a freshly started virtual machine.
 
-image::vnc-console-ui.png[]
+image::vnc-console-ui.png[scaledwidth=100%]
 
 == Installing with Edge Image Builder
diff --git a/asciidoc/edge-book/releasenotes.adoc b/asciidoc/edge-book/releasenotes.adoc
index 5142fd53..7c5e4f7e 100644
--- a/asciidoc/edge-book/releasenotes.adoc
+++ b/asciidoc/edge-book/releasenotes.adoc
@@ -189,6 +189,7 @@ and a custom script named `30a-copy-elemental-system-agent-override.sh` can be u
 The following table describes the individual components that make up the 3.5.0 release, including the version, the Helm chart version (if applicable), and from where the released artifact can be pulled in the binary format.
 Please follow the associated documentation for usage and deployment examples.
 
+// can you please help me with the sha256 code going outside the page border
 |======
 | Name | Version | Helm Chart Version | Artifact Location (URL/Image)
 
 | SUSE Linux Micro | 6.2 (latest) | N/A | https://www.suse.com/download/sle-micro/[SUSE Linux Micro Download Page] +
diff --git a/asciidoc/edge-book/welcome.adoc b/asciidoc/edge-book/welcome.adoc
index bb65049b..a57cf939 100644
--- a/asciidoc/edge-book/welcome.adoc
+++ b/asciidoc/edge-book/welcome.adoc
@@ -36,7 +36,7 @@ SUSE Edge is comprised of both existing SUSE and Rancher components along with a
 
 ==== Management Cluster
 
-image::suse-edge-management-cluster.svg[scaledwidth=100%]
+image::suse-edge-management-cluster.png[scaledwidth=100%]
 
 * *Management*: This is the centralized part of SUSE Edge that is used to manage the provisioning and lifecycle of connected downstream clusters. The management cluster typically includes the following components:
 ** Multi-cluster management with <>, enabling a common dashboard for downstream cluster onboarding and ongoing lifecycle management of infrastructure and applications, also providing comprehensive tenant isolation and `IDP` (Identity Provider) integrations, a large marketplace of third-party integrations and extensions, and a vendor-neutral API.
@@ -49,7 +49,7 @@ image::suse-edge-management-cluster.svg[scaledwidth=100%]
 
 ==== Downstream Clusters
 
-image::suse-edge-downstream-cluster.svg[scaledwidth=100%]
+image::suse-edge-downstream-cluster.png[scaledwidth=100%]
 
 * *Downstream*: This is the distributed part of SUSE Edge that is used to run the user workloads at the Edge, i.e. the software that is running at the edge location itself, and is typically comprised of the following components:
 ** A choice of Kubernetes distributions, with secure and lightweight distributions like <> and <> (`RKE2` is hardened, certified and optimized for usage in government and regulated industries).
@@ -60,7 +60,7 @@ image::suse-edge-downstream-cluster.svg[scaledwidth=100%]
 
 === Connectivity
 
-image::suse-edge-connected-architecture.svg[scaledwidth=100%]
+image::suse-edge-connected-architecture.png[scaledwidth=100%]
 
 The above image provides a high-level architectural overview for *connected* downstream clusters and their attachment to the management cluster. The management cluster can be deployed on a wide variety of underlying infrastructure platforms, in both on-premises and cloud capacities, depending on networking availability between the downstream clusters and the target management cluster. The only requirement for this to function are API and callback URL's to be accessible over the network that connects downstream cluster nodes to the management infrastructure.
diff --git a/asciidoc/product/atip-requirements.adoc b/asciidoc/product/atip-requirements.adoc
index 4cf7c695..b37cae64 100644
--- a/asciidoc/product/atip-requirements.adoc
+++ b/asciidoc/product/atip-requirements.adoc
@@ -48,7 +48,7 @@ The hardware requirements for SUSE Telco Cloud are as follows:
 
 As a reference for the network architecture, the following diagram shows a typical network architecture for a Telco environment:
 
-image::product-atip-requirements1.svg[scaledwidth=100%]
+image::product-atip-requirements1.png[scaledwidth=100%]
 
 The network architecture is based on the following components:
diff --git a/asciidoc/quickstart/eib.adoc b/asciidoc/quickstart/eib.adoc
index e04e6632..f73594bb 100644
--- a/asciidoc/quickstart/eib.adoc
+++ b/asciidoc/quickstart/eib.adoc
@@ -162,7 +162,7 @@ This will output something similar to:
 
 [,console]
 ----
-$6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
+$6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
 ----
 
 We can then add a section in the definition file called `operatingSystem` with a `users` array inside it.
 The resulting file should look like:
@@ -178,7 +178,7 @@ image:
 operatingSystem:
   users:
     - username: root
-      encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
+      encryptedPassword: $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
 ----
 
 [NOTE]
@@ -291,7 +291,7 @@ image:
 operatingSystem:
   users:
     - username: root
-      encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
+      encryptedPassword: $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
   packages:
     packageList:
       - nvidia-container-toolkit
@@ -354,7 +354,7 @@ image:
 operatingSystem:
   users:
     - username: root
-      encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
+      encryptedPassword: $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
   packages:
     packageList:
       - nvidia-container-toolkit
diff --git a/asciidoc/quickstart/elemental.adoc b/asciidoc/quickstart/elemental.adoc
index 93d56232..7eb8f5f1 100644
--- a/asciidoc/quickstart/elemental.adoc
+++ b/asciidoc/quickstart/elemental.adoc
@@ -19,7 +19,7 @@ This approach can be useful in scenarios where the devices that you want to cont
 
 == High-level architecture
 
-image::quickstart-elemental-architecture.svg[scaledwidth=100%]
+image::quickstart-elemental-architecture.png[scaledwidth=100%]
 
 == Resources needed
diff --git a/asciidoc/quickstart/metal3.adoc b/asciidoc/quickstart/metal3.adoc
index 490b26cd..dcda4042 100644
--- a/asciidoc/quickstart/metal3.adoc
+++ b/asciidoc/quickstart/metal3.adoc
@@ -32,7 +32,7 @@ cluster bare-metal servers, including automated inspection, cleaning and provisi
 
 == High-level architecture
 
-image::quickstart-metal3-architecture.svg[scaledwidth=100%]
+image::quickstart-metal3-architecture.png[scaledwidth=100%]
 
 == Prerequisites
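Not part of the patch itself: the virtualization hunks above restore a heredoc whose redirection was garbled. A minimal standalone sketch of that pattern, assuming a POSIX shell and reusing the `user-data.yaml` filename and cloud-config content from the diff:

```shell
# Write the cloud-init user data with a heredoc; quoting 'EOF' keeps the
# shell from expanding anything inside the document body.
cat <<'EOF' > user-data.yaml
#cloud-config
disable_root: false
ssh_pwauth: True
users:
  - default
  - name: suse
    groups: sudo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: False
    plain_text_passwd: 'suse'
EOF

# Show that the file was written as expected.
grep 'name: suse' user-data.yaml
```

The same idiom streams a manifest straight into the cluster without a temporary file, which is what the docs' `kubectl apply -f - <<EOF ... EOF` blocks do: `-f -` tells kubectl to read the resource definition from stdin.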