6 changes: 6 additions & 0 deletions asciidoc/components/upgrade-controller.adoc
@@ -296,7 +296,9 @@ The Upgrade Plan resource's status can be viewed in the following way:
kubectl get upgradeplan <upgradeplan_name> -n upgrade-controller-system -o yaml
----

[#ex-running-upgrade-plan]
.Running Upgrade Plan example:
====
[,yaml,subs="attributes"]
----
apiVersion: lifecycle.suse.com/v1alpha1
@@ -376,6 +378,7 @@ status:
observedGeneration: 1
sucNameSuffix: 90315a2b6d
----
====

Here you can view every component for which the Upgrade Controller will try to schedule an upgrade. Each condition follows the template below:
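A status consumer can scan these conditions mechanically to find components whose upgrade has not yet succeeded. A minimal sketch, assuming each condition carries the usual Kubernetes `type`/`status`/`reason` fields (the concrete reason values used by the Upgrade Controller are assumptions here):

```python
# Minimal sketch: pick out Upgrade Plan conditions that do not yet report success.
# Field names mirror standard Kubernetes condition objects; the exact reason
# strings emitted by the Upgrade Controller are illustrative assumptions.

def pending_components(conditions):
    """Return (type, reason) for every condition not reporting success."""
    return [
        (c.get("type"), c.get("reason"))
        for c in conditions
        if c.get("status") != "True" or c.get("reason") != "Succeeded"
    ]

conditions = [
    {"type": "OSUpgraded", "status": "True", "reason": "Succeeded"},
    {"type": "KubernetesUpgraded", "status": "False", "reason": "InProgress"},
]
print(pending_components(conditions))  # [('KubernetesUpgraded', 'InProgress')]
```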

@@ -412,7 +415,9 @@ An Upgrade Plan scheduled by the Upgrade Controller can be marked as `successful` when:

. The `lastSuccessfulReleaseVersion` property points to the `releaseVersion` that is specified in the Upgrade Plan's configuration. _This property is added to the Upgrade Plan's status by the Upgrade Controller once the upgrade process is successful._
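This success criterion can be checked programmatically. A sketch under the assumption that the plan is available as a plain dictionary (for example, parsed from `kubectl get upgradeplan ... -o json`):

```python
# Sketch: an Upgrade Plan counts as successful once the status property
# lastSuccessfulReleaseVersion matches the releaseVersion in its spec.

def upgrade_succeeded(plan):
    status = plan.get("status", {})
    return status.get("lastSuccessfulReleaseVersion") == plan["spec"]["releaseVersion"]

plan = {
    "spec": {"releaseVersion": "3.5.0"},
    "status": {"lastSuccessfulReleaseVersion": "3.5.0"},
}
print(upgrade_succeeded(plan))  # True
```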

[#ex-successful-upgrade-plan]
.Successful `UpgradePlan` example:
====
[,yaml,subs="attributes"]
----
apiVersion: lifecycle.suse.com/v1alpha1
Expand Down Expand Up @@ -493,6 +498,7 @@ status:
observedGeneration: 1
sucNameSuffix: 90315a2b6d
----
====

[#components-upgrade-controller-how-track-helm]
=== Helm Controller
55 changes: 44 additions & 11 deletions asciidoc/components/virtualization.adoc
@@ -187,17 +187,33 @@ DESCRIPTION:

Now that KubeVirt and CDI are deployed, let us define a simple virtual machine based on https://get.opensuse.org/tumbleweed/[openSUSE Tumbleweed]. This virtual machine uses the simplest of configurations: standard "pod networking" gives it a networking configuration identical to any other pod, and non-persistent storage ensures its storage is ephemeral, just like in any container that does not have a https://kubernetes.io/docs/concepts/storage/persistent-volumes/[PVC].

[,shell]
----
[,shell, literal]
----
$ cat <<EOF > user-data.yaml
#cloud-config
disable_root: false
ssh_pwauth: True
users:
- default
- name: suse
groups: sudo
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
lock_passwd: False
plain_text_passwd: 'suse'
EOF
$ kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: tumbleweed
name: tumbleweed
namespace: default
spec:
runStrategy: Always
template:
metadata:
labels:
app: nginx
spec:
domain:
devices: {}
@@ -211,7 +227,7 @@ spec:
image: quay.io/containerdisks/opensuse-tumbleweed:1.0.0
name: tumbleweed-containerdisk-0
- cloudInitNoCloud:
userDataBase64: I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRydWUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IHN1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRfcGFzc3dkOiAnc3VzZScK
userDataBase64: $(base64 -w0 < user-data.yaml)
name: cloudinitdisk
EOF
----
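The command substitution above relies on the user data reaching the manifest as a single base64 line, which is why `-w0` (disable line wrapping) is used. One way to sanity-check the encoding round-trip locally, assuming GNU coreutils `base64`:

```shell
# Encode the cloud-init user data without line wrapping (-w0), then verify
# that decoding reproduces the original file byte for byte.
b64=$(base64 -w0 user-data.yaml)
printf '%s' "$b64" | base64 -d > decoded.yaml
diff user-data.yaml decoded.yaml && echo "round-trip OK"
```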
@@ -338,11 +354,15 @@ $ chmod a+x /usr/local/bin/virtctl

You can then use the `virtctl` command-line tool to create virtual machines. Let us replicate our previous virtual machine, noting that we are piping the output directly into `kubectl apply`:

[,shell]
[,shell, literal]
----
$ virtctl create vm --name virtctl-example --memory=1Gi \
--volume-containerdisk=src:quay.io/containerdisks/opensuse-tumbleweed:1.0.0 \
--cloud-init-user-data "I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRydWUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IHN1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRfcGFzc3dkOiAnc3VzZScK" | kubectl apply -f -
--cloud-init-user-data "$(base64 -w0 user-data.yaml)" | kubectl apply -f -
----
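Note that the base64 user-data string should stay on one line: whether a string wrapped across lines is accepted depends entirely on the decoder, since lenient decoders skip newlines while strict ones reject them. Python's standard library illustrates both behaviors:

```python
import base64
import binascii

payload = base64.b64encode(b"#cloud-config\n").decode()
wrapped = payload[:8] + "\n" + payload[8:]  # simulate a line-wrapped string

# Lenient decoding (the default) silently discards the newline.
assert base64.b64decode(wrapped) == b"#cloud-config\n"

# Strict validation rejects any character outside the base64 alphabet.
try:
    base64.b64decode(wrapped, validate=True)
    print("accepted")
except binascii.Error:
    print("rejected")  # strict decoders refuse embedded newlines
```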

This should then show the virtual machine running (it should start much more quickly this time, given that the container image is already cached):
@@ -429,8 +449,21 @@ In the example environment, another openSUSE Tumbleweed virtual machine is deployed

Let us create this virtual machine now:

[,shell]
----
[,shell, literal]
----
$ cat <<EOF > user-data.yaml
#cloud-config
disable_root: false
ssh_pwauth: True
users:
- default
- name: suse
groups: sudo
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
lock_passwd: False
plain_text_passwd: 'suse'
EOF
$ kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
@@ -456,7 +489,7 @@ spec:
image: quay.io/containerdisks/opensuse-tumbleweed:1.0.0
name: tumbleweed-containerdisk-0
- cloudInitNoCloud:
userDataBase64: I2Nsb3VkLWNvbmZpZwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnNzaF9wd2F1dGg6IFRydWUKdXNlcnM6CiAgLSBkZWZhdWx0CiAgLSBuYW1lOiBzdXNlCiAgICBncm91cHM6IHN1ZG8KICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgIHN1ZG86ICBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBsb2NrX3Bhc3N3ZDogRmFsc2UKICAgIHBsYWluX3RleHRfcGFzc3dkOiAnc3VzZScKcnVuY21kOgogIC0genlwcGVyIGluIC15IG5naW54CiAgLSBzeXN0ZW1jdGwgZW5hYmxlIC0tbm93IG5naW54CiAgLSBlY2hvICJJdCB3b3JrcyEiID4gL3Nydi93d3cvaHRkb2NzL2luZGV4Lmh0bQo=
userDataBase64: $(base64 -w0 < user-data.yaml)
name: cloudinitdisk
EOF
----
@@ -524,7 +557,7 @@ The extension allows you to directly interact with KubeVirt Virtual Machine resources
2. Navigate to the *KubeVirt > Virtual Machines* page and click `Create from YAML` in the upper right of the screen.
3. Fill in or paste a virtual machine definition and press `Create`. Use the virtual machine definition from the Deploying Virtual Machines section as inspiration.

image::virtual-machines-page.png[]
image::virtual-machines-page.png[scaledwidth=100%]

==== Virtual Machine Actions

@@ -538,7 +571,7 @@ The "Virtual machines" list provides a `Console` drop-down list that allows you to connect

In some cases, it takes a short while before the console is accessible on a freshly started virtual machine.

image::vnc-console-ui.png[]
image::vnc-console-ui.png[scaledwidth=100%]

== Installing with Edge Image Builder

1 change: 1 addition & 0 deletions asciidoc/edge-book/releasenotes.adoc
@@ -189,6 +189,7 @@ and a custom script named `30a-copy-elemental-system-agent-override.sh` can be used

The following table describes the individual components that make up the 3.5.0 release, including the version, the Helm chart version (if applicable), and from where the released artifact can be pulled in the binary format. Please follow the associated documentation for usage and deployment examples.

// can you please help me with the sha256 code going outside the page border
|======
| Name | Version | Helm Chart Version | Artifact Location (URL/Image)
| SUSE Linux Micro | 6.2 (latest) | N/A | https://www.suse.com/download/sle-micro/[SUSE Linux Micro Download Page] +
Expand Down
6 changes: 3 additions & 3 deletions asciidoc/edge-book/welcome.adoc
@@ -36,7 +36,7 @@ SUSE Edge is comprised of both existing SUSE and Rancher components along with a

==== Management Cluster

image::suse-edge-management-cluster.svg[scaledwidth=100%]
image::suse-edge-management-cluster.png[scaledwidth=100%]

* *Management*: This is the centralized part of SUSE Edge that is used to manage the provisioning and lifecycle of connected downstream clusters. The management cluster typically includes the following components:
** Multi-cluster management with <<components-rancher,Rancher Prime>>, enabling a common dashboard for downstream cluster onboarding and ongoing lifecycle management of infrastructure and applications, also providing comprehensive tenant isolation and `IDP` (Identity Provider) integrations, a large marketplace of third-party integrations and extensions, and a vendor-neutral API.
@@ -49,7 +49,7 @@

==== Downstream Clusters

image::suse-edge-downstream-cluster.svg[scaledwidth=100%]
image::suse-edge-downstream-cluster.png[scaledwidth=100%]

* *Downstream*: This is the distributed part of SUSE Edge that is used to run the user workloads at the Edge, that is, the software running at the edge location itself. It typically comprises the following components:
** A choice of Kubernetes distributions, with secure and lightweight distributions like <<components-k3s,K3s>> and <<components-rke2,RKE2>> (`RKE2` is hardened, certified and optimized for usage in government and regulated industries).
@@ -60,7 +60,7 @@ image::suse-edge-downstream-cluster.svg[scaledwidth=100%]

=== Connectivity

image::suse-edge-connected-architecture.svg[scaledwidth=100%]
image::suse-edge-connected-architecture.png[scaledwidth=100%]

The above image provides a high-level architectural overview of *connected* downstream clusters and their attachment to the management cluster. The management cluster can be deployed on a wide variety of underlying infrastructure platforms, both on-premises and in the cloud, depending on networking availability between the downstream clusters and the target management cluster. The only requirement for this to function is that the API and callback URLs are accessible over the network that connects downstream cluster nodes to the management infrastructure.

Expand Down
2 changes: 1 addition & 1 deletion asciidoc/product/atip-requirements.adoc
Original file line number Diff line number Diff line change
@@ -48,7 +48,7 @@ The hardware requirements for SUSE Telco Cloud are as follows:

As a reference for the network architecture, the following diagram shows a typical network architecture for a Telco environment:

image::product-atip-requirements1.svg[scaledwidth=100%]
image::product-atip-requirements1.png[scaledwidth=100%]

The network architecture is based on the following components:

8 changes: 4 additions & 4 deletions asciidoc/quickstart/eib.adoc
@@ -162,7 +162,7 @@ This will output something similar to:

[,console]
----
$6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
$6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
----

[Review comment, Collaborator Author] I had to remove the number from the password as it was not breaking the text correctly in the PDF rendition. Please let me know if it's okay.

[Review comment, Collaborator] Again, an example string.

[Review comment, Collaborator Author] @hardys Can we use the string as suggested by Frank in the earlier comment? For example:

Suggested change:
  $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
→ $6$G392FCbxVgn[...]Y7zTXnC1

[Review comment, Collaborator Author] @hardys Can you please help here?
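The string shown in the output is a SHA-512 `crypt(3)` hash, of the kind typically produced with `openssl passwd -6`. A hedged sketch of generating and format-checking such a hash — the password and salt here are illustrative only, and assume OpenSSL is installed:

```shell
# Generate a SHA-512 crypt hash for an example password with a fixed salt.
# (Use your own password and omit -salt in practice to get a random salt.)
hash=$(openssl passwd -6 -salt G392FCbxVgnLaFw 'example-password')

# The result has three $-delimited fields: $6$<salt>$<digest>.
case "$hash" in
  '$6$G392FCbxVgnLaFw$'*) echo "format OK" ;;
  *) echo "unexpected format" ;;
esac
```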

We can then add a section in the definition file called `operatingSystem` with a `users` array inside it. The resulting file should look like:
@@ -178,7 +178,7 @@ image:
operatingSystem:
users:
- username: root
encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
encryptedPassword: $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
----

[NOTE]
@@ -291,7 +291,7 @@ image:
operatingSystem:
users:
- username: root
encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
encryptedPassword: $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
# Review comment (Collaborator Author): same comment as above
packages:
packageList:
- nvidia-container-toolkit
@@ -354,7 +354,7 @@ image:
operatingSystem:
users:
- username: root
encryptedPassword: $6$G392FCbxVgnLaFw1$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
encryptedPassword: $6$G392FCbxVgnLaFw$Ujt00mdpJ3tDHxEg1snBU3GjujQf6f8kvopu7jiCBIhRbRvMmKUqwcmXAKggaSSKeUUOEtCP3ZUoZQY7zTXnC1
packages:
packageList:
- nvidia-container-toolkit
2 changes: 1 addition & 1 deletion asciidoc/quickstart/elemental.adoc
@@ -19,7 +19,7 @@ This approach can be useful in scenarios where the devices that you want to control

== High-level architecture

image::quickstart-elemental-architecture.svg[scaledwidth=100%]
image::quickstart-elemental-architecture.png[scaledwidth=100%]

== Resources needed

2 changes: 1 addition & 1 deletion asciidoc/quickstart/metal3.adoc
@@ -32,7 +32,7 @@ cluster bare-metal servers, including automated inspection, cleaning and provisioning

== High-level architecture

image::quickstart-metal3-architecture.svg[scaledwidth=100%]
image::quickstart-metal3-architecture.png[scaledwidth=100%]

== Prerequisites
