3 changes: 0 additions & 3 deletions CONTRIBUTING.md
@@ -287,10 +287,7 @@ The command line interface name is `vcluster`.

### Abbreviations for Kubernetes distros

- [Lightweight Kubernetes](https://k3s.io/): K3s
- [Kubernetes](https://kubernetes.io/): K8s
- [Zero Friction Kubernetes](https://k0sproject.io/): k0s Note that k0s is the
only Kubernetes distro to use a lower case 'k'
- [AWS Elastic Kubernetes Service](https://aws.amazon.com/eks/): EKS

### Other product terms
Expand Down
8 changes: 0 additions & 8 deletions vcluster/_fragments/distro/compat-k0s.mdx

This file was deleted.

9 changes: 0 additions & 9 deletions vcluster/_fragments/distro/compat-k3s.mdx

This file was deleted.

58 changes: 0 additions & 58 deletions vcluster/_fragments/high-availability-k3s.mdx

This file was deleted.

5 changes: 1 addition & 4 deletions vcluster/_fragments/private-nodes-limitations.mdx
@@ -5,9 +5,8 @@ Certain vCluster features are automatically disabled or unavailable. If you incl
The following features are not available:

- `sync.*` - No resource syncing between virtual and host clusters
- `integrations.*` - Integrations depend on syncing functionality
- `integrations.*` - Integrations depend on syncing capability
- `networking.replicateServices` - Services are not replicated to host
- `controlPlane.distro.k3s` - Only standard Kubernetes (k8s) is supported
- `controlPlane.coredns.embedded: true` - Embedded CoreDNS conflicts with custom CNI
- `controlPlane.advanced.virtualScheduler.enabled: false` - Virtual scheduler cannot be disabled
- `sleepMode.*` - No ability to sleep workloads or control plane
@@ -38,8 +37,6 @@ networking:
# Distribution restrictions
controlPlane:
distro:
k3s:
enabled: false # k3s distribution not supported
k8s:
enabled: true # Only standard Kubernetes works

20 changes: 12 additions & 8 deletions vcluster/_fragments/virtual-cluster-content.mdx
@@ -5,16 +5,17 @@ referred to as the "host" cluster, or the "parent" cluster.
Virtual clusters, being fully functional Kubernetes clusters in their own right, can be a very
useful tool if you are running into issues with the limitations of traditional Kubernetes
namespaces. Often administrators do not want to make, or cannot make, any special exceptions to the
multi-tenancy configuration of the underlying parent cluster in order to accommodate user requests.
multi-tenancy configuration of the underlying parent cluster to accommodate user requests.
For example, some users may need to create their own Custom Resource Definitions (CRDs) which
could potentially impact any other users in the cluster. Another user may need pods from two
separate namespaces to communicate with each other, despite the standard NetworkPolicy not
permitting this. In both of these (and many more!) scenarios, a virtual cluster may be a perfect
solution!
permitting this. In both of these (and many more) scenarios, a virtual cluster may be a perfect
solution.

The diagram below briefly outlines the attributes of virtual clusters as compared to using
namespaces or physical clusters for isolation and multi-tenancy.

<!-- vale off -->
<figure>
<img
src={require('@site/static/media/rebranding/vcluster-comparison.png').default}
@@ -23,13 +24,14 @@ namespaces or physical clusters for isolation and multi-tenancy.
<figcaption>vcluster - Comparison</figcaption>
</figure>

The virtual cluster functionality of vCluster Platform comes from the popular open-source project
The virtual cluster capability of vCluster Platform comes from the popular open source project
[vcluster](https://vcluster.com). vCluster Platform provides a centralized management layer for virtual
clusters, allowing users to provision virtual clusters in any vCluster Platform managed cluster (or virtual
cluster!). vCluster Platform also offers the capability to import existing virtual clusters such that they
can then be managed from the central vCluster Platform instance!
cluster). vCluster Platform also offers the capability to import existing virtual clusters such that they
can then be managed from the central vCluster Platform instance.
<!-- vale on -->

## Why use Virtual Kubernetes Clusters?
## Why use virtual Kubernetes clusters?

Virtual clusters can be used to partition a single physical cluster into multiple logical,
virtual clusters. This partitioning process still allows for leveraging the benefits of Kubernetes
@@ -77,16 +79,18 @@ quickly setting up demo applications for your sales team.

Virtual clusters provide immense benefits for large-scale Kubernetes deployments and multi-tenancy.

<!-- vale off -->
- **Full Admin Access**:
- Deploy operators with CRDs, create namespaces and other cluster-scoped resources that you normally can't create inside a namespace.
- Taint and label nodes without influencing the host cluster.
- Reuse and share services across multiple virtual clusters with ease.
<!-- vale on -->
- **Cost Savings:**
- Create lightweight vCluster instances that share the underlying host cluster instead of creating separate "real" clusters.
- Auto-scale, purge, snapshot, and move your vCluster instances, since they are Kubernetes deployments.
- **Low Overhead:**
- vCluster instances are super lightweight and only reside in a single namespace.
- vCluster instances run with [K3s](https://k3s.io/), a super low-footprint K8s distribution. You can use other supported distributions such as [K0s](https://k0sproject.io/), vanilla [Kubernetes](https://kubernetes.io/), and [AWS EKS](https://aws.amazon.com/eks/).
- You run vCluster with supported distributions such as vanilla [Kubernetes](https://kubernetes.io/), and [AWS EKS](https://aws.amazon.com/eks/).
- The vCluster control plane runs inside a single pod. Open source vCluster also uses a CoreDNS pod for vCluster-internal DNS capabilities. With vCluster Platform, however, you can enable the integrated CoreDNS so you don't need the additional pod.
- **No Network Degradation:**
- Since the pods and services inside a vCluster are actually being synchronized down to the host cluster, they are effectively using the underlying cluster's pod and service networking. The vCluster pods are as fast as other pods in the underlying host cluster.
1 change: 0 additions & 1 deletion vcluster/_partials/deploy/distros.mdx
@@ -14,7 +14,6 @@ By default, the distribution of vCluster is vanilla Kubernetes (K8s) and is the
The following distributions are supported for virtual clusters:

- **K8s**: By default, vCluster uses vanilla Kubernetes. This is the recommended distribution.
- [**K3s**](https://github.com/k3s-io/k3s): A lightweight, certified Kubernetes distribution designed for resource-constrained environments, remote locations, and IoT devices. K3s is only supported for control plane as a container and worker nodes as host nodes.

<HostClusterCompat distro="any supported Kubernetes distribution"/>

4 changes: 2 additions & 2 deletions vcluster/_partials/what-are-virtual-clusters.mdx
@@ -50,9 +50,9 @@ Virtual clusters provide immense benefits for large-scale Kubernetes deployments

### Enhanced flexibility and compatibility

- **Diverse Kubernetes environments:** vCluster supports different Kubernetes versions and distributions (including Kubernetes, <GlossaryTerm term="k3s">K3s</GlossaryTerm>, and <GlossaryTerm term="k0s">K0s</GlossaryTerm>), allowing version skews. This makes it possible to tailor each virtual cluster to specific requirements without impacting others.
- **Diverse Kubernetes environments:** vCluster supports different Kubernetes versions and distributions (including Kubernetes), allowing version skews. This makes it possible to tailor each virtual cluster to specific requirements without impacting others.
- **Adaptable backing stores:** Choose from a range of data stores, from lightweight (SQLite) to enterprise-grade options (embedded etcd, external data stores like Global RDS), catering to various scalability and durability needs.
- **Runs anywhere:** Virtual clusters can run on EKS, GKE, AKS, OpenShift, RKE, K3s, cloud, edge, and on-prem. As long as it's a Kubernetes cluster, you can run a virtual cluster on top of it.
- **Runs anywhere:** Virtual clusters can run on EKS, GKE, AKS, OpenShift, RKE, cloud, edge, and on-prem. As long as it's a Kubernetes cluster, you can run a virtual cluster on top of it.


### Improved scalability
3 changes: 1 addition & 2 deletions vcluster/cli/vcluster_convert_config.md
@@ -21,15 +21,14 @@ Reads from stdin if no file is given via "-f".

Examples:
vcluster convert config --distro k8s -f /my/k8s/values.yaml
vcluster convert config --distro k3s < /my/k3s/values.yaml
##############################################################
```


## Flags

```
--distro string Kubernetes distro of the config. Allowed distros: k8s, k3s
--distro string Kubernetes distro of the config. Allowed distros: k8s
-f, --file string Path to the input file
-h, --help help for config
-o, --output string Prints the output in the specified format. Allowed values: yaml, json (default "yaml")

This file was deleted.

@@ -23,7 +23,7 @@ Configure <GlossaryTerm term="vcluster">vCluster</GlossaryTerm> to work behind a

## Overview

When deploying vcluster behind a corporate proxy, you need to configure the standard proxy environment variables (`HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`) on the vcluster control plane pods. The statefulSet configuration ensures that vcluster can:
When deploying vCluster behind a corporate proxy, you need to configure the standard proxy environment variables (`HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`) on the vCluster control plane pods. The `statefulSet` configuration ensures that vCluster can:

- Access external resources through the proxy when needed
- Communicate with internal cluster services without going through the proxy
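These variables are ordinary container environment variables on the control plane pods. As a sketch only, assuming the `controlPlane.statefulSet.env` field of `vcluster.yaml` accepts standard Kubernetes env entries (verify the exact key against your vCluster version; the proxy address is hypothetical):

```yaml
# Sketch — hypothetical proxy address; confirm `controlPlane.statefulSet.env`
# against the config reference for your vCluster version.
controlPlane:
  statefulSet:
    env:
      - name: HTTP_PROXY
        value: "http://proxy.corp.example.com:3128"
      - name: HTTPS_PROXY
        value: "http://proxy.corp.example.com:3128"
      - name: NO_PROXY
        # Keep in-cluster traffic off the proxy: loopback, pod/service CIDRs,
        # and the cluster DNS suffixes.
        value: "localhost,127.0.0.1,10.0.0.0/8,.svc,.cluster.local"
```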
@@ -245,7 +245,7 @@ kubectl exec -n vcluster-proxy my-vcluster-0 -- curl -I http://my-vcluster-etcd:

## External etcd deployments

When using external etcd as the backing store for vCluster instead of the default embedded SQLite/k3s, you **must** include the etcd service name explicitly in the `NO_PROXY` environment variable. The service name follows the pattern `<vcluster-name>-etcd`. This requirement is critical because:
When using external etcd as the backing store for vCluster instead of the default embedded SQLite, you **must** include the etcd service name explicitly in the `NO_PROXY` environment variable. The service name follows the pattern `<vcluster-name>-etcd`. This requirement is critical because:

- The Go HTTP client used by vCluster requires exact hostname matches for services without a leading dot
- Domain patterns like `.local` or `.svc.cluster.local` do not cover the etcd service name
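Concretely, the bare service name has to appear in the list. A sketch for a hypothetical vCluster named `my-vcluster` (adjust the name to match your deployment):

```yaml
# Sketch — `my-vcluster-etcd` follows the `<vcluster-name>-etcd` pattern.
controlPlane:
  statefulSet:
    env:
      - name: NO_PROXY
        # The etcd service name must be listed explicitly; suffix patterns
        # like `.svc.cluster.local` do not match the bare hostname.
        value: "localhost,127.0.0.1,my-vcluster-etcd,.svc,.cluster.local"
```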
31 changes: 2 additions & 29 deletions vcluster/integrations/metrics-server.mdx
@@ -10,7 +10,7 @@ import TenancySupport from '../_fragments/tenancy-support.mdx';

<TenancySupport hostNodes="true" />

### Installing metrics server (inside vCluster)
### Install metrics server (inside vCluster)

If the recommended method of getting metrics in vCluster using the metrics server proxy does not meet your requirements and you need a dedicated metrics server installation in the vCluster, follow this section.
Make sure the vCluster has access to the host cluster's nodes.
@@ -25,36 +25,9 @@ kube-system coredns-854c77959c-q5878 3m 17Mi
kube-system metrics-server-5fbdc54f8c-fgrqk 0m 6Mi
```

:::info K3s Errors

If you see the below error after installing metrics-server (check [k3s#5334](https://github.com/k3s-io/k3s/issues/5344) for more information):

```
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503
```
Create a file named `metrics_patch.yaml` with the following contents:
```
spec:
template:
spec:
containers:
- name: metrics-server
command:
- /metrics-server
- --metric-resolution=30s
- --kubelet-insecure-tls=true
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
```
and apply the patch with kubectl:
```
kubectl patch deployment metrics-server --patch-file metrics_patch.yaml -n kube-system
```

:::

### How does it work?

By default, vCluster will create a service for each node that redirects incoming traffic from within the vCluster to the node kubelet to vCluster itself. This means that if workloads within the vCluster try to scrape node metrics the traffic reaches vCluster first. Vcluster will redirect the incoming request to the host cluster, rewrite the response (pod names, pod namespaces etc) and return it to the requester.
By default, vCluster creates a service for each node that redirects kubelet traffic from within the vCluster back to vCluster itself. This means that if workloads within the vCluster try to scrape node metrics, the traffic reaches vCluster first. vCluster redirects the incoming request to the host cluster, rewrites the response (pod names, pod namespaces, and so on), and returns it to the requester.


<MetricsServer />
8 changes: 4 additions & 4 deletions vcluster/manage/accessing-vcluster.mdx
@@ -50,11 +50,11 @@ If you have manually [exposed the vCluster](#expose-vcluster), you can specify t
vcluster connect my-vcluster -n my-vcluster --server my-domain.org
```

#### Connect using Service Accounts
#### Connect using service accounts

By default, vCluster updates the current kubeconfig to access the vCluster that contains the default admin client certificate and client key to authenticate to the vCluster. This means that all kubeconfig files generated have cluster admin access within the vCluster.

Often this might not be desired. Instead of giving a user admin access to the virtual cluster, you can also use [service account authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens) to the virtual cluster. Let's say we want to create a kubeconfig file that only has view access in the virtual cluster. Then you would create a new service account inside the vCluster and assign it the cluster role `view` via a cluster role binding. Then we would generate a service account token and use that instead of the client-cert and client-key inside the kubeconfig.
Often this might not be desired. Instead of giving a user admin access to the virtual cluster, you can also use [service account authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens) to the virtual cluster. Suppose you want to create a kubeconfig file that only has view access in the virtual cluster. You would create a new service account inside the vCluster, assign it the cluster role `view` via a cluster role binding, then generate a service account token and use that instead of the client certificate and client key inside the kubeconfig.

```
vcluster connect my-vcluster -n my-vcluster --service-account kube-system/my-user --cluster-role view
@@ -104,7 +104,7 @@ Error from server (Forbidden): namespaces is forbidden: User "system:serviceacco

You can replace the token field in the kubeconfig with any other service account token from inside the vCluster to act as this service account against the vCluster. For more information about service accounts and tokens, refer to the [official Kubernetes documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens).

## Retrieving the kubeconfig from the vCluster secret
## Retrieve the kubeconfig from the vCluster secret

<TenancySupport hostNodes="true" privateNodes="true"/>

@@ -160,7 +160,7 @@ For example, if you want to expose a vCluster at `https://my-domain.org`, you ca
# and use it in the generated kube config secret.
controlPlane:
# distro: (update distro details as per your configurations)
# k3s:
# k8s:
# enabled: true
proxy:
extraSANs:
4 changes: 0 additions & 4 deletions vcluster/manage/backup-restore/restore.mdx
@@ -126,10 +126,6 @@ When taking snapshots and restoring virtual clusters, there are limitations:
**Sleeping virtual clusters**
- Snapshots require a running vCluster control plane and do not work with sleeping virtual clusters.

**Virtual clusters using the k0s distro**
- Use the `--pod-exec` flag to take a snapshot of a k0s virtual cluster.
- k0s virtual clusters do not support restore or clone operations. Migrate them to k8s instead.

**Virtual clusters using an external database**
- Virtual clusters with an external database handle backup and restore outside of vCluster. A database administrator must back up or restore the external database according to the database documentation. Avoid using the vCluster CLI backup and restore commands for clusters with an external database.
