2 changes: 1 addition & 1 deletion api/v4/objectstorage_types.go
@@ -55,7 +55,7 @@ type S3Spec struct {

// ObjectStorageStatus defines the observed state of ObjectStorage.
type ObjectStorageStatus struct {
// Phase of the large message store
// Phase of the object storage
Phase Phase `json:"phase"`

// Resource revision tracker
4 changes: 4 additions & 0 deletions api/v4/queue_types.go
@@ -61,6 +61,10 @@ type SQSSpec struct {
// +kubebuilder:validation:Pattern=`^https?://[^\s/$.?#].[^\s]*$`
// Amazon SQS Service endpoint
Endpoint string `json:"endpoint"`

// +optional
// List of remote storage volumes
VolList []VolumeSpec `json:"volumes,omitempty"`
}

// QueueStatus defines the observed state of Queue
13 changes: 9 additions & 4 deletions api/v4/zz_generated.deepcopy.go

Some generated files are not rendered by default.

34 changes: 34 additions & 0 deletions config/crd/bases/enterprise.splunk.com_indexerclusters.yaml
@@ -8480,6 +8480,40 @@ spec:
description: Name of the queue
minLength: 1
type: string
volumes:
description: List of remote storage volumes
items:
description: VolumeSpec defines remote volume config
properties:
endpoint:
description: Remote volume URI
type: string
name:
description: Remote volume name
type: string
path:
description: Remote volume path
type: string
provider:
description: 'App Package Remote Store provider. Supported
values: aws, minio, azure, gcp.'
type: string
region:
description: Region of the remote storage volume where
apps reside. Used for aws, if provided. Not used for
minio and azure.
type: string
secretRef:
description: Secret object name
type: string
storageType:
description: 'Remote Storage type. Supported values:
s3, blob, gcs. s3 works with aws or minio providers,
whereas blob works with azure provider, gcs works
for gcp.'
type: string
type: object
type: array
required:
- dlq
- name
34 changes: 34 additions & 0 deletions config/crd/bases/enterprise.splunk.com_ingestorclusters.yaml
@@ -4661,6 +4661,40 @@ spec:
description: Name of the queue
minLength: 1
type: string
volumes:
description: List of remote storage volumes
items:
description: VolumeSpec defines remote volume config
properties:
endpoint:
description: Remote volume URI
type: string
name:
description: Remote volume name
type: string
path:
description: Remote volume path
type: string
provider:
description: 'App Package Remote Store provider. Supported
values: aws, minio, azure, gcp.'
type: string
region:
description: Region of the remote storage volume where
apps reside. Used for aws, if provided. Not used for
minio and azure.
type: string
secretRef:
description: Secret object name
type: string
storageType:
description: 'Remote Storage type. Supported values:
s3, blob, gcs. s3 works with aws or minio providers,
whereas blob works with azure provider, gcs works
for gcp.'
type: string
type: object
type: array
required:
- dlq
- name
2 changes: 1 addition & 1 deletion config/crd/bases/enterprise.splunk.com_objectstorages.yaml
@@ -87,7 +87,7 @@ spec:
description: Auxillary message describing CR status
type: string
phase:
description: Phase of the large message store
description: Phase of the object storage
enum:
- Pending
- Ready
33 changes: 33 additions & 0 deletions config/crd/bases/enterprise.splunk.com_queues.yaml
@@ -78,6 +78,39 @@ spec:
description: Name of the queue
minLength: 1
type: string
volumes:
description: List of remote storage volumes
items:
description: VolumeSpec defines remote volume config
properties:
endpoint:
description: Remote volume URI
type: string
name:
description: Remote volume name
type: string
path:
description: Remote volume path
type: string
provider:
description: 'App Package Remote Store provider. Supported
values: aws, minio, azure, gcp.'
type: string
region:
description: Region of the remote storage volume where apps
reside. Used for aws, if provided. Not used for minio
and azure.
type: string
secretRef:
description: Secret object name
type: string
storageType:
description: 'Remote Storage type. Supported values: s3,
blob, gcs. s3 works with aws or minio providers, whereas
blob works with azure provider, gcs works for gcp.'
type: string
type: object
type: array
required:
- dlq
- name
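To illustrate the new field, here is a minimal sketch of a Queue manifest using the optional volumes list. It assumes the SQS settings nest under spec.sqs next to a provider field, as the Go types and the status output later on this page suggest; the volume entry's field names come from the schema above, while its values (and the secret name) are purely illustrative.

```
apiVersion: enterprise.splunk.com/v4
kind: Queue
metadata:
  name: queue
  namespace: default
spec:
  provider: sqs
  sqs:
    name: sqs-test
    dlq: sqs-dlq-test
    endpoint: https://sqs.us-west-2.amazonaws.com
    # New optional list of remote storage volumes introduced by this change
    volumes:
      - name: smartbus-volume                      # illustrative volume name
        storageType: s3                            # s3 works with aws or minio providers
        provider: aws
        path: ingestion/smartbus-test
        endpoint: https://s3.us-west-2.amazonaws.com
        region: us-west-2
        secretRef: s3-secret                       # hypothetical secret object name
```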
10 changes: 5 additions & 5 deletions docs/CustomResources.md
@@ -404,21 +404,21 @@ spec:
endpoint: https://s3.us-west-2.amazonaws.com
```

ObjectStorage inputs can be found in the table below. As of now, only S3 provider of large message store is supported.
ObjectStorage inputs can be found in the table below. As of now, only S3 provider of object storage is supported.

| Key | Type | Description |
| ---------- | ------- | ------------------------------------------------- |
| provider | string | [Required] Provider of large message store (Allowed values: s3) |
| s3 | S3 | [Required if provider=s3] S3 large message store inputs |
| provider | string | [Required] Provider of object storage (Allowed values: s3) |
| s3 | S3 | [Required if provider=s3] S3 object storage inputs |

S3 large message store inputs can be found in the table below.
S3 object storage inputs can be found in the table below.

| Key | Type | Description |
| ---------- | ------- | ------------------------------------------------- |
| path | string | [Required] Remote storage location for messages that are larger than the underlying maximum message size |
| endpoint | string | [Optional, if not provided formed based on region] S3-compatible service endpoint

Change of any of the large message queue inputs triggers the restart of Splunk so that appropriate .conf files are correctly refreshed and consumed.
Change of any of the object storage inputs triggers the restart of Splunk so that appropriate .conf files are correctly refreshed and consumed.

## MonitoringConsole Resource Spec Parameters

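For orientation, here is a minimal ObjectStorage manifest consistent with the tables above; the s3 block keys come from the spec table, and the sample values mirror those used elsewhere in these docs.

```
apiVersion: enterprise.splunk.com/v4
kind: ObjectStorage
metadata:
  name: os
  namespace: default
spec:
  provider: s3
  s3:
    # Remote storage location for messages larger than the maximum message size
    path: s3://ingestion/smartbus-test
    # Optional; derived from the region when not provided
    endpoint: https://s3.us-west-2.amazonaws.com
```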
40 changes: 27 additions & 13 deletions docs/IndexIngestionSeparation.md
@@ -1,3 +1,9 @@
---
title: Index and Ingestion Separation
parent: Deploy & Configure
nav_order: 6
---

# Background

Separation between ingestion and indexing services within Splunk Operator for Kubernetes enables the operator to independently manage the ingestion service while maintaining seamless integration with the indexing service.
@@ -10,7 +16,7 @@ This separation enables:
# Important Note

> [!WARNING]
> **As of now, only brand new deployments are supported for Index and Ingestion Separation. No migration path is implemented, described or tested for existing deployments to move from a standard model to Index & Ingestion separation model.**
> **For customers deploying SmartBus on CMP, the Splunk Operator for Kubernetes (SOK) manages the configuration and lifecycle of the ingestor tier. The following SOK guide provides implementation details for setting up ingestion separation and integrating with existing indexers. This reference is primarily intended for CMP users leveraging SOK-managed ingestors.**

# Document Variables

@@ -38,7 +44,7 @@ SQS message queue inputs can be found in the table below.
| endpoint | string | [Optional, if not provided formed based on region] AWS SQS Service endpoint
| dlq | string | [Required] Name of the dead letter queue |

Change of any of the queue inputs triggers the restart of Splunk so that appropriate .conf files are correctly refreshed and consumed.
**The first provisioning of, or any later update to, the queue inputs requires a Splunkd restart on the Ingestor Cluster and Indexer Cluster pods; this restart is performed automatically by SOK.**

## Example
```
@@ -61,21 +67,21 @@ ObjectStorage is introduced to store large message (messages that exceed the siz

## Spec

ObjectStorage inputs can be found in the table below. As of now, only S3 provider of large message store is supported.
ObjectStorage inputs can be found in the table below. As of now, only S3 provider of object storage is supported.

| Key | Type | Description |
| ---------- | ------- | ------------------------------------------------- |
| provider | string | [Required] Provider of large message store (Allowed values: s3) |
| s3 | S3 | [Required if provider=s3] S3 large message store inputs |
| provider | string | [Required] Provider of object storage (Allowed values: s3) |
| s3 | S3 | [Required if provider=s3] S3 object storage inputs |

S3 large message store inputs can be found in the table below.
S3 object storage inputs can be found in the table below.

| Key | Type | Description |
| ---------- | ------- | ------------------------------------------------- |
| path | string | [Required] Remote storage location for messages that are larger than the underlying maximum message size |
| endpoint | string | [Optional, if not provided formed based on region] S3-compatible service endpoint

Change of any of the large message queue inputs triggers the restart of Splunk so that appropriate .conf files are correctly refreshed and consumed.
Change of any of the object storage inputs triggers the restart of Splunk so that appropriate .conf files are correctly refreshed and consumed.

## Example
```
@@ -102,13 +108,13 @@ In addition to common spec inputs, the IngestorCluster resource provides the fol
| ---------- | ------- | ------------------------------------------------- |
| replicas | integer | The number of replicas (defaults to 3) |
| queueRef | corev1.ObjectReference | Message queue reference |
| objectStorageRef | corev1.ObjectReference | Large message store reference |
| objectStorageRef | corev1.ObjectReference | Object storage reference |

## Example

The example presented below configures IngestorCluster named ingestor with Splunk ${SPLUNK_IMAGE_VERSION} image that resides in a default namespace and is scaled to 3 replicas that serve the ingestion traffic. This IngestorCluster custom resource is set up with the service account named ingestor-sa allowing it to perform SQS and S3 operations. Queue and ObjectStorage references allow the user to specify queue and bucket settings for the ingestion process.

In this case, the setup uses the SQS and S3 based configuration where the messages are stored in sqs-test queue in us-west-2 region with dead letter queue set to sqs-dlq-test queue. The large message store is set to ingestion bucket in smartbus-test directory. Based on these inputs, default-mode.conf and outputs.conf files are configured accordingly.
In this case, the setup uses the SQS and S3 based configuration where the messages are stored in sqs-test queue in us-west-2 region with dead letter queue set to sqs-dlq-test queue. The object storage is set to ingestion bucket in smartbus-test directory. Based on these inputs, default-mode.conf and outputs.conf files are configured accordingly.

```
apiVersion: enterprise.splunk.com/v4
@@ -139,13 +145,13 @@ In addition to common spec inputs, the IndexerCluster resource provides the foll
| ---------- | ------- | ------------------------------------------------- |
| replicas | integer | The number of replicas (defaults to 3) |
| queueRef | corev1.ObjectReference | Message queue reference |
| objectStorageRef | corev1.ObjectReference | Large message store reference |
| objectStorageRef | corev1.ObjectReference | Object storage reference |

## Example

The example presented below configures IndexerCluster named indexer with Splunk ${SPLUNK_IMAGE_VERSION} image that resides in a default namespace and is scaled to 3 replicas that serve the indexing traffic. This IndexerCluster custom resource is set up with the service account named ingestor-sa allowing it to perform SQS and S3 operations. Queue and ObjectStorage references allow the user to specify queue and bucket settings for the indexing process.

In this case, the setup uses the SQS and S3 based configuration where the messages are stored in and retrieved from sqs-test queue in us-west-2 region with dead letter queue set to sqs-dlq-test queue. The large message store is set to ingestion bucket in smartbus-test directory. Based on these inputs, default-mode.conf, inputs.conf and outputs.conf files are configured accordingly.
In this case, the setup uses the SQS and S3 based configuration where the messages are stored in and retrieved from sqs-test queue in us-west-2 region with dead letter queue set to sqs-dlq-test queue. The object storage is set to ingestion bucket in smartbus-test directory. Based on these inputs, default-mode.conf, inputs.conf and outputs.conf files are configured accordingly.

```
apiVersion: enterprise.splunk.com/v4
@@ -425,6 +431,14 @@ In the following example, the dashboard presents ingestion and indexing data in

- [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)

# App Installation for Ingestor Cluster Instances

Application installation is supported for Ingestor Cluster instances. However, as of now, applications are installed with local scope, and if an application requires a Splunk restart, the Splunk Operator has no automated way to detect this and trigger the restart.

Therefore, to force a Splunk restart on each of the Ingestor Cluster pods, it is recommended to add or update annotations/labels on the IngestorCluster CR and apply the new configuration, which triggers a rolling restart of the Ingestor Cluster Splunk pods (see the sketch below).

We are investigating how to make this fully automated. Moreover, an update to annotations or labels should ideally not trigger a pod restart at all, and we are also looking into fixing that behaviour.
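A minimal sketch of the workaround described above: bump an annotation on the IngestorCluster CR and re-apply it. The annotation key is purely illustrative; any added or changed annotation or label has the same effect, and the spec fields simply echo the reference fields documented earlier.

```
apiVersion: enterprise.splunk.com/v4
kind: IngestorCluster
metadata:
  name: ingestor
  namespace: default
  annotations:
    # Hypothetical key; change the value whenever installed apps need a Splunk restart
    enterprise.splunk.com/restart-trigger: "1"
spec:
  replicas: 3
  queueRef:
    name: queue
    namespace: default
  objectStorageRef:
    name: os
    namespace: default
```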

# Example

1. Install CRDs and Splunk Operator for Kubernetes.
@@ -703,7 +717,7 @@ Spec:
Name: queue
Namespace: default
Image: splunk/splunk:${SPLUNK_IMAGE_VERSION}
Large Message Store Ref:
Object Storage Ref:
Name: os
Namespace: default
Replicas: 3
@@ -727,7 +741,7 @@ Status:
Endpoint: https://sqs.us-west-2.amazonaws.com
Name: sqs-test
Provider: sqs
Large Message Store:
Object Storage:
S3:
Endpoint: https://s3.us-west-2.amazonaws.com
Path: s3://ingestion/smartbus-test
@@ -169,6 +169,7 @@ items:
{{- if .namespace }}
namespace: {{ .namespace }}
{{- end }}
{{- end }}
{{- with $.Values.indexerCluster.objectStorageRef }}
objectStorageRef:
name: {{ .name }}
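For reference, a hedged sketch of the values this template fragment might consume. The indexerCluster.objectStorageRef path is taken from the template itself; nesting queueRef the same way is an assumption, and the names are illustrative.

```
indexerCluster:
  queueRef:                # assumed to mirror objectStorageRef below
    name: queue
    namespace: default
  objectStorageRef:
    name: os
    namespace: default
```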
@@ -1,4 +1,4 @@
{{- if .Values.objectStorage.enabled }}
{{- if .Values.objectStorage }}
{{- if .Values.objectStorage.enabled }}
apiVersion: enterprise.splunk.com/v4
kind: ObjectStorage
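The added outer check matters because Helm fails with a nil-pointer error when evaluating .Values.objectStorage.enabled while the objectStorage block is absent from values.yaml; guarding on the block first lets the chart render either way. A minimal values sketch, assuming enabled is the only toggle involved:

```
# Rendering now works whether this block is present or omitted entirely
objectStorage:
  enabled: true
```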
@@ -26,8 +26,12 @@ spec:
{{- if .name }}
name: {{ .name | quote }}
{{- end }}
{{- if .region }}
region: {{ .region | quote }}
{{- if .authRegion }}
authRegion: {{ .authRegion | quote }}
{{- end }}
{{- if .volumes }}
volumes:
{{ toYaml .volumes | indent 4 }}
{{- end }}
{{- end }}
{{- end }}
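A hedged sketch of values this template fragment might consume. Only the field names (name, authRegion, volumes) come from the template; the top-level queue.sqs key path and all sample values are assumptions.

```
queue:
  sqs:
    name: sqs-test
    authRegion: us-west-2                 # replaces the former region key in this chart
    volumes:                              # rendered via toYaml into the CR spec
      - name: smartbus-volume             # illustrative entry; fields match VolumeSpec
        storageType: s3
        provider: aws
        path: ingestion/smartbus-test
        endpoint: https://s3.us-west-2.amazonaws.com
        secretRef: s3-secret              # hypothetical secret name
```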