@@ -0,0 +1,111 @@
---
sidebar_label: 'Create your first object storage ClickPipe'
description: 'Seamlessly connect your object storage to ClickHouse Cloud.'
slug: /integrations/clickpipes/object-storage
title: 'Creating your first object-storage ClickPipe'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'clickpipes'
---

import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import cp_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step1.png';
import cp_step2_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step2_object_storage.png';
import cp_step3_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step3_object_storage.png';
import cp_step4a from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a.png';
import cp_step4a3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a3.png';
import cp_step4b from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4b.png';
import cp_step5 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step5.png';
import cp_success from '@site/static/images/integrations/data-ingestion/clickpipes/cp_success.png';
import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
import Image from '@theme/IdealImage';

Object Storage ClickPipes provide a simple and resilient way to ingest data from Amazon S3, Google Cloud Storage, Azure Blob Storage, and DigitalOcean Spaces into ClickHouse Cloud. Both one-time and continuous ingestion are supported with exactly-once semantics.

# Creating your first object storage ClickPipe {#creating-your-first-clickpipe}

## Prerequisite {#prerequisite}

- You have familiarized yourself with the [ClickPipes intro](../index.md).

## Navigate to data sources {#1-load-sql-console}

In the cloud console, select the `Data Sources` button in the left-side menu and click "Set up a ClickPipe".

<Image img={cp_step0} alt="Select imports" size="lg" border/>

## Select a data source {#2-select-data-source}

Select your data source.

<Image img={cp_step1} alt="Select data source type" size="lg" border/>

## Configure the ClickPipe {#3-configure-clickpipe}

Fill out the form by providing your ClickPipe with a name, an optional description, your IAM role or credentials, and the bucket URL.
You can specify multiple files using bash-like wildcards.
For more information, [see the documentation on using wildcards in paths](/integrations/clickpipes/object-storage/reference/#limitations).
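
For example, assuming a hypothetical S3 bucket named `my-bucket`, a path of the following shape would match every `.json.gz` file under the `data/` prefix, at any depth:

```text
https://my-bucket.s3.us-east-2.amazonaws.com/data/**.json.gz
```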

<Image img={cp_step2_object_storage} alt="Fill out connection details" size="lg" border/>

## Select data format {#4-select-format}

The UI will display a list of files in the specified bucket.
Select your data format (we currently support a subset of ClickHouse formats) and choose whether to enable continuous ingestion
([more details in the reference](/integrations/clickpipes/object-storage/reference/#continuous-ingest)).

<Image img={cp_step3_object_storage} alt="Set data format and topic" size="lg" border/>

## Configure table, schema and settings {#5-configure-table-schema-settings}

In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one.
Follow the instructions on the screen to modify your table name, schema, and settings.
You can see a real-time preview of your changes in the sample table at the top.
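
As a rough sketch of what the new-table flow produces, the destination table is ordinary ClickHouse DDL along these lines; the column names and types below are hypothetical, and the real ones are inferred from your files and the settings you choose:

```sql
-- Illustration only: the actual columns, types, and sorting key come from
-- your source data and the choices you make in the wizard.
CREATE TABLE default.website_events
(
    `timestamp` DateTime,
    `user_id`   UInt64,
    `url`       String,
    `referrer`  Nullable(String)
)
ENGINE = MergeTree
ORDER BY (user_id, timestamp)
```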

<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" border/>

You can also customize the advanced settings using the controls provided.

<Image img={cp_step4a3} alt="Set advanced controls" size="lg" border/>

Alternatively, you can decide to ingest your data into an existing ClickHouse table.
In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.

<Image img={cp_step4b} alt="Use an existing table" size="lg" border/>

:::info
You can also map [virtual columns](../../sql-reference/table-functions/s3#virtual-columns), like `_path` or `_size`, to fields.
:::
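
If you want to see what these virtual columns contain before mapping them, one way is to query your bucket directly with the `s3` table function. This is only a sketch, assuming a hypothetical, publicly readable bucket of CSV files:

```sql
-- _path is the path of the source object and _size its size in bytes.
-- The bucket URL and format are hypothetical; adjust them to your data.
SELECT
    _path,
    _size,
    count() AS rows
FROM s3('https://my-bucket.s3.us-east-2.amazonaws.com/data/*.csv', 'CSVWithNames')
GROUP BY _path, _size
ORDER BY _path
```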

## Configure permissions {#6-configure-permissions}

Finally, you can configure permissions for the internal ClickPipes user.

**Permissions:** ClickPipes will create a dedicated user for writing data into the destination table. You can assign this internal user a custom role or one of the predefined roles:
- `Full access`: full access to the cluster. Required if you use a materialized view or dictionary with the destination table.
- `Only destination table`: `INSERT` permissions on the destination table only (roughly the `GRANT` sketched below).
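
For the scoped option, the effective permission is roughly what the following `GRANT` would give. This is only an illustration with hypothetical names; ClickPipes creates and manages the internal user for you, so you never run this yourself:

```sql
-- Illustration only: ClickPipes provisions its own internal user;
-- the database, table, and user names here are hypothetical.
GRANT INSERT ON default.website_events TO clickpipes_ingest_user
```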

<Image img={cp_step5} alt="Permissions" size="lg" border/>

## Complete setup {#7-complete-setup}

When you click "Complete Setup", the system registers your ClickPipe, and you'll see it listed in the summary table.

<Image img={cp_success} alt="Success notice" size="sm" border/>

<Image img={cp_remove} alt="Remove notice" size="lg" border/>

The summary table provides controls to display sample data from the source or the destination table in ClickHouse.

<Image img={cp_destination} alt="View destination" size="lg" border/>

It also provides controls to remove the ClickPipe and to display a summary of the ingest job.

<Image img={cp_overview} alt="View overview" size="lg" border/>

**Congratulations!** You have successfully set up your first ClickPipe.
If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source.
Otherwise, it will ingest the batch and complete.
@@ -1,9 +1,10 @@
---
sidebar_label: 'ClickPipes for object storage'
description: 'Seamlessly connect your object storage to ClickHouse Cloud.'
slug: /integrations/clickpipes/object-storage
title: 'Integrating Object Storage with ClickHouse Cloud'
doc_type: 'guide'
sidebar_label: 'Reference'
description: 'Details supported formats, exactly-once semantics, view-support, scaling, limitations, authentication with object storage ClickPipes'
slug: /integrations/clickpipes/object-storage/reference
sidebar_position: 1
title: 'Reference'
doc_type: 'reference'
integration:
- support_level: 'core'
- category: 'clickpipes'
@@ -14,85 +15,8 @@ import S3svg from '@site/static/images/integrations/logos/amazon_s3_logo.svg';
import Gcssvg from '@site/static/images/integrations/logos/gcs.svg';
import DOsvg from '@site/static/images/integrations/logos/digitalocean.svg';
import ABSsvg from '@site/static/images/integrations/logos/azureblobstorage.svg';
import cp_step0 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step0.png';
import cp_step1 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step1.png';
import cp_step2_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step2_object_storage.png';
import cp_step3_object_storage from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step3_object_storage.png';
import cp_step4a from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a.png';
import cp_step4a3 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4a3.png';
import cp_step4b from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step4b.png';
import cp_step5 from '@site/static/images/integrations/data-ingestion/clickpipes/cp_step5.png';
import cp_success from '@site/static/images/integrations/data-ingestion/clickpipes/cp_success.png';
import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
import Image from '@theme/IdealImage';

# Integrating object storage with ClickHouse Cloud
Object Storage ClickPipes provide a simple and resilient way to ingest data from Amazon S3, Google Cloud Storage, Azure Blob Storage, and DigitalOcean Spaces into ClickHouse Cloud. Both one-time and continuous ingestion are supported with exactly-once semantics.

## Prerequisite {#prerequisite}
You have familiarized yourself with the [ClickPipes intro](./index.md).

## Creating your first ClickPipe {#creating-your-first-clickpipe}

1. In the cloud console, select the `Data Sources` button on the left-side menu and click on "Set up a ClickPipe"

<Image img={cp_step0} alt="Select imports" size="lg" border/>

2. Select your data source.

<Image img={cp_step1} alt="Select data source type" size="lg" border/>

3. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and bucket URL. You can specify multiple files using bash-like wildcards. For more information, [see the documentation on using wildcards in path](#limitations).

<Image img={cp_step2_object_storage} alt="Fill out connection details" size="lg" border/>

4. The UI will display a list of files in the specified bucket. Select your data format (we currently support a subset of ClickHouse formats) and if you want to enable continuous ingestion [More details below](#continuous-ingest).

<Image img={cp_step3_object_storage} alt="Set data format and topic" size="lg" border/>

5. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions in the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.

<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" border/>

You can also customize the advanced settings using the controls provided

<Image img={cp_step4a3} alt="Set advanced controls" size="lg" border/>

6. Alternatively, you can decide to ingest your data in an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.

<Image img={cp_step4b} alt="Use an existing table" size="lg" border/>

:::info
You can also map [virtual columns](../../sql-reference/table-functions/s3#virtual-columns), like `_path` or `_size`, to fields.
:::

7. Finally, you can configure permissions for the internal ClickPipes user.

**Permissions:** ClickPipes will create a dedicated user for writing data into a destination table. You can select a role for this internal user using a custom role or one of the predefined role:
- `Full access`: with the full access to the cluster. Required if you use materialized view or Dictionary with the destination table.
- `Only destination table`: with the `INSERT` permissions to the destination table only.

<Image img={cp_step5} alt="Permissions" size="lg" border/>

8. By clicking on "Complete Setup", the system will register you ClickPipe, and you'll be able to see it listed in the summary table.

<Image img={cp_success} alt="Success notice" size="sm" border/>

<Image img={cp_remove} alt="Remove notice" size="lg" border/>

The summary table provides controls to display sample data from the source or the destination table in ClickHouse

<Image img={cp_destination} alt="View destination" size="lg" border/>

As well as controls to remove the ClickPipe and display a summary of the ingest job.

<Image img={cp_overview} alt="View overview" size="lg" border/>

9. **Congratulations!** you have successfully set up your first ClickPipe. If this is a streaming ClickPipe it will be continuously running, ingesting data in real-time from your remote data source. Otherwise it will ingest the batch and complete.

## Supported data sources {#supported-data-sources}

| Name |Logo|Type| Status | Description |
@@ -134,9 +58,9 @@ To increase the throughput on large ingest jobs, we recommend scaling the ClickH
- ClickPipes will only attempt to ingest objects at 10GB or smaller in size. If a file is greater than 10GB an error will be appended to the ClickPipes dedicated error table.
- Azure Blob Storage pipes with continuous ingest on containers with over 100k files will have a latency of around 10–15 seconds in detecting new files. Latency increases with file count.
- Object Storage ClickPipes **does not** share a listing syntax with the [S3 Table Function](/sql-reference/table-functions/s3), nor Azure with the [AzureBlobStorage Table function](/sql-reference/table-functions/azureBlobStorage).
- `?` Substitutes any single character
- `*` Substitutes any number of any characters except / including empty string
- `**` Substitutes any number of any character include / including empty string
`?` - Substitutes any single character
`*` - Substitutes any number of characters (including none), except `/`
`**` - Substitutes any number of characters (including none), including `/`

:::note
This is a valid path (for S3):
@@ -179,13 +103,3 @@ Currently only protected buckets are supported for DigitalOcean spaces. You requ

### Azure Blob Storage {#azureblobstorage}
Currently only protected buckets are supported for Azure Blob Storage. Authentication is done via a connection string, which supports access keys and shared keys. For more information, read [this guide](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string).

## FAQ {#faq}

- **Does ClickPipes support GCS buckets prefixed with `gs://`?**

No. For interoperability reasons we ask you to replace your `gs://` bucket prefix with `https://storage.googleapis.com/`.

- **What permissions does a GCS public bucket require?**

`allUsers` requires appropriate role assignment. The `roles/storage.objectViewer` role must be granted at the bucket level. This role provides the `storage.objects.list` permission, which allows ClickPipes to list all objects in the bucket which is required for onboarding and ingestion. This role also includes the `storage.objects.get` permission, which is required to read or download individual objects in the bucket. See: [Google Cloud Access Control](https://cloud.google.com/storage/docs/access-control/iam-roles) for further information.
@@ -0,0 +1,27 @@
---
sidebar_label: 'FAQ'
description: 'FAQ for object storage ClickPipes'
slug: /integrations/clickpipes/object-storage/faq
sidebar_position: 1
title: 'FAQ'
doc_type: 'reference'
integration:
- support_level: 'core'
- category: 'clickpipes'
---

## FAQ {#faq}

<details>
<summary>Does ClickPipes support GCS buckets prefixed with `gs://`?</summary>

No. For interoperability reasons we ask you to replace your `gs://` bucket prefix with `https://storage.googleapis.com/`.
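
For example, a hypothetical bucket path would be rewritten like this:

```text
gs://my-bucket/data/*.csv                               # not supported
https://storage.googleapis.com/my-bucket/data/*.csv     # use this instead
```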

</details>

<details>
<summary>What permissions does a GCS public bucket require?</summary>

`allUsers` requires appropriate role assignment. The `roles/storage.objectViewer` role must be granted at the bucket level. This role provides the `storage.objects.list` permission, which allows ClickPipes to list all objects in the bucket, as required for onboarding and ingestion. It also includes the `storage.objects.get` permission, which is required to read or download individual objects in the bucket. See [Google Cloud Access Control](https://cloud.google.com/storage/docs/access-control/iam-roles) for further information.

</details>
@@ -0,0 +1,15 @@
---
description: 'Landing page with table of contents for the object storage ClickPipes section'
slug: /integrations/clickpipes/object-storage/index
sidebar_position: 1
title: 'Object storage ClickPipes'
doc_type: 'landing-page'
---

<!--AUTOGENERATED_START-->
| Page | Description |
|-----|-----|
| [Reference](/integrations/clickpipes/object-storage/reference) | Details supported formats, exactly-once semantics, view-support, scaling, limitations, authentication with object storage ClickPipes |
| [FAQ](/integrations/clickpipes/object-storage/faq) | FAQ for object storage ClickPipes |
| [Creating your first object-storage ClickPipe](/integrations/clickpipes/object-storage) | Seamlessly connect your object storage to ClickHouse Cloud. |
<!--AUTOGENERATED_END-->
2 changes: 1 addition & 1 deletion docs/integrations/data-ingestion/dbms/dynamodb/index.md
@@ -106,7 +106,7 @@ There are a few requirements for the destination table:
- Rows with the same sorting key will be deduplicated based on the `version` column.

### Create the snapshot ClickPipe {#create-the-snapshot-clickpipe}
Now you can create a ClickPipe to load the snapshot data from S3 into ClickHouse. Follow the S3 ClickPipe guide [here](/integrations/data-ingestion/clickpipes/object-storage.md), but use the following settings:
Now you can create a ClickPipe to load the snapshot data from S3 into ClickHouse. Follow the S3 ClickPipe guide [here](/integrations/clickpipes/object-storage), but use the following settings:

- **Ingest path**: You will need to locate the path of the exported json files in S3. The path will look something like this:

1 change: 1 addition & 0 deletions scripts/autogenerate-table-of-contents.sh
@@ -44,6 +44,7 @@ COMMANDS=(
'--single-toc --dir="docs/development" --md="docs/development/index.md" --ignore images'
'--single-toc --dir="docs/getting-started/example-datasets" --md="docs/getting-started/index.md" --ignore images'
'--single-toc --dir="docs/integrations/data-ingestion/clickpipes/kafka" --md="docs/integrations/data-ingestion/clickpipes/kafka/index.md" --ignore images'
'--single-toc --dir="docs/integrations/data-ingestion/clickpipes/object-storage" --md="docs/integrations/data-ingestion/clickpipes/object-storage/index.md" --ignore images'
'--single-toc --dir="docs/use-cases/AI_ML/MCP" --md="docs/use-cases/AI_ML/MCP/index.md" --ignore images'
'--single-toc --dir="docs/use-cases/AI_ML/MCP/ai_agent_libraries" --md="docs/use-cases/AI_ML/MCP/ai_agent_libraries/index.md"'
'--single-toc --dir="docs/cloud/guides" --md="docs/cloud/guides/index.md"'