
[Fix][HiveSink] Implement overwrite semantics for streaming commits to prevent multiple deletions of target directories #10280

@Adamyuanyuan

Description

Search before asking

  • I had searched in the issues and found no similar issues.

What happened

When running SeaTunnel on Flink in STREAMING mode with the Hive sink configured with overwrite: true, the final Hive partition/table directory may lose previously committed files and end up containing only a subset of the data (often only the files from the last checkpoint). In streaming mode a commit happens on every checkpoint, and each commit re-applies the overwrite semantics by deleting the target directory, so the files published by earlier checkpoints are wiped out.
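As the title suggests, the fix is to perform the overwrite deletion only once per target directory for the lifetime of a streaming job, so later checkpoints append their files instead of wiping the directory again. A minimal sketch of that idea (hypothetical class and method names, not SeaTunnel's actual committer API):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: applies overwrite semantics once per target directory,
// so later checkpoint commits of the same job do not delete the files
// published by earlier checkpoints.
public class OnceOnlyOverwriteCommitter {

    // Target directories this job has already cleared.
    private final Set<String> clearedDirectories = ConcurrentHashMap.newKeySet();

    // Called once per checkpoint with the files to publish.
    public void commit(String targetDirectory, Iterable<String> committedFiles) {
        // Only the FIRST commit for a directory performs the overwrite
        // deletion; Set.add returns false on every subsequent call.
        if (clearedDirectories.add(targetDirectory)) {
            deleteDirectoryContents(targetDirectory);
        }
        for (String file : committedFiles) {
            moveIntoDirectory(file, targetDirectory);
        }
    }

    private void deleteDirectoryContents(String dir) {
        // e.g. a recursive FileSystem delete in the real sink
    }

    private void moveIntoDirectory(String file, String dir) {
        // e.g. a FileSystem rename/move in the real sink
    }
}

In batch mode there is only a single commit, so a guard like this leaves the existing delete-then-move behavior unchanged there.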

SeaTunnel Version

2.3.12

SeaTunnel Config

{
    "env" : {
        "execution.parallelism" : 1,
        "job.mode" : "STREAMING"
    },
    "source" : [
        {
            "url" : "jdbc:mysql://10.xx.xx:xxx/xxx?useUnicode=true&characterEncoding=UTF-8&useSSL=false",
            "driver" : "com.mysql.cj.jdbc.Driver",
            "user" : "bi",
            "password" : "******",
            "query" : "SELECT xxx FROM xxxWHERE 1=1",
            "partition_column" : "id",
            "partition_num" : 32,
            "split.size" : 20000,
            "fetch_size" : 5000,
            "result_table_name" : "result_table",
            "plugin_name" : "Jdbc"
        }
    ],
    "sink" : [
        {
            "table_name" : "xxx.xxx",
            "metastore_uri" : "thrift://bigdata-vm-xx-hslpl:9083",
            "hdfs_site_path" : "datasource-conf/xxx/hdfs-site.xml",
            "hive_site_path" : "datasource-conf/xxx/hive-site.xml",
            "krb5_path" : "datasource-conf/xxx/krb5.conf",
            "kerberos_principal" : "hive/bigdata-vm-xx-hslpl@MR.xxx",
            "kerberos_keytab_path" : "datasource-conf/xxx/hive.keytab",
            "overwrite" : true,
            "source_table_name" : "source_table",
            "compress_codec" : "SNAPPY",
            "plugin_name" : "Hive"
        }
    ],
    "transform" : [
        {
            "source_table_name" : "result_table",
            "result_table_name" : "source_table",
            "query" : "select *, '2025-12-16' as `pt` from result_table",
            "plugin_name" : "Sql"
        }
    ]
}

Running Command

Submitted via the Flink API.

Error Exception

No exception is thrown; the job silently loses data from the target directory.

Zeta or Flink or Spark Version

No response

Java or Scala Version

No response

Screenshots

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct
