diff --git a/content/en/26/kafka-connect/connector-development-guide.md b/content/en/26/kafka-connect/connector-development-guide.md
index 1ddf55c9bf..f747343389 100644
--- a/content/en/26/kafka-connect/connector-development-guide.md
+++ b/content/en/26/kafka-connect/connector-development-guide.md
@@ -181,7 +181,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/27/kafka-connect/connector-development-guide.md b/content/en/27/kafka-connect/connector-development-guide.md
index 1416207e45..3519387c62 100644
--- a/content/en/27/kafka-connect/connector-development-guide.md
+++ b/content/en/27/kafka-connect/connector-development-guide.md
@@ -188,7 +188,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/28/kafka-connect/connector-development-guide.md b/content/en/28/kafka-connect/connector-development-guide.md
index 1416207e45..3519387c62 100644
--- a/content/en/28/kafka-connect/connector-development-guide.md
+++ b/content/en/28/kafka-connect/connector-development-guide.md
@@ -188,7 +188,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/30/kafka-connect/connector-development-guide.md b/content/en/30/kafka-connect/connector-development-guide.md
index 1416207e45..3519387c62 100644
--- a/content/en/30/kafka-connect/connector-development-guide.md
+++ b/content/en/30/kafka-connect/connector-development-guide.md
@@ -188,7 +188,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/31/kafka-connect/connector-development-guide.md b/content/en/31/kafka-connect/connector-development-guide.md
index 1416207e45..3519387c62 100644
--- a/content/en/31/kafka-connect/connector-development-guide.md
+++ b/content/en/31/kafka-connect/connector-development-guide.md
@@ -188,7 +188,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/32/kafka-connect/connector-development-guide.md b/content/en/32/kafka-connect/connector-development-guide.md
index 1484fa58d4..32eefd1ab9 100644
--- a/content/en/32/kafka-connect/connector-development-guide.md
+++ b/content/en/32/kafka-connect/connector-development-guide.md
@@ -181,7 +181,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/35/kafka-connect/connector-development-guide.md b/content/en/35/kafka-connect/connector-development-guide.md
index fb8fc31700..2d3c537e62 100644
--- a/content/en/35/kafka-connect/connector-development-guide.md
+++ b/content/en/35/kafka-connect/connector-development-guide.md
@@ -188,7 +188,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/36/kafka-connect/connector-development-guide.md b/content/en/36/kafka-connect/connector-development-guide.md
index dae7cd51eb..3a39f706a1 100644
--- a/content/en/36/kafka-connect/connector-development-guide.md
+++ b/content/en/36/kafka-connect/connector-development-guide.md
@@ -195,7 +195,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/37/kafka-connect/connector-development-guide.md b/content/en/37/kafka-connect/connector-development-guide.md
index dae7cd51eb..3a39f706a1 100644
--- a/content/en/37/kafka-connect/connector-development-guide.md
+++ b/content/en/37/kafka-connect/connector-development-guide.md
@@ -195,7 +195,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/38/kafka-connect/connector-development-guide.md b/content/en/38/kafka-connect/connector-development-guide.md
index 6d1a9cb83d..0080ab279a 100644
--- a/content/en/38/kafka-connect/connector-development-guide.md
+++ b/content/en/38/kafka-connect/connector-development-guide.md
@@ -197,7 +197,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/39/kafka-connect/connector-development-guide.md b/content/en/39/kafka-connect/connector-development-guide.md
index 6d1a9cb83d..0080ab279a 100644
--- a/content/en/39/kafka-connect/connector-development-guide.md
+++ b/content/en/39/kafka-connect/connector-development-guide.md
@@ -197,7 +197,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/40/kafka-connect/connector-development-guide.md b/content/en/40/kafka-connect/connector-development-guide.md
index 6d1a9cb83d..0080ab279a 100644
--- a/content/en/40/kafka-connect/connector-development-guide.md
+++ b/content/en/40/kafka-connect/connector-development-guide.md
@@ -197,7 +197,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
diff --git a/content/en/41/kafka-connect/connector-development-guide.md b/content/en/41/kafka-connect/connector-development-guide.md
index 3b2345358b..d8544666e9 100644
--- a/content/en/41/kafka-connect/connector-development-guide.md
+++ b/content/en/41/kafka-connect/connector-development-guide.md
@@ -197,7 +197,7 @@ The `SinkTask` documentation contains full details, but this interface is nearly
 The `flush()` method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The `offsets` parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the `flush()` operation atomically commits the data and offsets to a final location in HDFS.
 
-### [Errant Record Reporter](connect_errantrecordreporter)
+### Errant Record Reporter
 
 When error reporting is enabled for a connector, the connector can use an `ErrantRecordReporter` to report problems with individual records sent to a sink connector.
 
 The following example shows how a connector's `SinkTask` subclass might obtain and use the `ErrantRecordReporter`, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn't have this reporter feature:
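The null-safe pattern the patched docs describe can be sketched roughly as follows. Note the stand-in `ErrantRecordReporter` and `SinkTaskContext` types below are simplified placeholders so the sketch compiles on its own; the real interfaces live in `org.apache.kafka.connect.sink` (where `report` takes a `SinkRecord` and returns a `Future<Void>`), and the `NoSuchMethodError`/`NoClassDefFoundError` catch guards against Connect runtimes that predate the reporter feature:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the real org.apache.kafka.connect.sink types,
// so this sketch compiles without the Connect runtime on the classpath.
interface ErrantRecordReporter {
    void report(String record, Throwable error);
}

interface SinkTaskContext {
    ErrantRecordReporter errantRecordReporter(); // null when the DLQ is not enabled
}

class NullSafeSinkTask {
    private ErrantRecordReporter reporter;

    void start(SinkTaskContext context) {
        try {
            // May return null if error reporting is not configured.
            reporter = context.errantRecordReporter();
        } catch (NoSuchMethodError | NoClassDefFoundError e) {
            // Older Connect runtimes lack the reporter feature entirely.
            reporter = null;
        }
    }

    void put(List<String> records) {
        for (String record : records) {
            try {
                process(record);
            } catch (Exception e) {
                if (reporter != null) {
                    // Hand the bad record to the reporter and keep going.
                    reporter.report(record, e);
                } else {
                    // No reporter available, so fail the task.
                    throw new RuntimeException("Failed on record: " + record, e);
                }
            }
        }
    }

    private void process(String record) {
        // Hypothetical sink write that rejects malformed records.
        if (record.startsWith("bad")) {
            throw new IllegalArgumentException("unparseable record");
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        List<String> reported = new ArrayList<>();
        NullSafeSinkTask task = new NullSafeSinkTask();
        // A context whose reporter just collects errant records.
        task.start(() -> (record, error) -> reported.add(record));
        task.put(List.of("ok-1", "bad-2", "ok-3"));
        System.out.println(reported);
    }
}
```

The key point the docs make is the double guard: checking for `null` (DLQ disabled) and catching linkage errors (old runtime) so a single connector build stays compatible across Connect versions.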