lib/gis: add concurrency support for G_percent() #7259
nilason wants to merge 1 commit into OSGeo:main
Conversation
Without studying this in depth, this looks like overkill to me... Surely there must be a simpler, maybe less general, solution? How to test this?
A lot of code is for accommodating a 1-to-1 adaptation to the current use of the G_percent and G_progress API. Removing that layer (especially the pattern of finishing the counter with
With this PR (built with C11 atomic operation support, which most current and also not-so-current compilers provide, and pthread), all calls to G_percent and G_progress use this concurrent code. So try any module, but in particular parallelised/OpenMP modules, where the advantage of this is put to the test.
This adds a new non-locking G_progress_* API, which is built on re-entrancy, atomic compare-and-swap, and a dedicated thread for progress reporting. The legacy API with G_percent() and G_progress() is updated as wrappers using a global context.
Current code using G_progress and G_percent should work as is. New code may look like:
or:
The progress calculations are separated from output, so it is possible for the caller to override the default modes of output with a custom function via a GProgressSink. I have also added time-interval-based reporting with G_progress_context_create_time().

Requirement: support for C11 with atomic operations (<stdatomic.h>) and pthread. There should be a solution for adapting this for MSVC, but I can't test that.

Implementation details:
The implementation is organized as a telemetry pipeline. Producer-side API calls update atomic progress state and enqueue EV_PROGRESS or EV_LOG records into a bounded ring buffer. A single consumer thread drains that buffer, converts raw records into GProgressEvent values, and forwards them either to installed sink callbacks or to default renderers selected from the current G_info_format() mode.
Concurrency is designed as multi-producer, single-consumer per telemetry stream. Producers reserve slots with an atomic write_index, publish events by setting a per-slot ready flag with release semantics, and use atomic compare-and-swap to ensure that only one producer emits a given percent threshold or time-gated update. The consumer advances a non-atomic read_index, waits for published slots, processes events in FIFO order, and then marks slots free again.
Two lifecycle models are used. Isolated GProgressContext instances create a dedicated consumer thread that is joined during destruction. The legacy process-wide G_percent() path initializes one shared telemetry instance and a detached consumer thread on first use.
Disclosure: created with assistance of ChatGPT and Codex.
Closes: #5776