Description
I have been exploring different JSON libraries to optimize the performance of my custom pipeline and exporter, which write traces and logs to files. My primary concern was serialization speed, which the OtlpHttpExporter struggled with under heavy logging. While the nlohmann::json library is intuitive and easy to use, it is not particularly fast at serialization, and this becomes evident when writing a large volume of logs. Switching to the rapidjson library brought a significant 40-50% performance improvement to my logging system.
To benchmark the serialization impact, I modified the OtlpHttpClient to return ExportResult::kSuccess immediately after converting the proto message to JSON and dumping it to a string. I then modified example_otlp_http as follows:
```cpp
...
constexpr uint64_t kMaxIterations = 1000000;
...
InitTracer();
auto start = std::chrono::high_resolution_clock::now();
for (uint64_t i = 0; i < kMaxIterations; i++)
  foo_library();
auto stop = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
std::cout << "BM_JSON: " << duration.count() << " milliseconds" << std::endl;
CleanupTracer();
```
Results:
- nlohmann::json implementation: 127145 milliseconds
- rapidjson implementation: 82763 milliseconds
The code is available at: https://github.com/perhapsmaple/opentelemetry-cpp/tree/json-benchmark
This is not final; I think a few more changes could make it slightly more efficient.
I think this is an easy avenue for improvement, and we should consider benchmarking more thoroughly with both libraries. Happy to hear your thoughts and feedback.