I've been dealing with memory pressure and OOM kills on the LAPI pod. Enabling `chunked_decisions_stream=true` fixed it. Given how large the initial decision fetch from bouncers can be, I don't see why this option shouldn't be enabled by default.
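For anyone hitting the same issue: as far as I know this is a CrowdSec feature flag, so it can be enabled by listing it in the feature-flag file rather than in the main config. A minimal sketch, assuming the default `/etc/crowdsec/feature.yaml` location (adjust the path for your container image if it differs):

```yaml
# /etc/crowdsec/feature.yaml
# Feature flags are a plain YAML list of flag names.
# chunked_decisions_stream makes the LAPI stream decisions to bouncers
# in chunks instead of buffering the whole set in memory at once.
- chunked_decisions_stream
```

I believe the same flag can also be toggled through an environment variable of the form `CROWDSEC_FEATURE_CHUNKED_DECISIONS_STREAM=true`, which may be more convenient in a Kubernetes pod spec than mounting an extra file, though double-check the exact variable name against the docs for your CrowdSec version.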