What happened:
When I run k6 load tests with litmus, I see "1 mainLogs: Failed to get argo pod logs" in the console log.
What you expected to happen:
I expect to see logs that give me information about the k6 load test run.
Where can this issue be corrected? (optional)
I investigated and found that k6 experiments create a four-level Kubernetes pod hierarchy:
- Argo Workflow Pod: `experiment-name-timestamp-xxxxx`
- Chaos Runner: `experiment-runner-xxxxx`
- Job Pod: `experiment-job-xxxxx`
- Helper Pod: `experiment-helper-xxxxx` ← contains the actual k6 results
Main issues according to my investigation:
https://github.com/litmuschaos/litmus/blob/master/chaoscenter/subscriber/pkg/k8s/log.go#L56
This code in log.go does not check for the helper pod, so none of its logs are ever returned.
A further challenge seems to be that logs are fetched from the "live" pods via Kubernetes API calls, so once a pod terminates and is removed, its logs are gone; that is why we see the "failed to get pod logs" error.
There also seems to be another frontend issue in `return Object.entries(JSON.parse(podLogs.getPodLog.log))`, which throws if the log payload is not valid JSON (for example, when it is the plain-text error string above).
How to reproduce it (as minimally and precisely as possible):
I ran a basic k6 load test that just executes `SELECT 1;` against SQL for 30 seconds on a cluster using Litmus. You can keep all configuration (e.g., the number of virtual users) at the minimum just to get an end-to-end k6 run.
Anything else we need to know?: