Added deployment run script #127
JordyBottelier wants to merge 3 commits into rundeck-plugins:master
Conversation
I'm facing a similar problem. Because the pod resource model can have multiple pods per deployment, if I try to run a script across all my deployments, it actually runs across all my pods -- which can be a problem if two pods are doing the same thing at the same time and could deadlock or conflict. If I had a list of deployments or namespaces, I could use that to run this updated code, but it seems that I still end up running the code once per pod if I derive the list of deployment names from my nodes/pods. I have no way to de-duplicate those nodes so there is only one per deployment. In Rancher 1.x, which we used before k8s, I updated the plugin to create a counter that incremented each time it found a container within the service/deployment. It worked, but was a bit of a hack. Do you have a way to run only one pod per deployment based on the node inventory with your approach here?
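A minimal sketch of the kind of de-duplication being asked about, assuming each Rundeck node exposes its owning deployment as a node attribute; the `deployment` attribute name, node structure, and sample values below are hypothetical and depend on how the resource model labels its nodes:

```python
# Keep only the first node seen for each deployment, assuming each node
# dict carries the owning deployment under a "deployment" attribute
# (hypothetical -- the real attribute name depends on the resource model).
def one_node_per_deployment(nodes):
    seen = set()
    picked = []
    for node in nodes:
        deployment = node.get("deployment")
        if deployment is None or deployment in seen:
            continue
        seen.add(deployment)
        picked.append(node)
    return picked


nodes = [
    {"nodename": "web-7d4b9c-abc12", "deployment": "web"},
    {"nodename": "web-7d4b9c-def34", "deployment": "web"},
    {"nodename": "api-66f5d8-ghi56", "deployment": "api"},
]
print([n["nodename"] for n in one_node_per_deployment(nodes)])
# -> ['web-7d4b9c-abc12', 'api-66f5d8-ghi56']
```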
contents/common.py
Outdated
```python
exit(1)

if not resp:
    log.error("Namespace %s does not exits.", namespace)
```
spelling: "exits" should be "exist"
Thanks for the review, I fixed the spelling mistake :)
I don't fully understand your question or what you are actually trying to accomplish here. The code that I wrote executes a job on exactly 1 pod per deployment, but I do not fully follow your use case.
You're right -- my comment was a bit unclear. A minor difference is that I want to run a command rather than a script. The main difference is that I want to use the node selection interface in Rundeck -- so I can run across all deployments and have the node set automatically update and run on exactly one pod per deployment. It looks like I would have to add jobs and specify the deployment each time I added a deployment to my k8s environment.
I have just created a pull request (#131) to illustrate the approach I have taken... it may not be as aesthetically correct as defining a node resource based on the ReplicaSet (which is what I think I would need to do to use node selection to identify one node per deployment), but it is effective, and it is only a few lines of changed code that let me use all the rest of the code in the pods-* python scripts without adding new copied code.
@kdebisschop I checked out your code and left a comment. Our use cases are somewhat similar; however, they are still quite different :)
I think you can do the same using an Orchestrator plugin and a node filter with the deployment label, something like this: |


Hi,
I've been using this plugin for a while now and I love it, but I wanted to add some functionality regarding deployments. I sometimes want to run a job/script on one of the pods of a deployment (chosen randomly), with the option of retrying the job automatically if it fails.
I tested the functionality both locally and on a Kubernetes cluster and it seems to work fine. I reused a lot of code from pods-run-script.py and even extracted some parts into the common.py file.
If there are any questions or improvements you'd like to see, let me know.
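For readers skimming the thread, here is a minimal sketch of the selection idea, assuming the official kubernetes Python client; this is not the PR's actual code, and the automatic-retry behaviour mentioned above is omitted:

```python
import random

from kubernetes import client, config


def pick_random_pod(namespace, deployment_name):
    """Pick one running pod of a deployment at random."""
    config.load_kube_config()  # use config.load_incluster_config() inside a cluster

    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    # Look up the deployment and build a label selector from its matchLabels.
    deployment = apps.read_namespaced_deployment(deployment_name, namespace)
    selector = ",".join(
        "%s=%s" % (key, value)
        for key, value in deployment.spec.selector.match_labels.items()
    )

    # List the pods matching the selector and keep only running ones.
    pods = core.list_namespaced_pod(namespace, label_selector=selector).items
    running = [pod for pod in pods if pod.status.phase == "Running"]
    if not running:
        raise RuntimeError("No running pods for deployment %s" % deployment_name)
    return random.choice(running)
```

From there, the script can exec the job in the chosen pod and, as described above, retry on failure.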