Thanks for reporting, Nir.
You are right about the timeout, but it applies only while the actual job code is running. In this case the slave had somehow run out of storage / memory and got stuck while loading our stdci code.
Indeed, there are times when jobs get stuck in stages where timeouts don’t apply. Usually this does not decrease capacity, since the job is not really running on any host; we periodically kill such jobs, or they just get dropped on Jenkins restarts.
Do you know if we can add a global timeout for the whole pipeline? Something like this (a rough sketch, assuming a declarative Jenkinsfile; the stage name and the 24h value are just placeholders):
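    pipeline {
        agent any
        options {
            // Abort the whole run if it exceeds 24 hours,
            // regardless of which stage it is stuck in.
            timeout(time: 24, unit: 'HOURS')
        }
        stages {
            stage('check-patch') {
                steps {
                    echo 'stdci stage body runs here'
                }
            }
        }
    }

For a scripted pipeline, the same step can wrap the whole job body instead:

    timeout(time: 24, unit: 'HOURS') {
        node {
            // everything inside runs under the global timeout
        }
    }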
Or maybe we’re already using this? IMHO we don’t have any jobs running longer than 4 hours, so a 24h timeout, say, should be more than enough in all cases.
It’s not used anywhere in the code; I can implement it.
Thanks. I can confirm that I see these stuck jobs from time to time and just kill them, since if they’ve been running for more than a day their result is of no use to anybody. I’ll assign the ticket to you so that you can prioritize it. It’s not an urgent matter, but having global timeouts on our side would be a good thing.