Discussion:
Where does Jenkins store pending job info?
Noah Hoffman
2011-10-20 18:43:02 UTC
Permalink
Hi All,

I'm investigating a bug in our Jenkins setup: when we trigger a safe restart, Jenkins correctly waits for running builds to finish before restarting, but all pending jobs are lost across the restart. It might be something in the custom scripts we use to handle the restart (we copy files from a local Perforce depot into the Jenkins home directory). Does anyone know where Jenkins stores pending build info? I don't see anything in the jobs folder that suggests it's stored there.

Thanks,
Noah
Dean Yu
2011-10-20 22:33:18 UTC
Permalink
I think it's the queue.xml file in the home directory.

-- Dean
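For what it's worth, a quick sanity check along those lines. Jenkins persists the pending queue to queue.xml in the home directory on shutdown and re-reads it on startup, so if a restart script deletes or overwrites that file, queued builds are lost. The default path below is an assumption; adjust it to your install:

```shell
# Inspect the persisted build queue. Each pending item appears as an
# <item> element inside $JENKINS_HOME/queue.xml.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"   # placeholder path, adjust

if [ -f "$JENKINS_HOME/queue.xml" ]; then
    # Count pending items; a nonzero count means the queue was persisted.
    grep -c '<item' "$JENKINS_HOME/queue.xml" && echo "pending items found"
else
    echo "no queue.xml - queue was empty at shutdown, or the file was removed"
fi
```

Running this immediately after a safe restart is triggered (but before Jenkins comes back up) should tell you whether your restart scripts are clobbering the file.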
David Karlsen
2011-10-20 23:04:42 UTC
Permalink
Try the Persistent Build Queue plugin.
nimeacuerdo
2011-11-10 09:10:26 UTC
Permalink
Is anybody else experiencing this problem? I would like to know if there's a
better approach to bulk updates of job configurations...
Hi,
We have just run into the following problem: updating the configuration
of all our jobs with the update-job CLI command causes, once the
update for all the jobs has finished, all the running builds to
behave strangely.
They appear to be running (and in fact the associated processes are
running on our slaves), but the links to all the builds are
broken. Also, when going to any of the jobs that has a build
running, the build does not appear in the job's build list, as
if it did not exist. Furthermore, if we let those somewhat
orphaned builds finish, they never end up associated with their
corresponding jobs, as if they had never existed.
This is a big problem for us, as this was how we planned to
perform bulk updates of all our job configurations :/
Any ideas regarding this?
Thanks in advance,
David
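For reference, a sketch of the kind of bulk update being described, using the standard jenkins-cli commands list-jobs, get-job, and update-job. The server URL and the sed substitution are placeholders, not the poster's actual script:

```shell
# Hypothetical bulk-update loop: fetch each job's config.xml, transform it,
# and push it back with update-job. Assumes jenkins-cli.jar is in the
# current directory and the server URL is reachable.
JENKINS_URL="http://jenkins.example.com:8080"   # placeholder
CLI="java -jar jenkins-cli.jar -s $JENKINS_URL"

$CLI list-jobs | while read -r job; do
    # get-job writes the job's config.xml to stdout;
    # update-job reads the replacement XML from stdin.
    $CLI get-job "$job" > "/tmp/$job.xml"
    sed 's/oldValue/newValue/' "/tmp/$job.xml" | $CLI update-job "$job"
done
```

Given the symptoms above, the problem with this approach is not the loop itself but that update-job rewrites each job's configuration while builds are in flight.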
Jan Seidel
2011-11-10 09:17:20 UTC
Permalink
This seems to be standard behaviour, more or less.
I assume that changes are only written to the logs and configuration files
once the jobs have finished.
I raised this behaviour once before.
The exact same behaviour can be observed when you reload the
configuration from disk, either via the URL or via Manage Jenkins :/
The only thing you can do is make sure that no job is running.

Jan
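One way to follow that advice mechanically is to check Jenkins's /computer/api/json endpoint, which reports a busyExecutors count, before pushing configuration changes. The URL below is a placeholder, and the JSON is extracted with sed rather than a proper parser, so treat this as a sketch:

```shell
# Check whether any executor is busy before a bulk config update or a
# reload-from-disk. A nonempty, zero count means the server is idle.
JENKINS_URL="http://jenkins.example.com:8080"   # placeholder

busy=$(curl -s "$JENKINS_URL/computer/api/json" \
       | sed -n 's/.*"busyExecutors":\([0-9]*\).*/\1/p')
if [ "${busy:-unknown}" = "0" ]; then
    echo "idle - safe to update job configurations"
else
    echo "executors busy (or server unreachable): ${busy:-unknown}"
fi
```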
nimeacuerdo
2011-11-10 09:55:58 UTC
Permalink
Thx for the feedback Jan. I have updated https://issues.jenkins-ci.org/browse/JENKINS-3265
with this information.