How to clear a "stuck" job in CollabNet Edge

I'm having a problem with a job in CollabNet Edge. I created a blank repository, then did a load from a dump file. There was an issue during the load (I eventually figured out that I had run out of disk space), and because of this a job got stuck.
So, here's what's happening:
The load appeared to finish; the job isn't shown in the list.
The data was not loaded correctly (no space... but it took a while to figure that out).
So I deleted the repository, added disk space, and then tried to reload, but I get a message saying I can't because there is already a job running:
A dump file is already set to be loaded. Only one load may be scheduled at a time; progress can be monitored on the Jobs screen.
As mentioned, there is no job listed as being in progress. The repo it was loading has been deleted.
How do I clear out this stuck job?

Check if there is a file in the following directory:
csvn/data/dumps/yourRepositoryName/load
Delete the file and then reload your dump file.

Check if there is a job progress file for this repository logged under csvn/data/logs/temp
If present, remove it and then re-schedule the load.
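Combining both suggestions, a minimal shell sketch; the csvn data directory below and the file naming are assumptions (adjust the paths for your install):
# check for and remove a stale dump-load marker left by the failed load
ls -l /opt/csvn/data/dumps/yourRepositoryName/load
rm -f /opt/csvn/data/dumps/yourRepositoryName/load/*
# check for and remove a leftover job-progress file (the naming pattern is an assumption)
ls -l /opt/csvn/data/logs/temp
rm -f /opt/csvn/data/logs/temp/*yourRepositoryName*
Afterwards, scheduling the load again should no longer report a job in progress.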

Related

There are resources Jenkins was not able to dispose automatically - concerning?

After running different jobs I sometimes get this message in Jenkins:
"There are resources Jenkins was not able to dispose automatically."
I can then click the link provided and there is no additional information there. The jobs run fine, the workspace is as expected, the jobs folder looks normal. Is this something I should be concerned with?
You mentioned you believe all the work happens on your master, not an agent. This may negate what I'm about to say, but it might help for troubleshooting anyway.
We have a master/agent setup and often get those warnings. We found it was because one of our jobs created files with permission settings that didn't give Jenkins permission to delete them. Sometimes we could track down the exact files; sometimes it was blank, like you said.
We figured out that the blank ones were happening because the agent was taken offline once it was done with its jobs, and then deleted. No agent = no files. Maybe your master deletes its workspace periodically and creates the same effect?
Either way the solution for us was to change the permissions on the affected files, and we stopped getting the messages.
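For reference, a minimal sketch of that permission fix; the jenkins user and the workspace path are assumptions for a typical Linux install:
# hand ownership of the workspace contents back to the user Jenkins runs as
sudo chown -R jenkins:jenkins /var/lib/jenkins/workspace
# ensure Jenkins can traverse its directories and delete its own files
sudo chmod -R u+rwX /var/lib/jenkins/workspace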
This error occurs when Jenkins tries to delete the ws-cleanup folder but can't, possibly due to a permission error or something similar.
To check the files which Jenkins is trying but not able to delete:
sudo find /var/lib/jenkins/workspace/ws-cleanup/ -user root
To delete:
sudo find /var/lib/jenkins/workspace/ws-cleanup/ -user root -delete
To avoid this, add the delete command to the job whose files are causing the problem.

Jenkins Restored Job is not visible

We have restored a job on our Jenkins server from our last backup, but it's not visible in the UI.
I have tried reloading the configuration from disk and even restarted the service; it's still not visible.
Am I missing something?
Did you restore only the config.xml file (and its parent folder)?
Did you check the permissions on the restored file (or folder)? (See the sketch after this list.)
Did you check the "All" view in Jenkins?
If you are using a direct path to your job (like http://your.jenkins.ci/jobs/myjob), does it work?
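For the first two points, a minimal sketch; JENKINS_HOME at /var/lib/jenkins, the service user jenkins, and the job name myjob are all assumptions:
# give the restored job directory back to the user Jenkins runs as
sudo chown -R jenkins:jenkins /var/lib/jenkins/jobs/myjob
# reload the configuration from disk via the CLI (server URL and credentials are placeholders)
java -jar jenkins-cli.jar -s http://your.jenkins.ci/ -auth admin:APITOKEN reload-configuration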

Liquibase maven update fails on second attempt

I'm trying to create a table (if it does not exist) in a remote database (Postgres in a Docker container). The SQL command is located in an SQL file.
When I execute
mvn liquibase:update
by hand (bash terminal) everything works great. The build is a success and the message for the creation of the table is displayed.
Now if I delete the table and execute the same command, then although I see a success message for the completion of the task, the table is not created (and, of course, the message about the creation of the table is not displayed either).
As I've mentioned, the database is in a Docker container, so it is a remote database. I've already included the setting below in my pom.xml:
<promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
Why is this happening?
Is this changeset run in "runAlways" mode? If not, then when you run it the first time the change is applied and recorded as applied, and it is not executed at all the second time (and it makes no difference how you run it).
Liquibase keeps track of the changes it has applied in a separate table called DATABASECHANGELOG. If you manually delete the table (let's call this one "MyTable") as described above, and then re-run Liquibase, it just checks the DATABASECHANGELOG table to see if the change mentioned has been run, and if it has, it will not re-run it. If you want to see the table get re-created you would have to manually remove the row from DATABASECHANGELOG, or else (as the comment above suggests) mark the changeset in the changelog as "runAlways". Typically you do NOT want runAlways to be set because then Liquibase will try to do things like create a table that already exists.
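If you do want the table re-created on the next run, a minimal sketch of the manual route; the connection details and the changeset id are assumptions (in practice, match on author and filename as well):
# remove the tracking row so Liquibase considers the changeset unapplied
psql -h localhost -p 5432 -U postgres -d mydb -c "DELETE FROM databasechangelog WHERE id = 'create-mytable';"
# re-run the update; the changeset will be applied again
mvn liquibase:update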

How to remotely start a Jenkins build and get back the result transactionally?

I had a request to create a Java client that starts a Jenkins build for a specific job and gets back the result of that build.
The problem is that the system is used by multiple users, so their builds might get mixed up. Also, getting the latest build may retrieve the previously finished build instead of the current one. Is there any way to build and get the result transactionally?
I don't think there's a way to get true transactional functionality (in the way that, say, Postgres is transactional); however, I think you can prevent collisions amongst multiple users by doing the following:
Have your build wrapped in a script (bash, Python, or similar) which takes out an exclusive lock on a semfile before the build and releases it after it's done. That is, a file which serves as a semaphore that the build process must be able to exclusively lock in order to proceed.
That way, if you have a build in progress, and another user triggers one, the in-progress build will have the semfile locked, and the 2nd one will block waiting for the exclusive lock on that file, getting the lock only once the 1st build is complete and has released the lock on the file.
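A minimal bash sketch of the semfile idea using flock(1); the lock path and the build entry point are assumptions:
(
  flock -x 200               # block here until we hold the exclusive lock on fd 200
  ./run_build.sh             # hypothetical build entry point
) 200>/tmp/jenkins-build.lock
The second invocation blocks at flock until the first one exits and the file descriptor is closed, which releases the lock.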
Also, to be able to refer to each remote build after the fact, I would recommend you refer to my previous post Retrieve id of remotely triggered jenkins job.

Is there a possibility in Jenkins to run a build only if something changed (in ClearCase SCM) since the last build?

I need to build in Jenkins only if there has been a change in the ClearCase stream. I want to check this also in nightly builds, or when someone chooses to build manually, and to stop the build completely if there are no changes.
I tried the poll SCM but it doesn't seem to work well...
Any suggestion?
If it is possible, you should monitor the update of a snapshot view and, if the log of said update reveals any new files loaded, trigger the Jenkins job.
You'll find a similar approach in this thread.
You don't want to do something like that in a checkin trigger. It runs on the user's client and will slow things down, not to mention that you'd somehow have to figure out how to give every client access to that snapshot view.
What can work is a cron or scheduled job that runs lshistory and does something when it finds new checkins.
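A minimal sketch of that polling idea; the VOB path and time window are assumptions:
# list checkins made since yesterday under a hypothetical VOB path;
# non-empty output means there is something new to build
cleartool lshistory -recurse -nco -since yesterday /vobs/myvob/src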
Yes, you could do this via a trigger, but I'd suggest a combination of a trigger and an additional script, since updating the snapshot view might be time-consuming and affect checkins.
Create a simple trigger that fires when the files you are concerned about are changed on a stream.
The trigger script should "touch"/create a file in some well-known network location (or perhaps write to a pipe).
The other script could be a cron (Unix) or AT (Windows) job that runs continually or each minute; if the well-known file is there, it performs the update of the snapshot view. A sketch follows below.
That script could also read the pipe written to by the trigger, if you go that route.
This is better than a cron job that has to run an lshistory each time. Martina was right to suggest not doing the whole thing in a trigger, for performance and because of snapshot view accessibility for all clients; but a trigger that writes to a pipe or creates an empty file is cheap, and the cron/AT job that actually does the update is efficient too, since it does not have to query the VOB each minute, only the file (or it acts only after there is data on the pipe).
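A minimal sketch of the flag-file variant; every path, the view location, and the Jenkins URL are assumptions:
# trigger script, fired by a ClearCase postop checkin trigger: just drop a flag file
touch /net/shared/ci-flags/mystream.changed

# cron job, run each minute on the build host:
FLAG=/net/shared/ci-flags/mystream.changed
if [ -f "$FLAG" ]; then
  rm -f "$FLAG"
  cleartool update -force /views/my_snapshot_view        # refresh the snapshot view
  curl -X POST "http://your.jenkins.ci/job/myjob/build"  # kick off the Jenkins job (auth token omitted)
fi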
