Jenkins jobs disappeared from GUI, but are still present on the disk

I had 2 folders containing several jobs, and a lot of other jobs placed directly in the Jenkins jobs root.
I was the only one using the 2 folders, the rest of my colleagues only used the jobs in the root.
I haven't accessed Jenkins for 2 weeks. Today, when I logged in, the 2 folders containing my jobs were gone. The root jobs are still present.
On the disk the jobs are still present. I can find the configuration, the builds etc.
What could be the cause? My colleagues say they haven't touched my folders. There are no related errors in the Jenkins log. All my plugins are still enabled. Indeed, many have updates available, but I guess that shouldn't matter. Also, if the problem were related to one of the jobs, why are the whole folders not showing up? I would expect only the jobs inside the folders to be missing, not the folders themselves.
Any ideas how to approach this and recover my jobs?

Did someone uninstall the folders plug-in?
Have you tried reloading configuration from disk?
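If the Folders plugin is still installed, reloading can be done from Manage Jenkins → Reload Configuration from Disk, or from the Script Console; a minimal Groovy sketch, assuming admin access:

```groovy
// Script Console sketch: re-read every job's config.xml from JENKINS_HOME.
// Equivalent to "Manage Jenkins -> Reload Configuration from Disk".
import jenkins.model.Jenkins

Jenkins.instance.reload()
```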

Related

Jenkins duplicate workspace directories?

Working with Jenkins, I'm looking into cleaning up unused workspaces so as to save space on disk.
I noticed something strange though: projects appear to be copied several times, with only slightly different directory names.
Example:
workspaces
project1
project1@tmp
project1@2
project1@2@tmp
Is this normal? Are these 3 extra directories safe to clean?
Yes, it would be safe to delete those. Though I would suggest you do the delete/clean-up as part of your pipeline job, maybe as the last step (drop the workspace), rather than doing it manually yourself.
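If the job is a declarative pipeline, the clean-up can be expressed directly in the Jenkinsfile; a minimal sketch (cleanWs() assumes the Workspace Cleanup plugin, deleteDir() is a core alternative):

```groovy
// Hypothetical Jenkinsfile: drop the workspace as the very last step of the run.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // placeholder build step
            }
        }
    }
    post {
        cleanup {
            cleanWs()   // from the Workspace Cleanup plugin; deleteDir() also works
        }
    }
}
```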

What's a good location for permanently storing Jenkins artifacts?

I was recently put in charge of all Jenkins-related work at my job, and was tasked with storing build artifacts from our declarative pipelines in a place where:
- They are accessible to everyone on the team
- They can be stored for long periods of time
Ideally they would be visible on the Jenkins interface, where they appear when using the default 'archiveArtifacts' command. I know this saves them in the JENKINS_HOME directory. The problem is that I have to discard old builds to avoid running out of space and the artifacts are deleted with them. Furthermore, I don't have access to the server that Jenkins runs on because it's managed by a separate team, so I can't go into JENKINS_HOME.
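For reference, a minimal declarative sketch of the setup described above (paths and retention numbers are placeholders): archiving puts the files under JENKINS_HOME, and the build discarder that keeps disk usage in check deletes the archived artifacts together with the old builds.

```groovy
// Hypothetical Jenkinsfile illustrating the trade-off described above.
pipeline {
    agent any
    options {
        // Keeps disk usage bounded, but also deletes the artifacts archived with old builds.
        buildDiscarder(logRotator(numToKeepStr: '20'))
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // placeholder build step
                archiveArtifacts artifacts: 'build/**/*.zip', fingerprint: true
            }
        }
    }
}
```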
I looked into a few artifact repository managers (ARMs) like Nexus and Artifactory, but from my understanding those are only supposed to be used for full releases. I'm looking to save artifacts after every new merge, which can happen multiple times a day.
I'm currently saving them on a functional user's home directory, but I'm the only one with direct access to it so that's no good. I also looked into plugins like ArtifactDeployer, which doesn't support pipelines and only does as much as a 'cp' command as far as I could tell.
I ended up creating some freestyle jobs that copy artifacts from the pipelines and save them directly in their workspace. This way they're stored on our Jenkins slaves and visible through the interface to anyone who has permission to view job workspaces.
Nexus does not care what kind of artifacts you drop there. It's a good idea to use it.
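As a sketch of what that could look like from a declarative pipeline, here is a hypothetical job that pushes each build's output to a Nexus raw hosted repository (the URL, repository name, file path and credentials ID are all placeholders):

```groovy
// Hypothetical Jenkinsfile: upload each merge build's artifact to Nexus, keyed by
// build number, so it survives Jenkins' build discarding. All names are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // placeholder build step producing build/myapp.zip
            }
        }
        stage('Publish to Nexus') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'nexus-creds',
                                                  usernameVariable: 'NEXUS_USER',
                                                  passwordVariable: 'NEXUS_PASS')]) {
                    sh '''
                        curl --fail -u "$NEXUS_USER:$NEXUS_PASS" \
                             --upload-file build/myapp.zip \
                             "https://nexus.example.com/repository/build-artifacts/myapp/${BUILD_NUMBER}/myapp.zip"
                    '''
                }
            }
        }
    }
}
```

The Nexus Artifact Uploader plugin offers a dedicated pipeline step as well; the plain HTTP upload above just avoids the extra plugin dependency.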

Jenkins performance issue

I have Jenkins version 1.6.11 installed on a Windows server. The number of jobs configured is huge and the load is distributed among multiple masters and slaves. There are a couple of issues that occur very frequently:
The whole Jenkins UI becomes so slow that either the Jenkins service or the whole server needs to be restarted to bring it back to normal.
Certain jobs take way too much time to load. To fix this, the affected job has to be abandoned and a new one created in its place.
It would be really helpful if you could provide possible solutions for the two issues.
Use the pattern /node_modules/ to exclude the node modules folder, but before you do this, you should also exclude the .git folder using /.git/.

VSTS agent very slow to download artifacts from local network share

I'm running an on-prem TFS instance with two agents. Agent 1 has a local path where we store our artifacts. Agent 2 has to access that path over a network path (\\agent1\artifacts...).
Downloading the artifacts from agent 1 takes 20-30 seconds. Downloading the artifacts from agent 2 takes 4-5 minutes. If from agent 2 I copy the files using explorer, it takes about 20-30 seconds.
I've tried adding other agents on other machines. All of them perform equally poorly when downloading the artifacts but are quick when copying manually.
Anyone else experience this or offer some ideas of what might work to fix this?
Yes, it's definitely the v2 agent that's causing the problem.
Our download artifacts step has gone from 2 mins to 36 mins, which is completely unacceptable. I'm going to try out agent v2.120.2 to see if that's any better...
Agent v2.120.2
I think it's because of the number of files in our artifacts: we have 3.71 GB across 12,042 files in 2,604 folders!
The other option I will look into is zipping or creating a NuGet package for each public artifact and then unzipping after the drop. Not the ideal solution, but something I've done before when needing to use RoboCopy, which is apparently what this version of the agent uses.
RoboCopy is not great at handling lots of small files, and having to create a handle for each file across the network adds a lot of overhead!
Edit:
The change to the newest version made no difference. We've decided to go a different route and use an Artifact type of "Server" rather than "File Share" which has sped it up from 26 minutes to 4.5 minutes.
I've found the source of my problem and it seems to be the v2 agent.
Going off of Marina's comment I tried to install a 2nd agent on 01 and it had the exact same behavior as 02. I tried to figure out what was different and then I noticed 01's agent version is 1.105.7 and the new test instance is 2.105.7. I took a stab in the dark and installed the 1.105.7 on my second server and they now have comparable artifact download times.
I appreciate your help.

Rebuild a Jenkins job programmatically

We have a three-layer multi-configuration job which at times fails because some sub-job fails on some of the slaves.
We are looking at rebuilding the whole job from the beginning, across all slaves selected in the parent job, if any of the sub-jobs fail.
I have looked at the Rebuild plugin, but am also looking for a programmatic way of solving the problem; any guidance would help.
Try the Jenkins Remote Access API. It can do this.
https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API
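Concretely, the Remote Access API lets you re-trigger the parent job with an authenticated HTTP POST to JENKINS_URL/job/<job name>/build. If you would rather do it in-process (for example from the Script Console or a small watchdog job), a rough Groovy equivalent, with a hypothetical job name, is:

```groovy
// Script Console / system Groovy sketch: programmatically re-queue the parent job.
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName('parent-matrix-job')  // placeholder name
job.scheduleBuild2(0)   // 0 = no quiet period; returns a future for the queued build
```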

Resources