I have a fresh image and container of owasp/zap2docker-stable:latest.
The command:
docker exec zap1 ./zap-baseline.py
hangs or keeps processing forever after:
FAIL-NEW: 0 FAIL-INPROG: 0 WARN-NEW: 4 WARN-INPROG: 0 INFO: 0 IGNORE: 0 PASS: 12
Earlier (2-3 months ago) it ran to completion. By the way, when I execute the same command inside the container, it runs and shuts down properly. How can I fix this so that the Jenkins job won't be stuck forever at the summary?
Also, why does zap-baseline.py always print out the help section if I add '-r report.html' at the end? (EDIT: that was a typo, I actually used -t instead of -r, but the problem stays.)
That command doesn't look right to me.
The recommended command is:
docker run -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com
As per https://github.com/zaproxy/zaproxy/wiki/ZAP-Baseline-Scan
It's always printing out the help because '-t report.html' isn't valid; look at the help that is shown to see the valid arguments. For an HTML report you should be using '-r report.html'.
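As a sketch of how this could look for a Jenkins job, following the pattern on the linked wiki page (https://www.example.com and report.html are placeholders, and the report ends up in the current directory via the /zap/wrk mount):
# Run the baseline scan and write the HTML report into the current directory;
# zap-baseline.py writes reports relative to /zap/wrk inside the container.
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable \
    zap-baseline.py -t https://www.example.com -r report.html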
I'm trying to start my condor jobs using Docker containers, but all my jobs stay idle. Using the command 'condor_q -better-analyze' I got the following output:
[screenshot: condor_q -better-analyze output]
The requirement that stops my jobs is TARGET.HasDocker, but Docker is successfully installed on my VM, and the command 'condor_status -l | grep Docker' gives the following output:
[screenshot: condor_status -l | grep Docker output]
Do you know what the problem could be? Maybe the attribute is named HasDockerVolumeBDP1_2021 instead of HasDocker and is therefore not recognised? In that case, how do I change the HasDocker name?
Thank you in advance!
I am very new to Docker and have created a Dockerfile to build an image that executes Protractor tests.
That Dockerfile has an entry point that expects a parameter with the name of the test suite I want to execute.
It all runs very well when I provide suite names in the command line.
I have about 30 test suites. I am using another .sh file which filters out the suite names and, in a for loop, runs the docker command with a different suite name each time.
Now I do not want to run all 30 suites simultaneously; I want to set a limit of, say, 6 at a time and keep the others waiting until one finishes.
I execute like this:
for (( i=0; i<${tests}; i++ )); do
    docker run -dit containername $testSuiteName
done
So how can I limit the maximum number of executions at a time?
There are going to be a number of ways of tackling this problem. Here
is one possible solution.
You can treat this as a shell scripting problem, rather than a Docker problem. For example, consider the following, which instead of docker run ... just uses sleep:
#!/bin/bash
# Maximum number of jobs to run at once, and total number of jobs.
let count=5
let tests=20

for (( i=0; i<tests; i++ )); do
    # Start a job in the background (a random sleep stands in for the real work).
    sleep $((RANDOM % 10)) &
    echo "started $!"
    let count--
    if (( count <= 0 )); then
        # Limit reached: wait for any one background job to finish
        # before starting the next (wait -n requires bash >= 4.3).
        wait -n
        let count++
    fi
done

echo "waiting for remaining jobs"
wait
echo "all done"
This starts $count processes in parallel, and then waits for one to
exit. When a process exits, it immediately starts a new one. Once it
has started all the jobs, it simply waits for everything to finish.
Using this model, you would drop the -d from your docker run
command line, since you need the shell to track the background
processes. Instead of sleep ... &, you would run:
docker run containername $testSuiteName > /dev/null 2>&1 &
Note that I've dropped the -i and -t options here, since it looks like
you're running these tests non-interactively.
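Putting the pieces together, a rough sketch of your loop adapted to this pattern might look like the following (here $tests and $testSuiteName stand in for whatever your existing wrapper script already sets, and 6 is the limit you mentioned):
#!/bin/bash
limit=6   # maximum number of containers running at once

for (( i=0; i<${tests}; i++ )); do
    # $testSuiteName stands in for however your wrapper picks the suite
    # for this iteration; note there is no -d and no -i/-t.
    docker run containername "$testSuiteName" > /dev/null 2>&1 &
    let limit--
    if (( limit <= 0 )); then
        wait -n      # block until any one container finishes
        let limit++
    fi
done

wait   # wait for the remaining containers to finish
echo "all test suites finished"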
A few more possible solutions:
If you run your tests in Jenkins, you can create a job that runs one test suite, set the maximum number of executors to 6, and start as many tests as you want. Jenkins will then never run more than 6 jobs at a time.
This is, I think, the ideal and most correct approach, but also the most difficult one: use an orchestrator such as Kubernetes, which manages all your Docker containers for you. Unfortunately I don't have a step-by-step guide for it, but it is really the most professional way to tackle your problem.
I'm trying to tie scripts from an existing Docker-based pipeline into my Snakemake pipeline. I have the Docker pipeline set up using Singularity and it works. For instance,
singularity exec docker://mypipeline some_command.sh file.bam out_file.bam
works perfectly when I run it interactively on the command line. Similarly, when I incorporate the exact same command into my Snakefile it also works:
rule myrule:
    input:
        "file.bam"
    output:
        "out_file.bam"
    shell:
        "singularity exec docker://mypipeline some_command.sh {input} {output}"
However, when I try to follow this tutorial https://reproducibility.sschmeier.com/container/index.html#using-a-container-in-our-workflow to incorporate the container into my workflow as follows
singularity: "docker://mypipeline"
rule myrule:
    input:
        "file.bam"
    output:
        "out_file.bam"
    shell:
        "some_command.sh {input} {output}"
And when I run snakemake -p --use-singularity --cores 1 I get the following output:
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 myrule
1
[Sun May 17 15:28:11 2020]
rule myrule:
input: file.bam
output: out_file.bam
jobid: 0
some_command.sh file.bam out_file.bam
Activating singularity image myImage.simg
Then I get a very long report that I'm not sure what to make of, followed by this error message
Waiting at most 5 seconds for missing files.
MissingOutputException in line 3 of Snakefile:
Job completed successfully, but some output files are missing. Missing files after 5 seconds:
out_file.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: .snakemake/log/2020-05-17T152810.484310.snakemake.log
My questions:
Why does one work and not the other, and how can I get the last example to work?
Is it good practice to declare singularity: "docker://..." up front, or does it not matter?
The error message suggests the singularity command executed successfully, but Snakemake doesn't see the output file. Is the output file out_file.bam shown in your code the same one you actually use, or did you remove some of the file path? I would suggest adding the --verbose flag to snakemake and reviewing the actual singularity command that Snakemake executes.
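For example, something along these lines (the same invocation as in the question, with --verbose added) should print the actual command that Snakemake wraps in singularity exec:
# Same flags as in the question, plus --verbose to show the underlying commands.
snakemake -p --verbose --use-singularity --cores 1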
When I try to run anchore-cli image add ... it fails with:
Error: failed post url=http://engine-catalog:8228/v1/images
HTTP Code: 500
Detail: {'error_codes': []}
I then do a docker-compose ps and I see aevolume_engine-catalog_1 /docker-entrypoint.sh anch ... Up (unhealthy)
I try to fix the above with a docker-compose up -d but it just says everything is up to date.
So I have to restart my computer, then run docker-compose up -d again, and it starts everything up.
I then run the anchore-cli image add ... again, but it gets stuck on Status: not_analyzed
Waiting 5.0 seconds for next retry. It does this for about 10 minutes, and then it says Error: Requested image not found in system ... I'm then stuck back at square 1.
Anyone know what is wrong here? I'm using anchore-cli, version 0.4.1
I am unable to run repo non-interactively inside a container as part of a freestyle job.
It prompts for the user name and email. I got around that by doing a git config --global inside the job.
But then it does the color test, and that hangs indefinitely.
Looking at the source code for repo, I see this:
if os.isatty(0) and os.isatty(1) and not self.manifest.IsMirror:
    if opt.config_name or self._ShouldConfigureUser():
        self._ConfigureUser()
    self._ConfigureColor()
So, I ran the following inside the container:
python -c "import os; print os.isatty(0), os.isatty(1)"
and, sure enough, it printed out True True
Looking at the Jenkins log, it launches the container with --tty specified, and there seems to be no way to configure that option.
I can't find a bash option to force a script to be run in a non-interactive shell. If I put the above python line in a file and execute it with almost any combination of commands and options, it still prints out True True
The only way I see something different is if I use I/O redirection
bash <a.sh
which prints out False True - i.e. stdin is not a tty, and
bash <a.sh >a.log
which prints False False.
For a complex script, are there any problems using the bash <script approach?
Does anyone know any Jenkins magic to prevent Docker from being launched with --tty?
I know that the --tty is the culprit. I built the container locally and ran the following
$ docker run repotest python -c "import os;print os.isatty(0), os.isatty(1)"
False False
$ docker run --tty repotest python -c "import os;print os.isatty(0), os.isatty(1)"
True True
Running Versions:
repo: 1.12.37 (per Ubuntu 16.04 apt-get)
Jenkins: 2.149
Cloudbees Docker Plugin: 1.7.3
Container base is ubuntu:xenial
I'm using the "Build inside a docker container" option.
To run the bash script repo_script.sh "non-interactively", or more precisely without terminals associated with its standard streams, you could run your script simply as
repo_script.sh < /dev/null 2>&1 | cat
assuming you want to see the output the way you would if you simply ran repo_script.sh. By piping standard output and error to a different process, those file descriptors appear to repo_script.sh as a pipe rather than a TTY. You could also direct the output to a file, or even to /dev/null if you do not care about it:
log_file=/dev/null
repo_script.sh < /dev/null > "${log_file}" 2>&1
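If it helps, applied to the repo commands in your Jenkins job this could look roughly like the following (MANIFEST_URL is a placeholder for your real manifest URL; the redirections are what break the TTY association, as above):
# Hypothetical build step; substitute your real manifest URL and options.
repo init -u "${MANIFEST_URL}" < /dev/null 2>&1 | cat
repo sync < /dev/null 2>&1 | cat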
Running the script as
bash < repo_script.sh | cat
might work too, though it is a very unorthodox and, to my mind, hackish way of running a script just to break the association of the TTY with standard input. From the script engine's point of view, reading a script from a file is different from reading it from standard input (which, if it is a terminal, is typically not seekable), so there might be subtle differences that could bite you in unexpected ways. This approach also does not communicate your intention as clearly to the next person who needs to understand your code, and may lead to partial hair loss in that person due to extraneous head scratching.
There is no need for any bash options; just using the output redirections from within the interpreting shell, as described above, is an easy-to-comprehend, multi-platform, standard convention for changing the standard stream associations.
P.S. I think it should be enough for your repo script to test only whether standard input is a TTY. It looks to me like the author of that script did not think this through deeply enough: there is simply no point waiting for input if there is no terminal device associated with standard input, and from that alone the script could decide that everything needs to run without user interaction, or stop with an error if that is not possible.
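As an illustration of that idea, a shell script can make the same decision with the standard -t test (a hypothetical check to show the principle, not something repo itself does):
# Decide whether to prompt based solely on whether stdin is a terminal.
if [ -t 0 ]; then
    echo "stdin is a terminal: it is safe to prompt the user"
else
    echo "stdin is not a terminal: run without prompts or fail with an error"
fi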