I am trying to run the OWASP ZAP baseline scan in my Cloud Build pipeline.
https://www.zaproxy.org/docs/docker/baseline-scan/#usage
I have found tutorials on how to do this in GitHub, in Azure, and elsewhere, but nothing for Cloud Build. Is there a better option for OWASP security testing?
This is what I have in my cloudRun.yaml file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  id: 'ZAP Proxy vulnerability scan'
  entrypoint: '/bin/sh'
  args: ['scripts/ZAP_OWASP_Run.sh', '${PROJECT_ID}']
And this is what I have in the ZAP_OWASP_Run.sh:
#!/bin/sh
docker run -v "$(pwd)":/zap/wrk/:rw --user root -t owasp/zap2docker-stable zap-baseline.py -t https://myWebsite.com -T 5
I had to add --user root because I was getting an error about permissions being denied.
This kind of works, but when I tried to add a config file to ignore certain warnings it broke again. I have had to hack this together so much that I started thinking I am going about this in completely the wrong way, so I came to ask here.
Edit 1:
When I run the docker command without --user root I get the following error:
2023-01-23 23:22:34,992 Unable to copy yaml file to /zap/wrk/zap.yaml [Errno 13] Permission denied: '/zap/wrk/zap.yaml'
When I try to pass in a config file:
docker run -v $(pwd):/zap/wrk/:rw --user root -t owasp/zap2docker-stable zap-baseline.py -t https://radformation.com -T 5 -c zapAlerts.config
I get the following error:
2023-01-24 00:19:09,957 Failed to load config file /zap/wrk/zapAlerts.config not enough values to unpack (expected 3, got 1)
EDIT 2: I got it working by first generating the config file locally and then editing it; originally I had tried to copy one from an online source.
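For reference, a sketch of that generation step: zap-baseline.py has a -g option that writes out a default config file in the tab-separated id/action/name format it expects, which you can then edit and feed back in with -c. Using the same command shape as above:
# Generate a default config file (one tab-separated line per rule: id, action, name):
docker run -v "$(pwd)":/zap/wrk/:rw --user root -t owasp/zap2docker-stable zap-baseline.py -t https://myWebsite.com -g zapAlerts.config
# Edit zapAlerts.config (e.g. change WARN to IGNORE for noisy rules), then scan with it:
docker run -v "$(pwd)":/zap/wrk/:rw --user root -t owasp/zap2docker-stable zap-baseline.py -t https://myWebsite.com -T 5 -c zapAlerts.config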
My main question is, am I even doing this correctly? It feels very hacky. Is there a better way to ensure my website is OWASP compliant in GCP?
Docker perms can be a complete pain. It looks like ZAP cannot update the directory mapped to /zap/wrk/ - this will cause you lots of problems.
Try running docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable touch /zap/wrk/test.txt - if that fails then it's definitely a 'local' perm problem - once you get that simple case working then ZAP should run fine.
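If that touch test does fail, one way to avoid falling back to --user root (a sketch, assuming the workspace directory is simply not writable by the container's non-root zap user) is to loosen the directory permissions first:
# Make the mounted directory world-writable so the zap user can create its files,
# then run the scan without --user root:
chmod a+w .
docker run -v "$(pwd)":/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t https://myWebsite.com -T 5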
The Failed to load config file /zap/wrk/zapAlerts.config not enough values to unpack (expected 3, got 1) error message means the file is in the wrong format. How did you create it?
FYI we have a Diagnosing Docker Problems page here: https://www.zaproxy.org/docs/docker/diagnosing-problems/
Related
I'm quite new to software development and having some issues setting up a docker container.
I've pulled the docker container and run it. Now I want to apply some configuration to my container with:
docker run --rm --network="ansible_default" -v C:\folder\folder1\ansible\playbooks:/ansible/playbooks docker.<address>/ansible ansible-playbook -i host localhost.playbook.yml
But when I run the above code, it just gives an error:
ERROR the playbook localhost.playbook.yml does not appear to be a file
I am running in an administrator PowerShell and have cd'd into the folder that contains the yaml files (so inside C:\folder\folder1\ansible\playbooks).
Do I need ansible installed? Any pointers would be greatly appreciated!
EDIT: The docker container exits with code 2. I'm supposed to be able to access it via localhost:8080, but it's just a blank screen. I'm not sure what Exited (2) means, and I haven't found much online about it.
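One thing worth checking before reaching for a reinstall (an assumption on my part, not part of the accepted answer below): the playbook path is resolved inside the container relative to the container's working directory, so the folder you cd into in PowerShell doesn't matter. Passing the full container-side path would look like:
docker run --rm --network="ansible_default" -v C:\folder\folder1\ansible\playbooks:/ansible/playbooks docker.<address>/ansible ansible-playbook -i host /ansible/playbooks/localhost.playbook.yml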
Turns out the solution is to reinstall Docker.
I have limited knowledge of Docker, but this is what I have done: I installed Docker Desktop, pulled images for influxdb 1.8, grafana, and loadimpact/k6, and created containers for influxdb and grafana, which are running fine.
http://localhost:3000/ -> working
http://localhost:8086/ -> gives 404 page not found
I want to run my k6 script in the docker, save result in the influxdb and then use grafana to create custom dashboards based on data in influxdb.
When I run the following command from the command prompt, in the folder where the k6 script is present:
docker run -v /k6 -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db - <K6-script.js
I get the following error.
time="2021-10-16T10:09:58Z" level=error msg="The moduleSpecifier \"./libs/shim/core.js\" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker.\n\tat reflect.methodValueCall (native)\n\tat file:///-:205:34(24)\n" hint="script exception"
In the folder where K6-script.js is present, there are two more folders, K6 and libs, which are imported by K6-script.js.
Then I referred to https://k6.io/docs/using-k6/modules/#local-filesystem-modules and gave the following command:
docker run -v //c/loadtesting:/src -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db K6-script.js
which gives me the following error.
level=error msg="The moduleSpecifier \"K6-script.js\" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker. Additionally it was tried to be loaded as remote module by prepending \"https://\" to it, which also didn't work. Remote resolution error: \"Get \"https://K6-script.js\": dial tcp: lookup K6-script.js on 192.168.65.5:53: no such host\""
How do I resolve this error and run K6 script in the docker using influxdb?
After much trial and error, when I gave the following command the test ran. It couldn't connect to the InfluxDB database, but that is another issue I need to resolve; otherwise the test ran.
docker run -v //c/loadtesting:/src -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db /src/K6-script.js
I think it needed the path of the script as seen inside the container in order to run it.
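On the InfluxDB connection: inside the container, localhost refers to the container itself, not to your machine, so --out influxdb=http://localhost:8086/myk6db can't reach the InfluxDB container. On Docker Desktop the host can usually be reached as host.docker.internal, so a variant worth trying (a sketch, not tested here):
docker run -v //c/loadtesting:/src -i loadimpact/k6 run --out influxdb=http://host.docker.internal:8086/myk6db /src/K6-script.js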
I'm new to Docker and currently following this tutorial:
Learn Docker in 12 minutes
I created the necessary files and got as far as displaying "Hello World!" on localhost:80.
Beyond that point, I tried to run the container with a direct mount of my folder so I could update the index.php file and mimic a development environment, and then I got this error:
All I did was change the way the image is run so I can update the content of the index.php file and see the changes reflected in the webpage when I hit F5.
Currently using Docker for Windows on Windows 10 Pro
Docker for Windows is running
I followed every step scrupulously so I don't get myself fooled, but it didn't work for me, it seems.
To answer Mornor's question, here is the result for docker ps
And here for docker logs [container-name]
And since I now better understand what happens under the hood, how do I go about solving the problem illustrated in the log?
Here is my Dockerfile:
And the command I executed to run my image:
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/ hello-world
And so you see that the file exists:
The error is coming from Apache, which tries to show you the directory contents because there is no index file available. Either your Docker mapping is not working correctly, or your Apache does not have PHP support installed. You are accessing http://localhost; try http://localhost/index.php instead.
If you get the same error, the problem is with the mapping. If you get PHP code back, the problem is missing PHP support in Apache.
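To check the PHP side directly (a sketch, assuming a Debian-based Apache image where apache2ctl is available), you can list the modules Apache has loaded inside the running container:
# No output here means Apache has no PHP module loaded:
docker exec [container-name] apache2ctl -M | grep -i php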
I think you're mounting your index.php incorrectly. What you could do to debug it is to first check whether index.php is indeed mounted within the container.
You could issue the following command :
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/ hello-world bash -c 'ls -lsh /var/www/html/'
(use sh instead of bash if that does not work). If you can indeed see an index.php there, then congratulations, your file is correctly mounted and the error is coming from Apache rather than Docker.
If index.php is not there, then you have to check your Dockerfile. You mount src/, so check whether src/ is in the same directory as your Dockerfile.
Keep us updated :)
I know this answer is late, but the fix is very easy:
This happens when using Docker on a host with SELinux, because the host has no knowledge of the container's SELinux policy.
By adding :z:
docker run -p 80:80 -v /wmi/tutorials/docker/src/:/var/www/html/:z hello-world
this will automatically do the chcon relabeling that you would otherwise need to do yourself.
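For reference, the manual equivalent of :z is something like the following (a sketch; the exact SELinux type can vary by distribution, e.g. container_file_t on newer systems):
# Relabel the host directory so containers are allowed to access it:
chcon -Rt svirt_sandbox_file_t /wmi/tutorials/docker/src/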
Check whether the html folder has the proper permissions.
I'm a newbie with Docker, and I tried to create a Dockerfile to run a website written with Rails and PostgreSQL on Apache + Passenger.
When I run the Dockerfile it builds successfully, but then I hit a permission denied problem. I found that the web folder must belong to the apache user, so I changed the owner of the web source to apache (in the container), and it ran OK.
But every time I modify a file locally, it asks for a password when I save the file.
I also checked the permissions of the source locally; the ownership had all been changed to a strange user.
How can I solve this problem?
This is my Dockerfile.
And I used the following two commands to build and run it:
docker build -t wics .
docker run -v /home/khanhpn/Project/wics:/home/abc -p 80:80 -it wics /bin/bash
After some time, I found a solution to this problem.
I just added this line to the Dockerfile, and the problem was solved:
RUN usermod -u 1000 apache
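A slightly more flexible variant of the same idea (a sketch; HOST_UID is a build argument I'm introducing, and it assumes the apache user already exists in the base image) is to pass your host UID in at build time instead of hard-coding 1000:
# Hypothetical Dockerfile fragment:
#   ARG HOST_UID=1000
#   RUN usermod -u ${HOST_UID} apache
# Build with the UID of the host user who edits the files, so ownership matches on both sides:
docker build --build-arg HOST_UID=$(id -u) -t wics .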
I have a server running Gitlab. Let's say that the address is https://gitlab.mydomain.com.
Now what I want to achieve is to install a continuous integration system. Since I am using Gitlab, I opted for Gitlab CI, as it feels like the most natural way to go. So I went to the Docker repo and found this image.
So I run the image to create a container with the following
docker run --restart=always -d -p 9000:9000 -e GITLAB_URLS="https://gitlab.mydomain.com" anapsix/gitlab-ci
I give it a minute to boot up, and I can now access the CI through the URL http://gitlab.mydomain.com:9000. So far so good.
I log in the CI and I am greeted by this message:
Now you need Runners to process your builds.
So I go back to Docker Hub and find this other image. Apparently I have to boot this image up interactively. I follow the instructions and it creates the configuration files:
mkdir -p /opt/gitlab-ci-runner
docker run --name gitlab-ci-runner -it --rm -v /opt/gitlab-ci-runner:/home/gitlab_ci_runner/data sameersbn/gitlab-ci-runner:5.0.0-1 app:setup
The interactive setup asks me for the data it needs:
Please enter the gitlab-ci coordinator URL (e.g. http://gitlab-ci.org:3000/ )
http://gitlab.mydomain.com:9000/
Please enter the gitlab-ci token for this runner:
12345678901234567890
Registering runner with registration token: 12345678901234567890, url: http://gitlab.mydomain.com:9000/.
Runner token: aaaaaabbbbbbcccccccdddddd
Runner registered successfully. Feel free to start it!
I go to http://gitlab.mydomain.com:9000/admin/runners, and hooray, the runner appears on stage.
All seems to work great, but here comes the problem:
If I restart the machine, due to an update or whatever reason, the runner is not there anymore. I could maybe add --restart=always to the command when I run the image of the runner, but this would be problematic because:
The setup is interactive, so the token to register runners has to be entered manually
Every time the container with Gitlab CI is re-run, the token to register new runners is different.
How could I solve this problem?
I have a way of pointing you in the right direction, but I'm still trying to make it work myself; hopefully we both manage to get it up. Here's my situation:
I'm using CoreOS + Docker, trying to do exactly what you're trying to do, and in CoreOS you can set up a service that starts the CI every time you restart the machine (as well as Gitlab and the others). My problem is making that same installation automatic.
After some digging I found this: https://registry.hub.docker.com/u/ubergarm/gitlab-ci-runner/
In its documentation they state that it can be done in two ways:
1. Mount a .dockercfg file containing credentials into the /root directory
2. Start your container with this info:
-e CI_SERVER_URL=https://my.ciserver.com \
-e REGISTRATION_TOKEN=12345678901234567890 \
Meaning you can set it up to auto-start the CI with your configs. I've been trying this for 2 days; if you manage to do it, tell me how =(
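For what it's worth, combining those two variables with --restart=always should give a runner that registers itself non-interactively and comes back after a reboot. A sketch using the ubergarm/gitlab-ci-runner image from the link above and the values from the question:
docker run -d --restart=always \
  -e CI_SERVER_URL=http://gitlab.mydomain.com:9000/ \
  -e REGISTRATION_TOKEN=12345678901234567890 \
  ubergarm/gitlab-ci-runner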