Docker Bench for Security for image on Google Cloud - docker

I have an image in Container Registry and deployed to App Engine flex.
How do I use Docker Bench for Security to check my containers security?

You can't run Docker Bench for Security directly against images stored in Google Cloud Container Registry; it audits a Docker host and the containers running on it.
You can run it locally with the following command:
docker run -it --net host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /etc:/etc:ro \
-v /usr/bin/docker-containerd:/usr/bin/docker-containerd:ro \
-v /usr/bin/docker-runc:/usr/bin/docker-runc:ro \
-v /usr/lib/systemd:/usr/lib/systemd:ro \
-v /var/lib:/var/lib:ro \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
--label docker_bench_security \
docker/docker-bench-security
For more information on Docker Bench usage, see the docker/docker-bench-security repository.
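If the image you want to audit lives in Container Registry, one hedged workaround is to pull it onto a local Docker host first and then run Docker Bench there (the image path below is hypothetical):
# authenticate the local Docker client against gcr.io
gcloud auth configure-docker
# pull the image so it exists on the local daemon that Docker Bench audits
docker pull gcr.io/YOUR_PROJECT/YOUR_IMAGE:latest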
I think you should also be able to replicate this process with Cloud Build; check the documentation to see how:
Cloud Build quickstart
Cloud Build config reference


docker: Error response from daemon: invalid volume specification

I'm currently following this tutorial to run a model on Docker that was built using the Google Cloud AutoML Vision:
https://cloud.google.com/vision/automl/docs/containers-gcs-tutorial
I'm having trouble running the container, specifically running this command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I have my environment variables set up right (did an echo $<env_var>). I do not have a /tmp/mounted_model/0001 directory on my local system. My model path is configured to be the model location on the cloud storage.
${YOUR_MODEL_PATH} must be a directory on the host on which you're running the container.
Your question suggests that you're using the Cloud Storage bucket path, but you can't do that.
Reviewing the tutorial, I think the instructions are confusing.
You are told to:
gsutil cp \
${YOUR_MODEL_PATH} \
${YOUR_LOCAL_MODEL_PATH}/saved_model.pb
So, your command should probably be:
sudo docker run \
--rm \
--interactive --tty \
--name=${CONTAINER_NAME} \
--publish=${PORT}:8501 \
--volume=${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 \
${CPU_DOCKER_GCR_PATH}
NB I added --interactive --tty to make debugging easier; it's optional
NB ${YOUR_LOCAL_MODEL_PATH} not ${YOUR_MODEL_PATH}
NB The image reference should be ${CPU_DOCKER_GCR_PATH} on its own; omit the -t
I've not run through this tutorial.
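If the container does start, here's a quick smoke test; a hedged sketch, assuming the tutorial serves the model through TensorFlow Serving's standard REST API under the model name default:
# should return the model's serving status as JSON
curl http://localhost:${PORT}/v1/models/default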

How can an app in a Docker container access a DB on Windows?

OS: Windows Server 2016
I have an app written in Go, running in a Docker container. The app has to access "D:\test.db". How can I do that?
Use Docker volumes, via the -v or --mount flag, when you start your container.
A modified example from the Docker docs:
$ docker run -d \
--mount source=myvol2,target=/app \
nginx:latest
You just need to replace nginx:latest with your image name and adapt source and target as needed.
Another example (also from the docs) using -v and mounting in read-only mode:
$ docker run -d \
-v nginx-vol:/usr/share/nginx/html:ro \
nginx:latest
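For the specific case in the question (a file at D:\test.db on Windows Server 2016), a bind mount is probably what you want rather than a named volume. A minimal hedged sketch, assuming Windows containers and a hypothetical image name my-go-app; single files generally can't be bind-mounted into Windows containers, so mount the directory and point the app at the mounted path:
# mount D:\ into the container; the app then reads C:\data\test.db
docker run -d -v D:\:C:\data my-go-app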

Docker not found when building docker image using Docker Jenkins container pipeline

I have Jenkins running as a Docker container. Now I want to build a Docker image in a pipeline, but the Jenkins container keeps telling me Docker is not found.
[simple-tdd-pipeline] Running shell script
+ docker build -t simple-tdd .
/var/jenkins_home/workspace/simple-tdd-pipeline#tmp/durable-ebc35179/script.sh: 2: /var/jenkins_home/workspace/simple-tdd-pipeline#tmp/durable-ebc35179/script.sh: docker: not found
Here is how I run my Jenkins image:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
-v /var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkins
And the Dockerfile of the Jenkins image is:
https://github.com/jenkinsci/docker/blob/9f29488b77c2005bbbc5c936d47e697689f8ef6e/Dockerfile
You're missing the Docker client. Install it like this in the Dockerfile:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
Source
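After rebuilding the image, a hedged sanity check that the client is actually there (the container name myjenkins is assumed):
# prints the client version; the daemon half only responds if the socket is mounted
docker exec -it myjenkins docker version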
In your Jenkins interface, go to "Manage Jenkins/Global Tool Configuration".
Then scroll down to Docker Installations and click "Add Docker". Give it a name like "myDocker"
Make sure to check the box which says "Install automatically". Click "Add Installer" and select "Download from docker.com". Leave "latest" in the Docker version. Make sure you click Save.
In your Jenkinsfile add the following stage before you run any docker commands:
stage('Initialize') {
    // make the auto-installed Docker client available on the PATH
    def dockerHome = tool 'myDocker'
    env.PATH = "${dockerHome}/bin:${env.PATH}"
}
Edit: May 2018
As pointed out by Guillaume Husta, jpetazzo's blog article discourages this technique:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
The Docker client should be installed inside the container as described in that post. Also, the jenkins user should be in the docker group, so execute the following:
$ docker exec -it -u root my-jenkins /bin/bash
# usermod -aG docker jenkins
and finally restart the my-jenkins container.
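A hedged way to verify the group change took effect after the restart:
# id should list docker among the jenkins user's groups
docker exec -it -u jenkins my-jenkins id
# and the jenkins user should now be able to talk to the daemon
docker exec -it -u jenkins my-jenkins docker ps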
Original answer:
You could use the host's Docker engine, as in this Adrian Mouat blog article.
docker run -d \
--name my-jenkins \
-v /var/jenkins_home:~/.jenkins \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins
This avoids having multiple Docker engine versions on the host and in the jenkins container.
The problem is in your Jenkins container: it isn't able to use the Docker engine, even if you install Docker from the plugin manager. From my research, there are a few alternatives to work around this issue:
1: Build an image based on a Docker image that already has Docker pre-installed, like getintodevops/jenkins-withdocker:lts
2: Build the image from jenkins/jenkins, mounting the volumes to your host, then install Docker yourself by creating another container with the same volumes and running the install from bash, or use Robert's suggestion:
docker run -p 8080:8080 -p 50000:50000 \
-v $HOME/.jenkins/:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkins/jenkins:latest
3: The simplest: expose the Docker binary installed on your host machine to your Jenkins container with -v $(which docker):/usr/bin/docker
Your docker command should look like this:
docker run \
--name jenkins --rm \
-u root -p 8080:8080 -p 50000:50000 \
-v $(which docker):/usr/bin/docker \
-v $HOME/.jenkins/:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkins/jenkins:latest
Source: https://forums.docker.com/t/docker-not-found-in-jenkins-pipeline/31683
Extra option: it makes little sense if you just want a single Jenkins server, but it's always possible to run an OS image like Ubuntu and install the Jenkins .war file inside it.
docker run -d \
--group-add docker \
-v $(pwd)/jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
-p 8080:8080 -p 50000:50000 \
jenkins/jenkins:lts
Just add the --group-add docker option when you docker run.
Add the Docker binary path, i.e. -v $(which docker):/usr/bin/docker, to the container's volumes, like this:
docker run -d \
--name my-jenkins \
-v $(which docker):/usr/bin/docker \
-v /var/jenkins_home:~/.jenkins \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins
This section helped me install Docker inside the Jenkins container: https://www.jenkins.io/doc/book/installing/docker/#downloading-and-running-jenkins-in-docker
Also, I had to replace FROM jenkins/jenkins:2.303.1-lts-jdk11 in the Dockerfile in step 4(a) with FROM jenkins/jenkins.
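For reference, a minimal hedged sketch of that approach, written here as a shell heredoc; it assumes the jenkins/jenkins base image is Debian-based, where the docker.io package provides a Docker client:
cat > Dockerfile <<'EOF'
FROM jenkins/jenkins
USER root
# install a Docker client inside the image; the daemon still comes from the host socket
RUN apt-get update && apt-get install -y docker.io
USER jenkins
EOF
docker build -t myjenkins-docker .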

How to run Tensorboard and jupyter concurrently with docker?

I'm starting to learn how to use TensorFlow for machine learning, and I've found Docker pretty convenient for deploying TensorFlow to my machine. However, the examples I could find don't work in my target setup, which is:
Under Ubuntu 16.04, using nvidia-docker to host the Jupyter and TensorBoard services together (either two containers, or one container with two services), with files created from Jupyter visible to the host OS.
Ubuntu 16.04
Docker
nvidia-docker
Jupyter
TensorBoard
Jupyter container
nvidia-docker run \
--name jupyter \
-d \
-v $(pwd)/notebooks:/root/notebooks \
-v $(pwd)/logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
TensorBoard container
nvidia-docker run \
--name tensorboard \
-d \
-v $(pwd)/logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
I tried to mount the logs folder into both containers so TensorBoard could access the results from Jupyter, but the mount doesn't seem to work: when I create a new file in the jupyter container's notebooks folder, nothing appears in the host folder $(pwd)/notebooks.
I also followed the instructions in Nvidia Docker, Jupyter Notebook and Tensorflow GPU
nvidia-docker run -d -e PASSWORD='winrar' -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Only Jupyter worked; TensorBoard could not be reached on port 6006.
I was facing the same problem today.
Short answer: I'm going to assume you are using the same container for both Jupyter Notebook and TensorBoard. So, as you wrote, you can deploy the container with:
nvidia-docker run -d --name tensor -e PASSWORD='winrar'\
-p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Now you can access both ports 8888 and 6006, but first you need to start TensorBoard:
docker exec -it tensor bash
tensorboard --logdir /root/logs
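Note that started this way, TensorBoard ties up your shell and stops when the exec session ends. A hedged alternative is to launch it detached:
# docker exec -d runs the command in the background inside the container
docker exec -d tensor tensorboard --logdir /root/logs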
About the other option, running Jupyter and TensorBoard in different containers: if you have problems mounting the same directories in different containers (in the past there was a bug about that), note that since Docker 1.9 you can create independent volumes not tied to particular containers. This may be a solution:
Create two volumes to store logs and notebooks.
Deploy both images with these volumes.
docker volume create --name notebooks
docker volume create --name logs
nvidia-docker run \
--name jupyter \
-d \
-v notebooks:/root/notebooks \
-v logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
nvidia-docker run \
--name tensorboard \
-d \
-v logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
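A quick hedged check that both containers really share the logs volume:
# create a file from the jupyter container...
docker exec jupyter touch /root/logs/smoke-test
# ...and it should be visible from the tensorboard container
docker exec tensorboard ls /root/logs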
As an alternative, you can also use the ML Workspace Docker image. The ML Workspace is a web IDE that combines Jupyter, TensorBoard, VS Code, and many other tools & libraries into one convenient Docker image. Deploying a single workspace instance is as simple as:
docker run -p 8080:8080 mltooling/ml-workspace:latest
All tools are accessible from the same port. You can find information on how to access TensorBoard in the project's documentation.

Docker Swarm - equivalent docker commands

As far as I know, the Docker Swarm API is compatible with the official Docker API.
What is the equivalent Docker Swarm commands for the following docker commands:
docker ps -a
docker run --net=host --privileged=true \
-e DEVICE=$VETH_NAME -e SWARM_MANAGER_ADDR=$SWARM_MANAGER_ADDR -e SWARM_MANAGER_PORT=$SWARM_MANAGER_PORT \
-v conf_files:/etc/sur \
-v conf_files:/etc/sur/rules \
-v _log:/var/log/sur \
-d sur
The standalone Swarm simply exposes a different host/port for you to connect to with the client (the client being the docker CLI). The manager relays commands to each node in the swarm as appropriate. The easiest way to do that is to set $DOCKER_HOST to point to the port the manager is listening on:
# start your manager, the end of the command is your discovery method
docker run -d -P --restart=always --name swarm-manager swarm manager ...
# send all future commands to the manager
export DOCKER_HOST=$(docker port swarm-manager 2375)
# run any docker ps, docker run, etc commands on the Swarm
docker ps
docker run --net=host --privileged=true \
-e DEVICE=$VETH_NAME \
-e SWARM_MANAGER_ADDR=$SWARM_MANAGER_ADDR \
-e SWARM_MANAGER_PORT=$SWARM_MANAGER_PORT \
-v conf_files:/etc/sur \
-v conf_files:/etc/sur/rules \
-v _log:/var/log/sur \
-d sur
# return to running commands on the local docker host
unset DOCKER_HOST
If you need those SWARM_MANAGER_ADDR/PORT values defined, they can come from the docker port command. Otherwise, I'm not familiar enough with the "sur" image to know what values you need to pass there.
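A hedged way to confirm the client is really talking to the Swarm manager rather than the local engine:
# with DOCKER_HOST pointing at the manager, docker info reports the swarm's nodes
docker info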
