I am using the Ansible Docker module and trying to run a container with the "--rm" flag set. However, I do not see an option for the "--rm" flag, or any way to pass arbitrary Docker flags, in the Ansible Docker module.
Is there a way to set the "--rm" flag when starting a container with the Ansible Docker module?
Thanks
The Docker module linked by the OP is deprecated, and @Lexandro's answer is outdated.
This is now supported in the newer module named docker_container as the auto_remove option (added in Ansible 2.4).
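A minimal sketch of how auto_remove might be used (the task name, container name, and image below are illustrative):

- name: Run a throwaway container (the equivalent of docker run --rm)
  docker_container:
    name: throwaway
    image: busybox               # illustrative image
    command: echo hello
    auto_remove: yes             # requires Ansible >= 2.4 and Docker API >= 1.25
    state: started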
--rm is implemented only in the Docker client itself, by combining two operations (run, then remove), and it only works in foreground (attached) mode. So you can't run the container with the -d option or invoke this behavior via the REST API. You can use --rm only if you call it directly, as in docker run --rm ...
If you have an older Docker API version, then you will get:
"msg": "Docker API version is x.xx. Minimum version required is x.xx to set auto_remove option."
Hence, maybe you can use:
cleanup: yes
It cleans up the container after the command executes, which does the same thing as --rm.
I had a similar issue when I got:
"msg": "Docker API version is 1.23. Minimum version required is 1.25 to set auto_remove option."
Related
I need my image to start with this command:
docker run -it --rm --security-opt seccomp=./chrome.json <image_id>
I'm deploying it to Google Compute Engine: https://cloud.google.com/compute/docs/containers/deploying-containers
As far as I understand, I can't specify arguments there, so Google Cloud starts the container with just a plain docker run command.
How do I pass these arguments? Maybe I can specify those args in Dockerfile somehow?
When you use the feature to deploy a container directly on Compute Engine, you are limited to defining:
Entry point
Args to pass to the entry point
Environment variables
That's all; you can't add additional or custom parameters.
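For illustration, this is roughly what the built-in feature lets you declare when creating the instance (the instance name, image, and argument values are placeholders; treat this as a sketch of the gcloud flags, not an exact recipe):

gcloud compute instances create-with-container my-vm \
    --container-image=gcr.io/your-project/your-image \
    --container-command=/usr/bin/my-entrypoint \
    --container-arg=--some-arg \
    --container-env=KEY=VALUE

Note there is no flag here for extra docker run options such as --rm or --security-opt.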
One solution is, instead of using the built-in feature, to use the Container-Optimized OS (COS) on your Compute Engine instance and to create a startup script that downloads and runs the container with the docker arguments you want:
# Fetch an access token for the VM's default service account from the metadata server
METADATA=http://metadata.google.internal/computeMetadata/v1
SVC_ACCT=$METADATA/instance/service-accounts/default
ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token | cut -d'"' -f 4)
# Log in to Container Registry with that token, then run the image
docker login -u oauth2accesstoken -p $ACCESS_TOKEN https://gcr.io
docker run … gcr.io/your-project/your-image
On the last line, you can customize the docker run parameters in your startup script.
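For the command in the question, that last line could look something like this (assuming the startup script also downloads chrome.json onto the VM first; /tmp/chrome.json is an illustrative path, and -it is dropped because a startup script has no interactive terminal):

docker run --rm --security-opt seccomp=/tmp/chrome.json gcr.io/your-project/your-image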
So now, to update the container, you have to update the startup script and reset your VM (or create a new Compute Engine instance with COS and the new startup script, and delete the previous one).
It's a matter of tradeoff between the convenience of the built-in feature and the customization capacity.
I'm working with a poor internet connection and trying to pull and run an image.
I wanted to download one layer at a time, and per the documentation I tried adding a flag, --max-concurrent-downloads, like so:
docker run --rm -p 8787:8787 -e PASSWORD=blah --max-concurrent-downloads=1 rocker/verse
But this gives an error:
unknown flag: --max-concurrent-downloads
See 'docker run --help'.
I tried typing docker run --help and interestingly did not see the option --max-concurrent-downloads.
I'm using Docker Toolbox since I'm on an old Mac.
The documentation lists an option for --max-concurrent-downloads; however, this doesn't appear in my terminal when I type docker run --help.
How can I change the default of downloading 3 layers at a time to just one?
From the official documentation (https://docs.docker.com/engine/reference/commandline/pull/#concurrent-downloads):
The --max-concurrent-downloads setting controls how many layers are downloaded in parallel during a pull operation.
It is a daemon option: you set it with the dockerd command (or in daemon.json), not on docker run or docker pull.
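For example, if you start the daemon yourself, the flag is passed like this (a sketch; on most systems the daemon is managed by systemd, Docker Desktop, or the Toolbox VM rather than launched by hand):

dockerd --max-concurrent-downloads 1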
If you're using the Docker Desktop GUI for Mac or Windows:
You can edit the daemon.json file directly in the Docker Engine settings.
This setting needs to be passed to dockerd when starting the daemon, not to the docker client CLI. With docker-machine (and other Docker Desktop environments), the dockerd process runs inside a VM.
With the docker-machine used by Toolbox, you typically pass the engine flags on the docker-machine create command line, e.g.
docker-machine create --engine-opt max-concurrent-downloads=1
Once you have created a machine, you can follow the steps from these answers to modify the config of an already running machine, mainly:
SSH into your local docker VM (note: if 'default' is not the name of your docker machine, substitute 'default' with your docker machine's name):
$ docker-machine ssh default
Open the Docker profile:
$ sudo vi /var/lib/boot2docker/profile
Then in that profile, you would add --max-concurrent-downloads=1 to the daemon's extra arguments.
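On a boot2docker-based VM that profile typically carries the extra daemon flags in an EXTRA_ARGS block; a sketch of the added flag (the rest of your profile's contents will differ):

EXTRA_ARGS='
--max-concurrent-downloads=1
'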
Newer versions of Docker Desktop (along with any Linux install) make this much easier with a configuration menu (Daemon -> Advanced) where you can specify your daemon.json entries, like:
{
  "max-concurrent-downloads": 1
}
I'm trying to get Mattermost working with Docker for Windows. As mentioned here, I executed the following command:
docker run --name mattermost-preview -d --publish 8065:8065 mattermost/mattermost-preview
After pulling and extracting the files, docker exits and throws the following error:
docker.exe: Error response from daemon: Unrecognised volume spec: invalid volume specification: './mattermost-data'.
I'm running Windows Server 2019 PreRelease 17623 and Docker 17.10.0-ee-preview-3.
Feedback from our engineers is that, while they've never used Docker on Windows, your issue might be Windows related, because Docker can't create the volume (maybe related to the different path syntax between Linux and Windows?).
Not sure if this will help, but here is a very basic example of using a volume on the Windows engine:
docker run -it -v C:\Users\Administrator\:C:\Users\public microsoft/nanoserver powershell
Also, you might want to use the stable release channel instead of a pre-release version of Windows. There are a number of changes made, and base images will not be compatible. It is likely that the author of this image built it for the Windows stable release.
Maybe contact Mattermost support?
I'm trying to figure out how to use nvidia-docker (https://github.com/NVIDIA/nvidia-docker) using https://docs.ansible.com/ansible/latest/docker_container_module.html#docker-container.
Problem
My current Ansible playbook executes my container using the "docker" command instead of "nvidia-docker".
What I have done
Based on some reading, I have tried adding my devices, without success:
docker_container:
  name: testgpu
  image: "{{ image }}"
  devices: ['/dev/nvidiactl', '/dev/nvidia-uvm', '/dev/nvidia0', '/dev/nvidia-uvm-tools']
  state: started
Note: I tried different syntaxes for devices (inline, ...), but I'm still getting the same problem.
This task does not throw any error. As expected, it creates a Docker container with my image and tries to start it.
Looking at my container logs:
terminate called after throwing an instance of 'std::runtime_error'
what(): No CUDA driver found
which is the exact same error I'm getting when running
docker run -it <image>
instead of
nvidia-docker run -it <image>
Any ideas how to override the docker command when using docker_container with Ansible?
I can confirm my CUDA drivers are installed and all the /dev/nvidia* paths are valid.
Thanks
The docker_container module doesn't use the docker executable; it uses the Docker daemon API through the docker-py Python library.
Looking at the nvidia-docker wrapper script, it sets --runtime=nvidia and -e NVIDIA_VISIBLE_DEVICES.
To set NVIDIA_VISIBLE_DEVICES, you can use the env argument of docker_container.
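A sketch of what that could look like (values are illustrative; NVIDIA_VISIBLE_DEVICES=all exposes all GPUs when the nvidia runtime is in effect):

docker_container:
  name: testgpu
  image: "{{ image }}"
  env:
    NVIDIA_VISIBLE_DEVICES: all
  state: started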
But I see no way to set the runtime via the docker_container module as of the current Ansible 2.4.
You can try to overcome this by setting "default-runtime": "nvidia" in your daemon.json configuration file, so the Docker daemon will use the nvidia runtime by default.
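A sketch of such a daemon.json (the runtime path below assumes nvidia-docker2 / nvidia-container-runtime is installed; adjust it to your setup):

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}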
I have installed Grafana via Docker.
Is it possible to export and run grafana-cli on my host?
If you meant running Grafana with some plugins installed, you can do it by passing a list of plugin names to an environment variable called GF_INSTALL_PLUGINS:
sudo docker run -d -p 3000:3000 -e "GF_INSTALL_PLUGINS=gridprotectionalliance-openhistorian-datasource,gridprotectionalliance-osisoftpi-datasource" grafana/grafana
I did this on Grafana 4.x
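If you want to verify that the plugins were installed, one option is to run grafana-cli inside the container (this assumes you also gave the container a name with --name grafana, which the command above does not do):

docker exec -it grafana grafana-cli plugins ls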
Installing plugins for Grafana 3 or above
For a fully automatic setup of your Grafana install with the plugins you want, I would follow Ricardo's suggestion. It's much better if you can configure your entire container as wanted in a single hit like that.
However, if you are just playing with plugins and want to install some manually, then you can access a shell on the running Docker instance from the host:
host:~$ docker exec -it grafana /bin/bash
... assuming you named the Docker container "grafana"; otherwise you will need to substitute the given container name. The shell prompt that returns will allow you to run the standard grafana-cli commands, e.g.
root@3e04b4578ebe:/# grafana-cli plugins install ...
Be warned that it may tell you to run service grafana-server restart afterwards. In my experience that didn't work (I'm not sure it runs as a traditional service in the container). However, if you exit the container and restart the container from the host...
host:~$ docker restart grafana
That should restart the grafana service and your new plugins should be in place.
My setup: Grafana running in a Docker container, with Docker installed on Windows 10.
Test: command to display the grafana-cli help:
c:\>docker exec -it grafana grafana-cli --help
Tested with Grafana version 6.4.4 (November 6, 2019).