I am running VirtualBox images inside Docker containers, and this requires launching Docker with either
docker run -i -t --device=/dev/vboxdrv fommil/freeslick:base
or
docker run -i -t --privileged=true fommil/freeslick:base
Obviously, the former is preferable, but I have no control over the way the target script launches the docker instance (it is managed by a third party) other than turning on/off privileged mode.
Is there a way to set system defaults for docker run such that all containers launched on a Linux box will use --device=/dev/vboxdrv?
Because --device is an "operator exclusive option", it can only be specified when docker run is invoked. So no, there is no way to set a default for that option.
Related
I am trying to start a container for nifi-toolkit, but I get the error message "No program option specified. Options available include: encrypt-config s2s flow-analyzer node-manager tls-toolkit file-manager notify zk-migrator cli". I am not sure where/how to set these options.
Open a terminal and run:
docker run --rm apache/nifi-toolkit file-manager
This is an example for the file-manager option; change it to whichever option you need. You can also add parameters after file-manager.
Explanation:
This image has an entrypoint defined, meaning a program that runs by default when you start a container based on it.
You can see it with:
docker pull apache/nifi-toolkit
docker run --rm --entrypoint cat apache/nifi-toolkit /opt/sh/docker-entrypoint.sh
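In general, an entrypoint behaves like this: whatever you pass after the image name on docker run is appended as arguments to the entrypoint command. As a rough, hypothetical sketch (not the actual nifi-toolkit Dockerfile):
FROM alpine
COPY docker-entrypoint.sh /opt/sh/docker-entrypoint.sh
ENTRYPOINT ["/opt/sh/docker-entrypoint.sh"]
With an image built like that, passing file-manager on the command line effectively executes /opt/sh/docker-entrypoint.sh file-manager inside the container, which is what happens with apache/nifi-toolkit as well.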
Is there any way that the docker environment could be given to multiple USB devices, say /dev/video0, /dev/video4 and /dev/ttyUSB4?
In case of a single device, it could be
docker run -t -i --device=/dev/ttyUSB4 ubuntu bash
and for multiple devices
docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb ubuntu bash
But I need to know if there is any way I could provide access to specific devices alone, as in the former case (without using privileged mode).
Docker treats the --device option as repeatable, so you can pass it several times to expose multiple devices:
docker run -ti --device=/dev/ttyUSB4 --device=/dev/video0 --device=/dev/video4 ubuntu bash
This is also possible in docker-compose:
docker-compose.yml
...
services:
  myservice:
    ...
    devices:
      - "/dev/ttyUSB4:/dev/ttyUSB4"
      - "/dev/video0:/dev/video0"
      - "/dev/video4:/dev/video4"
There's another possibility: granting a Linux capability. However, it is not recommended for production (it is dangerous, much like privileged mode), for example the FOWNER capability:
docker run -ti --cap-add=FOWNER ubuntu bash
Nevertheless, in Kubernetes, for example, this is not enough and you need privileged mode.
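For reference, this is roughly what the relevant part of a Kubernetes pod spec looks like (a minimal sketch; the container name and image are placeholders):
spec:
  containers:
  - name: myservice
    image: ubuntu
    securityContext:
      privileged: true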
I use the following command to start my web server:
docker run --name webapp -p 8080:4000 mypyweb
When it has stopped and I want to restart it, I always use:
sudo docker start webapp && sudo docker exec -it webapp bash
But I can't see the server's startup output the way I did the first time:
Digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd
Status: Downloaded newer image for ericgoebelbecker/stackify-tutorial:1.00
* Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)
How can I see the server output instead of interacting with the shell?
When you use docker start, the default behavior is to start the container detached: it runs in the background, disconnected from your shell's stdin/stdout.
To run the container in the foreground and connected to stdin/out:
docker run --interactive --tty --publish=8080:4000 mypyweb
To docker start a container, similarly:
docker start --interactive --attach [CONTAINER]
NB: docker start takes --attach rather than --tty.
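Applied to the container from the question, that would be:
docker start --interactive --attach webapp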
You may list running containers (add --all to include stopped ones):
docker container ls
E.g. I ran Nginx:
CONTAINER ID IMAGE PORTS NAMES
7cc4b4e1cfd6 nginx 0.0.0.0:8888->80/tcp nostalgic_thompson
NB: You may use the NAME or any unambiguous prefix of the ID to reference the container.
Then:
docker stop nostalgic_thompson
docker start --interactive --attach 7cc4
You may check the container's logs (when it is running detached, or from another shell) using the container's ID or name:
docker logs nostalgic_thompson
docker logs 7cc4
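To keep streaming new output as it appears (closest to what you see in the foreground), add --follow:
docker logs --follow nostalgic_thompson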
HTH!
Using docker exec starts a new shell inside the container rather than attaching to the server process. docker run and docker start also behave differently by default, which is confusing. Try this:
$ sudo docker start -a webapp
The -a flag tells Docker to attach stdout/stderr and forward signals.
There are some other switches you can use with the start command (and a huge number for the run command). You can run docker [command] --help to get a summary of the options.
One other command that you might want to use is logs which will show the console output logs for a running container:
$ docker ps
[find the container ID]
$ docker logs [container ID]
If you think your container's misbehaving, it's often not wrong to just delete it and create a new one.
docker rm webapp
docker run --name webapp -p 8080:4000 mypyweb
Containers occasionally have more involved startup sequences and these can assume they're generally starting from a clean slate. It should also be extremely routine to delete and recreate a container; it's required for some basic tasks like upgrading the image underneath a container to a newer version or changing published ports or environment variables.
docker exec probably shouldn't be part of your core workflow, any more than you'd open a shell to interact with your Web browser. I generally don't tend to docker stop containers, except to immediately docker rm them.
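For example, the stop-and-remove step can be a single line before re-running the container:
docker stop webapp && docker rm webapp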
I'm using Docker for Windows (Education Edition with Hyper-V) and am fairly new to Docker. My workflow feels a little bit complicated and I think there are better ways. Here's what I do:
When I develop with Docker containers, I add a Dockerfile to my project first.
Then I build the image by running a command like docker build -t containername .
When Docker is done building, I run the container with a command like docker run -p 8080:8080 containername (sometimes I add a volume at this point).
This runs the container and leaves my PowerShell in a state where I can read debug messages and so on from the container.
Then I test and develop the application.
Once I'm done developing and testing, I need to press CTRL+C to exit the running container.
Now comes the tricky part: say I forgot something and want to test it right away. I would again run docker build -t containername . and then docker run, but Docker would now tell me that the port is already taken. So I continue like this:
I search for my container with this command: docker ps
Once I find the name (e.g. silly_walrusbeard), I type docker stop silly_walrusbeard. Now I can run the container again and the port is free.
How could I simplify this workflow? Is there an alternative to CTRL+C that also stops the container? Thanks for your suggestions!
List all current containers with docker ps -a. Kill them with docker kill <ID> and, if necessary, remove them with docker rm <ID>.
And when you run new containers, use --rm so the container is removed automatically when it stops, which frees its ports (among other things):
docker run --rm -it containername
(I usually need -it when running shells, but I'm not sure about PowerShell. Maybe you don't need it.)
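Applied to the workflow from the question (assuming the same 8080:8080 port mapping), that would look like:
docker run --rm -it -p 8080:8080 containername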
I have a test environment for code in a Docker image, which I use by running bash in the container:
me#host$ docker run -ti myimage bash
Inside the container, I launch a program normally by saying
root#docker# ./myprogram
I want the myprogram process to have a negative niceness (there are valid reasons for this). However:
root#docker# nice -n -7 ./myprogram
nice: cannot set niceness: Permission denied
Given that containers are run by the Docker daemon, which runs as root, and I am root inside the container, why doesn't this work, and how can I force a negative niceness?
Note: The docker image is running debian/sid and the host is ubuntu/12.04.
Try adding
--privileged=true
to your run command.
[edit] privileged=true is the old method. It looks like
--cap-add=SYS_NICE
should work as well.
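For example, a sketch based on the image and command from the question:
docker run -ti --cap-add=SYS_NICE myimage bash
# then, inside the container:
nice -n -7 ./myprogram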
You could also adjust the relative CPU weight of the whole container with -c (CPU shares).
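For instance (a sketch; 512 is half of the default weight of 1024):
docker run -ti -c 512 myimage bash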
Docker docs: http://docs.docker.com/reference/run/#runtime-constraints-on-cpu-and-memory
CGroups/cpu.shares docs: https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt