I would like to be able to upgrade a container without restarting all the other containers that are linked to it.
According to this
https://docs.docker.com/userguide/dockerlinks/#container-linking
If you restart the source container, the linked containers /etc/hosts
files will be automatically updated with the source container's new IP
address, allowing linked communication to continue.
Sounds great, but I don't want to just restart; I need to upgrade to a newer version. And it's not working.
Let's look at the example from the article above:
sudo docker run -d --name db training/postgres
sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
cat /etc/hosts
Restart the db container:
sudo docker restart db
and inside the running container, cat /etc/hosts will show the new IP address for db.
But what I want:
sudo docker stop db
sudo docker rm db
sudo docker run -d --name db training/postgres:new_version
And now, inside the running container, cat /etc/hosts will show the old IP address for db. The link is broken.
Is there any way to overcome this issue?
By the way, all my containers run on the same host, so ambassadors are not an option.
I have a host with Ubuntu 20.04, and I run Firefox in a container built from the ubuntu:20.04 image.
When Firefox is already started on the host: the container stops immediately, a new Firefox window appears, and I can see all my host browsing history, sessions, and so on.
When Firefox is NOT started on the host: the container keeps running, a new window titled "firefox [container hash]" appears, and I can see only the container's browsing history and sessions there (as expected). BUT when I then start Firefox on the host while the container is still running: a new window titled "firefox [same container hash]" appears, and I can see only the container's browsing history and sessions.
If I run Firefox as a different user, like
sudo -H -u some-user firefox
with umask 077 set, I get perfect isolation and parallel running without Docker, but that's not the full goal.
My dockerfile:
FROM ubuntu:20.04
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y firefox
CMD firefox
Terminal history:
xhost +local:docker
docker build -t firefox .
docker create -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name ff firefox
docker start ff
I suppose this behavior of launching a process from a container is not really obvious or expected. Could you please explain what exactly is happening, and why?
A Docker container is not an isolated machine. The commands that run inside a Docker container are executed on the host machine (or on the Docker VM if using Docker for Mac).
This can be verified in the following way:
Run a command inside the docker container: docker exec -it <container-name> sleep 100
On the host machine, grep for this command: ps -ef | grep sleep. On a Mac, docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh will provide a shell into the running Docker VM.
On my machine:
# ps -ef | grep sleep
2609 root 0:00 sleep 100
2616 root 0:00 grep sleep
When you run a daemon, it creates a socket file in a temp directory.
This file is the gateway for communication with the application.
For instance, when mysql is running on the system, it creates a socket file /var/run/mysqld/mysqld.sock, which the mysql client uses for communication.
These daemons can also bind to a port, and be accessed through the network this way. These ports are simply socket connections to your application which are visible over the network.
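For illustration, a minimal sketch of talking to such a daemon through its Unix socket instead of TCP; the socket path is the Debian/Ubuntu default and may differ on your system:
ls -l /var/run/mysqld/mysqld.sock    # the Unix-domain socket created by mysqld
mysql --socket=/var/run/mysqld/mysqld.sock -u root -p    # connect the client through the socket instead of the network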
Coming back to your question,
docker create -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --name ff firefox
/tmp/.X11-unix is where the X server keeps its Unix-domain sockets. Since this directory is mounted into the container, the socket space between the container and the host is shared.
When Firefox is already running on the host, the socket is occupied, and thus the container stops right away.
When Firefox is not running on the host and the container is started, the socket is free, and hence the container is able to keep running. This Firefox uses the filesystem inside the container to store history etc., so you do not see the history from the host.
If you now run Firefox from the host, it will simply connect to this Unix socket and launch a Firefox window.
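A quick way to see that it really is one shared socket while the container is running (the X0 socket name corresponds to display :0 and is an assumption; yours may differ):
ls /tmp/.X11-unix                    # on the host: typically shows X0
docker exec ff ls /tmp/.X11-unix     # inside the running ff container: the same X0, via the bind mount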
I have a problem with Docker. I created a new container using
sudo docker run --name myXampp -p 41061:22 -p 41062:80 -d -v ~/Projekty/Xampp:/www pindr0p/xampp
and I could access localhost:41062. But when I restarted my PC and ran the container again with sudo docker start myXampp, I could not access localhost:41062 anymore. Did I miss something? I even tried start with -p flags, but with no success. Please help me.
Thanks
Restart your container by container ID.
List all existing containers and check their status after the restart:
docker ps -a
Then restart the container by container ID:
docker restart <container_id>
(Port mappings passed to docker run are stored in the container's configuration and are reapplied automatically on restart; docker start does not accept -p flags.)
Try removing the container completely.
First, list the containers that are currently running:
docker ps
Then remove the container:
docker rm <your-docker> --force
Run
docker ps
again and make sure the container is gone.
Then try
docker run blob
again.
Yes, the container is created from the image again, and all the configuration changes you made in the old container are reverted.
Stop the container using:
sudo docker stop 29ddc6836adfa14d4ec3a025fddd2e5587212fef77ba0d6edb83642a3daedd3e
then remove it (docker run will otherwise fail with a name conflict on myXampp):
sudo docker rm myXampp
and then try:
sudo docker run --name myXampp -p 41061:22 -p 41062:80 -d -v ~/Projekty/Xampp:/www pindr0p/xampp
I need to write a Dockerfile that installs Docker in container-a, because container-a needs to execute a docker command against container-b, which runs alongside container-a.
My understanding is that you're not supposed to use sudo when writing a Dockerfile.
But I'm getting stuck: what user do I assign to the docker group? When you run docker exec -it, you are automatically root.
sudo usermod -a -G docker whatuser?
Also (and I'm trying this out manually inside container-a to see if it even works), you have to do a newgrp docker to activate the changes to groups. Any time I do that, I end up sudo'ed in when I haven't run sudo. Does that make sense? The symptom is: I go to exit the container, and I have to exit twice (as if I had changed users).
What am I doing wrong?
If you are trying to run the containers alongside one another (not a container inside a container), you should mount the Docker socket from the host system and execute commands to the other containers that way:
docker run --name containera \
-v /var/run/docker.sock:/var/run/docker.sock \
yourimage
With the docker socket mounted you can control Docker on the host system.
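A minimal sketch of what container-a's Dockerfile could look like under this approach; the base image and the docker.io package are assumptions, and only the docker client matters here, since the daemon keeps running on the host:
FROM ubuntu:20.04
# Install the docker client; it talks to the host daemon through the mounted socket
RUN apt-get update && apt-get install -y docker.io
# Hypothetical default command: list the host's containers, which includes container-b
CMD ["docker", "ps"]
Because commands in this container run as root by default, no docker group membership or sudo is needed inside the container.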
I am using NodeBB; to start the server you run ./nodebb start, and to stop it you run ./nodebb stop. Now that I have dockerized it (http://nodebb-francais.readthedocs.org/projects/nodebb/en/latest/installing/docker/nodebb-redis.html), I am not sure how I can interact with it.
I have followed the steps for "Using docker-machine mac os x":
docker run --name my-forum-redis -d -p 6379:6379 nodebb/docker:ubuntu-redis
Then
docker run --name my-forum-nodebb --link my-forum-redis:redis -p 80:80 -p 443:443 -p 4567:4567 -P -t -i nodebb/docker:ubuntu
Then
docker start my-forum-nodebb
I had an issue with the Redis address being in use, so I want to fix that and restart, but I am not sure how. Also, I would like to issue the command grunt in the project directory; again, not sure how.
My question is: how can I interact with an app inside a docker container as if I had direct access to the project folder itself? Am I missing something?
All code in this answer is untested, as I'm currently at a computer without docker.
See whether the containers are still running
docker ps
Stop misconfigured containers
docker stop my-forum-redis
docker stop my-forum-nodebb
Remove misconfigured containers and their volumes
(The docker images they are based on will be retained.)
docker rm --volumes --force my-forum-nodebb
docker rm --volumes --force my-forum-redis
Start again
Then, issue your 3 commands again, now with the correct ports.
Execute arbitrary commands inside container
Also I would like to issue the command grunt in the project directory, again not sure how?
You probably want to do the following after the docker run --name my-forum-nodebb ... command but before docker start my-forum-nodebb.
docker run accepts a command to execute instead of the container's default command. Note that docker run takes an image name, not a container name; in your case that is nodebb/docker:ubuntu. Let's first use this to find out where in the container we'd land:
docker run nodebb/docker:ubuntu pwd
If that is the directory where you want to run grunt, just go forward with it:
docker run nodebb/docker:ubuntu grunt
If not, you'll have to stuff several commands into a single one. You can do that by invoking a shell:
docker run nodebb/docker:ubuntu bash -c 'cd /path/to/project/dir; grunt'
where /path/to/project/dir is to be replaced by the directory where you want to run grunt.
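If the my-forum-nodebb container is already up, another option is docker exec, which runs the command inside the existing, running container rather than creating a new one from the image (the project path below is still a placeholder):
docker exec -it my-forum-nodebb bash -c 'cd /path/to/project/dir && grunt'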
I'm a fresh user of Docker. The first problem I'm faced with is logging into a container.
I found solutions to execute bash commands in a container with
docker exec -it ID bash
But this is a solution only for installing/removing packages. What should I use if I want to edit an nginx config inside a docker container?
One solution could be logging into the container via an SSH connection, but maybe Docker has something of its own for this? I mean easy access, without installing OpenSSH?
As you said,
docker exec -it container_id bash
and then use your favorite editor to edit any nginx config file. vi or nano is usually installed, but you may need to install emacs or vim if that is your favorite editor.
If you have just a few characters to modify,
docker exec container_id sed ...
might do the job. If you want to SSH into your container, you will need to install SSH and deal with SSH keys; I am not sure this is what you need.
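For example, a hedged one-liner in the spirit of the sed suggestion above; the config path is the usual nginx default, and the directive values are made up for illustration:
docker exec container_id sed -i 's/worker_processes  1;/worker_processes  4;/' /etc/nginx/nginx.conf
docker exec container_id nginx -s reload    # tell nginx to re-read its configuration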
You're going about it the wrong way. You should rarely need to log into a container to edit files.
Instead, mount the nginx.conf with -v from the host. That way you can edit the file with your normal editor. Once you've got the config working the way you want it, you can then build a new image with it baked in.
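A minimal sketch of that bind mount; the local file location, container name, and port mapping are assumptions:
docker run -d --name web -p 8080:80 \
    -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
    nginx
After editing the local nginx.conf, restart the container (docker restart web) so nginx re-reads the file.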
In general, you have to get into the mindset of containers being ephemeral. You don't patch them; you throw them away and replace them with a fixed version.
How: log into a running container
Yes, you can log into the running container.
The existing docker exec and docker attach commands are not always good enough. Looking to start a shell inside a Docker container? One solution is jpetazzo/nsenter, which provides two commands: nsenter and docker-enter.
If you are in a Linux environment, then run the commands below:
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
docker ps
# replace <container_name_or_ID> with real container name or ID.
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
Then you are in that running container and can run any Linux commands.
I prefer the other command, docker-enter. With it you can run Linux commands in the container directly, without logging in first. Also, I can't memorize the multiple options of the nsenter command, and docker-enter spares me from finding out the container's PID.
docker-enter 0e8c248982c5 ls /opt
If you are a Mac or Windows user running Docker with Toolbox:
docker-machine ssh default
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
PID=$(docker inspect --format {{.State.Pid}} 0e8c248982c5)
sudo nsenter --target $PID --mount --uts --ipc --net --pid
If you are a Mac or Windows user running Docker with boot2docker:
boot2docker ssh
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
PID=$(docker inspect --format {{.State.Pid}} 0e8c248982c5)
sudo nsenter --target $PID --mount --uts --ipc --net --pid
Note: the command docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter only needs to be run once.
How: edit nginx config
For your second question, you can think about using ONBUILD in Docker:
ONBUILD COPY nginx.conf /etc/nginx/nginx.conf
With this solution, you can:
edit nginx.conf locally, using any editor you like;
keep the downstream image build trivial, so you don't maintain a full build for every nginx configuration change;
every time you change the local nginx.conf file, stop and remove the container, rebuild the downstream image (the ONBUILD COPY trigger fires during that build and bakes the new nginx.conf in), and re-run the container.
You can find the details of how to use ONBUILD here: docker build
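A minimal sketch of how the ONBUILD trigger plays out, assuming a hypothetical base image named nginx-base and a build directory containing your nginx.conf:
# Dockerfile.base -- built once
FROM nginx
ONBUILD COPY nginx.conf /etc/nginx/nginx.conf
# Dockerfile -- the downstream one-liner, rebuilt after each config change
FROM nginx-base
Build and run:
docker build -t nginx-base -f Dockerfile.base .
docker build -t my-nginx .    # the ONBUILD COPY fires during this build
docker run -d --name web -p 8080:80 my-nginx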