Can one determine the hostname of the Docker host from a Container? - docker

This is kind of a duplicate of this one. But ironically, in spite of that question's title, all the answers say: use an ENV variable.
My use case: I don't know if docker is running via docker-compose or swarm. However, it will not be docker run! I am trying to kick off an upgrade script that resides on the host. Thus, from within a container I need the docker host name. Is there any programmatic way to get this WITHOUT environment variables?

Here is the feature request that has been implemented in 17.10 https://github.com/moby/moby/issues/30966
This is how you do it:
$ hostname testing123
$ docker service create \
--name foo \
--hostname "{{.Service.Name}}-{{.Task.Slot}}-{{.Node.Hostname}}" \
nginx:alpine
$ docker inspect foo.1.k51r3eclvcelsxy6jthtavkwa --format '{{.Config.Hostname}}'
foo-1-testing123
This should also work in docker-compose:
services:
  nginx:
    image: nginx
    hostname: '{{.Node.Hostname}}'

No. The goal of docker is to isolate the host environment as much as possible. The only way you could get the hostname is if you were to pass it into the container as a variable (or bind-mount a file containing the hostname, etc).
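As a rough sketch of those two approaches (the image, variable, and path names here are just placeholders):
# pass the host's hostname in explicitly as an environment variable
docker run -e HOST_HOSTNAME="$(hostname)" my-image
# or bind-mount the host's hostname file (read-only) into the container
docker run -v /etc/hostname:/etc/host_hostname:ro my-image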
I don't know if docker is running via docker-compose or swarm. However, it will not be docker run! I am trying to kick off an upgrade script that resides on the host.
Why do you care how the container was started? How do you plan to run a script on the host? The answers to these questions might help us provide a better answer to your original question.

If you're using Docker for Mac, you can use the following hostname:
docker.for.mac.localhost
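From inside a container you can then reach services listening on the host via that name; for example, assuming something on the host is listening on port 8000:
curl http://docker.for.mac.localhost:8000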

How to upgrade docker containers after their image changed

Let's say I have pulled the official mysql:5.6.21 image.
I have deployed this image by creating several docker containers.
These containers have been running for some time until MySQL 5.6.22 is released. The official image of mysql:5.6 gets updated with the new release, but my containers still run 5.6.21.
How do I propagate the changes in the image (i.e. upgrade MySQL distro) to all my existing containers? What is the proper Docker way of doing this?
After evaluating the answers and studying the topic I'd like to summarize.
The Docker way to upgrade containers seems to be the following:
Application containers should not store application data. This way you can replace the app container with a newer version at any time by executing something like this:
docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
-e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql
You can store data either on the host (in a directory mounted as a volume) or in special data-only container(s). Read more about it:
About volumes (Docker docs)
Tiny Docker Pieces, Loosely Joined (by Tom Offermann)
How to deal with persistent storage (e.g. databases) in Docker (Stack Overflow question)
Upgrading applications (e.g. with yum/apt-get upgrade) within containers is considered an anti-pattern. Application containers are supposed to be immutable, which should guarantee reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won't work).
I'd like to thank everybody who gave their answers, so we could see all different approaches.
I don't like mounting volumes as a link to a host directory, so I came up with a pattern for upgrading docker containers with entirely docker-managed containers. Creating a new docker container with --volumes-from <container> gives the new container (with the updated image) shared ownership of the docker-managed volumes.
docker pull mysql
docker create --volumes-from my_mysql_container [...] --name my_mysql_container_tmp mysql
By not immediately removing the original my_mysql_container, you keep the ability to revert back to the known working container if the upgraded container doesn't have the right data or fails a sanity test.
At this point, I'll usually run whatever backup scripts I have for the container to give myself a safety net in case something goes wrong
docker stop my_mysql_container
docker start my_mysql_container_tmp
Now you have the opportunity to make sure the data you expect to be in the new container is there and run a sanity check.
docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container
The docker volumes will stick around as long as any container is using them, so you can delete the original container safely. Once the original container is removed, the new container can take over the original's name to make everything as tidy as it was to begin with.
There are two major advantages to using this pattern for upgrading docker containers. First, it eliminates the need to mount volumes to host directories by allowing volumes to be transferred directly to an upgraded container. Second, you are never in a position where there isn't a working docker container; if the upgrade fails, you can easily revert to how things were before by spinning up the original docker container again.
Just to provide a more general (not mysql-specific) answer...
In short
Synchronize with service image registry (https://docs.docker.com/compose/compose-file/#image):
docker-compose pull
Recreate container if docker-compose file or image have changed:
docker-compose up -d
Background
Container image management is one of the reasons for using docker-compose
(see https://docs.docker.com/compose/reference/up/)
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
The data management aspect is also covered by docker-compose through externally mounted "volumes" (see https://docs.docker.com/compose/compose-file/#volumes) or data containers.
This leaves potential backward compatibility and data migration issues untouched, but these are "applicative" issues, not Docker specific, which have to be checked against release notes and tests...
I would like to add that if you want to do this process automatically (download, stop, and restart a new container with the same settings as described by @Yaroslav), you can use Watchtower, a program that auto-updates your containers when their images change: https://github.com/v2tec/watchtower
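A minimal sketch of running it, based on the project's README at that link (the image name may have changed since):
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower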
Assume the following for this answer:
The database name is app_schema
The container name is app_db
The root password is root123
How to update MySQL when storing application data inside the container
This is considered a bad practice, because if you lose the container, you will lose the data. Although it is a bad practice, here is a possible way to do it:
1) Do a database dump as SQL:
docker exec app_db sh -c 'exec mysqldump app_schema -uroot -proot123' > database_dump.sql
2) Update the image:
docker pull mysql:5.6
3) Update the container:
docker rm -f app_db
docker run --name app_db --restart unless-stopped \
-e MYSQL_ROOT_PASSWORD=root123 \
-d mysql:5.6
4) Restore the database dump:
docker exec app_db sh -c 'exec mysql -uroot -proot123' < database_dump.sql
How to update MySQL container using an external volume
Using an external volume is a better way of managing data, and it makes it easier to update MySQL. Losing the container will not lose any data. You can use docker-compose to facilitate managing multi-container Docker applications on a single host:
1) Create the docker-compose.yml file in order to manage your applications:
version: '2'
services:
  app_db:
    image: mysql:5.6
    restart: unless-stopped
    volumes_from:
      - app_db_data
  app_db_data:
    # data-only container holding the MySQL data directory
    image: mysql:5.6
    command: 'true'
    volumes:
      - /my/data/dir:/var/lib/mysql
2) Update MySQL (from the same folder as the docker-compose.yml file):
docker-compose pull
docker-compose up -d
Note: the last command above will update the MySQL image, recreate and start the container with the new image.
Similar answer to above
docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull
Here's what it looks like using docker-compose when building a custom Dockerfile.
Build your image from your custom Dockerfile first, appending a new version number to differentiate it. Ex: docker build -t imagename:version . This will store your new version locally.
Run docker-compose down
Edit your docker-compose.yml file to reflect the new image name you set at step 1.
Run docker-compose up -d. It will look locally for the image and use your upgraded one.
-EDIT-
My steps above are more verbose than they need to be. I've optimized my workflow by adding the build: . parameter to my docker-compose file. The steps look like this now:
Verify that my Dockerfile is what I want it to look like.
Set the version number of my image name in my docker-compose file.
If my image isn't built yet: run docker-compose build
Run docker-compose up -d
I didn't realize at the time, but docker-compose is smart enough to simply update my container to the new image with the one command, instead of having to bring it down first.
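For reference, a minimal sketch of what that compose entry might look like (the service name and tag are placeholders of my own):
version: '3'
services:
  web:
    build: .
    image: myapp:1.2.0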
If you do not want to use Docker Compose, I can recommend Portainer. It has a recreate function that lets you recreate a container while pulling the latest image.
You need to either rebuild all the images and restart all the containers, or somehow yum update the software and restart the database. There is no upgrade path other than one you design yourself.
Taking from http://blog.stefanxo.com/2014/08/update-all-docker-images-at-once/
You can update all your existing images using the following command pipeline:
docker images | awk '/^REPOSITORY|\<none\>/ {next} {print $1}' | xargs -n 1 docker pull
Make sure you are using volumes for all the persistent data (configuration, logs, application data) that relates to the state of the processes inside the container. Update your Dockerfile, rebuild the image with the changes you want, and restart the containers with your volumes mounted in their appropriate place.
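A rough sketch of that cycle, with image, container, and volume names chosen purely for illustration:
docker build -t myapp:2 .
docker stop myapp
docker rm myapp
docker run -d --name myapp -v myapp-data:/var/lib/myapp myapp:2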
Tried a bunch of things from here, but this worked out for me eventually.
If you have AutoRemove: On on the containers (so you can't stop and edit them), or a service is running that can't be stopped even momentarily,
you must:
Pull the latest image --> docker pull [image:latest]
Verify that the correct image was pulled; you can see the UNUSED tag in the Portainer Images section.
Update the service using Portainer or the CLI, making sure you use the latest version of the image; Portainer will give you the option to do so.
This not only updates the container with the latest image, but also keeps the service running.
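The CLI route, roughly, would be something like this (the service and image names are placeholders):
docker pull myimage:latest
docker service update --image myimage:latest my_service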
This is something I've also been struggling with for my own images. I have a server environment from which I create a Docker image. When I update the server, I'd like all users who are running containers based on my Docker image to be able to upgrade to the latest server.
Ideally, I'd prefer to generate a new version of the Docker image and have all containers based on a previous version of that image automagically update to the new image "in place." But this mechanism doesn't seem to exist.
So the next best design I've been able to come up with so far is to provide a way to have the container update itself--similar to how a desktop application checks for updates and then upgrades itself. In my case, this will probably mean crafting a script that involves Git pulls from a well-known tag.
The image/container doesn't actually change, but the "internals" of that container change. You could imagine doing the same with apt-get, yum, or whatever is appropriate for your environment. Along with this, I'd update the myserver:latest image in the registry so any new containers would be based on the latest image.
I'd be interested in hearing whether there is any prior art that addresses this scenario.
Update
This is mainly for querying containers, not for updating them, since building new images is the way updates should be done.
I had the same issue so I created docker-run, a very simple command-line tool that runs inside a docker container to update packages in other running containers.
It uses docker-py to communicate with running docker containers and update packages, or to run any arbitrary single command.
Examples:
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec
By default this will run the date command in all running containers and return the results, but you can issue any command, e.g. docker-run exec "uname -a"
To update packages (currently only using apt-get):
docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update
You can create an alias and use it as a regular command-line tool,
e.g.
alias docker-run='docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run'

docker-compose build and up

I am not an advanced user, so please bear with me.
I am building a docker image using docker-compose -f mydocker-compose-file.yml ... on my machine.
The image is then pushed to a remote docker registry.
Then from a remote server I pull down this image.
To run this image, I have to copy mydocker-compose-file.yml from my machine to the remote server and then run docker-compose -f mydocker-compose-file.yml up -d.
I find this very inefficient: why do I need the same YAML file just to run the docker image (do I?).
Is there a way to just spin up the container without this file from remote machine?
As of compose 1.24 along with the 18.09 release of docker (you'll need at least that client version on the remote host), you can run docker commands to a remote host over SSH.
# all docker commands in this shell will now talk to the remote host
export DOCKER_HOST=ssh://user@host
# you can verify that with docker info to see which engine you're talking to
docker info
# and now run your docker-compose up command locally to start/stop containers
docker-compose up -d
With previous versions, you could configure TLS certificates to allow specific clients to connect to the docker API over a network connection. See these docs for more details.
Note, if you have host volumes, the variables and paths will be expanded to your laptop directories, but the host mounts will happen on the remote server where those directories may not exist. This is a good situation to switch to named volumes.
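For example, a named volume declared in the compose file is created on whichever engine actually runs the containers (the names here are placeholders):
version: '3'
services:
  app:
    image: myimage:1.0
    volumes:
      - appdata:/var/lib/app
volumes:
  appdata: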
Everything you can do with Docker Compose, you can do with plain docker commands.
Depending on how exactly you're interacting with the remote server, your tooling might have native ways to do this. One specific example I'm familiar with is the Ansible docker_container module. If you're already using a tool like Ansible, Chef, or Salt, you can probably use a tool like this to do the same thing your docker-compose.yml file does.
But otherwise there's more or less a direct translation between a docker-compose.yml file
version: '3'
services:
  foo:
    image: me/foo:20190510.01
    ports: ['8080:8080']
and a command line
docker run -d --name foo -p 8080:8080 me/foo:20190510.01
My experience has been that the docker run commands quickly become unwieldy and you want to record them in a file; and once they're in a file, you start to wish they were in a more structured format, even if you need an auxiliary tool to run them; which brings you back to copying around the docker-compose.yml file. I think that's pretty routine. (Something needs to tell the server what to run.)

docker-compose vs creating and running an image

I'm new to docker and trying to understand what's best for my project (a webapp).
So far, I understand that I can either:
use docker-compose up -d to start a container defined by a set of rules in a docker-compose.yaml
build an image from a dockerfile and then create a container from this image
If I understand correctly, docker-compose up -d allows me (via volumes) to mount files (e.g. my application) into the container. If I build an image, however, I am able to embed my application natively in it (with a Dockerfile and a COPY instruction).
Is my understanding correct? How should I choose between these two options?
Docker Compose is simply a convenience wrapper around the docker command.
Everything you can do in docker compose, you can do by running docker directly.
For example, these docker commands:
$ docker build -t temp .
$ docker run -i -p 3000:80 -v $PWD/public:/docroot/ temp
are similar to having this docker compose file:
version: '3'
services:
  web:
    build: .
    image: temp
    ports: ["3000:80"]
    volumes:
      - ./public:/docroot
and running:
$ docker-compose up web
Although docker compose's advantages are most obvious when using multiple containers, it can also be used to start a single container.
My advice to you is: Start without docker compose, to understand how to build a simple image, and how to run it using the docker command line. When you feel comfortable with it, take a look at docker compose.
As for the best practice in regards to copying files to the container, or mounting them - the answer is both, and here is why:
When you are in development mode, you do not want to build the image on every code change. This is where the volume mount comes into play. However, your final docker image should contain your code so it can be deployed anywhere else. After all, this is why we use containers right? This is where the COPY comes into play.
Finally, remember that when you mount a volume to the container, it will "shadow" the contents of that folder in the container - this is what makes using both the mount and COPY work the way you expect.
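As an illustration of that combination, a Dockerfile for the example above might bake the code in with COPY, while the compose file adds the ./public:/docroot mount only during development (the base image is just an assumption):
# Dockerfile (sketch)
FROM nginx:alpine
COPY public/ /docroot/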
Docker-compose is just a container orchestrator.
It just provides you with a simple way to create multiple related containers. The relationship between containers can be volumes, networks, start order, environment variables, etc.
In the background, docker-compose uses plain docker. So anything you can do using docker-compose (mounting volumes, using custom networks, scaling) can be done with plain docker commands (it's just harder).
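Scaling is one example: with docker-compose it is a single flag, whereas with plain docker you would start and name the extra containers yourself (the service name is a placeholder):
docker-compose up -d --scale web=3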

How can I reuse a Docker container as a service?

I already have a running container for both postgres and redis in use for various things. However, I started those from the command line months ago. Now I'm trying to install a new application and the recipe for this involves writing out a docker compose file which includes both postgres and redis as services.
Can the compose file be modified in such a way as to specify the already-running containers? Postgres already does a fine job of siloing any of the data, and I can't imagine that it would be a problem to reuse the running redis.
Should I even reuse them? It occurs to me that I could run multiple containers for both, and I'm not sure there would be any disadvantage to that (other than a cluttered docker ps output).
When I set container_name to the names of the existing containers, I get what I assume is a rather typical error of:
cb7cb3e78dc50b527f71b71b7842e1a1c". You have to remove (or rename) that container to be able to reuse that name.
Followed by a few that complain that the ports are already in use (5432, 6579, etc.).
Other answers here on Stack Overflow suggest that if I had originally invoked these services from another compose file with the exact same details, I could do so here as well and it would reuse them. But the command I used to start them was somehow never written to my bash_history, so I'm not even sure of the details (other than name, ports, and restart always).
Are you looking for docker-compose's external_links keyword?
external_links allows you to reuse already-running containers.
According to docker-compose specification:
This keyword links to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services. external_links follow semantics similar to the legacy option links when specifying both the container name and the link alias (CONTAINER:ALIAS).
And here's the syntax:
external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql
You can give your container a name. If there is no container with the given name, it is the first time running the image. If the named container is found, restart it.
In this way, you can reuse the container. Here is my sample script:
containerName="IamContainer"
if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerName}\$"; then
  docker restart ${containerName}
else
  docker run --name ${containerName} -d hello-world
fi
You probably don't want to keep using a container that you don't know how to create. However, the good news is that you should be able to figure out how you can create your container again by inspecting it with the command
$ docker container inspect ID
This will display all settings; the docker-compose-specific ones will be under Config.Labels. For container reuse across projects, you'd be interested in the values of com.docker.compose.project and com.docker.compose.service, so that you can pass them to docker-compose --project-name and use them as the service's name in your docker-compose.yaml.
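For example, a quick way to read those two labels (the container name is a placeholder):
docker container inspect --format '{{ index .Config.Labels "com.docker.compose.project" }}' my_container
docker container inspect --format '{{ index .Config.Labels "com.docker.compose.service" }}' my_container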

Dockerfile for multiple Docker containers

I am working with Docker and I have a web-app that requires the following:
Tomcat
PostgreSQL
MongoDB
To install items 2 and 3 I do the following:
I can run a command for PostgreSQL like:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
For Mongodb I run:
docker run --name some-mongo -d mongo
For Tomcat, I have a Dockerfile with Tomcat and copying my war to the apps folder. I build the image using Docker and run it.
My question is whether there is a better way to coordinate these steps, perhaps via a separate script? Is Docker Compose the solution for this?
thanks
A Dockerfile is a recipe for building an image, which is a template for starting containers. To describe a system that is made of multiple containers using standardized images, you would use docker-compose, not a new Dockerfile. You would use a Dockerfile to customize a pre-existing docker image, like mysql or node or ubuntu, for some specific use.
docker-compose allows you to express multiple docker commands as a .yml file in a specific format.
You can then use docker-compose up to start the set of containers.
The docker-compose .yml file for your example might start looking somewhat like this
some-postgres:
  environment:
    POSTGRES_PASSWORD: mysecretpassword
  image: postgres
some-mongo:
  image: mongo
You would add links between the containers with a links: line. These and other details are in the docs.
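For instance, the Tomcat service built from your own Dockerfile could be added alongside them like this (the service name, port, and link aliases are illustrative):
my-webapp:
  build: .
  ports:
    - "8080:8080"
  links:
    - some-postgres:postgres
    - some-mongo:mongo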
Basically docker-compose is just a YAML-file implementation of docker run.
The same arguments you would pass to docker run are stipulated in docker-compose in a YAML format instead of on the command line.
Docker compose supports multiple containers too.
Docker compose has a few other nice features, such as docker-compose logs; this command gives the logs of all containers started by compose.
