What's the best way to reconstruct docker run command parameters from an existing Docker container? I could use docker inspect and use the info found there. Is there a better way?
Not super easy, but you can do it by formatting the output from docker inspect. For a container started with this command:
> docker run -d -v ~:/home -p 8080:80 -e NEW_VAR=x --name web3 nginx:alpine sleep 10m
You can pull out the volumes, port mapping, environment variables, container name, image name and command with:
> docker inspect -f "V: {{.Mounts}} P: {{.HostConfig.PortBindings}} E:{{.Config.Env}} NAME: {{.Name }} IMAGE: {{.Config.Image}} COMMAND: {{.Path}} {{.Args}}" web3
That gives you the output:
V: [{ /home/scrapbook /home true rprivate}] P: map[80/tcp:[{ 8080}]] E:[NEW_VAR=x PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NGINX_VERSION=1.11.5] NAME: /web3 IMAGE: nginx:alpine COMMAND: sleep [10m]
Which is a start.
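If you want something closer to a paste-able command, you can pull the fields out individually and assemble them in the shell. A rough sketch using the web3 container from above; it covers only the name, environment and image, and doesn't quote env values containing spaces:

name=$(docker inspect -f '{{.Name}}' web3)      # comes back with a leading slash
image=$(docker inspect -f '{{.Config.Image}}' web3)
envs=$(docker inspect -f '{{range .Config.Env}}-e {{.}} {{end}}' web3)
echo "docker run --name ${name#/} $envs $image"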
Docker Captain Adrian Mouat has an excellent blog post on formatting the output: Docker Inspect Template Magic.
See also this answer which links to a tool which programmatically derives the docker run command from a container.
When I run Docker from the command line, I do the following:
docker run -it -d --rm --hostname rabbit1 --name rabbit1 -p 127.0.0.1:8000:5672 -p 127.0.0.1:8001:15672 rabbitmq:3-management
I publish the ports with -p in order to see the connection on the host.
How can I do this automatically with a Dockerfile?
The Dockerfile provides the instructions used to build the docker image.
The docker run command provides instructions used to run a container from a docker image.
How can I do this automatically with a Dockerfile
You don't.
Port publishing is something you configure only when starting a container.
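The closest a Dockerfile gets is the EXPOSE instruction, which only documents which ports the image listens on; it does not publish them:

EXPOSE 5672 15672

You still need -p at run time (or a Compose ports: entry) to make those ports reachable from the host.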
You can't publish ports in a Dockerfile, but you can use Docker Compose to achieve that.
Docker Compose is a tool for running multi-container applications on Docker.
An example docker-compose.yml with ports:

version: "3.8"
services:
  rabbit1:
    image: rabbitmq:3-management
    container_name: rabbit1
    ports:
      - "127.0.0.1:8000:5672"
      - "127.0.0.1:8001:15672"
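With that file saved as docker-compose.yml, start the container with:

docker-compose up -d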
I run my container with five Docker commands, as follows:
docker run --privileged -d -v /root/docker/data:/var/lib/mysql -p 8888:80 testimg:2 init
docker ps ---> to get container ID
docker exec -it container_id bash
docker exec container_id systemctl start mariadb
docker exec container_id systemctl start httpd
I was trying to do these steps with docker-compose but failed.
Can somebody write a docker-compose.yml or Dockerfile that gets the same result for me?
You're not going to be able to do this with just a docker-compose.yml, because a compose file doesn't have any mechanism similar to docker exec. Additionally, running systemd (or really any process manager) inside a container is an anti-pattern: it can complicate the management and scaling of your containers, and in most cases doesn't provide you with any benefits.
Why don't you just have two images:
One that starts mariadb
One that starts Apache httpd
That might look something like:
version: "3"
services:
web:
image: httpd
ports:
- "8888:80"
db:
image: mariadb
volumes:
- "/root/docker/data:/var/lib/mysql"
You would probably need a custom image for the web server containing whatever application you're running, but you can definitely use the official mariadb image for your database.
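For instance, a minimal Dockerfile for that custom web image might look like this (the ./public-html source directory is an assumption standing in for your application):

FROM httpd:2.4
# copy the site content into the image; adjust the source path to your app
COPY ./public-html/ /usr/local/apache2/htdocs/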
I am trying to start an ASP.NET Core container hosting a website.
It does not expose the ports when I use the following command line:
docker run my-image-name -d -p --expose 80
or
docker run my-image-name -d -p 80
Upon startup, the log will show :
Now listening on: http://[::]:80
So I assume the application is not bound to a specific address.
But it does work when using the following docker compose file:
version: '0.1'
services:
  website:
    container_name: "aspnetcore-website"
    image: aspnetcoredocker
    ports:
      - '80:80'
    expose:
      - '80'
You need to make sure to pass all options (-d -p 80) to the docker command before naming the image as described in the docker run docs. The notation is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
So please try the following:
docker run -d -p 80 my-image-name
Otherwise the parameters are used as the command/args inside the container. In other words, your example runs the entrypoint of the docker image with the additional params -d -p 80 instead of passing them to the docker command itself. The docker daemon never receives -d or -p 80 and thus never maps the port to the host. You can also notice this because, without -d, the command runs in the foreground and you see the logs in your terminal.
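Note also that -p 80 by itself publishes container port 80 to a random high port on the host. If you want a fixed host port, map it explicitly (host port 8080 below is just an example):

docker run -d -p 8080:80 my-image-name
docker port <container>   # shows which host port 80/tcp was mapped to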
I have an image with MySQL installed. I need to map the /var/lib/mysql directory to my host system. Inside the container, I can see that directory populated when I use the following command:
docker run --rm -it --env-file=envProxy --network mynetwork --name my_db_dev -p 3306:3306 my_db /bin/bash
Now when I try to mount a directory from my host (Windows 10) by running another container from the same image, the mysql directory is blank:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:/docker/data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
I also tried this, but neither works:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:\docker\data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
One thing I see is that the mysql directory in that path is now owned by root, instead of mysql as in the previous case.
I want all the content from the existing container (the mysql directory) to be copied back to the host mount directory.
Is that possible, and how can it be achieved?
I had the same problem on Docker Desktop (2.0.0.3 (31259)). I got the solution from this issue.
I ensured the containers were stopped, opened Docker settings, selected "Shared Drives", removed the tick on "C" and added it again. Docker asked for the Windows account credentials and I entered the new ones. After that, once the containers were started, the mounted volumes were OK. Problem solved.
You may be able to fix the problem more simply by just resetting the credentials in Docker Settings.
If you need to get files from the container onto the host, it is better to use the docker cp command: https://docs.docker.com/engine/reference/commandline/cp/
It will look like:
docker cp my_db_dev1:/var/lib/mysql d:\docker\data
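docker cp also works in the other direction; swapping the arguments copies from host to container (the trailing \. should copy the directory's contents rather than the directory itself):

docker cp d:\docker\data\. my_db_dev1:/var/lib/mysql

For database files, though, the volume approach below is usually the better fit.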
UPD
Actually I want to persist the database files across other containers,
so I wanted use volumes
In this case you have to:
Start using docker-compose to orchestrate the containers.
In docker-compose.yml, create a volume which is shared between all containers. Something like:
docker-compose.yml
version: '3'
services:
  db1:
    image: whatever
    volumes:
      - myvol:/data
  db2:
    image: whatever2
    volumes:
      - myvol:/data
volumes:
  myvol:
Description: https://docs.docker.com/compose/compose-file/#volume-configuration-reference
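The same named volume also works without Compose; a minimal sketch reusing the names from this question:

docker volume create myvol
docker run --rm -it -v myvol:/var/lib/mysql --name my_db_dev1 my_db /bin/bash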
On Windows, write paths with backslashes ('\'), and it is recommended to use variables to specify the path. On Linux, use forward slashes ('/'). For example:
docker run -it -v %userprofile%\work\myproj\some-data:/var/data <image>
First create a folder structure like the one below:
C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG
C:\Users\rajit\MYSQL_DATA\DATA_DIR
then adjust the commands like below:
docker pull mysql:8.0
docker run --name mysql-docker -v C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG:/etc/mysql/conf.d --env="MYSQL_ROOT_PASSWORD=root" --env="MYSQL_PASSWORD=root" --env="MYSQL_DATABASE=test_db" -v C:\Users\rajit\MYSQL_DATA\DATA_DIR:/var/lib/mysql -d -p 3306:3306 mysql:8.0 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
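You can then check that the server came up and that the data directory was populated on the host (container name and paths as above):

docker logs mysql-docker
dir C:\Users\rajit\MYSQL_DATA\DATA_DIR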
Try turning off your antivirus program or firewall, then click on "Reset credentials" under Settings / Shared Drives.
That worked for me.
I have created a Zeppelin docker image on my local system, configured the Spark interpreter through Maven repositories, and ran Zeppelin; it worked. But when I stop the container and run it again, the interpreter binding is gone. How do I solve this issue? I want to set the interpreter binding once, so that whenever I stop and rerun the container, it keeps those interpreter bindings as they are.
You need 3 volumes for persisting configurations, notebooks and logs.
Note: If you added custom interpreters, you need an additional volume for your interpreter binaries.
docker volume create zeppelin-conf
docker volume create zeppelin-notebook
docker volume create zeppelin-logs
docker volume create zeppelin-interpreter
Run the container with the above volumes mounted:
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf -v zeppelin-notebook:/zeppelin/notebook -v zeppelin-logs:/zeppelin/logs -v zeppelin-interpreter:/zeppelin/interpreter apache/zeppelin:0.8.1
If you just want to persist configurations, you can use the following lines:
docker volume create zeppelin-conf
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf apache/zeppelin:0.8.1
Configurations: /zeppelin/conf
Notebooks: /zeppelin/notebook
Logs: /zeppelin/logs
Interpreters: /zeppelin/interpreter
Edit: The /zeppelin directory is the default home directory of the docker image. See the Dockerfile. Therefore, you don't need to specify the ZEPPELIN_NOTEBOOK_DIR, ZEPPELIN_LOG_DIR or ZEPPELIN_INTERPRETER_DIR environment variables.
Mounting a file into docker run is easy: just pass it to the --volume parameter. But in Zeppelin's case some parameters come pre-configured in that file, so replacing it with an empty file is most likely not what you want to achieve. So I recommend first getting that file with its default content from a container, and then mounting it on the next run. Please follow the step-by-step instructions:
First we prepare the default config for the next runs.
Run a default container temporarily:
sudo docker run -d --name zeppelin-test apache/zeppelin:0.8.1
And get the default config from it:
mkdir -p conf
sudo docker exec zeppelin-test cat /zeppelin/conf/interpreter.json > conf/interpreter.json
Note 1: this container will not be used for real work, so most parameters are unimportant; this needs to be done once for setup only!
Note 2: because that config is populated on start, unfortunately you can't obtain it in a single run like: sudo docker run --rm apache/zeppelin:0.8.1 cat /zeppelin/conf/interpreter.json
Now we can use it as a bind mount.
If you use the direct docker run method without docker-compose, add this option, among others: --volume $(pwd)/conf/interpreter.json:/zeppelin/conf/interpreter.json
But I recommend using docker-compose, where the option is placed under the volumes: key like - ./conf/interpreter.json:/zeppelin/conf/interpreter.json. Full example:
version: '3.7'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    ports:
      - "7077:7077"
      - "8080:8080"
    volumes:
      - ./logs:/logs
      - ./notebook:/notebook
      - ./conf/interpreter.json:/zeppelin/conf/interpreter.json
    environment:
      ZEPPELIN_NOTEBOOK_DIR: /notebook
      ZEPPELIN_LOG_DIR: /logs
And then just run from that directory:
docker-compose up -d
Interpreter bindings are stored in conf/interpreter.json, so you need to use an external interpreter.json file.
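A minimal sketch of that bind mount with plain docker run, assuming you have already seeded conf/interpreter.json from a container as described above:

docker run -d -p 8080:8080 -v $(pwd)/conf/interpreter.json:/zeppelin/conf/interpreter.json apache/zeppelin:0.8.1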