I don't know how to run the docker-compose equivalent of my command:
docker run -d --name=server --restart=always --net network --ip 172.18.0.5 -p 5003:80 -v $APP_PHOTO_DIR:/app/mysql-data -v $APP_CONFIG_DIR:/app/config webserver
I've done this:
version: '3'
services:
server:
image: app-dependencies
ports:
- "5003:80"
volumes:
- ./app:/app
command: python /app/app.py
restart: always
networks:
app_net:
ipv4_address: 172.18.0.5
Are you sure you need a static IP address for the container? It is not recommended practice; why do you want to set it explicitly?
docker-compose.yml
version: '3'
services:
  server:                               # correct, this will be the service name (and its hostname on the network)
    image: webserver                    # this should be the image name from your command line
    ports:
      - "5003:80"                       # correct, but only needed if you access the service from outside Docker
    volumes:                            # the volumes just repeat your command line; you can use env vars
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config
    command: ["python", "/app/app.py"]  # JSON notation strongly recommended
    restart: always
Then docker-compose up -d and that's it. You can access your service from the host at localhost:5003; there is no need for the internal IP.
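A quick way to verify this from the host, assuming the service came up cleanly (the port mapping is the one from the file above):

docker-compose up -d
curl http://localhost:5003/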
For networks, I always include the network specification in the docker-compose file. If the network already exists, Docker will not create a new one.
version: '3'
services:
server:
image: app-dependencies
ports:
- "5003:80"
volumes:
- ./app:/app
command: python /app/app.py
restart: always
networks:
app_net:
ipv4_address: 172.18.0.5
networks:
app_net:
name: NETWORK_NAME
driver: bridge
ipam:
config:
- subnet: NETWORK_SUBNET
volumes:
  VOLUME_NAME:
    driver: local
And you will need to add the volumes separately to match the docker run command.
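Putting this together with the original docker run flags, a sketch of the full equivalent file might look like the following. Assumptions: compose file format 3.5+ (needed for the top-level name: key) and a 172.18.0.0/16 subnet for the pre-existing network called network, since the static IP 172.18.0.5 must fall inside it:

version: '3.5'
services:
  server:
    image: webserver
    restart: always
    ports:
      - "5003:80"
    volumes:
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config
    networks:
      app_net:
        ipv4_address: 172.18.0.5
networks:
  app_net:
    name: network                 # the network from the docker run command
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16   # assumed; adjust to the real subnet

If the network was created outside Compose and should stay unmanaged, declaring it external (as in the next question) is the cleaner option.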
I would like to translate the following docker command to a docker-compose file:
docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock --net shinyproxy-net -p 8080:8080 imshinyproxy
This is the docker-compose.yml that I wrote:
version: "3.7"
services:
shinyproxy:
image: imshinyproxy
container_name: imshinyproxy
environment:
- PUID=1000
- PGID=65537
- TZ=america/new_york
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- 8080:8080
networks:
- shinyproxy-net
restart: unless-stopped
Alas, when I try to run docker-compose up I get the following error:
$ docker-compose up
ERROR: Service "shinyproxy" uses an undefined network "shinyproxy-net"
I know the network exists:
$ sudo docker network create shinyproxy-net
Error response from daemon: network with name shinyproxy-net already exists
What am I doing wrong?
You must declare the external network in the networks section of your docker-compose.yml:
version: "3.7"
services:
shinyproxy:
[...]
networks:
- shinyproxy-net
networks:
shinyproxy-net:
external:
name: shinyproxy-net
networks.shinyproxy-net.external.name should correspond to the name of your previously created network.
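Note that the nested external: / name: form shown above is the older spelling; with compose file format 3.5 or later the same thing can be written as a flat declaration:

networks:
  shinyproxy-net:
    external: true
    name: shinyproxy-net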
I'm trying to make a service that forwards a remote database port into its container and, at the same time, is reachable by an alias hostname from the other containers that work with it.
I think that making all containers communicate over the host network is bad practice, so I am trying to set up that configuration.
When I try to add a network with driver: host to the php-fpm service, Docker says
only one instance of "host" network is allowed
When I try to configure the php-fpm service with
networks:
  - host
Docker says that it cannot find a network with that name.
When I try to define the network in docker-compose by the ID of the built-in host network, it just cannot start the container.
This is my docker-compose:
version: '3.2'
networks:
backend-network:
driver: bridge
frontend-network:
driver: bridge
volumes:
redis-data:
home-dir:
services:
&app-service app: &app-service-template
build:
context: ./docker/app
dockerfile: Dockerfile
volumes:
- ./src:/app:rw
- home-dir:/home/user
hostname: *app-service
environment:
FPM_PORT: &php-fpm-port 9001
FPM_USER: "${USER_ID:-1000}"
FPM_GROUP: "${GROUP_ID:-1000}"
APP_ENV: local
HOME: /home/user
command: keep-alive.sh
networks:
- backend-network
&php-fpm-service php-fpm:
<<: *app-service-template
user: 'root:root'
restart: always
hostname: *php-fpm-service
ports: [*php-fpm-port]
environment:
FPM_PORT: *php-fpm-port
FPM_USER: "${USER_ID:-1000}"
FPM_GROUP: "${GROUP_ID:-1000}"
APP_ENV: local
HOME: /home/user
entrypoint: /fpm-entrypoint.sh
command: php-fpm --nodaemonize -R -d "opcache.enable=0" -d "display_startup_errors=On" -d "display_errors=On" -d "error_reporting=E_ALL"
networks:
- backend-network
- frontend-network
nginx:
build:
context: ./docker/nginx
dockerfile: Dockerfile
restart: always
working_dir: /usr/share/nginx/html
environment:
FPM_HOST: *php-fpm-service
FPM_PORT: *php-fpm-port
ROOT_DIR: '/app/public' # App path must equals with php-fpm container path
volumes:
- ./src:/app:ro
ports: ['9999:80']
depends_on:
- *php-fpm-service
networks:
- frontend-network
Network scheme (question about green line):
The host runs Debian 7 (updates are prohibited) and the containers run the latest Alpine.
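For reference, attaching a single service to the host's network stack in Compose is done with network_mode rather than a named network, which is why both attempts above fail; a minimal sketch (php:fpm here is a placeholder image, and note that network_mode: host cannot be combined with networks:, and published ports: have no effect for such a service):

version: '3.2'
services:
  php-fpm:
    image: php:fpm          # placeholder; the real service builds from ./docker/app
    network_mode: "host"    # share the host's network stack directly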
I have a compose file as follows:
redis:
image: redis
ports:
- "6379:6379"
php:
build: .
image: php:fpm
volumes:
- ./code:/var/www/html
links:
- redis:redis
networks:
- code-network
I'm entering the php container with the following command:
docker exec -it php_id /bin/bash
but I can't run the "redis-cli" command in this container. What do I need to do to run it?
I added the "links" parameter to the compose file but it didn't help.
You are putting the php-fpm container in a network of its own. Here is a fixed compose file:
version: "3"
services:
redis:
image: redis
ports:
- "6379:6379"
php:
build: .
image: php:fpm
volumes:
- ./code:/var/www/html
networks:
- code-network
- default
networks:
code-network:
See this for more info on compose networking.
About the redis-cli issue: you'd need to add the appropriate repository in the php-fpm container and then install it. As you are using the php:fpm image, you probably want to use Redis from some PHP application, so you don't need Debian's redis-cli package, but rather the PHP extension.
See this post for more info.
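For the PHP-extension route, a sketch of what the Dockerfile referenced by build: . could contain, assuming it starts from the stock php:fpm image:

FROM php:fpm
# build and enable the phpredis extension (PECL package name: redis)
RUN pecl install redis \
    && docker-php-ext-enable redis

PHP code in that container can then reach the server through the service name, i.e. host redis, port 6379, once both services share a network.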
I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have several services, for example Elasticsearch, RabbitMQ and a web app.
My problem is that a service cannot reach another service by its hostname, because docker-compose does not put all the service hosts in the /etc/hosts file. I don't know their IPs, because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not put all the service hostnames into each service's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml;
version: '2'
services:
elasticsearch1:
image: elasticsearch:5.0
container_name: "elasticsearch1"
hostname: "elasticsearch1"
command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
volumes:
- "/opt/elasticsearch/data"
ports:
- "9200:9200"
- "9300:9300"
networks:
- webapp
elasticsearch2:
image: elasticsearch:5.0
container_name: "elasticsearch2"
hostname: "elasticsearch2"
command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
volumes:
- "/opt/elasticsearch/data"
networks:
- webapp
elasticsearch3:
image: elasticsearch:5.0
container_name: "elasticsearch3"
hostname: "elasticsearch3"
command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
volumes:
- "/opt/elasticsearch/data"
networks:
- webapp
rabbit1:
image: harbur/rabbitmq-cluster
container_name: "rabbit1"
hostname: "rabbit1"
environment:
- ERLANG_COOKIE=abcdefg
networks:
- webapp
rabbit2:
image: harbur/rabbitmq-cluster
container_name: "rabbit2"
hostname: "rabbit2"
environment:
- ERLANG_COOKIE=abcdefg
- CLUSTER_WITH=rabbit1
- ENABLE_RAM=true
networks:
- webapp
rabbit3:
image: harbur/rabbitmq-cluster
container_name: "rabbit3"
hostname: "rabbit3"
environment:
- ERLANG_COOKIE=abcdefg
- CLUSTER_WITH=rabbit1
networks:
- webapp
my_webapp:
image: my_webapp:0.2.0
container_name: "my_webapp"
hostname: "my_webapp"
command: "supervisord -c /etc/supervisor/supervisord.conf -n"
environment:
- DYNACONF_SETTINGS=settings.prod
ports:
- "8000:8000"
tty: true
networks:
- webapp
networks:
webapp:
driver: bridge
This is how I can tell that they cannot communicate with each other.
I get this error during Elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
If the container expects the hostname to be available immediately when the container starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ...
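A sketch of such an entrypoint script (a hypothetical wait-for-peers.sh; the hostnames are the ones from the compose file above, and getent is assumed to be available in the image, as it is in Debian-based elasticsearch images):

#!/bin/sh
# wait-for-peers.sh - block until the peer hostnames resolve, then run the real command
set -e
for host in elasticsearch1 elasticsearch2 elasticsearch3; do
  until getent hosts "$host" > /dev/null 2>&1; do
    echo "waiting for $host to resolve..."
    sleep 2
  done
done
exec "$@"

The service would then use this script as its entrypoint and keep the original elasticsearch ... invocation as its command.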
To update some images I used 'docker-compose pull'.
Then I built: 'docker-compose build'.
I wanted only to update the application container, so I removed it and restarted it:
'docker-compose rm app' and 'docker-compose up -d app'.
But something unwanted happened. The data container was recreated too.
The old data is lost.
Dockerfile for the data container:
FROM gitlab/gitlab-ce:latest
VOLUME ["/etc/gitlab", "/var/log/gitlab", "/var/opt/gitlab"]
ENTRYPOINT ["hostname"]
docker-compose.yml:
version: '2'
services:
gitlab:
image: 'gitlab/gitlab-ce:latest'
domainname: example.com
hostname: gitlab
networks:
- devenv
restart: always
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.example.com'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
ports:
- '80:80'
- '2224:22'
volumes_from:
- gitlabdata
gitlabdata:
build: gitlab-data
How can I avoid this next time?
The docker-compose up command has the --no-recreate flag.
This flag avoids recreating containers if they already exist.
Therefore you can run
docker-compose up -d --no-recreate app
Your issue happened because you created volumes for a container and then removed the container; that also removed your volumes.
You should change your volumes so that they bind mount a host directory. This way your files are stored on the host, and you can reattach those directories in case the container goes away. Another benefit is that you can access those files directly from the host.
Here is roughly what your compose file would look like with the new volume config.
version: '2'
services:
gitlab:
image: 'gitlab/gitlab-ce:latest'
domainname: example.com
hostname: gitlab
networks:
- devenv
restart: always
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.example.com'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
ports:
- '80:80'
- '2224:22'
volumes_from:
- gitlabdata
  gitlabdata:
    build: gitlab-data
    volumes:
      - /dir/on/host:/etc/gitlab
      - /dir/on/host2:/var/log/gitlab
      - /dir/on/host3:/var/opt/gitlab
You can mount to whatever you want on the host. More info about volumes here: https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume
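An alternative worth noting: named volumes declared at the top level of the compose file also survive docker-compose rm; they are only deleted when you explicitly ask for it, for example with docker-compose down -v. A sketch of that variant, with hypothetical volume names and without the separate data container:

version: '2'
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-logs:/var/log/gitlab
      - gitlab-data:/var/opt/gitlab
volumes:
  gitlab-config:
  gitlab-logs:
  gitlab-data: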
I'm facing an issue with this: in my case, every time I restart the docker-compose service, the containers are recreated.
My /docker-compose-app.service:
[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/dockercnf
ExecStart=/usr/local/bin/docker-compose up --no-recreate -d
ExecStop=/usr/local/bin/docker-compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
The docker-compose.yml
version: '2'
services:
mysql2:
image: mysql/mysql-server:8.0
container_name: mysql2-container
networks:
static-network:
ipv4_address: 172.18.1.2
mysql57:
image: mysql/mysql-server:5.7
container_name: mysql57-container
networks:
static-network:
ipv4_address: 172.18.1.3
networks:
static-network:
ipam:
config:
- subnet: 172.18.0.0/16
# docker-compose v3+: do not use ip_range
ip_range: 172.18.1.0/24
I don't know what I'm missing or doing wrong.
Edited - Solved
After reading the StackOverflow thread and reaching my own conclusion, I edited my docker-compose.yml file as follows:
version: '2'
services:
mysql2:
image: mysql/mysql-server:8.0
container_name: mysql2-container
networks:
static-network:
ipv4_address: 172.18.1.2
mysql57:
image: mysql/mysql-server:5.7
container_name: mysql57-container
restart: always
volumes:
- /srv/mysql57-container:/var/lib/mysql
networks:
static-network:
ipv4_address: 172.18.1.3
networks:
static-network:
ipam:
config:
- subnet: 172.18.0.0/16
# docker-compose v3+: do not use ip_range
ip_range: 172.18.1.0/24
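With the bind mount in place the MySQL data directory lives on the host, so it should survive the container being recreated; a quick, hypothetical check against the file above:

docker-compose up -d
docker-compose rm -sf mysql57     # stop and remove only this container
docker-compose up -d mysql57      # the data under /srv/mysql57-container is reused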