Translate docker run command to docker compose

I'm losing my mind a little here, I think I've translated the command correctly, but I'm getting an error when I try docker-compose up -d
Here's my command, this works - without failure:
sudo docker run -i \
--hostname localhost \
--publish 444:443 --publish 8080:8080 --publish 23:22 \
--name gitlab \
--restart always \
--volume /home/admin/gitlab/config:/etc/gitlab \
--volume /home/admin/gitlab/logs:/var/log/gitlab \
--volume /home/admin/gitlab/data:/var/opt/gitlab \
--volume /home/admin/gitlab/logs/reconfigure:/var/log/gitlab/reconfigure \
-e VIRTUAL_HOST=git.example.com \
-e VIRTUAL_PORT=8080 \
gitlab/gitlab-ce:latest
Here's my docker-compose.yml file, which isn't working
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  container_name: gitlab
  ports:
    - '8080:80'
    - '23:22'
    - '444:443'
  volumes:
    - '/home/admin/gitlab/config:/etc/gitlab'
    - '/home/admin/gitlab/logs:/var/log/gitlab'
    - '/home/admin/gitlab/data:/var/opt/gitlab'
    - '/home/admin/gitlab/logs/reconfigure:/var/log/gitlab/reconfigure'
  environment:
    - VIRTUAL_HOST=git.example.ca
    - VIRTUAL_PORT=8080
Can you see something that I'm doing wrong?

Add hostname: localhost, as you do in your docker run command, to get:
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  container_name: gitlab
  hostname: localhost
  ports:
    - '8080:80'
    - '23:22'
    - '444:443'
  volumes:
    - '/home/admin/gitlab/config:/etc/gitlab'
    - '/home/admin/gitlab/logs:/var/log/gitlab'
    - '/home/admin/gitlab/data:/var/opt/gitlab'
    - '/home/admin/gitlab/logs/reconfigure:/var/log/gitlab/reconfigure'
  environment:
    - VIRTUAL_HOST=git.example.ca
    - VIRTUAL_PORT=8080
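As a general sanity check when a translated file misbehaves, docker-compose config validates the YAML and prints the fully resolved configuration, which you can compare field by field against the original docker run flags:
# Validate the compose file and print the resolved configuration
docker-compose config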

Related

Docker compose passing parameters to set as environment variables of Dockerfile

The following is my Dockerfile:
FROM openjdk:11.0.7-jre-slim
ARG HTTP_PORT \
NODE_NAME \
DEBUG_PORT \
JMX_PORT
ENV APP_ROOT=/root \
HTTP_PORT=$HTTP_PORT \
NODE_NAME=$NODE_NAME \
DEBUG_PORT=$DEBUG_PORT \
JMX_PORT=$JMX_PORT
ADD spring-boot-app.jar $APP_ROOT/spring-boot-app.jar
ADD Config $APP_ROOT/Config
ADD start.sh $APP_ROOT/start.sh
WORKDIR ${APP_ROOT}
CMD ["/root/start.sh"]
Contents of start.sh as follows:
#!/bin/bash
java -Dnode.name=$NODE_NAME -Dapp.port=$HTTP_PORT -agentlib:jdwp=transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=n -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -jar spring-boot-app.jar
I am able to run the same image with different params as follows:
docker run -p 9261:9261 -p 65054:65054 -p 8080:8080 -itd --name=app-1 -e HTTP_PORT=8080 -e NODE_NAME=NODE1 -e DEBUG_PORT=9261 -e JMX_PORT=65054 my-image
docker run -p 9221:9221 -p 65354:65354 -p 8180:8180 -itd --name=app-2 -e HTTP_PORT=8180 -e NODE_NAME=NODE2 -e DEBUG_PORT=9221 -e JMX_PORT=65354 my-image
How to achieve this using docker-compose? I have tried the following but it is not working.
version: '3.1'
services:
  app-alpha:
    image: my-image
    environment:
      - HTTP_PORT:8080
      - NODE_NAME:NODE1
      - DEBUG_PORT:9261
      - JMX_PORT:65054
    ports:
      - 9261:9261
      - 65054:65054
      - 8080:8080
  app-beta:
    image: my-image
    environment:
      - HTTP_PORT:8180
      - NODE_NAME:NODE2
      - DEBUG_PORT:9221
      - JMX_PORT:65354
    ports:
      - 9221:9221
      - 65354:65354
      - 8180:8180
Use = instead of :, so your variables look like this:
environment:
  - HTTP_PORT=8080
  - NODE_NAME=NODE1
  - DEBUG_PORT=9261
  - JMX_PORT=65054
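Note that colons are only valid in the mapping form of environment, not the list form; if you prefer colons, a minimal equivalent sketch is:
environment:
  HTTP_PORT: 8080
  NODE_NAME: NODE1
  DEBUG_PORT: 9261
  JMX_PORT: 65054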

How to convert Kafka from docker-run command to docker-compose?

I am building a Kafka CDC pipeline, and the document I am following runs many docker run commands.
I want to put them all into a docker-compose.yml, but there is one last command I cannot convert.
Below are the commands:
docker run -d --name postgres \
-p 5432:5432 \
-e POSTGRES_USER=start_data_engineer \
-e POSTGRES_PASSWORD=password debezium/postgres:12
docker run -d --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888 debezium/zookeeper:1.1
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper debezium/kafka:1.1
docker run -d --name connect -p 8083:8083 --link kafka:kafka \
--link postgres:postgres \
-e BOOTSTRAP_SERVERS=kafka:9092 \
-e GROUP_ID=sde_group \
-e CONFIG_STORAGE_TOPIC=sde_storage_topic \
-e OFFSET_STORAGE_TOPIC=sde_offset_topic debezium/connect:1.1
This is the line I cannot convert:
docker run -it --rm --name consumer --link zookeeper:zookeeper \
--link kafka:kafka debezium/kafka:1.1 \
watch-topic -a bankserver1.bank.holding --max-messages 1 | grep '^{' | jq
Here is my docker-compose.yml so far
version: '2'
services:
  zookeeper:
    image: debezium/zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
  kafka:
    image: debezium/kafka
    ports:
      - 9092:9092
    links:
      - zookeeper
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
  postgres:
    image: debezium/postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
  connect:
    image: debezium/connect
    ports:
      - 8083:8083
      - 5005:5005
    links:
      - kafka
      - postgres
      - zookeeper
    environment:
      - BOOTSTRAP_SERVERS=kafka:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_source_connect_statuses
  consumer:
    image: debezium/kafka:1.1
    links:
      - zookeeper
      - kafka
    command: watch-topic -a bankserver1.bank.holding --max-messages 1 | grep '^{' | jq
When I run docker-compose up, everything runs normally, but the consumer always fails with this output:
consumer_1 | WARNING: Using default BROKER_ID=1, which is valid only for non-clustered installations.
consumer_1 | The ZOOKEEPER_CONNECT variable must be set, or the container must be linked to one that runs Zookeeper.
--- Update
For now I just want to read and shut down, to make sure it works first.
Later on, I will have a source handle the reading.
docker run -it --rm --name consumer --link zookeeper:zookeeper --link kafka:kafka debezium/kafka:1.1 watch-topic -a bankserver1.bank.holding | grep --line-buffered '^{' | <your-file-path>/stream.py > my-output/holding_pivot.txt
The following will work. The key points are:
I don't know why, but ZOOKEEPER_CONNECT and KAFKA_BROKER are not set automatically.
You must break the command into a list.
Finally, the pipe does not run inside the container, so it is dropped here.
version: '2'
services:
  zookeeper:
    image: debezium/zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
  kafka:
    image: debezium/kafka
    ports:
      - 9092:9092
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
  postgres:
    image: debezium/postgres:9.6
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
  connect:
    image: debezium/connect
    ports:
      - 8083:8083
      - 5005:5005
    environment:
      - BOOTSTRAP_SERVERS=kafka:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_source_connect_statuses
  consumer:
    image: debezium/kafka:1.1
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_BROKER=kafka:9092
    command:
      - watch-topic
      - -a
      - bankserver1.bank.holding
      - --max-messages
      - "1"
the consumer always fails with this output.
As the error says, you need to provide a ZOOKEEPER_CONNECT. However, you should be using entrypoint there, not command.
In any case, I don't know if the Debezium container will have the Python modules for you to pipe into stream.py or what watch-topic does, but you don't need another debezium/kafka container since you can exec into the running one.
docker-compose exec kafka \
bash -c "watch-topic -a bankserver1.bank.holding | grep --line-buffered '^{' | <your-file-path>/stream.py > my-output/holding_pivot.txt"

What is the equivalent of -h in docker-compose?

I want to convert docker run to docker-compose with the -h parameter.
What is the equivalent of -h in docker-compose?
My docker run command:
docker run --rm -p 8080:80/tcp -p 1935:1935 -p 3478:3478 \
  -p 3478:3478/udp bigbluebutton -h webinar.mydomain.com
My docker-compose file:
version: "3"
services:
bigbluebutton:
build: .
container_name: "bigbluebutton"
restart: unless-stopped
ports:
- 1935:1935
- 3478:3478
- 3478:3478/udp
- 8080:80
networks:
public:
networks:
public:
external:
name: public
Anything that appears after the image name in a docker run command becomes the Compose command:
docker run \
--rm -p 8080:80/tcp -p 1935:1935 \ # Docker options
-p 3478:3478 -p 3478:3478/udp \ # More Docker options
bigbluebutton \ # Image name
-h webinar.mydomain.com # Command
services:
  bigbluebutton:
    build: .
    command: -h webinar.mydomain.com
    ports: ['8080:80', '1935:1935', '3478:3478', '3478:3478/udp']
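Note that this -h is an argument to the image's own entrypoint, not Docker's -h/--hostname flag. If you actually wanted to set the container's hostname (docker run -h somehost image), that maps to the hostname key instead, e.g.:
services:
  bigbluebutton:
    build: .
    hostname: webinar.mydomain.com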

Build from linuxserver/deluge

I'd like to be able to use a Dockerfile with the linuxserver/deluge image, but I'm unsure of the correct way to do this in a docker-compose.yaml file.
docker create \
--name=deluge \
--net=host \
-e PUID=1001 \
-e PGID=1001 \
-e UMASK_SET=<022> \
-e TZ=<timezone> \
-v </path/to/deluge/config>:/config \
-v </path/to/your/downloads>:/downloads \
--restart unless-stopped \
linuxserver/deluge
Can someone help me convert this so that I can use a Dockerfile?
Thanks :)
The following docker-compose.yml file is similar to your command:
version: "3"
services:
  deluge:
    container_name: deluge
    image: linuxserver/deluge
    environment:
      - PUID=1001
      - PGID=1001
      - UMASK_SET=<022>
      - TZ=<timezone>
    volumes:
      - </path/to/deluge/config>:/config
      - </path/to/your/downloads>:/downloads
    restart: unless-stopped
    network_mode: host
The documentation is a great place to find the mapping between docker options and docker-compose syntax. Here is a recap of what has been used in this example:
--name => container_name
-e => environment (array of key=value)
-v => volumes (array of volume_or_folder_on_host:/path/inside/container)
--restart <policy> => restart: <policy>
--net=xxxx => network_mode
You can now run docker-compose up to start all your services (only deluge here) instead of your docker run command.
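If the goal is to build from your own Dockerfile rather than use the image directly, a minimal sketch (assuming a hypothetical Dockerfile that extends linuxserver/deluge, sitting next to the compose file) swaps the image: line for a build: entry:
# Dockerfile (hypothetical: extends the base image)
FROM linuxserver/deluge
# ... your customizations here ...
Then, in docker-compose.yml, replace "image: linuxserver/deluge" with:
services:
  deluge:
    build: .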

Running a local kibana in a container

I am trying to use the Kibana console with my local Elasticsearch (container).
In the Elasticsearch documentation I see:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.2
which lets me run the community edition with a quick one-liner.
Looking at the Kibana documentation, I see only:
docker pull docker.elastic.co/kibana/kibana:6.2.2
Replacing pull with run, it looks for X-Pack (I think that means it is not the community edition) and fails to find Elasticsearch:
Unable to revive connection: http://elasticsearch:9200/
Is there a one-liner that could easily set up Kibana locally in a container?
All I need is to work with the console (a Sense replacement).
If you want to use Kibana with Elasticsearch locally in Docker, they have to communicate with each other. To do so, according to the docs, you need to link the containers.
You can give a name to the elasticsearch container with --name:
docker run \
--name elasticsearch_container \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
And then link this container to kibana:
docker run \
--name kibana \
--publish 5601:5601 \
--link elasticsearch_container:elasticsearch_alias \
--env "ELASTICSEARCH_URL=http://elasticsearch_alias:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Port 5601 is published locally so you can access Kibana from your browser. You can check in the Monitoring section that Elasticsearch's health is green.
EDIT (24/03/2020):
The --link option is now a legacy Docker feature and may eventually be removed.
The idiomatic way to reproduce the same setup is to first create a user-defined bridge network:
docker network create elasticsearch-kibana
And then create the containers inside it:
Version 6
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_URL=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Version 7
As was pointed out, the environment variable changed in version 7. It is now ELASTICSEARCH_HOSTS.
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_HOSTS=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:7.6.2
User-defined bridges provide automatic DNS resolution, which means containers can reach each other by container name.
It is convenient to use docker-compose as well.
For instance, the file below, stored in your home directory, lets you start Kibana with one command, docker-compose up -d:
# docker-compose.yml
version: "2"
services:
  kibana:
    image: "docker.elastic.co/kibana/kibana:6.2.2"
    container_name: "kibana"
    environment:
      - "ELASTICSEARCH_URL=http://<elasticsearch-endpoint>:9200"
      - "XPACK_GRAPH_ENABLED=false"
      - "XPACK_ML_ENABLED=false"
      - "XPACK_REPORTING_ENABLED=false"
      - "XPACK_SECURITY_ENABLED=false"
      - "XPACK_WATCHER_ENABLED=false"
    ports:
      - "5601:5601"
    restart: "unless-stopped"
In addition, the Kibana service might be part of your project in a development environment (in cases where docker-compose is used).
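Building on that, a sketch of a single compose file that runs both services together (assuming the single-node Elasticsearch settings from above; compose puts the services on a shared default network, so Kibana can reach Elasticsearch by service name):
# docker-compose.yml (sketch: single-node development setup)
version: "2"
services:
  elasticsearch:
    image: "docker.elastic.co/elasticsearch/elasticsearch:6.2.2"
    environment:
      - "discovery.type=single-node"
    ports:
      - "9200:9200"
  kibana:
    image: "docker.elastic.co/kibana/kibana:6.2.2"
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch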
