Portainer Stack - docker compose issue with macvlan network

I am starting to use portainer.io to manage my Docker containers instead of the Synology DSM Docker GUI.
Background information:
I've used macvlan to give my Pi-hole container its own IP address; everything regarding this Pi-hole runs fine with these settings, which were made through the DSM GUI.
Problem:
I would now like to use portainer.io to manage my Docker installation, including the Stacks option, which uses Docker Compose.
I am now struggling to get my Pi-hole image up with this compose file:
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    networks:
      - docker
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'Europe/Berlin'
      WEBPASSWORD: 'password'
      ServerIP: "0.0.0.0"
    # Volumes store your data between container upgrades
    volumes:
      - '/pihole/pihole/:/etc/pihole/'
      - '/pihole/dnsmasq/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
Does anyone have an idea why I get "Unable to deploy stack" as the error message?

You are telling the service to use a network called "docker", but the network is not defined in the compose file. Is this the complete docker-compose file?
If yes, then you are missing the networks section:
networks:
  docker:
    external: true
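Putting it together, a sketch of the full corrected stack file (assuming the existing macvlan network really is named docker; check the actual name with docker network ls):
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    networks:
      - docker   # must match the name reported by `docker network ls`
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'Europe/Berlin'
      WEBPASSWORD: 'password'
      ServerIP: "0.0.0.0"
    volumes:
      - '/pihole/pihole/:/etc/pihole/'
      - '/pihole/dnsmasq/:/etc/dnsmasq.d/'
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
networks:
  docker:
    external: true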

Related

Docker inter-container communication

I'm facing a relatively simple problem here, but I'm starting to wonder why it doesn't work.
I want to start two Docker containers with Docker Compose: InfluxDB and Chronograf.
Unfortunately, Chronograf does not reach InfluxDB under the given hostname: "Unable to connect to InfluxDB Influx 1: Error contacting source"
What could be the reason for this?
Here is my docker-compose.yml:
version: "3.8"
services:
influxdb:
image: influxdb
restart: unless-stopped
ports:
- 8086:8086
volumes:
- influxdb-volume:/var/lib/influxdb
networks:
- test
chronograf:
image: chronograf
restart: unless-stopped
ports:
- 8888:8888
volumes:
- chronograf-volume:/var/lib/chronograf
depends_on:
- influxdb
networks:
- test
volumes:
influxdb-volume:
chronograf-volume:
networks:
test:
driver: bridge
I have also tried starting a shell inside the two containers and then pinging one container from the other, as well as using wget to reach the other container's HTTP API. Even this direct communication between the containers does not work: both the wget and the ping attempts time out.
It must be said that I am using a Banana Pi BPI-M1 here. Is it possible that container-to-container communication doesn't work because of the Linux running on this board?
If not configured, Chronograf will try to access InfluxDB on localhost:8086. To reach the correct InfluxDB instance, you need to specify the URL accordingly, using either the --influxdb-url command line flag or (personal preference) the environment variable INFLUXDB_URL. Either should be set to http://influxdb:8086, which is the Docker DNS name derived from the service name in your compose file (the keys one level below services).
This should do the trick (snippet):
  chronograf:
    image: chronograf
    restart: unless-stopped
    ports:
      - 8888:8888
    volumes:
      - chronograf-volume:/var/lib/chronograf
    environment:
      - INFLUXDB_URL=http://influxdb:8086
    depends_on:
      - influxdb
    networks:
      - test
Please check the chronograf README (section "Using the container with InfluxDB") for details on configuring the image, and the Docker Compose networking docs for more information about networks and DNS naming.
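To verify the fix, here is a sketch of a quick connectivity check from inside the running chronograf container (it assumes the InfluxDB 1.x /ping endpoint, which answers HTTP 204 when the server is up, and that the image ships wget, which the asker already used):
# Service names as in the compose file above.
docker-compose exec chronograf wget -qO- http://influxdb:8086/ping && echo "influxdb reachable"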
The Docker service creates iptables entries in the filter and nat tables. My OpenVPN gateway script executed the following commands at startup:
iptables --flush -t filter
iptables --flush -t nat
This deletes Docker's entries, and communication between the containers and with the Internet is no longer possible.
I have rewritten the script, and now everything works again.
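For anyone with the same symptom, a less destructive pattern (a sketch; it assumes systemd manages the Docker daemon) is to let Docker rebuild its rules after the flush instead of leaving the tables empty:
# Flush as before, then restart dockerd so it recreates the DOCKER
# chains and NAT rules that container networking depends on.
iptables --flush -t filter
iptables --flush -t nat
systemctl restart docker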

Docker Compose: how to connect a Java application to a Redis container over a custom Docker network

I have a Java application that connects to an external database through a custom Docker network,
and I want to connect a Redis container to it.
docker-redis github topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
None of them works in my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
Connect to a Database Running on Your Docker Host
PS: this might be off-topic, but how can I define the network in docker-compose instead of using an external one?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container onto the same network:
  app-redis:
    image: redis:5.0.9-alpine
    networks:
      - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for the Docker-private networks. Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that would leave you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
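On the application side, that host name then simply goes into the configuration; a hedged sketch (REDIS_HOST and REDIS_PORT are hypothetical variable names, standing in for whatever keys the application actually reads):
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
    environment:
      # Hypothetical keys; substitute whatever the app's config expects.
      - REDIS_HOST=app-redis
      - REDIS_PORT=6379
  app-redis:
    image: redis:5.0.9-alpine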

Expose application running on host port to Selenoid

I'm running a Selenoid test automation script and would like to run it against a local application. However, I can't find how to expose my local application (running on port 8787) to Selenoid. I found the following thread discussing a similar issue, but it doesn't solve my problem: the linked thread describes using the host's IP address, whereas I want to make my test system-independent. The host IP address differs from system to system and is hard to retrieve in a system-independent way.
I already tried adding the expose field to my docker compose file:
version: '3'
services:
  selenoid:
    network_mode: bridge
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/run:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/run/video:/opt/selenoid/video"
      - "${PWD}/run/logs:/opt/selenoid/logs"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${PWD}/run/video
      - TZ=Europe/Amsterdam
    command: ["-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    expose:
      - "8787"
However, this doesn't work, because the Docker containers created by Selenoid are not passed the same option.
Is there any way to expose my host port 8787 to the Selenoid browser containers in a system/OS-independent way (either via a configuration in the docker-compose.yml file, a capability passed to the remote driver, or any other way)?
Selenoid runs browsers in standard Docker containers, so anything applicable to Docker is applicable to Selenoid browsers. Docker was created for the case when all interacting parts are packed into containers; in that case you would use legacy Docker links or modern custom Docker networks for your service. If you still want to run your application on the host machine without packing it into a container, you have to either use the host machine IP or, on some platforms, a special domain name that Docker provides, e.g. docker.for.mac.localhost on Mac.
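To illustrate the host-IP route without hard-coding an address, here is a hedged sketch of a connectivity check (it assumes Docker 20.10+ for the host-gateway mapping; on Docker Desktop the extra --add-host is not needed, because host.docker.internal resolves out of the box):
# Stand-in for a Selenoid browser container reaching the app on the host:
docker run --rm --add-host=host.docker.internal:host-gateway \
  busybox wget -qO- http://host.docker.internal:8787/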
I finally realized that the application I run is actually in a Docker container itself, and thus linking them is as easy as putting Selenoid and the application on the same Docker network. The final docker-compose.yml is as follows:
version: '3'
networks:
  my_network_name:
    external:
      name: my_network_name # this assumes the network is already created
services:
  selenoid:
    networks:
      my_network_name: null
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/run:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/run/video:/opt/selenoid/video"
      - "${PWD}/run/logs:/opt/selenoid/logs"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${PWD}/run/video
      - TZ=Europe/Amsterdam
    command: ["-container-network", "my_network_name", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    expose:
      - "8787"

Docker compose is not finding network from swarm host

I have one server where I created an overlay network with the following command:
docker network create --driver=overlay --attachable caja_gestiones
On server two I want to use docker compose to deploy all my containers; one of them uses the caja_gestiones network as well as the default network. This is my docker-compose.yml:
version: '3.3'
services:
  msgestiones:
    image: msgestiones:latest
    hostname: msgestiones
    container_name: msgestiones
    environment:
      - perfil=desarrollo
      - JAVA_OPTS=-Xmx512M -Xms512M
      - TZ=America/Mexico_City
    networks:
      - marcador
      - caja_gestiones
  msmovimientos:
    image: msmovimientos:latest
    hostname: msmovimientos
    container_name: msmovimientos
    environment:
      - perfil=desarrollo
      - JAVA_OPTS=-Xmx512M -Xms512M
      - TZ=America/Mexico_City
    networks:
      - marcador
networks:
  marcador:
    driver: bridge
  caja_gestiones:
    external:
      name: caja_gestiones
When I run docker compose, it throws an error saying that the network does not exist; but if I first run a dummy container using that network, the network appears and the compose works. How can I make compose use the overlay network without running a dummy container first?
Did you try to deploy it as a stack instead of with compose? You can use the same compose file, but deploy it with docker stack deploy -c composefile.yaml yourstackname.
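A minimal sketch of that deployment (the stack name gestiones and the file name are placeholders; note that swarm mode ignores container_name:, and that stack deploy resolves the pre-created overlay network at scheduling time, which is why no dummy container is needed):
# Run on a manager node of the swarm:
docker stack deploy -c docker-compose.yml gestiones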

Connecting to the Docker network of a compose stack from a container spawned inside it

I have a docker-compose stack running multiple images together.
Once I have the stack up, I have one LocalStack container and one Mongo container running. In LocalStack, when a lambda is to be executed, a new Docker container is launched. From the lambda container I need to connect to the Mongo container in the existing stack.
LocalStack already has access to docker.sock, so it creates the new container on my host machine. But no network connection is established from the lambda to Mongo, neither via the loopback address nor via the network alias defined in docker-compose.yml.
Can you please help me establish this connection?
UPDATE
My docker-compose.yml
version: '2'
services:
  mongo:
    image: mongo:3.5
    networks:
      apitests:
        aliases:
          - mongo
    ports:
      - "27017:27017"
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4583:4567-4583"
      - "4050:4050"
    env_file:
      - localstack-config.list
    volumes:
      - "/tmp/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      apitests:
        aliases:
          - localaws
networks:
  apitests: {}
My localstack-config.list
SERVICES=sqs,sns,lambda,s3
DEBUG=1
DEFAULT_REGION=eu-west-1
PORT_WEB_UI=4050
LAMBDA_EXECUTOR=docker
DOCKER_HOST=unix:///var/run/docker.sock
LAMBDA_REMOTE_DOCKER=false
It seems LocalStack had a bug in the code where the newly spawned Docker containers did not get access to the LocalStack network.
There was also no way to grant such access, as only a temporary container was created on the fly.
I enhanced the code and raised PR#883 for this.
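For readers on later LocalStack releases: the network for spawned lambda containers can usually be pinned with the LAMBDA_DOCKER_NETWORK setting (a hedged sketch; the exact network name carries the Compose project prefix, so confirm it with docker network ls):
# Extra line for localstack-config.list; replace <project> with the
# Compose project name shown by `docker network ls`.
LAMBDA_DOCKER_NETWORK=<project>_apitests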
