Docker and OpenProject compose - with own network bridge

I set up a rootless Docker host. Portainer is working fine, and I can also start OpenProject with the compose file below, but only when I run it on Docker's standard bridge network.
I have created several network adapters on my Docker host (Debian), for example:
ens1283 = 192.168.10.5
ens3283 = 192.168.50.11
Why: ens3283 of course has a MAC address on the host, which gives me the possibility to hand out a fixed IP to it.
Now I created a bridge in Portainer with the following settings:
Name VLAN50FIX11
ID 2725cc96b95b7de962a5a69d3437e0b601f4606782ad97bffe8234166eaab93e
Driver bridge
Scope local
Attachable false
Internal false
IPV4 Subnet - 172.17.11.0/16
IPV4 Gateway - 172.17.11.1
IPV4 IP range - 172.17.11.1/25
IPV4 Excluded IPs -
Access control
Ownership administrators
Network options
com.docker.network.bridge.enable_icc true
com.docker.network.bridge.enable_ip_masquerade true
com.docker.network.bridge.host_binding_ipv4 192.168.50.11
com.docker.network.bridge.name VLAN50FIX11
com.docker.network.driver.mtu 1500
I also tried it with different networks, e.g. 192.168.50.0/24, and a lot of other variations.
But I never had any luck; I always got ERR_SSL_PROTOCOL_ERROR in the browser. When I start the stack with the default bridge, it works fine.
Docker compose file:
version: '3.9'
services:
  openproject:
    hostname: SVGXXX-OPEN-01
    image: openproject/community:12.1.5
    networks:
      - VLAN50FIX11
    ports:
      - 8181:80
    container_name: openproject
    environment:
      - PUID=1001
      - PGID=1001
      - SECRET_KEY_BASE=9jsdjkSKjf99847459Dg7956ds61
    volumes:
      - /var/lib/containers/openproject/pgdata:/var/openproject/pgdata
      - /var/lib/containers/openproject/assets:/var/openproject/assets
    restart: unless-stopped
networks:
  VLAN50FIX11:
    external: true
I always receive ERR_SSL_PROTOCOL_ERROR when I start the OpenProject stack.
What do I need to change?
Thanks for your help

Perhaps you have to set
OPENPROJECT_HTTPS=false
This disables the on-by-default HTTPS mode of OpenProject so you can access
the instance over plain HTTP. For all production systems the docs strongly
advise not setting this to false, and instead setting up proper TLS/SSL
termination on your outer web server.
https://www.openproject.org/docs/installation-and-operations/installation/docker/#all-in-one-container
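Applied to the compose file above, that would roughly mean extending the environment block like this (a sketch; for production you would instead keep HTTPS and terminate TLS on a reverse proxy in front of the container):
    environment:
      - PUID=1001
      - PGID=1001
      - SECRET_KEY_BASE=9jsdjkSKjf99847459Dg7956ds61
      # Assumption: serve plain HTTP on port 80 inside the container,
      # reachable on the host via the published port 8181.
      - OPENPROJECT_HTTPS=false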

Related

Sending API-requests between two docker containers

I have a DDEV environment for Magento 2 running locally on macOS (Ventura).
https://ddev.readthedocs.io/en/stable/users/quickstart/#magento-2
For testing purposes I included NiFi via a docker-compose.yaml inside my DDEV project: .ddev/docker-compose.nifi.yaml
Below you can see the docker-compose file, which is really minimal at this point. NiFi works as expected because I can log in etc., although it is not persistent yet, but that's a different problem.
version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: ddev-${DDEV_SITENAME}-nifi
    ports:
      # HTTP
      - "8080:8080"
      # HTTPS
      - "8443:8443"
    volumes:
      # - ./nifi/database_repository:/opt/nifi/nifi-current/database_repository
      # - ./nifi/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      # - ./nifi/content_repository:/opt/nifi/nifi-current/content_repository
      # - ./nifi/provenance_repository:/opt/nifi/nifi-current/provenance_repository
      # - ./nifi/state:/opt/nifi/nifi-current/state
      # - ./nifi/logs:/opt/nifi/nifi-current/logs
      # - ./nifi/conf/login-identity-providers.xml:/opt/nifi/nifi-current/conf/login-identity-providers.xml
      - ".:/mnt/ddev_config"
All I want to do is send a POST request from NiFi to my Magento 2 module.
I have tried several IPs now, which I got from docker inspect ddev-ddev-magento2-web, but I always receive "Connection refused".
My output from docker network ls:
NETWORK ID     NAME                         DRIVER    SCOPE
95bea4031396   bridge                       bridge    local
692b58ca294e   ddev-ddev-magento2_default   bridge    local
46be47991abe   ddev_default                 bridge    local
7e19ae1626f1   host                         host      local
f8f4f1aeef04   nifi_docker_default          bridge    local
dbdba30546d7   nifi_docker_mynetwork        bridge    local
ca12e667b773   none                         null      local
My Magento 2 module is working properly, because sending requests to it from Postman works fine.
You don't want most of what you have. Remove the ports statement, which you shouldn't need at all; if you need anything, it would be an expose, but I doubt you need even that in this case.
You'll want to look at the docs:
Additional services and add-ons
Additional services with docker-compose
Then create a .ddev/docker-compose.nifi.yaml with something like
services:
  nifi:
    image: apache/nifi:latest
    container_name: "ddev-${DDEV_SITENAME}-nifi"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8080:8080
      - HTTPS_EXPOSE=9999:8080
    volumes:
      - ".:/mnt/ddev_config"
From inside your nifi container, the "web" container is reachable by the hostname web, for example curl http://web:8080, assuming the service you want to reach is listening on port 8080.
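As a quick connectivity check (a sketch; it assumes curl is available in the NiFi image and that the Magento web container serves HTTP on port 80 internally), you can run a command inside the nifi service with ddev exec:
# -s selects the service to run the command in (the default is "web")
ddev exec -s nifi curl -I http://web:80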
I don't know what you're trying to accomplish, but this may get you started. Feel free to come over to the DDEV Discord channel for more interactive help.

How to set bind9 docker container as dns of other container

I'm trying to set up SSL for my home network, and I've set up a bind9 container with a custom domain that points to my Unraid server. So far so good. I've also set up a private step-ca certificate authority, which needs its DNS set to the bind9 container so that it knows about my private domain. The setup works if I set the DNS of the step container to the internal Docker IP address of the bind container, but since these IP addresses are ephemeral I can't rely on that, hence why I'm binding the bind9 IP address to something within 192.168.0.1/24 and accessing it there. This works if I set the DNS server of my PC to the bind9 container, but for some reason I am unable to do the same for other Docker containers.
In short, step-ca and my proxy Traefik need their DNS set to bind9, which I want to set up with a static IP address on the 192.168.0.1/24 subnet. Traefik also needs to be able to talk to containers on the bridge network br0, otherwise it won't be able to proxy requests to the containers.
The addresses of your containers don't need to be ephemeral. We can set up a custom network using the networks top-level element that defines a static range for the network using the ipam option, and then we can assign our containers static addresses on this network.
We can use the dns option to configure containers to use the bind9 container for name resolution.
Here's an example docker-compose.yaml that sets up a bind9 container and a couple of additional containers that will use it for DNS:
version: "3"
services:
  bind9:
    image: docker.io/internetsystemsconsortium/bind9:9.19
    volumes:
      - "./bind:/etc/bind"
      - bind9_cache:/var/cache/bind
      - bind9_log:/var/log
      - bind9_lib:/var/lib/bind
    networks:
      bind9:
        ipv4_address: 192.168.133.10
  web1:
    image: docker.io/alpinelinux/darkhttpd:latest
    networks:
      bind9:
        ipv4_address: 192.168.133.20
    dns: 192.168.133.10
  web2:
    image: docker.io/alpinelinux/darkhttpd:latest
    networks:
      bind9:
        ipv4_address: 192.168.133.21
    dns: 192.168.133.10
networks:
  bind9:
    ipam:
      driver: default
      config:
        - subnet: 192.168.133.0/24
          gateway: 192.168.133.1
volumes:
  bind9_cache:
  bind9_lib:
  bind9_log:
In the bind directory, I have bind configured to serve the following zone file:
$TTL 604800
@    IN    SOA    docker.example. root.docker.example. (
                        2         ; Serial
                   604800         ; Refresh
                    86400         ; Retry
                  2419200         ; Expire
                   604800 )       ; Negative Cache TTL
;
@    IN    NS     ns.docker.example.
ns   IN    A      192.168.133.10
web1 IN    A      192.168.133.20
web2 IN    A      192.168.133.21
web  IN    A      192.168.133.20
web  IN    A      192.168.133.21
From either the web1 or web2 containers, we can confirm that they are using our bind instance for name resolution:
/ $ wget -O- web1.docker.example:8080
Connecting to web1.docker.example:8080 (192.168.133.20:8080)
writing to stdout
<html>
.
.
.
</html>
Recall that docker-compose is just a fancy wrapper for docker run, so you can accomplish the same thing without using docker-compose (although it will of course make life much easier).
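As a rough illustration (a sketch using the same names and addresses as the compose file above; the named volumes are omitted for brevity), the equivalent docker run commands would look something like:
# Create the user-defined network with a static range and gateway
docker network create --subnet 192.168.133.0/24 --gateway 192.168.133.1 bind9

# Start bind9 with a fixed address on that network
docker run -d --name bind9 --network bind9 --ip 192.168.133.10 \
  -v "$PWD/bind:/etc/bind" docker.io/internetsystemsconsortium/bind9:9.19

# Start a web container that uses the bind9 container for DNS
docker run -d --name web1 --network bind9 --ip 192.168.133.20 \
  --dns 192.168.133.10 docker.io/alpinelinux/darkhttpd:latest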
If you need to access the bind9 container, you would of course just publish the appropriate ports on your host by adding the necessary ports section to the compose configuration (or by using the --publish/-p option on the docker run command line):
  bind9:
    image: docker.io/internetsystemsconsortium/bind9:9.19
    ports:
      - "53:53/udp"
      - "53:53/tcp"
    volumes:
      - "./bind:/etc/bind"
      - bind9_cache:/var/cache/bind
      - bind9_log:/var/log
      - bind9_lib:/var/lib/bind
    networks:
      bind9:
        ipv4_address: 192.168.133.10
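With port 53 published, you can also query the zone from the host (assuming dig from dnsutils/bind-utils is installed there):
# Should return the two A records defined for "web":
# 192.168.133.20 and 192.168.133.21
dig @127.0.0.1 web.docker.example +short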

Expose docker-compose windows containers to windows host network

I'm fairly new to Docker and Docker Compose.
I have a simple scenario based on three applications (app1, app2, app3) that I want to connect to my host's network. The purpose is to have an internet connection inside the containers as well.
Here is my docker-compose file:
version: '3.9'
services:
  app1container:
    image: app1img
    build: ./app1
    networks:
      network_comp:
        ipv4_address: 192.168.1.1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080
  app2container:
    depends_on:
      - "app1container"
    image: app2img
    build: ./app2
    networks:
      network_comp:
        ipv4_address: 192.168.1.2
    ports:
      - 3100:3100
  app3container:
    depends_on:
      - "app1container"
    image: app3img
    build: ./app3
    networks:
      network_comp:
        ipv4_address: 192.168.1.3
    ports:
      - 9080:9080
networks:
  network_comp:
    driver: ""
    ipam:
      driver: ""
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.1.254
I have already read the docker-compose documentation, which says that there is no bridge driver for Windows. Is there nevertheless a solution to this issue?
You shouldn't usually need to do any special setup for this to work. When your Compose service has ports:, that makes a port available on the host's IP address. The essential rules for this are:
The service inside the container must listen on the special 0.0.0.0 "all interfaces" address (not 127.0.0.1 "this container only"), on some (usually fixed) port.
The container must be started with Compose ports: (or docker run -p). You choose the first port number; the second port number must match the port the service listens on inside the container.
The service can be reached via the host's IP address on the first port number (or, if you're using the older Docker Toolbox setup, on the docker-machine ip address).
http://host.example.com:12345 (from other hosts)
|
v
ports: ['12345:8080'] (in the `docker-compose.yml`)
|
v
./my_server -bind 0.0.0.0:8080 (the main container command)
You can remove all of the manual networks: configuration in this file. In particular, it's problematic if you try to specify the Docker network to have the same IP address range as the host network, since these are two separate networks. Compose automatically provides a network named default that should work for most practical applications.
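In other words, a stripped-down version of your file could look roughly like this (a sketch reusing the service and image names from your example; the networks: section is simply dropped):
version: '3.9'
services:
  app1container:
    image: app1img
    build: ./app1
    extra_hosts:
      anotherpc: 192.168.1.44
    ports:
      - 80:80
      - 8080:8080
  app2container:
    depends_on:
      - app1container
    image: app2img
    build: ./app2
    ports:
      - 3100:3100
  app3container:
    depends_on:
      - app1container
    image: app3img
    build: ./app3
    ports:
      - 9080:9080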

docker postgresql access from other container

I have a docker-compose file which is globally like this.
version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    networks:
      mynet:
        ipv4_address: 192.168.22.22
  db:
    image: postgres:9.5
    ports:
      - "6432:5432"
    networks:
      mynet:
        ipv4_address: 192.168.22.23
  ...
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.22.0/24
I want to put my PostgreSQL and application in subnetworks to avoid the ports being exposed outside my computer/server.
From within the app container, I can't connect to 192.168.22.23. I installed net-tools to use ifconfig/netstat, and it doesn't seem the containers are able to communicate.
I assume I have this problem because I'm using subnetworks with static IPv4 addresses.
I can access both static IPs from the host (connect to Postgres and access the application).
Do you have any advice? The goal is to access the ports of another container to communicate with it, without giving up the static IPs (on app at least): here, to connect to PostgreSQL from the app container.
The docker run -p option and Docker Compose ports: option take a bind address as an optional parameter. You can use this to make a service accessible from the same host, but not from other hosts:
services:
  db:
    ports:
      - '127.0.0.1:6432:5432'
(The other good use of this setting is if you have a gateway machine with both a public and private network interface, and you want a service to only be accessible from the private network.)
Once you have this, you can dispense with all of the manual networks: setup. Non-Docker services on the same host can reach the service via the special host name localhost and the published port number. Docker services can use inter-container networking; within the same docker-compose.yml file you can use the service name as a host name, and the internal port number.
host$ PGHOST=localhost PGPORT=6432 psql
services:
  app:
    environment:
      - PGHOST=db
      - PGPORT=5432
You should remove all of the manual networks: setup, and in general try not to think about the Docker-internal IP addresses at all. If your Docker is Docker for Mac or Docker Toolbox, you cannot reach the internal IP addresses at all. In a multi-host environment they will be similarly unreachable from hosts other than where the container itself is running.
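Putting those pieces together, a simplified version of the original file could look roughly like this (a sketch reusing the image names from the question; the manual networks: setup is dropped entirely):
version: '2'
services:
  app:
    image: myimage
    ports:
      - "80:80"
    environment:
      # Reach the database by its Compose service name and internal port
      - PGHOST=db
      - PGPORT=5432
  db:
    image: postgres:9.5
    ports:
      # Only reachable from the host itself, not from other machines
      - "127.0.0.1:6432:5432"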

Docker compose cannot start service address already in use?

For some reason docker-compose does not like the address 203.0.113.1 for the gogs container in the configuration below. Note that in the example below I have Gogs running on 203.0.113.3, which works, but if I change it to 203.0.113.1 I get the message:
ERROR: for f1d322793d47_docker_gogs_1 Cannot start service gogs: Address already in use.
I have checked to make sure no container using the IP address 203.0.113.1 is running, so I'm curious whether docker-compose just disallows that address in general for some reason.
version: '3'
services:
  gogs-nginx:
    build: ./nginx
    ports:
      - "80:80"
    networks:
      mk1net:
        ipv4_address: 203.0.113.2
  gogs:
    image: gogs/gogs
    ports:
      - "3000:3000"
    volumes:
      - gogs-data:/data
    depends_on:
      - gogs-nginx
    networks:
      mk1net:
        ipv4_address: 203.0.113.3
volumes:
  gogs-data:
    external: true
networks:
  mk1net:
    ipam:
      config:
        - subnet: 203.0.113.0/24
In a network there are three addresses which are normally reserved for specific purposes:
.0 is used as the network address, .1 is used as the gateway address, and .255 is used as the broadcast address.
If one container wants to communicate with another container in the same network, it can talk to it directly. When it wants to reach an IP outside its network, it sends the traffic to the gateway address and hopes the gateway knows how to route it.
To see this, you can inspect a Docker container and check its Gateway property next to the IPAddress.
Or use ifconfig (Linux) and search for a bridge interface with the same ID as the network you created. This interface will have the IP 203.0.113.1.
So your IP is already in use by the network gateway.
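For example (the container name is an assumption; it depends on your Compose project name), you can check the gateway that the running gogs container uses:
docker inspect --format '{{range .NetworkSettings.Networks}}{{.Gateway}} {{end}}' gogs_gogs_1
203.0.113.1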
In docker-compose file format version 2 there is a config option to change the gateway and broadcast IP.
For version 3 it seemed at first that this config was not supported.
Update: docker-compose file format version 3 now has ipam.gateway; see the config reference.
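So one way around the error (a sketch; it assumes a Compose version that supports ipam.gateway) is to move the gateway to another address, which frees up 203.0.113.1 for the gogs container:
networks:
  mk1net:
    ipam:
      config:
        - subnet: 203.0.113.0/24
          gateway: 203.0.113.254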
If a Docker service happens to have no IP address explicitly assigned, it automatically gets the first free one. If that happens to be the same as an IP address explicitly assigned to another service later in the docker-compose file, the two conflict and the stack fails with the said error message.
