I have two docker containers that share the same network. When I ssh into one of the containers and make an HTTP call to the other, I get a 200 response: curl -i http://app-web.
I need to be able to call the app-web container via HTTPS: curl https://app-web. However, that returns: Failed to connect to app-web port 443: Connection refused.
This is the docker-compose.yml file for the app-web. What am I missing?
version: "3.8"
networks:
  local-proxy:
    external: true
  internal:
    external: false
services:
  web:
    build:
      context: ./docker/bin/php
    container_name: "app-web"
    expose:
      - "80"
      - "443"
    networks:
      - internal
      - local-proxy
As stated by @David Maze:
Your application isn't listening on port 443. Compose expose: does pretty much nothing at all, and you can delete that section of the file without changing anything about how the containers work.
You need to make sure that the app-web container is set up and actually listening on port 443.
For example, for Apache, this may mean:
Enabling the necessary modules, e.g. a2enmod headers ssl.
Setting up that domain to be able to handle/receive SSL connections.
Restarting your server to implement the changes.
More on that here: How To Create a Self-Signed SSL Certificate for Apache in Ubuntu 18.04
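As a sketch, a virtual host that makes Apache inside the container actually listen on 443 might look like this (the ServerName and certificate paths are assumptions; point them at your own, e.g. self-signed, certificate):

```apache
<VirtualHost *:443>
    ServerName app-web
    DocumentRoot /var/www/html

    SSLEngine on
    # Hypothetical paths; generate a self-signed pair or mount your own certs here
    SSLCertificateFile /etc/apache2/ssl/app-web.crt
    SSLCertificateKeyFile /etc/apache2/ssl/app-web.key
</VirtualHost>
```

With a config like this enabled (a2ensite plus a restart), curl https://app-web from the other container should at least get a TLS handshake instead of Connection refused (curl -k would be needed for a self-signed certificate).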
Related
This is for my local docker development. I have two docker hosts and I'm using traefik's reverse proxy to pull them up in browser. One of the hosts is an api which I need to communicate via https calls. The container I'm trying to connect to has the following params:
version: "3.8"
networks:
  local-proxy:
    external: true
  internal:
    external: false
services:
  web:
    build:
      context: ./docker/bin/php
    container_name: "app-web"
    expose:
      - "80"
      - "443"
    networks:
      - internal
      - local-proxy
I'm able to connect to it via curl when the call is made over plain HTTP:
curl http://app-web (200 response)
I need to be able to connect via HTTPS, in order to keep everything the way it runs in production; however, it keeps throwing Failed to connect to app-web port 443: Connection refused
Is it possible at all to connect via 443 port from one container to another?
Note: These containers are never deployed to production. They are just for local dev.
I have a setup in which service_main streams logs to the socket 127.0.0.1:6000.
A simplified docker-compose.yml looks like this:
version: "3"
networks:
  some_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 100.100.100.0/24
          gateway: 100.100.100.1
services:
  service_main:
    image: someimage1
    networks:
      some_network:
        ipv4_address: 100.100.100.2
  service_listener:
    image: someimage2
    networks:
      some_network:
        ipv4_address: 100.100.100.21
    entrypoint: some_app
    command: listen 100.100.100.2:6000
My assumption is that it SHOULD work, since both containers belong to the same network.
However, I get an error (from service_listener) that 100.100.100.2:6000 is not available
(which I interpret as the service trying to listen on some public socket instead of the network one).
I tried different things, without deep understanding: exposing/publishing port 6000 on service_main, or setting the log socket to 100.100.100.21:6000 and having service_listener listen on 127.0.0.1:6000 (and publishing that port too). But nothing works, and apparently I don't understand why.
In same network with similar approach - powerdns and postgresql works fine - I tell powerdns in config that db host is on 100.100.100.x and it works.
It all depends on what you want to do.
If you want to access service_main from outside, e.g. from the host the containers are running on, then there are 2 ways to fix this:
Publish the port. This is done with the ports: key:
services:
  service_main:
    image: someimage1
    ports:
      - "6000:4000"
In this case, port 4000 is the port the application inside the someimage1 container is listening on.
Use a proxy server which talks to the IP address of the Docker container.
But then you need to make sure that the thing you have running inside the Docker container (someimage1) is indeed listening on port 6000.
Proxyserver
The nice thing about the proxyserver method is that you can use nginx inside another docker container and put all the deployment and networking stuff in there. (Shameless self-promotion for an example I created of a proxyserver in docker)
Non Routable Networks
And I would always use a non-routable (RFC 1918 private) range for internal networks, not 100.100.100.*
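For example, the network definition from the question could be rewritten like this (the 172.28.0.0/24 subnet is an arbitrary choice from the RFC 1918 private space, not something the question mandates):

```yaml
networks:
  some_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/24    # private RFC 1918 range instead of 100.100.100.0/24
          gateway: 172.28.0.1
```

This avoids accidentally shadowing address space that is routable (or reserved for other uses) outside your host.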
I assume that when I publish/map a port, I make it available not only inside the docker-compose network but also for external calls.
My problem was solved by the following steps:
In the configuration of service_main I set it to stream logs to the socket 100.100.100.21:6000.
In service_listener I told the app inside to listen on port 0.0.0.0:6000:
service_listener:
  image: someimage2
  networks:
    some_network:
      ipv4_address: 100.100.100.21
  entrypoint: some_app
  command: listen 0.0.0.0:6000
It helped.
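Putting both pieces together, the working setup can be sketched like this (service_main's log destination lives in its own application config, which isn't part of the compose file, so it appears here only as a comment):

```yaml
services:
  service_main:
    image: someimage1          # app config streams logs to 100.100.100.21:6000
    networks:
      some_network:
        ipv4_address: 100.100.100.2
  service_listener:
    image: someimage2
    networks:
      some_network:
        ipv4_address: 100.100.100.21
    entrypoint: some_app
    # Bind 0.0.0.0, not 127.0.0.1, so connections from other containers are accepted
    command: listen 0.0.0.0:6000
```

The key change is the listener's bind address: 127.0.0.1 inside a container only accepts traffic from that same container.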
I currently have a VM running 2 services: a frontend httpd/apache2 service that proxies all requests to my backend service.
My backend service only listens on 127.0.0.1:7878, which means it is only accessible via localhost. That's the reason I'm using a frontend: so I can proxy my requests to 127.0.0.1:7878.
So my apache2 config on the VM looks like :
root@vm:/etc/apache2/sites-enabled# cat backend.conf
<VirtualHost *:443>
ServerName my.domain.com
ProxyPass / http://localhost:7878/
ProxyPassReverse / http://localhost:7878/
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/ssl_cert.crt
SSLCertificateKeyFile /etc/apache2/ssl/ssl_cert.key
</VirtualHost>
Now I want to dockerize both services and deploy them using docker-compose
I have setup my backend service like :
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
And my backend/ folder has all the required files for my backend service, including the Dockerfile. I am able to build my Docker image and run it successfully. When I exec into the container, I can successfully run curl commands against 127.0.0.1:7878/some-end-point.
Now I need to dockerize the frontend service too. It could be apache or it could even be nginx. But I'm not sure how both containers will interact with each other, given that my backend service ONLY listens on 127.0.0.1.
If I extend my docker-compose file like :
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
  frontend:
    build: frontend/.
    ports:
      - "80:80"
      - "443:443"
I believe it will spin up its own network, and my backend service won't be accessible using 127.0.0.1:7878.
So in this case, what's the best approach? How do I use docker-compose to spin up different containers on the SAME network so that they share 127.0.0.1?
You can't do that as you describe: the IPv4 address 127.0.0.1 is a magic address that always means "me", and in a Docker context it will mean "this container".
It's easy enough to set up a private Docker-internal network for your containers; in fact, Docker Compose will do this automatically for you. Your backend service must be listening on 0.0.0.0 to be accessible from other containers. You're not required to set externally published ports: on your container (or use the docker run -p option), though. If you don't, then your container will only be reachable from other containers on the same Docker-internal network, using the service name in the docker-compose.yml file as a DNS name, on whatever port the process inside the container happens to be listening on.
A minimal example of this could look like:
version: '3'
services:
  proxy:
    image: 'my/proxy:20181220.01'
    environment:
      BACKEND_URL: 'http://backend'
      BIND_ADDRESS: '0.0.0.0:80'
    ports:
      - '8080:80'
  backend:
    image: 'my/backend:20181220.01'
    environment:
      BIND_ADDRESS: '0.0.0.0:80'
From outside Docker, you can reach the proxy at http://server-hostname.example.com:8080. From inside Docker, the two hostnames proxy and backend will resolve to Docker-internal addresses, and we've set both services (via a hypothetical environment variable setup) to listen on the ordinary HTTP port 80.
My node webserver uses express and listens on port 5500.
My docker-compose file doesn't expose any port of my node webserver (named webserver), as follows:
version: "3"
services:
  webserver:
    build: ./server
  form:
    build: ./ui
    ports:
      - "6800:80"
    networks:
      - backend  # I kept the backend network just for legacy; in fact webserver isn't in this network
    command: [nginx-debug, '-g', 'daemon off;']
networks:
  backend:
My Nginx reverse proxy as following:
/request {
proxy_pass http://webserver:5500/request
}
Expectation: my request should fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is good, as follows:
proxy_pass http://webver:5500/request > succeeds
I can't understand the Nginx / Docker flow here. Could someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system so that containers can reach each other by name or network alias; Compose registers each service under its name in the YAML file, so within this set of containers, webserver and form would both be resolvable host names.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
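As an illustration of that corollary, the compose file from the question could be reduced to the following sketch and still work the same way (service names are from the question; the explicit networks: block is dropped purely to show it isn't needed):

```yaml
version: "3"
services:
  webserver:
    build: ./server   # reachable from other services as http://webserver:5500
  form:
    build: ./ui
    ports:
      - "6800:80"
```

Compose attaches both services to an automatically created default network, so nginx inside form resolves webserver by name regardless of any extra networks you declare.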
I'm trying to get a simple OpenVPN server set up on a cheap Vultr vps through docker-compose.
I was able to generate certificates and such just fine, and can even connect to the server.
But when I try to connect to it on my Mac through Tunnelblick, I have no internet. My IPv6 traffic works, but seems to just be using my home internet, not the VPN tunnel.
Whenever I try to reach any IPv4 address, it times out. Even ping 8.8.8.8 gives me a timeout error.
docker-compose:
version: '3.5'
services:
  openvpn:
    container_name: openvpn
    image: kylemanna/openvpn
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    network_mode: host
    ports:
      - "943:943"
      - "1194:1194/udp"
    privileged: true
    hostname: example.com
    volumes:
      - /lib/modules:/lib/modules:ro
      - /etc/openvpn:/etc/openvpn
volumes:
  openvpn-config:
    name: openvpn-config
It may be related to DNS nameserver settings not being pushed to clients. You can try manually assigning a nameserver (e.g. 8.8.8.8) in Tunnelblick.
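Alternatively, the nameserver can be pushed from the server side. In the OpenVPN server configuration that would look something like this (8.8.8.8 is just an example resolver; adjust to taste):

```text
# openvpn.conf: push DNS and default-route settings to connecting clients
push "dhcp-option DNS 8.8.8.8"
push "redirect-gateway def1 bypass-dhcp"
```

The redirect-gateway line makes clients send all traffic through the tunnel, which is usually what you want if IPv4 connectivity should go via the VPS.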
As for IPv6 traffic not being encapsulated, I'd check whether the Docker engine is configured to handle such traffic. It looks like Kylemanna's image needs additional configuration (e.g. add --ipv6 when starting the Docker daemon), as explained at IPv6 Support.
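Instead of passing --ipv6 on every daemon start, the same can be set persistently in /etc/docker/daemon.json (the fixed-cidr-v6 value below uses the 2001:db8:: documentation prefix as a placeholder; substitute a prefix that is actually routed to your VPS):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```

After editing the file, restart the Docker daemon (e.g. systemctl restart docker) for the change to take effect.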