Run Swagger docker container as extension of server - docker

I have a docker compose file serving the nginx:alpine image for an API I'm developing. I also have a container for Swagger UI and another for Swagger Editor. The nginx container is set up to support SSL via container port 443 (mapped to host port 9443, as I also use Laravel Valet in secure mode for other small projects).
My question is: how can I route the Swagger containers so that my team can visit https://domain.dev/swagger-ui and https://domain.dev/swagger-editor in a browser in the dev environment, rather than https://localhost:8081 and https://localhost:8082 respectively?
Here's my docker-compose YAML file
version: "3.3"
services:
nginx:
container_name: "cygnus-nginx"
image: nginx:alpine
restart: "always"
command: /bin/sh -c "nginx -g 'daemon off;'"
ports:
- "9800:80"
- "9443:443"
volumes:
- "./docker/nginx/conf.d/:/etc/nginx/conf.d:delegated"
- "./docker/nginx/ssl/:/etc/nginx/ssl:delegated"
- ".:/var/www/html:delegated"
swagger-editor:
container_name: "cygnus-swagger-editor"
image: swaggerapi/swagger-editor
restart: "always"
ports:
- "8081:8080"
swagger-ui:
container_name: "cygnus-swagger-ui"
image: swaggerapi/swagger-ui
restart: "always"
ports:
- "8082:8080"
volumes:
- ./swagger/swagger.json:/swagger.json
environment:
SWAGGER_JSON: /swagger.json
and my /etc/hosts file:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 mydomain.dev
Any pointers on this? Thanks in advance!
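There is no accepted answer in this excerpt, but a common approach is to reverse-proxy both Swagger containers through the existing nginx container. A minimal sketch, assuming the conf file is mounted via the ./docker/nginx/conf.d volume above and that the server name and certificate paths are placeholders you would replace with your own:

# docker/nginx/conf.d/swagger.conf -- hypothetical reverse-proxy config
server {
    listen 443 ssl;
    server_name domain.dev;
    ssl_certificate     /etc/nginx/ssl/domain.dev.crt;
    ssl_certificate_key /etc/nginx/ssl/domain.dev.key;

    # Compose attaches all three services to the same default network,
    # so nginx can reach them by service name on their internal port 8080.
    location /swagger-ui/ {
        proxy_pass http://swagger-ui:8080/;
    }
    location /swagger-editor/ {
        proxy_pass http://swagger-editor:8080/;
    }
}

With something like this in place the 8081/8082 host port mappings become optional, and the team would browse to https://domain.dev:9443/swagger-ui/ (the :9443 is needed because that is the host port mapped to the container's 443).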

Related

docker host: use docker dns to resolve container name from host network

I need to resolve a container name to its IP address from the Docker host.
The reason is that I need a container to run on the host network, but it must also be able to resolve the container "backend", which it also connects to. (The container must send and receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker ubuntu image and socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited on superuser or similar?
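No answer is included in this excerpt; one common workaround (an assumption on my part, not from the thread) is to skip DNS and ask the Docker CLI for the address, since Docker's embedded DNS at 127.0.0.11 is only reachable from inside containers:

# On the Docker host: print the compose-network IP of the "backend" container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend

# Optionally pin it in /etc/hosts so the host-networked "connector" can use the name
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend) backend" | sudo tee -a /etc/hosts

Note that the pinned entry goes stale whenever the backend container gets a new IP, so this is only a stopgap.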

Access docker ports from a container inside another container at localhost

I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that the 9200 port is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the Elasticsearch service, i.e. http://elasticsearch:9200
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers so that they can be reached by their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
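A quick way to verify this from the running stack (a sketch; it assumes the service names from the compose file above and that curl is available in the application image):

# Hit Elasticsearch by service name from inside the application container
docker-compose exec serverapplication curl http://elasticsearch:9200/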

How to access docker container using localhost address

I am trying to access a docker container from another container using localhost address.
The compose file is pretty simple. Both containers ports are exposed.
There are no problems when building.
On my host machine I can successfully run curl http://localhost:8124/ and get a response.
But inside the django_container the same command gives a Connection refused error.
I tried putting them on the same network, but the result didn't change.
However, if I use the internal IP of that container, like curl 'http://172.27.0.2:8123/', I do get a response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With the line - "8124:8123" you're mapping port 8123 of the clickhouse container to port 8124 on the host, which is what allows you to access clickhouse from localhost at port 8124.
If you want to hit the clickhouse container from another container on the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes shown above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in @Billy Ferguson's answer, you can use localhost from the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
But from another container (django) you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but then the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
It depends on the config.xml settings. If config.xml has <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001

Access ftp service via other docker container

I have a Golang app that is supposed to connect to an FTP server.
Both the Golang app and the FTP server are dockerized, but I don't know how to connect to the FTP server from the Golang app.
Here is my docker-compose.yml
version: '2'
services:
  myappgo:
    image: myappgo:exp
    volumes:
      - ./volume:/go
    networks:
      myappgo_network:
    env_file:
      - test.env
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "30000-30009:30000-30000"
    environment:
      PUBLICHOST: "localhost"
      FTP_USER_NAME: "test"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/test"
    restart: on-failure
    networks:
      myappgo_network:
networks:
  myappgo_network:
When I run docker-compose, all services come up.
I can get the IP of the FTP container with:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ftpd-server
Then I installed an FTP client, lftp, in my Alpine-based golang container:
docker exec -it my_app_go sh
apk add lftp
lftp -d ftp://test:test@172.19.0.2 # -d for debug
lftp test@172.19.0.2:~> ls
---- Connecting to 172.19.0.2 (172.19.0.2) port 21
`ls' at 0 [Connecting...]
What am I missing?
At a minimum, you need 21/TCP for commands and 20/TCP for data on the ftp-server:
ports:
  - "21:21"
  - "20:20"
  - "30000-30009:30000-30009"
I changed your compose-file a little bit:
version: '2'
services:
  myappgo:
    image: alpine:3.8
    tty: true
    networks:
      swarm_default:
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "20:20"
      - "30000-30009:30000-30009"
    environment:
      PUBLICHOST: "localhost"
      FTP_USER_NAME: "test"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/test"
    restart: on-failure
    networks:
      swarm_default:
networks:
  swarm_default:
Then I created the file /home/test/1 on the ftpd-server, and I can see it from the myappgo container:
/ # lftp ftp://test:test@172.19.0.2
lftp test@172.19.0.2:/> dir
-rw-r--r-- 1 0 0 0 Jan 22 14:18 1
First, simplify your compose file:
version: '3' # I assume you can migrate to version 3, yes?
services:
  myappgo:
    image: myappgo:exp
    volumes:
      - ./volume:/go
    env_file:
      - test.env
  ftpd-server:
    image: stilliard/pure-ftpd:hardened
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "test"
      FTP_USER_PASS: "test"
      FTP_USER_HOME: "/home/test"
    restart: on-failure
Second, the default network is created by docker-compose; there is no need to declare it explicitly. All services are attached to it under their service names, so you address them not by IP but by name, e.g. ftpd-server.
Third, you don't need to publish ports if you only access them from inside the network; publish them only when you need access from outside.
Next, launch the FTP server bound to 0.0.0.0; binding any TCP service to localhost or 127.0.0.1 makes it accessible only locally.
Last, use service names to connect. Forget about IP addresses and docker inspect. Your connection from myappgo to the FTP server will look like ftp://ftpd-server/foo/bar
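As a quick check of the name-based connection (a sketch, reusing the lftp client the asker already installed; the service names are those from the compose file above):

# From the Docker host, open a shell in the app container and connect by service name
docker-compose exec myappgo sh
apk add lftp
lftp -u test,test ftpd-server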

Cannot connect to Redis from Laravel Application

I have to configure Redis with Socket.io in my Laravel application. However, whatever I have tried so far, I get the same error:
Connection refused [tcp://127.0.0.1:6379]
I can go into the container with docker exec -it id sh, and when I ping the Redis server I get the PONG message. The client is already set to 'predis' in my database.php file and the package is installed.
.env
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
docker-compose.yml
version: "2"
services:
api:
build: .
ports:
- 9000:9000
volumes:
- .:/app
- /app/vendor
depends_on:
- postgres
- redis
environment:
DATABASE_URL: postgres://xx#postgres/xx
postgres:
image: postgres:latest
environment:
POSTGRES_USER: xx
POSTGRES_DB: xx
POSTGRES_PASSWORD: xx
volumes:
- .Data:/var/lib/postgresql/data
ports:
- 3306:5432
redis:
build: ./Redis/
ports:
- 6003:6379
volumes:
- ../RedisData/data:/data
command: redis-server --appendonly yes
Dockerfile (redis)
FROM redis:alpine
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
The error is saying it can't connect to 127.0.0.1 on port 6379, so make sure the host and port are right:
- host 127.0.0.1 only works if you run PHP on the same machine as Redis, i.e. on the Docker host itself, but in that case the port would have to be 6003
- port 6379 is fine, but the host is not: you must use the Docker container hostname, redis
- also make sure Laravel's configuration cache is up to date (see the sketch below)
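If the configuration cache is stale, Laravel keeps using the old REDIS_HOST value; a minimal sketch of refreshing it inside the api container (the service name api comes from the compose file above):

# Re-read .env and config after changing REDIS_HOST
docker-compose exec api php artisan config:clear
docker-compose exec api php artisan cache:clear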
Set your REDIS_HOST to redis, like this: REDIS_HOST=redis. The reason is that you named your Redis service redis in your compose file.
Had the same issue...
I also updated the following in redis.conf:
bind 127.0.0.1
To
bind redis
since redis is the hostname inside the compose network now.
