I am trying to set up a Learning Locker server within Docker (on Windows 10, with Docker running on WSL) using the repo from michzimney. The service is composed of several Docker containers (Mongo, Redis, NGINX, etc.) networked together. Using the provided docker-compose.yml file I have been able to set up the service and access it from localhost, but I cannot reach the server from any other machine on my home network.
This is a specific case, but some guidance will be valuable, since I am very new to Docker and will need to build many such environments in the future (for now on Windows, later in Docker on Synology) where the services can be accessed from the network and the internet.
My research led me to publishing ports on a specific host IP with docker -p [hostip]:80:80, but this didn't work for me. I have also turned off the Windows firewall, since that seems to cause a host of issues for some people, but still no effect. I tried bridging the WSL virtual switch using the Windows 10 Hyper-V manager, but that didn't work, and I tried bridging the WSL network adapter to the LAN using basic Windows 10 networking, but that didn't work either and I had to reset my network.
So the first question is: is this a Windows networking issue or a Docker configuration issue?
The second question, assuming it's a Docker configuration issue, is: how can I modify the following YML file to make the service accessible to the outside network?
version: '2'
services:
  mongo:
    image: mongo:3.4
    restart: unless-stopped
    volumes:
      - "${DATA_LOCATION}/mongo:/data/db"
  redis:
    image: redis:4-alpine
    restart: unless-stopped
  xapi:
    image: learninglocker/xapi-service:2.1.10
    restart: unless-stopped
    environment:
      - MONGO_URL=mongodb://mongo:27017/learninglocker_v2
      - MONGO_DB=learninglocker_v2
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/xapi-storage:/usr/src/app/storage"
  api:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node api/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  ui:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "./entrypoint-ui.sh"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
      - api
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
      - "${DATA_LOCATION}/ui-logs:/opt/learninglocker/logs"
  worker:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node worker/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  nginx:
    image: michzimny/learninglocker2-nginx:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
    restart: unless-stopped
    depends_on:
      - ui
      - xapi
    ports:
      - "443:443"
      - "80:80"
So far I have attempted to change the ports option to the following:
ports:
  - "192.168.1.102:443:443"
  - "192.168.1.102:80:80"
But then the container wasn't even accessible from the host machine anymore. I also tried adding network_mode: host under the nginx service, but the build failed, saying it is not compatible with port mapping. Do I need to set network_mode: host for every service, or is the problem something else entirely?
Any help is appreciated.
By the looks of your docker-compose.yml, you are publishing ports 80 and 443 to your host (the Windows machine). So if your Windows IP is 192.168.1.102, you should be able to reach http://192.168.1.102 and https://192.168.1.102 from your LAN, provided nothing is blocking them (firewall etc.).
You can confirm that you are indeed listening on those ports by running netstat -a and checking that ports 80 and 443 show up in the LISTENING state.
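For example, a minimal check (assuming cmd or PowerShell on the Windows host, and the LAN IP 192.168.1.102 from the question):

REM On the Windows host: confirm something is listening on the published ports
netstat -an | findstr LISTENING | findstr ":80 :443"

REM From another machine on the LAN: test reachability of the Docker host
curl http://192.168.1.102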
I want to switch from using the docker run command to a docker-compose file for my Nextcloud instance, which runs behind a reverse proxy (jwilder/nginx-proxy).
This is the run command I used to use:
sudo docker run -d -p 8080:80 --expose 80 --expose 443 -e VIRTUAL_HOST=nextcloud.example.com -v nextcloud:/var/www/html --restart=always --name=nextcloud nextcloud:24.0.8
I installed MariaDB later inside the container so that I didn't have to struggle with networking. I also use port 8080 only on my internal network, for fast uploading and downloading.
This worked quite well, but now I want to create a similar environment with docker-compose:
version: '3.8'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my-super-strong-password
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:24.0.8
    restart: always
    ports:
      - 8080:80
    expose:
      - 80
      - 443
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=my-other-super-strong-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
      - PHP_MEMORY_LIMIT=1G
      - PHP_UPLOAD_LIMIT=128M
      - VIRTUAL_HOST=nextcloud.example.com
The containers start successfully and I can use Nextcloud on my internal network, but I cannot reach it from my domain. Instead I get a 502 Bad Gateway. The VIRTUAL_HOST redirection seems to work, since otherwise I'd get a 503 Service Temporarily Unavailable.
I think exposing ports 80 and 443 doesn't work.
I've tried to add a proxy network:
networks:
  proxy:
    driver: bridge
    external: true
and added
networks:
  - default
to the db service and
networks:
  - default
  - proxy
to the app service.
That didn't fix the problem. Does anyone have an idea what I can try next?
I've tried different ways of exposing the ports and tried to create different networks.
Never mind, I found the problem.
Instead of simply creating a network named proxy, I had to create the jwilder reverse-proxy itself as a docker-compose service with a name, for example myreverseproxy. In each service I want to make public, I then needed to reference that network:
networks:
  - default
  - myreverseproxy
I also had to declare the name in the top-level networks section:
networks:
  myreverseproxy:
    external: true
I have 2 services inside my Docker cluster. frontend runs on port 8090, and backend runs on port 8000. How can I make frontend call backend via a local DNS name, like fetch('https://backend.local/')? If I use the Docker hostname, I need to specify the port to call the backend. Do I need a local DNS server inside my Docker setup?
You have to create a user-defined network in Docker; all containers running in that network can then communicate with each other using their container names, or you can define an alias for each and use that. A simple docker-compose file for a backend microservice and a MySQL database can be created using the configuration below.
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
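If you want the frontend to reach the backend under a custom DNS name such as backend.local rather than the service name, you can attach a network alias, as in the following sketch (the alias is illustrative; note that DNS only resolves the host name, not the port, so calls still need port 4000 unless the backend itself listens on 80 or 443):

  backend:
    # ... same configuration as above ...
    networks:
      testNetwork:
        aliases:
          - backend.local   # other containers on testNetwork can now use http://backend.local:4000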
I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost from the server application's point of view.
How can I fix this?
It's never safe to use localhost here, since localhost means something different for your host system, for Elasticsearch and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports.
Instead:
- put them in the same network
- give the containers a name
- access Elasticsearch through its container name, which Docker automatically resolves to the current IP of your Elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the host name elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached via their service names. So from your serverapplication, you must use the name elasticsearch to connect to it.
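A quick way to verify this from inside the running container (assuming curl is available in the serverapplication image):

docker-compose exec serverapplication curl http://elasticsearch:9200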
I've created a PrestaShop store on a server. Is there any way to dockerize my store and migrate it to another server using Docker? I know that I'll need docker-compose, but to be honest I don't know what to do with the files on the current server.
OK, so I dug into the problem, and the solution to my question is below. What I did was pull the original PrestaShop image and copy my files into it.
The next step was to use the mariadb image. I had a backup.sql file exported from the previous store's phpMyAdmin:
version: '2'
services:
  prestashop:
    image: prestashop
    ports:
      - 80:80
    links:
      - mariadb:mariadb
    depends_on:
      - mariadb
    volumes:
      - ./src:/var/www/html
      - ./src/modules:/var/www/html/modules
      - ./src/themes:/var/www/html/themes
      - ./src/override:/var/www/html/override
    environment:
      - PS_DEV_MODE=1
      - DB_SERVER=mariadb
      - DB_USER=root
      - DB_PASSWD=root
      - DB_NAME=prestashop
      - PS_INSTALL_AUTO=0
  mariadb:
    image: mariadb
    volumes:
      # mount the dump as a file inside the init directory so it is imported on first start
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mariadb
    ports:
      - 81:80
    environment:
      - PMA_HOST=mariadb
      - PMA_USER=root
      - PMA_PASSWORD=root
The biggest catch is the IP of the Docker machine. Keep in mind that with Docker Toolbox the IP is 192.168.99.100, but with Docker for Windows the services are reachable on localhost.
You can use this docker-compose.yml :
version: "3"
services:
prestashop:
image: prestashop/prestashop
networks:
mycustomnetwork:
ports:
- 82:80
links:
- mariadb:mariadb
depends_on:
- mariadb
volumes:
- ./src:/var/www/html
- ./src/modules:/var/www/html/modules
- ./src/themes:/var/www/html/themes
- ./src/override:/var/www/html/override
environment:
- PS_DEV_MODE=1
- DB_SERVER=mariadb
- DB_USER=root
- DB_PASSWD=mycustompassword
- DB_NAME=prestashop
- PS_INSTALL_AUTO=0
mariadb:
image: mariadb
networks:
mycustomnetwork:
volumes:
- presta_db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=mycustompassword
- MYSQL_DATABASE=prestashop
phpmyadmin:
image: phpmyadmin/phpmyadmin
networks:
mycustomnetwork:
links:
- mariadb:mariadb
ports:
- 1235:80
depends_on:
- mariadb
environment:
- PMA_HOST=mariadb
- PMA_USER=root
- PMA_PASSWORD=mycustompassword
volumes:
presta_db:
networks:
mycustomnetwork:
external: true
Replace mycustomnetwork and mycustompassword, create the external network first (docker network create mycustomnetwork), then run docker-compose up.
Web URL: http://localhost:82
phpMyAdmin URL: http://localhost:1235
You can follow this tutorial to set up PrestaShop in a Docker environment:
https://hub.docker.com/r/prestashop/prestashop/
You will need to add your current files to the PrestaShop container and most likely import your database into a MySQL container. docker-compose will be used to launch those containers together. Once this is done, you will be able to deploy the whole thing anywhere.
You should also include a bridge network in your compose file; some of the examples here may work: https://runnable.com/docker/docker-compose-networking.
This way the database can be configured to be accessible only by prestashop on the local Docker network, without being exposed outside, as in the sketch below. PrestaShop can also point to the database by its service name, in case the IP changes. All you would leave exposed is port 80 on the app.
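A minimal sketch of that idea, assuming the service names from the compose file above and an illustrative network name backend (only the app publishes a port; the database has no ports: section, so it is reachable solely over the internal bridge network):

services:
  prestashop:
    image: prestashop/prestashop
    ports:
      - 80:80              # the only port published to the outside
    environment:
      - DB_SERVER=mariadb  # reach the database by service name, not by IP
    networks:
      - backend
  mariadb:
    image: mariadb
    # no ports: section, so MariaDB is only reachable inside the backend network
    networks:
      - backend
networks:
  backend:
    driver: bridge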
I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with this problem.
Is there any proper documentation on how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer, i.e. ipfs/go-ipfs, inside it. We need to run that separately.
So in order to run a multi-node (2-node in this case) IPFS cluster in a Docker environment, we need to run two IPFS peer containers and two IPFS cluster containers, one cluster container corresponding to each peer.
Your docker-compose file will then look as follows:
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide the IPFS_API to each ipfs-cluster container as an environment variable, so that the cluster daemon knows its corresponding peer. Both ipfs-cluster containers also need the same CLUSTER_SECRET.
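Once the stack is up, you can check that the two cluster peers see each other; ipfs-cluster-ctl ships inside the ipfs/ipfs-cluster image, and the container name is the one from the compose file above:

docker exec ipfs-cluster0 ipfs-cluster-ctl peers ls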
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v <host-folder>:/data/ipfs-cluster to docker run.
If in fact you need to connect to another service within the docker-compose stack, you can simply refer to it by its service name, since hostname entries are created in all the containers in the docker-compose stack, so services can talk to each other by name instead of IP.
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host", which is incompatible with port mapping (see the sketch below): https://docs.docker.com/compose/compose-file/#network_mode
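In compose syntax that would look like the following sketch (the ports: section must be removed, since host networking and port publishing are mutually exclusive):

services:
  ipfs0:
    image: ipfs/go-ipfs
    network_mode: "host"   # container shares the host's network stack; no ports: allowed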