I am trying to access the Flask app from the Docker Compose getting-started tutorial from my local host, without making changes to this pruned Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: flask run
volumes:
- type: bind
source: .
target: /code
environment:
- ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ports:
- target: 5000
published: 8000
networks:
- counter-net
redis:
image: "redis:alpine"
networks:
- counter-net
networks:
counter-net:
volumes:
volume-net:
When I run docker compose up I can see Running on http://127.0.0.1:5000, but I cannot access the app at 127.0.0.1:8000 or localhost:8000.
I can see 2_counter-net when I list networks. If relevant, I had earlier tried creating a volume, but removed it when I changed the source to ., and everything came up without errors.
How can I correct my config please?
You are trying to use a bridge network so that ports opened in the container can be forwarded to ports on your host computer. It's true that you could remove the user-defined network and just rely on the default bridge network (by removing all the "networks" sections from the YAML file). That should solve your problem. However, Docker doesn't recommend this approach for production.
The other option is to add a bridge driver to your user-defined network specification:
networks:
  counter-net:
    driver: bridge
And David is right: you should fix the YAML in your environment section.
environment:
  - FLASK_APP=app.py
  - FLASK_RUN_HOST=0.0.0.0
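Putting both fixes together, here is a minimal sketch of the corrected compose file (a sketch only, assuming app.py sits at the project root as in the tutorial):
version: "3.9"
services:
  web:
    build: .
    command: flask run
    volumes:
      - type: bind
        source: .
        target: /code
    environment:
      - FLASK_APP=app.py
      - FLASK_RUN_HOST=0.0.0.0   # listen on all interfaces, not just 127.0.0.1
    ports:
      - target: 5000     # Flask's port inside the container
        published: 8000  # reachable as localhost:8000 on the host
    networks:
      - counter-net
  redis:
    image: "redis:alpine"
    networks:
      - counter-net
networks:
  counter-net:
    driver: bridge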
Related
I'm using Docker Desktop on Windows and I'm trying to get 3 containers running inside Docker Desktop.
After some research and testing, I got the 3 containers running [WEB - API - DB]; everything seems to compile/run without issues in the logs, but I can't access my web container from outside.
Here are my Dockerfiles and docker-compose file. What did I miss or get wrong?
[WEB] dockerfile
FROM node:16.17.0-bullseye-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
CMD ["npm", "run", "start"]
[API] dockerfile
FROM openjdk:17.0.1-jdk-slim
WORKDIR /app
COPY ./target/test-0.0.1-SNAPSHOT.jar /app
#EXPOSE 2022 (the issue is the same with or without this line)
CMD ["java", "-jar", "test-0.0.1-SNAPSHOT.jar"]
Docker-compose file
version: "3.8"
services:
### FRONTEND ###
web:
container_name: wallet-web
restart: always
build: ./frontend
ports:
- "80:4200"
depends_on:
- "api"
networks:
customnetwork:
ipv4_address: 172.20.0.12
#networks:
# - "api"
# - "web"
### BACKEND ###
api:
container_name: wallet-api
restart: always
build: ./backend
ports:
- "2022:2022"
depends_on:
- "db"
networks:
customnetwork:
ipv4_address: 172.20.0.11
#networks:
# - "api"
# - "web"
### DATABASE ###
db:
container_name: wallet-db
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
customnetwork:
ipv4_address: 172.20.0.10
#networks:
# - "api"
# - "web"
networks:
customnetwork:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
# api:
# web:
Listening on: (screenshot of the containers' listening output omitted)
I found several issues similar to mine, but the solutions didn't work for me.
If I understand correctly, you are trying to access it on port 80. To do that, you have to map container port 4200 to host port 80 in the YAML file: 80:4200 instead of 4200:4200.
https://docs.docker.com/config/containers/container-networking/
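As a small illustration, in the short ports syntax the host port always comes first (service name taken from the question):
web:
  ports:
    - "80:4200"   # host port 80 -> container port 4200; then browse http://localhost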
Have you looked in the browser's development console to see whether any error shows up? Your docker-compose file doesn't seem to have any issue.
However, let's try to debug it:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6245eaffd67e nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:4200->80/tcp test-api-1
Copy the container ID, then execute:
docker exec -it 6245eaffd67e /bin/bash
Now you are inside the container. Instead of the ID you can also use the container's name.
curl http://localhost:80
Note: in my case I just created a container from an nginx image.
In your case, use the port where your app is running; check it in your code if you aren't sure. A lot of JavaScript frameworks start on 3000 by default.
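For example, if your app inside the container listens on 4200 (an assumed port; check your app's startup output):
curl http://localhost:4200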
If you get an error like curl: command not found, install curl in your image:
FROM node:16.17.0-bullseye-slim
# to install packages you need root permissions, so we switch to the root user
USER root
RUN apt update -y && apt install curl -y
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
# we don't want to run the app as root, so we switch back to the node user
# (this user is defined in the node:16.17.0-bullseye-slim image)
USER node
CMD ["npm", "run", "start"]
Now curl should work (if it didn't already).
The same should work from your host.
Here is an important thing: localhost always refers to the machine itself, whether that is your physical computer or the container you are in. Every container and your PC each have their own localhost, and they are not the same.
In the docker-compose file you just map ports host:container, so your PC (the host where Docker is running) can reach the port inside the container through the host port you defined.
If you still can't access it from your host, try changing the host ports (2022, 4200, etc.); it's possible that something on your Windows machine conflicts with them.
It also happens sometimes that Docker networks create conflicts.
Execute a docker-compose down, so the network is deleted and recreated.
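A minimal sequence for that (standard docker-compose commands):
docker-compose down           # stops the containers and removes the compose network
docker-compose up --build -d  # recreates the network and rebuilds the containers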
Still not working?
Reset Docker Desktop to factory settings and check that you are on the latest version (this is always better).
If none of this helps, let me know so we can debug further.
For the sake of clarity, I'm posting here the docker-compose file I used to check. I just used nginx to test the ports, as I don't have your images.
version: "3.8"
services:
### FRONTEND ###
web:
restart: always
image: nginx
ports:
- "4200:80"
depends_on:
- "api"
networks:
- "web"
### BACKEND ###
api:
restart: always
image: nginx
ports:
- "2022:80"
depends_on:
- "db"
networks:
- "api"
- "web"
### DATABASE ###
db:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
- "api"
networks:
api:
web:
Update:
You can log what happens in the container like so:
docker logs <container-id-or-name>
If you are using Visual Studio Code, there is an excellent Docker extension, also by Microsoft: just search for "docker" in the extensions. It has something like 20,000,000 downloads and can help you a lot with debugging containers etc. After installing it you'll see the Docker icon in the left toolbar.
If you can see the errors that occur directly in the logs, maybe you can post them, at least partially, so it is possible to understand the problem. Please also tell us something about your frontend app's architecture (React app, Angular). Some frameworks need to be started on 0.0.0.0 instead of 127.0.0.1 or they won't work.
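For example, if the frontend is an Angular app (an assumption on my part; adjust for your framework), you could bind the dev server to all interfaces by forwarding --host through npm in the Dockerfile's CMD:
# hypothetical: npm passes everything after "--" through to the underlying "ng serve"
CMD ["npm", "run", "start", "--", "--host", "0.0.0.0"]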
I have been playing around with Docker, Celery, Redis and Flask for the past 2-3 days. After successfully setting up a Flask, Celery and Redis server, I decided to move on to the next step: dockerizing it. I have successfully created a Docker image and a Compose file which seem to work just fine when building. I am using a local Redis server, which I can access by using docker.for.mac.localhost as the host name from inside the container, but when I try to access the Flask app from outside the container while it's running, it doesn't work.
Having done some research, I have tried the following:
Running with server host as 0.0.0.0
Exposing and using a different port other than 5000
This is my Dockerfile:
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python3", "./app.py"]
And this is my docker-compose.yml file
version: "3"
services:
web:
container_name: web
build: ./api
ports:
- "5000:5001"
links:
- redis
depends_on:
- redis
environment:
- FLASK_ENV=development
volumes:
- ./api:/app
redis:
container_name: redis
image: redis:5.0.5
hostname: redis
worker:
build:
context: ./api
hostname: worker
entrypoint: celery
command: -A app.celery worker --loglevel=info
volumes:
- ./api:/app
links:
- redis
depends_on:
- redis
Thanks for any help in advance!
Your port mapping is backwards. It should be external to internal.
ports:
  - "5001:5000"
I have a docker-compose file, as seen below. The "app" and the Flask ("python") services are separate containers. I cannot connect to the "python" container, although both are on the same network. However, if I expose port 5000 to the outside via ports (e.g. - "9000:5000"), then it is accessible. But I only want "app" to access "python" internally, not from outside the host.
Isn't this possible?
version: '3'
services:
  python:
    build:
      context: ./docker/python
    image: python:3.6.12
    volumes:
      - IQData:/NMIQV2/Data
      - IQCode:/NMIQV2/Code
      - IQAnalysis:/NMIQV2/Analysis
    networks:
      - base-network
  app:
    build:
      context: .
    ports:
      - "8080:80"
      - "5000:5000"
    networks:
      - base-network
    links:
      - redis
      - mongo
      - python
    depends_on:
      - redis
      - mongo
      - python
networks:
  base-network:
    driver: bridge
The python container's Dockerfile:
FROM python:3.6.12
EXPOSE 5000
# set the working directory in the container
WORKDIR /NMIQV2/Code
# copy the content of the local src directory to the working directory
COPY src/ .
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
server.py
from flask import Flask

server = Flask(__name__)

@server.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    server.run(host='0.0.0.0', port=5000)
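For what it's worth, a sketch of the internal-only variant under the question's own setup: on a shared user-defined network, containers reach each other by service name, so app can call http://python:5000 as long as the python service simply has no ports entry (expose is optional and only documents the port):
python:
  build:
    context: ./docker/python
  expose:
    - "5000"        # visible to other containers on base-network, not to the host
  networks:
    - base-network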
I have tried several possible solutions on Stack Overflow but nothing seems to work for me.
I am developing microservices using RabbitMQ. The solution contains multiple projects and runs without any problem, but as soon as I use the docker-compose option to build the project, Visual Studio throws the following exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException ExtendedSocketException: Connection refused 127.0.0.1:5672
In my solution, I have three projects communicating with each other via RabbitMQ.
Below is the code for my YAML file.
My docker-compose.yaml:
version: '3.4'
services:
  rabbitmq:
    hostname: webnet
    image: rabbitmq:3.7.2-management
    ports:
      - "15672:15672"
      - "5672:5672"
    networks:
      - webnet
  sql-server-db:
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "customerdbalten#123"
      ACCEPT_EULA: "Y"
    networks:
      - webnet
  myproject.simulation.api:
    image: ${DOCKER_REGISTRY-}myprojectsimulationapi
    build:
      context: .
      dockerfile: myproject.Simulation.Api/Dockerfile
    links:
      - rabbitmq
    ports:
      - '5000'
    networks:
      - webnet
  myproject.updateservice.api:
    image: ${DOCKER_REGISTRY-}myprojectupdateserviceapi
    build:
      context: .
      dockerfile: myproject.updateservice.Api/Dockerfile
    links:
      - rabbitmq
      - sql-server-db
    ports:
      - '5050'
    networks:
      - webnet
  myproject.web:
    image: ${DOCKER_REGISTRY-}myprojectweb
    build:
      context: .
      dockerfile: MyProject.Web/Dockerfile
    links:
      - rabbitmq
    ports:
      - '5001'
    networks:
      - webnet
networks:
  webnet:
    driver: bridge
My Dockerfile looks like the following:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY MyProject.UpdateService.Api/MyProject.UpdateService.Api.csproj MyProject.UpdateService.Api/
COPY MyProject.Common/MyProject.Common.csproj MyProject.Common/
RUN dotnet restore MyProject.UpdateService.Api/MyProject.UpdateService.Api.csproj
COPY . .
WORKDIR /src/MyProject.UpdateService.Api
RUN dotnet build MyProject.UpdateService.Api.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish MyProject.UpdateService.Api.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.UpdateService.Api.dll"]
I've also created another simple solution with nothing but two projects - a sender and a receiver - that use RabbitMQ. This solution throws the same exception when run with docker-compose; otherwise it just runs. Its YAML file contains nothing but auto-generated code.
From the discussion, we found out that the RabbitMQ container is not running because there is already a RabbitMQ service running on the host.
You have two options. Either stop the host RabbitMQ service and then connect to the RabbitMQ container:
Hostname: rabbitmq:5672
Or, if you want to connect to the host's RabbitMQ service, you can use
Hostname: host.docker.internal
#or
Hostname: HOST_IP
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows.
The gateway is also reachable as gateway.docker.internal.
docker-for-windows-networking
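As a concrete sketch, the broker address could be passed to each service via the environment (RABBITMQ_HOST is a hypothetical variable name; your code would read it instead of hard-coding 127.0.0.1):
myproject.web:
  environment:
    - RABBITMQ_HOST=rabbitmq                  # the compose service name on webnet
    # or, to reach a broker running on the Windows host:
    # - RABBITMQ_HOST=host.docker.internal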
I have two Docker containers: a simple REST API and a database. I want to start the REST API only when the database is ready. I tried several solutions before I figured out that the problem is in the network.
The wait-for-it.sh script works perfectly fine when I start the REST API without waiting for the database, then do docker exec -it <api-container-name> bash and run it from there. When I try to run it as a CMD in the Dockerfile, it can't establish a connection with the database. The same thing happens when I start the API while the database is already running.
API Dockerfile:
FROM microsoft/aspnetcore-build:2.0
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN chmod +x Scripts/wait-for-it.sh
RUN Scripts/wait-for-it.sh -t 30 172.20.1.2:3306 #times out when waiting for database
RUN dotnet publish -c Release -o out
ENTRYPOINT ["dotnet", "out/Atlanta.dll"]
docker-compose:
version: '3'
networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.1.0/24
services:
  main-db:
    container_name: main-db
    image: mysql
    environment:
      MYSQL_DATABASE: Main
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "5000:3306"
    networks:
      backend:
        ipv4_address: 172.20.1.2
  atlanta-ms:
    container_name: atlanta
    build:
      context: ./Atlanta
      dockerfile: Dockerfile
    image: atlanta:ms
    ports:
      - "5001:80"
    networks:
      backend:
        ipv4_address: 172.20.1.3
I see your confusion.
The RUN statement isn't doing what you think it is: it runs wait-for-it.sh while the image is being built, not under docker-compose's control. It will not run when your container runs! You should check out Docker's documentation about container start-up order and docker-compose.
Detached mode for your database has no effect on the build/run process, except that you won't be able to interact with it.
Using docker-compose will, by default, run all the containers in non-interactive mode, but that's okay because you can still attach to and detach from the containers.
You should add a depends_on option to your docker-compose.yml and run wait-for-it.sh at container start from the docker-compose.yml, not the Dockerfile. Since your Dockerfile already sets an ENTRYPOINT, the cleanest way is an entrypoint override that waits for the database and then hands off to the app:
version: '3'
networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.1.0/24
services:
  main-db:
    container_name: main-db
    image: mysql
    environment:
      MYSQL_DATABASE: Main
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "5000:3306"
    networks:
      backend:
        ipv4_address: 172.20.1.2
  atlanta-ms:
    container_name: atlanta
    build:
      context: ./Atlanta
      dockerfile: Dockerfile
    # Add this `depends_on`
    depends_on:
      - "main-db"
    # Add this `entrypoint` override: wait for the database, then start the app.
    # (A plain `command` would only be appended as arguments to the Dockerfile's
    # ENTRYPOINT, i.e. to dotnet, and would never run wait-for-it.sh.)
    entrypoint: ["Scripts/wait-for-it.sh", "-t", "30", "172.20.1.2:3306", "--", "dotnet", "out/Atlanta.dll"]
    image: atlanta:ms
    ports:
      - "5001:80"
    networks:
      backend:
        ipv4_address: 172.20.1.3
I would recommend moving wait-for-it.sh to your WORKDIR and referencing it as "./wait-for-it.sh", just to make your life easier.
Don't forget to remove RUN Scripts/wait-for-it.sh -t 30 172.20.1.2:3306 from your Dockerfile! (Because docker-compose is handling it now.)
And remember that the command to bring everything up is docker-compose up (unless, that is, you'd like to use Docker Swarm instead).
Well, I managed to solve my problem. I guess this is not a clean way of doing it, but it works for now.
I'm building and running both containers first. Then, when both the API and the DB are up, I remotely execute the migration commands using docker exec. The wait-for-it script is not needed here.
run.sh
#!/bin/bash
docker-compose up -d --build atlanta-ms main-db
docker exec atlanta bash -c "dotnet ef migrations add InitialMigration && dotnet ef database update"
REST API Dockerfile
FROM microsoft/aspnetcore-build:2.0
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
ENTRYPOINT ["dotnet", "out/Atlanta.dll"]
docker-compose.yml
version: '3'
networks:
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.1.0/24
services:
  main-db:
    container_name: main-db
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "5000:3306"
    networks:
      backend:
        ipv4_address: 172.20.1.2
  atlanta-ms:
    container_name: atlanta
    build:
      context: ./Atlanta
      dockerfile: Dockerfile
    image: atlanta:ms
    ports:
      - "5001:80"
    networks:
      backend:
        ipv4_address: 172.20.1.3
The drawback of this solution is that the REST API runs in detached mode, so I cannot stop it just by pressing Ctrl+C in the console. Adding a docker attach atlanta line at the end of the run.sh script works (I can stop that container with a simple Ctrl+C), but it doesn't work with several containers (you cannot attach to more than one container), so I have to write a simple stop script and call it independently of run.sh to stop the containers, which is a little inconvenient; such a stop script could look like the sketch below.
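A minimal sketch of that stop script (a hypothetical stop.sh, using the service names from the compose file):
#!/bin/bash
# stop the containers started by run.sh
docker-compose stop atlanta-ms main-db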
I would be very grateful if someone could tell me how I can attach to a few containers at once, so I could stop them all with Ctrl+C (that's how docker-compose up service1 service2 service3 without the "-d" flag works).