I have created 2 services in Docker Swarm from the images app1 and app2, where service app1 makes a call to app2. In Docker Swarm, service app1 can't connect to service app2 at app2:5000 (<service-name>:<port>) and I get requests.exceptions.ConnectionError. On the contrary, if I create normal containers (without Docker Swarm), then app1 can easily call app2 at app2:5000 (<container-name>:<port>).
Inside Docker Swarm, the following commands were used to create the services:
$ sudo docker service create --name app1 -p 5001:5000 app1:latest
$ sudo docker service create --name app2 -p 5002:5000 app2:latest
Outside Docker Swarm, the following commands are used to run the containers:
$ sudo docker-compose build
$ sudo docker-compose up
The code used to build the images app1 and app2 is shown below.
app.py (App1)
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route('/')
def func1():
    return jsonify('This is App #1')

@app.route('/call')
def func2():
    res = requests.get('http://app2:5000/call')
    res = res.json()
    return jsonify(res)

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
app.py (App2)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def func1():
    return jsonify('This is App #2')

@app.route('/call')
def func2():
    return jsonify('Call to App2 is Successful')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
docker-compose.yml
version: '3.3'

services:
  app1:
    build: ./app1
    image: "app1:latest"
    container_name: app1
    ports:
      - "5001:5000"
    networks:
      - net1

  app2:
    build: ./app2
    image: "app2:latest"
    container_name: app2
    ports:
      - "5002:5000"
    networks:
      - net1

networks:
  net1:
    external: true
Putting the services on the same network fixed the issue.
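For reference, a minimal sketch of that fix, assuming an overlay network named net1 (the network name is illustrative; the service names and images come from the commands above):

$ sudo docker network create --driver overlay net1
$ sudo docker service create --name app1 --network net1 -p 5001:5000 app1:latest
$ sudo docker service create --name app2 --network net1 -p 5002:5000 app2:latest

With both services attached to net1, app1 can reach app2 at http://app2:5000 through Swarm's built-in DNS.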
Related
I am trying to test an API that sends long-running jobs to a queue processed by Celery workers. I am using RabbitMQ running in a Docker container as the message queue. However, when sending a message to the queue I get the following error: Error: [Errno 111] Connection refused
Steps to reproduce:
Start RabbitMQ container: docker run -d -p 5672:5672 rabbitmq
Start Celery server: celery -A celery worker --loglevel=INFO
Build docker image: docker build -t fastapi .
Run the container: docker run -it -p 8000:8000 fastapi
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
celery==5.2.7
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
app.py:
from fastapi import FastAPI
import tasks

app = FastAPI()

@app.get("/{num}")
async def root(num):
    tasks.update_db.delay(num)
    return {"success": True}
tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
You can't connect to rabbitmq on localhost; it's not running in the same container as your Python app. Since you've exposed rabbit on your host, you can connect to it using the address of your host. One way of doing that is starting the app container like this:
docker run -it -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
And then modify your code like this:
celery = Celery('tasks', broker='amqp://host.docker.internal')
With that code in place, let's re-run your example:
$ docker run -d -p 5672:5672 rabbitmq
$ docker run -d -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
$ curl http://localhost:8000/1
{"success":true}
There's no reason to publish the rabbitmq ports on your host if you only need to access it from within a container. When building an application with multiple containers, using something like docker-compose can make your life easier.
If you used the following docker-compose.yaml:
version: "3"

services:
  rabbitmq:
    image: rabbitmq

  app:
    build:
      context: .
    ports:
      - "8000:8000"
And modified your code to connect to rabbitmq:
celery = Celery('tasks', broker='amqp://rabbitmq')
You could then run docker-compose up to bring up both containers. Your app would be exposed on host port 8000, but rabbitmq would only be available to your app container.
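As a quick sanity check with that compose file (reusing the curl call from the earlier standalone run), something like this should work:

$ docker-compose up -d
$ curl http://localhost:8000/1
{"success":true}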
Incidentally, rather than hardcoding the broker uri in your code, you might want to get that from an environment variable instead:
celery = Celery('tasks', broker=os.getenv('APP_BROKER_URI'))
That allows you to use different connection strings without needing to rebuild your image every time. We'd need to modify the docker-compose.yaml to include the appropriate variable:
version: "3"

services:
  rabbitmq:
    image: rabbitmq

  app:
    build:
      context: .
    environment:
      APP_BROKER_URI: "amqp://rabbitmq"
    ports:
      - "8000:8000"
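That same variable also makes it easy to point an already-built image at a different broker at run time. For instance, a sketch that reuses the standalone host.docker.internal setup shown earlier, with no rebuild, only a changed environment:

$ docker run -d -p 8000:8000 --add-host host.docker.internal:host-gateway \
    -e APP_BROKER_URI=amqp://host.docker.internal fastapi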
Update tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://user:pass@host:port//')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
As the question says, here is my situation.
My project folder is:
PROJ
|____docker-compose.yml
|____servDir/
     |____Dockerfile
     |____server.py
In the docker-compose.yml:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "5001:5002"
server.py:
from flask import Flask, request
import json

app = Flask(__name__)
PORT = 5001

@app.route("/greetings/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=int(PORT), debug=True)
When I run docker-compose up and go to http://localhost:5001/greetings/ I receive an ERR_CONNECTION_REFUSED error.
Instead, if I set the ports as 5001:5001, I'm able to receive the page content.
Why? Should I always set them equal in order to reach the container from the browser?
I thought that the ports configuration was HOST:CONTAINER, and that the browser would be a HOST service.
UPDATE:
Dockerfile:
FROM python:3
WORKDIR /home/python/app/
COPY . /home/python/app/
RUN chmod a+x *.py
CMD ["python", "./server.py"]
This is right: HOST:CONTAINER.
Try this to expose it to your localhost and LAN:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "0.0.0.0:5001:5002"
or this to expose it only to your localhost:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "127.0.0.1:5001:5002"
Also, you wrote:
"When I run docker-compose up and go to http://localhost:6002/greetings/ I receive an ERR_CONNECTION_REFUSED error."
Looking at your docker-compose file, you should access it like this instead:
http://localhost:6002 --> http://localhost:5001
Change the server.py config:
from flask import Flask, request
import json

app = Flask(__name__)
PORT = 5002

@app.route("/greetings/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=int(PORT), debug=True)
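With the app now listening on 5002 inside the container and the 5001:5002 mapping in place, the page should be reachable from the host, for example:

$ curl http://localhost:5001/greetings/
Hello World!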
I have 2 applications running on different networks, each using a separate docker-compose.yaml. I am trying to make a request from app A to app B, but it does not work.
docker exec -it app_a_running curl http://localhost:8012/user/1
So I got this error:
cURL error 7: Failed to connect to localhost port 8012
docker-compose-app-a.yaml
version: "3"

services:
  app:
    build: go/
    restart: always
    ports:
      - 8011:8011
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-a
    command: sleep 72000
    networks:
      - app-a-network

networks:
  app-a-network:
docker-compose-app-b.yaml
version: "3"

services:
  app:
    build: go/
    restart: always
    ports:
      - 8012:8012
    volumes:
      - ../src/app:/go/src/app
    working_dir: /go/src/app
    container_name: app-b
    command: sleep 72000
    networks:
      - app-b-network

networks:
  app-b-network:
Questions:
Is it possible to do this?
If it is, please suggest how :)
You can use curl on docker containers. The reason why your curl command didn't work is probably that you did not publish your docker container's port. For example, try:
docker run -d -p 8080:8080 tomcat
instead of
docker run -d tomcat
This will forward the port 8080 of your machine to the port 8080 of your container.
If you have a shell into your container, you can use the service name or the container's name to curl a container on your Docker network, provided your target is on the same network.
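For example, one possible way to wire up the two stacks from the question after they are running is to attach both containers to a shared user-defined network and then curl by container name (shared-net is just an illustrative name; app-a, app-b, the port and the path come from the question, and this assumes the service in app-b is actually listening on 8012):

$ docker network create shared-net
$ docker network connect shared-net app-a
$ docker network connect shared-net app-b
$ docker exec -it app-a curl http://app-b:8012/user/1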
I am trying to run a simple flask app in debug mode using docker-compose. I have created my Dockerfile as follows:
FROM jazzdd/alpine-flask
EXPOSE 80
My docker-compose file looks like this:
version: '2'

networks:
  test_network:
    driver: bridge

services:
  db:
    networks:
      - test_network
    image: postgres:9.5.3
    env_file:
      - docker.env
    expose:
      - 5432

  app:
    networks:
      - test_network
    build: .
    env_file:
      - docker.env
    expose:
      - 80
    ports:
      - 80:80
    volumes:
      - ./app/:/app
    command: -d
My docker.env just has the password for the postgres database. I created a simple Python file as follows:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "Hello, World"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
Now, to run the app, I am using the docker-compose up -d --build command. I would assume that after the app starts on the server, any change I make to the app.py file would be reflected on the webpage without me having to restart the containers. I'm not seeing the expected behavior. I tried setting my local env variable FLASK_DEBUG=1 but I am not sure if that would help. Am I missing something?
I also referenced this page but didn't see anything useful.
A sample (simplified) run-through demonstrating file edits with no need for container restarts is outlined below for your reference.
app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "Hello, World"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)
You will need to specify the port for the flask development server in order to match the exposed container port of 80.
Summary of the steps (macOS):
Start with an empty directory
Create app.py
docker run (a possible command is sketched below)
curl localhost (this will display Hello, World)
Edit app.py
curl localhost (this should display the new edits)
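The exact docker run command isn't shown above; a plausible equivalent, assuming the jazzdd/alpine-flask image serves whatever is bind-mounted at /app (as the compose file in the question does) and that its -d argument enables debug mode, would be:

$ docker run -d -p 80:80 -v "$PWD":/app jazzdd/alpine-flask -d

With the bind mount in place, edits to app.py on the host are picked up by the Flask reloader inside the container, so no restart is needed.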
In my case I had a conflict with gevent. Here's the workaround:
import os

# FLASK_DEBUG is read from the environment as a string, so compare against '1'
if not (os.environ.get('FLASK_DEBUG') == '1'):
    from gevent import monkey
    monkey.patch_all()
I already have a MySQL container named "mysqlDemoStorage" running, exposing port 3306 on 0.0.0.0:3306. I also have a Flask app which provides a login page and a table-displaying page. The Flask app works fine on the host. The login page connects to the "user" table in the MySQL container, and the table-displaying page connects to another table holding all the data to display.
The docker-compose file I used to create the mysql container is as follows:
version: '3'

services:
  mysql:
    container_name: mysqlDemoStorage
    environment:
      MYSQL_ROOT_PASSWORD: "demo"
    command:
      --character-set-server=utf8
    ports:
      - 3306:3306
    image: "docker.io/mysql:latest"
    restart: always
Now I want to dockerize the Flask app so that I can still view the app from the host. The mysql container details are as follows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c48955b3589e mysql:latest "docker-entrypoint.s…" 13 days ago Up 49 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp mysqlDemoStorage
The dockerfile of the flask app I wrote is as follows:
FROM python:latest
WORKDIR /storage_flask
ADD . /storage_flask
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python","run.py"]
The Flask image can be built successfully, but when I run the image, I fail to load the page. One point I think causes the problem is the __init__.py file that initializes the Flask app, which is as follows:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_bcrypt import Bcrypt
from flask_login import LoginManager

app = Flask(__name__)
app.config['SECRET_KEY'] = 'aafa4f8047ce31126011638be8530da6'
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:demo@localhost:3306/storage'

db = SQLAlchemy(app)
bcrypt = Bcrypt(app)
login_manager = LoginManager(app)
login_manager.login_view = "login"
login_manager.login_message_category = 'info'

from storage_flask import routes
I was thinking of passing the IP of the mysql container to the flask container as the config string for the DB connection, but I'm not sure how to do it.
Could someone help to solve the problem? Thank you
Change this line:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:demo@localhost:3306/storage'
to:
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:demo@mysql:3306/storage'
You also need to make sure that both containers are connected to the same network. For that, you need to update your docker-compose files to be something like the files below.
version: '3.7'

networks:
  my_network_name:
    name: my_network_name
    external: false

services:
  mysql:
    container_name: mysqlDemoStorage
    environment:
      MYSQL_ROOT_PASSWORD: "demo"
    command:
      --character-set-server=utf8
    ports:
      - 3306:3306
    image: "docker.io/mysql:latest"
    restart: always
    networks:
      - my_network_name
The second file:
version: '3.7'

networks:
  my_network_name:
    name: my_network_name
    external: true

services:
  python_app:
    container_name: pythonDemoStorage
    ports:
      - 5000:5000
    image: "Myimage"
    restart: always
    networks:
      - my_network_name
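Because the second file marks my_network_name as external, the stack that creates the network (the MySQL one) has to be brought up first. Assuming the two files are saved as docker-compose-mysql.yml and docker-compose-app.yml (hypothetical names), the order would be:

$ docker-compose -f docker-compose-mysql.yml up -d
$ docker-compose -f docker-compose-app.yml up -d

After that, the Flask container can reach the database at mysql:3306, which matches the updated SQLALCHEMY_DATABASE_URI.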