I want a ZMQ connection between my Docker container and my host machine over the bridged connection between host and Docker, using the ports designated in my docker-compose.yml file.
The intended output is for my client to print the data sent over from server.py (zmqserver, in Docker) to client.py (on the host).
Docker-compose
version: '3.7'
services:
  zmqserver:
    build: ./Server/
    container_name: zmq_server
    ports:
      - 25565:13371
    # network_mode: "host"
Dockerfile
FROM nvidia/cuda:11.7.0-devel-rockylinux8
WORKDIR /opt/testing
RUN yum install -y python3 python3-pip && python3 -m pip install pyzmq
# EXPOSE 13371:13371
ADD server.py .
ENTRYPOINT ["python3", "-u", "server.py"]
server.py
import zmq
import time

if __name__ == '__main__':
    context = zmq.Context()
    zmq_socket = context.socket(zmq.PUSH)
    print('ZMQing')
    zmq_socket.bind("tcp://0.0.0.0:13371")
    print('Bind complete')
    for num in range(2000):
        work_message = {'num': num}
        zmq_socket.send_json(work_message)
        print('Sent')
        time.sleep(.5)
client.py
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    zmq_socket = context.socket(zmq.PULL)
    zmq_socket.connect("tcp://zmqserver:25565")
    for _ in range(2000):
        result = zmq_socket.recv_json()
        print(result)
I've tried specifying the service name as the host in client.py, and even 0.0.0.0 or 127.0.0.1, but nothing seems to work.
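For reference, the variant that matches the published port mapping (host 25565 -> container 13371) would connect from the host to the published port rather than the service name; a minimal, untested sketch:

import zmq

if __name__ == '__main__':
    context = zmq.Context()
    zmq_socket = context.socket(zmq.PULL)
    # The compose file publishes container port 13371 as host port 25565,
    # and the service name "zmqserver" only resolves inside the compose network,
    # so from the host the usual target is localhost on the published port.
    zmq_socket.connect("tcp://localhost:25565")
    for _ in range(2000):
        print(zmq_socket.recv_json())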
Any advice is appreciated!
Thanks.
Related
I am trying to test an API that sends long-running jobs to a queue processed by Celery workers. I am using RabbitMQ running in a Docker container as the message queue. However, when sending a message to the queue I get the following error: Error: [Errno 111] Connection refused
Steps to reproduce:
Start RabbitMQ container: docker run -d -p 5672:5672 rabbitmq
Start Celery server: celery -A celery worker --loglevel=INFO
Build docker image: docker build -t fastapi .
Run container: docker run -it -p 8000:8000 fastapi
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
celery==5.2.7
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
app.py:
from fastapi import FastAPI

import tasks

app = FastAPI()  # FastAPI instance (needed for the route decorator below)

@app.get("/{num}")
async def root(num):
    tasks.update_db.delay(num)
    return {"success": True}
tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
You can't connect to rabbitmq on localhost; it's not running in the same container as your Python app. Since you've exposed rabbit on your host, you can connect to it using the address of your host. One way of doing that is starting the app container like this:
docker run -it -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
And then modify your code like this:
celery = Celery('tasks', broker='amqp://host.docker.internal')
With that code in place, let's re-run your example:
$ docker run -d -p 5672:5672 rabbitmq
$ docker run -d -p 8000:8000 --add-host host.docker.internal:host-gateway fastapi
$ curl http://localhost:8000/1
{"success":true}
There's no reason to publish the rabbitmq ports on your host if you only need to access it from within a container. When building an application with multiple containers, using something like docker-compose can make your life easier.
If you used the following docker-compose.yaml:
version: "3"
services:
rabbitmq:
image: rabbitmq
app:
build:
context: .
ports:
- "8000:8000"
And modified your code to connect to rabbitmq:
celery = Celery('tasks', broker='amqp://rabbitmq')
You could then run docker-compose up to bring up both containers. Your app would be exposed on host port 8000, but rabbitmq would only be available to your app container.
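A quick way to confirm the app container can actually reach the broker by its service name is a throwaway socket check run inside the app container (a debugging aid, not part of the original setup):

import socket

# Succeeds once both services are up on the shared compose network;
# raises ConnectionRefusedError or socket.timeout otherwise.
socket.create_connection(("rabbitmq", 5672), timeout=5).close()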
Incidentally, rather than hardcoding the broker URI in your code, you might want to get it from an environment variable instead:
import os

celery = Celery('tasks', broker=os.getenv('APP_BROKER_URI'))
That allows you to use different connection strings without needing to rebuild your image every time. We'd need to modify the docker-compose.yaml to include the appropriate variable:
version: "3"
services:
rabbitmq:
image: rabbitmq
app:
build:
context: .
environment:
APP_BROKER_URI: "amqp://rabbitmq"
ports:
- "8000:8000"
Update tasks.py:
from celery import Celery
import time

celery = Celery('tasks', broker='amqp://user:pass@host:port//')

@celery.task(name='update_db')
def update_db(num: int) -> None:
    time.sleep(30)
    return
In the example here: https://docs.docker.com/compose/gettingstarted/
Flask:
from flask import Flask
import redis

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
Compose:
version: "3.9"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: development
redis:
image: "redis:alpine"
The web app runs in a container on 0.0.0.0, port 5000.
Redis runs in a separate container on its default port, 6379.
The web app container is run with the port mapping 5000:5000.
I am trying to understand how the web app container communicates with the redis container when there is no network specified, and when port 6379 of the other container is not published.
If there is no network information provided in the docker-compose.yml file, Docker Compose creates a default network for the services (unless explicitly named, it is called ${current_working_dir}_default). Since the two containers are on the same network, they can communicate with each other without publishing any ports. Publishing ports (the ports: mapping, or -p on the command line) is generally for allowing users on the host machine to communicate with a container (e.g., opening the Flask app with a browser on your laptop).
Within that default network, Compose registers each service under its service name, which is why the Flask app can reach Redis simply at the hostname redis. An explicit hostname: entry on the redis service is only needed if you want the container reachable under a different name than the service name (or you could use the container's IP address directly, which is less convenient).
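To see this in action, a throwaway check run inside the web container shows the service name resolving and Redis answering on its default port (a sketch assuming the compose file above, not part of the original answer):

import redis

# "redis" is the compose service name; 6379 is Redis's default port.
cache = redis.Redis(host='redis', port=6379)
print(cache.ping())  # True once both containers are up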
I have created a Go backend server that contains an API. Everything works well when running it locally with "go run server". However, I encountered issues when running it in Docker: I created a Dockerfile and ran the Linux container in host network mode, but I can't access the API in the browser. If it were working, I should see a JSON response from http://localhost:8500/status. So I'm thinking I need permissions, or to add flags or more installation-related steps. I have been testing different flags and ports in Docker but can't identify the issue. See the code, Dockerfile, and commands below.
Note: when I run the program locally on Windows a security warning pops up; perhaps this is related to the issue?
I'm running Docker Desktop Community v2.2.0.5 (stable).
Api code:
import (
    "log"
    "net/http"

    conf "server/conf"

    "github.com/gorilla/mux"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"
)

var db *gorm.DB

// Middleware used to add correct permissions for all requests
func middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
        next.ServeHTTP(w, r)
    })
}

func StartApi(config conf.Config, _db *gorm.DB) {
    log.Println("Start Api")
    db = _db
    router := mux.NewRouter()
    // Create
    router.HandleFunc("/login", login).Methods("POST", "OPTIONS")
    router.HandleFunc("/network", createNetwork).Methods("POST", "OPTIONS")
    // Read
    router.HandleFunc("/users/{networkName}", getUsers).Methods("GET", "OPTIONS")
    router.HandleFunc("/streams/{networkName}", getStreams).Methods("GET", "OPTIONS")
    router.HandleFunc("/status", showStatus).Methods("GET", "OPTIONS")
    log.Println("API started at port " + config.Backend.Api_port) // api port is stored in config
    log.Println(http.ListenAndServe(":"+config.Backend.Api_port, middleware(router)))
    log.Println("Api done!")
}
Dockerfile:
# Start from the latest golang base image
FROM golang:latest
WORKDIR /go/src/server
COPY ./src/server .
# Download packages
RUN go get .
# Compile code
#RUN go build -o server
RUN go install server
# Expose ports
EXPOSE 8500
EXPOSE 8600
#ENTRYPOINT ./server
CMD ["go", "run", "server"]
Docker-compose:
version: '3'
services:
  react:
    build: ./ReactFrontend/
    container_name: ReactFrontend
    tty: true
    ports:
      - 4000:3000
  backend:
    network_mode: host
    build: ./GoBackend/
    container_name: goserver
    ports:
      - 8500:8500
      - 8600:8600
Docker command:
docker run --net=host goserver
Any help is appreciated,
Cheers!
So I solved it for now by:
As mentioned here:
https://docs.docker.com/compose/networking/
Add to docker-compose:
networks:
  default:
    external:
      name: database_default
As mentioned here: Access to mysql container from other container
I can connect to the database as <db_container_name>:3306 in the backend.
To automate the process, I created a .sh script that handles an extra configuration step on container start. However, because of the structure of my config.yml it was hard to update with just "sed" commands, so I wrote a Python program to update the config data instead. The Dockerfile, docker-compose file, setup.sh, and update_config.py are shown below.
setup.sh:
#!/bin/bash
# Don't remove this!
# This file is used by dockerfile to replace configs
# Replace config on run
python3 update_config.py
# Start program
go run server
Dockerfile:
# Start from the latest golang base image
FROM golang:latest
WORKDIR /go/src/server
COPY ./src/server .
# Install python3 and YAML compatibility
RUN apt-get update && apt-get install -y python3-pip
RUN python3 --version
RUN pip3 install PyYAML
# Download packages
RUN go get .
# Compile code
#RUN go build -o server
RUN go install server
# Expose ports
EXPOSE 8500
EXPOSE 8600
# ENV
ENV DB_HOST "mysql:3306"
#CHMOD setup
RUN chmod +x setup.sh
CMD ["./setup.sh"]
Docker-compose
version: '3'
services:
  react:
    build: ./ReactFrontend/
    container_name: ReactFrontend
    tty: true
    ports:
      - 4000:3000
  backend:
    build: ./GoBackend/
    container_name: GoBackend
    environment:
      DB_HOST: mysql:3306 # Name or IP of DB container!
    ports:
      - 8500:8500
      - 8600:8600
networks:
  default:
    external:
      name: database_default
update_config.py:
import yaml
import os

"""
DON'T REMOVE
This file is used in the dockerfile!
"""

fname = "/go/src/server/config.yml"

stream = open(fname, 'r')
data = yaml.safe_load(stream)
stream.close()

# Add more updates here!
if os.environ.get('DB_HOST') is not None:
    data["database"]["ip"] = os.environ['DB_HOST']

# Updated data print
print("Updated Data", data)

# Write changes to config
stream = open(fname, 'w')
yaml.dump(data, stream)
stream.close()
Example docker command that now works if we only want to run the container:
docker run -p 8500:8500 -p 8600:8600 --net database_default goserver
It is working fine, and we avoid the unnecessary host network mode!
As the question says, here is my situation.
My project folder is:
PROJ
|____docker-compose.yml
|____servDir/
|____Dockerfile
|____server.py
In the docker-compose.yml:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "5001:5002"
server.py:
from flask import Flask, request
import json

app = Flask(__name__)
PORT = 5001

@app.route("/greetings/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=int(PORT), debug=True)
When I run docker-compose up and go to http://localhost:5001/greetings/ I receive a ERR_CONNECTION_REFUSED.
Instead, if I set ports as 5001:5001, I'm able to receive the page content.
Why? Should I always set them equal in order to reach the container from the browser?
I thought that the ports configuration was HOST:CONTAINER, and that the browser would be a HOST service.
UPDATE:
Dockerfile:
FROM python:3
WORKDIR /home/python/app/
COPY . /home/python/app/
RUN chmod a+x *.py
CMD ["python", "./server.py"]
This is right: HOST:CONTAINER.
Try this to expose it to your localhost and LAN:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "0.0.0.0:5001:5002"
or this to expose it only to your localhost:
service1:
  image: img1:v0.1
  container_name: cont1
  build: ./servDir/.
  ports:
    - "127.0.0.1:5001:5002"
Also, you wrote:
When I run docker-compose up and go to http://localhost:6002/greetings/ I receive an ERR_CONNECTION_REFUSED.
Looking at your docker-compose, you should access it through the published host port instead:
http://localhost:6002 --> http://localhost:5001
Change server.py so that Flask listens on the container-side port of the mapping (5001:5002 publishes container port 5002 on host port 5001):
from flask import Flask, request
import json

app = Flask(__name__)
PORT = 5002

@app.route("/greetings/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=int(PORT), debug=True)
I'm trying to get a basic Flask backend and a frontend framework, in separate containers, communicating with each other via docker-compose.
The caveat here is that I'm using Windows 10 Home, so I need to use Docker Toolbox and have had to add a few networking rules for port forwarding. However, I can't seem to access http://localhost:5000 for my backend; I get ECONNREFUSED. I'm just trying to get basic communication between the frontend and backend to simulate frontend/API communication.
Given my port forwarding rules, I can access http://localhost:8080 and can view the static portions of the app. However, I can't access the backend, nor can I tell if they are communicating. I'm new to both Flask and Docker, so please forgive my ignorance. Coming from a .NET background, Windows seems to really make this a pain. Thank you for your help.
Here is my application.py:
# Start with a basic flask app webpage.
from flask_socketio import SocketIO, emit
from flask import Flask, render_template, url_for, copy_current_request_context

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
app.config['DEBUG'] = True

# turn the flask app into a socketio app
socketio = SocketIO(app)

@app.route('/')
def index():
    # only by sending this page first will the client be connected to the socketio instance
    return render_template('index.html')

if __name__ == '__main__':
    socketio.run(app)
Dockerfile for the backend:
FROM python:2.7
ADD ./requirements.txt /backend/requirements.txt
WORKDIR /backend
RUN pip install -r requirements.txt
ADD . /backend
ENTRYPOINT ["python"]
CMD ["/backend/application.py"]
EXPOSE 5000
Dockerfile for frontend:
FROM node:latest
COPY . /src
WORKDIR /src
RUN npm install --loglevel warn
RUN npm run production
EXPOSE 8080
CMD [ "node", "server.js" ]
And my docker-compose.yml:
version: '2'
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    ports:
      - "5000:5000"
    env_file:
      - .env
  frontend:
    build: ./frontend
    ports:
      - "8080:8080"
Your issue is with the Flask configuration. As long as you get ECONNREFUSED while trying to connect, it means that no service is listening on port 5000 at the IP you are using. That's because socketio.run(app) defaults to 127.0.0.1, which is the localhost inside the container itself. To make your application accessible from outside the container (or through the container IP in general), you have to pass the host parameter with the value 0.0.0.0 so it listens on every interface inside the container.
socketio.run(app, host='0.0.0.0')
Quoted from the documentation:
run(app, host=None, port=None, **kwargs)
Run the SocketIO web server.
Parameters:
app – The Flask application instance.
host – The hostname or IP address for the server to listen on. Defaults to 127.0.0.1.
port – The port number for the server to listen on. Defaults to 5000.
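Putting it together, a minimal application.py skeleton with the fix applied would look like this (a sketch; port 5000 matches the 5000:5000 mapping in the compose file above):

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    # Listen on all interfaces so the container's published port 5000
    # is reachable from the host, not just from inside the container itself.
    socketio.run(app, host='0.0.0.0', port=5000)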