Connect two Docker Containers via SocketIO - docker

I have multiple Docker Containers (Server, Client, AI) running on the same Docker bridge. I want the AI to connect to the Server in order to exchange information with Socket.io.
However, when I try to connect the AI to the Server via:
import asyncio
import time

import socketio

sio = socketio.Client()

async def main():
    sio.connect('http://localhost:4000')
    sio.emit('ConnectAI', "TEST")
    sio.wait()

print("Setting up Connection...")
time.sleep(10)
asyncio.run(main())
I get an error saying:
socketio.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=4000): Max retries exceeded with url: /socket.io/?transport=polling&EIO=4&t=1659336255.2636092 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f87b70dc7f0>: Failed to establish a new connection: [Errno 111] Connection refused'))
I don't know why the connection is refused. The containers run on the same network, and both use the same version of Socket.IO.
Here is my Docker-Compose file:
version: "3.9"
services:
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
  ai:
    build:
      context: ./ai
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
And here is my server Dockerfile:
# Dockerfile
FROM node:14.20.0
WORKDIR /app
COPY . /app
RUN npm install
EXPOSE 4000
CMD npm start
And my AI Dockerfile:
# Dockerfile
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 5000
CMD python3 connection.py
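Inside a container, localhost refers to that container itself, so the AI container is trying to reach port 4000 on itself, where nothing is listening, hence the connection refused. On the default Compose bridge network, containers reach each other by service name instead. A minimal sketch of building the right URL, assuming the service names from the docker-compose.yml above (SERVER_HOST/SERVER_PORT are illustrative env overrides, not existing variables):

```python
import os

# "localhost" inside the ai container is the ai container itself.
# On the Compose network, the server service is reachable as "server".
def server_url(default_host="server", default_port=4000):
    host = os.getenv("SERVER_HOST", default_host)
    port = int(os.getenv("SERVER_PORT", default_port))
    return f"http://{host}:{port}"

# sio.connect(server_url()) instead of sio.connect('http://localhost:4000')
```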

Related

Docker Error: Unable to access jarfile (I read old questions but can't fix it)

Hi, I am new to Docker and trying to containerize a simple Spring Boot application. The Dockerfile is below.
Versions:
Windows 11
Docker Desktop: newest version
dockerfile
FROM openjdk:8-jre-alpine
RUN mkdir app
WORKDIR /app
# Copy the jar to the production image from the builder stage.
COPY target/taco-cloud-*.jar app/taco-cloud.jar
# Run the web service on container startup.
EXPOSE 9090
CMD ["java", "-jar", "taco-cloud.jar"]
docker-compose
version: '2.4'
services:
  mysql:
    container_name: test-data
    image: mysql:latest
    networks:
      - kell-network
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=taco_cloud
      - MYSQL_USER=kell
      - MYSQL_PASSWORD=dskell0502
    volumes:
      - mysql-data:/var/lib/mysql
      - ./schema.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "3307:3306"
  web:
    container_name: test-web
    image: test:ver1
    ports:
      - "9090:9090"
    depends_on:
      - mysql
    networks:
      - kell-network
volumes:
  mysql-data:
networks:
  kell-network:
    driver: bridge
When I try to run docker-compose, I get "Error: Unable to access jarfile taco-cloud.jar":
test-web | Error: Unable to access jarfile taco-cloud.jar
I tried to edit the dockerfile but it still doesn't work
FROM maven:latest
RUN mkdir /app
WORKDIR /app
COPY . .
EXPOSE 8080
CMD ["mvn", "spring-boot:run"]
and
# Use the official maven/Java 8 image to create a build artifact: https://hub.docker.com/_/maven
FROM maven:3.5-jdk-8-alpine as builder
# Copy local code to the container image.
RUN mkdir app
WORKDIR /app
COPY pom.xml .
COPY src ./src
# Build a release artifact.
RUN mvn package -DskipTests
# Use the Official OpenJDK image for a lean production stage of our multi-stage build.
# https://hub.docker.com/_/openjdk
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:8-jre-alpine
# Copy the jar to the production image from the builder stage.
COPY --from=builder target/taco-cloud-*.jar app/taco-cloud.jar
# Run the web service on container startup.
EXPOSE 9090
CMD ["java", "-jar", "taco-cloud.jar"]
WORKDIR /app
COPY target/taco-cloud-*.jar app/taco-cloud.jar
COPY does support * glob patterns, so the wildcard itself is fine. The problem is the destination: with WORKDIR /app, the relative destination app/taco-cloud.jar resolves to /app/app/taco-cloud.jar, while CMD ["java", "-jar", "taco-cloud.jar"] looks for the jar in /app. Use COPY target/taco-cloud-*.jar /app/taco-cloud.jar (or ./taco-cloud.jar) so the jar lands where the CMD expects it.
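If you keep the multi-stage build, a sketch of a corrected production stage (assuming the builder stage's WORKDIR is /app, so the jar ends up under /app/target there):

```dockerfile
FROM openjdk:8-jre-alpine
WORKDIR /app
# COPY --from paths are resolved against the builder stage's filesystem root,
# so the builder's /app prefix must be written out explicitly.
COPY --from=builder /app/target/taco-cloud-*.jar /app/taco-cloud.jar
EXPOSE 9090
CMD ["java", "-jar", "taco-cloud.jar"]
```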

docker-compose is not mapping ports

I have a problem dockerizing a Vue.js app. After docker-compose up --build the app starts normally, but I can't access it (docker ps shows that no port has been mapped). The Vue.js app starts normally at :8080. I can't find why Docker isn't mapping the port:
Dockerfile:
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json /app/
RUN npm ci
COPY . .
RUN npm run dev
docker-compose.yml:
version: '3'
services:
  frontend:
    build: ./app
    ports:
      - 8082:8080
networks:
  backend:
    driver: bridge
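A likely cause, assuming the dev server is meant to run inside the container: RUN npm run dev executes while the image is being built, so the build never finishes cleanly and no container with a listening process is ever started for Compose to map ports to. Moving that step to CMD starts the server at container run time:

```dockerfile
FROM node:10.15-alpine
WORKDIR /app
COPY package*.json /app/
RUN npm ci
COPY . .
# Start the dev server when the container runs, not during the image build.
CMD ["npm", "run", "dev"]
```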

Unable to npm install in a node docker image

FROM node:latest
WORKDIR /frontend/
ENV PATH /frontend/node_modules/.bin:$PATH
COPY package.json /frontend/package.json
COPY . /frontend/
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
CMD ["npm", "run", "start"]
This is my Dockerfile for the frontend of my project.
I put this as one of the services in my docker-compose.yml file, and when I run docker-compose up -d --build, it gives me
Step 6/8 : RUN npm install --silent
---> Running in 09a4f59a96fa
ERROR: Service 'frontend' failed to build: The command '/bin/sh -c npm install --silent' returned a non-zero code: 1
My docker-compose file looks like below for your reference:
# Docker Compose
version: '3.7'
services:
frontend:
container_name: frontend
build:
context: frontend
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- '.:/frontend'
- '/frontend/node_modules'
backend:
build: ./backend
ports:
- "5000:5000"
volumes:
- .:/code
Thanks in advance
EDIT: Error in the frontend after build
For docker-compose, I think the volume should be
- ./frontend:/frontend
since the build context is frontend.
Second, if you are using a bind volume, why are you installing dependencies and copying code in the Dockerfile? The bind mount overrides those files with the host code at run time, so you can remove these lines:
COPY package.json /frontend/package.json
COPY . /frontend/
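A hedged sketch of the corrected frontend service, assuming the intent is to live-mount the host code while keeping the container's own node_modules:

```yaml
frontend:
  container_name: frontend
  build:
    context: frontend
    dockerfile: Dockerfile
  ports:
    - "3000:3000"
  volumes:
    # Mount the source directory that matches the build context.
    - ./frontend:/frontend
    # Anonymous volume so the image's node_modules is not shadowed by the host.
    - /frontend/node_modules
```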

Docker Compose port mapping: 127.0.0.1 refused to connect

I am currently working with Docker and a simple Flask website to which I want to send images. For this I'm working on port 8080, but the mapping from docker to host is not working properly as I am unable to connect. Could someone explain to me what I am doing wrong?
docker-compose.yml
version: "2.3"
services:
  dev:
    container_name: xvision-dev
    build:
      context: ./
      dockerfile: docker/dev.dockerfile
    working_dir: /app
    volumes:
      - .:/app
      - /path/to/images:/app/images
    ports:
      - "127.0.0.1:8080:8080"
      - "8080"
      - "8080:8080"
dev.dockerfile
FROM tensorflow/tensorflow:latest
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN apt update && apt install -y python-tk
EXPOSE 8080
CMD ["python", "-u", "app.py"]
app.py
@APP.route('/test', methods=['GET'])
def hello():
    return "Hello world!"

def main():
    """Start the script"""
    APP.json_encoder = Float32Encoder
    APP.run(host="127.0.0.1", port=os.getenv('PORT', 8080))
I start my docker with docker-compose up, this gives the output: Running on http://127.0.0.1:8080/ (Press CTRL+C to quit).
But when I send a GET request to 127.0.0.1:8080/test, I get no response.
I have also tried docker-compose run --service-ports dev, as some people have suggested online, but it says there is no service dev.
Can someone help me with what I am doing wrong?
Use:
APP.run(host="0.0.0.0", port=os.getenv('PORT', 8080))
Using only:
ports:
  - "8080:8080"
is enough.
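The difference can be illustrated with plain sockets, independent of Flask (a hedged sketch): a server bound to 127.0.0.1 only accepts connections originating inside the same network namespace, i.e. the container itself, while 0.0.0.0 listens on every interface, including the bridge interface Docker forwards the published port to.

```python
import socket

# Bind to all interfaces (0.0.0.0) on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))
server.listen(1)
port = server.getsockname()[1]

# A loopback client can reach it because the bind was not interface-specific;
# inside Docker, traffic forwarded from the bridge reaches it the same way.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, addr = server.accept()
client.close()
conn.close()
server.close()
```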

Connect python with h2o that runs in docker, but ipv4 for h2o changes

I'm new to docker and I'm trying to run h2o in docker and then use python to connect to it.
I have a folder with:
a model-generator folder, in which I have the Python script and a Dockerfile to build an image
an h2o-start folder, in which I have the h2o.jar file and a Dockerfile to start that jar
a docker-compose.yml file with:
version: "3"
services:
  h2o-start:
    image: milanpanic2/h2o-start
    build:
      context: ./h2o-start
    restart: always
  model-generator:
    image: milanpanic2/model-generator
    build:
      context: ./model-generator
    restart: always
My python script contains:
import h2o
h2o.connect(ip='172.19.0.3', port='54321')
When I run docker-compose up, it gives me an error that Python can't connect, because there isn't anything at 172.19.0.3.
Dockerfile for python
FROM python:2.7-slim
WORKDIR /app
ADD . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 80
ENV NAME World
CMD ["python", "passhash.py"]
Dockerfile for h2o
FROM openjdk:8
ADD h2o.jar h2o.jar
EXPOSE 54321
EXPOSE 54322
ENTRYPOINT ["java", "-jar", "h2o.jar"]
Try starting the container with the ports published: add to h2o-start: in your docker-compose file:
ports:
  - "54321:54321"
  - "54322:54322"
Note that publishing ports only matters for reaching h2o from the host; for container-to-container traffic, connect by the service name h2o-start instead of a hard-coded bridge IP, which can change every time the network is recreated.
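On the same Compose network, Docker's embedded DNS resolves the service name to whichever IP the container received on this run, so the script never needs a hard-coded address. A hedged sketch, assuming the service name h2o-start from the docker-compose.yml above:

```python
# Assumed Compose service name and default h2o port; adjust if yours differ.
H2O_HOST = "h2o-start"
H2O_PORT = 54321

def connect_kwargs(host=H2O_HOST, port=H2O_PORT):
    """Arguments for h2o.connect(), built from the service name so that
    no container IP is ever hard-coded."""
    return {"ip": host, "port": port}

# Usage inside the model-generator container:
# import h2o
# h2o.connect(**connect_kwargs())
```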
