Unable to access remote neo4j from my dockerized FastAPI app - docker

I have two things: a neo4j database deployed on a GCP Compute Engine instance at bolt://35.241.254.136:7687, and a FastAPI app that needs to access that neo4j database.
When I run the API server with uvicorn main:app --reload, everything works correctly: I can reach the remote database.
However, when I run the API with Docker (docker run -d --name mycontainer -p 80:80 myimage), it cannot reach the neo4j database and I get this error:
neo4j.exceptions.ServiceUnavailable: Couldn't connect to 35.241.254.136:7687 (resolved to ('35.241.254.136:7687',))
Maybe something is wrong with my setup; I don't know much about Docker.
Here is my Dockerfile:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]

Make sure your container can reach the neo4j host (35.241.254.136).
Open a shell in the container:
docker exec -it mycontainer bash
then ping the host:
ping 35.241.254.136
If you cannot reach the host, you need to put your container on the same network as neo4j.
This link may be helpful.
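Note that ping tests ICMP, which GCP firewall rules often block even when the bolt port itself is open, so a TCP check against port 7687 is more conclusive. A minimal sketch that can be run inside the container (`port_reachable` is a hypothetical helper written for this check, not part of the neo4j driver):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (run inside the container):
# port_reachable("35.241.254.136", 7687)
```

If this returns False from inside the container but True from the host, the container's outbound networking (or a firewall rule scoped to it) is the place to look.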


Can't run django rest framework with docker

I'm currently learning Docker and Django REST framework at the same time.
When I run python3 manage.py runserver I can access the Django admin page at http://localhost:8000/admin/, but when I run docker run -p 80:80 docker_django_tutorial the page is inaccessible. (docker_django_tutorial is the name of my Docker image.)
I guess I need to add python3 manage.py runserver somewhere in my Dockerfile?
Here is my Dockerfile:
# Use the Python 3.7.2 container image
FROM python:3.7.2-stretch
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
and here my file requirements.txt:
Django==3.1.1
djangorestframework
You're telling Django to listen on port 8000 but telling Docker to publish port 80. Use -p 8000:8000 instead. If you want the page on port 80, use -p 80:8000. The first number is the host (your system) port and the second is the container port (where Django listens).
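A minimal sketch of both options, reusing the image name and admin URL from the question:

```shell
# Map host port 8000 to container port 8000; the admin page stays at
# http://localhost:8000/admin/
docker run -p 8000:8000 docker_django_tutorial

# Or map host port 80 to container port 8000, so the page is at
# http://localhost/admin/
docker run -p 80:8000 docker_django_tutorial
```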

Docker runs only on Port 80

I am unable to reach my Docker container on port 4000, even though I can reach it on port 80. What am I doing wrong here?
FROM node:latest as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:latest
COPY --from=build /usr/src/app/dist/admin /usr/share/nginx/html
EXPOSE 4200
I'm creating the image using the following command:
docker build --pull --rm -f "DockerFile" -t admin1:v1 "."
When I run it on port 80, I'm able to use it:
docker run --rm -d -p 4200:4200/tcp -p 80:80/tcp admin1:v1
However, when I run the following command, I'm unable to use it:
docker run --rm -d -p 4200:4200/tcp -p 4000:4000/tcp admin1:v1
I have researched similar questions online, but I haven't been able to fix the problem. Any suggestion will be greatly appreciated!
You need to map the Docker host port to the container port that nginx actually listens on. nginx in the base image listens on port 80; the EXPOSE 4200 line is only metadata and doesn't change that.
Try the following command:
docker run --rm -d -p 4200:4200/tcp -p 4000:80/tcp admin1:v1
The following is an extract from the Docker documentation:
-p 8080:80 Map TCP port 80 in the container to port 8080 on the Docker host.
You can refer to the link for further information.
Docker Documentation
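Concretely, since the host-side number is free to choose but the container side must be 80 here, a sketch reusing the image tag from the question:

```shell
# Host port 4000 -> container port 80 (where nginx listens):
docker run --rm -d -p 4000:80/tcp admin1:v1
# The site should then answer on http://localhost:4000
```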

I can't get access to an exposed port in docker

I'm using Ubuntu 20.04 and running Python 3.8. Here is my dockerfile:
FROM python:3.8
WORKDIR /usr/src/flog/
COPY requirements/ requirements/
RUN pip install -r requirements/dev.txt
RUN pip install gunicorn
COPY flog/ flog/
COPY migrations/ migrations/
COPY wsgi.py ./
COPY docker_boot.sh ./
RUN chmod +x docker_boot.sh
ENV FLASK_APP wsgi.py
EXPOSE 5000
ENTRYPOINT ["./docker_boot.sh"]
and my docker_boot.sh
#! /bin/sh
flask deploy
flask create-admin
flask forge
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - wsgi:app
I ran docker run flog -d -p 5000:5000 in my terminal. I couldn't get the app working by browsing to localhost:5000, but it worked when I used 172.17.0.2:5000 (the container's IP address). However, I want the app to run on localhost:5000.
I'm sure there is nothing wrong with the requirements/dev.txt and the code because it works well when I run flask run directly in my terminal.
Edit on 2021.3.16:
Add docker ps information when docker run flog -d -p 5000:5000 is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff048e904183 flog "./docker_boot.sh -d…" 8 seconds ago Up 6 seconds 5000/tcp inspiring_kalam
It is strange that there's no port mapping to the host. I'm sure the firewall is off.
Can anyone help me? Thanks.
Use docker run -d -p 0.0.0.0:5000:5000 flog.
The arguments and the flags that are after the image name are passed as arguments to the entrypoint of the container created from that image.
Run docker ps and you need to see something like
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
565a97468fc7 flog "docker_boot.sh" 1 minute ago Up 1 minute 0.0.0.0:5000->5000/tcp xxxxxxxx_xxxxxxx
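To make the ordering rule concrete (flags placed after the image name go to the container's entrypoint, not to docker run):

```shell
# Wrong: -d -p 5000:5000 is handed to ./docker_boot.sh as arguments,
# so no ports are published:
docker run flog -d -p 5000:5000

# Right: docker run options come before the image name:
docker run -d -p 5000:5000 flog
```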

docker container can't communicate to other container on the same network

I have two images that should communicate with each other. The first image is the web API, and the second image is the web itself.
Before I build and run the images, I create a new network called nat using the command:
docker network create nat
After this, I build my two images: image-api, which runs on port 8080, and image-web, which runs on port 8081.
Then, I run the image-api image using the command:
docker run -d -p 3000:8080 --network nat image-api
I mapped container port 8080 to host port 3000. I tried accessing localhost:3000 in my browser and it runs without error, giving me the response it should.
The problem here is, my second image, the image-web image. I try to run it using the command:
docker run -d -p 3001:8081 --network nat image-web
When I try to access localhost:3001 in my browser, the page loads, but it doesn't show the data from the image-api container. When I check the image-web container's logs, I see an error like:
Error: getaddrinfo ENOTFOUND image-api
Just in case, I tried to access the image-api container from my image-web container using a URL like this:
http://image-api/ping
Here's my image-api Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json /app
RUN npm install
FROM node:14
EXPOSE 8080
CMD ["node", "server.js"]
WORKDIR /server
COPY --from=builder /app/node_modules /server/node_modules
COPY server.js .
And here's my image-web Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json .
RUN npm install
FROM node:14
ENV IMAGE_URL=http://image-api/ping
EXPOSE 8081
CMD ["node", "app.js"]
WORKDIR /web
COPY --from=builder /app/node_modules /web/node_modules
COPY /src .
Just in case: I already tried to run both of them without Docker, and they run as they should on my local machine.
EDIT:
I've also tried to run the container with a name, like this:
docker run -d -p 3000:8080 --network nat --name imageapi image-api
And try to access it using:
http://imageapi/ping
But it's still giving me the same error
SOLUTION:
As pointed out by @davidMaze in the answer: after running the container with the --name flag, I can access it at http://imageapi:8080/ping
You need to explicitly specify a docker run --name when you start the container. Since you can run multiple containers from the same image, Docker doesn't automatically assign a container name based on the image name; you need to set it yourself.
docker run -d -p 3000:8080 --network nat --name image-api image-api
docker run -d -p 3001:8081 --network nat --name image-web image-web
Once you set the container name, Docker provides an internal DNS service and the container will be reachable by that name from other containers on the same network.
"Use bridge networks" in the Docker documentation describes this further, though mostly in contrast to an obsolete networking mode.
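Putting the steps together as a sketch (this assumes curl is available inside the image-web image; note that container-to-container traffic uses the container port, 8080, not the host-mapped port 3000):

```shell
docker network create nat
docker run -d -p 3000:8080 --network nat --name image-api image-api
docker run -d -p 3001:8081 --network nat --name image-web image-web

# From inside image-web, the API is reachable by container name and
# container port:
docker exec image-web curl -s http://image-api:8080/ping
```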

Cannot access server running in container from host

I have a simple Dockerfile
FROM golang:latest
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV GOPATH /app
RUN go install huru
EXPOSE 3000
ENTRYPOINT /app/bin/huru
I build like so:
docker build -t huru .
and run like so:
docker run -it -p 3000:3000 huru
for some reason, when I go to localhost:3000 in the browser, I get an error.
I have exposed servers running in containers to the host machine before, so I'm not sure what's going on.
From the information provided in the question, if the application logs (docker logs <container_id>) show that the application starts successfully, then port exposure looks correct.
In any case, to see the port mappings while the container is up and running, you can use:
docker ps
and check the "PORTS" section.
If you see something like 0.0.0.0:3000->3000/tcp
then I would start suspecting firewall rules that prevent the application from being accessed...
Another possible reason (although you've probably checked this already) is that the application starts and finishes before you actually try to access it in the browser.
In this case, docker ps won't show the exited container, but docker ps -a will.
The last thing I can think of is that inside the container the application doesn't really listen on port 3000 (maybe the startup script starts the web server on some other port, so exposing port 3000 doesn't do anything useful).
To check this, you can enter the container with something like docker exec -it <container_id> bash
and check for open ports with lsof -i, or just wget localhost:3000 from within the container itself.
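The checks above can be condensed into a short session (the container id is a placeholder):

```shell
docker ps                     # PORTS should show 0.0.0.0:3000->3000/tcp
docker ps -a                  # also lists containers that already exited
docker exec -it <container_id> bash
# inside the container:
lsof -i                       # which ports are actually open?
wget -qO- localhost:3000      # does the app answer locally?
```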
Try this one; if it produces any log output, please check it...
FROM golang:latest
RUN apt -y update
RUN mkdir -p /app
COPY . /app
WORKDIR /app
ENV GOPATH /app
RUN go install huru
docker build -t huru:latest .
docker run -it -p 3000:3000 huru:latest bin/huru
Try this URL: http://127.0.0.1:3000
I use the loopback address instead of localhost.
