I can't access an exposed port in Docker

I'm using Ubuntu 20.04 and running Python 3.8. Here is my dockerfile:
FROM python:3.8
WORKDIR /usr/src/flog/
COPY requirements/ requirements/
RUN pip install -r requirements/dev.txt
RUN pip install gunicorn
COPY flog/ flog/
COPY migrations/ migrations/
COPY wsgi.py ./
COPY docker_boot.sh ./
RUN chmod +x docker_boot.sh
ENV FLASK_APP wsgi.py
EXPOSE 5000
ENTRYPOINT ["./docker_boot.sh"]
and my docker_boot.sh
#! /bin/sh
flask deploy
flask create-admin
flask forge
exec gunicorn -b 0.0.0.0:5000 --access-logfile - --error-logfile - wsgi:app
I ran docker run flog -d -p 5000:5000 in my terminal. I couldn't reach my app at localhost:5000, but it worked fine at 172.17.0.2:5000 (the container's IP address). I want the app to be reachable at localhost:5000.
I'm sure there is nothing wrong with the requirements/dev.txt and the code because it works well when I run flask run directly in my terminal.
Edit on 2021.3.16:
Adding the docker ps output while docker run flog -d -p 5000:5000 is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff048e904183 flog "./docker_boot.sh -d…" 8 seconds ago Up 6 seconds 5000/tcp inspiring_kalam
It's strange that there is no host port mapping. I'm sure the firewall is off.
Can anyone help me? Thanks.

Use docker run -d -p 0.0.0.0:5000:5000 flog.
Any flags and arguments placed after the image name are passed as arguments to the container's entrypoint, not to docker run. Notice the "./docker_boot.sh -d…" in your COMMAND column: the -d -p 5000:5000 went to docker_boot.sh, so no ports were published.
Run docker ps and you should see something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
565a97468fc7 flog "docker_boot.sh" 1 minute ago Up 1 minute 0.0.0.0:5000->5000/tcp xxxxxxxx_xxxxxxx
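The flag-ordering rule can be sketched without Docker at all: everything after the image name lands in the entrypoint's `$@`. A minimal shell illustration (the `entrypoint` function below is a hypothetical stand-in for docker_boot.sh, not part of the original script):

```shell
# Stand-in for the container's entrypoint script:
entrypoint() { printf 'entrypoint got: %s\n' "$*"; }

# Equivalent of: docker run flog -d -p 5000:5000
# docker run sees no flags; they all go to the entrypoint:
entrypoint -d -p 5000:5000

# Equivalent of: docker run -d -p 5000:5000 flog
# docker run consumes -d and -p; the entrypoint gets nothing:
entrypoint
```

The first call prints `entrypoint got: -d -p 5000:5000`, which mirrors the stray `-d…` visible in the COMMAND column of docker ps.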

Related

When I run "docker ps" I can't see my container. What's wrong?

I am testing Docker by following the official getting-started guide, using the official code:
https://github.com/docker/getting-started/tree/master/app
and then this Dockerfile
# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python2 g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000
and then build command
docker build -t getting-started .
and then run command
docker run -dp 3000:3000 getting-started
It works perfectly. But I wanted to simplify the code a little, like this Node.js code:
const http = require('http');
const hostname = '0.0.0.0';
const port = 3000;
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
});
server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
and then a Dockerfile like this:
FROM node:12-alpine
WORKDIR /app
COPY . .
CMD ["node", "src/index.js"]
EXPOSE 3000
and then built and ran with the same commands.
I checked my running containers with docker ps, but I couldn't see my container running.
Then I ran docker ps -a and saw my container listed, but with no port mapping, so I couldn't connect to it.
I've added the state of both containers below.
I ran the same kind of command for both, but firstcontainer has no port mapping. What's wrong?
docker run -dp 80:3000 getting-started -> this is official app
docker run -dp 3000:3000 firstcontainer -> my simple app
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
fea13c68893f firstcontainer "docker-entrypoint.s…" 10 seconds ago Exited (1) 10 seconds ago
2f552d1e9f55 getting-started "docker-entrypoint.s…" 26 seconds ago Up 25 seconds 0.0.0.0:80->3000/tcp, :::80->3000/tcp
You can run the docker command in the foreground (without the -d flag) to see the logs:
docker run -p 3000:3000 firstcontainer
and see what the error is. You can also read the logs of the already-exited container with docker logs fea13c68893f.
Your EXPOSE instruction is written after CMD, but that is not the problem: EXPOSE is build-time metadata, and its position relative to CMD doesn't matter. The official Dockerfile you started from also puts EXPOSE after CMD, and it works. The Exited (1) status means the container's main process crashed on startup, so run the container in the foreground (or check docker logs fea13c68893f) to see the real error. One thing that did change is that you dropped the RUN yarn install --production step, so if src/index.js (or anything it requires) has dependencies, they will be missing.
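For completeness, here is a version of the simplified Dockerfile that keeps the dependency-install step from the official one. Whether `yarn install` is actually needed depends on what src/index.js requires; the simplified server above uses only Node's built-in `http` module, but anything pulled in from package.json needs this step:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:12-alpine
WORKDIR /app
COPY . .
# Keep this if src/index.js (or anything it requires) has dependencies
# listed in package.json; it was dropped in the simplified Dockerfile.
RUN yarn install --production
EXPOSE 3000
CMD ["node", "src/index.js"]
```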

Unable to access distant neo4j from my docker fastapi app

I have two things: a Neo4j database deployed on a GCP Compute Engine instance at bolt://35.241.254.136:7687, and a FastAPI app that needs to access it.
When I run the API server with uvicorn main:app --reload, everything works correctly and I can reach the remote database.
However, when I run the API with Docker (docker run -d --name mycontainer -p 80:80 myimage), it cannot reach the Neo4j database and I get this error:
ServiceUnavailable( neo4j.exceptions.ServiceUnavailable: Couldn't connect to 35.241.254.136:7687 (resolved to ('35.241.254.136:7687',)):
Maybe something is wrong with my setup; I don't know much about Docker.
Here is my Dockerfile:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
Make sure your container can reach the Neo4j IP (35.241.254.136).
Execute the container's bash:
docker exec -it mycontainer bash
then ping the IP:
ping 35.241.254.136
If you cannot reach the host, you need to put your container on a network that can reach the Neo4j instance.
This link may be helpful.
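Note that ping may not be installed in slim images, and ICMP can be blocked even when the TCP port is open. A TCP-level probe needs nothing but bash; this is a sketch, with the Bolt host and port from the question substituted in:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device.
# Succeeds (exit 0) if a connection can be opened within 3 seconds.
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_port 35.241.254.136 7687; then
  echo "bolt port reachable"
else
  echo "bolt port unreachable"
fi
```

If the probe fails from inside the container but succeeds from the host, the problem is the container's network; if it fails from both, look at the GCP firewall rules for port 7687 instead.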

How to properly expose/publish ports for a WebUI-based application in Docker?

I'm trying to port this webapp to Docker. I wrote the following Dockerfile:
FROM anapsix/alpine-java
MAINTAINER <name>
COPY aard2-web-0.7-java6.jar /home/aard2-web-0.7-java6.jar
COPY start.sh /home/start.sh
CMD ["bash", "/home/start.sh"]
EXPOSE 8013/tcp
Here are the contents of start.sh:
#!/bin/bash
java -Dslobber.browse=true -jar /home/aard2-web-0.7-java6.jar /home/dicts/*.slob
Then I built the image:
docker build -t aard2-docker .
And I used the following command to run the container:
docker run --name Aard2 -p 127.0.0.1:8013:8013 -v /home/<name>/dicts:/home/dicts aard2-docker
The app runs normally and reports that it's listening at http://127.0.0.1:8013. However, when I opened that address I couldn't connect to the app.
I tried using the EXPOSE command (as shown in the Dockerfile snippet above) and variants of the -p flag, such as -p 127.0.0.1:8013:8013, -p 8013:8013, -p 8013:8013/tcp, but none of them worked.
How can I expose/publish the port to 127.0.0.1 properly? Thanks!
Here's the response from the original author:
You need to tell the server to listen on all network interfaces instead of localhost; that is, you are missing -Dslobber.host=0.0.0.0.
this works for me:
FROM anapsix/alpine-java
COPY ./build/libs/aard2-web-0.7.jar /home/aard2-web-0.7.jar
CMD ["bash", "-c", "java -Dslobber.host=0.0.0.0 -jar /home/aard2-web-0.7.jar /dicts/*.slob"]
EXPOSE 8013/tcp
and then run like this:
docker run -v $HOME/Downloads:/dicts -p 8013:8013 --rm aard2-web
-Dslobber.browse=true opens the default browser; that has no effect inside Docker, so you don't need it.
https://github.com/itkach/aard2-web/issues/12#issuecomment-895557949

Cannot access server running in container from host

I have a simple Dockerfile
FROM golang:latest
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV GOPATH /app
RUN go install huru
EXPOSE 3000
ENTRYPOINT /app/bin/huru
I build like so:
docker build -t huru .
and run like so:
docker run -it -p 3000:3000 huru
for some reason, when I go to localhost:3000 in the browser, I get an error page (screenshot not included).
I have exposed servers running in containers to the host machine before, so I'm not sure what's going on.
From the information provided in the question, the application's logs (docker logs <container_id>) suggest it starts successfully, and the port exposure looks correct.
In any case in order to see ports mappings when the container is up and running you can use:
docker ps
and check the "PORTS" section
If you see something like 0.0.0.0:3000->3000/tcp there, then firewall rules might be preventing the application from being accessed.
Another possible reason (although probably you've checked this already) is that the application starts and finishes before you actually try to access it in the browser.
In this case, docker ps won't show the exited container, but docker ps -a will.
The last thing I can think of is that in the docker container itself the application doesn't really answer the port 3000 (I mean, maybe the startup script starts the web server on some other port, so exposing port 3000 doesn't really do anything useful).
In order to check this you can enter the docker container itself with something like docker exec -it <container_id> bash
and check for open ports with lsof -i, or just wget localhost:3000 from within the container itself.
Try this one, and check whether it produces any log output:
FROM golang:latest
RUN apt -y update
RUN mkdir -p /app
COPY . /app
WORKDIR /app
ENV GOPATH /app
RUN go install huru
docker build -t huru:latest .
docker run -it -p 3000:3000 huru:latest bin/huru
Try this URL: http://127.0.0.1:3000. I use the loopback address.

Docker-compose up does not start a container

Dockerfile:
FROM shawnzhu/ruby-nodejs:0.12.7
RUN apt-get update \
 && apt-get install -y git \
 && npm install -g bower gulp grunt \
 && gem install sass
RUN useradd -ms /bin/bash devel
# Deal with ssh
COPY ssh_keys/id_rsa /devel/.ssh/id_rsa
COPY ssh_keys/id_rsa.pub /devel/.ssh/id_rsa.pub
RUN echo "IdentityFile /devel/.ssh/id_rsa" > /devel/.ssh/config
# set root password
RUN echo 'root:password' | chpasswd
# Add gitconfig
COPY .gitconfig /devel/.gitconfig
USER devel
WORKDIR /var/www/
EXPOSE 80
docker-compose.yml file:
nodejs:
build: .
ports:
- "8001:80"
- "3000:3000"
volumes:
- ~/Web/docker/nodejs/www:/var/www
Commands:
$ docker-compose build nodejs
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
nodejs_nodejs latest aece5fb27134 2 minutes ago 596.5 MB
shawnzhu/ruby-nodejs 0.12.7 bbd5b568b88f 5 months ago 547.5 MB
$ docker-compose up -d nodejs
Creating nodejs_nodejs_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c24c6d0e756b nodejs_nodejs "/bin/bash" About a minute ago Exited (0) About a minute ago nodejs_nodejs_1
As you can see, docker-compose up -d should have created the container and kept it running in the background, but it didn't; the container exited with code 0.
If your container doesn't run a long-lived process (for example, a web server listening on port 80), it is stopped as soon as its command finishes. Here the command is /bin/bash, which exits immediately when no TTY is attached. Docker containers are meant to be "ephemeral".
If you just want to start a container and interact with it via terminal, don't use docker-compose up -d, use the following instead:
docker run -it --entrypoint=/bin/bash [your_image_id]
This will start your container and run /bin/bash; the -it flags keep an interactive terminal session attached to the container. When you are done, press Ctrl-D to exit.
I had a similar problem with a SQL Server 2017 container exiting soon after it was created. The process running inside the container should be long-running; otherwise the container exits. In the docker-compose scenario I used the tty: true approach documented here: https://www.handsonarchitect.com/2018/01/docker-compose-tip-how-to-avoid-sql.html
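The tty workaround can be sketched directly in the compose file from the question; only the tty line is new, and whether you want this or a real long-running command depends on your use case:

```yaml
nodejs:
  build: .
  # Allocate a pseudo-TTY so the image's /bin/bash command blocks
  # instead of exiting immediately; alternatively, set a long-running
  # command such as a dev server.
  tty: true
  ports:
    - "8001:80"
    - "3000:3000"
  volumes:
    - ~/Web/docker/nodejs/www:/var/www
```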