How to expose a Docker container - docker

I have this Dockerfile written.
FROM python:3.6
WORKDIR /usr/src/app
EXPOSE 8080
CMD [ "python3", "-m http.server" ]   # even tried CMD [ "python3", "-m", "http.server" ]
I built the image with this:
docker build -t server .
and I ran a container from the image like this:
docker run -d -p 8080:8080 --name web server
But when I hit <host-url>:8080, it doesn't work.
Can somebody please help me?

You are trying to run Python's http.server module (the successor to SimpleHTTPServer), which listens on port 8000 by default.
Either your Dockerfile should expose 8000 instead of 8080:
EXPOSE 8000
Or change the command to listen on port 8080:
CMD ["python3", "-m", "http.server", "8080"]
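Putting the second option together, a corrected Dockerfile might look like this (a sketch; assumes the files to serve are in the build context):

```dockerfile
FROM python:3.6
WORKDIR /usr/src/app
# copy in whatever you want http.server to serve
COPY . .
EXPOSE 8080
# note: "-m" and the module name must be separate list items in exec form
CMD ["python3", "-m", "http.server", "8080"]
```

Build with docker build -t server . and run with docker run -d -p 8080:8080 --name web server, as in the question.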

Related

Can't run django rest framework with docker

I'm currently learning Docker and Django REST Framework at the same time.
When I run the command python3 manage.py runserver, I can access the Django admin page at http://localhost:8000/admin/, but when I run docker run -p 80:80 docker_django_tutorial the page is inaccessible. (docker_django_tutorial is the name of my Docker image.)
I guess I need to add python3 manage.py runserver somewhere in my Dockerfile?
Here is my Dockerfile:
# Use the Python 3.7.2 container image
FROM python:3.7.2-stretch
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
and here my file requirements.txt:
Django==3.1.1
djangorestframework
You're telling Django to listen on port 8000, but telling Docker to publish port 80. Use -p 8000:8000 instead. If you want the app reachable on port 80, use -p 80:8000. The first number is the host (your system) port and the second is the container port (where Django listens).
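For example (a sketch, using the image name from the question):

```shell
# host port 8000 -> container port 8000, where Django listens
docker run -p 8000:8000 docker_django_tutorial
# or serve on host port 80 while Django still listens on 8000 inside the container
docker run -p 80:8000 docker_django_tutorial
```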

how to access running container from the browser windows

I want to access my running container from the browser, using Docker on Windows 10 Pro.
I can't access the container's IP directly; I tried http://localhost:8000 and http://172.17.0.3:8000.
this is my Dockerfile
# syntax=docker/dockerfile:1
FROM node:16.15.1
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --production
COPY . .
CMD [ "node", "server.js" ]
I start my container from the Docker Desktop interface.
I forwarded the traffic from host port 8080 to container port 3000 with this command, and it works now:
docker run -d -p 8080:3000 expressdocker
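In other words, the left side of -p is the host port and the right side must be the port the app actually listens on inside the container (3000 for this server.js). Container IPs like 172.17.0.x are not reachable from the host under Docker Desktop on Windows, so always go through a published port (a sketch):

```shell
docker run -d -p 8080:3000 --name express expressdocker
# then browse to http://localhost:8080
```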

Docker runs only on Port 80

I am unable to run my docker image on Port 4000, even though I can run it on Port 80. What am I doing wrong here?
FROM node:latest as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:latest
COPY --from=build /usr/src/app/dist/admin /usr/share/nginx/html
EXPOSE 4200
I'm creating the image using the following command:
docker build --pull --rm -f "DockerFile" -t admin1:v1 "."
When I run it on port 80, I'm able to use it:
docker run --rm -d -p 4200:4200/tcp -p 80:80/tcp admin1:v1
However, when I run the following command, I'm unable to use it:
docker run --rm -d -p 4200:4200/tcp -p 4000:4000/tcp admin1:v1
I have researched similar questions online, but I haven't been able to fix the problem. Any suggestion will be greatly appreciated!
You need to map a host port to the port the container is actually listening on. The nginx image serves on port 80 inside the container; EXPOSE only documents a port, it does not make nginx listen on 4200.
Try the following command:
docker run --rm -d -p 4200:4200/tcp -p 4000:80/tcp admin1:v1
The following is an extract from the Docker documentation:
-p 8080:80 — Map TCP port 80 in the container to port 8080 on the Docker host.
You can refer to the link for further information:
Docker Documentation
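If you actually want nginx itself to listen on 4200 inside the container, you have to change its configuration rather than just EXPOSE the port. A sketch, assuming the default server block of the official nginx image lives at /etc/nginx/conf.d/default.conf:

```dockerfile
FROM nginx:latest
COPY --from=build /usr/src/app/dist/admin /usr/share/nginx/html
# rewrite the default server block so nginx listens on 4200 instead of 80
RUN sed -i 's/listen  *80;/listen 4200;/' /etc/nginx/conf.d/default.conf
EXPOSE 4200
```

With that change, docker run --rm -d -p 4200:4200 admin1:v1 would work as originally intended.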

How to expose COM port to container correctly?

I cannot expose ttyS5 to an Ubuntu container.
I tried:
docker run -t -i --privileged -v /dev/ttyS5:/dev/ttyS5 ubuntu /bin/bash
Inside the container, ttyS5 is a directory, not a device node.
I have confirmed ttyS5 is working; I sent and received data between ttyS5 and ttyS6 (COM6).
Does anyone know how to fix this issue?
PS: My system is Windows 10 + Docker Desktop + the Ubuntu 18.04 app.
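For what it's worth, on a Linux host a serial device is normally passed through with --device rather than a volume mount (a sketch):

```shell
# --device creates the character device node inside the container
docker run -it --device=/dev/ttyS5:/dev/ttyS5 ubuntu /bin/bash
```

Note, however, that Docker Desktop on Windows runs Linux containers inside a VM, so host COM ports are generally not visible to the container even with --device; the device has to exist inside the VM first.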

Passing environment variables from host to docker

I know this is a question many people have asked, but I don't know what I'm doing wrong. I have this Dockerfile:
FROM nginx
ENV USER=${USER}
COPY proof.sh .
RUN chmod 777 proof.sh
CMD echo ${USER} >> /usr/share/nginx/html/index.html
CMD ["nginx", "-g", "daemon off;"]
EXPOSE 80
When I execute the env command on Linux, I get USER=fran, and after that I run these commands:
sudo docker run --entrypoint "/bin/sh" 5496674f99e5 ./prueba.sh
and also:
docker run --env USER -d -p 89:80 prueba
If I have understood correctly, with the last command the environment variable from the host should be passed into the container, but I don't get anything. Why? Could you help me? Thanks in advance.
It should be like this:
FROM nginx
ARG USER=default
ENV USER=${USER}
RUN echo ${USER}
CMD ["nginx", "-g", "daemon off;"]
EXPOSE 80
Note that only the last CMD in a Dockerfile takes effect, so in your version the CMD echo line is overridden by the nginx CMD and never runs.
Now, if you build with a build argument:
docker build --no-cache --build-arg USER=fran_new -t my_image .
you will see fran_new in the build logs.
Or, if you want the container to use the user from the host OS at run time, pass it as an environment variable:
docker run --name my_container -e USER=$USER my_image
The USER variable inside the container will then be the same as on the host.
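If the goal is to get ${USER} into the page that nginx serves, one option is to write it at build time with RUN instead of CMD (a sketch; the image and build-arg names are illustrative):

```dockerfile
FROM nginx
ARG USER=default
# RUN executes during the build, so the value is baked into the image;
# a CMD echo would instead be discarded once a later CMD is declared
RUN echo "${USER}" > /usr/share/nginx/html/index.html
EXPOSE 80
```

Build and run, for example, with docker build --build-arg USER=$USER -t prueba . followed by docker run -d -p 89:80 prueba; the page at http://localhost:89 should then show the host user name.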
