port mapping -p 8080:8080 vs --net=host - docker

Dockerfile
FROM ubuntu:20.04
# Setup
RUN apt-get update && apt-get install -y unzip xz-utils git openssh-client curl python3 && apt-get upgrade -y && rm -rf /var/cache/apt
# Install Flutter
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
RUN flutter channel master
RUN flutter upgrade
RUN flutter config --enable-web
RUN flutter doctor -v
# Copy files to container and get dependencies
COPY . /usr/local/bin/app
WORKDIR /usr/local/bin/app
RUN flutter pub get
RUN flutter build web
# Document the exposed port and start server
EXPOSE 8080
RUN chmod +x /usr/local/bin/app/server/server.sh
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
The entrypoint server.sh file:
#!/bin/bash
cd build/web/
python3 -m http.server 8080
I build an image: docker build --network=host --tag image1 .
Then I try to run it:
docker run -d -p 8080:8080 image1 -- doesn't work: no error, but the page just doesn't load
docker run -d image1 -- doesn't work: no error, but the page just doesn't load
docker run -d --net=host image1 -- works!
Why does -p 8080:8080 not work, whereas --net=host works?

How are you trying to access your app, at port 8000 or 8080? Your title and the command you posted don't seem to match. Are you trying to map 8080 on your machine to 8080 in the app? If so, you have a typo in your command: it maps 8000 to 8080, and I'm guessing you're then trying to access it at localhost:8080 and getting nothing.
I think it should just be docker run -d -p 8080:8080 image1, and then you should be able to access it at localhost:8080 just fine.
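When a published port seems dead, it helps to find out where the chain breaks (a debugging sketch, not from the original thread; replace <container> with the name or ID shown by docker ps):

```shell
# Show running containers and their published ports
docker ps --format '{{.Names}}\t{{.Ports}}'
# Confirm which host port Docker mapped to 8080 in the container
docker port <container> 8080
# Hit the server from inside the container first, then from the host;
# if the first works and the second doesn't, the server is likely bound
# to the container's localhost only.
docker exec <container> curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/
```

The curl inside the container works here because the Dockerfile above installs curl in the image.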


How to exchange files between docker container and local filesystem?

I have TypeScript code that reads the contents of a directory and has to delete them one by one at set intervals.
Everything works fine locally. I made a Docker container for my code and wanted to achieve the same thing; however, I realized that the directory contents are the ones that existed when the image was built.
To my understanding, the connection between the Docker container and the local file system is missing.
I have been reading about the bind and volume options, and I came across the following simple tutorial:
How To Share Data Between the Docker Container and the Host
According to the previous tutorial, theoretically, I would be able to achieve my goal:
If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.
However, I followed exactly the same steps and still couldn't see changes made locally reflected in the Docker container, or vice versa.
My question is: How can I access my local file system from a docker container to read/write/delete files?
Update
This is my dockerfile
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
Easy way: a volume mount on the docker run command
docker run -it -v /<source dir>/:/<destination dir> <image_name> bash
Another way is to use docker-compose.
Let's try it with docker-compose:
Put your Dockerfile and docker-compose file in the same directory.
The main focus:
volumes:
  - E:\dirToMap:/vol1
docker-compose.yaml
version: "3"
services:
  ampervue:
    build:
      context: ./
    image: <Image Name>
    container_name: ampervueservice
    volumes:
      - E:\dirToMap:/vol1
    ports:
      - "8080:8080"
And add volume in dockerfile
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
nodejs; \
apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
VOLUME /vol1
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
and run the following command to bring the container up:
docker-compose -f "docker-compose.yaml" up -d --build
The examples below come directly from the docs:
The --mount and -v examples below produce the same result. You can't run them both unless you remove the devtest container after running the first one.
with -v:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest
with --mount:
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
This is where you have to type your 2 different paths:
-v /path/from/your/host:/path/inside/the/container
<-------host------->:<--------container------->
--mount type=bind,source=/path/from/your/host,target=/path/inside/the/container
<-------host-------> <--------container------->
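As a quick sanity check that a bind mount really is live in both directions (a sketch, assuming a local Docker daemon and the alpine image; paths and file names are illustrative):

```shell
mkdir -p "$(pwd)/target"
# Write a file from inside a container through the bind mount...
docker run --rm -v "$(pwd)"/target:/app alpine sh -c 'echo from-container > /app/probe.txt'
# ...and read it back on the host; edits made on the host are likewise
# visible to any container that mounts the same directory.
cat "$(pwd)/target/probe.txt"
```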

How can I run this docker image in detached mode, and export the port 3000 so I can view locally?

All the examples I see online are using docker-compose, but I want to try and run this simple rails application (no db needed) using just the docker run command.
I want to expose the port 3000 so I can view it locally on my laptop in the browser on the same port.
I have this Dockerfile so far:
FROM ruby:2.6-alpine
RUN apk update && apk --update add \
build-base \
nodejs \
postgresql-dev \
tzdata \
imagemagick
# yarn
ENV PATH=/root/.yarn/bin:$PATH
RUN apk add --virtual build-yarn curl && \
touch ~/.bashrc && \
curl -o- -L https://yarnpkg.com/install.sh | sh && \
apk del build-yarn
RUN mkdir /app
WORKDIR /app
RUN gem update --system
RUN gem install bundler
COPY Gemfile Gemfile.lock ./
RUN bundle install --binstubs
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
I was able to build the docker image so far using:
docker build -t my-rails .
What command should I use to run the rails app, detached, exposing the port now?
To run your app in Docker, use the command below:
docker run -d -p 3000:8080 -v <host_dir>:<container_dir> -t image_name:tag
Here 8080 stands for your app's port; change the ports as per your needs. The first port is published on your host machine's IP, so that's the one you browse to. Remove the -v part if you don't have any volume to mount. -d means detached mode.
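Applied to the Rails image built above (assuming puma listens on its default port 3000; the container name is illustrative):

```shell
docker run -d -p 3000:3000 --name my-rails-app my-rails
# then browse to http://localhost:3000
```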

SonarQube using Docker and how to run it?

I need to install SonarQube using Docker.
I tried the code below to install it:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get -y install unzip curl openjdk-7-jre-headless
RUN cd /tmp && curl -L -O https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-7.0.zip
RUN unzip /tmp/sonarqube-7.0.zip
EXPOSE 9000
CMD ["chmod +x","/tmp/sonarqube-7.0/bin/linux-x86-64/sonar.sh"]
CMD ["/sonarqube-7.0/bin/linux-x86-64/sonar.sh","start"]
Its build is successful.
My question is:
1. How can I run it on a server?
I tried docker run -d --name image -p 9000:9000 -p 9092:9092 sonarqube
but it's not connecting. Can anyone help me from here, or do I need to change the script?
Try the steps below.
Modify the last line of the Dockerfile to:
RUN echo "/sonarqube-7.0/bin/linux-x86-64/sonar.sh start" >> .bashrc
Rebuild the image.
Start a container:
docker run -d --name image -p 9000:9000 -p 9092:9092 sonarqube /bin/bash
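An alternative sketch, separate from the steps above: run sonar.sh in the foreground with console so it stays the container's main process and keeps the container alive; the paths follow the unzip layout in the question's Dockerfile:

```dockerfile
RUN chmod +x /sonarqube-7.0/bin/linux-x86-64/sonar.sh
EXPOSE 9000
CMD ["/sonarqube-7.0/bin/linux-x86-64/sonar.sh", "console"]
```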

Not being able to access webapp from host in Docker

I have a simple web project which I want to "Dockerize", but I keep failing to expose the webapp to the host.
My Dockerfile looks like:
FROM debian:jessie
RUN apt-get update -y && \
apt-get install -y python-pip python-dev curl && \
pip install --upgrade pip setuptools
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
WORKDIR /app/web
And requirements.txt looks like:
PasteScript==2.0.2
Pylons==1.0.2
The web directory was built using:
paster create --template=pylons web
And finally start_server.sh:
#!/bin/bash
paster serve --daemon development.ini start
Now I am able to build with :
docker build -t webapp .
And then run command:
docker run -it -p 5000:5000 --name app webapp:latest /bin/bash
And then inside the docker container I run:
bash start_server.sh
which successfully starts the webapp on port 5000 and if I curl inside docker container I get expected response. Also the container is up and running with the correct port mappings:
bc6511d584ae webapp:latest "/bin/bash" 2 minutes ago Up 2 minutes 0.0.0.0:5000->5000/tcp app
Now if I run docker port app it looks fine:
5000/tcp -> 0.0.0.0:5000
However, I cannot get any response from the server on the host with:
curl localhost:5000
I have probably misunderstood something here, but it seems fine to me.
In your Dockerfile you need to add EXPOSE 5000. Your port mapping is correct; think of it as opening the port on your container, which you then map to the host with -p.
The actual answer was in a comment:
Bind to 0.0.0.0 instead of localhost when you call make_server.
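The bind-address point can be demonstrated without Docker at all (a sketch; port 8123 is arbitrary, chosen to avoid clashing with common services):

```shell
# Start a throwaway HTTP server bound to all interfaces (0.0.0.0) -- the
# same binding a containerized server needs for -p port mapping to work.
# A server bound to 127.0.0.1 would only answer inside its own network
# namespace, which is why curl from the host gets nothing.
python3 -m http.server 8123 --bind 0.0.0.0 >/dev/null 2>&1 &
SRV=$!
sleep 1
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:8123/)
echo "$STATUS"
kill $SRV
```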

Docker-machine Port Forwarding on Windows not working

I'm attempting to access my Django app running within Docker on my Windows machine, using docker-machine. I've been taking a crack at this for hours now.
Here's my Dockerfile for my django app:
FROM python:3.4-slim
RUN apt-get update && apt-get install -y \
gcc \
gettext \
vim \
curl \
postgresql-client libpq-dev \
--no-install-recommends && rm -rf /var/lib/apt/lists/*
EXPOSE 8000
WORKDIR /home/
# add app files from git repo
ADD . server/
WORKDIR /home/server
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "8000"]
So that should be exposing (at least in the container) port 8000.
When I use the command docker-machine ip default I am given the IP 192.168.99.101. I go to that IP on port 8000 but get no response.
I went into VirtualBox to see if forwarding those ports would work. (The screenshot of the port-forwarding configuration is not reproduced here.)
I also tried using 127.0.0.1 as the Host IP. I also tried disabling the windows firewall.
Here's my command for starting the container:
docker run --rm -it -p 8000:8000 <imagename>
I am at a loss on why I am unable to connect on that port. When I run docker-machine ls the url it gives me is tcp://192.168.99.101:2376 and when I go to that it gives me some kind of file back, so I know the docker-machine is active on that port.
Also when I run docker ps I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c00cc28a2bd <image name> "python manage.py run" 7 minutes ago Up 7 minutes 0.0.0.0:8000->8000/tcp drunk_knuth
Any help would be greatly appreciated.
The issue was that the server was running on 127.0.0.1 when it should have been running on 0.0.0.0.
I changed the CMD line in the Dockerfile from
CMD ["python", "manage.py", "runserver", "8000"]
to
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
and it now works.
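With the corrected CMD in place, the app should be reachable at the docker-machine IP (a sketch; the image tag is illustrative and the default machine name is assumed):

```shell
docker build -t django-app .
docker run --rm -d -p 8000:8000 django-app
curl "http://$(docker-machine ip default):8000/"
```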
