Question about needing host network when running pouchdb-server inside Docker

I have a Docker image that exposes port 5984 from a pouchdb-server running inside the container.
Here's what the Dockerfile looks like:
FROM node:16-alpine
WORKDIR /pouchdb
RUN apk update
RUN npm install --location=global pouchdb-server
EXPOSE 5984
ENTRYPOINT ["pouchdb-server"]
CMD ["--port", "5984"]
Running the container using the default bridge network doesn't work:
docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1
But upon running the container using the host docker network, it works like a charm.
docker run -d -v $(pwd)/pouchdb -p 5984:5984 --network host pouchdb-server:v1
I understand that this removes the network isolation between the container and the host network, but it has the caveat of possible port conflicts.
My question is: is there any way to make this work without using the host network?
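A likely cause (an assumption on my part, since no server logs are shown) is that pouchdb-server binds to 127.0.0.1 inside the container by default, so traffic forwarded over the bridge network never reaches it, while on the host network the container's 127.0.0.1 is the host's own loopback. If your pouchdb-server version supports the --host flag, binding to all interfaces should make the bridge network work; arguments after the image name replace the Dockerfile's CMD and are appended to the ENTRYPOINT:
docker run -d -p 5984:5984 pouchdb-server:v1 --port 5984 --host 0.0.0.0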

Related

Files on host machine not available in container when using bind volume

I am facing an issue where, after running the container and using a bind mount to mount a directory on the host into the container, I am not able to see new files created on the host machine inside the container. Below is my project structure.
The Python code creates a file inside the container which should be available on the host machine too; however, this does not happen when I start the container with the command below. Updates to the Python code and HTML are available inside the container, though.
sudo docker container run -p 5000:5000 --name flaskapp --volume feedback1:/app/feedback/ --volume /home/deepak/PycharmProjects/NewDockerProject/sampleapp:/app flask_image
However, after starting the container using the command below, everything seems to work fine. I can see all the files from container to host and vice versa (newly created, edited). I got this command from the Docker in a Month of Lunches book.
sudo docker container run --mount type=bind,source=/home/deepak/PycharmProjects/NewDockerProject/sampleapp,target=/app -p 5000:5000 --name flaskapp flask_image
Below is the content of my Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python","main.py"]
Could someone please help me figure out the difference between the two commands? I am using Ubuntu. Thank you.
In my case I got volumes working using the following docker run args (though I am running without --mount type=bind):
docker run -it ... -v mysql_data:/var/lib/mysql -v storage:/usr/shared/app_storage
where:
mysql_data is the volume name
/var/lib/mysql is the path inside the container
You can list volumes with:
docker volume ls
and inspect them to see where they point on your system (usually /var/lib/docker/volumes/{volume_name}/_data):
docker volume inspect mysql_data
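For reference, inspecting a volume prints JSON that includes the Mountpoint; an abridged sketch of what the output looks like (exact fields vary by Docker version and platform):
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/mysql_data/_data",
        "Name": "mysql_data",
        "Scope": "local"
    }
]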
To create a volume, use the following command:
docker volume create {volume_name}

Docker container can't communicate with other container on the same network

I have two images that should communicate with each other. The first image is the web API, and the second image is the web itself.
Before I build and run the images, I create a new network called nat using the command:
docker network create nat
After this, I create my images, which I call image-api, running on port 8080, and image-web, running on port 8081.
Then, I run the image-api image using the command:
docker run -d -p 3000:8080 --network nat image-api
I mapped container port 8080 to host port 3000. When I access localhost port 3000 in my browser, it runs without error and gives the response it should.
The problem is with my second image, image-web. I try to run it using the command:
docker run -d -p 3001:8081 --network nat image-web
When I try to access localhost:3001 in my browser, it's running, but not showing the data from the image-api container. When I check the logs of the image-web container, I see an error like:
Error: getaddrinfo ENOTFOUND image-api
Just in case, I tried to access the image-api container from my image-web container using a URL like this:
http://image-api/ping
Here's my image-api Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json /app
RUN npm install
FROM node:14
EXPOSE 8080
CMD ["node", "server.js"]
WORKDIR /server
COPY --from=builder /app/node_modules /server/node_modules
COPY server.js .
And here's my image-web Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json .
RUN npm install
FROM node:14
ENV IMAGE_URL=http://image-api/ping
EXPOSE 8081
CMD ["node", "app.js"]
WORKDIR /web
COPY --from=builder /app/node_modules /web/node_modules
COPY /src .
Just in case: I already tried to run both of them without Docker, and they run as they should on my local computer.
EDIT:
I've also tried running the container with a name, like this:
docker run -d -p 3000:8080 --network nat --name imageapi image-api
And try to access it using:
http://imageapi/ping
But it's still giving me the same error.
SOLUTION:
As pointed out by @davidMaze in the answer below: after running the container with the --name flag, I can access it at http://imageapi:8080/ping
You need to explicitly specify a docker run --name when you start the container. Since you can run multiple containers from the same image, Docker doesn't automatically assign a container name based on the image name; you need to set it yourself.
docker run -d -p 3000:8080 --network nat --name image-api image-api
docker run -d -p 3001:8081 --network nat --name image-web image-web
Once you set the container name, Docker provides an internal DNS service and the container will be reachable by that name from other containers on the same network.
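Note that connections between containers use the port the target process listens on inside the container, not the published host port; so from image-web, the API would be reached at http://image-api:8080/ping, while the -p mappings only matter for traffic coming from the host.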
Use bridge networks in the Docker documentation describes this further, though mostly in contrast to an obsolete networking mode.

Docker port not being exposed

I am using Windows and have pulled the Jenkins image successfully via
docker pull jenkins
I am running a new container via the following command, and it seems to start fine. But when I try to access the Jenkins page in my browser, I just get the error message below; I was expecting to see the Jenkins login page. I had the same issue with other images like Redis, Couchbase and JBoss/WildFly. What am I doing wrong? I am new to Docker and following tutorials that describe the command below for exposing ports, as do some answers given here and the docs. Please advise. Thanks.
docker run -tid -p 127.0.0.1:8097:8097 --name jen1 --rm jenkins
In the browser, I just get a normal 'Problem loading page' error.
The site could be temporarily unavailable or too busy.
First, it looks a little strange to use -tid. Since you're trying to run it detached, it'd be better to use just -d, and then use -ti, for example, to get a shell inside it: docker exec -ti jen1 bash.
Second, the container's localhost is not the same as the host's localhost, so I'd run the container directly without 127.0.0.1. If you do want to use it, you may specify --net=host, which makes 127.0.0.1 the same inside and outside Docker.
Third, try accessing it through port 8080 first to get the initial admin password.
In summary:
docker run -d -p 8097:8080 --name jen1 --rm jenkins
Then,
http://172.17.0.2:8080/
Finally, unlock Jenkins by setting the admin password. You can have a look at the startup logs with: docker logs jen1
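Assuming the image follows the standard Jenkins layout (the path below is where the official image writes the secret), you can also read the initial admin password directly from the running container:
docker exec jen1 cat /var/jenkins_home/secrets/initialAdminPassword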
Take a look at the Jenkins Dockerfile from here:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
.....
.....
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
As you can see port 8080 is being exposed and not 8097.
Change your command to
docker run -tid -p 8097:8080 --name jen1 --rm jenkins
What your command does is connect your host port 8097 to the Jenkins image's port 8097, but how do you know that the image exposes/uses port 8097? (Spoiler: it doesn't.)
This image uses port 8080, so you want to map your local port 8097 to that one.
Change the command to this:
docker run -tid -p 127.0.0.1:8097:8080 --name jen1 --rm jenkins
Just tested your command with this small fix, and it works locally for me.
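To double-check which container ports ended up bound to which host addresses, docker port prints the live mappings:
docker port jen1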

Unable to connect to RabbitMQ instance when running from a Docker container built by a Dockerfile

We are attempting to put an instance of RabbitMQ into our Kubernetes environment. To do so, we have to fit it into our build and release process, which includes creating a Docker container from a Dockerfile.
During our original testing, we created the docker container manually with the following commands, and it worked correctly:
docker pull rabbitmq
docker run -p 5672:5672 -d --hostname my-rabbit --name some-rabbit rabbitmq:3
docker start some-rabbit
To create our docker file, we have tried various iterations, with the latest being:
FROM rabbitmq:3 AS rabbitmq
RUN rabbitmq-server -p 5672:5672 -d --hostname my-rabbit --name some-rabbit
EXPOSE 5672
We have also tried it with just RUN rabbitmq-server and without the additional parameters.
This does create a RabbitMQ instance that we are able to ssh into and verify is running, but when we try to connect to it, we receive an error: "ExtendedSocketException: An attempt was made to access a socket in a way forbidden by its access permission" (we are using RabbitMQ's default port of 5672).
I'm not sure what the differences could be between what we've done in the command line and what has been done in the Dockerfile.
Looks like you need to expose quite a few other ports.
I was able to recover the equivalent Dockerfile commands for rabbitmq:latest (rabbitmq:3 looks the same):
ENV PATH=/usr/lib/rabbitmq/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GOSU_VERSION=1.10
ENV RABBITMQ_LOGS=-
ENV RABBITMQ_SASL_LOGS=-
ENV RABBITMQ_GPG_KEY=0A9AF2115F4687BD29803A206B73A36E6026DFCA
ENV RABBITMQ_VERSION=3.7.8
ENV RABBITMQ_GITHUB_TAG=v3.7.8
ENV RABBITMQ_DEBIAN_VERSION=3.7.8-1
ENV LANG=C.UTF-8
ENV HOME=/var/lib/rabbitmq
EXPOSE 25672/tcp
EXPOSE 4369/tcp
EXPOSE 5671/tcp
EXPOSE 5672/tcp
VOLUME /var/lib/rabbitmq
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["rabbitmq-server"]
A Dockerfile is used to build your own image, not to run a container. The question is: why do you need to build your own rabbitmq image? If you don't, then just use the official rabbitmq image (as you originally did).
I'm sure it already has all the necessary EXPOSE directives built in.
Also note that the command-line arguments "-p 5672:5672 -d --hostname my-rabbit --name some-rabbit rabbitmq:3" are handled by the docker daemon, not passed to the rabbitmq process.
If you want to make sure you're forwarding all the necessary ports, just run it with -P.
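For instance, a sketch of that suggestion: -P publishes every EXPOSEd port to a random high port on the host, and docker port shows the resulting mappings:
docker run -d -P --hostname my-rabbit --name some-rabbit rabbitmq:3
docker port some-rabbit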

How to use a Docker container as an Apache server?

I just started using docker and followed following tutorial: https://docs.docker.com/engine/admin/using_supervisord/
FROM ubuntu:14.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
and
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Build and run:
sudo docker build -t <yourname>/supervisord .
sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
My question is: when Docker runs on my server with IP http://88.xxx.x.xxx/, how can I access the Apache server running inside the Docker container from the browser on my computer? I would like to use a Docker container as a web server.
You will have to use port forwarding to be able to access your docker container from the outside world.
From the Docker docs:
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers.
But if you want containers to accept incoming connections, you will need to provide special options when invoking docker run.
So, what does this mean? You will have to specify a port on your host machine (typically port 80) and forward all connections on that port to the docker container. Since you are running Apache in your docker container you probably want to forward the connection to port 80 on the docker container as well.
This is best done via the -p option for the docker run command.
sudo docker run -p 80:80 -t -i <yourname>/supervisord
The part of the command that says -p 80:80 means that you forward port 80 from the host to port 80 on the container.
When this is set up correctly, you can use a browser to go to http://88.x.x.x and the connection will be forwarded to the container as intended.
The Docker docs describes the -p option thoroughly. There are a few ways of specifying the flag:
# Maps the provided host_port to the container_port but only
# binds to the specific external interface
-p IP:host_port:container_port
# Maps the provided host_port to the container_port for all
# external interfaces (all IPs)
-p host_port:container_port
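For example, to publish Apache only on the host's loopback interface (reusing the image name from above):
sudo docker run -p 127.0.0.1:80:80 -t -i <yourname>/supervisord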
Edit: When this question was originally posted there was no official Docker image for the Apache web server. Now one exists.
The simplest way to get Apache up and running is to use the official Docker container. You can start it by using the following command:
$ docker run -p 80:80 -dit --name my-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This way you simply mount a folder on your file system so that it is available in the docker container and your host port is forwarded to the container port as described above.
There is an official image for Apache. The image documentation contains instructions on how you can use this official image as a base for a custom image.
To see how it's done take a peek at the Dockerfile used by the official image:
https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile
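For instance, a minimal custom image following the pattern shown in that documentation (the public-html directory name is illustrative):
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/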
Example
Ensure the files are accessible to root:
sudo chown -R root:root /path/to/html_files
Host these files using the official Docker image:
docker run -d -p 80:80 --name apache -v /path/to/html_files:/usr/local/apache2/htdocs/ httpd:2.4
The files are then accessible on port 80.
