Docker compose bind failed: port is already allocated

I have been trying to get a socketio server moved over from EC2 to Docker.
I have been able to connect to the socket via a web (http) client, but connecting directly to the socket via iOS or Android seems to be impossible.
I read that one possible issue is that exposed ports are not actually published when using Docker. Since our mobile apps currently connect on port 8080 on our classic EC2 instance, I set up a docker-compose.yml file to try to open all ports and communication protocols, but I have two issues:
1. I am not sure what the service should be called, so I went with "src" (see Dockerfile below). But I am wondering if it should be "app", since the server file is app.js?
2. Getting "Bind for 0.0.0.0:8080 failed: port is already allocated".
Dockerfile
FROM ubuntu:14.04
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /src
ADD package.json /src
RUN apt-get update
RUN apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_4.x | sudo bash -
RUN apt-get install --yes nodejs
RUN apt-get install --yes build-essential
RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
RUN cd /src; npm install
RUN npm install --silent socket.io@0.9.14
WORKDIR /src
# Bundle app source
# Trouble with COPY http://stackoverflow.com/a/30405787/2926832
COPY . /src
ADD app.js /src/
EXPOSE 8080
CMD ["node", "/src/app.js"]
docker-compose.yml
src:
  build: .
  volumes:
    - ./:/src
  expose:
    - 8080
  ports:
    - "8080"
    - "8080:8080/udp"
    - "8080:8080/tcp"
    - "0.0.0.0:8080:8080"
    - "0.0.0.0:8080:8080/tcp"
    - "0.0.0.0:8080:8080/udp"
  environment:
    - NODE_ENV=development
    - PORT=8080
  command:
    sh -c 'npm i && node server.js'
    echo 'ready'

Getting "Bind for 0.0.0.0:8080 failed: port is already allocated".
You have duplicate port allocations.
When no protocol is specified, a port mapping defaults to tcp, meaning "0.0.0.0:8080:8080" and "0.0.0.0:8080:8080/tcp" both try to bind the same port: hence your error.
Since Docker binds to 0.0.0.0 by default, the same applies to "8080:8080/tcp" and "0.0.0.0:8080:8080/tcp" - there is no need for both.
You can therefore shrink your ports section to:
ports:
  - "8080:8080"
  - "8080:8080/udp"
I am not sure what the service should be called
It is completely up to you. Services are usually named after their content or their role in the network, such as nginx_proxy, laravel_backend, etc., so node_app sounds good to me; app is also fine in small networks. src doesn't appear to carry any meaning, but again, it is just an identifier for your service, with no additional effect.
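Putting both fixes together, a minimal sketch of the compose file (node_app is just an illustrative name, and I'm assuming the entrypoint is /src/app.js as in your Dockerfile):
node_app:
  build: .
  volumes:
    - ./:/src
  ports:
    - "8080:8080"        # tcp is the default
    - "8080:8080/udp"
  environment:
    - NODE_ENV=development
    - PORT=8080
  command: sh -c 'npm i && node /src/app.js'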

You are simply already running another container on the same port. You can find it with docker ps and stop it with docker stop [CONTAINER ID].

You just need to open the docker-compose.yml file and change the port mapping. This happened to me because the port was already used by a container belonging to another member of my company.
For example, change 0.0.0.0:80 to 0.0.0.0:8000,
i.e. the ports entry from "80:80" to "8000:8000".

Related

Exposing Docker Volumes to Nginx

I'm trying to connect a JSON file, which resides in a Docker volume of the following container, to my main Docker container, which is running a Django project.
Since I am using CapRover, my Docker Compose options are very limited, so Docker Compose is not really an option. Instead, I want to just expose the JSON file over the web with a link.
Something like domain.com/folder/jsonfile.json
Can somebody tell me if this is possible with this Dockerfile?
The image I am using is crucial to the container, so can I just add an nginx image, or do I need other changes to make this work?
Or is nginx not even necessary?
FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d
Using the same volume in 2 containers
docker-compose:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file'
Named volumes (the mechanism above) replace volumes_from, which was removed in the v3 compose format; the variant below still works for v2:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes_from:
      - service1
If you want to avoid unintentional modification, add :ro (read-only) to the mount in the consuming service:
service1:
  volumes:
    - 'shared_vol:/path/to/file'
service2:
  volumes:
    - 'shared_vol:/path/to/file:ro'
http-server
You can certainly provide the file via HTTP (or another protocol). There are two options:
Include an HTTP service in your container (how easy this is depends on what is already in the container). With Node.js, for example, you can use https://www.npmjs.com/package/http-server. If image size doesn't matter, just install:
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
docker-compose (http-server listens on 8080 by default, so publish that port):
existing_service:
  ports:
    - '8080:8080'
Run a standalone HTTP server (nginx, Apache httpd, ...) in another container; but then you again depend on sharing the same volume between two services, which is rather overkill for local setups. A sketch of this option follows below.
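A hedged sketch of that second option, assuming a v3 compose file and illustrative names (the JSON would then be reachable at http://yourhost:8081/jsonfile.json):
version: "3"
volumes:
  shared_vol:
services:
  app:
    image: your-app-image        # the existing container that writes the JSON
    volumes:
      - shared_vol:/voldata
  fileserver:
    image: nginx:1.25
    volumes:
      - shared_vol:/usr/share/nginx/html:ro   # nginx serves this directory by default
    ports:
      - "8081:80"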
Base image
If you don't have good reasons, I would never use something like :devel, :rolling or :latest as a base image. Stick to an LTS version instead, like ubuntu:22.04.
Testing for http-server
Dockerfile
FROM ubuntu:20.04
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server@13.1.0 # issue with JSON files in v14: https://github.com/http-party/http-server/issues/634
COPY ./test.json /usr/wwwhttp/test.json
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]
# docker build -t test/httpserver:latest .
# docker run -p 8080:8080 test/httpserver:latest
Disclaimer:
I am not that familiar with Node Docker images; this is just meant to give a quick working solution to build on. I'm not using Node.js in production, but I'm sure the image can be optimized from being fat to... well... being rather fat. For quick prototyping, size doesn't matter.
If you just want two containers to access the same file, use a shared volume with --mount, for example:
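A sketch of that (the volume and image names are illustrative):
# create a named volume and mount it into both containers
docker volume create shared_data
docker run -d --name writer --mount source=shared_data,target=/voldata your-app-image
docker run -d --name reader --mount source=shared_data,target=/data,readonly ubuntu:22.04 sleep infinity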

Extend prisma Docker image with another layer of node express image

I have a prisma server image from Docker Hub:
prismagraphql/prisma:1.34
To run on port 4466, this prisma image requires a database connection string, which is passed as an environment variable using a docker-compose file like the one shown below:
prisma:
  image: prismagraphql/prisma:1.34
  ports:
    - "4466:4466"
  environment:
    PRISMA_CONFIG: |
      port: 4466
      databases:
        default:
          connector: mongo
          uri: mongodb://mongodb
I am trying to extend the above prisma server image as shown below.
FROM prismagraphql/prisma:1.34
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.7/main/ nodejs=8.9.3-r1
WORKDIR /project
COPY . .
# To handle 'not get uid/gid' error in alpine linux set unsafe-perm true
RUN apk update && apk upgrade \
&& npm config set unsafe-perm true \
&& npm install -g yarn \
&& npm install -g prisma \
&& yarn install \
&& chmod +x ./entrypoint.sh \
&& chmod +x ./wait-for-it.sh
EXPOSE 4466 4000
ENTRYPOINT ["./entrypoint.sh"]
The entrypoint.sh file is like this
#!/bin/bash
# wait for the prisma service to start.
# then run prisma deploy (more on that later)
./wait-for-it.sh prisma:4466 -- prisma deploy
# go into the project...
cd /project
# run an npm command to use nodemon to start/watch the server
npm run start
In the above Dockerfile I try to install a Node.js app on top of the existing prisma image from Docker Hub.
This Node.js application is called prisma-nexus. Nexus needs to connect to prisma on localhost:4466, and nexus itself runs on port 4000.
When I run the image built above, nexus (the Node.js app) is not able to connect to prisma, and I get this error:
Could not connect to server at http://localhost:4466. Please check if your server is running.
Finally, I run the extended image like this:
mongodb:
  image: mongo:4.2
  container_name: mongodb
  volumes:
    - ./mongo-volume:/data/db
  ports:
    - "27017:27017"
  networks:
    - prisma
prisma:
  image: extended-image-here:1.0
  container_name: prisma-server
  restart: always
  ports:
    - "4466:4466"
    - "4000:4000"
  environment:
    PRISMA_CONFIG: |
      port: 4466
      databases:
        default:
          connector: mongo
          uri: mongodb://mongodb
What am I doing wrong here? Please help.
I guess the reason it does not work is that the image prismagraphql/prisma:1.34 already has an entrypoint, and at the end of your Dockerfile there is another entrypoint. Only a single entrypoint is in effect for an image; the later one overrides the earlier...
First: In your code, you put the MongoDB container on a specific named network called prisma but you do not do the same thing with the prisma container. When using compose, containers on the same overlay network are resolved by name, but requests will only be routed between containers if they're on the same network.
Next: you shouldn't be running two servers in the same container. It's better to not build your app on top of the prisma image at all, but to build it instead on top of alpine or ubuntu (or anything else really). It should connect to another container where the prisma server is running. In the comments you say that you really want to do this, but you really shouldn't. It's not much harder to run a compose configuration on a client's server rather than a single container, but it is that much harder to run 2 servers in a single container.
Finally: The localhost reference (nexus, you say?) should be configurable in some way. Find out how, and point it at something like http://prisma:4466. That way you'll have 3 containers -- mongodb, prisma, and your own app -- as sketched below.
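A hedged sketch of that three-container layout (the app image name and the PRISMA_ENDPOINT variable are assumptions; use whatever configuration mechanism your nexus app actually reads):
version: "3"
services:
  mongodb:
    image: mongo:4.2
    volumes:
      - ./mongo-volume:/data/db
    networks:
      - prisma
  prisma:
    image: prismagraphql/prisma:1.34
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: mongodb://mongodb
    networks:
      - prisma
  app:
    image: your-nexus-app:1.0        # built from its own Dockerfile, not FROM prisma
    ports:
      - "4000:4000"
    environment:
      - PRISMA_ENDPOINT=http://prisma:4466   # assumption: the app reads its prisma URL from an env var
    networks:
      - prisma
networks:
  prisma: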

Docker compose getting mysql db fully started before flask app starts

I just started to dockerize my app. I've built my Dockerfile and docker-compose.yml, and everything seems to work fine except for one thing: sometimes my Flask app starts too quickly and throws a connection-refused error, because the MySQL DB is not fully up yet. I am using a healthcheck to check if the DB is up, but this does not seem reliable (I'm even verifying that I can run SHOW DATABASES, but MySQL apparently initializes more things after the healthcheck passes? Not sure what the healthcheck is for, then). In my output I see that the DB does get created first, but it is still initializing when the Flask app starts up. Ideally, when I run docker-compose up, I want to see this line first,
db_1_eae741771281 | 2018-11-10T00:50:21.473098Z 0 [Note] mysqld: ready for connections.
and then start my flask app entry point. Currently, it doesn't do this.
Is there a more reliable way to ensure MySQL is fully up before running my start.sh?
Dockerfile:
FROM python:3.5-alpine
RUN apk update && apk upgrade
RUN apk add --no-cache curl python build-base openldap-dev python2-dev python3-dev pkgconfig python-dev libffi-dev musl-dev make gcc
RUN pip install --upgrade pip
RUN adduser -D user
WORKDIR /home/user
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
COPY app app
COPY start.sh ./
RUN chmod +x start.sh
RUN chown -R user:user ./
USER user
EXPOSE 5000
ENTRYPOINT ["./start.sh"]
docker-compose.yml:
version: "2.1"
services:
db:
image: mysql:5.7
ports:
- "32000:3306"
environment:
- MYSQL_DATABASE=mydb
- MYSQL_USER=user
- MYSQL_PASSWORD=user123
- MYSQL_ROOT_PASSWORD=user123
volumes:
- ./db:/docker-entrypoint-initdb.d/:ro
healthcheck:
test: "mysql --user=user --password=user123 --execute \"SHOW DATABASES;\""
timeout: 20s
retries: 20
app:
build: ./
ports:
- "5000:5000"
depends_on:
db:
condition: service_healthy
start.sh
#!/bin/sh
source venv/bin/activate
# Start Gunicorn processes
echo Starting Gunicorn.
exec gunicorn -b 0.0.0.0:5000 wsgi --chdir my_app --timeout 9999 --workers 3 --access-logfile - --error-logfile - --capture-output --log-level debug
OK, I also had problems with health checks...
Maybe not the most optimal, but the most reliable solution is to use a MySQL client (mysqladmin) to ping your MySQL server before starting your application.
1 - Create a wait.sh script (db is your MySQL service name here):
#!/bin/sh
# Wait until MySQL is ready
while ! mysqladmin ping -h"db" -P"3306" --silent; do
    echo "Waiting for MySQL to be up..."
    sleep 1
done
2 - Install a MySQL client in your app's Dockerfile:
# install a mysql client, used to ping mysql
# (this is for Debian-based images; on Alpine, like your python:3.5-alpine base, use: apk add --no-cache mysql-client)
RUN apt-get -y install mysql-client
3 - In your docker-compose.yml file, add the scripts to your container (I used volumes, but you can keep using COPY) and run wait.sh before start.sh:
app:
  build: ./
  ports:
    - "5000:5000"
  depends_on:
    - db
  command: bash -c "/usr/local/bin/wait.sh && /usr/local/bin/start.sh"
  volumes:
    - ./start.sh:/usr/local/bin/start.sh
    - ./wait.sh:/usr/local/bin/wait.sh
This should work.
If you really don't want to download a MySQL client, try this (again, db is your MySQL service name here). It has worked in most of my projects but not all: the /dev/tcp redirection it relies on is a bash feature, not plain POSIX sh, so it depends on the shell available in the image:
#!/bin/bash
# Wait until MySQL is ready (/dev/tcp requires bash)
while ! exec 6<>/dev/tcp/db/3306; do
    echo "Trying to connect to MySQL at 3306..."
    sleep 5
done
PS: avoid naming your services "app" or "db"; you may have problems later if other containers use those same service names (even in different networks).
While using a health check is easier, its value depends entirely on how reliable the check itself is.
Another approach would be to rely on projects like wait-for-it or wait-for, in your app container.
Since you are getting a connection refused, these scripts return only once the connection is possible, so your app starts only after that; see the sketch below.
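For example, with wait-for-it.sh copied into the image, the service definition could look like this (a sketch; note that wait-for-it needs bash, which Alpine-based images don't ship by default):
app:
  build: ./
  ports:
    - "5000:5000"
  depends_on:
    - db
  # wait until db:3306 accepts connections, then run the normal entrypoint
  command: ["./wait-for-it.sh", "db:3306", "--", "./start.sh"]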
Also, in case that doesn't work either, you could have a separate script (Python in your case) that checks until the DB is ready, and call it in your start.sh before starting the Flask app; a minimal sketch follows.
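A sketch of such a check, assuming the compose service name db and port 3306 from above (wait_for_db.py is a hypothetical file name):
# wait_for_db.py - block until MySQL accepts TCP connections
import socket
import time

HOST, PORT = "db", 3306  # compose service name and MySQL port

while True:
    try:
        # a successful TCP connect means the server is at least accepting connections
        with socket.create_connection((HOST, PORT), timeout=2):
            break
    except OSError:
        print("Waiting for MySQL...")
        time.sleep(1)
print("MySQL is up.")
In start.sh you would run python wait_for_db.py after source venv/bin/activate and before the gunicorn line.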
This is a common problem with multi-container setups: it is difficult to control the order and speed at which different containers start. A container orchestration solution like Kubernetes might be able to help in such cases.
Kubernetes has the concept of init containers, which run to completion before your dependent container can start. You can find a sample init container here:
https://www.handsonarchitect.com/2018/08/understand-kubernetes-object-init.html
This YouTube video might be helpful as well:
https://www.youtube.com/watch?v=n2FPsunhuFc
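For illustration, a minimal init-container sketch of that idea (names and images are illustrative; db must resolve to your MySQL Service):
apiVersion: v1
kind: Pod
metadata:
  name: flask-app
spec:
  initContainers:
    - name: wait-for-mysql
      image: busybox:1.36
      # loop until the db service accepts TCP connections on 3306
      command: ['sh', '-c', 'until nc -z db 3306; do echo waiting for db; sleep 2; done']
  containers:
    - name: app
      image: your-flask-image
      ports:
        - containerPort: 5000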

How to run any commands in docker volumes?

After a couple of days of testing and working with Docker (I am generally trying to migrate from Vagrant to Docker), I encountered a huge problem which I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now, in example_1, composer.json is not found, and in example_2 Apache says the root directory is not found.
The file/directory in question is /var/www/dev.
I guess it's because it is a volume, and the volume won't be there until the container is fully up: if I build the container without the failing commands and then log in, I can execute the same commands from the command line without any error.
How do I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
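If composer isn't preinstalled there, one common way to add it is to copy the binary from the official composer image (a sketch):
# in the Dockerfile, before RUN composer install
COPY --from=composer /usr/bin/composer /usr/bin/composer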
In both Dockerfiles, remember that each RUN command starts a new container from the previous image layer, runs its command, and saves the result as a new layer. That means commands like RUN cd ... have no effect on later steps, and you can't start a service in the background in one RUN command and have it still running later; it gets stopped before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker, and you shouldn't try to use them. Standard practice is to start the server process in the foreground when the container launches, via a default CMD directive; see the sketch below. The flip side of this is that, since the server won't start until docker run time, your volume will be mounted by that point. I might RUN mkdir the directory in the Dockerfile just to be sure it exists.
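For the Apache case, that could look something like this (a sketch showing only the Apache parts; apachectl -D FOREGROUND keeps the server in the foreground):
FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y apache2
COPY docker/config/vhosts /etc/apache2/sites-available/
# make sure the document root exists even before the volume is mounted
RUN mkdir -p /var/www/dev
EXPOSE 80
# run Apache as the container's foreground process instead of 'service apache2 restart'
CMD ["apachectl", "-D", "FOREGROUND"]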
The problem seems to be the execution order. At image build time the volume is not mounted; when you start a container from that image, /var/www/dev is overwritten with your local mount, hiding whatever the image put there.
If you need no access from your host, you can simply skip the extra volume.
In case you want to use it in other containers too, you should work with symlinks.

App running in Docker container on port 4567 can't be accessed from the outside

Update: the post now includes all files required to recreate the setup. Still the same problem: not able to access the service running in the container.
FROM python:3
RUN apt-get update
RUN apt-get install -y ruby rubygems
RUN gem install sinatra
WORKDIR /app
ADD . /app/
EXPOSE 4567
CMD ruby hei.rb -p 4567
hei.rb
require 'sinatra'
get '/' do
  'Hello world!'
end
docker-compose.yml
version: '2'
services:
  web:
    build: .
    ports:
      - "4567:4567"
I'm starting the party by running docker-compose up --build.
docker ps returns:
0.0.0.0:4567->4567/tcp
Still, no response from port 4567 when testing with curl from the host machine:
$ curl 127.0.0.1:4567   # same for 0.0.0.0:4567
localhost:4567 replies from within the container:
$ docker-compose exec web curl localhost:4567
Hello world!
What should I do to be able to access the Sinatra app running on port 4567?
Sinatra was binding to the wrong interface.
Fixed by adding the -o switch.
CMD ruby hei.rb -p 4567 -o 0.0.0.0
If there is no value assigned to the environment variable APP_ENV (read via ENV['APP_ENV']), the default environment is :development.
In the development environment, with the run settings enabled, Sinatra binds to the localhost interface of the machine it runs on by default.
To make the service available outside that network, it needs to listen on all interfaces in the running environment. You can get this working by setting the default bind address to "0.0.0.0":
FROM ruby:latest
WORKDIR /usr/src/app/
ADD . /usr/src/app/
RUN bundle install
EXPOSE 4567
CMD ["ruby","app.rb","-o", "0.0.0.0"]
