I have exposed the required ports in my Dockerfiles and mapped them in my docker-compose.yml.
If I create the containers without a docker-compose.yml I can access everything, but if I use the docker-compose.yml file I cannot access 2 out of 3 containers via an HTTP GET request.
But according to docker port <container-name> the ports are mapped:
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port php-container 8080
0.0.0.0:8080
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port php-container
8080/tcp -> 0.0.0.0:8080
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port comp-container
8080/tcp -> 0.0.0.0:8070
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port phpmyadmin-container
8080/tcp -> 0.0.0.0:8090
I don't know why I cannot access the phpmyadmin-container and the php-container, yet I can access the comp-container, when I use a docker-compose file.
Did I miss something important?
php-image:
FROM php:7.0-apache
EXPOSE 8080
COPY Frontend/ /var/www/html/aw3somevideo/
COPY Frontend/ /var/www/html/
RUN chown -R www-data:www-data /var/www/html
RUN chmod -R 755 /var/www/html
RUN docker-php-ext-install mysqli
RUN php -i | grep -F .default_socket
comp-image:
FROM java:openjdk-8u91-jdk
EXPOSE 8070
CMD java -jar encoding-comparison-1.0.jar
ADD encoding-comparison-1.0.jar /encoding-comparison-1.0.jar
phpmyadmin-image:
FROM phpmyadmin/phpmyadmin
EXPOSE 8090
docker-compose.yml:
db:
  image: mysql-image
  ports:
    - "3306:3306"
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=Bitmovin
    - DB_NAME=aw3somevideo
    - DB_USER=Bitmovin
    - DB_PASS=Bitmovin
  container_name: mysql-container
admin:
  image: phpmyadmin-image
  ports:
    - "8090:8080"
  links:
    - db
  container_name: phpmyadmin-container
backend:
  image: comp-image
  ports:
    - "8070:8080"
  volumes:
    - ./src:/var/www/backend
  links:
    - db
  container_name: comp-container
php:
  image: php-image
  volumes:
    - ./src:/var/www/html
  links:
    - db
  ports:
    - "8080:8080"
  container_name: php-container
The solution was to change the ports for php and admin from "8080:8080" and "8090:8080" to "8080:80" and "8090:80" respectively.
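Put differently, the right-hand side of the mapping has to be the port the process actually listens on inside the container; Apache in both the php and phpmyadmin images listens on 80, so EXPOSE 8080/8090 in the Dockerfiles doesn't change that. A minimal sketch of the corrected services (only the changed services shown, rest of the file unchanged):

admin:
  image: phpmyadmin-image
  ports:
    - "8090:80"     # host 8090 -> container 80, where Apache actually listens
  links:
    - db
  container_name: phpmyadmin-container
php:
  image: php-image
  volumes:
    - ./src:/var/www/html
  links:
    - db
  ports:
    - "8080:80"     # host 8080 -> container 80
  container_name: php-container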
Related
I just followed this Docker docs example. I have these lines in my Dockerfile:
FROM golang:1.18-buster AS build
WORKDIR /app
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY *.go ./
RUN go build -o /docker-gs-ping-roach
FROM gcr.io/distroless/base-debian10
WORKDIR /
COPY --from=build /docker-gs-ping-roach /docker-gs-ping-roach
EXPOSE 4433
USER nonroot:nonroot
ENTRYPOINT ["/docker-gs-ping-roach"]
In docker-compose.yaml:
version: '3.8'
services:
  docker-gs-ping-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: rest-server
    hostname: rest-server
    networks:
      - mynet
    ports:
      - 8000:8000
      - 4433:4433
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE:-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/cockroach/cockroach-data
    command: start-single-node --insecure
volumes:
  roach:
networks:
  mynet:
    driver: bridge
There is no error shown in the terminal and the database is working on http://localhost:8080/, but when I visit the Go app on http://localhost:4433 I get this error:
curl: (52) Empty reply from server
I checked the containers to make sure that I hit the right port:
I'm not sure where you got port 4433 or 8000 from.
The docs show -p 80:8080, so change your ports to use that instead.
More specifically, the web server for the Go app starts on port 8080 by default, but that conflicts with CockroachDB, so you need to change the mapping on the host side.
Alternatively, you need to define HTTP_PORT=4433; then the port mapping of 4433:4433 would work.
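A sketch of that second option, assuming the app really does read HTTP_PORT as described above (only the changed keys of the docker-gs-ping-roach service are shown):

  docker-gs-ping-roach:
    environment:
      - HTTP_PORT=4433   # the Go app then listens on 4433 inside the container
    ports:
      - 4433:4433        # host 4433 -> container 4433, matching EXPOSE 4433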
Hello, I need to expose my containers to the internet. I read that this is done with iptables; it works inside the instance, but I can't access it from the internet at the moment. This is my configuration:
EC2 instance:
Ubuntu
Docker versions:
Docker version 20.10.7, build f0df350
docker-compose version 1.25.0, build unknown
docker-compose.yml
version: '3.1'
services:
  redis:
    image: redis
    command: redis-server --port 6379
    ports:
      - '6379:6379'
    environment:
      - REDIS_MODE="LRU"
      - REDIS_MAXMEMORY="800mb"
  server:
    build: .
    command: daphne rubrica.asgi:application --port 8000 --websocket_timeout -1 --bind 0.0.0.0 -v2
    volumes:
      - .:/src
      - ${AWS_CREDENTIALS_PATH}:/root/.aws/
    ports:
      - "8000:8000"
    links:
      - redis
    env_file:
      - .env
  nginx:
    image: nginx
    volumes:
      - ./nginx/:/etc/nginx/
    links:
      - server
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - server
    command: ["nginx", "-g", "daemon off;"]
networks:
  docker-network:
    driver: networktest
network
sudo docker create --name networktest --network <instance ip> --publish 8080:80 --publish 443:443 nginx:latest
And these are the rules in my instance.
If someone can help me figure out what I'm missing or can't see in the configuration, I would appreciate it.
I have several Flask applications which I want to run on a server as separate Docker containers. On the server I already have several websites running behind a reverse proxy with the letsencrypt-nginx-proxy-companion. Unfortunately I can't get the containers to run, and I think it is because of the port mapping. When I start the containers on port 80, I get the following error message from gunicorn: "[ERROR] Can't connect to ('', 80)". On all other ports it starts successfully, but then I can't access it from outside.
What am I doing wrong?
docker-compose.yml
version: '3'
services:
  db:
    image: "mysql/mysql-server:5.7"
    env_file: .env-mysql
    restart: always
  app:
    build: .
    env_file: .env
    expose:
      - "8001"
    environment:
      - VIRTUAL_HOST:example.com
      - VIRTUAL_PORT:'8001'
      - LETSENCRYPT_HOST:example.com
      - LETSENCRYPT_EMAIL:foo@example.com
    links:
      - db:dbserver
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
Dockerfile
FROM python:3.6-alpine
ARG CONTAINER_USER='flask-user'
ENV FLASK_APP run.py
ENV FLASK_CONFIG docker
RUN adduser -D ${CONTAINER_USER}
USER ${CONTAINER_USER}
WORKDIR /home/${CONTAINER_USER}
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY run.py config.py entrypoint.sh ./
# runtime configuration
EXPOSE 8001
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b :8001 --access-logfile - --error-logfile - run:app
reverse-proxy/docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/www/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
      DEBUG: "true"
networks:
  default:
    external:
      name: nginx-proxy
I am on a Mac with Docker installed, version 2.0.0.3 (31259).
docker-compose up -d
Removing ab-insight_postgres_1
Starting ab-insight_data_1 ... done
Recreating 31d36fb9c48a_ab-insight_postgres_1 ... error
ERROR: for 31d36fb9c48a_ab-insight_postgres_1 Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: for postgres Cannot start service postgres: b'driver failed programming external connectivity on endpoint ab-insight_postgres_1 (5ed1c634dd3a43c2cd988ff7f14b5c1f3cde848e375c2915cf92420f819e21ac): Error starting userland proxy: Bind for 0.0.0.0:5432 failed: port is already allocated'
ERROR: Encountered errors while bringing up the project.
Here is my docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:11
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    expose:
      - "5432"
and here is my Dockerfile
FROM python:3.6.1
MAINTAINER Ka So <kanel.soeng@kso.com>
# Create the group and user to be used in this container
RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask
# Create the working directory (and set it as the working directory)
RUN mkdir -p /home/flask/app/web
WORKDIR /home/flask/app/web
# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /home/flask/app/web
RUN pip install --no-cache-dir -r requirements.txt
# Copy the source code into the container
COPY . /home/flask/app/web
RUN chown -R flask:flaskgroup /home/flask
USER flask
Running docker ps gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This is happening because Postgres is already running locally on your machine on the same port you have mentioned in your docker-compose.yml for the postgres service.
Either stop the service running on your local machine (not recommended), or use another host port to map to port 5432 of the container. To do so, replace the

  expose:
    - "5432"

in the postgres service with the following:

  ports:
    - "5433:5432"
The whole docker compose file will look like:
version: '2'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /home/flask/app/web
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres
  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:11
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    ports:
      - "5433:5432"
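If you want to confirm that something on the host (for example a locally installed Postgres) is already holding 5432 before changing the mapping, a quick check from the Mac terminal could look like this (assuming lsof and the psql client are installed):

# show which process is already listening on 5432 on the host
lsof -nP -iTCP:5432 -sTCP:LISTEN

# after switching the mapping to "5433:5432", reach the containerized Postgres via the new host port
psql -h localhost -p 5433 -U postgres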
I have this Dockerfile and it is working as expected. I have a PHP application that connects to MySQL on localhost.
# cat Dockerfile
FROM tutum/lamp:latest
RUN rm -fr /app
ADD crm_220 /app/
ADD crmbox.sql /
ADD mysql-setup.sh /mysql-setup.sh
EXPOSE 80 3306
CMD ["/run.sh"]
When I try to run the database as a separate container, my PHP application is still pointing to localhost. When I connect to the "web" container, I am not able to connect to the "mysql1" container.
# cat docker-compose.yml
web:
  build: .
  restart: always
  volumes:
    - .:/app/
  ports:
    - "8000:8000"
    - "80:80"
  links:
    - mysql1:mysql
mysql1:
  image: mysql:latest
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secretpass
How does my PHP application connect to MySQL in another container?
This is similar to the question asked here...
Connect to mysql in a docker container from the host
I do not want to connect to MySQL from the host machine; I need to connect from another container.
First of all, you shouldn't publish MySQL's port 3306 if you don't want to reach it from the host machine. Secondly, links are deprecated now; you can use networks instead. I'm not sure about compose v1, but in v2 all containers in a common docker-compose file are placed on one network (more about networks) and can resolve each other by name. Example of a docker-compose v2 file:
version: '2'
services:
  web:
    build: .
    restart: always
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
      - "80:80"
  mysql1:
    image: mysql:latest
    volumes:
      - "/var/lib/mysql:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: secretpass
With such a configuration you can resolve the MySQL container by the name mysql1 from inside the web container.
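To double-check that resolution from inside the web container, something along these lines should work (a quick sketch; it assumes getent and the PHP CLI are available in the web image):

# resolve the mysql1 service name from inside the web container
docker-compose exec web getent hosts mysql1

# or probe the MySQL port directly by name
docker-compose exec web php -r 'var_dump(fsockopen("mysql1", 3306));'

If that works, the PHP application would then use mysql1 instead of localhost as the database host.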
For me, the name resolution is never happening. Here is my docker-compose file; I was hoping to connect from the app container to MySQL, where the service name is mysql and is passed as an env variable to the other container: DB_HOST=mysql.
version: "2"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: crossblogs
    environment:
      - DB_HOST=mysql
      - DB_PORT=3306
    ports:
      - 8080:8080
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=crossblogs
    ports:
      - 3306:3306
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
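Just as a debugging sketch for a case like this (assuming the service names from the compose file above, and that the app image has a shell with getent and env available), the first things worth checking are whether the name resolves inside the app container and whether the variable is actually set:

# can the app container resolve the mysql service name?
docker-compose exec app getent hosts mysql

# is DB_HOST actually set inside the app container?
docker-compose exec app env | grep DB_HOST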