I have the following system deployed in my local environment: a Docker container running nginx (used as a proxy server), which forwards requests to other Docker containers running Apache. I want to install the Xdebug debugger on the Apache containers and use it accordingly.
When a request is made, I see this error in the logs:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: host.docker.internal:9005 (through xdebug.client_host / xdebug.client_port) :-(
In the Dockerfile of the Apache container, I wrote:
RUN pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& echo "xdebug.mode = debug" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.client_host = host.docker.internal" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
I wrote in docker-compose.yml:
backend:
build: backend
container_name: backend
volumes:
# Re-use local composer cache via host-volume
- ~/.composer-docker/cache:/root/.composer/cache:delegated
# Mount source-code for development
- ./:/app
expose:
- 80
- 9005
depends_on:
- console
environment:
- VIRTUAL_HOST=backend.cliq.com
nginx-proxy:
build: docker/nginx-proxy
container_name: nginx-proxy
expose:
- 9005
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
I assume that my Xdebug connection does not reach the local machine through the proxy server, but I do not know how to fix it. Does anyone have any thoughts?
The question was resolved. I added this to docker-compose.yml:
extra_hosts:
- "host.docker.internal:host-gateway"
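For reference, a sketch of how that lands in the backend service from the compose file above. On Docker Desktop for Mac/Windows, host.docker.internal already resolves inside containers; the host-gateway alias (available since Docker 20.10) is what adds it on Linux:

```yaml
backend:
  build: backend
  container_name: backend
  extra_hosts:
    # host-gateway resolves to the host's gateway IP, so
    # host.docker.internal works inside the container on Linux too
    - "host.docker.internal:host-gateway"
```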
I have a PHP application with a Docker environment. Everything works fine, but now I want to install the Mosquitto broker in my PHP container.
This is my docker-compose.yaml:
version: '3.8'
services:
nginx:
image: nginx:stable-alpine
ports:
- "8288:80"
volumes:
- ./:/var/www/
- ./dockers/nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php
php:
build:
context: .
dockerfile: dockers/php/Dockerfile
volumes:
- ./:/var/www/
- ./dockers/mosquitto/mosquitto.conf:/etc/mosquitto/conf.d/default.conf
ports:
- 9004:9000
- 1883:1883
depends_on:
- mysql
mysql:
image: mysql:8.0
container_name: access-control-mysql
tty: true
ports:
- "7306:3306"
volumes:
- ./mysql:/var/lib/mysql
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_USER: ${DB_USERNAME}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
This is Dockerfile which build php container
FROM php:8.1-fpm
RUN apt-get update -yqq
# Install & enable Xdebug for code coverage reports
RUN pecl install xdebug imagick
RUN docker-php-ext-enable xdebug imagick
RUN apt-get install -y mosquitto libmosquitto-dev
# Set working directory
WORKDIR /var/www
# Add script file to run command
COPY scripts/cmd.sh /usr/local/bin/cmd
RUN chmod 0755 /usr/local/bin/cmd
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["/usr/local/bin/cmd"]
Then mosquitto.conf
listener 1883
allow_anonymous true
When I start all the containers, they come up fine. However, Mosquitto does not seem to work, and when I check it with the command docker-compose exec php mosquitto, it displays this error:
1669015395: mosquitto version 2.0.11 starting
1669015395: Using default config.
1669015395: Starting in local only mode. Connections will only be possible from clients running on this machine.
1669015395: Create a configuration file which defines a listener to allow remote access.
1669015395: For more details see https://mosquitto.org/documentation/authentication-methods/
1669015395: Opening ipv4 listen socket on port 1883.
1669015395: Opening ipv6 listen socket on port 1883.
1669015395: Error: Cannot assign requested address
1669015395: mosquitto version 2.0.11 running
It shows Error: Cannot assign requested address. I've used the same config with a separate Mosquitto container in the docker-compose.yaml file, and that worked.
Mosquitto container in docker-compose
mqtt:
image: eclipse-mosquitto:latest
ports:
- 1884:1883
volumes:
- ./dockers/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
- ./mosquitto/data:/mosquitto/data
- ./mosquitto/log:/mosquitto/log
I also tried adding a bind_address to the listener in mosquitto.conf (listener 1883 [php container ip]), but it returns the same error.
Can someone help? Thanks.
Update
File cmd.sh
/usr/local/bin/composer install
php artisan config:cache
php artisan migrate
php artisan ide-helper:generate
php artisan ide-helper:models --nowrite
php artisan db:seed
php-fpm
mosquitto-php does not require you to run the broker in the same container; its prerequisite is libmosquitto, which can be installed with just
RUN apt-get install -y libmosquitto-dev
You should then run the broker for your project from the eclipse-mosquitto container, not try to include it directly in the php container.
And for clarity, as I said in the comments, mosquitto will not load ANY configuration file unless explicitly told where the file is with the -c command line option. Just because /etc/mosquitto.conf exists doesn't mean it will get loaded, unless you run mosquitto -c /etc/mosquitto.conf. This is why the logs you showed explicitly say it is not using any config file.
1669015395: Using default config.
1669015395: Starting in local only mode. Connections will only be possible from clients running on this machine.
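A minimal sketch of that layout, reusing the mqtt service from the question (the eclipse-mosquitto image loads /mosquitto/config/mosquitto.conf by default, which is why that container honours the config without a -c flag):

```yaml
services:
  mqtt:
    image: eclipse-mosquitto:latest
    ports:
      - "1883:1883"
    volumes:
      # this path is read by the image's default startup command
      - ./dockers/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
  php:
    build:
      context: .
      dockerfile: dockers/php/Dockerfile
    depends_on:
      - mqtt
```

The PHP client then connects to host mqtt (the compose service name) on port 1883, rather than localhost.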
I have a project written with Django REST framework, Celery for executing long-running tasks, Redis as a broker, and Flower for monitoring Celery tasks. I have written a Dockerfile and docker-compose.yaml to create a network and run these services inside containers.
Dockerfile
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /ibdax
WORKDIR /ibdax
COPY ./requirements.txt /requirements.txt
COPY . /ibdax
EXPOSE 80
EXPOSE 5555
ENV ENVIRONMENT=LOCAL
#install dependencies
RUN pip install -r /requirements.txt
RUN pip install django-phonenumber-field[phonenumbers]
RUN pip install drf-yasg[validation]
docker-compose.yaml
version: "3"
services:
redis:
container_name: redis-service
image: "redis:latest"
ports:
- "6379:6379"
restart: always
command: "redis-server"
ibdax-backend:
container_name: ibdax
build:
context: .
dockerfile: Dockerfile
image: "ibdax-django-service"
volumes:
- .:/ibdax
ports:
- "80:80"
expose:
- "80"
restart: always
env_file:
- .env.staging
command: >
sh -c "daphne -b 0.0.0.0 -p 80 ibdax.asgi:application"
links:
- redis
celery:
container_name: celery-container
image: "ibdax-django-service"
command: "watchmedo auto-restart -d . -p '*.py' -- celery -A ibdax worker -l INFO"
volumes:
- .:/ibdax
restart: always
env_file:
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
flower:
container_name: flower
image: "ibdax-django-service"
command: "flower -A ibdax --port=5555 --basic_auth=${FLOWER_USERNAME}:${FLOWER_PASSWORD}"
volumes:
- .:/ibdax
ports:
- "5555:5555"
expose:
- "5555"
restart: always
env_file:
- .env
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
This Dockerfile & docker-compose setup is working just fine, and now I want to deploy this application to GKE. I came across Kompose, which translates the docker-compose file to Kubernetes resources. I read the documentation and started following the steps; the first step was to run kompose convert. This returned a few warnings and created the files shown below:
WARN Service "celery" won't be created because 'ports' is not specified
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
INFO Kubernetes file "flower-service.yaml" created
INFO Kubernetes file "ibdax-backend-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "celery-deployment.yaml" created
INFO Kubernetes file "env-dev-configmap.yaml" created
INFO Kubernetes file "celery-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "flower-deployment.yaml" created
INFO Kubernetes file "flower-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "ibdax-backend-deployment.yaml" created
INFO Kubernetes file "ibdax-backend-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
I ignored the warnings and moved on to the next step, i.e. running the command
kubectl apply -f flower-service.yaml, ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml
but I get this error -
error: Unexpected args: [ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml]
Hence I planned to apply them one by one like this:
kubectl apply -f flower-service.yaml
but I get this error -
The Service "flower" is invalid: spec.ports[1]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", AppProtocol:(*string)(nil), Port:5555, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}
Not sure where I am going wrong.
Also, one of the prerequisites of Kompose is to have a Kubernetes cluster, so I created an Autopilot cluster with a public network. Now I am not sure how this apply command will identify the cluster I created and deploy my application to it.
After kompose convert, your flower-service.yaml file has duplicate ports - that's what the error is saying.
...
ports:
- name: "5555"
port: 5555
targetPort: 5555
- name: 5555-tcp
port: 5555
targetPort: 5555
...
You can delete either the port named "5555" or the one named 5555-tcp.
For example, replace ports block with
ports:
- name: 5555-tcp
port: 5555
targetPort: 5555
and deploy the service again.
I would also recommend changing the port name to something more descriptive.
Same thing happens with ibdax-backend-service.yaml file.
...
ports:
- name: "80"
port: 80
targetPort: 80
- name: 80-tcp
port: 80
targetPort: 80
...
You can delete one of the definitions and redeploy the service (changing the port name to something more descriptive is also recommended).
kompose is not a perfect tool that will always give you a perfect result. You should check the generated files for possible conflicts and/or missing fields.
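As an aside, the Unexpected args error from earlier comes from the comma-plus-space syntax: kubectl apply takes multiple manifests either via repeated -f flags, a comma-separated list without spaces, or a directory. A sketch (the get-credentials step is how kubectl learns about the GKE Autopilot cluster; the cluster name and region here are placeholders):

```shell
# Point kubectl at the GKE cluster (placeholder name/region)
gcloud container clusters get-credentials my-autopilot-cluster --region us-central1

# Apply several manifests in one invocation...
kubectl apply -f flower-service.yaml -f ibdax-backend-service.yaml \
  -f redis-service.yaml -f celery-deployment.yaml

# ...or simply apply every manifest in the current directory
kubectl apply -f .
```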
I have a docker-compose setup that consists of 2 services:
a front-end application that runs on port 3000
a back-end application that runs on port 443
mt_symfony:
container_name: mt_symfony
build:
context: ./html
dockerfile: dev.dockerfile
environment:
XDEBUG_CONFIG: "remote_host=192.168.220.1 remote_port=10000"
PHP_IDE_CONFIG: "serverName=mt_symfony"
ports:
- 443:443
- 80:80
networks:
- mt_network
volumes:
- ./html:/var/www/html
sysctls:
- net.ipv4.ip_unprivileged_port_start=0
mt_angular:
container_name: mt_angular
build:
context: ./web
dockerfile: dev.dockerfile
ports:
- 3000:3000
networks:
- mt_network
command: ./dev.entrypoint.sh
networks:
mt_network:
driver: bridge
ipam:
driver: default
config:
- subnet: 192.168.220.0/28
And also in my php.ini file I have this:
[xdebug]
error_reporting = E_ALL
display_startup_errors = On
display_errors = On
xdebug.remote_enable=1
mt_symfony dockerfile:
FROM php:5.6.37-apache
EXPOSE 443 80
RUN pecl install xdebug-2.5.5
RUN docker-php-ext-enable xdebug
COPY ./docker/php5.6-fpm.conf /etc/apache2/conf-available
RUN a2enmod headers \
&& a2enmod ssl \
&& a2enmod rewrite \
&& a2enconf php5.6-fpm.conf \
&& a2ensite httpd.conf
In PhpStorm:
"Build, Execution, Deployment -> Docker" shows "Connection successful"
"Languages & Frameworks -> PHP -> CLI Interpreter" connects to docker mt_symfony container and detects installed Xdebug
"Languages & Frameworks -> PHP -> Xdebug -> Validate" I'm able to validate Xdebug on port 80, but it does not work at all on port 443
I've been trying to figure out why I cannot reach containers using "localhost:3000" from the host. I've tried installing Docker via Homebrew, as well as with the Docker for Mac installer. I believe I have the docker-compose file configured correctly.
Here is the output from docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------------
ecm-datacontroller_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
ecm-datacontroller_kafka_1 supervisord -n Up 0.0.0.0:2181->2181/tcp, 0.0.0.0:9092->9092/tcp
ecm-datacontroller_redis_1 docker-entrypoint.sh redis ... Up 0.0.0.0:6379->6379/tcp
ecm-datacontroller_web_1 npm start Up 0.0.0.0:3000->3000/tcp
Here is my docker-compose.yml
version: '2'
services:
web:
ports:
- "3000:3000"
build: .
command: npm start
env_file: .env
depends_on:
- db
- redis
- kafka
volumes:
- .:/app/user
db:
image: postgres:latest
ports:
- "5432:5432"
redis:
image: redis:alpine
ports:
- "6379:6379"
kafka:
image: heroku/kafka
ports:
- "2181:2181"
- "9092:9092"
I cannot access any of the ports exposed by docker-compose. From curl localhost:3000 I get the following result:
curl: (52) Empty reply from server
I should be getting {"hello":"world"}.
Dockerfile:
FROM heroku/heroku:16-build
# Which version of node?
ENV NODE_ENGINE 10.15.0
# Locate our binaries
ENV PATH /app/heroku/node/bin/:/app/user/node_modules/.bin:$PATH
# Create some needed directories
RUN mkdir -p /app/heroku/node /app/.profile.d
WORKDIR /app/user
# Install node
RUN curl -s https://s3pository.heroku.com/node/v$NODE_ENGINE/node-v$NODE_ENGINE-linux-x64.tar.gz | tar --strip-components=1 -xz -C /app/heroku/node
# Export the node path in .profile.d
RUN echo "export PATH=\"/app/heroku/node/bin:/app/user/node_modules/.bin:\$PATH\"" > /app/.profile.d/nodejs.sh
ADD package.json /app/user/
RUN /app/heroku/node/bin/npm install
ADD . /app/user/
EXPOSE 3000
Anyone have any ideas?
Ultimately, I ended up having a service that was listening on 127.0.0.1 instead of 0.0.0.0. Updating this resolved the connectivity issue I was having.
I am getting issues while setting up and running the Docker instance on my local system with Ruby on Rails. Please see my Docker configuration files:
Dockerfile
FROM ruby:2.3.1
RUN useradd -ms /bin/bash web
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get -y install nginx
RUN apt-get -y install sudo
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
RUN apt-get update
# For docker cache
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
ENV BUNDLE_PATH /bundle
RUN gem install bundler --no-rdoc --no-ri
RUN bundle install
# END
ENV APP_HOME /home/web/cluetap_app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
ADD . $APP_HOME
RUN chown -R web:web $APP_HOME
ADD ./nginx/nginx.conf /etc/nginx/
RUN unlink /etc/nginx/sites-enabled/default
ADD ./nginx/cluetap_nginx.conf /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/cluetap_nginx.conf /etc/nginx/sites-enabled/cluetap_nginx.conf
RUN usermod -a -G sudo web
docker-compose.yml
version: '2'
services:
postgres:
image: 'postgres:9.6'
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=
- POSTGRES_USER=postgres
- POSTGRES_HOST=cluetapapi_postgres_1
networks:
- default
- service-proxy
ports:
- '5432:5432'
volumes:
- 'postgres:/var/lib/postgresql/data'
labels:
description: "Postgresql Database"
service: "postgresql"
web:
container_name: cluetap_api
build: .
command: bash -c "thin start -C config/thin/development.yml && nginx -g 'daemon off;'"
volumes:
- .:/home/web/cluetap_app
ports:
- "80:80"
depends_on:
- 'postgres'
networks:
service-proxy:
volumes:
postgres:
When I run the two commands docker-compose build and docker-compose up -d, they complete successfully, but when I hit the URL it throws an internal server error:
Unexpected error while processing request: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
I have applied some solutions, but they did not work for me. Please guide me; I am new to Docker and AWS.
The issue is that you are trying to connect to localhost inside the container for the DB. The port mapping 5432:5432 that you declare for postgres maps port 5432 to localhost of your host machine.
Your web container code, however, is running inside its own container, and there is nothing on its localhost:5432.
So you need to change the connection details in your config to connect to postgres:5432 - this works because you named the postgres DB service postgres.
Change that and it should work.
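For a Rails app like the one in the question, that change lands in config/database.yml. A sketch with placeholder database name and credentials (only the host value is the point here):

```yaml
development:
  adapter: postgresql
  # the compose service name, not localhost or 127.0.0.1
  host: postgres
  port: 5432
  database: cluetap_development   # placeholder
  username: postgres
  password: ""
```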
By default the postgres image already exposes 5432, so you can just remove that part from your yml.
Then, if you would like to check whether the web service can connect to your postgres service, you can run docker-compose exec web curl postgres:5432, and it should return:
curl: (52) Empty reply from server
If it cannot connect it will return:
curl: (6) Could not resolve host: postgres or curl: (7) Failed to connect to postgres port 5432: Connection refused
UPDATE:
I know the problem now. It's because you are trying to connect on localhost; you should connect to the postgres service instead.
I had this same issue when working on a Rails 6 application in Ubuntu 20.04 using Docker and Traefik.
In my case I was trying to connect the Ruby on Rails application running in a docker container to the PostgreSQL database running on the host.
So each time I try to connect to the database I get the error:
Unexpected error while processing request: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
Here's how I fixed it:
So first, the applications running in the containers are exposed to the host via Traefik, which maps to the host on port 80.
Firstly, I had to modify my PostgreSQL database configuration file to accept remote connections from other IP addresses. This Stack Overflow answer can help with that - PostgreSQL: FATAL - Peer authentication failed for user (PG::ConnectionBad)
Secondly, I had to create a Docker network for Traefik and the other applications that will proxy through it:
docker network create traefik_default
Thirdly, I set up the applications in the docker-compose.yml file to use the same network that I just created:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
env_file:
- .env
environment:
RAILS_ENV: ${RAILS_ENV}
RACK_ENV: ${RACK_ENV}
POSTGRES_USER: ${DATABASE_USER}
POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
POSTGRES_DB: ${DATABASE_NAME}
POSTGRES_HOST_AUTH_METHOD: ${DATABASE_HOST}
POSTGRES_PORT: ${DATABASE_PORT}
expose:
- ${RAILS_PORT}
networks:
- traefik_default
labels:
- traefik.enable=true
- traefik.http.routers.my_app.rule=Host(`${RAILS_HOST}`)
- traefik.http.services.my_app.loadbalancer.server.port=${NGINX_PORT}
- traefik.docker.network=traefik_default
restart: always
volumes:
- .:/app
- gem-cache:/usr/local/bundle/gems
- node-modules:/app/node_modules
web-server:
build:
context: .
dockerfile: ./nginx/Dockerfile
depends_on:
- app
expose:
- ${NGINX_PORT}
restart: always
volumes:
- .:/app
networks:
traefik_default:
external: true
volumes:
gem-cache:
node-modules:
Finally, in my .env file, I specified the private IP address of my host machine as the DATABASE_HOST environment variable:
DATABASE_NAME=my_app_development
DATABASE_USER=my-username
DATABASE_PASSWORD=passsword1
DATABASE_HOST=192.168.0.156
DATABASE_PORT=5432
RAILS_HOST=my_app.localhost
NGINX_PORT=80
RAILS_ENV=development
RACK_ENV=development
RAILS_MASTER_KEY=e879cbg21ff58a9c50933fe775a74d00
RAILS_PORT=3000
I also had this problem just now. My solution was:
Remove the port mapping '5432:5432' in the Postgres service
Change POSTGRES_HOST=cluetapapi_postgres_1 to POSTGRES_HOST=localhost
When you need to access your db, just use something like sqlalchemy.url = postgresql+psycopg2://postgres:password@postgres/dbname