Mosquitto "cannot assign requested address" in Docker container

I have a PHP application with a Docker environment. Everything works fine, but now I want to install the Mosquitto broker in my PHP container.
This is my docker-compose.yaml:
version: '3.8'
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "8288:80"
    volumes:
      - ./:/var/www/
      - ./dockers/nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: dockers/php/Dockerfile
    volumes:
      - ./:/var/www/
      - ./dockers/mosquitto/mosquitto.conf:/etc/mosquitto/conf.d/default.conf
    ports:
      - 9004:9000
      - 1883:1883
    depends_on:
      - mysql
  mysql:
    image: mysql:8.0
    container_name: access-control-mysql
    tty: true
    ports:
      - "7306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_USER: ${DB_USERNAME}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
This is the Dockerfile which builds the PHP container:
FROM php:8.1-fpm
RUN apt-get update -yqq
# Install & enable Xdebug for code coverage reports
RUN pecl install xdebug imagick
RUN docker-php-ext-enable xdebug imagick
RUN apt-get install -y mosquitto libmosquitto-dev
# Set working directory
WORKDIR /var/www
# Add script file to run command
COPY scripts/cmd.sh /usr/local/bin/cmd
RUN chmod 0755 /usr/local/bin/cmd
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["/usr/local/bin/cmd"]
Then mosquitto.conf
listener 1883
allow_anonymous true
When I start all the containers, they come up fine. However, Mosquitto does not seem to work, and when I run it with docker-compose exec php mosquitto, it displays an error:
1669015395: mosquitto version 2.0.11 starting
1669015395: Using default config.
1669015395: Starting in local only mode. Connections will only be possible from clients running on this machine.
1669015395: Create a configuration file which defines a listener to allow remote access.
1669015395: For more details see https://mosquitto.org/documentation/authentication-methods/
1669015395: Opening ipv4 listen socket on port 1883.
1669015395: Opening ipv6 listen socket on port 1883.
1669015395: Error: Cannot assign requested address
1669015395: mosquitto version 2.0.11 running
It shows Error: Cannot assign requested address. I've used the same config with a separate Mosquitto container in the docker-compose.yaml file and it worked.
Mosquitto container in docker-compose
mqtt:
  image: eclipse-mosquitto:latest
  ports:
    - 1884:1883
  volumes:
    - ./dockers/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
    - ./mosquitto/data:/mosquitto/data
    - ./mosquitto/log:/mosquitto/log
I also tried adding a bind_address to the listener in mosquitto.conf (listener 1883 [php container ip]), but it returned the same error.
Can someone help? Thanks.
Update
File cmd.sh
/usr/local/bin/composer install
php artisan config:cache
php artisan migrate
php artisan ide-helper:generate
php artisan ide-helper:models --nowrite
php artisan db:seed
php-fpm

mosquitto-php does not require you to run the broker in the same container; its only prerequisite is libmosquitto, which can be installed with just
RUN apt-get install -y libmosquitto-dev
You should then run the broker for your project from the eclipse-mosquitto container, not try to include it directly in the php container.
And for clarity, as I said in the comments, mosquitto will not load ANY configuration file unless explicitly told where the file is with the -c command line option. Just because /etc/mosquitto.conf exists doesn't mean it will get loaded, unless you run mosquitto -c /etc/mosquitto.conf. This is why the logs you showed explicitly say it is not using any config file:
1669015395: Using default config.
1669015395: Starting in local only mode. Connections will only be possible from clients running on this machine.
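To make that concrete, a minimal sketch of the recommended layout (service and variable names here are illustrative, not from the question) keeps the broker in its own service and has the PHP code connect to it by service name instead of localhost:

```yaml
services:
  php:
    build:
      context: .
      dockerfile: dockers/php/Dockerfile
    environment:
      # Hypothetical variables the PHP code could read and pass to its MQTT client
      - MQTT_HOST=mqtt
      - MQTT_PORT=1883
    depends_on:
      - mqtt
  mqtt:
    image: eclipse-mosquitto:latest
    ports:
      - "1883:1883"   # only needed for clients outside the compose network
    volumes:
      - ./dockers/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
```

Inside the compose network, Docker's embedded DNS resolves the service name mqtt to the broker container, so no host port mapping is required for container-to-container traffic.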

Related

Xdebug not working across a proxy server in Docker

I have such a system deployed in my local environment. There is a Docker container in which nginx is installed (used as a proxy server), which redirects requests to other Docker containers running Apache. I want to install the Xdebug debugger in the Apache containers and use it accordingly.
When I make a request, I see this error in the logs:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: host.docker.internal:9005 (through xdebug.client_host / xdebug.client_port) :-(
In the Dockerfile of the Apache container, I wrote:
RUN pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& echo "xdebug.mode = debug" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.client_host = host.docker.internal" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
I wrote in docker-compose.yml:
backend:
  build: backend
  container_name: backend
  volumes:
    # Re-use local composer cache via host-volume
    - ~/.composer-docker/cache:/root/.composer/cache:delegated
    # Mount source-code for development
    - ./:/app
  expose:
    - 80
    - 9005
  depends_on:
    - console
  environment:
    - VIRTUAL_HOST=backend.cliq.com
nginx-proxy:
  build: docker/nginx-proxy
  container_name: nginx-proxy
  expose:
    - 9005
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
I assume that my Xdebug connection does not reach the local machine through the proxy server, but I do not know how to fix it. Any thoughts?
The question was resolved. I added this to docker-compose.yml:
extra_hosts:
  - "host.docker.internal:host-gateway"
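In context, the fix sits under the service whose container needs to reach the debugging client on the host (shown here on the backend service from the question; everything else unchanged):

```yaml
backend:
  build: backend
  extra_hosts:
    # Maps host.docker.internal to the host's gateway IP. On Linux, Docker
    # does not provide this alias automatically, so it must be added here.
    - "host.docker.internal:host-gateway"
```

Note that host-gateway support requires Docker Engine 20.10 or later; on Docker Desktop for Mac/Windows, host.docker.internal already resolves without this entry.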

HAProxy/Docker: 502 Bad Gateway when hitting Docker container running Flask/React app

I am trying to Dockerize a Flask/React web application for ease of development and collaboration, but I'm having issues getting a proper response from the application. I am able to get the image built and the Flask server started in a container, but I'm having trouble actually hitting it.
We use HAProxy to forward requests, and things work fine when I have the proxy and web server running locally. The issue has been getting docker into the mix. I believe it must be a port mapping issue, but I'm out of ideas and feel I may be missing key HAProxy/Docker subtleties. The proxy.cfg file looks as follows (with extraneous hosts not included):
global
    maxconn 4096
    pidfile ~/tmp/haproxy.pid
defaults
    log global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    mode http
    timeout connect 300000
    timeout client 300000
    timeout server 300000
    maxconn 2000
    option redispatch
    retries 3
    option httpclose
    option httplog
    option forwardfor
    option httpchk HEAD / HTTP/1.0
frontend dev
    bind *:8080 ssl crt ./proxy.pem
    acl allow_web path_beg /app/
    use_backend be_web if allow_web
backend be_web
    balance roundrobin
    server web_5000 localhost:5000
Dockerfile:
FROM node:10.6.0
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
WORKDIR /usr/src/app
COPY ./package.json .
RUN npm install
COPY . .
RUN pip install -e ./server
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
  userportal:
    build: .
    volumes:
      - /usr/src/app/node_modules
      - .:/usr/src/app
    ports:
      - "5000:5000"
The Flask server binds to port 5000, hence the mapping. I've tried substituting the container IP address for localhost (e.g. 172.19.0.2:5000), but got the same result.
Edit:
I tried adding the proxy as a service in the docker-compose.yml and changing the host from localhost:5000 to userportal_1:5000, but this led to a 503. docker-compose.yml:
services:
  userportal:
    build: .
    volumes:
      - /usr/src/app/client/node_modules
      - ./client:/usr/src/app/client
      - ./server:/usr/src/app/server
    ports:
      - "5000:5000"
  proxy:
    image: haproxy:alpine
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./proxy.pem:/certs/proxy.pem
    ports:
      - "8080:8080"
You should know your container IP in advance:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
At some point, if you want to use HAProxy, it will be easier to run it inside Docker Compose to avoid networking issues.
If you don't want to, you may try:
haproxy.conf
global
    maxconn 4096
    pidfile ~/tmp/haproxy.pid
defaults
    log global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    mode http
    timeout connect 300000
    timeout client 300000
    timeout server 300000
    maxconn 2000
    option redispatch
    retries 3
    option httpclose
    option httplog
    option forwardfor
    option httpchk HEAD / HTTP/1.0
frontend dev
    bind 0.0.0.0:8080 ssl crt ./proxy.pem  # <-- change wildcard to 0.0.0.0
    acl allow_web path_beg /app/
    use_backend be_web if allow_web
    default_backend be_web                 # <-- add this line
backend be_web
    balance roundrobin
    mode http
    option forwardfor                      # <-- add this line
    option httpchk GET / HTTP/1.1          # <-- add this line
    server web_5000 userportal_1:5000 check  # <-- change localhost to the name of the running container "userportal_1", or its IP
Dockerfile:
FROM node:10.6.0
RUN apt-get update
RUN apt-get install -y python-pip python-dev build-essential
WORKDIR /usr/src/app
COPY ./package.json .
RUN npm install
COPY . .
RUN pip install -e ./server
# <-- add this line
EXPOSE 5000
CMD ["npm", "start"]
docker-compose.yml
version: "3"
services:
  userportal:
    build: .
    volumes:
      - /usr/src/app/node_modules
      - .:/usr/src/app
    ports:
      - "5000:5000"
Including your HAProxy in your docker-compose.yml will help a lot.
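Putting that advice together, a sketch of a compose file with HAProxy as a sibling service (mirroring the proxy service the question's edit already tried, with a depends_on added) might look like:

```yaml
version: "3"
services:
  userportal:
    build: .
    ports:
      - "5000:5000"
  proxy:
    image: haproxy:alpine
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./proxy.pem:/certs/proxy.pem
    ports:
      - "8080:8080"
    depends_on:
      - userportal
```

Within the compose network the backend line can then use the service name directly (server web_5000 userportal:5000 check), which is more robust than the generated container name userportal_1.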

Hot reload fails when files are changed from the host mapped volume in Docker-compose

In a docker-compose setup I have two services that share the same mapped volume from the host. The mapped volume contains the application source files located on the host.
When the source files are changed on the host, HMR is not triggered, and even a manual refresh does not show the latest changes. However, if I edit a file directly in the container, HMR reloads and displays the changes. Also, changes made in the container are visible from the host, meaning that the mapped volume is correct and pointing to the right place.
The question is: why isn't the webpack-dev-server watcher picking up the changes? How do I debug this? What solutions are there?
The docker-compose services in question:
node_dev_worker:
  build:
    context: .
    dockerfile: ./.docker/dockerFiles/node.yml
  image: foobar/node_dev:latest
  container_name: node_dev_worker
  working_dir: /home/node/app
  environment:
    - NODE_ENV=development
  volumes:
    - ./foobar-blog-ui/:/home/node/app
  networks:
    - foobar-wordpress-network
node_dev:
  image: foobar/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- npm run start'
  depends_on:
    - node_dev_worker
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
The node.yml (the Dockerfile for the Node image):
FROM node:8.16.0-slim
WORKDIR /home/node/app
RUN apt-get update
RUN apt-get install -y rsync vim git libpng-dev libjpeg-dev libxi6 build-essential libgl1-mesa-glx
CMD npm install
The webpack-dev-server configuration follows some recommendations found online for container issues such as the one I'm describing. The webpack configuration is placed in a hook provided by Gatsbyjs called gatsby-node.js, as follows:
devServer: {
  port: 8000,
  disableHostCheck: true,
  watchOptions: {
    poll: true,
    aggregateTimeout: 500
  }
}
The Linux distro (from the Docker image node:8.16.0-slim) is:
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
Also, the browser does show that [HMR] is connected and listening. As follows:
[HMR] connected
[HMR] bundle rebuilt in 32899ms
The host in question is macOS 10.14.6 Mojave, running Docker 2.1.0.2.
Any hints on how to debug this issue?
To fix this problem I checked the documentation Docker provides for my host system, macOS, where they describe osxfs (https://docs.docker.com/docker-for-mac/osxfs/). So before anything else, I made sure that the volume I want to mount is under one of the directories macOS allows to be shared:
My volume sits under the /Users parent directory, so I'm good to go!
Note: I don't think it's related, but I did reset to factory defaults before verifying the File Sharing tab.
Keep in mind the previous changes I raised in the original ticket, as this helps and is recommended. Check your webpack-dev-server configuration:
devServer: {
  port: 8000,
  disableHostCheck: true,
  watchOptions: {
    poll: true,
    aggregateTimeout: 500
  }
}
It's also important to start the development server by declaring the --host and --port, like:
gatsby develop -H 0.0.0.0 -p 8000
To complete the picture, and I believe this is the key to fixing this problem: I set the environment variable GATSBY_WEBPACK_PUBLICPATH in my docker-compose.yaml file, under the environment property:
node_dev:
  image: moola/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
    - GATSBY_WEBPACK_PUBLICPATH=/
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- npm run start'
  depends_on:
    - node_dev_worker
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
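As an aside, when file-change events fail to propagate across a macOS bind mount, forcing the watcher into polling mode via an environment variable is another commonly used workaround (CHOKIDAR_USEPOLLING is honored by chokidar-based watchers such as the one webpack-dev-server uses); a sketch of how that would look on the same service:

```yaml
node_dev:
  environment:
    - NODE_ENV=development
    # Poll the filesystem instead of relying on inotify/FSEvents,
    # which often do not cross the osxfs bind-mount boundary.
    - CHOKIDAR_USEPOLLING=true
```

Polling trades CPU usage for reliability, which is usually acceptable in development.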

Docker: Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?

I am running into issues while setting up and running a Docker instance on my local system with Ruby on Rails. Please see my Docker configuration files:
Dockerfile
FROM ruby:2.3.1
RUN useradd -ms /bin/bash web
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get -y install nginx
RUN apt-get -y install sudo
# for postgres
RUN apt-get install -y libpq-dev
# for nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# for a JS runtime
RUN apt-get install -y nodejs
RUN apt-get update
# For docker cache
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
ENV BUNDLE_PATH /bundle
RUN gem install bundler --no-rdoc --no-ri
RUN bundle install
# END
ENV APP_HOME /home/web/cluetap_app
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
ADD . $APP_HOME
RUN chown -R web:web $APP_HOME
ADD ./nginx/nginx.conf /etc/nginx/
RUN unlink /etc/nginx/sites-enabled/default
ADD ./nginx/cluetap_nginx.conf /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/cluetap_nginx.conf /etc/nginx/sites-enabled/cluetap_nginx.conf
RUN usermod -a -G sudo web
docker-compose.yml
version: '2'
services:
  postgres:
    image: 'postgres:9.6'
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_PASSWORD=
      - POSTGRES_USER=postgres
      - POSTGRES_HOST=cluetapapi_postgres_1
    networks:
      - default
      - service-proxy
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    labels:
      description: "Postgresql Database"
      service: "postgresql"
  web:
    container_name: cluetap_api
    build: .
    command: bash -c "thin start -C config/thin/development.yml && nginx -g 'daemon off;'"
    volumes:
      - .:/home/web/cluetap_app
    ports:
      - "80:80"
    depends_on:
      - 'postgres'
networks:
  service-proxy:
volumes:
  postgres:
When I run docker-compose build and docker-compose up -d, both commands succeed, but when I hit the URL it throws an internal server error:
Unexpected error while processing request: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
I have tried some solutions but they did not work for me. Please guide me; I am new to Docker and AWS.
The issue is that you are trying to connect to localhost inside the container for the DB. The port mapping 5432:5432 that you declare for postgres maps port 5432 to localhost of your host machine.
But your web code runs inside its own container, and there is nothing listening on its localhost:5432.
So you need to change the connection details in your config to connect to postgres:5432 instead; this works because you named the Postgres DB service postgres.
Change that and it should work.
By default the postgres image already exposes 5432, so you can just remove that part from your yml.
Then, if you would like to check whether the web service can connect to your postgres service, you can run docker-compose exec web curl postgres:5432; it should return:
curl: (52) Empty reply from server
If it cannot connect it will return:
curl: (6) Could not resolve host: postgres or curl: (7) Failed to connect to postgres port 5432: Connection refused
UPDATE:
I know the problem now. It's because you are trying to connect on localhost; you should connect to the postgres service.
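For a Rails app like the one in the question, that change lands in config/database.yml. A sketch under the assumption that the compose service is named postgres (the database name below is hypothetical; adjust to your app):

```yaml
# config/database.yml (sketch)
development:
  adapter: postgresql
  host: postgres        # the compose service name, not localhost
  port: 5432
  username: postgres
  password: <%= ENV["POSTGRES_PASSWORD"] %>
  database: cluetap_development   # hypothetical database name
```

Docker's internal DNS resolves the service name postgres to the database container on the shared compose network.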
I had this same issue when working on a Rails 6 application in Ubuntu 20.04 using Docker and Traefik.
In my case I was trying to connect the Ruby on Rails application running in a docker container to the PostgreSQL database running on the host.
So each time I try to connect to the database I get the error:
Unexpected error while processing request: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
Here's how I fixed it:
So first the applications running in the container are exposed to the host via Traefik which maps to the host on port 80.
Firstly, I had to modify my PostgreSQL database configuration file to accept remote connections from other IP addresses. This Stack Overflow answer can help with that - PostgreSQL: FATAL - Peer authentication failed for user (PG::ConnectionBad)
Secondly, I had to create a docker network that Traefik and other applications that will proxy through it will use:
docker network create traefik_default
Thirdly, I set up the applications in the docker-compose.yml file to use the network that I just created:
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env
    environment:
      RAILS_ENV: ${RAILS_ENV}
      RACK_ENV: ${RACK_ENV}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_HOST_AUTH_METHOD: ${DATABASE_HOST}
      POSTGRES_PORT: ${DATABASE_PORT}
    expose:
      - ${RAILS_PORT}
    networks:
      - traefik_default
    labels:
      - traefik.enable=true
      - traefik.http.routers.my_app.rule=Host(`${RAILS_HOST}`)
      - traefik.http.services.my_app.loadbalancer.server.port=${NGINX_PORT}
      - traefik.docker.network=traefik_default
    restart: always
    volumes:
      - .:/app
      - gem-cache:/usr/local/bundle/gems
      - node-modules:/app/node_modules
  web-server:
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
    depends_on:
      - app
    expose:
      - ${NGINX_PORT}
    restart: always
    volumes:
      - .:/app
networks:
  traefik_default:
    external: true
volumes:
  gem-cache:
  node-modules:
Finally, in my .env file, I specified the private IP address of my host machine as the DATABASE_HOST environment variable:
DATABASE_NAME=my_app_development
DATABASE_USER=my-username
DATABASE_PASSWORD=passsword1
DATABASE_HOST=192.168.0.156
DATABASE_PORT=5432
RAILS_HOST=my_app.localhost
NGINX_PORT=80
RAILS_ENV=development
RACK_ENV=development
RAILS_MASTER_KEY=e879cbg21ff58a9c50933fe775a74d00
RAILS_PORT=3000
I also had this problem just now. My solution was:
Remove the port mapping '5432:5432' in the Postgres service
Change POSTGRES_HOST=cluetapapi_postgres_1 to POSTGRES_HOST=localhost
When you need to access your db, just use something like sqlalchemy.url = postgresql+psycopg2://postgres:password@postgres/dbname

docker-compose mongo rails connection fails

I have a Rails application with MongoDB, in the development environment.
I am unable to connect to MongoDB from Docker, though I can connect to a local MongoDB with the same mongoid config. I tried changing the host from localhost to 0.0.0.0, but it did not work.
What is missing in the settings?
My suspicion is that mongo in Docker hasn't started or bound. If I change the mongoid config to read: :nearest, it says no nodes found.
The error message is:
Moped::Errors::ConnectionFailure in Product#index
Could not connect to a primary node for replica set #]>
Dockerfile
#FROM ruby:2.2.1-slim
FROM rails:4.2.1
MAINTAINER Sandesh Soni, <my#email.com>
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir /gmv
WORKDIR /gmv
# Add db directory to /db
ADD Gemfile /gmv/Gemfile
RUN bundle install
ADD ./database /data/db
ADD . /gmv
docker-compose.yml
web:
  build: .
  command: bundle exec rails s -p 3000 -b '0.0.0.0'
  volumes:
    - .:/gmv
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: mongo
  command: "--smallfiles --bind_ip 0.0.0.0 --port 27027 -v"
  volumes:
    - data/mongodb:/data/db
  ports:
    - "27017:27017"
On your host machine, execute docker run yourapp env, then in the output look for the IP address related to your database. You need to use that IP address and port to connect to your database running in the container.
Similar question asked here
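Note also that the compose file above starts mongod on port 27027 (--port 27027) while mapping 27017:27017, so those two should be aligned. Inside the compose network the Rails service can instead reach the database by its service/link name; a mongoid.yml sketch under that assumption (Mongoid 4-era layout; the database name is hypothetical):

```yaml
# config/mongoid.yml (sketch)
development:
  sessions:
    default:
      database: gmv_development   # hypothetical name
      hosts:
        - db:27017   # compose service name; match the port mongod actually listens on
```

This avoids depending on container IPs entirely, which change between runs.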
