Xdebug with PhpStorm and Docker

I have a docker-compose setup that consists of 2 services:
a front-end application that runs on port 3000
a back-end application that runs on port 443
mt_symfony:
  container_name: mt_symfony
  build:
    context: ./html
    dockerfile: dev.dockerfile
  environment:
    XDEBUG_CONFIG: "remote_host=192.168.220.1 remote_port=10000"
    PHP_IDE_CONFIG: "serverName=mt_symfony"
  ports:
    - 443:443
    - 80:80
  networks:
    - mt_network
  volumes:
    - ./html:/var/www/html
  sysctls:
    - net.ipv4.ip_unprivileged_port_start=0
mt_angular:
  container_name: mt_angular
  build:
    context: ./web
    dockerfile: dev.dockerfile
  ports:
    - 3000:3000
  networks:
    - mt_network
  command: ./dev.entrypoint.sh
networks:
  mt_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.220.0/28
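The remote_host value is tied to the subnet chosen for mt_network: by default Docker assigns the first usable address of a bridge subnet to the host-side gateway, which is the address PhpStorm is reachable on from inside the container. A quick sketch (standard-library Python only) of how to work out that gateway address for the /28 above:

```python
import ipaddress

# Docker's default IPAM driver gives the host-side gateway the first
# host address of the bridge subnet.
net = ipaddress.ip_network("192.168.220.0/28")
gateway = next(net.hosts())
print(gateway)  # 192.168.220.1
```

This is why remote_host=192.168.220.1 in XDEBUG_CONFIG matches the subnet 192.168.220.0/28 configured for the network.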
I also have this in my php.ini file:
[xdebug]
error_reporting = E_ALL
display_startup_errors = On
display_errors = On
xdebug.remote_enable=1
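For Xdebug 2.x inside a container, a few more directives are usually needed on top of remote_enable. A minimal sketch of the [xdebug] section; the host and port values simply mirror the XDEBUG_CONFIG environment variable above and are assumptions for this particular setup:

```ini
[xdebug]
xdebug.remote_enable=1
; connect-back is unreliable from containers; point at the host explicitly
xdebug.remote_connect_back=0
xdebug.remote_host=192.168.220.1
xdebug.remote_port=10000
; start a debug session on every request instead of requiring a trigger
xdebug.remote_autostart=1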
The mt_symfony dockerfile:
FROM php:5.6.37-apache
EXPOSE 443 80
RUN pecl install xdebug-2.5.5
RUN docker-php-ext-enable xdebug
COPY ./docker/php5.6-fpm.conf /etc/apache2/conf-available
RUN a2enmod headers \
    && a2enmod ssl \
    && a2enmod rewrite \
    && a2enconf php5.6-fpm.conf \
    && a2ensite httpd.conf
In PhpStorm:
"Build, Execution, Deployment -> Docker" shows "Connection successful"
"Languages & Frameworks -> PHP -> CLI Interpreter" connects to docker mt_symfony container and detects installed Xdebug
"Languages & Frameworks -> PHP -> Xdebug -> Validate" I'm able to validate Xdebug on port 80, but it does not work at all on port 443

Related

Dockerize Strapi with docker-compose: complete guide

https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/installation/docker.html#creating-a-strapi-project
This shows how to dockerize Strapi with Docker and docker-compose, and resolves errors such as:
strapi failed to load resource: the server responded with a status of 404 ()
You can use my dockerized project.
Dockerfile:
FROM node:16.15-alpine3.14
RUN mkdir -p /opt/app
WORKDIR /opt/app
RUN adduser -S app
COPY app/ .
RUN npm install
RUN npm install --save @strapi/strapi
RUN chown -R app /opt/app
USER app
RUN npm run build
EXPOSE 1337
CMD [ "npm", "run", "start" ]
If you don't use RUN npm run build, your project works on port 80 (http://localhost), but the Strapi admin templates still call http://localhost:1337 on the system where you are serving http://localhost. Since there is no stable http://localhost:1337 URL there, Strapi throws exceptions like:
Refused to connect to 'http://localhost:1337/admin/init' because it violates the document's Content Security Policy.
Refused to connect to 'http://localhost:1337/admin/init' because it violates the following Content Security Policy directive: "connect-src 'self' https:".
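If you do need the admin panel to call a different backend URL than the one it was built against, Strapi's server config also accepts a public url. A hedged sketch of config/server.js, where PUBLIC_URL is an assumed environment variable name for this setup:

```js
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  // Absolute URL that the admin panel and clients should use to reach Strapi
  url: env('PUBLIC_URL', 'http://localhost:1337'),
});
```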
docker-compose.yml:
version: "3.9"
services:
  # Strapi Service (APP Service)
  strapi_app:
    build:
      context: .
    depends_on:
      - strapi_db
    ports:
      - "80:1337"
    environment:
      - DATABASE_CLIENT=postgres
      - DATABASE_HOST=strapi_db
      - DATABASE_PORT=5432
      - DATABASE_NAME=strapi_db
      - DATABASE_USERNAME=strapi_db
      - DATABASE_PASSWORD=strapi_db
      - DATABASE_SSL=false
    volumes:
      - /var/scrapi/public/uploads:/opt/app/public/uploads
      - /var/scrapi/public:/opt/app/public
    networks:
      - app-network
  # PostgreSQL Service
  strapi_db:
    image: postgres
    container_name: strapi_db
    environment:
      POSTGRES_USER: strapi_db
      POSTGRES_PASSWORD: strapi_db
      POSTGRES_DB: strapi_db
    ports:
      - '5432:5432'
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
# Volumes
volumes:
  dbdata:
    driver: local
In the docker-compose file I used Postgres as the database; you can use any other database and set its config in the app service's environment variables, like:
environment:
  - DATABASE_CLIENT=postgres
  - DATABASE_HOST=strapi_db
  - DATABASE_PORT=5432
  - DATABASE_NAME=strapi_db
  - DATABASE_USERNAME=strapi_db
  - DATABASE_PASSWORD=strapi_db
  - DATABASE_SSL=false
To use environment variables in the project, read them with process.env, which exposes the operating system's environment variables. Change the app/config/database.js file to:
module.exports = ({ env }) => ({
  connection: {
    client: process.env.DATABASE_CLIENT,
    connection: {
      host: process.env.DATABASE_HOST,
      port: parseInt(process.env.DATABASE_PORT),
      database: process.env.DATABASE_NAME,
      user: process.env.DATABASE_USERNAME,
      password: process.env.DATABASE_PASSWORD,
      // ssl: Boolean(process.env.DATABASE_SSL),
      ssl: false,
    },
  },
});
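Since the config function already receives Strapi's env helper as an argument, an equivalent version can use it instead of process.env; env.int and env.bool handle the parsing and defaults. A sketch based on Strapi's documented helper:

```js
module.exports = ({ env }) => ({
  connection: {
    client: env('DATABASE_CLIENT', 'postgres'),
    connection: {
      host: env('DATABASE_HOST'),
      port: env.int('DATABASE_PORT', 5432),   // parsed to an integer
      database: env('DATABASE_NAME'),
      user: env('DATABASE_USERNAME'),
      password: env('DATABASE_PASSWORD'),
      ssl: env.bool('DATABASE_SSL', false),   // parsed to a boolean
    },
  },
});
```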
Dockerize Strapi with Docker-compose
FROM node:16.14.2
# Set up the working directory that will be used to copy files/directories below :
WORKDIR /app
# Copy package.json to root directory inside Docker container of Strapi app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
EXPOSE 1337
CMD ["npm", "start"]
docker-compose.yml:
version: '3.7'
services:
  strapi:
    container_name: strapi
    restart: unless-stopped
    build:
      context: ./strapi
      dockerfile: Dockerfile
    volumes:
      - strapi:/app
      - /app/node_modules
    ports:
      - '1337:1337'
volumes:
  strapi:
    driver: local

Xdebug does not work across a proxy server in Docker

I have such a system deployed in my local environment: a Docker container running nginx (used as a proxy server) redirects requests to other Docker containers running Apache. I want to install the Xdebug debugger in the Apache containers and use it accordingly.
When a request comes in, I see this error in the logs:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: host.docker.internal:9005 (through xdebug.client_host/xdebug.client_port) :-(
In the Dockerfile of the Apache container, I wrote:
RUN pecl install xdebug \
    && docker-php-ext-enable xdebug \
    && echo "xdebug.mode = debug" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
    && echo "xdebug.client_host = host.docker.internal" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
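Since the error log mentions port 9005, the client port has to be set explicitly too (Xdebug 3 defaults to 9003). A sketch of what the resulting docker-php-ext-xdebug.ini could look like, assuming you also want debug sessions started automatically:

```ini
zend_extension=xdebug.so
xdebug.mode = debug
xdebug.client_host = host.docker.internal
; Xdebug 3 defaults to 9003; the log above shows 9005 is expected here
xdebug.client_port = 9005
; start a debug session on every request instead of requiring a trigger
xdebug.start_with_request = yes
```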
In docker-compose.yml I wrote:
backend:
  build: backend
  container_name: backend
  volumes:
    # Re-use local composer cache via host-volume
    - ~/.composer-docker/cache:/root/.composer/cache:delegated
    # Mount source-code for development
    - ./:/app
  expose:
    - 80
    - 9005
  depends_on:
    - console
  environment:
    - VIRTUAL_HOST=backend.cliq.com
nginx-proxy:
  build: docker/nginx-proxy
  container_name: nginx-proxy
  expose:
    - 9005
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
I assume that my Xdebug connection does not reach the local machine through the proxy server, but I do not know how to fix it. Any thoughts?
The question was resolved. I added this to docker-compose.yml:
extra_hosts:
  - "host.docker.internal:host-gateway"
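In context, the mapping goes under the service that runs Xdebug, so the container can resolve the Docker host even on Linux, where host.docker.internal is not defined by default. A sketch of the relevant part of the backend service (requires Docker 20.10+ for the host-gateway keyword):

```yaml
backend:
  build: backend
  extra_hosts:
    # maps host.docker.internal to the host's gateway IP on Linux
    - "host.docker.internal:host-gateway"
```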

docker port mapping using docker-gen and letsencrypt-companion

I have several Flask applications which I want to run on a server as separate Docker containers. On the server I already have several websites running behind a reverse proxy with the letsencrypt-nginx-proxy-companion. Unfortunately I can't get the containers to run, and I think it is because of the port mapping. When I start the containers on port 80, I get the error message "[ERROR] Can't connect to ('', 80)" from gunicorn. On all other ports it starts successfully, but then I can't access it from outside.
What am I doing wrong?
docker-compose.yml
version: '3'
services:
  db:
    image: "mysql/mysql-server:5.7"
    env_file: .env-mysql
    restart: always
  app:
    build: .
    env_file: .env
    expose:
      - "8001"
    environment:
      - VIRTUAL_HOST=example.com
      - VIRTUAL_PORT=8001
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=foo@example.com
    links:
      - db:dbserver
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
Dockerfile
FROM python:3.6-alpine
ARG CONTAINER_USER='flask-user'
ENV FLASK_APP run.py
ENV FLASK_CONFIG docker
RUN adduser -D ${CONTAINER_USER}
USER ${CONTAINER_USER}
WORKDIR /home/${CONTAINER_USER}
COPY requirements requirements
RUN python -m venv venv
RUN venv/bin/pip install -r requirements/docker.txt
COPY app app
COPY migrations migrations
COPY run.py config.py entrypoint.sh ./
# runtime configuration
EXPOSE 8001
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
source venv/bin/activate
flask deploy
exec gunicorn -b :8001 --access-logfile - --error-logfile - run:app
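The gunicorn error is consistent with the Dockerfile switching to the unprivileged flask-user: on Linux, binding ports below 1024 normally requires root (or CAP_NET_BIND_SERVICE), so binding port 80 fails while 8001 works. A small illustration with plain Python sockets, independent of gunicorn:

```python
import socket

def can_bind(port):
    """Try to bind a TCP socket to the given port on all interfaces."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("", port))
        return True
    except PermissionError:
        return False
    finally:
        s.close()

# As a non-root user, can_bind(80) returns False (PermissionError),
# while an unprivileged port such as 8001 binds fine. Keeping gunicorn
# on 8001 and letting the nginx-proxy publish port 80 avoids the issue.
print(can_bind(8001))
```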
reverse-proxy/docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /srv/www/nginx-proxy/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: always
    volumes:
      - /srv/www/nginx-proxy/conf.d:/etc/nginx/conf.d
      - /srv/www/nginx-proxy/vhost.d:/etc/nginx/vhost.d
      - /srv/www/nginx-proxy/html:/usr/share/nginx/html
      - /srv/www/nginx-proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
      DEBUG: "true"
networks:
  default:
    external:
      name: nginx-proxy

Network problem between two containers in Docker

I have four containers: one for the PHP server, one for the MySQL server, one for nginx, and one for Let's Encrypt.
The problem is that the PHP server can't connect to the MySQL server.
On the server itself, I can connect to the database on 127.0.0.1:
[screenshot: schema update on the server]
In the container, I can't connect to the database on 127.0.0.1:
[screenshot: schema update in the PHP container]
I think it's a network problem between the containers.
This is the docker-compose file:
version: "3.3"
services:
  saas-smd-php:
    build: ./html
    container_name: saas-smd-php
    ports:
      - "80"
    network_mode: bridge
    env_file:
      - .env
    environment:
      - VIRTUAL_PORT=80
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    network_mode: bridge
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    env_file:
      - .env
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge
  db:
    container_name: saas-smd-mysql
    image: mysql
    ports:
      - "3306:3306"
    env_file:
      - .env
    volumes:
      - data:/var/lib/mysql
    network_mode: bridge
volumes:
  data:
  conf:
  vhost:
  html:
  dhparam:
  certs:
And the Dockerfile of the PHP server:
ARG PHP_VERSION=7.3
FROM php:${PHP_VERSION}-fpm-alpine
RUN apk update
RUN apk upgrade
RUN set -ex \
    && apk --no-cache add postgresql-libs postgresql-dev \
    && docker-php-ext-install pgsql pdo_pgsql \
    && apk del postgresql-dev
WORKDIR /var/www/html
COPY saas-api .
EXPOSE 80
EXPOSE 22
CMD ["php", "-S", "0.0.0.0:80", "-t", "./", "./web/app_dev.php"]
I would like the PHP server to be able to communicate with the MySQL server, without breaking access to the PHP server from a domain name through nginx and Let's Encrypt.
Since you do not provide any info on how you connect, I will go ahead and assume you are just not using your database container's name; you're supposed to use saas-smd-mysql instead of localhost.
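For a Symfony/Doctrine-style setup like this, that usually means pointing the connection at the container name rather than 127.0.0.1. A hedged sketch of the relevant .env line; the credentials and database name are placeholders, only the host (saas-smd-mysql) comes from the compose file above:

```ini
# .env of the PHP application
# the host must be the MySQL container's name, not 127.0.0.1
DATABASE_URL=mysql://app_user:app_password@saas-smd-mysql:3306/app_db
```

Note that automatic name resolution between containers works on a user-defined network; with the default bridge (network_mode: bridge) you may also need links or a shared custom network.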

Docker exposing ports site can't be reached

I have exposed the required ports in my Dockerfiles and mapped them in my docker-compose.yml.
If I create the containers without a docker-compose.yml I can access everything, but if I use the docker-compose.yml file I cannot reach 2 out of 3 images via an HTTP GET request.
Yet according to docker port <container-name> the ports are mapped:
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port php-container 8080
0.0.0.0:8080
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port php-container
8080/tcp -> 0.0.0.0:8080
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port comp-container
8080/tcp -> 0.0.0.0:8070
bitmovin@bitmovin-VirtualBox:~/Documents$ docker port phpmyadmin-container
8080/tcp -> 0.0.0.0:8090
I don't know why I cannot access the phpmyadmin-container and the php-container, but can access the comp-container, when I use a docker-compose file. Did I miss something important?
php-image:
FROM php:7.0-apache
EXPOSE 8080
COPY Frontend/ /var/www/html/aw3somevideo/
COPY Frontend/ /var/www/html/
RUN chown -R www-data:www-data /var/www/html
RUN chmod -R 755 /var/www/html
RUN docker-php-ext-install mysqli
RUN php -i | grep -F .default_socket
comp-image:
FROM java:openjdk-8u91-jdk
EXPOSE 8070
CMD java -jar encoding-comparison-1.0.jar
ADD encoding-comparison-1.0.jar /encoding-comparison-1.0.jar
phpmyadmin-image:
FROM phpmyadmin/phpmyadmin
EXPOSE 8090
docker-compose.yml:
db:
  image: mysql-image
  ports:
    - "3306:3306"
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=Bitmovin
    - DB_NAME=aw3somevideo
    - DB_USER=Bitmovin
    - DB_PASS=Bitmovin
  container_name: mysql-container
admin:
  image: phpmyadmin-image
  ports:
    - "8090:8080"
  links:
    - db
  container_name: phpmyadmin-container
backend:
  image: comp-image
  ports:
    - "8070:8080"
  volumes:
    - ./src:/var/www/backend
  links:
    - db
  container_name: comp-container
php:
  image: php-image
  volumes:
    - ./src:/var/www/html
  links:
    - db
  ports:
    - "8080:8080"
  container_name: php-container
The solution was to change the ports for admin and php from "8090:8080" and "8080:8080" to "8090:80" and "8080:80" respectively.
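The fix works because the left side of a ports entry is the host port and the right side is the port the process actually listens on inside the container; Apache in the php and phpmyadmin images listens on 80 regardless of what EXPOSE declares. A sketch of the corrected mappings:

```yaml
admin:
  ports:
    # host 8090 -> container 80 (Apache's listen port inside the image)
    - "8090:80"
php:
  ports:
    # host 8080 -> container 80
    - "8080:80"
```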
