Setting up Xdebug with Docker and VSCode

I set up a Laravel dev environment using Docker (nginx:stable-alpine, php:8.0-fpm-alpine and mysql:5.7.32). I install Xdebug in my php.dockerfile:
RUN apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
    && pecl install xdebug \
    && docker-php-ext-enable xdebug \
    && apk del pcre-dev ${PHPIZE_DEPS}
And I include two config-file volumes in docker-compose to point PHP at xdebug.ini and error_reporting.ini:
volumes:
  - .:/var/www/html
  - ../docker/php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
  - ../docker/php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini
My xdebug.ini looks like this:
zend_extension=xdebug
[xdebug]
xdebug.mode=develop,debug,trace,profile,coverage
xdebug.start_with_request = yes
xdebug.discover_client_host = 0
xdebug.remote_connect_back = 1
xdebug.client_port = 9003
xdebug.remote_host='host.docker.internal'
xdebug.idekey=VSCODE
When I check phpinfo(), everything looks set up correctly and Xdebug 3.0.4 shows as installed, but when I set a breakpoint in VSCode and run the debugger, it's never hit.
My launch.json looks like this:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "XDebug Docker",
            "type": "php",
            "request": "launch",
            "port": 9003,
            "pathMappings": {
                "/var/www/html": "${workspaceFolder}/src"
            }
        }
    ]
}
My folder structure looks like this:
- Project
-- /docker
--- nginx.dockerfile
--- php.dockerfile
--- /nginx
---- /certs
---- default.conf
--- /php
---- /conf.d
----- error_reporting.ini
----- xdebug.ini
-- /src (the laravel app)

Xdebug 3 has renamed a number of settings. Instead of xdebug.remote_host, you need to use xdebug.client_host, as per the upgrade guide: https://xdebug.org/docs/upgrade_guide#changed-xdebug.remote_host
xdebug.remote_connect_back has likewise been replaced by xdebug.discover_client_host, but when using Xdebug with Docker, you should leave that set to 0 so the xdebug.client_host value is actually used.
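With those renames applied, a working xdebug.ini for this setup looks roughly like this (a sketch, trimmed to the two modes step-debugging needs; host.docker.internal resolves out of the box on Docker Desktop, while on Linux it may need an extra_hosts: "host.docker.internal:host-gateway" entry in docker-compose):
zend_extension=xdebug
[xdebug]
xdebug.mode=develop,debug
xdebug.start_with_request=yes
xdebug.discover_client_host=0
xdebug.client_host=host.docker.internal
xdebug.client_port=9003
xdebug.idekey=VSCODE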

Related

AWS ECS EC2 ECR not updating files after deployment with docker volume nginx

I am stuck on an issue with my volume and ECS.
I would like to attach a volume so I can store .env files etc. there and not have to recreate them manually after every deployment.
The problem is that, the way I have it set up, the deployment does not update (or overwrite) the files with what was pushed to ECR. If I make a code change and push it to git, the pipeline does the following:
Creates a new image and pushes it to ECR
Creates new containers from the image pushed to ECR (it dynamically assigns a tag to the image)
When I run docker ps on EC2 I see the new containers, and the container with the code changes is built from the correct image that has just been pushed to ECR. So everything seems to work fine up to this point.
But the code changes don't appear when I refresh the browser, not even after clearing caches.
I am attaching the volume to the folder /var/www/html, where my app sits, so from my understanding the code should get replaced during deployment. The problem is that it does not replace the code.
When I remove the volume, I can see the code changes every time a deployment finishes, but then I always have to create the .env file manually and run a couple of commands.
PS: I have another container (mysql) that sets up its volume in exactly the same way, and the changes I make in the database are persistent even after a new container is created.
Please see my Dockerfile and taskDefinition.json below to see how I deal with volumes.
Dockerfile:
FROM alpine:${ALPINE_VERSION}
# Setup document root
WORKDIR /var/www/html
# Install packages and remove default server definition
RUN apk add --no-cache \
    curl \
    nginx \
    php8 \
    php8-ctype \
    php8-curl \
    php8-dom \
    php8-fpm \
    php8-gd \
    php8-intl \
    php8-json \
    php8-mbstring \
    php8-mysqli \
    php8-pdo \
    php8-opcache \
    php8-openssl \
    php8-phar \
    php8-session \
    php8-xml \
    php8-xmlreader \
    php8-zlib \
    php8-tokenizer \
    php8-fileinfo \
    php8-xmlwriter \
    php8-simplexml \
    php8-pdo_mysql \
    php8-pdo_sqlite \
    php8-pecl-redis \
    php8-bcmath \
    php8-exif \
    supervisor \
    nano \
    sudo
# Create symlink so programs depending on `php` still function
RUN ln -s /usr/bin/php8 /usr/bin/php
# Configure nginx
COPY tools/docker/config/nginx.conf /etc/nginx/nginx.conf
# Configure PHP-FPM
COPY tools/docker/config/fpm-pool.conf /etc/php8/php-fpm.d/www.conf
COPY tools/docker/config/php.ini /etc/php8/conf.d/custom.ini
# Configure supervisord
COPY tools/docker/config/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Make sure files/folders needed by the processes are accessible when they run under the nobody user
RUN chown -R nobody:nobody /var/www/html /run /var/lib/nginx /var/log/nginx
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apk update && apk add bash
# Install node npm
RUN apk add --update nodejs npm \
    && npm config set --global loglevel warn \
    && npm install --global marked \
    && npm install --global node-gyp \
    && npm install --global yarn \
    # Install node-sass's linux bindings
    && npm rebuild node-sass
# Switch to use a non-root user from here on
USER nobody
# Add application
COPY --chown=nobody ./ /var/www/html/
RUN cat /var/www/html/resources/js/Components/Sections/About.vue
RUN composer install --optimize-autoloader --no-interaction --no-progress --ignore-platform-req=ext-zip
USER root
RUN yarn && yarn run production
USER nobody
VOLUME /var/www/html
# Expose the port nginx is reachable on
EXPOSE 8080
# Let supervisord start nginx & php-fpm
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
# Configure a healthcheck to validate that everything is up&running
HEALTHCHECK --timeout=10s CMD curl --silent --fail http://127.0.0.1:8080/fpm-ping
taskDefinition.json
{
    "containerDefinitions": [
        {
            "name": "fooweb-nginx-php",
            "cpu": 100,
            "memory": 512,
            "links": [
                "mysql"
            ],
            "portMappings": [
                {
                    "containerPort": 8080,
                    "hostPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "mountPoints": [
                {
                    "sourceVolume": "fooweb-storage-web",
                    "containerPath": "/var/www/html"
                }
            ]
        },
        {
            "name": "mysql",
            "image": "mysql",
            "cpu": 50,
            "memory": 512,
            "portMappings": [
                {
                    "containerPort": 3306,
                    "hostPort": 4306,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "MYSQL_DATABASE",
                    "value": "123"
                },
                {
                    "name": "MYSQL_PASSWORD",
                    "value": "123"
                },
                {
                    "name": "MYSQL_USER",
                    "value": "123"
                },
                {
                    "name": "MYSQL_ROOT_PASSWORD",
                    "value": "123"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "fooweb-storage-mysql",
                    "containerPath": "/var/lib/mysql"
                }
            ]
        }
    ],
    "family": "art_web_task_definition",
    "taskRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
    "executionRoleArn": "arn:aws:iam::123:role/ecs-task-execution-role",
    "networkMode": "bridge",
    "volumes": [
        {
            "name": "fooweb-storage-mysql",
            "dockerVolumeConfiguration": {
                "scope": "shared",
                "autoprovision": true,
                "driver": "local"
            }
        },
        {
            "name": "fooweb-storage-web",
            "dockerVolumeConfiguration": {
                "scope": "shared",
                "autoprovision": true,
                "driver": "local"
            }
        }
    ],
    "placementConstraints": [],
    "requiresCompatibilities": [
        "EC2"
    ],
    "cpu": "1536",
    "memory": "1536",
    "tags": []
}
So I believe there is some problem with the way I have set up the volume, or maybe some permission issue?
Many thanks!
"I am attaching volume to the folder /var/www/html where sits my app,
so from my understanding this code should get replaced during
deployment."
That's the opposite of how Docker volumes work.
The mount ignores whatever is in /var/www/html inside the Docker image and instead exposes whatever you have in the mounted volume. Mounted volumes are primarily for persisting files between container restarts and image changes. If there is updated code in /var/www/html inside the image you are building, and you want that updated code to be active when your application is deployed, then you can't mount a volume over that path.
Since you specify a VOLUME instruction in your Dockerfile, the very first time you ran the container it "initialized" the volume with the files that were inside the image, as part of creating the volume. After that, the files in the volume on the host server are persisted across container restarts/deployments, and any updates to that path inside new Docker images are ignored.
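If the goal is only to keep .env and other runtime files across deployments, one way out (a sketch; the storage path is an assumption based on the Laravel-style layout) is to drop the VOLUME /var/www/html instruction from the Dockerfile and mount the volume over just the directory that must persist, for example in the task definition:
"mountPoints": [
    {
        "sourceVolume": "fooweb-storage-web",
        "containerPath": "/var/www/html/storage"
    }
]
That way the code baked into each new image is what actually runs, and only the storage directory survives container replacement.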

It seems you are running Vue CLI inside a container

I'm trying to run my Vue.js app using VS Code Remote - Containers. It's deployed and I can access it via the URL localhost:8080/, but if I update some JS file, it doesn't recompile and doesn't hot-reload.
devcontainer.json
{
    "name": "Aquawebvue",
    "dockerFile": "Dockerfile",
    "appPort": [3000],
    "runArgs": ["-u", "node"],
    "settings": {
        "workbench.colorTheme": "Cobalt2",
        "terminal.integrated.automationShell.linux": "/bin/bash"
    },
    "postCreateCommand": "yarn",
    "extensions": [
        "esbenp.prettier-vscode",
        "wesbos.theme-cobalt2"
    ]
}
Dockerfile
FROM node:12.13.0
RUN npm install -g prettier
After opening the container and running 'yarn serve' in the terminal, it builds and deploys successfully, but I get this warning:
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
VSCode has a pre-defined .devcontainer directory for Vue projects. It can be found on GitHub. You can install it automatically by running the command Add Development Container Configuration Files... > Show All Definitions > Vue.
Dockerfile
# [Choice] Node.js version: 14, 12, 10
ARG VARIANT=14
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN su node -c "umask 0002 && npm install -g http-server @vue/cli @vue/cli-service-global"
WORKDIR /app
EXPOSE 8080
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
devcontainer.json
{
    "name": "Vue (Community)",
    "build": {
        "dockerfile": "Dockerfile",
        "context": "..",
        // Update 'VARIANT' to pick a Node version: 10, 12, 14
        "args": { "VARIANT": "14" }
    },
    // Set *default* container specific settings.json values on container create.
    "settings": {
        "terminal.integrated.shell.linux": "/bin/zsh"
    },
    // Add the IDs of extensions you want installed when the container is created.
    "extensions": [
        "dbaeumer.vscode-eslint",
        "octref.vetur"
    ],
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [
        8080
    ],
    // Use 'postCreateCommand' to run commands after the container is created.
    // "postCreateCommand": "uname -a",
    // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
    "remoteUser": "node"
}
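If the hot-reload warning still appears, the warning itself names the knob to turn: with the webpack-dev-server version Vue CLI used at the time, you can pin the URL the hot-reload socket connects to in vue.config.js (a sketch; the host:port value is an assumption for this setup):
// vue.config.js
module.exports = {
  devServer: {
    // public URL the browser should use for the hot-reload websocket
    public: 'localhost:8080'
  }
};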

No internet connection when building image

I created a custom configuration for the Docker daemon; here is the config:
{
    "bip": "192.168.1.32/24",
    "default-address-pools": [
        {
            "base": "172.16.200.0/24",
            "size": 24
        },
        {
            "base": "172.16.25.0/24",
            "size": 24
        }
    ],
    "debug": true,
    "hosts": ["tcp://127.0.0.69:4269", "unix:///var/run/docker.sock"],
    "default-gateway": "192.168.1.1",
    "dns": ["8.8.8.8", "8.8.4.4"],
    "experimental": true,
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "20m",
        "max-file": "3",
        "labels": "develope_status",
        "env": "developing"
    }
}
bip is my host IP address and default-gateway is my router's gateway. I created 2 address pools so Docker can assign IP addresses to containers.
But during the build process the image has no internet access, so it can't run apk update.
Here is my docker-compose file
version: "3"
services:
blog:
build: ./
image: blog:1.0
ports:
- "456:80"
- "450:443"
volumes:
- .:/blog
- ./logs:/var/log/apache2
- ./httpd-ssl.conf:/usr/local/apache2/conf/extra/httpd-ssl.conf
container_name: blog
hostname: alpine
command: apk update
networks:
default:
external:
name: vpc
Here is the Dockerfile
FROM httpd:2.4-alpine
ENV REFRESHED_AT=2021-02-01 \
    APACHE_RUN_USER=www-data \
    APACHE_RUN_GROUP=www-data \
    APACHE_LOG_DIR=/var/log/apache2 \
    APACHE_PID_FILE=/var/run/apache2.pid \
    APACHE_RUN_DIR=/var/run/apache2 \
    APACHE_LOCK_DIR=/var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR \
    && sed -i \
        -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
        -e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
        -e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
        conf/httpd.conf \
    && echo "Include /blog/blog.conf" >> /usr/local/apache2/conf/httpd.conf
VOLUME ["/blog", "$APACHE_LOG_DIR"]
EXPOSE 80 443
The running container is able to ping Google and run apk update like normal, but if I put RUN apk update inside the Dockerfile, it fails during the build.
Any help would be great, thank you!
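A common diagnostic for this exact symptom (networking works at runtime but not in RUN steps), offered as a suggestion rather than a confirmed fix: force the build onto the host network and see whether apk update succeeds, which would point at the custom bridge/address-pool configuration:
docker build --network=host -t blog:1.0 .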

VSCode: Display forwarding from docker container in Remote Development Extension

How do I set up remote display forwarding from a Docker container using the new Remote Development extension?
Currently, my .devcontainer contains:
devcontainer.json
{
    "name": "kinetic_v5",
    "context": "..",
    "dockerFile": "Dockerfile",
    "workspaceFolder": "/workspace",
    "runArgs": [
        "--net", "host",
        "-e", "DISPLAY=${env:DISPLAY}",
        "-e", "QT_GRAPHICSSYSTEM=native",
        "-e", "CONTAINER_NAME=kinetic_v5",
        "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
        "--device=/dev/dri:/dev/dri",
        "--name=kinetic_v5"
    ],
    "extensions": [
        "ms-python.python"
    ]
}
Dockerfile
FROM docker.is.localnet:5000/amd/official:16.04
RUN apt-get update && \
    apt-get install -y zsh \
        fonts-powerline \
        locales \
    # set up locale
    && locale-gen en_US.UTF-8
RUN pip install Cython
# run the installation script
RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
CMD ["zsh"]
This doesn't seem to do the job.
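One piece that is easy to miss with a bind-mounted /tmp/.X11-unix setup like this (an aside, not confirmed in this thread): the X server on the host must also accept connections from local clients, which can be allowed before starting the container:
xhost +local: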
Setup details:
OS: linux
Product: Visual Studio Code - Insiders
Product Version: 1.35.0-insider
Language: en
UPDATE: You can find a thread on the official git repo about this issue here.

Docker Image Not Loading on Localhost. Network, etc

I have been having a really hard time figuring out why my Docker image is not loading on localhost. I am using Docker on Windows 7. Here is some information about my Docker image, network, etc., in case it helps in proposing solutions.
I have EXPOSE 80 in my Dockerfile and listen 8080 in my shiny-server.conf file. I run my image by typing docker run -p 80:8080 imagename. It returns to the command line. Then I check http://localhost, http://localhost:8080, http://172.17.0.1, http://172.17.0.1:8080 and http://192.168.99.100, and nothing shows up.
When I run docker ps there is no running container (which makes sense, since the app is not reachable on localhost anyway). All the exited containers show with docker ps -a. Some containers that I haven't even created show up too (I guess they come with building images).
I ran docker inspect to find the IP addresses, and here is what I get (there is no IP on fitfarmz, which is my image name):
$ docker inspect -f '{{.Name}} - {{.NetworkSettings.IPAddress }}' $(docker ps -aq)
/mystifying_johnson -
/wonderful_newton -
/flamboyant_brown -
/gallant_austin -
/fitfarmz-2 -
/fitfarmz -
/amazing_easley -
/nervous_lewin -
/silly_wiles -
/lucid_hopper -
/quizzical_kirch -
/boring_gates -
/clever_booth -
/determined_mestorf -
/pedantic_wozniak -
/goofy_goldstine -
/sharp_ardinghelli -
/xenodochial_lamport -
/keen_panini -
/blissful_lamarr -
/suspicious_boyd -
/confident_hodgkin -
/vigorous_lewin - 172.17.0.2
/quirky_khorana -
/agitated_knuth -
I also got info on the Docker network:
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "9105bd8d679ad2e7d814781b4fa99f375cff3a99a047e70ef63e463c35c5ae28",
        "Created": "2018-09-08T22:22:51.075519268Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Here is the Dockerfile:
FROM r-base:3.5.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
    sudo \
    gdebi-core \
    pandoc \
    pandoc-citeproc \
    libcurl4-gnutls-dev \
    libcairo2-dev/unstable \
    libxt-dev \
    libssl-dev
# Add shiny user
RUN groupadd shiny \
    && useradd --gid shiny --shell /bin/bash --create-home shiny
# Download and install ShinyServer
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb && \
    gdebi shiny-server-1.5.7.907-amd64.deb
# Install R packages that are required
RUN R -e "install.packages(c('Benchmarking', 'plotly', 'DT'), repos='http://cran.rstudio.com/')"
RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
And shiny-server.conf:
# Define the user we should use when spawning R Shiny processes
run_as shiny;
# Define a top-level server which will listen on a port
server {
  # Instruct this server to listen on port 80. The app at dokku-alt need expose PORT 80, or 500 e etc. See the docs
  listen 8080;
  # Define the location available at the base URL
  location / {
    # Run this location in 'site_dir' mode, which hosts the entire directory
    # tree at '/srv/shiny-server'
    site_dir /srv/shiny-server;
    # Define where we should put the log files for this location
    log_dir /var/log/shiny-server;
    # Should we list the contents of a (non-Shiny-App) directory when the user
    # visits the corresponding URL?
    directory_index on;
  }
}
Here is the shiny-server.sh file:
mkdir -p /var/log/shiny-server
chown shiny.shiny /var/log/shiny-server
exec shiny-server >> /var/log/shiny-server.log 2>&1
Any ideas what is going wrong? I use the command: docker run -p 80:8080 fitfarmz to run the image.
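A generic first diagnostic (not from this thread): since docker ps shows no running container, the container is exiting right after startup, and the reason usually shows up in its logs:
docker ps -a          # find the ID of the most recent exited fitfarmz container
docker logs <container-id>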
