How to get Xdebug working in a Windows WSL2 Docker container?

Let me start by saying I know this question has been asked, but I've been through all the answers I could find and none so far have worked for me. I'm also a frontend dev and my networking experience is pretty limited, so forgive my ignorance.
I'm running Docker Desktop on Windows 11 using WSL2 integration. I'm running a local WordPress install through Docker-Compose and can't for the life of me get Xdebug to connect.
Here's my docker-compose.yml:
version: "3.1"
services:
wordpress:
build: ./
ports:
- "80:80"
volumes:
- ./wp-content:/var/www/html/wp-content:delegated
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
WORDPRESS_DB_NAME: wordpress
depends_on:
- db
db:
image: mysql:5.7
restart: always
environment:
MYSQL_ROOT_PASSWORD: wordpress
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: wordpress
volumes:
- db-data:/var/lib/mysql:delegated
volumes:
db-data:
My Dockerfile:
FROM wordpress:latest
# We're going to use this path multiple times, so save it in a variable.
ARG XDEBUG_INI="/usr/local/etc/php/conf.d/xdebug.ini"
# Install AND configure Xdebug
RUN pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& echo "[xdebug]" > $XDEBUG_INI \
&& echo "xdebug.mode = debug" >> $XDEBUG_INI \
&& echo "xdebug.start_with_request = yes" >> $XDEBUG_INI \
&& echo "xdebug.discover_client_host = 0" >> $XDEBUG_INI \
&& echo "xdebug.client_port = 9003" >> $XDEBUG_INI \
&& echo "xdebug.client_host = host.docker.internal" >> $XDEBUG_INI \
&& echo "xdebug.log = /tmp/xdebug.log" >> $XDEBUG_INI
And my launch.json:
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Listen for Xdebug",
      "type": "php",
      "request": "launch",
      "port": 9003,
      "stopOnEntry": true,
      "pathMappings": {
        "/var/www/html": "${workspaceRoot}"
      }
    }
  ]
}
When I call xdebug_info(), I get the following error that others have mentioned before:
[Step Debug] Could not connect to debugging client. Tried: host.docker.internal:9003 (through xdebug.client_host/xdebug.client_port).
Interestingly, if I run ifconfig from inside a WSL terminal and replace xdebug.client_host = host.docker.internal with the IP it reports, the error message changes to:
[Step Debug] Time-out connecting to debugging client, waited: 200 ms. Tried: wsl2_ip_here:9003 (through xdebug.client_host/xdebug.client_port).
Any help with this would be much appreciated. I feel like I'm so close and am just missing some piece of connectivity. Thanks so much in advance.
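Edit: one thing I have not tried yet, based on the related answer further down, is explicitly mapping host.docker.internal to the host gateway on the wordpress service. Roughly (untested on my setup, so treat it as a sketch):
wordpress:
  build: ./
  extra_hosts:
    - "host.docker.internal:host-gateway"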

Related

Xdebug not working across a proxy server in Docker

I have the following setup deployed in my local environment: a Docker container running nginx (used as a proxy server) forwards requests to other Docker containers running Apache. I want to install the Xdebug debugger in the Apache containers and use it from there.
When a request comes in, I see this error in the logs:
Xdebug: [Step Debug] Could not connect to debugging client. Tried: host.docker.internal:9005 (through xdebug.client_host/xdebug.client_port) :-(
In the Dockerfile of the Apache container, I wrote:
RUN pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& echo "xdebug.mode = debug" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
&& echo "xdebug.client_host = host.docker.internal" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini \
I wrote in docker-compose.yml:
backend:
  build: backend
  container_name: backend
  volumes:
    # Re-use local composer cache via host-volume
    - ~/.composer-docker/cache:/root/.composer/cache:delegated
    # Mount source-code for development
    - ./:/app
  expose:
    - 80
    - 9005
  depends_on:
    - console
  environment:
    - VIRTUAL_HOST=backend.cliq.com
nginx-proxy:
  build: docker/nginx-proxy
  container_name: nginx-proxy
  expose:
    - 9005
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
I assume my Xdebug connection does not reach the local machine through the proxy server, but I do not know how to fix it. Does anyone have any thoughts?
The question was resolved. I added the following in docker-compose.yml:
extra_hosts:
  - "host.docker.internal:host-gateway"

How to use vs code dev container with existing docker-compose file?

I cannot seem to find a clear answer for this. I found https://stackoverflow.com/a/68864132/17183293, but it is not very clear and might also be outdated because "dockerComposeFile" is no longer a valid option.
I have a project with an existing docker-compose.yml file which spins up a MariaDB database. I added a generated devcontainer.json configuration file for Node, which looks like this:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.202.3/containers/javascript-node
{
  "name": "Node.js",
  "runArgs": ["--init"],
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick a Node version: 16, 14, 12.
    // Append -bullseye or -buster to pin to an OS version.
    // Use -bullseye variants on local arm64/Apple Silicon.
    "args": { "VARIANT": "12" }
  },
  // Set *default* container specific settings.json values on container create.
  "settings": {},
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "yarn install",
  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "node"
}
It also generated a Dockerfile
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.202.3/containers/javascript-node/.devcontainer/base.Dockerfile
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 16, 14, 12, 16-bullseye, 14-bullseye, 12-bullseye, 16-buster, 14-buster, 12-buster
ARG VARIANT="16-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
These files are inside my .devcontainer folder. Now, in my project's docker-compose.yml file:
version: '3.8'
services:
  mariadb:
    image: mariadb:10.1
    env_file: .env
    environment:
    ports:
      - 3306:3306
    volumes:
      - ./docker/mariadb.conf.d/:/etc/mysql/conf.d:z
      - ./docker/mariadb-init/:/docker-entrypoint-initdb.d:z
What I'd like to achieve is to be able to spin up this mariadb instance so that my app inside the dev container can access it; ideally I'd also be able to access the database from my host operating system. I'd like to keep using the existing docker-compose.yml file so that people without the dev containers extension can run docker-compose up manually. How could I achieve this?
Full working example (combined from answers mentioned in question and other SO):
Linux as host
Go as example language
zsh, oh-my-zsh, .zsh_history from host Linux
.devcontainer/devcontainer.json:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.217.4/containers/go
{
  "name": "Go",
  "service": "workspace",
  "workspaceFolder": "/home/vscode/woskpaces/go-example/",
  "dockerComposeFile": [
    "docker-compose.yml",
    "docker-compose.workspace.yml"
  ],
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "go.toolsManagement.checkForUpdates": "local",
    "go.useLanguageServer": true,
    "go.gopath": "/go",
    "go.goroot": "/usr/local/go"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "golang.go"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "go version",
  // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
.devcontainer/docker-compose.workspace.yml:
version: '3'
networks:
  myNetwork:
    name: myNetwork
services:
  workspace:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ..:/home/vscode/woskpaces/go-example/
      - ~/.zshrc:/home/vscode/.zshrc
      - ~/.oh-my-zsh/:/home/vscode/.oh-my-zsh/
      - ~/.zsh_history:/home/vscode/.zsh_history
    depends_on:
      - kafka
    tty: true # <- keeps container running
    networks:
      - myNetwork
.devcontainer/docker-compose.yml:
version: '3'
networks:
  myNetwork:
    name: myNetwork
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
    networks:
      - myNetwork
    tmpfs: "/datalog"
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - myNetwork
.devcontainer/Dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.209.6/containers/go/.devcontainer/base.Dockerfile
# [Choice] Go version (use -bullseye variants on local arm64/Apple Silicon): 1, 1.16, 1.17, 1-bullseye, 1.16-bullseye, 1.17-bullseye, 1-buster, 1.16-buster, 1.17-buster
ARG VARIANT="1.17-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/go:0-${VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="none"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment the next lines to use go get to install anything else you need
# USER vscode
# RUN go get -x <your-dependency-or-tool>
# [Optional] Uncomment this line to install global node packages.
# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1
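For teammates without the Dev Containers extension, the same stack can still be brought up manually from the compose file; assuming the .devcontainer layout above, something like:
docker-compose -f .devcontainer/docker-compose.yml up -d
VS Code users instead open the folder and run the "Dev Containers: Reopen in Container" command, which starts the compose files listed under dockerComposeFile in devcontainer.json.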

Where is the pgpass file in pgadmin4 docker container when this file is mounted as a volume

I'm using the following image https://hub.docker.com/r/dpage/pgadmin4/ to set up pgAdmin4 on Ubuntu 18.04.
I have mounted a volume containing a pgpass file (whose permissions were also adjusted for the pgadmin user inside the container), as you can see in my Compose file:
version: '3.8'
services:
  pgadmin4:
    image: dpage/pgadmin4
    container_name: pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=me@localhost
      - PGADMIN_DEFAULT_PASSWORD=******************
      - PGADMIN_LISTEN_PORT=5050
      - PGADMIN_SERVER_JSON_FILE=servers.json
    volumes:
      - ./config/servers.json:/pgadmin4/servers.json # <-- this file is picked up correctly
      - ./config/pgpass:/pgpass # <-- this one, on the other hand, doesn't seem to be found
    ports:
      - "5000:5000"
    restart: unless-stopped
    network_mode: host
But it seems it's not recognized on the pgAdmin web page when I right-click a server and check its Advanced properties:
And if I manually specify /pgpass in the top greenish box where there's only a slash in the image, it says:
But if I log into the container, I can actually list that file:
/ $ ls -larth /pgpass
-rw------- 1 pgadmin pgadmin 574 Mar 10 22:37 /pgpass
What did I do wrong?
How can I get the pgpass file to be recognized by the application?
I got it working with the following insight.
In servers.json when you specify:
"PassFile": "/pgpass"
It means that the leading / in the path refers to the user's storage dir, i.e.
pattern:
/var/lib/pgadmin/storage/<USERNAME>_<DOMAIN>/
example:
/var/lib/pgadmin/storage/postgres_acme.com/
Here's a working example that copies everything into the right spot and sets the perms correctly.
pgadmin:
  image: dpage/pgadmin4
  restart: unless-stopped
  environment:
    PGADMIN_DEFAULT_EMAIL: postgres@acme.com
    PGADMIN_DEFAULT_PASSWORD: postgres
    PGADMIN_LISTEN_ADDRESS: '0.0.0.0'
    PGADMIN_LISTEN_PORT: 5050
  tty: true
  ports:
    - 5050:5050
  volumes:
    - ~/data/pgadmin_data:/var/lib/pgadmin
    - ./local-cloud/servers.json:/pgadmin4/servers.json # preconfigured servers/connections
    - ./local-cloud/pgpass:/pgadmin4/pgpass # passwords for the connections in this file
  entrypoint: >
    /bin/sh -c "
    mkdir -m 700 /var/lib/pgadmin/storage/postgres_acme.com;
    chown -R pgadmin:pgadmin /var/lib/pgadmin/storage/postgres_acme.com;
    cp -prv /pgadmin4/pgpass /var/lib/pgadmin/storage/postgres_acme.com/;
    chmod 600 /var/lib/pgadmin/storage/postgres_acme.com/pgpass;
    /entrypoint.sh
    "
The following config worked for me:
pgpass
servers.json
docker-compose.yaml
dockerfile_for_pgadmin
pgpass
docker_postgres_db:5432:postgres:postgres:postgres
servers.json
{
  "Servers": {
    "1": {
      "Name": "docker_postgres",
      "Group": "docker_postgres_group",
      "Host": "docker_postgres_db",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "PassFile": "/pgpass",
      "SSLMode": "prefer"
    }
  }
}
docker-compose.yaml
version: "3.9"
services:
docker_postgres_db:
image: postgres
volumes:
- ./postgres_db_data:/var/lib/postgresql/data # mkdir postgres_db_data before docker compose up
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "15432:5432"
pgadmin:
build:
context: .
dockerfile: ./dockerfile_for_pgadmin
environment:
PGADMIN_DEFAULT_EMAIL: pgadmin#pgadmin.com
PGADMIN_DEFAULT_PASSWORD: pgadmin
ports:
- "5050:80"
volumes:
- ./servers.json:/pgadmin4/servers.json # preconfigured servers/connections
dockerfile_for_pgadmin
FROM dpage/pgadmin4
USER pgadmin
RUN mkdir -p /var/lib/pgadmin/storage/pgadmin_pgadmin.com
COPY ./pgpass /var/lib/pgadmin/storage/pgadmin_pgadmin.com/
USER root
RUN chown pgadmin:pgadmin /var/lib/pgadmin/storage/pgadmin_pgadmin.com/pgpass
RUN chmod 0600 /var/lib/pgadmin/storage/pgadmin_pgadmin.com/pgpass
USER pgadmin
ENTRYPOINT ["/entrypoint.sh"]
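To build and start this variant (assuming docker-compose.yaml, dockerfile_for_pgadmin, pgpass and servers.json all sit in the same directory, as above):
docker-compose up --build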
On pgAdmin 6.2, PassFile points to an absolute path inside the container, instead of a path under STORAGE_DIR (/var/lib/pgadmin).
Before the entrypoint runs, you just need to set the owner and permissions of the pgpass file.
docker-compose.yml
pgadmin:
  image: dpage/pgadmin4:6.2
  entrypoint: >
    /bin/sh -c "
    cp -f /pgadmin4/pgpass /var/lib/pgadmin/;
    chmod 600 /var/lib/pgadmin/pgpass;
    chown pgadmin:pgadmin /var/lib/pgadmin/pgpass;
    /entrypoint.sh
    "
  environment:
    PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
    PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    PGADMIN_CONFIG_SERVER_MODE: "False"
    PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: "False"
  volumes:
    - ./config/servers.json:/pgadmin4/servers.json
    - ./config/pgpass:/pgadmin4/pgpass
  ports:
    - "${PGADMIN_PORT:-5050}:80"
servers.json
{
  "Servers": {
    "1": {
      "Name": "pgadmin4@pgadmin.org",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer",
      "PassFile": "/var/lib/pgadmin/pgpass"
    }
  }
}
pgpass
postgres:5432:postgres:postgres:Welcome01
Update:
Updated entrypoint on docker-compose.yml and PassFile on servers.json for a cross platform working solution.
Update 2:
I created a container image (dcagatay/pwless-pgadmin4) for passwordless pgadmin4.
The problem here seems to be that '/' in the servers.json file does not mean '/' in the filesystem, but something relative to the STORAGE_DIR set in the config. In fact, a separate storage directory is created for each user, so with your user me@localhost you will have to mount ./config/pgpass to /var/lib/pgadmin/storage/me_localhost/pgpass, but still refer to it as /pgpass in your servers.json.
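In Compose terms, that mount might look roughly like this (a sketch assuming the default storage dir /var/lib/pgadmin and the me@localhost account from the question):
pgadmin4:
  image: dpage/pgadmin4
  volumes:
    - ./config/servers.json:/pgadmin4/servers.json
    # mounted straight into this user's storage dir ...
    - ./config/pgpass:/var/lib/pgadmin/storage/me_localhost/pgpass
    # ... while servers.json keeps "PassFile": "/pgpass"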
I'm running the latest version of pgadmin4 as of this post (6.11). It took me forever to find the answer as to how to set a pgpass file location without storing it in the user's uploads dir (insecure IMO).
Unfortunately it does not seem to work using an absolute path e.g. /var/lib/pgadmin/pgpass.
However, what did work was this workaround I found here: https://github.com/rowanruseler/helm-charts/issues/72#issuecomment-1002300143
Basically if you use ../../pgpass, you can traverse the filesystem instead of the default behaviour of looking inside the user's uploads folder.
Example servers.json:
{
  "Servers": {
    "1": {
      "Name": "my-postgres-instance",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer",
      "PassFile": "../../pgpass"
    }
  }
}
Also, setting the file permission as 0600 is a critical step - the file can not be world-readable, see https://stackoverflow.com/a/28152568/15198761 for more info.
In a K8s environment, using the official pgadmin image, I use a ConfigMap for the servers.json along with the following command:
command:
  - sh
  - -c
  - |
    set -e
    cp -f /pgadmin4/pgpass /var/lib/pgadmin/
    chown 5050:5050 /var/lib/pgadmin/pgpass
    chmod 0600 /var/lib/pgadmin/pgpass
    /entrypoint.sh
Using a combination of the above, I was finally able to connect to my postgres instance without needing to put in a password or keeping the pgpass file in the user's uploads dir.

Hot reload fails when files are changed from the host mapped volume in Docker-compose

In a docker-compose setup I have two services that share the same mapped volume from the host. The mapped volume contains the application source files, which live on the host.
When the source files are changed on the host, HMR is not triggered, and even a manual refresh does not show the latest changes. However, if I edit a file directly in the container, HMR reloads and displays the changes, and those changes are then visible from the host, which means the mapped volume is correct and pointing to the right place.
The question is: why isn't the webpack-dev-server watcher picking up the changes? How can I debug this? What solutions are there?
The docker-compose services in question:
node_dev_worker:
  build:
    context: .
    dockerfile: ./.docker/dockerFiles/node.yml
  image: foobar/node_dev:latest
  container_name: node_dev_worker
  working_dir: /home/node/app
  environment:
    - NODE_ENV=development
  volumes:
    - ./foobar-blog-ui/:/home/node/app
  networks:
    - foobar-wordpress-network
node_dev:
  image: foobar/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- npm run start'
  depends_on:
    - node_dev_worker
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
The node.yml:
FROM node:8.16.0-slim
WORKDIR /home/node/app
RUN apt-get update
RUN apt-get install -y rsync vim git libpng-dev libjpeg-dev libxi6 build-essential libgl1-mesa-glx
CMD npm install
The webpack dev server configuration follows some recommendations found online for container issues such as the one I'm describing. The webpack configuration is placed in Gatsby's gatsby-node.js file, as follows:
devServer: {
  port: 8000,
  disableHostCheck: true,
  watchOptions: {
    poll: true,
    aggregateTimeout: 500
  }
}
The Linux distro is (Docker image called node:8.16.0-slim):
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
Also, the browser console does show that [HMR] is connected and listening:
[HMR] connected
[HMR] bundle rebuilt in 32899ms
The host in question is macOS 10.14.6 Mojave, running Docker 2.1.0.2.
Any hints on how to debug this issue?
To fix this problem I checked the documentation Docker provides for my host system, macOS, which describes osxfs (https://docs.docker.com/docker-for-mac/osxfs/). So, before anything else, I made sure that the directories I am allowed to mount from macOS are listed in File Sharing:
My volume sits under the /Users parent directory, so I'm good to go!
Note: I don't think it's related, but I did reset Docker to factory defaults before checking the File Sharing tab.
Keep in mind the changes I mentioned in the original question, as they help and are recommended. Check your webpack dev server configuration:
devServer: {
  port: 8000,
  disableHostCheck: true,
  watchOptions: {
    poll: true,
    aggregateTimeout: 500
  }
}
It's also important to start the development server by declaring the --host and --port, like:
gatsby develop -H 0.0.0.0 -p 8000
Finally, and I believe this is the key to fixing this problem, I set the environment variable GATSBY_WEBPACK_PUBLICPATH in my docker-compose YAML file, under the environment property:
node_dev:
  image: moola/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
    - GATSBY_WEBPACK_PUBLICPATH=/
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- npm run start'
  depends_on:
    - node_dev_worker
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
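Since the compose command above ultimately runs npm run start, the start script in the UI project's package.json is what has to carry those flags; a minimal sketch (the exact script contents are my assumption, not taken from the original project):
"scripts": {
  "start": "gatsby develop -H 0.0.0.0 -p 8000"
}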

How to deploy spring-cloud-config server in docker-compose?

I've got a problem with my docker-compose setup. I'm trying to create multiple containers with one spring-cloud-config server, and I'm trying to use the config server from my webapps container.
That's the list of my containers:
1 for the database (db)
1 for spring cloud config server (configproperties)
1 for some webapps (webapps)
1 nginx for reverse proxying (cnginx)
I have already tried using localhost and my config-server's name as the host in the SPRING_CLOUD_CONFIG_URI environment variable and in bootstrap.properties.
My 'configproperties' container uses port 8085 in its server.xml.
Here's my docker-compose.yml:
version: '3'
services:
  cnginx:
    image: cnginx
    ports:
      - 80:80
    restart: always
    depends_on:
      - configproperties
      - cprocess
      - webapps
  db:
    image: postgres:9.3
    environment:
      POSTGRES_USER: xxx
      POSTGRES_PASSWORD: xxx
    restart: always
    command:
      - -c
      - max_prepared_transactions=100
  configproperties:
    restart: on-failure:2
    image: configproperties
    depends_on:
      - db
    expose:
      - "8085"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mydbName?currentSchema=public
  webapps:
    restart: on-failure:2
    image: webapps
    links:
      - "configproperties"
    environment:
      - SPRING_CLOUD_CONFIG_URI=http://configproperties:8085/config-server
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - configproperties
    expose:
      - "8085"
When I run my docker-compose, my webapps deploy with errors like:
- Could not resolve placeholder 'repository.temps' in value "${repository.temps}"
But when I use Postman and send a request like this:
http://localhost/config-server/myApplication/docker/latest
it works properly; the request returns all the configuration for 'myApplication'.
I think I have missed something, but I can't find it...
Can anyone help me?
Regards,
To use the Spring Cloud config server, we just need to start our webapps after the config server has finished initialising.
#!/bin/bash
CONFIG_SERVER_URL=$SPRING_CLOUD_CONFIG_URI/myApp/production/latest
# Check that the config server is up and running
echo 'Waiting for tomcat to be available'
attempt_counter=0
max_attempts=10
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' $CONFIG_SERVER_URL)" != "200" ]]; do
  if [ ${attempt_counter} -eq ${max_attempts} ]; then
    echo "Max attempts reached on $CONFIG_SERVER_URL"
    exit 1
  fi
  attempt_counter=$(($attempt_counter+1))
  echo "Return code: $(curl -s -o /dev/null -w ''%{http_code}'' $CONFIG_SERVER_URL)"
  echo "Attempt to connect to $CONFIG_SERVER_URL : $attempt_counter/$max_attempts"
  sleep 15
done
echo 'Tomcat is available'
mv /usr/local/tomcat/webappsTmp/* /usr/local/tomcat/webapps/
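One way such a script could be wired in (a sketch, not necessarily the original setup: the script path /usr/local/tomcat/bin/wait-for-config.sh and the webappsTmp staging directory are assumptions based on the script above) is to start Tomcat in the background and only deploy the webapps once the config server answers:
webapps:
  image: webapps
  environment:
    - SPRING_CLOUD_CONFIG_URI=http://configproperties:8085/config-server
    - SPRING_PROFILES_ACTIVE=docker
  volumes:
    - ./wait-for-config.sh:/usr/local/tomcat/bin/wait-for-config.sh
  # start Tomcat, wait for the config server, let the script move the WARs into webapps/, then keep the container alive
  command: /bin/bash -c 'catalina.sh start && /usr/local/tomcat/bin/wait-for-config.sh && tail -f /usr/local/tomcat/logs/catalina.out'
  depends_on:
    - configproperties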
