I have a Symfony app (v4.3) running in a Docker setup. This setup also has a container for the mailcatcher. No matter how I set MAILER_URL in the .env file, no mail shows up in the mailcatcher. If I just call the regular PHP mail() function, the mail pops up in the mailcatcher. The setup has been used for other projects as well and it worked without a flaw.
Only with the Symfony Swiftmailer do the mails never arrive.
My docker-compose file looks like this:
version: '3'
services:
  #######################################
  # PHP application Docker container
  #######################################
  app:
    container_name: project_app
    build:
      context: docker/web
    networks:
      - default
    volumes:
      - ../project:/project:cached
      - ./etc/httpd/vhost.d:/opt/docker/etc/httpd/vhost.d
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - docker/web/conf/environment.yml
      - docker/web/conf/environment.development.yml
    environment:
      - VIRTUAL_HOST=.project.docker
      - POSTFIX_RELAYHOST=[mail]:1025

  #######################################
  # Mailcatcher
  #######################################
  mail:
    image: schickling/mailcatcher
    container_name: project_mail
    networks:
      - default
    environment:
      - VIRTUAL_HOST=mail.project.docker
      - VIRTUAL_PORT=1080
I played around with the MAILER_URL setting for hours, but everything has failed so far.
I hope somebody here has an idea how to set the MAILER_URL.
Thank you
According to docker-compose.yml, your MAILER_URL should be:
smtp://project_mail:1025
Double-check that it has the correct value.
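A minimal .env sketch, assuming the stock Symfony 4.3 Swiftmailer recipe (which reads MAILER_URL); the compose service name mail should resolve on the default network as well:

# .env: point Swiftmailer at the mailcatcher's SMTP port
MAILER_URL=smtp://project_mail:1025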
Then you can view the container logs using docker-compose logs -f mail to see whether your messages reach the service at all.
It will be something like:
==> SMTP: Received message from '<user@example.com>' (619 bytes)
Second: try to restart your containers. Sometimes changes in .env* files are not applied instantly.
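For example, forcing a recreate of the app service picks up new environment values:

# recreate the container so changed env values are re-read
docker-compose up -d --force-recreate app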
I'm facing an issue with my new Mac with an M1 chip.
I use the same config as on my old Mac, where it worked:
version: '3'
services:
  shop:
    container_name: shop
    image: dockware/dev:latest
    ports:
      - "22:22"     # ssh
      - "80:80"     # apache2
      - "443:443"   # apache2 https
      - "8888:8888" # watch admin
      - "9998:9998" # watch storefront proxy
      - "9999:9999" # watch storefront
      - "3306:3306" # mysql port
    volumes:
      - "db_volume:/var/lib/mysql"
      - "shop_volume:/var/www/html"
    networks:
      - web
    environment:
      - MYSQL_USER=shopware
      - MYSQL_PWD=secret
      - XDEBUG_ENABLED=0

  rediscache:
    image: redis:6.0
    container_name: redis
    networks:
      - web

volumes:
  db_volume:
    driver: local
  shop_volume:
    driver: local

## ***********************************************************************
## NETWORKS
## ***********************************************************************
networks:
  web:
    external: false
The error I get is:
sudo: no tty present and no askpass program specified
I already pruned images and containers but still get this error.
In my research I found solutions where I would need to edit the sudoers file or set permissions, but it's a Docker image, so I cannot use those solutions.
Does anyone have an idea why this happens and how to solve it?
Please open a shell in the container (docker exec -it <container name> /bin/bash) and execute the entrypoint script manually.
You should see that prompt when running it manually.
Probably the setup is trying to ask for something interactively, which fails when there is no TTY.
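A rough sketch of that debugging session (the entrypoint path is an assumption; inspect the image for the real one):

# on the host: see what the image actually runs on start
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' shop

# open an interactive shell in the running container
docker exec -it shop /bin/bash

# inside the container: run the startup script by hand to surface the prompt
# (/entrypoint.sh is an assumed path; use whatever inspect reported)
/entrypoint.sh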
At first everything was working great with the default configuration, until I hit a memory limit; then I needed to add a Redis configuration file (latest, 7.0). In this file, bind is set to 127.0.0.1 with the default port, so I tried that. I also changed it to bind 0.0.0.0 but I got the same error.
Now for my environment variables I'm putting ~redis//redis:6379~ or redis:6379, so here is my configuration (docker-compose.yml):
version: '3.7'
services:
  classified-ads:
    container_name: classified-ads
    depends_on:
      - redis-service
    ports:
      - 3000:3000
    build:
      context: ./
    # restart: unless-stopped
    environment:
      - REDIS_URI=redis:6379
      - INSIDE_DOCKER=wahoo

  # our custom image
  redis:
    container_name: redis-service
    build:
      context: ./docker/redis/
    privileged: true
    command: sh -c "./init.sh"
    ports:
      - '6379:6379'
    volumes:
      - ./host-db/redis-data:/data/redis:rw
The error I'm getting is bloated and comes from my client, which is ioredis, but it is clearly a connection error. (ioredis is wrapped with Fastify/redis, so it is a failed promise that is very verbose and not clearly indicative, but it's 100% a connection error.)
I checked the Redis logs piped to Docker and it is running fine.
Edit: ping redis//redis:6379 does not work from my app image while ping redis:6379 works, so I changed that.
I found the solution. While the default redis:alpine image uses a configuration with protected-mode yes, I now use a new configuration with protected-mode no. I also removed the bind 'address' altogether.
Then I reconnected normally from other services with redis:port as usual.
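A minimal sketch of that custom redis.conf, assuming a trusted Docker network (with no bind line, Redis listens on all container interfaces):

# redis.conf (sketch)
# no "bind" directive: accept connections from other containers
protected-mode no
port 6379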
All thanks to https://stackoverflow.com/a/57541475/1951298
I'm trying to set up a simple MLflow tracking server with Docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file.

When I try to run the sklearn_elasticnet_wine example from the mlflow repo (https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine) using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials.

I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step. I'm not sure why this isn't working.

I've seen a few examples of how to set up a tracking server online, including https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some also use Minio, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using Minio?

Eventually I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.
version: '3'
services:
  app:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0
      --port 5001

  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80"
    networks:
      - internal
    depends_on:
      - app

networks:
  internal:
    driver: bridge
Finally figured this out. I didn't realize that the client also needs access to the AWS credentials for S3 storage; the client logs artifacts to S3 itself rather than through the tracking server.
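A sketch of the client-side fix, assuming the example is launched from a shell and the exports mirror the server's .env values:

# shell on the machine running the example
export MLFLOW_TRACKING_URI=http://localhost:5005
export AWS_ACCESS_KEY_ID=<access key>        # same credentials the server uses
export AWS_SECRET_ACCESS_KEY=<secret key>
export AWS_DEFAULT_REGION=<region>
python examples/sklearn_elasticnet_wine/train.py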
I'm trying to run the Bitnami parse-server Docker images locally (for testing) with the docker-compose configuration created by Bitnami (link).
I ran the commands provided on their page on Ubuntu 20.04:
$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-parse/master/docker-compose.yml > docker-compose.yml
$ docker-compose up -d
The dashboard runs fine in the browser at http://localhost/login, but after entering the user and password the browser starts loading and ends up with a blank white screen.
(screenshot: browser console errors)
Here's the docker-compose code:
version: '2'
services:
  mongodb:
    image: docker.io/bitnami/mongodb:4.2
    volumes:
      - 'mongodb_data:/bitnami/mongodb'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MONGODB_USERNAME=bn_parse
      - MONGODB_DATABASE=bitnami_parse
      - MONGODB_PASSWORD=bitnami123
  parse:
    image: docker.io/bitnami/parse:4
    ports:
      - '1337:1337'
    volumes:
      - 'parse_data:/bitnami/parse'
    depends_on:
      - mongodb
    environment:
      - PARSE_DATABASE_HOST=mongodb
      - PARSE_DATABASE_PORT_NUMBER=27017
      - PARSE_DATABASE_USER=bn_parse
      - PARSE_DATABASE_NAME=bitnami_parse
      - PARSE_DATABASE_PASSWORD=bitnami123
  parse-dashboard:
    image: docker.io/bitnami/parse-dashboard:3
    ports:
      - '80:4040'
    volumes:
      - 'parse_dashboard_data:/bitnami'
    depends_on:
      - parse
volumes:
  mongodb_data:
    driver: local
  parse_data:
    driver: local
  parse_dashboard_data:
    driver: local
What am I missing here?
The parse-dashboard knows the parse backend through its docker-compose hostname parse.
So after login, the parse-dashboard (UI) will generate requests to that host, http://parse:1337/parse/serverInfo, based on the default parse backend hostname. More details about this here.
The problem is that your browser (host computer) doesn't know how to resolve the IP for the hostname parse. Hence the name resolution errors.
As a workaround, you can add an entry to your hosts file so that the parse hostname resolves to 127.0.0.1.
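For example (sketch; the file is /etc/hosts on Linux/macOS and C:\Windows\System32\drivers\etc\hosts on Windows; this works because port 1337 is published to the host):

# hosts file on the machine running the browser
127.0.0.1   parse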
This post describes it well: Linked docker-compose containers making http requests
I'm trying to set up a development environment using PhpStorm and Docker on a Windows 10 machine.
When a remote PHP interpreter is selected, PhpStorm stops responding with the message "Checking PHP Installation":
docker-compose.yaml
version: '3'
networks:
  symfony:
services:
  # nginx
  nginx-ea:
    image: nginx:stable-alpine
    container_name: nginx-ea
    ports:
      - "8081:80"
    volumes:
      - ./app:/var/www/project
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php74-fpm
      - mysql8-ea
    networks:
      - symfony

  # php74
  php74-fpm:
    build:
      context: .
      dockerfile: ./php/Dockerfile
    container_name: php74-fpm
    ports:
      - "9001:9000"
    volumes:
      - ./app:/var/www/project
      - ./php/conf:/usr/local/etc/php/
    networks:
      - symfony

  php74-cli:
    # define the directory where the build should happen,
    # i.e. where the Dockerfile of the service is located
    # (all paths are relative to the location of docker-compose.yml)
    build:
      context: ./php-cli
    container_name: php74-cli
    # reserve a tty - otherwise the container shuts down immediately
    # corresponds to the "-i" flag
    tty: true
    # mount the app directory of the host to /var/www in the container
    # corresponds to the "-v" option
    volumes:
      - ./app:/var/www/project
    # connect to the network
    # corresponds to the "--network" option
    networks:
      - symfony

  # mysql 8
  mysql8-ea:
    image: mysql:8
    container_name: mysql8-ea
    ports:
      - "4309:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    restart: always # always restart unless stopped manually
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_PASSWORD: secret
    networks:
      - symfony

  # PhpMyAdmin
  phpmyadmin-ea:
    image: phpmyadmin/phpmyadmin:5.0.1
    container_name: phpmyadmin-ea
    restart: always
    environment:
      PMA_HOST: mysql8-ea
      PMA_USER: root
      PMA_PASSWORD: secret
    ports:
      - "8090:80"
    networks:
      - symfony
Docker Desktop Windows 10 settings
I have tried selecting both the php74-fpm container and the php74-cli container; as soon as the settings are applied in PhpStorm, it stops responding completely.
Any idea what is wrong here?
UPDATE
Including PhpStorm logs from system\log\idea.log:
# appears in logs when Remote PHP Interpreter settings applied
2020-11-27 09:14:00,859 [ 479670] DEBUG - php.config.phpInfo.PhpInfoUtil - Loaded helper: /opt/.phpstorm_helpers/phpinfo.php
2020-11-27 09:14:01,106 [ 479917] INFO - .CloudSilentLoggingHandlerImpl - Creating container...
2020-11-27 09:14:02,019 [ 480830] INFO - .CloudSilentLoggingHandlerImpl - Creating container...
Two Docker containers are created after the remote PHP interpreter settings are applied; however, they don't seem to be starting, and the logs inside the containers don't seem to say anything.
If I try starting the "phpstorm_helpers_PS-191.8026.56" container manually from Docker Desktop, it seems to start OK.
If I manually try to start the "festive_zhukovsky..." container, it doesn't start.
The logs inside the container print XML:
https://drive.google.com/file/d/1d5XbkJdHnc7vuN0V7heJdx3lBkmJfs3V/view?usp=sharing
UPDATE 2
If I remove the local PHP version which comes with the XAMPP package in PhpStorm, the window on the right shows where PhpStorm is hanging and becomes unresponsive:
UPDATE 3
According to this article, https://ollyxar.com/blog/docker-phpstorm-windows,
Docker should have the Shared Drives configuration enabled.
However, I don't seem to have this option in the Docker Desktop settings:
Can this be a problem?