How to set up MailDev persistent email storage with Docker

I'm setting up MailDev for my project using Docker Compose.
It works correctly and I can send email to MailDev, but I'm facing an issue when configuring a directory for persisting mails. I tried to set it up following the user guide, but it doesn't work: emails are cleared after restarting Docker.
This is the MailDev GitHub repo: https://github.com/maildev/maildev
MailDev Docker image: https://hub.docker.com/r/maildev/maildev
This is my docker-compose.yml file; --mail-directory doesn't seem to work:
version: '3.8'
services:
  maildev:
    hostname: maildev
    command: bin/maildev --web 80 --smtp 25 --mail-directory /home/maildev/data
    volumes:
      - ./var/data/maildev:/home/maildev/data
    ports:
      - "1080:80"
    networks:
      - my-network

This is not yet possible with the latest Docker image (v1.1.0). This version of MailDev does not support the --mail-directory flag and uses a temporary directory of its own choosing instead.
The contents of this temporary directory are cleared each time MailDev starts, so creating a volume for this directory will not help.
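You can check whether the maildev binary inside your image supports the flag at all by inspecting its help output (a sketch; it assumes bin/maildev resolves from the image's working directory, as your command: line above suggests, and overrides the image entrypoint):

# print maildev's CLI help and look for the flag (empty output means unsupported)
docker run --rm --entrypoint bin/maildev maildev/maildev --help | grep -i mail-directory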

I was able to achieve this with the following configuration:
mailer:
  image: maildev/maildev
  volumes:
    - /home/maildev:/home/maildev:rw
  environment:
    - MAILDEV_MAIL_DIRECTORY=/home/maildev
  networks:
    - default
The SMTP port will be 1025 and the web UI will be on port 1080.
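To verify that persistence works (a sketch; it assumes you also publish ports on the mailer service, e.g. "1025:1025" and "1080:1080", and have Python 3 on the host):

# send a test mail through MailDev's SMTP port
python3 -c "import smtplib; s = smtplib.SMTP('localhost', 1025); s.sendmail('from@example.com', ['to@example.com'], 'Subject: test\r\n\r\nhello'); s.quit()"
# restart the stack; the mail files on the host should survive
docker-compose restart mailer
ls /home/maildev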
Hope this helps!

Related

Sending API requests between two Docker containers

I have a DDEV environment for Magento 2 running locally on macOS (Ventura):
https://ddev.readthedocs.io/en/stable/users/quickstart/#magento-2
For testing purposes I included NiFi via a docker-compose file inside my DDEV project, at .ddev/docker-compose.nifi.yaml.
Below you can see the docker-compose file, which is really minimal at this point. NiFi works as expected, because I can log in etc., although it is not persistent yet, but that's a different problem.
version: '3'
services:
  nifi:
    image: apache/nifi:latest
    container_name: ddev-${DDEV_SITENAME}-nifi
    ports:
      # HTTP
      - "8080:8080"
      # HTTPS
      - "8443:8443"
    volumes:
      # - ./nifi/database_repository:/opt/nifi/nifi-current/database_repository
      # - ./nifi/flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      # - ./nifi/content_repository:/opt/nifi/nifi-current/content_repository
      # - ./nifi/provenance_repository:/opt/nifi/nifi-current/provenance_repository
      # - ./nifi/state:/opt/nifi/nifi-current/state
      # - ./nifi/logs:/opt/nifi/nifi-current/logs
      # - ./nifi/conf/login-identity-providers.xml:/opt/nifi/nifi-current/conf/login-identity-providers.xml
      - ".:/mnt/ddev_config"
All I want to do is send a POST request from NiFi to my Magento 2 module.
I have tried several IPs now, which I got from docker inspect ddev-ddev-magento2-web, but I always receive "Connection refused".
My output from docker network ls:
NETWORK ID     NAME                         DRIVER    SCOPE
95bea4031396   bridge                       bridge    local
692b58ca294e   ddev-ddev-magento2_default   bridge    local
46be47991abe   ddev_default                 bridge    local
7e19ae1626f1   host                         host      local
f8f4f1aeef04   nifi_docker_default          bridge    local
dbdba30546d7   nifi_docker_mynetwork        bridge    local
ca12e667b773   none                         null      local
My Magento 2 module is working properly, because sending requests to it from Postman works fine.
You don't want most of what you have. Remove the ports statement, which you shouldn't need at all; if you need anything, it would be an expose, but I doubt you even need that in this case.
You'll want to look at the docs:
Additional services and add-ons
Additional services with docker-compose
Then create a .ddev/docker-compose.nifi.yaml with something like:
services:
  nifi:
    image: apache/nifi:latest
    container_name: ddev-${DDEV_SITENAME}-nifi
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: ${DDEV_APPROOT}
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8080:8080
      - HTTPS_EXPOSE=9999:8080
    volumes:
      - ".:/mnt/ddev_config"
From inside your nifi container, the hostname of the "web" container will simply be web, e.g. curl http://web:8080 (adjust the port to whatever the target service listens on).
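For example, to test the connection from inside the NiFi container (a sketch; it assumes curl is available in the image, the container name follows the container_name pattern above, and /rest/V1/your-endpoint is a hypothetical Magento route):

# DDEV's web container serves HTTP on port 80 inside the Docker network
docker exec -it ddev-ddev-magento2-nifi \
  curl -v -X POST http://web/rest/V1/your-endpoint \
       -H "Content-Type: application/json" \
       -d '{"key": "value"}'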
I don't know what you're trying to accomplish, but this may get you started. Feel free to come over to the DDEV Discord channel for more interactive help.

Can't log MLflow artifacts to S3 with docker-based tracking server

I'm trying to set up a simple MLflow tracking server with Docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file. When I try to run the sklearn_elasticnet_wine example from the mlflow repo (https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine) using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials. I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step.
I'm not sure why this isn't working. I've seen examples of how to set up a tracking server online, including https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some use MinIO, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using MinIO? Eventually I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.
version: '3'
services:
  app:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0
      --port 5001
  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80"
    networks:
      - internal
    depends_on:
      - app
networks:
  internal:
    driver: bridge
Finally figured this out. I didn't realize that the client also needs access to the AWS credentials, because artifacts are uploaded to S3 directly by the client rather than by the tracking server.
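Concretely, the fix is to make the credentials available wherever the training script runs, not just in the server container (a sketch; train.py stands in for whatever script logs the run):

# on the client machine that logs runs, export the same credentials the server uses
export MLFLOW_TRACKING_URI=http://localhost:5005
export AWS_ACCESS_KEY_ID=<your key>
export AWS_SECRET_ACCESS_KEY=<your secret>
export AWS_DEFAULT_REGION=<your region>
python train.py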

Bitnami Parse Server with docker-compose gives blank screen after dashboard login

I'm trying to run the Bitnami parse-server Docker images with the docker-compose configuration created by Bitnami (link) locally (for testing).
I ran the commands provided on their page on Ubuntu 20.04:
$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-parse/master/docker-compose.yml > docker-compose.yml
$ docker-compose up -d
The dashboard runs fine in the browser at http://localhost/login, but after entering the user and password the browser starts loading and then ends up with a blank white screen.
[screenshot: browser console errors]
Here's the docker-compose code:
version: '2'
services:
  mongodb:
    image: docker.io/bitnami/mongodb:4.2
    volumes:
      - 'mongodb_data:/bitnami/mongodb'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MONGODB_USERNAME=bn_parse
      - MONGODB_DATABASE=bitnami_parse
      - MONGODB_PASSWORD=bitnami123
  parse:
    image: docker.io/bitnami/parse:4
    ports:
      - '1337:1337'
    volumes:
      - 'parse_data:/bitnami/parse'
    depends_on:
      - mongodb
    environment:
      - PARSE_DATABASE_HOST=mongodb
      - PARSE_DATABASE_PORT_NUMBER=27017
      - PARSE_DATABASE_USER=bn_parse
      - PARSE_DATABASE_NAME=bitnami_parse
      - PARSE_DATABASE_PASSWORD=bitnami123
  parse-dashboard:
    image: docker.io/bitnami/parse-dashboard:3
    ports:
      - '80:4040'
    volumes:
      - 'parse_dashboard_data:/bitnami'
    depends_on:
      - parse
volumes:
  mongodb_data:
    driver: local
  parse_data:
    driver: local
  parse_dashboard_data:
    driver: local
What am I missing here?
The parse-dashboard knows the parse backend through its docker-compose hostname, parse.
So after login, the parse-dashboard (UI) will send requests to that host, http://parse:1337/parse/serverInfo, based on the default parse backend hostname. More details about this here.
The problem is that your browser (host computer) doesn't know how to resolve an IP for the hostname parse, hence the name resolution errors.
As a workaround, you can add an entry to your hosts file to resolve the parse hostname to 127.0.0.1.
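On Linux or macOS that is a one-liner (a sketch; on Windows, edit C:\Windows\System32\drivers\etc\hosts instead):

# let the browser resolve the compose service name "parse" to localhost
echo "127.0.0.1 parse" | sudo tee -a /etc/hosts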
This post describes it well: Linked docker-compose containers making http requests

Docker for Symfony and nginx without mounting the source code

We have a development and a production system that use Symfony 5 + nginx + MySQL services running in a Docker environment.
At the moment the nginx webserver runs in the same container as the Symfony service, because of this issue:
In our development environment we are able to mount the Symfony source code into the Docker container (via a docker-compose file).
In our production environment we need to deliver containers that contain all the source code, because we must not put our source code on the server. So there is no folder on the server from which we can mount our source code.
Unfortunately nginx needs the source code as well to make its routing decisions, so we decided to put the Symfony and nginx services together in one container.
Now we want to clean this up so that every service runs in its own container:
version: '3.5'
services:
  php:
    image: docker_sandbox
    build: ../.
    ...
    volumes:
      - docker_sandbox_src:/var/www/docker_sandbox  # <== VOLUME
    networks:
      - docker_sandbox_net
    ...
  nginx:
    image: nginx:1.19.0-alpine
    ...
    volumes:
      - ./nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
      - docker_sandbox_src:/var/www/docker_sandbox  # <== VOLUME
    ...
    networks:
      - docker_sandbox_net
    depends_on:
      - php
  mysql:
    ...
volumes:
  docker_sandbox_src:
networks:
  docker_sandbox_net:
    driver: bridge
One possible solution is to use a named volume that connects the nginx service with the Symfony service. The problem with that is that on an update of our Symfony image the volume keeps the old content, so nothing is updated until we manually delete the volume.
Is there a better way to handle this issue? Maybe a volume that overwrites its content when a new image is deployed, or an nginx config that does not require the Symfony source code in nginx's own container.
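One pattern worth trying (my sketch, not from this thread): have the php image refresh the shared volume from a pristine copy baked into the image at container start, so each deployment overwrites the volume's stale contents. Assuming the image keeps that copy under /opt/app (a hypothetical path) and uses an entrypoint script:

# entrypoint.sh fragment in the php image
rm -rf /var/www/docker_sandbox/*           # drop the previous image's code from the volume
cp -a /opt/app/. /var/www/docker_sandbox/  # repopulate it from this image's baked-in copy
exec "$@"                                  # hand off to the container's normal command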

Symfony Swiftmailer with Docker MailCatcher

I have a Symfony app (v4.3) running in a Docker setup. The setup also has a container for MailCatcher. No matter how I set MAILER_URL in the .env file, no mail shows up in MailCatcher. If I just call the regular PHP mail() function, the mail pops up in MailCatcher. The setup has been used for other projects as well and it worked without a flaw.
Only with Symfony Swiftmailer do the mails not arrive.
My docker-compose file looks like this:
version: '3'
services:
  #######################################
  # PHP application Docker container
  #######################################
  app:
    container_name: project_app
    build:
      context: docker/web
    networks:
      - default
    volumes:
      - ../project:/project:cached
      - ./etc/httpd/vhost.d:/opt/docker/etc/httpd/vhost.d
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - docker/web/conf/environment.yml
      - docker/web/conf/environment.development.yml
    environment:
      - VIRTUAL_HOST=.project.docker
      - POSTFIX_RELAYHOST=[mail]:1025
  #######################################
  # Mailcatcher
  #######################################
  mail:
    image: schickling/mailcatcher
    container_name: project_mail
    networks:
      - default
    environment:
      - VIRTUAL_HOST=mail.project.docker
      - VIRTUAL_PORT=1080
I played around with the MAILER_URL setting for hours, but everything has failed so far.
I hope somebody here has an idea how to set the MAILER_URL.
Thank you
According to your docker-compose.yml, your MAILER_URL should be:
smtp://project_mail:1025
Double-check that it has the correct value.
Then you can view the container logs using docker-compose logs -f mail to see if your messages reach the service at all. It will be something like:
==> SMTP: Received message from '<user@example.com>' (619 bytes)
Second: try restarting your containers. Sometimes changes in .env* files are not applied instantly.
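For reference, in a default Symfony 4.3 setup Swiftmailer picks MAILER_URL up from .env through config/packages/swiftmailer.yaml (a sketch; the compose service name mail would also resolve on the compose network, just like the container name project_mail):

# .env
MAILER_URL=smtp://project_mail:1025

# config/packages/swiftmailer.yaml (Symfony recipe default)
swiftmailer:
    url: '%env(MAILER_URL)%'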
