I've configured NGINX on a cloud instance as a reverse proxy to a Docker container. The app sends emails using Nodemailer with Gmail SMTP, but this doesn't work inside the Docker container.
My guess:
Missing port configuration
A mail proxy or something similar is needed...
I tried exposing ports 587 and 465 in the Dockerfile with no success (not sure if that's correct or if I need something else).
Other considerations:
The container runs its own server using Koa.
The cloud instance will host more containers that may send mail too. Each with their own domain and reverse proxy configurations.
Your help is really appreciated!
UPDATE
Running the app in the container: Gmail is giving a 534 response code (invalid login error)
Still working fine running the app outside the container.
Gmail authentication was giving login errors running the app in the container.
The correct way is to configure it through OAuth2 and it works flawlessly.
Here's the tutorial I found that helped me out: https://alexb72.medium.com/how-to-send-emails-using-a-nodemailer-gmail-and-oauth2-fe19d66451f9
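For anyone landing here later, here is roughly what the OAuth2 transport ends up looking like. This is only a sketch: the credential values are placeholders obtained from the Google Cloud console / OAuth2 steps the tutorial walks through.

// Nodemailer transport authenticating to Gmail with OAuth2 instead of a password
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  service: 'gmail',
  auth: {
    type: 'OAuth2',
    user: 'me@gmail.com',                          // the Gmail account that sends
    clientId: process.env.GMAIL_CLIENT_ID,         // placeholders from the Google Cloud console
    clientSecret: process.env.GMAIL_CLIENT_SECRET,
    refreshToken: process.env.GMAIL_REFRESH_TOKEN,
  },
});

// Send a test message; sendMail returns a promise when no callback is given
transporter.sendMail({
  from: 'me@gmail.com',
  to: 'someone@example.com',
  subject: 'Hello from the container',
  text: 'It works!',
}).then(info => console.log(info.response))
  .catch(console.error);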
Thanks timsmelik for your help.
Related
We have a server where we run different services in Docker containers. The server is on the corporate network and does not have direct Internet access; we use a proxy to reach the Internet. We build the images locally, where we do have Internet access through the proxy, and then docker pull them from our repository onto the server.
The actual question is: how do I launch two new services with docker compose so that they have Internet access through the proxy?
Simply trying to connect to the proxy from the application code did not help; the connection failed.
Passing the proxy connection string as an environment variable on startup didn't help either.
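For what it's worth, the usual way to hand a corporate proxy to compose-managed services is through the standard proxy environment variables. A minimal sketch, with a placeholder proxy address and image name:

# docker-compose.yml (illustrative; proxy URL and image are placeholders)
version: "3"
services:
  service-one:
    image: registry.local/service-one:latest
    environment:
      HTTP_PROXY: "http://proxy.corp.local:3128"
      HTTPS_PROXY: "http://proxy.corp.local:3128"
      NO_PROXY: "localhost,127.0.0.1,.corp.local"   # hosts that must bypass the proxy

Docker can also inject these variables into every container automatically if you add a proxies section to ~/.docker/config.json on the host. Either way, the application inside the container still has to honour the variables, which not every HTTP library does.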
I have a demo application running perfectly in my local environment. However, I would like to run the same application remotely by giving it an HTTP endpoint, so that I can test its performance.
How do I give an HTTP endpoint to any multi-container Docker application?
The following is the Github repository link for the demo application
https://github.com/LonareAman/BankCQRS.git
Use docker-compose and define the containers you need.
One of your containers should be a web server such as nginx, with a host port bound to it (for example 80:80).
Then make nginx a reverse proxy to your other containers, as sketched below.
You can find some samples in https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/
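A minimal compose sketch of that layout, assuming the app listens on port 8000 inside its container (service and file names below are placeholders, not taken from the linked repository):

# docker-compose.yml
version: "3"
services:
  web:
    build: .                 # the application image
    expose:
      - "8000"               # internal port only; not published on the host
  nginx:
    image: nginx:stable
    ports:
      - "80:80"              # the public HTTP endpoint
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - web

And the matching nginx.conf would contain roughly:

server {
    listen 80;
    location / {
        proxy_pass http://web:8000;   # "web" resolves over the compose network
        proxy_set_header Host $host;
    }
}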
I'm using GKE for deployments.
Edit: I need to access a customer's API endpoint which is only accessible when using their VPN. So far I can run a container which connects to this VPN and I can cURL the endpoint successfully.
For the above, I have configured a Debian docker image which successfully connects to a VPN (specifically, using Kerio Control VPN) when deployed. Whenever I make a net request from this container, it runs through the VPN connection, as expected.
I have another image which runs a .NET Core program which makes necessary HTTP requests.
From this guide I know it is possible to run a container's traffic through another using pure docker. Specifically using the --net=container:something option (trimmed the example):
docker run \
  --name=jackett \
  --net=container:vpncontainer \
  linuxserver/jackett
However, I have to use Kubernetes for this deployment so I think it would be good to use a 2-container pod. I want to keep the VPN connection logic and the program separated.
How can I achieve this?
Containers in a pod share network resources. If you run the VPN client in one container, then all containers in that pod will have network access via the VPN.
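A rough sketch of such a two-container pod (image names are placeholders; VPN clients usually also need NET_ADMIN and, depending on the client, access to /dev/net/tun):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
  - name: vpn
    image: registry.example.com/kerio-vpn:latest    # placeholder: the Debian VPN-client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]                          # typically required to create the tunnel
  - name: app
    image: registry.example.com/dotnet-app:latest   # placeholder: the .NET Core program

Because both containers share the pod's network namespace, the app's outbound requests follow whatever routes the VPN container sets up, and the VPN logic stays in a separate image from the program.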
Based on your comment, I think I can suggest two methods.
Private GKE Cluster with CloudNAT
In this setup, you should use a private GKE cluster with Cloud NAT for external communication. You would need to use a manually assigned external IP.
This scenario uses a specific external IP for the VPN connection, but your customer would need to whitelist access for that IP.
Site-to-site VPN using Cloud VPN
You can configure your VPN to forward packets to your cluster. For details, you should check these other Stack Overflow threads:
Google Container Engine and VPN
Why can't I access my Kubernetes service via its IP?
I'm using a similar approach. I have a Django app whose static files need to be served by nginx, and I want the app to be accessible through a VPN, for which I'm using OpenVPN.
Both the nginx container and the Django container are in the same pod. My limited understanding is that it should be enough to run the VPN in the background in the nginx container, and that nginx should be able to route requests to the backend over localhost because they're in the same pod.
But this doesn't seem to be working. I get a 504 Time-Out in the browser and the nginx logs confirm that the upstream timed out. Have you done anything extra to make this work in your case?
I'm looking for a simple way to programmatically send emails from a Linode Ubuntu server (not bulk or spam, just simple IoT-type notifications). I have a dockerized Postfix/Dovecot system up and running, but I don't know how to use it from outside the container. I've looked into sendmail, but that seems like duplication since I already have a configured SMTP server. My question is: what can I install on my Ubuntu server that will let me send simple emails from the command line (or a script) using my existing SMTP server in the Docker container?
This is similar to having a Jenkins container which must send emails, as described here:
For a containerized Jenkins system, the mail server can also be configured on the same Manage Jenkins page, in the E-mail Notification section.
The only difference is the IP/hostname provided in the SMTP server option. Instead of providing the known SMTP server's IP or hostname, one should use the IP of docker0.
On a corporate network, you may have to use an SMTP relay server instead. For those cases, you can configure SMTP communication by setting up Postfix.
After installing it, update /etc/postfix/main.cf with the correct relay information: myhostname, myorigin, mydestination, relayhost, alias_maps, alias_database.
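For illustration, a relay-only main.cf might look something like this (every value is a placeholder for your own host and relay):

# /etc/postfix/main.cf - illustrative relay settings
myhostname = jenkins.example.com
myorigin = $myhostname
mydestination = localhost
relayhost = [smtp-relay.example.com]:587
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases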
But:
There are two changes that need to be made to Postfix to expose it to the Docker containers on the host.
Exposing Postfix to the Docker network, that is, Postfix must be configured to bind to the Docker network as well as localhost.
Accepting incoming connections from any Docker container.
The Docker bridge (docker0) acts as a bridge between your Ethernet interface and the Docker containers so that data can go back and forth.
We achieve the first requirement by adding the IP of docker0 to inet_interfaces.
For the second requirement, the whole Docker network as well as localhost should be added to mynetworks.
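Assuming docker0 has the default address 172.17.0.1 and the default subnet 172.17.0.0/16, the two additions to main.cf would look roughly like this (adjust to your actual bridge settings):

# /etc/postfix/main.cf - expose Postfix to containers on the docker0 bridge
inet_interfaces = 127.0.0.1, 172.17.0.1
mynetworks = 127.0.0.0/8, 172.17.0.0/16

Reload Postfix afterwards (postfix reload) so the new bindings take effect.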
For this problem, the easiest solution I have found is Nodemailer, since my application that needs to send the emails is a Node.js app; it connects to the SMTP server just as an email client would.
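As a sketch of that approach, assuming the Postfix container publishes SMTP on localhost:587 and with placeholder credentials:

// Node.js script that sends a notification through the local Postfix container
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: 'localhost',        // the dockerized Postfix, published on the host
  port: 587,
  secure: false,            // STARTTLS is negotiated after connecting
  auth: { user: 'notifier@example.com', pass: 'app-password' },   // placeholders
});

transporter.sendMail({
  from: 'notifier@example.com',
  to: 'admin@example.com',
  subject: 'IoT notification',
  text: 'Sensor threshold exceeded.',
}).then(info => console.log('sent:', info.messageId))
  .catch(err => console.error(err));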
I have a question about JHipster working combined with Docker and localhost. I have started the registry and the UAA apps using docker compose, and everything is fine. Then I started one microservice and the gateway locally. Both of them show up successfully in the registry's instances view. The problem is that when the gateway tries to connect to the UAA (uaa/oauth/token), it fails (I/O error on POST request for http://uaa/oauth/token). I have tried mapping uaa to localhost in /etc/hosts, but it did not help. Does anybody have an idea how to deal with this issue? Thanks in advance.
The UAA server will have a port as well as a host name. Both will need to be specified. To specify the port you will need to change your application.properties.
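As a hedged illustration only: a JHipster UAA typically listens on port 9999, and the property key below is hypothetical, so use whichever key your gateway actually reads for the UAA location.

# /etc/hosts - make the bare hostname "uaa" resolve locally
127.0.0.1   uaa

# application.properties - illustrative only; the point is that the URL must carry the port
uaa.base-url=http://uaa:9999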