I'm looking for a simple way to programmatically send emails from a Linode Ubuntu server (nothing bulk or spammy, just simple IoT-type notifications). I have a dockerized Postfix/Dovecot system up and running, but I don't know how to use it from outside the container. I've looked into sendmail, but that seems like duplication since I already have a configured SMTP server. My question: what can I install on my Ubuntu server that will let me send simple emails from the command line (or a script) using the existing SMTP server in my Docker container?
This is similar to having a Jenkins container which must send emails, as described here:
For a containerized Jenkins system, the mail server can also be configured on the same Manage Jenkins page, in the E-mail Notification section.
The only difference is the IP/hostname provided in the SMTP server option. Instead of providing the known SMTP server's IP and hostname, you should use the IP of docker0.
On a corporate network, you may have to use an SMTP relay server instead. For those cases, you can configure SMTP communication by setting up Postfix.
After installing it, update /etc/postfix/main.cf with the correct relay information: myhostname, myorigin, mydestination, relayhost, alias_maps, and alias_database.
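For illustration, a minimal relay block in main.cf might look like this (the hostnames are placeholders, not values from the original setup):
myhostname = jenkins-host.example.com
myorigin = $myhostname
mydestination = $myhostname, localhost
relayhost = [smtp-relay.example.com]:25
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases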
But:
There are two changes that need to be made to Postfix to expose it to the Docker containers on a host:
Exposing Postfix to the Docker network; that is, Postfix must be configured to bind to localhost as well as the Docker network.
Accepting all incoming connections that come from any Docker container.
The Docker bridge (docker0) acts as a bridge between your Ethernet port and your Docker containers, so that data can go back and forth.
We achieve the first requirement by adding the IP of docker0 to inet_interfaces.
For the second requirement, the whole Docker network as well as localhost should be added to mynetworks.
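As a sketch, assuming docker0 has the default address 172.17.0.1 on the 172.17.0.0/16 network (check yours with ip addr show docker0), the relevant main.cf lines would be:
inet_interfaces = 127.0.0.1, 172.17.0.1
mynetworks = 127.0.0.0/8, 172.17.0.0/16
Restart Postfix afterwards; changes to inet_interfaces require a full restart, not just a reload.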
For this problem, the easiest solution I have found is nodemailer, as the application that needs to send the emails is a node.js application; I connect to the SMTP server just as an email client would.
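A minimal nodemailer sketch, assuming the SMTP server listens on the docker0 address (the host, port, and credentials below are placeholders, not values from my actual setup):
const nodemailer = require('nodemailer');

// Point this at the containerized SMTP server; 172.17.0.1 assumes the default docker0 bridge
const transporter = nodemailer.createTransport({
  host: '172.17.0.1',
  port: 587,
  secure: false, // STARTTLS is negotiated on port 587 when the server offers it
  auth: { user: 'notifier@example.com', pass: 'secret' }
});

// sendMail returns a promise when no callback is given
transporter.sendMail({
  from: 'notifier@example.com',
  to: 'me@example.com',
  subject: 'IoT notification',
  text: 'Sensor reading out of range'
}).then(info => console.log('sent:', info.messageId));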
Related
I have a client running in a Docker container that subscribes to an MQTT broker and then writes the data into a database.
To connect to the MQTT broker I have to set up port forwarding.
While developing the client on my local machine the following worked fine:
ssh -L <local-port, e.g. 9000>:localhost:<mqtt-port, e.g. 1883> <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create a Docker network "testNetwork"
Run an ssh_tunnel container within "testNetwork" and implement the port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker via the bridge network (i.e. ssh_tunnel.testNetwork:<hostport>).
(I want two separate containers for this because the IP address will have to be altered quite often and I don't want to rebuild the client container every time.)
But all of my attempts have failed so far. The forwarding itself seems to work (I can access a shell on the server from inside the ssh container), but I haven't found a way to actually subscribe to the MQTT broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours by now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the gateway forwarding option (-g) to allow remote hosts (the database client) to connect to the forwarded ports.
ssh -g -L <hostport>:localhost:<mqtt-port on the remote> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
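For reference, the whole setup can be sketched like this (the image and host names are hypothetical):
docker network create testNetwork
docker run -d --name ssh_tunnel --network testNetwork my-ssh-image ssh -N -g -L 9000:localhost:1883 user@broker-host
docker run -d --name database_client --network testNetwork my-client-image
Only the ssh_tunnel container has to change when the broker's IP does; the client keeps subscribing to ssh_tunnel:9000.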
In order to debug and set up a pair of Docker stacks (one a client and the other a server, each with its own private services) using docker compose, I'm running them locally to make sure they're functioning correctly.
They will eventually communicate across the internet, with an nginx server on the server side acting as a reverse proxy. But for now, I'm pointing the client at the server container's 172.19.0.3:1234 address.
I'm able to curl/ping both the client container and the server container from the host machine, but running an interactive session and trying to curl the server's 172.19.0.3:1234 address just times out.
I feel the 172.x address is being used incorrectly here. Is there some obvious issue with what I've described so far? What is the better approach for what I'm trying to do?
Seems that after doing some searching, I am in a similar situation to this question: Communicating between Docker containers in different networks on the same host.
I've decided to use docker network connect to connect the client to the server's network for my purposes.
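In practice this comes down to a couple of commands (the network and container names are illustrative; list yours with docker network ls):
docker network connect server_default client_container
docker exec -it client_container curl http://server:1234/
Once connected, the client can reach the server by its compose service name instead of a hard-coded 172.x address.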
I am trying to use Docker behind a corporate firewall.
I would like to force Docker to use the system proxy, but this option is not available. How can I make Docker use the system proxy?
I've written a blog post about using the weird DummyDesperatePoitras virtual switch as an anchor for CNTLM, and that resolves some of the problems I mentioned here (having to change the proxy address for Docker every time your IP changes, among other things):
http://mandie.net/2017/12/10/docker-for-windows-behind-a-corporate-web-proxy-tips-and-tricks/
As of November 2017, this feature was still not implemented in Docker for Windows: https://github.com/docker/for-win/issues/589
The best solution I've found is CNTLM, but I'm not delighted with it, because:
1) CNTLM has not been updated in 5 years
2) You have to set the proxy IP in the Docker GUI, making it rather automation-resistant. The Docker for Windows GUI reads the proxy settings from the MobyLinux VM, not from the Windows registry, a config file or Windows environment variables. Setting HTTP_PROXY and HTTPS_PROXY in Windows has absolutely no effect on Docker. I've not found any way of setting the proxy value programmatically; the MobyLinux VM doesn't accept ssh connections. If anyone ever finds a way to do this from a command line or script, I'd love to know.
3) Setting the proxy IP to 127.0.0.1 won't work, because that makes the virtual machine Docker actually runs in try its own loopback interface, not the one on the host PC running CNTLM. I have also tried the DockerNAT interface IP, 10.0.75.1, with no success.
4) This means that the proxy IP needs to be the current IP address of your active external network interface. If you move around buildings a lot, you need to check this every time you want to use Docker.
Set CNTLM to listen on 0.0.0.0:3128, not just on 3128 or 127.0.0.1:3128. This will save you the trouble of updating the IP address every time your PC gets a new one. With only a port number, traffic from the VM running Docker will never be "heard".
Calculate the NTLMv2 hash and store that in the config file instead of your username and password. This will be different for every PC and user account, so don't share your unredacted config file with another PC unless you want to get locked out. You will need to update this stored hash when you next change your Windows password.
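If it helps, cntlm can generate the hashes itself; a sketch with placeholder account details (it prompts for your password and prints PassLM/PassNT/PassNTLMv2 lines, of which the PassNTLMv2 one goes into cntlm.ini together with Auth NTLMv2):
cntlm -u your_username -d YOUR_DOMAIN -H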
Restart the cntlm Windows service after any changes to its config file.
Run ipconfig in cmd.exe or PowerShell to find your current IP address. If you're on a corporate VPN, use the IP address of the Wi-Fi or Ethernet adapter, not the VPN's.
Type http://<ip-from-ipconfig>:3128/ into the "Web Server (HTTP)" box. Make sure the "Use same for both" checkbox is checked.
Using CNTLM automates working behind a proxy. It lets us specify a single IP address everywhere, without any credentials, so security is better and whenever we change the password we only have to do it in one place; we can also specify URLs that should not be proxied.
Since Docker version 18.03, a special DNS name is available: host.docker.internal. It allows containers to connect to the host machine. Now we set up our CNTLM proxy in cntlm.ini to listen on 0.0.0.0:3128:
Listen 0.0.0.0:3128
Then, in the Docker proxy settings, we can use the address host.docker.internal:3128, which will be translated to the appropriate current local address of our machine.
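As a scriptable alternative to the GUI for the proxy inside containers, the Docker client also reads a proxies section from ~/.docker/config.json; a minimal sketch, assuming the CNTLM setup above:
{
  "proxies": {
    "default": {
      "httpProxy": "http://host.docker.internal:3128",
      "httpsProxy": "http://host.docker.internal:3128"
    }
  }
}
Note this sets the proxy environment inside containers you start, not for the daemon's own image pulls, so the proxy in the Docker settings may still be needed for docker pull.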
You can set up two environment variables, http_proxy and https_proxy.
http_proxy takes the value http://username:password@proxyIp:proxyPort
For example, in my case it was:
http://venkat_krish:password@something.ad.somthing.com:80
You can use the same value for the https proxy.
Note: if there are any special characters (other than _ and .) in the username or password, you have to URL-encode them; see this link for URL encoding: https://grox.net/utils/encoding.html
For example, if your password is abc@123, it is written as abc%40123.
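Putting it together, the exports would look something like this (the credentials are placeholders, with the @ in the password encoded as %40):
export http_proxy=http://venkat_krish:abc%40123@something.ad.somthing.com:80
export https_proxy=$http_proxy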
I have some Docker containers running on Bluemix using private IP addresses. I would now like to set up a tunnel from my laptop (running Linux) to access the private network on Bluemix.
I first created a container running an SSH server. Using ssh -D I was able to set up a SOCKS5 proxy connection. This worked fine with Chrome, but not all applications support a SOCKS proxy.
(google-chrome --proxy-server=socks5://localhost:<tunnel port>)
So I tried to create a container with an OpenVPN server. Unfortunately this does not work on Bluemix, as the containers do not run privileged and thus cannot create a tun device.
Bluemix also has a VPN and a Secure Gateway service, which sound promising, but so far I could not figure out how to get them working.
Does anybody know whether it is possible to make the private Docker network available locally, and how to connect to it?
Generally speaking, containers should be used to implement services available to external applications (an API service, a runtime, a DBMS, something like that).
Accordingly, what you could achieve is a set of services available to you on different containers, plus a single container working as an SSH tunnel gateway: you connect your local environment to it over SSH and define a set of local and remote SSH port forwardings, with different policies according to the service/port and the IP of the service.
It should work for all the services, and you don't have to use a SOCKS proxy to forward requests to different hosts: using remote SSH forwarding, your SSH endpoint will redirect your requests to the right service inside the local/private LAN.
I found this guide, which describes how to work with local and remote port forwarding:
http://www.debianadmin.com/howto-use-ssh-local-and-remote-port-forwarding.html
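As a quick sketch of both directions, with hypothetical addresses and ports:
Local forwarding (reach the private service 10.0.0.5:5000 via localhost:5000):
ssh -L 5000:10.0.0.5:5000 user@gateway-public-ip
Remote forwarding (expose your local port 8080 on the gateway):
ssh -R 8080:localhost:8080 user@gateway-public-ip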
About the OpenVPN solution: as you already know, it is not possible to use software requiring privileged mode in containers, because it is not allowed on Bluemix for security reasons. If you want this kind of solution, I strongly suggest you run OpenVPN on a VM in the Bluemix UK region (still beta, but the architecture is expected to be the final one as soon as the VM service becomes GA).
I think these are the options available on Bluemix to achieve what you describe without using the VPN service suggested by @bill-wentworth.
Can the scalable group option on the Bluemix container infrastructure work with protocols other than HTTP?
I created a simple TCP server, deployed it in a single container on Bluemix, and it works fine. If I try to deploy it as a scalable group, I can only assign the HTTP port, and it no longer responds.
Is this a current limitation?
Thank you very much
If you are running a Cloud Foundry app, you can only get ports 80 and 443. However, if you run a container, you can bind to any port you want.
The containers environment on Bluemix actually supports a limited list of TCP (not UDP) ports, and 9080 is in that list, so your server should work.
If you need a port different from the ones in that list, you can always ask for it to be opened for your instance through Bluemix support.