I built my development environment using Docker containers, but currently all mail is sent through my company's SMTP server, which I cannot use for testing. Is there a way to create a container that replaces the real SMTP server? Do I need DNS?
Thanks.
Yes, just set up your SMTP server to run in a Docker container using a Dockerfile in the normal way. Then when you run the container, make sure you publish the SMTP port:
docker run -p 25:25 --name yourSmtpDockerContainer yourSmtpDockerImage
Now if the host machine the container is running on exposes port 25, then any SMTP traffic sent to that host's domain name will be forwarded to the container.
You may need to publish other SMTP ports too as required (e.g. 465 for SMTPS, 587 for submission) - cheers
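If you only need to capture outgoing mail for testing rather than actually deliver it, a ready-made test SMTP image such as MailHog is a simple option (a sketch - the published ports below are MailHog's defaults):

docker run -d -p 1025:1025 -p 8025:8025 mailhog/mailhog
# point your app's SMTP settings at <docker-host>:1025
# captured messages appear in the web UI at http://<docker-host>:8025

And no, you don't need DNS for this - your application connects to whatever SMTP host and port it is configured with, so the Docker host's IP or hostname is enough.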
I'm a bit confused. I'm trying to run both an HTTP server listening on port 8080 and an SSH server listening on port 22 inside a Docker container. I managed to accomplish the latter but, strangely, not the former.
Here is what I want to achieve and how I tried it:
I want to access services running inside a Docker container using the IP address assigned to the container:
ssh user@172.17.0.2
curl http://172.17.0.2:8080
Note: I know this is not how you would configure a real web server but I want the container to mimic an embedded device which runs both services and which I don't have available all the time. (So it's really just a local non-production thing with no security requirements).
I didn't expect integrating the SSH server to be easy, but to my surprise I just installed and started it and had to do nothing else to be able to connect to the machine via ssh (no EXPOSE 22 or --publish).
Now I wanted to access the container via HTTP on port 8080 and fiddled with --publish and EXPOSE but only managed to make the HTTP server available through localhost/127.0.0.1 on the host. So now I can access it via
curl http://127.0.0.1:8080/
but I want to access both services via the same IP address which is NOT localhost (e.g. the address the container got randomly assigned is totally OK for me).
Unfortunately
curl http://172.17.0.2:8080/
waits until it times out every time I tried it.
I tried docker run together with -p 8080, -p 127.0.0.1:8080:8080, -p 172.17.0.2:8080:8080 and many more combinations, with and without EXPOSE 8080 in the Dockerfile, but without success.
Why can I access the container via port 22 without having exposed anything?
And how do I make it accessible via the container's IP address?
Update: looks like I'm experiencing exactly what's described here.
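One likely culprit (a diagnostic sketch - the container name is made up): check what address the HTTP server binds to inside the container. sshd listens on 0.0.0.0 by default, which explains why port 22 is reachable at the container's IP without any EXPOSE or --publish - host-to-container traffic over the docker0 bridge doesn't need publishing at all. A server bound to 127.0.0.1 inside the container, however, is unreachable via 172.17.0.2:

docker exec mycontainer netstat -tln
# (or: docker exec mycontainer ss -tln)
# a listener shown as 127.0.0.1:8080 must be reconfigured
# to listen on 0.0.0.0:8080 to be reachable at the container's IP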
I'm looking for a simple way to programmatically send emails from a Linode Ubuntu server (not bulk or spam, just simple IoT-type notifications). I have a dockerized Postfix/Dovecot system up and running, but I don't know how to use it from outside the container. I've looked into sendmail, but that seems like duplication since I already have a configured SMTP server. My question is: what can I install on my Ubuntu server that will allow me to send simple emails from the command line (in a script) using the existing SMTP server in my Docker container?
This is similar to having a Jenkins container which must send emails, as described here:
For a containerized Jenkins system, the mail server can also be configured on the same Manage Jenkins page, in the E-mail Notification section.
The only difference is the IP/hostname provided in the SMTP server option. Instead of providing the known SMTP server's IP and hostname, you should use the IP of docker0.
On a corporate network, you may have to use an SMTP relay server instead. For those cases, you can configure SMTP communication by setting up Postfix.
After installing it, update /etc/postfix/main.cf with the correct relay information: myhostname, myorigin, mydestination, relayhost, alias_maps, alias_database.
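A minimal sketch of those settings (every value below is a placeholder, not a real relay):

# /etc/postfix/main.cf
myhostname = jenkins-host.example.com
myorigin = $myhostname
mydestination = $myhostname, localhost
relayhost = [smtp-relay.example.com]:587
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases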
But:
Two changes need to be made to Postfix to expose it to the Docker containers on a host:
Exposing Postfix to the Docker network, that is, configuring Postfix to bind to the Docker network interface as well as localhost.
Accepting all incoming connections that come from any Docker container.
The Docker bridge (docker0) acts as a bridge between your Ethernet port and the Docker containers so that data can go back and forth.
We achieve the first requirement by adding the IP of docker0 to inet_interfaces.
For the second requirement, the whole docker network as well as localhost should be added to mynetworks.
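Assuming the default docker0 address 172.17.0.1 and the default bridge subnet 172.17.0.0/16 (verify with ip addr show docker0), the two changes in /etc/postfix/main.cf look roughly like this:

inet_interfaces = 127.0.0.1, 172.17.0.1
mynetworks = 127.0.0.0/8, 172.17.0.0/16

Reload Postfix afterwards (e.g. systemctl reload postfix) for the settings to take effect.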
For this problem, the easiest solution I have found is nodemailer, since my application that needs to send the emails is a Node.js application; I connect to the SMTP server the way an email client would.
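For illustration, a minimal sketch of that setup, assuming the dockerized Postfix is reachable from the app at the docker0 address on port 25 (adjust host and port to your own mapping; all addresses below are made up):

// send.js - requires: npm install nodemailer
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: '172.17.0.1', // assumption: docker0 IP where Postfix listens
  port: 25,
  secure: false, // plain SMTP on the local bridge, no TLS
});

transporter.sendMail({
  from: 'server@example.com',
  to: 'me@example.com',
  subject: 'IoT notification',
  text: 'Sensor reading out of range.',
}).then(info => console.log('sent:', info.messageId))
  .catch(err => console.error(err));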
Problem statement:
On a standalone on-prem server, we are using nvidia-docker. Whenever users create a new environment, they can potentially open up any port to all traffic from the outside world (bypassing our client firewall) if they don't bind their ports to localhost.
So, how can we protect the server from such tunneling requests and instead make ports open only to localhost? Any thoughts/ideas?
You can't give untrusted users the direct ability to run docker commands. For instance, anyone who can run a Docker command can run
docker run --rm -v /:/host busybox cat /host/etc/shadow
and then run an offline password cracker to get your host's root password. Being able to bypass the firewall is probably the least of your concerns.
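That said, if the users are trusted enough to have Docker access at all and the goal is simply to keep published ports off the external interfaces, a publish can be bound explicitly to the loopback address (a sketch; image and ports are examples):

docker run -p 127.0.0.1:8080:8080 myimage
# the service is now reachable only as 127.0.0.1:8080 on the host,
# not from other machines

Note this is a convention, not an enforcement: anyone allowed to run docker run can simply omit the 127.0.0.1 prefix again.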
My app runs against MS SQL Server 2012 or above.
I tried to set up two containers - one for my app and one for the DB server.
But I couldn't use the DB container, because the SQL Server version in the available Windows image is not supported by my app.
So I want to connect to a remote DB server that I have, which is a different machine than the Docker host.
How do I get the container to ping the remote DB server?
From the container-
C:\Installation>ping my0134.company.net
Ping request could not find host my0134.company.net. Please check the name and try again.
** NOTE - I am using Docker on Windows
Maybe you could try adding <IP of my0134.company.net> my0134.company.net to the container's hosts file, so the name can be resolved to an IP address. You can also just use
docker run --add-host my0134.company.net:<IP of my0134.company.net> <image>
to spin up your container.
If IPv4 forwarding is enabled, then the container can connect to the DB server. There is no issue with that.
I have a server running inside a Docker container, listening on a UDP port, let's say 1234. This port is exposed in the Dockerfile.
Also, I have an external server helping with NAT traversal; basically, it just sends the addresses of the registered server and a client to each other, and allows a client to connect to a server by the name it sent during registration.
Now, if I run my container with the -P option, my port gets published as some random port, e.g. 32774. But on the helper server I see my server connecting from port 1234, so it can't send a correct address to a client, and a client can't connect at all.
If I instead run my container explicitly publishing my server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now on the helper server I see my server connecting from port 1236, and again it can't send the correct port to a client.
How can this be resolved? My aim is to require as little additional configuration as possible from the people who will use my Docker image.
EDIT: So, I either need to know my external port number from inside the container so I can send it to the discovery server - which, as I understand, is not possible at the moment, right? Or I need outgoing connections from the container on my port to use the same external port as the one configured for incoming connections - is that possible?
The ports are managed by Docker and the Docker network adapter. When using solely -P, the port is exposed Docker-internally and accessible through Docker linking. When using -p 1234:1234, the port is mapped to a host port and is directly available to a client, as well as being available for linking.
Start the helper server with a link option, --link <server container name>:server. The helper server can then connect to the host "server" on port 1234; the correct IP address will be managed by Docker.
Enable Docker to change your iptables configuration, which is the Docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should provide the host IP, not the Docker container's IP address; the container's IP only works inside the host where the Docker network adapter is running.
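A rough sketch of that setup (container and image names are made up):

# publish the game server's UDP port on the host for external clients
docker run -d --name myserver -p 1234:1234/udp myserver-image
# link the helper so it can reach the server under the hostname "server"
docker run -d --name helper --link myserver:server helper-image

Inside the helper container, "server" then resolves to the server container's address on the Docker bridge.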