I would like to access the Docker API (running on Windows Server).
Sadly a TCP connection is not possible in our network (at least for this case).
I found a solution here to change the port, but I am not sure whether changing the protocol is possible.
{
"hosts": ["tcp://0.0.0.0:4243"]
}
From the docs:
The Docker daemon can listen for Docker Engine API requests via three different types of Socket: unix, tcp, and fd.
... udp is not an option.
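For what it's worth, on Windows Server the daemon can, as far as I know, also listen on a named pipe (npipe), which is the Windows counterpart of the unix socket, so that may be worth checking if TCP is ruled out. A hedged sketch of what the hosts entry could look like in daemon.json (the tcp entry just mirrors the port from the question):

{
  "hosts": ["npipe://", "tcp://0.0.0.0:4243"]
}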
I'm running Docker Desktop for Mac on the host, and it is running two containers.
Container-1: linux-based OS, running UDP-based server program listening on 14xxx port (udp://:14xxx/).
Container-2: linux-based OS, python application sending/receiving data via UDP address as udp://14xxx/ without any specific hostname.
Question: My python app on Container-2 is able to send on UDP port, but never receives anything back from Container-1.
Given that UDP works differently from TCP and HTTP, how can I establish successful UDP communication between two Docker containers running on the same host (macOS)?
Here are the various things I have tried, with no luck:
Tried running both containers using --network host option.
Tried creating a new docker network testnet and started containers using --network testnet option.
Never mind. I found the solution.
First, it was not a docker thing at all.
In my python application on Container-2, I used environment variables to determine the UDP address. Apparently, these variables were not set properly. Hence, the confusion/error.
Second, "--network host" is still a VALID argument to use when running both Docker containers, to make sure they can discover/talk to each other.
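For anyone hitting the same thing, here is a minimal sketch of the kind of check that would have caught it; the variable names UDP_HOST and UDP_PORT and the default values are made up for illustration:

import os
import socket

# Hypothetical variable names: if they are unset, the defaults silently
# point the client at the wrong address, which looks like "UDP is broken".
host = os.environ.get("UDP_HOST", "container-1")
port = int(os.environ.get("UDP_PORT", "14000"))
print(f"sending to {host}:{port}")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(b"ping", (host, port))
try:
    data, addr = sock.recvfrom(1024)
    print("reply from", addr, data)
except socket.timeout:
    print("no reply - check UDP_HOST/UDP_PORT and the server binding")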
Hope it helps!
I have a MongoDB Docker container and I only want it to be accessible from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, it seems that the port is still reachable from the public network.
I am using Linux CentOS 7 and docker-compose for setting up Docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after Docker startup.
Another thing to do is to use a non-default port for mongod by modifying docker-compose.yml (remember to add --port=XXX in the command directive); see the sketch below.
For better security I suggest putting your server behind an external firewall.
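A rough sketch of what that could look like in docker-compose.yml; the port value is just an example, and binding the published port to 127.0.0.1 is an extra option (not mentioned above) that also keeps it reachable only from the server itself:

services:
  mongo:
    image: mongo                              # example image
    command: ["mongod", "--port", "27117"]    # example non-default port
    ports:
      - "127.0.0.1:27117:27117"               # loopback-only publish (optional extra)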
If you have your application in one container and MongoDB in another container, what you need to do is connect them together using a network that is set to be internal.
See Documentation:
Internal
By default, Docker also connects a bridge network to it to provide
external connectivity. If you want to create an externally isolated
overlay network, you can set this option to true.
See also this question
Here's the tutorial on networking (not including internal but good for understanding)
You may also limit traffic on MongoDb by Configuring Linux iptables Firewall for MongoDB
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
Read more on Wikipedia.
You may connect a container to more than one network, so typically an application container is connected to both the outside-world (external) network and the internal network. The application communicates with the database over the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (the internet).
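A hedged docker-compose sketch of that layout; the service names, image names, and network names are made up for illustration:

services:
  app:
    image: my-app              # hypothetical application image
    networks:
      - public
      - internal
  db:
    image: mongo               # example database image
    networks:
      - internal               # the database is reachable only on the internal network

networks:
  public: {}                   # default bridge with outside connectivity
  internal:
    internal: true             # no connectivity to the outside world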
I found a post that may help; I am just posting it here for people who need it in the future.
For security we need both the hardware firewall and the OS firewall enabled and configured properly. I found that the firewall protection is ineffective for ports opened by a Docker container listening on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation is:
A server with Centos 7.9 and Docker version 20.10.17 installed
A docker container was running with port 3000 opened on 0.0.0.0
The firewalld service had started with the command systemctl start firewalld
Only port 22 should be allowed access from outside the server, according to the firewall configuration.
It was expected that no one else could access port 3000 on that server, but the test result was the opposite: port 3000 was reachable from other servers. Thanks to the blog post, I now have my server protected by the firewall.
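In case the link ever goes stale: one commonly used fix (an assumption on my part, not necessarily what the post describes) is to add a rule to the DOCKER-USER chain, which Docker evaluates before its own forwarding rules and leaves free for user rules. The eth0 interface and port 3000 below are taken from the setup above and assume the host and container ports match:

# Drop outside traffic to the published container port
# (rules in DOCKER-USER are evaluated before Docker's own forwarding rules)
iptables -I DOCKER-USER -i eth0 -p tcp --dport 3000 -j DROP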
I am using Docker 18.06.1-ce-win73 on Windows 10 and trying to perform the following UDP operation:
Docker port 10001 --------------> host port 10620
It is mandatory for the application running on the host to receive packets from port 10001.
Inside the Docker container, using Python, I bind the socket to ('0.0.0.0', 10001) and use it to send my packets to the host IP on port 10620.
I have also started the container with the argument -p 10001:10001/udp.
Unfortunately, when receiving the packet on the Host application, the origin port is not 10001 but a random one.
Is it possible to force Docker to use a specific source port when using UDP from inside the container?
You can control the container source port, but when you communicate outside of docker, even to your host, the request will go through a NAT layer that will change the source to be the host with a random port. You may be able to modify the iptables rules to work around this NAT effect.
However, if you really need control of the source port like this, you may be better off switching to host networking (--net=host or network_mode: host depending on how you run your containers), or change to a networking driver like macvlan that exposes the container directly without going through the NAT rules.
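As a rough sketch of those two routes (the image name, interface name, and subnet are placeholders, the macvlan values need to match your physical network, and both behave as described on a Linux host):

# Option 1: host networking - the container shares the host's network stack,
# so there is no NAT and the source port bound in Python is what the receiver sees
docker run --net=host my-udp-sender

# Option 2: a macvlan network gives the container its own address on the physical LAN
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macvlan-net
docker run --network macvlan-net my-udp-sender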
I have a few Docker containers (using docker-compose and a single network, network-sol).
One of the containers is a Spring Boot application that sends UDP broadcasts to the local network. Broadcasting to 255.255.255.255 fails because it only acts as the local broadcast address of network-sol.
How can I broadcast UDP messages so that the "top" local network (the host's LAN) will get those packets? Do I have to use a directed broadcast address for that?
P.S.
Broadcasting works if the application is deployed outside of Docker (as part of the local network).
You can either run the service defined in your docker-compose.yml file with network_mode: host, or alternatively publish the port of the container you intend to communicate with using the following configuration. Note that the /udp suffix is required for UDP communication to work.
service:
  ports:
    - "8080:8080/udp"
I have gotten some luck out of this guide. It specifies the sysctl parameters that are needed for broadcast forwarding from a Docker network; you should then be able to either use the author's script or specify these parameters when running Docker.
I am currently using Ubuntu 10.04 for some Rails development. It is installed as a guest machine using VirtualBox on a Windows 7 x64 host.
Within Ubuntu, I am trying to tunnel several ports from a remote server directly to the guest OS in order to avoid having to download a remote database.
Let's say I want to forward port 5000 on the remote server to port 5000 on the guest os.
I have set up a forwarder for the port on the Windows side, using VBoxManage.exe. This forwards HostPort 5000 to GuestPort 5000.
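(For reference, a NAT forwarding rule of that shape is typically created with something like the following; the VM name and rule name are placeholders.)

VBoxManage modifyvm "Ubuntu-Guest" --natpf1 "rails-db,tcp,,5000,,5000"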
Then within Ubuntu I run ssh -L5000:127.0.0.1:5000. However, whenever I try to access "127.0.0.1:5000", I receive the message "channel 7: open failed: connect failed: Connection refused".
Am I missing something?
Thanks for the help!
connect failed: Connection refused
This means that you're not able to connect to port 5000 on the remote end.
If you're only using this connection from within your guest through your SSH tunnel, then you don't need the forward from VBoxManage: that only opens things up so outside computers can connect directly to your guest; it won't help your guest connect to the outside.
Are you sure the server you connect (SSH) to is the same server that runs your database? And is the database running on that server?
When you've connected (SSH) to the server, you can try to list which ports are listening for connections, or you could try to connect to the database with telnet. To list listeners you can run netstat -lnt (-l shows listening sockets, -n is numeric (show IP and port number) and -t is TCP). You should see a line like "tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN" if you have a service listening for TCP on port 5000. To try to connect you can simply run telnet 127.0.0.1 5000; if you can't connect with telnet from the server, then the database isn't listening for (or allowing) your connection, or the service is running on another port or server.
SSH uses TCP traffic by default, right?
Just to verify, NAT in VirtualBox does have these limitations (per the User Manual):
There are four limitations of NAT mode which users should be aware of:
ICMP protocol limitations: Some frequently used network debugging tools (e.g. ping or tracerouting) rely on the ICMP protocol for sending/receiving messages. While ICMP support has been improved with VirtualBox 2.1 (ping should now work), some other tools may not work reliably.
Receiving of UDP broadcasts is not reliable: The guest does not reliably receive broadcasts, since, in order to save resources, it only listens for a certain amount of time after the guest has sent UDP data on a particular port. As a consequence, NetBios name resolution based on broadcasts does not always work (but WINS always works). As a workaround, you can use the numeric IP of the desired server in the \\server\share notation.
Protocols such as GRE are unsupported: Protocols other than TCP and UDP are not supported. This means some VPN products (e.g. PPTP from Microsoft) cannot be used. There are other VPN products which use simply TCP and UDP.
Forwarding host ports lower than 1024 impossible: On Unix-based hosts (e.g. Linux, Solaris, Mac OS X) it is not possible to bind to ports below 1024 from applications that are not run by root. As a result, if you try to configure such a port forwarding, the VM will refuse to start.
Try ssh -L5000:0.0.0.0:5000 instead of ssh -L5000:127.0.0.1:5000
There is something called a "loopback" interface tangled up with 127.0.0.1 that will cause you grief if you are trying to access ports from a different machine, i.e. your host machine.