Problem statement:
On a standalone on-prem server, using NVIDIA Docker. Whenever users create a new environment, they can potentially publish any port to all traffic from the outside world (bypassing our client firewall) if they don't bind the published port to localhost.
So, how can we protect the server from such tunneling requests and instead make published ports open only to localhost? Any thoughts or ideas?
You can't give untrusted users the direct ability to run docker commands. For instance, anyone who can run a Docker command can run
docker run --rm -v /:/host busybox cat /host/etc/shadow
and then run an offline password cracker to get your host's root password. Being able to bypass the firewall is probably the least of your concerns.
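That said, if the users are trusted and the only concern is the published ports, you can bind a published port to the loopback interface so it is reachable only from the host itself. A minimal sketch (image and port are placeholders):
docker run -d -p 127.0.0.1:8080:8080 some-image
You can also set "ip": "127.0.0.1" in /etc/docker/daemon.json to make the loopback address the default bind address for published ports, although none of this helps against users who can run arbitrary docker commands.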
I'm using an OpenVPN server in a Docker container for multiple client connections.
This container sits on a specific Docker network which also contains a web server that the clients should reach.
I want to publish the host name of my web server to clients so that they won't need to know its IP address in order to reach it.
To do so, I want to expose Docker's native DNS server to the OpenVPN clients and push the OpenVPN server's IP to them as their DNS server.
However, the Docker DNS server resides in the OpenVPN container, listening on 127.0.0.11 (with iptables internal redirections but that's another story).
Thus, in the OpenVPN server container, I need to add an iptables rule to forward DNS requests arriving on the external OpenVPN IP to the internal 127.0.0.11 address.
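Roughly, the kind of rule I have in mind (assuming DNS on 53/udp; the details may vary) is:
iptables -t nat -A PREROUTING -i tun0 -p udp --dport 53 -j DNAT --to-destination 127.0.0.11:53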
But such an internal forward requires me to execute the following command:
sysctl -w net.ipv4.conf.tun0.route_localnet=1
Using only the NET_ADMIN capability with docker run (--cap-add=NET_ADMIN), I get the following error message:
sysctl: error setting key 'net.ipv4.conf.tun0.route_localnet': Read-only file system
However, this works perfectly with the --privileged flag, but that flag is far too permissive.
Is there any Linux capability that can do the trick without using the --privileged flag?
I couldn't find the answer in the Linux capabilities manual.
I found a solution, using docker run's --sysctl option.
Solution in docker-compose.yml:
sysctls:
  - net.ipv4.conf.tun0.route_localnet=1     # doesn't work: tun0 doesn't exist yet at container start time
  - net.ipv4.conf.default.route_localnet=1  # workaround: the default is copied to interfaces created later
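The same setting can also be passed directly to docker run; a minimal sketch (the image name is a placeholder):
docker run --cap-add=NET_ADMIN --sysctl net.ipv4.conf.default.route_localnet=1 -d my-openvpn-image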
What would be some use cases for keeping the Docker client/CLI and the Docker daemon on separate machines?
Why would you keep the two separate?
You should never run the two separately. The only exception is with very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing and easily take over the whole system. There are probably more direct ways to root the host. The only requirement to run this command is access to the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The common system-automation tools (Ansible, Chef, SaltStack) can all invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system), be aware that it's a massive security vulnerability, equivalent to disabling your root password.
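For reference, the kind of daemon invocation this warning refers to typically looks something like the following (exact flags vary between setups):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Any tcp:// endpoint published without TLS hands root-equivalent access to whoever can reach that port.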
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)
I'm new to both Docker and NiFi. I found a command that installs NiFi via Docker and used it on a virtual machine I have in GCP, but I would like to access the container via its web interface. This is what appears in docker ps:
What command do I need to execute to gain access to the tool via port 8080?
The container has already published port 8080 on the host, as evidenced by the output 0.0.0.0:8080->8080/tcp. You read that as {HOST_INTERFACE}:{HOST_PORT}->{CONTAINER_PORT}/{PROTOCOL}.
Navigate to http://SERVER_ADDRESS:8080/ (or maybe http://SERVER_ADDRESS:8080/nifi) using your web browser. You may need to modify the firewall rules applied to your VM to ensure that you can access that port from your local machine.
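If the VM sits behind GCP's default firewall, opening the port could look roughly like this (the rule name and source range are placeholders; check your own network and target tags before applying anything):
gcloud compute firewall-rules create allow-nifi-8080 --allow=tcp:8080 --source-ranges=YOUR_IP/32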
I'm trying to launch a docker container that is running a tornado app in python 3.
It serves a few API calls and is writing data to a rethinkdb service on the system. RethinkDB does not run inside a container.
The system it runs on is ubuntu 16.04.
Whenever I tried to launch the container with docker-compose, it would crash, saying the connection to localhost:28015 was refused.
I went researching the problem and realized that docker has its own network and that external connections must be configured prior to launching the container.
I used this command from a question I found to make it work:
docker run -it --name "$container_name" -d -h "$host_name" -p 9080:9080 -p 1522:1522 "$image_name"
I've changed the container name, host name, ports and image name to fit my own application.
Now the container is not crashing, but I have two problems:
I can't reach it from a browser by pointing to https://localhost/login
I lose the docker-compose usage. This is problematic if we want to add more services that talk to each other in the future.
So, how do I launch a container that can talk to my RethinkDB database without putting that DB into a container?
Please, let me know if you need more information to answer this question.
I'd appreciate your guidance in this.
The end result should be that the container serves requests coming over HTTPS.
For example, I have an endpoint called /getURL.
The request includes a token verified in the DB. The URL is like this:
https://some-domain.com/getURL
After verification with the DB, it will send back a relevant response.
The container needs to be able to listen on 443 and also talk to the RethinkDB service on 28015.
(Since 443 and HTTPS involve certificates, I'd also appreciate a solution that handles this over plain HTTP on some other port, and I'll take it from there.)
Thanks!
P.S. The service works when I launch it without Docker from PyCharm; it's the Docker configuration I have problems with.
I found a solution.
I needed to add this flag so that the container can connect to the RethinkDB service running on the host:
--network="host"
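Applied to the docker run command from the question, that looks roughly like this (with host networking the -p mappings are ignored, because the container shares the host's network stack):
docker run -it --name "$container_name" -d -h "$host_name" --network=host "$image_name"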
This solution works for me right now, but since it isn't the best solution, I won't mark it as the answer for now.
I'm using Kitematic to start my Docker containers. I'm trying to start the latest Neo4j container (3.2) and I cannot access the DB via the web browser because it requires authentication.
I tried to disable it via the environment variables (NEO4J_AUTH set to none), but that doesn't solve the problem. The standard username/password neo4j/neo4j doesn't work either.
Any ideas how to set the authentication via Kitematic?
I have the same problem here but I can tell you why this is happening:
As you are using Windows, you need to access the Neo4j Browser via the Docker Machine, which port-forwards your request to the neo4j container. The forwarded request is basically an authentication request.
The problem is that Chrome does not allow unsecured transfer of credentials, and the forwarded request falls into that category. This is an issue that comes from Chrome, not the Neo4j server. I'm still trying to find an elegant way of solving this for my students who use Windows.
The easiest way would be to connect to the neo4j container directly (which can be done on Linux and Mac).
OK, finally one of my students made it work.
Make sure to publish both ports 7474 and 7687 from your container.
This way, both requests will target localhost and the browser will stop complaining.
Here is a docker command showing how to publish both ports:
docker run --rm --name neo4j_server -p 7474:7474 -p 7687:7687 -d neo4j
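If you also want to skip authentication, as attempted in the question, the official neo4j image accepts a NEO4J_AUTH environment variable; a variant of the same command might look like this:
docker run --rm --name neo4j_server -p 7474:7474 -p 7687:7687 -e NEO4J_AUTH=none -d neo4j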