Why can't I access my container from my internal or external IP within GCE? - docker

I created a very simple Docker practice script (Github link) and executed it via the Docker application on my macOS computer without any problems. I wanted to test it on Google Cloud's Compute Engine, so I created an instance and rebuilt the Docker image & container via the SSH browser (using Debian GNU/Linux).
Everything seems to work fine, except when I try to access the container via localhost or the external IP. Both give me this response: Site can't be reached.
I've adjusted the firewall settings many times and end up with the same results as in the screenshot provided. I ended up resetting the firewall settings to their defaults just so I could bring this question here. Here are the default settings.
What makes me think I'm missing something is the fact that I can use curl http://localhost:5000 (the port I've chosen to expose) and I'll get this as a response, which is all I had set the page to say once launched.
What am I missing that prevents me from viewing the container via localhost or the external IP?
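For reference, this is roughly how I built and ran the container on the instance (the image name here is a placeholder):

docker build -t practice-app .
# -p maps port 5000 in the container to port 5000 on the VM
docker run -d -p 5000:5000 practice-app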

Related

Can a random person visiting a docker-app hosted online access the source code?

I am very new to the realm of Docker. I want to make sure I have understood the safety aspects of it correctly.
Imagine the following case:
I create an app that consists of multiple scripts and models.
I dockerize my app.
I host the dockerized app by using a cloud platform on their servers.
The app has a UI that can be accessed by anyone online, for instance through a web link.
The question is:
Can a person from the outside world access the contents of this app in any way, or may I sleep in peace and be sure no one can see the stuff inside it?
As part of dockerizing your application, you exposed ports that allow interaction with the container (typically in your Dockerfile). If everything is configured correctly, then external visitors can only access the contents of the container via that port or ports.
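For instance, a minimal Dockerfile sketch (contents assumed, not from the question) that exposes a single port:

FROM python:3-slim
COPY app/ /app/
# EXPOSE only documents the port; it still has to be published with -p/-P at run time
EXPOSE 8080
CMD ["python", "/app/server.py"]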
Running your container at a well-known provider is a great start, but not a guarantee of a secure configuration.
A few things to consider:
Whatever runs on the port or ports you expose can serve any information from inside the container. That service should itself be secure, regardless of Docker.
You host your Docker image in a registry, from which the platform starts it. That registry should also be configured to disallow unauthorized access to the image.
You should have no secrets in Docker images anyway. If the image needs some kind of secret, it should be provided at runtime (e.g. via environment variables), or even better, downloaded from a secret vault.
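As a sketch of that last point (image name, variable, and vault path are assumptions), a secret injected at run time never becomes part of the image layers:

# plain environment variable, set when the container starts
docker run -d -e API_KEY="s3cr3t" myorg/myapp
# or fetched from a vault (here HashiCorp Vault's CLI) just before starting
docker run -d -e API_KEY="$(vault kv get -field=key secret/myapp)" myorg/myapp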

Dynamically set proxy for docker pull

I'm trying to pull an image from a server with multiple proxies.
Setting the proper proxy depends on which zone the machine is running docker pull from.
For the record, adding the one relevant proxy in /etc/systemd/system/docker.service.d/http-proxy.conf on the machine that pulls the image works fine.
But the image is supposed to be downloaded in multiple zones, which require different proxies depending on where the machine is.
I tried two things:
Passed the list of proxies in the http-proxy.conf, like this:
[Service]
Environment="HTTP_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="HTTPS_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="NO_PROXY=localhost"
Some machines require http://proxy_1:port/, which works fine.
But on a machine that requires http://proxy_2:port/ to pull, it does not work; that is, Docker does not fall back to the other proxy. It returns this error:
Error response from daemon: Get HTTP:<ip>:<proxy_1> proxyconnect tcp: dial tcp <ip>:<proxy_1>: connect: no route to host
Of course, if I provide only the second (working) proxy in the configuration, it works.
Passed the proxy as a parameter to docker pull, as you would with docker build/run; but that is not supported, per the documentation.
I am looking for a way to set up proxies such that either
Docker falls back to trying the other provided proxies,
OR
I can provide the proxy dynamically at pull time. (This will be part of an automated process that determines the relevant proxy to pass.)
I do not want to constantly change the http-proxy file and restart Docker, for obvious reasons.
What are my options?
If you're using a sufficiently recent Docker (i.e. 17.07 or higher), you can have this configuration on the client side. Refer to the official documentation for details on the configuration.
You still need multiple configuration files for the various proxy configurations you need, but you can switch between them without restarting the Docker daemon.
To do something similar (not exactly proxy-related), I use a shell script that wraps the invocation of the Docker client, pointing it at a custom configuration file via the --config option.
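As an illustrative sketch of that wrapper approach (directory layout, proxy addresses, and the detect_zone helper are assumptions), keep one config directory per zone, e.g. ~/docker-configs/zone1/config.json:

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy_1:port/",
      "httpsProxy": "http://proxy_1:port/",
      "noProxy": "localhost"
    }
  }
}

and select it at pull time with a small wrapper:

#!/bin/sh
# detect_zone stands in for whatever logic determines the machine's zone
ZONE="$(detect_zone)"
docker --config "$HOME/docker-configs/$ZONE" pull "$1"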

Access to internal infrastructure from Kubernetes

If I run Docker (Docker for Desktop, 2.0.0.3 on Windows 10), then access to internal infrastructure and containers is fine. I can easily do
docker pull internal.registry:5005/container:latest
But once I enable Kubernetes there, I completely lose access to the internal infrastructure: [Errno 113] Host is unreachable appears in Kubernetes itself, or connect: no route to host from Docker.
I have tried several things, including switching the NAT from DockerNAT to Default Switch. That doesn't work without a restart, and a restart changes it back to DockerNAT, so no luck there. This option also seems not to work.
Let's start from the basics, from the official documentation:
Please make sure all the prerequisites are met and all other instructions were followed.
Also, you can use this guide. It has more detail, pointing to what might have gone wrong in your case.
If the above doesn't help, there are a few other things to consider:
In case you are using a virtual machine, make sure that the IP you are referring to is that of the Docker engine's host, not the one the client is running on.
Try to add tmpnginx in docker-compose.
Try deleting the pki directory in C:\programdata\DockerDesktop (first stop Docker, delete the directory, then start Docker; see the sketch below). The directory will be recreated and the k8s-app=kube-dns labels should work fine.
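For that last suggestion, the sequence would look roughly like this from an elevated command prompt (path as given above):

:: stop Docker Desktop first
rmdir /s /q C:\programdata\DockerDesktop\pki
:: restart Docker Desktop; the directory is recreated automatically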
Please let me know if that helped.

Centralized team development environment with docker

I want to build a "centralized" development environment using Docker for my development team (4 PHP developers).
I have one big Linux server (lots of RAM, disk, and CPU) that runs the containers.
All developers have an account on this Linux server (a home directory) where they put (git clone) the projects' source code. Locally (on their desktop machines) they access their home directory via a network share.
I want all developers to be able to work on the same projects at the same time, but to view the results of their code edits in different containers (or sets of containers, for projects that use container linking).
The docker PHP development environment by itself is not a problem. I already tried something like that with success : http://geoffrey.io/a-php-development-environment-with-docker.html
I can use fig, with a fig.yml at the root of each project's source code, so each developer can do a fig up to launch the set of containers for a given project. I can even use a different FIG_PROJECT_NAME environment variable for each account, so I suppose two developers can fig up the same project and there will be no container-name collisions.
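For illustration, a minimal fig.yml for such a project might look like this (image and paths are assumptions):

web:
  image: php:5.6-apache
  volumes:
    - .:/var/www/html   # mount the source from the developer's checkout
  ports:
    - "80"              # no host port given, so Docker picks a random free one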
Does it make sense?
But beyond that, I don't really know how to dynamically give access to the running containers: when running, there will typically be a web server in a container mapped to a random port on the host. How can I set up a sort of "dynamic DNS" that points to the running container(s), accessible, let's say, through an nginx reverse proxy (the vhost creation and destruction have to be dynamic too)?
To summarize, the workflow I would like to have :
A developer SSHes into the dev env (the big Linux server).
From his home directory, he goes into the project directory and does a fig up.
A vhost is created in the nginx reverse proxy, pointing to the running container, and a DNS entry (or /etc/hosts entry) is added matching the server_name of this newly generated vhost.
The source code is mounted into the container from a host directory (-v host/dir:container/dir), so the developer can edit any file while the container is running.
The result can be viewed by accessing the vhost, for example :
randomly-generated-id.dev.example.org
When the changes are OK, the developer can do a git commit/push.
Then the developer does a fig stop, which in turn deletes the corresponding nginx reverse proxy vhost and the dynamic DNS entry.
So, how would you do a setup like this? I mentioned tools like fig, but if you have any other suggestions ... just remember that I would like to keep a lightweight workflow (after all, we are a small team :)).
Thanks for your help.
Does it make sense?
Yes, that setup makes sense.
I would suggest taking a look at one of these projects:
https://github.com/crosbymichael/skydock
https://github.com/progrium/registrator
https://github.com/bnfinet/docker-dns
They're all designed to create DNS entries for containers as they start. Then just point your DNS server at it, and you should get a nice domain name every time someone starts up an environment (I don't think you'll need an nginx proxy). But you might also be interested in this approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
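For example, registrator runs once per host, watches the Docker socket, and registers every container's published ports in a registry backend (the Consul address here is an assumption):

docker run -d --name registrator \
    -v /var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator consul://consul-host:8500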
Now, there's an even better option for you: Traefik. It will act as a reverse proxy, listening on 80/443, and will differentiate by hostname. Then, it will forward traffic dynamically, based on labels applied to the containers.
Here is a good solution to your issue:
1) Set up Traefik to listen to the Docker daemon, forwarding based on hostname
2) Ensure the frontend app servers for your devs are on the same docker network as traefik
3) Set up a wildcard DNS entry pointing to your server. For example: *.localdev.example.com.
4) On each container, set the hostname within that wildcard namespace. For example: jsmith-dev1.localdev.example.com. This would be specified in a Docker label such as: traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com.
This would allow developers to dynamically forward traffic by domain to their own dev containers.
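A sketch of steps 2 and 4 (network, image, and container names are assumptions):

# Traefik and the dev container share a network; the label drives the routing
docker network create traefik-net
docker run -d --name jsmith-dev1 \
    --network traefik-net \
    --label "traefik.frontend.rule=Host:jsmith-dev1.localdev.example.com" \
    my-php-app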
Yes, I'm aware this is a 3-year-old question. It still comes up first on Google in 2018 for "centralized docker development server", so I'm posting this anyway for the benefit of those currently searching.

Linking containers in Docker

Docker allows you to link containers by name.
I have two questions on this:
Suppose A (client) is linked to B (service), and B's port is exposed dynamically (i.e. the actual host port is determined by Docker, not given by the user). What happens if B goes down and is restarted?
Does Docker update the environment variable on A?
Does Docker assign the very same port to B again?
Is A's link to B broken?
…?
Besides that, it's quite clear that this works fine if both containers run on the same host machine. Does linking containers also work across machine boundaries?
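For reference, this is the kind of linking I mean (image names and port are placeholders); with --link b:service, A receives environment variables derived from the alias:

docker run -d --name b -P service-image      # host port chosen by Docker
docker run --rm --link b:service client-image env
# prints variables such as:
# SERVICE_PORT=tcp://172.17.0.5:5000
# SERVICE_PORT_5000_TCP_ADDR=172.17.0.5
# SERVICE_PORT_5000_TCP_PORT=5000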
Have you looked into the ambassador pattern?
It's ideal for this kind of setup, where you want an app server linked to a DB server, but taking the DB server down would otherwise mean the app server has to be restarted too.
http://docs.docker.io/en/latest/use/ambassador_pattern_linking/
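A minimal sketch of the pattern, adapted from that page (image names and the IP are illustrative):

# on the service host: the real redis plus an ambassador publishing it
docker run -d --name redis crosbymichael/redis
docker run -d --link redis:redis --name redis_ambassador -p 6379:6379 svendowideit/ambassador

# on the client host: a local ambassador relaying to the remote host
docker run -d --name redis_ambassador --expose 6379 \
    -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador

# the client then links to its local ambassador as if the service were local
docker run -i -t --rm --link redis_ambassador:redis relateiq/redis-cli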
I would say: try ;).
At the moment, Docker has no control whatsoever over a process once it has started, as it execve(3)s without forking. It is not possible to update the environment, which is why links need to be set up before the container runs and can't be edited afterward.
Docker will try to reassign the same port to B, but there is no guarantee, as another container could be using it.
What do you mean by 'broken'? If you disabled networking between unlinked containers, it should still work if you stop/start a container.
No, you can't link containers across machines yet.
