Docker connect to local secure registry

I have set up a private registry and secured it according to the recipe with an nginx reverse proxy. nginx listens on port 5000 using SSL.
Running docker pull myregistry:5000/foo:latest from a remote machine against that registry works fine.
However, the same command on myregistry itself results in Docker trying to access the registry (through nginx) via HTTP, not HTTPS.
Since nginx listens using SSL, it returns an error ("The plain HTTP request was sent to HTTPS port").
According to the Docker documentation, local registries are automatically considered insecure.
In my case, I want the local registry to be considered secure as well, so that docker pull myregistry:5000/foo:latest works on the same machine. How can I achieve that?
There is only an option to mark remote registries as insecure, but no option to mark a specific registry as secure.
Obviously, I cannot use a different port to listen for plain HTTP, since that would change the image name. I also did not find a way to make nginx accept plain HTTP traffic on the same port based on the client's IP address.
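For context, a minimal sketch of the kind of nginx front end described above, assuming the registry container itself listens on 127.0.0.1:5001 (the upstream port and certificate paths are hypothetical):

server {
    listen 5000 ssl;
    server_name myregistry;
    ssl_certificate     /etc/nginx/certs/myregistry.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/certs/myregistry.key;   # hypothetical path
    client_max_body_size 0;                  # allow large image layer uploads
    location /v2/ {
        proxy_pass http://127.0.0.1:5001;    # the registry container
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

With this front end, remote clients that trust the certificate can pull over HTTPS on port 5000.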

Related

How to configure Docker to use an HTTP proxy for only one registry, not all of them

At my job, we have internal services that can only be reached over an HTTP proxy. One such service is our internal Docker registry. I'm unable to communicate with this registry because my Docker daemon isn't configured to use an HTTP proxy.
If I do configure my Docker daemon to use the company HTTP proxy, I can push/pull images from the internal registry, but I'm then unable to communicate with any other registries. Changing the HTTP proxy environment variables and restarting my entire Docker daemon several times a day is a massive hassle and waste of time.
Basically, what I need to do is configure Docker to use an HTTP proxy to communicate with one registry, but not all the others.
Is it possible to configure Docker this way, or is it all or nothing?
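The daemon does not support per-registry proxy selection, but it does honor NO_PROXY, so one common workaround is to invert the problem: proxy everything and exempt the registries that must be reached directly. A minimal sketch of a systemd drop-in, with a hypothetical proxy address and example exemptions:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.corp.example:3128"
Environment="HTTPS_PROXY=http://proxy.corp.example:3128"
Environment="NO_PROXY=registry-1.docker.io,quay.io,gcr.io"

After editing, reload and restart: sudo systemctl daemon-reload && sudo systemctl restart docker. The drawback is that NO_PROXY has to enumerate every registry you reach directly, rather than naming the one registry that needs the proxy.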

Map Google Cloud VM docker port to HTTPS

I have a Google Cloud VM which runs a Docker image. The Docker image runs a specific Java app, which listens on port 1024. I have pointed my domain's DNS to the VM's public IP.
This works: I can go to mydomain.com:1024 and access my app, since Google Cloud directly exposes the Docker port as a public port. However, I want to access the app through https://mydomain.com (port 443), so basically map port 443 to port 1024 on my VM.
Note that my Docker image starts an nginx service. Previously I configured the Java app to run on port 443; the nginx service then listened on 443, Google Cloud exposed this HTTPS port, and everything worked fine. But I cannot use port 443 for my app anymore, for specific reasons.
Any ideas? Can I configure nginx somehow to map to this port? Or do I set up a load balancer to proxy the traffic (which seems rather complex, as this is all pretty new to me)?
P.S. In Google Cloud you cannot use docker run -p 443:1024 ..., which would basically do the same thing if I'm right; the containerized VMs do not allow it.
Container-Optimized OS maps ports one-to-one: port 1000 in the container is mapped to port 1000 on the public interface. I am not aware of a method to change that.
For your case, use Compute Engine with Docker or a load balancer to proxy connections.
Note: if you use a load balancer, your app does not need to manage SSL/TLS. Offload SSL/TLS to the load balancer and just publish HTTP within your application. Google can then manage your SSL certificate issuance and renewal for you. You will find that managing SSL certificates for containers is a deployment pain.
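If you take the Compute Engine route, a minimal nginx sketch on the VM that maps 443 to the app port (the domain and certificate paths are hypothetical):

server {
    listen 443 ssl;
    server_name mydomain.com;
    ssl_certificate     /etc/ssl/certs/mydomain.crt;     # hypothetical path
    ssl_certificate_key /etc/ssl/private/mydomain.key;   # hypothetical path
    location / {
        proxy_pass http://127.0.0.1:1024;   # the Java app's port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

With the load balancer approach, the equivalent of this block (and the certificates) lives in Google's infrastructure instead of on the VM.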

Using edge / proxy node certificate identity

I'm using HAProxy as a bastion server / cluster gateway, as only some of the nodes in my network have direct access to the external network. My internal nodes are part of a Kubernetes cluster and need to be able to pull images from a private registry external to my cluster, which requires certificate identities.
k8s cluster internal node -> haproxy on edge node -> docker registry
I'm trying to configure the back_end in my haproxy.cfg to route to the docker registry and update the request with the certificate identity of the edge node. (The internal node does not have certs acceptable to the docker registry, and I've not been allowed to host the external node's certs on the internal node.) What I have right now looks like this:
frontend ft_ssl
    bind <boxIP>:443
    mode http
    default_backend bk_global_import_registry_certs

backend bk_global_import_registry_certs
    mode http
    balance roundrobin
    # Re-encrypt to the registries, presenting the edge node's client certificate
    server registry_alias-0 <registryIP>:443 ssl ca-file fullyqualified/ca.crt crt fullyqualified/file.pem check
    server registry_alias-1 <registryIP2>:443 ssl ca-file fullyqualified/ca.crt crt fullyqualified/file.pem check
I currently have the HTTPS_PROXY setting in /etc/systemd/system/docker.service.d/http-proxy.conf and am getting a 400 Bad Request. Scrubbed log message below; the only changes are removed IPs and fixed typos.
InternalIP:randomPort [09/Jul/2019:13:28:08.659] ft_ssl bk_global_import_registry_certs 0/0/10/5/15 400 350 - - ---- 1/1/0/0/0 0/0 {} "CONNECT externalFQDN:443 HTTP/1.1"
For those looking at this via the kubernetes or docker tags, I also considered setting up a pull-through cache, but realized this only works with Docker's public registry - see open Docker GitHub issue 1431 for other folks trying to find ways to get past that, as well.
Posting the answer that resolved the situation for us, in case it helps others...
All traffic from the internal node now routes through the HAProxy.
I'm no longer using the HTTPS_PROXY setting in /etc/systemd/system/docker.service.d/http-proxy.conf. Proxying through was not appropriate, since I could not use my internal node's certificate to authenticate against the docker registry.
From the internal node, I'm now treating the docker registry as an insecure registry by adding its address to the insecure-registries array in /etc/docker/daemon.json.
We can now access our internal private registry using the certificates HAProxy passes along on the backend connection. We do verify the certificate on the frontend, but because the registry is listed in insecure-registries, Docker accepts the certificate name mismatch.
One side effect we noticed: pulling an image from the default Docker registry without specifying a prefix does not succeed with our solution. To pull from Docker Hub, for example, you'd pull registry-1.docker.io/imageName:imageVersion rather than just imageName:imageVersion.
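For reference, a minimal sketch of the /etc/docker/daemon.json change described above, with a hypothetical registry address, applied on each internal node and followed by systemctl restart docker:

{
  "insecure-registries": ["registry.internal.example:443"]
}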
I might be mistaken, but as far as I know HAProxy cannot be used as a forward HTTP proxy; it only acts as a reverse HTTP proxy. You should use Squid or something similar as a forward HTTP proxy.
If you want HAProxy to terminate SSL for you, as you do, then you will need to change the hostname part of your Docker image names to the hostname of your HAProxy node. You should also make sure your Docker daemons trust the HAProxy certificate, or add HAProxy to the insecure-registries list on all Kube nodes.

Ban outgoing traffic from docker container

Can I forbid all outgoing traffic from a docker container except to an HTTP proxy server, without a sophisticated iptables configuration?
I don't want this container to access any network at all, except AAA.BBB.CCC.DDD:80. Is there any convenient way to achieve this?
EDIT:
I found that creating a network with --internal can do the trick, and linking the container to a proxy server container on the same host still allows traffic through the proxy. Is this method secure, though?
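A minimal sketch of that approach (image names and proxy port are hypothetical): the application container lives on an internal network with no outbound route, and only the proxy container is additionally attached to the default bridge:

# network with no outbound connectivity
docker network create --internal no-egress

# proxy container: on both the internal network and the default bridge
docker run -d --name proxy --network no-egress sameersbn/squid   # any Squid image
docker network connect bridge proxy

# app container: internal network only; reaches the proxy by container name
docker run -d --name app --network no-egress \
    -e http_proxy=http://proxy:3128 myapp                        # hypothetical image

How secure this is then depends mostly on the proxy's ACLs: the proxy is the only path out, so it must be configured to allow only AAA.BBB.CCC.DDD:80.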

Port forwarding in Jelastic with Docker

I have a simple application with a REST API on port 4567, and I run it in a Docker container in the Jelastic cloud.
Now I want to expose port 4567 to the outside world. When I run Docker locally I can do it like this: docker run -d -p 4567:4567 -ti myapp /bin/bash
But how can I do that in Jelastic without an external IP? I've also tried to use Jelastic endpoints, but the port is not available.
I also found some information in Jelastic's docs: "In case your Docker container does not have an external IP attached, Jelastic performs an automatic port redirect.
This means that if application listens to a custom port on TCP level, Jelastic will try to automatically detect it and forward all the incoming requests to this port number.
As a result, in most cases, your dockerized app or service will become available over the Internet under the corresponding node’s domain right after creation."
To build the Docker image I use a Dockerfile, and it has an EXPOSE 4567 instruction.
@Catalina,
Note that there is no need to expose ports in Jelastic, because it uses PCS container-based virtualization, which is more technologically advanced than the native Docker containers' implementation: it has built-in support for virtual host-routed network adapters.
By default, Jelastic automatically detects the ports an application is predefined to listen on in the corresponding Docker image settings and applies the required redirects, so the container is accessible right after deployment.
The following ports are listened on by the Shared Load Balancer (SLB) and can be forwarded to containers:
80 -> HTTP
8080 -> HTTP
8686 -> HTTP
8443 -> SSL
4848 (GlassFish admin) -> SSL
4949 (WildFly admin) -> HTTP
7979 (import/export feature) -> SSL
If you want to specify a port other than the one selected by the auto-redirect functionality, set the JELASTIC_EXPOSE Docker variable in the environment settings wizard to the needed port.
The JELASTIC_EXPOSE variable accepts the following values:
0, DISABLED, or FALSE - to disable the auto-redirect
a number within the 1-65535 range - to define the required port for the corresponding redirect
Alternatively, you can map the required private port via an endpoint (making it accessible over the Shared LB) and bind your service to the received address and shared port.
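For example, to force the redirect to the port used by the application in the question, one would add the variable in the environment settings wizard:

JELASTIC_EXPOSE=4567

The SLB would then forward incoming requests to port 4567 instead of the auto-detected one.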
