Making containers use host's proxy on MacOS - docker

I'm running Docker Desktop for MacOS. Like many of us, I'm working from home right now and I access certain services that are inside the corporate firewall from my home machine via a SOCKS5 proxy. I'm trying to build a number of services in Docker containers and these containers also need access to the services at work. The problem is that Docker on MacOS does not support SOCKS proxies. I was hoping I could use something like host network mode, but that is also not supported on MacOS.
Any suggestions on how to do the above (other than switching to Linux ;-))?
EDIT: adding more detail on my environment.
* Working from home on a Mac running MacOS Catalina, connected to various services at work using a SOCKS5 proxy and a few SSH tunnels.
* Running the Proxifier application to make it easier to allow various applications to tunnel via the SOCKS proxy.
* Docker Desktop for MacOS.
* Ruby on Rails stack inside a Docker container, in which certain Gems are developed in-house and hosted on our internal (behind the corporate firewall) Gitlab server. So the stack needs access to that Gitlab server when the Gems are installed.

OK. I think I've come up with something workable. I'd still appreciate comments on this in case there's a better way. I'm always happy to learn.
* Proxifier routes all traffic to *.mydomain.com to the SOCKS5 proxy.
* I've created an SSH tunnel on my MacOS host: -L 8443:gitlab-server.mydomain.com:443
* In the Gemfile, I reference the Gitlab server as https://host.docker.internal:8443
The only sticky bit was that I got an SSL cert verification error, since I'm not referencing the Gitlab server at its *.mydomain.com domain. I worked around this by running git config --global http.sslVerify "false" just before the bundle install, so that the git clone commands issued by Bundler to install the Gitlab-hosted Gems ignore SSL verification errors.
This seems to be working fairly well. The actual Gitlab URL in the Gemfile is parameterized so that the "real" Gitlab URL can be used when inside the corporate firewall.
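For anyone wanting to copy this, here is a rough end-to-end sketch. The SSH host name and the GITLAB_GEM_SOURCE variable are placeholders I've invented for illustration; the port, Gitlab host, and host.docker.internal URL come from the steps above, and disabling SSL verification is only a workaround for the hostname mismatch.

# On the MacOS host: local port 8443 tunnels to the internal Gitlab server
# (in my setup, Proxifier gets this SSH connection through the corporate SOCKS5 proxy)
ssh -N -L 8443:gitlab-server.mydomain.com:443 some-ssh-host.mydomain.com

# Inside the container, just before installing the Gems
git config --global http.sslVerify "false"   # the cert is for *.mydomain.com, not host.docker.internal
GITLAB_GEM_SOURCE=https://host.docker.internal:8443 bundle install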

Related

Azure IoT Edge EFLOW behind proxy

The PC on which I have installed IoT Edge EFLOW is behind a proxy.
I have tried to set up the proxy as I did for a standalone Ubuntu instance, among other things by setting the proxy for Ubuntu (export https_proxy="...") and then setting the proxy for Docker.
Unfortunately, I have noticed the proxy settings are removed each time the host system is restarted.
Furthermore, I cannot modify the Docker configuration file via PowerShell over the SSH connection.
When I try to open any file via nano or vim, the PowerShell window is empty and I cannot type anything; I can only exit by closing the PowerShell window.
Are there any particular steps to follow to start using EFLOW behind a proxy?
Make sure you follow this: Configure deployment manifests "Once your IoT Edge device is configured to work with your proxy server, you need to continue to declare the HTTPS_PROXY environment variable in future deployment manifests."
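To make that concrete, the variable is declared under a module's env section in the deployment manifest. A minimal sketch with a placeholder proxy address, shown for the edgeAgent system module and repeated for edgeHub and any custom module that needs outbound access:

"edgeAgent": {
  "env": {
    "https_proxy": {
      "value": "http://proxy.example.com:3128"
    }
  },
  ...
}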
As part of our remote automation scripts, we've added the EFLOW Autodeploy scripts that will let you deploy the EFLOW VM and all the configurations (including Proxy) just by declaring a JSON file.
Thanks,
Francisco

Is it possible to run ssl offline?

I have a web app deployed in the cloud with SSL (using freeencrypt with nginx).
The app is dockerized.
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Sure, that's entirely possible. There's nothing particularly different about running it locally vs running it remotely: in both cases, you're still interacting with your web app with a browser over a network connection.
The only tricky bit may be in ensuring that you can continue to use the appropriate hostname so that your SSL certificate will validate correctly. The easiest way to do this is probably to modify your /etc/hosts file to map the hostname to the IP address of your webapp container. This will override DNS. Just remember to remove the modification when you're done testing, otherwise you won't be able to reach the remote site!
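For example (assuming the certificate was issued for www.example.com, a placeholder, and that nginx publishes port 443 on localhost via docker-compose), a single line in /etc/hosts on the machine running the browser is enough:

127.0.0.1   www.example.com   # temporary override for local testing - remove when done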

Docker Desktop for Windows configure to use Proxy Auto-Config Script (PAC)

I am using Windows 10 Enterprise Version 1607.
We use a Proxy Auto Config (PAC) script for proxy configuration.
The problem is Docker connectivity. I have Docker 17.12.0-ce (stable release) installed. I'm not able to configure Docker to use the PAC to pull Docker registry images.
Kindly help! I've gone through the official documentation several times, but found nothing helpful. I'm not sure if I'm missing something.
A .pac configuration file actually just returns a proxy server address based on which URL you are visiting.
So you can skip the .pac and set your HTTP proxy directly in Docker.
If you want to know what your proxy server address is, open the .pac file in your browser, read it, and you will find the proxy server address in clear text there.
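If you don't feel like reading the whole PAC file by hand, something like this pulls out the proxy addresses it can return (the wpad URL is a placeholder for wherever your PAC is actually served from; run it from any shell that has curl and grep):

curl -s http://wpad.corp.example.com/proxy.pac | grep -oE 'PROXY [^";]+'

The address you find can then be entered as the manual HTTP/HTTPS proxy in Docker Desktop's settings, since Docker 17.12 won't evaluate the PAC script itself.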

Docker cannot access registry from OpenShift

Here is my whole scenario.
I have a RHEL 7.1 VMware image with the corporate proxy properly configured; accessing stuff over HTTP or HTTPS works properly.
I installed docker-engine and added the HTTP_PROXY setting to /etc/systemd/system/docker.service.d/http-proxy.conf. I can verify the proxy setting is picked up by executing:
sudo systemctl show docker --property Environment
which will print:
Environment=HTTP_PROXY=http://proxy.mycompany.com:myport/ with real values of course.
Pulling and running docker images works correctly this way.
The goal is to work with the binary distribution of openshift-origin. I downloaded the binaries, and started setting up things as per the walkthrough page on github:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
Starting openshift seems to work as I can:
* login via the openshift cli
* create a new project
* even access the web console
But when I try to create an app in the project (also via the cli):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
It fails:
error: can't look up Docker image "centos/ruby-22-centos7": Internal error occurred: Get https://registry-1.docker.io/v2/: dial tcp 52.71.246.213:443: connection refused
I can access (without authentication, though) this endpoint via the browser on the VM or via wget.
Hence I believe Docker fails to pick up the proxy settings. After some searching, I also fear there are iptables settings missing. Referring to:
https://docs.docker.com/v1.7/articles/networking/
But I don't know if I should fiddle with the iptables settings; shouldn't Docker figure that out itself?
Check your HTTPS_PROXY environment property.
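Spelling that out a bit: registry-1.docker.io is contacted over HTTPS, and the daemon reads HTTPS_PROXY separately from HTTP_PROXY, so the drop-in needs both. A sketch reusing the placeholder values from the question (the NO_PROXY entries are just examples of mine):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:myport/"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:myport/"
Environment="NO_PROXY=localhost,127.0.0.1,.mycompany.com"

# then reload systemd and restart the daemon so the change is picked up
sudo systemctl daemon-reload
sudo systemctl restart docker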

Rails app call APIs using proxy

I have subscribed to an API service which provides access based on a static IP (for both live and testing).
Since my development area ISP doesn't provide a static IP, I have enabled API access for my staging machine's IP, which is static. I installed Squid and set up a proxy server on my staging server so that I can use it as a proxy and make calls to the API while I do development.
I am using a Mac for my development, and the Network > Proxies settings won't work system-wide (Terminal). Because of this, I was using trial versions of MacProxy and Proxifier (proxy clients) and all was working fine till the trials expired. Are there any free alternatives to these for Mac?
I tried to set up a proxy by creating an SSH SOCKS proxy and setting http_proxy="xxx" in the terminal. When I check the terminal's IP after setting this, using curl ipecho.net/plain ; echo, it shows the proper IP, but when I run the local Rails development server and try to access the API, the call is rejected with an invalid IP (it shows the non-proxied IP).
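For reference, this is roughly what that setup looks like, and why curl and Rails behave differently (the staging host and port are placeholders; curl honours ALL_PROXY with a socks5:// URL, which is one way to point it at the tunnel):

# local SOCKS5 proxy on port 1080, tunnelled to the staging box
ssh -N -D 1080 deploy@staging.example.com &

# curl honours the proxy variables, so it reports the staging machine's static IP...
export ALL_PROXY=socks5://127.0.0.1:1080
curl ipecho.net/plain ; echo

# ...but a Ruby HTTP client started from the same shell generally ignores them
# unless the proxy is passed to it explicitly in code (see the answers below)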
A free alternative that might solve your problem is a project on GitHub:
sshuttle (read me)
It forwards TCP and DNS requests to a remote SSH server.
The most basic use of sshuttle looks like this:
./sshuttle -r username@sshserver 0.0.0.0/0 -vv
To tunnel all traffic you might do:
./sshuttle --dns -vr ssh_server 0/0
There are also helper functions available here, which can simplify some of the commands.
The system-level proxy settings aren't used by Ruby applications. Typically this is a code-level option passed to the library you are using to make connections.
If you want Savon to use a proxy then you need to pass this to Savon when you create the client:
client = Savon.client(proxy: "http://example.org", ...)
If this call is being made inside a gem, then unless that gem already provides that option you would need to fork it to add the option.
The gem you mention seems to already implement this - its configuration class has a proxy attribute that seems to be passed through to Savon.
