Use LetsEncrypt docker for local development environment - docker

I have a project where I need to set up a dev environment with Let's Encrypt.
A self-signed cert doesn't work for me, as I need to connect from React Native, unless I tinker with the Android/Objective-C code, which I don't think is the right thing to do (see "Ignore errors for self-signed SSL certs using the fetch API in a ReactNative App?").
I am aware there are some Docker projects for this, e.g. https://hub.docker.com/r/jrcs/letsencrypt-nginx-proxy-companion/
I followed along to start the NGINX/Let's Encrypt containers, and bound them to my own container using:
docker run --name loginPOC -e "VIRTUAL_HOST=XPS15"
-e "LETSENCRYPT_HOST=XPS15" -p 8000:80 -d f91893ef3a6f
Note:
f91893ef3a6f (my C# image - web API)
XPS15 (local machine)
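For reference, that companion project expects a front-end nginx-proxy container and the Let's Encrypt companion container to be running before the application container is started; roughly along these lines (the container names here are illustrative, the volume layout follows the project's README):
docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/certs \
  -v /etc/nginx/vhost.d \
  -v /usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

docker run -d --name nginx-proxy-letsencrypt \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion
With that setup, HTTPS is terminated by the proxy on port 443, not on the port published by the application container, which is worth keeping in mind when testing https:// URLs.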
I only get a result when I connect to:
http://xps15:8000/api/values [works fine]
https://xps15:8000/api/values [HTTPS: "This site can't provide a secure connection"]
I then checked my cert status with:
docker exec d9d1b9b5c933 /app/cert_status
Result: no cert status is found.
After some googling I found:
https://letsencrypt.org/docs/certificates-for-localhost/
and
https://community.letsencrypt.org/t/can-i-use-letsencrypt-in-localhost/21741
I have a few questions in mind:
1. Most of the examples use domain names with a top-level domain. My doubt is that perhaps XPS15 is not a valid hostname for a certificate?
I'd appreciate it if anyone knows a workaround. Thanks.

Related

Issue with adding trusted mitmproxy certificate to the docker container

I tried to run mitmproxy via Docker to collect the API requests we send from the app to the server. I had already set the process up locally and started working on putting it into a Docker container.
First I tried to use the standard Docker image: https://hub.docker.com/r/mitmproxy/mitmproxy
I ran the following command:
docker run --rm -it -p 8282:8080 -p 127.0.0.1:8182:8081
mitmproxy/mitmproxy mitmweb --web-host 0.0.0.0 --web-port 8282
And I ran into an issue with the mitmproxy certificate: while trying to collect the HTTPS traffic, the certificate was not trusted.
I then tried to write a custom image based on the standard one via a Dockerfile, adding the corresponding mitmproxy certificate to the container there, but for some reason it doesn't help.
Untrusted certificate example: https://i.stack.imgur.com/nSWb6.png
Browser view after performing some search: https://i.stack.imgur.com/l9RXV.png
Dockerfile:
https://i.stack.imgur.com/P5qOm.png
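Since the Dockerfile is only available as a screenshot, here is a rough sketch of the approach described above (baking the mitmproxy CA into the image's OS trust store); the certificate filename, and the assumption that build steps run as root with the ca-certificates tooling available in the base image, are mine and not taken from the question:
FROM mitmproxy/mitmproxy
# Assumed filename: the CA certificate mitmproxy generated locally
COPY mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy-ca.crt
# Regenerate the system CA bundle so processes inside the container trust it
RUN update-ca-certificates
Note that this only affects trust inside the container; the client that actually sends the HTTPS traffic (a browser or the app under test) has to trust mitmproxy's CA separately, which may be why the screenshots still show an untrusted certificate.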

.NET Core in Docker, expired SSL CA certificate

I have a Docker image, based on the microsoft/dotnet:2.1-runtime image (Linux).
However, since 1/6, the .NET code (and other code) in the container is not able to connect to certain sites that use a COMODO CA certificate.
E.g. the following fails inside the container, due to the expired certificate:
curl https://code.jquery.com
Result:
curl: (60) SSL certificate problem: certificate has expired
I have tried calling update-ca-certificates inside the container, but that does not change anything.
My desktop browsers have somehow updated the CA certs themselves, but the Docker image has not.
I don't really need curl to work (that was just a test to show the issue), but I do need my own .NET code to work (it fails with a similar error). Any ideas how I can force the container to update the CA certs, or otherwise make this work? I obviously do not want to skip certificate validation!
Not sure if this is the answer: after you update the certificates, commit the running container back to an image. The pseudo-commands look like below:
$ docker run -it -p <port>:<port> --name <container name> <image> bash
root@xxxx:<ca-cert folder># update-ca-certificates
Don't exit out of the container.
On the host machine:
$ docker commit <container name> <new image name>
docker commit will create a new image from the running container.
Theory
Probably you are running update-ca-certificates after you start a container instance, using some steps shared in these answers.
This will probably work once, if your docker run command looks something like below:
$ docker run -p 8000:80 <image name> bash
and inside that shell you updated the certificates.
The change only lasts for the lifetime of that container. When the container dies and a new one is created, it is created from the same old image (with the expired cert).
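A more durable variant of the same idea is to refresh the CA store at image build time instead of committing a running container, so every new container already starts with the updated bundle. A minimal sketch, assuming the microsoft/dotnet:2.1-runtime base (which is Debian-based) and that an updated ca-certificates package is available from the distribution:
FROM microsoft/dotnet:2.1-runtime
# Install/upgrade the ca-certificates package and regenerate the bundle at build time
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && update-ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# ...the rest of the image (COPY of the published app, ENTRYPOINT, etc.) as before
Whether this actually clears the expired-chain error depends on the distribution still shipping a fixed ca-certificates package for the base image's release.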

How to browse container files in Docker for Windows? My folder mapping didn't work

I run a Windows machine and I'm super new to Docker. I'm trying to set up Let's Encrypt on my site for Home Assistant purposes.
I created a folder at C:/Docker/LetsEncrypt on my Windows machine and then ran this command.
PS C:\Users\test> docker run -it --rm -p 80:80 --name certbot -v "C:Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt" -v "C:Docker/LetsEncrypt/var/lib/letsencrypt:/var/lib/letsencrypt" -v "C:Docker/LetsEncrypt/var/log/letsencrypt:/var/log/letsencrypt" quay.io/letsencrypt/letsencrypt:latest certonly --standalone --standalone-supported-challenges http-01 --email myemail@mail.com -d mysite.duckdns.org
This is the result I got
Warning: This Docker image will soon be switching to Alpine Linux.
You can switch now using the certbot/certbot repo on Docker Hub.
The standalone specific supported challenges flag is deprecated. Please use the --preferred-challenges flag instead.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
/opt/certbot/venv/local/lib/python2.7/site-packages/josepy/jwa.py:107: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
signer = key.signer(self.padding, self.hash)
-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: a
-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
-------------------------------------------------------------------------------
(Y)es/(N)o: y
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mysite.duckdns.org
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/mysite.duckdns.org/privkey.pem
Your cert will expire on 2018-06-22. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Looks like everything is fine, except that I can't find the files fullchain.pem and privkey.pem on my Windows machine, which are supposed to be inside C:\Docker\LetsEncrypt\etc\letsencrypt.
What am I missing?
Here is the command you executed
PS C:\Users\test> docker run -it --rm -p 80:80 --name certbot
-v "C:Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt"
-v "C:Docker/LetsEncrypt/var/lib/letsencrypt:/var/lib/letsencrypt"
-v "C:Docker/LetsEncrypt/var/log/letsencrypt:/var/log/letsencrypt"
quay.io/letsencrypt/letsencrypt:latest
certonly --standalone --standalone-supported-challenges
http-01 --email myemail@mail.com -d mysite.duckdns.org
Docker allows you to mount directories from your local machine so that, inside the launched container, those same directories appear under different names while their contents are identical. For example, the command above contains
-v "C:Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt"
which is a volume pair: to the left of the : delimiter is a directory local to your machine, C:Docker/LetsEncrypt/etc/letsencrypt, and on the right-hand side is what that same directory is called from the perspective of the container, /etc/letsencrypt. This mapping keeps the container's internal view isolated from any given person's local directory structure. Now look closely at this message:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
That is from the perspective of inside the container, so now you are armed with the knowledge to discover where your missing keys are.
SOLUTION: when inside the container it says
/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
that same file is mapped to your local machine at the location
C:Docker/LetsEncrypt/etc/letsencrypt/live/mysite.duckdns.org/fullchain.pem
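If in doubt, a quick way to sanity-check a volume pair on Docker for Windows is to list both sides of the mapping through a throwaway container (alpine is just a small example image, and the host path is written here with an explicit C:/ prefix):
# inside the container:
docker run --rm -v "C:/Docker/LetsEncrypt/etc/letsencrypt:/etc/letsencrypt" alpine ls -R /etc/letsencrypt
# on the Windows host:
dir C:\Docker\LetsEncrypt\etc\letsencrypt
Both listings should show the same files if the mount is working.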

How to add private registry certs to Docker Machine

I upgraded my Mac (OS X) from an older Docker installation to Docker Toolbox, meaning that I'm now working with Docker Machine, and in the process discovered that certs I had working for push/pull with a private registry are not there, and I can't for the life of me figure out how to get them in place. At the moment when I try a test pull I get the dreaded x509: certificate signed by unknown authority error. I've searched around, looked at issues in Github, but nothing has worked for me. I even tried ssh'ing into the machine VM and manually copying them into /etc/ssl/certs, and various other things, with no luck. And I certainly don't want to get into the "insecure-registry" stuff. This used to work with boot2docker prior to moving to docker-machine.
This seems like a very simple question: I have a couple of .crt files that I need put in the right place so that I can do a push/pull. How does one do this? And secondarily, how can this not be documented anywhere? Can we wish for a docker-machine add-cert command someday?
Thanks for any help, and I hope a good answer here can stick around to assist others who run into this.
Okay so let's imagine I have a registry running at the address: 192.168.188.190:5000 and I have a proper certificate for this address.
I would now run the following commands to install the root certificate into my machine:
docker-machine scp ./dockerCA.crt $MACHINE_NAME:dockerCA.crt
docker-machine ssh $MACHINE_NAME sudo mkdir -p /etc/docker/certs.d/192.168.188.190:5000
docker-machine ssh $MACHINE_NAME sudo mv dockerCA.crt /etc/docker/certs.d/192.168.188.190:5000/dockerCA.crt
Set the variable MACHINE_NAME to whatever the name of your machine is. The machine will now trust your root certificate.
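To verify the certificate landed where the daemon looks for it and that pulls now succeed, something like the following can be run from the host (the image name at the end is just a placeholder):
docker-machine ssh $MACHINE_NAME "ls /etc/docker/certs.d/192.168.188.190:5000"
eval $(docker-machine env $MACHINE_NAME)
docker pull 192.168.188.190:5000/some-image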
Having the same issue, I read the Docker documentation on how to add a certificate to my computer.
As you mentioned that you are on an updated Mac OS X, proceed by doing the following:
Copy the cert file from your docker registry to your hard drive, e.g.
scp user@docker.reg.ip:/path/to/crt/domain.crt /tmp/domain.crt
Add the certificate to your trusted certificates using the following command
sudo security add-trusted-cert -d -r trustRoot \
-k /Library/Keychains/System.keychain /tmp/domain.crt
Restart your local Docker daemon, and now you should be able to upload your local Docker images to the Docker registry.
If you are running on any other operating systems please check this site on how to add trusted root certificates.

Unable to connect to docker hub from China

I'm getting the same thing every time I try to run busybox, either with Docker on Fedora 20 or running boot2docker in VirtualBox:
[me#localhost ~]$ docker run -it busybox Unable to find image
'busybox:latest' locally Pulling repository busybox FATA[0105] Get
https://index.docker.io/v1/repositories/library/busybox/images: read
tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without using a VPN tunnel, so I tried setting a proxy in the network settings to the proxy provided by Astrill when using VPN sharing, but it always times out.
I'm currently in China, where there basically is no Internet due to the firewall. npm, git and wget seem to use the Astrill proxy in the terminal (when it is set in the network settings of Fedora 20), but somehow I either can't get the Docker daemon to use it or something else is wrong.
It seems the answer was not so complicated, according to the following documentation (I had read it before but thought setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
In the Astrill app (I'm sure other providers' applications offer something similar) there is an option for VPN sharing which will create a proxy; it can be found under Settings => VPN Sharing.
For git, npm and wget, setting the proxy in the UI (gnome-control-center => Network => Network Proxy) is enough, but when using sudo it's better to do sudo su, set the environment variables, and then run the command that needs the proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the Docker service (sudo service docker restart) while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it stopped working.
