I tried to run mitmproxy via Docker to collect the API requests our app sends to the server. I had already set up the process locally and started working on putting it into a Docker container.
First, I tried to use the standard Docker image: https://hub.docker.com/r/mitmproxy/mitmproxy
I ran the following command:
docker run --rm -it -p 8282:8080 -p 127.0.0.1:8182:8081 \
    mitmproxy/mitmproxy mitmweb --web-host 0.0.0.0 --web-port 8282
Then I ran into an issue with the mitmproxy certificate: while trying to collect the HTTPS traffic, the certificate was not trusted.
When I tried to build a custom image based on the standard one via a Dockerfile, adding the corresponding mitmproxy certificate to the container there, it didn't help for some reason.
Untrusted certificate example: https://i.stack.imgur.com/nSWb6.png
Browser view after performing some search: https://i.stack.imgur.com/l9RXV.png
Dockerfile:
https://i.stack.imgur.com/P5qOm.png
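Since the Dockerfile is only shown as a screenshot, here is a minimal sketch of what baking the mitmproxy CA into the image's trust store might look like (the certificate file name and export location are assumptions):

```dockerfile
# Hypothetical sketch: add a previously exported mitmproxy CA
# certificate to the image's system trust store.
FROM mitmproxy/mitmproxy
USER root
# mitmproxy-ca-cert.pem is assumed to have been exported from ~/.mitmproxy
COPY mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy-ca.crt
RUN update-ca-certificates
```

Note that this only makes processes inside the container trust the CA; the device or app that actually sends the HTTPS traffic must also trust the mitmproxy CA (for example by installing it from http://mitm.it while proxied), otherwise the certificate warning persists.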
I have a Docker image based on the microsoft/dotnet:2.1-runtime image (Linux).
However, since 1/6, the .NET code (and other code) in the container has not been able to connect to certain sites that use a COMODO CA certificate.
E.g. the following fails inside the container, due to the expired certificate:
curl https://code.jquery.com
Result:
curl: (60) SSL certificate problem: certificate has expired
I have tried calling update-ca-certificates inside the container, but that does not change anything.
My desktop browsers have somehow updated the CA certs themselves, but the Docker container has not.
I don't really need curl to work (that was just a test to demonstrate the issue), but I do need my own .NET code to work (it fails with a similar error). Any ideas how I can force the container to update the CA certs, or otherwise make this work? I obviously do not want to skip certificate validation!
I'm not sure if this is the answer. After you update the certificates, update the Docker image itself. The pseudo commands look like this:
$ docker run -it -p <port>:<port> --name <container name> <image> bash
root@xxxx:<ca-cert folder># update-ca-certificates
Don't exit out of the container.
On the host machine:
$ docker commit <container name> <new image name>
docker commit creates a new image from the running container.
Theory
You are probably running update-ca-certificates after you start a container instance, using some of the steps shared in these answers.
This will work one time, if your docker run command looks something like this:
$ docker run -it -p 8000:80 <image name> bash
and inside the bash session you update the certificates.
But the change only lasts for the lifetime of that container. When the container dies and a new one is created, it is created from the same old image (with the expired cert).
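Given that, a fix that persists is to bake the certificate update into the image itself rather than into a running container. A minimal sketch, assuming a Debian-based image such as microsoft/dotnet:2.1-runtime:

```dockerfile
FROM microsoft/dotnet:2.1-runtime
# Upgrading the ca-certificates package pulls in a current CA bundle
# (dropping the expired AddTrust/COMODO root) and reruns
# update-ca-certificates as part of its package scripts.
RUN apt-get update \
    && apt-get install -y --only-upgrade ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```

Every container started from the rebuilt image then has the updated certificates, with no manual step after docker run.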
I have built a Docker image of IBM WAS 9 Base for Windows. My image is named was9_new. After the image is successfully built, I use the docker run command as follows:
docker run --name was_test -h was_test -p 9043:9043 -p 9443:9443 -d was9_new
It outputs a container ID and then exits.
After that when I try to open the admin console -
https://localhost:9043/ibm/console/login.do?action=secure
I get an error
This site cannot be reached
localhost refused to connect
Is it because the docker run command exits after printing the container ID?
Or does something else need to be done to make the admin console work?
I have referred to instructions here - https://hub.docker.com/r/ibmcom/websphere-traditional/
The only difference is that I have built my own image for Windows.
Printing the container ID and returning to the shell is normal behavior, because you specified -d, which runs the container in the background. You should be able to see your container with docker ps.
How long after startup did you wait before trying to access the admin console? WAS Base can take several minutes to start, depending on system load and other factors; docker printing the ID only means the container was created, not that the server has finished initializing.
Check that 9043 is the adminhost_secure port, or try using just http:// instead of https:// in the admin console URL.
Can you enter the container with docker exec -it was_test bash and attempt to access the URL from within the container with wget https://localhost:9043/ibm/console? If you get a message about not trusting the certificate, the server is accepting connections, but for some reason Docker isn't forwarding your browser's requests into the container.
These steps should help you narrow down whether it is WAS, or docker, that is not cooperating.
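The checks above can be consolidated into a short sequence (container name taken from the question; output will vary by system):

```shell
docker ps                      # is was_test still running, and which ports are mapped?
docker logs was_test           # did the server finish starting, or did it crash?
docker exec -it was_test bash  # shell into the container; then, from inside it:
wget --no-check-certificate https://localhost:9043/ibm/console
```

If the wget inside the container succeeds but the browser on the host cannot connect, the problem is in the port forwarding rather than in WAS itself.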
I have a project where I need to set up a dev environment with Let's Encrypt.
A self-signed cert doesn't work for me, since I need to connect from React Native, unless I tinker with the Android/Objective-C code, which I don't think is the right thing to do. (See: Ignore errors for self-signed SSL certs using the fetch API in a ReactNative App?)
I am aware there are some docker projects: https://hub.docker.com/r/jrcs/letsencrypt-nginx-proxy-companion/
I followed along to start the nginx-letsencrypt containers and bound them to my own container using:
docker run --name loginPOC -e "VIRTUAL_HOST=XPS15" \
    -e "LETSENCRYPT_HOST=XPS15" -p 8000:80 -d f91893ef3a6f
Note: f91893ef3a6f is my C# image (a web API), and XPS15 is my local machine.
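For context, the usual nginx-proxy plus Let's Encrypt companion setup looks roughly like this (a sketch based on the companion image's documented usage, not your exact commands; example.com is a placeholder):

```shell
# 1. the reverse proxy that terminates TLS
docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
    -v /etc/nginx/certs \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# 2. the companion that requests certificates from Let's Encrypt
docker run -d --name letsencrypt-companion \
    --volumes-from nginx-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion

# 3. the application container; the hostname must be a publicly
#    resolvable domain, because Let's Encrypt validates it over the
#    internet -- a bare machine name like XPS15 cannot be validated
docker run -d --name loginPOC \
    -e "VIRTUAL_HOST=example.com" \
    -e "LETSENCRYPT_HOST=example.com" \
    -e "LETSENCRYPT_EMAIL=admin@example.com" \
    f91893ef3a6f
```

With this layout, HTTPS is served by nginx-proxy on port 443 rather than by the app container's own published port.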
I only get a result when I connect to:
http://xps15:8000/api/values [works fine]
https://xps15:8000/api/values [HTTPS] gives "This site can't provide a secure connection"
I then checked my cert status with:
docker exec d9d1b9b5c933 /app/cert_status
Result: no cert status is found.
After some googling I found:
https://letsencrypt.org/docs/certificates-for-localhost/
and
https://community.letsencrypt.org/t/can-i-use-letsencrypt-in-localhost/21741
I have a question in mind:
1. Most of the examples use a domain name with a top-level domain. My doubt is that perhaps XPS15 is not a valid hostname?
I'd appreciate it if anyone knows a workaround. Thanks.
I installed ownCloud with Docker as follows:
docker pull owncloud
docker run -v /var/www/owncloud:/var/www/html -d -p 80:80 owncloud
That works. I also set up a client with access to the server, which works as well.
The issue: when I copy a file to the volume from the command line, it is copied into the container (also good), BUT MY CLIENTS ARE NOT SYNCED. It looks like clients are only synced if the web interface is used.
Any idea, how to fix this?
thanks ralfg
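A commonly suggested explanation (an assumption about this setup, not verified): files copied straight into the volume bypass ownCloud's file cache in the database, so sync clients never learn about them. Triggering a rescan with the occ tool registers them:

```shell
# container name/ID from `docker ps`; run as the web server user so
# the ownership of the scanned entries stays correct
docker exec -u www-data <container> php /var/www/html/occ files:scan --all
```

After the scan completes, the newly copied files should appear to sync clients the same way uploads through the web interface do.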
Background:
To set up a private Docker registry server at path c:\dkrreg on localhost on a Windows 10 (x64) system with Docker for Windows installed, I have successfully tried the following commands:
docker run --detach --publish 1005:5000 --name docker-registry --volume /c/dkrreg:/var/lib/registry registry:2
docker pull hello-world:latest
docker tag hello-world:latest localhost:1005/hello-world:latest
docker push localhost:1005/hello-world:latest
docker pull localhost:1005/hello-world:latest
Push and pull from localhost:1005/hello-world:latest via the command line succeed too.
Issue:
If I use my IP address via docker pull 192.168.43.239:1005/hello-world:latest, it gives the following error in the command shell:
Error response from daemon: Get https://192.168.43.239:1005/v1/_ping: http: server gave HTTP response to HTTPS client
When using a third-party Docker UI manager via docker run --detach portainer:latest, it also shows a connection error:
2017/04/19 14:30:24 http: proxy error: dial tcp [::1]:1005: getsockopt: connection refused
I've tried other things as well. How can I connect to my private registry server, which is localhost:1005, from the LAN using a Docker management UI tool?
At last I found the solution to this, which was tricky.
I generated a CA private key and certificate as ca-cert-mycompany.pem and ca-cert-key-companyname.pem, and configured docker-compose.yml to mount both files read-only (:ro) into these locations: /usr/local/share/ca-certificates, /etc/ssl/certs/, and /etc/docker/certs.d/mysite.com. Copying the certificate only to /usr/local/share/ca-certificates would also have been enough, since Docker ignores duplicate CA certificates; the extra copies are there because Docker folks recommended them in many places. This time I did not run update-ca-certificates in the registry container, though I had been doing so earlier, contrary to what many suggest.
In docker-compose.yml I defined a random string as REGISTRY_HTTP_SECRET, pointed REGISTRY_HTTP_TLS_CERTIFICATE to the server's chained certificate (with the CA certificate appended to the end of it), and REGISTRY_HTTP_TLS_KEY to the server's private key. I had disabled HTTP authentication. I also named the files the way other certificates in the container folder are named, e.g. mysite.com_server-chained-certificate.crt instead of just certificate.crt.
Most importantly: I added the CA certificate to the trusted root store on Windows with certutil.exe -addstore root .\Keys\ca-certificate.crt, then restarted Docker for Windows from the taskbar icon and created the container with docker-compose up -d. Without this step nothing worked.
Now can perform docker pull mysite.com:1005/my-repo:my-tag.
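For illustration, the docker-compose.yml described above might look roughly like this (file names and host paths are assumptions reconstructed from the description, not the original file):

```yaml
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - "1005:5000"
    environment:
      REGISTRY_HTTP_SECRET: "some-long-random-string"
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/mysite.com_server-chained-certificate.crt
      REGISTRY_HTTP_TLS_KEY: /certs/mysite.com_server.key
    volumes:
      - ./certs:/certs:ro
      - ./registry-data:/var/lib/registry
```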
You need to tell your Docker daemon that your registry is insecure: https://docs.docker.com/registry/insecure/
Depending on your OS/system, you need to change the daemon configuration to specify the registry address (in IP:PORT format; use 192.168.43.239:1005 rather than localhost:1005).
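On Linux this typically means adding the registry to /etc/docker/daemon.json and restarting the daemon; on Docker for Windows the same setting lives under Settings → Daemon. A minimal sketch:

```json
{
  "insecure-registries": ["192.168.43.239:1005"]
}
```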
Once you have done that, you should be able to execute the following:
docker pull 192.168.43.239:1005/hello-world:latest
You should also be able to access it via Portainer using 192.168.43.239:1005 in the registry field.
If you want to access your registry using localhost:1005 inside Portainer, you can try running Portainer inside the host network:
docker run --detach --net host portainer:latest