I have a Selenium Grid setup with Chrome on my Ubuntu VM.
I want to test an app with a certificate generated from a test CA system, with a .p12 extension.
How can I import the CA root certificate into Chrome programmatically?
Currently I run the Docker container using:
- docker run -d -p 4444:4444 --shm-size="2g" selenium/standalone-chrome:4.8.0-20230123
My script is able to launch the browser, but the app page is not loaded because the certificate is missing from this browser.
Is there any way to create a new Chrome image using my own certificate?
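One possible approach, sketched below and not verified against this exact image: Chrome on Linux reads user-trusted roots from the NSS database under ~/.pki/nssdb, so a derived image can extract the root certificate from the .p12 with openssl and register it with certutil (from libnss3-tools). The file name test-ca.p12, the nickname test-ca, and the empty import password are placeholders for your own values.

FROM selenium/standalone-chrome:4.8.0-20230123
USER root
# certutil ships in libnss3-tools on the Ubuntu base of this image
RUN apt-get update && apt-get install -y --no-install-recommends libnss3-tools openssl \
    && rm -rf /var/lib/apt/lists/*
USER seluser
# test-ca.p12 is a placeholder; adjust -passin if your bundle has a password
COPY --chown=seluser:seluser test-ca.p12 /tmp/test-ca.p12
RUN openssl pkcs12 -in /tmp/test-ca.p12 -cacerts -nokeys -passin pass: -out /tmp/test-ca.crt \
    && mkdir -p $HOME/.pki/nssdb \
    && certutil -d sql:$HOME/.pki/nssdb -N --empty-password \
    && certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n test-ca -i /tmp/test-ca.crt

Build it with docker build -t selenium-chrome-testca . and point your grid at that image instead of the stock one.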
Related
I tried to run mitmproxy via Docker to collect the API requests we send from the app to the server. I had already set up the process locally and started working on putting it into a Docker container.
First I tried the standard Docker image: https://hub.docker.com/r/mitmproxy/mitmproxy
I ran the following command:
docker run --rm -it -p 8282:8080 -p 127.0.0.1:8182:8081 \
    mitmproxy/mitmproxy mitmweb --web-host 0.0.0.0 --web-port 8282
And I faced an issue with the mitmproxy certificate: while trying to collect HTTPS traffic, the certificate is not trusted.
When I tried to build a custom image based on the standard one through a Dockerfile, I added the corresponding mitmproxy certificate to the container there, but for some reason it doesn't help.
Not trusted certificate example: https://i.stack.imgur.com/nSWb6.png
Browser view after performing some search: https://i.stack.imgur.com/l9RXV.png
Dockerfile:
https://i.stack.imgur.com/P5qOm.png
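For what it's worth, the mitmproxy CA has to be trusted by the client that sends the traffic, not by the proxy container itself. A minimal sketch of installing it into a Debian/Ubuntu-based client image's system trust store (the base image here is a hypothetical example; mitmproxy-ca-cert.pem is the file mitmproxy generates under ~/.mitmproxy):

FROM debian:bullseye-slim
# the system trust store only picks up .crt files in this directory
COPY mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy.crt
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/* \
    && update-ca-certificates

Note that browsers such as Chrome and Firefox keep their own certificate stores, so for browser traffic the certificate may need to be imported there separately.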
I have a Docker image, based on the microsoft/dotnet:2.1-runtime image (Linux).
However, since 1/6, the .NET code (and other code) in the container is not able to connect to certain sites that use a COMODO CA certificate.
E.g. the following fails inside the container, due to the expired certificate:
curl https://code.jquery.com
Result:
curl: (60) SSL certificate problem: certificate has expired
I have tried calling update-ca-certificates inside the container, but that does not change anything.
My desktop browsers have somehow updated the CA certs themselves, but the Docker container has not.
I don't really need curl to work (that was just a test to show the issue), but I do need my own .NET code to work (which causes similar error). Any ideas how I can force the container to update the CA certs, or make this work somehow? I obviously do not want to skip certificate validation!
Not sure if this is the answer. After you update the certificate, update the Docker image itself by committing the running container. The pseudo commands look like below:
$ docker run -it --name <container name> -p <port>:<port> <image> bash
root@xxxx:/# update-ca-certificates
Don't exit out of the container.
On the host machine:
$ docker commit <container name> <new image name>
docker commit will create a new image from the running container.
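A concrete version of the same flow, with hypothetical names (app-image, certfix):

$ docker run -it --name certfix -p 8000:80 app-image bash
root@certfix:/# update-ca-certificates
# in a second terminal on the host, while the container is still running:
$ docker commit certfix app-image:ca-updated
$ docker run -p 8000:80 app-image:ca-updated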
Theory
Probably you are running update-ca-certificates after you start a container instance, using some steps shared in these answers.
This will probably work one time, if your docker run command looks something like below:
$ docker run -it -p 8000:80 <image name> bash
and inside the bash you updated the certificate.
The fix will only last for the lifetime of this container. When this container dies and a new one is created, it is created from the same old image (with the expired cert).
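The durable fix is to bake the update into the image, so every new container starts with current roots. A minimal sketch, assuming a Debian-based image like microsoft/dotnet:2.1-runtime:

FROM microsoft/dotnet:2.1-runtime
# pull the current ca-certificates package and regenerate the trust store
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/* \
    && update-ca-certificates

If the base distro does not ship an updated ca-certificates package yet, the alternative is to blacklist the expired root by prefixing its line with ! in /etc/ca-certificates.conf before running update-ca-certificates.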
I am running Docker on Windows 10 Professional edition. I need to access the container with a browser.
screenshot of running container
I tried to access it by typing http://172.17.0.2:9000 and http://localhost:9000
But my browser says:
This site can’t be reached
172.17.0.2 took too long to respond.
Any ideas to resolve this?
Use simpleDockerUI, which is a Chrome extension, and enter your Docker daemon IP: https://"docker-machine ip":2376
Before connecting via simpleDockerUI, import the Docker certificates
into the Chrome certificate store.
Go to the folder where the Docker certificates are installed (on my machine it was C:\Users\"name"\.docker\machine\machines\default),
then do the following steps:
1) $ cat cert.pem ca.pem >> clientcertchain.pem
2) $ openssl pkcs12 -inkey key.pem -in clientcertchain.pem -export -out import.pfx -passout pass:"password"
3) Now go to Google Chrome settings --> Manage certificates
4) Under Trusted Root Certification Authorities, import ca.pem. It will prompt for a password (same as above).
5) Import import.pfx as a personal certificate under the Personal certificates tab
(it will ask you to set a password, so set it).
To test the connection, open a new tab in Google Chrome and type https://ip:2376/_ping
You should get an OK response.
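The same check can be done from a shell with curl, using the files from the docker-machine folder above:

$ curl --cacert ca.pem --cert cert.pem --key key.pem https://<docker-machine ip>:2376/_ping
OK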
Or use the Portainer image:
docker run -d -p 9000:9000 portainer/portainer
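Note: to manage the local engine, Portainer usually also needs the Docker socket mounted (a variant for Linux containers; otherwise it will ask you to point it at an API endpoint such as tcp://<docker-machine ip>:2376 on first start):

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer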
Your container's web service should bind to host 0.0.0.0 instead of localhost; that way you can access it from your local machine.
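To illustrate the difference (python:3 and port 9000 are just example choices):

# unreachable from the host: the server listens only on the container's loopback
docker run --rm -p 9000:9000 python:3 python -m http.server 9000 --bind 127.0.0.1
# reachable at http://localhost:9000: the server listens on all container interfaces
docker run --rm -p 9000:9000 python:3 python -m http.server 9000 --bind 0.0.0.0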
Simply go to Settings -> General and activate "Expose daemon":
Expose daemon on tcp://localhost:2375 without TLS: Click this option
to enable legacy clients to connect to the Docker daemon. You must use
this option with caution as exposing the daemon without TLS can result
in remote code execution attacks.
https://docs.docker.com/docker-for-windows/
I am trying to run the Cosmos emulator in a Docker container, following the instructions on Docker Hub.
When I run the container:
md $env:LOCALAPPDATA\CosmosDBEmulatorCert 2>null
docker run -v $env:LOCALAPPDATA\CosmosDBEmulatorCert:C:\CosmosDB.Emulator\CosmosDBEmulatorCert -P -t -i -m 2GB microsoft/azure-cosmosdb-emulator
I see the expected output. However, the documentation says that
After starting the Emulator container you will find two forms of the certificate at %LOCALAPPDATA%\azure-cosmosdb-emulator.hostd:
I see nothing, and no files are written to the directory. When I try to import the certificate, there is nothing to import. How do I get the emulator to import the SSL certificate?
I have a project where I need to set up a dev environment with Let's Encrypt.
A self-signed cert doesn't work for me, as I need to connect from React Native, unless I tinker with the Android/Objective-C code, which I don't think is the right thing to do. (Ignore errors for self-signed SSL certs using the fetch API in a ReactNative App?)
I am aware there are some Docker projects: https://hub.docker.com/r/jrcs/letsencrypt-nginx-proxy-companion/
I followed along to start the nginx-letsencrypt container and bound it to my own container using:
docker run --name loginPOC -e "VIRTUAL_HOST=XPS15" \
    -e "LETSENCRYPT_HOST=XPS15" -p 8000:80 -d f91893ef3a6f
Note:
f91893ef3a6f (my C# image, a web API)
XPS15 (my local machine)
I only get a result when I connect to:
http://xps15:8000/api/values [Works fine]
https://xps15:8000/api/values [HTTPS] [Received "This site can’t provide a secure connection"]
I then checked my cert status with:
docker exec d9d1b9b5c933 /app/cert_status
Result: No cert status is found.
After some googling I found:
https://letsencrypt.org/docs/certificates-for-localhost/
and
https://community.letsencrypt.org/t/can-i-use-letsencrypt-in-localhost/21741
I have a few questions in mind:
1. Most of the examples have a top-level domain in their domain names. My doubt is that perhaps XPS15 is not a valid hostname?
I'd appreciate it if anyone knows a workaround. Thanks.