I have an issue where a self-signed certificate has been added to a testing environment.
This means my Selenium Grid, which is hosted in Docker containers, is unable to reach this environment because of the certificate.
I get this error when executing tests:
Message: OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://xxx.xx.x.x:4444/wd/hub/session/0ee03d72bff0d5527cff926121b496bb/url timed out after 60 seconds.
----> System.Net.WebException : The request was aborted: The operation has timed out.
TearDown : OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://xxx.xx.x.x:4444/wd/hub/session/0ee03d72bff0d5527cff926121b496bb/screenshot timed out after 60 seconds.
----> System.Net.WebException : The operation has timed out
The Docker environment is set up with docker-compose, using the chrome and hub images.
The compose file is this:
version: "3"
services:
  selenium-hub:
    image: selenium/hub:latest
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:latest
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
I added the certificates to the host hoping this would be enough, but obviously it is not, since each container is isolated.
My question is: how do I install the certificates into each Chrome node that spins up?
More information
When running curl from within the container I get the following error:
root@b94ed81b0110:/etc# curl https://xxxx.xxxx.co.uk
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
But I have installed the required certificates in the container:
root@b94ed81b0110:/etc# update-ca-certificates
Updating certificates in /etc/ssl/certs...
2 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Adding debian:admin.pem
Adding debian:assessor.pem
done.
done.
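One way to get the certificate into every Chrome node that spins up (a sketch, not something from the original post; the certificate file name is an assumption) is to bake it into a custom node image. Note that Chrome itself does not read /etc/ssl/certs; on Linux it uses the NSS database of the browsing user (seluser in the Selenium images), so the certificate has to be added there with certutil as well:

# Hypothetical Dockerfile extending the Chrome node image; testenv-ca.pem is an assumed file name.
FROM selenium/node-chrome:latest
USER root

# Trust the CA at the OS level (curl, wget, etc.)
COPY testenv-ca.pem /usr/local/share/ca-certificates/testenv-ca.crt
RUN update-ca-certificates

# Chrome reads the NSS store of the user running the browser, not /etc/ssl/certs,
# so add the certificate there too (certutil comes from libnss3-tools)
RUN apt-get update && apt-get install -y libnss3-tools && rm -rf /var/lib/apt/lists/* \
 && mkdir -p /home/seluser/.pki/nssdb \
 && certutil -d sql:/home/seluser/.pki/nssdb -N --empty-password \
 && certutil -d sql:/home/seluser/.pki/nssdb -A -t "C,," -n testenv-ca \
      -i /usr/local/share/ca-certificates/testenv-ca.crt \
 && chown -R seluser:seluser /home/seluser/.pki

USER seluser

The chrome service in the compose file would then point at this image (via build: or a pre-built tag) instead of selenium/node-chrome:latest.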
Related
I am struggling with Go requests between containers.
The issue is that the rest of my containers can send requests to the node container and get responses, but when I send a request from my Go application to node I get this error: "dial tcp 172.18.0.6:3050: connect: connection refused".
So my whole Docker setup is:
version: "3.3"
services:
  ##########################
  ### SETUP SERVER CONTAINER
  ##########################
  node:
    # Tell docker what file to build the server from
    image: myUserName/mernjs:node-dev
    build:
      context: ./nodeMyApp
      dockerfile: Dockerfile.dev
    # The ports to expose
    expose:
      - 3050
    # Port mapping
    ports:
      - 3050:3050
    # Volumes to mount
    volumes:
      - ./nodeMyApp/src:/app/server/src
    # Run command
    # Nodemon for hot reloading (-L flag required for polling in Docker)
    command: nodemon -L src/app.js
    # Connect to other containers
    links:
      - mongo
    # Restart action
    restart: always
  react:
    ports:
      - 8000:8000
    build:
      context: ../reactMyApp
      dockerfile: Dockerfile.dev
    volumes:
      - ../reactMyApp:/usr/src/app
      - /usr/src/app/node_modules
      - /usr/src/app/.next
    restart: always
    environment:
      - NODE_ENV=development
  golang:
    build:
      context: ../goMyApp
    environment:
      - MONGO_URI=mongodb://mongo:27017
    # Volumes to mount
    volumes:
      - ../goMyApp:/app/server
    links:
      - mongo
      - node
    restart: always
So my React app can send a request to "http://node:3050/api/greeting/name" and it gets a response, even though the React app is not linked to the node app. But when the Go app sends a request to the node container it gets a connection refused message: GetJson err: Get "http://node:3050/api/greeting/name": dial tcp 172.18.0.6:3050: connect: connection refused
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// myClient is not shown in the original post; a plain http.Client with a timeout is assumed here.
var myClient = &http.Client{Timeout: 10 * time.Second}

func GetJson(url string, target interface{}) error {
	r, err := myClient.Get(url)
	if err != nil {
		fmt.Println("GetJson err: ", err)
		return err
	}
	defer r.Body.Close()
	return json.NewDecoder(r.Body).Decode(target)
}

type ResultsDetails struct {
	Greeting string `bson:"greatingMessage" json:"greatingMessage"`
	Message  string `bson:"message" json:"message"`
}

func GetGreetingDetails(name string) ResultsDetails {
	var resp ResultsDetails
	GetJson("http://node:3050/api/greeting/"+name, &resp)
	return resp
}
So how do I fix the Go request to another Docker container when Docker doesn't resolve the host by my container name 'node'?
Update:
I forgot to mention the Go application's port: it doesn't run on any port, since it is an application that checks database records. It has no API and therefore doesn't listen on any port.
Could that be the reason my Go application cannot communicate with the other containers?
I ask because I also have another Go application, an API that runs on port 5000, and it communicates with my node application just fine.
Network info:
I checked whether node and golang share the same network, and the answer is yes; all containers share the same network.
(Unrelated to my issue) To anyone who has the "dial tcp connection refused" issue, I suggest going through this guide: https://maximorlov.com/4-reasons-why-your-docker-containers-cant-talk-to-each-other/. Really helpful. If that guide doesn't help you, read below; maybe you are trying to call the container's API right after the containers were built :D
For those who were interested in what was wrong:
Technically, the reason I was getting this error is that I was making the request right after all the containers were built.
I believe there is some delay in the network after the containers are built; that's why the host was throwing "dial tcp 172.18.0.6:3050: connect: connection refused". I ran that test from other containers that could send requests to the node container, and they were all failing right after build time, but when re-requesting a few seconds later everything worked.
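For anyone who wants to guard against that startup window in code, a small retry loop around the request is enough; this is only a sketch built on the GetJson function above, and the attempt count and delay are arbitrary:

// GetJsonWithRetry retries GetJson a few times to ride out the short period
// right after the containers come up; 5 attempts and a 2-second delay are assumed values.
func GetJsonWithRetry(url string, target interface{}) error {
	var err error
	for attempt := 0; attempt < 5; attempt++ {
		if err = GetJson(url, target); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return err
}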
Sorry to bother you guys. I spent 3 days on this issue and was looking in a completely wrong direction. Never thought the issue would be that silly :D
Thanks for your time.
I've met the same error with my Harbor registry service.
After I used docker exec -it to get into the container and check whether the service was available, I finally found that http_proxy had been set.
Remove the http_proxy settings for the Docker service, and then it works like a charm.
Failed on load rest config err:Get "http://core:8080/api/internal/configurations": dial tcp 172.22.0.8:8080: connect: connection refused
$ docker exec -it harbor-jobservice /bin/bash
$ echo $http_proxy $https_proxy
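For reference, what "remove the http_proxy settings" typically means on a systemd host is dropping the proxy environment from the Docker service and restarting it; the drop-in path below is the usual default and may differ on your system:

# See whether the docker service has proxy variables set
systemctl show --property=Environment docker
# The proxy is usually configured in a drop-in like this one; remove or edit it
sudo rm /etc/systemd/system/docker.service.d/http-proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart docker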
I set up a Shopware 6 project with ddev. Now I want to write Cypress tests for one of my plugins. The Shopware test suite starts a Node Express server on port 8005 in the web container. I have configured the port for ddev so that I can open the Express endpoint in my browser: http://my.ddev.site:8005/cleanup. That is working.
For cypress I have created a new ddev container with a new docker-compose file:
version: '3.6'
services:
  cypress:
    container_name: ddev-${DDEV_SITENAME}-cypress
    image: cypress/included:4.10.0
    tty: true
    ipc: host
    links:
      - web:web
    environment:
      - CYPRESS_baseUrl=https://web
      - DISPLAY
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      # Project root
      - ../shopware:/project
      # Storefront and Administration
      - ../shopware/vendor/shopware/platform/src/Storefront/Resources/app/storefront/test/e2e:/e2e-Storefront
      - ../shopware/vendor/shopware/platform/src/Administration/Resources/app/administration/test/e2e:/e2e-Administration
      # Custom plugins
      - ../shopware/custom/plugins/MyPlugin/src/Resources/app/administration/test/e2e:/e2e-MyPlugin
      # for Cypress to communicate with the X11 server pass this socket file
      # in addition to any other mapped volumes
      - /tmp/.X11-unix:/tmp/.X11-unix
    entrypoint: /bin/bash
I can now successfully open the Cypress interface and I see my tests. The problem is that before each Cypress test is executed, the Express endpoint is called (with the URL from above), and the Cypress container seems to have no access to that endpoint. This is the output:
cy.request() failed trying to load:
http://my.ddev.site:8005/cleanup
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: connect ECONNREFUSED 127.0.0.1:8005
-----------------------------------------------------------
The request we sent was:
Method: GET
URL: http://my.ddev.site:8005/cleanup
So I can call this endpoint in my browser, but Cypress can't. Is there some configuration missing in the Cypress container so it can reach port 8005 on the web container?
You need to add this to the cypress service:
external_links:
- "ddev-router:${DDEV_HOSTNAME}"
and then your http URL will be accessed through the router via ".ddev.site".
If you need a trusted https URL it's a little more complicated, but for http this should work fine.
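In the compose file from the question that would sit inside the cypress service, roughly like this (only the relevant keys shown):

services:
  cypress:
    container_name: ddev-${DDEV_SITENAME}-cypress
    image: cypress/included:4.10.0
    external_links:
      - "ddev-router:${DDEV_HOSTNAME}"
    # ... the remaining keys (environment, volumes, entrypoint) stay unchanged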
I have tried a lot, but I can't find a solution to this problem.
I am running a Sonatype Nexus (3.21.1-01) Docker image on a CentOS 7 server behind an A10 vThunder proxy.
docker login and docker pull work great, but docker push fails with EOF after some retrying.
Here are the relevant routes:
docker image port 8081 > my.server:8081
docker image port 8443 > my.server:8443
proxy.domain.local:443 > my.server:8081
proxy.domain.local:8443 > my.server:8443
I have created a Docker repository in Nexus which has the HTTP connector exposed on 8443.
The proxy is exposed under SSL with a self-signed certificate.
The client's /etc/docker/daemon.json file contains the insecure registry options:
"insecure-registries": ["proxy.domain.local:8443","proxy.domain.local"]
Here is the situation:
If I push from the client an image whose layers all already exist on the remote server (but are missing from the Nexus repository), it works.
If I try the same but add some difference to the image (such as a new LABEL), it fails in this way:
(9c27e219663c: Layer already exists
Patch https://proxy.domain.local:8443/v2/test4/blobs/uploads/6862fe60-d63b-4942-bbb6-f403307e677a: EOF)
If I push directly from the my.server machine, pointing to localhost:8443, it works.
If I push from the client machine an image with new layers, it fails in this way after some retrying (the same behavior with smaller images):
docker push proxy.domain.local:8443/ara
The push refers to repository [proxy.domain.local:8443/ara]
edb7a4f74e22: Retrying in 8 seconds
de421654540d: Retrying in 8 seconds
-------------
The push refers to repository [proxy.domain.local:8443/ara]
edb7a4f74e22: Pushing [==================================================>] 172.6MB/172.6MB
de421654540d: Pushing [==================================================>] 200.8MB/200.8MB
EOF
This is a summary of what happens in Wireshark:
the.client my.server HTTP 316 GET /v2/ HTTP/1.1
...
my.server the.client HTTP 654 HTTP/1.1 401 Unauthorized (application/json)
...
the.client my.server HTTP 442 HEAD /v2/alpine-test/blobs/sha256:95f5ecd24e438e09033c8e69ec136079f8774ab8284f1431f5433a829054b5e7 HTTP/
(asking Nexus whether the blob is already uploaded)
my.server the.client HTTP 493 HTTP/1.1 404 Not Found
(it isn't)
the.client my.server HTTP 437 POST /v2/alpine-test/blobs/uploads/ HTTP/1.1
(so it starts to upload the blob)
my.server the.client HTTP 584 HTTP/1.1 202 Accepted
...
the.client my.server HTTP 437 POST /v2/alpine-test/blobs/uploads/ HTTP/1.1
...
my.server the.client HTTP 584 HTTP/1.1 202 Accepted
..
and so on, with some FIN/ACK in the middle, until the client stops sending...
** in the Nexus server log there is absolutely no trace of this **
This is the Nexus docker-compose:
services:
  nexus:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        NEXUS_UID: ${NEXUS_UID}
        NEXUS_GID: ${NEXUS_GID}
    restart: always
    environment:
      - NEXUS_UID_GID=${NEXUS_UID_GID}
      - HOSTNAME_DOCKER_NEXUS=${HOSTNAME_DOCKER_NEXUS}
    ports:
      - "8081:8081"
      - "8443:8443"
    user: ${NEXUS_UID_GID}
    hostname: ${HOSTNAME_DOCKER_NEXUS}
    volumes:
      - /var/nexus-data:/nexus-data
      - /etc/hosts:/etc/hosts
      - /var/run/docker.sock:/var/run/docker.sock
Can you help me?
I was thinking about a possible nexus-docker-user permission issue on the local machine / Docker binary permissions (if I push from localhost it works, yes, but the image is already stored on the system of course), but I think that is not very likely.
I was also thinking about a proxy configuration issue (more likely), but I don't know much about proxies.
[Workaround]
Because I could not figure out the problem, I ended up making the proxy transparent and configuring Nexus to serve HTTPS directly through its jetty.xml, jetty-https.xml and nexus.properties.
Serving HTTPS directly from Jetty, instead of letting the proxy upgrade the connection, solved the above problem.
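For anyone wanting to follow the same route: in Nexus 3 the HTTPS connector is enabled by adding jetty-https.xml to the Jetty config list and setting an SSL port in $data-dir/etc/nexus.properties, with the keystore referenced from jetty-https.xml; the lines below are only an illustration of that change, not a complete configuration:

# nexus.properties (illustrative values)
application-port-ssl=8443
nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-https.xml,${jetty.etc}/jetty-requestlog.xml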
I'm a little bit desperate and starting to go mad.
I tried to configure my GitLab instance (Omnibus) to work with an external private Docker image registry. Initially I thought this should be a relatively easy task, but now I'm totally confused.
My initial installation looked like this:
generate a self-signed cert
a clean instance of the Docker registry on Ubuntu 18.04 with docker-compose and nginx, secured with Let's Encrypt on registry.domain.com
I used the following script:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: token
      REGISTRY_AUTH_TOKEN_REALM: https://registry.domain.com:5000/auth
      REGISTRY_AUTH_TOKEN_SERVICE: "Docker registry"
      REGISTRY_AUTH_TOKEN_ISSUER: "gitlab-issuer"
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /etc/gitlab/registry-certs/registry-auth.crt
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./auth:/auth
      - ./data:/data
a clean instance of GitLab on Ubuntu 18.04, secured with Let's Encrypt on gitlab.domain.com
some changes in gitlab.rb like:
registry_external_url 'https://registry.domain.com/'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "registry.domain.com"
gitlab_rails['registry_port'] = "5000"
gitlab_rails['registry_api_url'] = "https://registry.prismstudio.space:5000"
gitlab_rails['registry_key_path'] = "/etc/gitlab/registry-certs/registry-auth.key"
gitlab_rails['registry_issuer'] = "gitlab-issuer"
After gitlab-ctl reconfigure I receive a Let's Encrypt error:
letsencrypt_certificate[gitlab.domain.net] (letsencrypt::http_authorization line 5) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 25) had an error: RuntimeError: ruby_block[create certificate for gitlab.domain.net] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/acme/resources/certificate.rb line 108) had an error: RuntimeError: [gitlab.domain.com] Validation failed, unable to request certificate
I have literally tried everything, but nothing helps.
Is there any straightforward way to set up a GitLab server with an external Docker registry? How do I configure it properly? I'm open to burning everything to the ground and setting it up again with a working configuration.
I followed the docker manuals for setting up a private registry, and acquired a Let's Encrypt certificate. This is my docker-compose.yml:
version: '2'
services:
  registry:
    restart: always
    image: registry:2.3.1
    ports:
      - 5000:5000
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/live/git.xxxx.com/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/live/git.xxxx.com/privkey.pem
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
    volumes:
      - ./data:/var/lib/registry
      - /etc/letsencrypt:/certs
      - ./auth:/auth
This is my curl command and result:
curl https://git.xxxx.com:5000/v2/
<htpasswd auth succeeds>
{}
Also Chrome/Firefox show green and can reach this URL without certificate errors.
But docker login keeps failing.
docker login https://git.xxxx.com:5000/v2/
Username: raarts
Password:
Email:
Error response from daemon: invalid registry endpoint https://git.xxxx.com:5000/v2/: Get https://git.xxxx.com:5000/v2/: x509: certificate signed by unknown authority. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry git.xxxx.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/git.xxxx.com:5000/ca.crt
Using docker 1.10.3
I fixed the problem, and it's embarrassing. I'd rather not talk about it if it weren't for the stupid and confusing error message I got.
On my own laptop I had pointed git.xxxx.com to another IP, so Docker could not actually reach the registry server; connections were refused.
But the error message really pointed me in the wrong direction and cost me several hours of my time.
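If you hit the same "certificate signed by unknown authority" message, it can be worth checking what your machine actually resolves for the registry host before digging into certificates; the hostname below is the one from this post:

# Which IP does this machine resolve for the registry?
getent hosts git.xxxx.com
grep git.xxxx.com /etc/hosts
# Can the endpoint actually be reached from here?
curl -v https://git.xxxx.com:5000/v2/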