Docker secrets not getting loaded while using docker-compose file - docker

I want to use Docker's secrets mechanism through a docker-compose file. I created the docker-compose file below, but I am unable to connect to the peer: the peer service exits on startup with the error
2023-02-16 04:31:33.807 UTC [nodeCmd] serve -> FATA 024 Failed to set TLS client certificate (error parsing client TLS key pair: tls: failed to find any PEM data in certificate input)
My docker-compose file is as follows. I am running Docker in swarm mode. Can anyone tell me what is going wrong here? I followed inputs from here.
version: '3'
networks:
  basic:
secrets:
  P0ORG1SRVCRT:
    file: ./P0ORG1SRVCRT.txt
services:
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:$IMAGE_TAG
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_basic
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_CHAINCODE_LOGGING_LEVEL=INFO
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/run/secrets/P0ORG1SRVCRT
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:8050
      - CORE_PEER_LISTENADDRESS=0.0.0.0:8050
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:8051
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:8050
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:8050
      - CORE_PEER_OPERATIONS_LISTENADDRESS=0.0.0.0:9335
      - CORE_PEER_METRICS_PROVIDER=prometheus
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    secrets:
      - P0ORG1SRVCRT
    command: peer node start
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 8050:8050
      - 9335:9335
    depends_on:
      - orderer0.consortiumorderer.example.com
    networks:
      - basic

The error message suggests that the issue is with the TLS client certificate for the peer. Specifically, it says "Failed to set TLS client certificate (error parsing client TLS key pair: tls: failed to find any PEM data in certificate input)".
It's possible that the content of the secret file P0ORG1SRVCRT.txt is not in the correct format, or that it's not being mounted correctly into the container. Here are a few things you can try to troubleshoot:
Check the content of the P0ORG1SRVCRT.txt file to ensure that it contains the correct PEM-encoded certificate for the peer. You can use a text editor to open the file and inspect its content (see the commands below).
Make sure that the secrets section in the docker-compose.yaml file is properly defined and that the path to the secret file is correct. You can run the docker-compose config command to see if there are any issues with the syntax or configuration of the file.
Check the logs of the peer container to see if there are any other error messages that might provide additional context. You can use the docker logs command to view the logs, like this:
docker logs peer0.org1.example.com
If the issue persists, you can try mounting the secret file as a volume directly, instead of using the secrets mechanism. For example:
volumes:
  - ./P0ORG1SRVCRT.txt:/run/secrets/P0ORG1SRVCRT
This will mount the secret file at the expected path inside the container, which should allow the TLS certificate to be loaded correctly.
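As a minimal sketch of those checks, assuming openssl is installed on the host; note that with docker stack deploy the container_name is ignored and the real container name gets a stack prefix and task suffix, so check docker ps first:

# 1. Verify the secret source file is valid PEM (it must contain the
#    -----BEGIN CERTIFICATE----- / -----END CERTIFICATE----- markers)
openssl x509 -in ./P0ORG1SRVCRT.txt -noout -subject -dates

# 2. Validate the compose file and see how the secret is resolved
docker-compose config

# 3. Once the peer container is up, confirm the secret was mounted into it
docker exec peer0.org1.example.com head -n 1 /run/secrets/P0ORG1SRVCRT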

Related

use sensitive data in docker containers with docker compose

I have a docker container that is running NGINX. Within the container I have an SSL cert that is currently being copied into the container. I would like to avoid using this approach and instead have the value of the SSL cert being passed in, so it is not stored on the container. In the docker-compose file, I have specified the public and private portions of the SSL certs as volumes and I have removed the commands from the Dockerfile that copies the values onto the image. However I am getting an error when running docker-compose up that the certificate cannot be loaded due to it not existing. Any advice on how I can accomplish this would be helpful. Thanks!
Docker Compose
version: "3"
services:
nginx:
container_name: nginx
build: nginx
volumes:
- ./:/var/www
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- ./nginx/files/localhost.crt:/etc/nginx/ssl/nginx.crt
- ./nginx/files/localhost.key:/etc/nginx/ssl/nginx.key
ports:
- 80:80
- 443:443
networks:
- MyNetwork
networks:
MyNetwork:
Dockerfile
FROM nginx:latest
COPY nginx.conf /etc/nginx/conf.d/
Error
[emerg] 1#1: cannot load certificate "/etc/nginx/nginx/files/localhost.crt": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/nginx/files/localhost.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
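The error path /etc/nginx/nginx/files/localhost.crt suggests nginx.conf may still reference the host-relative path ./nginx/files/localhost.crt (which nginx resolves against its /etc/nginx prefix) rather than the mounted /etc/nginx/ssl/ paths. A quick way to check, as a sketch assuming the layout shown above:

# See which certificate paths nginx.conf actually references
grep -n ssl_certificate nginx/nginx.conf

# If the container stays up, confirm the certs were mounted where expected
docker-compose exec nginx ls -l /etc/nginx/ssl/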

How can I connect ray to laravel sail docker container using the forge user?

I am trying to connect my ray application to my project that is running on a docker container. I am using a Macbook Pro. I've followed the instructions given at https://spatie.be/docs/ray/v1/environment-specific-configuration/docker.
I've also followed all of the ray installation instructions, by entering
composer require spatie/laravel-ray
setting my ray.php's remote_path to
'/var/www/html'
and its local_path to
'/Users/me/test-projects/project/'
I've also updated my project's docker-compose.yaml to include the extra_hosts: line that https://spatie.be/docs/ray/v1/environment-specific-configuration/docker recommends.
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '${APP_PORT:-8000}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    extra_hosts:
      - "host.docker.internal:host-gateway"
I do have custom ssh entries if that matters. I have an id_rsa_personal key and an id_rsa_work key instead of just the id_rsa key that ray's "servers" configuration looks for by default, but even when changing the "private key path" to an existing private key, I get
Error: Error invoking remote method 'connectSsh': Error: connect ECONNREFUSED 127.0.0.1:22
If there is anyone out there that has any ideas of why I would not be able to make ray connect to the docker instance, I'd be very grateful.
update
After some digging and die'ing through ray's source code, I'm finding that when ray tries to cURL to my host machine, it gets: Could not resolve host: host.docker.internal. I don't know why this is, because, as I said, I do have extra_hosts: - "host.docker.internal:host-gateway" set in my docker-compose.yaml file.
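One thing worth checking, as a sketch (assuming the service name laravel.test from the compose file above, that getent is available in the container, and noting that the host-gateway value requires Docker Engine 20.10 or newer):

# Recreate the container so the extra_hosts entry is applied
docker-compose up -d --force-recreate laravel.test

# From inside the container, check that the alias actually resolves
docker-compose exec laravel.test getent hosts host.docker.internal

If getent returns nothing, the extra_hosts entry never made it into /etc/hosts, which would explain the "Could not resolve host" error from ray's cURL call.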

traefik, docker, and SSL: "Error reading configuration file: no such file or directory"

I'm trying to set up a very basic reverse proxy to start some experimentation, but embarrassingly, I can't get even a very simple configuration to work.
In my (otherwise empty) home directory, I have 4 files: docker-compose.yml (defining reverse-proxy), certs.toml and my two certificate files. If I run my reverse-proxy by itself, it generates self-signed certificates and works fine. However, if I try to feed it my actual certificates, it throws the error:
Cannot start the provider *file.Provider: error reading configuration file: certs.toml - open certs.toml: no such file or directory
docker-compose.yml:
version: '3'
services:
  reverse-proxy:
    image: traefik:latest
    container_name: "reverse-proxy"
    command:
      - --entrypoints.web.address=:80
      - --providers.docker
      - --entrypoints.web-secure.address=:443
      - --providers.file.filename=certs.toml
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
certs.toml:
[tls.stores.default.defaultCertificate]
certFile = "<domain>.crt"
keyFile = "<domain>.key"
And the crt (or I could use pem?) and key files are as one would expect.
So, what really basic mistake am I making here? :)
It looks like certs.toml is only on your Linux host and not in your container.
You should add it to your container: check the volumes section of your docker-compose.yml.
Also be careful about relative vs absolute paths in your docker-compose.yml (both for the volumes and for --providers.file.filename).
Under volumes you can include
- "/var/run/docker.sock:/var/run/docker.sock"
- "${PWD}/traefik.toml:/etc/traefik/traefik.toml"
- "${PWD}/certs:/certs"
and include the self-signed certificate in a certs folder in the current working directory.
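Applied to the original setup, a minimal sketch could look like the following (the /etc/traefik/dynamic/ target path is only an illustration; any absolute container path works as long as --providers.file.filename points at the same location):

services:
  reverse-proxy:
    image: traefik:latest
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --providers.docker
      # point the file provider at the mounted location, not a relative path
      - --providers.file.filename=/etc/traefik/dynamic/certs.toml
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      # mount the dynamic configuration and the certificate files into the container
      - "./certs.toml:/etc/traefik/dynamic/certs.toml:ro"
      - "./<domain>.crt:/certs/<domain>.crt:ro"
      - "./<domain>.key:/certs/<domain>.key:ro"

The certFile and keyFile entries inside certs.toml would then also need to use the container paths, e.g. /certs/<domain>.crt and /certs/<domain>.key.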

How do I reference a self-signed SSL certificates for traefik v2 in a docker-compose file?

There is very limited documentation for referencing self-signed certificates for Træfik v2 in the docker-compose YAML file. Here is how you can do it for Let's Encrypt:
https://github.com/containous/blog-posts/blob/master/2019_09_10-101_docker/docker-compose-07.yml#L11-L14
version: "3.3"
services:
traefik:
image: "traefik:v2.0.0"
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker
- --api
- --certificatesresolvers.leresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
- --certificatesresolvers.leresolver.acme.email=your#email.com
- --certificatesresolvers.leresolver.acme.storage=/acme.json
- --certificatesresolvers.leresolver.acme.tlschallenge=true
But I tried to check the documentation, and I have not seen any way to reference a self-signed certificate in the docker-compose file without having a toml file.
I have tried this:
version: "3.3"
services:
traefik:
image: "traefik:v2.0.0"
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker
- --api
- --providers.docker.tls.cert=/etc/certs/server.crt
- --providers.docker.tls.key=/etc/certs/server.key
But I got the following error:
Failed to retrieve information of the docker client and server host: error during connect: Get https://%2Fvar%2Frun%2Fdocker.sock/v1.24/version: http: server gave HTTP response to HTTPS client" providerName=docker
Here are resources I have used that do not provide any way to set up self-signed certificates to enable HTTPS for Træfik v2 in the docker-compose YAML file:
https://docs.traefik.io/reference/static-configuration/cli/
https://docs.traefik.io/https/tls/#user-defined
I do see this on this page: https://docs.traefik.io/https/tls/#user-defined
tls:
  certificates:
    - certFile: /path/to/domain.cert
      keyFile: /path/to/domain.key
But that is for the file provider's YAML configuration file, and I need to convert it to the docker-compose YAML equivalent, the same way it is done above for Let's Encrypt.
It seems this is not doable at the moment. Someone posted a very similar question on the Træfik community forum.
The certificates you are passing as flags (providers.docker.tls.cert and providers.docker.tls.key) are only useful if Træfik listens to Docker events via a secure TCP endpoint instead of the Unix socket, which is not what you want.
It would be cool to have everything configured in a single docker-compose file, but unfortunately the self-signed certificate configuration must be stored in a separate file.
Here is an example for the record:
File docker-compose.yml
traefik:
  image: traefik:v2.1
  command:
    - --entrypoints.web.address=:80
    - --entrypoints.websecure.address=:443
    - --providers.docker=true
    - --providers.file.directory=/etc/traefik/dynamic_conf
    - --providers.file.watch=true
  ports:
    - 80:80
    - 443:443
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./certs/:/certs/:ro
    - ./traefik.yml:/etc/traefik/dynamic_conf/conf.yml:ro
web:
  image: nginx:1.17.8-alpine
  labels:
    # http with redirection
    - traefik.http.middlewares.redirect-middleware.redirectscheme.scheme=https
    - traefik.http.routers.web-router.entrypoints=web
    - traefik.http.routers.web-router.rule=Host(`your-domain.net`)
    - traefik.http.routers.web-router.middlewares=redirect-middleware
    # https
    - traefik.http.routers.websecure-router.entrypoints=websecure
    - traefik.http.routers.websecure-router.tls=true
    - traefik.http.routers.websecure-router.rule=Host(`your-domain.net`)
File traefik.yml
tls:
  certificates:
    - certFile: /certs/awx.afone.priv.crt
      keyFile: /certs/awx.afone.priv.key
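Once the stack is up, one way to check that Træfik is actually serving the self-signed certificate (assuming the stack runs on the local machine and your-domain.net is the Host rule used above) is:

# Force your-domain.net to resolve to the local Træfik instance and inspect the served certificate
curl -vk --resolve your-domain.net:443:127.0.0.1 https://your-domain.net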

Odoo example with docker compose does not work

This is the official Odoo docker-compose example file:
version: '2'
services:
  web:
    image: odoo:10.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
  db:
    image: postgres:9.4
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
When I run 'docker-compose up -d', it outputs the following error:
ERROR: for test2_db_1 Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: for db Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: Encountered errors while bringing up the project.
The docker-compose.yml file is inside the test2 directory.
This is Odoo with Docker docs: https://hub.docker.com/_/odoo/
What can be happening?
Thanks!
Whenever you see errors related to veth interfaces, it usually means that the Docker service has gotten into a state where network allocation doesn't work:
ERROR: for test2_db_1 Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: for db Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: Encountered errors while bringing up the project.
You should restart the Docker service in such cases. If that doesn't help, restart the whole system.
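For example, on a systemd-based host (the exact commands depend on your distribution):

# Restart the Docker daemon, then bring the project up again
sudo systemctl restart docker
docker-compose up -d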
