I'm running Grafana in a Docker container on my NAS. Everything is fine when using HTTP.
However, the container fails to start when I set up Grafana for HTTPS, because the certificate file can't be found according to the Docker log.
I created a self-signed certificate using OpenSSL in order to use Grafana with HTTPS.
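For reference, a self-signed cert/key pair can be generated with something like this (the subject name is just a placeholder):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout grafana.key -out grafana.crt \
  -subj "/CN=nas.local"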
I modified the Docker script to override the server section via environment variables for HTTPS and defined the paths to the cert and key files.
INFO[12-08|12:28:50] Config overridden from Environment variable logger=settings var="GF_SERVER_PROTOCOL=https"
INFO[12-08|12:28:50] Config overridden from Environment variable logger=settings var="GF_SERVER_CERT_FILE=/share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt"
INFO[12-08|12:28:50] Config overridden from Environment variable logger=settings var="GF_SERVER_CERT_KEY=/share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.key"
As far as I can see, this looks fine; however, for some unknown reason the cert file isn't found, even though it is available at the defined path.
INFO[12-08|12:28:50] HTTP Server Listen logger=http.server address=0.0.0.0:3000 protocol=https subUrl= socket=
EROR[12-08|12:28:50] Stopped HTTPServer logger=server reason="Cannot find SSL cert_file at /share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt"
When I check the path, I see it is valid:
[/share/CACHEDEV2_DATA/Container/grafana] # ls -l /share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt
-rw-r--r-- 1 admin administrators 1228 2019-12-08 10:55 /share/CACHEDEV2_DATA/Container/grafana/config/ssl/grafana.crt
Any idea what could be the reason for this?
Could the certificate be invalid, and the error message just misleading?
Many thanks for a hint :)
Stefan
Edit:
The script I use to start the Docker container:
GRAFANA_DIR_CONF=$(readlink -f ./config)
GRAFANA_VER='latest'
docker run -it \
--name=grafana \
-v $GRAFANA_DIR_CONF:/var/lib/grafana \
-v /etc/localtime:/etc/localtime:ro \
-e "GF_SECURITY_ALLOW_EMBEDDING=true" \
-e "GF_USERS_ALLOW_SIGN_UP=false" \
-e "GF_AUTH_ANONYMOUS_ENABLED=true" \
-e "GF_AUTH_BASIC_ENABLED=false" \
-e "GF_SERVER_PROTOCOL=https" \
-e "GF_SERVER_CERT_FILE=$GRAFANA_DIR_CONF/ssl/grafana.crt" \
-e "GF_SERVER_CERT_KEY=$GRAFANA_DIR_CONF/ssl/grafana.key" \
-d \
--restart=always \
-p 3000:3000 \
grafana/grafana:$GRAFANA_VER
[/share/CACHEDEV2_DATA/Container/grafana/config/ssl] # ls -l
total 16
-rw-r--r-- 1 admin administrators 1228 2019-12-08 10:55 grafana.crt
-rw-r--r-- 1 admin administrators 1702 2019-12-08 10:44 grafana.key
[/share/CACHEDEV2_DATA/Container/grafana/config/ssl] #
You are using a volume for the configuration folder, so the correct path to the cert/key inside the container is:
-e "GF_SERVER_CERT_FILE=/var/lib/grafana/ssl/grafana.crt" \
-e "GF_SERVER_CERT_KEY=/var/lib/grafana/ssl/grafana.key" \
I am trying to create a secure Docker registry to be used inside a development kind cluster. I am going to use one container for the registry and 3 other containers for kind workers. In order to be consistent with the production environment, I want to use TLS, so I created a self-signed certificate for the Docker registry. I connected the containers using a Docker network. However, when I create a deployment based on an image from that registry, I get an "x509: certificate signed by unknown authority" error.
I used this tutorial
containerdConfigPatches: # Enable a local image registry, placeholders automatically replaced in bootstrap script -- https://kind.sigs.k8s.io/docs/user/local-registry/
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.configs.my-registry.tls]
    cert_file = "/etc/docker/certs.d/my-registry/domain.crt"
    key_file = "/etc/docker/certs.d/my-registry/domain.key"
But it does not seem to work.
My kind version:
kind v0.17.0 go1.20 linux/amd64
The command I use to create the registry:
docker run -d \
--restart=always \
--name my-registry \
-v `pwd`/auth:/auth \
-v `pwd`/certs:/certs \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_ADDR=0.0.0.0:80 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 7443:80 \
registry:2
You are using a self-signed certificate for your docker registry instead of a certificate issued by a trusted certificate authority (CA). The docker daemon does not trust the self-signed certificate, which is causing the x509 error.
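One common workaround (a sketch, assuming a kind cluster named dev and the certificate at certs/domain.crt) is to copy the self-signed certificate into each kind node and add it to the node's trust store:
for node in $(kind get nodes --name dev); do
  # kind nodes are Docker containers, so docker cp/exec work on them
  docker cp certs/domain.crt "$node":/usr/local/share/ca-certificates/my-registry.crt
  docker exec "$node" update-ca-certificates
  # restart containerd so it picks up the new trust store
  docker exec "$node" systemctl restart containerd
done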
The x509 error may also occur due to expiration of the current certificate, a changed hostname, or other changes.
Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary. The certificates in a kubeconfig file are base64-encoded. The base64 --decode command can be used to decode a certificate, and openssl x509 -text -noout can be used to view the certificate information.
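For example, to inspect the client certificate embedded in a kubeconfig (a sketch assuming a kubeadm-style config with an inline client-certificate-data field):
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 --decode | openssl x509 -text -noout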
Unset the KUBECONFIG environment variable using:
unset KUBECONFIG
Or set it to the default KUBECONFIG location:
export KUBECONFIG=/etc/kubernetes/admin.conf
Another workaround is to overwrite the existing kubeconfig for the "admin" user:
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
For more information, refer to the documentation.
I am trying to set up and run a Docker image that runs a game server.
My docker command looks like this:
docker run --name 7dtd -d -t \
-p 26900-26905:26900-26905/tcp \
-p 26900-26905:26900-26905/udp \
-e SEVEN_DAYS_TO_DIE_UPDATE_CHECKING="1" \
-e SEVEN_DAYS_TO_DIE_CONFIG_FILE="/home/7dtd/server/serverconfig.xml" \
-e SEVEN_DAYS_TO_DIE_BRANCH="latest_experimental" \
--restart unless-stopped \
-v /home/7dtd/server:/steamcmd/7dtd \
-v /home/7dtd/data:/root/.local/share/7DaysToDie \
didstopia/7dtd-server
I keep receiving an error when this image starts, saying that it can't find the config file at the given path.
I changed the permissions of that file and its location to be readable/writable by everyone, but it still does not fix it.
Is it because the Docker image cannot access my file system from outside its own container?
If so, how can I have this Docker image access files/folders from outside its container?
Here is the error output:
2021-10-06T16:59:53 0.251 INF Command line arguments: /steamcmd/7dtd/7DaysToDieServer.x86_64 -quit -batchmode -nographics -dedicated -configfile=/home/7dtd/server/serverconfigreal.xml
2021-10-06T16:59:53 0.263 ERR ====================================================================================================
2021-10-06T16:59:53 0.263 ERR Specified configfile not found: /home/7dtd/server/serverconfigreal.xml
2021-10-06T16:59:53 0.263 ERR ====================================================================================================
I am able to execute nano /home/7dtd/server/serverconfigreal.xml and see the file and its contents.
I am not sure what the problem is.
This is the docker image in question: https://github.com/Didstopia/7dtd-server
You have shared the config file with:
-v /home/7dtd/server:/steamcmd/7dtd
This means that the files from your host's /home/7dtd/server will be mapped to the container's /steamcmd/7dtd.
So you need to specify the config file path from the container's perspective:
-e SEVEN_DAYS_TO_DIE_CONFIG_FILE="/steamcmd/7dtd/serverconfig.xml"
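As a rule of thumb, the left-hand side of a -v host:container mapping is a host path, while any path passed via -e is resolved inside the container. You can double-check the mapping from inside the running container (assuming the container name 7dtd from the question):
docker exec 7dtd ls -l /steamcmd/7dtd/serverconfig.xml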
I'm running Kafka in a container and trying to create a new pgsql container on the same host.
The pgsql container keeps exiting, and the logs indicate:
ERROR: Failed to connect to Kafka at kafka.domain, check the docker run -e KAFKA_FQDN= value
The Kafka container is started with the following attributes:
docker run -d \
--name=app_kafka \
-e KAFKA_FQDN=localhost \
-v /var/app/kafka:/data/kafka \
-p 2181:2181 -p 9092:9092 \
app/kafka
and the pgsql container with:
docker run -d --name app_psql \
-h app-psql \
-e KAFKA_FQDN=kafka.domain \
--add-host kafka.domain:172.17.0.1 \
-e MEM=16 \
--shm-size=512m \
-v /var/app/config:/config \
-v /var/app/postgres/main:/data/main \
-v /var/app/postgres/ts:/data/ts \
-p 5432:5432 -p 9005:9005 -p 8080:8080 \
app/postgres
If I use the docker0 IP address, the logs indicate "no route to host"; if I use the Kafka container's IP, I get "connection refused".
I guess I'm missing something basic here that needs to be adapted to my environment, but I'm lacking knowledge here.
I will appreciate any assistance.
You need to edit the container's hosts file. You can pass a script via the Dockerfile, like this:
COPY domain.sh .
ENTRYPOINT ["sh","domain.sh"]
And domain.sh:
#!/bin/sh
echo 'Kafka container domain is: kafka.domain'
echo 'PGSQL container domain is: pgsql.domain'
echo "127.0.0.1 kafka.domain" >> /etc/hosts
echo "127.0.0.1 pgsql.domain" >> /etc/hosts
# hand control back to the image's original command so the container keeps running
exec "$@"
Feel free to change the IPs or domains to your needs.
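Alternatively, without rebuilding the image, the same /etc/hosts entries can be injected at run time with docker run's --add-host flag, as the question already does, e.g.:
docker run --add-host kafka.domain:172.17.0.1 ... app/postgres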
I want to set up a private registry behind an nginx server. To do that, I configured nginx with basic auth and started a Docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry, push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; it's just that the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container with one of your Docker host's directories mounted as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
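A quick sanity check (assuming the setup above): push an image, restart the container, and confirm the data survived on the host:
docker restart <container-id>
ls /home/example/registry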
I have created my private Docker registry running on localhost:5000/v1, but it does not provide authentication. How do I set up a username and password so that only authorized users can push an image to it?
I am also not able to list the images present in the private registry. All the documentation says running localhost:5000/v1/search will list them, but it gives a blank JSON response:
{
"num_results": 0,
"query": "",
"results": []
}
How to resolve this?
Thanks,
Yash
An answer to your first question: You need to use something like nginx in front of the registry to do the actual password authentication. There are example nginx configuration files for pre-1.3.9 nginx and later versions in the Docker Registry Github repo for wrapping the registry with nginx; there is more information on authentication configuration on the nginx wiki.
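For illustration, the authentication part of such an nginx config looks roughly like this (server name, certificate paths, and the upstream port are placeholders):
server {
    listen 443 ssl;
    server_name docker.example.com;
    ssl_certificate     /etc/nginx/ssl/docker.crt;
    ssl_certificate_key /etc/nginx/ssl/docker.key;
    client_max_body_size 0;  # allow large image layer uploads
    location / {
        auth_basic           "Registry realm";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass           http://localhost:5000;
    }
}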
You can use htpasswd to set up a login with Docker's registry image. However, I don't believe they have implemented a search function in this image yet. To create a user, I have the following script:
#!/bin/sh
usage() { echo "$0 user"; exit 1; }
if [ $# -ne 1 ]; then
  usage
fi
user=$1
cd `dirname $0`
if [ ! -d "auth" ]; then
  mkdir -p auth
fi
# htpasswd cannot modify a file that does not exist yet
touch auth/htpasswd
chmod 666 auth/htpasswd
docker run --rm -it \
-v `pwd`/auth:/auth \
--entrypoint htpasswd registry:2 -B /auth/htpasswd $user
chmod 444 auth/htpasswd
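Usage is then, for example (assuming the script is saved as create-user.sh next to the auth directory):
./create-user.sh myuser
It prompts for the password and appends a bcrypt (-B) entry to auth/htpasswd.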
Then to run the registry, I use the following script (from the same folder):
#!/bin/sh
cd `dirname $0`
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
Note that I'm also using TLS certificates in the above, under the certs directory. You can create these with openssl commands (the same ones used for securing the docker daemon socket).
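For example, a self-signed pair matching the filenames in the script above could be created like this (a sketch; the CN/SAN is a placeholder, and -addext requires OpenSSL 1.1.1+):
mkdir -p certs
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout certs/host-key.pem -out certs/host-cert.pem \
  -subj "/CN=docker.example.com" \
  -addext "subjectAltName=DNS:docker.example.com"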