Can’t sign plugin using Grafana with Docker

I tried to install this plugin into Grafana from GitHub:
https://github.com/Vertamedia/chtable
I cloned the repository into the plugins folder and then added the plugin to my Grafana container:
grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  environment:
    - GF_PATHS_CONFIG="grafana/etc/grafana.ini"
    - GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource,vertamedia-chtable
    - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource,vertamedia-chtable
Then, when I tried to create a new dashboard panel using this plugin, I got an error with the message:
An unexpected error happened TypeError: Cannot read property 'emit' of undefined
Grafana version: Grafana v7.4.3 (010f20c1c8)
My plugin is unsigned. How can I fix this error and use this plugin?

Here are the steps I used to install the Zabbix plugin into a Grafana container. You can try a similar approach for this plugin.
First I downloaded the grafana-zabbix plugin release files from the official GitHub repository.
wget https://github.com/alexanderzobnin/grafana-zabbix/releases/download/v4.1.4/alexanderzobnin-zabbix-app-4.1.4.zip
Extract that zip file.
Then, in grafana.ini, you have to uncomment allow_loading_unsigned_plugins. By default it is commented out.
To get this grafana.ini file, I ran docker run grafana/grafana:latest, connected to the running Grafana container, and copied /etc/grafana/grafana.ini.
[plugins]
allow_loading_unsigned_plugins = alexanderzobnin-zabbix-app
(Note that this setting takes a comma-separated list of plugin identifiers, not a boolean.)
Dockerfile
FROM grafana/grafana:latest
COPY grafana.ini /etc/grafana/grafana.ini
COPY alexanderzobnin-zabbix-app /var/lib/grafana/plugins/
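To build and run the resulting image, something like the following should work (a sketch; grafana-zabbix is just an example tag):
# Build the custom Grafana image with the plugin baked in
docker build -t grafana-zabbix .
# Run it with the Grafana UI exposed on port 3000
docker run -d -p 3000:3000 grafana-zabbix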

Using @SachithMuhandiram's answer I was able to get a signed plugin into a running Grafana container. I realize this doesn't answer the question asked (allowing unsigned plugins), but I landed on this thread while researching the problem, so I'll leave the answer here; some may find it useful.
docker run -d -p 3000:3000 grafana/grafana
docker ps -a
docker cp relative/path-to/sample_plugin [container_id]:/var/lib/grafana/plugins/
docker restart [container_id]
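To confirm the plugin files actually landed inside the container, you can list the plugins directory (reusing the placeholder container id from above):
docker exec [container_id] ls /var/lib/grafana/plugins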

Related

How to add the plugin fluent-plugin-opensearch to docker

I'm trying to send logs from fluentd (installed using Docker) to OpenSearch.
In the configuration file there's @type opensearch, which uses the plugin fluent-plugin-opensearch that I installed locally as a Ruby gem.
I get the following error:
2022-04-22 15:47:10 +0000 [error]: config error file="/fluentd/etc/fluentd.conf" error_class=Fluent::NotFoundPluginError error="Unknown output plugin 'opensearch'. Run 'gem search -rd fluent-plugin' to find plugins"
As a solution, I found out that I need to add the plugin to the fluentd Docker container, but I couldn't find a way to do that.
Any way to add the plugin to the Docker image, or an alternative solution, would be appreciated.
The comments already gave a hint: you will need to build your own Docker image. Depending on the infrastructure you have available, you can either build the image, store it in some registry and then use it in your compose file, or build it on the machine you run Docker on.
The Dockerfile
Common to both approaches is that you'll need a Dockerfile. I am using Calyptia's Docker image as a base, but you can use whatever fluentd image you like. My Dockerfile looks as follows:
FROM ghcr.io/calyptia/fluentd:v1.14.6-debian-1.0
USER root
RUN gem install fluent-plugin-opensearch
RUN fluent-gem install fluent-plugin-rewrite-tag-filter fluent-plugin-multi-format-parser
USER fluent
ENTRYPOINT ["tini", "--", "/bin/entrypoint.sh"]
CMD ["fluentd"]
As you can see it installs a few more plugins, but the first RUN line is the important one for you.
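Before picking one of the options below, you can smoke-test the image locally (a sketch; fluentd-opensearch is an arbitrary tag, and the config mount path is taken from the error message in the question):
docker build -t fluentd-opensearch .
docker run --rm -v $(pwd)/fluentd.conf:/fluentd/etc/fluentd.conf fluentd-opensearch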
Option 1
If you have a container registry available, you can build the image and push it there, either using a CI/CD pipeline or simply locally. Then you can reference this custom image, instead of whatever other fluentd image you're using today, like so:
fluentd:
  image: registry.your-domain.xyz/public-projects/fluentd-opensearch:<tag|latest>
  container_name: fluentd
  ports:
    - ...
  restart: unless-stopped
  volumes:
    - ...
Adjust the config to your needs.
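The build-and-push step itself could look like this (a sketch reusing the placeholder registry from the snippet above; it assumes the Dockerfile lives in ./fluentd):
docker build -t registry.your-domain.xyz/public-projects/fluentd-opensearch:latest ./fluentd
docker push registry.your-domain.xyz/public-projects/fluentd-opensearch:latest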
Option 2
You can also have docker-compose build the container locally for you. For this, create a directory fluentd in the same folder where you store your docker-compose.yml and place the Dockerfile there.
fluentd:
  build: ./fluentd
  container_name: fluentd
  ports:
    - ...
  restart: unless-stopped
  volumes:
    - ...
Instead of referencing the image from some registry, you can reference a local build directory. This should get you started.
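With this layout, docker-compose builds the image the first time you bring the stack up; after changing the Dockerfile you can force a rebuild:
docker-compose build fluentd
docker-compose up -d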

Dockerized Jenkins not able to find docker

I'm trying to establish a Jenkins pipeline that's able to build Docker images, but I ran into the error docker: not found after executing the pipeline. The Jenkinsfile has the following content:
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'docker --version'
            }
        }
    }
}
It's a simple script to get things started, but it seems that the dockerized Jenkins installation can't find a suitable Docker installation to use.
The required plugins (Docker and Docker Pipeline) are installed and a global Docker installation is configured, but the error persists.
Jenkins setup is done by using this docker-compose:
version: '3.1'

networks:
  docker:

volumes:
  jenkins-data:
  jenkins-docker-certs:

services:
  jenkins:
    image: jenkins/jenkins:lts
    restart: always
    networks:
      - docker
    ports:
      - 8090:8080
      - 50000:50000
    tty: true
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client:ro
      - $HOME:/home
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
  dind:
    image: docker:dind
    privileged: true
    restart: always
    networks:
      docker:
        aliases:
          - docker
    ports:
      - 2376:2376
    tty: true
    volumes:
      - jenkins-data:/var/jenkins_home
      - jenkins-docker-certs:/certs/client
      - $HOME:/home
    environment:
      - DOCKER_TLS_CERTDIR=/certs
After reading some more posts about this issue and following the official Jenkins docs, I thought that docker:dind is meant for this purpose. Maybe I'm missing some important configuration here? When launching the docker:dind container, the log states the following warning: could not change group /var/run/docker.sock to docker: group docker not found. But the group exists, and I'm able to run docker commands without sudo (I followed the official Docker post-installation steps).
Another problematic point right now is that Jenkins can't persist configuration data in general, or anything pipeline-related. After restarting the machine I have to go through the setup wizard every single time, and I don't know why.
Did someone run into similar problems?
Many thanks in advance!
Your docker-compose file is correct; you just need to add a volume to the jenkins container:
- /usr/bin/docker:/usr/bin/docker
You also have a lot of configuration that isn't required; you can check this link to see other possible configurations. You are currently using Solution 3, and you can switch to this docker-compose file.
As for volumes, they should be persisted since they are declared in the volumes section. You can try to use external volumes if needed.
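Put together, the volumes section of the jenkins service would look like this (a sketch based on the compose file from the question):
jenkins:
  image: jenkins/jenkins:lts
  # ... other settings unchanged ...
  volumes:
    - jenkins-data:/var/jenkins_home
    - jenkins-docker-certs:/certs/client:ro
    - $HOME:/home
    - /usr/bin/docker:/usr/bin/docker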
Fast forward one year and I've run into an analogous problem, only with mismatched GLIBC versions, as described here.
I solved it by upgrading the GLIBC version in the Jenkins container to 2.35 (as shipped with Ubuntu Jammy on the host). To achieve this I had to build my own Jenkins container based on ubuntu:jammy and JDK 17, using a template from the official Debian-based one (sourced from here). Now the GLIBC versions agree, and docker-in-docker Jenkins builds can be made using the Docker installed on a host running Ubuntu Jammy:
$ ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35
# vs.
$ docker run --rm -it mirekphd/jenkins-jdk17-on-ubuntu-2204:2.374 ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35
Feel free to use this container (best served with the latest tag), as I will have to maintain it for our own in-house use, setting its builds up as one of... Jenkins pipelines (bootstrap problem notwithstanding). It will be a Docker-in-Docker Jenkins-in-Jenkins pipeline :)

How do I switch from docker-compose to Dockerfile?

I'm trying to run Magento 2 locally, so I built a Docker LAMP stack using Docker Compose. This is the docker-compose.yml portion related to the php container:
php:
  image: 'docker.io/bitnami/php-fpm:7.4-debian-10'
  ports:
    - 9001:9000
  volumes:
    - './docker/php/php.ini:/opt/bitnami/php/etc/conf.d/custom.ini'
  networks:
    - 'web'
and it works great. The point is that the Bitnami image doesn't seem to have cron pre-installed, and for the sake of simplicity it's probably easier to have it directly in the php container (so I can reach it through the Magento CLI functionality, e.g. bin/magento cron:install).
So, I changed the php service in the docker-compose file from image to build and added a separate Dockerfile, written like this:
FROM docker.io/bitnami/php-fpm:7.4-debian-10
I didn't even add the RUN instructions for installing packages yet. What I'm expecting at the moment is that running docker-compose up -d --remove-orphans gives basically the same result. But it doesn't: refreshing the homepage now gives me a 503 error that doesn't seem to leave any trace in the log files, so I'm a bit stuck.
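The changed service definition looks roughly like this (a sketch reconstructed from the description above; the ./docker/php build context is an assumption):
php:
  build: ./docker/php
  ports:
    - 9001:9000
  volumes:
    - './docker/php/php.ini:/opt/bitnami/php/etc/conf.d/custom.ini'
  networks:
    - 'web'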

exec: "com.docker.cli": executable file not found in $PATH

I am getting this error when running docker-compose up, on one of the containers only.
exec: "com.docker.cli": executable file not found in $PATH
The terminal process "/bin/zsh '-c', 'docker logs -f f6557b5dd19d9b2bc5a63a840464bc2b879d375fe72bc037d82a5358d4913119'" failed to launch (exit code: 1).
I uninstalled and reinstalled Docker Desktop 2.3.0.5 on Mac
ran docker-compose build from scratch
the other containers are running
I still get the above error.
It used to run. I am not sure why this is happening; I know that I upgraded Docker from, I think, 2.3, and I also think I received an update on my Mac.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.8
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY ./app /app/app
#COPY config.py /app/app/
docker-compose.yml
version: "3"
services:
postgresql:
container_name: postgresql
image: postgres:12
ports:
- "5433:5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./postgres-data:/var/lib/postgresql/data
fastapi:
build:
context: ./fastapi/
dockerfile: Dockerfile
volumes:
- ./fastapi/app/imgs:/app/app/imgs
ports:
- "1001:80"
depends_on:
- postgresql
env_file:
- .env
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=pgadmin4#pgadmin.org
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- "5050:80"
depends_on:
- postgresql
solr:
build:
context: ./solr/
dockerfile: Dockerfile
restart: always
ports:
- "8983:8983"
volumes:
- data:/var/solr
volumes:
data:
Update:
It worked when I downgraded to Docker Desktop 2.3.0.4.
Updated Answer:
Since VSCode Docker 1.14.0 you can now set the Docker executable path in the settings, which should help in most cases.
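In VS Code this is a regular settings.json entry (a sketch; docker.dockerPath is the Docker extension's setting for the executable path, and the binary location shown is only an example):
{
  "docker.dockerPath": "/usr/local/bin/docker"
}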
Old Answer (Option was removed from Docker Desktop):
Docker Desktop version 2.4.0.0 worked for me after I deactivated the feature Enable cloud experience. You can find it under Preferences --> Command Line.
If you are still experiencing the problem, you may try a clean remove and reinstall of Docker, and also make sure that Docker is actually running; see other possible solution(s) here.
History of GitHub Issues:
https://github.com/docker/for-mac/issues/4956
https://github.com/microsoft/vscode-docker/issues/2366
https://github.com/microsoft/vscode-docker/issues/2578
https://github.com/microsoft/vscode-docker/issues/2894
Status (2021-06-22): VSCode Version 1.57.0 seems to have fixed the issue again.
You might get the following error message simply because you did not start Docker just yet
exec: "com.docker.cli": executable file not found in $PATH
In my case the problem was that I had installed and then crudely removed the Docker Compose CLI. This resulted in the above error popping up.
I got the compose CLI back using instructions from https://docs.docker.com/cloud/ecs-integration/#install-the-docker-compose-cli-on-linux and running (as root):
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
This fixed it for me.
Note: I would not recommend installing the docker-compose CLI just to fix this issue; I'm sharing this in case it applies to you as well.
On Linux, ensure that the Docker CLI is installed, not just Docker Desktop. You can install it using:
sudo apt install docker.io
Update: The "cloud experience" no longer exists even as an experimental feature in Docker Desktop v3.2.1. This should no longer be an issue.
If you continue to see this problem on a newer version, you will need to downgrade to Docker v3.1.0, disable the cloud experience feature, then upgrade to the newest version.
Had the exact same issue. It was fixed after starting the upgraded Docker first, then running the command again.
dostarr@DOSTARR-M-38LF ~ % docker run busybox
exec: "com.docker.cli": executable file not found in $PATH
<started docker>
dostarr@DOSTARR-M-38LF ~ % docker run busybox
dostarr@DOSTARR-M-38LF ~ %
I had the same problem when trying to run minikube tunnel, and since I didn't want to re-install anything, I ended up running it from the docker bin path (on Windows it's in 'C:\Program Files\Docker\Docker\resources\bin') and it worked.
An alternative to Docker Desktop is colima, container runtimes on macOS (and Linux) with minimal setup.
# Homebrew
brew install colima docker
colima start
Now, you can use the docker commands as before.
For docker compose commands, you have to install:
brew install docker-compose
If you have already installed Docker, it may not have started yet. So type docker run -d -p 80:80 docker/getting-started in the terminal, and it should solve the issue.

Push image built with docker-compose to dockerhub

I have a golang script which interacts with postgres. I created a service in docker-compose.yml for both golang and postgres. When I run it locally with docker-compose up it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just docker run <image>. What is the correct way of doing it?
The image created by docker-compose up --build launches with no error under docker run <image>, but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: # some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: # some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up), or builds and then runs (with up --build), Docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1. one based on the go configuration
2. one based on the db configuration
To see which containers are actually running, use the command:
docker ps -a
For more info, see the docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around: take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile; please refer to:
- the postgres official Dockerfile
- the golang official Dockerfile
and combine them to get both. A rough sketch follows.
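For illustration only, the golang-based variant could start from something like this (a sketch; a real setup would also need an entrypoint script that initializes and starts postgres before the app, which is glossed over here):
FROM golang:latest
# Install the PostgreSQL server from the Debian package repository
RUN apt-get update && \
    apt-get install -y --no-install-recommends postgresql && \
    rm -rf /var/lib/apt/lists/*
WORKDIR $GOPATH/src/workflow/project
COPY . .
# Crude: start postgres in the background, then run the app
CMD service postgresql start && go run ./src/main.go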
Edit (DigitalOcean deployment):
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large-scale/high-traffic applications, more advanced solutions are used, such as:
- Docker Swarm
- Kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.
