Adding Plugin to Kibana Image in docker-compose.yml

I am new to using Docker and am trying to add the ElastAlert plugin to my Kibana image. I am using Kibana 7.0.1 and Elasticsearch 7.0.1 and am trying to use the ElastAlert Kibana plugin for 7.0.1 from GitHub. When I run docker-compose up with the docker-compose.yml file below, it does seem to install the plugin, but it doesn't actually start Kibana. Am I missing another command? Thanks
services:
  ...
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    ...
    command: ./bin/kibana-plugin install https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.4/elastalert-kibana-plugin-1.0.4-7.0.1.zip

When you override the command section, you must remember to keep the existing behavior that is set by the image author.
So in your case you can actually install the Kibana plugin this way, but you must also start Kibana at the end of the command, e.g. by using && to run the process after the plugin installation. In your case it should be:
command: sh -c './bin/kibana-plugin install https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.4/elastalert-kibana-plugin-1.0.4-7.0.1.zip && exec /usr/local/bin/kibana-docker'
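For completeness, here is a minimal sketch of the kibana service with that override in place (the image tag, plugin URL, and startup script come from the question and answer above; the port mapping is an assumption):
kibana:
  image: docker.elastic.co/kibana/kibana:7.0.1
  ports:
    - "5601:5601"   # assumed default Kibana port mapping
  command: sh -c './bin/kibana-plugin install https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.4/elastalert-kibana-plugin-1.0.4-7.0.1.zip && exec /usr/local/bin/kibana-docker'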

Related

Docker with Ruby on Rails on a development environment

I'm learning Docker and I'm trying to configure a Ruby on Rails project to run on it (in a development environment). But I'm having some trouble.
I managed to configure docker-compose to start a container with the terminal open, so I can run bundle install, start a server, or use Rails generators. However, every time I run the start command, it starts a new container where I have to run bundle install again (which takes a while).
So I'd like to know if there is a way to reuse the components already created.
Here is my Dockerfile.dev
FROM ruby:2.7.4-bullseye
WORKDIR '/apps/gaia_api'
EXPOSE 3000
RUN gem install rails bundler
CMD ["/bin/bash"]
And here is my docker-compose file:
version: "3.8"
services:
gaia_api:
build:
dockerfile: Dockerfile.dev
context: "."
volumes:
- .:/apps/gaia_api
environment:
- USER_DB_RAILS
- PASSWORD_DB_RAILS
ports:
- "3000:3000"
The command I'm using to run is: docker-compose run --service-ports gaia_api.
I tried to use the docker commands build, create and start, but the volume mapping doesn't work: in the container's terminal, the files of the volume are not there.
The commands I tried:
docker build -t gaia -f Dockerfile.dev .
docker create -v ${pwd}:/apps/gaia_api -it -p 3000:3000 gaia
docker start -i f36d4d9044b08e42b2b9ec1b02b03b86b3ae7da243f5268db2180f3194823e48
There is probably something I still don't understand. So I ask: What's the best way to configure Docker for Ruby on Rails development? And will it be possible to add new services later? (I plan, once I get the first part to work, to add Postgres and a Vue project.)
EDIT: Forgot to say that I'm on macOS Big Sur.
EDIT 2: I found what was wrong with the volumes, I was typing -v ${pwd}:/apps instead of -v $(pwd):/apps.
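For reference, based on EDIT 2, the corrected create command would look like this (same image name and ports as above):
docker create -v $(pwd):/apps/gaia_api -it -p 3000:3000 gaia
As for re-running bundle install in every new container, a commonly used approach (not covered in this thread) is to mount a named volume over the gem installation directory so installed gems survive across containers.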

Can’t sign plugin using Grafana with Docker

I tried to install this plugin for Grafana from GitHub:
https://github.com/Vertamedia/chtable
I cloned this repository into the plugins folder and then added the plugin to my grafana container:
grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  environment:
    - GF_PATHS_CONFIG="grafana/etc/grafana.ini"
    - GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource,vertamedia-chtable
    - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource,vertamedia-chtable
Then, when I tried to create a new dashboard panel using this plugin, I got an error with the message:
An unexpected error happened TypeError: Cannot read property 'emit' of undefined
Grafana version: Grafana v7.4.3 (010f20c1c8)
My plugin is unsigned. How can I fix this error and use this plugin?
Here are the steps I used to install the Zabbix plugin into a Grafana container. You can try a similar approach for this plugin.
First I downloaded the grafana-zabbix plugin release files from the official GitHub:
wget https://github.com/alexanderzobnin/grafana-zabbix/releases/download/v4.1.4/alexanderzobnin-zabbix-app-4.1.4.zip
Extract that zip file.
Then, in grafana.ini, you have to uncomment allow_loading_unsigned_plugins (by default it's commented out) and list the unsigned plugin IDs there.
To get this grafana.ini file, I ran docker run grafana/grafana:latest, connected to that running Grafana container, and copied /etc/grafana/grafana.ini:
[plugins]
; comma-separated list of plugin IDs that may load without a valid signature
allow_loading_unsigned_plugins = alexanderzobnin-zabbix-app
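A rough shell sequence for the step described above, copying the stock grafana.ini out of the image (the container name grafana-tmp is just an example):
# start a throwaway container, copy the default config out, then remove it
docker run -d --name grafana-tmp grafana/grafana:latest
docker cp grafana-tmp:/etc/grafana/grafana.ini ./grafana.ini
docker rm -f grafana-tmp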
Dockerfile
FROM grafana/grafana:latest
COPY grafana.ini /etc/grafana/grafana.ini
COPY alexanderzobnin-zabbix-app /var/lib/grafana/plugins/
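To use that Dockerfile, something like the following should work (the grafana-zabbix image tag is just an example name):
docker build -t grafana-zabbix .
docker run -d -p 3000:3000 grafana-zabbix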
Using @SachithMuhandiram's answer I was able to get a signed plugin into a running Grafana container. I realize this doesn't answer the question asked (allowing unsigned plugins). However, I landed on this thread while researching the problem, so I'll leave the answer here; some may find it useful.
docker run -d -p 3000:3000 grafana/grafana
docker ps -a
docker cp relative/path-to/sample_plugin [container_id]:/var/lib/grafana/plugins/
docker restart [container_id]

exec: "com.docker.cli": executable file not found in $PATH

I am getting this error when running docker-compose up, on one of the containers only.
exec: "com.docker.cli": executable file not found in $PATH
The terminal process "/bin/zsh '-c', 'docker logs -f f6557b5dd19d9b2bc5a63a840464bc2b879d375fe72bc037d82a5358d4913119'" failed to launch (exit code: 1).
I uninstalled and reinstalled Docker Desktop 2.3.0.5 on Mac.
I ran docker-compose build from scratch.
The other containers are running.
I still get the above error.
It used to run fine. I am not sure why this is happening. I know that I upgraded Docker from, I think, 2.3, and I also think I received an update on my Mac.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.8
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY ./app /app/app
#COPY config.py /app/app/
docker-compose.yml
version: "3"
services:
postgresql:
container_name: postgresql
image: postgres:12
ports:
- "5433:5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./postgres-data:/var/lib/postgresql/data
fastapi:
build:
context: ./fastapi/
dockerfile: Dockerfile
volumes:
- ./fastapi/app/imgs:/app/app/imgs
ports:
- "1001:80"
depends_on:
- postgresql
env_file:
- .env
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=pgadmin4#pgadmin.org
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- "5050:80"
depends_on:
- postgresql
solr:
build:
context: ./solr/
dockerfile: Dockerfile
restart: always
ports:
- "8983:8983"
volumes:
- data:/var/solr
volumes:
data:
Update:
It worked when I downgraded to Docker Desktop 2.3.0.4.
Updated Answer:
Since VSCode Docker 1.14.0 you can now set the Docker executable path in the settings, which should help in most cases.
Old Answer (Option was removed from Docker Desktop):
Docker Desktop version 2.4.0.0 is working for me after I deactivated the Enable cloud experience feature. You can find it under Preferences --> Command Line.
If you still experience the problem, you may try a clean remove and install of Docker, and also make sure that Docker is actually running; see other possible solution(s) here.
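A quick way to confirm the daemon is reachable at all (a generic check, not specific to this answer) is to run the two commands below; both fail with a clear message when Docker is not running.
docker version
docker info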
History of GitHub Issues:
https://github.com/docker/for-mac/issues/4956
https://github.com/microsoft/vscode-docker/issues/2366
https://github.com/microsoft/vscode-docker/issues/2578
https://github.com/microsoft/vscode-docker/issues/2894
Status (2021-06-22): VSCode Version 1.57.0 seems to have fixed the issue again.
You might get the following error message simply because you have not started Docker yet:
exec: "com.docker.cli": executable file not found in $PATH
In my case, the problem was that I had installed and then crudely removed the Docker Compose CLI. This resulted in the above error popping up.
I got the Compose CLI back using the instructions from https://docs.docker.com/cloud/ecs-integration/#install-the-docker-compose-cli-on-linux and running (as root):
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
This fixed it for me.
Note: I would not recommend installing the docker-compose CLI just to fix this issue; I'm only sharing this in case it is applicable to you as well.
Ensure that the Docker CLI is installed, not just Docker Desktop, on Linux. You can install it using:
sudo apt install docker.io
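After installing, you can verify that the CLI (and the compose plugin, if present) is on your PATH; these are standard checks, assuming a Debian/Ubuntu host:
docker --version
docker compose version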
Update: The "cloud experience" no longer exists even as an experimental feature in Docker Desktop v3.2.1. This should no longer be an issue.
If you continue to see this problem on a newer version, you will need to downgrade to Docker v3.1.0, disable the cloud experience feature, then upgrade to the newest version.
Had the exact same issue. It was fixed after starting the upgraded Docker first, then running the command again.
dostarr@DOSTARR-M-38LF ~ % docker run busybox
exec: "com.docker.cli": executable file not found in $PATH
<started docker>
dostarr@DOSTARR-M-38LF ~ % docker run busybox
dostarr@DOSTARR-M-38LF ~ %
I had the same problem when trying to run minikube tunnel, and since I didn't want to re-install anything, I ended up running it from the docker bin path (on Windows it's in 'C:\Program Files\Docker\Docker\resources\bin') and it worked.
An alternative to Docker Desktop is colima, container runtimes on macOS (and Linux) with minimal setup.
# Homebrew
brew install colima docker
colima start
Now, you can use the docker commands as before.
For docker compose commands, you have to install:
brew install docker-compose
If you have already installed Docker, it may simply not be running. So type docker run -d -p 80:80 docker/getting-started in a terminal and it should solve the issue.

when using docker compose 3.8 getting 'version is unsupported' error

I'm trying to use the latest Compose file version, 3.8, but I keep getting a "Version in ".\docker-compose.yml" is unsupported." error.
I'm using the latest versions of Docker Engine, 19.03.8, and Docker Desktop, 2.2.0.5.
EDIT:
Here is my docker compose version: docker-compose version 1.25.4, build 8d51620a
Here is my docker compose file:
version: "3.8"
services:
portal:
image: portal-dev
ports:
- "5000:80"
- "4200:4200"
container_name: portal
build:
context: .
dockerfile: Dockerfile.dev
environment:
ASPNETCORE_ENVIRONMENT: Development
DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER: 0
My docker compose file works using version 3.7. I can't figure out why it doesn't work using version 3.8. Can anyone help?
Support for compose file version 3.8 was added in docker-compose 1.25.5 (ref: https://github.com/docker/compose/releases/tag/1.25.5).
Docker Desktop 2.3 includes the new version of Compose, but it's on the Edge channel for now.
If you can't or don't want to use the Edge channel, you can download the latest version of docker-compose manually from the GitHub releases page: https://github.com/docker/compose/releases
Follow the steps below to resolve the issue:
Uninstall docker-compose (if installed).
sudo apt-get remove docker-compose -y
Get the latest binary release of docker-compose from the GitHub releases page.
curl -O -J -L https://github.com/docker/compose/releases/download/v2.11.2/docker-compose-linux-x86_64
Make the binary executable.
chmod +x docker-compose-linux-x86_64
Copy the binary to a location on your PATH.
sudo cp ./docker-compose-linux-x86_64 /usr/bin/docker-compose
Check the version.
docker-compose version
In my case, I solved it by changing the version to 3.3, as the error log suggests.
My docker-compose is located at /usr/bin/docker-compose, and its version is docker-compose version 1.20.0, build xxxxxx.
I agree that the official Docker documentation is misleading here; the compose file version it suggested (3.9) did not match my installed release.

Building a web app which can perform npm tasks

Before I post any configuration, I'll try to explain what I would like to achieve, and I'd like to mention that I'm new to Docker.
To make talking about paths easier, let's assume the project is called "Docker me up!" and is located in X:\docker-projects\docker-me-up\.
Goal:
I would like to run multiple nginx projects with different content, each project representing a dedicated build. During development [docker-compose up -d], a container should get updated instantly, which works fine.
The tricky part is that I want to outsource npm/Grunt [http://gruntjs.com] from my host directly into the container/image, so I'm able to debug and develop wherever I am by just installing Docker. Therefore, npm must be installed in a "service" and a watcher needs to be initialized.
Each project is encapsulated in its own folder on the host/built in Docker and should not have any knowledge of anything else but itself.
My solution:
I have tried many different variants, with "volumes_from" etc., but I decided to show you this one because it's minimal but still complete.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
    links:
      - php
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    volumes:
      - ./assets:/website/assets:ro
      - ./config:/website/config:ro
      - ./www:/website/www:ro
  app:
    build: .
    volumes:
      - ./assets:/website/assets
      - ./config:/website/config:ro
      - ./www:/website/www
Dockerfile
FROM debian:jessie-slim
RUN apt-get update && apt-get install -y \
npm
RUN gem update --system
RUN npm install -g grunt-cli grunt-contrib-watch grunt-babel babel-preset-es2015
RUN mkdir -p /website/{assets,assets/es6,config,www,www/js,www/css}
VOLUME /website
WORKDIR /website
Problem:
As you can see, the "app" service contains npm and should be able to execute npm commands. If I run docker-compose up -d, everything works: I can edit the page content, work with it, etc. But the app container is not running, and because of that it cannot perform any npm command. Unless I have a huge logic error; which is quite possible ;-)
Environment:
Windows 10 Pro [up2date]
Shared drive for docker is used
Docker version 1.12.3, build 6b644ec
docker-machine version 0.8.2, build e18a919
docker-compose version 1.8.1, build 004ddae
After you call docker-compose up, you can get an interactive shell for your app container with:
docker-compose run app
You can also run one-off commands with:
docker-compose run app [command]
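For example, with the grunt/npm setup described in the question (the exact task names are assumptions and depend on the Gruntfile and package.json in /website):
docker-compose run app npm install
docker-compose run app grunt watch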
The reason your app container is not running after docker-compose up completes is that your Dockerfile does not define a long-running process. For app to run as a service, you would need to keep a process running in the foreground of the container by adding something like:
CMD ./run-my-service
to the end of your Dockerfile.
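As a concrete (hypothetical) ending for the Dockerfile in this question, the grunt watcher could serve as that foreground process:
# keep the container in the foreground by running the grunt watcher
# (assumes a Gruntfile with a "watch" task is mounted into /website)
CMD ["grunt", "watch"]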
