Grafana docker image with zabbix plugin - docker

I want to add the Zabbix data source to my Grafana deployment in Kubernetes. For that I created a custom image using this Dockerfile and added
ARG GF_INSTALL_PLUGINS="alexanderzobnin-zabbix-app"
Then I built the image and ran it.
But when I log into that Docker container and run grafana-cli plugins ls, it shows nothing.
How can I create a Docker image with the Zabbix data source baked in?

Since the base image reads the GF_INSTALL_PLUGINS environment variable at run time (a build-time ARG is not visible to the container's entrypoint), it is best to set it when you run the image.
With plain Docker: docker run -e GF_INSTALL_PLUGINS="alexanderzobnin-zabbix-app" .... If you use docker-compose or Kubernetes, pass the same value as an environment variable.
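In Kubernetes, for example, that becomes an env entry in the container spec (a minimal sketch of the relevant part of the pod spec):
containers:
  - name: grafana
    image: grafana/grafana
    env:
      - name: GF_INSTALL_PLUGINS
        value: "alexanderzobnin-zabbix-app"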
If you want to install the plugin into the image at build time instead, you can use the statement below (the ARG must be declared before it in the Dockerfile):
RUN grafana-cli plugins install $GF_INSTALL_PLUGINS
But this will not work if you volume-mount the plugins directory (/var/lib/grafana/plugins) to a folder on the host, because the mount hides whatever was installed into the image at build time.
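Putting the build-time route together, a minimal Dockerfile sketch (assuming the official grafana/grafana base image):
# start from the official Grafana image
FROM grafana/grafana
# declare the plugin list as a build argument with a default value
ARG GF_INSTALL_PLUGINS="alexanderzobnin-zabbix-app"
# install the plugin at build time so it is baked into the image
RUN grafana-cli plugins install $GF_INSTALL_PLUGINS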

Related

Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized it to suit our needs. It is running very well and successfully executing the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However, I am having an issue running a small utility container inside the agent with a bind-mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So in theory this runs npm install against the local directory that is mounted in the container (npm is the command in the ENTRYPOINT, so I can just pass install to the container for simplicity). It works well in other environments (Ubuntu and WSL). However, when I run it in the linux-sudo build agent image it doesn't seem to mount the directory properly. If I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
I have inspected the container, and it confirms there is a volume created of type bind and I can also see it when I list out the docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help, please.

Nextcloud docker image rebranding

I want to use the nextcloud image from Docker Hub as the base image to create a new child image that has my own company's logo in place of Nextcloud's, plus my preferred background colour. Can anyone help me with the process, or point me to a link with the solution?
https://nextcloud.com/changelog
- download this zip
- make a Dockerfile
- you should install Apache and set it up
- change the logo and colour theme in your CSS file
- build a new image
The general approach is this:
Run the official image locally, follow the instructions on Docker Hub to get started.
Modify the application using docker and shell commands (see the example after this list). You can:
open a shell (docker exec -it <container> sh) in the running container and use console commands to edit files;
copy files from the container and back with docker cp;
mount local files into the container by using -v <src>:<dest> in the docker run command.
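For instance, to pull a file out of the running container, edit it locally, and push it back (the container name and file path here are just illustrations; the official nextcloud image serves the app from /var/www/html):
docker cp mynextcloud:/var/www/html/core/img/logo/logo.svg ./logo.svg
# ... edit logo.svg locally ...
docker cp ./logo.svg mynextcloud:/var/www/html/core/img/logo/logo.svg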
When you're done with editing, you need to repeat all the steps in a Dockerfile:
# use the version you used to start the local container
FROM nextcloud
# write commands that you used inside the container (if any)
RUN echo hello!
# Push edited files that you copied/mounted inside
COPY local.file /to/some/place/inside/the/image
# More possible commands in Dockerfile Reference
# https://docs.docker.com/engine/reference/builder/
After that you can use docker build to create your modified image.
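For example (the tag is just an illustration, and the port mapping assumes the default Apache-based image listening on port 80):
docker build -t mycompany/nextcloud:custom .
docker run -d -p 8080:80 mycompany/nextcloud:custom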

How do I run startup scripts for DOCKER when deployed via Cloud Compute Engine instances?

I'm creating an instance template under GCP's Cloud Engine section.
Usually when deploying a Docker image there's a Dockerfile that includes some startup commands after specifying the base image to pull and build, but I can't see where I can either submit a Dockerfile or enter startup scripts.
I can see a field for startup scripts for the Cloud Compute instance, but that's different from the scripts passed on for the Docker's startup.
Are these perhaps to be filled in under "Command", "Command arguments", or "Environment Variables"?
For clarification, this is someone else's screenshot of a Dockerfile I pulled from Google Images. The part I wish to add is the section outlined in red, the RUN commands, but not these exact commands.
In my case, I would like to add something like
RUN python /pythonscript.py
If I understood correctly, you are trying to create a Docker image, not a compute instance image.
A compute instance can run a Docker image that you have already built and pushed to either GCR or any other registry.
Build your Docker image normally, push it to a Docker registry, then use it.
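For example (the project and image names are placeholders):
# build the image; your RUN steps live in its Dockerfile
docker build -t gcr.io/my-project/my-image .
# push it to Google Container Registry so Compute Engine can pull it
docker push gcr.io/my-project/my-image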
You can run a startup script directly in the Docker container by using the 'Command' section. If you need something installed in the container, for example Apache, you should use a Docker image that already has Apache.
If you need to pass other arguments, like environment variables, here you can find the full list of flags available when deploying a container on a VM instance.
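As a sketch of how those fields map to gcloud (all names and values here are placeholders):
gcloud compute instances create-with-container my-vm \
  --container-image gcr.io/my-project/my-image \
  --container-command "python" \
  --container-arg "/pythonscript.py" \
  --container-env KEY=VALUE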

How to configure cassandra.yaml, which is inside the Cassandra Docker image at /etc/cassandra/cassandra.yaml

I am trying to edit cassandra.yaml, which is inside the Docker container at /etc/cassandra/cassandra.yaml. I can edit it by logging into the container, but how can I do it from the host?
There are multiple ways to achieve this from the host. You can use COPY or RUN in a Dockerfile, together with basic Linux commands such as sed and cat, to place your configuration into the image. Another way is to pass environment variables when running your Cassandra image, which the entrypoint applies to the spawned container. You can also use a Docker volume to mount the configuration from the host into the container, mapping your file onto cassandra.yaml as shown below:
$ docker container run -v ~/home/MyWorkspace/cassandra.yaml:/etc/cassandra/cassandra.yaml your_cassandra_image_name
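For the environment-variable route mentioned above, the official cassandra image documents a handful of variables that its entrypoint writes into cassandra.yaml; a minimal sketch using CASSANDRA_CLUSTER_NAME:
$ docker run -d --name my-cassandra -e CASSANDRA_CLUSTER_NAME="MyCluster" cassandra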
If you are using Docker Swarm, then you can use Docker configs to store the configuration files externally (other external services such as etcd or Consul can also be used). Hope this helps.
To edit cassandra.yaml:
1) Copy your file from your Docker container to your system
From command line :
docker ps
(To get your container id)
Then :
docker cp your_container_id:/etc/cassandra/cassandra.yaml C:\Users\your_destination
Once the file copied you should be able to see it in your_destination folder
2) Open it and make the changes you want
3) Copy your file back into your Docker container
docker cp C:\Users\your_destination\cassandra.yaml your_container_id:/etc/cassandra
4) Restart your container for the changes to be effective
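The restart can also be done from the host:
docker restart your_container_id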

How can I upgrade openshift origin if it is run as a docker container?

According to the documentation, all configuration and stored application definitions will also be removed when the container is removed if OpenShift Origin runs as a Docker container.
My question is: is there a way to upgrade OpenShift without losing the configuration if I am running it as a container?
You can create a Dockerfile, use the original image as the base image, and run your update statements:
FROM openshift/origin
RUN your-update-statement
ENTRYPOINT ["/usr/bin/openshift"]
After that just build and run your Docker image. For more info see https://docs.docker.com/engine/reference/builder/
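For example (the tag is a placeholder; keep whatever runtime flags, such as --privileged and host networking, the original openshift/origin instructions called for):
docker build -t my-openshift-origin .
docker run -d --name origin my-openshift-origin start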
