How to use grafana-cli with a Docker-installed Grafana?

I have installed Grafana via Docker.
Is it possible to expose grafana-cli and run it from my host?

If you meant running Grafana with some plugins installed, you can do that by passing a list of plugin names in an environment variable called GF_INSTALL_PLUGINS.
sudo docker run -d -p 3000:3000 -e "GF_INSTALL_PLUGINS=gridprotectionalliance-openhistorian-datasource,gridprotectionalliance-osisoftpi-datasource" grafana/grafana
I did this on Grafana 4.x
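The same idea also works with Compose; a minimal docker-compose.yml sketch (the plugin list is only an example):
version: "3"
services:
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      # comma-separated list of plugin IDs installed at container start
      - GF_INSTALL_PLUGINS=gridprotectionalliance-openhistorian-datasource,gridprotectionalliance-osisoftpi-datasource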
Installing plugins for Grafana 3 "or above"

For a fully automatic setup of your Grafana install with the plugins you want, I would follow Ricardo's suggestion. It's much better if you can configure your entire container in a single hit like that.
However, if you are just playing with the plugins and want to install some manually, you can access a shell on the running Docker instance from the host.
host:~$ docker exec -it grafana /bin/bash
... assuming you named the Docker container "grafana"; otherwise, substitute the actual container name. The shell prompt that returns will let you run the standard grafana-cli commands:
root@3e04b4578ebe:/# grafana-cli plugins install ....
Be warned that it may tell you to run service grafana-server restart afterwards. In my experience that didn't work (I'm not sure Grafana runs as a traditional service in the container). However, if you exit the container and restart the container from the host...
host:~$ docker restart grafana
That should restart the grafana service and your new plugins should be in place.
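Put together, the whole round trip from the host looks like this (the plugin name is only an example):
host:~$ docker exec -it grafana grafana-cli plugins install grafana-clock-panel
host:~$ docker restart grafana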

Grafana running in a Docker container, with Docker installed on Windows 10.
Test: a command to display the grafana-cli help:
c:\>docker exec -it grafana grafana-cli --help
Tested with Grafana version 6.4.4 (November 6, 2019).

Related

How can I implement a Docker-outside-of-Docker setup for Jenkins using Docker Desktop for Windows?

I'm trying to create a CI/CD infrastructure using Jenkins. Considering recoverability, performance, and maintainability, I decided to run both Jenkins and its agents as Docker containers.
There are certain restrictions that I cannot work around:
Cannot build this setup on a Linux environment (IT policy)
Cannot use WSL2 on Windows (I don't know when the IT department will roll out the Windows update that supports WSL2)
Security is a very high priority topic
As far as I can see, a Docker-outside-of-Docker setup is the proper way to implement this. If I run the container as root using the command below, I can bind the docker.sock file, and Jenkins jobs can create containers from Dockerfiles as agents:
docker run --name dood `
-d -u root --restart on-failure `
-p "8080:8080" -p "50000:50000" `
-v //var/run/docker.sock:/var/run/docker.sock `
-v /usr/local/bin/docker:/usr/bin/docker `
jenkins/jenkins:lts
However, it doesn't work if the Jenkins container is run as a non-root user, and running it as root is not acceptable because it creates a vulnerability. The suggested way is to run the container as a non-root user and add the "jenkins" user to the "docker" group:
groupadd docker
usermod -a -G docker jenkins
newgrp docker
Unfortunately, it doesn't work: a "Got permission denied..." error occurs when Jenkins jobs try to create agent containers. I restarted Docker Desktop and the container, but the result is the same. I am not sure, but a possible reason might be the Windows environment; this may work in a Linux environment.
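(For reference, the Linux-host variant of this fix usually bakes the group membership into the image so that the container user belongs to a group whose GID matches the mounted socket. A hypothetical Dockerfile sketch; GID 999 is an assumption and must match the host's socket group, e.g. stat -c '%g' /var/run/docker.sock:)
# Hypothetical sketch: let the jenkins user reach the mounted Docker socket.
# GID 999 is an assumption; it must match /var/run/docker.sock on the host.
FROM jenkins/jenkins:lts
USER root
RUN groupadd -g 999 docker && usermod -aG docker jenkins
USER jenkins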
As a final effort, I tried the solution described in a Stack Overflow topic. I noticed the "setfacl" command does not work when Docker runs with Hyper-V. If I switch to WSL2 on my demo PC, then the commands below solve the problem:
gpasswd -a jenkins docker
apt-get install acl
setfacl -m user:jenkins:rw /var/run/docker.sock
Unfortunately, the target Windows environment does not support WSL2, so I cannot use this solution. Moreover, the setfacl change is not persistent, but that is another story.
An alternative solution might be activating the "Expose daemon on tcp://localhost:2375 without TLS" option. However, this is not acceptable from a security point of view, so I have crossed it out.
I am curious whether it is even possible to implement a Docker-outside-of-Docker setup for Jenkins on Docker Desktop for Windows. Considering the restrictions named above, I am open to alternative setups/solutions as well.
I am quite new to Docker and not very experienced with Jenkins, so if I use the wrong terminology or approach, please let me know.

How to run sitespeed.io on an Apache/nginx server?

I have recently heard about sitespeed.io and started using it to measure the performance of my site.
I am running it in a Docker container on my GCP cloud instance.
The problem is that every time I run the command, it stores the result in a particular directory, sitespeed-result, and then I need to copy the whole thing to my local Windows machine to view the index.html file.
Is it possible to serve this from a server like Apache? For example, I can run an Apache container on my Docker host, but how do I map the sitespeed.io results so that they are available at http://my-gcp-instance:80, where my Apache container is listening on port 80?
sudo docker run -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
Sorry for answering my own question, but I got it working.
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
$(pwd) is where I am storing the sitespeed results.
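In other words, both containers point at the same host directory: sitespeed.io writes its sitespeed-result folder into it, and Apache serves it over HTTP. A minimal end-to-end sketch (the directory path is an assumption):
mkdir -p /home/me/reports && cd /home/me/reports
# sitespeed.io drops sitespeed-result/ into the mounted directory
sudo docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.0 https://mywebsite.com
# Apache serves the same directory on port 8080
sudo docker run -dit --name my-apache -p 8080:80 -v "$(pwd)":/usr/local/apache2/htdocs/ httpd:2.4
# the report is then browsable at http://my-gcp-instance:8080/sitespeed-result/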

CI & Docker-in-a-Docker

I am trying to integrate Docker into my CI platform. After getting this working with a Docker-in-Docker solution, I came across a blog post by one of the Docker maintainers, which says that instead of using Docker-in-Docker for CI, I should simply mount /var/run/docker.sock into my CI container.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So I tried this. I ran the following command:
docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins
Using jenkins as my CI container.
When running the above command, jenkins starts up properly, and I can jump into the container to see that the docker.sock file is located in the /var/run/ path.
However, when I run the docker command inside the container, the shell returns the following message:
bash: docker: command not found
Does anyone know what I am missing in order to make this work per the author's instructions?
I am using Docker v. 1.11.1, on a fresh CentOS 7 box.
Thanks in advance
Figured this out today. The above command will work so long as the docker daemon + dependencies are added to the container. In my case, I ended up writing a simple Dockerfile, which also included the line:
RUN curl -sSL https://get.docker.com/ | sh
This installed Docker on the container, and when I ran docker images from within the container, I could see all of the images from my host machine. I am now able to use all of the docker commands from within the container.
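For reference, a minimal sketch of such a Dockerfile (everything beyond the quoted RUN line is an assumption):
# Hypothetical sketch: the stock jenkins image plus the Docker binaries,
# meant to be started with -v /var/run/docker.sock:/var/run/docker.sock
FROM jenkins
USER root
# the convenience script installs the Docker packages inside the image
RUN curl -sSL https://get.docker.com/ | sh
# switching back to the jenkins user may require extra socket permissions
USER jenkins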

Is it possible to restart a Docker container from inside it?

I'd like to package Selenium Grid Extras into a Docker image.
When this service is run without a Docker container, it can reboot the OS it's running on. I wonder if I can set up the container so that it can be restarted by the Selenium Grid Extras service running inside it.
I am not familiar with Selenium Grid, but as a general idea: you could mount a folder from the host as a data volume, then let Selenium write information there, like a flag file.
On the host, you have a scheduled task / cron job running that checks for this flag in the shared folder, and if it has a certain status, you invoke a docker restart from there.
Not sure if there are more elegant solutions for this, but this is what came to my mind ad hoc.
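A minimal sketch of that host-side check (all paths and names are assumptions):
#!/bin/sh
# Hypothetical cron script on the host: restart the container when the
# flag file written by the containerized service appears in the shared folder.
FLAG=/srv/selenium-shared/restart.flag
if [ -f "$FLAG" ]; then
    rm -f "$FLAG"
    docker restart selenium-grid
fi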
Update:
I just found this on the Docker forum:
https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
I'm not sure about CoreOS, but normally you can manage your host containers from within a container by mounting the Docker socket. Such as:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest sh -c "apt-get update ; apt-get install docker.io -y ; bash"
or
https://registry.hub.docker.com/u/abh1nav/dockerui/

How to use --volume option with Docker Toolbox on Windows?

How can I share a folder between my Windows files and a Docker container, by mounting a volume with a simple --volume command, using Docker Toolbox on Windows?
I'm using "Docker Quickstart Terminal" and when I try this:
winpty docker run -it --rm --volume /C/Users/myuser:/myuser ubuntu
I have this error:
Invalid value "C:\\Users\\myuser\\:\\myuser" for flag --volume: bad mount mode specified : \myuser
See 'docker run --help'.
Following this, I also tried
winpty docker run -it --rm --volume "//C/Users/myuser:/myuser" ubuntu
and got
Invalid value "\\\\C:\\Users\\myuser\\:\\myuser" for flag --volume: \myuser is not an absolute path
See 'docker run --help'.
This is an improvement on the selected answer, because that answer is limited to the c:\Users folder. If you want to create a volume using a directory outside of c:\Users, this is an extension.
On Windows 7, I used Docker Toolbox, which uses VirtualBox.
Open VirtualBox
Select the machine (in my case, default).
Right-click and select the Settings option
Go to Shared Folders
Add a new machine folder.
For example, in my case I have added:
Name: c/dev
Path: c:\dev
Click OK and close
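The same share can be created from a Windows command prompt instead of the GUI (a sketch using VBoxManage, assuming the machine is named "default"):
REM stop the machine first (docker-machine stop) if VirtualBox refuses to modify it while running
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" sharedfolder add "default" --name "c/dev" --hostpath "c:\dev" --automount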
Open "Docker Quickstart Terminal" and restart the docker machine.
Use this command:
$ docker-machine restart
To verify that it worked, follow these steps:
SSH to the docker machine.
Using this command:
$ docker-machine ssh
Go to the folder that you have shared/mounted.
In my case, I use this command
$ cd /c/dev
Check the owner of the folder. You can use "ls -all" and verify that the owner is "docker".
You will see something like this:
docker@default:/c/dev$ ls -all
total 92
drwxrwxrwx 1 docker staff 4096 Feb 23 14:16 ./
drwxr-xr-x 4 root root 80 Feb 24 09:01 ../
drwxrwxrwx 1 docker staff 4096 Jan 16 09:28 my_folder/
In that case, you will be able to create a volume for that folder.
You can use these commands:
docker create -v /c/dev/:/app/dev --name dev image
docker run -d -it --volumes-from dev image
or
docker run -d -it -v /c/dev/:/app/dev image
Both commands work for me. I hope this will be useful.
This is actually a known issue in the project, and there are two working workarounds:
Creating a data volume:
docker create -v //c/Users/myuser:/myuser --name data hello-world
winpty docker run -it --rm --volumes-from data ubuntu
SSHing directly into the Docker host:
docker-machine ssh default
And from there doing a classic:
docker run -it --rm --volume /c/Users/myuser:/myuser ubuntu
If you are looking for a solution that resolves all the Windows issues and makes this work on Windows the same way as on Linux, then see below. I tested this and it works in all cases. I'm also showing how I got there (the steps and thinking process). I've also written an article about using Docker and dealing with Docker issues here.
Solution 1: Use VirtualBox (if you think it's not a good idea, see Solution 2 below)
Open VirtualBox (you already have it installed along with the Docker tools)
Create a virtual machine
(This is optional; you can skip it and forward ports from the VM.) Create a second Ethernet card (bridged); this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS release that is more than a year old
Install Docker
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu) while still being able to work in Windows
Done
Bonus:
Everything is working the same way as on Linux
Pause/Unpause the dockerized environment whenever you want
Solution 2: Use VirtualBox (this is very similar to Solution 1, but it also shows the thinking process, which might be useful when solving similar issues)
Read that somebody moved the folders to /C/Users/Public and that it works: https://forums.docker.com/t/sharing-a-volume-on-windows-with-docker-toolbox/4953/2
Try it; realize that it doesn't make much sense in your case.
Read the entire page at https://github.com/docker/toolbox/issues/607 and try all the solutions listed there
Find this page (the one you are reading now) and try all the solutions from other comments
Find somewhere the information that setting the COMPOSE_CONVERT_WINDOWS_PATHS=1 environment variable might solve the issue.
Stop looking for the solution for a few months
Go back and check the same links again
Cry deeply
Feel the enlightenment moment
Open VirtualBox (you already have it installed along with the Docker tools)
Create a virtual machine with a second Ethernet card (bridged); this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS release that is very recent (not older than a few months)
Notice that the automounting is not really working and the integration is broken (like clipboard sharing, etc.)
Delete the virtual machine
Go out and have a drink
Rent an expensive car and drive at high speed on the highway
Destroy the car and die
Respawn in front of your PC
Install an Ubuntu LTS release that is more than a year old
Try to run docker
Notice it’s not installed
Install Docker with apt-get install docker
Install the suggested docker.io
Try to run docker-compose
Notice it’s not installed
Run apt-get install docker-compose
Try to run your project with docker-compose
Notice that it's an old version
Check your power level (it should be over 9000)
Search for how to install the latest version of Docker and find the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Uninstall the current docker-compose and docker.io
Install docker using the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu, so you can run any docker command)
Done
As of August 2016, Docker for Windows uses Hyper-V directly instead of VirtualBox, so I think it is a little different. First share the drive in Settings, then use the C: drive-letter format, but with forward slashes. For instance, I created an H:\t\REDIS directory and was able to see it mounted on /data in the container with this command:
docker run -it --rm -v h:/t/REDIS:/data redis sh
The same format (a drive letter and a colon, then forward slashes as the path separator) worked from both the Windows command prompt and Git Bash.
I found this question while googling for an answer, but I couldn't find anything that worked. Things seemed to work with no errors being thrown, but I just couldn't see the data on the host (or vice versa). Finally, I looked closely at the settings and tried the format they show:
So first, you have to share the whole drive with the Docker VM in Settings; I think that gives the 'docker-machine' VM running in Hyper-V access to that drive. Then you have to use the format shown there, which seems to exist only in that one settings image and in no documentation or questions I could find on the web:
docker run --rm -v c:/Users:/data alpine ls /data
Simply using double leading slashes worked for me on Windows 7:
docker run --rm -v //c/Users:/data alpine ls /data/
Taken from here: https://github.com/moby/moby/issues/12590
Try this:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~ to make sure you are in the Windows user directory.
$ docker run -it -v /$(pwd)/ubuntu:/windows ubuntu
It will work if the error was due to a typo. You will get an empty folder named ubuntu in your user directory. You will see this folder with the name windows in your Ubuntu container.
For those using VirtualBox who prefer a command-line approach:
1) Make sure the docker-machine is not running
Docker Quickstart Terminal:
docker-machine stop
2) Create the share between Windows and the docker-machine
Windows command prompt:
(Modify the following to fit your scenario. I feed my Apache httpd container from a directory synced via Dropbox.)
set VBOX=D:\Program Files\Oracle\VirtualBox\VBoxManage.exe
set VM_NAME=default
set NAME=c/htdocs
set HOSTPATH=%DROPBOX%\htdocs
"%VBOX%" sharedfolder add "%VM_NAME%" --name "%NAME%" --hostpath "%HOSTPATH%" --automount
3) Start the docker-machine and mount the volume in a new container
Docker Quickstart Terminal:
(Again, I am starting an Apache httpd container, hence that port exposing.)
docker-machine start
docker run -d --name my-apache-container-0 -p 80:80 -v /c/htdocs:/usr/local/apache2/htdocs my-apache-image:1.0
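To check that the share is visible inside the VM before starting the container, a quick sanity check (assuming the machine is named default, as above):
docker-machine ssh default "ls /c/htdocs"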
Sharing folders between VirtualBox, Docker Toolbox on Windows 7, and a Node.js image container
Using:
Docker Quickstart Terminal [QST]
Windows Explorer [WE]
Let's start...
[QST] Open Docker Quickstart Terminal
[QST] Stop the virtual machine:
$ docker-machine stop
[WE] Open Windows Explorer
[WE] Go to the VirtualBox installation directory
[WE] Open a cmd window and execute:
C:\Program Files\Oracle\VirtualBox>VBoxManage sharedfolder add "default" --name "/d/SVN_FOLDERS/X2R2_WP6/nodejs" --hostpath "\\?\d:\SVN_FOLDERS\X2R2_WP6\nodejs" --automount
Check in the Oracle VirtualBox VM that the new shared folder has appeared
[QST] Start the virtual machine:
$ docker-machine start
[QST] Run the nodejs container:
docker stop nodejs
docker rm nodejs
docker run -d -it --rm --name nodejs -v /d/SVN_FOLDERS/X2R2_WP6/nodejs:/usr/src/app -w /usr/src/app node2
[QST] Open a bash shell in the container:
docker exec -i -t nodejs /bin/bash
[QST] Run dir and you will see the shared files
I solved it!
Add a volume:
docker run -d -v my-named-volume:C:\MyNamedVolume testimage:latest
Mount a host directory:
docker run -d -v C:\Temp\123:C:\My\Shared\Dir testimage:latest
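To confirm the named volume was created after the first run, it can be inspected (a quick check):
docker volume inspect my-named-volume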
