I am searching for the ./data folder described in the Storage section of the Prometheus documentation.
I run a basic Prometheus Docker container (prom/prometheus) on Kubernetes. If I execute a shell inside the container, the working directory is /prometheus and it contains the wal directory, but it does not have the structure mentioned in the documentation, and I cannot find any metric data.
Where are the metrics stored that I can query through the Prometheus GUI?
As you can see in the Prometheus Dockerfile on GitHub (https://github.com/prometheus/prometheus/blob/master/Dockerfile#L24), the working directory is /prometheus, and that is where you will find all the metrics and data.
Below is the data present in the /prometheus directory:
/prometheus $ ls -l
total 8
-rw-r--r-- 1 nobody nogroup 0 May 28 12:39 lock
-rw-r--r-- 1 nobody nogroup 20001 May 28 12:45 queries.active
drwxr-xr-x 2 nobody nogroup 4096 May 28 12:39 wal
And this is the Time Series Database (TSDB), which you can't decode directly. If you run more exporters, you will probably see more folders, such as chunks.
Ref: https://prometheus.io/docs/prometheus/1.8/storage/
I have figured out the problem now. The structure mentioned in the documentation needs several hours to build up.
./data
├── 01BKGTZQ1HHWHV8FBJXW1Y3W0K
│ └── meta.json
├── 01BKGV7JC0RY8A6MACW02A2PJD
│ ├── chunks
│ │ └── 000001
│ ├── tombstones
│ ├── index
│ └── meta.json
└── wal
├── 00000002
└── checkpoint.000001
The first of those folders starting with 01... appeared after 4 hours. The next one took another 2 hours. The timing seems to vary according to the amount of metrics you are pulling.
Sidenote: As nischay goyal wrote in his answer, the Prometheus TSDB cannot be decoded directly. You can query the Prometheus HTTP API with a script to export metrics (a sketch follows below).
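A minimal sketch of such an export (the server address, metric name and time range are placeholders, not from the question):
# Pull raw samples for a metric over the HTTP API instead of reading the TSDB files.
curl -g 'http://localhost:9090/api/v1/query_range?query=up&start=2018-05-28T00:00:00Z&end=2018-05-28T12:00:00Z&step=60s'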
It has been stated that:
The shim allows for daemonless containers. It basically sits as the parent of the container's process to facilitate a few things.
It keeps the STDIO and other fds open for the container in case containerd and/or docker both die. If the shim was not running then the parent side of the pipes or the TTY master would be closed and the container would exit.
However, from a process level, it appears that containerd spawns containerd-shim, so if containerd is down I would expect containerd-shim to go down too.
Can someone explain how containerd-shim can remain up if containerd/docker are down?
$ ps fxa | grep dockerd -A 3
PID TTY STAT TIME COMMAND
43449 pts/2 S+ 0:00 \_ grep dockerd -A 3
117536 ? Ssl 163:36 /usr/bin/containerd
93633 ? Sl 1:01 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/8f75a1b32bb09611430ea55958b11a482b6c83ba2a75f7ca727301eb49a2770f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd(117536)─┬─containerd-shim(8394)─┬─bash(8969)
│ │ └─sh(8420)─┬─sshd(8512)
│ │ └─tail(8514)
│ ├─containerd-shim(13170)───bash(13198)
│ ├─containerd-shim(13545)───portainer(13577)
│ ├─containerd-shim(14156)───mysqld(14184)
...
├─dockerd(42320)─┬─docker-proxy(42700)
│ ├─docker-proxy(42713)
│ ├─docker-proxy(42725)
│ ├─docker-proxy(42736)
│ └─docker-proxy(42749)
UPDATE: Based on the explanation provided in the accepted answer:
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd(117536)─┬─containerd-shim(8394)─┬─bash(8969)
│ │ └─sh(8420)─┬─sshd(8512)
│ │ └─tail(8514)
│ ├─containerd-shim(13170)───bash(13198)
│ ├─containerd-shim(13545)───portainer(13577)
│ ├─containerd-shim(14156)───mysqld(14184)
$ sudo kill -9 117536
$ pstree -lpTs
systemd(1)─┬─VGAuthService(45146)
├─accounts-daemon(1053)
├─agetty(104696)
├─agetty(104707)
├─agetty(104716)
├─atd(993)
├─containerd-shim(8394)─┬─bash(8969)
│ └─sh(8420)─┬─sshd(8512)
│ └─tail(8514)
├─containerd-shim(13170)───bash(13198)
├─containerd-shim(13545)───portainer(13577)
├─containerd-shim(14156)───mysqld(14184)
However, from a process level, it appears that containerd spawns containerd-shim, so if containerd is down I would expect containerd-shim to go down too.
Child processes don't automatically die when their parent dies; they are simply re-parented to PID 1. systemd takes over as the parent, and containerd-shim continues running.
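A quick way to see this behaviour in isolation (a hedged sketch, not from the original answer; the commands and sleep duration are arbitrary):
# Spawn a shell that starts a long-lived child in the background and then exits immediately.
bash -c 'sleep 600 & echo "child pid: $!"'
# The intermediate bash is gone; the orphaned sleep has been re-parented,
# so its PPID is now 1 (systemd) rather than the dead shell.
ps -o pid,ppid,comm -C sleep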
Started by user sabari k
Building in workspace /var/lib/jenkins/workspace/actualdairy
[actualdairy] $ /bin/sh -xe /tmp/jenkins4465259595371700187.sh
+ echo hello
+ cd
+ ./actualDairy/deploy.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
System information as of Thu Oct 25 20:05:25 UTC 2018
System load: 0.09 Processes: 90
Usage of /: 8.7% of 24.06GB Users logged in: 1
Memory usage: 38% IP address for eth0:
Swap usage: 0% IP address for eth1:
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
43 packages can be updated.
6 updates are security updates.
Welcome to DigitalOcean's One-Click Node.js Droplet.
To keep this Droplet secure, the UFW firewall is enabled.
All ports are BLOCKED except 22 (SSH), 80 (HTTP), and 443 (HTTPS).
To get started, visit http://do.co/node1804
To delete this message of the day: rm -rf /etc/update-motd.d/99-one-click
mesg: ttyname failed: Inappropriate ioctl for device
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
npm WARN optional Skipping failed optional dependency /chokidar/fsevents:
npm WARN notsup Not compatible with your operating system or architecture: fsevents@1.2.4
Use --update-env to update environment variables
[PM2] Applying action restartProcessId on app [all](ids: 0)
[PM2] www ✓
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
│ www │ 0 │ 0.0.2 │ fork │ 23688 │ online │ 65 │ 0s │ 0% │ 5.4 MB │ root │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
Use pm2 show <id|name> to get more details about an app
Finished: SUCCESS
I don't know why the git pull is not working. My deploy script is:
#!/bin/bash
ssh root@ipaddress <<EOF
cd actualdairy
git pull
npm install
pm2 restart all
exit
EOF
I added the remote server's public key in Bitbucket, but it's not pulling from the repo, saying Permission denied (publickey).
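One hedged way to narrow this down (the host address is a placeholder) is to reproduce the same non-interactive hop Jenkins performs and test the Bitbucket key from the remote host:
# Run the test over a non-interactive SSH session, just like the deploy script does;
# a working key prints an authentication confirmation instead of "Permission denied".
ssh root@ipaddress 'ssh -T git@bitbucket.org'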
Problem Statement
The NGINX image is configured to send the main NGINX access and error logs to the Docker log collector by default. This is done by linking them to stdout and stderr, which causes all messages from both logs to be stored in the file /var/lib/docker/containers/<container id>/<container id>-json.log on the Docker Host.
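For reference, the linking is done roughly like this in the official nginx Dockerfile (a paraphrased sketch, not copied verbatim):
# Forward the main logs to the container's std streams so the json-file log
# driver (and `docker logs`) picks them up.
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log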
Since the hard work of getting the logs out of the container and onto the host has already been taken care of for us, perhaps we should try to leverage that? But there are numerous indistinguishable folders in /var/lib/docker/containers/:
# ls -alrt /var/lib/docker/containers/
total 84
drwx--x--x 14 root root 4096 Jul 4 13:40 ..
drwx------ 4 root root 4096 Jul 4 13:55 a4ee4224c3e4c68a8023eb63c01b2a288019257440b30c4efb7226eb83629956
drwx------ 4 root root 4096 Jul 6 16:24 59d1465b5c42f2ce6b13747c39ff3995191d325d641b6ef8cad1a8446247ef24
...
drwx------ 4 root root 4096 Jul 9 06:34 cab3407af18d778b259f54df16e60f5e5187f14b01a020b30f6c91c6f8003bdd
drwx------ 4 root root 4096 Jul 9 06:35 0b99140af456b29af6fcd3956a6cdfa4c78d1e1b387654645f63b8dc4bbf049c
drwx------ 21 root root 4096 Jul 9 06:35 .
Even if we narrow them down by searching recursively through /var/lib/docker/containers/ for any files that are of type -json.log and contain the string upstream_response_time
# grep -lr "upstream_response_time" /var/lib/docker/containers/ --include "*-json.log"
/var/lib/docker/containers/cfe8...fe18/cfe8...fe18-json.log
/var/lib/docker/containers/c3c3...6662/c3c3...6662-json.log
... still leaves us in a situation where we will constantly have to step in to find the correct folders due to containers starting/stopping ... we would be stuck reconfiguring FileBeat to crawl them.
Question: So how can the docker container log folders be renamed to give them a predictable name?
Alternatives
Here are certain other methods that I've ruled out but feel free to differ.
Setting up a named volume
$ tree /var/lib/docker/volumes/*nginx-log-volume
/var/lib/docker/volumes/my_swarm_stack_nginx-log-volume
└── _data
├── access.log -> /dev/stdout
└── error.log -> /dev/stderr
The named volume exists as a combination of the stack name and the named volume name: my_swarm_stack_nginx-log-volume. But rather than being regular files, these are symlinks to the std streams, so I felt that this approach is invalid.
I think you are over-complicating the problem at hand. Filebeat already has a lot of configurable options; you don't need to reinvent stuff like this.
I suggest you just use the add_docker_metadata processor. It attaches useful information such as the image and container name to each log line produced by a container, which can then be checked by the drop_event processor; you can set up the conditions so that you only accept logs from a specific container.
processors:
  - add_docker_metadata: ~
  - drop_event:
      when:
        not:
          regexp:
            docker.container.name: "^nginx"
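For context, a minimal sketch of where these processors sit in a full filebeat.yml is shown below; the docker input stanza and the output section are assumptions about your Filebeat version and setup, not part of this answer:
filebeat.inputs:
  - type: docker              # read the json logs under /var/lib/docker/containers
    containers.ids:
      - '*'
processors:
  - add_docker_metadata: ~
  - drop_event:
      when:
        not:
          regexp:
            docker.container.name: "^nginx"
output.elasticsearch:
  hosts: ["localhost:9200"]   # placeholder output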
Adding Docker Metadata Documentation
Filtering Using Drop Processor
I am implementing Notary in a virtual machine. For reference, I have a Docker registry on host A and I want to deploy the Notary Server, Signer and CLI on host B so that I can push images to the registry and sign them from a different machine. However, the problem happens when I try to sign an image on host B with the targets role. The following error message appears:
[root@HostB ~]# docker push my.registry:443/galera-leader-proxy:v1.0.0
The push refers to a repository [my.registry:443/galera-leader-proxy]
5f70bf18a086: Layer already exists
1de59669c563: Layer already exists
17dd9fb03617: Layer already exists
26093688fdcb: Layer already exists
e08be57f5919: Layer already exists
v1.0.0: digest: sha256:6e48967416ea76ba2825511da7b05107a41f585629009d18ccbf30a1e1ce0e5a size: 2179
Signing and pushing trust metadata
ERRO[0000] couldn't add target to targets: could not find necessary signing keys, at least one of these keys must be available: b92334936cf0a0f0e3fb9dce459212537387847ee288ce27762fd54850f89e6f
Failed to sign "my.registry:443/galera-leader-proxy":v1.0.0 - could not find necessary signing keys, at least one of these keys must be available: b92334936cf0a0f0e3fb9dce459212537387847ee288ce27762fd54850f89e6f
Error: could not find signing keys for remote repository my.registry:443/galera-leader-proxy, or could not decrypt signing key: could not find necessary signing keys, at least one of these keys must be available: b92334936cf0a0f0e3fb9dce459212537387847ee288ce27762fd54850f89e6f
The Docker image is pushed to the registry, but at signing time I get the error message that the signing "keys" cannot be found. However, if I list the notary keys, the key that supposedly cannot be found is available. So I do not know why this happens or what I have configured badly:
[root@HostB ~]# dockernotary key list
ROLE GUN KEY ID LOCATION
---- --- ------ --------
root 7b8139837e3bf8b013f69bf0750d46ba0f70a6a6d9640eadcb592ae8a5ae2c0d /home/gmaurelia/.docker/trust/private
snapshot ...43/galera-leader-proxy 92cf3f72d573cab7b6045f72fe224a4ccf786e9ddd29c89b3a542b610061c763 /home/gmaurelia/.docker/trust/private
targets ...43/galera-leader-proxy b92334936cf0a0f0e3fb9dce459212537387847ee288ce27762fd54850f89e6f /home/gmaurelia/.docker/trust/private
PS: alias dockernotary="notary -c /home/gmaurelia/.docker/trust/config.json -d /home/gmaurelia/.docker/trust/ -s https://notary-server:4443"
I cannot even sign under the targets or targets/releases roles.
For notary on multiple hosts, you need to perform a delegation step on your first host. This is a multi-step process documented by docker that involves the following:
Generate a TLS key pair on host B (the commands below include a self-signed step; you could also have the certificate signed by a trusted CA):
openssl genrsa -out delegation.key 2048
openssl req -new -sha256 -key delegation.key -out delegation.csr
openssl x509 -req -sha256 -days 365 -in delegation.csr -signkey delegation.key -out delegation.crt
Copy the crt file from host B to host A and add the new certificate delegation with a notary command on host A. Then publish that change up to the server (the below assumes docker.io is your server):
notary delegation add docker.io/<username>/<imagename> targets/releases delegation.crt --all-paths
notary publish docker.io/<username>/<imagename>
Import the new TLS key on host B to be used by notary:
notary key import delegation.key --role user
Now you should be able to generate signatures on host B.
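For example (a hedged sketch; the image name comes from the question and the Notary server address from your alias):
# Sign on push from host B using the imported delegation key.
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://notary-server:4443
docker push my.registry:443/galera-leader-proxy:v1.0.0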
With notary, you should take care to protect and backup the root certificate that was generated on host A. This is often referred to as the offline certificate. If security of your two hosts is not a concern (you fully trust them), you could simply copy the $HOME/.docker/trust folder between the two.
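A minimal sketch of that copy (the host name is a placeholder):
# Copy the whole local trust store, including private keys, to the other host.
rsync -a ~/.docker/trust/ hostB:~/.docker/trust/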
The problem I had was that, before running docker push, I had applied the command notary init my.registry:443/collection, so notary generated a collection with different keys, and because of this I could not push and sign any image under any role, not even targets.
Once I did it the right way, I applied the steps you mentioned and the problem was solved. The resulting notary configuration is the following:
command: tree $HOME/.docker/trust/
.docker/trust
├── certs
│ ├── delegation.crt
│ └── proof
│ ├── delegation.crt
│ ├── delegation.csr
│ └── delegation.key
├── config.json
├── private
│ ├── root_keys
│ │ └── 4e46a197de40621094f86e0cea4aa892d7c3cfb1b3400c64f6d7d82e4b97a470.key
│ └── tuf_keys
│ ├── 3269a0858ca91001c543435d0242e747bd08e68b52533f1b42028388ed02c7e6.key
│ └── my.registry:443
│ └── galera-leader-proxy
│ └── 873ba8267df2be149fba2230441961812159c35537b18c133247239f4bafa989.key
├── root-ca.crt
├── tls
│ └── my.registry:443
│ └── root-ca.crt
└── tuf
└── my.registry:443
└── galera-leader-proxy
├── changelist
└── metadata
├── root.json
├── snapshot.json
├── targets
│ ├── kube1.json
│ └── releases.json
├── targets.json
└── timestamp.json
On the other hand, to configure the client correctly I defined the following alias:
alias dockernotary="notary -c $HOME/.docker/trust/config.json -d $HOME/.docker/trust/ -s https://notary-server:4443"
Regards.
I know the question has already been asked (a long time ago), but I cannot find any answer, so I am asking it one more time: I have a "complex" (i.e. deep) tree of Docker images locally, and I would like to see the differences between images.
[lgmasapp203 ~]$ docker images -t
Warning: '-t' is deprecated, it will be removed soon. See usage.
├─64e5325c0d9d Virtual Size: 125.1 MB
│ └─bf84c1d84a8f Virtual Size: 125.1 MB
│ └─87de57de6955 Virtual Size: 169.5 MB
│ └─6a974bea7c0d Virtual Size: 291.8 MB
│ └─06c293acac6e Virtual Size: 292.6 MB
│ └─b8a058108e9e Virtual Size: 292.6 MB
│ └─9aa09af53eee Virtual Size: 292.6 MB
│ └─a0513c939a75 Virtual Size: 292.6 MB
│ └─f509350ab0be Virtual Size: 292.6 MB
│ └─b0b7b9978dda Virtual Size: 292.6 MB
│ └─6a0b67c37920 Virtual Size: 815.9 MB
I already tried the docker save <image-id> method, then extracted the tar file and compared the entries, but what I get is only a bunch of json, VERSION and layer.tar files:
[lgmasapp203 ~]$ find 226a
226a
226a/9aa09af53eeee5a36dfd4f0542cf61ec16c3c168e3b6303b49a7bd5b804b1f56
226a/9aa09af53eeee5a36dfd4f0542cf61ec16c3c168e3b6303b49a7bd5b804b1f56/json
226a/9aa09af53eeee5a36dfd4f0542cf61ec16c3c168e3b6303b49a7bd5b804b1f56/VERSION
226a/9aa09af53eeee5a36dfd4f0542cf61ec16c3c168e3b6303b49a7bd5b804b1f56/layer.tar
226a/e617952427002a05bebf16ba89b0bcaf93a91c786171a6bebedae828ccce7c48
226a/e617952427002a05bebf16ba89b0bcaf93a91c786171a6bebedae828ccce7c48/json
226a/e617952427002a05bebf16ba89b0bcaf93a91c786171a6bebedae828ccce7c48/VERSION
226a/e617952427002a05bebf16ba89b0bcaf93a91c786171a6bebedae828ccce7c48/layer.tar
226a/dddd9e457da7e4ad86d2f6323541bfd439cf290716416a40a9fb4944ecee5c87
226a/dddd9e457da7e4ad86d2f6323541bfd439cf290716416a40a9fb4944ecee5c87/json
226a/dddd9e457da7e4ad86d2f6323541bfd439cf290716416a40a9fb4944ecee5c87/VERSION
226a/dddd9e457da7e4ad86d2f6323541bfd439cf290716416a40a9fb4944ecee5c87/layer.tar
I also tried to take a look directly into the /var/lib/docker directory, but did not find anything.
So I started back from "scratch", with a very simple example:
[lgmasapp203 ~]$ docker run centos touch xxx
[lgmasapp203 ~]$ docker ps -n 1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b43b9b84172f centos:7 "touch xxx" 7 seconds ago Exited (0) 5 seconds ago backstabbing_hoover
[lgmasapp203 ~]$ docker commit b43b xxx
4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4
[lgmasapp203 ~]$ docker run xxx ls -l xxx
-rw-r--r-- 1 root root 0 Jul 2 14:31 xxx
Everything looks fine, but:
[lgmasapp203 docker]# find /var/lib/docker/graph/4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4/
/var/lib/docker/graph/4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4/
/var/lib/docker/graph/4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4/layersize
/var/lib/docker/graph/4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4/json
And I did not find anything related to "layer" (as mentioned here). I do not understand why.
Furthermore:
[lgmasapp203 docker]# find /var/lib/docker/ | grep xxx
[lgmasapp203 docker]#
So where did my file xxx go?
It appears to me that this would be a "basic" feature... and so I'm surprised it has not been addressed already...
Does that mean I have to rely on the "comment" section of the json? That would be seriously astonishing :-/
I know that's a lot of questions ;-)
Thanks in advance for any enlightenment,
Christophe
I think you didn't find your file with find because of filesystem permissions.
If you look in /var/lib/docker/aufs/diff/4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4 you might find your xxx file.
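For example, a hedged check (the path comes from this answer; it requires root and assumes the aufs storage driver):
# List the layer's filesystem delta; the xxx file created by `touch xxx` should show up here.
sudo ls -la /var/lib/docker/aufs/diff/4b0ed5d4fd1a09e062a02b7066b83115d73a1811863c597f6c5bda01a90507f4/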