I have an InfluxDB database that is losing data due to the activation of the retention policy.
I upgraded InfluxDB from v1.6.3 to v1.7.7, but the behavior is the same.
> SHOW RETENTION POLICIES ON "telegraf"
name duration shardGroupDuration replicaN default
---- -------- ------------------ -------- -------
autogen 0s 168h0m0s 1 false
forever 0s 168h0m0s 1 true
Aug 16 06:02:25 influxdb influxd[805]: ts=2019-08-16T09:02:25.623073Z lvl=info msg="Retention policy deletion check (start)" log_id=0HEpQh70000 service=retention trace_id=0HIQTFLW000 op_name=retention_delete_check op_event=start
Aug 16 06:02:25 influxdb influxd[805]: ts=2019-08-16T09:02:25.623477Z lvl=info msg="Retention policy deletion check (end)" log_id=0HEpQh70000 service=retention trace_id=0HIQTFLW000 op_name=retention_delete_check op_event=end op_elapsed=0.487ms
Aug 16 06:32:25 influxdb influxd[805]: ts=2019-08-16T09:32:25.623033Z lvl=info msg="Retention policy deletion check (start)" log_id=0HEpQh70000 service=retention trace_id=0HISB6aW000 op_name=retention_delete_check op_event=start
Aug 16 06:32:25 influxdb influxd[805]: ts=2019-08-16T09:32:25.623339Z lvl=info msg="Retention policy deletion check (end)" log_id=0HEpQh70000 service=retention trace_id=0HISB6aW000 op_name=retention_delete_check op_event=end op_elapsed=0.352ms
Aug 16 07:02:25 influxdb influxd[805]: ts=2019-08-16T10:02:25.622970Z lvl=info msg="Retention policy deletion check (start)" log_id=0HEpQh70000 service=retention trace_id=0HITtyqW000 op_name=retention_delete_check op_event=start
Aug 16 07:02:25 influxdb influxd[805]: ts=2019-08-16T10:02:25.623272Z lvl=info msg="Retention policy deletion check (end)" log_id=0HEpQh70000 service=retention trace_id=0HITtyqW000 op_name=retention_delete_check op_event=end op_elapsed=0.362ms
Aug 16 07:32:25 influxdb influxd[805]: ts=2019-08-16T10:32:25.622899Z lvl=info msg="Retention policy deletion check (start)" log_id=0HEpQh70000 service=retention trace_id=0HIVbq5W000 op_name=retention_delete_check op_event=start
Aug 16 07:32:25 influxdb influxd[805]: ts=2019-08-16T10:32:25.623780Z lvl=info msg="Retention policy deletion check (end)" log_id=0HEpQh70000 service=retention trace_id=0HIVbq5W000 op_name=retention_delete_check op_event=end op_elapsed=0.917ms
Aug 16 08:02:25 influxdb influxd[805]: ts=2019-08-16T11:02:25.622839Z lvl=info msg="Retention policy deletion check (start)" log_id=0HEpQh70000 service=retention trace_id=0HIXKhLW000 op_name=retention_delete_check op_event=start
Aug 16 08:02:25 influxdb influxd[805]: ts=2019-08-16T11:02:25.622987Z lvl=info msg="Retention policy deletion check (end)" log_id=0HEpQh70000 service=retention trace_id=0HIXKhLW000 op_name=retention_delete_check op_event=end op_elapsed=0.171ms
I should never see the retention policy deleting anything, as the duration is set to 0s. Any help is much appreciated.
If you don't want the forever retention policy to stay, just run the following query against influx:
> DROP RETENTION POLICY "forever" ON "telegraf"
Then make the autogen retention policy the default for the telegraf database:
> ALTER RETENTION POLICY "autogen" ON "telegraf" DEFAULT
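If you prefer to run these from the shell, here is a minimal sketch using the 1.x influx CLI (add -username/-password if authentication is enabled on your server):
influx -execute 'DROP RETENTION POLICY "forever" ON "telegraf"'
influx -execute 'ALTER RETENTION POLICY "autogen" ON "telegraf" DEFAULT'
influx -execute 'SHOW RETENTION POLICIES ON "telegraf"'   # confirm that autogen is now the default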
First post here, so apologies for any errors.
I have a Docker environment that exhibits a really strange problem.
It used to work flawlessly when I was on 18.09.2, but then I needed to upgrade Docker because some containers required a newer API version (IIRC).
I upgraded to 20.10.2 (without a reboot) and everything seemed to be OK: the containers started and I could use them.
After some time a power failure forced a reboot, and since then I have had this problem.
At boot, docker commands result in:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So I searched the logs (/var/log/docker.log) and found:
time="2021-08-30T16:40:11.702266553+02:00" level=info msg="Starting up"
time="2021-08-30T16:40:11.715505120+02:00" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2021-08-30T16:40:11.728188524+02:00" level=info msg="libcontainerd: started new containerd process" pid=9883
time="2021-08-30T16:40:11.728497763+02:00" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-08-30T16:40:11.728564781+02:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-08-30T16:40:11.728723243+02:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
time="2021-08-30T16:40:11.728841483+02:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-08-30T16:40:11.813209337+02:00" level=info msg="starting containerd" revision=269548fa27e0089a8b8278fc4fc781d7f65a939b version=1.4.3
time="2021-08-30T16:40:11.928783093+02:00" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2021-08-30T16:40:11.929009055+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.936721860+02:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Modu
le aufs not found in directory /lib/modules/5.4.65-v7l-sarpi4\\n\"): skip plugin" type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.936880396+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937437133+02:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4)
must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937510744+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937618391+02:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2021-08-30T16:40:11.937684465+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.937796094+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.938041796+02:00" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.938477682+02:00" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a z
fs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
time="2021-08-30T16:40:11.938549200+02:00" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2021-08-30T16:40:11.938622793+02:00" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2021-08-30T16:40:11.938674255+02:00" level=info msg="metadata content store policy set" policy=shared
time="2021-08-30T16:40:11.938972068+02:00" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2021-08-30T16:40:11.939055994+02:00" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2021-08-30T16:40:11.939191530+02:00" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939374825+02:00" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939489232+02:00" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939557250+02:00" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939634268+02:00" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939699008+02:00" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939768008+02:00" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939834674+02:00" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2021-08-30T16:40:11.939925785+02:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2021-08-30T16:40:11.940284968+02:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2021-08-30T16:40:12.729504178+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
time="2021-08-30T16:40:15.081866772+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
time="2021-08-30T16:40:18.723223037+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
time="2021-08-30T16:40:23.950263284+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc
= \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
failed to start containerd: timeout waiting for containerd to start
I've banged my head against the wall and finally found that if I remove the
/var/run/docker/containerd
directory, I can start dockerd and containerd without any issue, but I obviously lose every docker instance and need to docker rm and docker start my containers again.
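In case it's useful, this is a rough sketch of the workaround I'm using for now (the /etc/rc.d/rc.docker path is what the package installs on my box; adjust for your init setup):
/etc/rc.d/rc.docker stop                 # make sure dockerd is not running
rm -rf /var/run/docker/containerd        # remove the stale containerd state that blocks startup
/etc/rc.d/rc.docker start                # dockerd and containerd then come up cleanly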
Do you have any idea why this happens?
My environment:
root@casa:/var/adm/packages# cat /etc/slackware-version
Slackware 14.2+
root@casa:/var/adm/packages# uname -a
Linux casa.pigi.org 5.4.65-v7l-sarpi4 #3 SMP Mon Sep 21 10:13:26 BST 2020 armv7l BCM2711 GNU/Linux
root@casa:/var/adm/packages# docker info
Client:
Context: default
Debug Mode: false
Server:
Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 9
Server Version: 20.10.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc version:
init version: fec3683 (expected: de40ad0)
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.65-v7l-sarpi4
Operating System: Slackware 14.2 arm (post 14.2 -current)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 3.738GiB
Name: casa.pigi.org
ID: HF4Y:7TDZ:O5GV:HM7H:YCVS:CLKW:GNOM:6PSA:XRCQ:3BQU:TZ3P:URLD
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: pigi102
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
WARNING: No blkio weight support
WARNING: No blkio weight_device support
root@casa:/var/adm/packages# runc -v
runc version spec: 1.0.1-dev
root@casa:/var/adm/packages# /usr/bin/docker -v
Docker version 20.10.2, build 2291f61
root@casa:/var/adm/packages# containerd -v
containerd github.com/containerd/containerd 1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b
docker-proxy-20201215_fa125a3
Thanks in advance.
Pigi_102
I did some more tests, and it seems that if I run containerd manually (with all the options and flags dockerd passes to it) and wait long enough, it eventually starts, and from then on dockerd is able to start.
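In case anyone wants to reproduce this, a rough sketch of what I did (the config path is just what I saw on my system; check ps for the exact flags dockerd passes on yours):
ps aux | grep containerd                                                          # while dockerd is trying to start, to capture the exact flags
containerd --config /var/run/docker/containerd/containerd.toml --log-level info   # then run it manually with those same flags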
I managed to fix my problem by downgrading to Docker 19.03.15 and containerd 1.2.13.
With these versions everything is working as expected.
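For the record, a minimal sketch of how the downgrade looks on Slackware (the package file names are placeholders for whatever ARM builds you use; removepkg and installpkg are the stock pkgtools):
removepkg docker containerd
installpkg docker-19.03.15-arm-1.txz containerd-1.2.13-arm-1.txz
/etc/rc.d/rc.docker start      # or however you start dockerd on your setup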
Pigi
OK, I installed InfluxDB (on Ubuntu 20.04) as described on the official downloads page https://portal.influxdata.com/downloads/, specifically with these commands:
wget https://dl.influxdata.com/influxdb/releases/influxdb_2.0.2_amd64.deb
sudo dpkg -i influxdb_2.0.2_amd64.deb
Then I ran the commands to start the daemon and make it persist across reboots:
systemctl enable --now influxdb
systemctl status influxdb
and it shows up as enabled and running normally:
● influxdb.service - InfluxDB is an open-source, distributed, time series database
Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-11-20 17:43:54 -03; 55min ago
Docs: https://docs.influxdata.com/influxdb/
Main PID: 750 (influxd)
Tasks: 7 (limit: 1067)
Memory: 33.8M
CGroup: /system.slice/influxdb.service
└─750 /usr/bin/influxd
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754479Z lvl=info msg="Open store (start)" log_id=0QarEkHl000 service=storage-engine op_name=tsdb_open op_event=start
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754575Z lvl=info msg="Open store (end)" log_id=0QarEkHl000 service=storage-engine op_name=tsdb_open op_event=end op_elapsed=0.098ms
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754661Z lvl=info msg="Starting retention policy enforcement service" log_id=0QarEkHl000 service=retention check_interval=30m
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.754888Z lvl=info msg="Starting precreation service" log_id=0QarEkHl000 service=shard-precreation check_interval=10m advance_period=30m
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.755164Z lvl=info msg="Starting query controller" log_id=0QarEkHl000 service=storage-reads concurrency_quota=10 initial_memory_bytes_quota_per_query=9223372036854775807 memory_bytes_quota_per_query=9223372036854775807 max_memory_bytes=0 queue_size=10
Nov 20 17:44:03 hypercc influxd[750]: ts=2020-11-20T20:44:03.755725Z lvl=info msg="Configuring InfluxQL statement executor (zeros indicate unlimited)." log_id=0QarEkHl000 max_select_point=0 max_select_series=0 max_select_buckets=0
Nov 20 17:44:04 hypercc influxd[750]: ts=2020-11-20T20:44:04.071001Z lvl=info msg=Starting log_id=0QarEkHl000 service=telemetry interval=8h
Nov 20 17:44:04 hypercc influxd[750]: ts=2020-11-20T20:44:04.071525Z lvl=info msg=Listening log_id=0QarEkHl000 transport=http addr=:8086 port=8086
Nov 20 18:14:03 hypercc influxd[750]: ts=2020-11-20T21:14:03.757182Z lvl=info msg="Retention policy deletion check (start)" log_id=0QarEkHl000 service=retention op_name=retention_delete_check op_event=start
Nov 20 18:14:03 hypercc influxd[750]: ts=2020-11-20T21:14:03.757233Z lvl=info msg="Retention policy deletion check (end)" log_id=0QarEkHl000 service=retention op_name=retention_delete_check op_event=end op_elapsed=0.074ms
What do I need to add to be able to type "influx" and go directly to the DB to make queries? Is it something to do with the IP address?
When I run influx, I only get the help options; it doesn't say anything about connecting or anything like that.
By the way, here https://docs.influxdata.com/influxdb/v2.0/get-started/ it is installed in a different way, but both ways are supposed to work fine.
Thanks.
Usually tools like Telegraf are used to collect data and write it to InfluxDB. You can install Telegraf on each server you want to collect data from.
https://docs.influxdata.com/telegraf/v1.17/
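A minimal sketch of pointing Telegraf at an InfluxDB 2.0 instance from the shell (the URL, token, organization and bucket below are placeholders; the token comes from the InfluxDB web UI):
sudo tee -a /etc/telegraf/telegraf.conf > /dev/null <<'EOF'
[[outputs.influxdb_v2]]
  urls = ["http://your_server_ip:8086"]
  token = "token_here"
  organization = "my-org"
  bucket = "my-bucket"
EOF
sudo systemctl restart telegraf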
You can browse to http://your_server_ip:8086 and log in to Chronograf (included in InfluxDB 2.0). There you can create dashboards and query data from InfluxDB.
It's also possible to run manual queries via the InfluxDB CLI. You can simply use the influx query command in your terminal.
https://docs.influxdata.com/influxdb/v2.0/query-data/
Note that some commands need authentication before you are allowed to execute them (e.g. the user command). You can authenticate by adding the -t parameter followed by a valid user token (can be found in the web interface).
Example: influx -t token_here user list
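Similarly, a manual Flux query from the terminal could look something like this (the bucket and organization names are placeholders; substitute your own):
influx query 'from(bucket: "my-bucket") |> range(start: -1h)' -o my-org -t token_here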
Hope this helps you out.
My Docker containers are getting removed intermittently after a few days.
-- Logs begin at Mon 2020-08-31 10:12:44 IST, end at Thu 2020-09-17 23:10:25 IST. --
Aug 31 11:31:02 SPK-X-0036 systemd[1]: Starting Docker Application Container Engine...
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.538275526+05:30" level=info msg="Starting up"
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.539105284+05:30" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.544986324+05:30" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.545033917+05:30" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.545086917+05:30" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.545114012+05:30" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.548578610+05:30" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.548640183+05:30" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.548673867+05:30" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.548698658+05:30" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.842884585+05:30" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.944238578+05:30" level=info msg="Loading containers: start."
Aug 31 11:31:02 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:02.985979893+05:30" level=warning msg="7e9847e8c2ccb1cb3690316d19b66136d8f9fd5c9c436969bc7b6303db345d30 cleanup: failed to unmount IPC: umount
Aug 31 11:31:11 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:11.081963154+05:30" level=info msg="Removing stale sandbox 69a759dcb5b230c1020e53a89ef8887e7447ce3064f930a8a914324d800cedc4 (404f29278912116b3cd04a407fe57cf16538cc623b2a2faa4328ab0cdb59fcba)"
Aug 31 11:31:11 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:11.542843230+05:30" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6711cd7b1ce0ede11e852ca3bd0114934d14e83292364c27ee5808cffa1062c4 87ea9779402a38ffacf62ce84fdbf7cc2cd8419c4d0cae22ddc8468072b7ea6c], retrying...."
Aug 31 11:31:11 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:11.804699526+05:30" level=error msg="getEndpointFromStore for eid 2cc1662e8f61a8b7c549754ddc92c0c6d59d03905719ec32be5b63bd7f0ea881 failed while trying to build sandbox for cleanup: could not find endpoint 2cc1662e8f61a8b7c549754ddc92c0c6d59d03905719ec32be5b63bd7f0ea881: []"
Aug 31 11:31:11 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:11.804739901+05:30" level=info msg="Removing stale sandbox 6c96e15d6989d7c27fd0a304aefc7bdc9bbae90335523ffbefb0fb6e0624afd0 (527c74012cd1d0447850ea3fe13670f11dd8d294758b470e8e6b21cd7ea46edd)"
Aug 31 11:31:11 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:11.804780951+05:30" level=warning msg="Failed deleting endpoint 2cc1662e8f61a8b7c549754ddc92c0c6d59d03905719ec32be5b63bd7f0ea881: failed to get endpoint from store during Delete: could not find endpoint 2cc1662e8f61a8b7c549754ddc92c0c6d59d03905719ec32be5b63bd7f0ea881: []\n"
Aug 31 11:31:12 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:12.049373612+05:30" level=info msg="Removing stale sandbox a3ac3dd4ea15a633a27320da2cae2d39de808a96d0841a34271b657f10f51483 (38bf61d74d42fad07beef7f68e1eec34a2854a149d78c1008f454b02df1bf9c4)"
Aug 31 11:31:12 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:12.412946082+05:30" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6711cd7b1ce0ede11e852ca3bd0114934d14e83292364c27ee5808cffa1062c4 b2f51b62ffe030ca537438ff0c547597c61428e3079a91f5c6b62e46fdbfe955], retrying...."
Aug 31 11:31:12 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:12.660895622+05:30" level=info msg="Removing stale sandbox edd9206a3ac0e5cd0adb28d04bdc15d4ffb36446b3557cbc7e3621e40459d9f8 (3167d0b093703c12e6978d052022241fb4c9088eec12247e743ad5ef9845240d)"
Aug 31 11:31:13 SPK-X-0036 dockerd[6678]: time="2020-08-31T11:31:13.009080357+05:30" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6711cd7b1ce0ede11e852ca3bd0114934d14e83292364c27ee5808cffa1062c4 946f45722e853160f4d194eb01916c566e9dba2b57e8b48a4265147e1707385a], retrying...."
I tried to set up the tool chain of Mosquitto, Telegraf, and InfluxDB. All three are installed on a Raspberry Pi using apt. To debug, I use a file output from Telegraf.
This connection does not work when the Pi boots. Mosquitto is working if I subscribe from outside.
Telegraf collects system and disk information. However, Telegraf does not collect MQTT information.
When I restart Mosquitto like this:
sudo service mosquitto stop
mosquitto -v
the connection is working.
When I restart Mosquitto like this:
sudo service mosquitto stop
sudo service mosquitto start
it is not working again.
What could be the difference?
I just upgraded to the latest versions, but that did not change anything.
mosquitto 1.5.7
telegraf 1.15.3
influxdb 1.8.2
The boot messages of mosquitto are fine:
Sep 14 21:34:30 raspberrypi systemd[1]: Starting Mosquitto MQTT v3.1/v3.1.1 Broker...
Sep 14 21:34:31 raspberrypi systemd[1]: Started Mosquitto MQTT v3.1/v3.1.1 Broker.
The boot messages from Telegraf report a connection to Mosquitto, though there is some trouble with InfluxDB:
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! Starting Telegraf 1.15.3
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.300652Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/16 duration=598.507ms
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.300796Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/152 duration=675.711ms
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.366628Z lvl=info msg="Opened file" log_id=0PFXdCuW000 engine=tsm1 service=filestore path=/var/lib/influxdb/data/base/autogen/2/000000001-000000001.tsm id=0 duration=11.324ms
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.374469Z lvl=info msg="Opened file" log_id=0PFXdCuW000 engine=tsm1 service=filestore path=/var/lib/influxdb/data/base/autogen/24/000000319-000000002.tsm id=0 duration=22.091ms
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! Loaded inputs: system mqtt_consumer disk
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! Loaded aggregators:
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! Loaded processors:
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! Loaded outputs: influxdb file
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! Tags enabled: host=raspberrypi
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"raspberrypi", Flush Interval:10s
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.489708Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/2 duration=188.821ms
Sep 14 21:34:54 raspberrypi telegraf[401]: 2020-09-14T19:34:54Z I! [inputs.mqtt_consumer] Connected [tcp://localhost:1883]
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.548591Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/24 duration=239.663ms
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.552787Z lvl=info msg="Opened file" log_id=0PFXdCuW000 engine=tsm1 service=filestore path=/var/lib/influxdb/data/base/autogen/32/000000271-000000002.tsm id=0 duration=22.821ms
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.788229Z lvl=info msg="Opened file" log_id=0PFXdCuW000 engine=tsm1 service=filestore path=/var/lib/influxdb/data/base/autogen/62/000000006-000000002.tsm id=0 duration=203.005ms
Sep 14 21:34:54 raspberrypi influxd[407]: ts=2020-09-14T19:34:54.842928Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/32 duration=352.965ms
Sep 14 21:34:56 raspberrypi influxd[407]: ts=2020-09-14T19:34:56.503706Z lvl=info msg="Opened file" log_id=0PFXdCuW000 engine=tsm1 service=filestore path=/var/lib/influxdb/data/base/autogen/40/000000004-000000002.tsm id=0 duration=71.762ms
Sep 14 21:34:58 raspberrypi systemd[1]: systemd-fsckd.service: Succeeded.
Sep 14 21:34:59 raspberrypi influxd[407]: ts=2020-09-14T19:34:59.734290Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/62 duration=5185.491ms
Sep 14 21:34:59 raspberrypi influxd[407]: ts=2020-09-14T19:34:59.762419Z lvl=info msg="Opened file" log_id=0PFXdCuW000 engine=tsm1 service=filestore path=/var/lib/influxdb/data/base/autogen/41/000000001-000000001.tsm id=0 duration=8.874ms
Sep 14 21:34:59 raspberrypi influxd[407]: ts=2020-09-14T19:34:59.785965Z lvl=info msg="Opened shard" log_id=0PFXdCuW000 service=store trace_id=0PFXdEbG000 op_name=tsdb_open index_version=inmem path=/var/lib/influxdb/data/base/autogen/40 duration=4942.818ms
The relevant parts of telegraf.conf are:
[[outputs.influxdb]]
urls = ["http://127.0.0.1:8086"]
database = "base"
skip_database_creation = true
username = "telegraf"
password = "****"
content_encoding = "identity"
[[outputs.file]]
files = ["stdout", "/tmp/metrics.out"]
[[inputs.mqtt_consumer]]
servers = ["tcp://localhost:1883"]
topics = ["home/garden/+"]
topic_tag = "mqtt_topic"
qos = 1
max_undelivered_messages = 1
persistent_session = true
client_id = "lord_of_the_pis"
data_format = "json"
The client_id was the problem.
client_id = "lord_of_the_pis"
With a shorter client_id, it works fine.
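For anyone hitting the same thing, the fix was just a matter of editing the config and restarting (the exact replacement name is arbitrary, as long as it is short):
sudo nano /etc/telegraf/telegraf.conf     # change client_id = "lord_of_the_pis" to a shorter name, e.g. "garden_pi"
sudo systemctl restart telegraf
journalctl -u telegraf -f                 # watch for "[inputs.mqtt_consumer] Connected [tcp://localhost:1883]"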
I have a strange issue with the Grafana Docker image: it totally ignores my custom.ini file.
The goal is to set app_mode to development without using environment variables (otherwise it would be possible with GF_DEFAULT_APP_MODE: development in docker-compose).
Here is the interesting part of my docker-compose:
grafana:
image: grafana/grafana:6.2.2
ports:
- "3000:3000"
user: ${ID}
volumes:
- "$PWD/data:/var/lib/grafana"
- "$PWD/custom.ini:/etc/grafana/custom.ini"
- "$PWD/custom.ini:/usr/share/grafana/conf/custom.ini"
- "$PWD/custom.ini:/usr/share/grafana/conf/sample.ini"
As you can see, I tried a lot of locations (just in case).
I deploy the stack using the command: ID=$(id -u) docker-compose up -d
Apart from the config problem, Grafana works great.
I can see my mounts correctly in the container, and the custom.ini file is well formatted (and I did not forget to remove the ; comment sign).
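For what it's worth, the file itself is tiny; roughly all it contains is the app_mode key (a top-level setting, outside any section), which you can check with cat:
cat custom.ini
app_mode = development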
Here are the logs (there is no mention of custom.ini or sample.ini anywhere):
Attaching to dev_grafana_1
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Starting Grafana" logger=server version=6.2.2 commit=07540df branch=HEAD compiled=2019-06-05T13:04:21+0000
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
grafana_1 | t=2019-06-11T14:20:44+0000 lvl=info msg="App mode production" logger=settings
I use the image grafana/grafana:6.2.2
Thanks for your help!
Note: I also tried a bunch of times to restart and even recreate my containers.
I just fixed this on my Grafana container, so perhaps this helps you. All I was trying to set was the SMTP config. I'm running Docker on Windows, so you will need to adapt the script for your needs, of course. I am also redirecting the data outside the container, so that config is included.
docker run -d -p 3000:3000 --name=grafana `
-v C:/DockerData/Grafana:/var/lib/grafana `
-v C:/DockerData/Grafana/custom.ini:/etc/grafana/grafana.ini `
grafana/grafana
I launch from PowerShell, which is why ` is used to continue on the next line. Additionally, it did not like the local file also being called grafana.ini; it just would not start with that. Hence the local file is custom.ini, even though I override grafana.ini inside the container. I hope this helps.
Ran into this issue as well. Apparently /etc/grafana/grafana.ini is the custom config file, as it is for the deb or rpm packages.
Note. If you have installed Grafana using the deb or rpm packages, then your configuration file is located at /etc/grafana/grafana.ini. This path is specified in the Grafana init.d script using --config file parameter.
So update the volumes section in your compose file as follows, and it should pick up your custom settings:
volumes:
- "$PWD/data:/var/lib/grafana"
- "$PWD/grafana.ini:/etc/grafana/grafana.ini"