Cannot connect to influxd with the influx CLI tool - influxdb

I installed InfluxDB on my CentOS 8.2 server.
[root@dele opt]# influxd version
InfluxDB 2.0.2 (git: 84496e507a) build_date: 2020-11-19T03:59:35Z
[root@dele opt]# influx version
Influx CLI 2.0.2 (git: 84496e507a) build_date: 2020-11-19T03:59:35Z
I started influxd:
influxd &
It is listening on 8086:
tcp6 0 0 :::8086 :::* LISTEN
But I cannot connect to influxd:
[root@fastnetmon opt]# influx -host localhost -p 8086
Error: unknown shorthand flag: 'o' in -ost
See 'influx -h' for help
[root@fastnetmon opt]# influx --host localhost --p 8086
Error: unknown flag: --host
See 'influx -h' for help
nor with this command:
[root@dele opt]# /usr/bin/influx -precision rfc3339
Error: unknown shorthand flag: 'p' in -precision

After struggling for some time, I verified that these commands are not supported by version 2.0.
In version 2.0, you can run queries in the Data Explorer section of the web UI. Version 2.0 has buckets, which are similar to databases, and you can create them from the CLI as in the example below:
influx bucket create -n <bucketname> --org-id <Organisation ID> -r 10h -t <your token>
Tokens are generated from the home page of the InfluxDB UI, and you can access every part of your InfluxDB 2.0 instance from that home page: your organizations, dashboards, and documentation.
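To talk to the server from the shell, the 2.0 CLI expects a configuration profile rather than -host/-port flags. A minimal sketch, assuming a local instance on port 8086 and a token copied from the UI (the profile name and placeholders are mine):
# Create a CLI profile pointing at the local influxd (influx 2.0.x):
influx config create \
  --config-name local \
  --host-url http://localhost:8086 \
  --org <your-org> \
  --token <your-token> \
  --active
# Verify connectivity to the server:
influx ping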

Related

Error accessing ScyllaDB cluster outside Docker container

I'm running ScyllaDB locally in a Docker container and I want to access the cluster from outside the container. That's when I get the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.17.0.2 776 KB 256 ? ad698c75-a465-4deb-a92c-0b667e82a84f rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.SimpleSnitch
DynamicEndPointSnitch: disabled
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to Test Cluster at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my FastAPI app that is outside the Docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to Scylla running in a container from outside that container for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the Scylla container with a couple of new parameters [src]:
--listen-address 0.0.0.0 allows connections to the container from any network; a simplification, since we are running Scylla inside the container.
--broadcast-rpc-address 127.0.0.1 is required if --listen-address is set to 0.0.0.0. We are going to forward port 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 makes port 9042 accessible on the host (local) machine; a quick check is sketched below.
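A fast way to confirm the port-forward is live before touching the driver (assumes nc is available on the host):
# Should report success if the container's 9042 is reachable from the host:
nc -vz 127.0.0.1 9042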
Install the driver with pip3 install scylla-driver, as it supports the darwin/arm64 architecture.
Write a simple python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.

Can't connect to MySQL with GCP VM instance Docker ERROR 2002 (HY000)

I have a VM instance booting on Container-Optimized OS with the following startup script:
docker pull gcr.io/cloudsql-docker/gce-proxy:1.16
docker run -d \
-p 0.0.0.0:3306:3306 \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
-instances=<cloudsql-connection-name>=tcp:0.0.0.0:3306
When I try to connect to the DB by running mysql -ppass -u root from the Cloud Shell, I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
What does this mean? What should I do?
For context: I need to use this VM MySQL proxy to connect Data Fusion.
Add this command line option to connect via TCP:
-h 127.0.0.1
Example:
mysql -h 127.0.0.1 -ppass -u root
Note: You are specifying an older version of the Cloud SQL Auth Proxy container.
https://console.cloud.google.com/gcr/images/cloudsql-docker/GLOBAL/gce-proxy
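As a sketch of updating that, you can point at a newer proxy release; the exact tag below is an assumption, so check the registry link above for the current one:
# Pull a more recent Cloud SQL Auth Proxy image (example tag):
docker pull gcr.io/cloudsql-docker/gce-proxy:1.33.2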

Failed to connect to http://localhost:8086, Please check your connection settings and ensure 'influxd' is running

I searched online but I don't see the solution. I have influx installed (InfluxDB shell version: v1.6.2), but it throws this error:
Failed to connect to http://localhost:8086: Get http://localhost:8086/ping: dial tcp [::1]:8086: connect: connection refused
Please check your connection settings and ensure 'influxd' is running.
Just a couple of things to check: make sure the service is running (use the service manager on your OS or the influxd command to check). Another test is to use the actual machine IP address (http://<machine-ip>:8086) instead of localhost. It could be that access is restricted (iptables).
If none of that works, I would check out the discussion on this GitHub issue.
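A minimal sketch of those checks on a systemd-based Linux (the service name influxdb matches the stock 1.x packages; /ping is part of the 1.x HTTP API):
# Is the service up?
sudo systemctl status influxdb
# Does the HTTP API answer? A healthy 1.x instance returns 204:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8086/ping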
In my case on a Mac, I had to run influxd -config /usr/local/etc/influxdb.conf before running influx.
I was facing the same challenge when I upgraded influxdb to 1.8.9, so I had to downgrade back to 1.8.5.
https://vibhubithar.medium.com/workaround-latest-version-of-influxdb-not-starting-on-raspberry-pi-buster-a8b5afa84fce
sudo apt update
sudo apt upgrade -y
wget https://s3.amazonaws.com/dl.influxdata.com/influxdb/releases/influxdb_1.8.5_armhf.deb
sudo dpkg -i influxdb_1.8.5_armhf.deb
sudo systemctl unmask influxdb.service
sudo systemctl start influxdb
sudo systemctl enable influxdb.service
First, check whether the influxdb instance is running. If it is already running, you might need to kill the process:
ps -ef |grep influxdb
influxdb 5781 1 99 18:15 pts/0 00:00:22 /usr/bin/influxd -pidfile /var/run/influxdb/influxd.pid -config /etc/influxdb/influxdb.conf
pkill -f influxdb
Once the process is killed, there is a chance the port is still in use, which can be verified with the command shown below.
sudo netstat -tulpn | grep LISTEN |grep influx
root@db1:/usr/bin# sudo netstat -tulpn | grep LISTEN | grep influx
tcp 0 0 127.0.0.1:8088 0.0.0.0:* LISTEN 28558/influxd
tcp6 0 0 :::8086 :::* LISTEN 28558/influxd
root@db1:/usr/bin#
In the above example, kill process 28558 by issuing kill -9 28558 (pkill matches process names, while kill takes a PID).
Once the port is released, cd to the /etc/init.d directory and start the service as shown below.
root@jvision-db1:/etc/init.d# influx
The DB instance should come back, which can be verified with the ps -ef | grep influxdb command.
Also, cd to the /usr/bin directory and issue the command below to verify the InfluxDB shell is available.
root@db1:~# cd /usr/bin
root@db1:/usr/bin# ./influx
Connected to http://localhost:8086 version 1.7.9
InfluxDB shell version: 1.7.9
>
If your configuration contains:
bind-address = "10.0.0.32:8086"
use:
$> influx -host 10.0.0.32
Connected to http://10.2.3.102:8086 version 1.8.10
InfluxDB shell version: 1.8.10
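A quick way to confirm which address the 1.x HTTP API is bound to (the config path assumes the stock package layout):
# Show the [http] section of the config, where the API bind address lives:
grep -n -A3 '\[http\]' /etc/influxdb/influxdb.conf
# Then point the CLI at that address:
influx -host 10.0.0.32 -port 8086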

neo4j-shell cannot connect to Neo4j server

I'm using the Docker version of Neo4j (v3.1.0) and I'm having difficulties connecting to the Neo4j server using neo4j-shell.
After running an instance of the neo4j:3.1.0 Docker image, I run a bash shell inside the container:
$ docker exec -it neo4j /bin/bash
And from there I try to run the neo4j-shell like this:
/var/lib/neo4j/bin/neo4j-shell
But it errors:
$ /var/lib/neo4j/bin/neo4j-shell
ERROR (-v for expanded information):
Connection refused
-host Domain name or IP of host to connect to (default: localhost)
-port Port of host to connect to (default: 1337)
-name RMI name, i.e. rmi://<host>:<port>/<name> (default: shell)
-pid Process ID to connect to
-c Command line to execute. After executing it the shell exits
-file File containing commands to execute, or '-' to read from stdin. After executing it the shell exits
-readonly Connect in readonly mode (only for connecting with -path)
-path Points to a neo4j db path so that a local server can be started there
-config Points to a config file when starting a local server
Example arguments for remote:
-port 1337
-host 192.168.1.234 -port 1337 -name shell
-host localhost -readonly
...or no arguments for default values
Example arguments for local:
-path /path/to/db
-path /path/to/db -config /path/to/neo4j.config
-path /path/to/db -readonly
I also tried other hosts like localhost, 127.0.0.1, and 172.17.0.6 (the container IP). Since that didn't work, I listed the open ports in my container:
$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 :::7687 :::* LISTEN
tcp 0 0 :::7473 :::* LISTEN
tcp 0 0 :::7474 :::* LISTEN
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
As you can see, 1337 is not open! I've looked into the config file, and the line specifying the port is commented out, which means it should use its default value (1337).
Can anyone help me connect to neo4j using neo4j-shell?
BTW, the Neo4j server is up and running, and I can use its web interface on port 7474.
In 3.1 it seems the shell is not enabled by default.
You will need to pass your own configuration file with the shell enabled:
Uncomment
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
(I find the amount of work needed to change one value in Docker quite heavy, but yeah; a lighter environment-variable option is sketched below.)
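The official neo4j image translates NEO4J_-prefixed environment variables into config settings (underscores become dots), so the setting above can be supplied at run time; a sketch, with the exact behavior on 3.1.0 being my assumption:
# NEO4J_dbms_shell_enabled maps to dbms.shell.enabled:
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_dbms_shell_enabled=true \
  neo4j:3.1.0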
Or use the new cypher-shell:
ikwattro@graphaware-team ~> docker ps -a | grep 'neo4j'
34b3c6718504 neo4j:3.1.0 "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 7473-7474/tcp, 7687/tcp compassionate_easley
2395bd0b1fe9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (143) 3 minutes ago cranky_goldstine
949feacbc0f9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (130) 5 minutes ago modest_boyd
c38572b078de neo4j:3.0.6-enterprise "/docker-entrypoint.s" 6 weeks ago Exited (0) 6 weeks ago fastfishpim_neo4j_1
ikwattro@graphaware-team ~> docker exec --interactive --tty compassionate_easley bin/cypher-shell
username: neo4j
password: *****
Connected to Neo4j 3.1.0 at bolt://localhost:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j>
NB: cypher-shell supports :begin and :commit:
neo4j> :begin
neo4j# create (n:Node);
Added 1 nodes, Added 1 labels
neo4j# :commit;
neo4j>
-
neo4j> :begin
neo4j# create (n:Person {name:"John"});
Added 1 nodes, Set 1 properties, Added 1 labels
neo4j# :rollback
neo4j> :commit
There is no open transaction to commit
neo4j>
http://neo4j.com/docs/operations-manual/current/tools/cypher-shell/

How to debug "WSREP: SST failed: 1 (Operation not permitted)" with a MariaDB Galera cluster in Docker?

Requirement: CentOS-based Docker container providing a MariaDB 10.x Galera cluster
Host Environment: OS X El Capitan 10.11.6, Docker 1.12.5 (14777)
Docker Container OS: CentOS Linux release 7.3.1611 (Core)
DB: 10.1.20-MariaDB
I found a promising Docker image, but the documentation seems to be obsolete; the commands to start the cluster do not work. At the time of writing, the image uses wsrep_sst_method = rsync, so I figured the following commands should work (replace /Users/Me/somedb with an empty directory on your host):
docker pull dayreiner/centos7-mariadb-10.1-galera
docker run -d --name db1 -h db1host -p 3306:3306 -e CLUSTER_NAME=joe -e CLUSTER=BOOTSTRAP -e MYSQL_ROOT_PASSWORD='pwd' -v /Users/Me/somedb:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
docker run -d --name db2 -h db2host -p 3307:3306 --link db1 -e CLUSTER_NAME=joe -e CLUSTER=db1host,db2host -e MYSQL_ROOT_PASSWORD='pwd' -v /Users/Me/somedb:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
The first container (db1) comes up and seems OK. But the last line that tries to add db2 as a second node to the Galera cluster results in the following error (docker logs db2):
2017-01-10 15:26:10 139742710823680 [Note] WSREP: New cluster view: global state: :-1, view# 0: Primary, number of nodes: 1, my index: 0, protocol version 3
2017-01-10 15:26:10 139742711142656 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2017-01-10 15:26:10 139742711142656 [ERROR] Aborting
I could not figure out what is wrong here and would appreciate ideas on how to analyze this further. Is this a problem with rsync, Galera, or even Docker?
That's my image on Docker Hub.
I had not tested the cluster (until now) on a single host, only running on multiple hosts. You're right though, running two on a single host seems to abort the second node on start.
This looks to be caused by the default bridge network not behaving nicely. Possibly some issue with handling the ports for state transfer. Not really sure why.
If you modify your commands to first create a custom network for your clustered containers to use on the backend, and then run the cluster members using that network, that seems to work when running two nodes on a single host:
# docker network create mariadb
# docker run -d --network=mariadb -p 3307:3306 --name db1 -e CLUSTER_NAME=test -e CLUSTER=BOOTSTRAP -e MYSQL_ROOT_PASSWORD=test -v /opt/test/db1:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
# docker run -d --network=mariadb -p 3308:3306 --name db2 -e CLUSTER_NAME=test -e CLUSTER=db1,db2 -e MYSQL_ROOT_PASSWORD=test -v /opt/test/db2:/var/lib/mysql dayreiner/centos7-mariadb-10.1-galera:latest
No errors this time on the second node:
# docker logs db2 -f
...snip
2017-01-12 20:33:08 139726185019648 [Note] WSREP: Signalling provider to continue.
2017-01-12 20:33:08 139726185019648 [Note] WSREP: SST received: 42eaa277-d906-11e6-b98a-3e6b9531c1b7:0
2017-01-12 20:33:08 139725604124416 [Note] WSREP: 1.0 (f170852fe1b6): State transfer from 0.0 (951fdda2454b) complete.
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Shifting JOINER -> JOINED (TO: 0)
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Member 1.0 (f170852fe1b6) synced with group.
2017-01-12 20:33:08 139725604124416 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
2017-01-12 20:33:08 139726105180928 [Note] WSREP: Synchronized with group, ready for connections
2017-01-12 20:33:08 139726105180928 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2017-01-12 20:33:08 139726185019648 [Note] mysqld: ready for connections.
Version: '10.1.20-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
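To confirm both nodes actually joined, you can query the cluster size from the host; a sketch assuming the ports and root password from the commands above:
# Ask the first node how many members it sees (expect 2 here):
mysql -h 127.0.0.1 -P 3307 -u root -ptest -e 'SHOW STATUS LIKE "wsrep_cluster_size";'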
Try that and see how it goes. Also, if you run it using docker-compose, it will work without any problems, likely because Compose creates a dedicated container network by default. You can see an example compose file in this gist.
Just make sure to use a different directory for each MariaDB instance, and after your cluster has started, stop db1 and relaunch it as a regular cluster member, as sketched below (otherwise, the next time db1 starts it will keep bootstrapping a new cluster).
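A sketch of that relaunch, reusing the image's CLUSTER environment variable from the commands above (the exact value expected for a non-bootstrap member comes from the image's docs, so treat it as an assumption):
# Stop and remove the bootstrap container, then start db1 as a normal member:
docker stop db1 && docker rm db1
docker run -d --network=mariadb -p 3307:3306 --name db1 \
  -e CLUSTER_NAME=test -e CLUSTER=db1,db2 \
  -e MYSQL_ROOT_PASSWORD=test \
  -v /opt/test/db1:/var/lib/mysql \
  dayreiner/centos7-mariadb-10.1-galera:latest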
Works after upgrading the Docker image to MariaDB 10.2.3 (from 10.1.20).
I am not 100% sure whether I have a truly valid cluster now, but at least show status like "wsrep_cluster_size"; produces the following output and the DB is usable:
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
Note: I also omitted the -v option and placed the DB files inside the Docker container instead of on an external volume. I don't think this makes a difference for the cluster, but I did not verify 10.2.3 with -v. However, I tried 10.1.20 with both variations (external volume with -v and container-internal files) and neither worked.
