Why can't I connect to the influx shell? - influxdb

After installing InfluxDB, the UI works well.
In my environment (Ubuntu 18.04), entering './influx' is supposed to connect you to the influx shell, but the following appears instead:
Incorrect Usage. flag provided but not defined: -precision
NAME:
influx - Influx Client
USAGE:
influx [command]
COMMANDS:
version Print the influx CLI version
write Write points to InfluxDB
bucket Bucket management commands
completion Generates completion scripts
query Execute a Flux query
config Config management commands
org, organization Organization management commands
delete Delete points from InfluxDB
user User management commands
task Task management commands
telegrafs List Telegraf configuration(s). Subcommands manage Telegraf configurations.
dashboards List Dashboard(s).
export Export existing resources as a template
secret Secret management commands
v1 InfluxDB v1 management commands
auth, authorization Authorization management commands
apply Apply a template to manage resources
stacks List stack(s) and associated templates. Subcommands manage stacks.
template Summarize the provided template
bucket-schema Bucket schema management commands
ping Check the InfluxDB /health endpoint
setup Setup instance with initial user, org, bucket
backup Backup database
restore Restores a backup directory to InfluxDB
remote Remote connection management commands
replication Replication stream management commands
server-config Display server config
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--help, -h show help
Error: flag provided but not defined: -precision
Could it be because the version is different (1.x vs. 2.x)? Or is there another way to connect?

InfluxDB 1.x and 2.x are very different: 1.x uses databases while 2.x uses buckets, so the influx CLI is used differently in each version. The -precision flag belongs to the 1.x interactive shell and is not defined in the 2.x CLI, which is why the command fails with that error.
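A minimal sketch of getting started with the 2.x CLI instead, assuming an InfluxDB 2.x instance on localhost:8086 (the config name, org, token, and bucket are placeholders):
./influx config create --config-name local \
  --host-url http://localhost:8086 \
  --org my-org \
  --token my-token \
  --active
# 2.x uses Flux queries against buckets instead of InfluxQL against databases
./influx query 'from(bucket: "my-bucket") |> range(start: -1h)'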

Related

Grafana clone alerts

Grafana 9.2.2 is running with alerts already created. I need to run exactly the same Grafana, with the same notifications, in a Docker container on any other host, without manually recreating the alerts.
I can't find a JSON file with the alert variables.
The alert config is stored inside the Grafana DB, not in any config file, so it's not an easy migration.
You can take a dump of the Grafana DB from the container where the alerts are configured and restore it in the new Grafana container.
Reference : https://grafana.com/blog/2020/01/13/how-to-migrate-your-configuration-database/
One more option is to create the alerts using file provisioning instead of the UI, so that the YAML can be applied to any cluster.
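A minimal sketch of the DB-dump route mentioned above, assuming the default SQLite backend at /var/lib/grafana/grafana.db (the container names are placeholders):
docker cp old-grafana:/var/lib/grafana/grafana.db ./grafana.db
docker cp ./grafana.db new-grafana:/var/lib/grafana/grafana.db
docker restart new-grafana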

Using Transfer for on-premises option to transfer files

[google-cloud-storage] I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker script on Linux and the GCP bucket is created. I now need to run a docker run command to copy the files. My question is how do I specify the source and target in the docker command. For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789
The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and target bucket+prefix. When you do this, the service will contact the agents and use them to push the data to Google Cloud in parallel, for faster transfers. See the following links for more details:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job
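As a hedged illustration, starting an agent typically looks something like the following (the image name and flags reflect Google's documentation at the time; the project ID and mount path are placeholders, so verify against the current docs):
sudo docker run -d --rm \
  -v /mnt/source-data:/mnt/source-data \
  gcr.io/cloud-ingest/tsop-agent:latest \
  --project-id=my-project \
  --hostname=$(hostname) \
  --agent-id-prefix=ID123456789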

How to make custom reports in Jenkins?

In Jenkins, I want to get information like: how many times builds failed in a given period, which tests failed multiple times in successive builds, whether each of these failed tests failed for the same or different reasons each time, and whether a test is failing in multiple environments or only some environments.
How do I get such information from Jenkins?
Your question is a bit vague, so I will give you the solution I used to solve this problem: Jenkins's InfluxDB plugin, with InfluxDB as the database and Grafana as the dashboard tool.
Setup InfluxDB
I use the docker image: influxdb:1.7-alpine
I mounted the volumes /docker-entrypoint-initdb.d and /var/lib/influxdb.
In the folder /docker-entrypoint-initdb.d I added a file db.iql to create my database:
CREATE DATABASE "jenkins" WITH DURATION 24w REPLICATION 1 SHARD DURATION 1d NAME "jenkins_retention_6month"
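A minimal sketch of running that container with the volumes described above (the host path and volume name are placeholders):
docker run -d --name influxdb \
  -p 8086:8086 \
  -v $PWD/initdb:/docker-entrypoint-initdb.d \
  -v influxdb-data:/var/lib/influxdb \
  influxdb:1.7-alpine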
Setup the InfluxDB plugin
See section configuration of the plugin's page
https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin
Use the plugin
The InfluxDbPublisher step can be used to collect data from plugins like the Metrics Plugin; however, I use it with customDataMap:
influxDbPublisher(
    selectedTarget: 'myTarget',
    customDataMap: [
        myMeasure: [
            field: value
        ]
    ],
    customDataMapTags: [
        myMeasure: [
            tag: 'someTag'
        ]
    ]
)
Everything is documented on
https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin
Setup Grafana
I use the docker image: grafana/grafana:6.4.3
I mounted volume /var/lib/grafana
When the instance of grafana is running, add your influxdb database as a datasource
I configured grafana with the following environment variables:
GF_SERVER_DOMAIN=grafana.mydomain.com
GF_SECURITY_ADMIN_PASSWORD=MyPassword
GF_SMTP_ENABLED=true
GF_SMTP_HOST=smtp:25
GF_SMTP_FROM_ADDRESS=grafana@grafana.mydomain.com
I used the docker image namshi/smtp to get an SMTP server.
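A sketch of starting that Grafana container with the variables above (the volume name is a placeholder):
docker run -d --name grafana \
  -p 3000:3000 \
  -v grafana-data:/var/lib/grafana \
  -e GF_SERVER_DOMAIN=grafana.mydomain.com \
  -e GF_SECURITY_ADMIN_PASSWORD=MyPassword \
  -e GF_SMTP_ENABLED=true \
  -e GF_SMTP_HOST=smtp:25 \
  -e GF_SMTP_FROM_ADDRESS=grafana@grafana.mydomain.com \
  grafana/grafana:6.4.3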
Create Grafana Dashboards
It is very easy to create a new dashboard with Grafana's auto-completion feature. You will certainly need to tweak the data you send with the influxDbPublisher step a few times.
Once you have your dashboards, you can set up alerts in order to get notified by email when something odd is happening with your CI.
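For example, a panel query counting failed builds per day might look like this (the jenkins_data measurement and build_result field are assumptions based on the plugin's default output; adjust to whatever you actually publish):
SELECT count("build_result") FROM "jenkins_data" WHERE "build_result" = 'FAILURE' AND time > now() - 30d GROUP BY time(1d)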

How do you migrate Docker Desktop Kubernetes clusters to Google Kubernetes Engine

I'm trying to migrate and host a Kubernetes cluster that I made locally on my machine using Docker Desktop to Google Kubernetes Engine but I'm not sure where to start or how to do it properly.
Any help is appreciated, thanks!
There's no migration in the sense of virtual machines. If you have your deployments / services / etc. defined in a VCS of some sort (GitHub, GitLab, etc.), you could just change the target of kubectl and apply them in bulk using the -f switch to kubectl.
I would recommend creating namespaces first, and then using kubens to swap between namespaces as you do the separate deployments.
If you DON'T have them already stored, you'll want to iterate through your namespaces and issue:
kubectl get <object> --export -o yaml
This would include (but is not limited to) the following object types; a loop that exports them all is sketched after the list:
deployments
secrets
configmaps
daemonsets
statefulsets
services
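A minimal sketch of that export loop (note that --export was deprecated and later removed from kubectl; on newer versions, drop the flag and strip cluster-specific fields from the manifests by hand):
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  for obj in deployments secrets configmaps daemonsets statefulsets services; do
    kubectl get "$obj" -n "$ns" --export -o yaml > "${ns}-${obj}.yaml"
  done
done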
Once you have everything, run through re-applying them on the remote cluster, and if you missed something, just export it and reapply it remotely.
This does NOT include your data layer. If you're running databases and the like in Kubernetes, you'll need to use tools native to your data platform to export that data, and then re-import it on the other side.

Connecting a local gephi instance to a remote titan server

Here is the scenario I want to resolve: I have two environments, a local machine and a virtual machine hosted in Azure.
On the virtual machine I start a Gremlin container, which includes the Gremlin client and server and connects to a Cassandra graph database.
This is the information for the running container when I run the docker container ls command:
CONTAINER ID: 029095e26f53
IMAGE: 3f03c6bfb0a2
COMMAND: "/bin/sh -c /gremlin…"
CREATED: 2 weeks ago
STATUS: Up 2 weeks
PORTS: 0.0.0.0:8182->8182/tcp
NAME: gremlin
Once inside the container, I run the following command to start the Gremlin client:
./bin/gremlin.sh
Once inside the Gremlin console, I run the following command to connect to the TinkerPop server:
:remote connect tinkerpop.server conf/remote.yaml
==>Connected - localhost/127.0.0.1:8182 ---> answer from gremlin console
If I run the following gremlin query:
:> g.V().count()
I get a number different from zero, telling me that there are records in the graph database.
Now, on the other side, I have the Gephi client on my local machine, and I want it to be able to show that graph database. Or at least, make Gephi show the visual data from a
graph = TinkerFactory.createModern()
running inside the gremlin container.
I want to do this because I need to choose a visualization tool for the Gremlin and Titan ecosystem.
I tried to set up the Gephi client to connect to the virtual machine's IP and port 8182, but it shows a red dot telling me the connection is not possible. What am I missing? I am pretty sure there are a few steps missing. Thanks in advance,
Juan Ignacio
If your graph is "remote" and not in-memory in the Gremlin Console, then you have to devise a way to make it available locally. This situation is typical for graphs that run in Gremlin Server or are wholly remote, like CosmosDB, DSE Graph, or Amazon Neptune.
The typical method to make it available locally is to use the subgraph()-step to pull out just the portion of the graph that you care about and return that to the Gremlin Console. It will be returned as a TinkerGraph for graphs that support the subgraph()-step (like Titan, though I assume you would use JanusGraph), so for your test, which uses TinkerFactory and a tiny graph, you could just do this:
gremlin> :remote connect tinkerpop.server conf/remote-objects.yaml
Note the configuration of "remote-objects.yaml" because that configuration will return actual objects - an actual TinkerGraph rather than a string representation of a TinkerGraph.
gremlin> :> TinkerFactory.createModern()
That will create the "modern" graph remotely and return the TinkerGraph to the Gremlin Console. You can then access that result:
gremlin> graph = result[0].object
The :> command stores the response from the server in a variable named "result", which contains your TinkerGraph in a List. This is explained in the reference documentation. From there you can use that "graph" object as you would with the standard Gephi instructions.
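As a rough sketch of those standard Gephi instructions (this assumes Gephi is running locally with its Streaming plugin enabled on the default port; the Gephi plugin ships with the Gremlin Console):
gremlin> :plugin use tinkerpop.gephi
gremlin> :remote connect tinkerpop.gephi
gremlin> :> graph
That should stream the TinkerGraph you stored in "graph" to Gephi for visualization.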
