This is my first time posting to Stack Overflow, so I apologize in advance if I am not following certain conventions. I will fix and/or expand my question as needed.
I am trying to add two different InfluxDB sources, hosted on two different servers, to Chronograf and Kapacitor, but I cannot get it working.
Can you connect to two different InfluxDB instances through the UI?
How do you configure kapacitor.conf to read from two different InfluxDB instances?
Through the Chronograf UI I can get either source working correctly, but not both at the same time. This seems to be expected behavior in the UI, so I must be missing something.
If I set the sources in kapacitor.conf, Chronograf does not read from them. There are also no errors in the Kapacitor logs.
These are the InfluxDB settings in my kapacitor.conf that do not work:
[[influxdb]]
  enabled = true
  default = true
  name = "localcluster"
  urls = ["http://localhost:8086"]
  username = ""
  password = ""
  timeout = 0

[[influxdb]]
  enabled = true
  default = false
  name = "remoteCluster"
  urls = ["http://remotehost:8086"]
  username = ""
  password = ""
  timeout = 0
I have read the documentation and also have the latest TICK stack packages.
I have also searched online and found configurations similar to mine that reportedly work, but they do not work for me.
TICK stack host information:
CentOS Linux release 7.6.1810 (Core)
telegraf-1.9.1-1.x86_64
influxdb-1.7.2-1.x86_64
chronograf-1.7.4-1.x86_64
kapacitor-1.5.1-1.x86_64
Any help would be greatly appreciated.
I got it working, but I am not sure if this configuration is recommended:
Add a new InfluxDB connection through the Chronograf web UI.
Do not create another Kapacitor connection, as only one can be active at a time.
In the graph's Queries tab, select the new InfluxDB connection from the drop-down list.
Metrics from the alternate InfluxDB instance will appear and can be queried.
I'm running a single ksqlDB server in embedded Connect mode on our Kubernetes cluster and I want to add a connector.
Adding a connector produces a "Request timed out" error on Kafka Connect, exactly like in this blog post by Robin Moffatt.
He suggests changing the KAFKA_OFFSET_REPLICATION_FACTOR contained in his docker-compose example.
But unfortunately, in our test environment I don't have easy access to the existing Kafka cluster (we have admins for that), so I think the fastest way forward is to instead change the following:
KSQL_CONNECT_CONFIG_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_OFFSET_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_STATUS_STORAGE_TOPIC - change to a different topic name
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR - change to -1 (originally this value is 1)
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR - change to -1 (originally this value is 1)
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR - change to -1 (originally this value is 1)
But when I change the topic names, I can see that the new topics are created (using ksqlDB's SHOW TOPICS command), yet the server shuts down and restarts forever. Here are the logs:
[2021-07-22 01:27:19,889] INFO ProcessingLogConfig values:
ksql.logging.processing.rows.include = false
ksql.logging.processing.stream.auto.create = false
ksql.logging.processing.stream.name = KSQL_PROCESSING_LOG
ksql.logging.processing.topic.auto.create = false
ksql.logging.processing.topic.name =
ksql.logging.processing.topic.partitions = 1
ksql.logging.processing.topic.replication.factor = 1
(io.confluent.ksql.logging.processing.ProcessingLogConfig:372)
[2021-07-22 01:27:19,891] ERROR Aborting application start (io.confluent.ksql.rest.server.KsqlRestApplication:378)
io.confluent.ksql.rest.server.KsqlRestApplication$AbortApplicationStartException: Shutting down application during waitForPreconditions
at io.confluent.ksql.rest.server.KsqlRestApplication.waitForPreconditions(KsqlRestApplication.java:441)
at io.confluent.ksql.rest.server.KsqlRestApplication.startKsql(KsqlRestApplication.java:386)
at io.confluent.ksql.rest.server.KsqlRestApplication.startAsync(KsqlRestApplication.java:370)
at io.confluent.ksql.rest.server.MultiExecutable.doAction(MultiExecutable.java:68)
at io.confluent.ksql.rest.server.MultiExecutable.startAsync(MultiExecutable.java:42)
at io.confluent.ksql.rest.server.KsqlServerMain.tryStartApp(KsqlServerMain.java:89)
at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:64)
[2021-07-22 01:27:19,892] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain:90)
[2021-07-22 01:27:19,892] INFO Server shutting down (io.confluent.ksql.rest.server.KsqlServerMain:96)
[2021-07-22 01:27:19,892] INFO ksqlDB shutdown called (io.confluent.ksql.rest.server.KsqlRestApplication:498)
[2021-07-22 01:27:34,926] INFO API server stopped (io.confluent.ksql.api.server.Server:196)
[2021-07-22 01:27:34,927] INFO ksqlDB shutdown complete (io.confluent.ksql.rest.server.KsqlRestApplication:553)
I don't have any more details; that's all there is.
When I return the config, offset and status topic names to what I had at first, the ksqlDB Server starts fine, but again I'm stuck with the problem that I can't create connectors.
I'm afraid that if I attempt to delete the topics manually, the ksqlDB server won't be able to start properly because it keeps finding the original config, offset and status topics I had at first.
I have solved the issue. Apparently, using -1 as the value for:
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR
does not work properly: the config topic ends up with 20 partitions, while the Confluent docs say it should have only 1 partition. I think that's why the ksqlDB server just restarts endlessly; I just need to gather the right evidence.
Setting those values to 3 (our Kafka brokers' default replication factor) solved the issue, I think. It was hard to diagnose because no error messages are shown, for example when it rejects a config topic with more than 1 partition.
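For reference, this is roughly how the working combination looks as the server's environment variables (the topic names are placeholders for whatever names you chose; 3 matches our brokers' default replication factor):

KSQL_CONNECT_CONFIG_STORAGE_TOPIC=<your-connect-configs-topic>
KSQL_CONNECT_OFFSET_STORAGE_TOPIC=<your-connect-offsets-topic>
KSQL_CONNECT_STATUS_STORAGE_TOPIC=<your-connect-statuses-topic>
KSQL_CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=3
KSQL_CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=3
KSQL_CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=3

The config storage topic in particular is expected to have exactly one partition, which would explain why the 20-partition topic created with -1 prevented the server from starting.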
I am working with IB Gateway and want to get historical data.
I have completed the steps in the IB Gateway software to enable the API.
I am using a Python notebook for this.
For now I am running the code below. I can import the library, but the rest of the code gives me the error shown further down. The important thing is that a connection is established: I specify client ID 1, the connection is created, and it can be seen in the IB Gateway application.
My code is here.
from ib_insync import *

#util.startLoop()  # uncomment this line when in a notebook

ib = IB()
ib.connect('127.0.0.1', 5021, clientId=1)

bars = ib.reqHistoricalData(
    contract=Stock('TSLA', 'SMART', 'USD'),
    endDateTime='',
    durationStr='30 D',
    barSizeSetting='1 hour',
    whatToShow='TRADES',
    useRTH=True)

print(bars)
Here is the error.
Peer closed connection
clientId 1 already in use?
API connection failed: CancelledError()
Since I am using a notebook, if I uncomment the second line (util.startLoop()) it adds one more error, about a timeout.
Need help to get this done.
Big Thanks
Assign a different clientId to this connection:
ib.connect('127.0.0.1', 5021, clientId=2)
Apparently you already have another connection with clientId=1.
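Putting it together, here is a minimal sketch of the notebook version (host and port taken from the question; the only real change is the clientId, and util.startLoop() stays uncommented because this runs in a notebook):

from ib_insync import IB, Stock, util

util.startLoop()  # required when running inside a Jupyter notebook

ib = IB()
ib.connect('127.0.0.1', 5021, clientId=2)  # any clientId not already in use

bars = ib.reqHistoricalData(
    contract=Stock('TSLA', 'SMART', 'USD'),
    endDateTime='',
    durationStr='30 D',
    barSizeSetting='1 hour',
    whatToShow='TRADES',
    useRTH=True)

print(util.df(bars))  # convert the BarDataList to a pandas DataFrame

ib.disconnect()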
I am trying to migrate an application from Play 2.5 to 2.6.2. I keep getting a "URI length exceeds" error. Does anyone know how to override this?
I tried the Akka settings below, but still no luck.
play.server.akka {
  http.server.parsing.max-uri-length = infinite
  http.client.parsing.max-uri-length = infinite
  http.host-connection-pool.client.parsing.max-uri-length = infinite
  http.max-uri-length = infinite
  max-uri-length = infinite
}
Simply add
akka.http {
  parsing {
    max-uri-length = 16k
  }
}
to your application.conf. The play.server prefix is only used for a small subset of convenience features of the Akka HTTP integration in the Play framework, e.g. play.server.akka.requestTimeout. Those are documented in the "Configuring the Akka HTTP server backend" documentation.
I was getting this error because the header length exceeded the default 8 KB (8192 bytes). I added the following to build.sbt and it worked for me :D
javaOptions += "-Dakka.http.parsing.max-header-value-length=16k"
You can try something similar for the URI length if the other options don't work, as shown below.
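For example, the analogous JVM option for the URI limit would be (setting name from the Akka HTTP parsing config; adjust the size as needed):

javaOptions += "-Dakka.http.parsing.max-uri-length=16k"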
This took me way too long to figure out. Somehow it is NOT to be found in the documentation.
Here is a snippet (confirmed working with Play 2.8) to put in your application.conf. It is also configurable via an environment variable and works for BOTH dev and prod mode:
# Dev Mode
play.akka.dev-mode.akka.http.parsing.max-uri-length = 16384
play.akka.dev-mode.akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
# Prod Mode
akka.http.parsing.max-uri-length = 16384
akka.http.parsing.max-uri-length = ${?PLAY_MAX_URI_LENGTH}
You can then edit the config, or, for an already deployed application, just set PLAY_MAX_URI_LENGTH; it is dynamically configurable without the need to modify command-line arguments.
env PLAY_MAX_URI_LENGTH=16384 sbt run
If anyone is getting this type of error in the Chrome browser when trying to access a site or log in ("HTTP header value exceeds the configured limit of 8192 characters"), go to Chrome Settings -> Security and Privacy -> Site Settings -> View permissions and data stored across sites.
Search for the specific website and, on that site, click Clear all data.
I am using Telegraf as a server to collect StatsD data from Python and send it to InfluxDB. However, the measurement names I receive in InfluxDB have the prefix "statsd_". How can I remove it?
In python I do:
import statsd

ctr_name = 'foo'
client = statsd.StatsClient('MY_DOMAIN.com', 8125)
client.incr(ctr_name, 1)
And then in InfluxDB I see:
> show measurements
name: measurements
------------------
name
statsd_foo
By default, the statsd Python package does not put a prefix in front of your measurements.
Below is the default:
client = statsd.StatsClient('MY_DOMAIN.com', 8125, prefix=None)
I am pretty sure you set prefix="statsd" in your actual code.
The problem does not seem to come from the statsd plugin configuration in Telegraf, since it doesn't offer an option to prefix all measurements with a string constant. See here
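To make the difference visible, a small sketch (host and port from the question; the prefixed client is what ends up as a statsd_foo-style measurement once Telegraf has parsed the metric name):

import statsd

# No prefix: the counter is reported as "foo"
plain = statsd.StatsClient('MY_DOMAIN.com', 8125, prefix=None)
plain.incr('foo', 1)

# With a prefix: the same counter is sent as "statsd.foo",
# which shows up in InfluxDB as the "statsd_foo" measurement
prefixed = statsd.StatsClient('MY_DOMAIN.com', 8125, prefix='statsd')
prefixed.incr('foo', 1)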
Hullo,
We are attempting to set up a Neo4j HA cluster in our Rails dev environment, much like what is explained here: https://github.com/andreasronge/neo4j/wiki/Neo4j%3A%3ARails-Config
We have two instances in the cluster. Server 1 is the app, Server 2 is the Rails console. They both start fine, but eventually one of them will fall over. Usually, it's one of the following:
1) java.io.FileNotFound: /server_1_path/path/to/some/RailsModel_exact/_2.fxm file. Somehow, the indexes expect a file that does not exist. Sometimes, the file does not exist in EITHER server directory, and the only thing that helps is to make both sets of index files identical by copying one to the other.
2) Orphaned index.lock files. The error here says that a certain index is locked, and removing the specific .lock file fixes the issue. Annoying. (Maybe a similar issue.)
3) Data added in one instance never shows up in the other instance. In this case, I create a node in the Rails console and it never shows up in the app, or vice versa. It seems that both instances start up as master and never sync. We usually have to delete one of the databases and restart to get them working again.
I am not sure whether the new 1.9 HA stuff isn't ready for prime time, or whether we are being too nonchalant about how we quit the app/console, so that Neo4j is not shutting down cleanly.
This is a highly frustrating issue. We'd appreciate any help/pointers to get it working right.
We are using the 1.9 M03 version of the gem, and here is our config:
server_id = ((defined? Rails::Console)) ? 2 : 1
config.neo4j['enable_ha'] = true
config.neo4j['enable_remote_shell'] = "port=133#{server_id}"
config.neo4j['ha.server_id'] = server_id
config.neo4j['ha.server'] = "localhost:600#{server_id}"
config.neo4j['ha.pull_interval'] = '1s'
config.neo4j['ha.discovery.enabled'] = false
config.neo4j['ha.initial_hosts'] = [1,2,3].map{|id| ":500#{id}"}.join(',')
config.neo4j['ha.cluster_server'] = ":5001-5099" #"#{server_id}"
config.neo4j.storage_path = File.expand_path("db/ha_neo_#{server_id}", Object::Rails.root)
config.neo4j['online_backup_server']= "localhost:636#{server_id}"
config.neo4j['ha.cluster_server'] = "localhost:500#{server_id}"
config.neo4j['webserver.port'] = "747#{server_id}"
config.neo4j['webserver.https.port'] = "748#{server_id}"
config.neo4j['enable_remote_shell'] = "port=933#{server_id}"
config.neo4j['use_adaptive_cache'] = false
puts "Config HA cluster, ha.server_id: #{config.neo4j['ha.server_id']}, db: #{config.neo4j.storage_path}"
Thanks for any/all help/advice.