Alerts defined in Bosun fail because the metadata is not populated when Bosun restarts.
Is there a way to configure scollector to resend metadata once its connection to Bosun breaks?
I always need to restart Bosun because I keep modifying the configuration file.
Bosun stores metadata (and incident info) in redis/ledis. This information is expected to be persisted and not destroyed when Bosun is restarted. There will be a gap in Bosun's ability to receive metadata during the restart, though. It doesn't look like metadata sending has any retry logic - so I'm not sure if this is what you mean?
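For reference, the persistence backend is picked in Bosun's config file; a minimal sketch, assuming the 0.5-style single config file (key names may differ in your version, so check the configuration docs):
# Use an external Redis so metadata and incident state survive restarts;
# without redisHost, Bosun falls back to its embedded Ledis store on disk.
redisHost = localhost:6379
# or keep the embedded store in a stable directory:
# ledisDir = ledis_data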
Also 0.6.0-beta (due towards the end of Sept 2016) will have config reloading, so restarts are not required when rules etc are edited. You could start using this feature now if you want to run off of master.
The matching service uses consistent hashing to decide which queue is assigned to which server.
Most of the time, a server polls tasks from its cache instead of the persistent database.
If I add a new matching service host, the cached queues will be re-hashed to new places, which will make all the old caches outdated. Will this cause any problem?
Most of the time tasks are not cached but matched immediately to a waiting long poll. We call it a sync match. So adding a matching service shouldn't affect the health of the running applications.
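For intuition only, here is a toy consistent-hash sketch in JavaScript (not the actual matching-service implementation; queue and host names are made up). Adding hostC moves only the queues whose ring position now falls to the new host:
// Toy consistent-hash ring: each host gets one point on the ring; real
// implementations use many virtual nodes per host.
const hash = (s) => {
    let h = 2166136261; // FNV-1a, 32-bit
    for (const ch of s) h = Math.imul(h ^ ch.charCodeAt(0), 16777619) >>> 0;
    return h;
};

function ownerOf(queue, hosts) {
    // Walk clockwise from the queue's ring position to the next host point.
    const ring = hosts
        .map((host) => ({ host, point: hash(host) }))
        .sort((a, b) => a.point - b.point);
    return (ring.find((e) => e.point >= hash(queue)) || ring[0]).host;
}

const queues = ['orders', 'emails', 'billing', 'reports', 'audit'];
const twoHosts = queues.map((q) => `${q} -> ${ownerOf(q, ['hostA', 'hostB'])}`);
const threeHosts = queues.map((q) => `${q} -> ${ownerOf(q, ['hostA', 'hostB', 'hostC'])}`);

// Only the queues whose owner changed lose their cached backlog; the rest keep
// their server, and sync-matched tasks never sit in a cache to begin with.
console.log(twoHosts, threeHosts);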
I am trying to find documentation about what happens when some queries are running and the KSQL server restarts. What will happen?
Does it behave like Kafka Streams, i.e. the consumer offsets are not committed, so at-least-once processing is guaranteed?
I can observe that the queries are stored in the command topic and are executed again when the KSQL server restarts.
I am trying to find documentation about what happens when some queries are running and the KSQL server restarts. What will happen?
If you only have a single KSQL server, then stopping that server will of course stop all the queries. Once the server is running again, all queries will continue from the points they stopped processing. No data is lost.
If you have multiple KSQL servers running, then stopping one (or some) of them will cause the remaining servers to take over any query processing tasks from the stopped servers. Once the stopped servers have been restarted the query processing workload will be shared again across all servers.
Does it behave like Kafka Streams, i.e. the consumer offsets are not committed, so at-least-once processing is guaranteed?
Yes.
But (even better): Whether the processing guarantees are at-least-once or exactly-once depends solely on the KSQL server's configuration. It does of course not depend on whether or when the server is being restarted, crashes, etc.
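For illustration, the guarantee is selected in ksql-server.properties; the property name below assumes the usual Kafka Streams pass-through via the ksql.streams. prefix, so verify it against the docs for your KSQL version:
# Kafka Streams settings can be passed through with the "ksql.streams." prefix.
# The default is at_least_once; exactly_once enables Kafka's exactly-once
# semantics (requires a broker version that supports it).
ksql.streams.processing.guarantee=exactly_once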
Do I understand correctly that a service worker file in a PWA should not be cached by the PWA itself? As I understand it, once registered, installed, and activated, it exists as an entity separate from the page in the browser environment and gets reloaded by the browser once a new version is found (I am omitting details that are not important here). So I see no reason to cache a service worker file. The browser kind of caches it anyway by storing it in memory (or somewhere). I think caching the service worker file will complicate discovery of code updates and bring no benefits.
However, if the service worker file is not cached, refreshing the page that registers it while offline produces an error, because the file is not available when the network is down.
What's the best way to eliminate this error? Or should I cache a service worker file? What's the most efficient strategy here?
I was doing some reading on PWA but found no clear explanation of the matter. Please help me with your advice if possible.
You're correct. Never cache service-worker.js.
To avoid the error that comes from trying to register without connectivity, simply check the connection state via window.navigator.onLine and skip calling register() if you are offline.
You can also listen for network state changes and register the service worker at a later point in time if you want.
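A minimal sketch of that approach (the '/service-worker.js' path is just a placeholder for your own file):
// Register only when the browser reports connectivity; otherwise wait for the
// next 'online' event and try again.
function registerServiceWorker() {
    if (!('serviceWorker' in navigator)) return;
    navigator.serviceWorker
        .register('/service-worker.js')
        .catch((err) => console.error('Service worker registration failed:', err));
}

if (navigator.onLine) {
    registerServiceWorker();
} else {
    window.addEventListener('online', registerServiceWorker, { once: true });
}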
Ref: https://neo4j.com/docs/operations-manual/current/clustering/high-availability/haproxy/
I tried to configure HAProxy as described there: writes go to the master and reads go to the slaves, as suggested. When a read happens, the slaves are not yet up to date with the master, resulting in a discrepancy (after some time lag the data is updated). How can I ensure that the data is in sync with the master before reading?
First, make sure you have configured neo4j.conf correctly. ha.tx_push_factor determines to how many slaves a transaction should be pushed synchronously. When you set ha.tx_push_factor=-1, you have immediate full consistency.
If you still see inconsistencies, read this link about consistency. It explains that the split should be based on consistency requirements, not simply on read vs. write.
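For reference, that is a single line in neo4j.conf (value taken from the suggestion above; check the HA configuration reference for your Neo4j version):
# Push committed transactions to slaves synchronously; per the answer above,
# -1 gives immediate full consistency across the cluster.
ha.tx_push_factor=-1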
We are using ActiveMQ 5.6 with the following configuration (a rough XML sketch follows the list):
- Flow control on
- Memory limit for topics 1MB
- Mirror Queues enabled (no explicit Virtual Topics defined)
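For reference, this is roughly how such a broker setup might look in activemq.xml (standard attribute names; the values are the ones listed above):
<broker xmlns="http://activemq.apache.org/schema/core" useMirroredQueues="true">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <!-- 1MB limit and producer flow control for every topic, including Mirror.* -->
                <policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb"/>
            </policyEntries>
        </policyMap>
    </destinationPolicy>
</broker>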
Persistent messages are being sent to a queue QueueA. Obviously, these messages are copied to Mirror.QueueA, which is a non-persistent, automatically created topic.
There are no consumers on this topic. If there are consumers once in a while, they are non-durable subscribers.
After a while, the producer blocks and we get the following error:
Usage Manager memory limit reached for topic://Mirror.QueueA
According to various sources, including the ActiveMQ documentation, messages in a topic without durable subscribers will be dropped, which is what I want and what I had expected. But this is obviously not the case.
There is one related Stack Overflow question, but the accepted solution suggests using flow control while disabling disk-spooling:
That would not use the disk, and block producers when the memoryLimit is hit.
But I do not want to block producers, because they would block indefinitely since no consumer is coming. Why are these messages being persisted at all?
I see a few options:
- This is a bug that is probably fixed in later AMQ versions
- This is some configuration issue (which I don't know how to resolve)
- There is an option to simply drop the oldest message when the memory limit is hit (I couldn't find any such option)
I hope someone can help!
Thanks,
//J
[Update]
Although we have already deployed 5.6 installations in the field, I am currently running the same endurance/load test on a 5.8 installation of AMQ with the same configuration. Right now, I have already transmitted 10 times as many messages as on the 5.6 system without any issues. I will let this test run overnight or even over the next few days to see if there is some other limit.
OK, as stated in the update above, I ran the same load test on a 5.8 installation of ActiveMQ with the same configuration that caused the storage to be exceeded on 5.6.
On 5.6, this happened after sending approximately 450 transactions into 3 queues with a topic memory limit of 1MB. You could even watch the KahaDB database file growing in size.
With AMQ 5.8, I stopped the load test after 4 days, with about 280,000 transactions sent. There were no storage issues, no stuck producer, and the KahaDB file stayed approximately the same size the whole time.
So, although I cannot say for sure that this is a bug in ActiveMQ 5.6, 5.8 obviously behaves differently, as expected and documented: it does not persistently store messages on the mirror topics when no subscriber is registered.
For existing installations of AMQ 5.6, we used a little hack to avoid changing the application code.
Since the application was consuming from topics prefixed with "Mirror." (the default prefix) and some wildcards, we simply defined a topic at start-up in the configuration using the <destinations> XML tag. Where wildcards were used, we just used a hardcoded name like all-devices. This was unfortunately required for the next step:
We defined a <compositeQueue> within the <destinationInterceptors> section of the config that routed copies of all messages (<forwardTo>) from the actual (mirrored) queue to that one topic. This topic needs to be defined in advance or created manually, since simply defining the compositeQueue does not also create the topic. Plus, you cannot use
Then we removed the mirrored queue feature from the config.
To sum it up, it looks a bit like this:
<destinations>
    <topic name="Mirror.QueueA.all-devices" physicalName="Mirror.all-devices" />
</destinations>
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="QueueA.*" forwardOnly="false">
                <forwardTo>
                    <topic physicalName="Mirror.QueueA.all-devices" />
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
Hope this helps. This "hack" may not be possible in every situation, but since we never consumed from individual Mirror topics, it worked for us.