For testing the heartbeat, I think "the above 6th column" is the actual heartbeat value. Is that right?
If so, what does an empty value mean for the rabbitmq-c client? The rabbitmq-c client always stays connected and never dies.
How can I solve this?
The heartbeat interval can be suggested by the server, but the client might not take that value. I was looking at the source, and you might be able to use this function:
AMQP_PUBLIC_FUNCTION
int
AMQP_CALL amqp_tune_connection(amqp_connection_state_t state,
                               int channel_max,
                               int frame_max,
                               int heartbeat);
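A minimal sketch of how that might be called (the channel_max/frame_max values below are just placeholders, and whether the broker honours a client-chosen heartbeat depends on the server configuration):

amqp_connection_state_t conn = amqp_new_connection();
/* ... create and open a socket, then amqp_login() ... */

/* Placeholder tuning values, requesting a 60-second heartbeat. */
amqp_tune_connection(conn, 0, 131072, 60);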
I'd like to "share" a Mono as I do with Flux.
Flux share() example with Kotlin:
fun `test flux share`() {
    val countDownLatch = CountDownLatch(2)
    val originalFlux = Flux.interval(Duration.ofMillis(200))
        .map { "$it = ${Instant.now()}" }
        .take(7)
        .share()
        .doOnTerminate {
            countDownLatch.countDown()
        }
    println("Starting #1...")
    originalFlux.subscribe {
        println("#1: $it")
    }
    println("Waiting ##2...")
    CountDownLatch(1).await(1000, TimeUnit.MILLISECONDS)
    println("Starting ##2...")
    originalFlux.subscribe {
        println("##2: $it")
    }
    countDownLatch.await(10, TimeUnit.SECONDS)
    println("End!")
}
I couldn't find a share() operator for Mono. Why doesn't it exist?
The specific behaviour of share() doesn't make much sense with a Mono, but we have cache() which may be what you're after.
share() is equivalent to calling publish().refCount() on your Flux. Specifically, publish() gives you a ConnectableFlux, or a "hot" flux. (refCount() just automatically connects / stops the flux based on the first / last subscriber.)
The "raison d'être" for ConnectableFlux is allowing multiple subscribers to subscribe whenever they wish, missing the data that was emitted before they subscribed. In the case of Mono this doesn't make a great deal of sense, as by definition there is only one value emitted - so if you've missed it, then you've missed it.
However, we do have cache() on Mono, which also turns it into a "hot" source (where the original supplier isn't called for each subscription, just once on first subscribe.) The obvious difference from above is that the value is replayed for every subscriber, but that's almost certainly what you want.
(Side note: if you test the above, you'll need to use Mono.fromSupplier() rather than Mono.just(), as the latter just grabs the value once at instantiation, so cache() has no meaningful effect.)
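For example, a minimal sketch in the same Kotlin style as the question (the supplied value is arbitrary): the supplier runs once, and both subscribers receive the cached value.

val cached = Mono.fromSupplier {
    println("supplier called at ${Instant.now()}")
    42
}.cache()

cached.subscribe { println("#1: $it") }
cached.subscribe { println("#2: $it") } // the supplier is not invoked again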
From Project Reactor 3.4.x onwards, we have Mono#share():
Prepare a Mono which shares this Mono result similar to Flux.shareNext(). This will effectively turn this Mono into a hot task when the first Subscriber subscribes using subscribe() API. Further Subscriber will share the same Subscription and therefore the same result. It's worth noting this is an un-cancellable Subscription.
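A rough usage sketch (Kotlin again; the supplier is arbitrary, and this requires Reactor 3.4 or later):

val shared = Mono.fromSupplier { System.currentTimeMillis() }.share()
shared.subscribe { println("first: $it") }
shared.subscribe { println("second: $it") } // shares the same Subscription and result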
I tried MongoDB replica sets for the first time.
I am using Ubuntu on EC2, and I booted up three instances.
I used the private IP address of each of the instances. I picked one as the primary, and below is the code.
mongo --host Private IP Address
rs.initiate()
rs.add("Private IP Address")
rs.addArb("Private IP Address")
Everything at this point is fine. When I go to the http://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com:28017/_replSet site, I see that I have a primary, a secondary, and an arbiter.
Ok, now for a test.
On the primary I create a database; this is the code:
use tt
db.tt.save( { a : 123 } )
On the secondary, I then do this and get the error below:
db.tt.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
I am very new to MongoDB and replica sets, but I thought that if I do something in one, it goes to the other. So, if I add a record in one, what do I have to do to replicate it across machines?
You have to set "secondary okay" mode to let the mongo shell know that you're allowing reads from a secondary. This is to protect you and your applications from performing eventually consistent reads by accident. You can do this in the shell with:
rs.secondaryOk()
After that you can query normally from secondaries.
A note about "eventual consistency": under normal circumstances, replica set secondaries have all the same data as primaries within a second or less. Under very high load, data that you've written to the primary may take a while to replicate to the secondaries. This is known as "replica lag", and reading from a lagging secondary is known as an "eventually consistent" read, because, while the newly written data will show up at some point (barring network failures, etc), it may not be immediately available.
Edit: You only need to set secondaryOk when querying from secondaries, and only once per session.
To avoid typing rs.slaveOk() every time, do this:
Create a file named replStart.js, containing one line: rs.slaveOk()
Then include --shell replStart.js when you launch the Mongo shell. Of course, if you're connecting locally to a single instance, this doesn't save any typing.
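For example (the host name is a placeholder for whichever secondary you are connecting to):

mongo --host <secondary-host> --shell replStart.js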
In MongoDB 2.0, you should type
rs.slaveOk()
in the secondary mongod node.
THIS IS JUST A NOTE FOR ANYONE DEALING WITH THIS PROBLEM USING THE RUBY DRIVER
I had this same problem when using the Ruby Gem.
To set slaveOk in Ruby, you just pass it as an argument when you create the client like this:
mongo_client = MongoClient.new("localhost", 27017, { slave_ok: true })
https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial#making-a-connection
mongo_client = MongoClient.new # (optional host/port args)
Notice that 'args' is the third optional argument.
WARNING: slaveOk() is deprecated and may be removed in the next major release. Please use secondaryOk() instead.
rs.secondaryOk()
I got here searching for the same error, but from the Node.js native driver. The answer for me was a combination of the answers by campeterson and Prabhat.
The issue is that the readPreference setting defaults to primary, which then somehow leads to the confusing slaveOk error. My problem is that I just want to read from my replica set from any node. I don't even connect to it as a replica set; I just connect to any node to read from it.
Setting readPreference to primaryPreferred (or better to the ReadPreference.PRIMARY_PREFERRED constant) solved it for me. Just pass it as an option to MongoClient.connect() or to client.db() or to any find(), aggregate() or other function.
https://docs.mongodb.com/v3.0/reference/read-preference/#primaryPreferred
http://mongodb.github.io/node-mongodb-native/3.6/api/Collection.html (search readPreference)
const { MongoClient, ReadPreference } = require('mongodb');
const client = await MongoClient.connect(MONGODB_CONNECTIONSTRING, { readPreference: ReadPreference.PRIMARY_PREFERRED });
slaveOk does not work anymore. One needs to use readPreference https://docs.mongodb.com/v3.0/reference/read-preference/#primaryPreferred
e.g.
const client = new MongoClient(mongoURL + "?readPreference=primaryPreferred", { useUnifiedTopology: true, useNewUrlParser: true });
I am just adding this answer for an awkward situation with a DB provider.
What happened in our case is that the primary and secondary databases switched roles (primary became secondary and vice versa), and we were getting the same error.
So please check the database status in the configuration settings; that may help you.
Adding readPreference as PRIMARY_PREFERRED:
const { MongoClient, ReadPreference } = require('mongodb');
const client = new MongoClient(url, { readPreference: ReadPreference.PRIMARY_PREFERRED});
client.connect();
In short, while the code seems to work fine, I'm curious whether less hacky approaches than the ones I've come up with so far exist.
Suppose you create a coroutine via lua_newthread, and later suspend it from a C closure via lua_yield. Your program takes a lap around non-Lua code, and it is now time to resume the coroutine via lua_resume - but suppose the arguments that the Lua code provided are extremely illegal, and we should raise an error to indicate that.
As you might know, you can't call lua_error (or luaL_error) on a state unless it's currently running. So the state must resume but immediately get hit by an error.
In 5.3 you would use lua_yieldk, provide a continuation function, and call lua_error or luaL_error in there. Et voilà.
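For reference, a rough 5.3-style sketch (the particular argument check is just an example; the point is that the continuation runs when the coroutine is resumed, so it is a safe place to raise):

#include <lua.h>
#include <lauxlib.h>

/* Continuation: runs on resume, so errors about the resume arguments
   can be raised here. */
static int resume_cont(lua_State *L, int status, lua_KContext ctx) {
    (void)status; (void)ctx;
    if (!lua_isnumber(L, 1))  /* example validation of the resume arguments */
        return luaL_error(L, "expected a number");
    return 1;  /* otherwise return the argument as usual */
}

static int my_cfunction(lua_State *L) {
    /* ... do work, then suspend ... */
    return lua_yieldk(L, 0, 0, resume_cont);
}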
But, alas - LuaJIT does not implement lua_yieldk, so what options are we left with?
A single-use hook
Suppose the error message is stored in a char error_text[256]. We could then bind a per-instruction hook immediately before resuming,
lua_sethook(L, throw_error, LUA_MASKCOUNT, 1);
int result = lua_resume(L, NULL, ret_count);
and then unbind the hook and throw an error in there
void throw_error(lua_State *L, lua_Debug *ar) {
    lua_sethook(L, throw_error, 0, 0);
    if (error_text == nullptr) return; // trust no one, especially yourself
    luaL_error(L, "%s", error_text);
}
Should string cleanup be required, you would of course prefer to concatenate error_text to luaL_where(L, 1) yourself before calling _error, as both lua_error and luaL_error do a long jump and thus will be the last thing your function does.
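A rough sketch of that variant (the only point is where the cleanup goes relative to the long jump):

luaL_where(L, 1);               /* pushes "source:line: " */
lua_pushstring(L, error_text);  /* pushes the message itself */
lua_concat(L, 2);               /* "source:line: message" now on top */
/* ... perform any cleanup here ... */
lua_error(L);                   /* longjmps; never returns */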
A Lua-side wrapper
Suppose you decide to pull a somewhat Node.js-like move here and have your C code resume with an (error, result) pair, so that you could have a wrapper function like the one below (a rough C-side sketch of the resume follows it):
function some(arg)
    local e, r = some_native(arg)
    if (e) then
        error(e)
    else
        return r
    end
end
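On the C side, the resume might then push that pair roughly like this (args_were_invalid is a placeholder for whatever validation your API does, and the lua_resume signature shown is the 5.2+ one used elsewhere in the question):

/* Resume with an (error, result) pair, Node.js-callback style. */
if (args_were_invalid) {
    lua_pushstring(L, "invalid arguments");  /* e */
    lua_pushnil(L);                          /* r */
} else {
    lua_pushnil(L);                          /* e = nil */
    lua_pushinteger(L, 42);                  /* r: some actual result */
}
int result = lua_resume(L, NULL, 2);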
Or maybe refactor your API entirely so that errors are also handled using the same pattern, but that's a story for another day.
Option 1 seems the less hacky of the two.
Option 2 seems less likely to cause any trouble.
I can't help feeling that there's a much better way of doing this that I'm overlooking (after all, lua_yieldk is a relatively recent addition).
Apache Beam has recently introduced state cells, through StateSpec and the @StateId annotation, with partial support in Apache Flink and Google Cloud Dataflow.
I cannot find any documentation on what happens when this is used with a GlobalWindow. In particular, is there a way to have a "state garbage collection" mechanism to get rid of state for keys that have not been seen for a while according to some configuration, while still maintaining a single all-time state for keys that are seen frequently enough?
Or, is the amount of state used in this case going to diverge, with no way to ever reclaim state corresponding to keys that have not been seen in a while?
I am also interested in whether a potential solution would be supported in either Apache Flink or Google Cloud Dataflow.
Flink and direct runners seem to have some code for "state GC" but I am not really sure what it does and whether it is relevant when using a global window.
State can be automatically garbage collected by a Beam runner at some point after a window expires - when the input watermark exceeds the end of the window by the allowed lateness, so all further input is droppable. The exact details depend on the runner.
As you correctly determined, the Global window may never expire. Then this automatic collection of state will not be invoked. For bounded data, including drain scenarios, it actually will expire, but for a perpetual unbounded data source it will not.
If you are doing stateful processing on such data in the Global window, you can use user-defined timers (used through @TimerId, @OnTimer, and TimerSpec - I haven't blogged about these yet) to clear state after some timeout of your choosing. If the state represents an aggregation of some sort, then you'll want a timer anyhow to make sure your data is not stranded in state.
Here is a quick example of their use:
new DoFn<Foo, Baz>() {
    private static final String MY_TIMER = "my-timer";
    private static final String MY_STATE = "my-state";

    @StateId(MY_STATE)
    private final StateSpec<ValueState<Bizzle>> bizzleStateSpec =
        StateSpecs.value(Bizzle.coder());

    @TimerId(MY_TIMER)
    private final TimerSpec myTimer =
        TimerSpecs.timer(TimeDomain.EVENT_TIME);

    @ProcessElement
    public void process(
        ProcessContext c,
        @StateId(MY_STATE) ValueState<Bizzle> bizzleState,
        @TimerId(MY_TIMER) Timer myTimer) {
        bizzleState.write(...);
        myTimer.setForNowPlus(...);
    }

    @OnTimer(MY_TIMER)
    public void onMyTimer(
        OnTimerContext context,
        @StateId(MY_STATE) ValueState<Bizzle> bizzleState) {
        context.output(... bizzleState.read() ...);
        bizzleState.clear();
    }
}
There is no automatic garbage collection of state if you use GlobalWindows. Only if you use some non-global window will state be garbage collected after the watermark passes the end of a window plus the allowed lateness.
What you can do if you must work with GlobalWindows is to manually keep the last update timestamp as state. Then you would periodically set a timer where you check this timestamp against the current time and delete the state if necessary. You would set this timer when encountering a key for the first time (which you can see from the absence of your timestamp state) and then re-set it in the @OnTimer method.
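A rough sketch of that pattern, under the same conventions as the example above (Foo, Baz, the state/timer names and the one-hour timeout are placeholders, and the exact Timer-setting methods have varied between Beam versions):

new DoFn<KV<String, Foo>, Baz>() {
    private static final Duration TIMEOUT = Duration.standardHours(1);
    private static final String LAST_SEEN = "last-seen";
    private static final String GC_TIMER = "gc-timer";

    @StateId(LAST_SEEN)
    private final StateSpec<ValueState<Long>> lastSeenSpec =
        StateSpecs.value(VarLongCoder.of());

    @TimerId(GC_TIMER)
    private final TimerSpec gcTimerSpec = TimerSpecs.timer(TimeDomain.PROCESSING_TIME);

    @ProcessElement
    public void process(
        ProcessContext c,
        @StateId(LAST_SEEN) ValueState<Long> lastSeen,
        @TimerId(GC_TIMER) Timer gcTimer) {
        if (lastSeen.read() == null) {
            gcTimer.offset(TIMEOUT).setRelative();   // first time we see this key
        }
        lastSeen.write(System.currentTimeMillis());
        // ... normal per-element processing and state updates ...
    }

    @OnTimer(GC_TIMER)
    public void onGcTimer(
        @StateId(LAST_SEEN) ValueState<Long> lastSeen,
        @TimerId(GC_TIMER) Timer gcTimer) {
        Long last = lastSeen.read();
        if (last != null && System.currentTimeMillis() - last >= TIMEOUT.getMillis()) {
            lastSeen.clear();                        // idle long enough: drop the state
        } else {
            gcTimer.offset(TIMEOUT).setRelative();   // still active: check again later
        }
    }
}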
In the description of xSemaphoreGiveFromISR at http://www.freertos.org/a00124.html is written: "From FreeRTOS V7.3.0 pxHigherPriorityTaskWoken is an optional parameter and can be set to NULL."
The question is: if the parameter is NULL and there is a higher priority task affected by the semaphore, will the context switch happen automatically after the ISR, without portEND_SWITCHING_ISR( xHigherPriorityTaskWoken )?
If you set the parameter to NULL, the context switch will happen at the next tick (ticks happen every millisecond if you are using the default settings), not immediately after the end of the ISR. Depending on your use case, this may or may not be acceptable.
No. The purpose of the pxHigherPriorityTaskWoken flag is only to indicate that a context switch is required. You then need to call portEND_SWITCHING_ISR() or portYIELD_FROM_ISR() in your ISR code in order to request the context switch.
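For illustration, a typical ISR might look like this (the semaphore handle xExampleSemaphore is a placeholder created elsewhere, and some ports use portEND_SWITCHING_ISR() instead of portYIELD_FROM_ISR()):

void vExampleISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Give the semaphore; the kernel sets the flag if this unblocked a
       task with a higher priority than the one that was interrupted. */
    xSemaphoreGiveFromISR(xExampleSemaphore, &xHigherPriorityTaskWoken);

    /* Request the context switch before leaving the ISR. With NULL passed
       above (and no yield here), the switch waits for the next tick. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}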