SQLite Serialized Mode - xamarin.android

I have a Xamarin Android project and was using Mono.Data.Sqlite, and had problems with multithreading, so I tried the Zumero component. I'm still having problems. I'm trying to set serialized mode, as with the SQLITE_CONFIG_SERIALIZED flag described at http://www.sqlite.org/threadsafe.html, but I'm still getting random crashes. Can I set the serialized flag with Zumero? Any other suggestions, short of recompiling SQLite from source?
Thanks,
Brian

I used to have this problem. Despite conflicting recommendations, here's how I stopped getting the exceptions:
Share a static instance of SQLiteConnection between all threads. This is safe because a SQLite connection is essentially just a file handle, not a traditional network data connection.
Wrap all SQLite queries/inserts/updates in a lock, using the static SQLiteConnection instance as the lock object. I've been advised that this shouldn't be necessary in serialized mode, but my experience begs to differ:
lock (myStaticConnection) {
    myStaticConnection.Query<Employee>("....");
}
As a backup I also wrap every query in some retry logic. I'm not sure whether SQLite does this on its own (I've seen references to busy_timeout, and claims that it has since been removed). Something like this:
public static List<T> Query<T>(string query, params object[] args) where T : new()
{
    return Retry.DoWithLock(() => {
        return Data.connection.Query<T>(query, args);
    }, Data.connection, 0);
}

// defaultRetryIntervalTicks and defaultRetryCount are consts defined elsewhere.
public static T DoWithLock<T>(
    Func<T> action,
    object lockable,
    long retryIntervalTicks = defaultRetryIntervalTicks,
    int retryCount = defaultRetryCount)
{
    // Take the lock around the action, then delegate the retry loop to Do,
    // forwarding the retry parameters.
    return Do(() => {
        lock (lockable) {
            return action();
        }
    }, retryIntervalTicks, retryCount);
}

public static T Do<T>(
    Func<T> action,
    long retryIntervalTicks = defaultRetryIntervalTicks,
    int retryCount = defaultRetryCount)
{
    var exceptions = new List<Exception>();
    for (int retry = 0; retry < retryCount; retry++) {
        try {
            return action();
        } catch (Exception ex) {
            exceptions.Add(ex);
            // ManualSleepEvent is a helper of mine that blocks the current
            // thread for the given TimeSpan before the next attempt.
            ManualSleepEvent(new TimeSpan(retryIntervalTicks));
        }
    }
    throw new AggregateException(exceptions);
}

Related

apache ignite datastreamer how to set data into ignitefuture?

I am creating a batch data streamer in Apache Ignite, and need to control what happens after the data is received.
My batch has this structure:
public class Batch implements Binarylizable, Serializable {
    private String eventKey;
    private byte[] bytes;
    // etc.
}
Then I try to stream my data:
try (IgniteDataStreamer<Integer, Batch> streamer = serviceGrid.getIgnite().dataStreamer(cacheName);
     StreamBatcher batcher = StreamBatcherFactory.create(event)) {
    streamer.receiver(StreamTransformer.from(new BatchDataProcessor(event)));
    streamer.autoFlushFrequency(1000);
    streamer.allowOverwrite(true);
    statusService.updateStatus(event.getKey(), StatusType.EXECUTING);

    int counter = 0;
    Batch batch = null;
    IgniteFuture<?> future = null;
    while ((batch = batcher.batch()) != null) {
        future = streamer.addData(counter++, batch);
    }

    Object getted = future.get();
}
Just for testing, let's take only the last future and try to analyze the resulting object. In the code above I'm using BatchDataProcessor, which looks like this:
public class BatchDataProcessor implements CacheEntryProcessor<Integer, Batch, Object> {
    private final Event event;
    private final String eventKey;

    public BatchDataProcessor(Event event) {
        this.event = event;
        this.eventKey = event.getKey();
    }

    @Override
    public Object process(MutableEntry<Integer, Batch> mutableEntry, Object... objects) throws EntryProcessorException {
        Node node = NodeIgniter.node(Ignition.localIgnite().cluster().localNode().id());
        ServiceGridContainer container = (ServiceGridContainer) node.getEnvironmentContainer().getContainerObject(ServiceGridContainer.class);
        ProcessMarshaller marshaller = (ProcessMarshaller) container.getService(ProcessMarshaller.class);
        LocalProcess localProcess = marshaller.intoProccessing(event.getLambdaExecutionKey());
        try {
            localProcess.addBatch(mutableEntry);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            return new String("111");
        }
    }
}
So after localProcess.addBatch(mutableEntry) I want to send back information about the status of that particular batch. I assumed the IgniteFuture would be the place to do this, but I can't find any information on how to work with the future returned by addData.
Can anybody help me understand where I can handle the future returned by addData, or suggest some other way to implement a callback per streamed batch?
When you do StreamTransformer.from(), you forfeit the result of your BatchDataProcessor, because
for (Map.Entry<K, V> entry : entries)
    cache.invoke(entry.getKey(), this, entry.getValue());
    // ^ result of cache.invoke() is discarded here
DataStreamer is for one-directional streaming of data. It is not supposed to return values as far as I know.
If you depend on the result of cache.invoke(), I recommend calling it directly instead of relying on DataStreamer.
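For illustration, here's a minimal sketch of that direct approach, reusing the asker's BatchDataProcessor and batcher (cacheName, event and the Integer/Batch types are assumptions taken from the snippets above):

// Put each batch and invoke the processor directly on the cache, so the
// value returned by BatchDataProcessor.process() is no longer discarded.
IgniteCache<Integer, Batch> cache = serviceGrid.getIgnite().cache(cacheName);
int counter = 0;
Batch batch;
while ((batch = batcher.batch()) != null) {
    cache.put(counter, batch);
    Object status = cache.invoke(counter, new BatchDataProcessor(event));
    // status is whatever process() returned ("111" in the code above)
    counter++;
}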
BTW, be careful with future.get(): you should call dataStreamer.flush() first, or the DataStreamer's futures may wait indefinitely.
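In the asker's snippet, that would look something like:

streamer.flush();             // push any buffered entries to the cache first
Object getted = future.get(); // now the future can actually complete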

Not able to receive onNext and onComplete call on subscribed mono

I was trying out the Reactor library and I can't figure out why the Mono below never comes back with an onNext or onComplete call. I think I'm missing something trivial. Here's a sample:
MyServiceService service = new MyServiceService();
service.save("id")
    .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
    .subscribe(new Subscriber<MyUser>() {
        @Override
        public void onSubscribe(Subscription s) {
            System.out.println("Subscribed!" + Thread.currentThread().getName());
        }

        @Override
        public void onNext(MyUser myUser) {
            System.out.println("OnNext on thread " + Thread.currentThread().getName());
        }

        @Override
        public void onError(Throwable t) {
            System.out.println("onError!" + Thread.currentThread().getName());
        }

        @Override
        public void onComplete() {
            System.out.println("onCompleted!" + Thread.currentThread().getName());
        }
    });
private static class MyServiceService {
    private Repository myRepo = new Repository();

    public Mono<MyUser> save(String userId) {
        return myRepo.save(userId);
    }
}

private static class Repository {
    public Mono<MyUser> save(String userId) {
        return Mono.create(myUserMonoSink -> {
            // exe is an ExecutorService defined elsewhere in the test class
            Future<MyUser> submit = exe.submit(() -> this.blockingMethod(userId));
            ListenableFuture<MyUser> myUserListenableFuture = JdkFutureAdapters.listenInPoolThread(submit);
            Futures.addCallback(myUserListenableFuture, new FutureCallback<MyUser>() {
                @Override
                public void onSuccess(MyUser result) {
                    myUserMonoSink.success(result);
                }

                @Override
                public void onFailure(Throwable t) {
                    myUserMonoSink.error(t);
                }
            });
        });
    }

    private MyUser blockingMethod(String userId) throws InterruptedException {
        Thread.sleep(5000);
        return new MyUser("blocking", userId);
    }
}
The above code only prints Subscribed!main. What I can't figure out is why the future callback is not pushing values through myUserMonoSink.success.
The important thing to keep in mind is that a Flux or Mono is asynchronous, most of the time.
Once you subscribe, the asynchronous processing of saving the user starts in the executor, but execution continues in your main code after .subscribe(...).
So the main thread exits, terminating your test before anything was pushed to the Mono.
[sidebar]: when is it ever synchronous?
When the source of data is a Flux/Mono synchronous factory method. BUT with the added pre-requisite that the rest of the chain of operators doesn't switch execution context. That could happen either explicitly (you use a publishOn or subscribeOn operator) or implicitly (some operators like time-related ones, eg. delayElements, run on a separate Scheduler).
Simply put, your source runs on the ExecutorService thread of exe, so the Mono is indeed asynchronous, while your snippet runs on main.
How to fix the issue
To observe the correct behavior of Mono in an experiment (as opposed to fully async code in production), several possibilities are available:
keep the subscribe with the System.out.printlns, but add a new CountDownLatch(1) that you countDown() inside onComplete and onError, and await the latch after the subscribe (see the sketch after this list).
use .log().block() instead of .subscribe(...). You lose the customization of what to do on each event, but log() will print those out for you (provided you have a logging framework configured). block() will revert to blocking mode and do pretty much what I suggested with the CountDownLatch above. It returns the value once available or throws an Exception in case of error.
instead of log() you can customize logging or other side effects using .doOnXXX(...) methods (there's one for pretty much every type of event + combinations of events, eg. doOnSubscribe, doOnNext...)
If you're doing a unit test, use StepVerifier from the reactor-tests project. It will subscribe to the flux/mono and wait for events when you call .verify(). See the reference guide chapter on testing (and the rest of the reference guide in general).
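Here's a minimal sketch of that CountDownLatch option (using the lambda-based subscribe variant, which requests unbounded demand for you; service is the MyServiceService from the question):

import java.util.concurrent.CountDownLatch;

CountDownLatch latch = new CountDownLatch(1);

service.save("id")
    .map(u -> new MyUser(u.getName().toUpperCase(), u.getId().toUpperCase()))
    .subscribe(
        user -> System.out.println("OnNext on thread " + Thread.currentThread().getName()),
        error -> { System.out.println("onError! " + error); latch.countDown(); },
        () -> { System.out.println("onCompleted! " + Thread.currentThread().getName()); latch.countDown(); });

latch.await(); // block main until the Mono terminates, so the JVM doesn't exit early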
The issue is that the onSubscribe method in your anonymous class does nothing: it never calls s.request(...), so there is no demand and the publisher never emits.
If you look at the implementation of LambdaSubscriber, it requests some number of events in onSubscribe.
It's also easier to extend BaseSubscriber, as it comes with sensible predefined request logic.
So your subscriber implementation would be:
MyServiceService service = new MyServiceService();
service.save("id")
    .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
    .subscribe(new BaseSubscriber<MyUser>() {
        @Override
        protected void hookOnSubscribe(Subscription subscription) {
            System.out.println("Subscribed!" + Thread.currentThread().getName());
            request(1); // or requestUnbounded();
        }

        @Override
        protected void hookOnNext(MyUser myUser) {
            System.out.println("OnNext on thread " + Thread.currentThread().getName());
            // request(1); // needed here if you request one at a time instead of requestUnbounded()
        }

        @Override
        protected void hookOnComplete() {
            System.out.println("onCompleted!" + Thread.currentThread().getName());
        }

        @Override
        protected void hookOnError(Throwable throwable) {
            System.out.println("onError!" + Thread.currentThread().getName());
        }
    });
Maybe it's not the best implementation; I'm new to Reactor too.
Simon's answer has a pretty good explanation of how to test asynchronous code.
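As an illustration of the StepVerifier option mentioned above, a minimal sketch (assuming the reactor-test dependency is on the classpath; the expected values follow from blockingMethod returning new MyUser("blocking", userId)):

// import reactor.test.StepVerifier;
StepVerifier.create(
        new MyServiceService().save("id")
            .map(u -> new MyUser(u.getName().toUpperCase(), u.getId().toUpperCase())))
    .expectNextMatches(u -> "BLOCKING".equals(u.getName()) && "ID".equals(u.getId()))
    .verifyComplete(); // subscribes, awaits the terminal signal, fails on error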

How can I do batch deletes of millions of entities using DatastoreIO and Dataflow

I'm trying to use Dataflow to delete many millions of Datastore entities, and the pace is extremely slow (5 entities/s). I'm hoping you can explain the pattern I should follow to scale that up to a reasonable pace. Just adding more workers did not help.
The Datastore Admin console can delete all entities of a specific kind, but it fails a lot and takes me a week or more to delete 40 million entities. Dataflow ought to be able to help me delete millions of entities that match only certain query parameters as well.
I'm guessing some kind of batching strategy should be employed (e.g. creating one mutation containing 1000 deletes), but it's not obvious how I would go about that: DatastoreIO gives me just one entity at a time to work with. Pointers would be greatly appreciated.
Below is my current slow solution.
Pipeline p = Pipeline.create(options);
DatastoreIO.Source source = DatastoreIO.source()
    .withDataset(options.getDataset())
    .withQuery(getInstrumentQuery(options))
    .withNamespace(options.getNamespace());

p.apply("ReadLeafDataFromDatastore", Read.from(source))
 .apply("DeleteRecords", ParDo.of(new DeleteInstrument(options.getDataset())));
p.run();

static class DeleteInstrument extends DoFn<Entity, Integer> {
    String dataset;

    DeleteInstrument(String dataset) {
        this.dataset = dataset;
    }

    @Override
    public void processElement(ProcessContext c) {
        DatastoreV1.Mutation.Builder mutation = DatastoreV1.Mutation.newBuilder();
        mutation.addDelete(c.element().getKey());
        final DatastoreV1.CommitRequest.Builder request = DatastoreV1.CommitRequest.newBuilder();
        request.setMutation(mutation);
        request.setMode(DatastoreV1.CommitRequest.Mode.NON_TRANSACTIONAL);
        try {
            // A new client is built and one commit issued per element here.
            // count, LOG and getCredential() are defined elsewhere in my class.
            DatastoreOptions.Builder dbo = new DatastoreOptions.Builder();
            dbo.dataset(dataset);
            dbo.credential(getCredential());
            Datastore db = DatastoreFactory.get().create(dbo.build());
            db.commit(request.build());
            c.output(1);
            count++;
            if (count % 100 == 0) {
                LOG.info(count + "");
            }
        } catch (Exception e) {
            c.output(0);
            e.printStackTrace();
        }
    }
}
There is no direct way of deleting entities using the current version of DatastoreIO. This version of DatastoreIO is going to be deprecated in favor of a new version (v1beta3) in the next Dataflow release. We think there is a good use case for providing a delete utility (either through an example or a PTransform), but it is still a work in progress.
For now you can batch your deletes, instead of deleting one at a time:
public static class DeleteEntityFn extends DoFn<Entity, Void> {
    // Datastore max batch limit
    private static final int DATASTORE_BATCH_UPDATE_LIMIT = 500;

    private Datastore db;
    private List<Key> keyList = new ArrayList<>();

    @Override
    public void startBundle(Context c) throws Exception {
        // Initialize Datastore Client
        // db = ...
    }

    @Override
    public void processElement(ProcessContext c) throws Exception {
        keyList.add(c.element().getKey());
        if (keyList.size() >= DATASTORE_BATCH_UPDATE_LIMIT) {
            flush();
        }
    }

    @Override
    public void finishBundle(Context c) throws Exception {
        if (keyList.size() > 0) {
            flush();
        }
    }

    private void flush() throws Exception {
        // Make one delete request instead of one for each element.
        CommitRequest request =
            CommitRequest.newBuilder()
                .setMode(CommitRequest.Mode.NON_TRANSACTIONAL)
                .setMutation(Mutation.newBuilder().addAllDelete(keyList).build())
                .build();
        db.commit(request);
        keyList.clear();
    }
}
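To use it, the slow pipeline from the question would swap its per-entity DoFn for the batched one, something like:

// Sketch: same read as before, but deletes now go out 500 keys at a time.
p.apply("ReadLeafDataFromDatastore", Read.from(source))
 .apply("BatchDeleteRecords", ParDo.of(new DeleteEntityFn()));
p.run();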

How to get Parse.com object count in class

I want to get the object count from 5 classes in Parse.com (to check if all objects were fetched successfully).
Because I'm using findObjectsInBackgroundWithBlock:, sometimes not all the objects are fetched by the time I use them; that's why I want to check.
How can I do that?
Update 2: I just noticed you were asking about iOS. It's basically the same principle; use fetchAllIfNeeded: like so:
https://parse.com/docs/ios/api/Categories/PFObject(Synchronous).html#/c:objc(cs)PFObject(cm)fetchAllIfNeeded:
Update: a better way than the naive one (below) would probably be using fetchAllIfNeededInBackground:
ArrayList<ParseObject> objectsToFetch = new ArrayList<>();
objectsToFetch.add(object1);
objectsToFetch.add(object2);
objectsToFetch.add(object3);
ParseObject.fetchAllIfNeededInBackground(objectsToFetch, new FindCallback<ParseObject>() {
    @Override
    public void done(List<ParseObject> objects, ParseException e) {
        // all are fetched, do stuff
    }
});
My naive way of doing this was adding an outer boolean array where each boolean is responsible for one of the classes.
When a findObjectsInBackgroundWithBlock 'done' callback runs, I set that boolean to true and run a function that checks whether the whole array is true; if so, all classes have been fetched and I can continue.
example from my code:
// NUM_OF_CLASSES and parseObjectArray are defined elsewhere in my class.
boolean[] itemFetched;

protected void getAllClassesAndDoStuffWithThem() {
    itemFetched = new boolean[NUM_OF_CLASSES];
    for (int i = 0; i < NUM_OF_CLASSES; i++) {
        final int finalI = i;
        itemFetched[i] = false;
        parseObjectArray[i].fetchIfNeededInBackground(new GetCallback<ParseObject>() {
            @Override
            public void done(ParseObject object, ParseException e) {
                itemFetched[finalI] = true;
                finishInitIfPossible();
            }
        });
    }
}

private void finishInitIfPossible() {
    for (int i = 0; i < NUM_OF_CLASSES; i++) {
        if (!itemFetched[i])
            return;
    }
    // all classes fetched
    finishInit();
}

private void finishInit() {
    // do stuff with all 5 classes
}

Problem while adding a new value to a hashtable when it is enumerated

Hi,
I am doing simple synchronous socket programming, in which I employ two threads: one accepts clients and puts each socket object into a collection; the other loops through the collection and sends a message to each client through its socket object.
The problem:
1. I connect two clients to the server and start sending messages.
2. Now I want to connect a new client. While doing this I can't update the collection and add the new client to my Hashtable; it raises the exception "Collection was modified; enumeration operation may not execute."
How do I add a new value to a Hashtable without running into this problem?
private void Listen()
{
    try
    {
        //lblStatus.Text = "Server Started Listening";
        while (true)
        {
            Socket ReceiveSock = ServerSock.Accept();
            //keys.Clear();
            ConnectedClients = new ListViewItem();
            ConnectedClients.Text = ReceiveSock.RemoteEndPoint.ToString();
            ConnectedClients.SubItems.Add("Connected");
            ConnectedList.Items.Add(ConnectedClients);
            ClientTable.Add(ReceiveSock.RemoteEndPoint.ToString(), ReceiveSock);
            //foreach (System.Collections.DictionaryEntry de in ClientTable)
            //{
            //    keys.Add(de.Key.ToString());
            //}
        }
        //lblStatus.Text = "Client Connected Successfully.";
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}

private void btn_receive_Click(object sender, EventArgs e)
{
    Thread receiveThread = new Thread(new ThreadStart(Receive));
    receiveThread.IsBackground = true;
    receiveThread.Start();
}

private void Receive()
{
    while (true)
    {
        //lblMsg.Text = "";
        byte[] Byt = new byte[2048];
        //ReceiveSock.Receive(Byt);
        lblMsg.Text = Encoding.ASCII.GetString(Byt);
    }
}

private void btn_Send_Click(object sender, EventArgs e)
{
    Thread SendThread = new Thread(new ThreadStart(SendMsg));
    SendThread.IsBackground = true;
    SendThread.Start();
}

private void btnlist_Click(object sender, EventArgs e)
{
    //Thread ListThread = new Thread(new ThreadStart(Configure));
    //ListThread.IsBackground = true;
    //ListThread.Start();
}

private void SendMsg()
{
    while (true)
    {
        try
        {
            foreach (object SockObj in ClientTable.Keys)
            {
                byte[] Tosend = new byte[2048];
                Socket s = (Socket)ClientTable[SockObj];
                Tosend = Encoding.ASCII.GetBytes("FirstValue&" + GenerateRandom.Next(6, 10).ToString());
                s.Send(Tosend);
                //ReceiveSock.Send(Tosend);
                Thread.Sleep(300);
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
}
You simply can't modify a Hashtable, Dictionary, List or anything similar while you're iterating over it - whether in the same thread or a different one. There are concurrent collections in .NET 4 which allow this, but I'm assuming you're not using .NET 4. (Out of interest, why are you still using Hashtable rather than a generic Dictionary?)
You also shouldn't be modifying a Hashtable from one thread while reading from it in another thread without any synchronization.
The simplest way to fix this is:
Create a new readonly variable used for locking
Obtain the lock before you add to the Hashtable:
lock (tableLock)
{
    ClientTable.Add(ReceiveSock.RemoteEndPoint.ToString(), ReceiveSock);
}
When you want to iterate, create a new copy of the data in the Hashtable within a lock
Iterate over the copy instead of the original table (see the sketch below)
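A minimal sketch of that copy-then-iterate pattern (shown in Java here; the C# shape is identical, with lock in place of synchronized, and the names are stand-ins for the asker's ClientTable and lock object):

import java.net.Socket;
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;

class ClientRegistry {
    private final Object tableLock = new Object();
    private final Hashtable<String, Socket> clients = new Hashtable<>();

    // Accept thread: always mutate under the lock.
    void add(String endpoint, Socket socket) {
        synchronized (tableLock) {
            clients.put(endpoint, socket);
        }
    }

    // Send thread: take a snapshot under the lock, then iterate the snapshot;
    // concurrent add() calls can no longer invalidate the iteration.
    List<Socket> snapshot() {
        synchronized (tableLock) {
            return new ArrayList<>(clients.values());
        }
    }
}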
Do you definitely even need a Hashtable here? It looks to me like a simple List<T> or ArrayList would be okay, where each entry was either the socket or possibly a custom type containing the socket and whatever other information you need. You don't appear to be doing arbitrary lookups on the table.
Yes. Don't do that.
The bigger problem here is unsafe multi-threading.
The most basic "answer" is just to say: use a synchronization lock on the shared object. However this hides a number of important aspects (like understanding what is happening) and isn't a real solution to this problem in my mind.
