Is it possible to have a map of maps in MapState?

private MapState<String, EventsHistory> eventsMap = null;

public void processElement2(Event event,
                            Context context,
                            Collector<JoinedEvent> collector) throws Exception {
    String name = event.getExperimentName();
    if (eventsMap.get(name) == null) {
        eventsMap.put(name, new EventsHistory());
    }
    eventsMap.get(name).put(event.getEventTime(), event);
}
class EventsHistory {
    private final Map<Long, Event> events = new HashMap<>();

    public Map<Long, Event> getEvents() {
        return events;
    }

    public void put(final Long eventTime, final Event event) {
        events.put(eventTime, event);
    }
}
I have the above code and would like to use Flink's MapState to maintain a map of maps.
When I test this locally, I can see the state update fine. But when I run it in a cluster, the eventsMap is always empty.
Is it valid to use a map of maps in MapState? Is there a better way to achieve this?
As an alternative, I tried the version below, where I do the grouping myself. Strangely enough, this works.
private MapState<EventKey, Event> assignmentEventsMap = null;

public final class EventKey {
    private String name;
    private long eventTime;
}

public void processElement2(Event event,
                            Context context,
                            Collector<JoinedEvent> collector) throws Exception {
    String name = event.getExperimentName();
    assignmentEventsMap
        .put(new EventKey(name, event.getEventTime()),
             event);
}

The code you have shared is difficult to understand, but perhaps you have misunderstood what MapState is. ValueState provides a sharded key/value store, distributed across the cluster. MapState gives you a sharded key/value store, where the values themselves are nested Maps.
In other words, MapState is always map of maps. You ended up trying to create a map of maps of maps -- which is one level too far.
I'm assuming you are trying to build this structure, where you effectively have a map from experiment names to nested maps of timestamps to events:
name -> (time -> event)
Assuming that your stream of events has already been keyed by the experiment name, then rather than using MapState<String, EventsHistory> eventsMap, what you really want is MapState<Long, Event> eventsMap, and rather than
eventsMap.get(name).put(event.getEventTime(), event);
you should be doing
eventsMap.put(event.getEventTime(), event);
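For illustration, here is a minimal sketch of that shape, assuming the join happens in a keyed two-input function (the OtherEvent placeholder and the class name are made up; Event and JoinedEvent are the types from the question):

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

// Sketch only: the streams are assumed to be connected and keyed by experiment name,
// e.g. events.keyBy(Event::getExperimentName).
public class ExperimentJoin extends KeyedCoProcessFunction<String, OtherEvent, Event, JoinedEvent> {

    // One (eventTime -> Event) map per key; the key (experiment name) supplies the outer level.
    private transient MapState<Long, Event> eventsMap;

    @Override
    public void open(Configuration parameters) {
        eventsMap = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("events", Long.class, Event.class));
    }

    @Override
    public void processElement1(OtherEvent other, Context ctx, Collector<JoinedEvent> out) {
        // join logic for the other stream goes here
    }

    @Override
    public void processElement2(Event event, Context ctx, Collector<JoinedEvent> out) throws Exception {
        eventsMap.put(event.getEventTime(), event);
    }
}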
See the tutorial about ValueState and an example using MapState in the Flink docs for more background on how to work with these mechanisms.

Using Spring AMQP consumer in spring-webflux

I have an app that's using Boot 2.0 with webflux, and has an endpoint returning a Flux of ServerSentEvent. The events are created by leveraging spring-amqp to consume messages off a RabbitMQ queue. My question is: How do I best bridge the MessageListener's configured listener method to a Flux that can be passed up to my controller?
Project Reactor's create section mentions that it "can be very useful to bridge an existing API with the reactive world - such as an asynchronous API based on listeners", but I'm unsure how to hook into the message listener directly since it's wrapped in the DirectMessageListenerContainer and MessageListenerAdapter. Their example from the create section:
Flux<String> bridge = Flux.create(sink -> {
    myEventProcessor.register(
        new MyEventListener<String>() {

            public void onDataChunk(List<String> chunk) {
                for (String s : chunk) {
                    sink.next(s);
                }
            }

            public void processComplete() {
                sink.complete();
            }
        });
});
So far, the best option I have is to create a Processor and simply call onNext() each time in the RabbitMQ listener method to manually produce an event.
I have something like this:
@SpringBootApplication
@RestController
public class AmqpToWebfluxApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(AmqpToWebfluxApplication.class, args);

        RabbitTemplate rabbitTemplate = applicationContext.getBean(RabbitTemplate.class);
        for (int i = 0; i < 100; i++) {
            rabbitTemplate.convertAndSend("foo", "event-" + i);
        }
    }

    private TopicProcessor<String> sseFluxProcessor = TopicProcessor.share("sseFromAmqp", Queues.SMALL_BUFFER_SIZE);

    @GetMapping(value = "/sseFromAmqp", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getSeeFromAmqp() {
        return this.sseFluxProcessor;
    }

    @RabbitListener(id = "fooListener", queues = "foo")
    public void handleAmqpMessages(String message) {
        this.sseFluxProcessor.onNext(message);
    }
}
The TopicProcessor.share() allows many concurrent subscribers, which is what we get when we return this TopicProcessor as a Flux for the /sseFromAmqp REST request via WebFlux.
The @RabbitListener just delegates its received messages to that TopicProcessor.
In the main() I have code to confirm that I can publish to the TopicProcessor even if there are no subscribers.
Tested with two separate curl sessions and published messages to the queue via the RabbitMQ Management Plugin.
By the way, I use share() because of: https://projectreactor.io/docs/core/release/reference/#_topicprocessor
from multiple upstream Publishers when created in the shared configuration
That's because the @RabbitListener really can be called from different ListenerContainer threads, concurrently.
UPDATE
Also I moved this sample to my Sandbox: https://github.com/artembilan/sendbox/tree/master/amqp-to-webflux
Let's suppose you want to have a single RabbitMQ listener that somehow puts messages into one or more Flux(es). Flux.create is indeed a good way to create such a Flux.
Let's start with Messaging with RabbitMQ Spring guide and try to adapt it.
The original Receiver would have to be modified in order to be able to put received messages to a FluxSink.
@Component
public class Receiver {

    /**
     * Collection of sinks enables more than one subscriber.
     * Have to keep in mind that the FluxSink instance that the emitter works with is provided per-subscriber.
     */
    private final List<FluxSink<String>> sinks = new CopyOnWriteArrayList<>();

    /**
     * Adds a sink to the collection. From now on, new messages will be put to the sink.
     * Method will be called when a new Flux is created by calling the Flux.create method.
     */
    public void addSink(FluxSink<String> sink) {
        sinks.add(sink);
    }

    public void receiveMessage(String message) {
        // A sink is cancelled when a subscriber cancels the subscription.
        // Drop cancelled sinks first (removeIf avoids the ConcurrentModificationException
        // that removing inside forEach on a plain ArrayList would cause), then emit.
        sinks.removeIf(FluxSink::isCancelled);
        sinks.forEach(sink -> sink.next(message));
    }
}
Now we have a receiver that puts RabbitMQ messages into the sinks. Creating a Flux is then rather simple.
@Component
public class FluxFactory {

    private final Receiver receiver;

    public FluxFactory(Receiver receiver) {
        this.receiver = receiver;
    }

    public Flux<String> createFlux() {
        return Flux.create(receiver::addSink);
    }
}
The Receiver bean is autowired into the factory. Of course, you don't have to create a special factory; this only demonstrates the idea of how to use the Receiver to create the Flux.
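For instance, a controller could hand each subscriber its own Flux from the factory; this is only a sketch (the endpoint path and class name are made up):

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class SseController {

    private final FluxFactory fluxFactory;

    public SseController(FluxFactory fluxFactory) {
        this.fluxFactory = fluxFactory;
    }

    // Each request gets a fresh Flux, whose FluxSink is registered with the Receiver.
    @GetMapping(value = "/sse", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> sse() {
        return fluxFactory.createFlux();
    }
}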
The rest of the application from Messaging with RabbitMQ guide may stay the same, including the bean instantiation.
@SpringBootApplication
public class Application {

    ...

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
                                             MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(queueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    ...
}
I used a similar design to adapt the Twitter streaming API successfully, though there may be a nicer way to do it.

How are Firebase offline capabilities supposed to detect when the cache is outdated? [duplicate]

Whenever I use addListenerForSingleValueEvent with setPersistenceEnabled(true), I only manage to get a local offline copy of DataSnapshot and NOT the updated DataSnapshot from the server.
However, if I use addValueEventListener with setPersistenceEnabled(true), I can get the latest copy of DataSnapshot from the server.
Is this normal for addListenerForSingleValueEvent as it only searches DataSnapshot locally (offline) and removes its listener after successfully retrieving DataSnapshot ONCE (either offline or online)?
Update (2021): There is a new method call (get on Android and getData on iOS) that implements the behavior you'll likely want: it first tries to get the latest value from the server, and only falls back to the cache when it can't reach the server. The recommendation to use persistent listeners still applies, but at least there's a cleaner option for getting data once even when you have local caching enabled.
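On Android, that one-shot read looks roughly like this (a sketch following the documented Task-based API; ref is a DatabaseReference):

ref.get().addOnCompleteListener(task -> {
    if (task.isSuccessful()) {
        // Latest server value if reachable, cached value otherwise.
        Log.d("firebase", String.valueOf(task.getResult().getValue()));
    } else {
        Log.e("firebase", "Error getting data", task.getException());
    }
});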
How persistence works
The Firebase client keeps a copy of all data you're actively listening to in memory. Once the last listener disconnects, the data is flushed from memory.
If you enable disk persistence in a Firebase Android application with:
Firebase.getDefaultConfig().setPersistenceEnabled(true);
The Firebase client will keep a local copy (on disk) of all data that the app has recently listened to.
What happens when you attach a listener
Say you have the following ValueEventListener:
ValueEventListener listener = new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        System.out.println(snapshot.getValue());
    }

    @Override
    public void onCancelled(FirebaseError firebaseError) {
        // No-op
    }
};
When you add a ValueEventListener to a location:
ref.addValueEventListener(listener);
// OR
ref.addListenerForSingleValueEvent(listener);
If the value of the location is in the local disk cache, the Firebase client will invoke onDataChange() immediately for that value from the local cache. It will then also initiate a check with the server, to ask for any updates to the value. It may subsequently invoke onDataChange() again if there has been a change of the data on the server since it was last added to the cache.
What happens when you use addListenerForSingleValueEvent
When you add a single value event listener to the same location:
ref.addListenerForSingleValueEvent(listener);
The Firebase client will (like in the previous situation) immediately invoke onDataChange() for the value from the local disk cache. It will not invoke the onDataChange() any more times, even if the value on the server turns out to be different. Do note that updated data still will be requested and returned on subsequent requests.
This was covered previously in How does Firebase sync work, with shared data?
Solution and workaround
The best solution is to use addValueEventListener(), instead of a single-value event listener. A regular value listener will get both the immediate local event and the potential update from the server.
A second solution is to use the new get method (introduced in early 2021), which doesn't have this problematic behavior. Note that this method always tries to first fetch the value from the server, so it will take longer to complete. If your value never changes, it might still be better to use addListenerForSingleValueEvent (but you probably wouldn't have ended up on this page in that case).
As a workaround you can also call keepSynced(true) on the locations where you use a single-value event listener. This ensures that the data is updated whenever it changes, which drastically improves the chance that your single-value event listener will see the current value.
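A rough sketch of that workaround (the path is made up; ref would be the location you read with the single-value listener):

DatabaseReference ref = FirebaseDatabase.getInstance().getReference("some/path");

// Keep this location synced in the local cache, even while no listener is attached.
ref.keepSynced(true);

// The single-value read now has a much better chance of seeing the current value.
ref.addListenerForSingleValueEvent(listener);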
So I have a working solution for this. All you have to do is use a ValueEventListener and remove the listener after 0.5 seconds, to make sure you've grabbed the updated data by then if needed. The Realtime Database has very good latency, so this is reasonably safe. See the code example below:
public class FirebaseController {

    private static FirebaseController sInstance;
    private DatabaseReference mRootRef;
    private Handler mHandler = new Handler();

    private FirebaseController() {
        FirebaseDatabase.getInstance().setPersistenceEnabled(true);
        mRootRef = FirebaseDatabase.getInstance().getReference();
    }

    public static FirebaseController getInstance() {
        if (sInstance == null) {
            sInstance = new FirebaseController();
        }
        return sInstance;
    }
Then, in some method where you'd have liked to use addListenerForSingleValueEvent:
    public void getTime(final OnTimeRetrievedListener listener) {
        final DatabaseReference ref = mRootRef.child("serverTime");
        ref.addValueEventListener(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                if (listener != null) {
                    // This can be called twice if data changed on server - SO DEAL WITH IT!
                    listener.onTimeRetrieved(dataSnapshot.getValue(Long.class));
                }
                // This can be called twice if data changed on server - SO DEAL WITH IT!
                removeListenerAfter2(ref, this);
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
                removeListenerAfter2(ref, this);
            }
        });
    }
    // ValueEventListener version workaround for addListenerForSingleValueEvent not working.
    private void removeListenerAfter2(final DatabaseReference ref, final ValueEventListener listener) {
        mHandler.postDelayed(new Runnable() {
            @Override
            public void run() {
                HelperUtil.logE("removing listener", FirebaseController.class);
                ref.removeEventListener(listener);
            }
        }, 500);
    }

    // ChildEventListener version workaround for addListenerForSingleValueEvent not working.
    private void removeListenerAfter2(final DatabaseReference ref, final ChildEventListener listener) {
        mHandler.postDelayed(new Runnable() {
            @Override
            public void run() {
                HelperUtil.logE("removing listener", FirebaseController.class);
                ref.removeEventListener(listener);
            }
        }, 500);
    }
}
Even if they close the app before the handler is executed, the listener will be removed anyway.
Edit: this can be abstracted to keep track of added and removed listeners in a HashMap, using the reference path as key and the DataSnapshot as value. You can even wrap this in a fetchData method that has a boolean flag for "once": if it is true it would use this workaround to get the data once, else it would continue as normal.
You're Welcome!
You can create a transaction and abort it; then onComplete will be called whether you are online (online data) or offline (cached data).
I previously created a function which worked only if the database got a connection long enough to sync. I fixed the issue by adding a timeout. I will keep working on this and test whether it works. Maybe in the future, when I get free time, I will create an Android lib and publish it, but for now here is the code in Kotlin:
/**
 * @param databaseReference reference to parent database node
 * @param callback callback with mutable list which returns list of objects and boolean if data is from cache
 * @param timeOutInMillis if not set it will wait all the time to get data online. If set - when timeout occurs it will send data from cache if exists
 */
fun readChildrenOnlineElseLocal(databaseReference: DatabaseReference, callback: ((mutableList: MutableList<@kotlin.UnsafeVariance T>, isDataFromCache: Boolean) -> Unit), timeOutInMillis: Long? = null) {
    var countDownTimer: CountDownTimer? = null

    val transactionHandlerAbort = object : Transaction.Handler { // for cache load
        override fun onComplete(p0: DatabaseError?, p1: Boolean, data: DataSnapshot?) {
            val listOfObjects = ArrayList<T>()
            data?.let {
                data.children.forEach {
                    val child = it.getValue(aClass)
                    child?.let {
                        listOfObjects.add(child)
                    }
                }
            }
            callback.invoke(listOfObjects, true)
        }

        override fun doTransaction(p0: MutableData?): Transaction.Result {
            return Transaction.abort()
        }
    }

    val transactionHandlerSuccess = object : Transaction.Handler { // for online load
        override fun onComplete(p0: DatabaseError?, p1: Boolean, data: DataSnapshot?) {
            countDownTimer?.cancel()
            val listOfObjects = ArrayList<T>()
            data?.let {
                data.children.forEach {
                    val child = it.getValue(aClass)
                    child?.let {
                        listOfObjects.add(child)
                    }
                }
            }
            callback.invoke(listOfObjects, false)
        }

        override fun doTransaction(p0: MutableData?): Transaction.Result {
            return Transaction.success(p0)
        }
    }

    // As described below: if a timeout is set, start a timer that fires the aborting
    // transaction (which completes even when offline, most likely with cached data).
    if (timeOutInMillis != null) {
        countDownTimer = object : CountDownTimer(timeOutInMillis, timeOutInMillis) {
            override fun onTick(millisUntilFinished: Long) {}

            override fun onFinish() {
                databaseReference.runTransaction(transactionHandlerAbort)
            }
        }
        countDownTimer?.start()
    }

    // The "success" transaction only completes once a response arrives from the server.
    databaseReference.runTransaction(transactionHandlerSuccess)
}
In this code, if a timeout is set, I set up a timer which will call the transaction with abort. This transaction will be called even when offline and will provide online or cached data (in this function there is a really high chance that this data is the cached one).
Then I call the transaction with success. onComplete will be called ONLY if we got a response from the Firebase database. We can now cancel the timer (if not null) and send the data to the callback.
This implementation makes the dev 99% sure that the data is from the cache or is the online one.
If you want to make it faster for the offline case (to avoid waiting pointlessly for the timeout when the database is obviously not connected), check whether the database is connected before using the function above:
DatabaseReference connectedRef = FirebaseDatabase.getInstance().getReference(".info/connected");
connectedRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        boolean connected = snapshot.getValue(Boolean.class);
        if (connected) {
            System.out.println("connected");
        } else {
            System.out.println("not connected");
        }
    }

    @Override
    public void onCancelled(DatabaseError error) {
        System.err.println("Listener was cancelled");
    }
});
When working with persistence enabled, I counted the number of times the listener received a call to onDataChange() and stopped listening after 2 calls. Worked for me, maybe it helps:
private int timesRead;
private ValueEventListener listener;
private DatabaseReference ref;

private void readFB() {
    timesRead = 0;
    if (ref == null) {
        ref = mFBDatabase.child("URL");
    }
    if (listener == null) {
        listener = new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                // process dataSnapshot
                timesRead++;
                if (timesRead == 2) {
                    ref.removeEventListener(listener);
                }
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
            }
        };
    }
    ref.removeEventListener(listener);
    ref.addValueEventListener(listener);
}

How do I make View's asList() sortable in Google Dataflow SDK?

We have a problem making asList() method sortable.
We thought we could do this by just extending the View class and override the asList method but realized that View class has a private constructor so we could not do this.
Our other attempt was to fork the Google Dataflow code on GitHub and modify the PCollectionViews class to return a sorted list by using the Collections.sort method, as shown in the code snippet below.
@Override
protected List<T> fromElements(Iterable<WindowedValue<T>> contents) {
    Iterable<T> itr = Iterables.transform(
        contents,
        new Function<WindowedValue<T>, T>() {
            @SuppressWarnings("unchecked")
            @Override
            public T apply(WindowedValue<T> input) {
                return input.getValue();
            }
        });

    LOG.info("#### About to start sorting the list !");
    List<T> tempList = new ArrayList<T>();
    for (T element : itr) {
        tempList.add(element);
    }
    Collections.sort((List<? extends Comparable>) tempList);
    LOG.info("##### List should now be sorted !");
    return ImmutableList.copyOf(tempList);
}
Note that we are now sorting the list.
This seemed to work when run with the DirectPipelineRunner, but when we tried the BlockingDataflowPipelineRunner, it didn't seem like the code change was being executed.
Note: We actually recompiled the Dataflow SDK and used it in our project, but this did not work.
How can we be able to achieve this (as sorted list from the asList method call)?
The classes in PCollectionViews are not intended for extension. Only the primitive view types provided by View.asSingleton, View.asList, View.asIterable, View.asMap, and View.asMultimap are supported.
To obtain a sorted list from a PCollectionView, you'll need to sort it after you have read it. The following code demonstrates the pattern.
// Assume you have some PCollection
PCollection<MyComparable> myPC = ...;
// Prepare it for side input as a list
final PCollectionView<List<MyComparable> myView = myPC.apply(View.asList());
// Side input the list and sort it
someOtherValue.apply(
ParDo.withSideInputs(myView).of(
new DoFn<A, B>() {
#Override
public void processElement(ProcessContext ctx) {
List<MyComparable> tempList =
Lists.newArrayList(ctx.sideInput(myView));
Collections.sort(tempList);
// do whatever you want with sorted list
}
}));
Of course, you may not want to sort it repeatedly, depending on the cost of sorting vs the cost of materializing it as a new PCollection, so you can output this value and read it as a new side input without difficulty:
// Side input the list, sort it, and put it in a PCollection
PCollection<List<MyComparable>> sortedSingleton = Create.<Void>of(null).apply(
    ParDo.withSideInputs(myView).of(
        new DoFn<Void, List<MyComparable>>() {
            @Override
            public void processElement(ProcessContext ctx) {
                List<MyComparable> tempList =
                    Lists.newArrayList(ctx.sideInput(myView));
                Collections.sort(tempList);
                ctx.output(tempList);
            }
        }));

// Prepare it for side input as a singleton
final PCollectionView<List<MyComparable>> sortedView =
    sortedSingleton.apply(View.asSingleton());

someOtherValue.apply(
    ParDo.withSideInputs(sortedView).of(
        new DoFn<A, B>() {
            @Override
            public void processElement(ProcessContext ctx) {
                ... ctx.sideInput(sortedView) ...
                // do whatever you want with the sorted list
            }
        }));
You may also be interested in the unsupported sorter contrib module for doing larger sorts using both memory and local disk.
We tried to do it the way Ken Knowles suggested. There's a problem for large datasets. If the tempList is large (so the sort takes measurable time, since it's proportional to O(n log n)) and if there are millions of elements in the "someOtherValue" PCollection, then we are unnecessarily re-sorting the same list millions of times. We should be able to sort ONCE and FIRST, before passing the list to the someOtherValue.apply's DoFn.

NEO4J Spatial: tips about batch inserter

This is my scenario: we are building a routing system by using Neo4j and the spatial plugin. We start from an OSM file, read this file, and import nodes and relationships into our graph (a custom graph model).
Now, if we don't use the batch inserter of Neo4j, importing a compressed OSM file (compressed size around 140MB, uncompressed around 2GB) takes around 3 days on a dedicated server with the following characteristics: CentOS 6.5 64bit, quad core, 8GB RAM. Please note that most of the time is spent on the Neo4j node and relationship creation; in fact, if we read the same file without doing anything with Neo4j, the file is read in around 7 minutes (I'm sure about this because in our process we first read the file in order to store the correct OSM node ids, and then we read the file again in order to create the Neo4j graph).
Obviously we need to improve the import process, so we are trying to use the BatchInserter. So far, so good (I still need to check how much faster it will be with the BatchInserter, but I guess it will be faster); so the first thing I did was: let's try to use the batch inserter in a simple test case (very similar to our code, but without modifying our code directly).
I list my software versions:
Neo4j: 2.0.2
Neo4jSpatial: 0.13-neo4j-2.0.1
Neo4jGraphCollections: 0.7.1-neo4j-2.0.1
Osmosis: 0.43.1
Since I'm using osmosis in order to read the osm file, I wrote the following Sink implementation:
public class BatchInserterSinkTest implements Sink
{
    public static final Map<String, String> NEO4J_CFG = new HashMap<String, String>();

    private static File basePath = new File("/home/angelo/Scrivania/neo4j");
    private static File dbPath = new File(basePath, "db");

    private GraphDatabaseService graphDb;
    private BatchInserter batchInserter;
    // private BatchInserterIndexProvider batchIndexService;
    private SpatialDatabaseService spatialDb;
    private SimplePointLayer spl;

    static
    {
        NEO4J_CFG.put( "neostore.nodestore.db.mapped_memory", "100M" );
        NEO4J_CFG.put( "neostore.relationshipstore.db.mapped_memory", "300M" );
        NEO4J_CFG.put( "neostore.propertystore.db.mapped_memory", "400M" );
        NEO4J_CFG.put( "neostore.propertystore.db.strings.mapped_memory", "800M" );
        NEO4J_CFG.put( "neostore.propertystore.db.arrays.mapped_memory", "10M" );
        NEO4J_CFG.put( "dump_configuration", "true" );
    }

    @Override
    public void initialize(Map<String, Object> arg0)
    {
        batchInserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
        graphDb = new SpatialBatchGraphDatabaseService(batchInserter);
        spatialDb = new SpatialDatabaseService(graphDb);
        spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
        // batchIndexService = new LuceneBatchInserterIndexProvider(batchInserter);
    }

    @Override
    public void complete()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void release()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void process(EntityContainer ec)
    {
        Entity entity = ec.getEntity();
        if (entity instanceof Node) {
            Node osmNodo = (Node) entity;
            org.neo4j.graphdb.Node graphNode = graphDb.createNode();
            graphNode.setProperty("osmId", osmNodo.getId());
            graphNode.setProperty("latitudine", osmNodo.getLatitude());
            graphNode.setProperty("longitudine", osmNodo.getLongitude());
            spl.add(graphNode);
        } else if (entity instanceof Way) {
            // do something with the way
        } else if (entity instanceof Relation) {
            // do something with the relation
        }
    }
}
Then I wrote the following test case:
public class BatchInserterTest
{
    private static final Log logger = LogFactory.getLog(BatchInserterTest.class.getName());

    @Test
    public void batchInserter()
    {
        File file = new File("/home/angelo/Scrivania/MilanoPiccolo.osm");
        try
        {
            boolean pbf = false;
            CompressionMethod compression = CompressionMethod.None;

            if (file.getName().endsWith(".pbf"))
            {
                pbf = true;
            }
            else if (file.getName().endsWith(".gz"))
            {
                compression = CompressionMethod.GZip;
            }
            else if (file.getName().endsWith(".bz2"))
            {
                compression = CompressionMethod.BZip2;
            }

            RunnableSource reader;
            if (pbf)
            {
                reader = new crosby.binary.osmosis.OsmosisReader(new FileInputStream(file));
            }
            else
            {
                reader = new XmlReader(file, false, compression);
            }

            reader.setSink(new BatchInserterSinkTest());

            Thread readerThread = new Thread(reader);
            readerThread.start();

            while (readerThread.isAlive())
            {
                try
                {
                    readerThread.join();
                }
                catch (InterruptedException e)
                {
                    /* do nothing */
                }
            }
        }
        catch (Exception e)
        {
            logger.error("Errore nella creazione di neo4j con batchInserter", e);
        }
    }
}
By executing this code, I get this exception:
Exception in thread "Thread-1" java.lang.ClassCastException: org.neo4j.unsafe.batchinsert.SpatialBatchGraphDatabaseService cannot be cast to org.neo4j.kernel.GraphDatabaseAPI
at org.neo4j.cypher.ExecutionEngine.<init>(ExecutionEngine.scala:113)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:53)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:43)
at org.neo4j.collections.graphdb.ReferenceNodes.getReferenceNode(ReferenceNodes.java:60)
at org.neo4j.gis.spatial.SpatialDatabaseService.getSpatialRoot(SpatialDatabaseService.java:76)
at org.neo4j.gis.spatial.SpatialDatabaseService.getLayer(SpatialDatabaseService.java:108)
at org.neo4j.gis.spatial.SpatialDatabaseService.containsLayer(SpatialDatabaseService.java:253)
at org.neo4j.gis.spatial.SpatialDatabaseService.createLayer(SpatialDatabaseService.java:282)
at org.neo4j.gis.spatial.SpatialDatabaseService.createSimplePointLayer(SpatialDatabaseService.java:266)
at it.eng.pinf.graph.batch.test.BatchInserterSinkTest.initialize(BatchInserterSinkTest.java:46)
at org.openstreetmap.osmosis.xml.v0_6.XmlReader.run(XmlReader.java:95)
at java.lang.Thread.run(Thread.java:744)
This is related to this code:
spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
So now I'm wondering: how can I use the batchInserter for my case? I have to add the created nodes to the SimplePointLayer....so how can I create it by using the batchInserter graph db service?
Is there any little simple sample?
Any tip is really really appreciated
cheers
Angelo
The OSMImporter class in the code has an example of using the batch inserter to import OSM data. The main thing is that the batch inserter is not really supported by neo4j spatial, so you need to do a few things manually. If you look at the class OSMImporter.OSMBatchWriter, you will see how it does things. It is not using the SimplePointLayer at all, since that does not support the batch inserter. It is creating the graph structure it wants directly. The simple point layer is quite simple, certainly much simpler than the OSM model created by the code I'm referencing, so I think you should be able to write a batch-inserter compatible version yourself without too much trouble.
What I would recommend is that you create the layer and nodes using the batch inserter to create the correct graph structure, then switch to the normal embedded API and use that to iterate through the nodes and add them to the spatial index.
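Roughly, that two-phase approach could look like the sketch below (untested; it reuses the dbPath and NEO4J_CFG from the question, and the example property values are made up):

// Phase 1: create plain nodes with the batch inserter (no spatial layer involved yet).
BatchInserter inserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
Map<String, Object> props = new HashMap<String, Object>();
props.put("latitudine", 45.4642);   // example coordinates
props.put("longitudine", 9.1900);
inserter.createNode(props);
// ... create the rest of the nodes and relationships here ...
inserter.shutdown();

// Phase 2: reopen the store with the embedded API and add the nodes to the spatial layer.
GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase(dbPath.getAbsolutePath());
try (Transaction tx = db.beginTx()) {
    SpatialDatabaseService spatial = new SpatialDatabaseService(db);
    SimplePointLayer layer = spatial.createSimplePointLayer("testBatch", "latitudine", "longitudine");
    // For a real OSM-sized import you would commit in batches instead of one big transaction.
    for (org.neo4j.graphdb.Node node : GlobalGraphOperations.at(db).getAllNodes()) {
        if (node.hasProperty("latitudine") && node.hasProperty("longitudine")) {
            layer.add(node);
        }
    }
    tx.success();
}
db.shutdown();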

Using persistence to display the number of visits for a BB Application?

I have developed an application. I want to display a message before the user starts using my application: when it is used the first time I want to show "Count = 1", and when the app is visited the second time, "Count = 2".
How can I achieve this? I had done such a thing in Android using SharedPreferences. But how can I do it in BlackBerry? I have tried something with PersistentStore, but couldn't get it working, since I don't know anything about persistence in BB.
Also I would like to restrict the usage to 100 times. Is that possible?
Sample code for this would be appreciated, since I am new to this environment.
You can achieve it with Persistent Storage.
Check this nice tutorial about storing persistent data.
Also you can use SQLite. Link to a development guide which describes how to use SQLite databases in Java® applications: Storing data in SQLite databases.
You can restrict the user to at most 100 uses of your application using your own logic with the help of persistent data. But I think there may be some convention for this, so try Google for that.
Got it...
I created a new class which implements Persistable. In that class I created an integer variable and added getter and setter functions for it...
import net.rim.device.api.util.Persistable;

public class Persist implements Persistable
{
    private int first;

    public int getCount()
    {
        return first;
    }

    public void setCount()
    {
        this.first += 1;
    }
}
Then, in the class which initializes my screen, I declared the persistence variables and 3 functions to use my Persist.java: initStore(), savePersist(), and getPersist().
public final class MyScreen extends MainScreen implements FieldChangeListener
{
    /*
     * Declaring my variables...
     */
    private static PersistentObject store;
    public Persist p;

    public MyScreen()
    {
        // my application code
        // here uses persistence
        initStore();
        p = getPersist();
        if (p.getCount() < 100)
        {
            savePersist();
            UiApplication.getUiApplication().invokeLater(new Runnable()
            {
                public void run()
                {
                    Dialog.alert(Integer.toString(p.getCount()));
                }
            });
        }
        else
        {
            close();
            System.exit(0);
        }
    }

    // three functions....
    public static void initStore()
    {
        store = PersistentStore.getPersistentObject(0x4612d496ef1ecce8L);
    }

    public void savePersist()
    {
        synchronized (store)
        {
            p.setCount();
            store.setContents(p);
            store.commit();
        }
    }

    public Persist getPersist()
    {
        Persist p = new Persist();
        synchronized (store)
        {
            p = (Persist) store.getContents();
            if (p == null)
            {
                p = new Persist();
            }
        }
        return p;
    }
}
I hope you all get it right now....
If there is another, simpler way, please let me know...
Thanks
