Neo4j TimeTree REST API Previous and Next Navigation

I am currently using the Neo4j TimeTree REST API. Is there any way to navigate to the time instants before and after a given timestamp? My resolution is Second, and I just realized that when the minute changes, there is no NEXT relationship bridging the last Second of the previous Minute to the first Second of the current Minute. This makes the Cypher query quite complicated, and I don't want to reinvent the wheel if this is already available.
Thanks in advance; any response would be really appreciated!
EDIT
I've managed to reproduce the missing NEXT relationship issue again, as you can see in the picture below. It starts to happen from the third time I add a new Second time instant.
I actually created a NodeEntity to work with the Second nodes. The class looks like this:
@NodeEntity(label = "Second")
public class TimeTreeSecond {

    @GraphId
    private Long id;

    private Integer value;

    @Relationship(type = "CREATED_ON", direction = Relationship.INCOMING)
    private FilterVersionChange relatedFilterVersionChange;

    @Relationship(type = "NEXT", direction = Relationship.OUTGOING)
    private TimeTreeSecond nextTimeTreeSecond;

    @Relationship(type = "NEXT", direction = Relationship.INCOMING)
    private TimeTreeSecond prevTimeTreeSecond;

    public TimeTreeSecond() {
    }

    public Long getId() {
        return id;
    }

    public void next(TimeTreeSecond nextTimeTreeSecond) {
        this.nextTimeTreeSecond = nextTimeTreeSecond;
    }

    public FilterVersionChange getRelatedFilterVersionChange() {
        return relatedFilterVersionChange;
    }
}
The problem here is the incoming NEXT relationship; when I omit it, everything works fine.
Sometimes I even get the following exception in my console when I create time instants repeatedly with a short delay:
Exception in thread "main" org.neo4j.ogm.session.result.ResultProcessingException: Could not initialise response
at org.neo4j.ogm.session.response.GraphModelResponse.<init>(GraphModelResponse.java:38)
at org.neo4j.ogm.session.request.SessionRequestHandler.execute(SessionRequestHandler.java:55)
at org.neo4j.ogm.session.Neo4jSession.load(Neo4jSession.java:108)
at org.neo4j.ogm.session.Neo4jSession.load(Neo4jSession.java:100)
at org.springframework.data.neo4j.repository.GraphRepositoryImpl.findOne(GraphRepositoryImpl.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.executeMethodOn(RepositoryFactorySupport.java:452)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:437)
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:409)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
at com.sun.proxy.$Proxy32.findOne(Unknown Source)
at de.rwthaachen.service.core.FilterDefinitionServiceImpl.createNewFilterVersionChange(FilterDefinitionServiceImpl.java:100)
at sampleapp.FilterLauncher.main(FilterLauncher.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: org.neo4j.ogm.session.result.ResultProcessingException: "errors":[{"code":"Neo.ClientError.Statement.InvalidType","message":"Expected a numeric value for empty iterator, but got null"}]}
at org.neo4j.ogm.session.response.JsonResponse.parseErrors(JsonResponse.java:128)
at org.neo4j.ogm.session.response.JsonResponse.parseColumns(JsonResponse.java:102)
at org.neo4j.ogm.session.response.JsonResponse.initialiseScan(JsonResponse.java:46)
at org.neo4j.ogm.session.response.GraphModelResponse.initialiseScan(GraphModelResponse.java:66)
at org.neo4j.ogm.session.response.GraphModelResponse.<init>(GraphModelResponse.java:36)
... 27 more
2015-05-23 01:30:46,204 INFO ork.data.neo4j.config.Neo4jConfiguration: 62 - Intercepted exception
Below is an example of the REST call I use to create the time instant nodes:
http://localhost:7474/graphaware/timetree/1202/single/1432337658713?resolution=Second&timezone=Europe/Amsterdam
And here is the method that I use to create the data:
public FilterVersionChange createNewFilterVersionChange(String projectName,
                                                        String filterVersionName,
                                                        String filterVersionChangeDescription,
                                                        Set<FilterState> filterStates)
{
    Long filterVersionNodeId = filterVersionRepository.findFilterVersionByName(projectName, filterVersionName);
    FilterVersion newFilterVersion = filterVersionRepository.findOne(filterVersionNodeId, 2);

    // Populate all the existing filters in the current project
    Map<String, Filter> existingFilters = new HashMap<String, Filter>();
    try
    {
        for(Filter filter : newFilterVersion.getProject().getFilters())
        {
            existingFilters.put(filter.getMatchingString(), filter);
        }
    }
    catch(Exception e) {}

    // Map the filter states to the populated filters, if any. Otherwise, create a new filter for each.
    for(FilterState filterState : filterStates)
    {
        Filter filter = existingFilters.get(filterState.getMatchingString());
        if(filter == null)
        {
            filter = new Filter(filterState.getMatchingString(), filterState.getMatchingType(), newFilterVersion.getProject());
        }
        filterState.stateOf(filter);
    }

    Long now = System.currentTimeMillis();
    TimeTreeSecond timeInstantNode = timeTreeSecondRepository.findOne(timeTreeService.getFilterTimeInstantNodeId(projectName, now));

    FilterVersionChange filterVersionChange = new FilterVersionChange(filterVersionChangeDescription, now, filterStates, filterStates, newFilterVersion, timeInstantNode);
    FilterVersionChange addedFilterVersionChange = filterVersionChangeRepository.save(filterVersionChange);
    return addedFilterVersionChange;
}

Leaving aside for a moment the specific use of TimeTree, I'd like to describe how to generally manage a doubly-linked list using SDN 4, specifically for the case where the underlying graph uses a single relationship type between nodes, e.g.
(p1:Post)-[:NEXT]->(p2:Post)
What you can't do
Due to limitations in the mapping framework, it is not possible to reliably declare the same relationship type twice in two different directions in your object model, i.e. this (currently) will not work:
class Post {

    @Relationship(type="NEXT", direction=Relationship.OUTGOING)
    Post next;

    @Relationship(type="NEXT", direction=Relationship.INCOMING)
    Post previous;
}
What you can do
Instead, we can combine the @Transient annotation with annotated setter methods to obtain the desired result:
class Post {

    Post next;

    @Transient Post previous;

    @Relationship(type="NEXT", direction=Relationship.OUTGOING)
    public void setNext(Post next) {
        this.next = next;
        if (next != null) {
            next.previous = this;
        }
    }
}
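For illustration, a minimal usage sketch of this pattern (the no-args construction is an assumption about the Post class):
Post first = new Post();
Post second = new Post();
first.setNext(second);            // only the outgoing NEXT relationship is mapped
assert first.next == second;
assert second.previous == first;  // the transient back-reference is maintained in memory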
As a final point, if you then wanted to be able to navigate forwards and backwards through the entire list of Posts from any starting Post without having to continually refetch them from the database, you can set the fetch depth to -1 when you load the post, e.g.:
findOne(post.getId(), -1);
Bear in mind that an infinite depth query will fetch every reachable object in the graph from the matched one, so use it with care!
Hope this is helpful

The Seconds are linked to each other via a NEXT relationship, even across minutes.
Hope this is what you meant
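Given that NEXT does bridge minute boundaries, navigating to the instant before or after a known Second node can then be a single-hop query. As a rough sketch using the same OGM Session API as the cleanup query elsewhere in the question (the variable names and id lookup are illustrative):
Map<String, Object> params = new HashMap<String, Object>();
params.put("id", timeInstantNode.getId());
// The Second that follows this one; reverse the direction of NEXT
// in the pattern to get the preceding Second instead.
Iterable<Map<String, Object>> next = session.query(
        "MATCH (s:Second)-[:NEXT]->(n) WHERE id(s) = {id} RETURN n", params);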

Related

Is it possible to have a map of maps in MapState?

private MapState<String, EventsHistory> eventsMap = null;

public void processElement2(Event event,
                            Context context,
                            Collector<JoinedEvent> collector) throws Exception {
    String name = event.getExperimentName();
    if (eventsMap.get(name) == null) {
        eventsMap.put(name, new EventsHistory());
    }
    eventsMap.get(name).put(event.getEventTime(), event);
}

class EventsHistory {

    private final Map<Long, Event> events = new HashMap<>();

    public Map<Long, Event> getEvents() {
        return events;
    }

    public void put(final Long eventTime, final Event event) {
        events.put(eventTime, event);
    }
}
I have the above code and would like to use Flink's MapState to maintain a map of maps.
When I test this locally, I can see the state update fine. But when I run it in a cluster, the eventsMap is always empty.
Is it valid to use a map of maps in MapState? Is there a better way to achieve this?
As an alternative, I tried the version below, where I do the grouping myself. Strangely enough, this works.
private MapState<EventKey, Event> assignmentEventsMap = null;

public final class EventKey {
    private String name;
    private long eventTime;
}

public void processElement2(Event event,
                            Context context,
                            Collector<JoinedEvent> collector) throws Exception {
    String name = event.getExperimentName();
    eventsMap.put(new EventKey(event.getName(), event.getEventTime()), event);
}
The code you have shared is difficult to understand, but perhaps you have misunderstood what MapState is. ValueState provides a sharded key/value store, distributed across the cluster. MapState gives you a sharded key/value store, where the values themselves are nested Maps.
In other words, MapState is always a map of maps. You ended up trying to create a map of maps of maps, which is one level too far.
I'm assuming you are trying to build this structure, where you effectively have a map from experiment names to nested maps of timestamps to events:
name -> (time -> event)
Assuming that your stream of events has already been keyed by the experiment name, then rather than using MapState<String, EventsHistory> eventsMap, what you really want is MapState<Long, Event> eventsMap, and rather than
eventsMap.get(name).put(event.getEventTime(), event);
you should be doing
eventsMap.put(event.getEventTime(), event);
See the tutorial about ValueState and an example using MapState in the Flink docs for more background on how to work with these mechanisms.
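As a minimal sketch of the keyed approach described above, assuming the two streams are connected and keyed by experiment name (the class name, the first-input type Query, and the join logic are illustrative, not from the original code):

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.util.Collector;

public class EventHistoryFunction extends KeyedCoProcessFunction<String, Query, Event, JoinedEvent> {

    // timestamp -> event; Flink scopes this state to the current key (the
    // experiment name), so the outer "name ->" level comes for free.
    private transient MapState<Long, Event> eventsMap;

    @Override
    public void open(Configuration parameters) {
        eventsMap = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("events", Long.class, Event.class));
    }

    @Override
    public void processElement1(Query query, Context context, Collector<JoinedEvent> collector) {
        // join logic elided; not relevant to the state question
    }

    @Override
    public void processElement2(Event event, Context context, Collector<JoinedEvent> collector) throws Exception {
        eventsMap.put(event.getEventTime(), event); // no nested map needed
    }
}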

Apply Side input to BigQueryIO.read operation in Apache Beam

Is there a way to apply a side input to a BigQueryIO.read() operation in Apache Beam?
Say for example I have a value in a PCollection that I want to use in a query to fetch data from a BigQuery table. Is this possible using side input? Or should something else be used in such a case?
I used NestedValueProvider in a similar case but I guess we can use that only when a certain value depends on my runtime value. Or can I use the same thing here? Please correct me if I'm wrong.
The code that I tried:
Bigquery bigQueryClient = start_pipeline.newBigQueryClient(options.as(BigQueryOptions.class)).build();
Tabledata tableRequest = bigQueryClient.tabledata();

PCollection<TableRow> existingData = readData.apply("Read existing data", ParDo.of(new DoFn<String, TableRow>() {
    @ProcessElement
    public void processElement(ProcessContext c) throws IOException
    {
        List<TableRow> list = c.sideInput(bqDataView);
        String tableName = list.get(0).get("table").toString();
        TableDataList table = tableRequest.list("projectID", "DatasetID", tableName).execute();
        for (TableRow row : table.getRows())
        {
            c.output(row);
        }
    }
}).withSideInputs(bqDataView));
The error that I get is:
Exception in thread "main" java.lang.IllegalArgumentException: unable to serialize BeamTest.StarterPipeline$1#86b455
at org.apache.beam.sdk.util.SerializableUtils.serializeToByteArray(SerializableUtils.java:53)
at org.apache.beam.sdk.util.SerializableUtils.clone(SerializableUtils.java:90)
at org.apache.beam.sdk.transforms.ParDo$SingleOutput.<init>(ParDo.java:569)
at org.apache.beam.sdk.transforms.ParDo.of(ParDo.java:434)
at BeamTest.StarterPipeline.main(StarterPipeline.java:158)
Caused by: java.io.NotSerializableException: com.google.api.services.bigquery.Bigquery$Tabledata
at java.io.ObjectOutputStream.writeObject0(Unknown Source)
at java.io.ObjectOutputStream.defaultWriteFields(Unknown Source)
at java.io.ObjectOutputStream.writeSerialData(Unknown Source)
at java.io.ObjectOutputStream.writeOrdinaryObject(Unknown Source)
at java.io.ObjectOutputStream.writeObject0(Unknown Source)
at java.io.ObjectOutputStream.writeObject(Unknown Source)
at org.apache.beam.sdk.util.SerializableUtils.serializeToByteArray(SerializableUtils.java:49)
... 4 more
The Beam model does not currently support this kind of data-dependent operation very well.
A way of doing it is to code your own DoFn that receives the side input and connects directly to BQ. Unfortunately, this would not give you any parallelism, as the DoFn would run completely on the same thread.
Once Splittable DoFns are supported in Beam, this will be a different story.
In the current state of the world, you would need to use the BQ client library to add code that would query BQ as if you were not in a Beam pipeline.
Given the code in your question, a rough idea on how to implement this is the following:
class ReadDataDoFn extends DoFn<String, TableRow> {

    private Tabledata tableRequest;
    private Bigquery bigQueryClient;

    private Bigquery createBigQueryClientWithinDoFn() {
        // I'm not sure how you'd implement this, but you had the right idea
        return null; // placeholder
    }

    @Setup
    public void setup() {
        // Fields initialised here are created on the worker after deserialization,
        // which avoids the NotSerializableException on the Bigquery client.
        bigQueryClient = createBigQueryClientWithinDoFn();
        tableRequest = bigQueryClient.tabledata();
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws IOException
    {
        List<TableRow> list = c.sideInput(bqDataView);
        String tableName = list.get(0).get("table").toString();
        TableDataList table = tableRequest.list("projectID", "DatasetID", tableName).execute();
        for (TableRow row : table.getRows())
        {
            c.output(row);
        }
    }
}
PCollection<TableRow> existingData = readData.apply("Read existing data", ParDo.of(new ReadDataDoFn()).withSideInputs(bqDataView));

SDN4 or neo4j-ogm performance issue

I wrote some simple Java code and encountered bad performance with SDN4 that I didn't have with SDN3. I suspect the depth parameter of the repository find methods does not work exactly the way it should. Let me explain the problem:
Here are my Java classes (it's just an example), from which I removed getters, setters, constructors, ...
The first class is 'Element':
@NodeEntity
public class Element {

    @GraphId
    private Long id;

    private int age;
    private String uuid;

    @Relationship(type = "HAS_VALUE", direction = Relationship.OUTGOING)
    private Set<Value> values = new HashSet<Value>();
The second one is 'Attribute':
@NodeEntity
public class Attribute {

    @GraphId
    private Long id;

    @Relationship(type = "HAS_PROPERTIES", direction = Relationship.OUTGOING)
    private Set<HasInterProperties> properties;
The 'Value' class allows my user to add a value on an Element for a specific attribute:
@RelationshipEntity(type = "HAS_VALUE")
public class Value {

    @GraphId
    private Long id;

    @StartNode
    Element element;

    @EndNode
    Attribute attribute;

    private Integer value;
    private String uuid;

    public Value() {
    }

    public Value(Element element, Attribute attribute, Integer value) {
        this.element = element;
        this.attribute = attribute;
        this.value = value;
        this.element.getValues().add(this);
        this.uuid = UUID.randomUUID().toString();
    }
The 'Element' class really needs to know its values, but the 'Attribute' class does not care about values at all.
An Attribute has references to the InternationalizedProperties class, which looks like this:
@NodeEntity
public class InternationalizedProperties {

    @GraphId
    private Long id;

    private String name;
The relationship entity between an Attribute and its InternationalizedProperties looks like the following:
@RelationshipEntity(type = "HAS_PROPERTIES")
public class HasInterProperties {

    @GraphId
    private Long id;

    @StartNode
    private Attribute attribute;

    @EndNode
    private InternationalizedProperties properties;

    private String locale;
I then created a little main method to create two attributes and 10000 elements. All my elements have a specific value for the first attribute but no values for the second one (no relation between them). Both attributes have two different InternationalizedProperties. Here is a sample:
public static void main(String[] args) {
    ApplicationContext context = new ClassPathXmlApplicationContext("spring/*.xml");
    Session session = context.getBean(Session.class);
    session.query("START n=node(*) OPTIONAL MATCH n-[r]-() WHERE ID(n) <> 0 DELETE n,r", new HashMap<String, Object>());

    ElementRepository elementRepository = context.getBean(ElementRepository.class);
    AttributeRepository attributeRepository = context.getBean(AttributeRepository.class);
    InternationalizedPropertiesRepository internationalizedPropertiesRepository = context.getBean(InternationalizedPropertiesRepository.class);
    HasInterPropertiesRepository hasInterPropertiesRepository = context.getBean(HasInterPropertiesRepository.class);

    // Creation of an attribute object with two internationalized properties
    Attribute att = new Attribute();
    attributeRepository.save(att);

    InternationalizedProperties p1 = new InternationalizedProperties();
    p1.setName("bonjour");
    internationalizedPropertiesRepository.save(p1);

    InternationalizedProperties p2 = new InternationalizedProperties();
    p2.setName("hello");
    internationalizedPropertiesRepository.save(p2);

    hasInterPropertiesRepository.save(new HasInterProperties(att, p1, "fr"));
    hasInterPropertiesRepository.save(new HasInterProperties(att, p2, "en"));
    LOGGER.info("First attribute id is {}", att.getId());

    // Creation of 10000 elements, each having a different value on the same attribute
    for (int i = 0; i < 10000; i++) {
        Element elt = new Element();
        new Value(elt, att, i);
        elementRepository.save(elt);
        if (i % 50 == 0) {
            LOGGER.info("{} elements created. Last element created with id {}", i + 1, elt.getId());
        }
    }

    // Another attribute, without any values from elements.
    Attribute att2 = new Attribute();
    attributeRepository.save(att2);

    InternationalizedProperties p12 = new InternationalizedProperties();
    p12.setName("bonjour");
    internationalizedPropertiesRepository.save(p12);

    InternationalizedProperties p22 = new InternationalizedProperties();
    p22.setName("hello");
    internationalizedPropertiesRepository.save(p22);

    hasInterPropertiesRepository.save(new HasInterProperties(att2, p12, "fr"));
    hasInterPropertiesRepository.save(new HasInterProperties(att2, p22, "en"));
    LOGGER.info("Second attribute id is {}", att2.getId());
}
Finally, in another main method, I try to fetch the first attribute and the second one several times:
private static void getFirstAttribute(AttributeRepository attributeRepository) {
    StopWatch st = new StopWatch();
    st.start();
    Attribute attribute = attributeRepository.findOne(25283L, 1);
    LOGGER.info("time to get attribute (some element have values on it) is {}ms", st.getTime());
}

private static void getSecondAttribute(AttributeRepository attributeRepository) {
    StopWatch st = new StopWatch();
    st.start();
    Attribute attribute2 = attributeRepository.findOne(26286L, 1);
    LOGGER.info("time to get attribute (no element have values on it) is {}ms", st.getTime());
}

public static void main(String[] args) {
    ApplicationContext context = new ClassPathXmlApplicationContext("spring/*.xml");
    AttributeRepository attributeRepository = context.getBean(AttributeRepository.class);
    getFirstAttribute(attributeRepository);
    getSecondAttribute(attributeRepository);
    getFirstAttribute(attributeRepository);
    getSecondAttribute(attributeRepository);
    getFirstAttribute(attributeRepository);
    getSecondAttribute(attributeRepository);
    getFirstAttribute(attributeRepository);
    getSecondAttribute(attributeRepository);
}
Here are the logs from this execution:
time to get attribute (some element have values on it) is 2983ms
time to get attribute (no element have values on it) is 4ms
time to get attribute (some element have values on it) is 1196ms
time to get attribute (no element have values on it) is 2ms
time to get attribute (some element have values on it) is 1192ms
time to get attribute (no element have values on it) is 3ms
time to get attribute (some element have values on it) is 1194ms
time to get attribute (no element have values on it) is 3ms
Getting the second attribute (and its internationalized properties, thanks to depth=1) is very quick, but getting the first one remains very slow. I know that there are many relationships (10000, exactly) pointing at the first attribute, but when I want to get an attribute with its internationalized properties, I clearly do not want to fetch all the values pointing at it (since no Set of values is declared on the Attribute class).
That's why I think there is a performance problem here. Or maybe I am doing something wrong?
Thanks for your help
When loading data from the graph we don't currently analyse how your domain model is wired together, so we may potentially bring back related nodes that you do not require. These will then be discarded if they are not mappable in your domain, but if there are many of them, it could potentially impact response times.
There are two reasons for this approach:
1. It is obviously much simpler to create generic queries to any depth than it would be to dynamically analyse your domain model to any arbitrary depth and generate custom queries on the fly; it's also much simpler to analyse and prove the correctness of generic queries.
2. We want to preserve the capability to support polymorphic domain models in the future, where we don't necessarily know what's in the database from one day to the next, but we want to adapt our domain model hydration according to what we find.
In this case I would suggest writing a custom query to load the Attribute objects, to ensure you don't bring back all the unwanted relationships.
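As a rough sketch of such a custom query, assuming an SDN 4 GraphRepository-based AttributeRepository (the method name and Cypher are illustrative, not taken from the question):

public interface AttributeRepository extends GraphRepository<Attribute> {

    // Load the attribute together with its internationalized properties only,
    // deliberately excluding the thousands of incoming HAS_VALUE relationships
    // that the generic depth-1 query would otherwise pull back and discard.
    @Query("MATCH (a:Attribute)-[r:HAS_PROPERTIES]->(p:InternationalizedProperties) "
            + "WHERE id(a) = {0} RETURN a, r, p")
    Attribute findOneWithProperties(Long id);
}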

How to limit parsing depth using Tinkerpop Frames

Hi, I have an interface and a corresponding implementation class like this:
public interface IActor extends VertexFrame {

    @Property(ActorProps.nodeClass)
    public String getNodeClass();

    @Property(ActorProps.nodeClass)
    public void setNodeClass(String str);

    @Property(ActorProps.id)
    public String getId();

    @Property(ActorProps.id)
    public void setId(String id);

    @Property(ActorProps.name)
    public String getName();

    @Property(ActorProps.name)
    public void setText(String text);

    @Property(ActorProps.uuid)
    public String getUuid();

    @Property(ActorProps.uuid)
    public void setUuid(String uuid);

    @Adjacency(label = RelClasses.CoActors, direction = Direction.OUT)
    public Iterable<IActor> getCoactors();
}
And I use it with OrientDB in something like the following (I had a similar implementation with Neo4j as well):
Graph graph = new OrientGraph("remote:localhost/actordb");
FramedGraph<Graph> manager = new FramedGraphFactory().create(graph);
IActor actor = manager.frame(((OrientGraph)graph).getVertexByKey("Actor.uuid",uuid), IActor.class);
The above works, but the problem is that in this or similar cases, because there is a relationship between two vertices of class Actor, there could potentially be a graph loop. Is there a way to define, either by annotation or some other way (e.g. through the manager), that traversal should stop after x steps for a specific @Adjacency, so this won't go on forever? If the @GremlinGroovy (https://github.com/tinkerpop/frames/wiki/Gremlin-Groovy) annotation is the answer, could you please give an example?
I'm not sure I understand the question/problem. (You say "potentially", but haven't actually proven that there's a problem!)
Is the problem that there is a loop in the Vertex/Frames, and (you think) loading the object will result in an infinite loop?
Have you been able to prove that there is a problem loading a Vertex/Frame with a loop? (show me the code/problem)
As I understand it, the Pipelines will lazy-load objects (only load them when required). The frames (I imagine) only load adjacent frames when requested. Basically, as far as I can tell, there's no problem.
Example (Groovy)
// create some framed vertices
Person nick = createPerson(name: 'Nick')
Person michail = createPerson(name: 'Michail')
// create a recursive loop
nick.addKnows(michail)
michail.addKnows(nick)
// handles recursion = true!
Person nick2 = framedGraph.getVertex(nick.asVertex().id, Person)
assert nick2.knows.knows.knows.knows.knows.name == 'Michail'

java.lang.IllegalStateException: trying to requery an already closed cursor android.database.sqlite.SQLiteCursor#

I've read several related posts and even posted an answer here, but it seems I was not able to solve the problem.
I have 3 Activities:
Act1 (main)
Act2
Act3
When going back and forth Act1->Act2 and Act2->Act1 I get no issues
When going Act2->Act3 I get no issues
When going Act3->Act2 I get occasional crashes with the following error: java.lang.IllegalStateException: trying to requery an already closed cursor android.database.sqlite.SQLiteCursor#.... This is a ListView cursor.
What I tried:
1. Adding stopManagingCursor(currentCursor); to the onPause() of Act2, so I stop managing the cursor when leaving Act2 for Act3:
protected void onPause()
{
    Log.i(getClass().getName() + ".onPause", "Hi!");
    super.onPause();
    saveState();

    // Make sure you get rid of the cursor when leaving for another Activity.
    // Prevents: ...Unable to resume activity... trying to requery an already closed cursor
    Cursor currentCursor = ((SimpleCursorAdapter) getListAdapter()).getCursor();
    stopManagingCursor(currentCursor);
}
When returning from Act3 to Act2, I do the following:
private void populateCompetitorsListView()
{
    ListAdapter currentListAdapter = getListAdapter();
    Cursor currentCursor = null;
    Cursor tournamentStocksCursor = null;
    if (currentListAdapter != null)
    {
        currentCursor = ((SimpleCursorAdapter) currentListAdapter).getCursor();
        if (currentCursor != null)
        {
            // might be redundant, not sure
            stopManagingCursor(currentCursor);

            // Get all of the stocks from the database and create the item list
            tournamentStocksCursor = mDbHelper.retrieveTrounamentStocks(mTournamentRowId);
            ((SimpleCursorAdapter) currentListAdapter).changeCursor(tournamentStocksCursor);
        }
        else
        {
            tournamentStocksCursor = mDbHelper.retrieveTrounamentStocks(mTournamentRowId);
        }
    }
    else
    {
        tournamentStocksCursor = mDbHelper.retrieveTrounamentStocks(mTournamentRowId);
    }
    startManagingCursor(tournamentStocksCursor);

    // Create an array to specify the fields we want to display in the list (name and score)
    String[] from = new String[] {StournamentConstants.TblStocks.COLUMN_NAME, StournamentConstants.TblTournamentsStocks.COLUMN_SCORE};
    // and an array of the view ids we want to bind those fields to
    int[] to = new int[]{R.id.competitor_name, R.id.competitor_score};

    // Now create a cursor adapter and set it to display using our row layout
    SimpleCursorAdapter tournamentStocks = new SimpleCursorAdapter(this, R.layout.competitor_row, tournamentStocksCursor, from, to);
    //tournamentStocks.convertToString(tournamentStocksCursor);
    setListAdapter(tournamentStocks);
}
So I make sure I invalidate the cursor and use a different one. I found out that when I go Act3->Act2 the system will sometimes use the same cursor for the List View and sometimes it will have a different one.
This is hard to debug and I was never able to catch a crashing system while debugging. I suspect this has to do with the time it takes to debug (long) and the time it takes to run the app (much shorter, no pause due to breakpoints).
In Act2 I use the following Intent and expect no result:
protected void onListItemClick(ListView l, View v, int position, long id)
{
    super.onListItemClick(l, v, position, id);
    Intent intent = new Intent(this, ActivityCompetitorDetails.class);
    intent.putExtra(StournamentConstants.App.competitorId, id);
    intent.putExtra(StournamentConstants.App.tournamentId, mTournamentRowId);
    startActivity(intent);
}
Moving Act1->Act2 and Act2->Act1 never gives me trouble. There I use startActivityForResult(intent, ACTIVITY_EDIT), and I am not sure: could this be the source of my trouble?
I would be grateful if anyone could shed some light on this subject. I am interested in learning some more about this subject.
Thanks, D.
I call this a two-dimensional problem, since two things were responsible for this crash:
1. I used startManagingCursor(mItemCursor); where I shouldn't have.
2. I forgot to call initCursorAdapter() (for autocomplete) in onResume().
//@SuppressWarnings("deprecation")
private void initCursorAdapter()
{
    mItemCursor = mDbHelper.getCompetitorsCursor("");
    startManagingCursor(mItemCursor); // <= this is bad!
    mCursorAdapter = new CompetitorAdapter(getApplicationContext(), mItemCursor);
    initItemFilter();
}
Now it seems to work fine. I hope so...
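For comparison, a minimal sketch of the corrected lifecycle under the two fixes above (closing the cursor in onDestroy is an assumption about the surrounding Activity, not code from the original post):

private void initCursorAdapter()
{
    mItemCursor = mDbHelper.getCompetitorsCursor("");
    // No startManagingCursor here: the cursor's lifecycle is managed manually.
    mCursorAdapter = new CompetitorAdapter(getApplicationContext(), mItemCursor);
    initItemFilter();
}

@Override
protected void onResume()
{
    super.onResume();
    initCursorAdapter(); // re-create the cursor; the old one may already be closed
}

@Override
protected void onDestroy()
{
    super.onDestroy();
    if (mItemCursor != null && !mItemCursor.isClosed()) {
        mItemCursor.close(); // release the manually managed cursor
    }
}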
Try this; it may work for you:
@Override
protected void onRestart() {
    super.onRestart();
    orderCursor.requery(); // re-run the query so the cursor is valid again after restart
}
This also works (startManagingCursor was deprecated in API level 11, so it is only called on older versions):
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB) {
    startManagingCursor(cursor);
}
