I'm currently trying Neo4j 2.0.0 M3 and see some strange behaviour. In my unit tests, everything works as expected (using a newImpermanentDatabase()), but in the real thing, I do not get results from graphDatabaseService.findNodesByLabelAndProperty.
Here is the code in question:
ResourceIterator<Node> iterator = graphDB
        .findNodesByLabelAndProperty(Labels.User, "EMAIL_ADDRESS", emailAddress)
        .iterator();
try {
    if (iterator.hasNext()) { // => returns false
        return iterator.next();
    }
} finally {
    iterator.close();
}
return null;
This returns no results. However, when running the following code, I see my node is there (the MATCH!!!!!!!!! is printed). I also have an index set up via the schema (although, if I read the API correctly, this is not strictly necessary, but it matters for performance):
ResourceIterator<Node> iterator1 = GlobalGraphOperations.at(graphDB).getAllNodesWithLabel(Labels.User).iterator();
while (iterator1.hasNext()) {
    Node result = iterator1.next();
    UserDao.printoutNode(emailAddress, result);
}
And here is UserDao.printoutNode:
public static void printoutNode(String emailAddress, Node next) {
    System.out.print(next);
    ResourceIterator<Label> iterator1 = next.getLabels().iterator();
    System.out.print("(");
    while (iterator1.hasNext()) {
        System.out.print(iterator1.next().name());
    }
    System.out.print("): ");
    for (String key : next.getPropertyKeys()) {
        System.out.print(key + ": " + next.getProperty(key).toString() + "; ");
        if (emailAddress.equals(next.getProperty(key).toString())) {
            System.out.print("MATCH!!!!!!!!!");
        }
    }
    System.out.println();
}
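For reference, the schema index mentioned above would typically be created along these lines (a sketch, assuming the label and property name from the code above; schema changes need their own transaction):

// Sketch: creating the schema index once, before inserting data
try (Transaction tx = graphDB.beginTx()) {
    graphDB.schema().indexFor(Labels.User).on("EMAIL_ADDRESS").create();
    tx.success();
}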
I already debugged through the code, and what I found out so far is that the call passes via InternalAbstractGraphDatabase.map2Nodes to a DelegatingIndexProxy.getDelegate and ends up in the IndexReader.Empty class, which returns IteratorUtil.EMPTY_ITERATOR, hence the false from iterator.hasNext().
Any ideas what I am doing wrong?
Found it:
I had only included neo4j-kernel:2.0.0-M03 in the classpath. The moment I added neo4j-cypher:2.0.0-M03 as well, everything worked fine.
Hope this answer helps save some time for other users.
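For reference, in Maven terms the fix amounts to adding roughly this dependency (a sketch; the coordinates are taken from the versions above):

<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-cypher</artifactId>
    <version>2.0.0-M03</version>
</dependency>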
#Neo4j: it would be nice if an exception were thrown instead of just returning nothing.
#Ricardo: I wanted to, but I was not allowed to yet, as my reputation as a new SO user wasn't high enough.
I found another person apparently having this issue but I thought I'd re-ask the question to see if I could make it more explicit.
I'm using the JIRA 6 REST web API and successfully pulling lots of data that matches our web cloud UI.
Now I'd like to see the transitions a given issue has been through, preferably with info about who performed each transition.
I can see this transition history in our JIRA web UI, but I haven't figured out how to access it programmatically yet.
There's a promising sounding API:
http://example.com:8080/jira/rest/api/2/issue/{issueIdOrKey}/transitions [GET, POST]
And this is the API the previous asker seemed to be using. From what I can tell, it only returns the valid transitions you can request on the issue at a given point in time.
I would like a history of transitions, such as when the issue went to code review, QA, closed, etc.
I have done an expand=changelog, but the change log does not correlate with the transitions that I can see.
Any tips would be appreciated. Thanks.
When you use expand=changelog, all changes that have been made to the issue are included: exactly the same info as in the All tab of the Activity section when viewing the issue in a web browser.
When I send:
http://jira.my.server.se/rest/api/2/issue/KEYF-42346?expand=changelog
Under the changelog key I find a list of histories. Each history has a list of items. Those items are the changes performed on particular fields, with from and to values.
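The relevant part of the response is shaped roughly like this (a sketch; the values are placeholders):

{
  "changelog": {
    "histories": [
      {
        "author": { "displayName": "..." },
        "created": "...",
        "items": [
          { "field": "status", "fromString": "Open", "toString": "In Progress" }
        ]
      }
    ]
  }
}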
To find all status changes you need to do something like this:
# Assumes an issue fetched with the `jira` Python library and expand='changelog'
for history in issue.changelog.histories:
    for item in history.items:
        if item.field == "status":
            print(item.toString)    # new value
            print(item.fromString)  # old value
Or use GET /rest/api/3/issue/{issueIdOrKey}/changelog, as explained in the "get changelog" docs.
You can try using the jql parameter for the REST API call.
So your call for
JQL = project=XYZ and status was resolved
fields = key
will look like this:
http://example.com/rest/api/2/search?jql=project%3DXYZ%20and%20status%20was%20resolved&fields=key
where fields=key makes each result contain only the issue key rather than the full, excessive field set.
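If you want to call that endpoint from Java, a minimal sketch might look like this (the host, credentials, and the use of the Java 11 HttpClient are my assumptions, not part of the original answer):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class JiraSearch {
    public static void main(String[] args) throws Exception {
        // JQL from above, already URL-encoded; fields=key keeps the payload small
        String url = "http://example.com/rest/api/2/search"
                + "?jql=project%3DXYZ%20and%20status%20was%20resolved&fields=key";
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON containing only the issue keys
    }
}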
public void changeStatus(IssueRestClient iRestClient,
        List<Statuses> jiraStatuses, String key) {
    String status = "To Do";
    for (Statuses statuses : jiraStatuses) {
        if (1 == statuses.compareTo(status)) {
            try {
                String _transition = statuses.getTransition();
                Issue issue = iRestClient.getIssue(key).get();
                Transition transition = getTransition(iRestClient, issue, _transition);
                if (!isBlankOrNull(transition)) {
                    if (!issue.getStatus().getName().equalsIgnoreCase(_transition)) {
                        transition(transition, issue, null, iRestClient, status);
                    }
                }
            } catch (Exception e) {
                Constants.ERROR.info(Level.INFO, e);
            }
            break;
        }
    }
}
List<Statuses> here is a POJO-backed list; the statuses and transitions defined in XML are injected through setters/constructors.
private void transition(Transition transition, Issue issue,
        FieldInput fieldInput, IssueRestClient issueRestClient,
        String status) throws Exception {
    // Both branches of the original if/else were identical, so they are
    // collapsed here; note that fieldInput is not actually used.
    TransitionInput transitionInput = new TransitionInput(transition.getId());
    issueRestClient.transition(issue, transitionInput).claim();
    Constants.REPORT.info("Status Updated for : " + issue.getKey());
}
public Transition getTransition(IssueRestClient issueRestClient,
        Issue issue, String _transition) {
    Promise<Iterable<Transition>> ptransitions = issueRestClient.getTransitions(issue);
    Iterable<Transition> transitions = ptransitions.claim();
    for (Transition transition : transitions) {
        if (transition.getName().equalsIgnoreCase(_transition)) {
            return transition;
        }
    }
    return null;
}
In short, using JIRA's Transition API we can fetch all the transitions needed to set statuses.
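For completeness, the IssueRestClient used above is typically obtained via the Atlassian JIRA REST Java Client factory, roughly like this (a sketch; the server URI and credentials are placeholders):

// Sketch: building an IssueRestClient with JRJC
// (classes from com.atlassian.jira.rest.client.api and ...internal.async)
AsynchronousJiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
JiraRestClient client = factory.createWithBasicHttpAuthentication(
        URI.create("http://example.com:8080/jira"), "user", "password");
IssueRestClient issueRestClient = client.getIssueClient();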
I use BatchInserters.batchDatabase to create an embedded Neo4j 2.1.5 database. When I only put a small amount of data in it, everything works fine.
But if I increase the amount of data, Neo4j fails to persist the latest properties set with setProperty. I can read those properties back with getProperty before I call shutdown, but when I load the database again with new GraphDatabaseFactory().newEmbeddedDatabase, those properties are lost.
The strange thing about this is that Neo4j doesn't report any error or throw an exception, so I have no clue what's going wrong or where. Java should have enough memory to handle both the small database (2.66 MiB, 3,000 nodes, 3,000 relationships) and the big one (26.32 MiB, 197,267 nodes, 390,659 relationships).
It's hard for me to extract a running example to show the problem, but I can do so if it helps. Here are the main steps I perform:
def createDataBase(rules: AllRules) {
  // empty the database folder
  deleteFileOrDirectory(new File(mainProjectPathNeo4j))

  // create an index on some properties
  db = new GraphDatabaseFactory().newEmbeddedDatabase(mainProjectPathNeo4j)
  engine = new ExecutionEngine(db)
  createIndex()
  db.shutdown()

  // fill the database
  db = BatchInserters.batchDatabase(mainProjectPathNeo4j)
  //createBatchIndex
  try {
    // every function loads some data
    loadAllModulesBatch(rules)
    loadAllLinkModulesBatch(rules)
    loadFormalModulesBatch(rules)
    loadInLinksBatch()
    loadHILBatch()
    createStandardLinkModules(rules)
    createStandardLinkSets(rules)
    // validateModel shows the problem
    validateModel(rules)
  } catch {
    // I want to see if my environment (BIRT) is catching any exceptions
    case _ => val a = 7
  } finally {
    db.shutdown()
  }
}
validateModel updates some properties of already created nodes:
def validateModule(srcM: GenericModule) {
  srcM.node.setProperty("isValidated", true)
  assert(srcM.node == Neo4jScalaDataSource.testNode)
  assert(srcM.node eq Neo4jScalaDataSource.testNode)
  assert(srcM.node.getProperty("isValidated").asInstanceOf[Boolean])
}
When I finally use Cypher to get some data back, the properties set by validateModel are missing:
class Neo4jScalaDataSet extends ScriptedDataSetEventAdapter {
  override def beforeOpen(...) {
    result = Neo4jScalaDataSource.engine.profile(
      """
      MATCH (fm:FormalModule {isValidated: true}) RETURN fm.fullName as fullName, fm.uid as uid
      """)
    iter = result.iterator()
  }

  override def fetch(...) = {
    if (iter.hasNext()) {
      for (e <- iter.next().entrySet()) {
        row.setColumnValue(e.getKey(), e.getValue())
      }
      count += 1
      row.setColumnValue("count", count)
      return true
    } else {
      logger.log(Level.INFO, result.executionPlanDescription().toString())
      return super.fetch(dataSet, row)
    }
  }
}
batchDatabase indeed causes this problem.
I have switched to BatchInserters.inserter and now everything works just fine.
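For anyone hitting the same problem, the inserter-based variant looks roughly like this in Java (a sketch; store path, label, and property names are placeholders, not the original code; BatchInserter lives in org.neo4j.unsafe.batchinsert):

// Sketch: BatchInserters.inserter instead of batchDatabase
BatchInserter inserter = BatchInserters.inserter("path/to/db");
try {
    long nodeId = inserter.createNode(
            Collections.<String, Object>singletonMap("fullName", "m1"),
            DynamicLabel.label("FormalModule"));
    inserter.setNodeProperty(nodeId, "isValidated", true);
} finally {
    inserter.shutdown(); // flushes everything to the store files
}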
I'm using the Grails elasticsearch plugin in my application, and I am running into an odd exception.
I have a method that builds a request using a geo_distance filter this way:
def activityList = ElasticSearchService.search(
    [types: [DomainA, DomainB],
     sort: orderBy, order: orderAs, size: (int) maxRes],
    {
        // bla bla bla some working closure building a working query
        if (centerLat != -1) {
            filter {
                geo_distance(
                    'distance': searchRadius,
                    'activityLocation': [lat: (Double) centerLat, lon: (Double) centerLon]
                )
            }
        }
    }
)
Whenever I try to use the method with the filter (that is, when I set, for example, searchRadius to '5km' and centerLat/centerLon to correct coordinates), my ElasticSearch node goes crazy and keeps logging the following error until I shut it down:
| Error 2014-08-27 10:31:49,076 [elasticsearch[Agent Zero][bulk][T#2]] ERROR index.IndexRequestQueue - Failed bulk item: MapperParsingException[failed to parse]; nested: ElasticsearchParseException[field must be either 'lat', 'lon' or 'geohash'];
I tried looking around the web for the reason this MapperParsingException is thrown, and I ended up looking at the source code of the org.elasticsearch.common.geo.GeoUtils class. Apparently, the exception is thrown because my lat and lon fields are not numbers (see lines 372 and 381).
Why is this happening? Did I declare my filter wrong?
I think the problem is with your index type mappings. In order to understand the problem, you should post some example of the data you are indexing and your type mappings (you can get your current mappings from host:port/{index}/_mapping/{type}).
Try to define the mapping as explained here: http://noamt.github.io/elasticsearch-grails-plugin/guide/mapping.html. Look at 3.3.2 GeoPoint and make sure you define your data points correctly.
Assuming the mapping for activityLocation is "geo_point", you shouldn't need to preface your coordinates with "lat" and "lon" and should just be able to send
'activityLocation': [(Double) centerLon, (Double) centerLat]
(note that Elasticsearch expects array-format coordinates in [lon, lat] order). I think this "lat"/"lon" preface might have been supported as a geo_point format in earlier versions, but as of 1.0.0 I don't think it still is.
It's been a while, and this problem got on my nerves quite a bit, but I have finally solved it.
I was mapping the following classes this way:
class A {
    static searchable = {
        activityLocation geoPoint: true, component: true
        // some other mappings on some other fields
    }
    Location activityLocation
    // some other fields
}
and
class Location {
    static searchable = true
    String locationName
    Double lat
    Double lon
}
There were two problems:
1. I didn't want to use the root false option when mapping my activityLocation objects, because I wanted to be able to search them directly. This caused my A objects to contain the following property (note the presence of the id field):
'activityLocation': { 'id': '1', 'locationName': 'foo', 'lat': '42.5464', 'lon': '65.5647' }
which isn't a geoPoint, because it has more than just a lat and a lon field.
2. I was mapping the locationName property as part of my activityLocation objects, which makes them not geoPoints.
I ended up solving the problem by changing the mapping to the following values for activityLocation:
class ActivityLocation {
    static searchable = {
        only = ['lat', 'lon']
        root false
    }
    String locationName
    Double lat
    Double lon
}
This solved my problem and my search went well from then on.
I have to add that I was pretty confused by the docs regarding this and I am a bit disappointed that I can't map a geoPoint with more attributes. I wish it were possible to have other fields in the mapping.
Furthermore, I wish the plugin gave me better access to the ES logs; it took me a while to figure out where to look for the error code.
I have managed to fix this with a few lines of code added to DeepDomainClassMarshaller.groovy, overriding this file.
Previous code:
if (DomainClassArtefactHandler.isDomainClass(propertyClass)) {
    String searchablePropertyName = getSearchablePropertyName()
    if (propertyValue.class."$searchablePropertyName") {
        // todo fixme - will throw exception when no searchable field.
        marshallingContext.lastParentPropertyName = prop.name
        marshallResult += [(prop.name): ([id: propertyValue.ident(), 'class': propertyClassName] + marshallingContext.delegateMarshalling(propertyValue, propertyMapping.maxDepth))]
    } else {
        marshallResult += [(prop.name): [id: propertyValue.ident(), 'class': propertyClassName]]
    }
    // Non-domain marshalling
}
New code:
if (DomainClassArtefactHandler.isDomainClass(propertyClass)) {
    String searchablePropertyName = getSearchablePropertyName()
    if (propertyValue.class."$searchablePropertyName") {
        // todo fixme - will throw exception when no searchable field.
        marshallingContext.lastParentPropertyName = prop.name
        if (propertyMapping?.isGeoPoint())
            marshallResult += [(prop.name): (marshallingContext.delegateMarshalling(propertyValue, propertyMapping.maxDepth))]
        else
            marshallResult += [(prop.name): ([id: propertyValue.ident(), 'class': propertyClassName] + marshallingContext.delegateMarshalling(propertyValue, propertyMapping.maxDepth))]
    } else {
        marshallResult += [(prop.name): [id: propertyValue.ident(), 'class': propertyClassName]]
    }
    // Non-domain marshalling
}
Let me know if the issue still persists.
Note: I am still using elasticsearch:0.0.3.3.
The following is a code fragment obtained from the Grails website.
<script>
    function messageKeyPress(field, event) {
        var theCode = event.keyCode ? event.keyCode : event.which ? event.which : event.charCode;
        var message = $('#messageBox').val();
        if (theCode == 13) {
            <g:remoteFunction action="submitMessage" params="\'message=\'+message" update="temp"/>
            $('#messageBox').val('');
            return false;
        } else {
            return true;
        }
    }

    function retrieveLatestMessages() {
        <g:remoteFunction action="retrieveLatestMessages" update="chatMessages"/>
    }

    function pollMessages() {
        retrieveLatestMessages();
        setTimeout('pollMessages()', 5000);
    }

    pollMessages();
</script>
The above code worked, but when I added the controller it stopped working. I mean that the records get saved in the DB, but I am not able to retrieve the data and display it on screen.
This is what I did:
<g:remoteFunction controller="message" action="retrieveLatestMessages" update="chatMessages"/>
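(For context: update="chatMessages" makes Grails replace the inner HTML of the page element with that id, so the GSP needs a matching placeholder, e.g.:)

<div id="chatMessages"></div>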
The MessageController function is as follows:
@Secured(['ROLE_USER'])
def retrieveLatestMessages() {
    println "test"
    def messages = Message.listOrderByDate(order: 'desc', max: 1000)
    [messages: messages.reverse()]
    println messages
}
The above controller function gets executed (I see the println statements on the console), but the data isn't getting populated on the screen.
Can someone help me out here?
UPDATE
[{"class":"myPro.Message","id":3,"date":"2014-07-23T17:31:58Z","message":"dfdf","name":"hi"},{"class":"myPro.Message","id":2,"date":"2014-07-23T17:31:56Z","message":"dfdfdf","name":"dd"},{"class":"myPro.Message","id":1,"date":"2014-07-23T17:31:18Z","message":"xxxx","name":"fie"}]
Your method, the retrieveLatestMessages() action in your case, must return a model, but as written the last statement is a println, whose (null) value is returned instead.
To make your code work, you must place the model in the last line, or explicitly return it using a return statement:
def retrieveLatestMessages() {
    println "test"
    def messages = Message.listOrderByDate(order: 'desc', max: 1000)
    println messages
    [messages: messages.reverse()]
}
Try this:
import grails.converters.JSON

@Secured(['ROLE_USER'])
def retrieveLatestMessages() {
    println "test"
    def messages = Message.listOrderByDate(order: 'asc', max: 1000)
    render messages as JSON
}
Enjoy.
I had this sample app working on mine with no issues, but here is the thing: this approach requires you to poll the page constantly, and it is resource-intensive.
I ended up writing a domain class that was bound to a DataSource using the HQL db outside of my own app; the approach requires a DB table to stream chat.
An alternative is to move away from polling and use websockets:
Check out this video:
https://www.youtube.com/watch?v=8QBdUcFqRkU
Then check out this video:
https://www.youtube.com/watch?v=BikL52HYaZg
Finally, look at this:
https://github.com/vahidhedayati/grails-websocket-example
This has been updated and includes the second method, using websockets to make a simple chat.
I have a fairly simple cache configuration:
<cache name="MyCache"
maxElementsInMemory="200000"
eternal="false"
timeToIdleSeconds="43200"
timeToLiveSeconds="43200"
overflowToDisk="false"
diskPersistent="false"
memoryStoreEvictionPolicy="LRU"
/>
I create my cache in the following way:
private Ehcache myCache =
CacheManager.getInstance().getEhcache("MyCache");
I use my cache like this:
public MyResponse processRequest(MyRequest request) {
    Element element = myCache.get(request);
    if (element != null) {
        return (MyResponse) element.getValue();
    } else {
        MyResponse response = remoteService.process(request);
        myCache.put(new Element(request, response));
        return response;
    }
}
Every 10,000 calls to processRequest() method, I log stats about my cache like this:
logger.debug("Cache name: " + myCache.getName());
logger.debug("Max elements in memory: " + myCache.getMaxElementsInMemory());
logger.debug("Memory store size: " + myCache.getMemoryStoreSize());
logger.debug("Hit count: " + myCache.getHitCount());
logger.debug("Miss count: " + myCache.getMissCountNotFound());
logger.debug("Miss count (because expired): " + myCache.getMissCountExpired());
... I see a good number of hits, which tells me that it's working.
However, what I'm seeing is that after a couple of hours, getMemoryStoreSize() starts to exceed getMaxElementsInMemory(). Eventually it gets bigger and bigger and renders the JVM unstable, because GC starts doing full GCs nonstop to reclaim memory (and I have a pretty large heap cap set). When I profiled the heap, it pointed to the LRU's SpoolingLinkedHashMap taking most of the space.
I have a lot of requests hitting this cache, and my theory is that ehcache's LRU algorithm is perhaps not keeping up with evicting elements when the cache is full. I tried the LFU policy and it also caused the memory store to go over maxElementsInMemory.
I then started looking at the ehcache code to see if I could prove my theory (inside LruMemoryStore$SpoolingLinkedHashMap):
private boolean removeLeastRecentlyUsedElement(Element element) throws CacheException {
    // check for expiry and remove before going to the trouble of spooling it
    if (element.isExpired()) {
        notifyExpiry(element);
        return true;
    }
    if (isFull()) {
        evict(element);
        return true;
    } else {
        return false;
    }
}
From here it looks OK, so I then looked at the evict() method:
protected final void evict(Element element) throws CacheException {
    boolean spooled = false;
    if (cache.isOverflowToDisk()) {
        if (!element.isSerializable()) {
            if (LOG.isDebugEnabled()) {
                LOG.debug(new StringBuffer("Object with key ").append(element.getObjectKey())
                        .append(" is not Serializable and cannot be overflowed to disk"));
            }
        } else {
            spoolToDisk(element);
            spooled = true;
        }
    }
    if (!spooled) {
        cache.getCacheEventNotificationService().notifyElementEvicted(element, false);
    }
}
This looks like it doesn't actually evict (despite the name) but rather relies on the caller to evict. So I looked at the implementation of the put() method, and I don't see it calling it. I'm clearly missing something here and would appreciate some help with this.
Thanks!
Your configuration looks fine to me. The only thing you need is to use the right key for caching.
Do not put the complete request object in as your cache key. Put some unique value from your request object instead. For example:
MyResponse response = remoteService.process(request);
myCache.put(new Element(request.getCustomerID(), response));
return response;
This should work for you. The reason your caching is not working is that each request is a brand-new object; unless MyRequest overrides equals() and hashCode(), the lookup never finds an existing entry, so every call adds yet another element to the cache.
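If you do want to keep the full request object as the key, MyRequest must implement equals() and hashCode() consistently. A minimal sketch (the fields are hypothetical stand-ins for whatever identifies a request):

public class MyRequest {
    private final String customerId; // hypothetical identifying field
    private final String itemId;     // hypothetical identifying field

    public MyRequest(String customerId, String itemId) {
        this.customerId = customerId;
        this.itemId = itemId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MyRequest)) return false;
        MyRequest other = (MyRequest) o;
        return customerId.equals(other.customerId) && itemId.equals(other.itemId);
    }

    @Override
    public int hashCode() {
        return 31 * customerId.hashCode() + itemId.hashCode();
    }
}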
The maxElementsInMemory attribute is deprecated; use maxEntriesLocalHeap instead.
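The cache definition above would then look roughly like this (a sketch; the disk-related attributes were also reworked in later 2.x versions, so they are omitted here):

<cache name="MyCache"
       maxEntriesLocalHeap="200000"
       eternal="false"
       timeToIdleSeconds="43200"
       timeToLiveSeconds="43200"
       memoryStoreEvictionPolicy="LRU"
/>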