I don't really know how to describe why it happens or how to reproduce it. I have a method that downloads some data from an external site and saves it to a document. If I look for the document later (e.g., via find), it's gone.
If I'm in the console and I had the object assigned to a variable beforehand, I can access the data via that variable, but .find won't find it, and the Collection.count is one fewer.
Why would this happen? Any ideas?
This is hosted on Heroku using MongoLab. I was thinking maybe the database is running out of space, but the stats page seems to indicate otherwise.
Am I reading it wrong? Here is the db.stats() output:
{
  "serverUsed": "A_URL_HERE",
  "db": "DB_NAME_HERE",
  "collections": 11,
  "objects": 116295,
  "avgObjSize": 3300.993611075283,
  "dataSize": 383889052,
  "storageSize": 474427392,
  "numExtents": 49,
  "indexes": 9,
  "indexSize": 4259696,
  "fileSize": 520093696,
  "nsSizeMB": 16,
  "ok": 1
}
Could be a lot of different problems, but the most likely is that you've hit your quota. You can see an explanation of each of the storage stats in the MongoLab UI. The fact that storageSize and fileSize are near the quota means you're close to it, if you're not already hitting it.
Whatever the source of the problem, you should make sure you're using safe mode. It's the default when using the new MongoClient method of creating connections through your driver. It checks for errors before moving on, whereas in the past that was not the default behavior. There's a good chance the server is returning an error (which it will do when you hit quota) but the driver isn't checking for it.
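To make that concrete, here is a minimal sketch of an acknowledged write using the Node.js driver's MongoClient; adjust it to whichever driver you actually use. MONGOLAB_URI is the config var the MongoLab Heroku add-on normally sets, and the collection and document below are placeholder names.

// Sketch only: 'downloads' and doc are hypothetical names.
var MongoClient = require('mongodb').MongoClient;
var doc = { source: 'http://example.com/data', fetchedAt: new Date() };

MongoClient.connect(process.env.MONGOLAB_URI, function (err, db) {
  if (err) throw err;
  // With MongoClient the default write concern is acknowledged (w:1),
  // so a quota error comes back in the callback instead of being dropped silently.
  db.collection('downloads').insert(doc, function (err, result) {
    if (err) {
      console.error('Write failed:', err.message); // e.g. an over-quota error
    }
    db.close();
  });
});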
As always, you can write us directly at support@mongolab.com and we'd be more than happy to help!
I'm trying to log to file/Seq an event that's an API response from a web service. I know it's not best practice, but under some circumstances I need to do so.
The JSON saved on disk is around 400 KB. To be honest, I could exclude two parts of it (images returned as base64). I think I should use a destructuring logger; is that right?
I've tried increasing the Seq limit to 1 MB, but the content isn't saved even to the log file, so I don't think that's the problem. I use Microsoft logging (the ILogger interface) with Serilog.AspNetCore.
Is there a way I can handle such a scenario?
Thanks in advance
You can log a serialized value by using the @ destructuring operator on the property name. For example,
Log.Information("Created {@User} on {Created}", exampleUser, DateTime.Now);
As you've noted, it tends to be a bad idea unless you are certain that the value being serialized will always be small and simple.
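If you do need the structured event but want to drop the two base64 image parts, one option is to register a destructuring transform so the logged shape never includes the images. This is only a sketch, assuming hypothetical type and property names (ApiResponse, Status, Items, Images) and that the File and Seq sinks are installed; swap in your real response type.

using Serilog;

// Sketch only: substitute your real response type and properties.
Log.Logger = new LoggerConfiguration()
    .Destructure.ByTransforming<ApiResponse>(r => new
    {
        r.Status,
        r.Items
        // Images (base64) deliberately left out to keep the event small
    })
    .WriteTo.File("logs/api-responses.txt")
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();

// response is the deserialized ApiResponse you got back from the service.
// It is logged with @, but only the projected properties end up in the event.
Log.Information("Received {@Response} from the web service", response);

Since you're configuring Serilog through Serilog.AspNetCore, the same Destructure.ByTransforming call can go wherever you build your LoggerConfiguration.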
This is my data:
[{"Id": "21", "name": "Šport", "value": "Šport","tokens": ["Šport", "Sport"]}]
I want the data to be found even if the user doesn't type the 'special' characters, so whether they type "Sport" or "Šport", the entry "Šport" should be found.
Should this work, or does anyone have workaround suggestions?
Thanks.
It should work. If it doesn't work, then I would start by checking whether this is really the data the Typeahead sees.
Is this remote? If so, have you made sure this is indeed what your server returns and what your client sees? Look at the Firefox/Chrome developer tools network requests to be sure.
If not, maybe you have stale (old) data in local storage (where Typeahead caches stuff)? Remove any dataset name just to be sure.
Finally, to verify that it works, try this exact data with a `local` dataset.
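Something like this, for example. This is a sketch assuming typeahead.js 0.9.x (the value/tokens datum shape in your data matches that version); the selector and dataset name are placeholders.

// Sketch only: '#search-input' and the dataset name are made up.
$('#search-input').typeahead({
  name: 'categories-local-test',   // a fresh name also avoids any stale localStorage cache
  valueKey: 'value',
  local: [
    { "Id": "21", "name": "Šport", "value": "Šport", "tokens": ["Šport", "Sport"] }
  ]
});

Typing "Sport" should now suggest "Šport". If it works here but not with your remote/prefetch setup, the problem is in what the server returns or in cached data rather than in the tokens themselves.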
I recently wrote a mailing platform for one of our employees to use. The system runs great, scales great, and is fun to use. However, it is currently inoperable due to a bug that I can't figure out how to fix (fairly inexperienced developer).
The process goes something like this...
Upload a CSV file to a specific FTP directory.
Go to the import_mailing_list page.
Choose a CSV file within the FTP directory.
Name and describe what the list contains.
Associate file headings with database columns.
Then, the back-end loops over each line of the file, associating the values with a heading, and importing these values into a database.
This all works wonderfully, except in a specific case, when a raw CSV is not correctly formatted. For example...
fname, lname, email
Bob, Schlumberger, bob@bob.com
Bobbette, Schlumberger
Another, Record, goeshere@email.com
As you can see, the second record is missing its email field (and the comma before it). This would cause an error when attempting to pull "valArray[3]" (or valArray[2], in the case of every language but mine).
I am looking for the most efficient solution to keep this error from happening. Perhaps I should check the array length, and compare it to the index we're going to attempt to pull, before pulling it. But to do this for each and every value seems inefficient. Anybody have another idea?
Our stack is ColdFusion 8/9 and MySQL 5.1. This is why I refer to the array index as [3].
There's ArrayIsDefined(array, elementIndex), or ArrayLen(array)
seems inefficient?
You gotta code what you need to code, forget about inefficiency. Get it right before you get it fast (when needed).
I suppose if you are looking for another way of doing this (instead of checking the array length each time, although that really doesn't sound that bad to me), you could wrap each line insert attempt in a try/catch block. If it fails, then stuff the failed row in a buffer (including the line number and error message) that you could then display to the user after the batch has completed, so they could see each of the failed lines and why they failed. This has the advantages of 1) not having to explicitly check the array length each time and 2) catching other errors that you might not have anticipated beforehand (maybe a value is too long for your field, for example).
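A rough CFML sketch of that idea follows. The variable names valArray, failedRows and lineNumber are placeholders for whatever your import loop already uses, and failedRows would be created once before the loop starts.

<!--- Sketch only: guard the short row, or let the catch collect the failure --->
<cftry>
    <cfif NOT ArrayIsDefined(valArray, 3)>
        <cfthrow message="Row is missing its email column">
    </cfif>
    <!--- normal insert of valArray[1], valArray[2], valArray[3] goes here --->
    <cfcatch type="any">
        <!--- buffer the line number and reason to show the user after the batch --->
        <cfset ArrayAppend(failedRows, { line = lineNumber, error = cfcatch.message })>
    </cfcatch>
</cftry>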
Since I'm not really finding any content anywhere that doesn't just point back to the original Microsoft documents on this matter, or source code that doesn't really answer the questions I have, I thought I might ask a few things here. (The Delphi tag is there because that's the environment I'm writing this code in.)
That said, I had a few questions the API document wasn't answering. First one: fdi_notify messages. What is "my responsibility" when coding handlers for these: fdintCABINET_INFO, fdintPARTIAL_FILE, fdintNEXT_CABINET, and fdintENUMERATE? I'll illustrate what I mean with an example. For fdintCLOSE_FILE_INFO, "my responsibility" is to close the file for the handle given to me and set the file's date and time according to the data passed in fdi_notify.
I figure I'm missing something since my code isn't handling extracting spanned CAB files...any thoughts on how to do this?
What you're more than likely running into is that FDICopy only reads the cab you passed in. It will use fdintNEXT_CABINET to get spanned data for any files you extract in response to fdintCOPY_FILE, but it only calls fdintCOPY_FILE for files that start on that first cab.
To get a directory listing for the entire set, you need to call FDICopy in a loop (see the sketch after the notes below). Every time you get an fdintCABINET_INFO event, save off the psz1 parameter (the next cab name). When FDICopy returns, check that value. If it's an empty string you're done; if not, call FDICopy again with the next cab as the new path.
fdintCABINET_INFO: The only responsibility for this is returning 0 to continue processing. You can use the information provided (the path of the next cabinet, next disk, path name, and set ID), but you don't need to.
fdintPARTIAL_FILE: Depending on how you're processing your cabs, you can probably ignore this. You'll only see it for the second and later images in a set, and it's to tell you that the particular entry is continued from a previous cab. If you started at the first cab in the set you'll have already seen an fdintCOPY_FILE for the file. If you're processing random .cabs, you won't really be able to use it either, since you won't have the start of the file to extract.
fdintNEXT_CABINET: You can use this to prompt the user for a new directory for the next cabinet, but for simple spanning support just return 0 if the passed-in filename is valid or -1 if it isn't. If you return 0 and the cab isn't valid, or is the wrong one, this will get called again. The easiest approach (if you don't request a new disk/directory) is just to check pfdin^.fdie. If it's FDIError_None, this is the first time it's being called for the requested cab, so you can return 0. If it's anything else, FDI has already tried to open the requested cab at least once, so you can return -1 as an error.
fdintENUMERATE: I think you can ignore this. It isn't covered in the documentation, and the two cab libraries I've looked at don't use it. It may be a leftover from a previous API version.
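To tie it together, here is a structural Delphi-flavoured sketch of the directory-listing loop described above. It assumes plumbing that isn't shown: an FDICreate'd context (FdiContext), a notification callback (FdiNotify) that copies pfdin^.psz1 into NextCabName when it sees fdintCABINET_INFO, and an FDICopy import with the usual seven-parameter prototype. Treat it as an outline rather than drop-in code.

procedure ListWholeCabSet(const FirstCab: string);
var
  CabFile: string;
begin
  CabFile := FirstCab;
  repeat
    NextCabName := '';   // the callback fills this in on fdintCABINET_INFO
    if not FDICopy(FdiContext,
                   PAnsiChar(AnsiString(ExtractFileName(CabFile))),
                   PAnsiChar(AnsiString(ExtractFilePath(CabFile))),
                   0, @FdiNotify, nil, nil) then
      Break;             // details are in the ERF record you passed to FDICreate
    CabFile := ExtractFilePath(CabFile) + NextCabName;
  until NextCabName = '';
end;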
Here's my situation:
From my Grails controller, I call a service, which queries a database read-only, transforms the result into JSON, and returns the result.
Specs are: JDK 1.6, Tomcat 5.5, Grails 1.3.4, DB via JNDI
Tomcat's MaxPermSize is set to 256m and Xmx to 128m.
EDIT: Increasing the memory should be the last resort
The service method:
String queryDB(String queryString) {
    StringWriter writer = new StringWriter()
    JSonBuilder json = new JSonBuilder(writer)
    def queryResult = SomeDomain.findAllBySomePropIlike("%${queryString}%")
    json.whatever {
        results {
            queryResult.eachWithIndex { qr, i ->
                // insert domain w/ properties
            }
        }
    }
    queryResult = null
    return writer.toString()
}
Now, when queryString == 'a' the result set is huge and I end up with this:
[ERROR] 03/Nov/2010#09:46:39,604 [localhost].[/grails-app-0.1].[grails] - Servlet.service() for servlet grails threw exception
java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.codehaus.groovy.util.ComplexKeyHashMap.init(ComplexKeyHashMap.java:81)
at org.codehaus.groovy.util.ComplexKeyHashMap.<init>(ComplexKeyHashMap.java:46)
at org.codehaus.groovy.util.SingleKeyHashMap.<init>(SingleKeyHashMap.java:29)
at groovy.lang.MetaClassImpl$Index.<init>(MetaClassImpl.java:3381)
at groovy.lang.MetaClassImpl$MethodIndex.<init>(MetaClassImpl.java:3364)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:140)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:190)
at groovy.lang.MetaClassImpl.<init>(MetaClassImpl.java:196)
at groovy.lang.ExpandoMetaClass.<init>(ExpandoMetaClass.java:298)
at groovy.lang.ExpandoMetaClass.<init>(ExpandoMetaClass.java:333)
at groovy.lang.ExpandoMetaClassCreationHandle.createNormalMetaClass(ExpandoMetaClassCreationHandle.java:46)
at groovy.lang.MetaClassRegistry$MetaClassCreationHandle.createWithCustomLookup(MetaClassRegistry.java:139)
at groovy.lang.MetaClassRegistry$MetaClassCreationHandle.create(MetaClassRegistry.java:122)
at org.codehaus.groovy.reflection.ClassInfo.getMetaClassUnderLock(ClassInfo.java:165)
at org.codehaus.groovy.reflection.ClassInfo.getMetaClass(ClassInfo.java:182)
at org.codehaus.groovy.runtime.callsite.ClassMetaClassGetPropertySite.<init>(ClassMetaClassGetPropertySite.java:35)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.createClassMetaClassGetPropertySite(AbstractCallSite.java:308)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.createGetPropertySite(AbstractCallSite.java:258)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.acceptGetProperty(AbstractCallSite.java:245)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:237)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter.accept(FilterToHandlerAdapter.groovy:196)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter$accept.callCurrent(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:44)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:143)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:159)
at org.codehaus.groovy.grails.plugins.web.filters.FilterToHandlerAdapter.preHandle(FilterToHandlerAdapter.groovy:107)
at org.springframework.web.servlet.HandlerInterceptor$preHandle.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:40)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at org.codehaus.groovy.grails.plugins.web.filters.CompositeInterceptor.preHandle(CompositeInterceptor.groovy:42)
at org.codehaus.groovy.grails.web.servlet.GrailsDispatcherServlet.doDispatch(GrailsDispatcherServlet.java:282)
One approach I found on the web regards some leaks in Hibernate and domain validation, explained here and in detail here. I'm just about to test it, but I don't know if this is really the solution for my problem and (if it is) at which point it's best to clean up GORM.
Or is there another memory leak in my code?
Ideas anyone?
EDIT: As far as I can tell right now, the exception occurs at the point where the finder method is called. That means that GORM isn't able to handle the amount of data returned by the database, right? Sorry for asking like a greenhorn, but I have never encountered such a problem, even with very large result sets.
Sun (this link isn't valid anymore) had documented this OutOfMemoryError as follows:
The parallel / concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
In other words, that error is a feature: a hint to increase the available memory (which, as you've mentioned, is not a preferred option in your case). Some developers consider this feature unhelpful in certain use cases, so you can look into turning it off.
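If you do decide to experiment with it on Tomcat, the flag just gets appended to the JVM options you already set. Exactly where depends on how your Tomcat 5.5 instance is started; CATALINA_OPTS (for example in bin/setenv.sh, or wherever you currently set Xmx and MaxPermSize) is the usual place.

# Example only: keep your existing options and append the flag.
export CATALINA_OPTS="-Xmx128m -XX:MaxPermSize=256m -XX:-UseGCOverheadLimit"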
Another option to those already suggested would be to work in pages of results. Instead of pulling the whole result set in one go, pass max and offset to the finder (or use a Criteria query) and page through the results yourself. Here's a naive, pseudocode example:
def offset = 0
def max = 50
def batch = SomeDomain.findAllBySomePropIlike("%${queryString}%", [max: max, offset: offset])
while (batch) {
    appendBatchToJsonResult(batch)   // your own code that writes this batch to the JSON output
    offset += max
    batch = SomeDomain.findAllBySomePropIlike("%${queryString}%", [max: max, offset: offset])
}
You could tweak the batch size according to your memory requirements. This would avoid having to adjust the memory.
Edit
I just re-read Fletch's answer and noticed that he mentioned this as a solution and you commented on it. I'll leave mine here since it's got an example, but if Fletch adds a paging example to his, I'll delete this answer since he mentioned it before I did.
If you don't want to increase memory, maybe you should only run the search once the input is longer than a certain number of characters. I guess this is some kind of type-ahead/suggestion function; maybe you could start searching when there are three characters or so. Otherwise, maybe paged results are an option?
By the way, architecturally the controller is intended to handle interaction with the outside world and its formats, i.e. you would probably want your service just to return the objects and your controller to do the JSON conversion. But this won't solve your current problem.
I would also suggest that, for this type-ahead query, you only return the properties you need, and fetch the full domain object only when the user asks for the actual record.
The JSON builder is going to create a lot of objects and consume memory. For example, in a type-ahead for users, I would only return basic name information and an id instead of the complete object.
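Here's a sketch of that idea, reusing the domain and property names from the question (swap in your real ones): a criteria query with a projection returns lightweight id/value pairs instead of full domain instances, which keeps the JSON builder from touching whole objects.

// Sketch only: SomeDomain/someProp come from the question; 20 is an arbitrary cap.
def matches = SomeDomain.withCriteria {
    ilike('someProp', "%${queryString}%")
    projections {
        property('id')
        property('someProp')
    }
    maxResults(20)
}
// matches is a list of [id, value] pairs, which is cheap to serialize; the full
// object can be loaded by id once the user actually picks an entry.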
In a Grails app using a MySQL database (MySQL having been installed through Homebrew), I got this same problem, oddly enough, only by running the app without having started the MySQL server first. Thus simply running
mysql.server start
fixed the problem for me.