I am using ZF2 with Doctrine ORM, and every so often error reporting fails and I get meaningless errors:
http://screencast.com/t/UgZMb89vZ
When it works correctly, errors look like:
http://screencast.com/t/RlOEZxuGUsfu
To debug this I have run git bisect and checked every change made between the point where proper errors are reported and the point where they fail.
Nothing obvious shows up.
My only fix so far is to roll back to a version where error reporting works and re-apply my changes.
Has anyone experienced a similar issue to this?
I'm working on an Umbraco Cloud project. I pulled the website from the git repositories and built it. The first thing to do when you run the site is to restore the content from the development environment into the local project so we can create new features. Yet Umbraco fails to do so with the following error:
The source environment has thrown a Umbraco.Deploy.Exceptions.ProcessArtifactException
with message: Process pass #3 failed for artifact
umb://document/xxthexguidxofxsomexpagexxxxxxxxx. It might have been
caused by an inner Umbraco.Deploy.Exceptions.EnvironmentException with
message: Could not get parent with id xxthexxx-guid-xofx-xthe-xxhomepagexx.
The following artifacts might be involved:
umb://document/xxthexxxguidxofxxthexxhomepagexx
The technical details may contain more information.
I've noticed that some strange errors occur if not everything in the cloud development site has been deployed, so I made sure everything is published. Still errors, though... I'm kind of lost here.
Has anyone come across similar issues? And how did you fix it?
Thanks in advance!
This can happen for a number of reasons, so it's a bit hard to say what exactly the problem is in your case.
Most of the time this happens due to a circular reference of some sort causing a state that can't really be restored. For example, a datatype might have a dependency on a node, but the node doesn't exist in a blank new environment. The content restore then refuses to start until the structural data (datatypes, content types and such) is completely in sync, but the datatypes will never be in sync until the content node exists. It's a sort of catch-22 situation that might need to be resolved manually.
I would suggest you contact support through the Cloud portal and they will assist you in getting your problem resolved.
I'm using CKFetchRecordZoneChangesOperation to sync changes in my custom zone. It seemed to be working fine in my development environment, but I'm getting some errors when testing in the Production environment. The localized description of the two errors is below:
client knowledge differs from server knowledge
Couldn't fetch some items when fetching changes
Any suggestions on what can cause these errors would be appreciated.
Thanks!
Edit:
OK. I did some further digging and found the root cause of the error messages. I was printing the error's localizedDescription for debugging purposes; however, upon further inspection of the error object, its ckErrorCode was .changeTokenExpired, which I obviously did not handle properly!
Resetting the change token and retrying CKFetchRecordZoneChangesOperation() then yielded the correct results.
I hope this helps somebody!
I am using Homestead as my development environment, and I turned on the hhvm option for the site:
sites:
    - map: homestead.app
      to: /home/vagrant/Code/wheremyprojectis
      hhvm: true
I found that when an exception is thrown, everything is fine, but if I forget to use a namespace or make a syntax error in a Blade template, I get nothing but a blank page. I checked the logs and still found nothing, even though the debug option is true. It was quite frustrating until I turned off the hhvm option.
I know it is not a big deal, but I still want to know: is there any way to fix this?
I experienced the same problem. I searched around and found that it seems to be intentional:
https://github.com/facebook/hhvm/issues/4818
https://github.com/facebook/hhvm/issues/2571
You can poke through the GitHub issues mentioned above, as well as these Stack Overflow questions:
Display fatal/notice errors in browser
hhvm-fastcgi + nginx how to make it display fatal errors in the browser
laravel 5 show blank page on server error (and no laravel log), running with hhvm and nginx
For the time being, it ultimately boils down to writing your own handler, which isn't too bad. You can also tail the errors at /var/log/hhvm/error.log. Any errors that you intentionally want sent to the browser you can of course handle using Laravel's error handling and logging.
UPDATE:
I reported this issue (and a fix) on the Laravel GitHub here:
https://github.com/laravel/framework/issues/8744
I wasn't able to find anything helpful through google, so:
My Dart web application worked perfectly. The next time I opened Dart Editor and ran it again (without changing anything), Dart Editor showed the error
Breaking on exception: Strict get failed, invalid object.
This error doesn't always show up, and even when it does, the app still functions. Dart Editor doesn't give me any hint where the error occurs, because the debugger claims some source is not available.
Does anyone know why/when this error occurs and what to do to fix it?
EDIT 1:
As suggested in the comments, I tried:
updating Dart Editor
pub cache repair
pub upgrade
None of these worked.
EDIT 2:
A day after I tried the things mentioned in EDIT 1 (and thus also after rebooting the PC), the error doesn't appear anymore. I restarted Dart Editor after each attempt in EDIT 1, but nothing changed then, so some of the things in EDIT 1 seem to have taken effect only after the reboot. I'm not sure which of them, though. Am I supposed to answer my own question mentioning all three options from EDIT 1, or what should I do?
EDIT 3:
(Sorry for all the edits)
I changed some code now and the error is back again...
Sometimes it goes away, but with no obvious reason like a specific line of code being added or removed. Right now, there is an additional line next to the error:
Application Cache NoUpdate event (https://www.google.ch/xjs/_/js/k=xjs.ntp.en_US.mqcA3JMW-QU.O/m=jsa,ntp,pcc,csi/rt=j/d=1/t=zcms/rs=AItRSTO3mHFV3hPPmf2KYlzqp_GC2s-5GQ:119)
Breaking on exception: Strict get failed, invalid object.
I get the same error message when I click the back button in Dartium,
or when I use the backspace key and the focus is not in an editable
field (which triggers the back button). I think it's either a bug in
Dart or Dartium. – Damien Aug 21 at 14:00
Interesting: When I pressed the back button in dartium, the error did
show up, but after this, it said something about application cache
(about 5 lines of text) and now I cannot reproduce the "strict get
failed" Error message – lucidbrot yesterday
Pressing the back button in Dartium, which I never did before, seemed to help in my case. Updating Dart Editor again after this didn't change anything; the strict get error disappeared.
I won't accept this answer until somebody comments that it worked for them too, but I guess it is better to have it here than as a comment on my question.
I developed a simple plugin to bar files ending in .exe from being uploaded into my jira app. I overrode the AttachFile.doValidation() method to check for .exe in the filename. If it's there I return an error.
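For reference, the core of the validation described above can be sketched as a plain filename check. The class and method names below are mine, not part of Jira's API; the actual override lives in AttachFile.doValidation():

```java
import java.util.Locale;

public class AttachmentValidation {
    // Hypothetical helper mirroring the doValidation() check described above:
    // reject any attachment whose name ends in ".exe", case-insensitively.
    public static boolean isBlockedAttachment(String filename) {
        return filename != null
                && filename.toLowerCase(Locale.ROOT).endsWith(".exe");
    }
}
```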
Now when I try to delete an attachment, 9 times out of 10 it won't work. I simply get the error "Failed to delete attachment with id {id}". Nothing in the stack trace or logs indicates that something went wrong. Then it will suddenly delete successfully. I've found no rhyme or reason for this.
Again, I overrode AttachFile, not DeleteAttachment, so I don't know how my change could be related to this problem. Could it be, though?
If I remove my plugin entirely, I still get an error when I delete. The error says "The action can't be completed because the file is open in Java(TM) Platform SE binary". Somehow AttachFile() is leaving a reference to the file, but I have no clue where or how to clean up.
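A common cause of that symptom is a stream opened during validation and never closed: on Windows, an open handle locks the file against deletion. A minimal sketch of the fix pattern (class and method names are illustrative, not Jira's), using try-with-resources so the handle is released before the delete:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class AttachmentCleanup {
    // Reads a file, then deletes it. try-with-resources guarantees the
    // stream is closed before the delete runs, which matters on Windows
    // where an open handle would make the delete fail.
    public static boolean readThenDelete(Path file) {
        try (InputStream in = Files.newInputStream(file)) {
            in.readAllBytes(); // stand-in for whatever validation reads the file
        } catch (IOException e) {
            return false;
        } // stream closed here, releasing the Windows file lock
        try {
            return Files.deleteIfExists(file);
        } catch (IOException e) {
            return false;
        }
    }
}
```

If the validation code keeps the InputStream in a field or forgets close() on an error path, the handle leaks until the garbage collector happens to finalize it, which would explain deletes succeeding only "periodically" and always right after a server restart.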
Permissions aren't the issue, because occasionally the delete command will work. It always works when the server first starts up, and after that only periodically.
We've come to the conclusion that this is a Windows-only problem; Linux doesn't lock open files the way Windows does. Our production server is Linux, so I'm not going to spend any more time on this.