When I turn on "hhvm" on Homestead, I don't get any syntax error or missing-class error, just a blank page

I am using Homestead as my development environment, and I turned on the hhvm option for the site:
sites:
    - map: homestead.app
      to: /home/vagrant/Code/wheremyprojectis
      hhvm: true
I found that when an exception is thrown everything is fine, but if I forget to import a namespace or make a syntax error in a Blade template, I get nothing but a blank page. I checked the logs and there was still nothing, even though the debug option is true. It was quite frustrating until I turned off the hhvm option.
I know it is not a big deal, but I still want to know: is there any way to fix this?

I experienced the same problem. I searched around and found that it seems to be intentional:
https://github.com/facebook/hhvm/issues/4818
https://github.com/facebook/hhvm/issues/2571
Now you can poke through the GitHub issues mentioned above, as well as these Stack Overflow questions:
Display fatal/notice errors in browser
hhvm-fastcgi + nginx how to make it display fatal errors in the browser
laravel 5 show blank page on server error (and no laravel log), running with hhvm and nginx
For the time being, it ultimately boils down to writing your own handler, which isn't too bad. You can also tail the errors in /var/log/hhvm/error.log. Any errors that you intentionally want sent to the browser you can of course handle using Laravel's error handling and logging.
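For illustration, here is a minimal sketch of such a handler in plain PHP, assuming it is registered early in the request (for example at the top of public/index.php); the APP_DEBUG check and the log destination are assumptions, not anything from the issue threads above:

    // Last-resort handler for fatal errors that would otherwise end in a blank page.
    register_shutdown_function(function () {
        $error = error_get_last();
        // Only react to fatal-class errors; notices and warnings stay with Laravel.
        $fatal = [E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR];
        if ($error !== null && in_array($error['type'], $fatal, true)) {
            // Write it somewhere visible...
            error_log(sprintf('[FATAL] %s in %s:%d', $error['message'], $error['file'], $error['line']));
            // ...and, in debug mode only, show it instead of a blank page.
            if (getenv('APP_DEBUG') === 'true') {
                http_response_code(500);
                echo '<pre>' . htmlspecialchars(print_r($error, true)) . '</pre>';
            }
        }
    });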
UPDATE:
I reported this issue (and a fix) on the Laravel GitHub here:
https://github.com/laravel/framework/issues/8744

Related

Roblox Studio - Random errors that I get and can't understand

As you can see in the picture below, I'm getting some errors in my code:
I have no clue why I'm getting these errors or what they mean, and while they don't seem to do anything bad, I would rather get rid of them. So I'm asking if someone can tell me what they mean and how I can stop them.
Here are the errors that are shown in the picture:
02:28:40.947 - Roact is not a valid member of CorePackages
02:28:40.949 - Requested module experienced an error while loading
02:28:40.950 - Requested module experienced an error while loading
02:28:40.953 - LocalizationPlugin is not a valid member of CorePackages
02:28:40.981 - LocalizationPlugin is not a valid member of CorePackages
Those errors are from scripts that are part of Roblox itself - you have no control over them and it's normal for them to have errors sometimes. Just ignore them.
You're getting these errors because the Roblox application you're using runs on the same Lua engine Roblox uses for its own back-end code. Basically, the engine is squawking out errors in Roblox's own code. You can't fix this, but it doesn't really matter; most of them just look like warnings, nothing to worry about. If you open Chrome Dev Tools on most websites you'll see console errors or warnings too. They're nothing to worry about as long as they don't affect functionality.
Edit: other people might not get these errors, as they may have a different version of the Roblox Lua engine, or their hardware/software may interact with it differently than yours.

Large number of 404 Not Found errors for an unknown reason

My website worked correctly until last week, when suddenly lots of "not found" errors appeared. The error message is visible, but I cannot find the reason. The errors state that the pages that cannot be found are linked from sitemap.xml; however, before the errors appeared Google was able to crawl the website correctly. Here is an example:
Real link in sitemap (This is the old link that is still functional):
https://rohamweb.com/webdesign/174-طراحی-حرفه-ای-سایت-در-تهران.html
What the search console is actually pointing to:
https://rohamweb.com/webdesign/174-
Apparently the crawlers cannot read content after the -, likely due to the different language. I had never encountered this issue until last week; previously everything was functional.
Thanks in advance for the help!
If this is the actual link: https://rohamweb.com/webdesign/174-طراحی-حرفه-ای-سایت-در-تهران.html, you are doing it wrong and it should be URL-encoded before being sent in the response:
https://rohamweb.com/webdesign/174-%D8%B7%D8%B1%D8%A7%D8%AD%DB%8C-%D8%AD%D8%B1%D9%81%D9%87-%D8%A7%DB%8C-%D8%B3%D8%A7%DB%8C%D8%AA-%D8%AF%D8%B1-%D8%AA%D9%87%D8%B1%D8%A7%D9%86.html
In this case, all of the available engines are able to follow it.
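As a sketch of the encoding step (assuming the sitemap is generated with PHP; the variable names are illustrative), only the non-ASCII slug needs percent-encoding, while the scheme, host, and extension are left as they are:

    // Illustrative only: percent-encode the Persian slug before writing it into sitemap.xml.
    $slug = '174-طراحی-حرفه-ای-سایت-در-تهران';
    $url  = 'https://rohamweb.com/webdesign/' . rawurlencode($slug) . '.html';
    // rawurlencode() leaves the digits and hyphens alone and turns the Persian characters
    // into %D8%B7... sequences, which is the form crawlers expect in <loc> entries.
    echo $url;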

ZF2 - Error messages failing to display

I am using ZF2 with Doctrine ORM, and every so often error reporting fails and I get meaningless errors:
http://screencast.com/t/UgZMb89vZ
When it works correctly, errors look like:
http://screencast.com/t/RlOEZxuGUsfu
To debug this I have run git bisect and checked every change made between the point where proper errors are reported and the point where they fail.
Nothing obvious shows up.
My only solution so far is to roll back to a version where error reporting works and re-add my changes.
Has anyone experienced a similar issue to this?

Django+uwsgi: uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 248]

I have a weird issue in my Django application. I am using the admin interface, and when I try to load the change page it doesn't render completely. When I look in the logs it says:
uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 248]
IOError: write error
The page had been working fine and this suddenly started happening. The behaviour is also inconsistent: if I reload the page multiple times, it occasionally renders correctly. This is happening in the production environment and I am not able to replicate it locally. The production server runs uwsgi 1.9.10 with nginx and Django 1.5. I am also writing custom HTML on the page, and there is an inline table on the same page.
Please let me know if anyone knows why it is happening...
That error means the client (browser) disconnected before getting the full response. Check your web server logs; maybe you hit a timeout.
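If a timeout is the suspect, the knobs usually involved in an nginx + uwsgi setup are the proxy read timeout on the nginx side and uwsgi's harakiri limit; the values and socket path below are placeholders, not recommendations:

    # nginx site config (illustrative values)
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/myapp.sock;   # assumed socket path
        uwsgi_read_timeout 60s;            # how long nginx waits for a response from uwsgi
    }

    # uwsgi ini (illustrative values)
    [uwsgi]
    harakiri = 60    # kill requests that run longer than 60 seconds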

MODx Parse Error on home page

I've been having this error on a lot of MODX Evolution 1.0.5 installations (I always use 1&1 servers, the Linux version). It fails to load the home page (no other pages are affected), and it seems that every time I clear the cache, via the manager or via API code, it creates this file in the cache again:
docid_1.pageCache.php
But the thing is that even though the cache folder and all the files inside are set to 777 permissions, when it creates files for caching they have no permissions set at all, and that is what is causing this error.
Has anybody had this error? I've been searching the MODX forums but didn't find anybody worried about it, yet I can see it happens a lot: when I google this error I don't find forum posts discussing it, but instead lots of MODX front-end pages showing this error on their home pages.
Maybe it's a problem with the 1&1 PHP configuration.
I'm really worried about this because it has happened a few times that a client calls me, über mad, complaining about his home page showing this error.
I've seen that the new 1.0.6 version has some fix in the pageCache parser, but I don't know if it's related to my problem.
Here's the error page:
« MODx Parse Error »
MODx encountered the following error while attempting to parse the
requested resource: « PHP Parse Error »
PHP error debug Error: file(assets/cache/docid_1.pageCache.php) [function.file]: failed to open stream: Permission denied
Error type/ Nr.: Warning - 2
File: /homepages/3/d405318697/htdocs/t3st/manager/includes/document.parser.class.inc.php
Line: 413
Line 413 source: $flContent= implode("", file($cacheFile));
The cheeky answer? "Upgrade" - Evolution is dead.
A more helpful answer: check the MODX system settings. In Revolution you can tell MODX what permissions to attempt to set on files; my guess is that you may have inadvertently set these to 000, if that's what you mean by the files having no permissions set.
If that does not work, or you get desperate, disable all caching and test, or if possible [still not familiar with Evo] set that resource to not be cached.
Though something odd is going on; please confirm: the index page is cached but with no permissions, i.e. 000, while subsequent pages are cached and do have correct permissions set, i.e. 666 [or 644/whatever]?
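As a stopgap while the 1&1 setup is being diagnosed, a small plain-PHP check run after clearing the cache can confirm whether the file really ends up with no permissions and repair it; the path comes from the error message above, and the rest is illustrative rather than a MODX API:

    // Diagnostic/workaround sketch, not part of MODX: verify and repair the
    // permissions of the generated cache file after the cache is rebuilt.
    $cacheFile = 'assets/cache/docid_1.pageCache.php';
    if (file_exists($cacheFile)) {
        $perms = fileperms($cacheFile) & 0777;
        printf("current permissions: %o\n", $perms);
        // If the file was created unreadable (e.g. 000), make it readable again.
        if (($perms & 0444) === 0) {
            chmod($cacheFile, 0644);
        }
    } else {
        echo "cache file has not been generated yet\n";
    }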
