I have an nginx Debian 7 server that I have configured for HHVM successfully before (with Hack code working fine). This week I did it again, from a fresh install of Debian 7, and it installed successfully and even responds "HipHop" when I request phpinfo(). What it doesn't do is load any page with Hack code.
I am trying to do:
<?hh
echo "HHVM v".HHVM_VERSION;
?>
on a page called test.php, but the page just hangs.
PHP code runs fine, just not anything inside the hh code tags.
If you look in /var/log/hhvm/error.log, you'll see something like Fatal error: syntax error, unexpected T_HH_ERROR, expecting $end in /var/www/test.php on line 2. This is because closing tags (?>) aren't valid in Hack. To make your example work, use:
<?hh
echo "HHVM v".HHVM_VERSION;
I had this happen on my regular instance of Artifactory OSS, so I made a clean install with minimal configuration changes to check everything.
On a clean install of Artifactory:
Version: Artifactory OSS 7.3.2 (Docker version)
The command used to create the container: docker run --privileged=true --name=artifactory -i -d -v /media/sdb1/Artifactory:/media/sdb1/Artifactory:z -p 8082:8082 docker.bintray.io/jfrog/artifactory-oss:latest
Everything works fine for regular files.
I can upload a file with a hash symbol in it, e.g. test#1_hashtag.txt
When I try to download it with the UI I end up here: http://my.dns.com:8082/ui/api/v1/download?repoKey=generic-local&path=test%231_hashtag.txt
There is this error displayed:
{
  "errors": [
    {
      "status": 404,
      "message": "File not found."
    }
  ]
}
I can download the file with curl
I still get the error even when I connect via IP.
I am looking to fix this, since not being able to use the hash symbol (#) would require us to rename a lot of files. I don't know if it's due to a redirect or something, but this installation is 100% what comes out of the box.
Edit: It's not a problem of understanding how the hash symbol works in a URL; I know how it works. It's a problem of a special character not being handled correctly by the app or by the redirect.
It looks like you are running into a regression. This seems to have been working in 6.16.2 and broken in 7.3.2 (the versions I tested, not necessarily where the regression happened, which is likely in 7.0). There is a bug open for it: https://www.jfrog.com/jira/browse/RTFACT-21460. Please vote for it and follow it for updates.
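In the meantime, since you mention curl works, a workaround is to download through the repository path itself with the # percent-encoded as %23. A rough sketch (credentials and the repository path layout below are assumptions for a default generic repo; adjust for your setup):
# Credentials and repository layout are placeholders
curl -u admin:password -o "test#1_hashtag.txt" "http://my.dns.com:8082/artifactory/generic-local/test%231_hashtag.txt"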
So I have an API that works fine locally, as well as on the server if I run the plumber commands manually, by which I mean ssh-ing into the server and running:
r <- plumb("plumber.R")
r$run(port=8000, host = "0.0.0.0")
It looks like this:
#* #serializer contentType list(type="application/html")
#* #get /test
function(res){
include_rmd("test.Rmd", res)
}
#* Echo the parameter that was sent in
#* @param msg The message to echo back.
#* @get /echo
function(msg=""){
  list(msg = paste0("The message is: '", msg, "'"))
}
They both work with no problem. But when I keep them alive on the server with systemd, only the /echo one works; the other one just says "An exception occurred."
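For concreteness, this is roughly how the two endpoints are being hit (the host name is a placeholder):
curl "http://my.server:8000/echo?msg=hello"   # fine under systemd
curl "http://my.server:8000/test"             # "An exception occurred" under systemd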
The systemd setup looks like this:
[Unit]
Description=Plumber API
# After=postgresql
# (or mariadb, mysql, etc if you use a DB with Plumber, otherwise leave this commented)
[Service]
ExecStart=/usr/bin/Rscript -e "api <- plumber::plumb('/home/chrisbeeley/api/plumber.R'); api$run(port=8000, host='0.0.0.0')"
Restart=on-abnormal
WorkingDirectory=/home/chrisbeeley/api/
[Install]
WantedBy=multi-user.target
I can't find error logs anywhere and I'm very confused as to why it should work when I run the commands on the server but not when I use systemd.
I'm using Ubuntu 16.04.
Since I posted this last night I've deployed the whole thing on a totally separate server, which is also running 16.04, and it shows the exact same behaviour there.
Edit: I've also tried this, based on code in the plumber documentation that returns a PDF, and that also returns "an exception occurred":
#* #serializer contentType list(type="text/html; charset=utf-8")
#* #get /html
function(){
tmp <- tempfile()
render("test_report.Rmd", tmp, output_format = "html_document")
readBin(tmp, "raw", n=file.info(tmp)$size)
}
Well, I never solved this. Instead I tried it with pm2, as detailed here: https://www.rplumber.io/docs/hosting.html#pm2
I was a bit put off by the npm dependency (it seemed like baggage), but it works like a charm.
So if anyone does Google this with a similar problem, I advise you to use pm2. It took me approximately 5 minutes to have it up and running :-)
I should add that although I haven't used them yet I gather pm2 will create log files, too, which sounds useful.
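For anyone following the same route, the recipe on that page boils down to roughly this (run-api.R is just a small wrapper script containing the plumb()/run() calls shown above; paths and the process name are placeholders):
sudo npm install -g pm2
pm2 start --interpreter="Rscript" /home/chrisbeeley/api/run-api.R
pm2 save           # persist the process list
pm2 startup        # generate a boot script so the API restarts with the server
pm2 logs run-api   # tail the log files pm2 keeps (name defaults to the script name)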
Hello, I'm trying to create an initial flash/build for IoT development following this tutorial: https://developer.android.com/things/hardware/imx7d.html#flashing_the_image
I'm sorry if my question is too broad; this is my first IoT attempt, but it seems to me like I have the wrong setup, because I'm constantly running into new errors.
I'm stuck at step 2.4, Execute the flash-all.sh. Running
sudo ./flash-all.sh
I got this in my logs:
./flash-all.sh: line 52: ./u-boot.imx: Permission denied
If I change the permissions:
chmod 777 u-boot.imx
I got
./flash-all.sh: line 52: ./u-boot.imx: cannot execute binary file:
Exec format error
I already solved several other issues which weren't described in the tutorial, including:
I have to run the script as sudo, otherwise I get
< waiting for any device >
I had to rewrite the fastboot command as $(which fastboot) inside flash-all.sh (same with flash and bootloader), otherwise the commands are unknown even though I added them to PATH.
I am using:
Ubuntu 16.14,
Android Studio with SDK 26 installed,
a Pico Pro Maker Kit with a Pico i.MX7 Dual Development Board.
What am I doing wrong?
I had to rewrite the fastboot command as $(which fastboot) inside flash-all.sh (same with flash and bootloader), otherwise the commands are unknown even though I added them to PATH.
This seems like it might be the root of the problem, as somehow the subsequent lines for each command are not being parsed as arguments for fastboot, but rather as their own executable commands.
You also shouldn't need to run the script with sudo. That may be why which fastboot succeeds for your user (which would indicate it's in your PATH) while the script, run under sudo, cannot see it.
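For illustration only (this is not the exact content of flash-all.sh, and the partition name is a placeholder), the failure mode would look something like this:
# Intended: u-boot.imx is an argument to fastboot, carried onto the next
# line by a trailing backslash.
$(which fastboot) flash bootloader0 \
    ./u-boot.imx
# If that line continuation is lost while editing the script (or broken by
# Windows line endings), the shell tries to execute ./u-boot.imx on its own,
# which gives exactly "Permission denied" and, after chmod, "cannot execute
# binary file: Exec format error".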
I have the Math extension installed in my MediaWiki 1.19. After I updated Ubuntu Server from 12.04 to 14.04 something seems to have messed it up and it has stopped working. Basically I get the following error when I try to display anything between the <math> and </math> tags:
Failed to parse (PNG conversion failed; check for correct installation
of latex and dvipng (or dvips + gs + convert))
I have tried the common troubleshooting that one can find online regarding this issue, and have recompiled texvc to check whether that fixed it. The texvc executable in the extensions/Math/math directory seems to do its job when invoked from the command line. I have obviously checked that all the other executables (latex, dvipng, etc.) work as they should.
When I try to render math from my wiki, the corresponding *.tex file is created in images/tmp with the correct latex code in it, but nothing else happens.
The problem seems to be related to texvc having trouble invoking latex and dvipng.
What could be causing this issue and how can I fix it?
Well, I figured it out. Basically, any shell command is passed through a security filter. So in practice, texvc is executed by MediaWiki through bin/ulimit4.sh:
#!/bin/bash
ulimit -t $1 -v $2 -f $3
eval "$4"
where $4 is the actual texvc command being run and $2 is the amount of memory allowed for this process. The default is 102400 KB (exactly 100 MB), which seems to be insufficient for this process to run. The amount of memory can be set in LocalSettings.php with the variable $wgMaxShellMemory. In my case I set it to 300 MB, $wgMaxShellMemory = 307200;, which seems to be enough.
Why this small task of generating a PNG needs so much memory, I do not know.
The reason this stopped working after updating to Ubuntu 14.04 probably has to do with a newly shipped version of latex, dvipng, convert, etc. requiring more memory than the version that came with Ubuntu 12.04.
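If you want to double-check that the memory limit is really what is failing, one rough way (the time and file-size values below are placeholders, and the texvc arguments should be whatever you used in your manual command-line test) is to rerun texvc under the same kind of limits that ulimit4.sh imposes:
# Should fail with -v 102400 (the old 100 MB limit) and succeed with -v 307200.
bash -c 'ulimit -t 10 -v 102400 -f 102400; ./texvc <your usual test arguments>'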
I'm trying to use PowerShell ISE as a console app, with little success. Comskip is a command-line tool, and I have an AHK script that normally executes the Comskip commercial stripping through cmd.exe. That works fine, but lately I started using Unicode characters (a star rating) in the filenames, and as you all know these don't display especially well in the console.
Investigation led me to PowerShell ISE, which is supposed to support Unicode. Running Comskip from ISE proved to be quite challenging. It works fine if I type everything in manually, but the problem starts when calling it from another script.
The problematic part: the code calling the ordinary PowerShell console works fine, but the same code calling the ISE fails, and I can't see why. Can any of you? The error message I get doesn't give any explicit clues about what went wrong; it only says:
Usage: powershell_ise.exe or powershell_ise.exe fileName.ps1
The AHK line calling the ISE looks like this:
latest_file := "C:\Program\Comskip\q.ts"
Run, PowerShell_ISE.exe "C:\Program\Comskip\comskip.exe" -t --videoredo "%latest_file%"
Now, if I use the PowerShell console, the same code executes fine:
latest_file := "C:\Program\Comskip\q.ts"
Run, PowerShell.exe "C:\Program\Comskip\comskip.exe" -t --videoredo "%latest_file%"
How come I get these anomalies?
PowerShell ISE does not have the ability to run commands from its command line the way powershell.exe does. The reason is simple: ISE was designed as an interactive environment, whereas powershell.exe was designed for both interactive and "batch" operations, like the one you are trying to complete.
The error you get is IMO descriptive enough: you can only run PowerShell_ISE with no parameters, or specify a .ps1 file that ISE will open once it has started. In v3 there are two new parameters (-NoProfile and -MTA), but still: nobody designed it as a batch-processing tool, sorry.