Collection of PNaCl ports with compiled .pexe files (Ex: ImageMagick)?

The list of current NaCl ports is here: https://code.google.com/p/naclports/wiki/PortList
I'm curious if there is (or will be) a repository for PNaCl executables (.pexe files), since they only need a .nmf manifest wrapper to run?
Please list any PNaCl resources here; I'm looking for ImageMagick specifically.
I know I could probably build the .pexe myself but I don't have the time to learn Native Client.

The short answer is: this isn't available yet, but we are working on it.
The long answer:
There isn't currently a binary repository for naclports. We are planning to add that soon (likely this quarter).
I took a quick look at the ImageMagick port, and it looks like it currently only generates libraries, not an application binary (i.e. nexe/pexe). My guess is that it wouldn't be too hard to add this, however.
Even with a pexe, you'll need a way to launch the application, pass it command-line options, and give it a set of files to process. I've discussed a way to do this with my team, but we haven't started work on it yet (though again, we'll likely work on this in Q1 2014).
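For reference, the .nmf wrapper mentioned in the question is small. A minimal manifest for a PNaCl module looks roughly like this (the pexe file name is a placeholder):

    {
      "program": {
        "portable": {
          "pnacl-translate": {
            "url": "imagemagick.pexe"
          }
        }
      }
    }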

Related

How to identify what projects have been affected by a code change

I have a large application to manage, consisting of three or four executables and as many as fifty .dlls. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, versions, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat)(easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of Stack Overflow forbid me from asking for recommended software, but if anyone has had positive experiences with continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
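As a sketch of that first step, here is one way to pull the unit list out of a .map file. It assumes the detailed map format in which each segment line carries an M=<unit> field; verify that against your own compiler's output:

    # Sketch: list the units named in a Delphi .map file.
    # Assumes segment lines of the form
    # "0001:00000000 0000F684 C=CODE S=.text G=(none) M=System ACBP=A9".
    import re
    import sys

    def units_in_map(path):
        units = set()
        with open(path) as f:
            for line in f:
                match = re.search(r'\bM=(\S+)', line)
                if match:
                    units.add(match.group(1))
        return sorted(units)

    if __name__ == '__main__':
        for unit in units_in_map(sys.argv[1]):
            print(unit)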
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was to the version, or the date in a copyright label, or some such. But it is asking too much of a computer to make such a judgement. At some point a human needs to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool which supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo, etc.
Another suggestion - based on experience with Maven - is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing file sizes is a bad idea, since a file's size can stay the same despite lots of changes to it (as long as a .pas file contains the same amount of text, its size won't change). Instead you could check each file's last modification time, or compute a hash value such as MD5 for comparison (which can be quite slow).
Then you need to build a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with this approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
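A minimal sketch of that change-detection idea (the unit names and the dependency map below are made up for illustration; in practice you would maintain or generate the map yourself):

    # Sketch: MD5-based change detection plus a hand-maintained map
    # from units to the projects that compile them in.
    import hashlib

    # Hypothetical dependency map: unit file -> projects that use it.
    DEPENDENCIES = {
        'StringUtils.pas': ['MainApp.dpr', 'Service.dpr'],
        'DbLayer.pas': ['Service.dpr'],
    }

    def file_hash(path):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()

    def changed_projects(old_hashes):
        """Return the projects whose units changed since old_hashes was taken."""
        projects = set()
        for unit, users in DEPENDENCIES.items():
            if file_hash(unit) != old_hashes.get(unit):
                projects.update(users)
        return sorted(projects)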
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper integration of Git into the project manager itself could be quite powerful, due to Git's support for branching/sub-branching (each project its own branch, each version of your software its own sub-branch).
The latest version of Delphi does have Git integration, done through SVN, but this unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.

HipHop for PHP, deploying apps

After Googling, I found a lot of HipHop documentation, but plenty of it was posted between 2011 and 2013.
Earlier this year a new version of HipHop was launched that even supports Drupal and includes a lot of improvements...
I've always used Zend Guard to deploy my commercial applications, but now I have started to seriously consider using HipHop in production. But here comes the question:
Can we run an application using only the HHBC bytecode (without the .php source code)?
Here is the reference from my research:
https://github.com/facebook/hhvm/wiki/FAQ
The question may seem very obvious, but it is not so easy to find this answer in the project documentation.
Thanks in advance!
Well, yes and no.
HHVM has a so-called RepoAuthoritative mode in which HHVM will no longer check whether the PHP files exist or how up-to-date they are; instead, it will retrieve the HHBC directly from its cache.
Theoretically, you can follow these steps:
pre-generate the HHBC for all your PHP files and insert that HHBC in HHVM's cache. This is the so-called pre-analysis phase (if you ever see it in HHVM documentation, this is what they mean by it)
turn on RepoAuthoritative mode (it's just 1 line in HHVM's config)
delete your PHP code
This way your PHP applications will run just fine without the source code being present. Doing a server restart won't change this since HHVM's bytecode cache lives on disk (it's implemented as an SQLite database).
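Roughly, the build-and-configure steps look like this; the flags and setting names have shifted between HHVM versions, so treat this as a sketch to check against your version's documentation:

    # Pre-analysis: compile every PHP file into the bytecode repo
    find /var/www -name '*.php' > /tmp/files.list
    hhvm --hphp -t hhbc --input-list /tmp/files.list

    # Then enable RepoAuthoritative mode, e.g. in server.ini:
    hhvm.repo.authoritative = true
    hhvm.repo.central.path = /path/to/hhvm.hhbc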
However, it will be kind of a headache if you:
want to change something in your code. You would have to copy your code, make the change and repeat the pre-analysis phase.
want to upgrade HHVM to a newer version. HHVM uses its build ID as part of the cache key so, if you upgrade it, the bytecode cache becomes unreachable and, since you'll be running in RepoAuthoritative mode, your application will be reduced to a bunch of HTTP 404 errors. To fix this, you would have to repeat the pre-analysis phase as well.
Bottom line: no upside, big downside. There's just no point in doing it.
PS: I hope I answered your question. It's also possible that I misunderstood what you asked; if that's the case, please let me know in a comment.

Qt generic error message

This is the error message I get.
I know it's kind of an eye roller, that it's difficult nigh impossible to tell what I may need without the source, but it seems like a deployment problem, as people that installed the Qt SDK can run it. Plus, I figured I'd have better luck asking here than with a Chinese developer that speaks Google-English.
So here's what I've done:
I installed MSVC2012.
I used a program called cffexplorer to see what the exe was looking for. I have the 7 or so .dlls that are at the top of the tree.
I found a recent (Jun 2013) qwindows.dll from elsewhere on my system and put it in ./plugins (I've tried this file in ./, ./plugins, and ./plugins/platforms).
I created a qt.conf with the following data (I determined the format from an existing Qt-based app that works):
[Paths]
Plugins = plugins
Yet, I continue to get this message. Any suggestions on what I might look for to clear this up?
Ask the developer what compiler was used to build the application. Then you will need the right DLLs (built with the same compiler as the application). Also note that, by default, the documentation says qwindows.dll should be in a platforms folder in the same path as your executable. Depending on whether the developer used a Qt built with ANGLE, you may also need libEGL.dll and libGLESv2.dll. Dependency Walker might help you find dependencies that are missing.
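For reference, a minimal Qt 5 deployment on Windows often ends up looking something like this; the exact DLL set depends on the Qt build, so treat it as a sketch:

    app.exe
    qt.conf
    Qt5Core.dll
    Qt5Gui.dll
    Qt5Widgets.dll
    libEGL.dll        (only for ANGLE builds)
    libGLESv2.dll     (only for ANGLE builds)
    plugins/
        platforms/
            qwindows.dll

With a qt.conf like the one above pointing Plugins at plugins, the platform plugin is looked up under plugins/platforms rather than next to the executable.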

Updating UMDF drivers during development

I am having some trouble updating UMDF drivers using "devcon" during a standard code-deploy-debug cycle. The problem is that "devcon update" isn't really updating anything unless the version number or the date of the DLL file and the INF file has changed from what is stored in the system's driver cache folder. After a maddening series of experiments I've discovered that one way to force the thing to use the latest files is by doing the following:
1. Change the parameters passed to "stampinf.exe" in "makefile.inc" by explicitly setting a version with the "-v" option.
2. Modify the resource script file ("DRIVER_NAME.rc") to first define VER_USE_OTHER_MAJOR_MINOR_VER before including "ntverp.h", and then explicitly define VER_PRODUCTMAJORVERSION and VER_PRODUCTMINORVERSION. You'll note that this system does not allow us to change the build and revision numbers. On Win7 these seem to be fixed at 7600 and 16385 in "ntverp.h". Is this by design?
So, I first modify "makefile.inc" and set the "-v" option to something like "1.1.7600.16385", manually incrementing the minor version for every single build, and then modify the RC file and update VER_PRODUCTMINORVERSION with the same number.
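The corresponding stampinf invocation looks roughly like this (the INF name and architecture are placeholders):

    stampinf -f mydriver.inf -a amd64 -d * -v 1.1.7600.16385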
Alternatively, if I run a command prompt under the SYSTEM account and delete the driver cache folder in "C:\windows\system32\DriverStore\FileRepository\DRIVER FOLDER" before running "devcon", then that works too.
Now, I am thinking I am missing something fairly basic here, as this seems to be a rather painful way of doing it. Please help! Thanks!
Why can't you just unplug the device and replace the unloaded DLL? You shouldn't need to reinstall the driver, just replace the module. Note that you shouldn't do this during production or anything that has to do with customers, but if you're writing a driver, just slam in the new module with the same version number.
On Win7 this seems to be fixed at 7600 and 16385 in "ntverp.h". Is this by design?
Yep, at least until the next service pack.
As Paul Betts has suggested above, the way to go seems to be to simply replace the UMDF DLL directly in the driver folder (e.g. C:\windows\system32\drivers\umdf\) after disabling the device, either in the device manager or using "devcon". I'd asked this question on Microsoft's device drivers newsgroup before posting here but hadn't got a satisfactory response - but some folks ended up responding there after I posted here! So I'll put up a link to that post as well:
http://bit.ly/6PDxKT
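That cycle amounts to something like this (the hardware ID and file names are placeholders for your own driver):

    devcon disable "ROOT\MYUMDFDEVICE"
    copy /y MyUmdfDriver.dll C:\Windows\System32\drivers\UMDF\
    devcon enable "ROOT\MYUMDFDEVICE"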

Is it possible to create a custom distribution of OpenOffice, or a way to package it into my java application?

I've got a simple Java-based PPT->SWF sub-project that basically works. The open source software out there, OpenOffice.org and JODConverter, do the job great.
The thing is, to do this I need to install OO.o and run it in server mode. That means installing OO.o, which is a lot of software (~160MB) just to convert the source PPT files to an intermediate format. Also, the public OO.o distributions are platform specific, and I'd really like a single, cross-platform set of files. And I'd like to not interfere with a system's current settings, like file extension associations.
As things are now, my project is not particularly 'software distribution friendly'.
So, the questions are:
Is it possible to create a custom distribution of OpenOffice? How would one go about this?
How lightweight and unobtrusive can I make the installation?
Would it be possible to have a truly cross platform distribution since there would be no OO.o UI?
Are there any licensing issues I need to be aware of? (On my list of things to check out, but if you know them already then TIA!)
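For context, the server mode referred to above is usually started with something like this; the exact flags vary between OO.o versions, so treat it as a sketch:

    soffice -headless -nofirststartwizard -accept="socket,host=127.0.0.1,port=8100;urp;"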
I have no idea how to accomplish such a task, but Microsoft has its PPT viewer, which is free and very small; maybe in .NET (C#) you can use some kind of function to save into the intermediate file that you need...
And by the way, how are you handling slide transitions?
I found software that does that, but you need MS PPT installed.
That was just an idea; now, regarding your actual question:
You can create your own installation of OO; just jump to the Installation project and follow the guidelines there.
I did not read it to the end, but from the first paragraph it seems to be what you are searching for.
No, not unless you are neck-deep coding in the OpenOffice project.
