ios:createIPA gradle task fails with error in HfsCompressor.compressNative

When building my libGDX game for iOS from the command line, using ./gradlew ios:createIPA, I sometimes get the following error:
...
:ios_lite:createIPA
RoboVM has detected that you are running on a slow HDD. Please consider mounting a RAM disk.
To create a 2GB RAM disk, run this in your terminal:
SIZE=2048 ; diskutil erasevolume HFS+ 'RoboVM RAM Disk' `hdiutil attach -nomount ram://$((SIZE * 2048))`
See http://docs.robovm.com/ for more info
RoboVM has detected that you are running on a slow HDD. Please consider mounting a RAM disk.
To create a 2GB RAM disk, run this in your terminal:
SIZE=2048 ; diskutil erasevolume HFS+ 'RoboVM RAM Disk' `hdiutil attach -nomount ram://$((SIZE * 2048))`
See http://docs.robovm.com/ for more info
:ios_lite:createIPA FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':ios_lite:createIPA'.
> org.robovm.compiler.util.io.HfsCompressor.compressNative(Ljava/lang/String;[BI)Z
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Running with --info and --debug produces much more output but no additional useful information, and --stacktrace only shows Gradle's internal stack trace.
Using Gradle 2.2, OS X 10.11.5, JVM 1.8.0_74, RoboVM 1.12.0.
What causes this error, and how can I fix it?

I still don't know what's causing it (better answers welcome in that regard), but I have found a workaround: restart the Gradle daemon. Before building, simply run:
$ ./gradlew --stop
The daemon will automatically be restarted for the next build. So far, this workaround has reliably fixed the error for me.
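In practice, the full sequence looks like this (a sketch, assuming the same createIPA task as above):
$ ./gradlew --stop          # stop any running Gradle daemons
$ ./gradlew ios:createIPA   # the next build starts a fresh daemon automatically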

Related

perf can't find ELF for unwinding when tracing app in docker

I am tracing an application running inside a docker container. To do so, I am attaching to it with the following command
perf record -o /tmp/perd.data --call-graph dwarf --pid <pid>
The tracing works fine, but when I try to get a report I run into the following issue: none of my application's functions show up; they are all unknown.
I have also tried hotspot, and I get the following error:
PerfUnwind::MissingElfFile: Could not find ELF file for /workspace/build/release/bin/shared-libs/libdeLog.so. This can break stack unwinding and lead to missing symbols.
I think the issue is that, since the app runs in a container, the libraries live in a particular directory (/workspace/build/release/bin/shared-libs), and when I run perf report on the host it can't find them, because that directory only exists inside the container.
How can I fix that?
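One approach that might work (only a sketch; whether perf's --symfs option fully covers this case is untested, and <container> is a placeholder): copy the container's library tree to a matching path on the host and point perf report at it as an alternate root:
# replicate the container's library path under a host directory
mkdir -p container-root/workspace/build/release/bin
docker cp <container>:/workspace/build/release/bin/shared-libs container-root/workspace/build/release/bin/
# resolve binaries and symbols relative to that directory
perf report -i /tmp/perd.data --symfs container-root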

valgrind reports "Operation not permitted" but permissions seem to be ok

I want to run valgrind to monitor a program binary named contextBroker this way:
valgrind -v --leak-check=full --track-origins=yes --trace-children=yes contextBroker
but I get this error message:
valgrind: /usr/bin/contextBroker: Operation not permitted
(It happens that the contextBroker binary is in /usr/bin/)
The first thing I thought of was some kind of permissions problem. However:
I run the valgrind command as root user
The permissions on /usr/bin/contextBroker are more than sufficient:
ls /usr/bin/contextBroker -l
-rwxr-xr-x 1 root root 7108992 Jun 3 18:15 /usr/bin/contextBroker
Additional facts:
The contextBroker binary works fine on its own, e.g. if I run contextBroker directly it works.
valgrind version is 3.16.0
I'm running the valgrind command inside a docker container. The same command works on the host system (although the valgrind version on the host is slightly different: 3.12.0.SVN)
How can I solve this problem and run valgrind on my process? Thanks!
Using --privileged in the docker run command line solved this issue.
Thanks Nick ODell for the hint! :)
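For reference, this is roughly what that looks like (the image name is a placeholder); a narrower alternative that may be enough, though untested here, is relaxing only the seccomp profile instead of granting full privileges:
# as in the fix above: run the container with full privileges
docker run --privileged <image> valgrind -v --leak-check=full --track-origins=yes --trace-children=yes contextBroker
# possibly sufficient and less broad: disable the default seccomp profile only
docker run --security-opt seccomp=unconfined <image> valgrind -v --leak-check=full --track-origins=yes --trace-children=yes contextBroker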

How do I run Docker Swarm's integration tests?

I've followed the instructions at https://github.com/docker/swarm/blob/master/CONTRIBUTING.md to run Swarm's integration tests, but they do not work. The command ./test/integration/run.sh gives unusual error messages. (See http://pastebin.com/hynTXkNb for the full output).
The message about swappiness is the first thing that looks wrong. My kernel does support it - I tested it. /proc/sys/vm/swappiness is readable and writable, and configurable through sysctl.conf .
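For example, the checks I mean look roughly like this:
$ cat /proc/sys/vm/swappiness        # readable
$ sudo sysctl -w vm.swappiness=60    # writable (the value here is just an example)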
The next line that looks wrong is the chmod. It tries to access a file in /usr/local/bin, which is wrong because Docker is installed to /usr/bin, and because that file wouldn't be writable by anyone but root anyway.
I know the daemon is running, and working correctly. For example:
user#box:~$ docker run debian /bin/echo 'hello world asdf'
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
hello world asdf
Does anyone know how to fix this issue, and get the integration tests running?
If not, does anyone at least know where to dig into the code in Docker to find out what is failing?

/home/travis/build.sh: line 41: $pid Killed (exit code 137)

In the Apache Jackrabbit Oak Travis build we have a unit test that makes the build error out:
Running org.apache.jackrabbit.oak.plugins.segment.HeavyWriteIT
/home/travis/build.sh: line 41: 3342 Killed mvn verify -P${PROFILE} ${FIXTURES} ${SUREFIRE_SKIP}
The command "mvn verify -P${PROFILE} ${FIXTURES} ${SUREFIRE_SKIP}" exited with 137.
https://travis-ci.org/apache/jackrabbit-oak/jobs/44526993
The test code can be seen at
https://github.com/apache/jackrabbit-oak/blob/trunk/oak-core/src/test/java/org/apache/jackrabbit/oak/plugins/segment/HeavyWriteIT.java
What's the actual explanation for the error code? How could we work around or solve the issue?
Error code 137 usually comes up when a script gets killed due to exhaustion of available system resources; in this case it is very likely memory. The infrastructure this build runs on has some limitations, due to the underlying virtualization, that can cause these errors.
I'd recommend trying out our new infrastructure, which has more resources available and should give you more stable builds: http://blog.travis-ci.com/2014-12-17-faster-builds-with-container-based-infrastructure/
Usually a Killed message means that you are out of memory. Check your limits with ulimit -a or the available memory with free -m, then try to increase your stack size, e.g. ulimit -s 82768 or even more.
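If memory really is the limit, another option is to cap the heap of Maven and of the forked test JVM so the build stays within the VM's memory; the values below are guesses, not tested against this build:
export MAVEN_OPTS="-Xmx512m"                                                  # heap for the Maven process itself
mvn verify -P${PROFILE} ${FIXTURES} ${SUREFIRE_SKIP} -DargLine="-Xmx1024m"   # heap for the forked Surefire/Failsafe JVM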

Criu/crtools restore fails to restore process on a different machine

I am trying to save a process to disk using CRIU. I am able to save and restore it on the same machine, but when I try to restore the saved image on a different machine it gives me an error.
I executed the yes command and found its pid using ps aux | grep yes
then to save I did:
sudo ./criu dump -t 7483 -D ~/dumped --shell-job
then I copied the "dumped" directory to another machine and tried to restore it using the following command:
sudo ./criu restore -t 7483 -D ../dumped/ --shell-job
but got the following error:
(00.058476) Error (cr-restore.c:956): 7483 killed by signal 7
(00.058526) Error (cr-restore.c:1279): Restoring FAILED.
How do I resolve this? I want to migrate a process to a different machine with an identical configuration.
Configuration:
Ubuntu 12.04 64-bit desktop
linux 3.11.0.19-generic
RAM: 4 GB
Output of lscpu
Are you able to restore this process on the machine where you dumped it?
Could you run restore with additional flags to get a verbose log? Like so:
sudo ./criu restore -D ../dumped/ --shell-job -v4 -o restore.log
And provide this log somehow?
By the way, the -t option at restore is obsolete, but that doesn't matter in this case. =)
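For completeness, a sketch of collecting verbose logs on both sides, using the same paths as above (-v4 raises verbosity, -o writes the log to a file):
# on the source machine
sudo ./criu dump -t 7483 -D ~/dumped --shell-job -v4 -o dump.log
# on the target machine
sudo ./criu restore -D ../dumped/ --shell-job -v4 -o restore.log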
