TravisCI with Intel compiler - travis-ci

Does anyone have a solution for using Intel tools (ifort and icc) on TravisCI? The problem is in getting the compilers onto the image and obtaining the proper license.
Here's a shell script which installs Intel 2016, and I've been passing a temporary Intel license to the Travis container as a "secret" set through the Travis dashboard. However, this script sometimes fails to install the Intel compilers due to a timeout. When it does work, it eats up about 5 minutes of my precious free time on Travis, so my test suite may not finish (a long test suite is its own issue, which I'm working on).
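To be concrete, the pattern looks roughly like this in a before_install step. INTEL_LICENSE_B64 is a hypothetical secret environment variable set through the Travis dashboard, and install-intel-2016.sh stands in for the install script; both names and the license path are just examples:
# recreate the license file from the secret variable set in the Travis dashboard
mkdir -p "$HOME/intel/licenses"
echo "$INTEL_LICENSE_B64" | base64 --decode > "$HOME/intel/licenses/license.lic"
# run the installer script; this is the step that sometimes times out and,
# when it succeeds, eats roughly 5 minutes of build time
./install-intel-2016.sh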
Is it possible to create a custom Travis image which includes the tools I need for my tests? Any other ideas for using a licensed compiler, and specifically the Intel compiler, on TravisCI?

Related

How do I install luac.cross on a Mac?

NodeMCU documentation states
NodeMCU firmware build now automatically generates a luac.cross image as standard in the firmware root directory; this can be used to compile and to syntax-check Lua source on the Development machine for execution under NodeMCU Lua on the ESP8266.
Where do I get luac.cross from and how do I install it?
Do I build NodeMCU firmware from source on Mac and is luac.cross created as part of that process? I have been using the cloud service to create custom firmware. Is luac.cross available via cloud build?
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory error. I'm hoping compiled code will reduce RAM needs.
Where do I get luac.cross from and how do I install it?
You gave the answer in the quote from the documentation you posted. Specifically this:
NodeMCU firmware build now automatically generates a luac.cross image...
So, if you build the NodeMCU firmware manually on your platform, the build process will also create a luac.cross for your platform. That's the reason you cannot simply download or install luac.cross - it has to fit your platform, i.e. OS etc.
The logical next question would then be: how do I manually build NodeMCU on macOS?
I don't know the answer to that as I build with the Docker image (from yours truly) on macOS. Running the Docker build creates a luac.cross in the firmware root directory. However, as macOS is just the host OS for Docker in this setup, luac.cross is built for Linux rather than natively for macOS. To use it you would start the Docker container again and run bash in it to get a shell in which to execute the Lua cross compilation: docker run --rm -ti -v $(pwd):/opt/nodemcu-firmware marcelstoer/nodemcu-build bash.
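Inside that shell the cross compilation itself is then just a call to the generated binary. A minimal sketch, assuming the build left luac.cross in the repository root and that init.lua is the script you want to check or compile (the file names are examples):
cd /opt/nodemcu-firmware
./luac.cross -p init.lua          # syntax-check only
./luac.cross -o init.lc init.lua  # compile to bytecode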
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory error. I'm hoping compiled code will reduce RAM needs.
I hate to disillusion you, but if I had to bet, I would expect that the savings won't be significant enough to yield the expected results. As you have already started reading the documentation, I'd like to point you to the relevant FAQ entries: How is NodeMCU Lua different to standard Lua? and Techniques for Reducing RAM.
And maybe using LFS will be your lifesaver.
In case you want to use this tool regardless of the platform, you can use my API to build it:
curl -d @yourscript.lua -X POST https://nodemcu-luacross-run-64l7ehzjta-uc.a.run.app/compile > output.luac

Jenkins installation on a Solaris server

How do I install Jenkins on a Solaris server? I found articles saying this cannot be done, as Jenkins has discontinued support for Solaris.
Even though the official IPS repositories for Solaris are discontinued, you can still run Jenkins on Solaris via the Jenkins web application (jenkins.war). To quote from the Jenkins installation doc:
Solaris, OmniOS, SmartOS, and other siblings
Generally it should suffice to install Java 8 and download the jenkins.war and run it as a standalone process or under an application server such as Apache Tomcat.
Some caveats apply:
Headless JVM and fonts: For OpenJDK builds on minimalized-footprint systems, there may be issues running the headless JVM, because Jenkins needs some fonts to render certain pages.
ZFS-related JVM crashes: When Jenkins runs on a system detected as a SunOS, it tries to load integration for advanced ZFS features using the bundled libzfs.jar which maps calls from Java to native libzfs.so routines provided by the host OS. Unfortunately, that library was made for binary utilities built and bundled by the OS along with it at the same time, and was never intended as a stable interface exposed to consumers. As the forks of Solaris legacy, including ZFS and later the OpenZFS initiative evolved, many different binary function signatures were provided by different host operating systems - and when Jenkins libzfs.jar invoked the wrong signature, the whole JVM process crashed. A solution was proposed and integrated in jenkins.war since weekly release 2.55 (and not yet in any LTS to date) which enables the administrator to configure which function signatures should be used for each function known to have different variants, apply it to their application server initialization options and then run and update the generic jenkins.war without further workarounds. See the libzfs4j Git repository for more details, including a script to try and "lock-pick" the configuration needed for your particular distribution (in particular if your kernel updates bring a new incompatible libzfs.so).
Also note that forks of the OpenZFS initiative may provide ZFS on various BSD, Linux, and macOS distributions. Once Jenkins supports detecting ZFS capabilities, rather than relying on the SunOS check, the above caveats for ZFS integration with Jenkins should be considered.
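Coming back to the standalone option quoted above, the minimal invocation is simply to run the war with a Java 8 runtime; the port below is only an example:
java -jar jenkins.war --httpPort=8080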

Build a project targeting MSVC on Linux Jenkins

I have a private server that I've been slowly setting up for personal projects, but I've run into a bit of a roadblock. My server is running Arch Linux [I like bleeding edge and minimalistic installs in situations like this] and I have Jenkins running on it so that I can have it automatically build projects. I have a project that I've been working on that currently targets the Win32/64 platform using MSVC, but I can't seem to find any info anywhere about setting up a job on Jenkins for this situation. I was hoping that I could maybe set up a Docker instance that would provide the MSVC toolchain, especially since Visual Studio Code is available for Linux, and that I could use it as part of my Jenkins setup to generate Windows binaries for me to test on my main machine. I mention this because, naturally, Visual Studio is not a command-line utility, and currently my server is a purely headless setup that only provides CLI interaction, so if possible I would like to avoid directly adding GUI packages to the server, but if it is the only way, I'd be willing to do so. Is there really no way to achieve what I'm going for with this?
Sorry if this lacks important details or is formatted poorly; this is my first time asking a question here, as it's very rare for me to not be able to find the info I'm looking for in an already existing question.
After research, this is not currently possible, as the idea stems from a misunderstanding of exactly what Docker provides. Docker simply uses the underlying host OS kernel to provide everything and does not provide any virtualization of foreign OSs. Without a version of the MSVC toolchain that can run on Linux, or possibly the use of WINE, there is no way to achieve this short of a VM. Since WINE is not perfect, the most reliable solution as it appears to me is the VM, but YMMV. The other advantage to using a VM is that I can keep the server headless.
I can't answer this question completely, but this topic is interesting to me too.
Note: Visual Studio Code is open-source, but that's an Electron-based editor. Visual Studio IDE and MSVC are proprietary Windows-only apps.
The website https://blog.sixeyed.com/how-to-dockerize-windows-applications/ suggests it's possible to dockerize Windows apps, including Visual Studio.
Docker images for Windows apps need to be based on microsoft/nanoserver or microsoft/windowsservercore, or on another image based on one of those.
Once you get that working, I'd use Visual Studio command-line builds, like devenv /build file.sln [optionally /project file.vcxproj] (https://learn.microsoft.com/en-us/visualstudio/ide/reference/devenv-command-line-switches?view=vs-2017).
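For what it's worth, the documented argument order puts the solution file first. A sketch of what a Jenkins build step could run inside such a Windows container, where the solution, configuration, and project names are placeholders:
devenv MySolution.sln /Build "Release|x64" /Project MyProject.vcxproj
If the full IDE isn't required, MSBuild is the usual headless alternative: msbuild MySolution.sln /p:Configuration=Release /p:Platform=x64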
Note that the VS2017 installer does not function on Wine. I recently filed a bug for this (https://bugs.winehq.org/show_bug.cgi?id=45749 followed by https://bugs.winehq.org/show_bug.cgi?id=45757 ).
I personally use Appveyor for auto-building MSVC apps. Appveyor is a Windows-based centralized cloud service, not a self-hosted CI system.

AIR application not packaging for iOS (AIR SDK 17)

I am posting this question because I stumbled upon the solution, despite not being able to find anything online which helped my specific problem. I am posting the accidental fix as an answer.
Problem: I am using adt.jar via cmd and an ANT script to package the AIR tablet application. Everything works fine on my workstation, but ipa builds fail on the build machine. The build machine is just a re-purposed workstation with more memory and a larger HDD, and it runs Tomcat/Hudson. Both environments are Win7 SP1. By 'everything' I mean apk builds in various configurations, and ipa builds with testing and production provisioning files.
Error messages varied a little bit, but here are the common two messages:
Compilation failed while executing : compile-abc
Error #1042: Not an ABC file.
The stack dump was just a bunch of parameters passed into adt, all application-specific.
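For context, an ipa packaging call of this sort looks roughly like the following; the certificate, provisioning profile, and file names are placeholders for my project's actual values, and the exact options depend on the target and signing setup:
adt -package -target ipa-test -storetype pkcs12 -keystore dev_cert.p12 -provisioning-profile dev.mobileprovision MyApp.ipa MyApp-app.xml MyApp.swf assets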
Things I tried based on many internet searches:
Updated to the latest AIR 17 beta (17.115). This did not work. I did not expect it to fix my problem, because the PC which successfully builds the ipa does not have this version of the SDK.
Hunted down empty case blocks in the code. There were a couple, but again this did not fix the problem. Still works on my machine and not the build machine. I actually made sure the empty blocks existed on the functional environment to disprove this attempt. I am not using "-useLegacyAOT no", so this should not have helped.
Compared all relevant environment vars between the two systems, and matched the ones that were different. This did not fix the issue.
Checked the version of the JDK pointed to by JAVA_HOME. Both were already "64-Bit Server VM (build 20.45-b01, mixed mode)", aka jdk-6u45-windows-x64.exe.
Out of desperation, I ran Windows Update on the environment which failed to produce ipa files. There was a recommended update to the .NET Framework which something in my toolchain must depend on. This fixed the problem.
Microsoft .NET Framework 4.5.2 for Windows 7 x64-based Systems (KB2901983)
My personal workstation is always up to date, and I restart often. This was not the case for the build workstation.
EDIT: A second update was also installed at the same time. This could be what fixed things, but I'm not going to question it.
Update for Windows 7 for x64-based Systems (KB3021917)

Installation of Z3 on a POSIX system without Python?

Is it possible to get Z3 running on a system providing a POSIX API without having Python installed?
I have seen that the new version 4.3 already uses Python in the build process (scripts/mk_make.py).
What about older versions like 4.1? Is it feasible to get them to run on POSIX without Python?
Is Python not available on your system?
Python was always used to automatically generate some parts of the Z3 code base. In the first source code release, we included the automatically generated code. Actually, at that time, we were using a combination of Python + sed + awk + grep to generate these parts of the code. Another problem with the first release was that the build system for Windows (+ Visual Studio) was completely different from the build system for the other platforms. The Makefiles for Linux and OSX were derived from Visual Studio project files. Some users also started to report problems with the build system for Linux and OSX. So, to reduce these problems and have a uniform build system, we decided to use Python (and Python only) to:
Automatically generate code (for bindings for different languages, API logging support, etc)
Check the system for requirements
Generate the Makefiles
And any other form of automation
Python is very attractive for us because it works on most systems (even non-POSIX-compliant ones). We can easily write portable scripts. Moreover, after we made the switch, we can compile Z3 on more platforms. We have successfully compiled it on Windows, Linux (Mint, Ubuntu, SUSE, etc.), OSX, Cygwin, and FreeBSD.
In the "unstable" (aka working-in-progress) branch, we don't even require autoconf anymore, we use python to do all system specific configuration. To build Z3, we just need: python, C++ compiler (Visual Studio C++, g++ or clang++), ar (on non-windows platform), make (or nmake). This is very small set of requirements. Python is available in most platforms by default.
That being said, is it possible to remove the Python requirement? Yes, it is, but we would have to replace Python with something else - something that would allow us to perform all the tasks described above. Take a look at the scripts directory at http://z3.codeplex.com/SourceControl/changeset/view/0c1f2a82818a; we would have to port all these automation scripts to something that can be used on all the platforms we support.

Resources