How do I install luac.cross on a Mac?

NodeMCU documentation states
NodeMCU firmware build now automatically generates a luac.cross image
as standard in the firmware root directory; this can be used to
compile and to syntax-check Lua source on the Development machine for
execution under NodeMCU Lua on the ESP8266.
Where do I get luac.cross from and how do I install it?
Do I build NodeMCU firmware from source on Mac and is luac.cross created as part of that process? I have been using the cloud service to create custom firmware. Is luac.cross available via cloud build?
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory error. I am hoping compiled code will reduce RAM needs.

Where do I get luac.cross from and how do I install it?
You gave the answer in the quote from the documentation you posted. Specifically this:
NodeMCU firmware build now automatically generates a luac.cross image...
So, if you build the NodeMCU firmware manually on your platform, the build process will also create a luac.cross for your platform. That's the reason you cannot simply download or install luac.cross - it has to match your platform, i.e. your OS among other things.
The logical next question would then be: how do I manually build NodeMCU on macOS?
I don't know the answer to that as I build with the Docker image (from yours truly) on macOS. Running the Docker build creates a luac.cross in the firmware root directory. However, as macOS is just the host OS for Docker in this setup, that luac.cross is built for Linux rather than natively for macOS. To use it you would start the Docker container again and run bash in it to get a shell in which to run the Lua cross-compilation: docker run --rm -ti -v $(pwd):/opt/nodemcu-firmware marcelstoer/nodemcu-build bash.
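Once inside that container shell you can invoke the cross-compiler directly. A minimal sketch, assuming the standard luac command-line options and a placeholder script name:

    # inside the container, from the firmware root directory
    ./luac.cross -p init.lua          # syntax-check only, no output written
    ./luac.cross -o init.lc init.lua  # compile init.lua to bytecode (init.lc)

The resulting .lc file can then be uploaded to the ESP8266 in place of the .lua source.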
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory error. I am hoping compiled code will reduce RAM needs.
I hate to disillusion you, but if I had to bet I would expect the savings won't be significant enough to yield the results you hope for. Since you have already started reading the documentation, I'd like to point you to the relevant FAQ entries: How is NodeMCU Lua different to standard Lua? and Techniques for Reducing RAM.
And maybe using LFS will be your lifesaver.

In case you want to use this tool regardless of the platform, you can use my API to build it:
curl -d @yourscript.lua -X POST https://nodemcu-luacross-run-64l7ehzjta-uc.a.run.app/compile > output.luac

Related

Is IntelliJ's support for Dockerized Python environments compatible with Python running on a Windows container?

My Python project is very Windows-centric; we want the benefits of containers, but we can't give up Windows just yet.
I'd like to be able to use the Dockerized remote Python interpreter feature that comes with IntelliJ. This works flawlessly with Python running on a standard Linux container, but does not appear to work at all for Python running on a Windows container.
I've built a new image based on a standard Microsoft Server core image. I've installed Miniconda, bootstrapped a Python environment and verified that I can start an interactive Python session from the command prompt.
Whenever I try to set this up I get an error message: "Can't retrieve image ID from build stream". This occurs at the moment when IntelliJ would normally have detected the Python interpreter and its installed libraries.
I also tried giving the full path for the interpreter: c:\miniconda\envs\htp\python.exe
I've never seen any mention that this works in the documentation, but nor have I seen any mention that it does not work. I totally accept that Windows Containers are an oddity, so it's entirely possible that IntelliJ's remote-Python feature was never tested on Python running in Windows containers.
So, has anybody got this feature working with Python running on a Windows container yet? Is there any reason to believe that it does or does not work?
Regrettably, it is not supported yet. Please vote for the feature request https://youtrack.jetbrains.com/issue/PY-45222 in order to increase its priority.

Building Python wheels in Docker for Raspberry Pi Zero on x86_64 machine

I'm hoping this is an appropriate venue for my question. There's a lot of pieces to this puzzle.
I'm building a container using Docker that is destined to run on a Raspberry Pi Zero. The RPi Zero has an ARMv6 hard-float processor. The container will run a Python program that includes some dependencies that must be compiled (uses binary libraries). I am able to build and run the container on the RPi Zero itself, but building the container literally takes hours. I'm hoping to 1) speed up the process of building and 2) allow this to happen in a CI environment.
The approach I've taken in the past to build minimal Python containers that have dependencies requiring compilation is to use a multistage Docker build. I first start up a container with a full toolchain, then run pip wheel to compile all requirements into .whl files. I then copy the .whl files to the final container, install any binary libraries using the typical package manager, and then point pip install at this cache (--find-links=/wheels) for the installation of Python dependencies. This approach also works just fine on the Pi, but as I said it takes forever.
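A rough sketch of that multistage layout, assuming an Alpine-based ARMv6 Python base image and illustrative file names (none of these are taken from the original setup):

    # builder stage: full toolchain, compile every requirement into a wheel
    FROM arm32v6/python:3.7-alpine AS builder
    RUN apk add --no-cache build-base libffi-dev
    COPY requirements.txt .
    RUN pip wheel -r requirements.txt -w /wheels

    # final stage: install from the local wheel cache only, no compiler needed
    FROM arm32v6/python:3.7-alpine
    COPY --from=builder /wheels /wheels
    COPY requirements.txt .
    RUN pip install --no-index --find-links=/wheels -r requirements.txt
    COPY . /app
    CMD ["python", "/app/main.py"]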
I've considered a few different approaches I could take:
Figure out how to get the Docker engine on my main dev machine (also in CI) to run and build an ARM image using qemu-arm-static while running docker build, then tag the resulting image as ARMv6 and upload it to my registry (I could just use a tag or a different repo name); a rough sketch of this option appears after the question. I haven't honestly dug too deep into this, but my main concern is that every example I've seen of qemu-arm seems to indicate that it runs ARMv7 emulation. The RPi Zero can't actually even run many Docker containers that are made available for ARM due to this (exit 139). The arm32v6 "user" does provide working base images that run fine on the RPi Zero, which is what I'm using as the source images to build on my Pi itself.
Emulate an entire RasPi using qemu-system-arm. Again though, it looks like this emulates ARMv7, meaning the compiled wheels might not be able to run on the Pi zero.
Setup a cross-compiling toolchain for ARMv6. A few problems: I wouldn't know how to make sure pip uses that toolchain when compiling, and also I'd need to get and compile any other dependent library (even possibly all the way down to glibc?) so the header files will resolve.
It looks like this is easy to do if you want to do it for ARMv7 (which I believe the RPi 2 uses) or later, but I'm specifically using a Zero for my project, so I don't have that option.
TL;DR: How do I build binary Python wheels for ARMv6 using Docker without having to do it on a slow, single-core Raspberry Pi Zero?
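For what it's worth, the first option above (registering QEMU's user-mode emulation so docker build can run ARM binaries on an x86_64 host) usually boils down to something like the following. Image and registry names are placeholders, and whether the emulation behaves as ARMv6 or ARMv7 for the compiled wheels is exactly the open question, so treat this as a sketch only:

    # one-off per boot on the x86_64 build host: register QEMU binfmt handlers
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    # then build against an arm32v6/* base image and push under a dedicated tag
    docker build -t registry.example.com/myapp:armv6 .
    docker push registry.example.com/myapp:armv6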

Use a Docker container as an install set

I'm currently building a Docker container that contains all the libraries needed for deployment of our app on a test machine, such as, for example, OpenCV 3.3 built with CUDA 9.
So, on a clean minimal OS install we can download the container and fire up our app in the desired environment, which is as I understand it one of the main reasons to use Docker.
So, after a while we decide to run our tests on bare metal, without the Docker file system etc. in the way. Can we somehow replay the Dockerfile commands or image command history to run the apt-get etc. of not just the current image, but of all FROM base images that are not yet installed in the raw environment?
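As far as I know there is no built-in way to export an image as an install script, but as a starting point you can dump the command recorded for each layer, including the layers inherited from the FROM images, with docker history. A sketch, using a placeholder image name:

    # print every layer's creating command, untruncated, oldest layer first
    docker history --no-trunc --format '{{.CreatedBy}}' myapp:latest | tac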

How do you run an .exe file on Docker?

I am currently trying to understand and learn Docker. I have an app, an .exe file, and I would like to run it on either Linux or OS X by creating a Docker container. I've searched online but I can't find anything that allows one to do that, and I don't know Docker well enough to try and improvise something. Is this possible? Would I have to use Boot2Docker? Could you please point me in the right direction? Thank you in advance; any help is appreciated.
Docker allows you to isolate applications running on a host; it does not provide a different OS to run those applications on (with the exception of the client products that include a Linux VM, since Docker was originally a Linux-only tool). If the application runs on Linux, it can typically run inside a container. If the application cannot run on Linux, then it will not run inside a Linux container.
An exe is a Windows binary format. This binary format is incompatible with Linux (unless you run it inside an emulator or VM). I'm not aware of any easy way to accomplish your goal. If you want to run this binary, then skip Docker on Linux and install a Windows VM on your host.
As other answers have said, Docker doesn't emulate the entire Windows OS that you would need in order to run an executable 'exe' file. However, there's another tool that may do something similar to what you want: the "Wine" app from WineHQ. An abbreviated summary from their site:
Wine is a compatibility layer capable of running Windows applications
on several operating systems, such as Linux and macOS.
Instead of simulating internal Windows logic like a virtual
machine or emulator, Wine translates Windows API calls
on-the-fly, eliminating the performance and memory penalties of
other methods and allowing you to cleanly integrate Windows
applications into your desktop.
(I don't work with nor for WineHQ, nor have I actually used it yet. I've only heard of it, and it seems like it might be a solution for running a Windows program inside of a light-weight container.)
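If you want to experiment with that route inside a container, a minimal sketch of a Linux image that runs an exe under Wine might look like this; the base image, package names and the myapp.exe path are assumptions, not something taken from the answers above:

    FROM ubuntu:22.04
    # Wine needs the i386 architecture enabled for 32-bit Windows binaries
    RUN dpkg --add-architecture i386 && \
        apt-get update && \
        apt-get install -y --no-install-recommends wine wine32 wine64 && \
        rm -rf /var/lib/apt/lists/*
    COPY myapp.exe /app/myapp.exe
    # run the Windows binary through the Wine compatibility layer
    CMD ["wine", "/app/myapp.exe"]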

Compile Tensorflow from source with Docker to get CPU speed up

I am looking for a way to set up or modify an existing Docker image for installing tensorflow such that the SSE4, AVX, AVX2, and FMA instructions can be utilized for CPU speed-up. So far I have found how to install from source using bazel (How to Compile Tensorflow... and CPU instructions not compiled...). Neither of these explains how to do this within Docker. So I think what I am looking for is what you need to add to an existing docker image that installs without these options, so that you end up with a version of tensorflow compiled with the CPU options enabled. The existing docker images do not do this because they want the image to run on as many machines as possible. I am using Ubuntu 14.04 on a Linux PC. I am new to docker but have installed tensorflow and have it working without getting the CPU warnings I get when I use the docker images. I may not need this for speed, but I have seen posts that claim the speed-up can be significant. I searched for existing docker images that do this and could not find anything. I need this to work with GPU, so it needs to be compatible with nvidia-docker.
I just found this docker support for bazel and it might provide an answer; however, I do not understand it well enough to know for sure. I believe this is saying that you cannot build tensorflow with bazel inside a Dockerfile. You have to build a Dockerfile using bazel. Is my understanding correct, and is this the only way to get a docker image with tensorflow compiled from source? If so, I could still use help in how to do it and still get the other dependencies that I would get if using an existing docker image for tensorflow.
Dockerfiles that build with CPU support can be found here.
Hope that helps! Spent many a late night here on Stack Overflow and Github Issues and stuff. Now it's my turn to give back! :)
The GPU stuff in particular is really hairy - especially when enabling the XLA/JIT/AOT stuff as well as the Graph Transform Tools.
Lots of hacks embedded in my Dockerfiles. Feel free to review and ask me questions!
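For orientation, the heart of such a Dockerfile is usually just a configure step followed by a bazel build with the CPU feature flags switched on. The following is only an illustrative sketch: the base image, the /tensorflow source location and the exact flag set are assumptions, not copied from the Dockerfiles linked above:

    FROM tensorflow/tensorflow:nightly-devel
    WORKDIR /tensorflow
    # accept the configure defaults, then rebuild the pip package with SSE4/AVX/AVX2/FMA enabled
    RUN yes "" | ./configure && \
        bazel build -c opt \
            --copt=-msse4.1 --copt=-msse4.2 --copt=-mavx --copt=-mavx2 --copt=-mfma \
            //tensorflow/tools/pip_package:build_pip_package && \
        bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip && \
        pip install --upgrade /tmp/pip/tensorflow-*.whl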
The contributing guidelines mention building TensorFlow from source with Docker to run the unit tests:
Refer to the CPU-only developer Dockerfile and GPU developer Dockerfile for the required packages. Alternatively, use the said Docker images, e.g., tensorflow/tensorflow:nightly-devel and tensorflow/tensorflow:nightly-devel-gpu for development to avoid installing the packages directly on your system.
