Red Pitaya Kernel: adding additional modules

Is it possible to add additional modules for some USB devices to the default Red Pitaya kernel?
Right now the kernel seems to be static, without module support.
The focus would be on adding several USB WiFi dongles so they work out of the box - the only USB WiFi driver that I have found to be compiled in is 8192cu.
It may also be helpful to add some other modules, such as a USB-serial console...
(also enabling /proc/config.gz may help...)

In release 0.90, the Red Pitaya kernel is static with only 8192cu WiFi support in order to minimize the size of the kernel & especially the ramdisk.
But, like any other Linux kernel, it is configurable. You can build your own kernel flavour by modifying the Red Pitaya kernel configuration file and rebuilding the kernel & modules. But keep in mind that the ramdisk size is limited to 10 MB due to the current u-boot configuration. The elegant solution is therefore to put the modules onto the SD card (/opt/lib) instead of increasing the ramdisk.
Regarding /proc/config.gz, it is already enabled in release 0.90 and can be accessed using:
redpitaya> gunzip -dc /proc/config.gz
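As a rough sketch of that rebuild-and-install workflow (the kernel tree name, toolchain prefix and SD card mount point below are assumptions, not the official Red Pitaya build commands):

cd linux-xlnx                      # Red Pitaya's Xilinx-based kernel tree
make ARCH=arm menuconfig           # mark extra USB WiFi / USB-serial drivers as modules (M)
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
# Install the modules onto the mounted SD card instead of growing the ramdisk;
# they end up under /mnt/sdcard/opt/lib/modules/<version>/:
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- INSTALL_MOD_PATH=/mnt/sdcard/opt modules_install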


Why are Docker multi-architecture images needed (instead of the Docker Engine abstracting the differences)?

Short version
I would like to know the technical reasons why Docker images need to be created for multiple architectures. Also, it is not clear whether the point here is creating an image for each CPU architecture or for each OS. Shouldn't the OS abstract the architecture?
Long version
I can understand why the Docker Engine must be ported to multiple architectures. It is a piece of software that will interact with the OS, make system calls, and ultimately it is just code that is represented as a sequence of instructions within a particular instruction set, for a particular architecture. So the Docker Engine must be ported to multiple OS/architectures much like, let's say, Microsoft Word would have to be ported.
The same would apply to, let's say, the JVM, or to VirtualBox.
But, unlike with Docker, software written for the JVM on Windows would run on Linux. The JVM would abstract the differences of the underlying OS/architectures and run the same code on both platforms.
Why isn't that the case with Docker images? Why can't the Docker Engine just abstract the differences, and provide a common interface, so the image itself wouldn't need to be compatible with a specific OS/architecture?
Is this a decision (like "let's make different images per architecture because it is better for reason X"), or a consequence of how Docker works (like "we need to do it this way because Docker requires Y")?
Note
I'm not crying "omg, why??". This is not a rant or criticism; I'm just looking for a technical explanation of the need for different images for different architectures.
I'm not asking how to create a multi-architecture image.
I'm not looking for an answer like "multi-architecture images are needed so you can run your images on various platforms", which answers "what for?", but not "why is that needed?" (which is my question).
Besides that, when you see an image, it usually has an os/arch next to the digest, like linux/amd64.
What exactly is the image targeting? The OS, the architecture, or both? Shouldn't the OS abstract the underlying architecture?
edit: I'm starting to assume that the need for different images per architecture is along the lines of: the image will contain applications inside it. Let's say it will contain the Go compiler. The Go compiler itself is a binary that must have been compiled for different architectures. The image for x86-64 will contain the Go compiler compiled for x86-64, and so on. Is this correct? If so, is this the only reason?
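You can see this split directly in a registry's manifest list; for instance (golang is just an example of a multi-arch image, and the output is trimmed to the relevant fields):

docker manifest inspect golang:latest
# ...
#   "platform": { "architecture": "amd64", "os": "linux" }
#   "platform": { "architecture": "arm64", "os": "linux" }
# ...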
Why can't the Docker Engine just abstract the differences, and provide a common interface
Performance would be a major factor. Consider how slow Cygwin is for some things when providing a POSIX API on top of Windows by emulating some POSIX things that don't map directly to the Windows API. (e.g. fork() / exec separately, instead of CreateProcess).
And that's just source compatibility; the resulting binaries are specific to Cygwin on Windows. It's even worse if you want to do that at runtime (binary compat instead of source compat).
There's also the amount of complexity Docker would need to provide an efficient portable JIT-compiling VM on top of various OSes, especially across various CPU ISAs like x86-64 vs. AArch64 that don't even share common machine code.
If Docker had gone this route, it would really just be re-inventing a JVM or .NET CLR bytecode-based VM.
Or more likely, instead of reinventing that wheel, it would just use an existing VM and add image management on top of that. But then it couldn't work with native programs written in C, unless it transpiled them to Java or CLR bytecode.
Although the promise of Docker is the elimination of differences when moving software between machines, you will still face the problem that Docker runs with the host machine's CPU architecture, and that boundary cannot be crossed in Docker.
Neither Docker, nor a virtual machine, abstract a CPU to enable full cross compatibility.
Emulators do. If Docker and VMs ran on emulators, they would be less performant than they are today.
The docker buildx command (often combined with an ARCH build argument in the Dockerfile) leverages the QEMU emulator, emulating a full system of the target architecture during a build. The downside of emulation is that it runs much slower than a native build.
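A minimal sketch of such an emulated multi-arch build (the image name and platform list are examples):

docker buildx create --use                  # set up a multi-platform builder
docker buildx build --platform linux/amd64,linux/arm64 -t example/app:latest --push .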

What kernel options is Google's Container Optimized OS built with?

I'm having trouble finding the kernel options that Google's Container Optimized OS is built with. I tried looking in the usual locations like /boot/config-* and /proc/config.gz, but didn't find anything. I searched the source code and didn't find anything either, but I'm probably just searching wrong.
The specific option I'm curious about is CONFIG_CFS_BANDWIDTH and whether it is enabled or not. Thanks!
You can get it by running zcat /proc/config.gz in a Container-Optimized OS VM.
The kernel config is generated from the source here. However, note that the source kernel config is changed during the OS image build process, so the two are not 100% the same.
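For the specific option asked about, a one-liner from inside the VM:

zcat /proc/config.gz | grep CONFIG_CFS_BANDWIDTH
# prints e.g. CONFIG_CFS_BANDWIDTH=y if the option is enabled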

PocketBeagle Debian internet over USB

I am trying to follow the directions from the book Exploring Beaglebone. I have also viewed this video, which covers the wrong OSes. I have also read some posts (one, two).
Observed anomalies:
Network Preferences shows a warning of a self-assigned IP address and an inability to support internet sharing:
[screenshot: macOS Network Preferences]
Debian does not have a 'udhcpc' command, but the following was executed:
[screenshot: screen output]
Has anyone been able to do internet over USB on macOS 10.13.2 and Debian 9 IoT?
tnx,
Jon
So, a quick intro to the options you have:
1. Manually enable routing + NAT on the host (no idea how to do that on OSX).
2. Configure the board to use DHCP client functionality instead, so it talks to a shared connection from e.g. OSX. For that:
- Permanently disable the script (should be in /opt somewhere) that assigns the current static network settings and enables DHCP serving. It might also just be part of a larger script.
- Enable a DHCP client (e.g. in /etc/network/interfaces, or by using Connman or Network-manager); see the sketch below.
The default Debian image should also expose a serial console over USB. You can use this to gain access to and configure the system even when your network connection doesn't work. Of course, the debug-UART should work as well, by using a USB-serial converter.
Another note: the DHCP client on Debian for manual execution is usually dhclient. The interface on the Beaglebone side will be named usb0 or usb1 (newer images expose two gadget interfaces for increased compatibility, e.g. with OSX).
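A minimal sketch of both approaches, assuming the gadget interface came up as usb0 (it may be usb1 on your image):

dhclient usb0                 # one-off, e.g. from the serial console
# Or permanently, in /etc/network/interfaces:
#   auto usb0
#   iface usb0 inet dhcp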
A good place to ask questions is usually the Beagleboard Google group.

Can we run DIGITS or Caffe on a Mac without a GPU?

I have seen the Caffe installation instructions for Mac, but I have a question. If my Mac does not have a GPU, do I have no chance to use GPU acceleration, and do I have to use CPU-only mode?
Or do I have a chance of using a (virtual!) GPU via the NVIDIA web driver?
Moreover, can I have DIGITS on my Mac? When I try to download it, there are no options for a Mac download; it is just for Ubuntu!
I am very confused about these questions! Can you please clarify this for me?
The difference in architectures between a CPU and a GPU does not allow a simple transformation of code written for one architecture to the other. GPU drivers are written specifically for the GPU architecture and cannot be easily virtualized. On the other hand, some software supports both; this includes OpenGL and Caffe (http://caffe.berkeleyvision.org/). NVIDIA DIGITS is based on Caffe and can therefore work without a dedicated GPU (here is the thread on how to install it on Macs: https://github.com/NVIDIA/DIGITS/issues/88).
According to https://www.github.com/NVIDIA/DIGITS/issues/251, CUDA cannot be run on computers that do not have a dedicated NVIDIA GPU, but according to "How to run my CUDA application on ATI or Intel card in software mode?" there is a program, gpuocelot, that receives CUDA instructions and can run on NVIDIA GPUs, AMD GPUs and x86.
In scientific shared computing, separate programs are written for different devices; e.g. Einstein at Home has four separate programs to find gravitational waves: CPU, NVIDIA GPU (CUDA), AMD GPU and ARM.
To make DIGITS work you need to:
1. build Caffe with CPU_ONLY, and
2. tell DIGITS not to use any GPUs by running digits-devserver with the --config flag
(https://github.com/NVIDIA/caffe/blob/v0.13.2/Makefile.config.example#L9-L10, https://github.com/NVIDIA/DIGITS/issues/251).
Another possibility:
you can still use the --config flag with the web installer. Try this:
./runme.sh --config. Choose "N" to select none.
Also a possibility:
I am trying to answer how you can choose CPU or GPU. Within the caffe folder, there is a Makefile.config.example file. Copy the contents of this file into a new file and rename it "Makefile.config". If you want to use the CPU, then:
1. comment out USE_CUDNN := 1 in the "Makefile.config" file,
2. uncomment CPU_ONLY := 1,
3. issue the make all command again within the caffe folder.
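Put together, a sketch of those steps (run inside the caffe folder):

cp Makefile.config.example Makefile.config
# In Makefile.config, ensure these two lines read:
#   # USE_CUDNN := 1        (left commented out - no GPU)
#   CPU_ONLY := 1           (uncommented - build without CUDA)
make all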
And if nothing helps, you can run the procedure twice, because that helped someone at the end of the thread.

Build Systems for Embedded Linux

I work on a device that uses Embedded Linux. In the near future this device is probably going to turn into a product family, and a few more devices (i.e. hardware platforms) are going to be added to the mix. These devices will be similar but may have different processors, hardware peripherals (and device drivers), user space applications and kernel settings. In addition to compiling distributions for the different devices, I'd also like to build debug distributions for development.
What are some of the more common ways to assemble and manage Embedded Linux systems? I have been playing around with Jenkins some and see some potential there.
There are several build systems targeting Embedded Linux:
OpenEmbedded
Buildroot
Scratchbox
PTXdist
LTIB
Emdebian
There is a document comparing them here. I suppose Buildroot is the most well-known and widely used among them. There are more interesting docs related to Embedded Linux on free-electrons.com.
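As an illustration, a minimal Buildroot workflow looks roughly like this (the defconfig name is just an example; pick the one matching your board):

git clone https://git.buildroot.org/buildroot
cd buildroot
make raspberrypi3_defconfig        # start from a board defconfig
make menuconfig                    # adjust packages, kernel options, debug vs. release settings
make                               # builds the toolchain, kernel and rootfs into output/images/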
