Are there WiFi routers other than Keenetic that support installing Linux packages with OPKG? For example, Asus or TP-Link?

Without installing OpenWrt, that is, directly in the stock ("native") firmware, the way OPKG works on Keenetic devices, but from other manufacturers.

Related

Installing a Linux package on Photon OS that has no package manager installed

I have a Photon-based VM that apparently had the usual tdnf package manager deleted.
I'd like to reinstall tdnf or yum on this VM.
I have wget available on the Photon VM.
Is there a way to bootstrap my way back to having a package manager?
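One hedged sketch, assuming the rpm binary itself is still present on the VM: use wget to fetch the tdnf RPM directly from the Photon package repository and install it with rpm. The URL and version below are illustrative assumptions; adjust them to your Photon release and architecture.
# Illustrative URL and version; substitute your Photon release and the current tdnf build
wget https://packages.vmware.com/photon/3.0/photon_release_3.0_x86_64/x86_64/tdnf-2.1.0-4.ph3.x86_64.rpm
rpm -ivh tdnf-2.1.0-4.ph3.x86_64.rpm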

ideviceinstaller not connecting with network option

The option just prints the usage text for ideviceinstaller, with no error, and does nothing.
own#penguin:~$ ideviceinstaller --network 172.20.10.3 --install dark.ipa
I've been stuck on this for the last couple of days; any feedback is greatly appreciated.
Running x86_64 GNU/Linux penguin 5.10.106-15264.
For libimobiledevice to connect to your iDevice, it needs a muxer to report which devices are available and where.
The vanilla usbmuxd shipped with apt, pacman, etc. does NOT support detecting devices over the network; you will have to use an alternative.
usbmuxd2 is written in C++ by tihmstar from the ground up to replace usbmuxd, but in my experience it segfaults easily.
netmuxd is written in Rust by myself, but is largely untested because it's so new and does not support USB connected devices. It can act as an extension to vanilla usbmuxd, though.
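Whichever alternative muxer you run, libusbmuxd-based tools honor the USBMUXD_SOCKET_ADDRESS environment variable, so you can point them at whatever socket the replacement listens on. A minimal sketch, where the 127.0.0.1:27015 address is an assumption (check your muxer's README for its actual socket):
# Point libimobiledevice tools at an alternative muxer's socket
export USBMUXD_SOCKET_ADDRESS=127.0.0.1:27015
# The network device's UDID should now be listed
idevice_id -l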

Can I run a Docker container with CUDA 10 when host has CUDA 9?

I'm deploying an application in a Docker container that requires CUDA 10. This is necessary to run some of the underlying PyTorch functionality that the application uses.
However, the host server is running Docker CE 17 and nvidia-docker v1.0 with CUDA version 9, and I will not be able to upgrade the host.
I'm under the impression that I'm handcuffed to the v1 nvidia-docker runtime and the CUDA version available on the host.
Is there a way to run CUDA 10 on the container so I can leverage the functionality of this toolkit?
In the general case, any specific CUDA version will require a minimum GPU driver version. That is covered in places like here and here (table 1). So to use CUDA 9.0 you would need at least a GPU driver version that supports CUDA 9.0, such as an R384 driver. To use CUDA 10.0 you would need at least a GPU driver version that supports CUDA 10.0, such as an R410 driver.
The usage of containers doesn't fundamentally change this. If you want to use a container that has CUDA 10 code in it, your base machine needs a driver that supports CUDA 10.
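To see which driver the base machine actually has, nvidia-smi can report the installed driver version, which you can then compare against the minimum branch for the CUDA version you need (for example, R410 or later for CUDA 10.0):
# Report the installed driver version and GPU model for each GPU on the host
nvidia-smi --query-gpu=driver_version,name --format=csv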
NVIDIA has since begun publishing compatibility libraries that relax the above statements. These compatibility libraries are available but not installed by default with a CUDA toolkit install. They only work in certain cases, and they have certain requirements to be usable. The compatibility libraries are documented here.
One of the specific requirements for use of these compatibility libraries is that the GPU(s) in use must be Tesla-brand GPUs. GeForce, Quadro, Jetson, and Titan family GPUs are not supported by these compatibility libraries.
Furthermore, the libraries only work with certain combinations of CUDA toolkit versions and GPU driver versions installed on the base machine. This "compatibility matrix" is documented here (Table 3). Only the specific combinations of CUDA toolkit versions with installed driver versions will be usable for compatibility. To pick one example: if you wish to use CUDA 10.0 and your base machine has a Tesla GPU with an R396 driver installed, there is no compatibility support. In the same setup, however, if you wish to use CUDA 10.1, there is compatibility support.
If you have satisfied the requirements for compatibility usage, then the remaining step would be to install the compatibility libraries (or build your container from a base container that has the compatibility libraries already installed).
If CUDA was installed via the package manager, installing the compatibility libraries is simple (example on Ubuntu, installing the CUDA 10.1 compatibility libraries to match a CUDA 10.1 toolkit install):
sudo apt-get install cuda-compat-10.1
Make sure to match the version to the CUDA toolkit version that you are using (that you installed with the package manager method, or that was already installed in your container).
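As a quick sanity check, and assuming the default install prefix, the compat package places the forward-compatible driver libraries under the toolkit's compat directory:
# The compatibility libcuda should be listed here after the install
ls /usr/local/cuda-10.1/compat/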
This compatibility "path" only began in the CUDA 9.0 timeframe. Systems that are equipped with drivers that predate CUDA 9.0 will not be usable in any way for this compatibility path. There are also various functional limitations and restrictions, which are covered in the documentation.
When this "compatibility path" is correctly installed and in use, the overall system configuration can "appear" to be violating the rules indicated at the top of this answer. For example, a CUDA 10.1 application could possibly be running on a machine that had only an R396 driver installed.
For the specific question in view here, OP eventually indicated that the base machine had a Quadro GPU, so this "compatibility path" does not apply, and the only way to run e.g. a CUDA 10.0 container would be if a CUDA 10.0-capable driver (e.g. R410 or later) is installed in the base machine.
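As an aside, a quick way to see whether a given container setup will run at all, assuming nvidia-docker v1 as on the OP's host and the public nvidia/cuda base images (the exact image tag is an assumption):
# Fails with a driver/library version mismatch if the host driver is too old
# for the image's CUDA version
nvidia-docker run --rm nvidia/cuda:10.0-base nvidia-smi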

Is there a precompiled version of libimobiledevice that I can distribute with my application?

I'm making an application that uses libimobiledevice and is cross-platform (Mac, Windows, and Linux). I don't have access to all the platforms, so I can't compile it myself, and it's a pain to do so.
Are there pre-compiled versions of libimobiledevice for each platform that I can distribute with my application so the user doesn't have to install it manually?
It's relatively easy to provide a binary distribution of libimobiledevice for Windows and macOS.
For Windows and macOS, you can download pre-compiled versions of libimobiledevice at https://github.com/libimobiledevice-win32/imobiledevice-net (see the releases page). Admittedly, the repository name is a bit off. It does provide Windows and macOS binaries for libimobiledevice, and you don't have to use .NET if you just want to use the binaries.
The binaries are published via the Azure Pipelines build system, so you can fetch them at https://dev.azure.com/libimobiledevice-win32/imobiledevice-net/_build, including newer builds as they become available.
On Linux, it's a different story, because the various Linux distributions come with different versions of some of the dependencies of libimobiledevice (such as OpenSSL). You'll need a different binary package for most distributions of Linux.
There's a PPA you can use, https://launchpad.net/~quamotion/+archive/ubuntu/ppa, which provides compiled versions of libimobiledevice for Ubuntu 14.04, 16.04 and 18.04.
Most Linux distributions also include a libimobiledevice package, but be aware that it may be outdated.
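For the PPA route, the usual steps would look like the following; the package names here (libimobiledevice6 for the library, libimobiledevice-utils for the tools) follow Ubuntu's naming and may differ per release:
sudo add-apt-repository ppa:quamotion/ppa
sudo apt-get update
# Package names follow Ubuntu conventions; verify them for your release
sudo apt-get install libimobiledevice6 libimobiledevice-utils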

opkg on BeagleBone White A6 not updating

I am using a BeagleBone A6. I have installed the TI SDK prebuilt binaries and am using the TI Arago Project filesystem.
I want to install the ntp and gpsd packages for my application.
I am running opkg install ntp, but it shows an error:
unknown package ntp.
opkg install cmd: Cannot install package ntp.
I also tried opkg update, but nothing was updated.
I have tried pinging Google's IP address, and it was reachable.
Please suggest how to resolve this opkg and ntp issue.
Make sure /etc/opkg/base-feeds.conf (or /etc/opkg/arago-armv7a-feed.conf) points at the correct server (see Updating Existing Images). Also make sure the ntp package exists on the server; for example, there is no ntp package in the 2009.11 release.
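A feed line in those files takes the following form; the feed name and URL below are placeholders, since the real values depend on your Arago/SDK release:
# Placeholder feed entry; substitute the name and server for your release
src/gz base-feed http://example.com/arago/armv7a
After correcting the feed, re-run opkg update so the package lists are fetched again.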
If you already have the package (e.g. ntp.v123.ipk) downloaded on the BeagleBone, you can install it with:
opkg install ntp.v123.ipk
By the way, it is not that hard to rebuild the Arago image from scratch. In that case you can build whatever packages you want (see Setting Up Build Environment).
