How to install Torch on Windows 8.1? - lua

Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
Q:
Is there a way to install torch on MS Windows 8.1?

I got it installed and running on Windows (although not 8.1, but I don't expect the process to be different) by following the instructions in this repository; it's now deprecated, but it wasn't a few months ago when I built it. The new instructions point to the torch/torch7 repository, but it has a different structure and I haven't been able to build it on Windows yet.
There are instructions on how to install Torch7 from luarocks, but you may run into issues on Windows as well; I haven't tried this process. It seems like there is no official support for Windows yet, but some work is being done by contributors (there is a link to a pull request in that thread).
Based on my experience, compiling that deprecated repo may be your best option on Windows at the moment.
Update (7/9/2015): I've recently submitted several changes that fix compilation issues with MinGW, so you may try the most recent version of torch7 and follow the build instructions in the ticket. Note that the changes only apply to the core library; additional libraries may need similar changes.

This webpage hosted by New York University recommends installing a Linux virtual machine in order to run Torch7 on Windows through Linux. Another option would of course be to install a Linux distribution alongside Windows 8.
Otherwise, if you don't mind running an older version of Torch, there is a Windows installer for Torch5 at SourceForge.

I think that to use a GPU from inside the virtual machine, the processor and the motherboard must support not only VT-x but also VT-d.
But the question is: if I use a CPU with VT-d support, do you think there will be a significant loss in PCIe efficiency?
From what I understand, VT-d matters if I want to give virtual machines direct access to my hardware components (like PCI Express cards), for example attaching the graphics card directly to the VM instead of the host machine. Doesn't that mean the PCIe efficiency will be the same as if it were the host?

Related

Is it possible to install Docker Desktop on VMware ESXi?

Is it possible to install Docker Desktop on a Windows 10 virtual machine (VMware) running on VMware ESXi?
I am trying to install Docker Desktop on my VMware virtual machine with Windows 10.
I installed WSL2 support, but at the end of the installation Docker crashes with the following error:
Docker Desktop 4.0.1
Installation failed
Component CommunityInstaller.ServiceAction failed to start services: The service did not respond to the start or control request in a timely fashion
I have done several tests but I cannot avoid this crash in any way.
The Operating System is a build that meets the minimum requirements to install Docker.
However, I noticed that Hyper-V is not enabled in the Windows features. Can this be a problem?
I think it may be a nested virtualization problem, because I am installing Docker inside a VM. Is that possible?
How can I solve this? (Or do you think I would avoid this problem with Linux virtual machines?)
Does your host machine have all the advanced flags for 'efficient' nested virtualization? I wouldn't really recommend a third-layer install of Docker, as the final container is then virtual, on paravirtual (WSL2), on virtual (Hyper-V), on virtual (ESXi). I strongly suspect the performance will be terrible.
And yes: you need Hyper-V, it's still a requirement. Since you say it's not available in the features, I assume you're running Windows 10 Home? Then sorry, you need at least Windows 10 Pro for Hyper-V support.
But as you're running an ESXi host, go the better-performing way: install any Linux distro of your choice and install Docker there. If you want to use it with Visual Studio etc., you can still debug remotely, and it performs better than on an even deeper nested virtualized Windows/WSL2. And by the way, if it's about having a GUI, simply install the free Visual Studio Code; it offers Docker tools with many configuration and monitoring options in a GUI, without forcing you into such deep nesting.
Yes, it's definitely possible. I'd first check that hardware-assisted virtualization (if available) is enabled and exposed to the VM. If so, make sure you've satisfied the rest of the requirements for the WSL2 backend deployment. If you're still having issues, try an older version and then upgrade from there.

Is there a way to install just Mosquitto Pub?

I'm working on a Linux system (an OpenWRT-based build) that has very little storage (<3 MB) and no active internet connection, but I need to be able to publish some outputs to an MQTT broker. Is there a way for me to install just the publisher part of Mosquitto to save space, or another way to solve this issue?
Without a LOT more information about the system this question is basically impossible to answer; e.g. we have no idea what OS is being used...
But for a system with such tightly constrained storage, your best option will probably be to build the components you need from source; that way you have complete control over what gets installed.
You could build the mosquitto tools and then strip them before copying just the binary you want (and any required libraries) to the system.
If you install from pretty much any Linux package management system, you are likely to get all the tools plus man pages, which will inflate the install footprint.
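If even a stripped mosquitto_pub is more than you want, another option is a tiny publish-only client written directly against libmosquitto (the same library mosquitto_pub uses). A minimal sketch follows; the broker address, port, topic, and payload are placeholders you would replace for your setup:

```c
/* Minimal publish-only MQTT client using libmosquitto.
 * Build on a development host (or with the OpenWRT toolchain):
 *   gcc -o tiny_pub tiny_pub.c -lmosquitto
 */
#include <mosquitto.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Broker address, port, topic, and payload are placeholders. */
    const char *host = "192.168.1.10";
    const int port = 1883;
    const char *topic = "sensors/device1/status";
    const char *payload = "hello";
    struct mosquitto *mosq;
    int rc;

    mosquitto_lib_init();

    /* NULL client id: let the library generate one (requires a clean session). */
    mosq = mosquitto_new(NULL, true, NULL);
    if (!mosq) {
        fprintf(stderr, "mosquitto_new failed\n");
        mosquitto_lib_cleanup();
        return 1;
    }

    rc = mosquitto_connect(mosq, host, port, 60 /* keepalive, seconds */);
    if (rc == MOSQ_ERR_SUCCESS) {
        rc = mosquitto_publish(mosq, NULL, topic,
                               (int)strlen(payload), payload,
                               0 /* QoS */, false /* retain */);
        /* Run the network loop once so the message is actually flushed. */
        mosquitto_loop(mosq, 1000, 1);
        mosquitto_disconnect(mosq);
    }

    if (rc != MOSQ_ERR_SUCCESS)
        fprintf(stderr, "error: %s\n", mosquitto_strerror(rc));

    mosquitto_destroy(mosq);
    mosquitto_lib_cleanup();
    return rc == MOSQ_ERR_SUCCESS ? 0 : 1;
}
```

After stripping, a client like this is only a few kilobytes, so the on-device footprint is essentially just libmosquitto itself, which mosquitto_pub would need anyway.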
EDIT:
But all that said, a quick look at the available packages for OpenWRT implies that the existing packages that include both the broker and the command line tools would use about 129 KB when installed (99 + 30; less if you don't need SSL: 85 + 28), assuming the prerequisites are already installed:
https://openwrt.org/packages/table/start?dataflt%5BDescription_wiki*%7E%5D=mosquitto

Looking for Coral M.2 Accelerator + RHEL/CentOS 8 Drivers on x86_64

I'm a little lost (and admit that I'm pretty green at all this). I am looking for the drivers for the M.2 Accelerator for RHEL/CentOS 8 on x86_64 architecture. Previously I was successful installing the drivers under Ubuntu by following the Getting Started guide on the Coral website (https://coral.ai/docs/m2/get-started). But I need to run CentOS 8 for other reasons. So I know that the board works and that it can be supported in Linux, but I don't know how to translate the instructions to CentOS.
My M.2 board is connected to my server using a M.2 to PCIe adapter.
Thanks in advance!
ben
I also believe that you should be able to get this working.
A couple of things you'll need:
libedgetpu.so - You can download the latest runtime from here: https://github.com/google-coral/edgetpu/tree/master/libedgetpu/direct/k8
apex/gasket modules - This is the kernel module required for talking to the PCIe device. This is the tricky part: first you'll need to make sure that you don't already have an apex/gasket module installed. If you do, blacklist it and load our modules. Our modules cannot be installed with apt-get since you are on CentOS, so your only option is to download the source code and compile it yourself: https://coral.googlesource.com/linux-imx/+/refs/heads/release-day/drivers/staging/gasket
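Once the modules are built and loaded, a quick sanity check is to look for the device node the driver creates; on other distributions the board shows up as /dev/apex_0 (that path is an assumption here, adjust if your build names it differently). A small C sketch of that check:

```c
/* Check whether the gasket/apex driver has created its device node. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Path assumed from the Coral getting-started guide; adjust if your
     * gasket/apex build exposes the device under a different name. */
    const char *node = "/dev/apex_0";

    if (access(node, F_OK) == 0) {
        printf("%s present: gasket/apex driver appears to be loaded\n", node);
        return 0;
    }
    perror(node);
    fprintf(stderr, "device node missing: check lsmod/dmesg for the gasket and apex modules\n");
    return 1;
}
```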
Cheers

Docker on embedded systems, why not?

There was a project thrown my way recently that involves the orchestration of several (Linux capable) embedded devices, deploying software to them, and allowing for the applications to be updated when the code base updates in a git repo.
The initial thought was to make a standard image for each device, and I set out to install Docker on a UDOO Quad and an Intel Edison to start, but without any success so far.
My thinking is that installing Docker on embedded devices seems like a good idea, but if that were the case, surely it would have been ported by now. The only group out there that seems to be making these efforts is Resin.io.
Is there something I'm missing, or is there a clear reason why Docker doesn't make sense on embedded devices? If there isn't a reason, and it does make sense to run Docker on embedded systems, is there something I've overlooked out there: are there any sources of discussion on porting, or how-to's that cover this?
I have considered running Docker on embedded devices (a MIPS system), but didn't go that way. There are some problems with it, in my humble view:
Docker is implemented in Golang. There is currently no available toolchain for MIPS to compile Go, so you would need to create the toolchain yourself using gcc-go.
Docker is larger than LXC. On a desktop computer this is not a problem, but embedded devices have limited flash storage.
Docker uses some fairly recent Linux kernel features. The kernel on embedded devices is often not that new, and back-porting is needed to make it work.
A Docker image has to be built for the same architecture as the runtime environment. That means if you want to run a Docker container on a Raspberry Pi, the image has to be built on an ARM system. QEMU can be used to build Docker images in the cloud, but it doesn't support all CPU architectures used in embedded systems (for example, it currently doesn't support MIPS).
In the end, LXC was chosen for the specific task of running a container on the embedded device. It has limited features compared to Docker, but it currently suits the requirements of the project.
As of 2019, I would like to update this answer, since I did port Docker to an embedded system with an ARM CPU. At the price of flash and memory usage, Docker gives you container management, image management, and many ready-to-run images from Docker Hub. So the decision is a balance between cost and features.
Here is an update for 2018:
You can work with Docker on embedded devices such as the Raspberry Pi and Orange Pi quite easily now because of advancements in the Raspbian and Armbian operating system images. Specifically, both types of devices and their respective OS images now ship kernels recent enough to install Docker without any problems (at least version 3.10, though both now offer 4.x+ versions).
Your desire for faster rates of change can definitely be realized by using embedded Docker. I can say from experience that I have tested and regularly run the approach you describe. Basically, you start with a base operating system image such as Raspbian or Armbian, tweak that operating system enough that it's secure and has Docker installed, and then you use Docker for handling development iteration and application updates.
As an aside, if you are interested in running Docker on embedded Linux devices, then I recommend you check out a free, open-source, MIT-licensed command line tool I wrote to help developers work with embedded Docker on multiple devices at once: https://github.com/ForwardLoopLLC/floopcli .
Even if you are not interested in the tool itself, the documentation for the tool describes several patterns for working with Dockerized applications across multiple devices in multiple languages: https://docs.forward-loop.com/floopcli/master/index.html . The materials there should serve as a starting point for porting applications to Docker and then deploying them on embedded devices. The documentation also addresses some embedded device subtleties, such as differences between ARMv6 and ARMv7. Hopefully this helps you get started!
There is a great article on LinkedIn describing one developer's experience with that:
https://www.linkedin.com/pulse/whale-jar-when-running-docker-embedded-linux-good-thing-fletcher#pulse-comments-urn:li:article:7736487387895237975
Embedded systems often have a very slow rate of change. Docker works well with a minimal base build and then layering on top. If you are willing to accept the overhead of running Docker on a minimal embedded system in exchange for Docker's build system and steady rate of change, then you could explore it.

How to set up Vicidial on a local system?

I want to set up Vicidial on my local computer/server. Is there any information or documentation for that?
I googled but couldn't find an exact resource.
These are the links I found:
Link 1
Link 2
Thanks in advance.
Vicidial is an Open Source Predictive AutoDialer based on Asterisk with PHP/MySQL/Perl coding.
Installation of Vicidial is only viable on a Linux machine.
There are several locations with Scratch Install instructions for Ubuntu and CentOS. In fact, the Vicidial Wiki has a list of a few of them: http://wiki.vicidial.org/index.php/VICI:Installation
Most are quite old, except for Goautodial.com, which has instructions for CentOS installation by adding the goautodial repositories and then just upgrading the OS to get all the necessary packages.
If you're not using CentOS or Ubuntu and none of those instructions work for your purpose, beware that Vicidial installation is not easy. It is MUCH better to dedicate a machine to the purpose by installing from Vicibox.com's .iso image, which will wipe the computer clean. The installation becomes easy and then you need only argue with configuration.
If you cannot dedicate a machine to this purpose, you should take the earlier suggestion of a virtual server (vSphere and VirtualBox both work for Vicibox.com's .iso installer), but beware that you'll only be able to have one or two agents on the virtual dialer at the most. Luckily, if you do get the virtual Vicidial working, it is possible to back up the virtual server's database and install it on a hardware-based server later, bringing everything with you without having to do it all over.
