I'm creating a Docker image to run my Node.js app, based on the ubuntu:latest image, and I need the Intel Integrated Performance Primitives (IPP) library installed in it.
I've looked online for how to do that but haven't found anything useful.
Any ideas?
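For reference, the closest thing I've found is the Intel oneAPI apt repository, which carries IPP packages. Something along these lines might work, but I'm not sure it's the recommended route; the repository URL and the intel-oneapi-ipp-devel package name are what I believe Intel currently documents, so double-check them:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y wget gnupg && \
    wget -qO- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
      | gpg --dearmor > /usr/share/keyrings/oneapi-archive-keyring.gpg && \
    echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" \
      > /etc/apt/sources.list.d/oneAPI.list && \
    apt-get update && apt-get install -y intel-oneapi-ipp-devel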
So we downloaded a Docker image that we need to run on my machine and a friend's machine. I am on a Mac and he is on a Linux box. We are not techie people, so please forgive this naive question :)
When building the image with docker build -t app-name/site.com . we ran into some very cryptic g++ errors which sent us on a wild goose chase and a vicious circle of googling and debugging. In the end we figured out that my friend's machine was running Ubuntu 18.04, but the Dockerfile said FROM ubuntu:16.04. When we updated this to FROM ubuntu:18.04, his docker build succeeded and he was able to launch the app.
So now we are trying to figure out how to get this running on my Mac. Does anyone know what we should change on this line to get it running on a Mac? I am running macOS Catalina.
Any help is much appreciated!
Docker was originally developed for Linux, but it works fine on a Mac as well: Docker Desktop runs your containers inside a Linux VM.
Here is one example of how to get a Linux container on your Mac. Run this:
docker pull ubuntu
This pulls an Ubuntu image, so you can build an Ubuntu environment on your Mac.
FYI, you can also push your Docker image to Docker Hub and then pull it on your Mac.
Refer to: https://ropenscilabs.github.io/r-docker-tutorial/04-Dockerhub.html
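In your case the same Dockerfile (with FROM ubuntu:18.04) should build unchanged on the Mac, since Docker Desktop runs it inside that Linux VM. Roughly something like this; the port number is just an example, so use whatever your app listens on:
docker build -t app-name/site.com .
docker run -p 3000:3000 app-name/site.com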
I have an outdated neural-network training script for Python 2.7 which uses Keras 2.1 on top of TensorFlow 1.4, and I want to train it on my NVIDIA GPU; I have the CUDA SDK 10.2 installed on Linux. I thought Docker Hub was exactly for publishing frozen software stacks that just work, but it seems there is no way to find a container with this specific software set.
I know Docker >= 19.03 has native GPU support, and that the nvidia-docker utility provides a CUDA-agnostic layer; but the problem is that I cannot install keras and tensorflow-gpu at the required versions (I cannot find the wheels), and this legacy script does not work with other versions.
Where did you get the idea that Docker Hub hosts images with all possible library combinations?
If you cannot find an image that suits you, simply build your own.
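For example, something along these lines might work, assuming the old nvidia/cuda:8.0-cudnn6 tag and the Python 2.7 wheels for tensorflow-gpu 1.4.0 / keras 2.1.x are still downloadable (train.py is a placeholder for your script). The container brings its own CUDA user-space libraries, so the host only needs a recent driver plus nvidia-docker or --gpus:
FROM nvidia/cuda:8.0-cudnn6-runtime-ubuntu16.04
RUN apt-get update && apt-get install -y python python-pip && \
    pip install tensorflow-gpu==1.4.0 keras==2.1.6
COPY train.py /app/train.py
CMD ["python", "/app/train.py"]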
System: I have Windows Server 2019 installed on a machine with an NVIDIA Tesla T4 Tensor Core GPU.
Goal: read real-time video streams from an IP camera and process them frame by frame. I want to leverage the NVIDIA DeepStream SDK, but the issue is that it isn't available for Windows. So I'm thinking along Docker lines, but since I am very new to Docker containers, I would like to know whether I can install Docker on Windows and run the DeepStream Docker image on it.
If not, is there any way I can run this Linux-based DeepStream Docker image on Windows? Any help is greatly appreciated.
I have never worked with Windows Server before, but it should be the same as running Docker in a Linux VM.
First, you need to pull the DeepStream Docker image:
docker pull nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton
and then try to run the sample apps provided in the Docker image.
Refer to this for the procedure.
If you are interested in Python apps, you can check the sample apps here.
Note: make sure you can access a display from inside the container, because DeepStream's sample apps use eglsink, which will try to open a display window on your screen. Alternatively, you can change the sink type to filesink if you want to save the output to a file.
Refer to this for the available plugins and their attributes.
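On a Linux host, launching the container with GPU and display access would look roughly like this (this is a sketch using X11 forwarding; adjust it to your setup):
xhost +local:docker
docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/deepstream:5.0-dp-20.04-triton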
According to a post on the NVIDIA forum, Windows is not supported.
As an alternative, I wonder if anyone has used the NVIDIA Graph Composer on Windows.
I have built a docker image with several packages loaded for a development environment.
I plan to use the image to make porting the environment to various machines simple.
In a container of the image, I can build my binary (using cmake & g++) on any of the machines I have loaded the docker image to. The binary has executed well within such containers on most machines.
But, on one machine executing the binary in the container results in a core dump with "illegal instruction" reported.
The crash happens on a machine with Intel Xeon CPUs. But it runs fine on another Xeon machine. It also runs fine on an AMD Ryzen machine.
I haven't tried executing the binary outside the container because the environment is so hard to set up, which is why I'm using Docker in the first place.
I just wonder if anyone has seen this happen and how I should start trying to resolve it.
If it helps and anyone is familiar with it, the base image I pulled from Docker Hub (and added other packages to) is floopcz/tensorflow_cc:ubuntu-shared. It is an Ubuntu image with the TensorFlow C++ API built for CPU-only use (no CUDA).
The binary that's crashing does attempt to open a Tensorflow session before doing anything else.
I'm running Docker 19.03 on Ubuntu 16.04 and 18.04. The image itself is based on Ubuntu 18.04.
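One common cause of an illegal-instruction crash with TensorFlow builds is a CPU instruction-set mismatch: if the library was compiled with AVX/AVX2 enabled and the failing Xeon predates those extensions, the process dies exactly like this. A quick diagnostic sketch, run on both machines or inside the container:
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u   # lists the AVX-related flags the CPU advertises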
This was a huge surprise for me:
Today, using Docker for Mac (18.03.1-ce-mac65), I ran a Debian Stretch image. Inside the container I mounted the latest Raspbian Stretch image (2018-04-18-raspbian-stretch-lite) using mount, and then chrooted into the mounted Raspbian filesystem.
This is where it got weird. I was able to use apt (without any special modifications) to install software into this mounted filesystem.
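For context, mounting and entering the image looked roughly like this (the losetup step, loop device, and mount point are illustrative; I may have done it slightly differently):
losetup -fP --show 2018-04-18-raspbian-stretch-lite.img   # e.g. attaches as /dev/loop0 with partitions loop0p1/loop0p2
mount /dev/loop0p2 /mnt/raspbian                          # the second partition is the Raspbian root filesystem
chroot /mnt/raspbian /bin/bash
apt update && apt install -y vim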
Running:
dpkg --print-architecture
returned: armhf
and the software I installed (vim) worked like a charm.
I was even able to compile a simple program using gcc and run it.
But, I need to know! How is this possible?
According to Docker:
Docker for Mac provides binfmt_misc multi architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.
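For example, something like this should work out of the box on Docker for Mac (the image tag here is just an example):
docker run --rm arm32v7/debian:stretch-slim uname -m   # prints armv7l even though the host is x86_64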
EDIT
On Linux, you can install qemu-user-static and then follow this git repo to get cross-architecture support!
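Roughly, assuming the repo in question is the multiarch/qemu-user-static helper image (which registers the binfmt handlers for you), the setup looks like this:
sudo apt-get install -y qemu-user-static
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker run --rm arm32v7/debian:stretch-slim dpkg --print-architecture   # should print armhf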