I have seen the Caffe installation instructions for Mac, but I have a question: if my Mac does not have a GPU, do I have no chance of using GPU mode, so that I have to use CPU-only? Or do I have a chance of using a (virtual!) GPU via the NVIDIA web driver?
Moreover, can I run DIGITS on my Mac? When I try to download it, there are no options for a Mac download; it is only for Ubuntu!
I am very confused about these questions! Can you please clarify them for me?
The difference in architecture between a CPU and a GPU does not allow a simple transformation of code written for one into code for the other. GPU drivers are written specifically for the GPU architecture and cannot easily be virtualized. On the other hand, some software supports both; this includes OpenGL and Caffe (http://caffe.berkeleyvision.org/). NVIDIA DIGITS is based on Caffe and can therefore work without a dedicated GPU (here is a thread on how to install it on Macs: https://github.com/NVIDIA/DIGITS/issues/88).
According to https://www.github.com/NVIDIA/DIGITS/issues/251, CUDA cannot run on computers that do not have a dedicated NVIDIA GPU, but according to "How to run my CUDA application on ATI or Intel card in software mode?" there is a program, gpuocelot, that receives CUDA instructions and can run them on NVIDIA GPUs, AMD GPUs, and x86 CPUs.
In scientific shared computing, separate programs are written for different devices; e.g., Einstein@Home has four separate programs to search for gravitational waves: CPU, NVIDIA GPU (CUDA), AMD GPU, and ARM.
To make DIGITS work you need to
build Caffe with CPU_ONLY and tell DIGITS not to use any GPUs by
running digits-devserver with the --config flag
(https://github.com/NVIDIA/caffe/blob/v0.13.2/Makefile.config.example#L9-L10, https://github.com/NVIDIA/DIGITS/issues/251).
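The DIGITS side of that is a one-line command (the exact prompt wording may differ between DIGITS versions, so treat this as illustrative):

./digits-devserver --config   # when prompted which GPUs to use, choose "N" for none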
Another possibility:
you can still use the --config flag with the web installer. Try this:
./runme.sh --config. Choose "N" to select none.
Yet another possibility:
Here is how you can choose CPU or GPU. Within the
caffe folder there is a Makefile.config.example file. Copy the
contents of this file into a new file and rename it
"Makefile.config". If you want to use the CPU, then:
1. comment out USE_CUDNN := 1 in the "Makefile.config" file,
2. uncomment CPU_ONLY := 1,
3. issue the make all command again within the caffe folder (see the sketch below).
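Putting those steps together (a sketch; the comments around the same switches in Makefile.config explain them too):

cp Makefile.config.example Makefile.config
# in Makefile.config:
#   comment out:  USE_CUDNN := 1
#   uncomment:    CPU_ONLY := 1
make clean
make all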
And if nothing else helps, you can run the procedure twice, because that worked for someone at the end of the thread.
Short version
I would like to know the technical reasons why Docker images need to be created for multiple architectures. Also, it is not clear whether the point is to create an image for each CPU architecture or for each OS. Shouldn't the OS abstract away the architecture?
Long version
I can understand why the Docker Engine must be ported to multiple architectures. It is a piece of software that will interact with the OS, make system calls, and ultimately it is just code that is represented as a sequence of instructions within a particular instruction set, for a particular architecture. So the Docker Engine must be ported to multiple OS/architectures much like, let's say, Microsoft Word would have to be ported.
The same would apply to, say, the JVM, or to VirtualBox.
But, unlike with Docker, software written for the JVM on Windows will run on Linux. The JVM abstracts away the differences of the underlying OS/architecture and runs the same code on both platforms.
Why isn't that the case with Docker images? Why can't the Docker Engine just abstract the differences, and provide a common interface, so the image itself wouldn't need to be compatible with a specific OS/architecture?
Is this a decision (like "let's make different images per architecture because it is better for reason X"), or a consequence of how Docker works (like "we need to do it this way because Docker requires Y")?
Note
I'm not crying "omg, why??". This is not a rant or criticism; I'm just looking for a technical explanation of the need for different images for different architectures.
I'm not asking how to create a multi-architecture image.
I'm not looking for an answer like "multi-architecture images are needed so you can run your images on various platforms", which answers "what for?", but not "why is that needed?" (which is my question).
Besides that, when you see an image, it usually has an os/arch in the digest, like this:
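Something like this, for example (an illustrative excerpt of the platform field as shown by docker manifest inspect):

"platform": {
  "architecture": "amd64",
  "os": "linux"
}

i.e. the familiar linux/amd64 string names both an OS and a CPU architecture.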
What exactly is the image targeting: the OS, the architecture, or both? Shouldn't the OS abstract away the underlying architecture?
edit: I'm starting to assume that the need for different images per architecture is along these lines: the image will contain applications inside it. Say it contains the Go compiler. The Go compiler itself is a binary that must have been compiled for different architectures. The image for x86-64 will contain the Go compiler compiled for x86-64, and so on. Is this correct? If this is correct, is this the only reason?
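You can observe this directly. Assuming QEMU binfmt emulation is installed (so foreign-architecture containers can run at all; the image tag here is just an example), the same tag resolves to a different, architecture-specific binary per platform:

docker run --rm --platform linux/amd64 golang:1.21 go env GOARCH   # prints amd64
docker run --rm --platform linux/arm64 golang:1.21 go env GOARCH   # prints arm64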
Why can't the Docker Engine just abstract the differences, and provide a common interface
Performance would be a major factor. Consider how slow Cygwin is for some things when providing a POSIX API on top of Windows by emulating some POSIX things that don't map directly to the Windows API. (e.g. fork() / exec separately, instead of CreateProcess).
And that's just source compatibility; the resulting binaries are specific to Cygwin on Windows. It's even worse if you want to do that at runtime (binary compat instead of source compat).
There's also the amount of complexity Docker would need in order to provide an efficient, portable JIT-compiling VM on top of various OSes, especially across CPU ISAs like x86-64 vs. AArch64 that don't even share common machine code.
If Docker had gone this route, it would really just be re-inventing a JVM or .NET CLR bytecode-based VM.
Or more likely, instead of reinventing that wheel, it would just use an existing VM and add image management on top of that. But then it couldn't work with native programs written in C, unless it transpiled them to Java or CLR bytecode.
Although the promise of Docker is the elimination of differences when moving software between machines, you'll still face the problem that Docker runs with the host machine's CPU architecture, and that boundary can't be crossed in Docker.
Neither Docker nor a virtual machine abstracts the CPU to enable full cross-compatibility.
Emulators do. If both Docker and VMs ran on emulators, they would be less performant than they are today.
The docker buildx command, together with the --build-arg ARCH flag, takes advantage of the QEMU emulator, emulating the full system for the target architecture during a build. The downside of emulation is that it runs much slower than native builds.
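For example (a sketch; the image name is a placeholder, and this assumes buildx and the QEMU binfmt handlers are installed):

docker buildx build --platform linux/amd64,linux/arm64 -t myuser/myimage:latest --push .

Each platform's build runs natively or under emulation as needed, and the results are pushed as a single multi-architecture image.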
I am using Andrej Karpathy's code to train an RNN. When I give the flag "-opencl 1" to tell it to use an OpenCL GPU, it uses the integrated graphics and nothing else.
I tried reinstalling cltorch and using different flags, but nothing has seemed to work. To add to this, I can't see whether my GPU is under load, because I'm on macOS. I looked through the code and couldn't find any errors, but I have little experience with Lua.
Code can be found here: https://github.com/karpathy/char-rnn.
I expect that with the flag "-opencl 1" or something similar, my Radeon Pro 560X will be used to train on my dataset, rather than my CPU or integrated graphics.
When reading the instructions I thought just the -opencl flag needed to be used, but it turns out -gpuid needs to be used in conjunction with it as well. This was also after a reinstall of Torch and the OpenCL drivers, so that could have been a problem as well.
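So the working invocation looks something like this (the device index is machine-specific; here 1 is assumed to be the discrete Radeon rather than the integrated GPU):

th train.lua -data_dir data/tinyshakespeare -opencl 1 -gpuid 1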
I have written a DLL in C++ which uses OpenCV; it is called by LabVIEW. I found I can easily move it to other systems and use it with LabVIEW by just including the necessary OpenCV DLLs in the folder of the actual DLL.
If I wrote a DLL that uses the OpenCV GPU capability on the first computer, could I transfer it as easily, or would I need to recompile OpenCV for that particular system?
The compute capability differs from GPU to GPU. When you build OpenCV with CUDA, you build it for a range of compute capabilities and a particular GPU architecture. As long as the other machine has a GPU of the same architecture, your code will work fine, but if it differs from the build configuration, that will trigger OpenCV GPU errors.
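That range is controlled through CMake when OpenCV is built; a sketch (the compute-capability values are examples, so pick the ones matching the GPUs you need to support):

cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="3.0 3.5 5.0 5.2" -D CUDA_ARCH_PTX="5.2" ..
make -j8

Including PTX for the newest architecture lets the driver JIT-compile kernels for GPUs newer than any in the binary list.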
I built the project with the CUDA module. Well, I think I did something incorrectly, because the GPU load during training (8000 positives and 3000 negatives) is 6% (NVIDIA Quadro). As for the CPU: during the precalculation stage the CPU load is 100% (Core i7), but then it falls to 12% and keeps working at that level. Can you give me some advice on what I should do? I'm new to OpenCV and want to learn it.
EDIT
There is no code written by me. It is a module .exe of the OpenCV library.
OpenCV doesn't do implicit CUDA optimization. opencv_gpu is a separate module, and users have to use it explicitly to enable CUDA optimization. opencv_traincascade doesn't use the gpu module, so it doesn't run on the GPU.
You can find more information in reference manual: http://docs.opencv.org/2.4.6/modules/gpu/doc/gpu.html
and in gpu samples: https://github.com/Itseez/opencv/tree/2.4/samples/gpu
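To illustrate the "explicit" part, here is a minimal sketch using the 2.4-era gpu module (it assumes OpenCV was built with CUDA and that input.png exists):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Nothing runs on the GPU unless you use cv::gpu types explicitly.
    if (cv::gpu::getCudaEnabledDeviceCount() == 0)
        return 1;  // no CUDA-capable device found

    cv::Mat src = cv::imread("input.png", CV_LOAD_IMAGE_GRAYSCALE);

    cv::gpu::GpuMat d_src, d_dst;
    d_src.upload(src);                                             // host -> device
    cv::gpu::threshold(d_src, d_dst, 128, 255, CV_THRESH_BINARY);  // executes on the GPU
    cv::Mat dst;
    d_dst.download(dst);                                           // device -> host

    cv::imwrite("output.png", dst);
    return 0;
}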
Question: How do I make a single makefile that compiles for several different systems, environments, and sets of libraries at once?
Info:
I'm a student, and as such most of my work is done for the sake of learning how these things work. Right now I'm building a game engine from the ground up. I'd like to make it cross-platform in terms of OS, but also across different environments. My target environments are both 32- and 64-bit (my desktop as well as my netbook), with a dedicated graphics card and with Mesa, and on Linux and Windows. So overall it should output 8 binaries.
I'm still very new to make, as well as to the whole concept of cross-compiling. I imagine that the process of compiling more than one binary isn't hard, but where I'm kind of stuck is: how do I get it to link the right libraries? The Ubuntu Linux vs. the WinAPI libraries, 32-bit vs. 64-bit libraries, etc. Is it even possible to get libraries in such a manner?
If you need me to clarify further I can. Thanks.
Addendum: Basically I want to know how to compile against headers for drivers I may not have. For example, I want to compile all the files on my netbook, including the OpenCL ones; I don't want to run them, as my netbook has no GPU, I just want to compile. Conversely, I want to use my desktop to compile for my netbook, which uses Ocelot and Mesa for its GPU dealings, even though my desktop has neither Mesa nor Ocelot on it. That sort of thing. Thanks.
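One common pattern is a single GNU Makefile with one sub-target per platform and computed variable names, so each target picks its own toolchain and library list. A minimal sketch (the compiler and library names are placeholders; the cross-compilers and the foreign headers/stub libraries must be installed separately, and recipe lines must be indented with a tab):

# One build per target; $(CXX_$@) resolves to that target's compiler.
TARGETS := linux32 linux64 win32 win64

CXX_linux32 := g++ -m32
CXX_linux64 := g++ -m64
CXX_win32   := i686-w64-mingw32-g++
CXX_win64   := x86_64-w64-mingw32-g++

LIBS_linux32 := -lGL
LIBS_linux64 := -lGL
LIBS_win32   := -lopengl32
LIBS_win64   := -lopengl32

SRC := $(wildcard src/*.cpp)

all: $(TARGETS)

$(TARGETS):
	mkdir -p build/$@
	$(CXX_$@) $(SRC) $(LIBS_$@) -o build/$@/engine

.PHONY: all $(TARGETS)

As for headers without drivers: you only need headers and import/stub libraries at compile and link time, not the driver itself, so installing the development packages (e.g. the OpenCL headers or Mesa dev packages) on each build machine is enough.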