Apparently, in order to import an image into IBM InfoSphere Streams (I am currently using the VMware-streams4.1.1 image), I have to install the OpenCV libraries, and I have followed this guide: http://ibmstreams.github.io/streamsx.opencv/doc/html/InstallingToolkit.html
I get to the point where I have to run CMake. A makefile is generated, but nothing is built when I type make on the command line:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$OPENCV_INSTALL_PATH -D OPENCV_EXTRA_C_FLAGS="-DHAVE_CAMV4L -DHAVE_CAMV4L2" -D WITH_OPENCL=OFF ../opencv-3.1.0
make
sudo make install
Is it necessary to install OpenCV to load images into IBM Streams, or are there built-in libraries already there? There does not seem to be much information regarding the installation of these libraries.
The main task here is to compute the average of an image's pixels in IBM Streams.
No, IBM Streams does not have any built-in image-processing libraries. However, there is a toolkit for Streams that shows how OpenCV functions can be used in Streams operators. The toolkit contains several dozen examples of operators and applications for processing video streams. It depends upon the OpenCV libraries, which must be installed separately, and those in turn depend upon the FFmpeg libraries, which must also be installed.
There are step-by-step instructions for installing the FFmpeg and OpenCV libraries here:
http://ibmstreams.github.io/streamsx.opencv/doc/html/InstallingToolkit.html
After installing the toolkit and libraries, there are suggestions for verifying that everything was installed correctly by executing some of the sample applications here:
http://ibmstreams.github.io/streamsx.opencv/doc/html/UsingToolkit.html
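As for the averaging itself, that part is simple once OpenCV is installed. Here is a minimal sketch of the computation in Python with OpenCV, outside of Streams, as a sanity check (the file name is a placeholder):

import cv2

img = cv2.imread("input.jpg")            # loads the image as a BGR array
if img is None:
    raise IOError("could not read input.jpg")
b, g, r, _ = cv2.mean(img)               # per-channel averages (4th value unused)
print("average pixel (B, G, R):", b, g, r)
print("overall average:", img.mean())    # one scalar across all channels

Inside a Streams application you would do the equivalent in a custom operator, in the style of the toolkit's samples.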
Related
I have an outdated neural-network training script (Python 2.7) that uses Keras 2.1 on top of TensorFlow 1.4, and I want it to be trained on my NVIDIA GPU; I have the CUDA SDK 10.2 installed on Linux. I thought Docker Hub was exactly for publishing frozen software packages that just work, but it seems there is no way to find a container with a specific software set.
I know Docker >= 19.03 has native GPU support, and that the nvidia-docker utility has a CUDA-agnostic layer; but the problem is that I cannot install both keras-gpu and tensorflow-gpu at the required versions, I cannot find wheels, and this legacy script does not work with other versions.
Where did you get the idea that Docker Hub hosts images with all possible library combinations?
If you cannot find an image that suits you, simply build your own.
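As a starting point, here is a minimal sketch of such a Dockerfile, assuming the historical tensorflow/tensorflow:1.4.1-gpu tag is still available on Docker Hub and that keras==2.1.6 is an acceptable 2.1.x pin:

FROM tensorflow/tensorflow:1.4.1-gpu    # Python 2.7, TF 1.4, CUDA runtime bundled
RUN pip install keras==2.1.6            # any 2.1.x release should do
WORKDIR /workspace
COPY train.py /workspace/train.py       # your legacy script
CMD ["python", "train.py"]

Build it with docker build -t legacy-keras . and run it with docker run --gpus all legacy-keras (or via nvidia-docker on older setups). Your host driver only needs to be new enough for the CUDA version bundled inside the image.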
I am trying to containerize my ROS + tensorflow application. The problem is that I want to use GPU. I can either use GPU and forget about ROS, or use ROS and forget about GPU, but I don't know how I can enable both of them.
I have tried starting FROM a CUDA image and installing ROS as described here, but ROS could not be installed because the package could not be found.
I also tried building them one by one in the same Dockerfile and copying all the ROS-related stuff from a ROS build, but that failed too.
Ideally, I want to make it work as if I could "include" both a CUDA image and a ROS image, but resources on building from multiple images like this are hard to find. Any help or pointers would be appreciated.
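For concreteness, the kind of Dockerfile I have been attempting looks roughly like this (the CUDA tag, ROS distro, and TensorFlow version are placeholders; the repository setup follows the official ROS install guide):

FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04

# Add the ROS apt repository and key (steps from the official ROS install guide)
RUN apt-get update && apt-get install -y gnupg2 curl lsb-release && \
    echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" \
        > /etc/apt/sources.list.d/ros-latest.list && \
    apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
        --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654 && \
    apt-get update && apt-get install -y ros-melodic-ros-base python-pip

# TensorFlow with GPU support matching the CUDA version of the base image
RUN pip install tensorflow-gpu==1.13.1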
I am the developer of a software product (NJOY) with build requirements of:
CMake 3.2
Python 3.4
gcc 6.2 or clang 3.9
gfortran 5.3+
In reading about Docker, it seems that I should be able to create an image with just these components so that I can compile my code and use it. Much of the documentation is written with the implication that one wants to create a scalable web architecture, and thus doesn't appear to be applicable to compiled applications like the one I'm trying to build. I know it is applicable; I just can't seem to figure out what to do.
I'm struggling to separate the Docker concept from a virtual machine; I can only conceive of compiling my code in an environment that contains an entire OS instead of just the necessary components. I've begun a Docker image by starting with an Ubuntu image. This seems to work just fine, but I get the feeling that I'm overcomplicating things.
I’ve seen a Docker image for gcc; I’d like to combine it with CMake and Python into an image that we can use. Is this even possible?
What is the right way to approach this?
Combining Docker images is not possible. Docker images are chained: you start from a base image and then install the additional tools you want on top of it.
For instance, you can start from the gcc image and build on it by creating a Dockerfile. Your Dockerfile might look something like:
FROM gcc:latest

# Install CMake (update the package index first; -y skips the interactive prompt)
RUN apt-get update && apt-get install -y cmake

# Install Python 3 (the question asks for Python 3.x; plain "python" would install Python 2)
RUN apt-get install -y python3
Then you build this Dockerfile to create the Docker image. This will give you an image that contains gcc, CMake, and Python.
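For example, assuming the Dockerfile is in the current directory (the tag njoy-build is just an example name):

docker build -t njoy-build .
docker run --rm -it -v "$PWD":/src -w /src njoy-build bash

The second command drops you into a shell inside the container with your working directory mounted at /src, where you can run cmake and make as usual.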
I am looking for a way to set up or modify an existing Docker image for installing TensorFlow so that the SSE4, AVX, AVX2, and FMA instructions can be utilized for CPU speed-up. So far I have found how to install from source using Bazel (How to Compile Tensorflow... and CPU instructions not compiled....), but neither of these explains how to do this within Docker. So I think what I am looking for is what you need to add to an existing Docker image (one that installs without these options) so that you get a version of TensorFlow compiled with the CPU options enabled. The existing Docker images do not do this because they want the image to run on as many machines as possible.

I am using Ubuntu 14.04 on a Linux PC. I am new to Docker, but I have installed TensorFlow and have it working without the CPU warnings I get when I use the Docker images. I may not need this for speed, but I have seen posts claiming the speed-up can be significant. I searched for existing Docker images that do this and could not find anything. I need this to work with a GPU, so it needs to be compatible with nvidia-docker.
I just found this Docker support for Bazel, and it might provide an answer; however, I do not understand it well enough to know for sure. I believe it is saying that you cannot build TensorFlow with Bazel inside a Dockerfile; you have to build a Docker image using Bazel. Is my understanding correct, and is this the only way to get a Docker image with TensorFlow compiled from source? If so, I could still use help on how to do it while still getting the other dependencies I would get from an existing TensorFlow Docker image.
Dockerfiles that build with CPU support can be found here.
Hope that helps! Spent many a late night here on Stack Overflow and Github Issues and stuff. Now it's my turn to give back! :)
The GPU stuff in particular is really hairy - especially when enabling the XLA/JIT/AOT stuff as well as the Graph Transform Tools.
Lots of hacks embedded in my Dockerfiles. Feel free to review and ask me questions!
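If those Dockerfiles don't fit your setup, the core idea is to start from a -devel image (which ships the TensorFlow source tree under /tensorflow) and rebuild the pip package with the CPU flags. A rough sketch; the tag and flag set are assumptions you should adjust to your CPU:

FROM tensorflow/tensorflow:nightly-devel
WORKDIR /tensorflow
# You may need to run ./configure first; it can be scripted via TF_* environment variables.
RUN bazel build -c opt \
      --copt=-msse4.1 --copt=-msse4.2 --copt=-mavx --copt=-mavx2 --copt=-mfma \
      //tensorflow/tools/pip_package:build_pip_package && \
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip && \
    pip install --upgrade /tmp/pip/tensorflow-*.whl

For the GPU variant, start from the -devel-gpu tag instead and build through nvidia-docker.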
The contributing guidelines mention building TensorFlow from source with Docker to run the unit tests:
Refer to the CPU-only developer Dockerfile and the GPU developer Dockerfile for the required packages. Alternatively, use the said Docker images, e.g., tensorflow/tensorflow:nightly-devel and tensorflow/tensorflow:nightly-devel-gpu, for development to avoid installing the packages directly on your system.
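For example, to work inside the CPU devel image with your checkout mounted (the paths are illustrative):

docker pull tensorflow/tensorflow:nightly-devel
docker run --rm -it -v "$PWD":/mnt/tensorflow -w /mnt/tensorflow tensorflow/tensorflow:nightly-devel bash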
I'm looking for some hints. I've got my Pi running OpenCV, but I'm about to take on a project which will need several IP cameras, all piping video to OpenCV. I'm curious if it's possible to use the Pi+webcam in place of an IP camera?
I was attempting this by using Gstreamer on the Pi to pipe the video to a desktop PC, where I would use Python and OpenCV to process the images, then ship back answers to the Pi. The Pi is connected to actuators, so the described setup would save me the purchase of a few ip cams.
I've set up ffmpeg to capture the video and stream it; I just can't seem to find an appropriate GStreamer pipe to pull it up in OpenCV on the desktop.
I hope this is clear.
First of all, I strongly recommend the latest GStreamer code you can get to compile on the RPi. Some recent builds of GStreamer can be found in a third-party apt repository:
Add the line

deb http://vontaene.de/raspbian-updates/ . main

to /etc/apt/sources.list, then run

apt-get update && apt-get upgrade

as superuser.
Hope that helps. If not, you may find some useful info at:
http://pi.gbaman.info/?p=150
http://sanjosetech.blogspot.de/2013/03/web-cam-streaming-from-raspberry-pi-to.html
or even https://raspberrypi.stackexchange.com/
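As a starting point for the pipeline itself, here is the general shape of what I would try; the desktop IP, port, and resolution are placeholders, and the receiving side assumes OpenCV was built with GStreamer support. On the Pi:

raspivid -t 0 -w 640 -h 480 -fps 25 -o - | \
  gst-launch-1.0 fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=192.168.1.10 port=5000

On the desktop, hand a receiving pipeline ending in appsink to OpenCV:

import cv2

pipeline = ('udpsrc port=5000 caps="application/x-rtp, encoding-name=H264, payload=96" ! '
            'rtph264depay ! avdec_h264 ! videoconvert ! appsink')
cap = cv2.VideoCapture(pipeline)   # requires OpenCV built with GStreamer support
while True:
    ok, frame = cap.read()         # each frame arrives as a regular numpy array
    if not ok:
        break
    # ... process the frame here, then ship answers back to the Pi ...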
I recommend the UV4L driver for the Pi. This driver exposes a URL where you can see the Pi camera, so you can just process the images with cv2.VideoCapture("http://raspberrypi-ip/live"). This way you don't need to process anything on the Pi, which is very limited in comparison with your PC, and that will give you nicer results.
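In full, assuming UV4L is serving the stream at that URL ("raspberrypi-ip" stands in for your Pi's address):

import cv2

cap = cv2.VideoCapture("http://raspberrypi-ip/live")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("pi camera", frame)         # or run your own processing here
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()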