I am seeking advice on how to properly configure a multi-project solution with respect to third-party C++ libraries added with vcpkg.
How do you check out a specific version of a library for a project?
How do you configure Visual Studio 2019 to use this version for debug and release (lib, dll, headers)?
How do I share the configuration with other developers and build servers?
Here is how I did it:
fork the vcpkg repo into a local repository (TFS Git in my case)
make a project-specific branch ("project" meaning an internal company project, not a Visual Studio project)
pile on my own port modifications
add a few scripts that build a package containing only the libraries the aforementioned project needs (NuGet on Windows, 7-Zip for Linux); see the vcpkg export command
label it with the package version (e.g. 1.0.0.2)
build and deploy to a share (that is properly backed up)
configure an IIS instance on the company network to serve packages from the aforementioned share
in Visual Studio, the related projects reference the NuGet package
on Linux, the related CMake script pulls the correct version of the package via HTTP GET, unpacks it, and imports the vcpkg CMake toolchain file (both sides are sketched after these steps)
every time a change needs to be made to the package:
modify your vcpkg branch, label it with the next version, and push
build the new package version (the filename should contain the version)
deploy the package to that share
update your CMake files and/or NuGet config files
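To make the flow above concrete, here is a rough sketch of both sides. The package id, version, URL and paths are made-up placeholders, and you should double-check the export flags against your vcpkg version.
On Windows, from the vcpkg root, the export step looks roughly like:
vcpkg export cpprestsdk:x64-windows --nuget --nuget-id=MyProjectDeps --nuget-version=1.0.0.2
On Linux, the consuming script does roughly (shell sketch, run from the build directory):
wget http://packages.example.local/MyProjectDeps-1.0.0.2.7z
7z x MyProjectDeps-1.0.0.2.7z
cmake .. -DCMAKE_TOOLCHAIN_FILE=$PWD/MyProjectDeps-1.0.0.2/scripts/buildsystems/vcpkg.cmake
The exported tree normally ships its own scripts/buildsystems/vcpkg.cmake, so pointing CMAKE_TOOLCHAIN_FILE at it lets find_package resolve the exported libraries.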
I also tried to export only one library (cpprestsdk), but vcpkg just exported everything it had installed! Can't it export just the requested library and its dependencies?
vcpkg export cpprestsdk:x64-windows --zip
I'm interested in incorporating TensorFlow into a C++ server application built in Visual Studio on Windows 10 and I need to know if that's possible.
Google recently announced Windows support for TensorFlow: https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
but from what I can tell this is just a pip install for the more commonly used Python package, and to use the C++ API you need to build the repo from source yourself: How to build and use Google TensorFlow C++ api
I tried building the project myself using bazel, but ran into issues trying to configure the build.
Is there a way to get TensorFlow C++ to work in native Windows (not using Docker or the new Windows 10 Linux subsystem, as I've seen others post about)?
Thanks,
Ian
It is certainly possible to use TensorFlow's C++ API on Windows, but it is not currently very easy. Right now, the easiest way to build against the C++ API on Windows would be to build with CMake, and adapt the CMake rules for the tf_tutorials_example_trainer project (see the source code here). Building with CMake will give you a Visual Studio project in which you can implement your C++ TensorFlow program.
Note that the tf_tutorials_example_trainer project builds a Console Application that statically links all of the TensorFlow runtime into your program. At present we have not written the necessary rules to create a reusable TensorFlow DLL, although this would be technically possible: for example, the Python extension is a DLL that includes the runtime, but does not export the necessary symbols to use TensorFlow's C or C++ APIs directly.
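As a rough illustration of that workflow, the configure-and-build steps look something like the following. The paths are example placeholders, and the exact flags come from the tensorflow/contrib/cmake README of that era, so check the README that matches your checkout:
cd tensorflow\contrib\cmake
mkdir build
cd build
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
  -DSWIG_EXECUTABLE=C:/tools/swigwin-3.0.10/swig.exe ^
  -DPYTHON_EXECUTABLE=C:/Python35/python.exe ^
  -DPYTHON_LIBRARIES=C:/Python35/libs/python35.lib
MSBuild /p:Configuration=Release tf_tutorials_example_trainer.vcxproj
Release\tf_tutorials_example_trainer.exe
The last line is just a smoke test: if the trainer runs, the statically linked runtime and your toolchain setup are working.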
There is a detailed guide by Joe Antognini and a similar TensorFlow ReadMe on GitHub explaining how to build the TensorFlow source via CMake. You also need SWIG installed on your machine, which allows connecting C/C++ source with the Python scripting language. I used the CMake GUI (cmake-gui).
In the CMake configuration, I used the Visual Studio 15 2017 generator. Once this stage successfully completes, you can click the Generate button to produce the Visual Studio solution and go ahead with the actual build.
However, on Visual Studio 2015, when I attempted to build via the "ALL_BUILD" project, the setup gave me a "build tools for v141 cannot be found" error. This did not go away even when I attempted to retarget my solution. Finally, the solution built successfully with Visual Studio 2017. You also need to manually set the SWIG_EXECUTABLE path in CMake before it successfully configures.
As indicated in the Antognini link, for me the build took about half an hour on a 16GB RAM, Core i7 machine. Once done, you might want to validate your build by attempting to run the tf_tutorials_example_trainer.exe file.
Hope this helps!
For our latest work on building TensorFlow C++ API on Windows, please look at this github page. This works on Windows 10, currently without CUDA support (only CPU).
PS:
Only the Bazel build method works; the CMake build is no longer supported or maintained, which results in CMake configuration errors.
I had to use a downgraded version of my Visual Studio 2017 (from 15.7.5 to 15.4) by adding "VC++ 2017 version 15.4 v14.11 toolset" through the installer (Individual Components tab).
The cmake command which worked for me was:
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
-T "v141,version=14.11" ^
-DSWIG_EXECUTABLE="C:/Program Files/swigwin-3.0.12/swig.exe" ^
-DPYTHON_EXECUTABLE="C:/Program Files/Python/python.exe" ^
-DPYTHON_LIBRARIES="C:/Program Files/Python/libs/python27.lib" ^
-Dtensorflow_ENABLE_GPU=ON ^
-DCUDNN_HOME="C:/Program Files/cudnn-9.2-windows10-x64-v7.1/cuda" ^
-DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
After the build, open tensorflow.sln in Visual Studio and build ALL_BUILD.
If you want to enable GPU computation, do check your Graphics Card here (Compute Capability > 3.5). Do remember to install all the packages (Cuda Toolkit 9.0, cuDNN, Python 3.7, SWIG, Git, CMake...) and add the paths to the environment variable in the beginning.
I made a README detailing how I built the TensorFlow .dll and .lib files for the C++ API on Windows with GPU support, building from source with Bazel, for TensorFlow version 1.14.
The tutorial is step by step and starts at the very beginning, so you may have to scroll down past steps you have already done, like checking your hardware, installing Bazel etc.
Here is the url: https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows
Probably you will want to scroll all the way down to this part:
https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows#step-7-build-the-dll
It shows which commands to run to create the .lib and .dll.
Then, to test your .lib, you link it into your C++ project; the tutorial then shows how to identify and fix the missing symbols using the TF_EXPORT macro.
I am actively working on making this tutorial better so feel free to leave comments on this answer if you are having problems.
For using CUDA, I need to compile OpenCV. I'm using CMake and the OpenCV 3 sources. I do not get any errors when clicking "Generate" in CMake. Then I compile the OpenCV.sln solution for Win64 using Visual Studio (I selected the right Visual Studio version). I do not get any errors when compiling.
But I do not know what to include; normally, there are "opencv" and "opencv2" folders in the include folder. But these do not exist.
(Screenshots: my opencv folder after compiling, and my include folder.)
The includes are located in the sources folder, not in the build folder (unless you build the INSTALL project).
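If you want the familiar self-contained layout, with an include\opencv2 directory next to the libraries, build the INSTALL project; it gathers headers, libs and dlls under the configured install prefix. A minimal sketch, run from the build folder (the configuration name is just an example):
cmake --build . --target INSTALL --config Release
Afterwards, point your projects at the include and lib folders under that install directory; the exact subfolder names depend on your OpenCV version and compiler.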
Does anybody know how to build cvBlobsLib using MinGW? On the official page http://opencv.willowgarage.com/wiki/cvBlobsLib there are only instructions for VS.
There is also a Linux version of this lib, http://opencv.willowgarage.com/wiki/cvBlobsLib?action=AttachFile&do=view&target=cvblobs8.3_linux.tgz , but as far as I can see its makefile cannot be used on Windows.
If you use Eclipse then you don't have a lot of work:
Create a new project, using the MinGW toolchain.
Go to the project properties, and under C/C++ General >> Paths and Symbols add the OpenCV library paths.
Compile the project and it should be OK.
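If you prefer to skip Eclipse and drive MinGW directly, something along these lines should also work. This is a rough sketch: the OpenCV include path is an example, and the wildcard simply means "all the cvBlobsLib .cpp files" in the library folder:
g++ -c -I. -I"C:/opencv/build/include" *.cpp
ar rcs libcvblobs.a *.o
You then link libcvblobs.a, together with the OpenCV libraries, into your own application.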
If you have more problems, use this (especially NOTE 3): http://opencv.willowgarage.com/wiki/cvBlobsLib#Build_intructions
I am compiling OpenCV for our project with specific build options (such as 64-bit, Qt and OpenNI). I was able to follow the instructions given here: http://opencv.itseez.com/doc/tutorials/introduction/windows_install/windows_install.html
At the end of a 2-3 hour build process, I ended up with \install\build\ containing the collected bins, dlls and libs in their respective folders. I would like to distribute an .exe installer to the other members of my research group, but I could not, because _CPack_Packages/win32/NSIS is nowhere to be found.
Note: To create an installer you need to install NSIS. Then just build the
Package project to build the installer into the
Build/_CPack_Packages/win32/NSIS folder. You can then use this to
distribute OpenCV with your build settings on other systems.
In the cmake-gui screen, I ticked "Build Package", which I hoped would make the Build/_CPack_Packages/win32/NSIS folder appear. After the build process, it is not found.
Could someone suggest why I don't see this _CPack_Packages/win32/NSIS folder as described? Could I use Inno Setup instead? If so, do I simply pack the whole \build\install folder and set the system path to include \build\install\bin?
Thank you.
Sticking with the KISS principle (Keep it simple, Stupid!):
Did you install NSIS prior to building the Package project?
INSTRUCTIONS TO BUILD WIN32 PACKAGES WITH CMAKE+CPACK
------------------------------------------------------
- Install NSIS.
- Generate OpenCV solutions for MSVC using CMake as usual.
- In cmake-gui:
- Mark BUILD_PACKAGE
- Mark BUILD_EXAMPLES (If examples are desired to be shipped as binaries...)
- Unmark ENABLE_OPENMP, since this feature seems to have some issues yet...
- Mark INSTALL_*_EXAMPLES
- Open the OpenCV solution and build ALL in Debug and Release.
- Build PACKAGE, from the Release configuration. An NSIS installer package will be
created with both release and debug LIBs and DLLs.
Jose Luis Blanco, 2009/JUL/29
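Once NSIS is installed and BUILD_PACKAGE is marked, the PACKAGE target can also be driven from the command line. A small sketch, run from the build directory (the configuration name is an example; the cpack line is the equivalent direct CPack invocation):
cmake --build . --target PACKAGE --config Release
cpack -G NSIS -C Release
Either of these should leave the generated installer, along with the _CPack_Packages working folder, inside the build directory.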
I suggest that instead of using Visual Studio to build, you try using CMake.
http://www.cmake.org/
Let me know if this helps at all.