I'm using wxLua for a GUI I'm building, but I need minizip for one of my functions and have no idea how to install minizip into my Lua directory.
I expect minizip = require('minizip') to work with my wxLua setup.
minizip is not part of wxLua, so you need to follow the instructions from the source you got the package from. For example, this package provides all the necessary binaries, but requires LuaJIT or Lua with luaffi to run.
Usually it's sufficient to place binary modules (*.so or *.dll) into a folder listed in package.cpath and Lua files into a folder listed in package.path, but those files may have their own dependencies, so it's best to consult the module's documentation.
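As a quick sanity check, a minimal sketch like the following (plain Lua, nothing wxLua-specific) prints the folders that are searched and tries to load the module without crashing your GUI; the module name is just the one from your question:
-- show where Lua looks for pure-Lua and binary (C) modules
print(package.path)
print(package.cpath)
-- attempt to load minizip without aborting on failure
local ok, minizip = pcall(require, 'minizip')
if not ok then
  print('minizip failed to load: ' .. tostring(minizip))
end
If require fails, the error message lists the paths that were searched, which usually makes it clear where the files need to go.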
My Electron app for Mac OS X has a sox dependency. What's the best way to include it as part of the Electron package? I'd prefer that users not have to install sox separately (unfortunately, most of my users are not that savvy). Is there a way to include the sox binaries directly, or to sequence a pre-install of sox before my app?
I ended up including the sox binary in the package. I used the node-record-lpcm16 package and updated its path to use the included sox binary; this way I can pass the library path down as a parameter.
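A minimal sketch of the general idea, assuming the sox binary is copied into the packaged app's resources under a bin/ folder (the folder layout and arguments here are assumptions, not the exact setup from this answer):
// Resolve and run the bundled sox binary inside a packaged Electron app
const path = require('path');
const { spawn } = require('child_process');
// process.resourcesPath points at the app's Resources directory in a packaged build
const soxPath = path.join(process.resourcesPath, 'bin', 'sox');
// Spawn the bundled binary directly instead of relying on a system-wide install
const sox = spawn(soxPath, ['--version']);
sox.stdout.on('data', (data) => console.log(data.toString()));
The recording library can then be pointed at soxPath instead of the bare program name, which is the parameter-passing described above.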
I'm interested in incorporating TensorFlow into a C++ server application built in Visual Studio on Windows 10 and I need to know if that's possible.
Google recently announced Windows support for TensorFlow: https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
but from what I can tell this is just a pip install for the more commonly used Python package, and to use the C++ API you need to build the repo from source yourself: How to build and use Google TensorFlow C++ api
I tried building the project myself using bazel, but ran into issues trying to configure the build.
Is there a way to get TensorFlow C++ to work in native Windows (not using Docker or the new Windows 10 Linux subsystem, as I've seen others post about)?
Thanks,
Ian
It is certainly possible to use TensorFlow's C++ API on Windows, but it is not currently very easy. Right now, the easiest way to build against the C++ API on Windows would be to build with CMake, and adapt the CMake rules for the tf_tutorials_example_trainer project (see the source code here). Building with CMake will give you a Visual Studio project in which you can implement your C++ TensorFlow program.
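Once that Visual Studio project builds, a minimal program along these lines is enough to confirm you can compile and link against the runtime (a sketch against the TF 1.x C++ API, not code taken from the tutorial itself):
// Minimal sketch: create and close a session to verify the TensorFlow runtime links.
#include <iostream>
#include "tensorflow/core/public/session.h"
int main() {
  tensorflow::Session* session = nullptr;
  tensorflow::Status status =
      tensorflow::NewSession(tensorflow::SessionOptions(), &session);
  if (!status.ok()) {
    std::cerr << "Failed to create session: " << status.ToString() << std::endl;
    return 1;
  }
  std::cout << "TensorFlow session created" << std::endl;
  session->Close();
  delete session;
  return 0;
}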
Note that the tf_tutorials_example_trainer project builds a Console Application that statically links all of the TensorFlow runtime into your program. At present we have not written the necessary rules to create a reusable TensorFlow DLL, although this would be technically possible: for example, the Python extension is a DLL that includes the runtime, but does not export the necessary symbols to use TensorFlow's C or C++ APIs directly.
There is a detailed guide by Joe Antognini and a similar TensorFlow README on GitHub explaining how to build the TensorFlow source via CMake. You also need SWIG installed on your machine, which is used to connect the C/C++ source with the Python scripting language. I used the CMake GUI (cmake-gui) for the configuration.
In the CMake configuration, I used the Visual Studio 15 2017 generator. Once this stage completes successfully, you can click the Generate button to produce the Visual Studio solution and project files.
However, on Visual Studio 2015, when I attempted to build via the "ALL_BUILD" project, the setup gave me a "build tools for v141 cannot be found" error. This did not go away even when I attempted to retarget my solution. The solution finally built successfully with Visual Studio 2017. You also need to manually set the SWIG_EXECUTABLE path in CMake before the configuration succeeds.
As indicated in the Antognini link, for me the build took about half an hour on a 16GB RAM, Core i7 machine. Once done, you might want to validate your build by attempting to run the tf_tutorials_example_trainer.exe file.
Hope this helps!
For our latest work on building the TensorFlow C++ API on Windows, please look at this GitHub page. It works on Windows 10, currently without CUDA support (CPU only).
PS:
Only the Bazel build method works, because CMake is no longer supported or maintained, which results in CMake configuration errors.
I had to downgrade my Visual Studio 2017 toolset (from 15.7.5 to 15.4) by adding the "VC++ 2017 version 15.4 v14.11 toolset" through the installer (Individual Components tab).
The cmake command which worked for me was:
cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
-T "v141,version=14.11" ^
-DSWIG_EXECUTABLE="C:/Program Files/swigwin-3.0.12/swig.exe" ^
-DPYTHON_EXECUTABLE="C:/Program Files/Python/python.exe" ^
-DPYTHON_LIBRARIES="C:/Program Files/Python/libs/python27.lib" ^
-Dtensorflow_ENABLE_GPU=ON ^
-DCUDNN_HOME="C:/Program Files/cudnn-9.2-windows10-x64-v7.1/cuda" ^
-DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
After the build, open tensorflow.sln in Visual Studio and build ALL_BUILD.
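If you prefer to stay on the command line, building the generated solution through CMake should be equivalent (a sketch; run it from the same build directory you configured in):
cmake --build . --config Release --target ALL_BUILD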
If you want to enable GPU computation, do check your graphics card here (Compute Capability > 3.5). Remember to install all the prerequisites (CUDA Toolkit 9.0, cuDNN, Python 3.7, SWIG, Git, CMake, ...) and add their paths to your environment variables at the beginning.
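Before configuring, a quick way to confirm those prerequisites are actually on your PATH is to run a few checks in a command prompt (illustrative commands only):
where cmake
where swig
where python
where git
nvcc --version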
I made a README detailing how I built the TensorFlow .dll and .lib files for the C++ API on Windows with GPU support, building from source with Bazel (TensorFlow version 1.14).
The tutorial is step by step and starts at the very beginning, so you may have to scroll down past steps you have already done, like checking your hardware, installing Bazel etc.
Here is the url: https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows
Probably you will want to scroll all the way down to this part:
https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows#step-7-build-the-dll
It shows how to run the commands that create the .lib and .dll.
Then, to test your .lib, you link it into your C++ project;
the README then shows how to identify and fix missing symbols using the TF_EXPORT macro.
I am actively working on making this tutorial better so feel free to leave comments on this answer if you are having problems.
I have written C++ code that requires the OpenCV libraries to compile on Ubuntu 12.04. When I try to run the generated binary on a different system, it asks for the library. Is this expected, or is there an error in my code? If there is no error, how can I run the code?
You should build it on the new system, or install the OpenCV shared libraries there; if the binary asks for a library at run time, that just means it was dynamically linked against OpenCV, so the target machine needs those libraries too (or you need to link them statically).
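As an illustrative check (the binary name here is a placeholder), running ldd on the target machine lists the shared libraries the program expects and flags any that are missing:
ldd ./my_program | grep "not found"
Installing the matching OpenCV runtime packages on the target system, or shipping the needed .so files and pointing LD_LIBRARY_PATH at them, is the usual way to satisfy those entries.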
I am building an app that I eventually would like to release on Cydia, however I'm having trouble finding any good documentation on developing apps for jailbroken devices. So firstly, if you have any good links for developing for jailbroken iOS devices that would also be much appreciated!
My current problem is that for my app to work I need tools from other packages on Cydia, like otool and possibly a script interpreter (I haven't decided which one yet). Is there a way to have these dependencies installed alongside my app in Cydia? I feel like I've seen this before when downloading other apps.
Yes, absolutely.
When you build your app, you should make sure to bundle it as a Debian package. Some repositories will let you just give them a normal .app bundle, which they will then use to build a .deb file. But if you want dependency handling like this, I'd recommend learning to build a .deb bundle yourself. There are more instructions from Saurik here.
Inside the .deb bundle, you will have a DEBIAN subdirectory containing a control file, plus optional maintainer scripts:
DEBIAN/control
DEBIAN/postinst
DEBIAN/postrm
DEBIAN/preinst
The control file is where the Cydia store description, the app version number (used by the store), and a bunch of other information go. An optional field in the control file lets you specify that your app has dependencies. If you list another package as a dependency, that package will automatically be installed when Cydia installs your app. Something like this:
Depends: bigbosshackertools
This line specifies a dependency on the BigBoss Recommended Tools package (which is a very large set of packages, so be aware that you're adding a large install set to your own app).
Or, you could try
Depends: odcctools
to use Saurik's Darwin CC Tools package.
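For reference, a complete control file is only a handful of fields; everything below is a made-up example rather than values from this answer:
Package: com.example.myapp
Name: My App
Version: 1.0.0
Architecture: iphoneos-arm
Section: Utilities
Maintainer: Your Name <you@example.com>
Author: Your Name
Description: An example app that depends on command-line tools from Cydia
Depends: bigbosshackertools
With the directory laid out as above, running dpkg-deb -b on the package folder produces the installable .deb.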
I have been building jailbreak apps for a while, so I do it with homemade scripts, but there's now a tool for helping with this called iOSOpenDev. You could use that to build your package, and edit your control file, if you aren't already familiar with .deb packages, and don't want to bother (although I'd recommend learning).