Is it possible to work with the math.h library in the Contiki-Cooja simulator?
I am using Contiki 3.0 on Ubuntu 18.04 LTS.
I tried adding LDFLAGS += -lm in the Makefile of the hello-world application. I also tried adding -lm in the Makefile.include file. Neither works. What is the correct place to add -lm?
hello-world.c
#include "contiki.h"
#include <stdio.h> /* For printf() */
#include <math.h>
#define DEBUG DEBUG_PRINT
static float i;
/*---------------------------------------------------------------------------*/
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);
/*---------------------------------------------------------------------------*/
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();
  i = 2.1;
  printf("Hello, world\n");
  printf("%i\n", (int)pow(10, i));
  printf("%i\n", (int)(M_LOG2E * i));
  PROCESS_END();
}
/*---------------------------------------------------------------------------*/
Makefile
CONTIKI_PROJECT = hello-world
all: $(CONTIKI_PROJECT)
CONTIKI = ../..
include $(CONTIKI)/Makefile.include
LDFLAGS += -lm
First, you can add external libraries to Contiki with:
TARGET_LIBFILES = -lm
Make sure you do this before the include $(CONTIKI)/Makefile.include line, not after!
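Applied to the Makefile from the question, the placement would look roughly like this (a minimal sketch of the same Makefile, only with the library line moved before the include):
CONTIKI_PROJECT = hello-world
all: $(CONTIKI_PROJECT)
CONTIKI = ../..
TARGET_LIBFILES = -lm
include $(CONTIKI)/Makefile.include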
Second, which platform are you compiling for? The msp430 platforms do not have the pow function in the math library. They only have the powf function operating on single-precision floating point numbers, and the built-in (intrinsic) function pow operating on integers.
If you want to operate on floating-point numbers, change your code from this:
float f = 2.1;
pow(10, f);
to this:
float f = 2.1;
powf(10, f);
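Applied to the hello-world code above, the pow call would then read like this (a sketch; the (int) cast is kept because the minimal printf on msp430 platforms typically cannot format floating-point values):
printf("%i\n", (int)powf(10, i));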
Related
I am trying to run YOLOv3 on Visual Studio 2019 using CUDA 10.2 with cuDNN v7.6.5, on Windows 10 with an NVIDIA GeForce 930M. Here is part of the code I used.
#include <fstream>
#include <sstream>
#include <iostream>
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;
using namespace dnn;
using namespace std;

// Helper from the full sample; its definition is not part of this excerpt.
vector<String> getOutputsNames(const Net& net);

int main()
{
    // Load names of classes
    vector<string> classes;               // not shown in the original excerpt
    string classesFile = "coco.names";
    ifstream ifs(classesFile.c_str());
    string line;
    while (getline(ifs, line)) classes.push_back(line);

    // Give the configuration and weight files for the model
    String modelConfiguration = "yolovs.cfg";
    String modelWeights = "yolov3.weights";

    // Load the network
    Net net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_CUDA);
    net.setPreferableTarget(DNN_TARGET_CUDA);

    // Open the video file
    VideoCapture cap;                     // not shown in the original excerpt
    string inputFile = "vid.mp4";
    cap.open(inputFile);

    // Get frame from the video
    Mat frame, blob;                      // not shown in the original excerpt
    cap >> frame;

    // Create a 4D blob from a frame
    int inpWidth = 416, inpHeight = 416;  // size used by the standard YOLOv3 sample; original values not shown
    blobFromImage(frame, blob, 1 / 255.0, Size(inpWidth, inpHeight), Scalar(0, 0, 0), true, false);

    // Sets the input to the network
    net.setInput(blob);

    // Runs the forward pass to get output of the output layers
    vector<Mat> outs;
    net.forward(outs, getOutputsNames(net));
}
Although I added $(CUDNN)\include;$(cudnn)\include; to Additional Include Directories in both C/C++ and Linker, added CUDNN_HALF;CUDNN; to C/C++ > Preprocessor Definitions, and added cudnn.lib; to Linker > Input, I still get this warning:
DNN module was not built with CUDA backend; switching to CPU
and it runs on the CPU instead of the GPU. Can anyone help me with this problem?
I solved it by using CMake, but I first had to add opencv_contrib and then rebuild OpenCV using Visual Studio. Make sure that WITH_CUDA, WITH_CUBLAS, WITH_CUDNN, OPENCV_DNN_CUDA, and BUILD_opencv_world are checked in CMake.
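Once OpenCV has been rebuilt, one quick way to check that the binaries you are linking against really have CUDA support (a small sketch, not part of the original answer) is to print the build information and the CUDA device count before loading any network:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // The build information contains a CUDA/cuDNN section showing
    // whether the DNN CUDA backend was compiled in.
    std::cout << cv::getBuildInformation() << std::endl;

    // 0 means either no CUDA support in this build or no visible device.
    std::cout << "CUDA devices: " << cv::cuda::getCudaEnabledDeviceCount() << std::endl;
    return 0;
}
If this still prints the configuration of the prebuilt (CPU-only) OpenCV, the project is probably picking up the old libraries or DLLs instead of the rebuilt ones.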
I had a similar issue happen to me about a week ago, but I was using Python and TensorFlow. Although the language was different from C++, I did get the same error. To fix this, I uninstalled CUDA 10.2 and downgraded to CUDA 10.1. From what I have found, there might be a dependency issue with CUDA, or, in your case, OpenCV may not yet support the latest version of CUDA.
EDIT
After some further research, it seems to be an issue with OpenCV rather than CUDA. Referencing this GitHub thread: if you installed OpenCV with CMake, remove the arch bin versions below 7 from the config file, then rebuild/reinstall OpenCV. However, if that doesn't work, another option would be to remove the CUDA arch bin versions below 5.3 and rebuild.
I have a project at hand in which I want to use one of the OpenCV modules (specifically dnn).
Instead of building the dnn module, I want to use the source code of this module in my project. By doing so, I can change the source code live and see the results right away.
I have a very simple scenario with one only source file:
main.cpp
#include "iostream"
#include <opencv2/dnn.hpp>
int main(int argc, char *argv[])
{
std::string ConfigFile = "tsproto.pbtxt";
std::string ModelFile = "tsmodel.pb";
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(ModelFile,ConfigFile);
return 0;
}
Now, this function cv::dnn::readNetFromTensorflow is in the dnn module. I tried many different ways to embed the dnn source code inside my project, but all of them failed!
For example, first I tried to include every cpp and hpp file from the modules/dnn/ folder of OpenCV in my project, but I ended up with errors like:
/home/user/projects/Tensor/tf_importer.cpp:28: error: 'CV__DNN_EXPERIMENTAL_NS_BEGIN' does not name a type
#include "../precomp.hpp" no such file or directory
HAVE_PROTOBUF is undefined
and ....
I tried to solve these errors, but more errors just kept appearing: more undefined macros and more missing hpp files!
#include "../layers_common.simd.hpp" no such file or directory
and many, many more errors!
It seems that I'm stuck in a while(true) loop of errors! Is it really that hard to use the source code of an OpenCV module?
P.S.
For those who are asking why I want to use the OpenCV source code instead of the shared libraries: I want to import a customized TensorFlow model which the OpenCV read function doesn't support, and I want to know where exactly it crashes so I can fix it.
By the way, I am only using C++11 features and GCC as the compiler on Ubuntu 16.04.
I am working on a Clang source-to-source transformation. I am doing it from the Clang source code, in clang/tools/extra. What I do is add a printf("Hello World\n"); at the beginning of the main function. It works perfectly: I run my tool as bin/add-code ../../test/hello.c and it turns:
#include <stdio.h>
int main(){
printf("This is from main ....\n");
return 0;
}
to this:
#include <stdio.h>
int main(){
printf("Hello World\n");
printf("This is from main ....\n");
return 0;
}
add-code is the Clang libtool that I have written.
But this rewriter only writes the changes to the terminal, while I want to compile hello.c with the modification, and I want to do it with the clang command (clang -c hello.c), not the way I have done it here.
How could I do that?
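One way to do that (a sketch, not from the question: it assumes a libTooling setup where an ASTFrontendAction subclass owns the clang::Rewriter, here called TheRewriter, that the visitor uses to insert the printf) is to write the edits back to the file when the action finishes, and then compile the modified file normally:
// Sketch only: inside your ASTFrontendAction subclass.
// clang::Rewriter TheRewriter;   // the rewriter your visitor inserts into
void EndSourceFileAction() override {
  // Instead of dumping the rewrite buffer to the terminal, write every
  // modified buffer back to its file on disk.
  TheRewriter.overwriteChangedFiles();
}
After running bin/add-code ../../test/hello.c, hello.c on disk already contains the extra printf, so a plain clang -c hello.c compiles the instrumented version. If you prefer not to modify the original file, you can instead write TheRewriter.getEditBuffer(...) to a separate output file and compile that.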
I have installed OpenCV 3.1.0 and I want to compile the following program:
#include <stdio.h>
#include "opencv/cv.h"
#include "opencv/highgui.h"
int main(void){
    return 0;
}
but it returns the error: undefined reference to cvRound, and so on, in the header file types_c.h.
I know it must be a linking problem, but the only libraries I can link are:
opencv_world310.lib, opencv_world310d.lib
I have already tried to link these, but that doesn't solve the problem.
I'm using the GNU GCC compiler and link the dynamic libraries with mingw32-g++.exe.
According to some research on the internet, I need to link libraries like:
opencv_calib3d249d.lib
opencv_contrib249d.lib
opencv_core249d.lib
opencv_features2d249d.lib
opencv_flann249d.lib
opencv_gpu249d.lib
opencv_highgui249d.lib
opencv_imgproc249d.lib
opencv_legacy249d.lib
opencv_ml249d.lib
opencv_nonfree249d.lib
opencv_objdetect249d.lib
opencv_ocl249d.lib
opencv_photo249d.lib
opencv_stitching249d.lib
opencv_superres249d.lib
opencv_ts249d.lib
opencv_video249d.lib
opencv_videostab249d.lib
but those are not in my lib directory!
I'm trying to work with the OpenCV CUDA module, specifically the cv::cuda::log function.
First, I'll give the details of my OpenCV compilation.
I compiled OpenCV with the WITH_CUDA flag on and took the libs and dlls from the compilation; however, I copied the header files from the downloaded OpenCV folder, since the compilation folder doesn't include headers by default.
I wonder whether this is the right thing to do?
Second, I tried to use the cv::cuda:: functions. I include the cuda.hpp header:
#include "opencv2/core/cuda.hpp"
cv::cuda::GpuMat source, dest;
GpuMat compiles great for me. However, I don't know which file I should include in order to work with the log function. When I write the following line
cv::cuda::log(source, dest);
I keep getting the error message:
error: C2039: 'log' is not a member of 'cv::cuda'
Windows 7, Visual Studio 2013, OpenCV 3.0.0, platform: 64-bit, CUDA toolkit 6.5
Third, I'd like to know about the OpenCV CUDA implementation: does it utilize NPP functionality? OpenCV vs. NPP, which one is better to use?
I could easily write my code using NPP; however, I'd like to learn the OpenCV CUDA module.
Thanks
After a couple of days of searching, I'd like to share what I found.
First, what I was doing wrong was taking the headers from the OpenCV compilation; the right thing to do is to take the headers from the OpenCV modules themselves (each module individually).
Second, after compiling OpenCV with the CUDA flag, everything worked great.
Third, several OpenCV CUDA functions do utilize NPP.
Fourth, use GitHub.
This code should work for OpenCV 3.1:
#include <opencv2/opencv.hpp>
#include <opencv2/cudaarithm.hpp>
int main()
{
    cv::Mat img = cv::imread("img.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat img_32f;
    img.convertTo(img_32f, CV_32F);

    // To avoid log(0) that is undefined
    img_32f += 1.0f;

    cv::cuda::GpuMat gpuImg, gpuImgLog;
    gpuImg.upload(img_32f);
    cv::cuda::log(gpuImg, gpuImgLog);

    cv::Mat imgLog, imgLog_32f;
    gpuImgLog.download(imgLog_32f);

    double min, max;
    cv::minMaxLoc(imgLog_32f, &min, &max);
    imgLog_32f.convertTo(imgLog, CV_8U, 255.0/(max-min), -255.0*min/(max-min));

    cv::imshow("img", img);
    cv::imshow("imgLog", imgLog);
    cv::waitKey(0);
    return 0;
}