Using Frame Buffer Objects (FBO) in Borland C++ Builder 6

I have an "access violation" on the Frame Buffer Object (FBO)'s command glGenFramebuffersEXT :
void TGLForm::DrawScene()
{
    wglMakeCurrent(ghDC, ghRC);
    glEnable(GL_TEXTURE_2D);

    GLuint framebuffer, texturefbo;
    GLenum status;

    glGenFramebuffersEXT(1, &framebuffer); // access violation here
Following a help thread about FBOs, I checked that the glext.h initialization was okay and repeated it among the preprocessor lines this way:
#include "glext.h"
#include "wglext.h"
extern PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC)wglGetProcAddress("glGenFramebuffersEXT");
extern PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT = (PFNGLBINDFRAMEBUFFEREXTPROC)wglGetProcAddress("glBindFramebufferEXT");
extern PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC)wglGetProcAddress("glFramebufferTexture2DEXT");
extern PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC)wglGetProcAddress("glCheckFramebufferStatusEXT");
extern PFNGLGENRENDERBUFFERSEXTPROC glGenRenderbuffersEXT = (PFNGLGENRENDERBUFFERSEXTPROC)wglGetProcAddress("glGenRenderbuffersEXT");
extern PFNGLBINDRENDERBUFFEREXTPROC glBindRenderbufferEXT = (PFNGLBINDRENDERBUFFEREXTPROC)wglGetProcAddress("glBindRenderbufferEXT");
extern PFNGLRENDERBUFFERSTORAGEEXTPROC glRenderbufferStorageEXT = (PFNGLRENDERBUFFERSTORAGEEXTPROC)wglGetProcAddress("glRenderbufferStorageEXT");
extern PFNGLFRAMEBUFFERRENDERBUFFEREXTPROC glFramebufferRenderbufferEXT = (PFNGLFRAMEBUFFERRENDERBUFFEREXTPROC)wglGetProcAddress("glFramebufferRenderbufferEXT");
=> The access violation remains.
Another help thread led me to download the NVIDIA OpenGL SDK because I have a GT9800 NVIDIA card; it didn't remove the "access violation".
I tried using GLee and GLEW in Borland Builder 6:
to include GLEW in Borland, you first need to convert the Visual Studio "COFF" lib to a Borland Builder "OMF" lib,
but with the Borland command-line tool "coff2omf.exe" I get the error "invalid machine type", and with "objconv.exe" I get the error "import library cannot convert to static library".
=> Does anyone know how I can successfully convert the GLEW "COFF" lib to the Borland Builder "OMF" format?
=> How can we convert an "import library" to a "static library"?
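As an aside, wglGetProcAddress only returns usable pointers once a rendering context has been created and made current, so lookups that run at global scope (before the window and context exist) will silently yield NULL. A minimal sketch of the same lookups deferred into a hypothetical InitGLExtensions() helper, shown for a few of the entry points, could look like this:
#include <windows.h>
#include <gl/gl.h>
#include "glext.h"
#include "wglext.h"

// Function pointers defined once, filled in after the context is current.
PFNGLGENFRAMEBUFFERSEXTPROC        glGenFramebuffersEXT        = NULL;
PFNGLBINDFRAMEBUFFEREXTPROC        glBindFramebufferEXT        = NULL;
PFNGLFRAMEBUFFERTEXTURE2DEXTPROC   glFramebufferTexture2DEXT   = NULL;
PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT = NULL;

// Hypothetical helper: call it after wglCreateContext + wglMakeCurrent.
bool InitGLExtensions()
{
    glGenFramebuffersEXT        = (PFNGLGENFRAMEBUFFERSEXTPROC)       wglGetProcAddress("glGenFramebuffersEXT");
    glBindFramebufferEXT        = (PFNGLBINDFRAMEBUFFEREXTPROC)       wglGetProcAddress("glBindFramebufferEXT");
    glFramebufferTexture2DEXT   = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC)  wglGetProcAddress("glFramebufferTexture2DEXT");
    glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC)wglGetProcAddress("glCheckFramebufferStatusEXT");

    // NULL here means the extension (or the current context) is not available.
    return glGenFramebuffersEXT != NULL;
}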

Download and use the GLEW .h/.c source code:
#define GLEW_STATIC
#include "gl\glew.c" // ~900KB file !!!
I have been using it inside Borland sources for many years without any problems.
If you have problems with the include path, just use relative paths.
Do not forget to initialize GLEW first:
glewInit();
Of course, your OpenGL context must be initialized prior to this!!!
Check whether you have FBO support:
if (glGenFramebuffersEXT == NULL) { /* error ... */ }
FBO usage:
Even if all of the above is OK, you can still get access violations if the FBO is not used properly, but that is not your case yet.
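Put together, a minimal sketch of that GLEW-based setup might look like this (hypothetical InitFBO() helper; the window/context creation is omitted, and only the static GLEW include, the glewInit() call, and the FBO support check come from the steps above):
#define GLEW_STATIC
#include "gl\glew.c"   // statically compiled-in GLEW, as described above (~900KB file)

#include <cstdio>

// Hypothetical helper: assumes an OpenGL rendering context has already been
// created and made current (e.g. via wglCreateContext / wglMakeCurrent).
bool InitFBO()
{
    if (glewInit() != GLEW_OK)           // must run AFTER the context is current
    {
        std::printf("glewInit failed\n");
        return false;
    }
    if (glGenFramebuffersEXT == NULL)    // no EXT_framebuffer_object support
    {
        std::printf("FBO not supported\n");
        return false;
    }

    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);       // safe now: the pointer is valid
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    // ... attach a texture or renderbuffer here ...
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glDeleteFramebuffersEXT(1, &fbo);
    return true;
}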

Why does my code crash when libvlc_media_new_path() is called?

After several days of trying to solve my problem myself, I would like to kindly ask for your help:
I am trying to get the libvlc / SDL 2.0 tutorial working.
I am coding in Visual Studio 2022 in an x86 C++ console project.
I have linked the libvlc library path and include path and have added the libvlc.lib file to my project's linker settings.
The program compiles without errors but crashes when libvlc_media_new_path is called.
You can see all the different path formats I have tried in my minimal reproducible example below:
My sources:
I downloaded the vlc master from GitHub to get the headers / include directory.
I downloaded the vlc-3.0.17.4-win32 release and took libvlc.dll from there.
From libvlc.dll I created the .lib file following a Visual Studio command prompt procedure.
What I noticed is that the function libvlc_media_new_path() only takes the path as an argument now. All examples I find on the internet pass the libvlc instance AND the path as arguments.
Thank you so much for your help!
#include <stdio.h>
#include <stdlib.h>
#include "vlc/vlc.h"

int main(int argc, char* argv[]) {
    libvlc_instance_t* libvlc;
    libvlc_media_t* m;
    libvlc_media_player_t* mp;

    libvlc = libvlc_new(0, NULL);
    if (NULL == libvlc) {
        printf("LibVLC initialization failure.\n");
        return EXIT_FAILURE;
    }

    m = libvlc_media_new_path("/1.mp4");
    //m = libvlc_media_new_path("C:\\Programmieren\\PACA\\1.mp4");
    //m = libvlc_media_new_path("C:/Programmieren/PACA/1.mp4");
    //m = libvlc_media_new_path("C://Programmieren//PACA//1.mp4");
    //m = libvlc_media_new_path("C:\Programmieren\PACA\1.mp4");
    //m = libvlc_media_new_path("file:///C:/Programmieren/PACA/1.mp4");

    mp = libvlc_media_player_new_from_media(libvlc, m);
    return 0;
}
If you go to GitHub and click on the Tags link, you can get the headers for version 3.0.17.4. There you will see that libvlc_media_new_path takes an instance as an argument.
The other option would be to get or build the 3.0.18 DLL.
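With the 3.0.17.4 headers, a minimal sketch of the corrected calls might look like this (the path is one of the asker's examples; in the 3.x API libvlc_media_new_path takes the instance first, and libvlc_media_player_new_from_media takes only the media):
#include <stdio.h>
#include <stdlib.h>
#include "vlc/vlc.h"

int main(int argc, char* argv[]) {
    libvlc_instance_t* libvlc = libvlc_new(0, NULL);
    if (NULL == libvlc) {
        printf("LibVLC initialization failure.\n");
        return EXIT_FAILURE;
    }

    // 3.x signature: instance first, then a native file path.
    libvlc_media_t* m = libvlc_media_new_path(libvlc, "C:\\Programmieren\\PACA\\1.mp4");
    if (NULL == m) {
        printf("Could not create media.\n");
        libvlc_release(libvlc);
        return EXIT_FAILURE;
    }

    // 3.x signature: takes only the media, not the instance.
    libvlc_media_player_t* mp = libvlc_media_player_new_from_media(m);

    // ... play, wait, and clean up ...
    libvlc_media_player_release(mp);
    libvlc_media_release(m);
    libvlc_release(libvlc);
    return 0;
}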

How to incorporate the C++ standard library into a Zig program?

In reading the documentation for Zig, I was under the impression that Zig could compile both C and C++ code. Consequently, I thought you could import a C++ file's header via @cImport and have zig build succeed. However, I can't seem to get this to work for a C++ library integration.
I first create my project with zig init-lib and then add my import to src/main.zig via the @cImport directive. Specifically, I @cInclude("hooks/hooks.h"), the C++ header file of this library. If I attempt to zig build at this point, the build fails, unable to find the header. I fix this by modifying build.zig to add lib.addIncludeDir("/usr/include/library").
Since this C++ library is now being parsed and uses the C++ standard library, the next error I get when I zig build is that the stdexcept header is not found. To fix this, I modify build.zig to add lib.linkSystemLibrary("c++").
Lastly, and this is the error I'm stuck on now, I get an assortment of errors in /path/to/zig-linux-x86_64-0.9.1/lib/libcxx/include/<files>: things like unknown type name '__LIBCPP_PUSH_MACROS', unknown type name 'namespace', or unknown type name 'template'.
Googling this, the only thing of partial relevance I could find was that this is due to clang interpreting .h files as C by default, and C obviously doesn't have the namespace or template keywords, but I don't know what to do with that knowledge: LLVM on MacOs - unknown type name 'template' in standard file iosfwd
Does anyone have any insight into how to actually integrate with a C++ (not pure C) library through Zig?
Specifically, I @cInclude("hooks/hooks.h") the C++ header file of this library.
@cImport() is for translating C header files into Zig so they can be used without writing bindings. Unfortunately, it does not support C++ headers. To use a C++ library, you'll have to write C bindings for it and then @cImport() those headers.
// src/bindings.cpp
#include <iostream>

extern "C" void doSomeCppThing(void) {
    std::cout << "Hello, World!\n";
}

// src/bindings.h
void doSomeCppThing(void);

// build.zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const target = b.standardTargetOptions(.{});
    const mode = b.standardReleaseOptions();

    const exe = b.addExecutable("tmp", "src/main.zig");
    exe.setTarget(target);
    exe.setBuildMode(mode);
    exe.linkLibC();
    exe.linkSystemLibrary("c++");
    exe.addIncludeDir("src");
    exe.addCSourceFile("src/bindings.cpp", &.{});
    exe.install();
}

// src/main.zig
const c = @cImport({
    @cInclude("bindings.h");
});

pub fn main() !void {
    c.doSomeCppThing();
}

How to make use of OpenCV source code instead of shared libraries

I have a project at hand in which I want to use one of the OpenCV modules (specifically dnn).
Instead of building the dnn module, I want to use the source code of this module in my project. By doing so, I can change the source code live and see the results at the same time.
I have a very simple scenario with only one source file:
main.cpp
#include "iostream"
#include <opencv2/dnn.hpp>
int main(int argc, char *argv[])
{
std::string ConfigFile = "tsproto.pbtxt";
std::string ModelFile = "tsmodel.pb";
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(ModelFile,ConfigFile);
return 0;
}
now this function "cv::dnn::readNetFromTensorflow" is in dnn module. I tried many different methods to embedded the dnn source codes inside my project but all of them failed !
for example, the first time I tried to include every cpp and hpp file in the module/dnn/ folder of opencv in my project but I ended up in errors like
/home/user/projects/Tensor/tf_importer.cpp:28: error: 'CV__DNN_EXPERIMENTAL_NS_BEGIN' does not name a type
#include "../precomp.hpp" no such file or directory
HAVE_PROTOBUF is undefined
and ....
I tried to solve these errors but some more errors just happened, more undefined MACROs and more undefined hpp files !
#include "../layers_common.simd.hpp" no such file or directory
and many many more errors !
It seems that I'm stuck in a while(true) loop of errors !!! Is it really that hard to use opencv modules source code ?
P.S.
For those who are asking about why I want to use opencv source code instead of using the shared libraries I have to say that I want to import a customized tensorflow model which opencv read function doesn't support and I want to know where exactly it crashesh so I can fix it.
By the way, I am only using c++11 functions and gcc as compiler in Ubuntu 16.04

VideoCapture OpenCV 2.4.2 error in Windows

I have a problem using the VideoCapture class with OpenCV 2.4.2 under Windows XP 32-bit.
It doesn't open any file or camera, and fixing it is being a pain.
I'm using Visual Studio 2010, but I have also tried the code in Qt Creator with the same result.
The testing code is the following:
#include "opencv/cv.h"
#include "opencv/highgui.h"
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
using namespace cv;
using namespace std;
int main()
{
const char* videoPath = "C:/video/";
string videoName = string(videoPath) + "avi.avi";
VideoCapture cap(videoName);
if(!cap.isOpened())
{
std::cout<<"Fail"<<std::endl;
return -3;
}
return 0;
}
The output is always '-3'.
Qt Creator shows a
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:361)
I debugged it and the problem appears in the first line of:
CvCapture* cvCreateFileCapture_FFMPEG_proxy(const char * filename)
{
    CvCapture_FFMPEG_proxy* result = new CvCapture_FFMPEG_proxy;
    if (result->open(filename))
        return result;
    delete result;
#if defined WIN32 || defined _WIN32
    return cvCreateFileCapture_VFW(filename);
#else
    return 0;
#endif
}
in the cap_ffmpeg.cpp internal file.
I have tested the same code on a Mac under Snow Leopard and it works. No surprises here, since it must be a library issue.
I have opened the AVI file, with the same path, using the C function cvCapture easily and quickly.
I have all the DLLs from 'C:\opencv\opencv\build\x86\vc10\bin'
included in my Debug folder. I have tbb.dll and all of the 'C:\opencv\opencv\3rdparty\ffmpeg' content included too.
This is driving me crazy, so any help would be appreciated.
Thanks in advance.
In my case, the same problem was resolved after deleting all opencv_***.dll files in C:\Windows\System32. So I use the DLL files only through the path, like "%PATH%;C:\Program Files\OpenCV2.4.2\build\x86\vc10\bin". Please try it.
I also faced this problem and solved it by correcting the path passed to the function:
VideoCapture cap(videoName);
If the AVI file at videoName doesn't exist, you will get an error like:
(../../modules/highgui/src/cap_ffmpeg_impl.hpp:XXX)
where XXX represents the line number.
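One quick way to rule that out is to verify that the file actually exists before handing the path to VideoCapture; a minimal sketch (the path is the asker's example):
#include <opencv2/highgui/highgui.hpp>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    const std::string videoName = "C:/video/avi.avi";

    // Check the path itself before blaming the capture backend.
    std::ifstream probe(videoName.c_str(), std::ios::binary);
    if (!probe)
    {
        std::cout << "File not found: " << videoName << std::endl;
        return -1;
    }
    probe.close();

    cv::VideoCapture cap(videoName);
    if (!cap.isOpened())
    {
        std::cout << "File exists but could not be opened (codec/backend issue)" << std::endl;
        return -3;
    }
    return 0;
}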
I had the same issue with the open method while running under Windows 8 (64-bit), OpenCV 2.4.10. The IDE is running in x86.
I found that running the application in the Release configuration solved the problem.
I stumbled across the answer because I had the same issue with imread. The issue is presented in this thread:
imread not working in Opencv
See the fix I found below, for mp4 files.
I faced the same issue on Windows 7, using OpenCV 2.4.9. I am using the Java wrapper for OpenCV.
Matthias Krings has done a lot of research on this. See this. Apparently this is an issue that depends on the video file type; with .avi files, it seems to work for a lot of people. Unfortunately his solution of setting OPENCV_DIR did not work for me, but his comments in the bug listing gave me a hint to fix the issue.
You have to do two things:
Set java.library.path to include the directory {opencv\install\dir}opencv-2.4.9\build\x86\vc10\bin. You can set the variable using the -D option on the java command line: java -Djava.library.path=PATH_TO_YOUR_DLL .... Also fetch this variable from your environment, using System.getProperty(...), and print it before calling loadLibrary(), to verify that the path setting is working.
In your Java class, load the ffmpeg DLL using System.loadLibrary("opencv_ffmpeg249");. The loadLibrary() call should be invoked from within a static block in Java.
There is a file named opencv_ffmpeg249.dll in the java.library.path that we set.
This works on Ubuntu as well, for .so files.
I too faced the same issue and resolved it by pointing to the correct location of the input video.

C++ Eclipse OpenCV : .exe file and binaries generated, but no image displayed

Here's my code (the first DisplayImage.cpp code in the OpenCV documentation)
/*
 * DisplayImage.cpp
 *
 * Created on: Dec 25, 2011
 * Author: Arcturus
 */
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    Mat image;
    image = imread(argv[1], 1);
    if (argc != 2 || !image.data) {
        cout << "no image data";
        return -1;
    }
    namedWindow("Display Image", CV_WINDOW_AUTOSIZE);
    imshow("Display Image", image);
    waitKey(10000);
    return 0;
}
Build complete, executable generated, binaries generated.
I have my image, blackbuck.bmp, in the DisplayImage Debug folder. To run the code, I go to Run > Run Configurations, select the DisplayImage Debug exe file, key in blackbuck.bmp (I also tried an absolute path) and run it.
At the top of the console I get the message "DisplayImage Debug", and no image is displayed at all. What could be wrong here?
I am running it in Eclipse, using CDT.
Thank you for your time!
EDIT: Problem solved!!! I had to copy all the DLL files from the library folder to the folder in which my executable file was being generated. I still do not understand why, though; after all, the linker was already linking against the library folder containing all the DLLs. If someone could explain this, it would be of great help for future debugging. Thank you karl and mevotron for your time :)
EDIT 2: From the MSDN website:
"A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module. The system terminates processes using load-time dynamic linking if they require a DLL that is not found at process startup and gives an error message to the user. The system does not terminate a process using run-time dynamic linking in this situation, but functions exported by the missing DLL are not available to the program."
I think this answers my question. Perhaps this means Eclipse uses load-time dynamic linking.
How did you compile OpenCV with MinGW (i.e., what were your BUILD_TYPE and SSE* options set to during the CMake configuration)? The reason I ask is that there is a known bug with SSE optimizations that causes highgui operations to crash in MinGW-built versions. See my other SO answer here.
