I'm writing a socket program the simple, old-fashioned way, and there is a part where the code calls
memset(&addrinfo, 0, sizeof(addrinfo));
on Unix/Linux or Windows. In the MSDN tutorial, Microsoft uses
ZeroMemory(&addrinfo, sizeof (addrinfo));
I was just wondering, is there any difference between the two functions?
According to dante:
In Win32, ZeroMemory is just a macro around RtlZeroMemory, which is a macro around memset. So I don't think it makes a difference.
WinBase.h:
#define ZeroMemory RtlZeroMemory
WinNT.h:
#define RtlZeroMemory(Destination,Length) memset((Destination),0,(Length))
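To make the equivalence concrete, here is a minimal sketch (the helper name prepare_hints is hypothetical) showing that both spellings zero the same addrinfo struct:

#include <winsock2.h>
#include <ws2tcpip.h>
#include <windows.h>
#include <string.h>

void prepare_hints(struct addrinfo *hints)
{
    /* Both lines have the same effect; ZeroMemory simply expands to memset. */
    ZeroMemory(hints, sizeof(*hints));  /* Win32 spelling */
    memset(hints, 0, sizeof(*hints));   /* portable spelling */
}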
I want my application to always run using the real GPU on NVIDIA Optimus laptops.
From "Enabling High Performance Graphics Rendering on Optimus Systems", (http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf):
Global Variable NvOptimusEnablement (new in Driver Release 302)
Starting with the Release 302 drivers, application developers can
direct the Optimus driver at runtime to use the High Performance
Graphics to render any application–even those applications for which
there is no existing application profile. They can do this by
exporting a global variable named NvOptimusEnablement. The Optimus
driver looks for the existence and value of the export. Only the LSB
of the DWORD matters at this time. A value of 0x00000001 indicates
that rendering should be performed using High Performance Graphics. A
value of 0x00000000 indicates that this method should be ignored.
Example Usage:
extern "C" { _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001; }
The problem is that I want to do this in Delphi. From what I've read, Delphi does not support exporting variables, even though some hacks exist. I did try a few of them but couldn't make it work.
In the same NVIDIA document I read that forcing the proper GPU can also be accomplished by linking statically against one of a handful of listed DLLs. But I don't want to link to DLLs I'm not using. (Why opengl.dll is not one of them is beyond me.) A simple exported variable seems much cleaner.
From what I've read, Delphi does not support exporting variables.
That statement is incorrect. Here's the simplest example that shows how to export a global variable from a Delphi DLL:
library GlobalVarExport;

uses
  Windows;

var
  NvOptimusEnablement: DWORD;

exports
  NvOptimusEnablement;

begin
  NvOptimusEnablement := 1;
end.
I think your problem is that you wrote it like this:
library GlobalVarExport;

uses
  Windows;

var
  NvOptimusEnablement: DWORD = 1;

exports
  NvOptimusEnablement;

begin
end.
And that fails to compile with this error:
E2276 Identifier 'NvOptimusEnablement' cannot be exported
I don't understand why the compiler doesn't like the second version. It's probably a bug. But the workaround in the first version is just fine.
I'm not a Delphi expert, but AFAIK it is possible to link static libraries implemented in C into a Delphi program. So I'd simply create a small stub library that provides just this export and is statically linked into your Delphi program. This adds the very export you need.
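For illustration, a minimal sketch of such a stub in C (the file and symbol layout are assumptions; the Delphi side would pull the compiled object in, e.g. via {$L nvstub.obj}):

/* nvstub.c - stub translation unit whose only job is to export the
 * NvOptimusEnablement variable so the NVIDIA driver can find it.
 * DWORD is unsigned long on Win32. */
__declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;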
I found that on Linux 3.0+, GFP_ZERO is no longer defined in the headers.
All I found in gfp.h was:
/* Plain integer GFP bitmasks. Do not use this directly. */
...
#define ___GFP_ZERO 0x8000u
I've checked those "exported" bit masks; none of them uses GFP_ZERO.
And the comment says "Do not use this directly", so how should I get a zeroed page?
Is kmalloc + memset the only option I have now?
I think the expected way to get zeroed memory is kzalloc():
https://www.kernel.org/doc/htmldocs/kernel-api/API-kzalloc.html
but obviously kmalloc + memset works too.
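For illustration, a minimal sketch of both spellings (struct foo is a stand-in for whatever you are allocating):

#include <linux/slab.h>
#include <linux/string.h>

struct foo { int x; };

static struct foo *alloc_foo_zeroed(void)
{
    /* preferred: allocate and zero in one call */
    return kzalloc(sizeof(struct foo), GFP_KERNEL);
}

static struct foo *alloc_foo_memset(void)
{
    /* equivalent two-step spelling */
    struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);
    if (p)
        memset(p, 0, sizeof(*p));
    return p;
}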
Update
Sample diff from CFQ showing the expected updates:
- cfqd = kmalloc_node(sizeof(*cfqd), GFP_KERNEL | __GFP_ZERO, q->node);
+ cfqd = kzalloc_node(sizeof(*cfqd), GFP_KERNEL, q->node);
See also this: https://stackoverflow.com/a/12095263/2908724
On Solaris, processor_bind is used to set affinity for threads. You need to know the LWPID of the target thread or use the constant P_MYID to refer to yourself.
I have a function that looks like this:
void set_affinity(pthread_t thr, int cpu_number)
{
    id_t lwpid = what_do_I_call_here(thr);
    processor_bind(P_LWPID, lwpid, cpu_number, NULL);
}
In reality my function has a bunch of cross platform stuff in it that I've elided for clarity.
The key point is that I'd like to set the affinity of an arbitrary pthread_t so I can't use P_MYID.
How can I achieve this using processor_bind or an alternative interface?
Following up on this, and due to my confusion:
The lwpid is what is created by
pthread_create(&lwpid, NULL, some_func, NULL);
Thread data is available externally, to a process that is not the one making the pthread_create() call, via the /proc interface:
/proc/<pid>/lwp/<lwpid>/, where lwpid == 1 is the main thread and 2..n are the lwpids of the threads created as in the example above.
But this tells you almost nothing about which thread you are dealing with, except that it is an lwpid as in the example above.
/proc/<pid>/lwp/<lwpid>/lwpsinfo
can be read into a struct lwpsinfo, which has some more information from which you might be able to ascertain whether you are looking at the thread you want. See /usr/include/sys/procfs.h, or man -s 4 proc.
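For illustration, a minimal sketch of reading that file into the struct (pid and lwpid are assumptions supplied by the caller):

#include <sys/types.h>
#include <sys/procfs.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Read /proc/<pid>/lwp/<lwpid>/lwpsinfo into *out; returns 0 on success. */
int read_lwpsinfo(pid_t pid, int lwpid, lwpsinfo_t *out)
{
    char path[64];
    snprintf(path, sizeof(path), "/proc/%d/lwp/%d/lwpsinfo", (int)pid, lwpid);

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    ssize_t n = read(fd, out, sizeof(*out));
    close(fd);
    return n == (ssize_t)sizeof(*out) ? 0 : -1;
}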
The Solaris 11 kernel has a critical threads optimization: you set up which threads require special care, and the kernel does the rest. This appears to be what you want. Please read this short explanation to see if I understood what you want.
https://blogs.oracle.com/observatory/entry/critical_threads_optimization
The above is an alternative. It may not fly at all for you, but it is the preferred mechanism, per Oracle.
For Solaris 10, use the pthread_t tid of the LWP with an idtype_t of P_LWPID in your call to processor_bind. This works in Solaris 8 through 11. It works ONLY for LWPs in the calling process. It is not clear to me if that is your model.
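Under that advice, the function from the question reduces to a one-liner; a sketch (the cast of pthread_t to id_t is exactly the assumption being made, that the two are the same value for threads of the calling process):

#include <sys/types.h>
#include <sys/processor.h>
#include <sys/procset.h>
#include <pthread.h>

void set_affinity(pthread_t thr, int cpu_number)
{
    /* On Solaris, the pthread_t of a thread in the calling process
     * can be passed directly as the LWP id. */
    processor_bind(P_LWPID, (id_t)thr, cpu_number, NULL);
}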
HTH
I'm working with Fortran code that has to work with various Fortran compilers (and is interacting with both C++ and Java code). Currently, we have it working with gfortran and g95, but I'm researching what it would take to get it working with ifort, and the first problem I'm having is figuring out how to determine in the source code whether it's using ifort or not.
For example, I currently have this piece of code:
#if defined(__GFORTRAN__)
// Macro to add name-mangling bits to fortran symbols. Currently for gfortran only
#define MODFUNCNAME(mod,fname) __ ## mod ## _MOD_ ## fname
#else
// Macro to add name-mangling bits to fortran symbols. Currently for g95 only
#define MODFUNCNAME(mod,fname) mod ## _MP_ ## fname
#endif // if __GFORTRAN__
What's the macro for ifort? I tried IFORT, but that wasn't right, and further guessing doesn't seem productive. I also tried reading the man page, using ifort -help, and Google.
You're after __INTEL_COMPILER, as defined in http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/fortran/win/compiler_f/bldaps_for/common/bldaps_use_presym.htm
According to their docs, they define __INTEL_COMPILER=910. The 910 may be a version number, but you can probably just #ifdef on it.
I should note that ifort doesn't allow macros unless you explicitly turn it on with the /fpp flag.
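Putting it together, a sketch of the original macro extended with an Intel branch (the exact ifort mangling shown, lowercase with _mp_ and a trailing underscore, is an assumption to verify against your compiler version):

#if defined(__GFORTRAN__)
// gfortran mangles module procedures as __mod_MOD_fname
#define MODFUNCNAME(mod,fname) __ ## mod ## _MOD_ ## fname
#elif defined(__INTEL_COMPILER)
// ifort (assumed): lowercase mod_mp_fname_ with a trailing underscore
#define MODFUNCNAME(mod,fname) mod ## _mp_ ## fname ## _
#else
// g95 mangles module procedures as mod_MP_fname
#define MODFUNCNAME(mod,fname) mod ## _MP_ ## fname
#endif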
I'm having a little trouble with RAD Studio 2009.
As you know, it is possible to switch Unicode support off in MSVS (right-click the project -> Properties -> Character Set = Not Set). I need to find this feature in RAD Studio; I know it exists but do not know where exactly.
It's the only thing that stops my work on a Socket Chat university project.
P.S. The problem appeared after I installed the update from the official CodeGear site.
Short answer: No, there is no such feature to turn off Unicode in RAD Studio 2009.
chester: You don't need to call WideCharToMultiByte() directly. Let the RTL do the work for you:
AnsiString s = Form2->Edit1->Text;
MessageBoxA(NULL, s.c_str(), "It's ok", MB_OK);
You have to be careful using the UnicodeString::t_str() method. If you call it in a project that is compiled for Ansi rather than Unicode, t_str() alters the internal contents of the UnicodeString. That can have unexpected side effects, especially for UnicodeString values that come from controls.
Is it possible to turn it off? The better question is: should you turn it off? And the answer is: NO.
It's far better to design the application so that Unicode characters are serialized properly (for example, over the sockets in your application) than to design a non-Unicode program in a Unicode world. Even for a simple project, it's worth learning Unicode in principle.
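For instance, a minimal sketch of serializing a VCL string as UTF-8 before sending it over a socket (the SendText helper and the connected sock are assumptions for illustration):

#include <vcl.h>
#include <winsock2.h>

// Sketch: convert the UTF-16 UnicodeString to UTF-8 and send the raw bytes.
void SendText(SOCKET sock, const UnicodeString &text)
{
    UTF8String utf8(text);  // the RTL performs the UTF-16 -> UTF-8 conversion
    send(sock, utf8.c_str(), utf8.Length(), 0);
}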
To be precise, you can get your C++Builder application built without the UNICODE flag being defined by changing the project option "TCHAR maps to" to char.
This means that SendMessage will resolve to SendMessageA, etc., and TCHAR will map to char.
However, if you're using any VCL functions, there are no non-Unicode equivalents of those. The VCL is now inherently Unicode, and that can NOT be changed.
Re: your "solution": there's an easier way, which works whether TCHAR is char or wchar_t:
MessageBox(NULL, Form2->Edit1->Text.t_str(), _TEXT("It's ok"), MB_OK);
There is a better way; I do it like this:
MessageBox(NULL, Form2->Edit1->Text.w_str(), L"It's ok", MB_OK);
I've solved the problem this way:
wchar_t* str = Form2->Edit1->Text.w_str();
char* mystr = new char[Form2->Edit1->Text.Length() + 1];
WideCharToMultiByte(CP_ACP, 0, str, -1, mystr, Form2->Edit1->Text.Length() + 1, NULL, NULL);
MessageBoxA(NULL, mystr, "It's ok", MB_OK);
delete[] mystr;
but it seems to me that there's another way.