How to assign more memory to NetBeans?

I have 24 GB of RAM on my PC, but sometimes when NetBeans compiles my projects it says there is not enough memory to compile. I looked at the memory usage and it shows: 586/590 MB.
So how do I tell NetBeans that there is plenty of RAM and it should use as much as it needs?

In the etc directory under your NetBeans home, edit the netbeans.conf file.
Increase -Xms and -Xmx to values that allow your project to compile.
Here are the instructions in netbeans.conf:
# Note that default -Xmx and -XX:MaxPermSize are selected for you automatically.
# You can find these values in var/log/messages.log file in your userdir.
# The automatically selected value can be overridden by specifying -J-Xmx or
# -J-XX:MaxPermSize= here or on the command line.
Put the values in the netbeans_default_options string. Here is mine (line breaks added for readability; the actual value must be on a single line):
netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m
-J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true
-J-Dsun.java2d.noddraw=true -J-Dsun.java2d.dpiaware=true
-J-Dsun.zip.disableMemoryMapping=true -J-Dsun.awt.disableMixing=true
-J-Dswing.aatext=true -J-Dawt.useSystemAAFontSettings=lcd --laf Nimbus"
EDIT: -J-Xms sets the initial Java heap size; -J-Xmx sets the maximum Java heap size.
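Note that the string above does not raise the maximum heap itself. As a minimal sketch, assuming a 2 GB heap is enough for your builds (the 2g value is an assumption, not a recommendation), add -J-Xmx2g to the same string:
netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-Xmx2g -J-XX:PermSize=32m ..."
(The "..." stands for the rest of the options shown above.)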

Related

Why can't ProcDump record memory contents of a 32-bit process under 64-bit Windows 10?

I would like to use ProcDump's ability to create minidumps with a custom MINIDUMP_TYPE via the -mc command-line switch to include memory contents beyond MiniDumpNormal.
Unfortunately, neither MiniDumpWithFullMemory, MiniDumpWithIndirectlyReferencedMemory, nor MiniDumpWithPrivateReadWriteMemory | MiniDumpWithPrivateWriteCopyMemory seems to have any effect: a non-empty minidump is created without an error being displayed, but it is a lot smaller than expected, and querying it via WinDbg's .dumpdebug command does not list any of the aforementioned flags, even when they are explicitly included in the minidump type. It seems as if none of the flags mentioned above has an impact on ProcDump's behavior.
The process in question is a 32-bit process running under 64-bit Windows 10, build 2004. I have tried both procdump.exe and procdump64.exe version 9.0, albeit without the -64 command-line switch, since I do not want to include SysWOW64 overhead. I have also tried copying the 32-bit and 64-bit versions of dbghelp.dll provided by the most recent Debugging Tools for Windows SDK into the corresponding folders in which procdump.exe and procdump64.exe are located. Finally, I have made sure to pass the minidump type as a hexadecimal number; all other flags that I have tried seem to be recognized without issue and are listed when inspecting the minidump in WinDbg afterwards.
As an example, the invocation procdump.exe -mc 51B25 <process> should create a dump with
0x51B25 = 334629 = (MiniDumpWithDataSegs
| MiniDumpWithProcessThreadData
| MiniDumpWithHandleData
| MiniDumpWithPrivateReadWriteMemory
| MiniDumpWithUnloadedModules
| MiniDumpWithFullMemoryInfo
| MiniDumpWithThreadInfo
| MiniDumpWithTokenInformation
| MiniDumpWithPrivateWriteCopyMemory)
When inspecting the dump in WinDbg, neither MiniDumpWithPrivateReadWriteMemory nor MiniDumpWithPrivateWriteCopyMemory shows up in the .dumpdebug output, and the corresponding memory regions are unavailable. Note that when I create the dump from within the application using MiniDumpWriteDump for demonstration purposes, the flags do show up in .dumpdebug and the resulting minidump is significantly larger (under otherwise comparable conditions).
Can someone confirm that ProcDump is indeed ignoring memory-related flags or explain to me what I am doing wrong?
(Writing a MiniPlus dump using the -mp switch does work but does not necessarily include the memory regions of interest.)
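For comparison, here is a minimal sketch of the in-process MiniDumpWriteDump call mentioned above, using the same flag combination as -mc 51B25 (the output file name self.dmp is a hypothetical choice, and error handling is kept to a minimum):

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

int main(void)
{
    /* Same combination as -mc 51B25 above. */
    MINIDUMP_TYPE type = (MINIDUMP_TYPE)(
        MiniDumpWithDataSegs | MiniDumpWithHandleData |
        MiniDumpWithUnloadedModules | MiniDumpWithProcessThreadData |
        MiniDumpWithPrivateReadWriteMemory | MiniDumpWithFullMemoryInfo |
        MiniDumpWithThreadInfo | MiniDumpWithTokenInformation |
        MiniDumpWithPrivateWriteCopyMemory);

    HANDLE file = CreateFileW(L"self.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    /* Dump the current process; .dumpdebug in WinDbg should list the flags. */
    BOOL ok = MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                                file, type, NULL, NULL, NULL);
    CloseHandle(file);
    return ok ? 0 : 1;
}

If the flags show up in .dumpdebug for a dump produced this way but not for ProcDump's, that supports the suspicion that ProcDump filters the memory-related flags.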

Continuous Integration with Blue Ocean, GitHub and NuGet causes path too long

I'm trying to get a build chain working with Jenkins Blue Ocean, where the sources are in GitHub and additional dependencies come from NuGet.
When I restore packages, I get the following error after the package NUnit.Extension.VSProjectLoader.3.7.0:
Errors in packages.config projects
The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
On the agent machine the base path is very short (C:\guinode\), but additional length is added on top of that, making the packages folder path the following (MyGitProject stands in for my actual project name; the length is the same):
C:\guinode\workspace\MyGitProject_master-CFRRXMXQEUULVB4YKQOFGB65CQNC4U5VJKTARN2A6TSBK5PBATBA\packages
Checking the package on the agent machine shows that NUnit.Extension.VSProjectLoader.3.7.0 was loaded completely.
Checking a local installation and substituting the first part of the path, I can find two files whose full paths are 260 characters or longer.
They belong to an internal project, so I have a chance of influencing that.
None of the directories are 248 characters or more.
So the immediate solution for me is to redeploy the internal reference package.
My question, for future reference, is whether I can do something about the packages location, or about workspace\MyGitProject_master-CFRRXMXQEUULVB4YKQOFGB65CQNC4U5VJKTARN2A6TSBK5PBATBA, so that I save some characters by default.
According to the Microsoft documentation, it is possible to lift the 260-character limit.
If you prefix your path with '\\?\', e.g. '\\?\C:\guinode\workspace...', then long paths (a little more than 32,000 characters) will be in use. Setting JENKINS_HOME to this kind of path may make all processes use it, but I'm not sure.
On recent Windows versions (10 build 1607, Server 2016) there is a registry option to enable long paths. Set the following value to 1: HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled (Type: REG_DWORD) and restart the process.
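As a sketch, that value can be set from an elevated command prompt (administrator rights required):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f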

GDB 'Addresses'. What are they?

This should be a very simple, very quick question. These are the first three lines of the disassembly of a C program I wrote:
Dump of assembler code for function main:
0x0804844d <+0>: push ebp
0x0804844e <+1>: mov ebp,esp
0x08048450 <+3>: and esp,0xfffffff0
... ... ... ... ... ... ...
What are 0x0804844d, 0x0804844e and 0x08048450? They are not affected by ASLR. Are they still memory addresses, or offsets relative to the file?
If you look at the Intel Developer Manual instruction-set reference, you can see that 0x0804846d <+32>: eb 15 jmp 0x8048484 encodes a relative address, i.e. it's the jmp rel8 short encoding. This works even in position-independent code, i.e. code which can run when mapped/loaded at any address.
ASLR means that the address of the stack (and optionally code+data) in the executable can change every time you load the file into memory. Obviously, once the program is loaded, the addresses won't change anymore, until it is loaded again. So if you know the address at runtime, you can target it, but you can't write an exploit assuming a fixed address.
GDB is showing you addresses of code in the virtual-memory space of your process, after any ASLR. (BTW, GDB disables ASLR by default: set disable-randomization on|off to toggle.)
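For example, to re-enable ASLR inside GDB and watch the non-fixed addresses change between runs (./a.out is a hypothetical binary name):

$ gdb ./a.out
(gdb) set disable-randomization off
(gdb) run

With the default (disable-randomization on), addresses stay stable between runs, which makes debugging easier.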
For executables, it's common that only the stack address is ASLRed while the code is position-dependent and loaded at a fixed address. Code and static data addresses are then link-time constants, so code like push OFFSET .LC0 / call puts can work, hard-coding the address of the string constant into a push imm32.
Libraries usually need to be position-independent anyway, so ASLR can load them at a randomized address.
But ASLR for executables is possible and becoming more common, either by making position-independent executables (Linux), or by having the OS fix-up every hard-coded address when it loads the executable at a different address than it was compiled for (Windows).
Addresses have a 1:1 relation to the position within the file only in a relative sense within the same segment, i.e. the next byte of code is the next byte of the file. The headers of the executable describe which regions of the file are what (and where the OS's program loader should map them).
The meaning of the addresses shown differs in three cases:
For executable files
For DLLs (Windows) or shared objects (.so, Linux and Un*x-like)
For object files
For executables:
Executable files typically cannot be loaded at an arbitrary address in memory. On Windows there is the possibility of adding a "relocation table" to an executable file (required for very old Windows versions); if it is not present (typically the case when using GCC), the file cannot be loaded at another memory location. On Linux, a non-position-independent executable can never be loaded at another location.
You may try something like this:
static int a;
printf("%p\n", (void *)&a);  /* %p is the correct conversion for printing a pointer */
When you execute the program 100 times, you will see that the address of a is always the same, so no ASLR is applied to the executable file itself.
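A complete, runnable version of that test, as a minimal sketch:

#include <stdio.h>

static int a;

int main(void)
{
    /* If this address is identical across runs, the executable
       itself is not being relocated by ASLR. */
    printf("%p\n", (void *)&a);
    return 0;
}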
The addresses dumped by objdump are absolute addresses.
For DLLs / .so files:
The addresses are relative to the base address of the DLL (under Linux) or they are absolute addresses (under Windows) that will change when the DLL is loaded into another memory area.
For object files:
When dumping an object file the addresses are relative to the currently displayed section. If there are multiple ".text" sections in a file the addresses will start at 0 for each section.

Compare WLS_MEM_ARGS_32BIT and EXTRA_JAVA_PROPERTIES in Oracle WebLogic Server Memory Arguments

With respect to the setDomainEnv.cmd file for WebLogic Server (10.3.6), what is the difference between the memory argument set by "set WLS_MEM_ARGS_32BIT=-Xms512m -Xmx1024m" and the one provided by "set EXTRA_JAVA_PROPERTIES=-Xms512m -Xmx512m"?
I don't have EXTRA_JAVA_PROPERTIES being set anywhere in my setDomainEnv.cmd. That said, generally the last memory argument on the command line is the one that gets used if it is set twice:
Duplicated Java runtime options : what is the order of preference?
Are you sure there's not an if/else somewhere wrapping both of those values? When you start your server, WebLogic will check for 32-bit vs. 64-bit and for a Sun JVM vs. an Oracle JVM to determine what the memory arguments should be.
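As an illustration only (variable names follow the question and common setDomainEnv.cmd layouts; the exact conditions vary by installation), such a wrapper typically looks something like:

if "%JAVA_USE_64BIT%"=="true" (
    set MEM_ARGS=%WLS_MEM_ARGS_64BIT%
) else (
    set MEM_ARGS=%WLS_MEM_ARGS_32BIT%
)

If the start script then puts %MEM_ARGS% before the options derived from EXTRA_JAVA_PROPERTIES on the java command line, the -Xmx512m from EXTRA_JAVA_PROPERTIES would be the one that wins.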

Uploading a file larger than 2GB using PHP

I'm trying to upload a file larger than 2GB to a local PHP 5.3.4 server. I've set the following server variables:
memory_limit = -1
post_max_size = 9G
upload_max_filesize = 5G
However, in the error_log I found:
PHP Warning: POST Content-Length of 2120909412 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
Can anyone tell me why this keeps failing please?
I had a similar problem, but my config was:
post_max_size = 1.8G
upload_max_filesize = 1.8G
and yet I could not upload a 1.2 GB file. The error was the very same:
PHP Warning: POST Content-Length of 1347484420 bytes exceeds the limit of 1073741824 bytes in Unknown on line 0
I spent a day wondering where the heck this "limit of 1073741824" was coming from!
Solution:
Actually, the error was in the php.ini parser: it only understands integer numbers, so it was essentially parsing 1.8G as 1G!
Changing the value to e.g. 1800M fixed it.
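In other words, a sketch of the corrected php.ini (values taken from the answer above):

; PHP's ini parser truncates "1.8G" to the integer 1G, so use integer shorthand
post_max_size = 1800M
upload_max_filesize = 1800M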
Please make sure to restart the Apache server afterwards with the command: service apache2 restart
I don't know about 5.3.x, but in 5.2.x there are some int/long issues in the PHP code. Even if you're on a 64-bit system and have a version of PHP compiled as 64-bit, there are several problems.
First, the code that converts post_max_size and the other settings from ASCII to integer stores the value in an int, so converting "9G" and putting the result into that int will mangle the value, because 9G is larger than a 32-bit variable can hold.
But there are also several other areas of PHP code that are used with the Apache module, CGI, etc. that need to be changed from int to long.
So, for this to work, you need to edit the PHP code and compile it by hand (make sure you compile it as 64-bit). Here's a link to a list of diffs:
http://www.archive.org/~tracey/downloads/patches/karmic-64bit-post-large-files.patch
Referenced from this php bug post: http://bugs.php.net/bug.php?id=44522
The file above is a diff against the 5.2.10 code, but I just made the changes by hand to the 5.2.17 code and uploaded a 3.4 GB single file through Apache/PHP (which hadn't worked before the change).
Hope that helps.
I figured out how to use HTTP and PHP to upload a 10 GB file.
php.ini:
post_max_size = 0
upload_max_filesize = 0
It works in PHP 5.3.10.
If you do not load the whole file into memory, memory_limit is irrelevant.
Maybe this comes from Apache's limit on the POST body size:
http://httpd.apache.org/docs/current/mod/core.html#limitrequestbody
It seems this 2 GB limit may be higher on 64-bit installations. And I'm not sure that setting this directive to 0 avoids the compile-time limit; see for example this thread:
http://ubuntuforums.org/archive/index.php/t-1385890.html
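A sketch of the corresponding directive in the Apache configuration (0 removes Apache's own limit, subject to the build caveats above):

# httpd.conf, a virtual host, or a directory context
LimitRequestBody 0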
Then don't forget to also raise max_input_time in PHP.
But you are reaching high limits :-) Maybe you could try a rich client (Flash? JS?) on the browser side, doing the transfer in chunks or some sort of FTP approach, with progress indicators for the user.
As phliKtid mentioned, this is a limitation with the PHP framework. Save for editing the source code as mentioned in the bug report phliKtid linked, there is a workaround that involves setting the upload_max_filesize to 0 in the php.ini file.
; Maximum allowed size for uploaded files.
; http://php.net/upload-max-filesize
upload_max_filesize = 0
By doing this, PHP will not crash when trying to convert "5G" into a 32-bit integer and you will be able to upload files as big as you allow with the "post_max_size" variable.
We've had the same problem: uploads stopped at 2GB.
Under SLES (SUSE Linux Enterprise Server) 11 SP 2, php53 was the problem.
Then we added a new repository that has php54:
http://download.opensuse.org/repositories/server:/php/SLE_11_SP2/
and after upgrading to that, we can now upload 5 GB :-)
