Propel ORM - migration - memory

I have this problem and I couldn't find much good information about it; what I did find wasn't useful. Here it goes: I started a project with Propel, created a first database with a basic table in it, and ran "php propel init". Everything worked fine. Then I needed another table, so I added it to schema.xml, but when I run any of the migration tools I get this error:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32768 bytes) in D:\Desenvolvimento\workspace\Login\vendor\propel\propel\bin\propel.php on line 1
It has been very frustrating; the internet seems to lack good information on Propel.

The problem was that I shouldn't have manually edited the database. Once it's created, the best thing to do is edit the schema.xml and let Propel's migration tools alter the database.
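For example, a new table can be declared entirely in schema.xml (a minimal sketch; the database, table, and column names here are hypothetical, not from the question):

<database name="login" defaultIdMethod="native">
  <table name="user">
    <column name="id" type="integer" primaryKey="true" autoIncrement="true"/>
    <column name="username" type="varchar" size="64" required="true"/>
  </table>
  <!-- added in the XML, never directly in the database -->
  <table name="login_attempt">
    <column name="id" type="integer" primaryKey="true" autoIncrement="true"/>
    <column name="user_id" type="integer" required="true"/>
    <foreign-key foreignTable="user">
      <reference local="user_id" foreign="id"/>
    </foreign-key>
  </table>
</database>

With Propel 2's CLI, running "propel diff" and then "propel migrate" should generate and apply the corresponding migration.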

Related

dispatch_io_read a socket will wait for more data if receiving data size is smaller than length

Hi, I am using dispatch_io_read with a socket in Swift 2 on Xcode 7 beta 3. It looks like the read will hang when the incoming data is smaller than the length I specified. For example, if I do
dispatch_io_read(channel!, 0, 1000, inputQueue!, myReadHandler)
and the data from the server is less than 1000 bytes, myReadHandler will never get called.
To work around this, I have to read bytes one by one. Is there a better solution?
Thanks.
This is probably a little late, but for anyone who has the same problem:
Apple's documentation says:
"The length parameter indicates the number of bytes that should be read from the I/O channel. Pass SIZE_MAX to keep reading until EOF is encountered (for a channel created from a disk-based file this happens when reading past the end of the physical file)."
So, simply using SIZE_MAX will read all the available data attached to the file descriptor.
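Applied to the call from the question, that would look something like this in Swift 2 (a sketch reusing the question's channel and handler; the Int(bitPattern:) conversion is an assumption about how the Swift overlay types the length parameter):

// Read until EOF instead of waiting for exactly 1000 bytes
dispatch_io_read(channel!, 0, Int(bitPattern: SIZE_MAX), inputQueue!, myReadHandler)

The handler is then invoked as data arrives, with done set once EOF is reached.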
Unfortunately, this seems to not work due to a bug in Swift 3 with DispatchIO.read().

Neo4j throws java heap exception while creating relationships

I did a bulk upload into Neo4j of two different files, say file "A" (containing 10,000 records) and file "B" (containing 9,000 records).
Now I have a third file, file "C", with 10 million records (rows).
File "C" describes the relationships between file "A" and file "B".
When processing starts for file "C", it throws a Java heap exception. I have 4 GB of RAM and increased the heap size up to 3 GB. However, if I reduce file "C" to 2 million records, it works fine.
I am using Neo4j version 1.9.
Please suggest why this happens and how to solve it.
Thanks in advance :-)
Are you doing this with Neo4j's normal API, or with the batch inserter? I'm assuming the normal API, and I'm assuming you're doing everything in one transaction. Either use the batch inserter, or break up your transactions: transactions are kept in memory until flushed to disk on commit, which is most likely what is causing your heap errors.
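A minimal sketch of the batched-transaction approach with the Neo4j 1.9 embedded API (the batch size, the Row type, and the node lookups are placeholders, not from the question):

import org.neo4j.graphdb.*;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase("data/graph.db");
Transaction tx = db.beginTx();
try {
    int count = 0;
    for (Row row : rowsOfFileC) {               // Row/rowsOfFileC: placeholders for your CSV parsing
        Node a = db.getNodeById(row.aId);       // however you resolve the endpoints
        Node b = db.getNodeById(row.bId);
        a.createRelationshipTo(b, DynamicRelationshipType.withName("RELATES_TO"));
        if (++count % 10000 == 0) {             // commit every 10k rows
            tx.success();
            tx.finish();                        // in 1.9, finish() commits and frees the state
            tx = db.beginTx();                  // fresh transaction for the next batch
        }
    }
    tx.success();
} finally {
    tx.finish();
}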

Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in

I am using phpBB; everything works fine, but I am getting the following error on a single page inside the admin panel.
Allowed memory size of 16777216 bytes exhausted (tried to allocate 78 bytes) in home/mytestsite/public_html/includes/template.php on line 458
How to fix this error?
As you can imagine, this error message occurs when PHP tries to use more memory than is available. I'm assuming that changing the code is not an option, but you CAN increase the amount of memory available to PHP.
To change the memory limit for one specific script, include a line such as this at the top of the script:
ini_set("memory_limit","20M");
The 20M (for example) sets the limit to 20 megabytes. If this does not work, keep increasing the memory limit until your script fits or your server squeals for mercy.
You can also make this a permanent change for all PHP scripts running on the server by adding a line such as this to the server’s php.ini file:
memory_limit = 20M
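If you can't edit php.ini (on shared hosting, for example) and the server runs Apache with mod_php, the same setting can usually go in an .htaccess file instead; whether this is allowed depends on the host's configuration:

php_value memory_limit 20M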
Hope this helps

CL_OUT_OF_RESOURCES for 2 million floats with 1GB VRAM?

It seems like 2 million floats should be no big deal: only 8 MB out of 1 GB of GPU RAM. I am able to allocate that much at times, and sometimes more than that, with no trouble. I get CL_OUT_OF_RESOURCES when I do a clEnqueueReadBuffer, which seems odd. Am I able to sniff out where the trouble really started? OpenCL shouldn't be failing like this at clEnqueueReadBuffer, right? It should have failed when I allocated the data, right? Is there some way to get more details than just the error code? It would be cool if I could see how much VRAM was allocated when OpenCL declared CL_OUT_OF_RESOURCES.
I just had the same problem you had (took me a whole day to fix).
I'm sure people with the same problem will stumble upon this, which is why I'm posting to this old question.
You probably didn't check the maximum work group size of the kernel.
This is how you do it:
size_t kernel_work_group_size; // largest work-group this kernel can use on this device
clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(size_t), &kernel_work_group_size, NULL);
My devices (2x NVIDIA GTX 460 & Intel i7 CPU) support a maximum work group size of 1024, but the above code returns something around 500 when I pass it my path-tracing kernel.
When I used a workgroup size of 1024 it obviously failed and gave me the CL_OUT_OF_RESOURCES error.
The more complex your kernel becomes, the smaller the maximum workgroup size for it will become (or that's at least what I experienced).
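So a safe pattern is to clamp the requested local size to what the kernel reports before enqueuing (a sketch; queue, kernel, and the sizes are illustrative, and the global size must remain a multiple of the clamped local size):

size_t global = 2097152;                 // 2M work items, as in the question
size_t local = 1024;                     // what we would like to use
if (local > kernel_work_group_size)
    local = kernel_work_group_size;      // respect CL_KERNEL_WORK_GROUP_SIZE
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, NULL);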
Edit:
I just realized you said "clEnqueueReadBuffer" instead of "clEnqueueNDRangeKernel"...
My answer was related to clEnqueueNDRangeKernel.
Sorry for the mistake.
I hope this is still useful to other people.
From another source:
- calling clFinish() gets you the error status for the calculation (rather than getting it when you try to read data).
- the "out of resources" error can also be caused by a 5s timeout if the (NVidia) card is also being used as a display
- it can also appear when you have pointer errors in your kernel.
A follow-up suggests running the kernel first on the CPU to ensure you're not making out-of-bounds memory accesses.
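Putting the clFinish() point above into code, a sketch of forcing the execution status to surface before the read (names are illustrative):

cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, NULL);
if (err == CL_SUCCESS)
    err = clFinish(queue);               // blocks until the kernel finishes and reports its status
// check err here, before calling clEnqueueReadBuffer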
Not all available memory can necessarily be supplied to a single allocation request. Read up on heap fragmentation to learn more about why the largest allocation that can succeed is for the largest contiguous block of memory, and how blocks get divided into smaller pieces as a result of using the memory.
It's not that the resource is exhausted... It just can't find a single piece big enough to satisfy your request...
Out-of-bounds accesses in a kernel are typically silent, since there is still no error at the kernel queueing call.
However, if you try to read the kernel result later with clEnqueueReadBuffer(), the error will show up. It indicates something went wrong during kernel execution.
Check your kernel code for out-of-bounds read/writes.
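The usual defensive pattern inside the kernel looks like this (a sketch; buf and n are placeholder names). Each work item checks its global id against the real data size, since the global size is often rounded up past it:

__kernel void scale(__global float *buf, const unsigned int n) {
    size_t gid = get_global_id(0);
    if (gid < n)                 // skip the padding work items
        buf[gid] *= 2.0f;
}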

Address Error in Assembly (ColdFire MCF5307)

Taking my first course in assembly language, I am frustrated with cryptic error messages during debugging... I acknowledge that the following information will not be enough to find the cause of the problem (given my limited understanding of assembly language and the ColdFire (MCF5307, M68K family)), but I will gladly take any advice.
...
jsr out_string
Address Error (format 0x04 vector 0x03 fault status 0x1 status reg 0x2700)
I found a similar question at http://forums.freescale.com/freescale/board/message?board.id=CFCOMM&thread.id=271, regarding ADDRESS ERROR in general.
The answer to the question states that the address error is because the code is "incorrectly" trying to execute on a non-aligned boundary (or accessing non-aligned memory).
So my questions will be:
What does it mean to "incorrectly" try to execute on a non-aligned boundary, or access non-aligned memory? An example would help a lot.
What is a non-aligned boundary/memory?
How would you approach fixing this problem, assuming you have little debugging experience (e.g. with breakpoints and tracing)?
First of all, it is possible that this isn't the instruction causing the error. Be sure to check whether the previous or next instruction could have caused it. However, assuming that exception handlers and debuggers have improved:
An alignment exception occurs when, say, 16- or 32-bit data is accessed at an address that is not a multiple of 2. For example, if variable x is a 32-bit value that ends up at an odd address because it follows a byte-sized constant:
const1: dc.b someconstant
x: dc.l someotherconstant
Then the instruction
move.l x,%d0
would cause a data alignment fault on a 68000 (and 68010, IIRC). The 68020 eliminated this restriction and performs the unaligned access, but at the cost of decreased performance. I'm not aware of the jsr (jump to subroutine) instruction requiring alignment, but it's not unreasonable and it's easy to arrange: before each function, insert the assembler's alignment directive:
.align long
func: ...
It has been a long time since I've used a 68K family processor, but I can give you some hints.
Trying to execute on an unaligned boundary means executing code at an odd address, for example if out_string were at an address with the low bit set.
The same holds true for data accesses of 2- or 4-byte data. I'm not sure whether the ColdFire supports word accesses to odd memory addresses, but the later 68K family members (68020 and up) did.
The address error occurs on the instruction that causes the error in all cases.
Find out what instruction is there. If the PC matches (or is close), then it is an unaligned execution. If it is a memory access, e.g. move.w d0,(a0), then check what address is being read or written; in this case, the one pointed at by a0.
I just wanted to add that this is very good stuff to figure out. I program high-end medical imaging devices in my day job, but occasionally I need to get down to this level. I have found and fixed more than one COTS OS problem by being able to track down just this sort of problem.
