Question on MPI with Fortran: how to broadcast data to shared memory?

I am working on a Fortran MPI code for ray tracing.
The issue is that the code has a large array that is needed by every process.
The array is read-only for all processes during a time step,
and it is updated at the beginning of the next time step.
The current code has one process read the array from an external file and then simply broadcast it to all processes, so every process holds its own copy of the array.
To save some memory, I was planning to use OpenMP, but the question "MPI Fortran code: how to share data on node via openMP?" shows that MPI 3.0 shared memory can do the trick.
Unfortunately, the answers to that question do not include an example of how to broadcast the array into the shared memory on each node.
Could anyone provide an example?
Thanks in advance!
EDIT: As suggested by Vladimir, I have included more details of the code structure.
Below is pseudo-code showing the current treatment in the code:
allocate(largearray(size))
do i = 1, 10
   !!! Update largearray at the start of every time step
   if (myrank == 0) then
      ! ~~~ read largearray from file ~~~
      call MPI_BCAST(largearray, size, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
      do while (.true.)
         call MPI_RECV(results, ...)          ! collect results from the workers
         call collector(results, final_product)
         exit
      end do
      call output(final_product, i)
   else
      call MPI_BCAST(largearray, size, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
      call radiation(largearray, beams)
      call raytracing(beams, largearray, results)
      call MPI_SEND(results, ..., 0, ...)     ! send results to rank 0
   end if
end do
deallocate(largearray)
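
For what it's worth, here is a minimal sketch of the MPI 3.0 shared-memory window approach from the linked question. It is written in C (the Fortran calls are the same, with the base pointer returned by MPI_Win_allocate_shared / MPI_Win_shared_query converted to a Fortran array via c_f_pointer), and the array size is a placeholder:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* One communicator per shared-memory node. */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    int node_rank;
    MPI_Comm_rank(nodecomm, &node_rank);

    /* One communicator containing the first rank of every node,
       used to broadcast the freshly read data between nodes. */
    MPI_Comm headcomm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &headcomm);

    /* Allocate the big array once per node: only the node head asks
       for memory, every other rank asks for a zero-size segment. */
    const int n = 1000000;                                   /* placeholder size */
    MPI_Aint winsize = (node_rank == 0) ? (MPI_Aint)n * sizeof(double) : 0;
    double *largearray = NULL;
    MPI_Win win;
    MPI_Win_allocate_shared(winsize, (int)sizeof(double), MPI_INFO_NULL,
                            nodecomm, &largearray, &win);

    /* Non-head ranks query where the head's segment lives. */
    if (node_rank != 0) {
        MPI_Aint qsize;
        int disp_unit;
        MPI_Win_shared_query(win, 0, &qsize, &disp_unit, &largearray);
    }

    for (int step = 0; step < 10; ++step) {
        MPI_Win_fence(0, win);
        if (node_rank == 0) {
            if (world_rank == 0) {
                /* ... read largearray from the external file ... */
            }
            /* Node heads receive the data and write it straight into
               the shared segment on their node. */
            MPI_Bcast(largearray, n, MPI_DOUBLE, 0, headcomm);
        }
        MPI_Win_fence(0, win);   /* every rank on the node now sees the update */

        /* ... radiation / raytracing using largearray (read-only) ... */
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Each process still calls radiation and raytracing on largearray exactly as before, but there is now one copy of the array per node instead of one per process.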

Related

Lua: run code while waiting for input

I'm currently working on a Lua program.
I want to use it in Minecraft with a mod called "OpenComputers", which allows the use of Lua scripts on emulated systems.
The program I'm working on is relatively simple: you have a console and you enter a command to control a machine.
It looks like this:
while true do
   io.write("Enter command\n>")
   cmd = io.read()
   -- running code to process the command
end
But the problem is: I need a routine running in the background which checks data given by the machine.
while true do
   -- checking and reacting
end
How can I make this work?
I can't jump to a coroutine while waiting on io.read().
It's not enough to check only after someone enters a command (sometimes I don't use it for days, but I still have to keep an eye on the machine).
I'm relatively new to Lua, so please try to give a simple solution and - if possible - one that does not rely on third-party tools.
Thank you :)
If you have some experience with OpenComputers, you can add an (asynchronous) listener for "key_down" and store the user input in a string (or whatever you want).
For example:
local event = require("event")   -- OpenComputers event API

local userstr = ""

function keyPressed(event_name, player_uuid, ascii)
   local c = string.char(ascii)
   if c == '\n' then
      print(userstr)
      userstr = ""
   else
      userstr = userstr .. c
   end
   -- stores keys typed by the user and prints them as a string when you press enter
end

event.register("key_down", keyPressed)
Running multiple tasks is a very broad problem solved by the operating system, not by something as simple as the Lua interpreter. It is solved at a level much deeper than io.read and deals with troubles numerous enough to fill a couple of books. For a Lua VM instead of a physical computer it may be simpler, but it would still require delving deep into how the letters of your code are turned into operations performed by the computer.
That mod of yours seems to already emulate OS functionality for you: 1, 2. I believe you'll be better off making use of the provided functionality.

Hacking Lua - Inject new functions into built Lua

I am trying to hack a game (not for cheating though) by introducing new built-in methods and functions in order to communicate with the game using sockets. Here is a small "pseudo code" example of what I want to accomplish:
Inside the Lua code I am calling my_hack() and pass the current game state:
GameState = {}
-- Game state object to be passed on
function GameState:new()
   -- Data
end

local gameState = GameState:new()
-- Collect game state data and pass it to 'my_hack' ..
my_hack(gameState)
and inside my_hack the object is getting sent away:
int my_hack(lua_State *l)
{
    const void *gameState = lua_topointer(l, 1);
    // Send the game state:
    socket->send_data(gameState);
    return 0;
}
Now, the big question is how to introduce my_hack() to the game?
I assume that all built-in functions must be kept in some sort of lookup table. Since all Lua code is interpreted, functions like import etc. have to be statically available, right? If that is correct, then it should be "enough" to find out where this code resides in order to smuggle my own code into the game, which would allow me to call my_hack() in a Lua script.
There should be two options: either the Lua build is embedded inside the executable and is completely static, or all Lua code gets loaded dynamically from a DLL.
This question goes out to anybody who has the slightest clue about where and how I should keep looking for the built-in functions. I've tried a few things with Cheat Engine but I wasn't too successful. I was able to cheat a bit ^^ but that's not what I'm looking for.
Sorry for not providing a full answer, but if you can build a custom Lua VM and change the standard libraries, you should be able to change the luaL_openlibs function in the Lua source to register a table with my_hack() inside of it.
Since the Lua interpreter is usually statically compiled into the host executable, modifying the interpreter in some way will probably not be possible.
I think your best bet is to find some piece of Lua code which gets called by the host, and from that file use dofile to run your own code.
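For completeness, here is a minimal sketch of what injecting the function looks like once you can execute native code with access to the game's lua_State (whether by patching luaL_openlibs in a custom build or by having a dofile'd script call into your DLL). The install_my_hack entry point and the socket call are hypothetical placeholders:

#include <lua.h>
#include <lauxlib.h>

/* The C function exposed to Lua scripts. */
static int my_hack(lua_State *L)
{
    const void *gameState = lua_topointer(L, 1);
    /* ... serialize and send gameState over the socket here ... */
    (void)gameState;
    return 0;   /* number of values returned to Lua */
}

/* Call this once with the game's lua_State; afterwards any script
   running in that state can simply call my_hack(...). */
void install_my_hack(lua_State *L)
{
    lua_register(L, "my_hack", my_hack);   /* pushcfunction + setglobal */
}

Note that lua_topointer only gives you an opaque address for the table; to actually ship the game state you would walk the table with lua_next (or have the Lua side serialize it to a string) rather than sending the raw pointer.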

Python C Extension - Memory Leak despite Refcount = 1 on returned PyObjects

I'm repeatedly calling a Python module I wrote in C++ using the Python C API. My Python program repeatedly calls my module's pyParse function, which does a bunch of stuff and returns a PyTuple that contains more PyTuple objects as elements. Every returned object ends up with a PyObject->refcnt of 1, so the object should be deleted when it goes out of scope in Python. I repeatedly call this module with something like the following Python code:
import my_module  # this is my C++ module
import os

path = 'C:/data/'
for filename in os.listdir(path):
    data = my_module.pyParse(path + filename)
The longer this loop runs, the more the memory usage blows up. Every iteration produces about 2 kB of tuples (which should be destroyed at the end of every iteration). Yet when I take "heap snapshots" and compare an early one with one taken many iterations later, I can see that the memory allocated by PyTuple_New and other Python object allocators keeps growing.
Because every returned object has a reference count of 1, I would expect it to be destroyed after it goes out of scope in Python. Finally, my program ends with a read access violation in a random part of the code. Is there something I am missing? Does anyone know how to debug this and get a better handle on what's going on? I'm desperate!
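Since the extension source isn't shown, here is only a hedged sketch (build_row and append_row are hypothetical names) of the two ownership rules that most often cause exactly this pattern - steady growth plus an eventual access violation:

#include <Python.h>

/* Hypothetical row builder. PyTuple_SetItem STEALS the reference to the
   inserted object, so no extra Py_DECREF is needed for tuple items. */
static PyObject *build_row(const char *text, long value)
{
    PyObject *row = PyTuple_New(2);
    if (row == NULL)
        return NULL;
    PyTuple_SetItem(row, 0, PyUnicode_FromString(text));
    PyTuple_SetItem(row, 1, PyLong_FromLong(value));
    return row;   /* refcount 1, ownership passes to the caller */
}

/* PyList_Append (like PyDict_SetItem) does NOT steal the reference.
   Forgetting the Py_DECREF below leaks one row, and everything the row
   references, on every call: exactly the slow growth seen in heap snapshots. */
static int append_row(PyObject *list, const char *text, long value)
{
    PyObject *row = build_row(text, value);
    if (row == NULL)
        return -1;
    int rc = PyList_Append(list, row);
    Py_DECREF(row);   /* drop our reference; the list keeps its own */
    return rc;
}

Note that a reference count of 1 on the returned outer tuple only guarantees that the outer tuple itself is freed; if an extra reference is still held on one of the inner objects, that object and everything it references stay alive. Conversely, an over-eager Py_DECREF frees an object Python still uses, which is a typical source of a random read access violation.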

Capturing stdout in Objective C

I am using C code inside Objective-C and I want to capture stdout and display it in a UIView instead of the console.
Here is the line I'm talking about:
print(stdout, v=toplevel_eval(v));
Other than that you are writing in C, I have no idea how much you know about C, "Unix" and Cocoa I/O - so some of this you may already know.
Here is one possible solution, it looks more complicated than it is.
Reading:
You need to understand the system calls pipe, dup2 and read.
You need to understand the GCD function dispatch_async and how to obtain a GCD queue.
pipe and dup2 are often used in conjunction with fork and exec to launch a process and write/read to/from that process standard input/output. What you will be doing uses some of the same basic ideas so looking up examples of this common pattern will help you understand how these calls work. Here are some notes from a University: Pipe, Fork, Exec and Related Topics.
Outline:
Using dispatch_async, schedule a block to handle the reading and writing of the data. The block will:
Use pipe to create a pipe and dup2 to connect stdout - file descriptor 1 - to the pipe's write end.
Enter a loop which uses read to obtain the available data from the pipe. The data read will be in a byte array.
Within the loop, convert the read bytes into an NSString.
Within the loop, append that string to your view - you must do this on the main thread as it involves the UI, and you can do that using another dispatch_async specifying the main queue.
That is it. Your block will now execute concurrently in the background reading whatever your C code writes to the standard output and adding it to your view.
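In case it helps, here is a minimal C sketch of the pipe/dup2/read part under those assumptions; redirect_stdout and drain are just illustrative names, and the GCD/NSString steps are indicated by comments:

#include <stdio.h>
#include <unistd.h>

/* Redirect stdout into a pipe and return the read end of that pipe. */
static int redirect_stdout(void)
{
    int fds[2];
    if (pipe(fds) == -1)
        return -1;
    setvbuf(stdout, NULL, _IONBF, 0);   /* unbuffer stdout so writes appear immediately */
    dup2(fds[1], STDOUT_FILENO);        /* fd 1 now writes into the pipe */
    close(fds[1]);
    return fds[0];
}

/* Reader loop: run this body inside the dispatch_async'ed block on a
   background queue; it blocks in read until new output arrives. */
static void drain(int readfd)
{
    char buf[4096];
    ssize_t n;
    while ((n = read(readfd, buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        /* convert buf to an NSString and dispatch_async it to the
           main queue to append it to the UIView */
    }
}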
If you get stuck you can ask a new question showing the code you have written and describing what doesn't work.
HTH

Sleep from within an Informix SPL procedure

What's the best way to do the semantic equivalent of the traditional sleep() system call from within an Informix SPL routine? In other words, simply "pause" for N seconds (or milliseconds or whatever, but seconds are fine). I'm looking for a solution that does not involve linking some new (perhaps written by me) C code or other library into the Informix server. This has to be something I can do purely from SPL. A solution for IDS 10 or 11 would be fine.
#RET - The "obvious" answer wasn't obvious to me! I didn't know about the SYSTEM command. Thank you! (And yes, I'm the guy you think I am.)
Yes, it's for debugging purposes only. Unfortunately, CURRENT within an SPL will always return the same value, set at the entry to the call:
"any call to CURRENT from inside the SPL function that an EXECUTE FUNCTION (or EXECUTE PROCEDURE) statement invokes returns the value of the system clock when the SPL function starts."
—IBM Informix Guide to SQL
Wrapping CURRENT in its own subroutine does not help. You do get a different answer on the first call to your wrapper (provided you're using YEAR TO FRACTION(5) or some other type with high enough resolution to show the difference), but then you get that same value back on every subsequent call, which ensures that any sort of loop will never terminate.
There must be some good reason you're not wanting the obvious answer:
SYSTEM "sleep 5". If all you're wanting is for the SPL to pause while you check various values etc, here are a couple of thoughts (all of which are utter hacks, of course):
Make the TRACE FILE a named pipe (assuming Unix back-end), so it blocks until you choose to read from it, or
Create another table that your SPL polls for a particular entry from a WHILE loop, and insert said row from elsewhere (horribly inefficient)
Make SET LOCK MODE your friend: execute "SET LOCK MODE TO WAIT n" and deliberately requery a table you're already holding a cursor open on. You'll need to wrap this in an EXCEPTION handler, of course.
Hope that is some help (and if you're the same JS of Ars and Rose::DB fame, it's the least I could do ;-)
I'm aware that this answer is late. However, I recently encountered the same problem and this page shows up as the first result, so a new answer here may benefit other people.
The perfect solution was found by Eric Herber and published in April 2012 as "How to sleep (or yield) for a fixed time in a stored procedure".
Unfortunately that site is down.
His solution is to use the following function:
integer sysadmin:yieldn( integer nseconds )
I assume that you want this "pause" for debugging purposes; otherwise, think about it: you'll always have better tasks for your server to do than sleep ...
A suggestion: maybe you could get CURRENT, add a few seconds to it (call it mytimestamp), then in a while loop select CURRENT while CURRENT <= mytimestamp. I have no Informix setup around my desk to try it, so you'll have to figure out the correct syntax. Again, do not put such a hack on a production server. You've been warned :D
Then you'll have to wrap CURRENT in another function that you'll call from the first (but this is a hack on top of the previous hack ...).
