How to use EXPORT / IMPORT TO MEMORY

I'm trying to export a value for an enhancement inside a BAPI call that is executed in a background update task. The value is not read inside the enhancement; however, if I run a local test, the value is read from memory correctly.
Any ideas as to why this doesn't work?
This is my code:
Export program:
DATA: lv_id TYPE char30.
CONCATENATE 'ZTCODE' sy-uname
INTO lv_id.
EXPORT ztcode FROM iv_tcode TO MEMORY ID lv_id.
Import program (inside the enhancement):
DATA: lv_tcode TYPE sy-tcode,
lv_id TYPE char30.
CONCATENATE 'ZTCODE' sy-uname
INTO lv_id.
IMPORT ztcode TO lv_tcode FROM MEMORY ID lv_id.

Apparently you are trying to transfer data from a dialog (user) session to a background/update session. This won't work using ABAP memory. Check the documentation on ABAP Memory Organization: an RFC call or an update module runs in a different user session, which has its own ABAP memory. If you need to hand a value across that boundary, pass it to the update function module as a parameter, or persist it instead, e.g. with EXPORT ... TO DATABASE.


SystemVerilog: String to Circuit Net

Assuming that I set an environment variable before launching a logic simulation of my circuit (wrapped in a testbench written in SystemVerilog), I want to check whether it is possible to read the variable and map it to a net of the circuit.
For instance:
#### from the bash script that invokes the logic simulator ####
export NET_A=tb_top.module_a.submodule_b.n1
//// inside the tb_top in system verilog ////
import "DPI-C" function string getenv(input string env_name);
always_ff @(posedge clk, negedge rst_n) begin
if (getenv("NET_A") == 1'b1) begin
$display("Hello! %s has the value 1", getenv("NET_A"));
end
end
In the example above I simply want to check whether the net referenced by NET_A is assigned the logic value 1 at a certain point in the simulation.
Thanks in advance
SystemVerilog has a C-based API, the Verilog Procedural Interface (VPI), that gives you access to a simulator's database. There are routines like vpi_handle_by_name, which gives you a handle to a signal looked up by a string name, and vpi_get_value, which gives you the current value of that signal.
Use of the VPI needs quite a bit of additional knowledge, and many simulators give you built-in routines that handle this common application without having to break into C code. In ModelSim/Questa, it is called Signal Spy.
But regardless of whether you use the VPI or tool-specific routines, looking up a signal by string name has severe performance implications, because it prevents many optimizations: unless a signal represents a storage element, the simulator usually does not keep its value around for queries.
It would be much better to use the signal path name directly:
vlog ... +define+NET_A=tb_top.module_a.submodule_b.n1
Then, in your code:
if (`NET_A == 1'b1) begin

Where exactly in IPFS.create() or IPFS.add() does my node propagate an updated distributed hash table upon adding a file?

Original question: Does the IPFS.add() method automatically update my local DHT and propagate it to other peers?
In order to test whether the IPFS.add() method alone allows other peers to download content from my PC, I ran this script on my Windows PC:
import * as IPFS from "ipfs";
const node = await IPFS.create();
var file = await node.add("Shiiiiiiiiiiitttt");
and ran this code on my MacBook to fetch the file:
import * as IPFS from "ipfs";
const node = await IPFS.create();
//fetching Shiiiiiiiiiiitttt
const stream = node.cat("QmetK5x9nLUG5jDwp7Un25n47exuNjDZ3cKvnKfC6Hebmi")
const decoder = new TextDecoder()
let data = ''
for await (const chunk of stream) {
// chunks of data are returned as a Uint8Array, convert it back to a string
data += decoder.decode(chunk, { stream: true })
console.log("decoding")
}
// In the end, as long as IPFS kept running on the owner node, there was no need to call ipfs.dht.provide
console.log(data)
What I found out through this test was that as long as I keep running jsipfs daemon (or the script itself) on the Windows PC that adds the file or text, I can retrieve it from other devices using IPFS.cat(). This confuses me deeply, since IPFS also has a separate method, IPFS.dht.provide(), and my current understanding is that an updated DHT has to be propagated to other peers before they can fetch a file. From my test, however, I can only conclude that something within IPFS.add() must propagate an updated distributed hash table, or at least some similar mechanism, to other peers so that they know I have the file. I'm having a very difficult time finding the source methods for this automatic DHT propagation upon adding a file, and would appreciate any help finding said methods, or an under-the-hood explanation of what happens during IPFS.add().
Check out the IPFS docs (https://docs.ipfs.tech/concepts/), in particular the pages on content routing; they are far more important than you might first think. Roughly speaking, IPFS.add() only writes the blocks into your local repo; the announcement happens separately: while your node stays online, peers that ask for the CID can fetch the blocks from you over Bitswap, and provider records are published and republished in the background rather than once per add() call, which is why no explicit ipfs.dht.provide() was needed in your test.

Override dask scheduler to concurrently load data on multiple workers

I want to run graphs/futures on my distributed cluster which all have a 'load data' root task and then a bunch of training tasks that run on that data. A simplified version would look like this:
from dask.distributed import Client
client = Client(scheduler_ip)
load_data_future = client.submit(load_data_func, 'path/to/data/')
train_task_futures = [client.submit(train_func, load_data_future, params)
                      for params in train_param_set]
Running this as above, the scheduler gets one worker to read the file, and that worker then spills the data to disk to share it with the other workers. However, loading the data usually means reading from a large HDF5 file, which can be done concurrently, so I was wondering if there is a way to force all workers to read the file concurrently (each of them computing the root task) instead of having them wait for one worker to finish and then slowly transfer the data from that worker.
I know there is the client.run() method, which I can use to make all workers read the file concurrently, but how would you then feed the data you've read into the downstream tasks?
I cannot use the dask data primitives to concurrently read HDF5 files because I need things like multi-indexes and grouping on multiple columns.
Revisited this question and found a relatively simple solution, though it uses internal API methods and involves a blocking call to client.run(). Using the same variables as in the question:
from distributed import get_worker

client_id = client.id

def load_dataset():
    worker = get_worker()
    data = {'load_dataset-0': load_data_func('path/to/data')}
    info = worker.update_data(data=data, report=False)
    worker.scheduler.update_data(
        who_has={key: [worker.address] for key in data},
        nbytes=info['nbytes'], client=client_id)

client.run(load_dataset)
Now if you run client.has_what() you should see that each worker holds the key load_dataset-0. To use this in downstream computations you can simply create a future for the key:
from distributed import Future
load_data_future = Future('load_dataset-0', client=client)
and this can be used with client.compute() or dask.delayed as usual. Indeed the final line from the example in the question would work fine:
train_task_futures = [client.submit(train_func, load_data_future, params)
                      for params in train_param_set]
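To illustrate, a minimal sketch of the dask.delayed route (train_func and train_param_set as in the question):
import dask

# Build delayed tasks that reuse the per-worker dataset future,
# then submit them all to the cluster at once
delayed_tasks = [dask.delayed(train_func)(load_data_future, params)
                 for params in train_param_set]
train_task_futures = client.compute(delayed_tasks)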
Bear in mind that this relies on the internal API methods Worker.update_data and Scheduler.update_data; it works fine as of distributed.__version__ == 1.21.6 but could be subject to change in future releases.
As of today (distributed.__version__ == 1.20.2) what you ask for is not possible. The closest thing would be to compute once and then replicate the data explicitly:
from dask.distributed import wait

future = client.submit(load, path)
wait(future)              # block until the data has been loaded on one worker
client.replicate(future)  # then copy the result to every worker
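If the dataset can pass through the client first, a related built-in option (a sketch, not part of the original answer) is scatter with broadcast=True, which copies a local object to every worker:
data = load_data_func('path/to/data/')                   # runs on the client
load_data_future = client.scatter(data, broadcast=True)  # replicate to all workers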
You may want to raise this as a feature request at https://github.com/dask/distributed/issues/new

LabVIEW and Keithley 2635A - Unable to read data

I'm using LabVIEW and its VISA capabilities to control a Keithley 2635A source meter. Whenever I try to identify the device, it works just fine, both in reading and writing.
viWRITE(*IDN?) /* VISA subVI to send the command to the machine */
viREAD /* VISA subVI to read output */
However, when I set the voltage (or current), the instrument does it, but when I then send the command to perform a measurement, I'm not able to read the data back, and I get the error
VISA: (Hex 0xBFFF0015) Timeout expired before operation completed.
After that, I cannot read the *IDN? output anymore either.
The source meter is connected to the PC via a National Instrument GPIB-USB-HS adaptor.
EDIT: I forgot to add, this happens in the VISA Interactive Control program as well.
OK, apparently the documentation is not very clear. What the smua.measure.X() command (where X is the needed parameter) does is, of course, write the measurement outcome to a buffer. In order to read that buffer, however, a simple viREAD[] is not sufficient.
So basically the answer was to simply add a print command: this way I have
viWRITE[print(smua.measure.X())];
viREAD[]
And I don't have the error anymore. Not sure why such a command is needed, but that's that. Thank you all for your time answering me.
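For reference, a minimal Python/pyvisa sketch of the same fix (the GPIB address is hypothetical; smua.measure.v() is the TSP call for a voltage reading):
import pyvisa

rm = pyvisa.ResourceManager()
keithley = rm.open_resource('GPIB0::26::INSTR')  # hypothetical address

# TSP commands execute silently on the 2635A; wrapping the call in
# print() pushes the result into the output buffer so a read succeeds
reading = keithley.query('print(smua.measure.v())')
print(float(reading))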
As @Tom Blodget mentions in the comments, the machine may not have any response to read after you set the voltage. The *IDN? string is both command and query: you write the command *IDN? and read the result. Some commands have no response to read at all. Here's a quick test to see whether you should be reading from the instrument. The following code is in Python (sketched here with pyvisa; I made up the GPIB command that sets the voltage):
import pyvisa

rm = pyvisa.ResourceManager()
sm = rm.open_resource('GPIB0::26::INSTR')  # example address

# Prints out IDN
print(sm.query('*IDN?'))
# Prints out current voltage (change this to your actual command)
print(sm.query('SOUR:VOLT?'))
# Set a new voltage (a write only; there is nothing to read back)
sm.write('SOUR:VOLT 1V')
# Read the new voltage
print(sm.query('SOUR:VOLT?'))
Note that question-marked GPIB commands and query(...) are used when you expect a response from the instrument; the instrument won't give a response to a plain write command. query is a combination of write(...) and read(...). If you're using LabVIEW, you may have to do the write and the read separately.
If you need verification that the machine received your instruction and acted on it, most instruments support the following common commands (a short sketch follows the list):
*OPC? query to see if the operation is complete
SYST:ERR? query to see if any error was generated
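Continuing the pyvisa sketch above (the voltage command is still the made-up example):
sm.write('SOUR:VOLT 1V')      # the set command to verify
print(sm.query('*OPC?'))      # returns '1' once the operation is complete
print(sm.query('SYST:ERR?'))  # e.g. 0,"No error" if nothing went wrong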
Add a question mark ? to the end of the GPIB command used to set the voltage.

Dart Language: how to convert a String into a Transferable (ByteBuffer)

I'll be using window.postMessage("", "*", [transferableData]) to send data between two browser windows. However, I haven't found any straight answer on how to convert types into Transferables.
So, in order to start learning this, it would be great to know how to convert a simple String into a Transferable (ByteBuffer) and read it on the other side (the side that receives the message with the data). This would help me solve my problem and learn about the concept.
IMPORTANT UPDATE:
This question led me here: Dart Language: printing reports
Transferable objects are not yet implemented in the Dart VM (http://dartbug.com/4149). That means that if you run your application in Dartium (Dart VM), the other window will receive and process the first argument of postMessage, not the Transferable object. JavaScript, however, does the job: the object gets transferred and the original array is emptied.
import 'dart:convert';
import 'dart:typed_data';

var list = UTF8.encode('xxx');
var data = list is Uint8List ? list.buffer : new Uint8List.fromList(list).buffer;
To send the data using window.postMessage, use
window.postMessage({'data': data}, '*', [data]);
and read it on the receiver side like this:
var string = UTF8.decode(message.data['data']);
See also http://dartbug.com/19968 for the status of transferables.
The recent Dart dev channel release already ships with Dartium 38.xxx as far as I know.
Here is a small test case for transferables: https://code.google.com/p/dart/source/browse/branches/bleeding_edge/dart/tests/html/transferables_test.dart
