Update: I have also seen documentation and discussions claiming that Electron always uses the discrete GPU, but that is not the case; at the moment it always uses the integrated one.
I need my Electron.js app to use the discrete GPU when both an integrated and a discrete GPU are present. How can I force this in Electron?
In C++ it can be done like this:
extern "C"
{
__declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
How can I do that in Electron.js?
With current Electron.js/WebGL, there is no mechanism to enforce this. However, you shouldn't need to, because running on the discrete GPU is the default.
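That said, one renderer-side hint that may be worth trying is the standard WebGL powerPreference context attribute. This is only a sketch: it is a hint the platform and driver may ignore, not an Electron API, and it assumes your page has a canvas element to query:

const canvas = document.querySelector('canvas');
// Ask Chromium for the high-performance GPU when creating the WebGL context.
// Whether the OS actually switches to the discrete GPU depends on the
// platform, driver, and power settings; this is a request, not a guarantee.
const gl =
  canvas.getContext('webgl2', { powerPreference: 'high-performance' }) ||
  canvas.getContext('webgl', { powerPreference: 'high-performance' });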
I figured out that you can silently restart the app with a special Windows environment variable set, which forces the process to use the dedicated GPU.
const { spawn } = require('child_process');

// Relaunch the app once with the environment variable that forces Windows
// to run the process on the dedicated GPU.
if (process.env.GPUSET !== 'true') {
  spawn(process.execPath, process.argv, {
    env: {
      ...process.env,
      SHIM_MCCOMPAT: '0x800000001', // forces Windows to use the dedicated GPU for this process
      GPUSET: 'true', // marker so the relaunched instance doesn't restart itself again
    },
    detached: true,
  });
  process.exit(0);
}
I'm using Rollup to bundle the build script for one of my projects. In the builds (their source is TypeScript), I use Node's worker_threads module to parallelize some work, and I import isMainThread (a boolean from the worker_threads module) to decide whether to run the worker logic or the main-thread logic. However, when building the build script from its source, Rollup removes the else statement. It seems to evaluate isMainThread during its tree-shaking process, decide that isMainThread will always be true, and conclude that the else statement isn't needed. How can I change that behaviour?
Source:
if (isMainThread) {
  const { dev, watch } = getOptions(process.argv);
  if (watch) watcher(dev);
  else singleBuild(dev);
} else {
  worker();
}
Output:
if (isMainThread) {
  const { dev, watch } = getOptions(process.argv);
  if (watch)
    watcher();
  else
    singleBuild();
}
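One workaround sketch (an assumption on my part, not a confirmed fix): if the branch removal happens because the bundler resolves worker_threads itself, marking the module as external in rollup.config.js should leave isMainThread as a runtime value that tree-shaking cannot evaluate. The input/output paths and the omitted plugin list are placeholders:

// rollup.config.js (sketch)
export default {
  input: 'src/build.ts',
  // Keep the Node built-in external so the bundler cannot inline or
  // statically evaluate isMainThread at build time.
  external: ['worker_threads'],
  output: { file: 'dist/build.js', format: 'cjs' },
};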
Our application is an ASP.NET Core 2.0 WebAPI deployed in Linux Docker containers and running in Kubernetes.
During load testing, we discovered intermittent spikes in CPU usage that our application would never recover from.
We used perfcollect to collect traces from a container so that we could compare a successful test with a test that showed CPU spikes. We discovered that around 75% of the CPU time in the failing test was spent in JIT_MonReliableEnter_Portable, a runtime helper for lock (monitor) acquisition. The caller was System.Diagnostics.TraceSource.dll.
Our application was ported from .NET Framework and contained a lot of calls to System.Diagnostics.Trace.WriteLine(). When we removed all of these, our CPU/memory usage dropped by more than 50% and we no longer see the CPU spikes.
I want to understand the cause of this issue.
In the corefx repo, I can see that a default trace listener is setup in TraceInternal.cs:
public static TraceListenerCollection Listeners
{
    get
    {
        InitializeSettings();
        if (s_listeners == null)
        {
            lock (critSec)
            {
                if (s_listeners == null)
                {
                    // In the absence of config support, the listeners by default add
                    // DefaultTraceListener to the listener collection.
                    s_listeners = new TraceListenerCollection();
                    TraceListener defaultListener = new DefaultTraceListener();
                    defaultListener.IndentLevel = t_indentLevel;
                    defaultListener.IndentSize = s_indentSize;
                    s_listeners.Add(defaultListener);
                }
            }
        }
        return s_listeners;
    }
}
I can see that DefaultTraceListener.cs calls Debug.Write():
private void Write(string message, bool useLogFile)
{
    if (NeedIndent)
        WriteIndent();

    // really huge messages mess up both VS and dbmon, so we chop it up into
    // reasonable chunks if it's too big
    if (message == null || message.Length <= InternalWriteSize)
    {
        Debug.Write(message);
    }
    else
    {
        int offset;
        for (offset = 0; offset < message.Length - InternalWriteSize; offset += InternalWriteSize)
        {
            Debug.Write(message.Substring(offset, InternalWriteSize));
        }
        Debug.Write(message.Substring(offset));
    }

    if (useLogFile && !string.IsNullOrEmpty(LogFileName))
        WriteToLogFile(message);
}
In Debug.Unix.cs, I can see that there is a call to SysLog:
private static void WriteToDebugger(string message)
{
    if (Debugger.IsLogging())
    {
        Debugger.Log(0, null, message);
    }
    else
    {
        Interop.Sys.SysLog(Interop.Sys.SysLogPriority.LOG_USER | Interop.Sys.SysLogPriority.LOG_DEBUG, "%s", message);
    }
}
I don't have a lot of experience working with Linux but I believe that I can simulate the call to SysLog by running the following command in the container:
logger --socket-errors=on 'SysLog test'
When I run that command, I get the following response:
socket /dev/log: No such file or directory
So it looks like I can't successfully make calls to SysLog from the container. If this is indeed what is going on when I call Trace.WriteLine(), why is it causing locking issues in my application?
As far as I can tell, EnvVar_DebugWriteToStdErr is not set in my container so it should not be attempting to write to StdErr.
The cause may be that rsyslog is not running. Is it installed in your container? Use a base image that has rsyslog built in.
This link can help too.
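If changing the base image is not an option, another mitigation worth sketching (not the original poster's fix, and it assumes that writing trace output to stdout is acceptable) is to stop routing Trace output through DefaultTraceListener at startup:

using System;
using System.Diagnostics;

// Remove DefaultTraceListener so Trace.WriteLine no longer funnels into
// Debug.Write and the syslog path under the Listeners lock, and send the
// output to stdout instead.
Trace.Listeners.Clear();
Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
Trace.AutoFlush = true;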
Background:
What I want to do is to be able to write from my ARM processor to a BRAM, on a Zynq 7000.
To do this, I have the following components:
- M_AXI_GP0 on PS7 connects to S_AXI_LITE on axi_cdma_0 through an AXI Interconnect
- cdma_introut on axi_cdma_0 connects to IRQ_F2P on PS7 through sys_concat, input 11. This means that this maps to Interrupt 87 on PS7.
- M_AXI on axi_cdma_0 connects to S00_AXI on axi_mem_intercon
- M01_AXI on axi_mem_intercon connects to S_AXI_HP3 on PS7
- M00_AXI on axi_mem_intercon connects to S_AXI on axi_bram_ctrl_0
- BRAM_PORTA on axi_bram_ctrl_0 connects to BRAM_PORTA on blk_mem_gen0
=========================================================================
In my mind, what this setup ought to do is this:
Once a transaction is submitted from the ARM DMA engine, the Zynq will use GP0 to send a command to the CDMA controller.
The CDMA controller will receive the commands on its slave AXI_LITE port, and interpret the request to access RAM via HP3.
The CDMA controller will move data through axi_mem_intercon in order to take the transaction data from hp3 on M01_AXI, and send it through M00_AXI to the BRAM Controller
The BRAM controller will take in the AXI-4 input and convert that to the appropriate BRAM port to write the data into the BRAM generated by blk_mem_gen_0
After completing this action, the CDMA will send an interrupt through sys_concat to indicate to the DMA Engine that its work is complete.
After loading this hdl design into the PL fabric, I attempt to submit the transaction to the DMA engine via a kernel module. The result is a timeout, with the DMA engine apparently never finishing the task.
=========================================================================
In my attempts to figure out the problem, I've made these observations:
- After attempting a write transaction, which times out, I attempted a read transaction on the same DMA channel, configured to read data. What I get back is all the data I had attempted to write. This, to me, seems to indicate that the DMA engine IS writing somewhere, but isn't recognizing the completion of the task.
- The BRAM in question is a dual-port RAM, and the other port reads the data in the BRAM and toggles LEDs to reflect it. The LEDs are not toggling when I attempt this write transaction, so it seems the DMA transaction is not making it as far as the BRAM.
- When looking at cat /proc/interrupts, I can see several interrupts, but not GIC 87. As mentioned before, the interrupt line I am using goes to Input 11 of the IRQ concat block. I can confirm that the interrupt line which goes to Input 12 does indeed correspond to GIC 88 in /proc/interrupts, so I believe my understanding of which interrupt I am looking for is correct. For some reason the processor is not registering that interrupt.
=========================================================================
Based on this, I believe my devicetree entry for this CDMA is what is incorrect.
In Vivado, I can see these entries in the Address Editor (some entries omitted for brevity):
sys_ps7
Data(32 address bits:0x40000000 [1G])
axi_cdma_0 S_AXI_LITE Reg 0x43C0_0000 64K 0x43C0_FFFF
axi_cdma_0
Data(32 address bits : 4G)
axi_bram_ctrl_0 S_AXI Mem0 0xC000_0000 4K 0xC000_0FFF
sys_ps7 S_AXI_HP3 HP3... 0x0000_0000 1G 0x3FFF_FFFF
My attempt to write a devicetree entry is as follows:
axi-cdma@43C00000 {
    #dma-cells = <0x1>;
    compatible = "tst,axi-cdma-ctrl-1.00.a";
    reg = <0x10000000 0x1000>;
    interrupts = <0x0 0x37 0x4>;
    interrupt-parent = <0x1>;
    dma-channel@C0000000 {
        buswidth = <0x20>;
    };
};
Before I added this entry, my kernel module failed to even register a transaction channel; now it does, so I am fairly certain the kernel is accepting the entry at least enough to assign a DMA channel. However, I don't understand much about how exactly the devicetree works, specifically with the addressing, so there is a good chance I have written this incorrectly, and that is why my transaction doesn't succeed. Can anyone help me correct my design?
Declaring the IP core in the device tree is not sufficient. You must also declare your DMA client, as Xilinx does in its CDMA test client:
cdmatest_1: cdmatest@1 {
    compatible = "xlnx,axi-cdma-test-1.00.a";
    dmas = <&axi_cdma_0 0>;
    dma-names = "cdma";
};
In the dmas field, axi_cdma_0 references the CDMA IP core and the 0 its first dma-channel, as defined in the device tree:
axi_cdma_0: dma@4e200000 {
    #dma-cells = <1>;
    clock-names = "s_axi_lite_aclk", "m_axi_aclk";
    clocks = <&clkc 15>, <&clkc 15>;
    compatible = "xlnx,axi-cdma-1.00.a";
    interrupt-parent = <&intc>;
    interrupts = <0 31 4>;
    reg = <0x4e200000 0x10000>;
    xlnx,addrwidth = <0x20>;
    xlnx,include-sg ;
    dma-channel@4e200000 {
        compatible = "xlnx,axi-cdma-channel";
        interrupts = <0 31 4>;
        xlnx,datawidth = <0x20>;
        xlnx,device-id = <0x0>;
        xlnx,include-dre ;
        xlnx,max-burst-len = <0x10>;
    };
};
After that, you should register your client as a platform driver. Again, from CDMA test client source:
static const struct of_device_id xilinx_cdmatest_of_ids[] = {
    { .compatible = "xlnx,axi-cdma-test-1.00.a", },
    { }
};

static struct platform_driver xilinx_cdmatest_driver = {
    .driver = {
        .name = "xilinx_cdmatest",
        .owner = THIS_MODULE,
        .of_match_table = xilinx_cdmatest_of_ids,
    },
    .probe = xilinx_cdmatest_probe,
    .remove = xilinx_cdmatest_remove,
};

static int __init cdma_init(void)
{
    return platform_driver_register(&xilinx_cdmatest_driver);
}
Please note the compatible field in the device tree and in the platform driver definition; these strings must match. If you do not do this, dma_request_slave_channel() cannot reserve a channel from your CDMA IP core. Moreover, ensure you do not use dma_request_channel(), which is not supported in Xilinx kernels >= 4.0 and will fail to reserve channels properly; the transfer will not complete and the DMA will time out with no interrupt. I am not sure about observation 1; it might be a caching effect. Try to use dma_alloc_coherent() instead of kmalloc().
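As a rough sketch of what the client's probe path might look like with those calls: the function name my_cdma_probe and the buffer size are placeholders, and "cdma" is the dma-name from the client node above.

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static int my_cdma_probe(struct platform_device *pdev)
{
    struct dma_chan *chan;
    dma_addr_t dma_handle;
    const size_t len = 4096;
    void *buf;

    /* Reserve the channel declared via dmas/dma-names in the client node. */
    chan = dma_request_slave_channel(&pdev->dev, "cdma");
    if (!chan)
        return -EPROBE_DEFER;

    /* Use a coherent buffer instead of kmalloc() to rule out caching effects. */
    buf = dma_alloc_coherent(&pdev->dev, len, &dma_handle, GFP_KERNEL);
    if (!buf) {
        dma_release_channel(chan);
        return -ENOMEM;
    }

    /* ...prepare, submit and wait for the CDMA transfer here... */
    return 0;
}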
PS: In any case, try to make sure your hardware is ok by using a bare metal app if possible.
I want to stop/sleep execution to simulate a long-running process; unfortunately, I can't find information about how to do it. I've read the following topic (How can I "sleep" a Dart program), but it isn't what I'm looking for.
For example, the sleep() function from the dart:io package isn't applicable, because that package is not available in the browser.
For example:
import 'dart:html';

main() {
  // I want to "sleep"/hang execution here for several seconds
  // and only then run the rest of the function's body
  querySelector('#loading').remove();
  ...other functions and actions...
}
I know there is a Timer class for running callbacks after some time, but it still doesn't pause the execution of the program as a whole.
There is no way to stop execution. You can either use a Timer, Future.delayed, or an endless loop that only ends after a certain amount of time has passed.
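For the common case of just delaying the rest of main() without freezing the page, here is a small sketch using Future.delayed with async/await; the '#loading' selector is taken from the question and the three-second delay is arbitrary:

import 'dart:async';
import 'dart:html';

Future<void> main() async {
  // Suspends this async function for three seconds while the event loop
  // keeps running, then continues with the rest of the body.
  await Future.delayed(const Duration(seconds: 3));
  querySelector('#loading')?.remove();
  // ...other functions and actions...
}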
If you want a stop-the-world sleep function, you can write one yourself. I will mention that I don't recommend this; it's a very bad idea to block the event loop. But if you really want it:
void sleep(Duration duration) {
  var ms = duration.inMilliseconds;
  var start = new DateTime.now().millisecondsSinceEpoch;
  while (true) {
    var current = new DateTime.now().millisecondsSinceEpoch;
    if (current - start >= ms) {
      break;
    }
  }
}

void main() {
  print("Begin.");
  sleep(new Duration(seconds: 2));
  print("End.");
}
The test below attempts to run the less pager command and return once the user quits. The problem is that it doesn't wait for user input; it just lists the entire file and exits. Platform: Xubuntu 12.04, Dart Editor build 13049.
import 'dart:io';

void main() {
  shell('less', ['/etc/mime.types'], (exitCode) => exit(exitCode));
}

void shell(String cmd, List<String> opts, void onExit(int exitCode)) {
  var p = Process.start(cmd, opts);
  p.stdout.pipe(stdout); // Process output to stdout.
  stdin.pipe(p.stdin);   // stdin to process input.
  p.onExit = (exitCode) {
    p.close();
    onExit(exitCode);
  };
}
The following CoffeeScript function (using Node.js I/O) works:
shell = (cmd, opts, callback) ->
  process.stdin.pause()
  child = spawn cmd, opts, customFds: [0, 1, 2]
  child.on 'exit', (code) ->
    process.stdin.resume()
    callback code
How can I make this work in Dart?
John has a good example of how to read user input, but it doesn't answer your original question. Unfortunately, your question doesn't fit with how Dart operates. The two examples you have, the Dart version and the CoffeeScript/Node.js version, do two completely different things.
In your CoffeeScript version, the spawn command is actually creating a new process and then passing execution over to that new process. Basically, your program is not interactively communicating with the process; rather, your user is interacting with the spawned process.
In Dart it is different: your program is interacting with the spawned process. It is not passing off execution to the new process. Basically, what you are doing is piping the input/output to and from the new process to your program itself. Since your program doesn't have a 'window height' from the terminal, it passes all the information at once. What you're doing in Dart is almost equivalent to:
less /etc/mime.types | cat
You can use Process.start() to interactively communicate with processes. But it is your program which is interactively communicating with the process, not the user. Thus you can write a Dart program which will launch and automatically play 'zork' or 'adventure', for instance, or log into a remote server by reacting to the prompts in the process's output.
However, at present there is no way to simply pass execution to the spawned process. If you want to relay the process output to a user, and also take user input and send it back to the process, it involves an additional layer. And even then, not all programs (such as less) behave the same as they do when launched from a shell environment.
Here's a basic structure for reading console input from the user. This example reads lines of text from the user, and exits on 'q':
import 'dart:io';
import 'dart:isolate';

final StringInputStream textStream = new StringInputStream(stdin);

void main() {
  textStream.onLine = checkBuffer;
}

void checkBuffer() {
  final line = textStream.readLine();
  if (line == null) return;
  if (line.trim().toLowerCase() == 'q') {
    exit(0);
  }
  print('You wrote "$line". Now write something else!');
}