nxwebplayer fails under nxserver4 linux - nomachine-nx

At the start of an nxwebplayer session (nxserver4, under Debian), the connection to display :0 fails with this X server error:
nxagentShadowInit: ERROR! Cannot shadow the display. PseudoColor
visual class is not supported. Only TrueColor visuals are supported.
I modified /etc/X11/xorg.conf, without result:
Section "Screen"
SubSection "Display"
Depth 24
Visual TrueColor
EndSubSection
EndSection
Thanks for any ideas and remarks.
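One thing that may be missing (a guess, not a verified fix): the Depth/Visual subsection only takes effect at the depth the server actually selects, so the Screen section presumably also needs a DefaultDepth entry to force the server away from an 8-bit PseudoColor root visual. A minimal sketch, with a hypothetical Identifier and the usual Device/Monitor lines omitted:
Section "Screen"
    Identifier "Screen0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Visual "TrueColor"
    EndSubSection
EndSection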

Related

Error when opening a .nc file with raster package

I'm new to the raster package in R. I was trying to open a .nc file with it and an error popped up. In case you want to try, I was using a dataset from Copernicus of monthly sea salinity for the years 2018 and 2019 (the grid covered the Quebec St. Lawrence estuary and the Gaspésie coast).
I have opened similar data files before but never got this error, and a search online did not clarify much.
Here is my script:
library(raster)
library(ncdf4)
#Load the .nc files describing SSS.
SSS = stack('SSS.nc')
and the error output:
Warning message:
In .rasterObjectFromCDF(x, type = objecttype, band = band, ...) :
"level" set to 1 (there are 17 levels)
Thanks!
I expected to create a RasterStack object to work with.
There is no error; there is a warning. If you want another level, you can select it (see the example at the end of this answer).
Either way, the "raster" package is obsolete, and you should try this with terra.
You can probably do
library(terra)
x <- rast('SSS.nc')
"Probably" because you do not provide a file. You should at least include a hyperlink to the file you are using. It is hard to help you without your example being reproducible.

Get error on 'gimp-image-set-active-layer' when trying to use script-fu-drop-shadow

I would like to add a drop shadow to picture files without using the GIMP UI. I've saved this content to ~/.config/GIMP/2.10/scripts/my.scm:
(define (my/add-drop-shadow filename)
  (let* ((image (car (gimp-file-load RUN-NONINTERACTIVE filename filename)))
         (drawable (car (gimp-image-get-active-layer image))))
    ;; Apply transformations
    (script-fu-drop-shadow RUN-NONINTERACTIVE image drawable
                           20.0 20.0 10.0 0 0.5 nil)))
(I know this code does not save the image; I will add that once it works.)
Unfortunately, when trying to use that, I get an error:
$ gimp --version
2.10.30
$ gimp -i -b '(my/add-drop-shadow "2022-11-01-080429.png")' -b '(gimp-quit 0)'
GIMP-Error: Calling error for procedure 'gimp-image-set-active-layer':
Procedure 'gimp-image-set-active-layer' has been called with an invalid ID for argument 'active-layer'. Most likely a plug-in is trying to work on a layer that doesn't exist any longer.
batch command experienced an execution error:
Error: Procedure execution of gimp-image-set-active-layer failed on invalid input arguments: Procedure 'gimp-image-set-active-layer' has been called with an invalid ID for argument 'active-layer'. Most likely a plug-in is trying to work on a layer that doesn't exist any longer.
/home/cassou/.nix-profile/bin/gimp: GEGL-WARNING: (../gegl/buffer/gegl-tile-handler-cache.c:1076):gegl_tile_cache_destroy: runtime check failed: (g_queue_is_empty (&cache_queue))
EEEEeEeek! 2 GeglBuffers leaked
Am I doing something incorrect or is the plug-in buggy?
Update
If I add these statements to the script at the start of the let body:
(print "=============================")
(print image)
(print drawable)
(print "=============================")
I get:
"============================="
1
2
"============================="
as output.
Not a Script-Fu expert, but there are at least 3 problems in your call:
The "color" argument isn't just an integer (at least in a full-RGB image); it is most likely a triplet, or the result of (gimp-context-get-foreground)
The opacity is a 0-100 number; with .5 you won't see much, you probably want 50 instead
The resizing argument should be 0 or 1, not nil
Calling it from Python, this works:
pdb.script_fu_drop_shadow(image, image.active_layer,20.0, 20.0, 10.0, (0,0,0), 80, 0)
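Translated back to Script-Fu, the corrected call would presumably look like this (an untested sketch: the color becomes a list, the opacity 80 matches the Python example above, and FALSE replaces nil for the resize flag):
(script-fu-drop-shadow RUN-NONINTERACTIVE image drawable
                       20.0 20.0 10.0 '(0 0 0) 80 FALSE)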

How to use custom mode timings with NVIDIA on Linux? (In a multi-screen setup)

I am trying out the NREAL Air, and after extensive testing I've found that it is more comfortable for me to use it at a 90 Hz mode that is not in its EDID. I was able to get this working by setting the following parameters in the "Screen" section of my xorg.conf:
Option "UseEDID" "false"
Option "ModeDebug" "true”
Option "ExactModeTimingsDVI" "true"
Option "ModeValidation" "NoEdidModes, NoMaxPClkCheck, AllowNonEdidModes"
And then appropriately configuring the rest of my xorg.conf. See https://gist.github.com/cnlohr/9375d6fc0097b50f73c4da7b75a20e83
This has the added benefit of letting you use custom ModeLines with xrandr. Which is awesome, btw!
sudo xrandr --output DP-2 --newmode HS2 200 1920 1952 1968 2000 1080 1089 1094 1250 +hsync +vsync
sudo xrandr --addmode DP-2 HS2
sudo xrandr --output DP-2 --mode HS2
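For reference, the same timings can presumably also be pinned in xorg.conf so they exist before any xrandr call (a sketch reusing the numbers from the --newmode line above; the Identifier is hypothetical):
Section "Monitor"
    Identifier "NrealAir"
    ModeLine "HS2" 200.00 1920 1952 1968 2000 1080 1089 1094 1250 +hsync +vsync
    Option "PreferredMode" "HS2"
EndSection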
But this modesetting does not work with display managers like lightdm, which switch into my desired mode and then immediately switch both monitors to the default 1024x768, which my laptop display cannot handle.
It does, however, work with startx, but I'd rather not use my system that way, as I like display managers.
Even if I use custom ModeValidation options, as soon as I stop disabling UseEDID, the NVIDIA drivers seem to block xrandr from adding any particularly interesting modes, failing with the following error:
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 18 (RRAddOutputMode)
Serial number of failed request: 45
Current serial number in output stream: 46
So, I'm in a bit of a pickle. If I want to use the smoother, less headache-inducing mode for the NREAL Air, I can't use EDIDs, but what is life without EDIDs?

How can I tell if my laptop can do 4k resolution? (Using Linux Mint)

I am thinking about getting a new monitor. Short of plugging one in and seeing what happens, how can I tell if my laptop can output 4k resolution?
Is there a command I can run that will tell me this?
I thought maybe I could use xrandr, but I think that only tells me what each connected monitor is capable of (even if the graphics controller is capable of more).
I also thought maybe I could look up the device from lspci and find it on Google, but I couldn't find much.
My lspci -v says:
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09) (prog-if 00 [VGA controller])
Subsystem: Lenovo Device 500d
Flags: bus master, fast devsel, latency 0, IRQ 45
Memory at f3000000 (64-bit, non-prefetchable) [size=4M]
Memory at d0000000 (64-bit, prefetchable) [size=256M]
I/O ports at 6000 [size=64]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: i915
Update: Mr. Llama's suggestion about xrandr made me think that this output could be useful to someone who knows more than I do. Here's my xrandr:
Screen 0: minimum 8 x 8, current 5206 x 1080, maximum 16384 x 16384
eDP-1-0 connected 1366x768+0+0 309mm x 173mm
1366x768 60.1*+
... other resolution options removed for brevity ...
320x240 60.1
HDMI-1-0 connected 1920x1080+3286+0 160mm x 90mm
1920x1080 60.0*+ 50.0 59.9
... other resolution options removed for brevity ...
640x480 60.0 59.9
DisplayPort-1-1 connected 1920x1080+1366+0 509mm x 286mm
1920x1080 60.0*+
... other resolution options removed for brevity ...
720x400 70.1
VGA-1-0 disconnected
HDMI-1-1 disconnected
DisplayPort-1-0 disconnected
Looks like the xrandr command might be of use to you. It lists the available and current monitor resolutions.
See this SuperUser question for more information.
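Also note that the first line of the xrandr output you posted already reports the maximum framebuffer size the driver will accept, which is one rough indicator (it is the X screen limit, not necessarily what every physical port can drive):
$ xrandr | head -n 1
Screen 0: minimum 8 x 8, current 5206 x 1080, maximum 16384 x 16384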

Set mmc2 on beaglebone black

I am working with a Beaglebone Black and I would like to use the mmc2 slot.
According to the AM335x TRM, a BeagleBone Black should have 3 MMC interfaces available:
mmc0 (SD card);
mmc1 (2G flash);
mmc2.
I am trying to enable mmc2 via the device tree (and I am quite sure I have the right pin settings), but when I run
dmesg
I obtain:
/ocp/mmc#47810000: can't find DMA channel
omap_hsmmc mmc.11: unable to obtain RX DMA engine channel 65
Putting an oscilloscope probe on the header (e.g. on the mmc2 clk signal), I do not see any transitions.
I already removed R160 to make mmc2 cmd accessible, but I do not see any transitions there either.
I tried to enable it both with
echo > /sys/devices/..../slots
and with
capemgr.enable_partno
with no success. I can see it in
/sys/devices/..../slots
(with the L meaning loaded), but there is no way to see any signal on the header.
I already googled it but answers are not clear at all.
Any ideas?
My
uname -a
is:
Linux beaglebone 3.8.13 #1 SMP Tue Jun 18 02:11:09 EDT 2013 armv7l GNU/Linux
Thanks for your help.
You need to map the mmc2 DMA events to some DMA channel, since these events are not direct-mapped.
I was not able to do this successfully using device tree overlays, so I made the change in am335x-bone-common.dtsi directly (not sure this is the best way, though):
&edma {
ti,edma-xbar-event-map = <32 12>, /* gpevt2 -> 12 */
<30 20>, /* xdma_event_intr2 -> 20 */
+ <1 32>,
+ <2 33>;
};
In the example above, event 1 (SDTXEVT2) is mapped to channel 32 and event 2 (SDRXEVT2) to channel 33.
In case you want to pick another open DMA channel, check Table 11-23 (Direct Mapped) and Table 11-24 (Crossbar Mapped) in the technical reference manual, Rev. J.
In your device tree overlay file, add these channels in the mmc3 node:
dmas = <&edma 32
        &edma 33>;
dma-names = "tx", "rx";
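Put together, the relevant part of the overlay might look roughly like this (a sketch, not a tested overlay; the fragment number is arbitrary, and the status/pinctrl details depend on your own pin settings):
fragment@0 {
    target = <&mmc3>;
    __overlay__ {
        status = "okay";
        /* crossbar-mapped MMC2 events routed to free EDMA channels */
        dmas = <&edma 32
                &edma 33>;
        dma-names = "tx", "rx";
    };
};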
