How does the balutil condition evaluate Windows DisplayVersion? - parsing

I have a bundle with a registry search fragment below it to find the Windows DisplayVersion. This seems to be the preferred way Microsoft does versioning after Windows 8.1.
The comparisons below all seem to work as intended, but I wonder whether there are edge cases where the comparison falls short. Does Burn evaluate each character one by one?
Tests done:

| Expression                     | Result |
|--------------------------------|--------|
| 21H1 > 22H2                    | False  |
| 21H1 > 19H2                    | True   |
| 21H1 > 20                      | True   |
| 21H1 > 22                      | False  |
| 21H1 >= 21                     | True   |
| Registry key not found > 22H2  | False  |
| Registry key empty > 22H2      | False  |
Code:
<Bundle
----
<util:RegistrySearchRef Id="SearchForWindowsDisplayVersion"/>
----
</Bundle>
<Fragment>
<util:RegistrySearch
Id="SearchForWindowsDisplayVersion"
Variable="WindowsDisplayVersion"
Result="value"
Root="HKLM"
Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion"
Value="DisplayVersion"/>
<bal:Condition Message="Minimum required Windows version is 22H2">
  <![CDATA[WindowsDisplayVersion > "21H2"]]>
</bal:Condition>
</Fragment>
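For what it's worth, all of the observed results are consistent with a plain left-to-right (ordinal) string comparison. The sketch below only illustrates that ordering in Python; it is my assumption about how Burn falls back when a value such as 21H1 is not a parseable version, not Burn's actual implementation:

```python
# Illustration only: reproduce the table above with Python's ordinal
# (character-by-character) string comparison. Treating Burn's fallback as an
# ordinal comparison is an assumption, not a statement about its source code.
tests = [
    ("21H1", ">",  "22H2", False),
    ("21H1", ">",  "19H2", True),
    ("21H1", ">",  "20",   True),
    ("21H1", ">",  "22",   False),
    ("21H1", ">=", "21",   True),
    ("",     ">",  "22H2", False),  # missing/empty registry value modelled as ""
]

for left, op, right, observed in tests:
    got = left > right if op == ">" else left >= right
    print(f"{left!r} {op} {right!r} -> {got} (observed in Burn: {observed})")
```

If that assumption holds, one edge case would be values where numeric and ordinal ordering disagree: a hypothetical "9H1" would compare greater than "10H1", because '9' sorts after '1'.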

Related

FZF and NeoVim: how to get a files list

I am trying to use Neovim with fzf plugins. Part of my init.lua is:
-- Plugin installation section
local fn = vim.fn   -- shorthand aliases; needed for fn/cmd below
local cmd = vim.cmd
local install_path = fn.stdpath('data')..'/site/pack/paqs/opt/paq-nvim'
if fn.empty(fn.glob(install_path)) > 0 then
  cmd('!git clone --depth 1 https://github.com/savq/paq-nvim.git '..install_path)
end
-- Load the plugin manager
cmd 'packadd paq-nvim'
-- Set the shorthand
local plug = require('paq-nvim').paq
-- Make paq manage itself
plug {'savq/paq-nvim', opt=true}
plug {'scrooloose/nerdtree', opt=true}
plug {'vim-airline/vim-airline', opt=true}
plug {'vijaymarupudi/nvim-fzf', opt=false}
plug {'ibhagwan/fzf-lua', opt=false}
require('paq-nvim').install()
require('paq-nvim').clean()
When I try to use the command FzfLua files I see the message
fzf error 2 : unknown option: --headless
I work under Windows 10 and use the Lua plugins fzf-lua and nvim-fzf.
The fzf binary is installed and reachable.
I cannot understand why fzf is run with this strange option.
What did I do wrong?
fzf-lua uses nvim-fzf actions for previews (and other functions), which call Lua functions by running nvim --headless … <lua function id>.
It seems that your Neovim version does not support the --headless option. I know of other users running fzf-lua on WSL; try the official 0.5.1 AppImage.
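As a quick sanity check (my suggestion, not something fzf-lua or nvim-fzf ships), you can verify whether the nvim binary on your PATH accepts --headless at all:

```python
# Hypothetical check: run the nvim found on PATH with --headless and quit
# immediately. A build that supports the flag exits cleanly; one that does not
# prints an "unknown option" style error on stderr instead.
import subprocess

result = subprocess.run(
    ["nvim", "--headless", "+quit"],
    capture_output=True,
    text=True,
)
print("exit code:", result.returncode)
print(result.stderr.strip() or "--headless accepted, no errors")
```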

Arangodb container reaches memory limit and crashes while filtering using 'path' for graph traversal

My Environment
ArangoDB Version: 3.6.2
Storage Engine: RocksDB
Deployment Mode: Single Server
Deployment Strategy: Manual Start in Docker
Infrastructure: Own
Operating System: Linux version 4.4.0-154-generic (gcc version 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) )
Total RAM in your machine: 4GB
Disks in use: HDD
Used Package: Docker-Official Docker library
My Problem:
I have a graph with 60k nodes and 4*60k edges. Whenever I try using 'path' in a FILTER or RETURN, the memory limit is reached, the ArangoDB container crashes, and it gets restarted. However, if I don't use 'path' and only use 'vertex' or 'edge' in the FILTER or RETURN, the query executes and produces the expected result. This issue is seen in version 3.6.2.
However, in ArangoDB 3.1.18 this issue is not seen and everything works fine.
Sample Query:
FOR v, e, p IN 6 OUTBOUND "root_node" GRAPH "my_graph_db"
  FILTER (
    LENGTH(p.edges) == 6 &&
    LIKE(p.edges[3]._from, "Data_level_3%", true) &&
    (LIKE(p.edges[3]._to, "Data_level_4%") || LIKE(p.edges[3]._to, "Data_4%")) &&
    ...................................................................
  )
  LIMIT 0, 10
  RETURN {
    result: MERGE(
      {data: v},
      {parent_id: p.edges[5]._id}
    ),
    .................
  }
Expected result:
The ArangoDB container should not hit its memory limit and crash. 'path' attributes need to be accessible while running queries.
Please refer to https://github.com/arangodb/arangodb/issues/11277
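One stop-gap (my suggestion, not taken from the linked issue) is to cap how much memory a single query may use, so a path-heavy traversal aborts with an error instead of taking the whole container down. Below is a minimal sketch against the documented /_api/cursor HTTP endpoint; the host, credentials, and the 512 MB cap are assumptions for illustration:

```python
# Sketch: run an AQL traversal with a per-query memory cap via the HTTP cursor
# API, so the query fails fast rather than exhausting the container's memory.
import requests

query = """
FOR v, e, p IN 6 OUTBOUND "root_node" GRAPH "my_graph_db"
  FILTER LENGTH(p.edges) == 6
  LIMIT 0, 10
  RETURN {data: v, parent_id: p.edges[5]._id}
"""

resp = requests.post(
    "http://localhost:8529/_db/_system/_api/cursor",  # assumed local single server
    json={
        "query": query,
        "batchSize": 10,
        "memoryLimit": 512 * 1024 * 1024,  # abort the query beyond ~512 MB
    },
    auth=("root", ""),  # assumed default credentials
)
print(resp.status_code)
print(resp.json())
```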

NodeMCU lua: adc.readvdd33() always returns 65535 on ESP8266

I'm trying to read the system voltage, but adc.readvdd33() always returns 65535.
This is the code I'm using, obviously just like in the docs:
-- Switch the ADC into VDD33 mode; force_init_mode returns true when the
-- flash init data had to be changed, in which case a restart is required.
if adc.force_init_mode(adc.INIT_VDD33) then
  node.restart()
  return -- stop here, the restart takes effect shortly
end
-- Read the system voltage in millivolts
print("System voltage (mV):", adc.readvdd33(0))
Output:
NodeMCU 3.0.0.0 built on nodemcu-build.com provided by frightanic.com
branch: master
commit: 310faf7fcc9130a296f7f17021d48c6d717f5fb6
release: 3.0-master_20190907
release DTS: 201909070945
SSL: true
build type: float
LFS: 0x0
modules: adc,bme280,dht,enduser_setup,file,gpio,i2c,mqtt,net,node,rtcmem,rtctime,sjson,sntp,tmr,uart,wifi,tls
build 2020-01-03 12:07 powered by Lua 5.1.4 on SDK 3.0.1-dev(fce080e)
System voltage (mV): 65535
I've read about an issue with this in older SDK versions; is this something similar, or am I doing something wrong? It's the same with an ESP01, an ESP01S and an ESP12F.
Is there a limitation when using the adc module together with other modules, or when something is wired to a specific pin?
Unfortunately this is a known bug. We're tracking it in issue 2925, see https://github.com/nodemcu/nodemcu-firmware/issues/2925 for details.

Change character set on Microsoft R Server 9.0.1

Q: How do you change/update the character set on Microsoft R Server?
Issue: I am trying to read a CSV that is delimited with '§', but R Server is not able to interpret the '§' character when I work remotely. The same goes for other characters like 'ø', 'æ' and 'å'. When I work locally it's not an issue.
For example:
This works fine:
> x <- '§'
> x
[1] "§"
But when I log in remotely to the server, the following happens:
REMOTE> x <- '§'
REMOTE> x
[1] "?"
Setup: I am running Microsoft R Server 9.0.1 on Windows Server 2012 R2
Detailed sessionInfo:
REMOTE> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows Server >= 2012 x64 (build 9200)

locale:
[1] LC_COLLATE=Norwegian (Bokmål)_Norway.1252
[2] LC_CTYPE=Norwegian (Bokmål)_Norway.1252
[3] LC_MONETARY=Norwegian (Bokmål)_Norway.1252
[4] LC_NUMERIC=C
[5] LC_TIME=Norwegian (Bokmål)_Norway.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] RevoUtilsMath_10.0.0 RevoUtils_10.0.2 RevoMods_10.0.0
[4] RevoScaleR_9.0.1 lattice_0.20-34 rpart_4.1-10

loaded via a namespace (and not attached):
[1] R6_2.2.0 tools_3.3.2 CompatibilityAPI_1.1.0
[4] codetools_0.2-15 grid_3.3.2 iterators_1.0.8
[7] foreach_1.4.3 mrupdate_1.0.0 jsonlite_1.1
In addition to installing version 9.1 of Microsoft R Server, I also had to make the following change for the server to work correctly with remote login:
Stop the service 'RServe9.0.0.0'
and go to C:\Program Files\Microsoft\R Server\R_SERVER\o16n\RServe\RScripts\source.R on the compute nodes
and change
```
#add more here if necessary......
```
to
```
#add more here if necessary......
options(encoding = "UTF-8")
```
and then start that service again; after that you should be able to use §.
Thanks to Microsoft for providing this fix.
This is a known bug that has been patched in Microsoft R Server 9.1; please upgrade to solve your issue.

Render WebGL in Xvfb

I want to test WebGL code headlessly using Xvfb. Does anybody know how to do that?
I have two machines, both running Ubuntu: one with an NVIDIA card and one with an ATI card.
The NVidia machine:
ipmi:~ $>xvfb-run glxinfo
name of display: :455
display: :455 screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method,
GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_MESA_copy_sub_buffer, GLX_INTEL_swap_event
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
...
ipmi:~ $>xvfb-run glxgears
3725 frames in 5.0 seconds = 741.884 FPS
3840 frames in 5.0 seconds = 767.310 FPS
4080 frames in 5.0 seconds = 814.811 FPS
4120 frames in 5.0 seconds = 821.859 FPS
The ATI machine:
shaka:~ $>xvfb-run glxinfo
name of display: :99
display: :99 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method,
GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_MESA_copy_sub_buffer, GLX_INTEL_swap_event
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
shaka:~ $>xvfb-run glxgears
4326 frames in 5.0 seconds = 865.095 FPS
4343 frames in 5.0 seconds = 868.540 FPS
Even though shaka supports direct rendering using Mesa, I can't get a WebGL context.
Thanks!
With modern X11, you would be better off ignoring Xvfb and using the dummy display driver. See the "Additional notes" at http://www.x.org/wiki/XorgTesting for information about using it (you would presumably specify a custom xorg.conf with the necessary Device section). http://www.karlrunge.com/x11vnc/Xdummy is another way to use the dummy driver.
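For reference, a minimal sketch of such an xorg.conf for the dummy driver, assuming the xserver-xorg-video-dummy package is installed; the identifiers, VideoRam value, and mode are placeholders you would adjust:

```
# Minimal xorg.conf sketch for the "dummy" video driver (assumed package:
# xserver-xorg-video-dummy). Start X with: Xorg :1 -config /path/to/this/file
Section "Device"
    Identifier "DummyDevice"
    Driver     "dummy"
    VideoRam   256000
EndSection

Section "Monitor"
    Identifier  "DummyMonitor"
    HorizSync   30.0-70.0
    VertRefresh 50.0-75.0
EndSection

Section "Screen"
    Identifier   "DummyScreen"
    Device       "DummyDevice"
    Monitor      "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024"
    EndSubSection
EndSection
```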
