NodeMCU Lua: adc.readvdd33() always returns 65535 on ESP8266

I'm trying to read the system voltage, but adc.readvdd33() always returns 65535.
This is the code I'm using, essentially straight from the docs:
-- Force the ADC into VDD33 mode; changing the mode requires a restart.
if adc.force_init_mode(adc.INIT_VDD33) then
  node.restart()
  return  -- don't continue, the restart is scheduled
end

print("System voltage (mV):", adc.readvdd33(0))
Output:
NodeMCU 3.0.0.0 built on nodemcu-build.com provided by frightanic.com
branch: master
commit: 310faf7fcc9130a296f7f17021d48c6d717f5fb6
release: 3.0-master_20190907
release DTS: 201909070945
SSL: true
build type: float
LFS: 0x0
modules: adc,bme280,dht,enduser_setup,file,gpio,i2c,mqtt,net,node,rtcmem,rtctime,sjson,sntp,tmr,uart,wifi,tls
build 2020-01-03 12:07 powered by Lua 5.1.4 on SDK 3.0.1-dev(fce080e)
System voltage (mV): 65535
I've read about an issue with this in older SDK versions; is this something similar, or am I doing something wrong? The result is the same with an ESP-01, an ESP-01S and an ESP-12F.
Is there a limitation when using the adc module alongside other modules, or when something is wired to a specific pin?

Unfortunately this is a known bug. We're tracking it in issue #2925; see https://github.com/nodemcu/nodemcu-firmware/issues/2925 for details.
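Until the fix lands, the safest option is to treat the 65535 reading as "no data" in application code rather than acting on it. A minimal defensive sketch, assuming the ADC has already been forced into VDD33 mode as in the question (the read_vdd_mv helper is just an illustrative name, not part of the firmware API):

-- Treat the known-bad 65535 reading (issue #2925) as "unavailable".
local function read_vdd_mv()
  local mv = adc.readvdd33()
  if mv == 65535 then
    return nil
  end
  return mv
end

local mv = read_vdd_mv()
if mv then
  print("System voltage (mV):", mv)
else
  print("System voltage unavailable: adc.readvdd33() returned 65535")
end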

Related

ElectronJS Dependency Problem with Buildroot

I'm trying to run an ElectronJS app on a Buildroot-based operating system for a vehicle instrument. The SBC is a Raspberry Pi 4B, and the target packages were selected based on Debian's dependency list. When I run the app, it throws this message:
# DISPLAY=:0 ./my-app --no-sandbox --disable-gpu-sandbox
./my-app: /usr/lib64/libasound.so.2: no version information available (required by ./my-app)
./my-app: /usr/lib64/libasound.so.2: no version information available (required by ./my-app)
/root/my-app: /usr/lib64/libasound.so.2: no version information available (required by /root/my-app)
/root/my-app: /usr/lib64/libasound.so.2: no version information available (required by /root/my-app)
/root/my-app: /usr/lib64/libasound.so.2: no version information available (required by /root/my-app)
/root/my-app: /usr/lib64/libasound.so.2: no version information available (required by /root/my-app)
[1763:0101/075845.947675:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
Trace/breakpoint trap
# _
Keep in mind that glxgears and glxinfo work perfectly; the frame rate is around 198 FPS, so the GPU is presumably not to blame.
I'm fairly sure something is wrong with the ALSA libraries or D-Bus. Has anyone experienced the same problem? Thanks!
Adding { frame: false, fullscreen: true } to the Electron BrowserWindow constructor options makes it work.

Having problems saving wifi config with nodemcu

I have a development ESP-12 (Amica) that has been working fine for months. On a recent boot it appeared to stop connecting to my WiFi. I have the enduser_setup code on the module, which connected OK, but the config was not saved across reboots.
If I connect via a terminal and do the following:
cfg = {}
cfg.ssid = "SSIDHERE"
cfg.pwd = "password"
cfg.save = true
cfg.auto = false
saved = wifi.sta.config(cfg)
print(saved)
false
I have tried erasing the flash and reloading, with no change.
I am building the binaries locally:
branch: master
commit: 310faf7fcc9130a296f7f17021d48c6d717f5fb6
release: 3.0-master_20190907
release DTS: 201909070945
SSL: false
build type: float
LFS: 0x20000
modules: bit,enduser_setup,file,gpio,gpio_pulse,i2c,mqtt,net,node,ow,pcm,rotary,rtctime,sjson,sntp,spi,struct,tmr,uart,u8g2,wifi,ws2812,ws2812_effects
build 2019-10-15 21:16 powered by Lua 5.1.4 on SDK 3.0.1-dev(fce080e)
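One thing worth ruling out (an assumption on my part, since the post doesn't show the wifi mode): wifi.sta.config() can return false when the station interface isn't enabled. A minimal sketch that sets station mode explicitly, applies the same config table, and then reads back what was actually stored:

-- Make sure the station interface is enabled before configuring it.
wifi.setmode(wifi.STATION)

local ok = wifi.sta.config({
  ssid = "SSIDHERE",
  pwd  = "password",
  save = true,    -- persist the credentials to flash
  auto = false,
})
print("config accepted:", ok)

-- Read back the stored configuration as a table to verify what was saved.
local stored = wifi.sta.getconfig(true)
print("stored ssid:", stored and stored.ssid)

If the stored ssid survives a reboot here but not via enduser_setup, that would point at the setup code rather than the flash.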

Beaglebone Black Latest Firmware Images U-Boot SPL 2018.01 not boot

I'm trying to flash a BeagleBone Black from element14 following the steps on the Getting Started page. It can't flash or boot from the SD card with the latest Debian 9.1 LXQT or 9.3 IoT firmware images, but I can flash the old Debian 7.5 firmware using the
"dd if=/dev/mmcblk0 of=/dev/mmcblk1" step from a Stack Overflow page.
Using the debug port, it shows:
U-Boot SPL 2018.01-00002-ge9ff418fb8 (Feb 20 2018 - 20:14:57)
Trying to boot from MMC1
U-Boot 2018.01-00002-ge9ff418fb8 (Feb 20 2018 - 20:14:57 -0600), Build: jenkins-github_Bootloader-Builder-38
CPU : AM335X-GP rev 2.1
I2C: ready
DRAM: 512 MiB
No match for driver 'omap_hsmmc'
No match for driver 'omap_hsmmc'
Some drivers were not found
Reset Source: Power-on reset has occurred.
MMC: OMAP SD/MMC: 0, OMAP SD/MMC: 1
Using default environment
What should I do?
Thank you.
You may use the following procedure with the images you need from https://beagleboard.org/latest-images:
http://old.ghielectronics.com/community/forum/topic?id=23763

caffe powered and GPU enabled Microsoft Azure VM

I'm trying to build a VM for model training in Azure. I found the Data Science Virtual Machine for Linux (Ubuntu), which seems to be a suitable candidate.
Unfortunately, when I spun up the VM and installed the Caffe prerequisites, I wasn't able to run the tests. I'm getting the following error on make runtest (make all and make test completed without errors):
NVIDIA: no NVIDIA devices found
Cuda number of devices: 0
Setting to use device 0
Current device id: 0
Current device name:
Note: Randomizing tests' orders with a seed of 97204 .
[==========] Running 2041 tests from 267 test cases.
[----------] Global test environment set-up.
[----------] 11 tests from AdaDeltaSolverTest/3, where TypeParam = caffe::GPUDevice<double>
[ RUN ] AdaDeltaSolverTest/3.TestAdaDeltaLeastSquaresUpdateWithHalfMomentum
NVIDIA: no NVIDIA devices found
E0715 02:24:32.097311 59355 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
NVIDIA: no NVIDIA devices found
E0715 02:24:32.103780 59355 common.cpp:121] Cannot create Curand generator. Curand won't be available.
F0715 02:24:32.103914 59355 test_gradient_based_solver.cpp:80] Check failed: error == cudaSuccess (30 vs. 0) unknown error
*** Check failure stack trace: ***
# 0x7f77a463f5cd google::LogMessage::Fail()
# 0x7f77a4641433 google::LogMessage::SendToLog()
# 0x7f77a463f15b google::LogMessage::Flush()
# 0x7f77a4641e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7115e3 caffe::GradientBasedSolverTest<>::TestLeastSquaresUpdate()
# 0x7122af caffe::AdaDeltaSolverTest_TestAdaDeltaLeastSquaresUpdateWithHalfMomentum_Test<>::TestBody()
# 0x8e6023 testing::internal::HandleExceptionsInMethodIfSupported<>()
# 0x8df63a testing::Test::Run()
# 0x8df788 testing::TestInfo::Run()
# 0x8df865 testing::TestCase::Run()
# 0x8e0b3f testing::internal::UnitTestImpl::RunAllTests()
# 0x8e0e63 testing::UnitTest::Run()
# 0x466ecd main
# 0x7f77a111c830 __libc_start_main
# 0x46e589 _start
# (nil) (unknown)
Makefile:532: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)
Is it possible to spin up a virtual machine in Azure suitable for GPU-enabled machine learning using Caffe?
All the details about the VM here
The Data Science Virtual Machine (DSVM) for Ubuntu already has Caffe installed in /opt/caffe. To use it on a GPU, create a VM with a K80 GPU by choosing one of the NC sizes. (Be sure to choose HDD as the storage type, or the NC sizes will not appear.) Caffe will then be available out of the box.
Also note that PyCaffe is available. At a terminal:
source activate root
Python will then have PyCaffe available.

Render WebGL in Xvfb

I want to test WebGL code headlessly using Xvfb. Does anybody know how to do that?
I have two machines, both running Ubuntu: one with an NVIDIA card and one with an ATI card.
The NVidia machine:
ipmi:~ $>xvfb-run glxinfo
name of display: :455
display: :455 screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method,
GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_MESA_copy_sub_buffer, GLX_INTEL_swap_event
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
...
ipmi:~ $>xvfb-run glxgears
3725 frames in 5.0 seconds = 741.884 FPS
3840 frames in 5.0 seconds = 767.310 FPS
4080 frames in 5.0 seconds = 814.811 FPS
4120 frames in 5.0 seconds = 821.859 FPS
The ATI machine:
shaka:~ $>xvfb-run glxinfo
name of display: :99
display: :99 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_OML_swap_method,
GLX_SGI_make_current_read, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_MESA_copy_sub_buffer, GLX_INTEL_swap_event
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
shaka:~ $>xvfb-run glxgears
4326 frames in 5.0 seconds = 865.095 FPS
4343 frames in 5.0 seconds = 868.540 FPS
Even though shaka supports direct rendering using Mesa, I can't get a WebGL context.
Thanks!
With modern X11, you would be better off ignoring Xvfb and using the dummy display driver. See the "Additional notes" at http://www.x.org/wiki/XorgTesting for information about using it (you would presumably specify a custom xorg.conf with the necessary Device section). http://www.karlrunge.com/x11vnc/Xdummy is another way to use the dummy driver.
