I am trying to build for a board that has a 40MHz crystal. I am building on my own Linux machine. I do not see a config option to change this. Is there a place to set the crystal frequency? Or is it only set at time of flashing the firmware?
The boot ROM does the smart stuff. You may need to set the crystal frequency when flashing; esptool.py and the other flashers allow this. But once booted you have the node.setcpufreq(speed) function and the corresponding node.getcpufreq() function.
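A minimal Lua sketch of those calls (assuming a current NodeMCU build, where node.CPU80MHZ and node.CPU160MHZ are the documented speed constants):

print(node.getcpufreq())         -- current CPU frequency in MHz, e.g. 80
node.setcpufreq(node.CPU160MHZ)  -- switch the CPU to 160 MHz
print(node.getcpufreq())         -- should now report 160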
Host machine: Debian 10 running NoMachine 7.2.3
Settings:
Specified H264
User Hardware Encoding enabled
Use Specific Frame Rate enabled (60FPS)
Use Acceleration enabled
Client: Windows 10 running NoMachine 7.2.3
Both machines have monitors attached.
Using NX protocol for connection.
FullScreen / Scale to Window / Desktop is currently 2560x1440 (reduced from native while testing this issue)
Specific issue:
I do a ton of work in the terminal, and when viewing the desktop via NoMachine, the terminal caret is randomly not visible. The same issue is less noticeable with right-click menus and other areas of "visual updates in small screen space." If this were another remote desktop vendor I would try to find the "don't update just regions" setting to force the entire display to update regularly, but I can't find similar settings for NoMachine. I have a dedicated gigabit connection between the two machines with no other traffic on that line, so bandwidth is not an issue.
To recreate:
I disabled caret blink (using universal access / accessibility settings) so the caret is a solid block in the terminal / vi. If I edit a text file in vi and move up and down, the caret only updates visually every other line or so (verified on the physical screen that it is moving correctly). The same happens if I highlight or insert, etc. You inevitably miss a character or lose your place.
I have tried changing speed vs quality slider, resolutions, swapping from h264 to VP8, etc.
I have disabled:
multi-pass display encoding
frame buffering on decoding
client side image post-processing
Nothing seems to change this specific issue. Yes, I can make dragging a quarter-screen-sized terminal window smoother, but that doesn't help me follow the caret in vi/vim. Both machines are nicely spec'd (client has 16 GB / RTX 2080, server has 32 GB / GTX 1080).
Is there a way to get NoMachine to update the whole screen all the time, or at least refresh small areas like a terminal caret more reliably?
(OP): Based on a night of troubleshooting, the issue seemed to be either:
An issue with the Debian install of the NVIDIA drivers
The server machine is a laptop with a broken main screen (but with an HDMI external monitor plugged in). The Debian X server may have been confused as to whether it was headless or not and caused issues with NoMachine (which tries to detect a headless machine and start a virtual session).
The solution to this exact problem would be to disable the GUI and force a virtual session, per https://www.nomachine.com/AR03P00973 (dummy dongles won't work because the laptop's main display is not a standard plug).
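As a rough sketch of that approach on a systemd-based Debian (this is the generic way to stop booting into the desktop so NoMachine falls back to a virtual display; it is not necessarily the exact procedure from the linked article):

sudo systemctl set-default multi-user.target   # boot to a text console instead of the GUI
sudo reboot
sudo systemctl set-default graphical.target    # revert to the normal desktop later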
In my specific case, I needed GUI access on the server at times, so I couldn't use the above methods, and I could not remedy the problem within Debian. I wiped the system and installed Ubuntu 20.04, which is more forgiving with graphics drivers and monitors. After setting up the Ubuntu system as similarly as possible to the Debian system and letting the proprietary NVIDIA drivers auto-install, NoMachine connected at the same resolution and worked perfectly, without the lag in small screen areas.
I read the following statement from this link:
"For most reliable service we recommend using stationary mode if your device has it. GPSD tools don’t yet directly support this, but that capability may be added in a future release.."
Does anyone know if stationary mode has been added in the latest GPSD release, 3.16?
Thx!
It certainly does not look like it:
http://git.savannah.gnu.org/cgit/gpsd.git/log/?qt=grep&q=stationary
You can always send the appropriate config commands to the GPS before starting gpsd.
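For example (the exact command is receiver-specific and the sentence below is only a placeholder; check your GPS module's protocol manual for the real stationary-mode command):

stty -F /dev/ttyUSB0 raw 9600
printf '$PVENDOR,STATIONARY,ON*00\r\n' > /dev/ttyUSB0   # placeholder vendor sentence, not a real command
gpsd -n /dev/ttyUSB0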
I am trying to use the Cytoscape command line to run a script which imports and exports networks, as follows:
cytoscape.bat -S "script_for_cytoscape.txt"
The script works and performs the required tasks; however, Cytoscape is displayed and we can see the networks and the GUI.
I want to run this as a background job without displaying Cytoscape. I tried the "-noView" option, but it does not work.
So I am wondering if there is a way to run Cytoscape in non-graphical (no view) mode?
Thank you very much in advance!
Unfortunately, there is no "no GUI" mode for Cytoscape, so you will need to have a graphical display. What I've done in the past is to use X Windows across ssh to provide the required display, but drive everything from command scripts, so no interaction is necessary.
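For example, something along these lines (assuming a Linux install of Cytoscape, which ships a cytoscape.sh launcher; the path and host are placeholders):

ssh -X user@remote-host "/path/to/cytoscape/cytoscape.sh -S script_for_cytoscape.txt"   # windows are forwarded to your local X display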
-- scooter
For a project I need WPS support in the NodeMCU firmware. To enable that I have added wifi.wps.* commands in app/modules/wifi.c and added -lwps to the Makefile in app. Everything builds fine, but after flashing the firmware I get problems: the firmware reboots in a loop.
Commenting out the calls to libwps.a and keeping only the Lua commands in place makes the problem disappear. Is there a known issue? Is that why there is no WPS support in NodeMCU?
I have a clone of the nodemcu git repository and a docker build environment for building the firmware.
Arnulf
Found the problem myself. There seems to be a 512 KB limit on the firmware size. I removed some modules when building to stay under that limit, and then everything worked as expected :)
Found out that if I use the ESPTOOL_FS environment variable of esptool.py to set the correct flash memory size, the firmware can be larger than 512 KB and there are no problems starting the module.
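For example (assuming a module with 4 MB of flash and a recent esptool.py that takes sizes like 4MB; older releases use megabit values such as 32m, and the port and firmware file name are placeholders):

ESPTOOL_FS=4MB esptool.py --port /dev/ttyUSB0 write_flash 0x00000 nodemcu-firmware.bin   # declare the real flash size when writing the image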
I am running into hardware issues that perhaps someone here knows a workaround for. I am using a PC and Windows.
For several years I have been making interactive installations using video tracking: the JMyron library in Processing, which has functioned marvelously for me. I use this setup: CCTV-type microcameras go to a multiplexer, then I digitize this signal via a FireWire cable to a PCI card. Processing then reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7).
Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open source video editor called Capture Flux. The webcam never interfered. With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless. I have an exhibition coming up real soon, and there is no way I am going to have the time to rewrite all that code for OpenCV or other newer libraries.
I am curious if anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
look for imaganing devices, find your integrated webcamera
right click on it and choose disable - now processing should skip the device.
Repeat the steps to reenable the device.
Another solution would be to use commands in Processing:
println(Capture.list()); (google it on processing.org). This way you will get all available devices and you can choose a particular one based on its name.
Hope this helps.
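A short Processing sketch of that idea, using the video library's Capture class rather than JMyron (the camera index below is only an example; use the exact name that Capture.list() prints for your FireWire input):

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  String[] cameras = Capture.list();  // list every capture device Processing can see
  println(cameras);
  // pick the FireWire/multiplexer input by its listed name instead of the default webcam
  cam = new Capture(this, 640, 480, cameras[1]);  // example index only
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();  // grab the latest frame
  }
  image(cam, 0, 0);
}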