The examples and documentation for the Spresense have a lot of very clear information, yet I think there's something missing about using digital mics with the Arduino IDE. The modifications to the extension board for digital mics are clearly documented with nice pictures, and the Arduino example projects are great, showing you how to record, encode, etc. I've also understood that you must tell the recorder to use the digital microphones with the following:
theAudio->setRecorderMode(AS_SETRECDR_STS_INPUTDEVICE_MIC_D);
There are also nice details in the audio documentation explaining that CXD56_AUDIO_MIC_CHANNEL_SEL must be changed from its default value of 0xFFFF4321, which selects the analog microphones, to a value that selects the digital microphones. I've been able to follow the instructions for rebuilding the NuttX kernel and Spresense SDK with a new value of 0xCBA98765, which should enable eight digital mics. The last piece that is not clear is which NuttX/SDK binary files now need to be copied over to the Arduino environment. I have a Windows PC for the Arduino IDE and a Linux PC for building NuttX and those examples. Can you please list which files on the Linux machine I need to copy over to the Windows PC so the Arduino IDE uses the SDK build that enables the digital mics? Sorry if this is documented somewhere and I overlooked it!
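For context, my recorder setup is essentially the stock recorder_wav example with the digital-mic input device. Treat this as a sketch rather than exact code: the SD path, sample rate, and channel count are just the values I happen to use, and MIC_D is the constant from above.

#include <SDHCI.h>
#include <Audio.h>

SDClass theSD;
AudioClass *theAudio;
File myFile;

void setup() {
  theSD.begin();
  theAudio = AudioClass::getInstance();
  theAudio->begin();

  // Select the digital microphones instead of the analog ones.
  theAudio->setRecorderMode(AS_SETRECDR_STS_INPUTDEVICE_MIC_D);

  // WAV recording; 48 kHz mono is just an example setting.
  theAudio->initRecorder(AS_CODECTYPE_WAV, "/mnt/sd0/BIN",
                         AS_SAMPLINGRATE_48000, AS_CHANNEL_MONO);

  myFile = theSD.open("Sound.wav", FILE_WRITE);
  theAudio->writeWavHeader(myFile);
  theAudio->startRecorder();
}

void loop() {
  // Pump audio frames into the file; the stock example adds
  // error handling and a stop condition around this call.
  theAudio->readFrames(myFile);
}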
The instructions provided by Sony to record using the digital mics work fine! It was a hardware problem with my microphones. I was able to use the NuttX example named audio_recorder. I haven't tried with Arduino, and the process of copying files from a NuttX build into the Arduino build folders is still not very clear, but that's a separate issue.
My goal is to create a simple LED controlled by my iPhone through Homekit.
I'd like to do it using only a NodeMCU (ESP8266).
I found lots of solutions using a Node.js library (HAP-NodeJS), which works well on my PC but obviously can't run on a NodeMCU board.
As I understand it, all these solutions require a Raspberry Pi (or a similar board running Linux) that talks to the NodeMCU board. But I don't like that approach.
Is there a way to achieve this goal only with a NodeMCU board?
Update 1 (25/01/2017)
Ok, I'm reading lots of blogs and watching some videos, and I'm understanding more about this topic.
I found NodeMCU Flasher for installing the firmware on the board, and I found the firmware I'd like to use (I think I'd be more comfortable with Lua).
First problem... I'm using a Mac, and NodeMCU Flasher is for Windows... Is there an alternative?
I also downloaded ESPlorer. Does it provide the same functionality as NodeMCU Flasher?
Please check this.
Public code for Apple's HomeKit protocol has been around for some time for more potent processors (notably HAP-NodeJS). This is a rewrite for the ESP8266 to provide the server foundation. The project uses ESP8266_RTOS_SDK and WolfCrypt 3.9.8 for the crypto. It will, however, NOT deliver a certified HomeKit device.
For development purposes I have been using the "NodeMCU Firmware Programmer" to flash the firmware to the ESP-12 NodeMCU Dev Kit V2, and then using ESPlorer to upload the lua files.
This works well for development purposes, but now we are moving into commercial production.
Is there a faster (one-step?) way to upload both the NodeMCU firmware and the Lua files? I need to program between 1,000 and 5,000 units per month.
Yes, there is a one-step way.
You first build a file system image with spiffsimg and then flash both the firmware and the image to the device (with esptool.py, I suggest).
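Roughly like this. The file names are placeholders, and the SPIFFS flash offset depends on your module's flash size and on where your firmware image ends, so check the NodeMCU documentation for the right values for your build:

# Build a SPIFFS image containing your Lua files.
# spiffs.script holds import commands, one per file, e.g.:
#   import init.lua init.lua
spiffsimg -f lua-files.img -c 0x10000 -r spiffs.script

# Flash firmware and file system image in one invocation.
# 0x70000 is only an example offset - use the one matching your flash layout.
esptool.py --port /dev/ttyUSB0 write_flash 0x00000 nodemcu-firmware.bin 0x70000 lua-files.img

Since esptool.py accepts multiple address/file pairs in a single write_flash call, this is a genuine one-step flash per unit once the two images are built.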
How could I port MPlayer to iOS and make it support SMB?
I have built FFmpeg, but the other thing I do not know is how to make it support SMB.
What I want is to develop a player that supports SMB on iOS.
I typed "ffmpeg smb support" into Google, and it came back with this part of the official documentation on FFmpeg's supported protocols, which you obviously should read. It describes the smb:// protocol, which, according to the heading, depends on libsmbclient (from the SAMBA project).
You will have to port libsmbclient to iOS too, build FFmpeg with SMB support, make sure that iOS allows you the kind of network access SMB needs, and test things.
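The FFmpeg side, at least, looks roughly like this. This is only a sketch: it assumes you have already cross-compiled libsmbclient for iOS and installed it under some prefix ($SMB_PREFIX here is a placeholder), and the compiler/SDK settings stand in for a real iOS cross-compile setup.

# Assumes libsmbclient is already cross-compiled and installed in $SMB_PREFIX.
# Depending on your FFmpeg version, licensing switches may also be required
# for libsmbclient - check ./configure --help.
./configure \
  --enable-cross-compile \
  --target-os=darwin \
  --arch=arm64 \
  --cc="xcrun -sdk iphoneos clang" \
  --enable-libsmbclient \
  --extra-cflags="-I$SMB_PREFIX/include" \
  --extra-ldflags="-L$SMB_PREFIX/lib"
make -j4

Once built that way, FFmpeg's protocol layer should accept URLs of the form smb://server/share/file, which your player can pass straight to avformat_open_input() like any other input.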
I am running into hardware issues for which perhaps someone here knows a workaround. I am using a PC with Windows.
For several years I have been making interactive installations using video tracking: the JMyron library in Processing, which has functioned marvelously for me. My setup: CCTV-type microcameras feed a multiplexer, then I digitize that signal via a FireWire cable to a PCI card. Processing reads these quads (sometimes more) as a single window, and it has always worked (from Windows XP all the way to 7).

Then comes Windows 8: Processing seems to prefer the built-in webcam to the FireWire bus. On previous versions of Windows, the FireWire bus would naturally override the webcam, provided I had first opened a video capture in Windows Movie Maker and then shut it down before running the Processing sketch. In Windows 7, which had no native video capture software, I used a great open source tool called Capture Flux, and the webcam never interfered. With Windows 8, no matter what I try, Processing defaults to the webcam, which for my purposes is useless. I have an exhibition coming up very soon, and there is no way I will have the time to rewrite all that code for OpenCV or other newer libraries.
I am curious whether anyone has had similar problems and found a workaround. Is there a way of disabling the webcam in Windows 8 (temporarily, of course, because I need it to be operational for other applications), or some other solution?
Thank you!
Try this:
type "windows icon+x" choose device manager (or use run/command line: "mmc devmgmt.msc")
look for imaganing devices, find your integrated webcamera
right click on it and choose disable - now processing should skip the device.
Repeat the steps to reenable the device.
Another solution would be to use commands in Processing:
println(Capture.list()); (look it up on processing.org) - this way you get all available devices and can choose a particular one based on its name, as in the sketch below.
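A minimal sketch along those lines. The "FireWire" name match is a placeholder - use whatever string Capture.list() actually prints for your capture card:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  String[] devices = Capture.list();
  println(devices);                  // inspect what is available
  String chosen = devices[0];        // fall back to the first device
  for (String d : devices) {
    if (d.contains("FireWire")) {    // placeholder name match
      chosen = d;
      break;
    }
  }
  cam = new Capture(this, width, height, chosen);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
}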
Hope this helps.
I am a newbie to the industry, and as part of my internship I have been assigned the above project. I have no experience with porting an application to a different OS.
So far, I have tried to understand the basic structure of a component (that's what an application is called in IOS-XR), but as far as I can understand, porting Wireshark will also require porting the libpcap library to XR.
Can someone please shed some light on how I should go about approaching this?
I know nothing about QNX; however, I will note that Wireshark has a lot of dependencies on various libraries. Some examples:
libglib
libgtk
libffi-5
libfontconfig-1
libfreetype-6
libintl-8
libjasper-1
libjpeg-8
liblzma-5
libpixman-1-0
libpng15-15
libtiff-5
libxml2-2
...
Are these libraries available on QNX?
With respect to libpcap:
libpcap is needed for capturing traffic. If it is not available, it would certainly need to be ported. I could imagine that this might be a large effort, given that the code is presumably quite dependent upon the exact OS capabilities for getting access to network-level data.
For information about developing Wireshark (on Windows and *nix) see the
Wireshark Developer's Guide.