I upgraded my machine to Windows 7 Home Premium (32-bit). I bought just the Kinect device, no Xbox bundle. I installed the Kinect SDK and plugged in the Kinect.
When the microphone array driver tries to install itself, Windows reports: "Windows has stopped this device because it has reported problems. (Code 43)". Not too specific, lol.
It shows up as an "unspecified device."
The camera works, but the microphone doesn't.
I've tried plugging the Kinect into all eight USB ports, all with the same result.
The machine also lists an unspecified device called Flip CC, but it won't let me get rid of it.
Any ideas?
Thanks,
Rick
My only idea at this point is that you don't have everything necessary installed for the microphone/speech capabilities. I would review this readme and report back here. Be sure to follow it to a T! There are a lot of libraries to install for Speech.
Does anyone know whether it's possible to do augmented faces on iOS with ARCore? I'm talking about this specifically:
https://developers.google.com/ar/develop/unity/augmented-faces/developer-guide
I know ARCore supports Cloud Anchors on iOS. I tried to paste the 'ARKit device' from this sample project into Augmented Faces (replacing some components too), with no success. Do you know whether it's even possible?
Please help!
Yes, it is possible!
Here's a quote from the Augmented Faces iOS Developer Guide:
1. Clone or download the ARCore SDK for iOS from GitHub to obtain the sample app code.
2. Open a Terminal window and run pod install from the folder where the Xcode project exists.
3. Open the sample app in Xcode version 10.3 or greater and connect the device to your development machine via USB. To avoid build errors, make sure you are building from the .xcworkspace file and not the .xcodeproj file.
4. Press Cmd+R or click Run. Use a physical device, not the simulator, to work with Augmented Faces.
5. Tap "OK" to give the camera access to the sample app. The app should open the front camera and immediately track your face in the camera feed. It should place images of fox ears over both sides of your forehead, and place a fox nose over your own nose.
You can also watch a video tutorial here.
No. At the time of writing (Aug 26, 2019), there is still no sign of an iOS release with Augmented Faces support from Google's ARCore team.
For the iOS platform, the best bet is still Apple's ARKit with face-tracking support, though it is limited to devices that support the TrueDepth camera system.
I'm trying to use ROS with a DJI Matrice 100. I followed the tutorial on the website, connected the drone, and got the correct parameters. The problem is that I cannot run the simulation or give commands because the GPS signal is low. I'm working in a small office with a notebook and a desktop PC connected to the drone. Is there a way to bypass the GPS and run the simulation, or is the only solution to move somewhere with a strong GPS signal?
Another question: how can I put my program (written in Python using ROS) on the drone?
Another question: how can I put my program (written in Python using ROS) on the drone?
I assume you're referring to controlling the drone with your ROS program without a simulator?
You need to connect the drone to a PC using the UART port on the M100. My setup uses a USB-to-serial cable connected to a Jetson TX1. If you're using ROS, edit the details of sdk.launch here. Your PC needs to be small enough to fit on the drone; a Raspberry Pi will do the trick. For more details, take a look at the hardware setup guide at this link. I think the M100 plus a small PC/Linux machine should work well for you. Good luck.
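As a starting point for the onboard program itself, here is a minimal rospy sketch. The dji_sdk topic/service names and the Joy axis layout are my recollection of the Onboard-SDK-ROS wrapper, so verify them against the version installed on your machine:

```python
#!/usr/bin/env python
# Minimal sketch of an onboard ROS node for the M100, assuming the
# dji_sdk wrapper from Onboard-SDK-ROS. The topic/service names and
# the Joy axis layout below come from that wrapper's docs -- verify
# them against the version on your onboard computer.
import rospy
from sensor_msgs.msg import Joy
from dji_sdk.srv import SDKControlAuthority

def main():
    rospy.init_node('m100_velocity_demo')

    # Ask the flight controller for programmatic control (1 = request).
    rospy.wait_for_service('/dji_sdk/sdk_control_authority')
    authority = rospy.ServiceProxy('/dji_sdk/sdk_control_authority',
                                   SDKControlAuthority)
    authority(1)

    # ENU velocity + yaw-rate setpoints are published as a Joy message:
    # axes = [vx (m/s), vy (m/s), vz (m/s), yaw_rate (rad/s)].
    pub = rospy.Publisher(
        '/dji_sdk/flight_control_setpoint_ENUvelocity_yawrate',
        Joy, queue_size=10)

    rate = rospy.Rate(50)  # setpoints should be streamed continuously
    while not rospy.is_shutdown():
        pub.publish(Joy(axes=[0.0, 0.5, 0.0, 0.0]))  # drift north, 0.5 m/s
        rate.sleep()

if __name__ == '__main__':
    main()
```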
Perhaps you can download and run the mobile (Android or iOS) SDK simulation example app to start the simulator from there, and then run the commands you want from the Onboard SDK for testing. I am not sure whether this would work, since it is unclear:
1. whether you need to run the simulator from the Onboard SDK as opposed to the Mobile SDK, and
2. whether you need to run two simulations simultaneously; DJI may not allow running two simulators at the same time.
Point 2 would be a DJI issue, and I haven't tested two simulations at once. My guess is that you can't run two, but it could be worth a try. Point 1 depends more on what you are trying to accomplish. I could be missing something, though; I don't have experience running multiple simulations, if that is what you need.
Hi, did you open DJI Assistant 2? You can connect your drone to the PC, then open the simulator in DJI Assistant 2. In the simulator you can set the latitude and longitude. After the simulation starts, the GPS signal will be high at all times.
When I build and run the "Hello Sceneform" and "Solar System" projects that I downloaded while following the Android Quickstart https://developers.google.com/ar/develop/java/quickstart, all I see on my phone (a Galaxy S9) are these shifting gray/black lines, with the moving ARCore hand/phone on top.
I can download and run ARCore apps from the store without a hitch. The S9 is the only ARCore-compatible phone I can test with. I'm using Android Studio 3.2 preview, Windows 10, ARCore 1.2, and Android 8.0.0.
When I try running on any emulator device, it immediately crashes before displaying anything, likely because my desktop GPU doesn't support OpenGL ES 3.1, judging by the output of "adb logcat | grep eglMakeCurrent". Right now my goal is just to get it working on my phone, though.
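For reference, on Windows, where grep isn't available in a stock shell, the same filter can be done with a short Python script like this (assuming adb is on the PATH):

```python
# Filter logcat for EGL-related lines without grep
# (e.g., in a stock Windows shell). Assumes adb is on
# the PATH and a phone/emulator is attached.
import subprocess

logcat = subprocess.Popen(["adb", "logcat"],
                          stdout=subprocess.PIPE, text=True)
for line in logcat.stdout:
    if "eglMakeCurrent" in line or "EGL_BAD" in line:
        print(line, end="")
```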
The grey bars on the Galaxy S9 are a bug (see: https://github.com/google-ar/sceneform-android-sdk/issues/28).
As of July 18th, it is still open.
I had the same issue. Try using the NDK Quickstart instructions instead, found here.
After a few installations and updates of Gradle, etc., running /samples/hello_ar_c via USB on an S9 worked flawlessly. Hope this helps.
Do you plan to publish an SDK for the Alpha and NEX cameras? You publish some apps yourselves, and it would be good to see what the developer community out there could do with these devices.
In particular, I would like to see a studio app that makes the OLED viewfinder show a well-exposed image regardless of manual camera settings. That would allow me to use the A6000, A7R, and the like in a studio with high-power studio strobes.
Many thanks
Nick SS
The original poster is/was asking Sony how to write/develop embedded applications for Sony cameras that support after-the-fact installation of "apps" from the Sony "PlayMemories" application (aka "store"). There is also a somewhat related (but technically very different) SDK from Sony called the "Remote API", which apps outside the camera (typically on Android and iOS phones) use to perform actions on the camera remotely over Wi-Fi. One is an in-camera app; the other is an out-of-camera app.
The short answer is that there has been some reverse engineering showing that these cameras apparently run a variant of Android, and someone has figured out how to duplicate the process of installing your own "app" on the camera. See here: https://github.com/ma1co/Sony-PMCA-RE
Here is a link to Sony's SDK site. It's still in beta and is available only in APK format.
https://developer.sony.com/downloads/camera-file/sony-camera-remote-api-beta-sdk/
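To give a feel for what the Remote API looks like in practice: it is plain JSON-RPC over HTTP. Here is a minimal sketch; the endpoint URL below is the usual default, but strictly it should be discovered via the camera's SSDP device description, so treat it as an assumption:

```python
# Minimal sketch of a Sony Camera Remote API (beta) call over JSON-RPC.
# The endpoint URL is an assumption (typical default); discover the
# real one via the camera's SSDP device description.
import json
import urllib.request

ENDPOINT = 'http://192.168.122.1:8080/sony/camera'  # verify for your camera

def call(method, params=None):
    payload = json.dumps({
        'method': method,
        'params': params or [],
        'id': 1,
        'version': '1.0',
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# List what this camera supports, then trigger the shutter.
print(call('getAvailableApiList'))
print(call('actTakePicture'))
```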
I looked over the documentation from the Sony SDK site and added the rest of the functions for the A6000 camera. You can find it on GitHub. It's still a bit of a work in progress, and I have not yet tested all of the functions. This repo will generate a jar file that you can use in any Android (APK) application.
https://github.com/keyserSoze42/SonySDK
Here is the repo for the APK that Sony built. I parsed out the SDK part, and this app uses the SDK built from the other repo. You should be able to find it in the libs directory.
github.com/keyserSoze42/SonySampleApp
Currently, only the Camera Remote API (beta) is published, and it is constantly being updated with new capabilities and new devices.
I'm currently working with the Kinect for Windows SDK version 1 under Win7, VS2010, and C#.NET.
The demos in the Microsoft Kinect for Windows SDK Sample Browser don't run properly after the Kinect is connected to my PC.
Kinect Explorer (in C#) says "Kinect is not ready" (which is different from the "Please insert a Kinect..." message shown when no Kinect is connected).
Shapes Game (in C#) says "Oops... Something's wrong with Kinect."
Skeletal Viewer (in C++) runs, but only the depth image works properly. The color image has a frame rate of less than 1 fps, and the skeleton view shows nothing but a black background.
Here's what I have tried:
None of the above happens (meaning everything works fine) if the SDK is reinstalled and the PC is not restarted. So... I have to reinstall the Kinect SDK every time I restart my PC!
I checked Device Manager after the Kinect USB was plugged into my PC. The strange thing is that if the PC has not been restarted since the SDK was reinstalled, there are four devices related to the Kinect: Audio, Speech, NUI, and Security. But after restarting the PC, the Security device no longer appears when the Kinect is connected.
I've tried two different Kinects (one at a time) with the same result.
Using different USB ports makes no difference.
I don't know what's wrong with my PC or what to do next. The only thing I know is that I don't want to reinstall the Kinect SDK every time I restart my PC! Would anybody offer a solution? Thanks very much!
I came across this problem just now and finally solved it. I had accidentally killed the process named "KinectManagementService" in Task Manager, and afterwards I could not connect the Kinect any more. Glance at Task Manager to see whether that process is there.
Open the Windows Services console and start the "Kinect Management" service. That's it!
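If you'd rather script the check than open the Services console every time, something like this works; the service name below is assumed to match the process name mentioned above, so confirm it in services.msc first:

```python
# Quick check-and-start for the Kinect management service on Windows.
# The service name is an assumption based on the process name above;
# confirm it in services.msc. Starting a service requires an elevated
# (Administrator) prompt.
import subprocess

SERVICE = 'KinectManagementService'  # assumed name -- verify first

status = subprocess.run(['sc', 'query', SERVICE],
                        capture_output=True, text=True)
if 'RUNNING' in status.stdout:
    print('%s is already running.' % SERVICE)
else:
    subprocess.run(['sc', 'start', SERVICE], check=True)
    print('Started %s.' % SERVICE)
```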