Are ARCore features on a desktop PC possible?

Is it possible to run or simulate ARCore on a desktop PC? The idea is that users with low-end phones can still use ARCore's features. I want users to upload their videos, along with all the data ARCore requires (clicks, accelerometer and gyroscope readings, etc.), to a desktop PC server (Linux or Windows, it doesn't matter), and I want the desktop PC to apply ARCore's features.
For example, simulating the video stream and placing objects at the clicked locations on the PC instead of on a smartphone. The PC could then, for example, send back a video with the results.
Kind regards,
Yorick

ARCore features are accessible on a desktop PC, but only inside a virtual device (AVD) running in Android Studio's emulator. To find out how to install and run an AVD along with the ARCore SDK on your PC, read the SO post Can't Install ARCore on emulator for Android Studio.
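As a rough sketch of the emulator route, the two pieces are: start the AVD with the emulated "virtual scene" as its back camera, and sideload the emulator build of Google Play Services for AR. The AVD name and exact APK filename below are placeholders; the real APK comes from the ARCore releases page on GitHub:

```shell
# Start an AVD whose back camera is the emulated virtual scene
# (this is what ARCore tracks inside the emulator).
# "Pixel_API_29" is a placeholder for your own AVD name.
emulator -avd Pixel_API_29 -camera-back virtualscene

# Sideload the x86 emulator build of Google Play Services for AR (ARCore).
# The filename is a placeholder; download the current one from the
# ARCore GitHub releases page.
adb install -r Google_Play_Services_for_AR_x86_for_emulator.apk
```

Note this only gives you ARCore inside the emulated Android environment; it does not process externally uploaded video streams the way the question describes.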
Hope this helps.

Related

Problem with the simulation of the DJI Matrice 100 with ROS

I'm trying to use ROS with the DJI Matrice 100. I followed the tutorial on the website, connected the drone, and got the correct parameters. The problem is that I cannot run the simulation or give commands because the GPS signal is low. I'm working in a small office with a notebook and a desktop PC connected to the drone. Is there a way to bypass the GPS and run the simulation, or is the only solution to move somewhere the GPS signal is strong?
Another question: how can I put my program (written in Python using ROS) on the drone?
Another question: how can I put my program (written in Python using ROS) on the drone?
I assume you're referring to controlling the drone with your ROS program without a simulator?
You need to connect the drone to a PC using the UART port on the M100. My setup uses a USB-to-serial cable connected to a Jetson TX1. If you're using ROS, edit the details of sdk.launch here. Your PC needs to be small enough to fit on the drone; a Raspberry Pi will do the trick. For more details, take a look at the hardware setup guide at this link. I think the M100 plus a PC/Linux machine should work well for you. Good luck.
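For reference, a minimal sketch of the kind of edits sdk.launch needs (parameter names follow the DJI Onboard SDK ROS package; the device path, baud rate, app ID, and key below are placeholders you must replace with your own values from the DJI developer site):

```xml
<launch>
  <node pkg="dji_sdk" type="dji_sdk_node" name="dji_sdk" output="screen">
    <!-- Serial device your USB-to-serial cable enumerates as -->
    <param name="serial_name" type="string" value="/dev/ttyUSB0"/>
    <param name="baud_rate" type="int" value="230400"/>
    <!-- Credentials from your DJI developer account (placeholders) -->
    <param name="app_id" type="int" value="123456"/>
    <param name="enc_key" type="string" value="your-app-key-here"/>
  </node>
</launch>
```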
Perhaps you can download and run the mobile (Android or iOS) SDK simulation example app, start the simulator from there, and then run the commands you want from the Onboard SDK for testing. I am not sure if this would work, since it is unclear:
1. whether you need to run the simulator from the Onboard SDK as opposed to the Mobile SDK, and
2. whether you need to run both simulations simultaneously; DJI may not allow running two simulators at the same time.
2.) would be a DJI issue, and I haven't tested two simulations at once. My guess is that you can't run two, but it could be worth a try. 1.) depends more on what you are trying to accomplish. But I could be missing something, and I don't have experience running multiple simulations, if that is what you need.
Hi, did you open DJI Assistant 2? You can connect your drone to the PC, then open the simulator in DJI Assistant 2. In the simulator you can set the latitude and longitude. After the simulation starts, the GPS signal will be high at all times.

UWP Game Capture on Xbox DirectX or MMF

I'm investigating building a "Game Capture" app that works within UWP on Xbox One. For capturing the actual content of the screen during gameplay, there appear to be two ways to go within the wider ecosystem of Microsoft libraries:
DirectX (Now part of Windows API)
Microsoft Media Foundation
With that in mind, my assumption is that DirectX is natively accessible by UWP apps via the Windows Runtime API and that, aside from limitations on the DirectX feature sets and hardware, basic APIs exist for capturing the content of the Xbox's screen.
MMF I'm not so sure about; it does provide some interesting access to accelerated video encoding, but it does not appear to be part of the UWP subset of APIs available on the Xbox.
Beyond the correct library to use, are there any other known limitations on developing apps that "capture" the Xbox's screen and run natively on the device?
Thanks
It's not possible at this time.
The Xbox One is a closed platform and not as open as Windows 10 running on a desktop PC, for example.
On a PC it's possible to use existing APIs to capture the output from a game, app, etc. On Xbox One, this is handled by the system only. The console is recording all the time, but the user decides when to save that footage or broadcast it via Twitch, YouTube, etc.
UWP apps running on Xbox One cannot record footage themselves or access the built-in APIs for this functionality.

Sony PlayMemories SDK for Alpha & NEX

Do you plan to publish an SDK for the Alpha and NEX cameras? You publish some apps yourself, and it would be good to see what the developer community out there could do with these devices.
In particular, I would like to see a studio app that makes the OLED viewfinder show a well-exposed image regardless of manual camera settings. That would allow me to use the A6000, A7R, and the like in a studio with high-power studio strobes.
Many thanks
Nick SS
The original poster is/was asking Sony how to develop embedded applications for Sony cameras that support after-the-fact installation of "apps" from the Sony "PlayMemories" application (aka "store"). There is also a somewhat related (but technically very different) SDK by Sony called the "Remote API", which is used by apps outside the camera (Android and iOS phones, typically) to remotely perform actions on the camera over WiFi. One is an in-camera app; the other is an out-of-camera app.
The short answer is that there has been some reverse engineering showing that these cameras apparently run some variant of Android itself, AND someone has figured out how to duplicate the process of installing your own "app" on the camera. See here: https://github.com/ma1co/Sony-PMCA-RE
Here is a link to Sony's SDK site. It's still in beta and is only in apk format.
https://developer.sony.com/downloads/camera-file/sony-camera-remote-api-beta-sdk/
I looked over the documentation on the Sony SDK site and added the rest of the functions for the A6000 camera. You can find it on GitHub. It's still a bit of a work in progress, and I have not yet tested all the functions. This repo generates a jar file that you can use in any apk application.
https://github.com/keyserSoze42/SonySDK
Here is the repo for the apk that Sony built. I parsed out the SDK part, and it uses the SDK built from the other repo. You should be able to find it in the libs directory.
github.com/keyserSoze42/SonySampleApp
Currently only the Camera Remote API (beta) is published, and it is constantly being updated with new capabilities and new devices.
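For context, the Camera Remote API (beta) is a JSON-RPC-style interface over HTTP on the camera's WiFi access point. A minimal sketch in Python of building one of its request payloads (the method name actTakePicture and the payload shape follow the public beta documentation; the camera's endpoint URL varies per device and is shown only as a comment):

```python
import json


def build_camera_request(method, params=None, request_id=1, version="1.0"):
    """Build the JSON-RPC payload the Camera Remote API (beta) expects."""
    return json.dumps({
        "method": method,       # e.g. "actTakePicture", "getAvailableApiList"
        "params": params or [],
        "id": request_id,       # echoed back so responses can be matched
        "version": version,
    })


# Example: the payload for triggering a still capture. In practice this is
# POSTed to the camera's service endpoint (e.g. http://<camera-ip>/sony/camera)
# while connected to the camera's WiFi access point.
payload = build_camera_request("actTakePicture")
```

The payload builder is separated from the transport so the same code works with whatever HTTP client the app already uses.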

Why do I have to reinstall the Kinect SDK every time I restart my PC

I'm currently working with the Kinect for Windows SDK version 1 under Win7, VS2010, and C#.NET.
The demos in the Microsoft Kinect for Windows SDK Sample Browser can't run properly after the Kinect is connected to my PC.
Kinect Explorer (in C#) says Kinect is not ready (which is different from the Please insert a Kinect... message shown when the Kinect is not connected).
Shapes Game (in C#) says Oops... Something's wrong with Kinect.
Skeletal Viewer (in C++) can run, but only the depth image works properly. The color image has a frame rate of less than 1! And the skeleton view shows nothing but a black background.
Here's what I have tried:
None of the above happens (which means everything works) if the SDK is reinstalled and the PC is not restarted. So... I have to reinstall the Kinect SDK every time I restart my PC!
I checked the device list after the Kinect's USB cable was plugged into my PC. The strange thing is that if the PC is not restarted after reinstalling the SDK, there are four devices for the Kinect: Audio, Speech, NUI, and Security. But after restarting the PC, the Security device no longer shows up when the Kinect is connected.
I've tried two different Kinects (one at a time) and got the same result.
Using different USB ports makes no difference.
I don't know what's wrong with my PC or what to do next. The only thing I know is that I don't want to reinstall the Kinect SDK every time I restart my PC! Can anybody offer a solution? Thanks very much!
I came across this problem just now and finally solved it. I had accidentally killed the process named "KinectManagementService" in Task Manager, and then I could not connect the Kinect any more. Glance at Task Manager to check whether the process is there.
Open Windows Services and start the "Kinect Management" service. That's it!
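From an elevated command prompt, the same check and restart can be done without reinstalling the SDK. The service name below is taken from the process name mentioned above; verify the exact name on your machine with the query first:

```shell
REM Check whether the Kinect management service exists and its state
sc query KinectManagementService

REM Start it if it is stopped (requires an elevated prompt)
net start KinectManagementService
```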

Kinect SDK - Not recognizing microphone array

I upgraded my machine to Windows 7 Home Premium (32-bit). I bought just the Kinect device, no XBOX bundle. I installed the Kinect SDK.
I plug in the Kinect.
When the microphone array driver tries to install itself, it says Windows has stopped this device because it has reported problems (Code 43). Not too specific, lol.
It calls it an "unspecified device".
The camera works but the microphone doesn't.
I've tried plugging the Kinect into all 8 USB ports, all with the same result.
The machine says there's also an unspecified device called Flip CC, but it won't let me get rid of it.
Any ideas?
Thanks,
Rick
My only idea at this point is that you don't have everything installed that's necessary for the microphone/speech capabilities. I would review this readme and report back here. Be sure to follow it to a T! There are a lot of libraries to install for speech.