ns-3 5G Indoor Localization

We are bachelor's students in Network Engineering, and our graduation research project is about 5G mmWave indoor localization for mobile devices.
The research aims to identify the indoor x, y, z location of a device using D2D communication and to estimate that location from RSSI using localization techniques.
Is this possible in ns-3? If so, which module should I use?
Regards,
Thanks.

Dedicated 5G indoor positioning measurements and procedures are a focus area of 3GPP Release 16. Enhancements for industrial scenarios will be introduced in Release 17.
The location management function (LMF) sits at the center of the 5G positioning architecture. The LMF receives measurements and assistance information from the NG-RAN and the UE via the AMF over the NLs interface in order to compute the position.
The new NR positioning protocol A (NRPPa) is defined to convey information between the NG-RAN and the LMF over NG-C. The LMF configures the UE using the LTE positioning protocol (LPP) via the AMF. The NG-RAN configures the UE using the radio resource control (RRC) protocol over LTE-Uu and NR-Uu.
The NR positioning reference signal (NR PRS) in the downlink and the sounding reference signal (SRS) for positioning in the uplink have been added to the NR specifications.
Since private 5G networks (such as 5G edge deployments) will be rolled out in large buildings and production zones, the same 5G positioning architecture will be applicable to 5G indoor positioning.
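As for the RSSI part of the question: the classic technique is to convert RSSI to distance with a propagation model and then trilaterate. Below is a minimal, self-contained C++ sketch (ns-3 is itself written in C++, so the same math can be dropped into a simulation script). The log-distance parameters and anchor readings are illustrative assumptions, not values from any ns-3 module.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative RSSI trilateration sketch (2-D for brevity; z requires a 4th anchor).
// Assumed log-distance path loss model: RSSI(d) = P0 - 10*n*log10(d/d0), d0 = 1 m.
struct Anchor { double x, y, rssiDbm; };

// Invert the path loss model to estimate distance from measured RSSI.
// P0 (RSSI at 1 m) and exponent n are assumed calibration values.
double RssiToDistance(double rssiDbm, double p0Dbm = -40.0, double n = 2.0) {
    return std::pow(10.0, (p0Dbm - rssiDbm) / (10.0 * n));
}

// Linearized trilateration with three anchors: subtracting the third circle
// equation from the first two leaves a 2x2 linear system in (x, y).
bool Trilaterate(const Anchor a[3], double &x, double &y) {
    double d[3];
    for (int i = 0; i < 3; ++i) d[i] = RssiToDistance(a[i].rssiDbm);

    const double A11 = 2.0 * (a[2].x - a[0].x), A12 = 2.0 * (a[2].y - a[0].y);
    const double A21 = 2.0 * (a[2].x - a[1].x), A22 = 2.0 * (a[2].y - a[1].y);
    const double b1 = d[0]*d[0] - d[2]*d[2]
                    + a[2].x*a[2].x - a[0].x*a[0].x + a[2].y*a[2].y - a[0].y*a[0].y;
    const double b2 = d[1]*d[1] - d[2]*d[2]
                    + a[2].x*a[2].x - a[1].x*a[1].x + a[2].y*a[2].y - a[1].y*a[1].y;

    const double det = A11 * A22 - A12 * A21;
    if (std::fabs(det) < 1e-9) return false;   // anchors are collinear
    x = (b1 * A22 - b2 * A12) / det;
    y = (A11 * b2 - A21 * b1) / det;
    return true;
}

int main() {
    // Hypothetical anchors at known positions with example RSSI readings.
    const Anchor anchors[3] = {
        {0.0, 0.0, -52.0}, {10.0, 0.0, -60.0}, {0.0, 10.0, -56.0},
    };
    double x, y;
    if (Trilaterate(anchors, x, y))
        std::printf("estimated position: (%.2f, %.2f)\n", x, y);
}
```

With four or more anchors the same linearization becomes an overdetermined system solved by least squares, which also yields z.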

Related

Can the STM32F1 (as part of MXChip) support CAN bus?

Background
I'm very new to electronics/IoT development. I'm trying to create a solution to read my wife's car's CAN bus messages and store them on an SD card. I hope to analyze the data and build a dashboard based on the car's telemetry.
This specific question relates to a chip (STM32F1) on an IoT board (MXChip AZ3166) I already own, which I hope to incorporate into my overall solution as the data acquisition layer.
For reference: the chip is the STMicroelectronics STM32F103C8T6, a 32-bit ARM Cortex-M3 microcontroller, and the IoT board is the MXChip AZ3166 IoT DevKit.
Reading the MXChip AZ3166 board's spec and doing some research, I have found that the MXChip AZ3166 comprises two main chipsets:
Vendor               Part Number     Ref Link
STMicroelectronics   STM32F103C8T6   https://uk.rs-online.com/web/p/microcontrollers/1023545
MXChip               EMW3166         https://www.mxchip.com/en/products/module/54
Main Question
The product specification mentions that the STM32F1 features motor control peripherals plus CAN and USB full-speed interfaces; it also states that it has 1x CAN channel. Does that mean I can interface the MXChip AZ3166 board featuring this chip, via the GPIO pins, to the CAN bus in my wife's car and receive the CAN bus signals (I presume adhering to the ISO 11898-1 CAN data communication protocol)?
How would I find out which pins to connect to the CAN high and CAN low connections on the car's CAN bus?
Concerning power, how would I determine that the CAN signal received doesn't fry the MXChip board, with its stated maximum operating voltage of 3.3 V?
Yes, you'll want an MCU with a built-in CAN controller for communicating on a CAN bus. However, the CAN standard only covers the physical and data link layers. You need to know the application layer in order to meaningfully interact with a bus.
The application layer on a car may or may not be proprietary. It may even be encrypted. If you don't know what protocol it uses, then no can do. Reverse-engineering CAN protocols is hackish, hard and dangerous. Plugging into a CAN bus where you have no clue about timing considerations etc. is also very dangerous.
But cars usually have an "on-board diagnostics" (OBD) port used for service purposes, with standardized application layers, through which you may have access to various parts of the car. There are lots of different standards for OBD, and older ones didn't even use CAN. It depends on the car model.
In the case of the OBD port, the pinout is standardized and you can find it on the Internet. Otherwise it is very simple to find out which signal is CANH and which is CANL with an oscilloscope: CANH swings from 2.5 V up towards 3.5 V, and CANL from 2.5 V down towards 1.5 V. A more hacky solution is to measure this with a multimeter; that is perfectly possible, since one signal will sit slightly above 2.5 V and the other slightly below.
CAN is standardized, so if you have a CAN bus on the board, you connect there. In some cases there may be a 12 V supply wired together with the signal, and that's the only one which could fry something.
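To make the "standardized application layer" point concrete, here is a rough sketch of a single OBD-II query (mode 0x01, PID 0x0D = vehicle speed) using the STM32 HAL bxCAN driver available for the STM32F103. Everything here is an assumption for illustration: it presumes a CubeMX-style project where hcan is already initialized at 500 kbit/s with a filter passing ID 0x7E8, that the CAN peripheral's pins are actually broken out on the AZ3166 board, and, importantly, a 3.3 V CAN transceiver (e.g. an SN65HVD230-class part) between the MCU pins and the bus. The MCU's TX/RX pins must never connect to CANH/CANL directly, which is also the answer to the 3.3 V concern.

```cpp
#include "stm32f1xx_hal.h"   // assumes an STM32CubeMX-generated project

extern CAN_HandleTypeDef hcan; // assumed configured elsewhere: 500 kbit/s, filter on 0x7E8

// Send one OBD-II request: mode 0x01 (current data), PID 0x0D (vehicle speed).
// 0x7DF is the standardized OBD-II functional (broadcast) request ID.
static HAL_StatusTypeDef ObdRequestSpeed(void)
{
    CAN_TxHeaderTypeDef tx = {0};
    tx.StdId = 0x7DF;
    tx.IDE   = CAN_ID_STD;
    tx.RTR   = CAN_RTR_DATA;
    tx.DLC   = 8;

    // ISO 15765-2 single frame: [length = 2, mode, PID, padding...]
    uint8_t data[8] = {0x02, 0x01, 0x0D, 0x55, 0x55, 0x55, 0x55, 0x55};
    uint32_t mailbox;
    return HAL_CAN_AddTxMessage(&hcan, &tx, data, &mailbox);
}

// Poll FIFO 0 for the reply. The first ECU answers from ID 0x7E8 with
// [length = 3, 0x41 (= mode + 0x40), 0x0D, speed in km/h, ...].
static bool ObdReadSpeed(uint8_t *speedKmh)
{
    if (HAL_CAN_GetRxFifoFillLevel(&hcan, CAN_RX_FIFO0) == 0)
        return false;

    CAN_RxHeaderTypeDef rx;
    uint8_t data[8];
    if (HAL_CAN_GetRxMessage(&hcan, CAN_RX_FIFO0, &rx, data) != HAL_OK)
        return false;

    if (rx.StdId == 0x7E8 && data[1] == 0x41 && data[2] == 0x0D) {
        *speedKmh = data[3];
        return true;
    }
    return false;
}
```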
Overall, please note that the project you describe here is very difficult and not a beginner task. It sounds as if you have next to no experience of electronics/embedded systems, so I would recommend picking a far simpler project.
Furthermore, modifying car electronics or installing your own electronics in a car is illegal in most parts of the world. Third-party type approvals with EMC tests are mandatory (and very expensive). If your car is involved in an accident and they find custom electronics without type approval in it, you could be facing serious legal consequences.

Why isn't UWB technology used for big file transfers?

I am working on my thesis right now, and I have to compare near-field communication technologies (WPAN) that can transfer files.
Everybody is talking about how great UWB is for locating things and how fast it is, but no one (except Apple) has used it for file transmission. Why? It has a bigger bandwidth than Wi-Fi peer-to-peer.
Apple seems to use it for AirDrop, and for both Android and iOS there is an API to develop against this technology. But it looks like it is designed for location services and only works with specific devices for location. So I would not be able to use it, for example, to transfer files between iOS/Android and a Raspberry Pi at close range.
Can anyone explain to me whether UWB can transfer files, and why I should use Wi-Fi Direct instead of UWB if I want to transfer files larger than 1 GB at the fastest speed (but without internet, of course)?
Thank you very much.
IR-UWB is more popular than MC-UWB
UWB modulation schemes can broadly be divided into two categories:
multi-carrier UWB (MC-UWB): used for high-throughput data transmission, up to 480 Mbps
impulse-radio UWB (IR-UWB): used for localization and sensing
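For a sense of scale when comparing against Wi-Fi Direct, here is a back-of-the-envelope sketch of ideal transfer times for a 1 GB file. The Wi-Fi PHY rates are assumed typical values (not from the answer above), and every technology delivers well under its PHY rate in practice.

```cpp
#include <cstdio>

int main() {
    struct Link { const char *name; double mbps; };
    // PHY rates are illustrative assumptions; effective throughput is far
    // lower once protocol overhead and range effects are included.
    const Link links[] = {
        {"MC-UWB (480 Mbps figure above)",                480.0},
        {"Wi-Fi Direct, 802.11n (assumed 300 Mbps PHY)",  300.0},
        {"Wi-Fi Direct, 802.11ac (assumed 866 Mbps PHY)", 866.0},
    };
    const double fileBits = 8e9;  // 1 GB (decimal) = 8e9 bits

    for (const Link &l : links)
        std::printf("%-46s ~%5.1f s per GB (ideal)\n",
                    l.name, fileBits / (l.mbps * 1e6));
}
```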

ARCore on 96Boards HiKey board

So I want to run ARCore on a HiKey board; I am currently running Android 9.0 AOSP on it. Is it possible to run an AR application on the HiKey board with an external USB camera? Do I need a specific camera, or is there anything else I need in order to run AR applications on the HiKey board? Does the HiKey even support ARCore? If you could answer this, it would really help me.
Thank you.
ARCore has some minimum system requirements, including sensors, cameras, etc.
I don't believe there is an official set of minimum requirements published openly at this point, but there is a list of supported devices: https://developers.google.com/ar/discover/supported-devices
Google actually tests and certifies these devices, so I don't think you will find 'official' support for your setup - they say:
To certify each device, we check the quality of the camera, motion sensors, and the design architecture to ensure it performs as expected. Also, the device needs to have a powerful enough CPU that integrates with the hardware design to ensure good performance and effective real-time calculations.

Is it possible to associate a single wireless network card with multiple WiFi access points at a time?

Is it possible to associate a single wireless network interface controller (WNIC) with multiple wireless access points (WAPs) at a time? If not, why?
I've never heard of such a feature, so I assume it's technically impossible or fairly difficult and rarely implemented. Is it really that difficult/impossible to implement a driver providing such a feature? Is it a software or a hardware difficulty?
I assume that the TCP/IP protocol specifications don't limit us at all, because if I attach multiple WNICs to my computer, I can easily connect to multiple APs.
If it's a software difficulty, then what's the actual problem? Do the Linux/Windows kernels or the WNIC drivers limit it? Or maybe system libraries (like libc on GNU/Linux systems)?
If it's a hardware difficulty, what actually limits us? Antennas? Using a single radio frequency at a time? If so, then why can't we implement frequency hopping (like Kismet does)? Because of packets lost during the time spent on other channels? If so, then can we associate a WNIC with multiple routers working on the same channel (I know that channel overlapping is bad)?
Note: I'm not talking about dual-band routers. I assume that we consider the most common WNICs and APs, which both work on 2.4 GHz channels. If I have to put my question into an OS context, then I choose the GNU/Linux context.
Yes. The basic technique is that the client tells AP 'A' that it is going to sleep and then talks to AP 'B' while A is buffering frames for it.
Microsoft Research worked this out a while ago:
http://research.microsoft.com/en-us/um/redmond/projects/virtualwifi/
Many low-level drivers support Wi-Fi interface virtualization (e.g. the BRCM wl command has options which support this).
Apple's AirDrop and Multipeer Connectivity features for OS X and iOS use a similar technique, but instead of talking to a second AP they talk to a peer device.

New to iOS development - mapping app for agriculture

Back Story: I was approached to write an app, but iOS isn't something that I have any experience with.
Short Description: They want an app showing a coverage map for use in an airplane while spraying.
Long Description: The customer has some airplanes that he uses to spray chemicals on farm fields. They want a system to display a map of the area, a boundary of the field(s) that are to be sprayed on the current flight, and to record the flight path of the airplane. The user interface needs to be extremely clean and simple, because the user is going to be flying an airplane while using it. Dropbox will be used to transfer data between the airplane and the main office. Someone in the office will create a list of fields that need to be sprayed, and the boundary information for those fields is stored in a shapefile format. Those shapefiles need to be read by the app and displayed over satellite imagery. The airplane already has a high-accuracy GPS receiver on it that outputs NMEA position data at 10 Hz or faster. The customer also wants to attach a pressure sensor to the spray circuit to monitor whether it is dropping spray or not. That information needs to go to the app as well, to paint the screen where the plane has already been. This will help the operator eliminate overlap and skips.
As for getting the GPS position data and pressure data into an iPad, I'm guessing that 802.11 wireless is the simplest way, with that data being supplied in a TCP data stream. I can build a device that makes the data available as a TCP server on a 802.11 wireless network.
From there, I need an app on the iPad that connects to that server to get the data stream. That data gets parsed and turned into a map.
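For the parsing step, here is a minimal sketch assuming the device streams standard NMEA GGA sentences over the TCP connection (socket handling and checksum validation omitted). It is written in C++, which can be mixed with iOS code via Objective-C++; the example sentence and all names are illustrative.

```cpp
#include <cstdio>
#include <sstream>
#include <string>
#include <vector>

struct Fix { double latDeg, lonDeg, altM; int quality; };

// Convert NMEA "ddmm.mmmm" (plus hemisphere letter) to signed decimal degrees.
static double NmeaToDegrees(const std::string &v, char hemi) {
    double raw = std::stod(v);
    double deg = static_cast<int>(raw / 100);   // degrees part
    double min = raw - deg * 100.0;             // minutes part
    double result = deg + min / 60.0;
    return (hemi == 'S' || hemi == 'W') ? -result : result;
}

// Parse a GGA sentence: "$GPGGA,time,lat,N,lon,E,quality,sats,hdop,alt,M,...".
static bool ParseGga(const std::string &line, Fix &fix) {
    if (line.compare(0, 6, "$GPGGA") != 0 && line.compare(0, 6, "$GNGGA") != 0)
        return false;

    std::vector<std::string> f;
    std::stringstream ss(line);
    std::string tok;
    while (std::getline(ss, tok, ',')) f.push_back(tok);
    if (f.size() < 10 || f[2].empty() || f[3].empty() ||
        f[4].empty() || f[5].empty())
        return false;

    fix.latDeg  = NmeaToDegrees(f[2], f[3][0]);
    fix.lonDeg  = NmeaToDegrees(f[4], f[5][0]);
    fix.quality = std::stoi(f[6]);
    fix.altM    = std::stod(f[9]);
    return fix.quality > 0;   // quality 0 means no fix
}

int main() {
    Fix fix;
    if (ParseGga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47", fix))
        std::printf("lat %.6f lon %.6f alt %.1f m\n", fix.latDeg, fix.lonDeg, fix.altM);
}
```

At 10 Hz the volume is tiny, so parsing each line as it arrives off the TCP stream is more than fast enough; the harder work is the map drawing.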
I have experience developing apps for Windows in VB.NET, and two apps for Android. How much difference is there in development concepts with iOS?
I see that iOS uses OpenGL for the graphics, which is ideal for a map. Can I easily access terrain data like what is available in Google Earth?
Like dasdom, I will encourage you not to begin with such a complex project. Perhaps divide the several goals in your requirements and build tiny apps to get in tune with the iPhone SDK. You will also have to learn Objective-C, which implies that you are already good enough at C programming.
Study these topics: Objective-C, iOS memory management, sockets, MapKit, Quartz and Core Graphics, etc.
Or you can buy this excellent book by Aaron Hillegass:
"iPhone Programming: The Big Nerd Ranch Guide"
That book covers almost all the topics to introduce yourself to the iOS programming madness :)
