Does the 5G User Plane Function (UPF) support multiple network slices? And if so, how does it ensure QoS? - 5g

I am exploring 5G network slicing and read that the NSSF is the function that controls network slicing in a 5G infrastructure, and that a pair of SMF and UPF serves a slice.
But I am not clear on whether a UPF supports only one slice or can support several. What information is exchanged between the UPF and the SMF regarding network slicing?

Related

Weight transmission protocol in Federated Machine Learning

I am wondering: in federated machine learning, when we train our local models and want to update the cloud model, what protocol do we use to transmit those weights? Also, when we use TensorFlow Federated, how do we transmit the weights (using which library and protocol)?
Kind regards,
Most authors of federated computations using TensorFlow Federated are writing in the "TFF language". The specific protocol used during communication is determined by the platform running the computation and the instructions given in the algorithm.
For computation authors, TFF supports a few different instructions for the platform, which may result in different protocols. For example, looking at summation operations of CLIENTS values to a SERVER value:
tff.federated_sum does not indicate any particular protocol.
tff.federated_secure_sum, tff.federated_secure_sum_bitwidth, and tff.federated_secure_modular_sum all use a secure protocol such that the server cannot learn the value of an individual summand, only the aggregate sum (https://research.google/pubs/pub47246/ provides more details).
All of these can be composed with transport-layer security schemes to prevent third parties on the network from learning transmitted values; this depends on the execution platform's implementation. For example, TFF's own runtime uses gRPC, which supports a few different schemes: https://grpc.io/docs/guides/auth/.
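As a rough illustration of that distinction, here is a minimal sketch written against the TFF Federated Core API. The type constructors and the bitwidth value are illustrative and may differ between TFF versions; this is not taken from the original answer.

```python
import tensorflow as tf
import tensorflow_federated as tff

# A federated type: one int32 value held by each client.
client_ints = tff.FederatedType(tff.TensorType(tf.int32), tff.CLIENTS)

@tff.federated_computation(client_ints)
def plain_sum(values):
    # No particular wire protocol is implied; the runtime decides how
    # client values reach the server before they are summed.
    return tff.federated_sum(values)

@tff.federated_computation(client_ints)
def secure_sum(values):
    # Requests a secure aggregation protocol: the server only learns the
    # aggregate, never an individual client's summand.
    return tff.federated_secure_sum_bitwidth(values, bitwidth=16)
```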

large scale local network

For an application I need to extend WiFi range so that a Raspberry Pi mounted on a drone, away from the station, can connect to this WiFi network and stream video. What options are there for me to implement this network?
Suppose that the maximum distance between the drone (the RPi sending video) and the station (a router or something similar connected to a PC, receiving the video) is 1 km.
First of all, your project sounds amazing and I would like to see it working with my own eyes :)
And to answer your questions:
1 km is quite a distance for all kinds of routers used at home or access points hidden inside buildings. Your only hope here is to set up multiple outdoor sector antennas (like THIS beauty from MikroTik) with CAPsMAN, or to use Ubiquiti devices with seamless/fast roaming, to cover the space where the drone will fly.
With this setup you can easily transfer large streams over a large area. Still, the maximum distance will also be affected by the number of wireless networks in your vicinity.
Feel free to add more questions. We'll try our best to help you out.
And once done please share some videos, photos, etc with us :)

Install multiple Dual Edge TPUs on a motherboard

Is it possible to install multiple Dual Edge TPUs on a motherboard? I need to build a system that supports object detection on more than 100 video streams coming from cameras on a campus.
There is a PCIe adapter that can be used with one Dual Edge TPU. With this setup you could stick as many Dual Edge TPU cards onto your motherboard as you have PCIe slots. I would subscribe to the GitHub issue to get informed about additional options. The last time I talked with the creator of the adapter, he mentioned he is working on an adapter for a Raspberry Pi and also plans to build another adapter that can take more than one Edge TPU.
Edit: There is a waiting list that has an option for an adapter card with 4 TPU slots.
Short answer: yes, you can, as long as you have enough supported slots, power, and a good cooling system, but it's not that easy!
Right now it's hard to find any cheap device that supports a dual-lane PCIe M.2 E-key slot; most are 1x, so only one Edge TPU core will work. According to the docs, each TPU can draw up to 3 A of power and heat up above 100 °C. So for each core you need to think about power, cooling, and a supported slot, which adds a lot to the price of the whole solution. The Dual Edge TPU is currently priced at $39.
For more computing power there is the ASUS AI accelerator board, a PCIe 16x card with the same Edge TPU cores, 8 or 16 of them, giving 32 TOPS or 64 TOPS. The card includes cooling, and a PCIe slot is usually ready for high power consumption (as for GPUs). Of course you pay about 3x for each core, but remember that the per-core price above does not include the problematic M.2 slot and the cooling. I think it's still the best option.
You can also consider other devices with a better NPU, like something from the NVIDIA Jetson family, which are ready-to-use devices at several levels of price and power (up to 32 TOPS). You can also cluster such devices with Kubernetes and add as many as you need.
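To get a feel for sizing against the 100-stream requirement, here is a back-of-envelope sketch. All numbers below are hypothetical placeholders, not measured figures; substitute your own detector's latency and the frame rate you actually need to analyse.

```python
# Back-of-envelope capacity estimate for Edge TPU-based detection.
# Every value here is a placeholder assumption; benchmark your own model.
streams = 100            # camera streams to cover
fps_per_stream = 10      # analysed frames per second per stream (assumed)
inference_ms = 10        # per-frame detector latency on one Edge TPU (assumed)

total_fps = streams * fps_per_stream
capacity_per_tpu = 1000 / inference_ms              # frames/s one TPU sustains
tpus_needed = -(-total_fps // int(capacity_per_tpu))  # ceiling division

print(f"total load: {total_fps} frames/s")
print(f"one Edge TPU handles ~{capacity_per_tpu:.0f} frames/s")
print(f"estimated Edge TPUs needed: {tpus_needed}")
```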

Coral Edge TPU: can I use it for training?

I bought a Coral Edge TPU for my Raspberry Pi to use with TensorFlow Lite.
On the provider's homepage they say it's only for inference and limited transfer learning, and their examples use their own framework.
Does the TensorFlow core library support this device?
The Coral Edge TPU is designed for inference. It has no FP32 or FP16 arithmetic units, and the provider's library does not offer them either, so what you want to do is not possible. The TensorFlow core library also does not support this device.
The Edge TPU can only be used with TFLite models that have been compiled using the edgetpu_compiler. The job of the compiler is to map the model onto the TPU; otherwise all operations are executed on the CPU by default. The Edge TPU therefore will not work with TensorFlow core models, because they haven't been compiled.
For your model(s) to pass the compiler, they have to meet all the requirements listed here.
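For context, a minimal sketch of that workflow: export a fully integer-quantized TFLite model, then run it through the edgetpu_compiler. The saved-model path, input shape, and calibration data below are hypothetical placeholders.

```python
import numpy as np
import tensorflow as tf

def rep_data_gen():
    # Representative samples used to calibrate quantization ranges
    # (random data here; use real preprocessed inputs in practice).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# "my_saved_model" is a hypothetical path to your trained model.
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Then, on the host:
#   edgetpu_compiler model.tflite   ->  produces model_edgetpu.tflite
```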

Comma.ai self-driving car neural network using client/server architecture in TensorFlow, why?

In comma.ai's self-driving car software they use a client/server architecture. Two processes are started separately: server.py and train_steering_model.py.
server.py sends data to train_steering_model.py via HTTP and sockets.
Why do they use this technique? Isn't this a complicated way of sending data? Wouldn't it be easier to have train_steering_model.py load the data set by itself?
The document DriveSim.md in the repository links to a paper titled Learning a Driving Simulator. In the paper, they state:
Due to the problem complexity we decided to learn video prediction with separable networks.
They also mention the frame rate they used is 5 Hz.
While that sentence is the only one that addresses your question, and it isn't exactly crystal clear, let's break down the task in question:
Grab an image from a camera
Preprocess/downsample/normalize the image pixels
Pass the image through an autoencoder to extract representative feature vector
Pass the output of the autoencoder on to an RNN that will predict proper steering angle
The "problem complexity" refers to the fact that they're dealing with a long sequence of large images that are (as they say in the paper) "highly uncorrelated." There are lots of different tasks that are going on, so the network approach is more modular - in addition to allowing them to work in parallel, it also allows scaling up the components without getting bottlenecked by a single piece of hardware reaching its threshold computational abilities. (And just think: this is only the steering aspect. The Logs.md file lists other components of the vehicle to worry about that aren't addressed by this neural network - gas, brakes, blinkers, acceleration, etc.).
Now let's fast forward to the practical implementation in a self-driving vehicle. There will definitely be more than one neural network operating onboard the vehicle, and each will need to be limited in size - microcomputers or embedded hardware, with limited computational power. So, there's a natural ceiling to how much work one component can do.
Tying all of this together is the fact that cars already operate using a network architecture - a CAN bus is literally a computer network inside of a vehicle. So, this work simply plans to farm out pieces of an enormously complex task to a number of distributed components (which will be limited in capability) using a network that's already in place.
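As a rough illustration of the pattern (this is not comma.ai's actual code; the port, batch shape, and endpoint are made up), here is a minimal sketch where one process serves training batches over HTTP while a separate process consumes them, so data loading and training can run, and scale, independently:

```python
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

import numpy as np


class BatchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve one random "camera frame" batch per request (placeholder data).
        batch = np.random.rand(4, 16, 32, 3).round(3).tolist()
        body = json.dumps({"frames": batch}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def serve():
    # Data-server process: analogous in spirit to server.py.
    HTTPServer(("localhost", 8765), BatchHandler).serve_forever()


if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.5)  # give the server a moment to start

    # "Training" client: fetch a few batches from the data server.
    for step in range(3):
        with urllib.request.urlopen("http://localhost:8765") as resp:
            frames = np.array(json.loads(resp.read())["frames"])
        print(f"step {step}: got batch of shape {frames.shape}")
```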
