In 802.11p (DSRC), what is the maximum number of connected devices per square kilometer?

I was reading about C-V2X, which has a stated capacity of 1 million connected devices per square kilometer. I wonder what the corresponding number is for DSRC (dedicated short-range communication).
I googled for a while but was not able to find any explicit information about this. Kindly let me know if there is a recommended platform for this question if Stack Overflow is not the right place.
Thanks!


How do RAM and ROM size depend on the CPU? [closed]

I am very interested in how a CPU works. Say, in an 8-bit microcontroller (the 8051), how do RAM and ROM depend on the CPU? On this topic, I have some questions that are confusing me:
1) How are the RAM and ROM sizes defined (in the 8051 microcontroller)?
2) What does "8-bit controller" mean?
3) Does the ROM size depend on the CPU size? If not, how much ROM can I interface with an 8-bit controller?
I have searched for answers to these questions but have not found any solutions, so please help me.
Also, if there are any documents or books on microcontrollers, please suggest them.
Thanks,
Not much different from the other answer here...
There are no definitive definitions for most of this; the terms are often slang, engineer-speak, or marketing-speak. "8-bit" is a little firmer, with exceptions. It implies that the bulk of the processor's operations are at most 8 bits wide, so an 8-bit-wide ALU if you will. Some folks use the register size to define the bit width, some the instruction size, some the number of address bits on the CPU core, and so on. So is an x86 8-bit, 16-bit, 32-bit, 64-bit, 128-, 256-, 512-, or 1024-bit by those notions? It could be any of them, depending on whom you ask...
The 8051 is considered 8-bit based on its time frame and the fact that most things in it are 8 bits in size.
The 8051 has been heavily cloned, and, as mentioned, banking is sometimes used to expand the memory space, so how much memory it can access in total depends on the specific CPU/part/core you are using. ROM/RAM sizes are also specific to the part you are using: start with the datasheet from the part vendor and then, as needed, other documentation. The part/IP vendor is the definitive source for RAM/ROM information for the 8051 variant you are using at any particular time.
Microcontrollers in general, not just 8051s, tend to have more ROM/flash than RAM; it becomes obvious why once you start writing applications and see that you need more of one than the other.
As answered by Guna, the maximum address space is determined by the number of address bits on "the bus", but as mentioned above that can and will vary by implementation; some variants can address a megabyte, and some can only address some number of kilobytes.
Some CPU architectures are more controlled than others, either through documentation and versioning or through ownership and control of the IP (no clones that survive lawsuits, for example). Some therefore have a fixed address-space size with, so far, no exceptions. But then there are those like the 8051 that have been cloned so heavily (8051s are still widely in use; there is a good chance your computer contains at least one, and the servers along the internet and sites like this one certainly do) that both the original clocking scheme and the address-space options vary from implementation to implementation. So this is not a case where the CPU name/type/brand determines the maximum amount of RAM/ROM, and it will almost never determine the exact amount of each you have in a specific implementation, a specific chip, or a specific board.
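To make the banking idea concrete, here is a minimal C sketch of bank-switched access. The bank-select register address, the window location, and the window size are made up for illustration; the real values come from the datasheet of whatever part you are using.
```c
#include <stdint.h>

/* Hypothetical banked memory: a part whose bus only reaches 64 KB at a time
 * uses an external latch (the "bank register") to supply the upper address
 * bits, exposing one 32 KB bank at a time through a fixed window. */
#define BANK_SELECT_REG  (*(volatile uint8_t *)0xFF00u)  /* hypothetical latch address */
#define BANK_WINDOW_BASE ((volatile uint8_t *)0x8000u)   /* hypothetical 32 KB window  */
#define BANK_WINDOW_SIZE 0x8000u

uint8_t read_banked(uint32_t linear_addr)
{
    /* Select which 32 KB bank the window maps to... */
    BANK_SELECT_REG = (uint8_t)(linear_addr / BANK_WINDOW_SIZE);
    /* ...then read through the window at the offset within that bank. */
    return BANK_WINDOW_BASE[linear_addr % BANK_WINDOW_SIZE];
}
```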
It is very easy to find 8051 information; there are countless websites, more than there is space to link. Start with chip vendors still actively producing 8051 parts: Silicon Labs, Microchip, Cypress, and perhaps others.
For example, it took only a few seconds to find a datasheet for a specific part that states:
512 bytes RAM
8 kB (F990/1/6/7, F980/1/6/7), 4 kB (F982/3/8/9), or 2 kB (F985) Flash; in-system programmable
The price of a part is heavily influenced by its ROM/flash size and its RAM size, so a particular family of parts will have essentially the same design with different memory sizes depending on your needs. If you can keep the program small, you can buy a part that is, say, a dollar cheaper than another in the family but has the same footprint, so you can design for the larger one and then switch to the smaller one; or, vice versa, hope for the smaller one and, if your program turns out too big, switch to the bigger one and absorb the loss of margin.
Please find below answers to your questions, to the best of my knowledge.
1) The 8051 microcontroller's memory is divided into program memory and data memory. Program memory (ROM) is used for permanently storing the program being executed, while data memory (RAM) is used for temporarily storing intermediate results and variables.
2) An 8-bit microcontroller processes 8 bits of data at a time. The number of bits used by an MCU (sometimes called bit depth or data width) tells you the size of the registers (8 bits per register) and the largest values it can handle in a single operation (2^8 = 256 values, i.e. integers 0 through 255). Note that the data width does not by itself fix the address space: the 8051 has a 16-bit program counter and can address 64 kB of program memory, even though its internal RAM is only 128 or 256 bytes. 8-bit microcontrollers do have limited addressing, so some use paging, where the contents of a page register determine which onboard memory bank to use.
3) Yes. The maximum ROM size the CPU can address depends on the width of the address bus. For example, in the 8085 microprocessor the address bus is 16 bits wide, so it can address up to 2^16 = 65536 locations (each holding an 8-bit value); see the sketch below.
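To make the distinction between data width and address width concrete, here is a small, self-contained C illustration of the arithmetic behind points 2) and 3): an 8-bit value wraps around at 256, while 16 address lines still reach 65536 locations.
```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  reg  = 255;     /* 8-bit register: holds 0..255   */
    uint16_t addr = 0xFFFF;  /* 16-bit address bus: 0..65535   */

    reg++;                   /* unsigned 8-bit arithmetic wraps to 0 */
    printf("8-bit register after 255 + 1: %u\n", (unsigned)reg);
    printf("locations reachable with 16 address lines: %lu\n",
           (unsigned long)addr + 1u);   /* 65536 */
    return 0;
}
```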

Reaching clock regions using BUFIO and BUFG

I need to realize a source-synchronous receiver in a Virtex-6 that receives data and a clock from a high-speed ADC.
For the SERDES module I need two clocks, which are basically the incoming clock buffered by BUFIO and BUFR (as recommended). I hope my picture makes the situation clear.
[Figure: Clock distribution]
My problem is that I have some IOBs that cannot be reached by the BUFIO because they are in a different, non-adjacent clock region.
A friend recommended using the MMCM and connecting the output to a BUFG, which can reach all IOBs.
Is this a good idea? Can't I connect my LVDS clock buffer directly to a BUFG, without using an MMCM first?
My knowledge of FPGA architecture and clock regions is still very limited, so it would be nice if anybody has some good ideas, wise words, or has perhaps worked out a solution to a similar problem in the past.
It is quite common to use an MMCM for external clock inputs, if only to clean up the signal and to realize some other nice features (like 90/180/270-degree phase shifts for quad-data-rate sampling).
With the 7-series, Xilinx introduced the multi-region clock buffer (BUFMR), which might help you here. Xilinx has published a nice answer record on which clock buffer to use when: 7 Series FPGA Design Assistant - Details on using different clocking buffers
I think your friend's suggestion is correct.
Also check this application note for some suggestions: LVDS Source Synchronous 7:1 Serialization and Deserialization Using Clock Multiplication

Capabilities of Samsung Galaxy S2 Wi-Fi network card

I am doing research in which I am trying to find the distance between two Samsung Galaxy S2 phones over Wi-Fi by measuring RTT.
For that, in order to get the highest accuracy, I need to access the network PHY and see the exact time a packet leaves one phone and the exact time it arrives back, before it has been processed by the LAN card (again, I need very high accuracy).
Is this possible? Has anyone succeeded in accessing the LAN card's physical layer on the Samsung Galaxy S2?
BTW - my cell phones are "rooted".
Thanks in advance,
Tzach
It's not really feasible at the SW level. Several factors severely limit your ability to perform such measurements:
Data propagates at the speed of light; any timer you might have access to is no match for that resolution.
You need to do at least some processing to recognize that the data has arrived; similarly, there is a delay between when you timestamp the start of transmission and when the packet actually goes out over the air. These delays are much larger than the actual time of flight (see the rough numbers below).
Also see a related question: WI-FI 802.11 speed depending on distance
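For a sense of the numbers involved (rough figures, assuming free-space propagation), here is a small C calculation: resolving one metre of distance means resolving a few nanoseconds of round-trip time, far below the latency and jitter of anything measurable from application software.
```c
#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;              /* speed of light, m/s (approx.) */
    double distance_m = 1.0;             /* desired distance resolution   */
    double rtt_s = 2.0 * distance_m / c; /* signal travels out and back   */

    printf("RTT corresponding to 1 m of separation: %.2f ns\n", rtt_s * 1e9);
    return 0;                            /* prints roughly 6.67 ns        */
}
```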
Thanks a lot for your quick answer. I worked on this a lot at the SW level and, as you wrote, it is not feasible. My question is about trying this at the HW level ("talking to the PHY"). Do you know whether it can be done on the Samsung Galaxy S2?

Logical Address to Physical Address in Page Table

I really don't know where to start with the following question, and I have scoured the internet for hints.
If anyone could point me in the right direction or let me know a way of tackling this question, that would be great.
Explain clearly how a logical address is
translated to a physical address in a computer system that uses a two-level
page table with the following details:
Each address has 32 bits.
The lower-order 16 bits are used as the offset.
The higher-order 16 bits are divided into two parts of 8 bits each for
accessing the two-level page tables.
What is the total number of pages possible in the virtual memory of this
computer? What is the size of a page?
I understand the following but can't really go any further:
The logical address is generated by the CPU and divided into:
A page number, which is used as an index into a page table that contains
the base address of each page in physical memory.
The page offset, which is combined with the base address to form the
physical memory address that is sent to the memory unit.
All you need to read is the memory management chapter of Modern Operating Systems, 2nd or 3rd edition, by A. S. Tanenbaum. He explains two-level page tables, and I believe it will answer your questions.
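For the concrete numbers in this question: the 16-bit offset means each page is 2^16 bytes = 64 kB, and the remaining 16 bits of the address mean the virtual address space holds 2^16 = 65536 pages. Below is a minimal C sketch of the two-level lookup for exactly this bit split; the structure layout and names are illustrative, not taken from any particular system.
```c
#include <stdint.h>
#include <stdio.h>

/* Two-level translation for a 32-bit logical address split as 8 / 8 / 16:
 * bits 31..24 index the top-level table, bits 23..16 index a second-level
 * table, and bits 15..0 are the offset within a 64 kB page. */
#define INDEX_BITS     8u
#define OFFSET_BITS   16u
#define TABLE_ENTRIES (1u << INDEX_BITS)    /* 256 entries per table */
#define PAGE_SIZE     (1u << OFFSET_BITS)   /* 65536 bytes per page  */

typedef struct {
    uint32_t frame_base;  /* physical base address of the page           */
                          /* a real entry also holds present/dirty bits  */
} pte_t;

typedef struct {
    pte_t *second_level[TABLE_ENTRIES];     /* pointers to 2nd-level tables */
} top_table_t;

static uint32_t translate(const top_table_t *top, uint32_t logical)
{
    uint32_t top_index    = (logical >> 24) & 0xFFu;   /* high 8 bits */
    uint32_t second_index = (logical >> 16) & 0xFFu;   /* next 8 bits */
    uint32_t offset       =  logical        & 0xFFFFu; /* low 16 bits */

    const pte_t *entry = &top->second_level[top_index][second_index];
    return entry->frame_base + offset;                 /* physical address */
}

int main(void)
{
    static pte_t       second[TABLE_ENTRIES];
    static top_table_t top;

    top.second_level[0x12]  = second;       /* wire up one 2nd-level table */
    second[0x34].frame_base = 0x00400000u;  /* pretend physical frame      */

    uint32_t logical = 0x1234ABCDu;         /* top 0x12, second 0x34, offset 0xABCD */
    printf("physical = 0x%08X\n", (unsigned)translate(&top, logical)); /* 0x0040ABCD */
    return 0;
}
```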

Interpretation of results of "Energy Usage" instrument tool

I am running "energy usage" instrument over ios application using a device, I wanted to use it to check how much battery is getting drained because of the app I am testing. It shows "Energy usage level" which is giving me numbers like 13/20 , 12/20 , etc over different points of time.
How to interpret the results(I know, it gives relative energy usage on a scale of 0-20) in terms of :
1) How much battery is being drained by the app and by a particular operation?
2) Which operation/function is causing the drain?
3) What number is considered safe, and what should be considered high or too high?
4) Any other conclusions we can draw?
I would appreciate it if someone could answer the above questions or give me a link for reference. I have searched around and could not find answers to them; I only found how to obtain those relative energy usage numbers.
My 2 cents:
1) You can create a UIAutomation script that repeatedly runs some actions and collects the 'energy usage' for each action, so that you can say "a 5-minute call takes xxx battery", "navigating for 5 minutes takes xxxx battery", and so on.
2) As mentioned above, you can collect data for each action.
3) I would say, try to find similar apps, benchmark them, and compare their numbers with yours.
4) Try different devices and iOS versions; you can then probably tell customers which device/iOS version is the minimum required or recommended.
The power consumption numbers that Energy Diagnostics reports (we call them "electricities" at my office) are fairly unreliable. Powergremlin gives you some insight into the actual numbers that make up said "electricity" units. That won't answer parts 2-4 of your question, but it does provide more detail and accuracy than Energy Diagnostics.
The battery consumption scale for an iOS app has a maximum of 20 points.
If your app is running at 1/20, it means your app would take 20 hours to drain the battery;
if it is running at 20/20, it would take 1 hour to drain the full battery.
