GPIO ports on STM32F4 Discovery?

I've seen in the STM32F4 programming manual that the GPIO ports go from A to K, but some slides I've read mention only 5 ports (A to E). What do the other ports (F to K) do? Are they dedicated to something else?
Thanks for the explanation.

The number of ports depends on the pin count of the specific STM32F4 model you're using. Each port has at most 16 pins, so a model with, say, 64 pins will have fewer ports (around 4 to 5) than a model with 176 pins (10 or possibly 11 ports). The datasheet indicates which peripherals are tied to which specific pins and ports, but in principle there are no "special" ports.
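For example, on packages that expose it, a higher-letter port such as GPIOF is driven exactly like GPIOA. A minimal CMSIS-style sketch (register and bit names from the standard STM32F4 device header; untested, assumes a part that actually has GPIOF):

    #include "stm32f4xx.h"   /* device header for a part that exposes GPIOF */

    int main(void)
    {
        RCC->AHB1ENR |= RCC_AHB1ENR_GPIOFEN;   /* clock the port, same as for GPIOA..E */
        GPIOF->MODER |= (1U << (0 * 2));       /* PF0 as general-purpose output (assumes reset state) */
        GPIOF->ODR   |= (1U << 0);             /* drive PF0 high */
        for (;;) { }                           /* spin */
    }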

Related

Page Table Entry (PTE) in multi-level paging

I'm reading about virtual memory and I have some doubts regarding multi-level paging.
I saw that in an N-level page table, the upper N-1 levels store indexes into the next-level tables.
This is my point:
Assume the case of RISC-V Sv39 (Section 4.4 of the RISC-V privileged architecture manual). The physical page number of the root page table is held in SATP.PPN.
Virtual address is composed as follows:
<38-30> VPN[2] (9 bits)
<29-21> VPN[1] (9 bits)
<20-12> VPN[0] (9 bits)
<11-0> offset (12 bits)
After a TLB-miss, a page walk starts:
1. index_to_second_level_PT = SATP.PPN + VPN[2]
2. index_to_third_level_PT = index_to_second_level_PT + VPN[1]
3. physical_address = index_to_third_level_PT + VPN[0]
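Spelled out a little more concretely in C, this is what I have in mind (my reading of the spec; read_phys64 is a made-up helper standing in for "load 8 bytes from this physical address", and permission/validity checks are omitted):

    #include <stdint.h>

    #define PAGESIZE 4096ULL
    #define PTESIZE  8ULL

    extern uint64_t read_phys64(uint64_t paddr);   /* hypothetical physical-memory read */

    uint64_t sv39_translate(uint64_t satp_ppn, uint64_t va)
    {
        uint64_t vpn[3] = { (va >> 12) & 0x1FF,    /* VPN[0] */
                            (va >> 21) & 0x1FF,    /* VPN[1] */
                            (va >> 30) & 0x1FF };  /* VPN[2] */
        uint64_t a = satp_ppn * PAGESIZE;          /* physical address of the root table */

        for (int i = 2; i >= 0; i--) {
            uint64_t pte = read_phys64(a + vpn[i] * PTESIZE);  /* steps 1/2/3: physical accesses */
            uint64_t ppn = (pte >> 10) & 0xFFFFFFFFFFFULL;     /* PPN[2..0] concatenated */
            if (pte & 0xE)                                     /* R/W/X set: leaf PTE */
                return ppn * PAGESIZE + (va & 0xFFF);
            a = ppn * PAGESIZE;                                /* next-level table, again physical */
        }
        return 0;                                              /* would raise a page fault */
    }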
My question is:
Are all the indexes (i.e., the values computed in steps 1, 2, and 3) physical addresses? This makes sense to me, since once we have the physical address of the next-level table we can fetch it from physical memory.
Moreover, still in the RISC-V manual, physical addresses and PTEs are also formatted with multiple fields (PPN[0], PPN[1], and PPN[2]). I understand why the virtual address is partitioned into three 9-bit fields to index the multi-level page table, but I don't understand why the physical address is partitioned in the same way.
Thanks in advance to anyone who responds.

How is the NVIDIA RTX A6000 getting 300 watts via a single 8-pin?

According to:
https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/proviz-print-nvidia-rtx-a6000-datasheet-us-nvidia-1454980-r9-web%20(1).pdf
the NVIDIA RTX A6000 uses up to 300 watts of power. It has a single 8-pin CPU power connector and of course it connects via PCIe. In spite of what was once claimed, a PCIe 4.0 slot, to the best of my knowledge, still supplies only 75 W. With the 8-pin supplying at most another 150 W, my math says 225 W. Other high-powered cards use two 8-pin connectors for this reason, I believed. What's going on with this $5k card?
njuffa's comment had the correct answer, but I cannot mark a comment as the accepted answer.
An 8-pin CPU (EPS) connector is different from an 8-pin PCIe auxiliary power connector.
https://www.pny.com/file%20library/company/support/product%20brochures/nvidia%20quadro/quadro-power-guidelines.pdf
In their power guidelines they show how to make an adapter from two 8-pin auxiliary connectors to one 8-pin CPU connector.

Unable to access Zynq AXI BRAM from Linux

In my project, data is written to a BRAM (generated through the Block Ram IP generator) from a custom IP. Then, I use an AXI BRAM controller to interface the memory with the AXI bus and make it accessible to the Linux running on the ARM.
The base address for the controller is 0x4200_0000 with a range of 8K (up to 0x4200_1FFF). The memory has 8K positions too, each with a width of 32 bits.
To make sure the access problem isn't caused by the data generated by my custom IP, I initialize the memory by simply numbering each of the 8K addresses (so address 1 contains 0x01, etc., up to 0x1FFF).
The problem comes when attempting to read those values from Linux. Using devmem 0x42000001 on the command line returns 0x04000000 and the following:
Alignment trap: devmem (1257) PC=0x0001ca94 Instr=0xe7902005 Address=0xb6f9d2fd FSR 0x011
Which seems to indicate Linux expects each address value to map to a byte, not a 32-bit word. The alignment traps keep occurring until devmem 0x42000004, which returns 0x00000004, the correct value for the fourth location, but the values at addresses that are not multiples of 4 can't be accessed. devmem 0x42000002 returns 0x00040000 (notice the 0x04 shifted) along with the alignment trap. I found the same problem with my original Python script, which uses mmap on /dev/mem: I have to read every fourth address value, since each individual address seems to map to a byte, but that means I only get one out of every four values.
Any ideas on how to properly interface with the AXI controller and the memory behind it?
Edit to clarify the issue I have (picture of the setup omitted).
Which seems to indicate Linux is expecting each address value to map to a byte
That is the standard mapping on all modern CPUs. When you use AXI with a data bus wider than 8 bits, the bottom address bits select a byte within the AXI data word. Go to the ARM website and download the AXI specification.
The base address for the controller is 0x4200_0000 with a range of 8K (up to 0x4200_1FFF). The memory has 8K positions too, each with a width of 32 bits.
That is wrong: 8K words of 32 bits each occupy an address range of 8K * 4 bytes = 0x0000 .. 0x7FFF.
I suggest you re-build the BRAM but use different parameters for the Block Ram IP generator.
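On the software side that means word i lives at byte offset 4*i from the base. A minimal /dev/mem sketch (untested; it reuses the 0x4200_0000 base from your post and assumes the controller window covers the full 8K * 4 bytes):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const off_t  base = 0x42000000;
        const size_t span = 8192 * 4;              /* 8K words of 32 bits = 32 KiB */

        int fd = open("/dev/mem", O_RDONLY | O_SYNC);
        if (fd < 0) return 1;

        volatile uint32_t *bram =
            (volatile uint32_t *)mmap(NULL, span, PROT_READ, MAP_SHARED, fd, base);
        if (bram == MAP_FAILED) return 1;

        for (size_t i = 0; i < 8; i++)             /* word i sits at byte offset 4*i */
            printf("word %zu = 0x%08x\n", i, (unsigned int)bram[i]);

        munmap((void *)bram, span);
        close(fd);
        return 0;
    }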
I changed the RAM so that the port exposed to the AXI controller operates with 8 bits. .....
Your Zynq AXI bus is probably 32 bits wide, so a memory connected in the standard way should be 32 bits wide, with byte-write enables.
If you connect an 8-bit memory to a 32-bit bus and do not adapt the address, or adapt it wrongly, you may lose 3 out of 4 bytes.
What is not clear to me is which behavior you exactly want.
1. A standard 8K x 32-bit memory with byte access, or
2. An 8K x 8-bit memory where you have one byte at 0x0, 0x4, 0x8, etc.
In case 2 you should use the AXI address differently: shift the address bits up two positions so that each byte occupies 4 address locations.
You also have to decide where to place the byte:
LS position only: tie the MS 24 bits to zero
MS position only: tie the LS 24 bits to zero
Repeated over all 4 locations: replicate the byte four times over the 32 bits.
Whatever else you fancy. (It's your hardware, you can do what you want.)
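For example, in case 2 with the byte kept in the LS position, software reads back one useful byte per 32-bit word slot, roughly like this (just a sketch, names mine):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t read_case2_byte(volatile uint32_t *bram_base, size_t i)
    {
        /* Byte i sits in the low 8 bits of the word at byte offset 4*i. */
        return (uint8_t)(bram_base[i] & 0xFFu);
    }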
Beware that for any module you connect to an AXI bus, the preceding AXI splitters should be set up to cover the correct address range. But I assume you have none of those.

More GPIO pins: use expansion boards, or 2 Raspberry Pis?

I'm planning to start a project using an RPi3 and Android Things. I need 50 GPIO pins (20 inputs, 30 outputs), so I have 2 options: use an expansion board, or use 2 RPis. I have a question for each option:
If I use an expansion board: will it be possible to use it with Android Things?
If I use 2 RPis: what's the best way to communicate between them? (For example: a signal received on a GPIO of RPi A may trigger an output on RPi B.)
EDIT: Here I link a post that describes 3 ways to extend RPi's GPIO ports -> https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=86738#p611850 It may be useful
EDIT 2: I will use two MCP23017s (16-bit port expanders), so I will get 32 pins using only the two I2C pins. More info: http://ww1.microchip.com/downloads/en/DeviceDoc/21952b.pdf
I'm not familiar with Android Things, but with some electronics work you will be able to achieve your goal.
This 4-to-16 line decoder only needs 4 GPIO pins to control 16 outputs.
http://www.nxp.com/documents/data_sheet/74HC_HCT154.pdf
The reverse process is also possible: you may use a 16-line "demultiplexer" to encode 16 bits of logic information onto 4 GPIO inputs of your Raspberry Pi.
http://www.ti.com/product/CD54HC4514
(The components I selected are the first ones I stumbled across; they may not be the best products for your specific application. I used the 74HC238 on a project before and it worked like a charm.)
You could consider the PCF8574, which is an I2C 8-bit port expander. You can have up to 8 of them on a single I2C bus, giving you up to 64 GPIO pins.
Here is a driver for the PCF8574 for Android Things:
https://github.com/davemckelvie/things-drivers
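For reference, talking to a PCF8574 boils down to a single-byte I2C write. A rough sketch from plain Linux via i2c-dev (not Android Things; it assumes the expander's A0..A2 pins are tied low, giving address 0x20 on /dev/i2c-1):

    #include <fcntl.h>
    #include <linux/i2c-dev.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) return 1;
        if (ioctl(fd, I2C_SLAVE, 0x20) < 0) return 1;  /* PCF8574 with A0..A2 low */

        uint8_t outputs = 0x0F;                        /* P0..P3 high, P4..P7 driven low */
        if (write(fd, &outputs, 1) != 1) return 1;     /* one byte sets all 8 pins */

        close(fd);
        return 0;
    }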

If the CS register of an 8086 has the value 0xA000, what is the range of the physical addresses of the associated segment?

As the title already says, I want to know what the range of the physical addresses of the associated segment is if the CS register of an 8086 has the value 0xA000.
Shift left 4 bits.
0xA0000 + whatever 16-bit offset is used together with CS.
Since the CPU registers and other values are 16 bits wide, you get 0xAxxxx, where xxxx is a 16-bit value. That is, the segment register selects which 64K window can be addressed. By windowing like that, you get a 20-bit physical address space.
See this old post for more info. Once upon a time that was common teaching, but I suppose now it's harder to find. Maybe you can find some old books via Amazon Marketplace.
After a little research I found that this is the correct answer to the question:
0xA0000 + 0xFFFF = 0xAFFFF (highest physical address of the segment)
0xA0000 + 0x0000 = 0xA0000 (lowest physical address of the segment)
So the range of the physical addresses is 0xA0000 - 0xAFFFF.
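Written out as a small illustrative snippet (plain C run on a host, not 8086 code), the calculation is:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t cs = 0xA000;                        /* segment register value */
        uint32_t lo = (cs << 4) + 0x0000;            /* lowest offset  -> 0xA0000 */
        uint32_t hi = (cs << 4) + 0xFFFF;            /* highest offset -> 0xAFFFF */
        printf("segment 0x%04X spans 0x%05X .. 0x%05X\n", cs, lo, hi);
        return 0;
    }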
