How to add I2C devices on the Beaglebone Black using device tree overlays?

Why should I read this?
If you have a Beaglebone Black (BBB) and you want to wire up your own devices to it (not capes), you might already have heard about the device tree. In my case I wanted to connect an RTC device to the I2C bus on the BBB. There is lots of information scattered around the web, and this article is meant to be a summary of what I found as well as a guide to get it done.
So I'll give a full example of activating the I2C bus on the BBB as well as hooking up a DS1308 RTC chip using the device drivers included in the kernel. Sounds interesting?
Then read on, and please leave comments if anything is not clear. If you are in a bit of a hurry, you can also just grab the device tree overlay code on GitHub and fly away.
First things first.
I'm using Arch Linux ARM on my BBB, mainly because Arch Linux is awesome and I am possibly too stupid to use debianoid distributions.
Here's a screenfetch of the system..
As you might notice, the kernel version is already above that 3.x stuff. What you cannot see in the screenfetch is that the kernel supports device tree overlays using the Capemgr utility.
What is the device tree?
I'll just do it quick, you can find deeper knowledge here, here, here and here.
The device tree is a structure describing the underlying hardware on your platform. It's heavily used on embedded devices, since SoCs and the like don't have buses like PCI where devices can be discovered; the devices have to be defined statically and are attached to a "platform bus" to give a handle to the device drivers shipped with the kernel.
Before the device tree was introduced to Linux, all that work had to be done with platform-specific C header files and custom implementations, which then all had to be merged into the mainline kernel. As you can imagine, that was an exhausting task, and it eventually led to the famous Linus Torvalds rant. Here you go with still some more device tree background.
Yeah nice, but how does it work?
To describe the device tree we use .dts (device tree source) files, which are human readable and get compiled by the device tree compiler (dtc) into device tree blobs (.dtb), a binary format. When the system boots, the bootloader (e.g. U-Boot) hands that blob over to the kernel. The kernel parses it and creates all the devices as given by the device tree.
If you don't believe me, use the device tree compiler to peek into the device tree your BBB is using right now.
If you haven't installed it yet, get the appropriate package..
pacman -Sy dtc-overlay
dtc -f -I fs /proc/device-tree | less
The pipe into the pager less is recommended because the command generates a lot of output. The result should look something like this..
All the parts of your device tree can also be inspected in the kernel source, but since there is also an include mechanism, the information is split among several files in
<kernel-source>/arch/arm/boot/dts/..
Some relevant files are:
am335x-bone-common.dtsi
am335x-boneblack.dts
am33xx.dtsi
Note: The .dtsi files are equivalent to .h files in C or C++ because they get included (hence the 'i' at the end) by .dts files.
They describe, respectively, common devices on the Beaglebone platform, devices specific to the Beaglebone Black, and devices related to the processor.
You mentioned overlays, what's that?
Good question, I see you're still with me. As I said before, the device tree blob is parsed when the kernel boots, so when your system is up and running, the whole magic is already over. On a platform like the BBB with a whole bunch of expansion boards (capes), this would require you to recompile the device tree every time you want to use another cape.
Therefore you have the overlay mechanism that allows you to add or modify devices in your device tree AT RUNTIME! Amazing.
Note: to be able to compile device tree overlays make sure to install the appropriate package like above (dtc-overlay)
And how am I going to use all this?
I'll give you an example. Since the BBB has no real-time clock (RTC), which would be useful to generate timestamps for measurements etc., we are going to fix that.
We'll use a ds1307 real-time clock chip (in fact I have a ds1308 RTC, but the driver is compatible) and communicate with it over the I2C1 bus on the BBB. By default that bus is disabled on the BBB, as you can see from the device tree sources..
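For reference, the i2c1 node in am33xx.dtsi looks roughly like this (a sketch; details vary between kernel versions):

    i2c1: i2c@4802a000 {
        compatible = "ti,omap4-i2c";
        #address-cells = <1>;
        #size-cells = <0>;
        ti,hwmods = "i2c2";
        reg = <0x4802a000 0x1000>;
        interrupts = <71>;
        status = "disabled";
    };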
The important information in that snippet is:
a node named 'i2c1' is defined
it is declared compatible with the omap4-i2c driver
the device gets assigned a memory-mapped address (0x4802a000) and an address range (0x1000) according to the processor's reference manual (page 181)
the device's status is "disabled"
Now we'll create an overlay to configure the GPIO pins for the i2c1 bus and activate that bus. Afterwards we'll add the RTC device to the i2c1 bus so that the appropriate driver is loaded automatically and the RTC device gets created in /dev.
The GPIO pins on the P8 and P9 headers of the BBB have multiple functionalities which are muxed together, so we have to adjust the pinmux settings to use them for I2C communication. As you can see in this table, for the I2C1 bus we'll have to use header pins 17 and 18 in mux mode 2. To get more information on GPIO handling on the BBB, look here.
/dts-v1/;
/plugin/;

/ { /* this is our device tree overlay root node */

    compatible = "ti,beaglebone", "ti,beaglebone-black";
    part-number = "BBB-I2C1"; // you can choose any name here but it should be memorable
    version = "00A0";

    fragment@0 {
        target = <&am33xx_pinmux>; // a reference to an already defined node in the device tree; that node is overlaid with our modification
        __overlay__ {
            i2c1_pins: pinmux_i2c1_pins {
                pinctrl-single,pins = <
                    0x158 0x72 /* spi0_d1.i2c1_sda */
                    0x15C 0x72 /* spi0_cs0.i2c1_scl */
                >;
            };
        };
    };
}; /* root node end */
OMG what just happened?
At first glance the overlay syntax looks quite weird, but it's basically made of so-called fragments that target an already existing device node and modify that node (and its children).
In this case we target the am33xx_pinmux device node that is defined in the processor's device tree (am33xx.dtsi). Within that node we add a new child node called pinmux_i2c1_pins, which did not exist before (take a look at am335x-bone-common.dtsi to verify), with the label i2c1_pins.
The next part is a bit more complex; if you are interested, read this. Every GPIO pin is configured by a single register with several bits controlling its behavior, and all these registers are managed by the pinctrl-single driver. To set up a specific pin, just use its address offset from the base address (you will find that in the P9 header table above) as the first parameter and its pin configuration as the second..
I borrowed this overview from Derek Molloy to explain the pin mode. Since 0x72 is equivalent to 01110010b, we have both pins configured as inputs, with the pullup resistor enabled and slew control set to slow, in mux mode 2.
And mux mode 2 for these pins means: pin 17 on header P9 is the clock line (SCL) and pin 18 on header P9 is the data line (SDA).
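For reference, this is how 0x72 decodes against the AM335x pad control register bits:

    bit 6    = 1   -> slew control: slow
    bit 5    = 1   -> receiver active (the pin can be used as an input)
    bit 4    = 1   -> pullup selected
    bit 3    = 0   -> pullup/pulldown enabled
    bits 2-0 = 010 -> mux mode 2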
But we still have to enable I2C1?
That's absolutely correct, so let's extend our overlay as follows..
/dts-v1/;
/plugin/;

/ { /* this is our device tree overlay root node */

    compatible = "ti,beaglebone", "ti,beaglebone-black";
    part-number = "BBB-I2C1"; // you can choose any name here but it should be memorable
    version = "00A0";

    fragment@0 {
        target = <&am33xx_pinmux>; // a reference to an already defined node in the device tree; that node is overlaid with our modification
        __overlay__ {
            i2c1_pins: pinmux_i2c1_pins {
                pinctrl-single,pins = <
                    0x158 0x72 /* spi0_d1.i2c1_sda */
                    0x15C 0x72 /* spi0_cs0.i2c1_scl */
                >;
            };
        };
    };

    fragment@1 {
        target = <&i2c1>;
        __overlay__ {
            pinctrl-names = "default"; /* without this the pin configuration below is not applied */
            pinctrl-0 = <&i2c1_pins>;
            clock-frequency = <100000>;
            status = "okay";

            /* children of the bus are addressed with one cell and no size */
            #address-cells = <1>;
            #size-cells = <0>;

            rtc: rtc@68 { /* the real time clock defined as a child of the i2c1 bus */
                compatible = "dallas,ds1307";
                reg = <0x68>;
            };
        };
    };
}; /* root node end */
In the code above we added a new fragment that targets the i2c1 device node and tells it to use our previously defined pin configuration. We set an I2C clock frequency of 100 kHz and activate the device.
Furthermore, the RTC was added as a child of the i2c1 node. The important information for the kernel is the compatible string, naming the driver to use (ds1307), and the device's address on the I2C bus (0x68). The RTC's I2C address can be obtained from its datasheet.
And how do I get that code into the kernel?
First, the device tree source has to be compiled. Use the dtc compiler with the following call..
dtc -O dtb -o <filename>-00A0.dtbo -b 0 -@ <filename>.dts
Caution! The filename must be a concatenation of the name you desire plus the version tag as seen above (-00A0), otherwise you'll have a hard time.
The resulting .dtbo file should be copied into /lib/firmware. I really have no idea where that "-00A0" naming convention comes from, but there are other files in the firmware directory using it as well.
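For example, assuming the source file is called bbb-i2c1.dts (the name used later in this article), the two steps are:

    dtc -O dtb -o bbb-i2c1-00A0.dtbo -b 0 -@ bbb-i2c1.dts
    cp bbb-i2c1-00A0.dtbo /lib/firmware/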
From now on you can load your overlay dynamically using Capemgr. To do so, move into /sys/devices/platform/bone_capemgr/ and then execute..
echo <filename> > slots
Capemgr will then look for your .dtbo file in the firmware directory and load it if possible. By looking into the slots file you can see whether the procedure was successful. It should look something like this..
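The exact listing depends on your board, but with the overlay loaded, cat slots should contain an entry roughly like this:

     0: PF----  -1
     1: PF----  -1
     2: PF----  -1
     3: PF----  -1
     4: P-O-L-   0 Override Board Name,00A0,Override Manuf,bbb-i2c1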
Now examine the device tree used by the Beaglebone again.
dtc -f -I fs /proc/device-tree | less
You'll find all the entries from the overlay..
Furthermore, there should now be a new I2C device (/dev/i2c-1) and a new RTC device (/dev/rtc1) in your filesystem.
To have a look at your I2C buses, install the package i2c-tools and use..
i2cdetect -r 1
Output should be something like this..
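With the ds1307 driver bound to address 0x68, a representative output looks roughly like this (UU means the address is claimed by a kernel driver):

         0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
    00:          -- -- -- -- -- -- -- -- -- -- -- -- --
    10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    60: -- -- -- -- -- -- -- -- UU -- -- -- -- -- -- --
    70: -- -- -- -- -- -- -- --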
As you can see address 0x68 is occupied by a device.
To query your rtc use..
hwclock -r -f /dev/rtc1
But that's not all, or is it?
No, there is one more option: loading device tree overlays at boot. AWESOME!
To do so, open /boot/uEnv.txt and add bone_capemgr.enable_partno=<filename> to the optargs statement. That's what it looks like on my BBB..
optargs=coherent_pool=1M bone_capemgr.enable_partno=bbb-i2c1
Confusingly, it is the filename that is used in optargs, not the part-number tag defined in the device tree overlay.
You can get my example code alongside a useful Makefile on GitHub if you like.
Sorry for the long post.

This is very helpful and valuable information. I wrote an I2C kernel driver which I can load dynamically to talk to a custom chip at address 0x77. I have been successful in communicating with the chip in the past by instantiating the device manually as follows: echo act2_chip 0x77 > /sys/bus/i2c/devices/i2c-1/new_device.
After the device is instantiated, I can see it using i2cdetect tools and my loadable kernel driver can communicate with the chip.
Now I am trying to instantiate the device using the device tree method. So, following your lead, I changed some parameters in your overlay file like below:
    fragment@1 {
        target = <&i2c1>;
        __overlay__ {
            pinctrl-0 = <&i2c1_pins>;
            clock-frequency = <100000>;
            status = "okay";

            act2_chip: act2_chip@77 { /* the custom chip defined as a child of the i2c1 bus */
                compatible = "xx,act2_chip";
                #address-cells = <1>;
                #size-cells = <0>;
                reg = <0x77>;
            };
        };
    };
I connected the chip at pins 17 and 18 for scl and sda. Here is the dmesg output I get after echo > slots :
While inserting the driver into the kernel, I do see the probe function being called; this means the driver is able to see the device, as far as I can tell.
And when I try to write to the kernel driver, I get the following message :
omap_i2c 4802a000.i2c: controller timed out

Related

How do I find if my Wi-Fi card supports MIMO (Multiple Input Multiple Output)?

I want to know if my Wi-Fi card supports MIMO (Multiple Input Multiple Output), and specifically how to find the number of antennas.
Is there a command I can run to find out?
If you're using Windows, type in the command line: netsh wlan show all | find /I "MIMO".
If you see MU-MIMO : Supported then it means yes.
I'm not sure how to do this in Linux aside from checking the network card model and looking at the technical specification. That will give you a 100% correct answer.
But you can try this, though: iw phy | grep index; you will see something like this:
HT TX/RX MCS rate indexes supported: 0-15
If you see an index above 7, that means your card supports MIMO. Why is that?
MIMO requires at least two antennas to work (meaning two spatial streams of data), and this table explains the index/stream relation.

STM32 Current Flash Vector Address

I'm working on a dual-OS system with an STM32F103. I have two separate programs that are programmed at different flash locations. If both of the programs are the same, the only way to know which of them is running is by its start vector address.
But how can I read the current program's start vector address on the STM32?
After reading the comments, it sounds like what you have/want is a bootloader. If your goal here is to have two different applications, one to do your main processing and real time handling and the other to just program new firmware, then you want to make a bootloader in your default boot flash space.
Bootloaders fundamentally do a few things, everything else is extra.
Check itself using some type of data integrity check, like a CRC.
Check the application.
Jump to the application.
Bootloaders will also program applications in the app space and verify they are programmed correctly before jumping as well. Colin gave some good advice about appending a CRC to the hex file before it is programmed in flash space to verify the applications.
There are a few things to look out for. The first would be the linker script, and this is extremely important. A linker script will be used to map input objects to output objects and then determine, based upon that script, what memory space they go into. For both of your applications, you need to create a memory map of how you want both programs to sit inside of the flash space. From this point, you can then make linker scripts for both programs so that a hex file can be generated within the parameters of what you deem acceptable flash space for the program. Each project you have will have its own linker script. An example would look something like this:
LR_IROM1 0x08000000 0x00010000 {    ; load region size_region
  ER_IROM1 0x08000000 0x00010000 {  ; load address = execution address
    *.o (RESET, +First)
    *(InRoot$$Sections)
    .ANY (+RO)
  }
  RW_IRAM1 0x20000000 0x00018000 {  ; RW data
    .ANY (+RW +ZI)
  }
}
This will give RAM for the application to use as well as a starting point for the application.
After that, you can start the bootloader and give it information about where the application space lies for jumping and programming. Once again this is all determined by you from your memory map and both applications' linker scripts. You are going to need to add a separate entry inside of the linker for your CRC and length for a comparison of the calculated versus stored as well. Whatever tool you use to append the CRC to the hex file and have it programmed to flash space, remember to note the location and make it known to the linker script so you can reference those addresses to check integrity later.
After you check everything and it is determined that it is okay to go to the application, you can use some ARM assembly to jump to the starting application address. Before jumping, make sure to disable all peripherals and interrupts that were enabled in the bootloader. As Colin mentioned, these will share RAM, so it is important you de-initialize all used, otherwise, you'll end up with a hard fault.
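To make that jump concrete, here is a minimal sketch assuming CMSIS headers; APP_ADDRESS is a hypothetical application base, take the real value from your own memory map:

    /* Minimal bootloader-to-application jump (sketch, assumes CMSIS). */
    #include "stm32f1xx.h"               /* device header for an STM32F103 */

    #define APP_ADDRESS 0x08008000UL     /* hypothetical application base */

    static void jump_to_application(void)
    {
        uint32_t app_sp    = *(volatile uint32_t *)APP_ADDRESS;        /* word 0: initial stack pointer */
        uint32_t app_reset = *(volatile uint32_t *)(APP_ADDRESS + 4U); /* word 1: reset handler address */

        __disable_irq();                 /* no interrupts while tearing down */
        /* ... de-initialize all peripherals the bootloader enabled ... */

        SCB->VTOR = APP_ADDRESS;         /* vector table now lies at the app base */
        __set_MSP(app_sp);               /* load the application's stack pointer */
        ((void (*)(void))app_reset)();   /* jump to the application's reset handler */
    }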
At this point, the program used another hex file laid out by a linker script, so it should begin executing as planned, as long as you have the correct vector table offset, which gets into your question fully.
As far as your question on the "flash vector address": I think what you really mean is your interrupt vector table address. An interrupt vector table is a data structure in memory that maps interrupt requests to the addresses of interrupt handlers. This is where the PC register grabs the next instruction address upon a hardware interrupt trigger, for example. You can see this by keeping track of the ARM pipeline in a few lines of assembly code. Each entry of this table is a handler's address. This offset must be aligned with your application, otherwise you will never get into the main function, and the program will sit in the application space with nothing to do, since all the handlers' addresses are unknown. This is what SCB->VTOR is for: it is the vector interrupt table offset address register. In this case, there are a few things you can do. Luckily, these are hard-coded inside of the STM-generated files, in the file "system_stm32(xx)xx.c" (xx is your microcontroller variant). There is a define for something called VECT_TAB_OFFSET, which is the offset of the vector table in the memory map and is assigned to the SCB->VTOR register. Your interrupt vector table will always lie at the starting address of your main application, so for the bootloader the offset can be 0x00, but for the application it will be the starting address of the application space minus the first addressable flash address of the microcontroller.
/************************* Miscellaneous Configuration ************************/
/*!< Uncomment the following line if you need to relocate your vector Table in
Internal SRAM. */
/* #define VECT_TAB_SRAM */
#define VECT_TAB_OFFSET 0x00 /*!< Vector Table base offset field.
This value must be a multiple of 0x200. */
/******************************************************************************/
Make sure you understand what is expected on the micro's side, using the STM documentation, before programming things. Vector tables on this chip can only be placed at multiples of 0x200. But to answer your question: this address is determined by your memory map, and eventually you will have a hard-coded reference to it as a define. You can figure it out from there.
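Sticking with the hypothetical application base of 0x08008000 from the sketch above, the application's copy of the system file would then carry:

    #define VECT_TAB_OFFSET 0x00008000  /* 0x08008000 - 0x08000000, a multiple of 0x200 */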
Hope this helps and good luck to you on your application.

Trying to understand the load memory address (LMA) and the binary file offset in an ARM binary image

I'm working with an ARM Cortex-M4 (STM32F4xxxx) and I'm trying to understand how exactly the binaries (*.elf and *.bin) are built and flashed into memory, especially with regard to the memory locations. Specifically, what I don't understand is how the LMA gets 'translated' into the actual binary file offset. Let me explain with an example:
I have an *.elf file whose (relevant) sections are the following (obtained from objdump -h):
my_file.elf:     file format elf32-littlearm

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         000001c4  08010000  08010000  00020000  2**0
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
  1 .bootloader   00004000  08000000  08000000  00010000  2**0
                  CONTENTS, ALLOC, LOAD, DATA
According to that file, the VMA and LMA are 0x8000000 and 0x8010000, which is perfectly fine since they are defined that way in the linker script file. In addition, according to that report, the file offsets of those sections are 0x10000 and 0x20000 respectively. Next, I execute the following command to dump the memory corresponding to the .bootloader section:
xxd -s 0x10000 -l 16 my_file.elf
00010000: b007 c0de b007 c0de b007 c0de b007 c0de ................
Now, create the binary file to be flashed into memory:
arm-none-eabi-objcopy -O binary --gap-fill 0xFF -S my_file.elf my_file.bin
According to the information provided above, and as far as I understand, the generated binary file should have the .bootloader section located at 0x8000000. I understand that this is not how it actually works, inasmuch as the file would get extremely big, so the bootloader is placed at the beginning of the file, i.e. at address 0x0 (check that both memory chunks are identical, even though they are at different addresses):
xxd -s 0x00000 -l 16 my_file.bin
00000000: b007 c0de b007 c0de b007 c0de b007 c0de ................
As far as I understand, when the mentioned binary file is flashed into memory, the bootloader will be at address 0x0, which is perfectly fine taking into account that the MCU in question jumps to the address 0x4 (after getting the SP from 0x0) when it starts working, as I have checked here (page 26): https://www.st.com/content/ccc/resource/technical/document/application_note/76/f9/c8/10/8a/33/4b/f0/DM00115714.pdf/files/DM00115714.pdf/jcr:content/translations/en.DM00115714.pdf
Finally, my questions are:
Will the bootloader actually be placed at 0x0? If so, what's the purpose of defining the memory sectors in the linker file?
Is this because 0x0 belongs to flash memory, and when the MCU starts, all the flash is copied into RAM at address 0x8000000? If so, will the bootloader be executed from flash memory and all the rest of the code from RAM?
Taking into account the above questions, if I have not understood anything, what's the relation/difference between the LMA and the File offset?
No, the bootloader will be at 0x08000000, as defined in the elf file.
The image will be burned into flash at that address and executed directly from there (not copied anywhere else).
There's a somewhat undocumented behaviour: the uninitialized area before the actual data is skipped when producing the binary image. As the comment in the BFD library source states (https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blob;f=bfd/binary.c;h=37f5f9f7363e7349612cdfc8bc579369bbabbc0c;hb=HEAD#l238)
/* The lowest section LMA sets the virtual address of the start
of the file. We use this to set the file position of all the
sections. */
The lowest section LMA (.bootloader) is 0x08000000 in your .elf, so the binary file will start at that address.
You should take this address into account and add it to the file offset when determining an address in the image.
Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         000001c4  08010000  08010000  00020000  2**0
                                      /* ^^^^^^^^
                                         this section will be at offset
                                         10000 in the image */
                  CONTENTS, ALLOC, LOAD, READONLY, DATA
  1 .bootloader   00004000  08000000  08000000  00010000  2**0
                                      /* ^^^^^^^^
                                         this is the lowest LMA; in your case it
                                         will be used as the start of the image,
                                         and this section will be placed directly
                                         at the start of the image */
                  CONTENTS, ALLOC, LOAD, DATA
Memory layout:                    Bin. image layout:

000000000    \
   ...        >  skipped
             /
080000000    .bootloader          0
   ...       _______________
080004000    <gap>                4000
   ...       _______________
080010000    .text                10000
   ...       _______________
0800101C4                         101C4
That address is defined in the ldscript, so the binary image should start at a fixed location. However, you should be aware of this behaviour when dealing with ldscripts and binary images.
To summarize the building and flashing process:
When linking, the start address is defined in the ldscript, and the first section in the elf is located there.
When converting to binary, the start address is determined from the lowest LMA, and the binary image starts from that address.
When flashing the image, the same address is given to the flasher as a parameter, so the image is placed at the right place (as defined in the ldscript).
Update: STM32F4xxx booting process.
The address region starting at address 0 is special on those MCUs. It can be configured to map other regions, namely flash, SRAM, or the system ROM; the mapping is selected by the BOOT pins.
From the CPU's side it looks like a second copy of the flash (or SRAM, or system ROM) appears at address 0.
When the CPU starts, it first reads the initial SP from address 0 and the initial PC from address 4. Actually, the reads are performed from the flash memory.
If the code is linked to run from the actual flash location, then the initial PC will point there, and execution starts at the actual flash address.
----- Mapped area (mimics contents of flash) ---
      0: (20001000)            ; CPU reads initial SP here
      4: (0800ABCD) ----.      ; CPU reads PC here
   ....                 |      ; (it points to flash)
----- FLASH -----       |
8000000: 20001000       |      ; initial stack pointer
8000004: 0800ABCD --.   |      ; address of _start in flash
   ....             |   |
800ABCD: <_start:> movw ... <-'<-'   ; code execution starts here
(Note: this does not apply to hex images (like Intel HEX or S-record), as such formats define the loading address explicitly, and it is used as-is there.)
The documentation is pretty clear on where the address space for application code is on the STM32s, which is 0x08000000 (a competing vendor is at something like 0x01000000, and so on), and that when booting in a certain mode, 0x08000000 is mapped to address 0x00000000, as can easily be seen with a debugger (in both spaces).
The address space at 0x00000000 mapped to 0x08000000 is smaller than the potential address space at 0x08000000, depending on the chip. So it is wise to build for and use 0x08000000 rather than 0x00000000, but for small programs you can choose either.
Because the Cortex-M is a vector-table machine, when the logic reads address 0x00000004 (which is mapped to 0x08000004 in a normal boot mode), it sees 0x080xxxxx and then gets out of the 0x00000000 memory space, avoiding any limitations there.
When you use the boot0/boot1 strap pins, you can instead cause 0x00000000 to map elsewhere, where the burned-in bootloader lives. That bootloader can of course easily read 0x08000000 and simulate a reset by branching, or it can change the logic and actually reset (if you ask it to, although I don't know if that bootloader actually supports running a program; who knows, and those of us who may have done work there couldn't necessarily say). Quite possibly it always boots into the bootloader and then changes the mapping depending on the straps.
This is similar to an MMU, but much simpler: decoding addresses and aliasing them is pretty easy. If boot0 == 0 and address[31:16] == 0x0000, then address[31:16] = 0x0800 and the memory system decodes it at the different address; as easy as that is to write in C, it is just as easy in the HDL, if not easier.
This is not uncommon in microcontrollers, as well as elsewhere. Microcontrollers generally boot from a flash/ROM, but on some architectures that same boot space is also the vector or exception table, which an RTOS might want to manipulate, so sometimes RAM can be swapped into that space: after a control register is changed, the CPU "sees" some RAM where on boot it "saw" the vector table in flash. Alternatively, the code in flash branches to somewhere in RAM for the non-reset vectors, and then the RTOS (or any other application that cares to do this) can make runtime changes to what code actually gets run for those exceptions or interrupts.
ARM imposes address space rules for where code can execute and data can live, where you might want to start your peripheral address space, and what address space is reserved by ARM for resources within the core. So you will sometimes see RAM have an alias at a lower address, implying that if you want to run a program in RAM you should use the lower address for execution, but you can use either address to copy the code there.
It is up to the chip designers how simple or complicated to make this. For ST it's pretty simple: they have one or more boot pins on the package that at least let you choose between your application and the on-chip bootloader. So far, on all the STM32s I have seen, the application flash space is considered to live at 0x08000000 and is mapped/aliased to 0x00000000 for one of those boot modes. When two boot pins are exposed, up to four possible boot conditions can exist, of which one is the application with 0x00000000 aliased to 0x08000000.
As to how to get the bits into the flash, that varies widely by tool. Toolchains like GNU will certainly build a .bin file where the first byte of the file is the first byte from the elf that we desire to have at 0x08000000 (if built that way; if you built for 0x02000000 it will still be the first byte, and that code probably won't work). There are tools, and you can certainly write your own, that, knowing this, can load a .bin file at the desired place of 0x08000000; or you can have your tool write to address 0x00000000 in the right mode for a program that is not too big and have it still land in the right place to execute on reset. Likewise, there are (or you can write) tools that parse .elf files, Intel HEX, Motorola S-record and others and, based on the information in those binaries, have the data loaded into the address space you desire, assuming everything is bug-free.
You might be trying to overcomplicate it. There isn't any magic to it: the tools need to do the sane thing, and the sane thing is to take the binary from the compiler and put it in the chip where we want it. We are responsible for the linker script and such, and the bootstrap code/vector table of course, but if we do that right, the tools, if they are designed right, will put the bits in the right place in the chip, and if the chip is designed right as documented, then it will boot and run.
Will the bootloader actually be placed at 0x0? If so, what's the purpose of defining the memory sectors in the linker file?
Ideally you want your application (or bootloader, as you are calling it) to be at address 0x08000000 in the processor's address space. In certain boot modes (boot0/boot1) that address is also aliased to 0x00000000, so you can see the vector table at both places at the same time. If you are not in the right boot mode, then only 0x08000000 will show your code.
Is this because 0x0 belongs to flash memory, and when the MCU starts, all the flash is copied into RAM at address 0x8000000? If so, will the bootloader be executed from flash memory and all the rest of the code from RAM?
The logic in the chip is designed to take the address the processor puts on its address bus and have more than one address land on the application flash. The application flash itself is not "at" 0x08000000: if it's a 64Kbyte flash, for example, it only has addresses from 0x0000 to 0xFFFF. When you access 0x08001234, the chip actually sends 0x1234 to the flash controller, and/or the flash controller chops the top off if it knows it is supposed to handle that request. 0x00000000 and 0x08000000 are the processor's view of the address space; in reality the upper bits are decoded and route the request to whoever it belongs to, and the final handler ultimately looks at the lower bits to determine what is being addressed.
It's like when you deliver a letter: it has a first and last name, a street address, city, state and zip. Once it gets to the right post office in the right state, only the street address matters to the postal person. Once it gets to the right house, often the first name is all that matters; the rest can be ignored. No difference here: portions of the address (can often) become don't-cares, as the responsible logic that inspects that address aims the request at the correct party.
Taking into account the above questions, if I have not understood anything, what's the relation/difference between the LMA and the file offset?
The elf file format is generic, way overkill for microcontroller work, but being well supported and easy to use, why not. The load memory address is where we, the programmers, have desired that code to live with respect to the processor's view of the world. From a readelf perspective, the offset in the file is just the offset of that information within the elf file; it is wherever the tool put it and has no other interesting relationship, or at least it doesn't need to. Objcopy will rip that data out of the file and, for -O binary, put it in a sort of memory-image file, with the lowest address being copied out landing at offset 0 in that file and the size being determined by the total address space covered by all of the loadable blocks (unless you use more command line parameters).
And as you sort of implied: if you have a linker script bug, and you were to have even a single instruction at 0x08000000 and a single byte of .data at 0x20000000 but didn't do the AT > thing, then your file, despite only having three relevant bytes, will be 0x20000001 - 0x08000000 bytes long (after a -O binary). So it's a good idea not to put objcopy in your makefile until you have debugged your linker script. Imagine, say, a target where flash is at 0x00000000 and memory is at 0xE0000000: pretty big .bin files until you get the linker script sorted out.
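For the curious, a minimal GNU ld sketch of that "AT >" idiom looks like this (memory origins and lengths are hypothetical; adjust them to your part):

    MEMORY
    {
      FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 64K
      RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 96K
    }

    SECTIONS
    {
      .text : { *(.text*) } > FLASH

      /* VMA in RAM, LMA in FLASH: the initializers are stored in flash and
         copied to RAM by startup code, so the -O binary output stays small */
      .data : { *(.data*) } > RAM AT> FLASH
    }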

How can I tell if a peripheral is connected to GPIO?

I want to be able to detect when a peripheral sensor is NOT connected to my Raspberry Pi 3.
For example, if I have a GPIO passive infrared sensor.
I can get all the GPIO ports like this:
PeripheralManagerService manager = new PeripheralManagerService();
List<String> portList = manager.getGpioList();
if (portList.isEmpty()) {
    Log.i(TAG, "No GPIO port available on this device.");
} else {
    Log.i(TAG, "List of available ports: " + portList);
}
Then I can connect to a port like this:
try {
    Gpio pir = new PeripheralManagerService().openGpio("BCM4");
} catch (IOException e) {
    // not thrown in the case of an empty pin
}
However, even if the pin is empty I can still connect to it (which technically makes sense, as GPIO is just binary, on or off). There doesn't seem to be any API for this, and I can't think of how you could logically differentiate between a pin that has a peripheral sensor connected and one that is "empty".
Therefore, at the moment there is no way for me to assert programmatically that my sensors and circuit are set up correctly.
Anyone have any ideas? Is it even possible from an electronics point of view?
Reference docs:
https://developer.android.com/things/sdk/pio/gpio.html
There are lots of ways to do "presence detection" electrically, but nothing that you will find intrinsically in the SoC. You wouldn't normally ask a GPIO pin whether something is attached; it would have no way to tell you that.
Extra GPIO pins are often used to detect whether a peripheral is attached to a connector. The plug for some sensor could include a "detect" line that is shorted to ground and pulls the GPIO low when the sensor is attached, for example. USB and SDIO do something similar with dedicated circuitry in the interface.
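With Android Things, reading such a detect line could look roughly like this (BCM17 and the active-low wiring are assumptions for the sake of the example; error handling omitted):

    // Hypothetical "detect" pin: the sensor's plug shorts BCM17 to ground,
    // so a low level means the sensor is physically attached.
    Gpio detect = new PeripheralManagerService().openGpio("BCM17");
    detect.setDirection(Gpio.DIRECTION_IN);
    detect.setActiveType(Gpio.ACTIVE_LOW);  // low level reads as true
    boolean sensorPresent = detect.getValue();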
You could also build more elaborate detection circuits using things like current sensing, but they would inevitably have to put out a binary signal that you capture through a dedicated GPIO.
This is easier to achieve for serial peripherals, since you can usually send a basic command and verify that you get a response.
Detection using solely the input line can be tough. First, you'd want to narrow the scope of the problem: treat as not-present the conditions of the sensor not being connected, the sensor being connected but not responding, and the sensor responding in an uncharacteristic manner.
So, if it is a digital sensor, then communicating with the sensor may be enough to tell if it is present or not (especially if checksums or parity bits are involved).
Some analog sensors also have specific specs on how they behave when triggered. You can use deviation from those specs to determine that the sensor is not present.
If you have a digital sensor without any error checking on its output, where you clock out data (so all 0s or all 1s is valid), or its output is just a binary 1 or 0, then you'd need external help. The same goes for most analog sensors.
This external help would be something where you put the system in a known controlled state, press a button, and it then checks the sensors for output within a specific range. To be absolutely sure, you'd want at least two different states, to ensure your digital or analog inputs didn't happen to be stuck at the correct state for your test.
Just about any other method would be external to the system. Using additional I/O to "detect" a sensor could help increase confidence that the sensor is there, but you could get false positives where all you've learned is that "something" is there, not necessarily the sensor you expect.

Beaglebone Black multiple HC-SR04 - Sensor (Ultrasonic)

I am currently trying to use more than one HC-SR04 on my BeagleBone Black (Rev C).
I tried the following script:
https://github.com/luigif/hcsr04 and it works, but I have no idea how to change which pins are used, nor how to read the sensors one after another.
May someone help me please?
best regards
Ingo
One possible solution with the current code is to add two fast enough multiplexers on the echo/trigger pins of the sensors (8:1 or 16:1, depending on how many sensors you want to connect). The first mux switches between the trigger connections and the second between the echo connections. To control the muxes, connect their select lines to any of the GPIO pins (easiest are P8_14, P8_15, P8_16 and P8_18, since P8_11 and P8_12 are being used by the PRU).
You'll have to change the present code to something like this..
/* Execute code on PRU */
printf(">> Executing HCSR-04 code\n");
prussdrv_exec_program(0, "hcsr04.bin");

/* Add code here to set the GPIO select lines high/low to choose the sensor */

/* Get measurements */
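A minimal sketch of that selection code using the sysfs GPIO interface could look like this (GPIO 26 corresponds to P8_14 and is just an example; the pin must be exported beforehand with echo 26 > /sys/class/gpio/export):

    #include <stdio.h>

    /* Drive one mux select line via sysfs. */
    static void set_mux_select(int gpio, int value)
    {
        char path[64];
        snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/value", gpio);
        FILE *f = fopen(path, "w");
        if (f) {
            fprintf(f, "%d", value);  /* 1 = high, 0 = low */
            fclose(f);
        }
    }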
Muxes generally have 5V inputs and outputs; make sure that you step the echo signal down to 3.3V, else you'll blow your Beaglebone!
Basic cheap muxes have a max response time of 35ns, which is more than sufficient for the requirements here.
https://en.wikipedia.org/wiki/Multiplexer
http://socrates.berkeley.edu/~phylabs/bsc/PDFFiles/DM74151A.pdf
Addition: tie all the trigger pins together and mux only the echo pins, so that you need only one mux instead of two.
