I am trying to use I2C on the BeagleBone Black with C++ but I keep getting 0x00 returned

Hi, I am trying to read and write data on the I2C bus of a BeagleBone Black, but I keep reading 0x00 whenever I access the WHO_AM_I register on an MMA8452 (or any other register, for that matter), even though WHO_AM_I is a fixed-value register whose contents never change. I am using the i2c-1 character device in /dev, with the SDA and SCL lines of the MMA8452 connected to pins 19 and 20 on the P9 header. Both lines are pulled high with 10 kΩ resistors. Pins 19 and 20 both show 00000073 for their pin mux, which means they are set for I2C functionality, slew control is slow, the receiver is active, and the internal pull-up is enabled.

I ran i2cdetect -r 1 and my device shows up at 0x1d, which is its correct address. I also ran i2cdump 1 0x1d, and 0x2a shows up at offset 0x0d, which is the register I am trying to read and the correct value according to the datasheet. But when I read it from my program it returns 0x00. I am running the latest Angstrom distribution and I am logged in as root, so no sudo is needed. I'm lost now. Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/i2c-dev.h>
#include <linux/i2c.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string>
using namespace std;
int main(int argc, char **argv){
    int X_orientation=0;
    char buffer1[256];
    string i2cDeviceDriver="/dev/i2c-1";
    int fileHandler;

    if((fileHandler=open(i2cDeviceDriver.c_str(),O_RDWR))<0){
        perror("Failed To Open i2c-1 Bus");
        exit(1);
    }

    if(ioctl(fileHandler,I2C_SLAVE,0x1d)<0){
        perror("Failed to acquire i2c bus access and talk to slave");
        exit(1);
    }

    char buffer[1]={0x0D};
    if(write(fileHandler,buffer,1)!=1){
        perror("Failed to write byte to accelerometer");
        exit(1);
    }

    if(read(fileHandler,buffer1,1)!=1){
        perror("Failed to read byte from accelerometer");
        exit(1);
    }

    printf("Contents of WHO AM I is 0x%02X\n",buffer1[0]);
}

It is likely that your I2C device does not support using separate write() and read() calls to query registers (or the equivalent i2c_smbus_write_byte() and i2c_smbus_read_byte() calls). The kernel inserts a stop condition between your two messages on the wire, and some devices do not support this mode.
To confirm:
Try the Linux i2cget command in 'write byte/read byte' mode (trailing mode letter c, which uses separate write and read messages with a stop condition between them):
$ i2cget 1 0x1d 0x0d c
Expected result: 0x00 (incorrect response)
Then try i2cget in 'read byte data' mode (mode letter b, which combines the write and read messages without an intervening stop condition):
$ i2cget 1 0x1d 0x0d b
Expected result: 0x2a (correct response)
To resolve:
Replace your separate write() and read() calls with a combined i2c_smbus_read_byte_data() call, if available on your system:
const __u8 REGISTER_ID = 0x0d;
__s32 result = i2c_smbus_read_byte_data(fileHandler, REGISTER_ID); // negative value indicates an error
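Where this helper comes from varies by system: newer i2c-tools ship it in libi2c (include <i2c/smbus.h> and link with -li2c), while some older linux/i2c-dev.h headers provided it as a static inline function. Here is a minimal, self-contained sketch assuming the newer libi2c layout:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>   // i2c_smbus_* helpers; link with -li2c
#include <cstdio>

int main()
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("Failed to open /dev/i2c-1"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x1d) < 0) { perror("Failed to select slave 0x1d"); return 1; }

    // Combined transaction: send the register address, then a repeated
    // start and the read, with no stop condition in between.
    __s32 result = i2c_smbus_read_byte_data(fd, 0x0d);
    if (result < 0) { perror("Failed to read WHO_AM_I"); return 1; }

    printf("Contents of WHO AM I is 0x%02X\n", (unsigned)result);
    close(fd);
    return 0;
}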
Alternatively (if the above is not available), you can use the ioctl I2C_RDWR option:
const __u8 SLAVE_ID    = 0x1d;
const __u8 REGISTER_ID = 0x0d;

// Requires <linux/i2c.h> (struct i2c_msg, I2C_M_RD) and <sys/ioctl.h>.
struct i2c_rdwr_ioctl_data readByteData;
struct i2c_msg messages[2];
readByteData.nmsgs = 2;
readByteData.msgs  = messages;

// Write portion (send the register we wish to read)
__u8 request = REGISTER_ID;
messages[0].addr  = SLAVE_ID;
messages[0].flags = 0;          // 0 = write
messages[0].len   = 1;
messages[0].buf   = &request;

// Read portion (read in the value of the register)
__u8 response;
messages[1].addr  = SLAVE_ID;
messages[1].flags = I2C_M_RD;   // read
messages[1].len   = 1;          // number of bytes to read
messages[1].buf   = &response;  // where to place the result

// Submit both messages as one combined transaction (no stop in between)
if (ioctl(fileHandler, I2C_RDWR, &readByteData) < 0) {
    perror("Failed to perform combined write/read transfer");
    exit(1);
}

// Output the result
printf("Contents of register 0x%02x is 0x%02x\n", REGISTER_ID, response);
More information:
https://www.kernel.org/doc/Documentation/i2c/smbus-protocol
https://www.kernel.org/doc/Documentation/i2c/dev-interface

Related

How to use huge pages and transparent huge pages in code on Ubuntu

I want to use huge pages or transparent huge pages in my code to optimize the performance of a data structure. But when I call madvise() in my code, it fails to allocate memory for me.
/sys/kernel/mm/transparent_hugepage/enabled always contains: always [madvise] never
/sys/kernel/mm/transparent_hugepage/defrag always contains: always defer defer+madvise [madvise] never
#include <iostream>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>

int main()
{
    void* ptr;
    std::cout << madvise(ptr, 1, MADV_HUGEPAGE) << std::endl;
    std::cout << strerror(errno) << std::endl;
    return 0;
}
The result of the above code is:
-1
Cannot allocate memory
Problems with the provided code example in the question
On my system, your code prints:
-1
Invalid argument
And I don't see how it would work in the first place. madvise() does not allocate memory for you; it is used to set policies for existing memory ranges. Therefore, passing an uninitialized pointer as the first argument is not going to work.
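A minimal sketch of the valid calling order, mapping a real range first and only then advising on it (this mirrors the rewritten code further below):

#include <cstddef>
#include <sys/mman.h>

int main()
{
    // Map 16 MiB of private anonymous memory first; madvise() can only
    // attach a policy to an existing range, it does not allocate anything.
    const size_t len = 16ULL * 1024ULL * 1024ULL;
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    return madvise(p, len, MADV_HUGEPAGE) == 0 ? 0 : 2;
}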
There exists documentation for the MADV_HUGEPAGE argument in the madvise manual:
Enable Transparent Huge Pages (THP) for pages in the range
specified by addr and length. Currently, Transparent Huge
Pages work only with private anonymous pages (see
mmap(2)). The kernel will regularly scan the areas marked
as huge page candidates to replace them with huge pages.
The kernel will also allocate huge pages directly when the
region is naturally aligned to the huge page size (see
posix_memalign(2)).
How to use permanently reserved huge pages
Here is rewritten code that uses mmap instead of madvise. With it I can reproduce your error of Cannot allocate memory:
#include <iostream>
#include <cerrno>
#include <cstring>   // strerror
#include <sys/mman.h>

int main()
{
    const auto memorySize = 16ULL * 1024ULL * 1024ULL;
    void* data = mmap(
        /* "If addr is NULL, then the kernel chooses the (page-aligned) address at which to create the mapping" */
        nullptr,
        memorySize,
        /* memory protection / permissions */ PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
        /* fd should for compatibility be -1 even though it is ignored for MAP_ANONYMOUS */ -1,
        /* "The offset argument should be zero [when using MAP_ANONYMOUS]." */ 0
    );
    if ( data == MAP_FAILED ) {
        std::cout << "Failed to allocate memory: " << strerror( errno ) << "\n";
    } else {
        std::cout << "Allocated pointer at: " << data << "\n";
        munmap( data, memorySize );
    }
    return 0;
}
That error can be solved by making the kernel actually reserve some huge pages that can then be handed out. Normally this should be done at boot time, when most memory is still unfragmented, for a better chance of success; in my case I was only able to reserve 37 of the requested 128 huge pages of 2 MiB each, i.e., 74 MiB of memory. I find that surprisingly low because I have 370 MiB "free" and 3.9 GiB "available" memory. Maybe I should close Firefox first and then try to reserve more huge pages, or maybe kswapd can somehow be triggered to defragment memory before reserving more huge pages.
echo 128 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
head /sys/kernel/mm/hugepages/hugepages-2048kB/*
Output:
==> /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages <==
37
==> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages <==
37
==> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages_mempolicy <==
37
==> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_overcommit_hugepages <==
0
==> /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages <==
0
==> /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages <==
0
Now when I run the code snippet with clang++ hugePages.cpp && ./a.out, I get this output:
Allocated pointer at: 0x7f4454e00000
As the trailing zeros show, the pointer is aligned to quite a large boundary: the 2 MiB huge-page size.
How to use transparent huge pages
I have not seen any system actually use these fixed reserved huge pages. It seems that transparent huge pages have superseded this usage, probably partly because:
Pages that are used as huge pages are reserved inside the kernel and cannot be used for other purposes. Huge pages cannot be swapped out under memory pressure.
To mitigate these complexities, transparent huge pages were introduced:
No application changes need to be made to take advantage of THP, but interested application developers can try to optimize their use of it. A call to madvise() with the MADV_HUGEPAGE flag will mark a memory range as being especially suited to huge pages, while MADV_NOHUGEPAGE will suggest that huge pages are better used elsewhere. For applications that want to use huge pages, use of posix_memalign() can help to ensure that large allocations are aligned to huge page (2MB) boundaries.
That basically says it all, but I think the first statement is no longer true, because most systems nowadays are configured to madvise in /sys/kernel/mm/transparent_hugepage/enabled instead of always, which is what the statement was probably written for. So here is another try with madvise:
#include <array>
#include <chrono>
#include <fstream>
#include <iostream>
#include <string_view>
#include <thread>

#include <errno.h>
#include <stdlib.h>   // posix_memalign, free
#include <string.h>   // strerror
#include <sys/mman.h> // madvise

int main()
{
    const auto memorySize = 16ULL * 1024ULL * 1024ULL;

    void* data{ nullptr };
    const auto memalignError = posix_memalign(
        &data, /* alignment equal or higher to huge page size */ 2ULL * 1024ULL * 1024ULL, memorySize );
    if ( memalignError != 0 ) {
        std::cout << "Failed to allocate memory: " << strerror( memalignError ) << "\n";
        return 1;
    }
    std::cout << "Allocated pointer at: " << data << "\n";

    if ( madvise( data, memorySize, MADV_HUGEPAGE ) != 0 ) {
        std::cerr << "Error on madvise: " << strerror( errno ) << "\n";
        return 2;
    }

    const auto intData = reinterpret_cast<int*>( data );
    intData[0] = 3;
    /* This access is at offset 3000 * sizeof( int ) = 12 kB, i.e.,
     * still in the same 2 MiB page as the access above */
    intData[3000] = 3;
    intData[memorySize / sizeof( int ) / 2] = 3;

    /* Check whether transparent huge pages have been allocated. */
    std::ifstream smapsFile( "/proc/self/smaps" );
    std::array<char, 4096> lineBuffer;
    while ( smapsFile.good() ) {
        /* Getline always appends null. */
        smapsFile.getline( lineBuffer.data(), lineBuffer.size(), '\n' );
        std::string_view line{ lineBuffer.data() };
        if ( line.starts_with( "AnonHugePages:" ) && !line.contains( " 0 kB" ) ) {
            std::cout << "We are successfully using transparent huge pages!\n    " << line << "\n";
        }
    }

    /* During this sleep /proc/meminfo and /proc/vmstat can be checked for transparent anonymous huge pages. */
    using namespace std::chrono_literals;
    std::this_thread::sleep_for( 100s );

    /* Read the test value before freeing so we don't touch freed memory. */
    const bool intact = intData[3000] == 3;
    free( data );
    return intact ? 0 : 3;
}
Running this with clang++ -std=c++2b hugeTransparentPages.cpp && ./a.out (C++23 is necessary for the string_view functionality such as contains), the output on my system is:
Allocated pointer at: 0x7f38cd600000
We are successfully using transparent huge pages!
AnonHugePages: 4096 kB
And this test was executed while cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages yields 0, i.e., there are no persistently reserved huge pages.
Note that only two huge pages (4096 kB) out of the requested 16 MiB were actually used, because the other pages were never written to. This is also why the call to madvise is possible and yields huge pages: it has to be done before the actual physical allocation, i.e., before writing to the allocated memory.
The example code includes a check for transparent huge pages for the process itself. There are multiple ways to check the amount of anonymous transparent huge pages in use. For example, you can check system-wide with:
grep AnonHugePages /proc/meminfo
What I find interesting is that normally this is 0 kB on my system, but it shows 4096 kB while the madvise example is running.
To me, it seems like this means that none of my normally used programs use any persistent huge pages and also no transparent huge pages. I find that very surprising because there should be a lot of use cases for which huge page advantages should outstrip their disadvantages (wasted memory).

A problem with the code example in OSTEP

In OSTEP (Operating Systems: Three Easy Pieces), the author offers a simple C program to show how the OS virtualizes memory:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include "common.h"

int
main(int argc, char *argv[])
{
    int *p = malloc(sizeof(int));
    //assert(p != NULL);
    printf("(%d) address pointed to by p: %p\n", getpid(), p);
    *p = 0;
    while (1) {
        sleep(1);
        *p = *p + 1;
        printf("(%d) p: %d\n", getpid(), *p);
    }
    return 0;
}
The book says that because of the virtualization the outcome should be:
prompt> ./mem &; ./mem &
[1]24113
[2]24114
(24113) address pointed to by p: 0x200000
(24114) address pointed to by p: 0x200000
(24113) p: 1
(24114) p: 1
(24114) p: 2
(24113) p: 2
(24113) p: 3
(24114) p: 3
(24113) p: 4
(24114) p: 4
the author explains why this happens:
Now, we again run multiple instances of this same program to see what
happens (Figure 2.4). We see from the example that each running
program has allocated memory at the same address (0x200000), and yet
each seems to be updating the value at 0x200000 independently! It is
as if each running program has its own private memory, instead of
sharing the same physical memory with other running programs 5 .
...
but on my computer (Ubuntu) the outcome is different: the two instances print different addresses for p.
It makes me feel really confused...
In an attempt to frustrate malware, Ubuntu is configured to load programs at pseudo-random addresses; when your program is loaded into memory, the OS chooses some base address and then loads the program relative to that. This is widely known as Address Space Layout Randomization (ASLR).
There is a kernel option, /proc/sys/kernel/randomize_va_space, which you can set to zero to disable this feature. Note that to whatever extent this offers security, disabling it loses that protection.
As root:
# oldval=$(cat /proc/sys/kernel/randomize_va_space)
# echo 0 > /proc/sys/kernel/randomize_va_space
Then, after you have run your examples as somebody other than root, return to this session and:
# echo "$oldval" > /proc/sys/kernel/randomize_va_space
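If you would rather not change the system-wide setting, a per-process alternative is the setarch wrapper with its -R (--addr-no-randomize) flag, which disables address randomization only for the launched program (a suggestion based on setarch(8); check your system's man page):

$ setarch $(uname -m) -R ./mem &
$ setarch $(uname -m) -R ./mem &

Both instances should then report the same address for p without touching /proc/sys/kernel/randomize_va_space.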

malloc() not showing up in System Monitor

I wrote a program whose only purpose was to allocate a given amount of memory, so that I could watch its effect in the Ubuntu 12.04 System Monitor.
This is what I wrote:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char **argv)
{
    if(argc<2)
        exit(-1);
    int *mem=0;
    int kB = 1024*atoi(argv[1]);
    mem = malloc(kB);
    if(mem==NULL)
        exit(-2);
    sleep(3);
    free(mem);
    exit(0);
}
I used sleep() so that the program would not end immediately so that System Monitor would have time to show the change in memory.
What happens (and what puzzles me) is that even if I allocate 1 GB, the System Monitor shows no memory change at all! Why is that?
I thought the reason might be that the allocated memory was never accessed, so I tried to insert the following before sleep():
int i;
for(i=0; i<kB/sizeof(int); i++)
    mem[i]=i;
But this too had no effect on the graph in System Monitor.
Thanks.

tbb::parallel_for running out of memory on machine with 80 cores

I am trying to use tbb::parallel_for on a machine with 160 hardware threads (8 Intel E7-8870 processors) and 0.5 TB of memory. It is a current Ubuntu system with kernel 3.2.0-35-generic #55-Ubuntu SMP. TBB comes from the package libtbb2, version 4.0+r233-1.
Even with a very simple task, I tend to run out of resources, with either "bad_alloc" or "thread_monitor Resource temporarily unavailable". I boiled it down to this very simple test:
#include <vector>
#include <cstdlib>
#include <cmath>
#include <iostream>
#include "tbb/tbb.h"
#include "tbb/task_scheduler_init.h"

using namespace tbb;

class Worker
{
    std::vector<double>& dst;
public:
    Worker(std::vector<double>& dst)
        : dst(dst)
    {}
    void operator()(const blocked_range<size_t>& r) const
    {
        for (size_t i=r.begin(); i!=r.end(); ++i)
            dst[i] = std::sin(i);
    }
};

int main(int argc, char** argv)
{
    unsigned int n = 10000000;
    unsigned int p = task_scheduler_init::default_num_threads();
    std::cout << "Vector length: " << n << std::endl
              << "Processes    : " << p << std::endl;
    const size_t grain_size = n/p;
    std::vector<double> src(n);

    std::cerr << "Starting loop" << std::endl;
    parallel_for(blocked_range<size_t>(0, n, grain_size), Worker(src));
    std::cerr << "Loop finished" << std::endl;
}
Typical output is
Vector length: 10000000
Processes : 160
Starting loop
thread_monitor Resource temporarily unavailable
thread_monitor Resource temporarily unavailable
thread_monitor Resource temporarily unavailable
The errors appear randomly, and more frequently with greater n. The value of 10 million here is a point where they happen quite regularly. Nevertheless, given the machine's characteristics, this should come nowhere near exhausting memory (I am using the machine alone for these tests).
The grain size was introduced after TBB created too many instances of the Worker, which made it fail for even smaller n.
Can anybody advise on how to set up tbb to handle large numbers of threads?
Summarizing the discussion in the comments as an answer:
The message "thread_monitor Resource temporarily unavailable in pthread_create" basically says that TBB cannot create enough threads; "Resource temporarily unavailable" is what strerror() reports for the error code returned by pthread_create(). One possible reason for this error is insufficient memory to allocate the stack for a new thread. By default, TBB requests 4 MB of stack for each worker thread; this value can be adjusted with a parameter to the tbb::task_scheduler_init constructor if necessary.
In this particular case, as Guido Kanschat reported, the problem was caused by ulimit accidentally set which limited the memory available for the process.
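For illustration, here is a minimal sketch of adjusting the worker stack size via the second task_scheduler_init constructor argument (based on the TBB 4.0 interface; the 1 MiB value is just an example):

#include "tbb/task_scheduler_init.h"

int main()
{
    // Ask for the default number of workers, but give each a 1 MiB stack
    // instead of the default 4 MiB, so 160 workers need roughly 160 MiB
    // of stack rather than roughly 640 MiB.
    tbb::task_scheduler_init init(
        tbb::task_scheduler_init::default_num_threads(),
        1024 * 1024 /* worker thread stack size in bytes */ );

    // ... run parallel_for etc. while `init` is alive ...
    return 0;
}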

C stream buffer

I am using C and need a stream buffer mechanism that I can write arbitrary bytes to and read bytes from. I would prefer something platform independent (or that can at least run on OS X and Linux). Is anyone aware of any permissive, lightweight libraries or code that I can drop in?
I've used the buffers within libevent and I may end up going that route, but it seems overkill to have libevent as a dependency when I don't do any sort of event-based IO.
If you don't mind depending on C++ and possibly some bits of STL, you can use std::stringstream. It shouldn't be too difficult to write a thin C wrapper around it.
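For example, here is a minimal sketch of such a wrapper (the sbuf_* names are made up for illustration); compile it with a C++ compiler and link it into the C program:

// streambuf_wrapper.cpp — thin C-callable wrapper around std::stringstream
#include <sstream>
#include <cstddef>

extern "C" {

typedef struct sbuf sbuf; // opaque handle for C callers

sbuf* sbuf_new(void)
{
    return reinterpret_cast<sbuf*>(new std::stringstream);
}

void sbuf_write(sbuf* s, const void* data, size_t n)
{
    reinterpret_cast<std::stringstream*>(s)->write(
        static_cast<const char*>(data), n);
}

size_t sbuf_read(sbuf* s, void* out, size_t n)
{
    std::stringstream* ss = reinterpret_cast<std::stringstream*>(s);
    ss->read(static_cast<char*>(out), n);
    return static_cast<size_t>(ss->gcount()); // bytes actually read
}

void sbuf_free(sbuf* s)
{
    delete reinterpret_cast<std::stringstream*>(s);
}

} // extern "C"

A matching C header would declare the same four functions with the opaque struct sbuf type, so callers never see any C++.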
Is setbuf(3) (and its aliases) the 'mechanism' you are searching for?
Please consider the following example:
#include <stdio.h>

int main()
{
    char buf[256];

    /* Give the normally unbuffered stderr a 256-byte buffer
       (setbuffer is a BSD extension; see setbuf(3)). */
    setbuffer(stderr, buf, 256);

    fprintf(stderr, "Error: no more oxygen.\n");

    /* The text is still sitting in our buffer, so we can rewrite
       "Error" to "ERROR" before it reaches the terminal. */
    buf[1] = 'R';
    buf[2] = 'R';
    buf[3] = 'O';
    buf[4] = 'R';

    fflush(stderr);
    return 0;
}
