My particular hardware target is a MinnowBoard MAX Turbot Dual-E, but I believe this question applies generically to Intel Atom/Bay Trail processors on other boards.
The SoC has a "multifunction serial port" (SSP) that can be used as a SPI bus interface. It can be attached by ACPI or by PCI (in the latter case, the PCI ID is 0x8086:0x0f0e). And within the Linux kernel, there is existing driver support (low_speed_spidev, spi_pxa2xx_pci, spi_pxa2xx_platform) which allows access to the hardware.
I would like to access the hardware from UEFI, and I'm finding very little documentation on how to access or program this hardware. I don't care very much whether I have to set it into ACPI or PCI mode in the BIOS; either one is OK for my purposes. I'd love to find an existing driver, but after half a day of searching I'm pretty convinced it's not out there. (Happy to be proven wrong on that.)
Failing that, can anyone point me to any code examples that would help to bootstrap this? I've tried looking at the Linux and BSD source code, but they both involve so many extra layers for generality/portability that I'm having trouble separating the portions that relate to this particular device from the portions that simply support a generic driver model.
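For concreteness, here is a minimal EDK II-style sketch of the only step I think I understand so far: walking the PCI I/O handles and matching the 8086:0f0e function. This is my own guess at a starting point (assuming the BIOS exposes the device in PCI mode), not a known-good driver; the actual SSP register programming after this point is exactly the part I'm missing.

/* Sketch only: locate the SSP/SPI function by PCI ID from a UEFI application.
 * Assumes the firmware exposes it in PCI mode as 8086:0f0e. */
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/PciIo.h>

EFI_STATUS LocateSsp (EFI_PCI_IO_PROTOCOL **SspPciIo)
{
  EFI_HANDLE           *Handles;
  UINTN                 Count;
  UINTN                 Index;
  EFI_PCI_IO_PROTOCOL  *Found = NULL;
  EFI_STATUS            Status;

  Status = gBS->LocateHandleBuffer (ByProtocol, &gEfiPciIoProtocolGuid,
                                    NULL, &Count, &Handles);
  if (EFI_ERROR (Status)) {
    return Status;
  }

  for (Index = 0; Index < Count && Found == NULL; Index++) {
    EFI_PCI_IO_PROTOCOL  *PciIo;
    UINT32                Id;

    Status = gBS->HandleProtocol (Handles[Index], &gEfiPciIoProtocolGuid,
                                  (VOID **)&PciIo);
    if (EFI_ERROR (Status)) {
      continue;
    }

    /* Config-space offset 0 holds VendorId (low 16 bits) / DeviceId (high 16 bits). */
    Status = PciIo->Pci.Read (PciIo, EfiPciIoWidthUint32, 0, 1, &Id);
    if (!EFI_ERROR (Status) && Id == 0x0f0e8086) {
      Found = PciIo;   /* BAR0 should then map the SSP register block. */
    }
  }

  gBS->FreePool (Handles);
  if (Found == NULL) {
    return EFI_NOT_FOUND;
  }
  *SspPciIo = Found;
  return EFI_SUCCESS;
}

If I can at least confirm the device and map BAR0 this way, I'm hoping the register-level programming can be adapted from the spi_pxa2xx code in Linux.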
Edit: Post originally mistakenly said AHCI.
Some context: I'm working on embedded systems with microcontroller targets. My purpose here is to clarify the terms I can use for my code repository names. In this post I focus on the low-level naming, which for me represents the target-oriented code (versus application-oriented code at the high level).
I keep going in circles around the web and forums, where nobody seems to clearly define the difference between these terms: HAL vs. BSP vs. drivers.
To me, all three terms are theoretically equivalent, but people seem to draw a distinction where the HAL is reserved for the MCU drivers (e.g. UART, GPIO, ...) and the BSP is reserved for the external peripheral drivers (e.g. accelerometer, EEPROM, ...).
Can somebody help me clarify this? Additionally, please mention whether your answer is based on your personal opinion or on the reasoning/rationale of a community/company/standard/whatever.
Thank you for your time,
As I understand it, ACPI defines a generic hardware programming model where the operating system relies on AML (ACPI Machine Language) code, provided by the OEM firmware, to manipulate the hardware.
In order to execute the AML code, the operating system has to incorporate an AML interpreter.
So it looks to me as though firmware developers use AML to provide a control interface between the platform hardware and the operating system.
But do we really need AML?
I think that ultimately the hardware can only be configured through the platform's native instructions, so the AML interpreter must translate the AML into native instructions; otherwise it cannot be executed on the platform.
But what's the point of using an intermediate language like AML? I know AML is said to be platform-independent, which means I can use it to describe my platform in a non-native way.
But in practice the AML is part of the platform firmware, and the rest of the firmware has already been built into the target platform's native instructions. So what good does it do to make such a small part of the firmware platform-independent? Why not just use native instructions? There must be some way to let the OS call into that code as well, and then the operating system wouldn't need an AML interpreter at all. A lot of complexity could be avoided.
One of the big goals of ACPI over its predecessor APM was to give the OS more visibility into, and control over, power state transitions.
APM was a black box. The OS knew nothing about the power management implementation. It would just call a BIOS function and the BIOS handled all of the magic. Did it work? Did the system sleep properly? Did the system freeze? Was a user application able to handle the BIOS implementation? The sad truth was that many systems had power management that was downright broken, and Microsoft wanted to provide a better power management experience for the growing laptop industry.
Now the BIOS hands the ASL/AML code over to the OS, and the OS executes it, not the BIOS. If the BIOS code does something dumb (like messing with registers it shouldn't), Windows can detect that by parsing the code and block it. AML is 100% decompilable, unlike C.
Remember that ACPI is not x86-specific. At the time it was developed, Itanium and XScale were around. Intel and Microsoft needed a language that would work on all platforms, both 32- and 64-bit.
Lastly, ASL is more than just a list of executable functions; it also contains a number of static configuration tables. The ASL code has tables to define the non-PnP hardware built onto your motherboard, and tables of supported power states. A traditional programming language like C isn't really set up for that.
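As a rough illustration of the table-driven side (my own sketch, not something from the spec text above): every ACPI table the firmware hands to the OS, including the DSDT that carries the compiled AML, starts with the same fixed 36-byte description header, which is data the OS parses rather than code it runs.

#include <stdint.h>

typedef struct {
    char     Signature[4];      /* e.g. "DSDT", "FACP", "APIC" */
    uint32_t Length;            /* total table size in bytes */
    uint8_t  Revision;
    uint8_t  Checksum;          /* whole table sums to zero */
    char     OemId[6];
    char     OemTableId[8];
    uint32_t OemRevision;
    uint32_t CreatorId;
    uint32_t CreatorRevision;
} AcpiTableHeader;              /* for the DSDT, the AML byte code follows this header */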
If ACPI was invented today, they would probably use something like XML to provide the info to the OS.
Originally, hardware for "80x86 PC" was cloned from IBM's PC, and this created an effective de-facto standard for hardware to follow. However it didn't take long before manufacturers wanted to add features that didn't previously exist, where there was no (official or de-facto) standard to follow.
This led to a major problem for operating system software (how do you support "non-standard chaos"). Some standards were created for some things (APM, etc) but they didn't really cover everything needed and became out-of-date. ACPI was created to fix this.
Ideally, what was (and still is) needed are standards that allow an operating system to detect and use supported features of the motherboard. For example, a "standardised case temperature and fan control" device (with support for detecting how many fans, temperature sensors, etc. there are), a "standardised CPU speed/power consumption" interface, a "PCI slot IRQ routing for IO APICs" standard, a "hot-plug PCI controller device" standard, etc.
However, ACPI didn't provide useful standards that hardware manufacturers and operating systems can use. Instead, ACPI provided an over-engineered mess (AML) to allow an OS to cope with ACPI's failure to standardise the hardware.
Essentially; we "need" AML now because it's the only viable way for an OS to work around the "non-standard chaos" problem that ACPI failed to fix.
The problem with providing native code instead of AML is that different operating systems use CPUs in different ways (e.g. native 64-bit 80x86 code in firmware would be useless for an older "32-bit only" OS). AML provides portability between different types of CPUs and between the same CPU/s in different modes.
Also; native code is considered a major security problem (rootkits, etc.); and people tend to think an interpreted language mitigates that problem. Of course in practice AML needs far too much access to the underlying hardware, does it in a way that an OS can't check, and there isn't even a way for an OS to determine whether the AML was maliciously modified before the OS booted. For these reasons AML is still a major security problem despite being an interpreted language.
I am working at a software firm where hardware-independent coding is done on network chipsets: the code is fully multithreaded, various buffers (CRU buffer, linear buffer) are handled, memory (stack memory) is used optimally, IPC is done via message queues, and multiple locks and semaphores are used as concurrency mechanisms. Now I will be assigned to a new development project, where I have to understand the code and develop new features within the next month. I feel like I'm in the middle of the Amazon jungle :).
=> I am at a beginner level in OS concepts and at what feels like an intermediate level in C. So I am looking for suggestions for "material/books which could help me improve/solidify my OS skills".
I have seen Operating System Concepts by Abraham Silberschatz and Modern Operating Systems by Tanenbaum (3rd edition). Both look big and cover all corners of operating systems. I plan to study them slowly and steadily for future reference.
==> Now I am looking for the network materials/books that explain the main concepts in a detailed manner. For example, I have seen one online resource where virtual memory was clearly explained.
Example about virtual memory from that material:
amesmol#aubergine:~/test> objdump -f a.out

a.out:     file format elf32-i386
architecture: i386, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x080482a0
explanation:
Notice that the start address of the program is 0x80482a0. The program behaves as though its starting address were an actual physical address, but it is an address in a virtual address space; its actual starting address in physical memory is at location 0x1000000.
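(To convince myself of the same point from inside a program, I wrote this tiny example of my own, not from that material; the address it prints for main is likewise a virtual address, not a physical one:)

#include <stdio.h>

int main(void)
{
    /* This is the virtual address the program was linked/loaded at;
       the kernel maps it onto whatever physical frame happens to be free. */
    printf("main is at virtual address %p\n", (void *)main);
    return 0;
}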
Like this (a precise point backed by an example), could you please suggest good materials for OS concepts (process management, memory management, IPC)?
Can you also suggest ways to improve/solidify these skills? (For example, what kind of mini homework project I could do, etc.)
Thanks in advance
If you are working on projects, you should go over the books you mentioned as soon as possible for the theoretical explanations, concepts, and terminology. After that, or even alongside your reading, I suggest you go to university course websites to get hands-on skills through small projects. Some suggested links are as follows:
http://www.eecg.toronto.edu/~lie/Courses/ECE344/
http://web.stanford.edu/~ouster/cgi-bin/cs140-winter13/pintos/pintos.html#SEC_Contents
http://www3.cs.stonybrook.edu/~porter/courses/cse624/f13/project.html
(JOS implementation; very helpful instructors if you send them specific queries)
http://www.brokenthorn.com/Resources/OSDev7.html
http://www.osdever.net/bkerndev/Docs/intro.htm
(The above two links are not university links, but as a beginner I recommend starting with them.)
Apart from the above, Lions' Commentary on UNIX (the source code with line-number references) should be on your reading list to understand the implementation of a small-scale OS.
I was wondering how pthreads-win32 (the Windows implementation of pthreads) implements cross-threading. Is it written exclusively with the Windows API? I checked some of the sources and it seems that most of it is indeed written with the Windows API, though I was wondering whether it also relies on the Windows scheduler to switch between threads (and cores) or implements its own. Specifically, many processors these days implement their own scheduling in hardware (I've read about the Itanium architecture, for example, where the hardwired logic supports two threads per core and even switches between them automatically, so evidently OS support for multiple cores is not strictly necessary). So if I have an obsolete OS like some 32-bit Windows that doesn't support multi-core processors, would a program written with pthreads-win32 still run on more than one processor core, or would only one core be used?
How about pthreads implementations on POSIX systems (untainted POSIX threads)? Do they support multi-core processors even if the OS they are running on doesn't?
I am guessing the answer is no for both the Windows and POSIX versions: only one core is in use if the OS doesn't support multiple cores. Though this is just an educated guess and I would like to confirm it, so please leave a comment.
As a side request, can you please recommend a library that DOES support multi-core thread execution even if the OS the program is running on DOESN'T, if any exist, of course.
Also, is there a way to ensure that two threads written with pthreads are executed on different cores, or does the OS (or the processor, or the pthreads lib) do the assignment automatically? Does pthreads guarantee execution on different cores if they are available?
Cheers, Val
EDIT:
I know most of these questions are implementation-specific, so I was referring to this implementation of pthreads for Windows: http://sourceware.org/pthreads-win32/. I didn't mention it specifically before because, as far as I know, it is the most popular and widely used pthreads implementation for Windows.
So from what I'm gathering, the most important thing to note in all of this is that threading by itself has very little to do with parallelism (like UMA with multi-core processors). So while threading might be a technique for implementing concurrency, it is not a way of ensuring ACTUAL parallel execution, which is what I was looking for in the first place, since I am studying parallel and distributed systems and algorithms.
So, to answer one question at a time: yes, pthreads, and probably most (if not all) other threading APIs out there, are based on the underlying OS API, which of course gives them the same limits the OS has. Meaning, yes, if the OS (concretely in this case, some Windows version running, for example, pthreads-win32) doesn't support multiple cores, only one core is in use at all times. As pointed out on the wiki page nos provided, to cite: "Hyper-threading requires not only that the operating system support multiple processors, but also that it be specifically optimised for HTT, and Intel recommends disabling HTT when using operating systems that have not been so optimized." http://en.wikipedia.org/wiki/Hyper-threading Meaning that in most cases the processor's hardwired (basic) scheduler alone is not enough to take advantage of multiple cores; it has to be supported/used by software (OS support).
While this might not be definitive proof, I believe enough evidence points in the same direction to confirm this is the case.
I did not sift through the sources of pthreads (for POSIX-compliant OSes); I am guessing the same goes for that API, since it is more than likely to use the underlying OS API. You will have to confirm this on your own. :)
Also, as for any potential libs out there that might support execution on multiple cores even if the OS they're running on doesn't support multiple cores: you will have to find them on your own (if they exist); please leave a comment.
To ensure parallelism (execution on different cores) manually, Linux does provide a way to pin a thread to a specific virtual processor (under certain conditions). To pin an entire process to a specific (virtual) processor/core, sched_setaffinity() (from sched.h) can be used. As nos pointed out, pthreads provides pthread_setaffinity_np() to pin a particular thread to a specific core. Windows supports similar functionality with SetThreadAffinityMask(), so clearly, assigning threads manually to run in parallel on different cores is possible (if the OS supports multiple cores).
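For illustration, here is a minimal Linux/glibc sketch of my own showing the pthread_setaffinity_np() route (compile with -pthread); it pins the calling thread to core 0 and then reports where it is actually running:

#define _GNU_SOURCE            /* needed for pthread_setaffinity_np() on glibc */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    cpu_set_t set;
    int err;

    /* Allow this thread to run on (virtual) core 0 only. */
    CPU_ZERO(&set);
    CPU_SET(0, &set);

    err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
        return 1;
    }

    /* sched_getcpu() reports which (virtual) core the thread ended up on. */
    printf("pinned; now running on CPU %d\n", sched_getcpu());
    return 0;
}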
From my experience coding with pthreads, if you write code that uses multiple threads (more than two), they SHOULD be executed on more than one physical core, if available (which is presumably the OS scheduling that pthreads relies on).
My questions were quite general to begin with; since most of these things are implementation-specific, it's hard to give one answer. I hope this answer is detailed enough to help you clarify a few things.
Cheers, Val
Generally, each modern OS supports threads by itself and schedules them onto the different (virtual) cores of a system. The OS provides general synchronization techniques (like mutexes, semaphores, or barriers) which are used by pthreads to implement the pthreads API.
With two threads per core (I think you mean Hyper-Threading) on some Intel processors (like the Itanium), the OS sees two "virtual" cores; the processor indeed schedules the two threads onto one physical core. (See Wikipedia.)
However, there are examples where runtime platforms implement their own thread concepts and do the scheduling themselves: I'm thinking of (at least older) implementations of Java that had their own scheduling routines.
I'm a complete newbie when it comes to device drivers, so I hope my question is in place, but I need to develop a driver to control some equipment. I was thinking of using Linux as the host OS, but I'm not sure it is such a good idea; I've heard some horror stories about the mess of developing device drivers under Linux. Is there a better alternative in the *nix world? Or maybe I should check other OSes?
Linux documentation is basically non-existent (similar to other platforms). However, there are a few books which do cover enough information to get started, and the trickier kernel bits can be borrowed from other drivers (yay for open source).
However, it is one of the easiest current platforms to develop drivers for. There are cleaner models, such as QNX, but that product is sadly near its end (and doesn't support a tenth as much hardware as Linux).
What type of device is the driver targeting? Many times you can avoid writing in-kernel drivers (for instance, by using libusb in user space, or the user-space I/O (UIO) framework).
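For the UIO route, here is a rough sketch of what the user-space side looks like (this assumes a UIO driver such as uio_pdrv_genirq already exposes your device as /dev/uio0; the register layout is hypothetical):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map register window 0 of the device (mmap offset 0 selects map 0). */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    /* Block until the device raises an interrupt; read() returns the
       running interrupt count as a 32-bit value. */
    uint32_t irq_count = 0;
    read(fd, &irq_count, sizeof(irq_count));
    printf("interrupts so far: %u, first register: 0x%08x\n",
           irq_count, regs[0]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}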