How to do real-time pitch shifting from mic with Superpowered? [duplicate] - ios

This question already has answers here:
Superpowered: real time pitch shift with timestretcher not working
(2 answers)
Closed 5 years ago.
I'm trying to pitch shift in real time from the microphone using Superpowered. I looked at the example, which works on a file, and tried to adapt it. I managed to change the sound, but the result is heavily distorted with interference. What am I doing wrong? Where can I find more information on Superpowered and time stretching?
static bool audioProcessing(void *clientdata,
                            float **buffers,
                            unsigned int inputChannels,
                            unsigned int outputChannels,
                            unsigned int numberOfSamples,
                            unsigned int samplerate,
                            uint64_t hostTime) {
    __unsafe_unretained Superpowered *self = (__bridge Superpowered *)clientdata;
    float tempBuffer[numberOfSamples * 2 + 16];
    SuperpoweredInterleave(buffers[0], buffers[1], tempBuffer, numberOfSamples);
    float *outputBuffer = tempBuffer;
    SuperpoweredAudiobufferlistElement inputBuffer;
    inputBuffer.samplePosition = 0;
    inputBuffer.startSample = 0;
    inputBuffer.samplesUsed = 0;
    inputBuffer.endSample = self->timeStretcher->numberOfInputSamplesNeeded;
    inputBuffer.buffers[0] = SuperpoweredAudiobufferPool::getBuffer(self->timeStretcher->numberOfInputSamplesNeeded * 8 + 64);
    inputBuffer.buffers[1] = inputBuffer.buffers[2] = inputBuffer.buffers[3] = NULL;
    memcpy((float *)inputBuffer.buffers[0], outputBuffer, numberOfSamples * 2 + 16);
    self->timeStretcher->process(&inputBuffer, self->outputBuffers);
    // Do we have some output?
    if (self->outputBuffers->makeSlice(0, self->outputBuffers->sampleLength)) {
        while (true) { // Iterate on every output slice.
            // Get pointer to the output samples.
            int sampleCount = 0;
            float *timeStretchedAudio = (float *)self->outputBuffers->nextSliceItem(&sampleCount);
            if (!timeStretchedAudio) break;
            SuperpoweredDeInterleave(timeStretchedAudio, buffers[0], buffers[1], numberOfSamples);
        }
        // Clear the output buffer list.
        self->outputBuffers->clear();
    }
    return true;
}

I did the following:
static bool audioProcessing(void *clientdata,
                            float **buffers,
                            unsigned int inputChannels,
                            unsigned int outputChannels,
                            unsigned int numberOfSamples,
                            unsigned int samplerate,
                            uint64_t hostTime) {
    __unsafe_unretained Superpowered *self = (__bridge Superpowered *)clientdata;
    SuperpoweredAudiobufferlistElement inputBuffer;
    inputBuffer.startSample = 0;
    inputBuffer.samplesUsed = 0;
    inputBuffer.endSample = numberOfSamples;
    inputBuffer.buffers[0] = SuperpoweredAudiobufferPool::getBuffer((unsigned int)(numberOfSamples * 8 + 64));
    inputBuffer.buffers[1] = inputBuffer.buffers[2] = inputBuffer.buffers[3] = NULL;
    SuperpoweredInterleave(buffers[0], buffers[1], (float *)inputBuffer.buffers[0], numberOfSamples);
    self->timeStretcher->process(&inputBuffer, self->outputBuffers);
    // Do we have some output?
    if (self->outputBuffers->makeSlice(0, self->outputBuffers->sampleLength)) {
        while (true) { // Iterate on every output slice.
            // Get pointer to the output samples.
            int numSamples = 0;
            float *timeStretchedAudio = (float *)self->outputBuffers->nextSliceItem(&numSamples);
            if (!timeStretchedAudio || *timeStretchedAudio == 0) {
                break;
            }
            SuperpoweredDeInterleave(timeStretchedAudio, buffers[0], buffers[1], numSamples);
        }
        // Clear the output buffer list.
        self->outputBuffers->clear();
    }
    return true;
}
This might not work correctly when the speed is changed as well, but I only want live pitch shifting; people can simply speak slower or faster themselves.
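For reference, interleaving is just packing the two mono channel buffers into a single LRLR... stereo buffer. Below is a plain-C sketch of what SuperpoweredInterleave/SuperpoweredDeInterleave presumably do; this is an illustration, not the SDK source:

```c
#include <assert.h>
#include <stddef.h>

/* Plain-C equivalents of interleave/deinterleave: pack two mono
   channels into one LRLR... stereo buffer and unpack it again.
   `frames` is the per-channel sample count; the stereo buffer must
   hold 2 * frames floats. */
static void interleave(const float *left, const float *right,
                       float *stereo, size_t frames) {
    for (size_t i = 0; i < frames; i++) {
        stereo[2 * i]     = left[i];
        stereo[2 * i + 1] = right[i];
    }
}

static void deinterleave(const float *stereo, float *left,
                         float *right, size_t frames) {
    for (size_t i = 0; i < frames; i++) {
        left[i]  = stereo[2 * i];
        right[i] = stereo[2 * i + 1];
    }
}
```

The frame count passed to the deinterleave must match what nextSliceItem reported for that slice. In the first listing above the output is deinterleaved with numberOfSamples rather than sampleCount, which is one plausible source of the distortion, since the time stretcher's output slices need not be the same length as the input buffer.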

Related

Lame - increase bitrate to 320

First I want to show my method for converting source .wav files to .mp3 with the LAME library:
- (void)convertFromWav:(NSString *)sourceFilePath ToMp3:(NSString *)resultName {
    NSString *mp3FileName = [resultName stringByAppendingString:@".mp3"];
    NSString *mp3FilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:mp3FileName];
    @try {
        int read, write;
        FILE *pcm = fopen([sourceFilePath UTF8String], "rb"); // source
        if (pcm == NULL) {
            perror("fopen");
            return;
        }
        fseek(pcm, 4 * 1024, SEEK_CUR); // skip file header
        FILE *mp3 = fopen([mp3FilePath cStringUsingEncoding:1], "wb"); // output
        const int sampleRate = 44100;
        const int bitsPerSample = 16;
        const int numberOfChannels = 2;
        const int PCM_SIZE = 8192 * 2;
        const int MP3_SIZE = 8192 * 2;
        short int pcm_buffer[PCM_SIZE * 2];
        unsigned char mp3_buffer[MP3_SIZE];
        lame_t lame = lame_init();
        lame_set_in_samplerate(lame, sampleRate);
        lame_set_VBR(lame, vbr_default);
        lame_init_params(lame);
        lame_get_num_samples(lame);
        long long fileSize = [[[[NSFileManager defaultManager] attributesOfItemAtPath:sourceFilePath error:nil] objectForKey:NSFileSize] longLongValue];
        long duration = fileSize / (sampleRate * numberOfChannels * bitsPerSample / 8); //(fileSize * 8.0f) / (sampleRate * 2);
        lame_set_num_samples(lame, (duration * sampleRate));
        lame_get_num_samples(lame);
        float percent = 0.0;
        int totalframes = lame_get_totalframes(lame);
        do {
            read = fread(pcm_buffer, 2 * sizeof(short int), PCM_SIZE, pcm);
            if (read == 0)
                write = lame_encode_flush(lame, mp3_buffer, MP3_SIZE);
            else
                write = lame_encode_buffer_interleaved(lame, pcm_buffer, read, mp3_buffer, MP3_SIZE);
            fwrite(mp3_buffer, write, 1, mp3);
            int frameNum = lame_get_frameNum(lame);
            if (frameNum < totalframes)
                percent = (100. * frameNum / totalframes + 0.5);
            else
                percent = 100;
            if ([_delegate respondsToSelector:@selector(convertingProgressChangedWithPercent:)]) {
                [_delegate convertingProgressChangedWithPercent:percent];
            }
        } while (read != 0);
        lame_close(lame);
        fclose(mp3);
        fclose(pcm);
    }
    @catch (NSException *exception) {
        NSLog(@"%@", [exception description]);
    }
    @finally {
        if ([_delegate respondsToSelector:@selector(convertingDidFinish:)]) {
            [_delegate convertingDidFinish:mp3FilePath];
        }
    }
}
It works, and as a result I get an .mp3 at 152000 bits per second. But I want 320000 bits per second. How can I change that? I'm not good with the theory behind this, so I don't know which values to change to what. Thanks.
You want to use lame_set_VBR(lame, vbr_off); and then you can use lame_set_brate, where you set the required bitrate. Using vbr_off gives you CBR mode, as confirmed in the docs (see lame.h):

    /* Types of VBR. default = vbr_off = CBR */
    int CDECL lame_set_VBR(lame_global_flags *, vbr_mode);
    vbr_mode CDECL lame_get_VBR(const lame_global_flags *);
Try this:

    // constants for the settings
    const int sampleRate = 44100;
    const int bitsPerSample = 16;
    const int numberOfChannels = 2;
    const int myBitRate = 320;
    // LAME settings
    lame_t lame = lame_init();
    lame_set_in_samplerate(lame, sampleRate); // 44100
    lame_set_VBR(lame, vbr_off);              // force CBR mode
    lame_set_brate(lame, myBitRate);          // 320 kbps
    lame_init_params(lame);

Note that every setter takes the handle returned by lame_init() (the lame variable above), not the type name lame_t.
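A quick way to sanity-check that CBR actually took effect: at a constant bitrate the audio payload size is fully determined by duration, since lame_set_brate takes kilobits per second. A hypothetical helper (ignores header/tag overhead):

```c
#include <assert.h>

/* Expected audio payload size of a CBR MP3: bitrate in kbps as passed
   to lame_set_brate(), duration in whole seconds. Header and ID3 tag
   overhead is ignored. */
static long cbr_bytes(int bitrateKbps, int seconds) {
    return (long)bitrateKbps * 1000L / 8L * seconds;
}
```

So a one-minute file encoded at 320 kbps should come out near 2.4 MB; if the file is markedly smaller, the encoder is still in VBR mode.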

Objective-C division on 32/64-bit device produces different results

As described in the title, when I try to do the following division I get two different results depending on the architecture of the device:
unsigned int a = 42033;
unsigned int b = 360;
unsigned int c = 466;
double result = a / (double)(b * c);
// on arm64 -> result = 0.25055436337625181
// on armv7 -> result = 0.24986030696800732
Why don't the results match?
According to Apple's 64-Bit Transition Guide for Cocoa Touch, these data types have the same size in the 32- and 64-bit runtimes.
EDIT
The complete code:
#import "UIImage+MyCategory.h"

#define CLIP_THRESHOLD 0.74 // if this much of the image is the clip color, leave it alone

typedef struct {
    unsigned int leftNonColorIndex;
    unsigned int rightNonColorIndex;
    unsigned int nonColorCount;
} scanLineResult;

static inline scanLineResult scanOneLine(unsigned int *scanline, unsigned int count, unsigned int color, unsigned int mask) {
    scanLineResult result = {UINT32_MAX, 0, 0};
    for (int i = 0; i < count; i++) {
        if ((*scanline++ & mask) != color) {
            result.nonColorCount++;
            result.leftNonColorIndex = MIN(result.leftNonColorIndex, i);
            result.rightNonColorIndex = MAX(result.rightNonColorIndex, i);
        }
    }
    return result;
}

typedef struct {
    unsigned int leftNonColorIndex;
    unsigned int topNonColorIndex;
    unsigned int rightNonColorIndex;
    unsigned int bottomNonColorIndex;
    unsigned int nonColorCount;
    double colorRatio;
} colorBoundaries;

static colorBoundaries findTrimColorBoundaries(unsigned int *buffer,
                                               unsigned int width,
                                               unsigned int height,
                                               unsigned int bytesPerRow,
                                               unsigned int color,
                                               unsigned int mask)
{
    colorBoundaries result = {UINT32_MAX, UINT32_MAX, 0, 0, 0.0};
    unsigned int *currentLine = buffer;
    for (int i = 0; i < height; i++) {
        scanLineResult lineResult = scanOneLine(currentLine, width, color, mask);
        if (lineResult.nonColorCount) {
            result.nonColorCount += lineResult.nonColorCount;
            result.topNonColorIndex = MIN(result.topNonColorIndex, i);
            result.bottomNonColorIndex = MAX(result.bottomNonColorIndex, i);
            result.leftNonColorIndex = MIN(result.leftNonColorIndex, lineResult.leftNonColorIndex);
            result.rightNonColorIndex = MAX(result.rightNonColorIndex, lineResult.rightNonColorIndex);
        }
        currentLine = (unsigned int *)((char *)currentLine + bytesPerRow);
    }
    double delta = result.nonColorCount / (double)(width * height);
    result.colorRatio = 1.0 - delta;
    return result;
}

@implementation UIImage (MyCategory)

- (UIImage *)crop:(CGRect)rect {
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

- (UIImage *)trimWhiteBorders {
#ifdef __BIG_ENDIAN__
    // undefined
#else
    const unsigned int whiteXRGB = 0x00ffffff;
    // Which bits to actually check
    const unsigned int maskXRGB = 0x00ffffff;
#endif
    CGImageRef image = [self CGImage];
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(image);
    // Only support default image formats
    if (bitmapInfo != (kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Host))
        return nil;
    CGDataProviderRef dataProvider = CGImageGetDataProvider(image);
    CFDataRef imageData = CGDataProviderCopyData(dataProvider);
    colorBoundaries result = findTrimColorBoundaries((unsigned int *)CFDataGetBytePtr(imageData),
                                                     (unsigned int)CGImageGetWidth(image),
                                                     (unsigned int)CGImageGetHeight(image),
                                                     (unsigned int)CGImageGetBytesPerRow(image),
                                                     whiteXRGB,
                                                     maskXRGB);
    CFRelease(imageData);
    if (result.nonColorCount == 0 || result.colorRatio > CLIP_THRESHOLD)
        return self;
    CGRect trimRect = CGRectMake(result.leftNonColorIndex,
                                 result.topNonColorIndex,
                                 result.rightNonColorIndex - result.leftNonColorIndex + 1,
                                 result.bottomNonColorIndex - result.topNonColorIndex + 1);
    return [self crop:trimRect];
}

@end
Tested the code below in Xcode 6.3.1 using iOS 8.3 and LLVM 6.1 on an iPad 3 (armv7) and an iPhone 6 (arm64), and they produced the same values to at least 15 digits of precision.
unsigned int a = 42033;
unsigned int b = 360;
unsigned int c = 466;
double result = a / (double)(b * c);
// on arm64 -> result = 0.25055436337625181
// on armv7 -> result = 0.24986030696800732
NSString *msg = [NSString stringWithFormat:@"result: %.15f", result];
[[[UIAlertView alloc] initWithTitle:@"" message:msg delegate:nil cancelButtonTitle:@"#jolo" otherButtonTitles:nil] show];
That being said, Xcode 6.3 includes LLVM 6.1, and that includes changes for arm64 and floating-point math. See the Apple LLVM Compiler Version 6.1 section in the release notes:
https://developer.apple.com/library/ios/releasenotes/DeveloperTools/RN-Xcode/Chapters/xc6_release_notes.html
Java code:
public class Division {
    public static void main(String[] args) {
        int a = 42033;
        int b = 360;
        int c = 466;
        double result = a / (double)(b * c);
        System.out.println("Result = " + result);
        double result2 = (a - 1) / (double)(b * c);
        double result3 = (a) / (double)((b + 1) * c);
        double result4 = (a) / (double)(b * (c + 1));
        System.out.println("Result2 = " + result2);
        System.out.println("Result3 = " + result3);
        System.out.println("Result4 = " + result4);
    }
}
Results:
C:\JavaTools>java Division
Result = 0.2505543633762518
Result2 = 0.250548402479733
Result3 = 0.24986030696800732
Result4 = 0.25001784439685937
As can be seen, the "wrong" result is explained by b having a value other than what the OP stated. It has nothing to do with the precision of the arithmetic.
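The same check can be done in a few lines of C: only changing b from 360 to 361 reproduces the "armv7" value, so the discrepancy comes from the inputs, not from the floating-point unit:

```c
#include <assert.h>
#include <math.h>

/* Reproduce the answer's point: IEEE-754 doubles are deterministic,
   so the two "architecture-dependent" results actually come from two
   different values of b. */
static double ratio(unsigned int a, unsigned int b, unsigned int c) {
    return a / (double)(b * c);
}
```

ratio(42033, 360, 466) gives 0.25055436337625181 (the "arm64" value) and ratio(42033, 361, 466) gives 0.24986030696800732 (the "armv7" value) on any conforming implementation.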

ios: EXC_ARM_DA_ALIGN error in release build

I have a function in my application that stores data from a buffer. It works fine in debug mode, both on device and simulator, but when I create an .ipa and run it on the device, I get an EXC_ARM_DA_ALIGN error in libstdc++.6.dylib std::string::_M_replace_safe(unsigned long, unsigned long, char const*, unsigned long):
struct stMemoryBlock
{
    stMemoryBlock(void* InData, int InSize)
    {
        data = InData;
        size = InSize;
        offset = 0;
    };
    void* data;
    unsigned int size;
    unsigned int offset;
};

//-----------------------------------------------
char* cDataCollector::TestMemoryThink(char* Buffer, int BufferSize, int TestOffset, int TestSize)
{
    char* result = NULL;
    if (TestOffset + TestSize <= BufferSize)
    {
        result = &Buffer[TestOffset];
    }
    return result;
}

//-----------------------------------------------------
bool cDataCollector::StoreBinaryData(void* DataBuffer, int DataSize)
{
    bool result = false;
    char* InBuffer = (char *)DataBuffer;
    if (!mPreparedData && !mPreparedDataSize && !mMemoryMap.size())
    {
        unsigned int CountElements = 0;
        int offset = sizeof(unsigned int);
        if (DataSize >= sizeof(unsigned int))
        {
            // CountElements = *(unsigned int*)(&InBuffer[0]);
            memcpy(&CountElements, InBuffer, sizeof(CountElements));
        }
        result = true;
        for (unsigned int i = 0; (i < CountElements) && result; ++i)
        {
            std::string ThinkName;
            stMemoryBlock * MemoryBlock = NULL;
            result = result && TestMemoryThink(InBuffer, DataSize, offset, 0) != NULL;
            if (result)
            {
                size_t name_think_size = strlen(&InBuffer[offset]);
                char* think_name = TestMemoryThink(InBuffer, DataSize, offset, 0);
                result = result && (think_name != NULL);
                if (result)
                {
                    ThinkName = think_name;
                    offset += (name_think_size + 1);
                }
            }
This line causes the error:
ThinkName = think_name;
Maybe I need another way to read a string from a memory location that isn't word (32-bit) aligned? Please help!
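EXC_ARM_DA_ALIGN is raised when a multi-byte value is read through an insufficiently aligned pointer. Byte-wise char access is always aligned, but every multi-byte read from an arbitrary offset should use the memcpy idiom the code already applies to CountElements. A small hypothetical helper showing that pattern:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Alignment-safe read of a 32-bit value at an arbitrary byte offset:
   memcpy into a properly aligned local instead of casting the pointer
   and dereferencing it (which ARM may fault on). */
static uint32_t read_u32(const char *buf, size_t offset) {
    uint32_t v;
    memcpy(&v, buf + offset, sizeof v);
    return v;
}
```

The same idiom applies to any other fixed-size field parsed out of the buffer; only the NUL-terminated string reads (strlen and the std::string assignment) are byte-wise and thus alignment-safe by themselves.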

How to play and read .caf PCM audio file

I have an app that selects a song from the iPod Library, then copies that song into the app's directory as a '.caf' file. I now need to play that file and at the same time read it into Apple's FFT from the Accelerate framework so I can visualize the data like a spectrogram. Here is the code for the FFT:
void FFTAccelerate::doFFTReal(float samples[], float amp[], int numSamples)
{
    int i;
    vDSP_Length log2n = log2f(numSamples);
    // Convert float array of real samples to COMPLEX_SPLIT array A
    vDSP_ctoz((COMPLEX*)samples, 2, &A, 1, numSamples/2);
    // Perform FFT using fftSetup and A; results are returned in A
    vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);
    // Convert COMPLEX_SPLIT A result to a float array to be returned
    amp[0] = A.realp[0] / (numSamples * 2);
    for (i = 1; i < numSamples; i++)
        amp[i] = sqrt(A.realp[i]*A.realp[i] + A.imagp[i]*A.imagp[i]) / numSamples;
}

// Constructor
FFTAccelerate::FFTAccelerate(int numSamples)
{
    vDSP_Length log2n = log2f(numSamples);
    fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
    int nOver2 = numSamples/2;
    A.realp = (float *) malloc(nOver2*sizeof(float));
    A.imagp = (float *) malloc(nOver2*sizeof(float));
}
My question is: how do I loop through the '.caf' audio file to feed the FFT while at the same time playing the song? I only need one channel. I'm guessing I need to get 1024 samples of the song, process that in the FFT, then move further down the file and grab another 1024 samples. But I don't understand how to read an audio file to do this. The file has a sample rate of 44100.0 Hz, is in linear PCM format, 16-bit, and I believe is also interleaved, if that helps...
Try the ExtendedAudioFile API (requires AudioToolbox.framework).
#include <AudioToolbox/ExtendedAudioFile.h>
NSURL *urlToCAF = ...;
ExtAudioFileRef caf;
OSStatus status;

status = ExtAudioFileOpenURL((__bridge CFURLRef)urlToCAF, &caf);
if(noErr == status) {
    const UInt32 NumFrames = 1024;
    const int ChannelsPerFrame = 1; // Mono, 2 for Stereo

    // request float format
    AudioStreamBasicDescription clientFormat;
    clientFormat.mChannelsPerFrame = ChannelsPerFrame;
    clientFormat.mSampleRate = 44100;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    int cmpSize = sizeof(float);
    int frameSize = cmpSize * ChannelsPerFrame;
    clientFormat.mBitsPerChannel = cmpSize * 8;
    clientFormat.mBytesPerPacket = frameSize;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBytesPerFrame = frameSize;
    status = ExtAudioFileSetProperty(caf, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
    if(noErr != status) { /* handle it */ }

    while(1) {
        float buf[ChannelsPerFrame * NumFrames];
        AudioBuffer ab = { ChannelsPerFrame, sizeof(buf), buf };
        AudioBufferList abl;
        abl.mNumberBuffers = 1;
        abl.mBuffers[0] = ab;

        UInt32 ioNumFrames = NumFrames;
        status = ExtAudioFileRead(caf, &ioNumFrames, &abl);
        if(noErr == status) {
            // process ioNumFrames here in buf
            if(0 == ioNumFrames) {
                // EOF!
                break;
            } else if(ioNumFrames < NumFrames) {
                // TODO: pad buf with zeroes out to NumFrames
            } else {
                float amp[NumFrames]; // scratch space
                doFFTReal(buf, amp, NumFrames);
            }
        }
    }

    // later
    status = ExtAudioFileDispose(caf);
    if(noErr != status) { /* hmm */ }
}

Bit field ordering on Big-Endian (SPARC) processor

Consider the code below:
#include <stdio.h>
#include <stdlib.h>

#define FORCE_CAST(var, type) *(type*)&var

struct processor_status_register
{
    unsigned int cwp:5;
    unsigned int et:1;
    unsigned int ps:1;
    unsigned int s:1;
    unsigned int pil:4;
    unsigned int ef:1;
    unsigned int ec:1;
    unsigned int reserved:6;
    unsigned int c:1;
    unsigned int v:1;
    unsigned int z:1;
    unsigned int n:1;
    unsigned int ver:4;
    unsigned int impl:4;
}__attribute__ ((__packed__));

struct registers
{
    unsigned long* registerSet;
    unsigned long* globalRegisters;
    unsigned long* cwptr;
    unsigned long wim, tbr, y, pc, npc;
    unsigned short registerWindows;
    /* Though the Intel x86 architecture allows unaligned memory access, SPARC
       mandates that memory accesses be 8-byte aligned. Without
       __attribute__ ((aligned (8))) or a preceding dummy byte, e.g.
       unsigned short dummyByte, the code below crashes with a dreaded Bus
       error and core dump. For more details, follow the links below:
       http://blog.jgc.org/2007/04/debugging-solaris-bus-error-caused-by.html
       https://groups.google.com/forum/?fromgroups=#!topic/comp.unix.solaris/8SgFiMudGL4
    */
    struct processor_status_register __attribute__ ((aligned (8))) psr;
}__attribute__ ((__packed__));

int getBit(unsigned long bitStream, int position)
{
    int bit;
    bit = (bitStream & (1 << position)) >> position;
    return bit;
}

char* showBits(unsigned long bitStream, int startPosition, int endPosition)
{
    // Allocate one extra byte for the NULL character
    char* bits = (char*)malloc(endPosition - startPosition + 2);
    int bitIndex;
    for(bitIndex = 0; bitIndex <= endPosition; bitIndex++)
        bits[bitIndex] = (getBit(bitStream, endPosition - bitIndex)) ? '1' : '0';
    bits[bitIndex] = '\0';
    return bits;
}

int main()
{
    struct registers sparcRegisters;
    short isLittleEndian;
    // Check for endianness
    unsigned long checkEndian = 0x00000001;
    if(*((char*)(&checkEndian)))
    {
        printf("Little Endian\n");
        isLittleEndian = 1; // Little Endian architecture detected
    }
    else
    {
        printf("Big Endian\n");
        isLittleEndian = 0; // Big Endian architecture detected
    }
    unsigned long registerValue = 0xF30010A7;
    unsigned long swappedRegisterValue = isLittleEndian ? registerValue : __builtin_bswap32(registerValue);
    sparcRegisters.psr = FORCE_CAST(swappedRegisterValue, struct processor_status_register);
    registerValue = isLittleEndian ? FORCE_CAST(sparcRegisters.psr, unsigned long)
                                   : __builtin_bswap32(FORCE_CAST(sparcRegisters.psr, unsigned long));
    printf("\nPSR=0x%0X, IMPL=%u, VER=%u, CWP=%u\n", registerValue,
           sparcRegisters.psr.impl, sparcRegisters.psr.ver, sparcRegisters.psr.cwp);
    printf("PSR=%s\n", showBits(registerValue, 0, 31));

    sparcRegisters.psr.cwp = 7;
    sparcRegisters.psr.et = 1;
    sparcRegisters.psr.ps = 0;
    sparcRegisters.psr.s = 1;
    sparcRegisters.psr.pil = 0;
    sparcRegisters.psr.ef = 0;
    sparcRegisters.psr.ec = 0;
    sparcRegisters.psr.reserved = 0;
    sparcRegisters.psr.c = 0;
    sparcRegisters.psr.v = 0;
    sparcRegisters.psr.z = 0;
    sparcRegisters.psr.n = 0;
    sparcRegisters.psr.ver = 3;
    sparcRegisters.psr.impl = 0xF;
    registerValue = isLittleEndian ? FORCE_CAST(sparcRegisters.psr, unsigned long)
                                   : __builtin_bswap32(FORCE_CAST(sparcRegisters.psr, unsigned long));
    printf("\nPSR=0x%0X, IMPL=%u, VER=%u, CWP=%u\n", registerValue,
           sparcRegisters.psr.impl, sparcRegisters.psr.ver, sparcRegisters.psr.cwp);
    printf("PSR=%s\n\n", showBits(registerValue, 0, 31));
    return 0;
}
I compiled this code with gcc-4.7.2 on Solaris 10 on SPARC, producing the big-endian output:
Big Endian
PSR=0xF30010A7, IMPL=3, VER=15, CWP=20
PSR=11110011000000000001000010100111
PSR=0x3F00003D, IMPL=15, VER=3, CWP=7
PSR=00111111000000000000000000111101
I compiled the same code with gcc-4.4 on Ubuntu 10.04 on Intel x86, producing the little-endian output:
Little Endian
PSR=0xF30010A7, IMPL=15, VER=3, CWP=7
PSR=11110011000000000001000010100111
PSR=0xF30000A7, IMPL=15, VER=3, CWP=7
PSR=11110011000000000000000010100111
While the latter is as expected, can anyone please explain the big-endian counterpart? Assuming the showBits() method is correct, how can PSR=0x3F00003D give rise to IMPL=15, VER=3, CWP=7? How is the bit-field arranged and interpreted in memory on a big-endian system?
... PSR=0x3F00003D give rise to IMPL=15, VER=3, CWP=7 values?

It can't. I don't know why you're calling __builtin_bswap32, but 0x3F00003D does not represent the memory of the sparcRegisters struct as you initialized it.
Let's check this code:
sparcRegisters.psr.cwp = 7;
sparcRegisters.psr.et = 1;
sparcRegisters.psr.ps = 0;
sparcRegisters.psr.s = 1;
sparcRegisters.psr.pil = 0;
sparcRegisters.psr.ef = 0;
sparcRegisters.psr.ec = 0;
sparcRegisters.psr.reserved = 0;
sparcRegisters.psr.c = 0;
sparcRegisters.psr.v = 0;
sparcRegisters.psr.z = 0;
sparcRegisters.psr.n = 0;
sparcRegisters.psr.ver = 3;
sparcRegisters.psr.impl = 0xF;
The individual translations are as follows:
7 => 00111
1 => 1
0 => 0
1 => 1
0 => 0000
0 => 0
0 => 0
0 => 000000
0 => 0
0 => 0
0 => 0
0 => 0
3 => 0011
F => 1111
The structure therefore in memory becomes 00111101000000000000000000111111 which is 0x3D00003F in big-endian.
You can confirm this with the following code (tested using CC on Solaris):
#include <stdio.h>
#include <string.h>

struct processor_status_register
{
    unsigned int cwp:5;
    unsigned int et:1;
    unsigned int ps:1;
    unsigned int s:1;
    unsigned int pil:4;
    unsigned int ef:1;
    unsigned int ec:1;
    unsigned int reserved:6;
    unsigned int c:1;
    unsigned int v:1;
    unsigned int z:1;
    unsigned int n:1;
    unsigned int ver:4;
    unsigned int impl:4;
}__attribute__ ((__packed__));

int getBit(unsigned long bitStream, int position)
{
    int bit;
    bit = (bitStream & (1 << position)) >> position;
    return bit;
}

char* showBits(unsigned long bitStream, int startPosition, int endPosition)
{
    // 32 bits plus a terminating NULL
    static char bits[33];
    memset(bits, 0, 33);
    int bitIndex;
    for(bitIndex = 0; bitIndex <= endPosition; bitIndex++)
    {
        bits[bitIndex] = (getBit(bitStream, endPosition - bitIndex)) ? '1' : '0';
    }
    return bits;
}

int main()
{
    processor_status_register psr;
    psr.cwp = 7;
    psr.et = 1;
    psr.ps = 0;
    psr.s = 1;
    psr.pil = 0;
    psr.ef = 0;
    psr.ec = 0;
    psr.reserved = 0;
    psr.c = 0;
    psr.v = 0;
    psr.z = 0;
    psr.n = 0;
    psr.ver = 3;
    psr.impl = 0xF;
    unsigned long registerValue = 0;
    memcpy(&registerValue, &psr, sizeof(registerValue));
    printf("\nPSR=0x%0X, IMPL=%u, VER=%u, CWP=%u\n", registerValue,
           psr.impl, psr.ver, psr.cwp);
    printf("PSR=%s\n\n", showBits(registerValue, 0, 31));
    return 0;
}
The output of this is:
PSR=0x3D00003F, IMPL=15, VER=3, CWP=7
PSR=00111101000000000000000000111111
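The allocation rule is easy to confirm on the opposite architecture, too: GCC on a little-endian target allocates bit-fields starting from the least-significant bit, so the very same assignments produce 0xF30000A7 there, whereas SPARC allocates from the most-significant bit and yields 0x3D00003F. A small check (assumes GCC or Clang on a little-endian target, e.g. x86):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Same bit-field layout as the question. On a little-endian GCC
   target the fields declared first (cwp, et, ...) occupy the
   least-significant bits of the 32-bit unit. */
struct psr {
    unsigned int cwp:5, et:1, ps:1, s:1;
    unsigned int pil:4, ef:1, ec:1, reserved:6;
    unsigned int c:1, v:1, z:1, n:1, ver:4, impl:4;
} __attribute__((__packed__));

static uint32_t psr_bits(void) {
    struct psr p;
    memset(&p, 0, sizeof p);          // zero all fields first
    p.cwp = 7; p.et = 1; p.s = 1;     // low byte: 0x80|0x20|0x07 = 0xA7
    p.ver = 3; p.impl = 0xF;          // high byte: 0xF3
    uint32_t v;
    memcpy(&v, &p, sizeof v);         // reinterpret the 4 bytes
    return v;
}
```

So the same source code legitimately prints 0xF30000A7 on x86 and 0x3D00003F on SPARC; bit-field layout is implementation-defined, which is exactly why the question's bswap-based round-tripping cannot work portably.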
