Wrong analog output using PCF8591 on Android Things

Note: Since the most likely cause of the malfunction is a faulty component, I have ordered a few more. I will test with them and report back.
I am expanding an existing driver for the PCF8591 analog-to-digital/digital-to-analog converter; here's the Pcf8591 class on GitHub.
Reading analog inputs works fine, but I have a strange problem when trying to write the output.
To test the driver, I have a potentiometer connected to input channel 2; I write the value read there to the analog output, and feed the analog output back into input channel 3.
The value read back only ranges from about 20 to 75 as the written value sweeps from 0 to 255. I don't know if I am missing something or if my component is defective.
I am doing it the way the datasheet specifies, and in the same way I've seen some Arduino drivers do it.
This is how I am writing the analog value:
byte[] data = {
        0x40,         // control byte: enable analog output
        (byte) value  // analog output value (0-255)
};
mI2cDevice.write(data, 2);
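For reference, here is a minimal sketch of the whole loopback test as a standalone class. The channel constants and the extra dummy read are my reading of the datasheet (the chip returns the previous conversion first), so treat them as assumptions:

import android.util.Log;
import com.google.android.things.pio.I2cDevice;
import java.io.IOException;

class Pcf8591LoopbackTest {
    private static final String TAG = "Pcf8591Loopback";
    private static final int CONTROL_DAC_ENABLE = 0x40; // bit 6: analog output enable

    private final I2cDevice mI2cDevice;

    Pcf8591LoopbackTest(I2cDevice device) {
        mI2cDevice = device;
    }

    void loopback(int value) throws IOException {
        // Write the DAC value: control byte followed by the output byte.
        byte[] out = {(byte) CONTROL_DAC_ENABLE, (byte) value};
        mI2cDevice.write(out, out.length);

        // Select ADC channel 3 (bits 1:0), keeping the DAC enabled.
        byte[] control = {(byte) (CONTROL_DAC_ENABLE | 0x03)};
        mI2cDevice.write(control, control.length);

        // The PCF8591 returns the *previous* conversion first, so read two
        // bytes and keep the second one.
        byte[] in = new byte[2];
        mI2cDevice.read(in, in.length);
        Log.d(TAG, "wrote " + value + ", read back " + (in[1] & 0xFF));
    }
}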
I'd appreciate it if someone else with this component could try it and see whether writing the analog value works for them.
In theory the analog output swings from AGND to VREF, and I am assuming the board I have ties them to GND and VCC. I am using this board:

Related

Identify what kind of waveform I get at an analog input

I'm currently using MPLAB X to program a PIC16F877A, with Proteus as the simulator. I have to identify what type of waveform I get at the analog input; the frequency is not fixed, but I can store it. I want to use a full-bridge rectifier so the input stays within 0-5 V.
I was thinking I could check how much time the signal needs to get from 0 V to 1 V, from 1 V to 2 V, and so on; for a square wave it would always be at roughly +5 V (see the sketch below).
I don't know if there's a better way, or if this is even possible. I've attached the piece of code that stores the frequency.
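To make the idea concrete, here is a rough sketch of the threshold-timing logic on a buffer of stored ADC samples. The thresholds and the classification rule are placeholders, and it's written in Java only to show the logic (the PIC version would be C):

// Count how many samples the signal needs to rise from ~1 V to ~4 V.
// A square wave jumps across in roughly one sample; a sine or triangle
// takes a measurable fraction of its period. Thresholds assume a 10-bit
// ADC with a 0-5 V range.
class WaveformGuess {
    static final int LOW = 1023 / 5;      // ~1 V
    static final int HIGH = 4 * 1023 / 5; // ~4 V

    static String classify(int[] samples) {
        int rise = riseSamples(samples);
        if (rise < 0) return "no rising edge found";
        // A near-instant transition suggests a square wave.
        return (rise <= 1) ? "square-like"
                           : "sine/triangle-like (rise = " + rise + " samples)";
    }

    // Returns the number of samples between crossing LOW and crossing HIGH.
    static int riseSamples(int[] s) {
        for (int i = 1; i < s.length; i++) {
            if (s[i - 1] < LOW && s[i] >= LOW) {      // crossed the low threshold
                for (int j = i; j < s.length; j++) {
                    if (s[j] >= HIGH) return j - i;   // crossed the high threshold
                }
            }
        }
        return -1;
    }
}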

Data recovery of QFSK signal in GNURadio

I'm pretty new to using GNURadio and I'm having trouble recovering the data from a signal that I've saved to a file. The signal is a 56 kHz carrier, frequency-shift keyed by ±200 Hz at 600 baud.
So far, I've been able to demodulate the signal into something that looks similar to the signal I get from the source:
I'm trying to turn this into a repeating string of 1s and 0s (the whole telegram is 38 bytes long and repeats continuously). I've tried using a clock recovery block to get one sample per symbol, but I'm not having much luck. With the M&M clock recovery block, the whole telegram sometimes comes out correct, but not consistently. I've tried adjusting the omega and mu values, but that doesn't seem to help much. I've also tried the Polyphase Clock Sync, but I keep getting a runtime error of 'please specify a filter'. Is this asking me to add taps? What taps would I use?
So I guess my overall question is: what's the best way to get the telegram out of the demodulated FSK signal?
Again, I'm pretty new at this, so please let me know if I've missed something crucial. GNU Radio flow graph below:
You're recovering the bit timing, but you're not recovering the byte boundaries; that needs to happen "one level higher", e.g. via a well-known packet format with a defined preamble that you can look for.
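For illustration, that "one level higher" step can be as simple as sliding a known sync pattern over the recovered bit stream. The 16-bit preamble below is made up; use whatever sync word your telegram format actually defines:

// Find byte boundaries by matching a known preamble against the
// recovered bits. PREAMBLE here is a made-up example pattern.
class PreambleSearch {
    static final int[] PREAMBLE = {1,0,1,0,1,0,1,0, 1,1,0,1,0,0,1,1};

    // Returns the index of the first payload bit after the preamble, or -1.
    static int findStart(int[] bits) {
        outer:
        for (int i = 0; i + PREAMBLE.length <= bits.length; i++) {
            for (int j = 0; j < PREAMBLE.length; j++) {
                if (bits[i + j] != PREAMBLE[j]) continue outer;
            }
            return i + PREAMBLE.length; // payload starts here, byte-aligned
        }
        return -1;
    }
}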

FSK demodulation with GNU Radio

I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device generates audio from GPS data fed into it in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1s and 0s, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1s and 0s, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in the frequency domain so that mark and space sit at ±500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (the midpoint between 2200 and 1200 Hz) ends up at 0 Hz, and then filtered by a low-pass (gain = 1.0, stopband starts at 1 kHz, passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth still present in the signal is much lower than the sample rate, so you could also just downsample without losses (set the decimation to something higher, e.g. 16), but for the sake of analysis we won't do that.
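To make the math explicit, here is a minimal sketch of that shift-then-demodulate chain on raw real samples. The low-pass stage between the two steps is omitted for brevity, so this illustrates the idea rather than the actual Xlating FIR block:

// 1) Mix the signal down so the 1700 Hz center ((1200+2200)/2) sits at 0 Hz.
// 2) Quadrature-demodulate: the phase step between consecutive complex
//    samples is proportional to instantaneous frequency, so mark/space
//    (now at -500/+500 Hz) come out with opposite signs.
class FskDemodSketch {
    static double[] demod(double[] real, double sampleRate) {
        double centerHz = (1200.0 + 2200.0) / 2.0; // 1700 Hz
        int n = real.length;
        double[] i = new double[n], q = new double[n], out = new double[n];
        for (int k = 0; k < n; k++) {
            double phase = -2.0 * Math.PI * centerHz * k / sampleRate;
            i[k] = real[k] * Math.cos(phase); // mixed-down in-phase component
            q[k] = real[k] * Math.sin(phase); // mixed-down quadrature component
        }
        for (int k = 1; k < n; k++) {
            // arg(x[k] * conj(x[k-1])) = phase advance per sample
            double re = i[k] * i[k - 1] + q[k] * q[k - 1];
            double im = q[k] * i[k - 1] - i[k] * q[k - 1];
            out[k] = Math.atan2(im, re); // > 0: space (2200 Hz), < 0: mark (1200 Hz)
        }
        return out;
    }
}

The sign of the output is then exactly the "sign decision" mentioned above.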
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just trying the polyphase clock recovery instead of Mueller & Müller; choosing almost any "somewhat round" pulse shape should work.
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:

Sobel edge detection filter output not correct: could it be because of some parameters?

I am using http://shakithweblog.blogspot.kr/2012/12/getting-sobel-filter-application.html for the Zynq processor.
I am using his filter design in the PL part and running the HDMI test.
I am inputting this file:
and my filtered output comes out like this:
I am trying to display 1920 × 1080 pixels.
Now, let's assume it's difficult for you to see my exact design, or to download and check it, or that you're not familiar with the Zynq board at all: is it still possible to guess why the filter output looks like this, and what I can try to make it correct? I need some suggestions.
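For comparison, a plain software Sobel over the same frame would show what the output should look like, independent of the PL internals. A minimal grayscale sketch (the 0-255, row-major input format is an assumption):

// Reference software Sobel for comparing against the hardware output.
// Input: grayscale frame as width*height ints (0-255), row-major.
class SobelReference {
    static int[] sobel(int[] src, int w, int h) {
        int[] dst = new int[w * h];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int p = y * w + x;
                // Horizontal (Gx) and vertical (Gy) 3x3 Sobel kernels.
                int gx = -src[p - w - 1] + src[p - w + 1]
                       - 2 * src[p - 1] + 2 * src[p + 1]
                       - src[p + w - 1] + src[p + w + 1];
                int gy = -src[p - w - 1] - 2 * src[p - w] - src[p - w + 1]
                       + src[p + w - 1] + 2 * src[p + w] + src[p + w + 1];
                // |gx| + |gy| as a cheap magnitude approximation, clamped to 255.
                dst[p] = Math.min(255, Math.abs(gx) + Math.abs(gy));
            }
        }
        return dst;
    }
}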

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width*_newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
    colorData[i] = (i % 3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and perform well on older hardware. UNorm means, as already answered in the comments, "in the range of 0.0 .. 1.0", and is, again, a common way to access color data in textures.
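In code terms, the UNorm interpretation is simply the following (a sketch, in Java only for illustration):

// UNorm: an 8-bit stored value v is sampled by the shader as v / 255.0,
// i.e. mapped into the range 0.0 .. 1.0. A UInt format instead hands the
// shader raw integers, which likely explains why sampling it as a normal
// float texture drew nothing.
class UNormDemo {
    static float unorm8(int storedByte) {
        return (storedByte & 0xFF) / 255.0f;
    }

    public static void main(String[] args) {
        System.out.println(unorm8(255)); // 1.0
        System.out.println(unorm8(128)); // ~0.502
        System.out.println(unorm8(0));   // 0.0
    }
}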
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page, you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your Device is initialized (RGBA/BGRA?) and on what hardware (i.e., supported DX feature level) and OS you run your software. You will have to run your own tests to find out (though in the case of these common and similar formats, the difference should be a single-digit percentage at most).
