OSError: [Errno 12] ENOMEM with Pi Pico W - storage

This is code I used from https://makersportal.com/blog/ws2812-ring-light-with-raspberry-pi-pico. To begin with it worked great, but after stopping and re-running the code several times in Thonny without issue, I then got the error:
"OSError: [Errno 12] ENOMEM"
I used the nuke.uf2 file to clear all the storage on my Pi Pico W, but the error soon recurred after running the file enough times. I am extremely confused and would love some help.
Thank you!
I tried using nuke.uf2, which was only a temporary fix.
###############################################################
# WS2812 RGB LED Ring Light Breathing
# with the Raspberry Pi Pico Microcontroller
#
# by Joshua Hrisko, Maker Portal LLC (c) 2021
#
# Based on the Example neopixel_ring at:
# https://github.com/raspberrypi/pico-micropython-examples
###############################################################
#
import array, time
from machine import Pin
import rp2
#
############################################
# RP2040 PIO and Pin Configurations
############################################
#
# WS2812 LED Ring Configuration
led_count = 12 # number of LEDs in ring light
PIN_NUM = 1 # pin connected to ring light
brightness = 0.2 # 0.1 = darker, 1.0 = brightest
@rp2.asm_pio(sideset_init=rp2.PIO.OUT_LOW, out_shiftdir=rp2.PIO.SHIFT_LEFT,
             autopull=True, pull_thresh=24) # PIO configuration
# define WS2812 parameters
def ws2812():
    T1 = 2
    T2 = 5
    T3 = 3
    wrap_target()
    label("bitloop")
    out(x, 1)               .side(0)    [T3 - 1]
    jmp(not_x, "do_zero")   .side(1)    [T1 - 1]
    jmp("bitloop")          .side(1)    [T2 - 1]
    label("do_zero")
    nop()                   .side(0)    [T2 - 1]
    wrap()
# Create the StateMachine with the ws2812 program, outputting on pre-defined pin
# at the 8MHz frequency
sm = rp2.StateMachine(0, ws2812, freq=8_000_000, sideset_base=Pin(PIN_NUM))
# Activate the state machine
sm.active(1)
# Range of LEDs stored in an array
ar = array.array("I", [0 for _ in range(led_count)])
#
############################################
# Functions for RGB Coloring
############################################
#
def pixels_show(brightness_input=brightness):
    dimmer_ar = array.array("I", [0 for _ in range(led_count)])
    for ii, cc in enumerate(ar):
        r = int(((cc >> 8) & 0xFF) * brightness_input)   # 8-bit red dimmed to brightness
        g = int(((cc >> 16) & 0xFF) * brightness_input)  # 8-bit green dimmed to brightness
        b = int((cc & 0xFF) * brightness_input)          # 8-bit blue dimmed to brightness
        dimmer_ar[ii] = (g << 16) + (r << 8) + b         # 24-bit color dimmed to brightness
    sm.put(dimmer_ar, 8)  # update the state machine with new colors
    time.sleep_ms(10)

def pixels_set(i, color):
    ar[i] = (color[1] << 16) + (color[0] << 8) + color[2]  # set 24-bit color

def breathing_led(color):
    step = 5
    breath_amps = [ii for ii in range(0, 255, step)]
    breath_amps.extend([ii for ii in range(255, -1, -step)])
    for ii in breath_amps:
        for jj in range(len(ar)):
            pixels_set(jj, color)  # show all colors
        pixels_show(ii / 255)
        time.sleep(0.02)
#
############################################
# Main Calls and Loops
############################################
#
# color specifications
red = (255,0,0)
green = (0,255,0)
blue = (0,0,255)
yellow = (255,255,0)
cyan = (0,255,255)
white = (255,255,255)
blank = (0,0,0)
colors = [blue,yellow,cyan,red,green,white]
while True:  # loop indefinitely
    for color in colors:  # emulate breathing LED (similar to Amazon's Alexa)
        breathing_led(color)
        time.sleep(0.1)  # wait between colors

I just tripped over this myself.
The quick fix is to type machine.reset() into the Thonny console (you might need to type import machine first).
The issue is the line that initialises the state machine. The number of times you can run it without hitting the error depends on the length of the PIO program (the ws2812() function), so I assume that for some reason it is not resetting the PIO instruction memory, which can only hold 32 instructions.
Your program has 4 PIO instructions, and in my tests it fails on the 9th run (32 / 4 = 8 successful loads), so that adds up.
This seems to be specific to the Pico W, as exactly the same code on the same MicroPython nightly build works fine on a non-W Pico.
EDIT: Here's the problem - MicroPython doesn't reset PIO memory on the Pico W because it interferes with the Wireless driver.
EDIT: Another solution is to put rp2.PIO(0).remove_program() before the rp2.StateMachine line, to clear the PIO memory.
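For reference, here is a minimal sketch of that second workaround, dropped in just above the existing StateMachine line (ws2812 and PIN_NUM as defined in the script above). Calling remove_program() with no argument removes all programs from that PIO block:

import rp2
from machine import Pin

# Free any PIO instructions loaded by previous runs on PIO block 0
rp2.PIO(0).remove_program()

sm = rp2.StateMachine(0, ws2812, freq=8_000_000, sideset_base=Pin(PIN_NUM))
sm.active(1)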

Related

Extracting Semi-structured Text from a complex UI (Golf Simulator)

I'm fairly new to the world of OCR, OpenCV, Tesseract etc and was hoping to get some advice or a nudge in the right direction for a project I'm working on. For context, I practice golf at an indoor simulator that is powered by Full Swing Golf. My goal is to build an app (preferably iphone, but desktop is fine too) that will be able to grab the data provided by the simulator and process it however I'd like. The overall workflow would look something like:
1. Set up an iPhone or laptop camera to watch the simulator screen.
2. Hit the ball.
3. A statistics screen is displayed that looks more or less like the table below.
4. Detect that the statistics screen has been displayed and grab all relevant data:
| Distance | Launch | Back Spin | Club Speed | Carry | To Pin | Direction | Ball Speed | Side Spin | Club Face | Club Path |
|----------|--------|-----------|------------|-------|--------|-----------|------------|-----------|-----------|-----------|
| 345 | 13 | 3350 | 135 | 335 | 80 | 2.4 | 190 | 350 | 4.3 | 1.6 |
5-?: Save the data to my app, keep track of it over time etc...
Attempts So Far:
It seemed like OpenCV's matchTemplate would be a simple way to find all of the headings in the image (Distance, Launch, etc.), and it does seem to work when the image and template are both at the perfect resolution. However, as this will be an iPhone app, the quality is not something I can really guarantee (within reason). Moreover, the screen will almost never be straight-on as it appears above. Most likely, the camera will be off to the side and we will have to de-skew accordingly. I've attempted to use the following image to work on my deskewing logic, to no avail.
Finding the reference points in order to deskew via getPerspectiveTransform and warpPerspective has proven to be incredibly difficult due to the above issues with matching templates.
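For reference, the deskew step itself would be short once four reference points are known; here is a minimal sketch assuming the panel corners have already been located somehow (the corner ordering and output size are placeholders I chose, not values from my code):

import cv2
import numpy as np

def deskew_panel(image, corners, out_w=1280, out_h=720):
    # corners: four (x, y) points of the statistics panel in the photo,
    # ordered top-left, top-right, bottom-right, bottom-left (hypothetical input)
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    # Warp the skewed photo into a fronto-parallel view of the panel
    return cv2.warpPerspective(image, M, (out_w, out_h))

The hard part, as described above, is finding those four points reliably in the first place.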
I've also tried dynamically adjusting for scale with code resembling the following:
import cv2
import numpy as np
import imutils

def findTemplateLocation(image_path):
    # image_gray: grayscale scene image loaded elsewhere (see the linked gist)
    template = cv2.imread(image_path)
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    w, h = template.shape[::-1]
    threshold = 0.65
    loc = []
    for scale in np.linspace(0.1, 2, 20)[::-1]:
        resized = imutils.resize(template, width=int(template.shape[1] * scale))
        w, h = resized.shape[::-1]
        res = cv2.matchTemplate(image_gray, resized, cv2.TM_CCOEFF_NORMED)
        loc = np.where(res >= threshold)
        if len(list(zip(*loc[::-1]))) > 0:
            break
    if loc and len(list(zip(*loc[::-1]))) > 0:
        adjusted_w = int(w / scale)
        adjusted_h = int(h / scale)
        print(str(adjusted_w) + " " + str(adjusted_h) + " " + str(scale))
        ret = []
        for pt in zip(*loc[::-1]):
            ret.append({'width': w, 'height': h, 'location': pt})
        return ret
    return None
This still returns a ton of false positives.
I'm hoping to get some advice on how to approach this problem with a clean slate. I'm open to any language / workflow.
If it does seem that I'm on the right track, my current code is at https://gist.github.com/naderhen/9ec8d45f13d92507131d5bce0e84fad8 . Would really appreciate any suggestions for the best next steps.
Thanks for any help you can provide!
EDIT: Additional Resources
I've uploaded a number of videos and still photos from my time at the indoor simulator this weekend: https://www.dropbox.com/sh/5vub2mi4rvunyaw/AAAY1_7Q_WBV4JvmDD0dEiTDa?dl=0
I tried to get a number of different angles, with different lighting etc. Please let me know if I can provide any other resources that may help.
So, I tried two different methods:
Contour detection - This seemed to be the most obvious method since the statistics screen is the primary part of the image and is present in all your images. Although it does work with two of the three images, it might not be very robust with the parameters. Here are the steps that I tried for contour:
First, get the image in grayscale, or take the Value channel in HSV. Then, threshold the image using either Otsu or adaptive thresholding. After playing with a lot of the associated parameters, I got satisfactory results, which basically means the whole statistics screen appears in white on a black background. After this, sort the contours like this:
contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[1]
# Sort the contours to avoid unnecessary comparison in the for loop below
cntsSorted = sorted(contours, key=lambda x: cv2.contourArea(x), reverse=True)
for cnt in cntsSorted[0:20]:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.04 * peri, True)
    if len(approx) == 4 and peri > 10000:
        cv2.drawContours(sorted_image, cnt, -1, (0, 255, 0), 10)
Feature Detection and Matching: Since using contours wasn't robust enough, I tried another method which I've worked on for a similar problem to yours. This method is fairly robust and much faster (I tried it on an Android phone two years back and it would do the job in less than a second on a 1280 x 760 image). However, after trying it on your work cases, I figured that your images are pretty vague. What I mean by that is, you have two images in your question that have fairly similar primaries and it works for those, but the images you posted in the comments are very different from these, and hence it doesn't find a suitable number of good matches (at least 10 in my case). If you can post a nice set of images that you will actually encounter, I will update this answer with my results on the new set.
More importantly, the images of the scene evidently have a change in perspective, which shouldn't be an issue assuming you are able to get a very good source image (like the first one in your question). However, the change in lighting conditions can be a pain. I'd suggest using different color spaces such as HSV, Lab and Luv instead of BGR.
Here is where you can find a working example of how to implement your own feature matcher. There are some code changes required depending on the version of OpenCV you are using but I am sure you can find the solutions ( I did ;) ).
A good example:
Some suggestions:
Try getting as clean an image as possible for the image you are using to match with the others (your first image in my case). Hopefully, this would require you to do less processing.
Try using unsharp mask before finding Keypoints.
My results are from using ORB. You can also try with other detectors/descriptors like SURF, SIFT and FAST.
Finally, your approach of template matching should work in cases where there is a change only in the scaling and not the perspective.
Hope this helps! Write a comment if you have any additional questions and/or when you have a good image set ready (rubs palms). Cheers!
Edit 1: This is the code that I used for the Feature Detection and Matching in Opencv 3.4.3 and Python 3.4
import cv2
import numpy as np

def unsharp_mask(im):
    # This is used to sharpen images
    gaussian_3 = cv2.GaussianBlur(im, (3, 3), 3.0)
    return cv2.addWeighted(im, 2.0, gaussian_3, -1.0, 0, im)

def screen_finder2(image, source, num=0):
    def resize(im, new_width):
        r = float(new_width) / im.shape[1]
        dim = (new_width, int(im.shape[0] * r))
        return cv2.resize(im, dim, interpolation=cv2.INTER_AREA)

    width = 300
    source = resize(source, new_width=width)
    image = resize(image, new_width=width)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2LUV)
    image, u, v = cv2.split(hsv)
    hsv = cv2.cvtColor(source, cv2.COLOR_BGR2LUV)
    source, u, v = cv2.split(hsv)
    MIN_MATCH_COUNT = 10
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(image, None)
    kp2, des2 = orb.detectAndCompute(source, None)
    flann = cv2.DescriptorMatcher_create(cv2.DescriptorMatcher_FLANNBASED)
    # Without the below 2 lines, matching doesn't work
    des1 = np.asarray(des1, dtype=np.float32)
    des2 = np.asarray(des2, dtype=np.float32)
    matches = flann.knnMatch(des1, des2, k=2)
    # store all the good matches as per Lowe's ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)
    if len(good) >= MIN_MATCH_COUNT:
        src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()
        h, w = image.shape
        pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        source_bgr = cv2.cvtColor(source, cv2.COLOR_GRAY2BGR)
        img2 = cv2.polylines(source_bgr, [np.int32(dst)], True, (0, 0, 255), 3,
                             cv2.LINE_AA)
        cv2.imwrite("out" + str(num) + ".jpg", img2)
    else:
        print("Not enough matches." + str(len(good)))
        matchesMask = None
    draw_params = dict(matchColor=(0, 255, 0),  # draw matches in green color
                       singlePointColor=None,
                       matchesMask=matchesMask,  # draw only inliers
                       flags=2)
    img3 = cv2.drawMatches(image, kp1, source, kp2, good, None, **draw_params)
    cv2.imwrite("ORB" + str(num) + ".jpg", img3)

match_image = unsharp_mask(cv2.imread("source.jpg"))
image_1 = unsharp_mask(cv2.imread("Screen_1.jpg"))
screen_finder2(match_image, image_1, num=1)

What would be the best strategy to match color dominants between two pictures?

I need to match the color dominant between two different pictures, to make them as similar as possible.
For example, I would like to match the grayscale picture of the child below to the sepia picture of the soldier, and compensate for contrast and lighting.
So far, I am thinking of converting the pictures to YCrCb and matching the contrast on the histogram of the Y channel, and the color in the other channels.
I will have to do the same also between color pictures.
Any suggestions?
I have some ideas that should be of use - they kind of start in Photoshop and wander through Perl, ImageMagick and OpenCV. I am a big fan of the warm and beautiful tonalities achieved by photographers such as David Fokos and Michael Kenna and I worked out, many years back, how to replicate their toning.
First, load your image up in Photoshop, convert to black and white mode, and then back to RGB mode, add a Curves adjustment layer and a new layer with the original colour image. Your Layers window will look like this:
Now turn off all layers except the grey background, and use the Color Dropper to find and mark:
a quarter-tone pixel (i.e. value around 64 in the Info window)
a mid-tone pixel (i.e. around 128 in the Info window)
a three quarter-tone pixel (i.e. around 192 in the Info window)
Now turn the other layers back on and find what those three tones map to in RGB:
Now go in the Curves layer and adjust the Red, Green and Blue curves to match those values:
And if you then switch back to RGB, you can see all three curves on one diagram:
You now just need to save that Curve as a file with ACV extension and you can apply it to other images:
I got a bit bored doing that, so I wrote a Perl script that does exactly the same. You pass it a toned image as a filename, it finds the quarter, mid and three-quarter tones and then creates an Adobe Photoshop Curves file - an ACV file for you which you can then batch apply to other photos.
Here's the Perl:
#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;
use Data::Dumper;
my $Debug=1; # 1=print debug messages, 0=don't
my $NPOINTS=5; # Number of points in curve we create
# Read in image in first parameter
my $imagename=$ARGV[0];
my $orig=Image::Magick::->new;
my $x = $orig->Read($imagename);
warn "$x" if "$x";
my $width =$orig->Get('columns');
my $height=$orig->Get('rows');
my $depth=$orig->Get('depth');
print "DEBUG: ",$width,"x",$height,", depth: ",$depth,"\n" if $Debug;
# Access pixel cache
my @RGBpixels = $orig->GetPixels(map=>'RGB',height=>$height,width=>$width,normalize=>1);
my ($i,$j,$p);
my (@greypoint,@Rpoint,@Gpoint,@Bpoint);
for($p=0;$p<$NPOINTS;$p++){
    my $greylevelsought=int(($p+1)*256/($NPOINTS+1));
    my $nearestgrey=1000;
    for(my $t=0;$t<$height*$width;$t++){
        my $R = int(255*$RGBpixels[(3*$t)+0]);
        my $G = int(255*$RGBpixels[(3*$t)+1]);
        my $B = int(255*$RGBpixels[(3*$t)+2]);
        my $this=int(0.21*$R + 0.72*$G +0.07*$B);
        printf "Point: %d, Greysought: %d, this pixel: %d\n",$p,$greylevelsought,$this if $Debug>1;
        if(abs($this-$greylevelsought)<abs($nearestgrey-$greylevelsought)){
            $nearestgrey=$this;
            $greypoint[$p]=$nearestgrey;
            $Rpoint[$p]=$R;
            $Gpoint[$p]=$G;
            $Bpoint[$p]=$B;
        }
    }
    printf "DEBUG: Point#: %d, sought grey: %d, nearest grey: %d\n",$p,$greylevelsought,$nearestgrey if $Debug;
}
# Work out name of the curve file = image basename + acv
my $curvefile=substr($imagename,0,rindex($imagename,'.')) . ".acv";
open(my $out,'>:raw',$curvefile) or die "Unable to open: $!";
print $out pack("s>",4); # Version=4
print $out pack("s>",4); # Number of curves in file = Master NULL curve + R + G + B
print $out pack("s>",2); # Master NULL curve with 2 points for all channels
print $out pack("s>",0 ),pack("s>",0 ); # 0 out, 0 in
print $out pack("s>",255),pack("s>",255); # 255 out, 255 in
print $out pack("s>",2+$NPOINTS); # Red curve
print $out pack("s>",0 ),pack("s>",0 ); # 0 out, 0 in
for($p=0;$p<$NPOINTS;$p++){
    print $out pack("s>",$Rpoint[$p]),pack("s>",$greypoint[$p]);
}
print $out pack("s>",255),pack("s>",255); # 255 out, 255 in
print $out pack("s>",2+$NPOINTS); # Green curve
print $out pack("s>",0 ),pack("s>",0 ); # 0 out, 0 in
for($p=0;$p<$NPOINTS;$p++){
    print $out pack("s>",$Gpoint[$p]),pack("s>",$greypoint[$p]);
}
print $out pack("s>",255),pack("s>",255); # 255 out, 255 in
print $out pack("s>",2+$NPOINTS); # Blue curve
print $out pack("s>",0 ),pack("s>",0 ); # 0 out, 0 in
for($p=0;$p<$NPOINTS;$p++){
    print $out pack("s>",$Bpoint[$p]),pack("s>",$greypoint[$p]);
}
print $out pack("s>",255),pack("s>",255); # 255 out, 255 in
close($out);
If you want to do this in OpenCV, you could translate the first 70% of the script to OpenCV pretty simply - it is just 2 loops. Then you would have the quarter, mid and three-quarter tone points. You could use a curve-fitting program such as gnuplot (I have no idea of your skillset) to fit a curve to the points and then generate a lookup table for each of the 256 values 0-255, and apply that to your other images using cv::LUT() to replicate or clone the tone.
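If it helps, here is a rough Python/OpenCV sketch of that last step, using plain linear interpolation between the recovered tone points rather than a fitted curve (the anchor values below are placeholders for illustration, not real measurements):

import cv2
import numpy as np

# Grey input levels and the R, G, B outputs they map to in the toned
# reference image (placeholder numbers; use your own measured points)
grey  = [0, 64, 128, 192, 255]
r_out = [0, 80, 150, 210, 255]
g_out = [0, 70, 135, 200, 255]
b_out = [0, 60, 120, 185, 255]

# Build a 256-entry lookup table per channel (OpenCV images are BGR)
x = np.arange(256)
lut = np.dstack([np.interp(x, grey, b_out),
                 np.interp(x, grey, g_out),
                 np.interp(x, grey, r_out)]).astype(np.uint8)

img = cv2.imread("input.jpg")       # 8-bit BGR image to tone
toned = cv2.LUT(img, lut)           # apply the tone curve to every pixel
cv2.imwrite("toned.jpg", toned)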

Image filtering - wrong results?

I'm experimenting with convolving an image with a user-supplied mask, in this case
u = array([[-2,-2,-2],[-2,25,-2],[-2,-2,-2]])/9
using the commands
In[1]: import scipy.ndimage as ndi
In[2]: import skimage.io as io
In[3]: c = io.imread('cameraman.png')
In[4]: cu = ndi.convolve(c,u)
In[5]: io.imshow(cu)
I'm checking this against commands in GNU Octave:
Octave-3.8: 1> c = imread('cameraman.png');
Octave-3.8: 2> u = [-2 -2 -2;-2 25 -2;-2 -2 -2]/9
Octave-3.8: 3> cu = imfilter(c,u)
Octave-3.8: 4> imshow(cu)
But here's the thing: Octave seems to give the correct result, but Python doesn't, even though the commands convolve and imfilter are supposed to be implementing the same algorithm. (Well in fact imfilter performs a correlation, which in this case is the same as a convolution.)
(Octave output image and Python output image not reproduced here.)
The Python result is very different from the Octave one. Does anybody know what's going on here? Or is there a better way of convolving with a user-supplied linear filter than using convolve?
The problem may be the result of your convolution taking your image luminance values out of bounds. I ran the example below in Matlab (~= Octave); for an image that initially has grey values 0-255, so in the normalised range [0, 0.99], the result ends up with pixels in the range [-0.88, 2.03].
>> img=double(imread('cameraman.tif'))./255;
>> K=[-2 -2 -2 ; -2 25 -2; -2 -2 -2]/9;
>> out=conv2(img,K,'same');
>> max(max(out))
ans =
2.0288
>> min(min(out))
ans =
-0.8776
It could be that Python has a problem visualising images with out-of-range grey values (<0 or >255), and this is causing a clamping of values resulting in black/white halos in those areas. Perhaps Octave normalises the image prior to displaying it, resulting in fewer artifacts. If you normalise your image in Python prior to displaying it, do you still have this problem?
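A quick way to try that, sticking with the commands from the question: work in floating point and clamp back to [0, 255] before display. This is just a sketch of the suggestion above, not a definitive fix:

import numpy as np
import scipy.ndimage as ndi
import skimage.io as io

u = np.array([[-2, -2, -2], [-2, 25, -2], [-2, -2, -2]]) / 9.0

c = io.imread('cameraman.png').astype(np.float64)   # avoid uint8 wrap-around during filtering
cu = ndi.convolve(c, u)

cu = np.clip(cu, 0, 255).astype(np.uint8)           # clamp out-of-range values before display
io.imshow(cu)
io.show()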

FFT normalization

I know this question has been asked ad nauseam, but somehow I can't make it work properly. I created a single 440 Hz sine wave with unit amplitude. Now, after the FFT, the bin at 440 Hz has a distinct peak, but the value just isn't right. I'd expect to see 0 dB since I'm dealing with a unit-amplitude sine wave. Instead, the power calculated is well above 0 dB. The formula I'm using is simply
for (int i = 0; i < N/2; i++)
{
    mag = sqrt((Real[i]*Real[i] + Img[i]*Img[i])/(N*0.54)); // 0.54 correction for a Hamming window
    Mag[i] = 10 * log(mag);
}
I should probably point out that the total energy in the time domain is equal to the energy in the frequency domain (Parseval's theorem), so I know that my FFT routine is fine.
Any help is much appreciated.
I've been struggling with this again for work. It seems that a lot of software routines / books are a bit sloppy on the normalization of the FFT.
The best summary I have is: energy needs to be conserved, which is Parseval's theorem. Also, when coding this in Python, you can easily lose an element and not know it. Note that NumPy array slicing is not inclusive of the last element. For example:
>>> a = [1, 2, 3, 4, 5, 6]
>>> a[1:-1]
[2, 3, 4, 5]
>>> a[-1]
6
Here's my code to normalize the FFT properly:
# FFT normalization to conserve power
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
import scipy.fftpack
sample_rate = 500.0e6
time_step = 1/sample_rate
carrier_freq = 100.0e6
# number of digital samples to simulate
num_samples = 2**18 # 262144
t_stop = num_samples*time_step
t = np.arange(0, t_stop, time_step)
# let the signal be a voltage waveform,
# so there is no zero padding
carrier_I = np.sin(2*np.pi*carrier_freq*t)
#######################################################
# FFT using Welch method
# windows = np.ones(nfft) - no windowing
# if windows = 'hamming', etc.. this function will
# normalize to an equivalent noise bandwidth (ENBW)
#######################################################
nfft = num_samples # fft size same as signal size
f, Pxx_den = scipy.signal.welch(carrier_I, fs=1/time_step,
                                window=np.ones(nfft),
                                nperseg=nfft,
                                scaling='density')
#######################################################
# FFT comparison
#######################################################
integration_time = nfft*time_step
power_time_domain = sum((np.abs(carrier_I)**2)*time_step)/integration_time
print('power time domain = %f' % power_time_domain)
# Take FFT. Note that the factor of 1/nfft is sometimes omitted in some
# references and software packages.
# By proving Parseval's theorem (conservation of energy) we can find out the
# proper normalization.
signal = carrier_I
xdft = scipy.fftpack.fft(signal, nfft)/nfft
# fft coefficients need to be scaled by fft size
# equivalent to scaling over frequency bins
# total power in frequency domain should equal total power in time domain
power_freq_domain = sum(np.abs(xdft)**2)
print('power frequency domain = %f' % power_freq_domain)
# Energy is conserved
# FFT symmetry
plt.figure(0)
plt.subplot(2,1,1)
plt.plot(np.abs(xdft)) # symmetric in amplitude
plt.title('magnitude')
plt.subplot(2,1,2)
plt.plot(np.angle(xdft)) # pi phase shift out of phase
plt.title('phase')
plt.show()
xdft_short = xdft[0:nfft//2+1] # take only positive frequency terms, other half identical
# xdft[0] is the dc term
# xdft[nfft//2] is the Nyquist term; note that Python slicing does NOT
# include the last element, therefore we need to use 0:nfft//2+1 to have an array
# that runs from 0 to nfft//2
# xdft[nfft//2-x] = conjugate(xdft[nfft//2+x])
Pxx = (np.abs(xdft_short))**2 # power ~ voltage squared, power in each bin.
Pxx_density = Pxx / (sample_rate/nfft) # power is energy over -fs/2 to fs/2, with nfft bins
Pxx_density[1:-1] = 2*Pxx_density[1:-1] # conserve power since we threw away 1/2 the spectrum
# note that DC (0 frequency) and Nyquist term only appear once, we don't double those.
# Note that Python slicing is not inclusive of the last element.
Pxx_density_dB = 10*np.log10(Pxx_density)
freq = np.linspace(0, sample_rate/2, nfft//2+1)
# frequency range of the fft spans from DC (0 Hz) to
# Nyquist (Fs/2).
# the resolution of the FFT is 1/t_stop
# dft of size nfft will give nfft points at frequencies
# (1/stop) to (nfft/2)*(1/t_stop)
plt.figure(1)
plt.plot(freq,Pxx_density_dB,'^')
plt.figure(1)
plt.plot(f, 10.0*np.log10(Pxx_den))
plt.xlabel('Freq (Hz)')
plt.ylabel('dBm/Hz')
# plt.show()
plt.ylim([-200, 0])
plt.show()
Many common (but not all) FFT libraries scale the FFT result of a unit-amplitude sinusoid by the length of the FFT. This maintains Parseval's equality, since a longer sinusoid represents more total energy than a shorter one of the same amplitude.
If you don't want to scale by the FFT length when using one of these libraries, then divide by the length before computing the magnitude in dB.
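As a quick NumPy illustration of that scaling (a sketch with no window, so no Hamming correction factor; the sample rate and FFT length are arbitrary values I picked so the tone lands exactly on a bin):

import numpy as np

fs, f0, n = 48000, 440.0, 4800        # sample rate, tone frequency, FFT length
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)        # unit-amplitude 440 Hz sine

X = np.fft.rfft(x) / n                # divide by the FFT length
k = int(round(f0 * n / fs))           # bin containing the tone
amp = 2 * np.abs(X[k])                # factor of 2: energy split between +f and -f
print(20 * np.log10(amp))             # ~0 dB, as expected for unit amplitude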
Normalization can be done in many different ways - depending on window, number of samples, etc.
Common trick: take the FFT of a known signal and normalize by the value of the peak. Say in the above example your peak is 123; if you want it to be 1, then divide it (and all results obtained with this algorithm) by 123.

What is the MS Office contrast algorithm?

Does anyone know what formula MS Office uses to apply contrast adjustments to image?
It looks like a quadratic function, but I couldn't work it out.
Not too sure what formula they use. I doubt you'll find out either, since nothing's open source, but here is code I use for contrast adjustment:
adjust_contrast <- function(im, contrast=10) {
    # Get c-value from contrast
    c = (100.0 + contrast) / 100.0
    # Apply the contrast
    im = ((im - 0.5) * c) + 0.5
    # Cap anything that went outside the bounds of 0 or 1
    im[im < 0] = 0
    im[im > 1] = 1
    # Return the image
    return(im)
}
This works really well for me.
Note:
This assumes your pixel intensity values are on a scale of 0 to 1. If they are on a 0-255 scale, change the relevant lines to im = ((im-0.5*255)*c)+0.5*255 and im[im>255] = 255.
The function above is in the R language.
Good luck
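For comparison, here is the same linear adjustment sketched in Python/NumPy for an 8-bit image (again, this is just the generic contrast formula above, not necessarily what Office actually does):

import numpy as np

def adjust_contrast(im, contrast=10):
    # im: uint8 image; contrast roughly in [-100, 100]
    c = (100.0 + contrast) / 100.0
    out = (im.astype(np.float64) - 127.5) * c + 127.5   # pivot around mid-grey (0.5 * 255)
    return np.clip(out, 0, 255).astype(np.uint8)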
