GNU Radio and Yaesu CAT over serial - gnuradio-companion

I'm trying to take info from my radio (FT-817) over a serial connection and feed it to GNU Radio in order to build a nice GUI for a panadapter. I normally print CAT data like this (please excuse my Python skills, I'm still learning):
import serial
import time

SERIAL_PORT = "/dev/ttyUSB0"
SAMPLES_PER_SEC = 5

class FT817(object):
    SERIAL_SPEED = 9600
    SERIAL_STOPBITS = serial.STOPBITS_TWO
    SERIAL_TIMEOUT = 1.0
    # Transceiver modes and CAT commands
    MODES = ["LSB", "USB", "CW", "CWR", "AM", None, "WFM", None, "FM", None, "DIG", None, "PKT"]
    CMD_READ_FREQ = [0x00, 0x00, 0x00, 0x00, 0x03]
    CMD_READ_RX_STATUS = [0x00, 0x00, 0x00, 0x00, 0xE7]

    def __init__(self, serial_port, serial_speed=SERIAL_SPEED, serial_stopbits=SERIAL_STOPBITS):
        self._serial = serial.Serial(serial_port, serial_speed,
                                     stopbits=serial_stopbits,
                                     timeout=FT817.SERIAL_TIMEOUT)
        self._frequency = ""
        self._mode = ""
        self._squelch = True
        self._s_meter = ""

    def read_frequency(self):
        '''Queries transceiver RX frequency and mode.
        The response is 5 bytes: the first four store the frequency and
        the fifth stores the mode (AM, FM, SSB etc.).
        '''
        cmd = FT817.CMD_READ_FREQ
        self._serial.write(bytes(cmd))  # pyserial expects bytes, not a list of ints
        resp = self._serial.read(5)
        resp_bytes = (resp[0], resp[1], resp[2], resp[3])
        self._frequency = "%02x%02x%02x%02x" % resp_bytes  # BCD-encoded, in units of 10 Hz
        self._mode = FT817.MODES[resp[4]]

    def read_rx_status(self):
        '''Queries transceiver RX status.
        The response is 1 byte:
        bit 7:   Squelch status: 0 - off (signal present), 1 - on (no signal)
        bit 6:   CTCSS/DCS code: 0 - code is matched, 1 - code is unmatched
        bit 5:   Discriminator centering: 0 - centered, 1 - uncentered
        bit 4:   Dummy data
        bit 3-0: S-meter data
        '''
        cmd = FT817.CMD_READ_RX_STATUS
        self._serial.write(bytes(cmd))
        resp = self._serial.read(1)
        resp_byte = resp[0]
        self._squelch = bool(resp_byte & 0b10000000)

    def get_trx_state_string(self):
        '''Returns transceiver state data for printing.'''
        sql_str = 'SQL: ON' if self._squelch else 'SQL: OFF'
        return "%sHz %s %s" % (self._frequency, self._mode, sql_str)

    def __str__(self):
        '''Overrides __str__() so an FT817 instance can be passed to print().'''
        return self.get_trx_state_string()

if __name__ == '__main__':
    print("Starting FT-817ND monitor...")
    try:
        ft817 = FT817(SERIAL_PORT)
        delay = 1.0 / SAMPLES_PER_SEC
        while True:
            ft817.read_frequency()
            ft817.read_rx_status()
            print(ft817)
            time.sleep(delay)
    except KeyboardInterrupt:
        # Raised when CTRL-C or CTRL-Break is pressed
        pass
    finally:
        print("See you later. 73!")
But I'm not sure how to implement this in GNU Radio. Any help?
I've tried to use a Python Block but without success.
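For reference, this is roughly the direction I am attempting: an Embedded Python Block acting as a source that polls the radio and streams the tuned frequency, which downstream GUI widgets could consume. A minimal sketch (the gr.sync_block API is standard; the port, baud, and the choice of streaming floats are my assumptions, and I have not verified this against real hardware):

import numpy as np
import serial
from gnuradio import gr

class blk(gr.sync_block):
    """Polls an FT-817 over CAT and streams the tuned frequency in Hz."""

    def __init__(self, port='/dev/ttyUSB0', baud=9600):
        gr.sync_block.__init__(self,
                               name='FT-817 CAT source',
                               in_sig=None,
                               out_sig=[np.float32])
        self._serial = serial.Serial(port, baud,
                                     stopbits=serial.STOPBITS_TWO,
                                     timeout=1.0)

    def work(self, input_items, output_items):
        # Same CAT read-frequency command as in the standalone script
        self._serial.write(bytes([0x00, 0x00, 0x00, 0x00, 0x03]))
        resp = self._serial.read(5)
        if len(resp) < 5:
            return 0  # serial timeout: produce no samples this call
        # The four BCD bytes encode the frequency in units of 10 Hz
        freq_hz = int('%02x%02x%02x%02x' % tuple(resp[:4])) * 10
        output_items[0][:] = freq_hz
        return len(output_items[0])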

Related

How can I improve this Reinforcement Learning scenario in Stable Baselines3?

In this scenario, I present a box observation with numbers 0, 1, or 2 and shape (1, 10).
The odds for 0 and 2 are 2% each, and 96% for 1.
I want the model to learn to pick the index of any 2 that comes. If there is no 2, it should just choose 0.
Below is my code:
import numpy as np
import gym
from gym import spaces
from stable_baselines3 import PPO, DQN, A2C
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import VecFrameStack

action_length = 10

class TestBot(gym.Env):
    def __init__(self):
        super(TestBot, self).__init__()
        self.total_rewards = 0
        self.time = 0
        self.action_space = spaces.Discrete(action_length)
        self.observation_space = spaces.Box(low=0, high=2, shape=(1, action_length), dtype=np.float32)

    def generate_next_obs(self):
        p = [0.02, 0.02, 0.96]
        a = [0, 2, 1]
        self.observation = np.random.choice(a, size=(1, action_length), p=p)
        if 2 in self.observation[0][1:]:
            self.best_reward += 1

    def reset(self):
        if self.time != 0:
            print('Total rewards: ', self.total_rewards, 'Best possible rewards: ', self.best_reward)
        self.best_reward = 0
        self.time = 0
        self.generate_next_obs()
        self.total_rewards = 0
        self.last_observation = self.observation
        return self.observation

    def step(self, action):
        reward = 0
        if action != 0:
            last_value = self.last_observation[0][action]
            if last_value == 2:
                reward = 1
            else:
                reward = -1
        self.time += 1
        self.generate_next_obs()
        done = self.time == 4096
        info = {}
        self.last_observation = self.observation
        self.total_rewards += reward
        return self.observation, reward, done, info
For training, I used the following:
env = TestBot()
env = make_vec_env(lambda: env, n_envs=1)

model = PPO('MlpPolicy', env, verbose=0)

iters = 0
while True:
    iters += 1
    model.learn(total_timesteps=4096, reset_num_timesteps=True)
PPO gave the best result, which wasn't so great. It learned to have positive rewards, but took a long time and got stuck at a point far from optimal.
How can I improve the learning of this scenario?
I managed to solve my problem by tuning the PPO parameters.
I had to change the following parameters:
gamma: from 0.99 to 0. It determines the importance of future rewards in the decision-making process. A value of 0 means that only immediate rewards are considered.
gae_lambda: from 0.95 to 0.65. The gae_lambda parameter is used in the calculation of the Generalized Advantage Estimation (GAE). GAE is a method for estimating the advantage function, which is a measure of how much better a certain action is compared to the average action (the standard definition is given after this list). A lower value means that PPO relies less on the GAE.
clip_range: from 0.2 to function-based. It determines the fraction of decisions that will be made for exploration. Toward the end, exploration becomes irrelevant, so I made a function that uses high exploration in the first few iterations and goes to 0 at the end.
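For reference, the standard GAE definition from the RL literature (stated here for context, not code from my project) is

$$\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} = \sum_{l=0}^{\infty} (\gamma\lambda)^l \, \delta_{t+l}, \qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$

Lowering lambda (or gamma) shortens the horizon over which future TD errors shape the advantage estimate; with gamma = 0 the estimate collapses to the immediate TD error, which matches the "only immediate rewards" intent above.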
I also made a small modification to the environment to penalize more heavily the missed opportunity of picking an index holding a 2, but that was done just to accelerate training (a sketch of the idea follows).
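A hypothetical sketch of that tweak (my actual change may differ slightly; the key idea is the extra branch penalizing action 0 when a 2 was available):

def step(self, action):
    reward = 0
    if action != 0:
        # picked an index: reward only if it held a 2
        reward = 1 if self.last_observation[0][action] == 2 else -1
    elif 2 in self.last_observation[0][1:]:
        # chose to skip even though a 2 was available: missed opportunity
        reward = -1
    self.time += 1
    self.generate_next_obs()
    done = self.time == 4096
    self.last_observation = self.observation
    self.total_rewards += reward
    return self.observation, reward, done, {}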
The following is my final code:
env = TestBot()
env = make_vec_env(lambda: env, n_envs=1)

iters = 0

def clip_range_schedule():
    def real_clip_range(progress):
        global iters
        cr = 0.2
        if iters > 20:
            cr = 0.0
        elif iters > 12:
            cr = 0.05
        elif iters > 6:
            cr = 0.1
        return cr
    return real_clip_range

model = PPO('MlpPolicy', env, verbose=0, gamma=0.0, gae_lambda=0.65, clip_range=clip_range_schedule())

while True:
    iters += 1
    model.learn(total_timesteps=4096, reset_num_timesteps=True)

MXNet CPU memory leak when running inference on GPT2 model

I am testing GPT2.
This code takes a question and predicts the next word.
def chat(model, vocab, sentence):
    q = sentence
    if q == 'quit':
        return
    q_tok = tok(q)
    a = ''
    a_tok = []
    break_check_idx = 0
    prev_gen = ''
    while 1:
        input_ids = mx.nd.array([vocab[U_TKN]] + vocab[q_tok] +
                                vocab[EOS, SENT] +
                                vocab[EOS, S_TKN] +
                                vocab[a_tok]).expand_dims(axis=0)  # <--- Here
        pred = model(input_ids.as_in_context(ctx))  # <--- Here
        gen = vocab.to_tokens(
            mx.nd.argmax(
                pred,
                axis=-1).squeeze().astype('int').asnumpy().tolist())[-1]  # <--- Here
        if gen == EOS:
            break
        if prev_gen == gen:
            break_check_idx += 1
            if break_check_idx == 5:
                break_check_idx = 0
                prev_gen = ''
                return '๑°⌓°๑ ...?'
        prev_gen = gen
        a += gen.replace('▁', ' ')
        a_tok = tok(a)
    return a.strip()
As a result of tracking memory usage with several tools, I found that memory is allocated in the lines marked above.
No matter how much I search, I cannot find the cause. Where is the memory leaking?
I have tried del, mx.nd.waitall(), gc.collect(), ctx.empty_cache(), and jemalloc, all of them.
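For reference, this is roughly how I snapshot Python-level allocations around a single call (a minimal tracemalloc sketch; note it only sees Python allocations, not MXNet's native engine memory, so a clean diff here would point at a native leak):

import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()
answer = chat(model, vocab, "test sentence")
after = tracemalloc.take_snapshot()
# top 10 Python-level allocation diffs; native (C++) allocations won't appear here
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)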

How to save a self-trained word2vec to a txt file with a format like 'word2vec-google-news' or 'glove.6b.50d'

I wonder how I can save a self-trained word2vec model to a txt file in the format of 'word2vec-google-news' or 'glove.6b.50d', which has each token followed by its matching vector.
When I export my self-trained vectors to a txt file, it only contains the vectors, with no tokens in front of them.
My code for training my own word2vec:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import math
import random
import numpy as np
from six.moves import xrange
import zipfile
import tensorflow as tf
import pandas as pd

filename = ('data/data.zip')

# Step 1: Read the data into a list of strings.
def read_data(filename):
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

words = read_data(filename)
#print('Data size', len(words))

# Step 2: Build the dictionary and replace rare words with UNK token.
vocabulary_size = 50000

def build_dataset(words):
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    #print("count", len(count))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
#del words  # Hint to reduce memory.
#print('Most common words (+UNK)', count[:5])
#print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

data_index = 0

# Step 3: Function to generate a training batch for the skip-gram model.
def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels

batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
#for i in range(8):
#    print(batch[i], reverse_dictionary[batch[i]], '->', labels[i, 0], reverse_dictionary[labels[i, 0]])

# Step 4: Build and train a skip-gram model.
batch_size = 128
embedding_size = 128
skip_window = 2
num_skips = 2
valid_size = 9
valid_window = 100
num_sampled = 64  # Number of negative examples to sample.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)

graph = tf.Graph()
with graph.as_default():
    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    # Ops and variables pinned to the CPU because of missing GPU implementation
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)
        # Construct the variables for the NCE loss
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]), dtype=tf.float32)
    # Compute the average NCE loss for the batch.
    # tf.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights, biases=nce_biases, inputs=embed, labels=train_labels,
                       num_sampled=num_sampled, num_classes=vocabulary_size))
    # Construct the SGD optimizer using a learning rate of 1.0.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
    # Add variable initializer.
    init = tf.global_variables_initializer()

# Step 5: Begin training.
num_steps = 20000
with tf.Session(graph=graph) as session:
    # We must initialize all variables before we use them.
    init.run()
    #print("Initialized")
    average_loss = 0
    for step in xrange(num_steps):
        batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
        feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}
        # We perform one update step by evaluating the optimizer op (including it
        # in the list of returned values for session.run())
        _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += loss_val
        #if step % 2000 == 0:
        #    if step > 0:
        #        average_loss /= 2000
        #    # The average loss is an estimate of the loss over the last 2000 batches.
        #    print("Average loss at step ", step, ": ", average_loss)
        #    average_loss = 0
    final_embeddings = normalized_embeddings.eval()

np.savetxt('data/w2v.txt', final_embeddings)
You may want to look at the implementation of _save_word2vec_format() in gensim for an example of Python code which writes that format:
https://github.com/RaRe-Technologies/gensim/blob/e859c11f6f57bf3c883a718a9ab7067ac0c2d4cf/gensim/models/utils_any2vec.py#L104
def _save_word2vec_format(fname, vocab, vectors, fvocab=None, binary=False, total_vec=None):
    """Store the input-hidden weight matrix in the same format used by the original
    C word2vec-tool, for compatibility.

    Parameters
    ----------
    fname : str
        The file path used to save the vectors in.
    vocab : dict
        The vocabulary of words.
    vectors : numpy.array
        The vectors to be stored.
    fvocab : str, optional
        File path used to save the vocabulary.
    binary : bool, optional
        If True, the data will be saved in binary word2vec format, else it will be saved in plain text.
    total_vec : int, optional
        Explicitly specify total number of vectors
        (in case word vectors are appended with document vectors afterwards).

    """
    if not (vocab or vectors):
        raise RuntimeError("no input")
    if total_vec is None:
        total_vec = len(vocab)
    vector_size = vectors.shape[1]
    if fvocab is not None:
        logger.info("storing vocabulary in %s", fvocab)
        with utils.open(fvocab, 'wb') as vout:
            for word, vocab_ in sorted(iteritems(vocab), key=lambda item: -item[1].count):
                vout.write(utils.to_utf8("%s %s\n" % (word, vocab_.count)))
    logger.info("storing %sx%s projection weights into %s", total_vec, vector_size, fname)
    assert (len(vocab), vector_size) == vectors.shape
    with utils.open(fname, 'wb') as fout:
        fout.write(utils.to_utf8("%s %s\n" % (total_vec, vector_size)))
        # store in sorted order: most frequent words at the top
        for word, vocab_ in sorted(iteritems(vocab), key=lambda item: -item[1].count):
            row = vectors[vocab_.index]
            if binary:
                row = row.astype(REAL)
                fout.write(utils.to_utf8(word) + b" " + row.tostring())
            else:
                fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join(repr(val) for val in row))))
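Adapted to the training script in the question, a minimal plain-text writer would be something like this (a sketch using the question's reverse_dictionary and final_embeddings variables; untested):

# glove-style output: one line per word, the token followed by its vector
with open('data/w2v.txt', 'w', encoding='utf-8') as fout:
    for idx in range(vocabulary_size):
        word = reverse_dictionary[idx]
        vec = ' '.join(str(val) for val in final_embeddings[idx])
        fout.write('%s %s\n' % (word, vec))

For the word2vec-google-news layout you would additionally write a first header line with the vocabulary size and vector dimension, as _save_word2vec_format() does above.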

Tensorflow Neural Network for Binary Classification; how do I use placeholders

Here is my code.
My target is a vector with shape (N,), which contains only binary numbers.
However, I'm running into the following errors:
/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 /Users/Lai/Dropbox/PersonalProject/MachineLearningForSports/models/NeuralNetwork.py
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
  File "/Users/Lai/Dropbox/PersonalProject/MachineLearningForSports/models/NeuralNetwork.py", line 102, in <module>
    _, c = sess.run([optimizer,cost],feed_dict = {x:batch_x,y:batch_y})
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 943, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (100,) for Tensor 'Placeholder_1:0', which has shape '(?, 2)'
Since my batch size is 100, I believe the error occurs when comparing my target to my predictions. The placeholder seems to make the prediction shape N*2, although I'm not sure. Any help? Thanks
import tensorflow as tf
import DataPrepare as dp
import numpy as np

def random_init(x, num_feature_1st, num_feature_2nd, num_class):
    W1 = tf.Variable(tf.random_normal([num_feature_1st, num_feature_2nd]))
    bias1 = tf.Variable(tf.random_normal([num_feature_2nd]))
    W2 = tf.Variable(tf.random_normal([num_feature_2nd, num_class]))
    bias2 = tf.Variable(tf.random_normal([num_class]))
    return [W1, bias1, W2, bias2]

def softsign(z):
    """The softsign function, applied elementwise."""
    return z / (1. + np.abs(z))

def multilayer_perceptron(x, num_feature_1st, num_feature_2nd, num_class):
    params = random_init(x, num_feature_1st, num_feature_2nd, num_class)
    layer_1 = tf.add(tf.matmul(x, params[0]), params[1])
    layer_1 = softsign(layer_1)
    #layer_1 = tf.nn.relu(layer_1)
    layer_2 = tf.add(tf.matmul(layer_1, params[2]), params[3])
    #output = tf.nn.softmax(layer_2)
    output = tf.nn.sigmoid(layer_2)
    return output

def next_batch(num, dataX, dataY):
    idx = np.arange(0, len(dataX))
    np.random.shuffle(idx)
    idx = idx[0:num]
    dataX_shuffle = [dataX[i] for i in idx]
    dataY_shuffle = [dataY[i] for i in idx]
    dataX_shuffle = np.asarray(dataX_shuffle)
    dataY_shuffle = np.asarray(dataY_shuffle)
    return dataX_shuffle, dataY_shuffle

if __name__ == "__main__":
    #sess = tf.InteractiveSession()
    learning_rate = 0.001
    training_epochs = 10
    batch_size = 100
    display_step = 1
    num_feature_1st = 6
    num_feature_2nd = 500
    num_class = 2

    x = tf.placeholder('float', [None, 6])
    y = tf.placeholder('float', [None, 2])

    data = dp.dataPrepare(dp.datas, dp.path)
    trainX = data[0]
    testX = data[1]   # a matrix
    trainY = data[2]  # a vector with binary numbers
    testY = data[3]

    params = random_init(x, num_feature_1st, num_feature_2nd, num_class)

    # construct model
    pred = multilayer_perceptron(x, num_feature_1st, num_feature_2nd, num_class)
    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(pred, y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)
        # train
        for epoch in range(training_epochs):
            avg_cost = 0
            total_batch = int(len(trainX[:, 0]) / batch_size)
            for i in range(total_batch):
                batch_x, batch_y = next_batch(batch_size, trainX, trainY)
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
                avg_cost += c / total_batch
            if epoch % display_step == 0:
                print("Epoch: ", "%04d" % (epoch+1), " cost= ", "{:.9f}".format(avg_cost))
        print("Optimization Finished!")
Whenever you execute a dynamic node from the computation graph - which is pretty much any node that is not an input - you need to specify all dependent variables. Think about it this way: if you had a mathematical function of the form

y = f(x) = Ax + b (for example)

and you wanted to evaluate that function, you would need to specify x as well. You need not, however, specify x if you wanted to evaluate (i.e. read) the value of A, since A is known (at least in this context).
Consequently, you can evaluate (by passing it to tf.Session.run(...)) the parameters of your network without specifying the inputs (A in the example above). You cannot, however, evaluate the output of your functions without specifying the inputs (in the example, you need to specify x).
As for your code, the following line will thus not work:

print(sess.run(pred))

since you ask the session to evaluate a function without specifying its inputs.
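Separately, the ValueError in the traceback comes from feeding a (100,) label vector into the placeholder y, which is declared with shape (?, 2). A minimal sketch of one way to fix it (assuming trainY holds 0/1 class labels; variable names follow the question):

import numpy as np

batch_y = np.asarray(batch_y, dtype=np.int64)          # shape (100,), values 0 or 1
batch_y_onehot = np.eye(2, dtype=np.float32)[batch_y]  # one-hot, shape (100, 2)
# then feed the encoded labels instead:
# sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y_onehot})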

Is a step response enough information for deconvolution?

I have a measurement system which responds to a step (green line) with an exponential decline (blue line, which would be the measured data).
I want to go back from the blue line to the green line using deconvolution. Is this step response already sufficient information for the deconvolution, or would it be necessary to have the impulse response?
Thanks for your help,
I had the same problem. I think it can be addressed using the fact that the Dirac delta is the derivative of the Heaviside function: you just need to take the numerical derivative of your step response and use it as the impulse response for the deconvolution.
Here is an example:
import numpy as np
from scipy.special import erfinv, erf
from scipy.signal import deconvolve, convolve, resample, decimate, resample_poly
from numpy.fft import fft, ifft, ifftshift

def deconvolve_fun(obs, signal):
    """Find convolution filter

    Finds convolution filter from observation and impulse response.
    Noise-free signal is assumed.
    """
    signal = np.hstack((signal, np.zeros(len(obs) - len(signal))))
    Fobs = np.fft.fft(obs)
    Fsignal = np.fft.fft(signal)
    filt = np.fft.ifft(Fobs / Fsignal)
    return filt

def wiener_deconvolution(signal, kernel, lambd=1e-3):
    """Applies Wiener deconvolution to find true observation from signal and filter

    The function can also be used to estimate the filter from the true signal and observation.
    """
    # zero pad the kernel to the same length
    kernel = np.hstack((kernel, np.zeros(len(signal) - len(kernel))))
    H = fft(kernel)
    deconvolved = np.real(ifft(fft(signal) * np.conj(H) / (H * np.conj(H) + lambd**2)))
    return deconvolved

def get_signal(time, offset_x, offset_y, reps=4, lambd=1e-3):
    """Model step response as error function"""
    ramp_up = erf(time * multiplier)
    ramp_down = 1 - ramp_up
    if (reps % 1) == 0.5:
        signal = np.hstack((np.zeros(offset_x),
                            ramp_up)) + offset_y
    else:
        signal = np.hstack((np.zeros(offset_x),
                            np.tile(np.hstack((ramp_up, ramp_down)), reps),
                            np.zeros(offset_x))) + offset_y
    signal += np.random.randn(*signal.shape) * lambd
    return signal

def make_filter(signal, offset_x):
    """Obtain filter from response to step function

    Takes derivative of Heaviside to get Dirac. Avoids zeros at both ends.
    """
    # impulse response; the step function is the integral of the Dirac delta
    hvsd = signal[offset_x:]
    dirac = np.gradient(hvsd)  # + offset_y
    dirac = dirac[dirac > 0.0001]
    return dirac, hvsd

def get_step(time, offset_x, offset_y, reps=4):
    """Creates true step response"""
    ramp_up = np.heaviside(time, 0)
    ramp_down = 1 - ramp_up
    step = np.hstack((np.zeros(offset_x),
                      np.tile(np.hstack((ramp_up, ramp_down)), reps),
                      np.zeros(offset_x))) + offset_y
    return step

# Worst case scenario from specs: signal time t98% < 60 s at 25 °C
multiplier = erfinv(0.98) / 60
offset_y = .01
offset_x = 300
reps = 1
time = np.arange(301)
lambd = 0
sampling_time = 3  # s

signal = get_step(time, offset_x, offset_y, reps=reps)
filter = get_signal(time, offset_x, offset_y, reps=0.5, lambd=lambd)
filter, hvsd = make_filter(filter, offset_x)
observation = get_signal(time, offset_x, offset_y, reps=reps, lambd=lambd)
assert len(signal) == len(observation)

observation_est = convolve(signal, filter, mode="full")[:len(observation)]
signal_est = wiener_deconvolution(observation, filter, lambd)[:len(observation)]
filt_est = wiener_deconvolution(observation, signal, lambd)[:len(filter)]
This will allow you to obtain these two figures:
[Figure: Heaviside and Dirac]
[Figure: Signal and Filter Estimate]
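For reference, the filtering step in wiener_deconvolution above implements the standard Wiener deconvolution, with the noise-to-signal ratio simplified to a constant:

$$\hat{X}(f) = \frac{Y(f)\,H^{*}(f)}{|H(f)|^{2} + \lambda^{2}}$$

where Y is the FFT of the observation, H the FFT of the kernel, and lambda the regularization constant.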
You should also benefit from checking other related posts and the example of Wiener deconvolution that I partly use in my code.
Let me know if this helps.
