Difference between example Acrobot plant A matrix and standard form - drake

In section 3.4.1 of the Underactuated Robotics notes (https://underactuated.mit.edu/acrobot.html#section4), the manipulator equations are linearized around a fixed point and the matrix A_lin is derived.
While verifying the linearization of my own attempt at building an acrobot, I used the Python notebook provided in Example 3.5 (LQR for the Acrobot and Cart-Pole) to obtain the A matrix of the linearized acrobot (the plant from the Examples module). I did this by simply adding 'print(linearized_acrobot.A())' on line 21 of the LQR-for-the-Acrobot block. Interestingly, I noticed that the bottom-right 2x2 block is nonzero, which differs from the form derived in the notes. What is the reason for the difference? For convenience, I'll leave the code below:
import matplotlib.pyplot as plt
import mpld3
import numpy as np
from IPython.display import HTML, display
from pydrake.all import (AddMultibodyPlantSceneGraph, ControllabilityMatrix,
                         DiagramBuilder, Linearize, LinearQuadraticRegulator,
                         MeshcatVisualizerCpp, Parser, Saturation, SceneGraph,
                         Simulator, StartMeshcat, WrapToSystem)
from pydrake.examples.acrobot import (AcrobotGeometry, AcrobotInput,
                                      AcrobotPlant, AcrobotState)
from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve
from underactuated import FindResource, running_as_notebook
from underactuated.meshcat_cpp_utils import MeshcatSliders
from underactuated.quadrotor2d import Quadrotor2D, Quadrotor2DVisualizer

if running_as_notebook:
    mpld3.enable_notebook()


def UprightState():
    state = AcrobotState()
    state.set_theta1(np.pi)
    state.set_theta2(0.)
    state.set_theta1dot(0.)
    state.set_theta2dot(0.)
    return state


def acrobot_controllability():
    acrobot = AcrobotPlant()
    context = acrobot.CreateDefaultContext()

    input = AcrobotInput()
    input.set_tau(0.)
    acrobot.get_input_port(0).FixValue(context, input)

    context.get_mutable_continuous_state_vector()\
        .SetFromVector(UprightState().CopyToVector())

    linearized_acrobot = Linearize(acrobot, context)
    print(linearized_acrobot.A())
    print(
        f"The singular values of the controllability matrix are: {np.linalg.svd(ControllabilityMatrix(linearized_acrobot), compute_uv=False)}"
    )


acrobot_controllability()

Great question. The AcrobotPlant in Drake has default parameters that include some joint friction (viscous damping on both joints), which is what produces the nonzero elements in that bottom-right block. If you amend your code with
acrobot = AcrobotPlant()
context = acrobot.CreateDefaultContext()
params = acrobot.get_mutable_parameters(context)
print(params)
params.set_b1(0)
params.set_b2(0)
then the bottom-right 2x2 block of the linearized A is zero, as expected.
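To see why the friction shows up exactly there: the derivation in the notes assumes zero damping, so the partial derivatives of the accelerations with respect to the velocities vanish at the fixed point; the default damping torques -b1*theta1dot and -b2*theta2dot make those partials nonzero. As a quick check, here is a minimal sketch combining the two snippets above (same upright fixed point, just with the damping zeroed before calling Linearize):

acrobot = AcrobotPlant()
context = acrobot.CreateDefaultContext()

# zero the default joint damping before linearizing
params = acrobot.get_mutable_parameters(context)
params.set_b1(0)
params.set_b2(0)

# fix the input and move to the upright fixed point, as in the question
input = AcrobotInput()
input.set_tau(0.)
acrobot.get_input_port(0).FixValue(context, input)
context.get_mutable_continuous_state_vector()\
    .SetFromVector(UprightState().CopyToVector())

linearized_acrobot = Linearize(acrobot, context)
print(linearized_acrobot.A())  # the bottom-right 2x2 block should now be zero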

Related

Geographic points extend beyond expected boundary

I have a point geometry of US locations contained in a GeoDataFrame.
I want to plot this as a scatterplot over the US map.
My code is:
import numpy as np
import geopandas as gpd
import libpysal
import contextily as ctx
import matplotlib.pyplot as plt
from shapely.ops import cascaded_union
gdf = gpd.GeoDataFrame(point_geometry, geometry='geometry')
boundary = gpd.read_file(libpysal.examples.get_path('us48.shp'))
fig, ax = plt.subplots(figsize=(50, 50))
boundary.plot(ax=ax, color="gray")
gdf.plot(ax=ax, markersize=3.5, color="black")
ax.axis("off")
plt.axis("equal")
plt.show()
Upon inspecting the graph, the dots are outside my expected bounds.
Is there something I am missing?
Do I need to create a boundary to limit the scatter of the dots?
The plot looks good. I guess you want to exclude the points outside the conterminous USA; those points are clearly in Hawaii, Alaska, and Canada.
From your GeoDataFrame with point geometry, gdf, and the one with polygon geometry, boundary, you can create a proper boundary that can be used to limit the scatter of the points.
# need this module
from shapely.ops import cascaded_union
# create the conterminous USA polygon
poly_union = cascaded_union([poly for poly in boundary.geometry])
# get a selection from `gdf`, taking points within `poly_union`
points_within = gdf[gdf.geometry.within(poly_union)]
Now, points_within is a geodataframe that you can use to plot instead of gdf.
points_within.plot(ax=ax, markersize=3.5, color="black")
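As a side note, cascaded_union is deprecated in newer Shapely releases; if you are on a recent GeoPandas/Shapely stack (an assumption about your versions), the GeoSeries unary_union property gives the same polygon without the extra import:

# same conterminous-USA polygon, built from the GeoSeries property instead
poly_union = boundary.geometry.unary_union
points_within = gdf[gdf.geometry.within(poly_union)]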

pyqtgraph LUT histogram element how to apply same transform to the numpy array separately

I made a GUI to edit a 16-bit grayscale image. Everything looks good in the GUI, but I need to repeat a step the GUI does for me on my own. I used pyqtgraph, and the ImageView widget provides a histogram feature.
If I move the yellow bars, I can change the minimum and maximum intensity range; in this case, setting it to 1500-10000 makes the image visible.
I need to repeat that image-processing step without using the GUI. I took a look at the source code, and it mentions a look-up table (LUT) used to perform the calculation, but I didn't understand the code well enough to find where that step is being done so I could implement it myself.
Any help on how to apply a look-up table transformation to a 16-bit image would be appreciated.
import sys
import cv2
import numpy as np
import pyqtgraph as pg
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
import pco
from PyQt5.QtWidgets import QScrollArea
import time
class MainWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        img_tif = cv2.imread("my_file.tif", cv2.IMREAD_ANYDEPTH)
        img_tifr = cv2.rotate(img_tif, cv2.ROTATE_90_COUNTERCLOCKWISE)
        img = np.asarray(img_tifr)
        self.image = pg.image()
        self.image.getHistogramWidget().setLevels(0, 50000)
        self.image.ui.menuBtn.hide()
        self.image.ui.roiBtn.hide()
        self.image.setImage(img)


def main():
    app = QApplication(sys.argv)
    main_window = MainWindow()
    app.exec_()
    sys.exit(0)


if __name__ == '__main__':
    main()
I ended up finding an answer by following this:
How to convert a 16 bit to an 8 bit image in OpenCV?
Hope it helps anyone else.
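For anyone who lands here, a minimal NumPy sketch of what that level adjustment amounts to (my own reconstruction of the idea, not pyqtgraph's internal LUT code): clip the 16-bit data to the chosen min/max levels and rescale that range to 0-255. The 1500/10000 levels are just the example values from the question.

import numpy as np

def apply_levels(img16, level_min=1500, level_max=10000):
    """Map the [level_min, level_max] range of a 16-bit image to an 8-bit image."""
    img = np.clip(img16.astype(np.float32), level_min, level_max)
    img = (img - level_min) / (level_max - level_min)   # now in [0, 1]
    return (img * 255).astype(np.uint8)

# usage: img8 = apply_levels(img, 1500, 10000)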

How to show kdeplot in a 5*4 subplot?

I am working on a machine learning project and am using seaborn's kdeplot to show the features after standard scaling. However, no matter how large I make the figure, the graphs won't show and I get the error: AttributeError: 'numpy.ndarray' object has no attribute 'plot'. The image I want to produce is a 5*4 subplot grid that looks like this:
expected subplot image
#feature scaling
#since numerical attributes have very different scales,
#we use standardization to get all attributes to have the same scale
import pandas as pd
import numpy as np
from sklearn import preprocessing
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
matplotlib.style.use('ggplot')
scaler = preprocessing.StandardScaler()
scaled_df = scaler.fit_transform(train_set)
scaled_df = pd.DataFrame(scaled_df, columns=[
    "SaleAmount", "SaleCount", "ReturnAmount", "ReturnCount",
    "KeyedAmount", "KeyedCount", "VoidRejectAmount", "VoidRejectCount",
    "RetrievalAmount", "RetrievalCount", "ChargebackAmount", "ChargebackCount",
    "DepositAmount", "DepositCount", "NetDeposit",
    "AuthorizationAmount", "AuthorizationCount",
    "DeclinedAuthorizationAmount", "DeclinedAuthorizationCount"])
fig, axes = plt.subplots(figsize=(20,10), ncols=5, nrows=4)
sns.kdeplot(scaled_df['SaleAmount'], ax=axes[0])
sns.kdeplot(scaled_df['SaleCount'], ax=axes[1])
sns.kdeplot(scaled_df['ReturnAmount'], ax=axes[2])
sns.kdeplot(scaled_df['ReturnCount'], ax=axes[3])
sns.kdeplot(scaled_df['KeyedAmount'], ax=axes[4])
sns.kdeplot(scaled_df['KeyedCount'], ax=axes[5])
sns.kdeplot(scaled_df['VoidRejectAmount'], ax=axes[6])
sns.kdeplot(scaled_df['VoidRejectCount'], ax=axes[7])
sns.kdeplot(scaled_df['RetrievalAmount'], ax=axes[8])
sns.kdeplot(scaled_df['RetrievalCount'], ax=axes[9])
sns.kdeplot(scaled_df['ChargebackAmount'], ax=axes[10])
sns.kdeplot(scaled_df['ChargebackCount'], ax=axes[11])
sns.kdeplot(scaled_df['DepositAmount'], ax=axes[12])
sns.kdeplot(scaled_df['DepositCount'], ax=axes[13])
sns.kdeplot(scaled_df['NetDeposit'], ax=axes[14])
sns.kdeplot(scaled_df['AuthorizationAmount'], ax=axes[15])
sns.kdeplot(scaled_df['AuthorizationCount'], ax=axes[16])
sns.kdeplot(scaled_df['DeclinedAuthorizationAmount'], ax=axes[17])
sns.kdeplot(scaled_df['DeclinedAuthorizationCount'], ax=axes[18])
You need to take into account that axes is a two-dimensional array here (shape (4, 5)), so you must index it with a row and a column. For example, the last plot (index 18) goes in row 3, column 3:
sns.kdeplot(scaled_df['DeclinedAuthorizationCount'], ax=axes[3, 3])
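Rather than indexing each subplot by hand, a compact alternative (a sketch assuming the scaled_df defined in the question) is to flatten the axes array and loop over the columns:

fig, axes = plt.subplots(figsize=(20, 10), ncols=5, nrows=4)

# axes is a 2D (4, 5) array; flatten it so each column gets the next axis in order
for col, ax in zip(scaled_df.columns, axes.flatten()):
    sns.kdeplot(scaled_df[col], ax=ax)

# only 19 columns, so hide the unused 20th axis
axes.flatten()[-1].set_visible(False)
plt.tight_layout()
plt.show()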

Tensorflow, object detection API

Is there a way to view the images that the TensorFlow Object Detection API trains on after all preprocessing/augmentation?
I'd like to verify that things look correct. I was able to verify the resizing by looking at the graph post-resize during inference, but I obviously can't do that for the augmentation options.
TIA
I answered a similar question here.
You can utilize the test script provided by the API and make some changes to fit your needs.
I wrote a little test script called augmentation_test.py. It borrows some code from input_test.py:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import functools
import os
from absl.testing import parameterized

import numpy as np
import tensorflow as tf
from scipy.misc import imsave, imread

from object_detection import inputs
from object_detection.core import preprocessor
from object_detection.core import standard_fields as fields
from object_detection.utils import config_util
from object_detection.utils import test_case

FLAGS = tf.flags.FLAGS


class DataAugmentationFnTest(test_case.TestCase):

    def test_apply_image_and_box_augmentation(self):
        data_augmentation_options = [
            (preprocessor.random_horizontal_flip, {
            })
        ]
        data_augmentation_fn = functools.partial(
            inputs.augment_input_data,
            data_augmentation_options=data_augmentation_options)
        tensor_dict = {
            fields.InputDataFields.image:
                tf.constant(imread('lena.jpeg').astype(np.float32)),
            fields.InputDataFields.groundtruth_boxes:
                tf.constant(np.array([[.5, .5, 1., 1.]], np.float32))
        }
        augmented_tensor_dict = data_augmentation_fn(tensor_dict=tensor_dict)
        with self.test_session() as sess:
            augmented_tensor_dict_out = sess.run(augmented_tensor_dict)
        imsave('lena_out.jpeg',
               augmented_tensor_dict_out[fields.InputDataFields.image])


if __name__ == '__main__':
    tf.test.main()
You can put this script under models/research/object_detection/ and simply run it with python augmentation_test.py (of course you need to install the API first). To run it successfully you should provide an image named 'lena.jpeg', and the output image after augmentation will be saved as 'lena_out.jpeg'.
I ran it with the 'lena' image; the results before and after augmentation were shown as images here.
Note that I used preprocessor.random_horizontal_flip in the script, and the result showed exactly what the input image looks like after random_horizontal_flip. To test other augmentation options, you can replace random_horizontal_flip with other methods (which are all defined in preprocessor.py), or you can append other options to the data_augmentation_options list, for example:
data_augmentation_options = [
    (preprocessor.resize_image, {
        'new_height': 20,
        'new_width': 20,
        'method': tf.image.ResizeMethod.NEAREST_NEIGHBOR
    }),
    (preprocessor.random_horizontal_flip, {
    })
]

Probable issue with LSTM in lasagne

With a simple constructor for the LSTM, as given in the tutorial, and an input of dimension [,,1] one would expect to see an output of shape [,,num_units].
But regardless of the num_units passed during construction, the output has the same shape as the input.
Following is the minimal code to replicate the issue:
import lasagne
import theano
import theano.tensor as T
import numpy as np
num_batches= 20
sequence_length= 100
data_dim= 1
train_data_3= np.random.rand(num_batches,sequence_length,data_dim).astype(theano.config.floatX)
#As in the tutorial
forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
l_lstm = lasagne.layers.LSTMLayer(
    (num_batches, sequence_length, data_dim),
    num_units=8,
    forgetgate=forget_gate
)
lstm_in= T.tensor3(name='x', dtype=theano.config.floatX)
lstm_out = lasagne.layers.get_output(l_lstm, {l_lstm:lstm_in})
f = theano.function([lstm_in], lstm_out)
lstm_output_np= f(train_data_3)
lstm_output_np.shape
#= (20, 100, 1)
An unqualified LSTM (I mean one in its default mode) should produce one output for each unit, right?
The code was run on Kaixhin's CUDA Lasagne Docker image.
What gives?
Thanks!
You can fix that by using a lasagne.layers.InputLayer:
import lasagne
import theano
import theano.tensor as T
import numpy as np
num_batches= 20
sequence_length= 100
data_dim= 1
train_data_3= np.random.rand(num_batches,sequence_length,data_dim).astype(theano.config.floatX)
#As in the tutorial
forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
input_layer = lasagne.layers.InputLayer(shape=(num_batches,                # <-- change
                                               sequence_length, data_dim))  # <-- change
l_lstm = lasagne.layers.LSTMLayer(input_layer,  # <-- change
                                  num_units=8,
                                  forgetgate=forget_gate
                                  )
lstm_in= T.tensor3(name='x', dtype=theano.config.floatX)
lstm_out = lasagne.layers.get_output(l_lstm, lstm_in) # <-- change
f = theano.function([lstm_in], lstm_out)
lstm_output_np= f(train_data_3)
print(lstm_output_np.shape)
If you feed your input through the input_layer, it is no longer ambiguous, so you do not even need to specify where the input is supposed to go. Directly specifying a shape and mapping the tensor3 onto the LSTM layer itself does not work.
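If you prefer to keep the explicit dict form from the original snippet, my understanding of the Lasagne API is that the key should be the InputLayer rather than the LSTM layer itself, e.g. (a small sketch reusing the variables defined above):

# map the symbolic input onto the InputLayer, not onto the LSTMLayer
lstm_out = lasagne.layers.get_output(l_lstm, {input_layer: lstm_in})
f = theano.function([lstm_in], lstm_out)
print(f(train_data_3).shape)  # expected: (20, 100, 8)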
