Pretty print Expressions in Drake

Is there some built-in function to print expressions in a readable way? For example, this expression:
<Expression "( - (x(2) * (2 * (x(0) * P(0,0)) + (x(1) * P(1,0)) + (x(1) * P(0,1)) + (x(2) * P(2,0)) + (x(2) * P(0,2)) + (x(3) * P(3,0)) + (x(3) * P(0,3)) + 2 * (lambda(0) * Q(0,0)) + 2 * (lambda(1) * Q(0,1)))) - (x(3) * ((x(0) * P(1,0)) + (x(0) * P(0,1)) + 2 * (x(1) * P(1,1)) + (x(2) * P(2,1)) + (x(2) * P(1,2)) + (x(3) * P(3,1)) + (x(3) * P(1,3)) + 2 * (lambda(0) * Q(1,0)) + 2 * (lambda(1) * Q(1,1)))) - ((9.8000000000000007 * x(1) + (x(0) * K(0)) + (x(1) * K(1)) + (x(2) * K(2)) + (x(3) * K(3))) * ((x(0) * P(2,0)) + (x(0) * P(0,2)) + (x(1) * P(2,1)) + (x(1) * P(1,2)) + 2 * (x(2) * P(2,2)) + (x(3) * P(3,2)) + (x(3) * P(2,3)) + 2 * (lambda(0) * Q(2,0)) + 2 * (lambda(1) * Q(2,1)))) - ((19.600000000000001 * x(1) + lambda(0) - lambda(1) + (x(0) * K(0)) + (x(1) * K(1)) + (x(2) * K(2)) + (x(3) * K(3))) * ((x(0) * P(3,0)) + (x(0) * P(0,3)) + (x(1) * P(3,1)) + (x(1) * P(1,3)) + (x(2) * P(3,2)) + (x(2) * P(2,3)) + 2 * (x(3) * P(3,3)) + 2 * (lambda(0) * Q(3,0)) + 2 * (lambda(1) * Q(3,1)))))">
would be a lot easier to read if it were organized by the matrix and variable dot products.

If you're working in a notebook workflow, then we have a ToLatex method.
My favorite way to use it is, e.g.:
from IPython.display import Markdown, display
from pydrake.all import ToLatex, MultibodyPlant, Parser
from underactuated import FindResource, ManipulatorDynamics
plant = MultibodyPlant(time_step=0)
parser = Parser(plant)
parser.AddModelFromFile(FindResource("models/double_pendulum.urdf"))
plant.Finalize()
# Evaluate the dynamics numerically
q = [0.1, 0.1]
v = [1, 1]
(M, Cv, tauG, B, tauExt) = ManipulatorDynamics(plant, q, v)
display(Markdown("$M = " + ToLatex(M, precision=2) + "$"))
display(Markdown("$Cv = " + ToLatex(Cv, precision=2) + "$"))
display(Markdown("$\\tau_G = " + ToLatex(tauG, precision=2) + "$"))
display(Markdown("$B = " + ToLatex(B, precision=2) + "$"))
display(Markdown("$\\tau_{ext} = " + ToLatex(tauExt, precision=2) + "$"))
as seen in the intro chapter notebook from my underactuated course notes: https://deepnote.com/project/52e7e101-429f-4aef-a373-e4cca7980cfe/%2Fintro.ipynb
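ToLatex also accepts a scalar symbolic Expression (or a numpy array of them), so you can render an expression like the one in the question directly, assuming your pydrake is recent enough to export ToLatex as in the snippet above. A minimal sketch, with a made-up quadratic form standing in for the real expression (MakeVectorVariable and the matrix P are only there to build a toy example):

import numpy as np
from IPython.display import Markdown, display
from pydrake.all import MakeVectorVariable, ToLatex

# Toy example: render the quadratic form x' P x as LaTeX.
x = MakeVectorVariable(2, "x")          # vector of symbolic Variables
P = np.array([[1.0, 2.0], [2.0, 4.0]])
e = x.dot(P.dot(x))                     # a symbolic Expression
display(Markdown("$" + ToLatex(e, precision=2) + "$"))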

Related

Infinity objective value given by CVXPY on a convex program

I am solving a convex problem using cvxpy. The constraints are rather simple; there are 3 variables, but we could eliminate one. The objective is convex and involves entropy and logarithm terms. The solution is correct, in the sense that the variables have the expected values. However, the objective value should be around -1.06, but it is reported as infinite. Is there a bug in evaluating the expressions involved?
#!/usr/bin/env python3
import cvxpy as cx
import numpy as np
from math import log
def entr(x):
    return -x * log(x)

def check_obj(a, b, c):
    return -entr(2.0) + -2.0 * log(2.0) + -entr(1.0 + a) + -1.0 + a * log(2.0) + -entr(2.0 + a) -2.0 + a * log(1.0) -entr(1.0 + a + b + c) + -1.0 + a + b + c * log(2.0) + -entr(2.0) + -2.0 * log(2.0) + -entr(1.0 + b) -1.0 + b * log(2.0) + -entr(2.0 + b) + -2.0 + b * log(1.0) -entr(1.0 + b + a + c) -1.0 + b + a + c * log(2.0)
a = cx.Variable(name='a')
b = cx.Variable(name='b')
c = cx.Variable(name='c')
obj = -cx.entr(2.0) + -2.0 * cx.log(2.0) + -cx.entr(1.0 + a) + -1.0 + a * cx.log(2.0) + -cx.entr(2.0 + a) -2.0 + a * cx.log(1.0) -cx.entr(1.0 + a + b + c) + -1.0 + a + b + c * cx.log(2.0) + -cx.entr(2.0) + -2.0 * cx.log(2.0) + -cx.entr(1.0 + b) -1.0 + b * cx.log(2.0) + -cx.entr(2.0 + b) + -2.0 + b * cx.log(1.0) -cx.entr(1.0 + b + a + c) -1.0 + b + a + c * cx.log(2.0)
p = cx.Problem(cx.Minimize(obj), [0 <= a, 0<= b, 0 <= c, a + b + c == 1])
p.solve()
# should be 'optimal' and indeed it is
print(p.status)
# the following two values should be the same, but p.value is infinite and should be around -1.06
print(p.value)
print(check_obj(a.value, b.value, c.value))
It looks like a bug in the entropy atom. I fixed it and made a pull request here. It is merged now. If you run your code with the latest cvxpy from the master branch it should give correct results.
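Continuing from the script in the question, a quick sanity check once you are on a cvxpy build that includes that fix (this is just a sketch, not part of the original answer): the reported optimum, the objective expression evaluated at the solution, and the hand-written check should all agree at roughly -1.06.

p.solve()
print(p.status)                              # should be 'optimal'
print(p.value)                               # should be about -1.06, not inf
print(obj.value)                             # the cvxpy expression evaluated at the solution
print(check_obj(a.value, b.value, c.value))  # hand-computed value for comparison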

Wilson Score in Rails + Add Random Number to Upvote in Postgres Select Statement

I am trying to add a randomized number to my Wilson Score select statement.
My Post model has columns 'upvotes_count' and 'downvotes_count' that get incremented and decremented according to how many upvotes and downvotes they have (duh..).
Post.select("((upvotes_count + 1 + 1.9208) / (upvotes_count + downvotes_count + 2) - " +
"1.96 * SQRT(((upvotes_count + 1) * (downvotes_count + 1)) / (upvotes_count + downvotes_count + 2) + 0.9604) / " +
"(upvotes_count + downvotes_count + 2)) / (1 + 3.8416 / (upvotes_count + downvotes_count + 2)) " +
"AS ci_lower_bound, posts.*")
.order("ci_lower_bound DESC")
Note: In theory, the Wilson Score doesn't allow for cases where both upvotes_count and downvotes_count equal 0, so I cheat and add one upvote and one downvote to each post record. I know this is mathematical "blasphemy", so please don't hurt me, math gods. Otherwise, I wouldn't get post records with zero upvotes and zero downvotes... and that sucks.
What would be ideal is a select statement that adds a different random number (Poisson, preferably) to the upvotes_count of each post, giving its Wilson Score an artificial bump so it occasionally appears higher on the list.
Question: How do I add a different random number to upvotes_count for each post selected in my select statement?
Something like this...but obviously it doesn't work...
Post.select("((upvotes_count + 1 + random + 1.9208) / (upvotes_count + downvotes_count + 2) - " +
"1.96 * SQRT(((upvotes_count + 1 + random) * (downvotes_count + 1)) / (upvotes_count + downvotes_count + 2) + 0.9604) / " +
"(upvotes_count + random + downvotes_count + 2)) / (1 + 3.8416 / (upvotes_count + random + downvotes_count + 2)) " +
"AS ci_lower_bound, posts.*")
This answer may be blasphemy too, and I'm not sure I would ever do this, but here's how you could probably do what you're asking:
# app/models/post.rb
class Post < ActiveRecord::Base
  def self.custom_select_query
    select("((upvotes_count + 1 + #{random(100)} + 1.9208) / (upvotes_count + downvotes_count + 2) - " +
           "1.96 * SQRT(((upvotes_count + 1 + #{random(99)}) * (downvotes_count + 1)) / (upvotes_count + downvotes_count + 2) + 0.9604) / " +
           "(upvotes_count + #{random(98)} + downvotes_count + 2)) / (1 + 3.8416 / (upvotes_count + #{random(97)} + downvotes_count + 2)) " +
           "AS ci_lower_bound, posts.*")
  end

  # `private` does not apply to `def self.` methods, so mark it explicitly.
  def self.random(max)
    SecureRandom.random_number(max)
  end
  private_class_method :random
end
Then call it with Post.custom_select_query.
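Note that the snippet above interpolates Ruby-generated numbers into the SQL string once per query, so every post in a given result set gets the same bump. If you want a different random number per row, one option is to let Postgres generate it, since random() is re-evaluated for each row. A sketch (not the original answer's code; the uniform 0-4 bump and the bumped alias are stand-ins, not a Poisson draw):

# app/models/post.rb (per-row randomness pushed into Postgres)
class Post < ActiveRecord::Base
  def self.randomized_wilson_select
    # Compute one bump per row in a subquery so every reference to "bumped"
    # uses the same value for that post; floor(random() * 5) gives 0..4.
    from("(SELECT posts.*, upvotes_count + floor(random() * 5) AS bumped FROM posts) posts")
      .select("((bumped + 1 + 1.9208) / (bumped + downvotes_count + 2) - " +
              "1.96 * SQRT(((bumped + 1) * (downvotes_count + 1)) / (bumped + downvotes_count + 2) + 0.9604) / " +
              "(bumped + downvotes_count + 2)) / (1 + 3.8416 / (bumped + downvotes_count + 2)) " +
              "AS ci_lower_bound, posts.*")
      .order("ci_lower_bound DESC")
  end
end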

Need help getting for loop to iterate correctly

I am working on this for homework and have most of it done. My problem is that something is wrong in one of the for loops: the display I get is only one line, when it should be a line for every value in the range 50,000 to 60,000 with a step of 50.
def computeTax(status, taxableIncome):
    print("Taxable Income/" + "\t" + "Single/" + "\t" + "Married Joint/" + \
          "\t" + "Married Separate/" + " " + "Head of Household")
    for taxableIncome in range(50000, 60001, 50):
        for status in range(1, 5, 1):
            if (status == 1): #tax calculation for single person
                tax1 = 8350 * .10 + (33950 - 8350) * 0.15 + \
                       (taxableIncome - 33950) * 0.25
                tax1 = round(tax1,0)
            if (status == 2): #tax calculation for married file jointly
                tax2 = 16700 * .10 + (67900 - 16700) * 0.15 + \
                       (taxableIncome - 67900) * 0.15
                tax2 = round(tax2,0)
            if (status == 3): #tax calculation for married file separately
                tax3 = 8350 * .10 + (33950 - 8350) * 0.15 + \
                       (taxableIncome - 33950) * 0.25
                tax3 = round(tax3,0)
            if (status == 4): #tax calculation for head of household
                tax4 = 11950 * .10 + (45500 - 11950) * 0.15 + \
                       (taxableIncome - 45500) * 0.25
                tax4 = round(tax4,0)
    print(str(tax1) + "\t" + str(tax2) + "\t" + str(tax3) + "\t" + str(tax4))
    return (status, taxableIncome)

computeTax(range(1, 5, 1), range(50000, 60001, 50))
I'm not familiar with Python, but do you have to define code blocks like you would with PHP?
E.g.:
if (status == 1): #tax calculation for single person
{
tax1 = 8350 * .10 + (33950 - 8350) * 0.15 + \
(taxableIncome - 33950) * 0.25
tax1 = round(tax1,0)
}
Like I said, I'm not familiar with Python, but I thought I might have an answer for you; if I'm completely off, then sorry and good luck.
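Python delimits blocks with indentation rather than braces, so the symptom here (one printed line) points to the final print sitting outside the loops: it only runs once, after both loops have finished. A sketch of one possible fix, not the original poster's exact code (the status loop is dropped since each bracket can be computed directly, and the print moves inside the income loop):

def computeTax():
    print("Taxable Income\tSingle\tMarried Joint\tMarried Separate\tHead of Household")
    for taxableIncome in range(50000, 60001, 50):
        # Same bracket formulas as in the question, one per filing status.
        single = round(8350 * .10 + (33950 - 8350) * 0.15 + (taxableIncome - 33950) * 0.25, 0)
        joint = round(16700 * .10 + (67900 - 16700) * 0.15 + (taxableIncome - 67900) * 0.15, 0)
        separate = round(8350 * .10 + (33950 - 8350) * 0.15 + (taxableIncome - 33950) * 0.25, 0)
        head = round(11950 * .10 + (45500 - 11950) * 0.15 + (taxableIncome - 45500) * 0.25, 0)
        print(f"{taxableIncome}\t{single}\t{joint}\t{separate}\t{head}")

computeTax()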

Recurrence relation: T(n) = T(n - 1) + n - 1

I am trying to understand how to solve recurrence relations. I understand it to the point where we have to simplify.
T(N) = T(N-1) + N - 1, with initial condition T(1) = O(1) = 1
T(N) = T(N-1) + N-1
T(N-1) = T(N-2) + N-2
T(N-2) = T(N-3) + N-3
……
T(2) = T(1) + 1
**Summing up right and left sides**
T(N) + T(N-1) + T(N-2) + T(N-3) + ... + T(3) + T(2) =
    T(N-1) + T(N-2) + T(N-3) + ... + T(3) + T(2) + T(1) +
    (N-1) + (N-2) + (N-3) + ... + 3 + 2 + 1
**Canceling like terms and simplifying**
T(N) = T(1) + N*(N - 1)/2
T(N) = 1 + N*(N - 1)/2
I really don't understand the last part. I understand canceling like terms but don't understand how the simplification below works:
T(N) = T(1) + (N-1) + (N-2) + (N-3) + ... + 3 + 2 + 1
T(N) = T(1) + N*(N - 1)/2
How is the second line derived from the first? It doesn't make any sense to me.
It would be a great help if someone could help me understand this. Thanks =)
In your second-to-last line:
S = (N-1) + (N-2) + (N-3) + ... + 3 + 2 + 1
You can say:
2S = S + S
   = (N-1) + (N-2) + (N-3) + ... +   3   +   2   +   1
   +   1   +   2   +   3   + ... + (N-3) + (N-2) + (N-1)
   =   N   +   N   +   N   + ... +   N   +   N   +   N
       |________________ N-1 times ________________|
You're counting from N - 1 down to 1, so there are N - 1 terms in the sequence, and each column sums to just N, so you can say:
2S = N * (N - 1)
S = (N * (N - 1)) / 2
So in your last chunk:
T(N) = T(1) + (N-1) + (N-2) + (N-3) + ... + 3 + 2 + 1
= T(1) + (N * (N - 1)) / 2
T(N) = T(1) + (N-1) + (N-2) + (N-3) + ... + 3 + 2 + 1
     = T(1) + (N-1) + (N-2) + (N-3) + ... + (N-(N-3)) + (N-(N-2)) + (N-(N-1))
     = T(1) + [N + N + N + ... (N-1 times)] - [1 + 2 + 3 + ... + (N-3) + (N-2) + (N-1)]
     = T(1) + N*(N-1) - N*(N-1)/2
     = T(1) + N*(N-1)/2
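If it helps to convince yourself, here is a small numeric check (assuming T(1) = 1, as in the question) that the recurrence and the closed form agree:

# T(N) = T(N-1) + N - 1 with T(1) = 1, versus the closed form 1 + N*(N-1)/2.
def T(n):
    return 1 if n == 1 else T(n - 1) + n - 1

for n in range(1, 11):
    assert T(n) == 1 + n * (n - 1) // 2
    print(n, T(n))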

Recurrence Relation Homework Struggles

Here's the question:
Solve the recurrence by obtaining a theta bound for T(n) given that T(1) = theta(1).
T(n) = n + T(n-3)
Attempted Solution:
T(n) = T(n-6) + (n-3) + n
= T(n-9) + (n-6) + (n-3) + n
= T(n-(n-1)) + [(n-n) + (n-(n-3)) + (n-(n-6)) + ... + n]
= T(1) + [0 + 3 + 6 + ... + n]
= theta(1) + 3[1 + 2 + 3 + ... + n/3]
= theta(1) + 3[(n/3)(n/3 + 1)]/2
= theta(1) + (n^2+3n)/6
When I double check to see if the solution fits the recurrence, it doesn't work.
The issue is that you were computing the wrong summation.
It doesn't start at 0: your last T term is T(n-(n-1)), which means the previous one was T(n-(n-4)), so the summation starts at 4 and goes up to n.
If you don't know how to evaluate this summation, I'd suggest looking at proofs of the arithmetic series formula. This is what the solution looks like:
T(n) = T(n-3) + n
     = T(n-6) + (n-3) + n
     = T(n-(n-1)) + [(n-(n-4)) + (n-(n-7)) + ... + n]
     = T(1) + [4 + 7 + ... + n]          (an arithmetic series with (n-1)/3 terms)
     = theta(1) + (4 + n) * (n - 1)/6
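A quick numeric check of that bound (assuming T(1) = 1 and n of the form 3k + 1, so the expansion bottoms out exactly at T(1)):

# T(n) = n + T(n-3) with T(1) = 1, versus theta(1) + (4 + n)(n - 1)/6 with theta(1) = 1.
def T(n):
    return 1 if n == 1 else n + T(n - 3)

for n in range(1, 40, 3):
    assert T(n) == 1 + (4 + n) * (n - 1) // 6
    print(n, T(n))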

Resources