Approximately compare decimal attributes of a class defined with `attrs` - python-attrs

I have a Thing class with a float x attribute, and I want to approximately compare two instances of Thing with a relative tolerance of 1e-5.
import attr

@attr.s
class Thing(object):
    x: float = attr.ib()

>>> Thing(3.141592) == Thing(3.1415926535)  # I want this to be True with a relative tolerance of 1e-5
False
Do I need to override the __eq__ method or is there a clean way of telling attr to use math.isclose() or a custom comparison function?

Yes, setting eq=False and implementing your own __eq__/__ne__ is the way to go. In this case your need is so specific that I wouldn't even know how to abstract it away without making it confusing.
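For reference, a minimal sketch of that approach (my own illustration, not part of the original answer), combining eq=False with math.isclose and the question's rel_tol of 1e-5:

import math
import attr

@attr.s(eq=False)  # keep attrs from generating __eq__/__ne__ so our own are used
class Thing(object):
    x: float = attr.ib()

    def __eq__(self, other):
        if not isinstance(other, Thing):
            return NotImplemented
        return math.isclose(self.x, other.x, rel_tol=1e-5)

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

assert Thing(3.141592) == Thing(3.1415926535)  # now passes

Note that defining __eq__ by hand also makes Thing unhashable unless you define __hash__ yourself.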

Related

How to implement custom constraint using C++?

I am trying to reimplement littledog.ipynb using C++. I find it hard to translate the function velocity_dynamics_constraint, and I have 3 questions:
What is the function of ad_velocity_dynamics_context? Can we ignore it?
How to reimplement velocity_dynamics_constraint using C++? Do I have to create a new class like class VelocityDynamicsConstraint : public drake::solvers::Constraint? Is there any easier way to implement it?
Why do we need to consider the isinstance(vars[0], AutoDiffXd) condition?
# Some code from https://github.com/RussTedrake/underactuated/blob/master/examples/littledog.ipynb
ad_velocity_dynamics_context = [
    ad_plant.CreateDefaultContext() for i in range(N)
]

def velocity_dynamics_constraint(vars, context_index):
    h, q, v, qn = np.split(vars, [1, 1+nq, 1+nq+nv])
    if isinstance(vars[0], AutoDiffXd):
        if not autoDiffArrayEqual(
                q,
                ad_plant.GetPositions(
                    ad_velocity_dynamics_context[context_index])):
            ad_plant.SetPositions(
                ad_velocity_dynamics_context[context_index], q)
        v_from_qdot = ad_plant.MapQDotToVelocity(
            ad_velocity_dynamics_context[context_index], (qn - q) / h)
    else:
        if not np.array_equal(q, plant.GetPositions(
                context[context_index])):
            plant.SetPositions(context[context_index], q)
        v_from_qdot = plant.MapQDotToVelocity(context[context_index],
                                              (qn - q) / h)
    return v - v_from_qdot

for n in range(N-1):
    prog.AddConstraint(partial(velocity_dynamics_constraint,
                               context_index=n),
                       lb=[0] * nv,
                       ub=[0] * nv,
                       vars=np.concatenate(
                           ([h[n]], q[:, n], v[:, n], q[:, n + 1])))
What is the function of ad_velocity_dynamics_context? Can we ignore it?
The context caches the intermediate computation results for a given q, v, u. It is very common that several constraints are imposed on the same set of q, v, u. For example, at the final time we typically have a kinematic constraint on the final state (say the robot foot has to land on the ground and its center of mass is at a certain location), and at the same time we have the velocity dynamics constraint on the final state. These different constraints can share some intermediate computation results, such as the rigid transform between each pair of adjacent links. Hence we cache the results in ad_velocity_dynamics_context, and this ad_velocity_dynamics_context can be used later when we impose other constraints.
How to reimplement velocity_dynamics_constraint using C++? Do I have to create a new class like class VelocityDynamicsConstraint : public drake::solvers::Constraint? Is there any easier way to implement it?
That is right, you will need to create a new class VelocityDynamicsConstraint. The main challenge in implementing this class is to write the three overloaded DoEval functions for the three scalar types (double, AutoDiffXd, symbolic::Expression). You can refer to PositionConstraint as a reference. For the moment you can ignore the case of calling DoEval(const Eigen::Ref<const AutoDiffXd>&, AutoDiffXd*) with a MultibodyPlant<double>, and only implement that DoEval overload with MultibodyPlant<AutoDiffXd>.
Why do we need to consider the isinstance(vars[0], AutoDiffXd) condition?
Because when the scalar type is AutoDiffXd, we want to compare not only the value of q against the one stored in the context, but also its gradient. If either is different, then we need to call SetPositions to recompute the cache. When the scalar type is double, we only need to compare the value.
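The autoDiffArrayEqual helper used in the snippet above is not shown there. As a rough sketch of what such a value-and-gradient comparison can look like (this assumes pydrake's ExtractValue/ExtractGradient helpers and is not necessarily the notebook's exact implementation):

import numpy as np
from pydrake.autodiffutils import ExtractGradient, ExtractValue

def autoDiffArrayEqual(a, b):
    # Treat two AutoDiffXd arrays as equal only if both the values and the
    # stacked gradient matrices match; matching values alone is not enough
    # to safely reuse the cached context.
    return (np.array_equal(ExtractValue(a), ExtractValue(b))
            and np.array_equal(ExtractGradient(a), ExtractGradient(b)))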

Shorter way to convert an empty string to an int, and then clamp

I'm looking for a way to simplify the code for the following logic:
Take a value that is either a nil or an empty string
Convert that value to an integer
Set zero values to the maximum value (empty string/nil are converted to 0 when cast as an int)
.clamp the value between a minimum and a maximum
Here's the long form that works:
minimum = 1
maximum = 10_000
value = value.to_i
value = maximum if value.zero?
value = value.clamp(minimum, maximum)
So for example, if value is "", I should get 10,000. If value is "15", I should get 15. If value is "45000", I should get 10000.
Is there a way to shorten this logic, assuming that minimum and maximum are defined and that the default value is the maximum?
The biggest problem I've had in shortening it is that null-coalescing doesn't work on the zero, since Ruby considers zero a truthy value. Otherwise, it could be a one-liner.
You could still do a one-liner with your current logic:
minimum, maximum = 1, 10_000
value = (value.to_i.zero? ? maximum : value.to_i).clamp(minimum, maximum)
But I'm not sure whether your issue is that entering '0' should give you 1 rather than 10_000; if so, then try this:
minimum, maximum = 1, 10_000
value = (value.to_i if Float(value) rescue maximum).clamp(minimum, maximum)
Consider Fixing the Input Object or Method
If you're messing with String objects when you expect an Integer, you're probably dealing with user input. If that's the case, the problem should really be solved through input validation and/or looping over an input prompt elsewhere in your program rather than trying to perform input transformations inline.
Duck-typing is great, but I suspect you have a broken contract between methods or objects. As a general rule, it's better to fix the source of the mismatch unless you're deliberately wrapping some piece of code that shouldn't be modified. There are a number of possible refactorings and patterns if that's the case.
One such solution is to use a collaborator object or method for information hiding. This enables you to perform your input transformations without complicating your inline logic, and allows you to access the transformed value as a simple method call such as user_input.value.
Turning a Value into a Collaborator Object
If you are just trying to tighten up your current method you can aim for shorter code, but I'd personally recommend aiming for maintainability instead. Pragmatically, that means sending your value to the constructor of a specialized object, and then asking that object for a result. As a bonus, this allows you to use a default variable assignment to handle nil. Consider the following:
class MaximizeUnsetInputValue
  MIN = 1
  MAX = 10_000

  def initialize(value = MAX)
    @value = value
    set_empty_to_max
  end

  def set_empty_to_max
    @value = MAX if @value.to_i.zero?
  end

  def value
    @value.clamp MIN, MAX
  end
end
You can easily validate that this handles your various use cases while hiding the implementation details inside the collaborator object's methods. For example:
inputs_and_expected_outputs = [
  [0, 10000],
  [1, 1],
  [10, 10],
  [10001, 10000],
  [nil, 10000],
  ['', 10000]
]

inputs_and_expected_outputs.map do |input, expected|
  MaximizeUnsetInputValue.new(input).value == expected
end
#=> [true, true, true, true, true, true]
There are certainly other approaches, but this is the one I'd recommend based on your posted code. It isn't shorter, but I think it's readable, maintainable, adaptable, and reusable. Your mileage may vary.

Provide custom gradient to drake::MathematicalProgram

Drake has an interface where you can give it a generic function as a constraint and it can set up the nonlinearly-constrained mathematical program automatically (as long as it supports AutoDiff). I have a situation where my constraint does not support AutoDiff (the constraint function conducts a line search to approximate the maximum value of some function), but I have a closed-form expression for the gradient of the constraint. In my case, the math works out so that it's difficult to find a point on this function, but once you have that point it's easy to linearize around it.
I know many optimization libraries will allow you to provide your own analytical gradient when available; can you do this with Drake's MathematicalProgram as well? I could not find mention of it in the MathematicalProgram class documentation.
Any help is appreciated!
It's definitely possible, but I admit we haven't provided helper functions that make it pretty yet. Please let me know if/how this helps; I will plan to tidy it up and add it as an example or code snippet that we can reference in drake.
Consider the following code:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')

def cost(x):
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        print(x[0].value())
        print(x[0].derivatives())
    return x

cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
When we register the cost or constraint with MathematicalProgram in this way, we are allowing that it can get called with either x being a float, or x being an AutoDiffXd -- which is simply a wrapping of Eigen's AutoDiffScalar (with dynamically allocated derivatives of type double). The snippet above shows you roughly how it works -- every scalar value has a vector of (partial) derivatives associated with it. On entry to the function, you are passed x with the derivatives of x set to dx/dx (which will be 1 or zero).
Your job is to return a value, call it y, with the value set to the value of your cost/constraint, and the derivatives set to dy/dx. Normally, all of this happens magically for you. But it sounds like you get to do it yourself.
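For intuition, here is a small example of my own (not part of the original answer) showing what those values and derivative vectors look like when you seed them by hand:

import numpy as np
from pydrake.autodiffutils import AutoDiffXd

# Manually seed a 2-vector x with dx/dx = identity.
x_ad = np.array([AutoDiffXd(1.5, np.array([1.0, 0.0])),
                 AutoDiffXd(-0.3, np.array([0.0, 1.0]))])
print(x_ad[0].value())        # 1.5
print(x_ad[0].derivatives())  # [1. 0.]

# Arithmetic on AutoDiffXd propagates the derivatives automatically.
y = x_ad[0] * x_ad[1]
print(y.value())              # -0.45
print(y.derivatives())        # [-0.3  1.5], i.e. [x1, x0]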
Here's a very simple code snippet that, I hope, gets you started:
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')

def cost(x):
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        y = AutoDiffXd(2*x[0].value(), 2*x[0].derivatives())
        return [y]
    return 2*x

cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
Let me know?
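To connect this back to the original question: the same pattern lets you return a value computed by a non-differentiable routine together with your own closed-form gradient. A minimal sketch, assuming hypothetical placeholder helpers line_search_value() and closed_form_gradient() standing in for the line search and the analytical gradient described in the question:

import numpy as np
from pydrake.all import AutoDiffXd

def line_search_value(x_value):
    # Placeholder for the non-differentiable constraint evaluation.
    return np.array([x_value[0] ** 2])

def closed_form_gradient(x_value):
    # Placeholder for the analytical Jacobian dy/dx, shape (num_outputs, len(x)).
    return np.array([[2.0 * x_value[0]]])

def my_constraint(x):
    if isinstance(x[0], AutoDiffXd):
        x_value = np.array([xi.value() for xi in x])
        dx = np.vstack([xi.derivatives() for xi in x])   # d(x)/d(decision vars)
        y_value = line_search_value(x_value)
        dy = closed_form_gradient(x_value) @ dx          # chain rule: dy/dx * dx/dvars
        return [AutoDiffXd(y_value[i], dy[i]) for i in range(len(y_value))]
    return line_search_value(x)

You would register it just like the constraint above, e.g. prog.AddConstraint(my_constraint, lb=[0.], ub=[2.], vars=x).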

How to solve Mathematical Expressions in Rails 4 like 6000*70%?

I am using the Dentaku gem to evaluate slightly complex expressions, like "basic salary is 70% of gross salary". The formulas are user-editable, which is why I went with Dentaku.
When I initialize the calculator with calculator = Dentaku::Calculator.new and then run calculator.evaluate("60000*70%"), I get an error like the one below:
Dentaku::ParseError: Dentaku::AST::Modulo requires numeric operands
from /Users/sulman/.rbenv/versions/2.2.3/lib/ruby/gems/2.2.0/gems/dentaku-2.0.8/lib/dentaku/ast/arithmetic.rb:11:in `initialize'
I have an array in which the formula is stored, like ["EarningItem-5","*","6","7","%"], where EarningItem-5 is an object whose value is 60000.
How can I resolve such expressions?
For this particular case you can use basic_salary = gross_salary * 0.7
Next you need to create a number field in your views that accepts the 0..100 range. Finally, set up an after_create callback and use this code:
model

  after_create :percent_to_float

  protected

  def percent_to_float
    self.percent = percent / 100.0
    self.save
  end
Edit:
Of course, you can simply use this formula without any callbacks:
basic_salary = gross_salary / 100.0 * 70
where 70 is the user-defined value.
Dentaku does not appear to support "percent". Try this instead:
calculator.evaluate('60000 * 0.7')

How to enforce the number of significant digits of a BigDecimal

I defined a decimal field with a scale (digits after the decimal point) of 4 in MySQL (e.g. 10.0001). ActiveRecord returns it as a BigDecimal.
I can set the field in ActiveRecord with a scale of 5 (e.g. 10.00001) and save it, which effectively truncates the value (it's stored as 10.0000).
Is there a way to prevent this? I already looked at the BigDecimal class to see if there is a way to force the scale, but couldn't find one. I could calculate the scale of a BigDecimal and return a validation error, but I wonder if there is a nicer way to enforce it.
You could add a before_save handler for your class and include logic to round at your preference, for example:
class MyRecord < ActiveRecord::Base
  SCALE = 4

  before_save :round_decimal_field

  def round_decimal_field
    self.decimal_field = decimal_field.round(SCALE, BigDecimal::ROUND_UP)
  end
end

r = MyRecord.new(:decimal_field => 10.00009)
r.save!
r.decimal_field # => 10.0001
The scale factor might even be assignable automatically by reading the schema somehow.
See the ROUND_* constant names in the Ruby BigDecimal class documentation for other rounding modes.
