I'm working with ROS_DJI_OSDK version 3.7 and a DJI Matrice 600. Until now I have used TRACE_POINT mode (defined in MissionWaypointTask.msg) for my autonomous missions and everything worked fine. But I would like to do the same with TRACE_COORDINATED mode, with some damping between waypoints.
The problem is that even if I set the damping to 0 I get this error message:
STATUS/1 # getErrorCodeMessage, L656: missionWpUploadCallback
STATUS/1 # getCMDSetMissionMSG, L883: WAYPOINT_MISSION_CHECK_FAILED
[ INFO] [1538563753.039855552]: waypoint mission initialized and uploaded
[ WARN] [1538563753.040152078]: ack.info: set = 3 id = 17
[ WARN] [1538563753.040214866]: ack.data: 231
[ WARN] [1538563753.040261163]: Failed sending waypoint upload command
An ack.data value of 231 means the damping check failed. But whatever I set the damping to, the result stays the same.
I've read here that there is some restriction on damping:
"Actually we don't have the limitation for the 1/2 distance. However, we have the restriction that the Damping distance for Waypoint A plus the damping distance for Waypoint B should be smaller than the distance between A and B." But with damping equal to 0 or another small value, this restriction should be satisfied.
Is there something I've missed here?
This is my default configuration of the whole MissionWaypointTask and each WaypointSettings:
waypoint_task.velocity_range = 10;
waypoint_task.idle_velocity = 5;
waypoint_task.action_on_finish = dji_sdk::MissionWaypointTask::FINISH_NO_ACTION;
waypoint_task.mission_exec_times = 1;
waypoint_task.yaw_mode = dji_sdk::MissionWaypointTask::YAW_MODE_AUTO;
waypoint_task.trace_mode = dji_sdk::MissionWaypointTask::TRACE_COORDINATED;
waypoint_task.action_on_rc_lost = dji_sdk::MissionWaypointTask::ACTION_AUTO;
waypoint_task.gimbal_pitch_mode = dji_sdk::MissionWaypointTask::GIMBAL_PITCH_FREE;
waypoint_settings.damping = 0;
waypoint_settings.yaw = 0;
waypoint_settings.gimbalPitch = 0;
waypoint_settings.turnMode = 0;
waypoint_settings.hasAction = 0;
waypoint_settings.actionTimeLimit = 100;
waypoint_settings.actionNumber = 0;
waypoint_settings.actionRepeat = 0;
for (int i = 0; i < 16; ++i) {
    waypoint_settings.commandList[i] = 0;
    waypoint_settings.commandParameter[i] = 0;
}
OK, I've figured it out. The whole time I was setting damping_distance to 0, which is not supported by the DJI SDK. Other (positive) values seem to be valid.
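In case it helps anyone else hitting ack.data = 231, here is a small sanity check that can be run before uploading (a plain-Python sketch with a hypothetical helper, not part of dji_sdk): damping must be positive, and adjacent damping distances must not overlap, per the restriction quoted above.

import math

# Hypothetical validator: waypoints are (x, y) in metres, dampings in metres.
def damping_is_valid(waypoints, dampings):
    if any(d <= 0 for d in dampings):   # damping == 0 is rejected (ack.data 231)
        return False
    for a, b, da, db in zip(waypoints, waypoints[1:], dampings, dampings[1:]):
        dist = math.hypot(b[0] - a[0], b[1] - a[1])
        if da + db >= dist:             # turns at A and B must not overlap
            return False
    return True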
I'm trying to run an optimization with SNOPT.
Right now as I run it I consistently get exit condition 41.
I've added the following parameters to the solver:
prog.SetSolverOption(solver.solver_id(), "Function Precision", "1e-6")
prog.SetSolverOption(solver.solver_id(), "Major Optimality Tolerance", "1e-3")
prog.SetSolverOption(solver.solver_id(), "Superbasics limit", "600")
#print("Trying to Solve")
result = solver.Solve(prog)
print(result.is_success())
But I still consistently get the same exit condition.
The error seems to come from the interpolation function I'm using (when I remove it, I no longer get the error).
t, c, k = interpolate.splrep(sref, kapparef, s=0, k=3)
kappafnc = interpolate.BSpline(t, c, k, extrapolate=False)
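For what it's worth, with extrapolate=False a scipy BSpline silently returns NaN outside the knot range instead of raising, which can poison a cost or gradient evaluation. A minimal sketch with made-up stand-in data:

import numpy as np
from scipy import interpolate

sref = np.linspace(0.0, 10.0, 50)   # stand-ins for the real track data
kapparef = np.sin(sref)
t, c, k = interpolate.splrep(sref, kapparef, s=0, k=3)
kappafnc = interpolate.BSpline(t, c, k, extrapolate=False)
print(kappafnc(5.0))    # inside the data range: finite value
print(kappafnc(-1.0))   # outside the data range: nan, not an exception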
Here is the function that I determined was causing the issue:
def car_continous_dynamics(self, state, force, steering):
    # State: s = distance along the track, n = perpendicular distance to the track,
    # alpha = velocity angle relative to the track, v = magnitude of the car's velocity.
    # cos, sin, tanh are assumed imported elsewhere in an autodiff-compatible form.
    beta = steering
    s = state[0]
    n = state[1]
    alpha = state[2]
    v = state[3]
    s_val = 0
    if s.value() > 0:
        s_val = s.value()
    # Model constants
    Cm1 = 0.28
    Cm2 = 0.05
    Cr0 = 0.011
    Cr2 = 0.006
    m = 0.043
    phi_dot = v * beta * 15.5
    Fxd = (Cm1 - Cm2 * v) * force - Cr2 * v * v - Cr0 * tanh(5 * v)
    s_dot = v * cos(alpha + .5 * beta) / (1)  # -n*self.kappafunc(s_val))
    n_dot = v * sin(alpha + .5 * beta)
    alpha_dot = phi_dot  # -s_dot*self.kappafunc(s_val)
    v_dot = Fxd / m * cos(beta * .5)
    # Concatenate into the state derivative vector
    state_dot = np.array((s_dot, n_dot, alpha_dot, v_dot))
    return state_dot
Debugging INFO 41 in SNOPT is generally challenging; it is often caused by a bad gradient (but could also be due to the problem being really hard to optimize).
You should check if your gradient is well-behaved. I see you have
s_val = 0
if s.value() > 0:
    s_val = s.value()
There are two potential problems:
Your s carries the gradient (you can check s.derivatives(); it has a non-zero gradient), but s_val is a plain float with no gradient, so you lose the gradient information.
s_val is non-differentiable at 0.
I would use s instead of s_val in your function (without the if s.value() > 0 branch), and also add the constraint s >= 0 via
prog.AddBoundingBoxConstraint(0, np.inf, s)
With this bounding box constraint, SNOPT guarantees that the value of s is non-negative at every iteration.
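A minimal sketch of this setup (pydrake assumed; the variable names are illustrative, and the import path may differ between Drake versions):

import numpy as np
from pydrake.solvers import MathematicalProgram

prog = MathematicalProgram()
state = prog.NewContinuousVariables(4, "state")  # s, n, alpha, v
s = state[0]
# Keep s symbolic inside the dynamics (no s.value() branch), and let the
# solver enforce non-negativity instead:
prog.AddBoundingBoxConstraint(0, np.inf, s)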
I'm currently implementing symbol time recovery blocks. The idea is to be able to choose different TEDs (Gardner, zero-crossing, early-late, maximum-likelihood, etc.). In blocks like the M&M recovery, the gain parameters of the loop are expressed explicitly (gain_omega and gain_mu), which can be difficult to get right. The control_loop class is, however, more convenient: loop characteristics can be specified by "loop bandwidth" and "damping factor" (zeta). So my first test started with a re-implementation of the M&M clock recovery with a control loop. The work function of this block is shown below (comments are mine):
int
clock_recovery_mm_ff_impl::general_work(int noutput_items,
                                        gr_vector_int &ninput_items,
                                        gr_vector_const_void_star &input_items,
                                        gr_vector_void_star &output_items)
{
    const float *in = (const float *)input_items[0];
    float *out = (float *)output_items[0];

    int ii = 0;                                   // input index
    int oo = 0;                                   // output index
    int ni = ninput_items[0] - d_interp->ntaps(); // don't use more input than this
    float mm_val;

    while (oo < noutput_items && ii < ni) {
        // produce output sample
        out[oo] = d_interp->interpolate(&in[ii], d_mu); // interpolation
        mm_val = slice(d_last_sample) * out[oo] - slice(out[oo]) * d_last_sample; // error calculation
        d_last_sample = out[oo];

        // loop filtering
        d_omega = d_omega + d_gain_omega * mm_val; // frequency
        d_omega = d_omega_mid + gr::branchless_clip(d_omega - d_omega_mid, d_omega_lim); // bound the frequency
        d_mu = d_mu + d_omega + d_gain_mu * mm_val; // phase

        ii += (int)floor(d_mu);    // basepoint index
        d_mu = d_mu - floor(d_mu); // fractional interval
        oo++;
    }

    consume_each(ii);
    return oo;
}
Here is my code. First, the control loop is initialized in the constructor:
loop(new gr::blocks::control_loop(0.02, (1 + d_omega_relative_limit) * omega,
                                  (1 - d_omega_relative_limit) * omega))
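For context, the convenience of control_loop is that it derives the two loop gains from the bandwidth and damping factor. A minimal plain-Python sketch of that mapping (it mirrors the standard second-order loop design used by GNU Radio's control_loop as far as I can tell; treat the exact formula as an assumption):

def loop_gains(loop_bw, damping=0.7071):  # damping ~ 1/sqrt(2), a common default
    denom = 1.0 + 2.0 * damping * loop_bw + loop_bw * loop_bw
    alpha = 4.0 * damping * loop_bw / denom  # proportional (phase) gain, cf. gain_mu
    beta = 4.0 * loop_bw * loop_bw / denom   # integral (frequency) gain, cf. gain_omega
    return alpha, beta

print(loop_gains(0.02))  # the loop bandwidth passed to the constructor above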
First of all, I would like to clear up a couple of doubts I have regarding the PLL (the control_loop above) in symbol timing recovery, particularly the phase and frequency ranges (which are in turn used for wrapping). Taking an analogy from the Costas loop: the carrier phase is wrapped between -2pi and +2pi, and the frequency offset is tracked between -1 and +1; it is quite straightforward to see why. Unfortunately, I can't get my head around phase and frequency tracking in symbol recovery. In the M&M block, the frequency is tracked between (1 + omega_relative_limit)*omega and (1 - omega_relative_limit)*omega, where omega is simply the number of samples per symbol, and the phase is tracked between 0 and omega. I don't understand why this is so, and why the M&M block doesn't wrap the phase. Any ideas here will be appreciated.
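To make the question concrete, here is how I currently picture the phase update (a minimal plain-Python simulation with the TED error term omitted): the "phase" advances by omega samples per symbol, the integer part moves the basepoint index, and only the fractional part is kept, which is effectively the wrap.

import math

omega = 4.0  # nominal samples per symbol
mu = 0.3     # fractional interval, 0 <= mu < 1
ii = 0       # basepoint (input) index

for _ in range(5):
    mu += omega                # advance one symbol period (error term omitted)
    ii += int(math.floor(mu))  # integer part advances the basepoint index
    mu -= math.floor(mu)       # fractional part stays in [0, 1)
    print(ii, mu)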
And here is my work function
int
debug_time_recovery_pam_test_1_impl::general_work(int noutput_items,
                                                  gr_vector_int &ninput_items,
                                                  gr_vector_const_void_star &input_items,
                                                  gr_vector_void_star &output_items)
{
    const float *in = (const float *)input_items[0];
    float *out = (float *)output_items[0];

    int ii = 0;                                   // input index
    int oo = 0;                                   // output index
    int ni = ninput_items[0] - d_interp->ntaps(); // don't use more input than this
    float mm_val;

    while (oo < noutput_items && ii < ni) {
        // produce output sample
        out[oo] = d_interp->interpolate(&in[ii], d_mu);
        // calculate the error
        mm_val = slice(d_last_sample) * out[oo] - slice(out[oo]) * d_last_sample;
        d_last_sample = out[oo];

        // loop filtering
        loop->advance_loop(mm_val); // filter the error
        loop->frequency_limit();    // stop the frequency from wandering too far

        // read back the loop phase and frequency
        d_omega = loop->get_frequency();
        d_mu = loop->get_phase();
        //d_omega = d_omega + d_gain_omega * mm_val;
        //d_omega = d_omega_mid + gr::branchless_clip(d_omega - d_omega_mid, d_omega_lim);
        //d_mu = d_mu + d_omega + d_gain_mu * mm_val;

        ii += (int)floor(d_mu);    // basepoint index
        d_mu = d_mu - floor(d_mu); // fractional interval
        oo++;
    }

    consume_each(ii);
    return oo;
}
I have tried to use the block in a GFSK demodulator, and I got this error:
python: /build/gnuradio-bJXzXK/gnuradio-3.7.9.1/gnuradio-runtime/include/gnuradio/buffer.h:177: unsigned int gr::buffer::index_add(unsigned int, unsigned int): Assertion `s < d_bufsize' failed.
The first Google search on this error suggests that I'm somehow "abusing" the scheduler, since the assertion fires somewhere below the API. I think my calculation of d_omega and d_mu from the control loop is a bit naive, but unfortunately I don't know any other way of doing it. Another alternative would be to use a modulo-1 counter (incrementing or decrementing), but I want to explore this option first.
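For reference, this is roughly what I mean by the modulo-1 counter alternative (a minimal plain-Python sketch; the decrementing convention and the mu formula are one common choice, not the only one): the counter decrements by 1/omega per input sample, an underflow marks a symbol strobe, and the residual gives the interpolator's fractional interval.

eta = 0.0    # mod-1 counter state
omega = 4.0  # samples per symbol

for n in range(12):             # iterate over input samples
    eta -= 1.0 / omega
    if eta < 0.0:               # underflow -> output a symbol here
        mu = 1.0 + eta * omega  # fractional interval in [0, 1)
        print("strobe at sample", n, "mu =", mu)
        eta += 1.0              # wrap modulo 1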
I am using the LIBSVM Java package for a sentence classification task. I have 3 classes, and every sentence is represented as a vector of size 435. The format of vector_file is as follows:
1 0 0.12 0 0.5 0.24 0.32 0 0 0 ... 0.43 0
The first digit is the class label and the rest is the vector.
The following is how I am making the svm_problem:
public void makeSvmProb(ArrayList<Float> inputVector, float label, int p) {
    // p is 0 to 77 (total training sentences)
    int idx = 0, count = 0;
    svm_prob.y[p] = label;
    for (int i = 0; i < inputVector.size(); i++) {
        if (inputVector.get(i) != 0) {
            count++; // count of non-zero values
        }
    }
    svm_node[] x = new svm_node[count];
    for (int i = 0; i < inputVector.size(); i++) {
        if (inputVector.get(i) != 0) {
            x[idx] = new svm_node();
            x[idx].index = i;
            x[idx].value = inputVector.get(i);
            idx++;
        }
    }
    svm_prob.x[p] = x;
}
Parameter settings:
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.RBF;
param.degree = 3;
param.gamma = 0.5;
param.coef0 = 0;
param.nu = 0.5;
param.cache_size = 40;
param.C = 1;
param.eps = 1e-3;
param.p = 0.1;
param.shrinking = 1;
param.probability = 0;
param.nr_weight = 0;
param.weight_label = new int[0];
param.weight = new double[0];
While executing the program, after two optimization runs, I get a NullPointerException. I couldn't figure out what is going wrong. This is the error:
optimization finished, #iter = 85
nu = 0.07502654779820772
obj = -15.305162227093849, rho = -0.03157808477381625
nSV = 47, nBSV = 1
*
optimization finished, #iter = 88
nu = 0.08576821199868506
obj = -17.83925196551639, rho = 0.1297986754900152
nSV = 51, nBSV = 3
Exception in thread "main" java.lang.NullPointerException
at libsvm.Kernel.dot(svm.java:207)
at libsvm.Kernel.<init>(svm.java:199)
at libsvm.SVC_Q.<init>(svm.java:1156)
at libsvm.svm.solve_c_svc(svm.java:1333)
at libsvm.svm.svm_train_one(svm.java:1510)
at libsvm.svm.svm_train(svm.java:2067)
at SvmOp.<init>(SvmOp.java:130)
at Main.main(Main.java:8)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Any idea on what is going wrong?
The NullPointerException is thrown at line 207 of svm.java. Investigating the source code shows:
static double dot(svm_node[] x, svm_node[] y)
{
    double sum = 0;
    int xlen = x.length;
    ...
}
Line 207 is int xlen = x.length;, so one of your svm_node arrays (i.e. one of your vectors) must be null.
For this reason, we cannot really help you here, as we would need more information / source code to debug it.
I would go for the following strategy:
Inspect the svm_node objects in a debugger after you have finished building the svm_problem, and look for null values.
Check the build process of your svm_problem; the problem might be there.
Another possibility would be to change your data format to comply with the official LIBSVM format:
As stated in the documentation, the data uses a sparse format and should look like this:
<label> 0:i 1:K(xi,x1) ... L:K(xi,xL)
The ascending integer refers to the attribute or feature id, which is necessary for the internal representation of the vector.
I previously replied to a similar question here and added an example for the data format.
This format can be read out of the box as the code to construct the svm_problem is included in the library.
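For the default (non-precomputed) kernels, each line is simply <label> followed by index:value pairs, with ascending indices starting at 1 and zero-valued features omitted. A minimal sketch of that conversion (plain Python, illustrative only):

def to_libsvm_line(label, vector):
    # Emit only non-zero features; LIBSVM indices start at 1 and must ascend.
    feats = " ".join("%d:%s" % (i + 1, v) for i, v in enumerate(vector) if v != 0)
    return "%s %s" % (label, feats)

print(to_libsvm_line(1, [0, 0.12, 0, 0.5]))  # -> "1 2:0.12 4:0.5"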
Hi, how can I set the accelerometer zero point to the user's position, i.e. the angle at which the device is being held?
I am using:
delta = -40/180*math.pi -- -40 degrees
cos_delta, sin_delta = math.cos(delta), math.sin(delta)
to offset it, but how can I make the angle at which the device is held the zero point?
Extra code:
-- Speed of movement with tilt. You can change it and see the effects.
tiltSpeed = 30;
motionx = 0;
motiony = 0;
rotation = 0;
--delta = -50/180*math.pi -- 30 degrees, maybe should have minus sign
--cos_delta, sin_delta = math.cos(delta), math.sin(delta)
local function onTilt(event)
    motionx = tiltSpeed * event.xGravity
    motiony = tiltSpeed * (cos_delta*event.yGravity + sin_delta*event.zGravity)
end
-- Firstly, you need to get accelerometer's values in "zero position"
-- probably, inside onTilt event
local gy, gz = event.yGravity, event.zGravity
-- Secondly, update your cos_delta and sin_delta to remember "zero position"
local len = math.sqrt(gy*gy+gz*gz) * (gz < 0 and -1 or 1)
cos_delta = gz / len
sin_delta = -gy / len
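The same idea expressed outside Corona (a minimal plain-Python sketch with made-up sample values): capture one gravity reading in the user's rest pose, derive the cosine/sine of the offset angle, then rotate every later reading so that the rest pose reads as zero tilt.

import math

def calibrate(gy, gz):
    # One-off: derive cos/sin of the offset angle from a rest-pose reading.
    length = math.sqrt(gy * gy + gz * gz) * (1 if gz >= 0 else -1)
    return gz / length, -gy / length  # cos_delta, sin_delta

def tilt(cos_delta, sin_delta, y, z):
    # Per-event: tilt relative to the calibrated zero pose.
    return cos_delta * y + sin_delta * z

cos_d, sin_d = calibrate(0.5, -0.87)   # made-up rest-pose gravity reading
print(tilt(cos_d, sin_d, 0.5, -0.87))  # ~0.0: the rest pose maps to zero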
I keep getting this error when I try to compile, and I am not quite sure what the issue is. I am not the best at coding, so debugging is not my strong suit. Any ideas what's causing the error?
PROGRAM Atmospheric_Reflection
  IMPLICIT NONE
  REAL*8 :: Wavelength = 0.200D-6, WaveNumber = 0.0, pi = 3.14159265, Length(1:71) = 0, Number(1:71) = 0, IndexOfRe(1:71) = 0
  INTEGER :: i = 0

  ! Calculate the wave number for wavelengths varying from 0.2 micrometers to 1.6 micrometers in increments of 0.02 micrometers
  i = 1
  DO WHILE (Wavelength <= 1.600D-6)
    WaveNumber = 1/Wavelength
    Length(i) = Wavelength
    Number(i) = WaveNumber
    IndexOfRe(i) = 1 + (((5791817.0/(238.0185 - WaveNumber**2))+(167909.0/(57.362-WaveNumber**2))))D-8
    Wavelength = Wavelength + 0.02D-6
    i = i + 1
  END DO
END PROGRAM
The problem is the D-8 at the end of this line:
IndexOfRe(i) = 1 + (((5791817.0/(238.0185 - WaveNumber**2))+(167909.0/(57.362-WaveNumber**2))))D-8
A D exponent is only valid as part of a literal constant (e.g. 1.0D-8); it cannot be appended to a parenthesized expression. Multiply by the constant instead:
IndexOfRe(i) = 1 + ((5791817.0/(238.0185 - WaveNumber**2)) + (167909.0/(57.362 - WaveNumber**2))) * 1.0D-8
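As a quick sanity check of the corrected expression (a plain-Python sketch; note it assumes the wave number is expressed in inverse micrometres, the conventional unit for Edlén-style dispersion coefficients like these):

# Refractive index of air via the two-term dispersion formula above.
def index_of_refraction(wavelength_um):
    sigma2 = (1.0 / wavelength_um) ** 2  # squared wave number in um^-2
    return 1 + (5791817.0 / (238.0185 - sigma2) + 167909.0 / (57.362 - sigma2)) * 1e-8

print(index_of_refraction(0.5))  # ~1.00028, a plausible value for air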