I am using the LIBSVM Java package for a sentence classification task. I have 3 classes, and every sentence is represented as a vector of size 435. The format of vector_file is as follows:
1 0 0.12 0 0.5 0.24 0.32 0 0 0 ... 0.43 0
The first number is the class label and the rest is the vector.
The following is how I am making the svm_problem:
public void makeSvmProb(ArrayList<Float> inputVector, float label, int p) {
    // p is 0 to 77 (total training sentences)
    int idx = 0, count = 0;
    svm_prob.y[p] = label;
    for (int i = 0; i < inputVector.size(); i++) {
        if (inputVector.get(i) != 0) {
            count++; // to get the count of non-zero values
        }
    }
    svm_node[] x = new svm_node[count];
    for (int i = 0; i < inputVector.size(); i++) {
        if (inputVector.get(i) != 0) {
            x[idx] = new svm_node();
            x[idx].index = i;
            x[idx].value = inputVector.get(i);
            idx++;
        }
    }
    svm_prob.x[p] = x;
}
Parameter settings:
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.RBF;
param.degree = 3;
param.gamma = 0.5;
param.coef0 = 0;
param.nu = 0.5;
param.cache_size = 40;
param.C = 1;
param.eps = 1e-3;
param.p = 0.1;
param.shrinking = 1;
param.probability = 0;
param.nr_weight = 0;
param.weight_label = new int[0];
param.weight = new double[0];
While executing the program, I get a NullPointerException after two optimization passes. I couldn't figure out what is going wrong. This is the error:
optimization finished, #iter = 85
nu = 0.07502654779820772
obj = -15.305162227093849, rho = -0.03157808477381625
nSV = 47, nBSV = 1
*
optimization finished, #iter = 88
nu = 0.08576821199868506
obj = -17.83925196551639, rho = 0.1297986754900152
nSV = 51, nBSV = 3
Exception in thread "main" java.lang.NullPointerException
at libsvm.Kernel.dot(svm.java:207)
at libsvm.Kernel.<init>(svm.java:199)
at libsvm.SVC_Q.<init>(svm.java:1156)
at libsvm.svm.solve_c_svc(svm.java:1333)
at libsvm.svm.svm_train_one(svm.java:1510)
at libsvm.svm.svm_train(svm.java:2067)
at SvmOp.<init>(SvmOp.java:130)
at Main.main(Main.java:8)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Any idea what is going wrong?
The NullPointerException is thrown at line 207 of svm.java. Investigating the source code shows:
static double dot(svm_node[] x, svm_node[] y)
{
double sum = 0;
int xlen = x.length;
...
}
Line 207 is int xlen = x.length;, so one of your svm_node[] vectors is null.
For this reason, we cannot really help you further; we would need more information / source code to debug it.
I would go for the following strategy:
Investigate the svm_node objects in a debugger after you have finished building the svm_problem, and look for null values (see the sketch below).
Check the build process of your svm_problem; the problem might be there.
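For the null scan, here is a minimal sketch, assuming your svm_problem instance is called svm_prob as in your code. A common cause of such nulls is setting svm_prob.l (or sizing the svm_prob.x array) larger than the number of rows you actually fill, since new svm_node[n][] starts out with every entry null:

for (int i = 0; i < svm_prob.l; i++) {
    if (svm_prob.x[i] == null) {
        System.err.println("svm_prob.x[" + i + "] is null"); // this row was never filled
        continue;
    }
    for (int j = 0; j < svm_prob.x[i].length; j++) {
        if (svm_prob.x[i][j] == null) {
            System.err.println("svm_prob.x[" + i + "][" + j + "] is null");
        }
    }
}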
Another possibility would be to change your data format to be compliant with the official LIBSVM format.
As stated in the documentation, the data format is a sparse format and should look like this:
<label> <index1>:<value1> <index2>:<value2> ...
The ascending integer indices refer to the attribute or feature ids, which are necessary for the internal representation of the vector; they start at 1 (not 0), must appear in increasing order, and zero-valued features are simply omitted.
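For example, the sample row from the beginning of your question would become the following (zero entries omitted, the elided middle kept elided):

1 2:0.12 4:0.5 5:0.24 6:0.32 ... 434:0.43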
I previously replied to a similar question here and added an example of the data format.
This format can be read out of the box, as the code to construct an svm_problem from it is included in the library (see read_problem in svm_train.java).
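A condensed sketch of that parsing logic, modeled on read_problem in svm_train.java (error handling omitted):

import java.io.*;
import java.util.*;
import libsvm.*;

static svm_problem readProblem(String filename) throws IOException {
    Vector<Double> vy = new Vector<Double>();
    Vector<svm_node[]> vx = new Vector<svm_node[]>();
    BufferedReader fp = new BufferedReader(new FileReader(filename));
    String line;
    while ((line = fp.readLine()) != null) {
        StringTokenizer st = new StringTokenizer(line, " \t\n\r\f:");
        vy.addElement(Double.valueOf(st.nextToken()));     // class label
        int m = st.countTokens() / 2;                      // remaining tokens are index:value pairs
        svm_node[] x = new svm_node[m];
        for (int j = 0; j < m; j++) {
            x[j] = new svm_node();
            x[j].index = Integer.parseInt(st.nextToken()); // 1-based, ascending
            x[j].value = Double.parseDouble(st.nextToken());
        }
        vx.addElement(x);
    }
    fp.close();

    svm_problem prob = new svm_problem();
    prob.l = vy.size();
    prob.x = vx.toArray(new svm_node[prob.l][]);
    prob.y = new double[prob.l];
    for (int i = 0; i < prob.l; i++)
        prob.y[i] = vy.elementAt(i);
    return prob;
}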
Related
I'm working with ROS_DJI_OSDK version 3.7 and a DJI Matrice M600. Until now I have used the TRACE_POINT mode defined in MissionWaypointTask.msg here for my autonomous missions, and everything worked fine. But now I would like to do the same with TRACE_COORDINATED mode, with some damping between waypoints.
The problem is that even if I set damping to 0, I get this error message:
STATUS/1 # getErrorCodeMessage, L656: missionWpUploadCallback
STATUS/1 # getCMDSetMissionMSG, L883: WAYPOINT_MISSION_CHECK_FAILED
[ INFO] [1538563753.039855552]: waypoint mission initialized and uploaded
[ WARN] [1538563753.040152078]: ack.info: set = 3 id = 17
[ WARN] [1538563753.040214866]: ack.data: 231
[ WARN] [1538563753.040261163]: Failed sending waypoint upload command
An ack.data value of 231 corresponds to a "damping checking failed" message. But whatever I set the damping to, the result stays the same.
I've read here that there is a restriction on damping:
"Actually we don't have the limitation for the 1/2 distance. However, we have the restriction that the Damping distance for Waypoint A plus the damping distance for Waypoint B should be smaller than the distance between A and B"
But with damping equal to 0 or another small value, this restriction should be satisfied.
Is there something I've missed here?
This is my default configuration of the whole MissionWaypointTask and each WaypointSettings:
waypoint_task.velocity_range = 10;
waypoint_task.idle_velocity = 5;
waypoint_task.action_on_finish = dji_sdk::MissionWaypointTask::FINISH_NO_ACTION;
waypoint_task.mission_exec_times = 1;
waypoint_task.yaw_mode = dji_sdk::MissionWaypointTask::YAW_MODE_AUTO;
waypoint_task.trace_mode = dji_sdk::MissionWaypointTask::TRACE_COORDINATED;
waypoint_task.action_on_rc_lost = dji_sdk::MissionWaypointTask::ACTION_AUTO;
waypoint_task.gimbal_pitch_mode = dji_sdk::MissionWaypointTask::GIMBAL_PITCH_FREE;
waypoint_settings.damping = 0;
waypoint_settings.yaw = 0;
waypoint_settings.gimbalPitch = 0;
waypoint_settings.turnMode = 0;
waypoint_settings.hasAction = 0;
waypoint_settings.actionTimeLimit = 100;
waypoint_settings.actionNumber = 0;
waypoint_settings.actionRepeat = 0;
for (int i = 0; i < 16; ++i) {
    waypoint_settings.commandList[i] = 0;
    waypoint_settings.commandParameter[i] = 0;
}
OK, I've figured it out. The whole time I was setting damping_distance to 0, which is not supported by the DJI SDK. Other (positive) values seem to be valid.
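So in the configuration above, the only change needed is a small positive damping distance; the value below is an arbitrary example, not something mandated by the SDK:

waypoint_settings.damping = 0.5; // any small positive damping distance passes the check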
I am a newbie in the field of CV and IP. I was writing the Hough transform algorithm for finding lines, and I cannot figure out what is wrong with this code, in which I am trying to compute the accumulator array:
numRowsInBW = size(BW,1);
numColsInBW = size(BW,2);
%length of the diagonal of the image
D = sqrt((numRowsInBW - 1)^2 + (numColsInBW - 1)^2);
%number of rows in the accumulator array
nrho = 2*(ceil(D/rhoStep)) + 1;
%number of cols in the accumulator array
ntheta = length(theta);
H = zeros(nrho,ntheta);
%a value of 1 means the pixel is white,
%i.e. an edge pixel
[allrows allcols] = find(BW == 1);
for i = (1 : size(allrows))
    y = allrows(i);
    x = allcols(i);
    for th = (1 : 180)
        d = floor(x*cos(th) - y*sin(th));
        H(d+floor(nrho/2),th) += 1;
    end
end
I am applying this to a simple image.
This is the result I am getting, but this is what is expected.
I am not able to find the mistake. Please help me. Thanks in advance.
There are several issues with your code. The main issue is here:
ntheta = length(theta);
% ...
for i = (1 : size(allrows))
    % ...
    for th = (1 : 180)
        d = floor(x*cos(th) - y*sin(th));
        % ...
th seems to be an angle in degrees, but cos and sin interpret their argument as radians, so cos(th) is meaningless here. Instead, use cosd and sind.
Another issue is that th iterates from 1 to 180, but there is no guarantee that ntheta is 180. So, loop as follows instead:
for i = 1 : size(allrows)
    % ...
    for j = 1 : numel(theta)
        th = theta(j);
        % ...
and use th as the angle, and j as the index into H.
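Putting both fixes together, the accumulator loops could look like the sketch below. Two further details, not mentioned above, are worth noting: the conventional line parameterization uses a plus sign, rho = x*cosd(th) + y*sind(th), and dividing by rhoStep keeps the row index consistent with how nrho was computed (the center row is ceil(nrho/2)):

for i = 1 : numel(allrows)
    y = allrows(i);
    x = allcols(i);
    for j = 1 : numel(theta)
        th = theta(j);                                   % angle in degrees
        d = round((x*cosd(th) + y*sind(th)) / rhoStep);  % signed distance in rho bins
        H(d + ceil(nrho/2), j) = H(d + ceil(nrho/2), j) + 1;
    end
end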
Finally, given your image and your expected output, you should apply some edge detection first (Canny, for example). Maybe you already did this?
I am trying to use TensorFlow image retraining:
https://www.tensorflow.org/tutorials/image_retraining
Training like this works fine:
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/image_retraining/retrain.py --image_dir D:/dev/detect_objects/flower_photos --bottleneck_dir D:/dev/detect_objects/tensorflow-master/retrain/bottleneck --architecture mobilenet_0.25_128 --output_graph D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --output_labels D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --saved_model_dir D:/dev/detect_objects/tensorflow-master/retrain/saved_model_dir --how_many_training_steps 100
But when I try to predict a new image with:
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/label_image/label_image.py --graph=D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --labels=D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --image=D:/dev/detect_objects/flower_photos/daisy/21652746_cc379e0eea_m.jpg
it gives this error:
KeyError: "The name 'import/Mul' refers to an Operation not in the graph."
The relevant settings in label_image.py are:
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
#input_layer = "input"
#output_layer = "InceptionV3/Predictions/Reshape_1"
input_layer = "Mul"
output_layer = "final_result"
What is the problem here?
Change this:
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
#input_layer = "input"
#output_layer = "InceptionV3/Predictions/Reshape_1"
input_layer = "Mul"
output_layer = "final_result"
to this:
input_height = 128
input_width = 128
input_mean = 0
input_std = 128
input_layer = "input"
output_layer = "final_result"
The retraining was run with --architecture mobilenet_0.25_128, and that MobileNet graph reads its 128x128 input through a node named "input"; "Mul" (with its 299x299 input) is the input node of Inception v3, which is why the lookup of "import/Mul" fails on your retrained graph.
If there is no node in the graph called "import/Mul", and we don't know what the graph is or how it was produced, there is little chance that anyone will be able to guess the right answer.
You might try printing the list of operations of your graph using graph.get_operations() and attempting to locate an appropriate-sounding node (try the first one that is printed).
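A minimal sketch of that, assuming the TF 1.x API that retrain.py targets and the output_graph path from the question:

import tensorflow as tf

# load the retrained GraphDef from disk
graph_def = tf.GraphDef()
with tf.gfile.GFile("D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# import it; operations get the default "import/" name prefix
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def)

# print every operation name; the input node should be near the top
for op in graph.get_operations():
    print(op.name)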
I'm currently implementing symbol time recovery blocks. The idea is to be able to choose different TEDs (Gardner, zero-crossing, early-late, maximum-likelihood, etc.). In blocks like the M&M recovery block, the gain parameters of the loop are expressed explicitly (gain_omega and gain_mu), which can be difficult to get right. The control_loop class is, however, more convenient: loop characteristics can be specified by loop bandwidth and damping factor (zeta). So my first test started with a re-implementation of the M&M clock recovery block using a control loop. The work function of this block is shown below (comments are mine):
int
clock_recovery_mm_ff_impl::general_work(int noutput_items,
                                        gr_vector_int &ninput_items,
                                        gr_vector_const_void_star &input_items,
                                        gr_vector_void_star &output_items)
{
    const float *in = (const float *)input_items[0];
    float *out = (float *)output_items[0];

    int ii = 0;                                   // input index
    int oo = 0;                                   // output index
    int ni = ninput_items[0] - d_interp->ntaps(); // don't use more input than this
    float mm_val;

    while (oo < noutput_items && ii < ni) {
        // produce output sample
        out[oo] = d_interp->interpolate(&in[ii], d_mu); // interpolation
        mm_val = slice(d_last_sample) * out[oo] - slice(out[oo]) * d_last_sample; // error calculation
        d_last_sample = out[oo];

        // loop filtering
        d_omega = d_omega + d_gain_omega * mm_val; // frequency
        d_omega = d_omega_mid + gr::branchless_clip(d_omega - d_omega_mid, d_omega_lim); // bound the frequency
        d_mu = d_mu + d_omega + d_gain_mu * mm_val; // phase

        ii += (int)floor(d_mu);    // basepoint index
        d_mu = d_mu - floor(d_mu); // fractional interval
        oo++;
    }

    consume_each(ii);
    return oo;
}
Here is my code. First, the control loop is initialized in the constructor:
loop(new gr::blocks::control_loop(0.02,(1 + d_omega_relative_limit)*omega,
(1 - d_omega_relative_limit)*omega))
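For reference, the gr::blocks::control_loop constructor takes (loop_bw, max_freq, min_freq), so this call amounts to:

// control_loop(float loop_bw, float max_freq, float min_freq)
// loop_bw  = 0.02
// max_freq = (1 + d_omega_relative_limit) * omega  // upper clamp enforced by frequency_limit()
// min_freq = (1 - d_omega_relative_limit) * omega  // lower clamp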
First of all, I would like to clear up a couple of doubts I have regarding the PLL (the control_loop above) in symbol timing recovery, particularly the phase and frequency ranges (which are in turn used for wrapping). Taking an analogy from the Costas loop: the carrier phase is wrapped between -2pi and +2pi, and the frequency offset is tracked between -1 and +1. It is quite straightforward to see why. Unfortunately, I can't get my head around phase and frequency tracking in symbol recovery. In the M&M block, the frequency is tracked between (1 - omega_relative_limit)*omega and (1 + omega_relative_limit)*omega, where omega is simply the number of samples per symbol; for example, with omega = 4 samples per symbol and omega_relative_limit = 0.005, the frequency is clamped to [3.98, 4.02]. The phase is tracked between 0 and omega. I don't understand why this is so, and why the M&M block doesn't wrap it. Any ideas here will be appreciated.
And here is my work function:
int
debug_time_recovery_pam_test_1_impl::general_work(int noutput_items,
                                                  gr_vector_int &ninput_items,
                                                  gr_vector_const_void_star &input_items,
                                                  gr_vector_void_star &output_items)
{
    const float *in = (const float *)input_items[0];
    float *out = (float *)output_items[0];

    int ii = 0;                                   // input index
    int oo = 0;                                   // output index
    int ni = ninput_items[0] - d_interp->ntaps(); // don't use more input than this
    float mm_val;

    while (oo < noutput_items && ii < ni) {
        // produce output sample
        out[oo] = d_interp->interpolate(&in[ii], d_mu);

        // calculate error
        mm_val = slice(d_last_sample) * out[oo] - slice(out[oo]) * d_last_sample;
        d_last_sample = out[oo];

        // loop filtering
        loop->advance_loop(mm_val); // filter the error
        loop->frequency_limit();    // stop the frequency from wandering too far

        // loop phase and frequency
        d_omega = loop->get_frequency();
        d_mu = loop->get_phase();

        //d_omega = d_omega + d_gain_omega * mm_val;
        //d_omega = d_omega_mid + gr::branchless_clip(d_omega - d_omega_mid, d_omega_lim);
        //d_mu = d_mu + d_omega + d_gain_mu * mm_val;

        ii += (int)floor(d_mu);    // basepoint index
        d_mu = d_mu - floor(d_mu); // fractional interval
        oo++;
    }

    consume_each(ii);
    // tell the runtime system how many output items we produced
    return oo;
}
I have tried to use the block in a GFSK demodulator, and I got this error:
python: /build/gnuradio-bJXzXK/gnuradio-3.7.9.1/gnuradio-runtime/include/gnuradio/buffer.h:177: unsigned int gr::buffer::index_add(unsigned int, unsigned int): Assertion `s < d_bufsize' failed.
The first Google search regarding this error suggests that I'm somehow "abusing" the scheduler, since this error comes from somewhere below the API. I think my calculation of d_omega and d_mu from the control loop is a bit naive, but unfortunately I don't know any other way of doing it. An alternative would be to use a modulo-1 counter (incrementing or decrementing), but I want to explore this option first.
I'm playing with FsCheck so I have this implementation:
let add a b =
    if a > 100
    then failwith "nasty bug"
    else a + b
...and this FsCheck-based test:
fun (a:int) -> (add a 0) = a
|> Check.QuickThrowOnFailure
and the test never fails. My guess is that the 100 values produced by the random generator are never bigger than 100.
Shouldn't the values be more "random"?
When you use Check.QuickThrowOnFailure, it uses the configuration Config.QuickThrowOnFailure, which has these values:
> Config.QuickThrowOnFailure;;
val it : Config =
{MaxTest = 100;
MaxFail = 1000;
Replay = null;
Name = "";
StartSize = 1;
EndSize = 100;
QuietOnSuccess = false;
Every = <fun:get_Quick#342>;
EveryShrink = <fun:get_Quick#343-1>;
Arbitrary = [];
Runner = <StartupCode$FsCheck>.$Runner+get_throwingRunner#355;}
The important values to consider here are StartSize, and particularly EndSize. Some of the generators in FsCheck use the size context to determine the size or range of the values they generate.
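To see the size context in action, you can sample FsCheck's built-in int generator at different sizes (a quick sketch against the FsCheck 2.x API; Gen.sample takes the size and the number of values to generate):

open FsCheck

// larger size => wider range of generated ints
let small = Gen.sample 100 5 Arb.generate<int>   // values within roughly [-100, 100]
let large = Gen.sample 1000 5 Arb.generate<int>  // values within roughly [-1000, 1000]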
If you change EndSize to e.g. 1000, you can make the test fail:
> Check.One({Config.QuickThrowOnFailure with EndSize = 1000}, fun (a:int) -> (add a 0) = a);;
System.Exception: Falsifiable, after 15 tests (0 shrinks) (StdGen (1912816373,296229213)):
Original:
101
with exception:
> System.Exception: nasty bug
at FSI_0040.add(Int32 a, Int32 b)
at FSI_0055.it#69-6.Invoke(Int32 a)
at FsCheck.Testable.evaluate[a,b](FSharpFunc`2 body, a a) in C:\Users\Kurt\Projects\FsCheck\FsCheck\src\FsCheck\Testable.fs:line 161
at <StartupCode$FsCheck>.$Runner.get_throwingRunner#365-1.Invoke(String message) in C:\Users\Kurt\Projects\FsCheck\FsCheck\src\FsCheck\Runner.fs:line 365
at <StartupCode$FsCheck>.$Runner.get_throwingRunner#355.FsCheck-IRunner-OnFinished(String , TestResult ) in C:\Users\Kurt\Projects\FsCheck\FsCheck\src\FsCheck\Runner.fs:line 365
at FsCheck.Runner.check[a](Config config, a p) in C:\Users\Kurt\Projects\FsCheck\FsCheck\src\FsCheck\Runner.fs:line 275
at <StartupCode$FSI_0055>.$FSI_0055.main#()
Stopped due to error