TensorFlow in iOS Swift: binary classification problem

I am having some issues getting output from a TensorFlow Lite model (.tflite).
This is my input:
var inputInfo: [Float32] = [-1.0291401 , 1.6121695 , 0.58366895, -0.25974554, 2.6633718 ,
                             0.39398468, 1.2648116 , -1.0617405 , 1.0997621 , -0.01813432,
                            -0.02543107, 1.9113901 , 0.30188444, 0.3199759 , 0.07759953,
                             0.23082322, 2.0959156 , -0.42658705, 0.08775132, 3.4258583 ,
                            -1.0573974 , 0.7249298 , -1.1119401 , -0.72663903, -0.74873704,
                            -0.387724  , -0.14288527, -0.39554232, -0.10774904, -0.0911286 ,
                             0.40389383, -0.169619  , -1.1736624 ]
let inputData = Data(bytes: &inputInfo, count: inputInfo.count * MemoryLayout<Float32>.stride)
print(inputData)
try interpreter?.copy(inputData, toInputAt: 0)
After passing this Float32 array as input, I get the following output:
TensorFlowLite.Tensor(name: "Identity", dataType: TensorFlowLite.Tensor.DataType.float32, shape: TensorFlowLite.Tensor.Shape(rank: 2, dimensions: [1, 1]), data: 4 bytes, quantizationParameters: nil)
To read the expected output value I am using this code:
let outputTensor = try self.interpreter?.output(at: 0)
let finalOutput = [Float32](unsafeData: outputTensor!.data)
print(finalOutput)
4.6533377e+33
The expected output should be a number between 0 and 1 (e.g. 0.8 or 0.9), but my final output is far outside that range.
I am stuck here; please help.
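One way to narrow this down (a debugging sketch, not from the original post): run the same .tflite model and the same 33 input values through the TensorFlow Lite Python interpreter and compare the result. The model filename and input shape below are assumptions. Note also that the Swift snippet above never shows a call to interpreter?.invoke() between copying the input and reading the output; if that call is missing in the real code, the output tensor is never computed and will contain garbage like the value seen here.

import numpy as np
import tensorflow as tf

# The 33 Float32 values from the question.
input_values = [-1.0291401, 1.6121695, 0.58366895, -0.25974554, 2.6633718,
                0.39398468, 1.2648116, -1.0617405, 1.0997621, -0.01813432,
                -0.02543107, 1.9113901, 0.30188444, 0.3199759, 0.07759953,
                0.23082322, 2.0959156, -0.42658705, 0.08775132, 3.4258583,
                -1.0573974, 0.7249298, -1.1119401, -0.72663903, -0.74873704,
                -0.387724, -0.14288527, -0.39554232, -0.10774904, -0.0911286,
                0.40389383, -0.169619, -1.1736624]

interpreter = tf.lite.Interpreter(model_path='model.tflite')  # assumed filename
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.array([input_values], dtype=np.float32)  # assumed input shape [1, 33]
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()  # the output tensor is only populated after invoke()
print(interpreter.get_tensor(out['index']))  # expect a value in (0, 1) for a sigmoid head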

Related

Training random forest (ranger) using caret with custom F4 metric in R: after running the full training, error "undefined columns selected"

library(MLmetrics)
library(caret)
library(doSNOW)
library(ranger)
The data is the "bank additional full" dataset (link in the original post), and the following code is used to generate data1:
library(VIM)
data1 <- hotdeck(data, variable = c('job','marital','education','default','housing','loan'), domain_var = "y", imp_var = FALSE)
# converting the categorical variables to factors, as they should be
library(magrittr)
library(dplyr)  # mutate_at() comes from dplyr
data1 %<>%
  mutate_at(colnames(data1)[grepl('factor|logical|character', sapply(data1, class))], factor)
Now, splitting:
library(caret)
# splitting data into train/test 70/30
set.seed(1234)
trainIndex <- createDataPartition(data1$y, p = 0.7, times = 1, list = FALSE)
train <- data1[trainIndex, -11]
test <- data1[-trainIndex, -11]
levels(train$y)
train$y <- as.factor(train$y)
# train$y = factor(train$y, levels = c("yes","no"))
# train$y = relevel(train$y, ref = "yes")
Here I got an idea of how to create an F1 metric from "Training Model in Caret Using F1 Metric", and using the F-beta score formula I created f1_val. Now I can't understand what lev, obs and pred indicate: in my train dataset only the column y corresponds to data$obs, and there is no data$pred. So, is the following error due to this, and how can I rectify it?
f1 <- function(data, lev = NULL, model = NULL) {
  precision <- precision(data$obs, data$pred)
  recall <- sensitivity(data$obs, data$pred)
  f1_val <- (17 * precision * recall) / (16 * precision + recall)
  names(f1_val) <- c("F1")
  f1_val
}
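For reference (a known identity, not part of the original post), the coefficients above come from the general F-beta formula

$$F_\beta = \frac{(1+\beta^2)\,\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\,\mathrm{precision}+\mathrm{recall}}$$

so with beta = 4 the numerator factor is 1 + 4^2 = 17 and the denominator factor is 4^2 = 16, matching f1_val above.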
tgrid <- expand.grid(
  .mtry = 1:5,
  .splitrule = "gini",
  .min.node.size = seq(1, 500, 75)
)
model_caret <- train(train$y ~ ., data = train,
                     method = "ranger",
                     trControl = trainControl(method = "cv",
                                              number = 2,
                                              verboseIter = TRUE,
                                              classProbs = TRUE,
                                              summaryFunction = f1),
                     tuneGrid = tgrid,
                     num.trees = 500,
                     importance = "impurity",
                     metric = "F1")
After running for 3-4 minutes we get the following:
Aggregating results
Selecting tuning parameters
Fitting mtry = 5, splitrule = gini, min.node.size = 1 on full training set
but then this error:
Error in `[.data.frame`(data, , all.vars(Terms), drop = FALSE) :
undefined columns selected
Also, when we then try to access model_caret we get:
Error: object 'model_caret' not found
Kindly help. Thanks in advance.

Question about dask output when using dask.array.map_overlap

I would like to use dask.array.map_overlap with a scipy interpolation function. However, I keep running into errors that I cannot understand, and I hope someone can explain them to me.
Here is the error message I receive when running .compute():
ValueError: could not broadcast input array from shape (1070,0) into shape (1045,0)
To resolve the issue, I started to use .to_delayed() to check each partition outputs, and this is what I found.
Following is my python code.
Step 1. Load the netCDF file through xarray, then convert to dask.array with chunk size (400, 400):
df = xr.open_dataset('./Brazil Sentinal2 Tile/' + data_file +'.nc')
lon, lat = df['lon'].data, df['lat'].data
slon = da.from_array(df['lon'], chunks=(400,400))
slat = da.from_array(df['lat'], chunks=(400,400))
data = da.from_array(df.isel(band=0).__xarray_dataarray_variable__.data, chunks=(400,400))
Step 2. Declare a function for da.map_overlap to use:
def sumsum2(lon, lat, data, hex_res=10):
    hex_col = 'hex' + str(hex_res)
    lon_max, lon_min = lon.max(), lon.min()
    lat_max, lat_min = lat.max(), lat.min()
    b = box(lon_min, lat_min, lon_max, lat_max, ccw=True)
    b = transform(lambda x, y: (y, x), b)
    b = mapping(b)
    target_df = pd.DataFrame(h3.polyfill(b, hex_res), columns=[hex_col])
    target_df['lat'] = target_df[hex_col].apply(lambda x: h3.h3_to_geo(x)[0])
    target_df['lon'] = target_df[hex_col].apply(lambda x: h3.h3_to_geo(x)[1])
    tlon, tlat = target_df[['lon', 'lat']].values.T
    # lNDI is the scipy interpolator mentioned above (import alias not shown in the post)
    abc = lNDI(points=(lon.ravel(), lat.ravel()),
               values=data.ravel())(tlon, tlat)
    target_df['out'] = abc
    print(np.stack([tlon, tlat, abc], axis=1).shape)
    return np.stack([tlon, tlat, abc], axis=1)
Step 3. Apply da.map_overlap:
b = da.map_overlap(sumsum2, slon[:1200, :1200], slat[:1200, :1200], data[:1200, :1200],
                   depth=10, trim=True, boundary=None, align_arrays=False, dtype='float64')
Step 4. Use to_delayed() to test the output shapes:
print(b.to_delayed().flatten()[0].compute().shape)
print(b.to_delayed().flatten()[1].compute().shape)
(1065, 3)
(1045, 0)
(1090, 3)
(1070, 0)
This says that some blocks returned by da.map_overlap come back empty (shapes (1045, 0) and (1070, 0)), while the output I prepare inside the function is two-dimensional with three columns (shapes (1065, 3) and (1090, 3)).
In addition, if I turn off the trim argument:
c = da.map_overlap(sumsum2,
                   slon[:1200, :1200],
                   slat[:1200, :1200],
                   data[:1200, :1200],
                   depth=10,
                   trim=False,
                   boundary=None,
                   align_arrays=False,
                   dtype='float64')
print(c.to_delayed().flatten()[0].compute().shape)
print(c.to_delayed().flatten()[1].compute().shape)
The output becomes
(1065, 3)
(1065, 3)
(1090, 3)
(1090, 3)
Does this mean that with trim=True everything gets cut out?
Because...
#-- print out the values
b.to_delayed().flatten()[0].compute()[:10,:]
(1065, 3)
array([], shape=(1045, 0), dtype=float64)
while...
#-- print out the values
c.to_delayed().flatten()[0].compute()[:10,:]
array([[ -47.83683837, -18.98359832, 1395.01848583],
[ -47.8482856 , -18.99038681, 2663.68391094],
[ -47.82800624, -18.99207069, 1465.56517187],
[ -47.81897323, -18.97919009, 2769.91556363],
[ -47.82066663, -19.00712956, 1607.85927095],
[ -47.82696896, -18.97167714, 2110.7516765 ],
[ -47.81562653, -18.98302933, 2662.72112163],
[ -47.82176881, -18.98594465, 2201.83205114],
[ -47.84567 , -18.97512514, 1283.20631652],
[ -47.84343568, -18.97270783, 1282.92117225]])
Any thoughts on this?
Thank you.
I guess I got the answer. Please let me know if I am wrong.
I cannot use trim=True because I change the shape of the output array (after searching online, I noticed that the shape of the output array should be the same as the shape of the input array). Since I change the shape, dask has no idea how to trim it, so it returns an empty array (weird).
Using trim=False instead, since I am not asking dask to cut out the buffer zone, the return values now come through (although I still don't know why dask cannot concatenate the chunked arrays; I believe that is also related to the shape).
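To illustrate that shape rule, here is a minimal sketch (illustrative, not from the original post): map_overlap assumes each block keeps the shape of its halo-padded input, which is what tells dask how much to trim.

import dask.array as da

x = da.ones((8, 8), chunks=(4, 4))

# Shape-preserving function: trim=True removes the depth-1 halo cleanly.
y = da.map_overlap(lambda a: a * 2, x, depth=1, boundary='reflect', trim=True)
print(y.compute().shape)  # (8, 8)

A function that instead returns an (N, 3) table, like sumsum2 above, breaks this assumption, and trimming then produces the empty arrays seen here.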
The solution is to apply delayed to da.concatenate:
delayed(da.concatenate)([e.to_delayed().flatten()[idx] for idx in range(len(e.to_delayed().flatten()))])
In this case we are not relying on the concatenation inside map_overlap, but use our own concatenation to combine the outputs we want.
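A slightly fuller sketch of the same idea (illustrative; it assumes c is the trim=False result from Step 3, and uses np.concatenate since each delayed block materializes as a plain NumPy (N, 3) array):

import numpy as np
from dask import delayed

blocks = c.to_delayed().flatten()
stacked = delayed(np.concatenate)([blk for blk in blocks], axis=0)
result = stacked.compute()  # one combined array of (lon, lat, value) rows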

Why does my SURF.detectAndCompute return None before the array?

I am trying to extract keypoints and descriptors, but SURF.detectAndCompute returns None entries in my descriptor list, and because of that I cannot do the clustering. Below is my code:
image_paths = []
image_group_id = []
image_group_names = os.listdir(train_path)
descriptors = []
SURF = cv2.xfeatures2d.SURF_create()
SURF.setHessianThreshold(5000)
for group_id, train_group_path in enumerate(os.listdir(train_path)):
    image_path = train_path + '/' + train_group_path
    for path in os.listdir(image_path):
        full_image_path = image_path + '/' + path
        image_paths.append(full_image_path)
        image_group_id.append(group_id)
        _, des = SURF.detectAndCompute(cv2.imread(full_image_path), None)
        descriptors.append(des)
And this is the list of descriptors that I tried to print:
[None, array([[ 0.00140431, 0.00414569, 0.00156097, 0.00508978, -0.01072729,
0.03109567, 0.01673188, 0.04477221, -0.04659119, 0.0652261 ,
0.04661125, 0.08301401, -0.00103816, 0.0069982 , 0.00168295,
0.00876924, 0.01606237, -0.00151245, 0.01966742, 0.01294267,
-0.2951115 , -0.14716513, 0.32090443, 0.18698329, 0.21934257,
-0.02404423, 0.29070902, 0.17538053, -0.03951943, 0.02635496,
0.04406727, 0.02923489, 0.03092461, -0.04000381, 0.03236163,
0.04539306, -0.26897454, -0.112547 , 0.33476326, 0.29709056,
0.2916087 , -0.01106649, 0.351532 , 0.19272245, -0.07418256,
-0.04663706, 0.07452377, 0.04671762, 0.00109221, 0.00404389,
0.0019812 , 0.0101865 , 0.00381939, -0.01452297, 0.05944061,
0.05199388, 0.0035292 , -0.04607308, 0.03296318, 0.04701661,
-0.00757545, -0.00286426, 0.00760168, 0.00306828]],
dtype=float32)
The None isn't supposed to be there. What can I do to fix this? I am new to this field; any advice is appreciated. Thank you!
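For context: detectAndCompute returns None for the descriptors of an image in which no keypoints passed the detector's threshold, and a Hessian threshold of 5000 is quite strict. A minimal sketch of one common workaround (not from the original post) is to filter out the None entries before stacking for clustering, keeping the group ids aligned:

import numpy as np

# Keep only images that actually produced descriptors.
pairs = [(gid, des) for gid, des in zip(image_group_id, descriptors) if des is not None]
image_group_id = [gid for gid, _ in pairs]
descriptors = [des for _, des in pairs]

# Stack everything into one (total_keypoints, 64) array for clustering.
all_descriptors = np.vstack(descriptors)

Lowering the threshold (e.g. SURF.setHessianThreshold(400)) would also make it much less likely that an image yields no keypoints at all.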

How to load the vector of a certain word from a saved word2vec model?

How can I find a particular word's vector from a previously trained word2vec model?
data = {'one': array([-0.06590105, 0.01573388, 0.00682817, 0.53970253, -0.20303348,
-0.24792041, 0.08682659, -0.45504045, 0.89248925, 0.0655603 ,
......
-0.8175681 , 0.27659689, 0.22305458, 0.39095637, 0.43375066,
0.36215973, 0.4040089 , -0.72396156, 0.3385369 , -0.600869 ],
dtype=float32),
'two': array([ 0.04694849, 0.13303463, -0.12208422, 0.02010536, 0.05969441,
-0.04734801, -0.08465996, 0.10344813, 0.03990637, 0.07126121,
......
0.31673026, 0.22282903, -0.18084198, -0.07555179, 0.22873943,
-0.72985399, -0.05103955, -0.10911274, -0.27275378, 0.01439812],
dtype=float32),
'three': array([-0.21048863, 0.4945509 , -0.15050395, -0.29089224, -0.29454648,
0.3420335 , -0.3419629 , 0.87303966, 0.21656844, -0.07530259,
......
-0.80034876, 0.02006451, 0.5299498 , -0.6286509 , -0.6182588 ,
-1.0569025 , 0.4557548 , 0.4697938 , 0.8928275 , -0.7877308 ],
dtype=float32),
'four': ......
}
Now I want to obtain something like:
word = "one"
wordvector = data.get_vector(word)
and have it return:
[-0.06590105, 0.01573388, 0.00682817, 0.53970253, -0.20303348,
-0.24792041, 0.08682659, -0.45504045, 0.89248925, 0.0655603 ,
......
-0.8175681 , 0.27659689, 0.22305458, 0.39095637, 0.43375066,
0.36215973, 0.4040089 , -0.72396156, 0.3385369 , -0.600869 ]
one_array = data['one']
data is a dictionary. To get the value of a dictionary for a certain key, you call value = dict[key].
With one_list = data['one'].tolist(), you get the wordvector of the word 'one' as a list, which seems to be your expected output.
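If the model itself was saved with gensim (an assumption, since the question only shows the extracted dictionary), the lookup the question gestures at with data.get_vector(word) exists directly on gensim's KeyedVectors. A minimal sketch; 'word2vec.model' is an illustrative filename:

from gensim.models import Word2Vec

model = Word2Vec.load('word2vec.model')
wordvector = model.wv['one']               # numpy array for the word 'one'
same_vector = model.wv.get_vector('one')   # equivalent lookup
as_list = model.wv['one'].tolist()         # as a plain Python list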

How to place a 2D array with different lengths in Google Sheets Script?

I'm sending a 2D array to Google Sheets using JSON:
{"rep":[["a3289035","b656011929551"],["brown","realistic","yellow"]]}
Then I'm reading this array in Google Apps Script:
var parsedJson = JSON.parse(e.postData.contents);
var repList = parsedJson.rep;
Everything is OK, but when I try to place this array on the sheet:
sheet.getRange(row + 1, 2, repList[0].length, 1).setValues(repList[0]);
sheet.getRange(row + 1, 3, repList[1].length, 1).setValues(repList[1]);
I'm getting the error:
Cannot convert Array to Object[][]
What is wrong?
I found that this code can do it:
repList[0] = repList[0].map(function(e){return [e];});
repList[1] = repList[1].map(function(e){return [e];});
sheet.getRange(row + 1, 2, repList[0].length, 1).setValues(repList[0]);
sheet.getRange(row + 1, 3, repList[1].length, 1).setValues(repList[1]);
but how do I make it work for a 2D array of any length, with subarrays of any lengths?
Expected result: (screenshot in the original post)
Vosnim,
Try something like this:
var repList = parsedJson.rep;
var sheet = SpreadsheetApp.getActive().getSheetByName('TEST');
var startCol = 1;
repList.forEach(function (r, i) {
  // Wrap each element in its own one-element row so setValues gets an Object[][].
  var values = r.map(function (ro) { return [ro]; });
  // Write subarray i as a column, sized to its own length.
  sheet.getRange(1, startCol + i, values.length, 1).setValues(values);
});
