Incompatible char and widechar in Delphi

I have a strange problem.
I'm using Delphi 2007 and running it with the -r switch. On my computer everything works fine, but when I transfer the code to another computer I get this error:
Incompatible types: 'Char' and 'WideChar'.
Maybe I should change some compiler options?
Here is the function that causes the problem:
function THcp.ConVertString(s: string): string;
var
  i: Integer;
  lstr: string;
begin
  lstr := EmptyStr;
  for i := 1 to Length(s) do
  begin
    case s[i] of
      'Č': s[i] := 'C';
      'č': s[i] := 'c';
      'Ć': s[i] := 'C';
      'ć': s[i] := 'c';
      'Š': s[i] := 'S';
      'š': s[i] := 's';
      'Đ': s[i] := 'D';
      'đ': s[i] := 'd';
      'Ž': s[i] := 'Z';
      'ž': s[i] := 'z';
    end;
    lstr := lstr + s[i];
  end;
  Result := lstr;
end;

This is my hypothesis. On the machine on which the code compiles, the non-ASCII characters in the source are all valid ANSI characters for that machine's locale. But the other machine uses a different locale, under which some of those characters are not included in the >= 128 portion of the codepage. Hence those characters are promoted to WideChar, and so of course they are not compatible with AnsiChar.
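A quick way to see this codepage dependence outside Delphi is a small Python sketch (illustrative only, not part of the original code): the same byte decodes to different characters under different ANSI codepages, which is exactly why an ANSI-encoded source file can change meaning between machines.

# The byte 0xC8 is 'Č' in the Central European codepage (cp1250),
# but 'È' in the Western European codepage (cp1252). An ANSI-encoded
# source file is therefore locale-dependent.
b = bytes([0xC8])
print(b.decode('cp1250'))  # Č
print(b.decode('cp1252'))  # È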

The reason could very well be the one David suggests.
If you declare the function like this:
function THcp.ConVertString(s: AnsiString): AnsiString;
then that reason only applies to the character constants in your code, not to the input. If you eliminate those constants by using character ordinals instead, as I once did in the routines below, then I suppose this will compile.
function AsciiExtToBase(Index: Byte): Byte; overload;
const
  Convert: array[128..255] of Byte = (
    //128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146
    // €       ‚   ƒ   „   …   †   ‡   ˆ   ‰   Š   ‹   Œ       Ž           ‘   ’
    // E       ,   f   "               ^       S   <           Z           '   '
       69,129, 44,102, 34,133,134,135, 94,137, 83, 60,140,141, 90,143,144, 39, 39,
    //147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165
    // “   ”   •   –   —   ˜   ™   š   ›   œ       ž   Ÿ       ¡   ¢   £   ¤   ¥
    // "   "       -       ~       s   >           z   Y       !
       34, 34,149, 45,151,126,153,115, 62,156,157,122, 89,160, 33,162,163,164,165,
    //166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184
    // ¦   §   ¨   ©   ª   «   ¬   *   ®   ¯   °   ±   ²   ³   ´   µ   ¶   ·   ¸
    // |           c   a   <       -                   2   3   '
      124,167,168, 99, 97, 60,172, 45,174,175,176,177, 50, 51, 39,181,182,183,184,
    //185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203
    // ¹   º   »   ¼   ½   ¾   ¿   À   Á   Â   Ã   Ä   Å   Æ   Ç   È   É   Ê   Ë
    // 1       >               ?   A   A   A   A   A   A   A   C   E   E   E   E
       49,186, 62,188,189,190, 63, 65, 65, 65, 65, 65, 65, 65, 67, 69, 69, 69, 69,
    //204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222
    // Ì   Í   Î   Ï   Ð   Ñ   Ò   Ó   Ô   Õ   Ö   ×   Ø   Ù   Ú   Û   Ü   Ý   Þ
    // I   I   I   I   D   N   O   O   O   O   O   x       U   U   U   U   Y
       73, 73, 73, 73, 68, 78, 79, 79, 79, 79, 79,120,216, 85, 85, 85, 85, 89,222,
    //223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241
    // ß   à   á   â   ã   ä   å   æ   ç   è   é   ê   ë   ì   í   î   ï   ð   ñ
    //     a   a   a   a   a   a   a   c   e   e   e   e   i   i   i   i   o   n
      223, 97, 97, 97, 97, 97, 97, 97, 99,101,101,101,101,105,105,105,105,111,110,
    //242 243 244 245 246 247 248 249 250 251 252 253 254 255
    // ò   ó   ô   õ   ö   ÷   ø   ù   ú   û   ü   ý   þ   ÿ
    // o   o   o   o   o   /       u   u   u   u   y       y
      111,111,111,111,111, 47,248,117,117,117,117,121,254,121);
begin
  if Index < 128 then
    Result := Index
  else
    Result := Convert[Index];
end;
function AsciiExtToBase(AChar: AnsiChar): AnsiChar; overload;
begin
  Result := Chr(AsciiExtToBase(Ord(AChar)));
end;
function AsciiExtToBase(const S: AnsiString): AnsiString; overload;
var
  P: PByte;
  I: Integer;
begin
  Result := S;
  UniqueString(Result); // make sure we don't write into S's shared buffer
  P := @Result[1];
  for I := 1 to Length(Result) do
  begin
    P^ := AsciiExtToBase(P^);
    Inc(P);
  end;
end;
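As an aside, the same diacritic-stripping idea can be sketched in Python (Python 3 source is always Unicode, so the codepage problem above doesn't arise; this is an illustration, not a port of the Delphi routine):

# Build an ordinal-to-ordinal mapping once, then translate in one pass.
TRANS = str.maketrans('ČčĆćŠšĐđŽž', 'CcCcSsDdZz')

def convert_string(s: str) -> str:
    return s.translate(TRANS)

print(convert_string('Šišmiš'))  # -> 'Sismis'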

Related

FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan when using greater int values

I've recently watched a YouTube (DataSchool) video where the guy used only 3 columns from the Titanic dataset and made a pipeline. I wanted to add more columns to get better accuracy so I added Age and Fare.
I think it's probably because of the values of Age and Fare that I'm getting this error when I perform cross_val_score:
columns_trans = make_column_transformer(
    (OneHotEncoder(), ['Sex', 'Embarked']),
    remainder='passthrough')

logreg = LogisticRegression(solver='lbfgs')
pipe = make_pipeline(columns_trans, logreg)

cross_val_score(pipe, X, y, cv=5, scoring='accuracy').mean()
/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py:552: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan.
If I remove Age and Fare, everything works fine. I was wondering if the ColumnTransformer or make_pipeline had a problem with values like that.
I also tried scaling the values of Fare and Age; it then gave a cross_val_score, but pipe.predict() failed with this error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64')
Traceback:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_119/4279568460.py in <module>
----> 1 cross_val_score(pipe, X, y, cv=5, scoring='accuracy', error_score="raise").mean()
/opt/conda/lib/python3.7/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
70 FutureWarning)
71 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 72 return f(**kwargs)
73 return inner_f
74
/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py in cross_val_score(estimator, X, y, groups, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch, error_score)
404 fit_params=fit_params,
405 pre_dispatch=pre_dispatch,
--> 406 error_score=error_score)
407 return cv_results['test_score']
408
/opt/conda/lib/python3.7/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
70 FutureWarning)
71 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 72 return f(**kwargs)
73 return inner_f
74
/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py in cross_validate(estimator, X, y, groups, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch, return_train_score, return_estimator, error_score)
246 return_times=True, return_estimator=return_estimator,
247 error_score=error_score)
--> 248 for train, test in cv.split(X, y, groups))
249
250 zipped_scores = list(zip(*scores))
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in __call__(self, iterable)
1039 # remaining jobs.
1040 self._iterating = False
-> 1041 if self.dispatch_one_batch(iterator):
1042 self._iterating = self._original_iterator is not None
1043
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator)
857 return False
858 else:
--> 859 self._dispatch(tasks)
860 return True
861
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in _dispatch(self, batch)
775 with self._lock:
776 job_idx = len(self._jobs)
--> 777 job = self._backend.apply_async(batch, callback=cb)
778 # A job can complete so quickly than its callback is
779 # called before we get here, causing self._jobs to
/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback)
206 def apply_async(self, func, callback=None):
207 """Schedule a func to be run"""
--> 208 result = ImmediateResult(func)
209 if callback:
210 callback(result)
/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py in __init__(self, batch)
570 # Don't delay the application, to avoid keeping the input
571 # arguments in memory
--> 572 self.results = batch()
573
574 def get(self):
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in __call__(self)
261 with parallel_backend(self._backend, n_jobs=self._n_jobs):
262 return [func(*args, **kwargs)
--> 263 for func, args, kwargs in self.items]
264
265 def __reduce__(self):
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in <listcomp>(.0)
261 with parallel_backend(self._backend, n_jobs=self._n_jobs):
262 return [func(*args, **kwargs)
--> 263 for func, args, kwargs in self.items]
264
265 def __reduce__(self):
/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, return_n_test_samples, return_times, return_estimator, error_score)
529 estimator.fit(X_train, **fit_params)
530 else:
--> 531 estimator.fit(X_train, y_train, **fit_params)
532
533 except Exception as e:
/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params)
333 if self._final_estimator != 'passthrough':
334 fit_params_last_step = fit_params_steps[self.steps[-1][0]]
--> 335 self._final_estimator.fit(Xt, y, **fit_params_last_step)
336
337 return self
/opt/conda/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py in fit(self, X, y, sample_weight)
1415 penalty=penalty, max_squared_sum=max_squared_sum,
1416 sample_weight=sample_weight)
-> 1417 for class_, warm_start_coef_ in zip(classes_, warm_start_coef))
1418
1419 fold_coefs_, _, n_iter_ = zip(*fold_coefs_)
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in __call__(self, iterable)
1039 # remaining jobs.
1040 self._iterating = False
-> 1041 if self.dispatch_one_batch(iterator):
1042 self._iterating = self._original_iterator is not None
1043
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator)
857 return False
858 else:
--> 859 self._dispatch(tasks)
860 return True
861
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in _dispatch(self, batch)
775 with self._lock:
776 job_idx = len(self._jobs)
--> 777 job = self._backend.apply_async(batch, callback=cb)
778 # A job can complete so quickly than its callback is
779 # called before we get here, causing self._jobs to
/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback)
206 def apply_async(self, func, callback=None):
207 """Schedule a func to be run"""
--> 208 result = ImmediateResult(func)
209 if callback:
210 callback(result)
/opt/conda/lib/python3.7/site-packages/joblib/_parallel_backends.py in __init__(self, batch)
570 # Don't delay the application, to avoid keeping the input
571 # arguments in memory
--> 572 self.results = batch()
573
574 def get(self):
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in __call__(self)
261 with parallel_backend(self._backend, n_jobs=self._n_jobs):
262 return [func(*args, **kwargs)
--> 263 for func, args, kwargs in self.items]
264
265 def __reduce__(self):
/opt/conda/lib/python3.7/site-packages/joblib/parallel.py in <listcomp>(.0)
261 with parallel_backend(self._backend, n_jobs=self._n_jobs):
262 return [func(*args, **kwargs)
--> 263 for func, args, kwargs in self.items]
264
265 def __reduce__(self):
/opt/conda/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py in _logistic_regression_path(X, y, pos_class, Cs, fit_intercept, max_iter, tol, verbose, solver, coef, class_weight, dual, penalty, intercept_scaling, multi_class, random_state, check_input, max_squared_sum, sample_weight, l1_ratio)
762 n_iter_i = _check_optimize_result(
763 solver, opt_res, max_iter,
--> 764 extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
765 w0, loss = opt_res.x, opt_res.fun
766 elif solver == 'newton-cg':
/opt/conda/lib/python3.7/site-packages/sklearn/utils/optimize.py in _check_optimize_result(solver, result, max_iter, extra_warning_msg)
241 " https://scikit-learn.org/stable/modules/"
242 "preprocessing.html"
--> 243 ).format(solver, result.status, result.message.decode("latin1"))
244 if extra_warning_msg is not None:
245 warning_msg += "\n" + extra_warning_msg
AttributeError: 'str' object has no attribute 'decode'
I solved this error by changing solver='lbfgs' to solver='liblinear' in LogisticRegression():
logreg = LogisticRegression(solver='lbfgs')
to
logreg = LogisticRegression(solver='liblinear')
And for the following error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64')
It's best to check if your test data contains any null values or strings.
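Building on that, here is a hedged sketch (column names are the standard Titanic ones; the SimpleImputer settings are my assumption) of handling the NaNs in Age and Fare inside the pipeline itself, so that both cross_val_score and pipe.predict() receive clean numeric input:

from sklearn.compose import make_column_transformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Impute missing Age/Fare values inside the pipeline so every
# train/test split seen by cross_val_score gets clean numeric input.
columns_trans = make_column_transformer(
    (OneHotEncoder(handle_unknown='ignore'), ['Sex', 'Embarked']),
    (SimpleImputer(strategy='median'), ['Age', 'Fare']),
    remainder='passthrough')

pipe = make_pipeline(columns_trans,
                     LogisticRegression(solver='liblinear'))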

Define a function: Fibonacci Sequence

Define a function to implement the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34. Please use the function to output the first 20 numbers of the Fibonacci sequence.
Here is a Python implementation:
def fib(n):
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a + b
    print()

fib(5000)
Output
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181
A recursive implementation with memoization:
memo = [-1] * 21
memo[0] = 0
memo[1] = 1
print(memo[0], end=' ')
print(memo[1], end=' ')

def fibrec(n):
    if memo[n] == -1:
        memo[n] = fibrec(n-2) + fibrec(n-1)
        print(memo[n], end=' ')
    return memo[n]

fibrec(20)
Output
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765
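Note that fib(5000) prints all Fibonacci numbers below 5000, which only happens to coincide with the first 20 terms. A small variation that counts terms directly matches the task statement:

def fib_first(count):
    # Print exactly `count` Fibonacci numbers, regardless of their size.
    a, b = 0, 1
    for _ in range(count):
        print(a, end=' ')
        a, b = b, a + b
    print()

fib_first(20)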

Parse input text that is located using left padding spaces

I have the following text structure. The values below JTT JNX JNA JNO belong to the previous line.
9 8 11 56507785 93
         JTT  JNX  JNA  JNO
         76        98
9 8 60 3269557 58
9 8 53 7269558 150
         JTT  JNX  JNA  JNO
         132  71      45-7705678
9 8 62 439559 82
I'd like to parse it in order to print the corresponding values in a single line like below:
H1  H2  H3  H4        H5   JTT  JNX  JNA  JNO
9   8   11  56507785  93   76        98
9   8   60  3269557   58
9   8   53  7269558   150  132  71        45-7705678
9   8   62  439559    82
My issue: when I use awk with the default FS (space), it takes JTT as the first field, although JTT has 9 spaces before it. So I think I need some technique that counts how many spaces there are from the left up to JTT JNX JNA JNO, and likewise before the values below them, in order to position each value correctly. I mean: 76 and 132 belong below the JTT header, 71 below JNX, 98 below JNA and 45-7705678 below JNO.
How can this be done in awk?
$ awk --version
GNU Awk 5.0.0, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.1.2)
Copyright (C) 1989, 1991-2019 Free Software Foundation.
$ uname -srv
CYGWIN_NT-6.1 3.0.7(0.338/5/3) 2019-04-30 18:08
Thanks in advance.
With GNU awk (which you have) for FIELDWIDTHS:
$ cat tst.awk
BEGIN {
    OFS = ","
    print "H1", "H2", "H3", "H4", "H5", "JTT", "JNX", "JNA", "JNO"
}
!NF || ($1 == "JTT") { next }
!/^ / {
    if (NR > 1) {
        print rec
    }
    FS = " "
    $0 = $0
    $1 = $1
    rec = $0
}
/^ / {
    FIELDWIDTHS = "12 5 5 *"
    $0 = $0
    $1 = $1
    for (i=1; i<=NF; i++) {
        gsub(/^\s+|\s+$/, "", $i)
    }
    rec = rec OFS $0
}
END {
    print rec
}
$ awk -f tst.awk file
H1,H2,H3,H4,H5,JTT,JNX,JNA,JNO
9,8,11,56507785,93,76,,98
9,8,60,3269557,58
9,8,53,7269558,150,132,71,,45-7705678
9,8,62,439559,82
$ awk -f tst.awk file | column -s, -t
H1  H2  H3  H4        H5   JTT  JNX  JNA  JNO
9   8   11  56507785  93   76        98
9   8   60  3269557   58
9   8   53  7269558   150  132  71        45-7705678
9   8   62  439559    82
Replace OFS="," with OFS="\t" or otherwise massage to suit...
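For comparison, here is the same fixed-width idea as a short Python sketch (the widths 12, 5, 5 mirror the FIELDWIDTHS above and are assumptions inferred from the sample data):

def split_fixed(line, widths=(12, 5, 5)):
    # Slice the line at fixed offsets, like gawk's FIELDWIDTHS,
    # keeping whatever remains after the last width as a final field.
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    fields.append(line[pos:].strip())
    return fields

print(split_fixed('         132  71      45-7705678'))
# -> ['132', '71', '', '45-7705678']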

neo4django with Neo4j 2.0

I'm just starting out using Neo4j and I'd like to use 2.0 (I have 2.0.1 community installed). I see that neo4django was only tested against neo4j 1.8.2-1.9.4, but have people gotten it working with 2.x? I installed the gremlin plugin but can't create or query through neo4django.
create:
In [8]: NeoProfile.objects.create(profile_id=1234)
[INFO] requests.packages.urllib3.connectionpool#214: Resetting dropped connection: localhost
---------------------------------------------------------------------------
StatusException Traceback (most recent call last)
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/django/core/management/commands/shell.pyc in <module>()
----> 1 NeoProfile.objects.create(profile_id=1234)
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/manager.pyc in create(self, **kwargs)
41
42 def create(self, **kwargs):
---> 43 return self.get_query_set().create(**kwargs)
44
45 def filter(self, *args, **kwargs):
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/query.pyc in create(self, **kwargs)
1295 if 'id' in kwargs or 'pk' in kwargs:
1296 raise FieldError("Neo4j doesn't allow node ids to be assigned.")
-> 1297 return super(NodeQuerySet, self).create(**kwargs)
1298
1299 #TODO would be awesome if this were transactional
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/django/db/models/query.pyc in create(self, **kwargs)
375 obj = self.model(**kwargs)
376 self._for_write = True
--> 377 obj.save(force_insert=True, using=self.db)
378 return obj
379
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in save(self, using, **kwargs)
315
316 def save(self, using=DEFAULT_DB_ALIAS, **kwargs):
--> 317 return super(NodeModel, self).save(using=using, **kwargs)
318
319 #alters_data
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/django/db/models/base.pyc in save(self, force_insert, force_update, using)
461 if force_insert and force_update:
462 raise ValueError("Cannot force both insert and updating in model saving.")
--> 463 self.save_base(using=using, force_insert=force_insert, force_update=force_update)
464
465 save.alters_data = True
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in save_base(self, raw, cls, origin, force_insert, force_update, using, *args, **kwargs)
331
332 is_new = self.id is None
--> 333 self._save_neo4j_node(using)
334 self._save_properties(self, self.__node, is_new)
335 self._save_neo4j_relationships(self, self.__node)
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in _save_neo4j_node(self, using)
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in trans_method(func, *args, **kw)
95 #TODO this is where generalized transaction support will go,
96 #when it's ready in neo4jrestclient
---> 97 ret = func(*args, **kw)
98 #tx.commit()
99 return ret
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in _save_neo4j_node(self, using)
359 self.__node = conn.gremlin_tx(script, types=type_hier_props,
360 indexName=self.index_name(),
--> 361 typesToIndex=type_names_to_index)
362 return self.__node
363
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/neo4jclient.pyc in gremlin_tx(self, script, **params)
177 will be wrapped in a transaction.
178 """
--> 179 return self.gremlin(script, tx=True, **params)
180
181 def cypher(self, query, **params):
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/neo4jclient.pyc in gremlin(self, script, tx, raw, **params)
166 try:
167 return send_script(include_unloaded_libraries(lib_script),
--> 168 params)
169 except LibraryCouldNotLoad:
170 if i == 0:
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/neo4jclient.pyc in send_script(s, params)
151 if raw:
152 execute_kwargs['returns'] = RETURNS_RAW
--> 153 script_rv = ext.execute_script(s, params=params, **execute_kwargs)
154 if isinstance(script_rv, basestring):
155 if LIBRARY_ERROR_REGEX.match(script_rv):
/Users/atomos/workspace/Project-Vitamin/src/neo4j-rest-client/neo4jrestclient/client.py in __call__(self, *args, **kwargs)
2313 except (ValueError, AttributeError, KeyError, TypeError):
2314 pass
-> 2315 raise StatusException(response.status_code, msg)
2316
2317 def __repr__(self):
StatusException: Code [400]: Bad Request. Bad request syntax or unsupported method.
Invalid data sent: javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: groovy.lang.MissingMethodException.setMaxBufferSize() is applicable for argument types: () values: []
query:
In [9]: NeoProfile.objects.filter(profile_id=1234)
Out[9]: ---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/django/core/management/commands/shell.pyc in <module>()
----> 1 NeoProfile.objects.filter(profile_id=1234)
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/IPython/core/displayhook.pyc in __call__(self, result)
236 self.start_displayhook()
237 self.write_output_prompt()
--> 238 format_dict = self.compute_format_data(result)
239 self.write_format_data(format_dict)
240 self.update_user_ns(result)
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/IPython/core/displayhook.pyc in compute_format_data(self, result)
148 MIME type representation of the object.
149 """
--> 150 return self.shell.display_formatter.format(result)
151
152 def write_format_data(self, format_dict):
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/IPython/core/formatters.pyc in format(self, obj, include, exclude)
124 continue
125 try:
--> 126 data = formatter(obj)
127 except:
128 # FIXME: log the exception
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/IPython/core/formatters.pyc in __call__(self, obj)
445 type_pprinters=self.type_printers,
446 deferred_pprinters=self.deferred_printers)
--> 447 printer.pretty(obj)
448 printer.flush()
449 return stream.getvalue()
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/IPython/lib/pretty.pyc in pretty(self, obj)
358 if callable(meth):
359 return meth(obj, self, cycle)
--> 360 return _default_pprint(obj, self, cycle)
361 finally:
362 self.end_group()
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/IPython/lib/pretty.pyc in _default_pprint(obj, p, cycle)
478 if getattr(klass, '__repr__', None) not in _baseclass_reprs:
479 # A user-provided repr.
--> 480 p.text(repr(obj))
481 return
482 p.begin_group(1, '<')
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/django/db/models/query.pyc in __repr__(self)
70
71 def __repr__(self):
---> 72 data = list(self[:REPR_OUTPUT_SIZE + 1])
73 if len(data) > REPR_OUTPUT_SIZE:
74 data[-1] = "...(remaining elements truncated)..."
/Users/atomos/workspace/Project-Vitamin/lib/python2.7/site-packages/django/db/models/query.pyc in __len__(self)
85 self._result_cache = list(self.iterator())
86 elif self._iter:
---> 87 self._result_cache.extend(self._iter)
88 if self._prefetch_related_lookups and not self._prefetch_done:
89 self._prefetch_related_objects()
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/query.pyc in iterator(self)
1274 using = self.db
1275 if not self.query.can_filter():
-> 1276 for model in self.query.execute(using):
1277 yield model
1278 else:
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/query.pyc in execute(self, using)
1161 conn = connections[using]
1162
-> 1163 groovy, params = self.as_groovy(using)
1164
1165 raw_result_set = conn.gremlin_tx(groovy, **params) if groovy is not None else []
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/query.pyc in as_groovy(self, using)
925 # add the typeNodeId param, either for type verification or initial
926 # type tree traversal
--> 927 cypher_params['typeNodeId'] = self.model._type_node(using).id
928
929 type_restriction_expr = """
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in _type_node(cls, using)
411 return cls.__type_node_memoized(using)
412 else:
--> 413 return cls.__type_node_classmethod(using)
414
415 #classmethod
/Users/atomos/workspace/Project-Vitamin/src/neo4django/neo4django/db/models/base.pyc in __type_node(cls, using)
394 script_rv = conn.gremlin_tx(script, types=type_hier_props)
395 except Exception, e:
--> 396 raise RuntimeError(error_message, e)
397 if not hasattr(script_rv, 'properties'):
398 raise RuntimeError(error_message + '\n\n%s' % script_rv)
RuntimeError: ('The type node for class NeoProfile could not be created in the database.', StatusException())
My model is incredibly complex:
class NeoProfile(neomodels.NodeModel):
    profile_id = neomodels.IntegerProperty(indexed=True)

Try to simulate a neural network in MATLAB by myself

I tried to create a neural network to estimate y = x ^ 2. So I created a fitting neural network and gave it some samples for input and output. Then I tried to build this network in C++, but the result is different from what I expected.
With the following inputs:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1
-2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71
and the following outputs:
0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400
441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296
1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500
2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096
4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144
169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841
900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849
1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249
3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041
I used the fitting tool network with matrix rows. Training is 70%, validation is 15% and testing is 15% as well. The number of hidden neurons is two. Then on the command line I wrote this:
purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})
Other information :
My net.b{1} is: -1.16610230053776 1.16667147712026
My net.b{2} is: 51.3266249426358
And net.IW{1} is: 0.344272596370387 0.344111217766824
net.LW{2} is: 31.7635369693519 -31.8082184881063
When my inputTest is 3, the result of this command is 16, while it should be about 9. Have I made an error somewhere?
I found the Stack Overflow post Neural network in MATLAB that describes a problem like mine, but with one small difference: there the ranges of the input and output are the same, while in my problem they are not. That answer says I need to scale the results, but how can I scale my result?
You are right about scaling. As was mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network processing functions configuration:
>> net = fitnet(2);
>> net.inputs{1}.processFcns
ans =
'removeconstantrows' 'mapminmax'
>> net.outputs{2}.processFcns
ans =
'removeconstantrows' 'mapminmax'
The second preprocessing function applied to both input/output is mapminmax with the following parameters:
>> net.inputs{1}.processParams{2}
ans =
ymin: -1
ymax: 1
>> net.outputs{2}.processParams{2}
ans =
ymin: -1
ymax: 1
which map both input and output into the range [-1,1] prior to training.
This means that the trained network expects input values in this range, and outputs values also in the same range. If you want to manually feed input to the network, and compute the output yourself, you have to scale the data at input, and reverse the mapping at the output.
One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.
You also have to pay attention that if you are dividing the data into training/testing/validation sets, you must use the same split each time (probably also affected by the randomness aspect I mentioned).
Here is an example to illustrate the idea (adapted from another post of mine):
%%# data
x = linspace(-71,71,200); %# 1D input
y_model = x.^2; %# model
y = y_model + 10*randn(size(x)).*x; %# add some noise
%%# create ANN, train, simulate
net = fitnet(2); %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);
%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on
%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);
%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} ); %# output layer
%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);
%# compare against MATLAB output
max( abs(out - y_hat) ) %# this should be zero (or in the order of `eps`)
I opted to use the mapminmax function, but you could have done that manually as well. The formula is a pretty simple linear mapping:
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
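For completeness, the same mapping and its inverse as a short Python sketch (a plain transcription of the formula above, not MATLAB's actual mapminmax implementation):

def map_minmax(x, xmin, xmax, ymin=-1.0, ymax=1.0):
    # Linearly map x from [xmin, xmax] into [ymin, ymax].
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

def map_minmax_inverse(y, xmin, xmax, ymin=-1.0, ymax=1.0):
    # Reverse the mapping: take y in [ymin, ymax] back to [xmin, xmax].
    return (y - ymin) * (xmax - xmin) / (ymax - ymin) + xmin

y = map_minmax(3.0, -71, 71)
print(y, map_minmax_inverse(y, -71, 71))  # ~0.042, then back to ~3.0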
