glTexImage2D fails on an OpenGL ES 3.0 context - iOS

I am implementing a native WebGL context compatible with HTML5. Currently I support the WebGL 1.0 APIs.
On iOS I create the EAGLContext with kEAGLRenderingAPIOpenGLES3. Other GL calls work fine, but this call fails:
glTexImage2D(3553, 0, 6408, 144, 108, 0, 6408, 5126, null), glError()=1282
If I change the EAGLContext to OpenGL ES 2.0, everything works fine.
All parameter values passed to glTexImage2D are identical in both cases, so why does this call fail when I create the context as ES 3.0 but succeed when the context is ES 2.0?
Below are the dumped GL calls. The only difference is that when I create the EAGLContext at the GLES3 API level there is a glError 1282; if the context is created at the GLES2 API level, everything works fine.
The first two glTexImage2D calls use GL_UNSIGNED_BYTE; the failing one uses GL_FLOAT. But an ES 3.0 context should support GL_FLOAT.
17:26:24.683200 Will setup FBOs.
17:26:24.684360 Setup FBOs done.
17:26:24.694778 glCreateTexture()=1
17:26:24.694981 glBindTexture(3553, 1)
17:26:24.695079 glTexParameteri(3553, 10242, 10497)
17:26:24.695142 glTexParameteri(3553, 10243, 10497)
17:26:24.695266 glTexParameteri(3553, 10241, 9985)
17:26:24.695313 glTexParameteri(3553, 10240, 9729)
17:26:24.695414 glTexParameterf(3553, 34046, 1.000000)
17:26:24.695414 glTexImage2D(3553, 0, 6408, 2, 2, 0, 6408, 5121, null)
17:26:24.695414 glTexImage2D(3553, 1, 6408, 1, 1, 0, 6408, 5121, null)
17:26:24.696141 [Buf:GL_UNSIGNED_BYTE:u8] 16, 16, 1
17:26:24.696961 glTexImage2D(3553, 0, 6408, 2, 2, 0, 6408, 5121, [16])
17:26:24.697674 glGenBuffers()=1
17:26:24.697862 glGenBuffers()=2
17:26:24.702478 glGenBuffers()=3
17:26:24.702547 glGenBuffers()=4
17:26:24.702675 glGenBuffers()=5
17:26:24.702734 glGenBuffers()=6
17:26:24.722429 glGenBuffers()=7
17:26:24.722589 glBindBuffer(34962, 7)
17:26:24.722697 glBufferData(34962, [65536], null, 35048)
17:26:24.722758 glGenBuffers()=8
17:26:24.722806 glBindBuffer(34962, 8)
17:26:24.722862 glBufferData(34962, [65536], null, 35048)
17:26:24.723104 createVertexArrayOES(1)
17:26:24.723690 glGenBuffers()=9
17:26:24.723743 glBindBuffer(34962, 9)
17:26:24.723799 glBufferData(34962, [2304000], null, 35048)
17:26:24.723985 glGenBuffers()=10
17:26:24.724068 glBindBuffer(34963, 10)
17:26:24.724120 glBufferData(34963, [64000], null, 35048)
17:26:24.724120 glCreateTexture()=2
17:26:24.747552 glBindTexture(3553, 2)
17:26:24.747625 glTexParameteri(3553, 10242, 33071)
17:26:24.747680 glTexParameteri(3553, 10243, 33071)
17:26:24.747733 glTexParameteri(3553, 10241, 9729)
17:26:24.747778 glTexParameteri(3553, 10240, 9729)
17:26:24.747842 glTexParameterf(3553, 34046, 1.000000)
17:26:24.747842 glTexImage2D(3553, 0, 6408, 144, 108, 0, 6408, 5126, null), glError()=1282
17:26:24.748000 glTexParameteri(3553, 10241, 9728)
17:26:24.748048 glTexParameteri(3553, 10240, 9728)
17:26:24.748120 glTexParameteri(3553, 10242, 33071)
17:26:24.748189 glTexParameteri(3553, 10243, 33071)
17:26:24.748266 glTexParameterf(3553, 34046, 1.000000)

The error occurs because the JavaScript side passed an invalid combination of internal format / format / type.
glTexImage2D(3553, 0, 6408, 144, 108, 0, 6408, 5126, null), glError()=1282
is actually glTexImage2D(3553, 0, GL_RGBA, 144, 108, 0, GL_RGBA, GL_FLOAT, null).
This combination is not valid according to https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml: in ES 3.0 a floating-point texture must be allocated with a sized internal format such as GL_RGBA32F or GL_RGBA16F, while the unsized GL_RGBA internal format only accepts the 8-bit and packed 16-bit types.
The interesting thing is that in an iOS ES 2.0 context this combination is valid, because ES 2.0 requires the internal format to match the format, and float uploads there come from the OES_texture_float extension.
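A sketch of one possible fix on the native side, assuming the texture really is meant to hold float data (the constant choice here is an assumption, not the asker's actual code): on an ES 3.0 context, allocate the texture with a sized internal format whenever the type is GL_FLOAT.

/* ES 3.0: GL_RGBA32F + GL_RGBA + GL_FLOAT is a valid combination per the
   ES 3.0 format table (GL_RGBA16F also works if half precision is enough);
   the unsized GL_RGBA internal format only accepts GL_UNSIGNED_BYTE and the
   packed 16-bit types. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 144, 108, 0, GL_RGBA, GL_FLOAT, NULL);

A WebGL 1.0 layer running on top of ES 3.0 typically performs exactly this unsized-to-sized rewrite when it sees a float upload.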

Related

How can I track the bandwidth of transfers within a Dask cluster?

I would like to perform some networking benchmarks on my Dask cluster. What is the best way to get detailed information about recent transfers?
Dashboard
Many people use Dask's dashboard for this and watch for the presence of red bars in the task stream plot.
get_task_stream
However, if you're running benchmarks, then watching a live plot during the computation may not be what you want. You can get the same information by using the dask.distributed.get_task_stream context manager.
>>> from dask.distributed import get_task_stream
>>> with get_task_stream() as ts:
...     x.compute()
>>> ts.data
[...]
This will include information about both computations and data transfers, so you'll have to sift through them a bit.
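A more self-contained sketch, where the scheduler address and the array used to generate traffic are made up for illustration:

>>> from dask.distributed import Client, get_task_stream
>>> import dask.array as da
>>> client = Client()  # or Client("tcp://scheduler-address:8786")
>>> x = da.random.random((10000, 10000), chunks=(1000, 1000)).sum()
>>> with get_task_stream() as ts:
...     x.compute()
>>> ts.data  # list of task records; transfer information is mixed in with compute records
[...]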
transfer_logs
Also, as of the time of writing this, Dask workers maintain logs of every transfer in the Worker.incoming_transfer_log and Worker.outgoing_transfer_log attributes. You could use the Client.run method to get these.
>>> client.run(lambda dask_worker: dask_worker.incoming_transfer_log)
{'tcp://192.168.1.191:50637': deque([]),
'tcp://192.168.1.191:50638': deque([]),
'tcp://192.168.1.191:50640': deque([{'start': 1558119113.3489196,
'stop': 1558119113.4012725,
'middle': 1558119113.375096,
'duration': 0.0523529052734375,
'keys': {"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 27, 0)": 463941,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 23, 0)": 464477,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 7, 0)": 463708,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 15, 0)": 464091,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 19, 0)": 464826,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 3, 0)": 463847,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 11, 0)": 464200},
'total': 3249090,
'bandwidth': 62061312.22384144,
'who': 'tcp://192.168.1.191:50642'},
{'start': 1558119113.3484848,
'stop': 1558119113.4085395,
'middle': 1558119113.3785121,
'duration': 0.060054779052734375,
'keys': {"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 12, 0)": 463485,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 24, 0)": 464183,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 16, 0)": 464061,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 4, 0)": 464161,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 8, 0)": 463925,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 0, 0)": 464214,
"('dataframe-sum-chunk-5b219ece79c8315870694c0e17df68ee', 0, 20, 0)": 464070},
'total': 3248099,
'bandwidth': 54085604.03074382,
'who': 'tcp://192.168.1.191:50637'},
This is keyed by worker, and gives the start/stop/total bytes/sender/recipient of every transfer. This solution uses internal API though, and so could change at any time.
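For example, a rough aggregate over those logs using only the fields shown above (and therefore subject to the same internal-API caveat):

>>> logs = client.run(lambda dask_worker: dask_worker.incoming_transfer_log)
>>> entries = [entry for log in logs.values() for entry in log]
>>> total_bytes = sum(entry['total'] for entry in entries)
>>> total_time = sum(entry['duration'] for entry in entries)
>>> total_bytes, total_bytes / total_time  # bytes received and a crude overall bandwidth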

iOS Swift CFSocketCallBack - Could not read data

I am trying to read data from a UDP socket. I am using Xcode 9.2 and Swift, and I want to implement the communication without external libraries. I can send data (tested with Packet Sender), but the problem is when I send a packet back and try to read it. My socket callback function gets called and the callback type matches the data callback. If I load a CFData from memory using the pointer, that operation doesn't crash; but when I then call CFDataGetLength or a similar function on that object, the app crashes.
func socketCallback(socket: CFSocket?, callbackType: CFSocketCallBackType, address: CFData?, data: UnsafeRawPointer?, info: UnsafeMutableRawPointer?) {
    if callbackType == CFSocketCallBackType.dataCallBack {
        print("dataCallback called.")
        let cf = data!.load(as: CFData.self)
        print(cf)                  // prints __NSCFData
        print(CFDataGetLength(cf)) // crashes
    }
}
The exception being thrown:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '+[__NSCFData length]: unrecognized selector sent to class 0x10ca08348'
I get the same exception for other CFData functions, not just CFDataGetLength, for example:
var bytes = [UInt8]()
print(CFDataGetBytes(cf, CFRange(location: 0, length: 3), &bytes))
I have also tried reading some data from memory as unsigned bytes, at the pointer (data) plus an offset:
var array = [UInt8]()
for i in 0...55 {
    let pointer = data! + i
    array.append(pointer.load(as: UInt8.self))
}
print(array, separator: ", ", terminator: "\n")
It prints the following:
[72, 67, 103, 5, 1, 0, 0, 0, 132, 20, 0, 0, 1, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 97, 97, 97, 0, 0, 0, 0, 0]
I noticed that the number 3 is the length of the message, and bytes 48, 49 and 50 are the data (I sent "aaa"; 97 is "a" in ASCII).
Socket initialization code:
var socket = CFSocketCreate(nil, PF_INET, SOCK_DGRAM, IPPROTO_UDP, CFSocketCallBackType.dataCallBack.rawValue, socketCallback, nil)
let runLoopSource = CFSocketCreateRunLoopSource(nil, socket, 0)
CFRunLoopAddSource(CFRunLoopGetMain(), runLoopSource, CFRunLoopMode.defaultMode)
Most of the information I found was for Objective-C, and I am not very familiar with Objective-C.
What could I have done wrong?
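A sketch of the usual bridging approach (untested against this exact setup): for a .dataCallBack the data pointer is itself the CFData object containing the packet, so bridge the pointer rather than calling load(as:), which reinterprets the object's first word (its class pointer) and hands back the __NSCFData class instead of an instance, hence the unrecognized-selector crash.

func socketCallback(socket: CFSocket?, callbackType: CFSocketCallBackType, address: CFData?, data: UnsafeRawPointer?, info: UnsafeMutableRawPointer?) {
    if callbackType == .dataCallBack, let data = data {
        // Bridge the raw pointer to the CFData it already points at.
        let packet = Unmanaged<CFData>.fromOpaque(data).takeUnretainedValue()
        let length = CFDataGetLength(packet)            // e.g. 3 for "aaa"
        var bytes = [UInt8](repeating: 0, count: length)
        CFDataGetBytes(packet, CFRange(location: 0, length: length), &bytes)
        print(bytes)                                    // e.g. [97, 97, 97]
    }
}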

Client request for TensorFlow Serving gives error "Attempting to use uninitialized value fully_connected/biases"

I created an LSTM RNN model for text classification in TensorFlow and exported the SavedModel successfully. I tested the model using the SavedModel CLI and everything seems to be working fine. However, I am now trying to create a client that can make a request and get a result. I have been following this TensorFlow Serving Inception example (more specifically inception_client.py) for reference. It works well with the Inception model, but I am not sure how to change the request for my own model. How exactly should I change the request?
My signature and saving the model:
# Build the signature_def_map.
classification_signature = signature_def_utils.build_signature_def(
    inputs={signature_constants.CLASSIFY_INPUTS: classification_inputs},
    outputs={
        signature_constants.CLASSIFY_OUTPUT_CLASSES:
            classification_outputs_classes,
    },
    method_name=signature_constants.CLASSIFY_METHOD_NAME)

legacy_init_op = tf.group(
    tf.tables_initializer(), name='legacy_init_op')

# add the sigs to the servable
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            classification_signature
    },
    assets_collection=tf.get_collection(tf.GraphKeys.ASSET_FILEPATHS),
    legacy_init_op=tf.group(assign_filename_op))
print("added meta graph and variables")
builder.save()
print("model saved")
The model takes inputs_ as its input, which is a list of lists of numbers (e.g. [[1,3,4,5,2]]).
inputs_ = tf.placeholder(tf.int32, [None, None], name="input_ints")
How I am using the SavedModel CLI (it returns the right results):
$ saved_model_cli run --dir ./python2_SavedModelFinalInputInts --tag_set serve --signature_def 'serving_default' --input_exprs inputs='[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2634, 758, 938, 579, 1868, 1894, 24, 651, 572, 32, 1847, 232]]'
More information about the SavedModel:
$ saved_model_cli show --dir ./python2_prediction_SavedModelFinalInputInts --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: inputs/input_ints:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['outputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 1)
        name: predictions/fully_connected/Sigmoid:0
  Method name is: tensorflow/serving/predict
How I am trying to create a request in the client code:
request1 = predict_pb2.PredictRequest()
request1.model_spec.name = 'mnist'
request1.model_spec.signature_name = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
request1.inputs[signature_constants.PREDICT_INPUTS].CopyFrom(tf.contrib.util.make_tensor_proto(input_nums, shape=[1,100],dtype=tf.int32))
response = stub.Predict(request1,1.0)
result_dict = { 'Analyst Rating': str(response.message) }
return jsonify(result_dict)
I am getting the following error:
[2017-11-29 19:03:29,318] ERROR in app: Exception on /analyst_rating [POST]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 480, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 595, in dispatch_request
resp = meth(*args, **kwargs)
File "restApi.py", line 91, in post
response = stub.Predict(request,1)
File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 309, in __call__
self._request_serializer, self._response_deserializer)
File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 195, in _blocking_unary_unary
raise _abortion_error(rpc_error_call)
AbortionError: AbortionError(code=StatusCode.FAILED_PRECONDITION, details="Attempting to use uninitialized value fully_connected/biases
[[Node: fully_connected/biases/read = Identity[T=DT_FLOAT, _class=["loc:#fully_connected/biases"], _output_shapes=[[1]], _device="/job:localhost/replica:0/task:0/cpu:0"](fully_connected/biases)]]")
127.0.0.1 - - [29/Nov/2017 19:03:29] "POST /analyst_rating HTTP/1.1" 500 -
{"message": "Internal Server Error"}
Update:
Changing the signature of the model from a classification signature to a prediction signature seemed to work, and the request now returns results. I also changed the legacy_init_op argument back to the legacy_init_op defined below, instead of the tf.group(assign_filename_op) I had initially been using for assets organization.
prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={signature_constants.PREDICT_INPUTS: prediction_inputs},
        outputs={signature_constants.PREDICT_OUTPUTS: prediction_outputs},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

# add the sigs to the servable
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature
    },
    # assets_collection=tf.get_collection(tf.GraphKeys.ASSET_FILEPATHS),
    legacy_init_op=legacy_init_op)
I am not entirely sure what the client request should look like for a model with a classification signature, or why it was not working.
(If anyone has an explanation, I will select that as the correct answer.)
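For what it's worth, a rough sketch of how the Classify API is generally called. This only applies if the exported classification_inputs tensor actually expects serialized tf.train.Example protos; the model name, the feature name 'inputs', and the timeout below are assumptions, not values taken from this model.

from tensorflow_serving.apis import classification_pb2

request = classification_pb2.ClassificationRequest()
request.model_spec.name = 'mnist'                      # must match the name the server was started with
request.model_spec.signature_name = 'serving_default'
example = request.input.example_list.examples.add()    # classification signatures are fed tf.train.Example protos
example.features.feature['inputs'].int64_list.value.extend(input_nums)
response = stub.Classify(request, 10.0)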

NACK Error message is not valid?

I have an issue with the NACK message generated by HAPI. I'm generating the NACK message as follows:
Message msg = hl7Msg.generateACK(HL7Constants.HL7_MSA_ERROR_FIELD_VALUE,
        new HL7Exception(errorMsg));
This returns the following message:
MSH|^~\&|||||20130604165513.576+0100||ACK|108|P|2.5
MSA|AE|HL7Gtw01361605B49500
ERR|^^^207&ERROR&hl70357&&errmsg
If you look at the ERR segment, it doesn't have the required info.
Is the above message valid?
I suspect it should look like this:
MSH|^~\&|||||20130604165513.576+0100||ACK|108|P|2.5
MSA|AE|HL7Gtw01361605B49500
ERR|||207|E|^errmsg
Why do I get such an invalid message? Am I doing anything wrong here?
Answer from the HAPI mailing list:
If possible, you should upgrade to the most recent version (2.1). When generateACK is called with an Exception, this version distinguishes between ERR segments as of HL7 version 2.5 (where ERR-2 and ERR-3 are populated) and before version 2.5 (where ERR-1 is used).
In any case, you can use utility classes like Terser to modify fields of the ERR segment in the ACK message as you wish. In your case you would probably have to copy values from ERR-1 to ERR-3:
Segment err = (Segment) msg.get("ERR");
// Copy the error code, text, coding system and original text from ERR-1 component 4 into ERR-3
Terser.set(err, 3, 0, 1, 1, Terser.get(err, 1, 0, 4, 1));
Terser.set(err, 3, 0, 2, 1, Terser.get(err, 1, 0, 4, 2));
Terser.set(err, 3, 0, 3, 1, Terser.get(err, 1, 0, 4, 3));
Terser.set(err, 3, 0, 9, 1, Terser.get(err, 1, 0, 4, 5));
// ERR-4 is the error severity; "E" marks an error
Terser.set(err, 4, 0, 1, 1, "E");
and optionally remove the values in ERR-1 afterwards:
Terser.set(err, 1, 0, 4, 1, "");

SQLite with iOS 5 issue

OK, this is the situation: I have the following query
INSERT INTO 'FoodListTBL' ('AutoNo','CHOCals','PrtCals', 'FatCals','CHOgram','PrtGram','FatGram','CatId', 'TimeTypeId','TotalCals','Visibl', 'IsActive','NameAr','NameEn','CountryId', 'TotalPerUnit','UnitId','PreferedBread1', 'PreferedMilk1','PreferedVeg1','PreferedFat1', 'PreferedFruit1','CauseAllergy','AllergyCatId', 'TotalLikes','NameDescEn','NameDescAr','ChoPerUnit','PrtPerUnit','FatPerUnit','Quantity','PreferedBread2','PreferedMilk2', 'PreferedVeg2','PreferedFat2', 'PreferedFruit2','IV','UV','InsertDate','InsertUser') VALUES (818,0,0, 45, 0, 0, 5, 17, 1, 45,1, 1, 'زبدة قليلة الدسم', 'Butter reduced fat', 0, 45, 14, 0, 0, 0, 0, 0, 0, 0, 0, 'Butter reduced fat', 'زبدة قليلة الدسم', 0, 0, 45, 1, 0, 0, 0, 0, 0, 492, 0, '-', '-' ),(819,0,0, 45, 0, 0, 5, 17, 1, 45,1, 1, 'زبدة', 'Butter regular', 0, 45, 4, 0, 0, 0, 0, 0, 0, 0, 0, 'Butter regular', 'زبدة', 0, 0, 45, 1, 0, 0, 0, 0, 0, 493, 1475, '-', '-')
This query executes successfully on iOS 6.x but fails on any iOS version below 6.0, even though every other insert query on other tables finishes successfully on every iOS version.
I've tried two pieces of insert code; this is one of them:
if (sqlite3_prepare_v2(database, [query UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK)
{
    if (SQLITE_DONE != sqlite3_step(compiledStatement))
        NSLog(@"Error while inserting data: '%s'", sqlite3_errmsg(database));
    else
        NSLog(@"New data inserted");
    sqlite3_reset(compiledStatement);
}
else
{
    NSLog(@"Error while inserting '%s'", sqlite3_errmsg(database));
}
sqlite3_finalize(compiledStatement);
and the result in both cases is
Error while inserting 'near ",": syntax error'
Again, this query works everywhere except on iOS < 6.0.
Any clues are appreciated.
SQLite before version 3.7.11 does not support the multi-record INSERT syntax, and iOS releases before 6.0 bundle an older SQLite, which is why only the newer devices accept this statement.
Use multiple INSERT commands, or insert the records with INSERT ... SELECT ... UNION ALL ....
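For illustration, a sketch of the UNION ALL form with the column list cut down to a handful of columns (the real statement would carry the full forty-column list from the query above):

INSERT INTO FoodListTBL (AutoNo, FatCals, NameEn, NameAr, IV)
SELECT 818, 45, 'Butter reduced fat', 'زبدة قليلة الدسم', 492
UNION ALL
SELECT 819, 45, 'Butter regular', 'زبدة', 493;

This form also works on the older SQLite bundled with iOS 5, and wrapping many single-row INSERTs in one transaction (BEGIN ... COMMIT) keeps the multiple-statement alternative fast.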
