Array type in ClickHouseIO for Apache Beam (Dataflow) - google-cloud-dataflow

I am using Apache Beam to consume JSON and insert the data into ClickHouse.
I am currently having a problem with the Array data type.
Everything works fine until I add an array field:
Schema.Field.of("inputs.value", Schema.FieldType.array(Schema.FieldType.INT64).withNullable(true))
Code for the transformations
p.apply(transformNameSuffix + "ReadFromPubSub",
        PubsubIO.readStrings()
            .fromSubscription(chainConfig.getPubSubSubscriptionPrefix() + "transactions")
            .withIdAttribute(PUBSUB_ID_ATTRIBUTE))
    .apply(transformNameSuffix + "ReadFromPubSub", ParDo.of(new DoFn<String, Row>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String item = c.element();
            //System.out.print(item);
            Transaction transaction = JsonUtils.parseJson(item, Transaction.class);
            c.output(Row.withSchema(Schemas.TRANSACTIONS)
                .addValues(*****,
                    *****
                    .......
                    transaction.getInputValues()).build());
        }
    }))
    .setRowSchema(Schemas.TRANSACTIONS)
    .apply(
        ClickHouseIO.<Row>write(
                chainConfig.getClickhouseJDBCURI(),
                chainConfig.getTransactionsTable())
            .withMaxRetries(3)
            .withMaxInsertBlockSize(1)
            .withInitialBackoff(Duration.standardSeconds(5))
            .withInsertDeduplicate(true)
            .withInsertDistributedSync(false));
The method that generates the inputs
public List<Long> getInputValues() {
    List<Long> values = Lists.newArrayList();
    for (TransactionInput eachInput : inputs) {
        System.out.print(eachInput.getValue());
        values.add(eachInput.getValue());
    }
    return values;
}
The error I am getting is:
ru.yandex.clickhouse.except.ClickHouseException: ClickHouse exception, code: 33, host: 35.202.46.77, port: 8123; Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 6. Bytes expected: 15. (version 19.17.4.11 (official build))
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:58)
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:28)
at ru.yandex.clickhouse.ClickHouseStatementImpl.checkForErrorAndThrow(ClickHouseStatementImpl.java:875)
at ru.yandex.clickhouse.ClickHouseStatementImpl.sendStream(ClickHouseStatementImpl.java:851)
at ru.yandex.clickhouse.Writer.send(Writer.java:106)
at ru.yandex.clickhouse.Writer.send(Writer.java:141)
at ru.yandex.clickhouse.ClickHouseStatementImpl.sendRowBinaryStream(ClickHouseStatementImpl.java:764)
at ru.yandex.clickhouse.ClickHouseStatementImpl.sendRowBinaryStream(ClickHouseStatementImpl.java:758)
at org.apache.beam.sdk.io.clickhouse.ClickHouseIO$WriteFn.flush(ClickHouseIO.java:427)
at org.apache.beam.sdk.io.clickhouse.ClickHouseIO$WriteFn.processElement(ClickHouseIO.java:411)
at org.apache.beam.sdk.io.clickhouse.AutoValue_ClickHouseIO_WriteFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:222)
at org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:183)
at org.apache.beam.repackaged.direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78)
at org.apache.beam.runners.direct.ParDoEvaluator.processElement(ParDoEvaluator.java:216)
at org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:54)
at org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160)
at org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Throwable: Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 6. Bytes expected: 15. (version 19.17.4.11 (official build))
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:53)
... 22 more
Feb 06, 2020 9:04:38 PM org.apache.beam.sdk.io.clickhouse.ClickHouseIO$WriteFn flush
WARNING: Error writing to ClickHouse. Retry attempt[1]
ru.yandex.clickhouse.except.ClickHouseException: ClickHouse exception, code: 33, host: 35.202.46.77, port: 8123; Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 6. Bytes expected: 93. (version 19.17.4.11 (official build))
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:58)
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:28)
at ru.yandex.clickhouse.ClickHouseStatementImpl.checkForErrorAndThrow(ClickHouseStatementImpl.java:875)
at ru.yandex.clickhouse.ClickHouseStatementImpl.sendStream(ClickHouseStatementImpl.java:851)
at ru.yandex.clickhouse.Writer.send(Writer.java:106)
at ru.yandex.clickhouse.Writer.send(Writer.java:141)
at ru.yandex.clickhouse.ClickHouseStatementImpl.sendRowBinaryStream(ClickHouseStatementImpl.java:764)
at ru.yandex.clickhouse.ClickHouseStatementImpl.sendRowBinaryStream(ClickHouseStatementImpl.java:758)
at org.apache.beam.sdk.io.clickhouse.ClickHouseIO$WriteFn.flush(ClickHouseIO.java:427)
at org.apache.beam.sdk.io.clickhouse.ClickHouseIO$WriteFn.processElement(ClickHouseIO.java:411)
at org.apache.beam.sdk.io.clickhouse.AutoValue_ClickHouseIO_WriteFn$DoFnInvoker.invokeProcessElement(Unknown Source)
at org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:222)
at org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:183)
at org.apache.beam.repackaged.direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78)
at org.apache.beam.runners.direct.ParDoEvaluator.processElement(ParDoEvaluator.java:216)
at org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:54)
at org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160)
at org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Throwable: Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 6. Bytes expected: 93. (version 19.17.4.11 (official build))
at ru.yandex.clickhouse.except.ClickHouseExceptionSpecifier.specify(ClickHouseExceptionSpecifier.java:53)
... 22 more
Feb 06, 2020 9:04:39 PM org.apache.beam.sdk.io.clickhouse.ClickHouseIO$WriteFn flush
WARNING: Error writing to ClickHouse. Retry attempt[1]
ru.yandex.clickhouse.except.ClickHouseException: ClickHouse exception, code: 33, host: 35.202.46.77, port: 8123; Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 5. Bytes expected: 2641. (version 19.17.4.11 (official build)
ClickHouse schema:
CREATE TABLE IF NOT EXISTS transactions_streaming_small (
*****,
*****,
inputs Nested ( value Nullable(UInt64) ) )
ENGINE = MergeTree() PARTITION BY toYYYYMM(block_date_time)
What is the problem?
[ClickHouse version 19.17.4.11 (official build)]

Related

Kindly check this Ordinal Encoding ValueError issue

To avoid data leakage, these are the advised steps for applying a transformation:
Create an instance of the OrdinalEncoder class
Fit the encoder to the training data and transform the data using the fit_transform() method
Transform the test data using the encoder and the transform() method
I did this initially, but I kept getting an error message. I think it's because the dataset contains both categorical and numerical features.
fit_transform worked on the train dataset but kept returning an error message on the test data. It was only when I used fit_transform on the test data that the error stopped. This, I believe, is not ideal, as that would be data leakage in itself.
Is there a way to carry out the above steps on only the categorical features of the train data and successfully transform the test dataset without an error?
enc_cat = OrdinalEncoder()
enc_tag = LabelEncoder()
y_train_transf_cd = enc_tag.fit_transform(y_train_cd)
y_test_transf_cd = enc_tag.transform(y_test_cd)
x_train_transf_cd = enc_cat.fit_transform(x_train_cd)
x_test_transf_cd = enc_cat.transform(x_test_cd)
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_1316\78241529.py in <module>
6
7 x_train_transf_cd = enc_cat.fit_transform(x_train_cd)
----> 8 x_test_transf_cd = enc_cat.transform(x_test_cd)
~\anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py in transform(self, X)
928 Transformed input.
929 """
--> 930 X_int, X_mask = self._transform(
931 X, handle_unknown=self.handle_unknown, force_all_finite="allow-nan"
932 )
~\anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py in _transform(self, X, handle_unknown, force_all_finite, warn_on_unknown)
140 " during transform".format(diff, i)
141 )
--> 142 raise ValueError(msg)
143 else:
144 if warn_on_unknown:
ValueError: Found unknown categories [647, 677, 726, 778, 787, 816, 822, 823, 881, 899, 944, 951, 963, 1016, 1033, 1063, 1120, 1151, 1192, 1197, 1199, 1209, 1240, 1254, 1255, 1321, 1325, 1335, 1385, 1387, 1444, 1479, 1498, 1519, 1642, 1655, 1668, 1681, 1698, 1719, 1749, 1755, 1757, 1765, 1785, 1787, 1823, 1834, 1858, 1874, 1875, 1884, 1890, 1902, 1986, 2022, 2036, 2055, 2080, 2100, 2119, 2142, 2160, 2193, 2210, 2227, 2231, 2257, 2263, 2268, 2269, 2295, 2317, 2330, 2382, 2400, 2414, 2416, 2419,
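For future readers: the "Found unknown categories" error above typically means the test split contains category values that the encoder never saw during fit. A minimal sketch of one common way to handle this, assuming x_train_cd and x_test_cd are pandas DataFrames and scikit-learn >= 0.24 is available (the column selection below is an assumption, not the original poster's code):
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder

# Encode only the categorical columns; numeric columns pass through untouched.
cat_cols = x_train_cd.select_dtypes(include=["object", "category"]).columns.tolist()

enc_cat = ColumnTransformer(
    transformers=[("cat",
                   OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1),
                   cat_cols)],
    remainder="passthrough")

x_train_transf_cd = enc_cat.fit_transform(x_train_cd)  # fit on the training data only
x_test_transf_cd = enc_cat.transform(x_test_cd)        # unseen categories are encoded as -1
With this setup the test data is only transformed, never fitted, so there is no leakage, and categories that appear only in the test split are mapped to -1 instead of raising a ValueError.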

Reading UIDs of NFC Cards in iOS 13

I would like to retrieve the UID of MIFARE cards. I'm using an iPhone X, Xcode 11 and iOS 13.
I'm aware this wasn't possible (specifically reading the UID) until iOS 13, according to this website: https://gototags.com/blog/apple-expands-nfc-on-iphone-in-ios-13/ and this guy: https://www.reddit.com/r/apple/comments/c0gzf0/clearing_up_misunderstandings_and/
The phone's NFC reader is correctly detecting the card, however the unique identifier is always returned as empty or nil. I can read the payload, however, and (irrelevant to iOS, but) I can do this on Android, which confirms the card isn't faulty or just odd.
Apple Sample Project: https://developer.apple.com/documentation/corenfc/building_an_nfc_tag-reader_app
func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
    if case let NFCTag.miFare(tag) = tags.first! {
        session.connect(to: tags.first!) { (error: Error?) in
            let apdu = NFCISO7816APDU(instructionClass: 0, instructionCode: 0xB0, p1Parameter: 0, p2Parameter: 0, data: Data(), expectedResponseLength: 16)
            tag.queryNDEFStatus(completionHandler: { (status: NFCNDEFStatus, e: Int, error: Error?) in
                debugPrint("\(status) \(e) \(error)")
            })
            tag.sendMiFareISO7816Command(apdu) { (data, sw1, sw2, error) in
                debugPrint(data)
                debugPrint(error)
                debugPrint(tag.identifier)
                debugPrint(String(data: tag.identifier, encoding: .utf8))
            }
        }
    }
}
I'm aware of these sorts of hacks: CoreNFC not reading UID in iOS
But they are closed and only applied to iOS 11 for a short time in the past.
OK, I have an answer.
tag.identifier isn't empty -- per se -- if you examine it in Xcode's debugger it appears empty (0x00 is the value!). Its type is Data, and printing it will reveal the length of the Data but not how it's encoded. In this case it's a [UInt8], but stored as a bag of bits. I don't understand why Apple have done it this way -- it's clunky -- I'm sure they have good reasons. I would have stored it as a String -- after all, the whole point of a high-level language like Swift is to abstract us away from such hardware implementation details.
The following code will retrieve the UID from a MIFARE card:
if case let NFCTag.miFare(tag) = tags.first! {
    session.connect(to: tags.first!) { (error: Error?) in
        let apdu = NFCISO7816APDU(instructionClass: 0, instructionCode: 0xB0, p1Parameter: 0, p2Parameter: 0, data: Data(), expectedResponseLength: 16)
        tag.sendMiFareISO7816Command(apdu) { (apduData, sw1, sw2, error) in
            let tagUIDData = tag.identifier
            var byteData: [UInt8] = []
            tagUIDData.withUnsafeBytes { byteData.append(contentsOf: $0) }
            var uidString = ""
            for byte in byteData {
                let hexDigit = String(byte, radix: 16)
                if hexDigit.count == 1 { // pad single hex digits with a leading zero
                    uidString.append("0\(hexDigit)")
                } else {
                    uidString.append(hexDigit)
                }
            }
            debugPrint("\(byteData) converted to Tag UID: \(uidString)")
        }
    }
}
I know you have said that it returns nil, but for clarity for future readers:
Assuming it is not a FeliCa tag, it should be in the identifier field when it is detected:
func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
    if case let NFCTag.miFare(tag) = tags.first! {
        print(tag.identifier as NSData)
    }
}
But in your case, it's empty (see edit below). For most tags the APDU to get the UID of a tag is:
0xff // Class
0xca // INS
0x00 // P1
0x00 // P2
0x00 // Le
so you could try using tag.sendMiFareCommand to send that command manually.
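A minimal sketch of what sending that command manually might look like, assuming tag is the connected NFCMiFareTag from the detection callback (whether a particular card answers this pseudo-APDU is not guaranteed):
// Hedged sketch: GET DATA pseudo-APDU (0xFF 0xCA 0x00 0x00 0x00) sent as a raw MiFare command.
let getUIDCommand = Data([0xFF, 0xCA, 0x00, 0x00, 0x00])
tag.sendMiFareCommand(commandPacket: getUIDCommand) { (response, error) in
    if let error = error {
        debugPrint("GET DATA command failed: \(error)")
        return
    }
    // Hex-encode the raw response bytes for logging.
    debugPrint(response.map { String(format: "%02x", $0) }.joined())
}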
Edit: response from the OP: it wasn't empty, but this was unclear because printing Data in Swift doesn't show its contents in the console.
In iOS 13 I was able to read the tag.identifier value for various MIFARE-family DESFire and Ultralight tags, the same as in @scott-condron's answer, but for various MIFARE Classic ICs (the unknown family member?) my console shows different error types.
Perhaps private framework APIs similar to the iOS 11 work-around in the hack you mentioned would be helpful in these cases, e.g. to intercept and amend the discovery polling routine, but I wouldn't know which ones or how to use them.
Below you can find some test results for MIFARE Classic 4K (emulation) tags, as also reported in this GitHub thread and this MIFARE support thread. Following Table 6 of Application Note #10833, the Select Acknowledge (SAK) value of 0x38 of the emulation tags below translates into 0 0 1 1 1 0 0 0 for bits 8..1, i.e. bits 6, 5, and 4 are 1, and therefore these SAK values classify as SmartMX with CLASSIC 4K as per Figure 3 of Application Note #10834.
An Infineon Classic 4K emulation successfully logs "1 tags found" with the correct UID (31:9A:2F:88), ATQA (0x0200), SAK (it detects 0x20, i.e. ISO 14443-4 protocol, and 0x18, i.e. MIFARE 4K, both part of the expected value 0x38) and the respective tag types (both Generic 4A and MiFare classified correctly), but then throws a Stack Error:
error 14:48:08.675369 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper connectTag:]:1436 Failed to connect to tag:
<NFTagInternal: 0x104e05cd0>-{length = 8, bytes = 0x7bad030077180efa}
{ Tech=A Type=Generic 4A ID={length = 4, bytes = 0x319a2f88}
SAK={length = 1, bytes = 0x20} ATQA={length = 2, bytes = 0x0200} historicalBytes={length = 0, bytes = 0x}}
:
error 14:48:08.682881 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper connectTag:]:1436 Failed to connect to tag:
<NFTagInternal: 0x104e1d600>-{length = 8, bytes = 0x81ad0300984374f3}
{ Tech=A Type=MiFare ID={length = 4, bytes = 0x319a2f88}
SAK={length = 1, bytes = 0x18} ATQA={length = 2, bytes = 0x0200} historicalBytes={length = 0, bytes = 0x}}
:
default 14:48:08.683150 +0200 nfcd 00000001 04e07470 -
[_NFReaderSession handleRemoteTagsDetected:]:445 1 tags found
default 14:48:08.685792 +0200 nfcd 00000001 04e07470 -
[_NFReaderSession connect:callback:]:507 NFC-Example
:
error 14:48:08.693429 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper connectTag:]:1436 Failed to connect to tag:
<NFTagInternal: 0x104e05cd0>-{length = 8, bytes = 0x81ad0300984374f3}
{ Tech=A Type=MiFare ID={length = 4, bytes = 0x319a2f88}
SAK=(null) ATQA=(null) historicalBytes={length = 0, bytes = 0x}}
:
error 14:48:08.694019 +0200 NFC-Example 00000002 802e2700 -
[NFCTagReaderSession _connectTag:error:]:568 Error
Domain=NFCError Code=100 "Stack Error" UserInfo={NSLocalizedDescription=Stack Error, NSUnderlyingError=0x2822a86c0
{Error Domain=nfcd Code=15 "Stack Error" UserInfo={NSLocalizedDescription=Stack Error}}}
An NXP SmartMX (Classic 4K emulation) with UID CF:3E:40:04 is discovered initially, but a reception error during the ISO 14443-4A presence check ("Proc Iso-Dep pres chk ntf: Receiption failed") continuously restarts the discovery polling until the session finally expires, possibly preventing the other SAK value 0x18 (for the MIFARE 4K tag type) from being received:
error 10:44:50.650673 +0200 nfcd Proc Iso-Dep pres chk ntf: Receiption failed
:
error 10:44:50.677470 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper disconnectTag:tagRemovalDetect:]:1448 Failed to disconnect tag:
<NFTagInternal: 0x104f09930>-{length = 8, bytes = 0x07320d00f3041861}
{ Tech=A Type=Generic 4A ID={length = 4, bytes = 0xcf3e4004}
SAK={length = 1, bytes = 0x20} ATQA={length = 2, bytes = 0x0200} historicalBytes={length = 0, bytes = 0x}}
default 10:44:50.677682 +0200 nfcd 00000001 04e04390 -
[NFDriverWrapper restartDiscovery]:1953
An actual NXP Classic 4K with UID 2D:FE:9B:87 remains undetected and throws no error. The discovery polling session for this tag simply times out after 60 seconds and logs the last 128 discovery messages transmitted (Tx) and received (Rx), among which the following pattern is repeated (and it does include the expected UID: 2D FE 9B 87):
error 11:42:19.511354 +0200 nfcd 1571305339.350902 Tx '21 03 07 03 FF 01 00 01 01 01 6F 61'
error 11:42:19.511484 +0200 nfcd 1571305339.353416 Rx '41 03 01'
error 11:42:19.511631 +0200 nfcd 1571305339.353486 Rx '00 F6 89'
error 11:42:19.511755 +0200 nfcd 1571305339.362455 Rx '61 05 14'
error 11:42:19.511905 +0200 nfcd 1571305339.362529 Rx '01 80 80 00 FF 01 09 02 00 04 2D FE 9B 87 01 18 00 00 00 00 2D 11'
error 11:42:19.512152 +0200 nfcd 1571305339.362734 Tx '21 06 01 00 44 AB'
error 11:42:19.512323 +0200 nfcd 1571305339.363959 Rx '41 06 01'
error 11:42:19.512489 +0200 nfcd 1571305339.364028 Rx '00 1D 79'
error 11:42:19.512726 +0200 nfcd 1571305339.364300 Rx '61 06 02'
error 11:42:19.512914 +0200 nfcd 1571305339.364347 Rx '00 00 EB 78'

LogisticRegression fit() function is throwing this error

I'm following the DataCamp PySpark tutorial series, and in chapter 04 (Model tuning and selection), when fitting the model, I get this error when I execute this line:
best_lr = lr.fit(training)
The error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-102-88042cb88c20> in <module>()
----> 1 best_lr = lr.fit(training)
/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/ml/base.py in fit(self, dataset, params)
130 return self.copy(params)._fit(dataset)
131 else:
--> 132 return self._fit(dataset)
133 else:
134 raise ValueError("Params must be either a param map or a list/tuple of param maps, "
/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/ml/wrapper.py in _fit(self, dataset)
286
287 def _fit(self, dataset):
--> 288 java_model = self._fit_java(dataset)
289 model = self._create_model(java_model)
290 return self._copyValues(model)
/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/ml/wrapper.py in _fit_java(self, dataset)
283 """
284 self._transfer_params_to_java()
--> 285 return self._java_obj.fit(dataset._jdf)
286
287 def _fit(self, dataset):
/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py in __call__(self, *args)
1158 answer = self.gateway_client.send_command(command)
1159 return_value = get_return_value(
-> 1160 answer, self.gateway_client, self.target_id, self.name)
1161
1162 for temp_arg in temp_args:
/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
318 raise Py4JJavaError(
319 "An error occurred while calling {0}{1}{2}.\n".
--> 320 format(target_id, ".", name), value)
321 else:
322 raise Py4JError(
Py4JJavaError: An error occurred while calling o596.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 60.0 failed 1 times, most recent failure: Lost task 2.0 in stage 60.0 (TID 86, localhost, executor driver): org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (struct<month_double_VectorAssembler_42f79ae7f99735f04859:double,air_time_double_VectorAssembler_42f79ae7f99735f04859:double,carrier_fact:vector,dest_fact:vector,plane_age_double_VectorAssembler_42f79ae7f99735f04859:double>) => vector)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.sort_addToSorter$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1092)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1083)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1018)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:809)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Values to assemble cannot be null.
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:163)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:146)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.ml.feature.VectorAssembler$.assemble(VectorAssembler.scala:146)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:99)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:98)
... 24 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2131)
at org.apache.spark.rdd.RDD$$anonfun$fold$1.apply(RDD.scala:1092)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.fold(RDD.scala:1086)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1131)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:518)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:488)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:278)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (struct<month_double_VectorAssembler_42f79ae7f99735f04859:double,air_time_double_VectorAssembler_42f79ae7f99735f04859:double,carrier_fact:vector,dest_fact:vector,plane_age_double_VectorAssembler_42f79ae7f99735f04859:double>) => vector)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.sort_addToSorter$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1092)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1083)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1018)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:809)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: org.apache.spark.SparkException: Values to assemble cannot be null.
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:163)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:146)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.ml.feature.VectorAssembler$.assemble(VectorAssembler.scala:146)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:99)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:98)
... 24 more
Tools
I'm using an online PySpark cluster with Cloudxlabs.com (trial version).
Maybe there are some NULL values in the data set. You'll have to take care of those first, as the error explains: "Values to assemble cannot be null."
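A minimal sketch of that cleanup in PySpark, assuming training is the DataFrame passed to lr.fit() and with column names guessed from the assembler error (this is not the tutorial's exact code):
# Option 1: drop rows that have a null in any column fed to the VectorAssembler.
feature_cols = ["month", "air_time", "plane_age", "carrier", "dest"]
training = training.dropna(subset=feature_cols)

# Option 2: fill numeric nulls with a constant (an imputed mean/median also works).
training = training.fillna(0, subset=["month", "air_time", "plane_age"])

best_lr = lr.fit(training)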
Apart from removing missing values, they can be imputed, e.g. replaced with the mean or median. A second option is using XGBoost for regression, which handles missing values automatically.
Here is a pandas illustration of dropping rows with missing values:
df = pd.DataFrame({'Last_Name': ['Smith', None, 'Brown'],
'First_Name': ['John', 'Mike', 'Bill'],
'Age': [35, 45, None]})
print(df)
Last_Name First_Name Age
0 Smith John 35.0
1 None Mike 45.0
2 Brown Bill NaN
df2 = df.dropna()
print(df2)
Last_Name First_Name Age
0 Smith John 35.0
XGBoost can also be applied, as shown here:
https://www.datacamp.com/community/tutorials/xgboost-in-python

How to join more than 2 regions with Apache Geode?

I've been trying to query some regions and I'm failing to join more than 2 of them. I set this up in a Java test to run the queries more easily, but it fails all the same in Pulse.
@Test
public void test_geode_join() throws QueryException {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator(HOST, LOCATOR_PORT)
        .setPoolSubscriptionEnabled(true)
        .setPdxSerializer(new MyReflectionBasedAutoSerializer())
        .create();
    {
        @SuppressWarnings("unchecked")
        SelectResults<StructImpl> r = (SelectResults<StructImpl>) cache.getQueryService()
            .newQuery("SELECT itm.itemId, bx.boxId " +
                "FROM /items itm, /boxs bx " +
                "WHERE itm.boxId = bx.boxId " +
                "LIMIT 5")
            .execute();
        for (StructImpl v : r) {
            System.out.println(v);
        }
    }
    {
        @SuppressWarnings("unchecked")
        SelectResults<StructImpl> r = (SelectResults<StructImpl>) cache.getQueryService()
            .newQuery("SELECT bx.boxId, rm.roomId " +
                "FROM /boxs bx, /rooms rm " +
                "WHERE bx.roomId = rm.roomId " +
                "LIMIT 5")
            .execute();
        for (StructImpl v : r) {
            System.out.println(v);
        }
    }
    {
        // This one fails
        @SuppressWarnings("unchecked")
        SelectResults<StructImpl> r = (SelectResults<StructImpl>) cache.getQueryService()
            .newQuery("SELECT itm.itemId, bx.boxId, rm.roomId " +
                "FROM /items itm, /boxs bx, /rooms rm " +
                "WHERE itm.boxId = bx.boxId " +
                "AND bx.roomId = rm.roomId " +
                "LIMIT 5")
            .execute();
        for (StructImpl v : r) {
            System.out.println(v);
        }
    }
}
The first 2 queries work fine and respond in an instant, but the last query hangs until it times out. I get the following logs:
[warn 2018/02/06 17:33:17.155 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname:31902: Connection[hostname:31902]#1978504976)
[warn 2018/02/06 17:33:27.333 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname2:31902: Connection[hostname2:31902]#1620459733 attempt=2)
[warn 2018/02/06 17:33:37.588 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname3:31902: Connection[hostname3:31902]#422409467 attempt=3)
[warn 2018/02/06 17:33:37.825 CET <main> tid=0x1] Pool unexpected socket timed out on client connection=Pooled Connection to hostname3:31902: Connection[hostname3:31902]#422409467 attempt=3). Server unreachable: could not connect after 3 attempts
[info 2018/02/06 17:33:37.840 CET <Distributed system shutdown hook> tid=0xd] VM is exiting - shutting down distributed system
[info 2018/02/06 17:33:37.840 CET <Distributed system shutdown hook> tid=0xd] GemFireCache[id = 1839168128; isClosing = true; isShutDownAll = false; created = Tue Feb 06 17:33:05 CET 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]: Now closing.
[info 2018/02/06 17:33:37.887 CET <Distributed system shutdown hook> tid=0xd] Destroying connection pool DEFAULT
And it ends up crashing
org.apache.geode.cache.client.ServerConnectivityException: Pool unexpected socket timed out on client connection=Pooled Connection to hostname3:31902: Connection[hostname3:31902]#422409467 attempt=3). Server unreachable: could not connect after 3 attempts
at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:798)
at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:623)
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:174)
at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:115)
at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:763)
at org.apache.geode.cache.client.internal.QueryOp.execute(QueryOp.java:58)
at org.apache.geode.cache.client.internal.ServerProxy.query(ServerProxy.java:70)
at org.apache.geode.cache.query.internal.DefaultQuery.executeOnServer(DefaultQuery.java:456)
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:338)
at org.apache.geode.cache.query.internal.DefaultQuery.execute(DefaultQuery.java:319)
at local.test.geode.GeodeTest.test_geode_join(GeodeTest.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
I tried setting the timeouts to 60 seconds, but I'm still not getting any results.
All regions are configured like this:
Type | Name | Value
------ | --------------- | --------------------
Region | data-policy | PERSISTENT_REPLICATE
| disk-store-name | regionDiskStore1
| size | 1173
| scope | distributed-ack
Am I missing anything here?
Based on all the information provided, it looks like you are doing everything correctly. I tried to reproduce this in a simple (similar) test, and the test returns 5 results. However, if one of the predicates rarely matches, it could cause the query to take a lot longer to join enough rows to find a tuple that matches.
Below is a sample test that does not have an issue, but if I modify the test to put only portfolios with ID = -1 into region3, then the test "hangs" trying to find 5 rows that fulfill the search criteria (it has to join 1000 * 1000 * 1000 rows, which takes a while). In the end the query will not find any p3.ID = p1.ID. Is it possible that the itm.boxId values just do not match bx.boxId often enough, so it takes a lot longer to find ones that do?
public void testJoinMultipleReplicatePersistentRegionsWithLimitClause() throws Exception {
    String regionName = "portfolio";
    Cache cache = serverStarterRule.getCache();
    assertNotNull(cache);
    Region region1 =
        cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT).create(regionName + 1);
    Region region2 =
        cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT).create(regionName + 2);
    Region region3 =
        cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT).create(regionName + 3);
    for (int i = 0; i < 1000; i++) {
        Portfolio p = new Portfolio(i);
        region1.put(i, p);
        region2.put(i, p);
        region3.put(i, p); // change this line to region3.put(i, new Portfolio(-1)) to make the query take much longer
    }
    QueryService queryService = cache.getQueryService();
    SelectResults results = (SelectResults) queryService
        .newQuery("select p1.ID, p2.ID, p3.ID from /portfolio1 p1, /portfolio2 p2, /portfolio3 p3 where p1.ID = p2.ID and p3.ID = p1.ID limit 5")
        .execute();
    assertEquals(5, results.size());
}
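As an aside, and not something the answer above relies on: when the joined regions are large, Geode join performance usually depends on having indexes on the join fields. A hedged sketch, reusing the server-side cache from the test above and the question's region and field names (the index names are illustrative, and createIndex throws checked exceptions, so it belongs inside a method that declares them or a try/catch):
// Hypothetical indexes on the join fields used by the failing query.
QueryService qs = cache.getQueryService();
qs.createIndex("itemBoxIdIndex", "itm.boxId", "/items itm");
qs.createIndex("boxBoxIdIndex", "bx.boxId", "/boxs bx");
qs.createIndex("boxRoomIdIndex", "bx.roomId", "/boxs bx");
qs.createIndex("roomRoomIdIndex", "rm.roomId", "/rooms rm");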

Symfony, in remote host: Error 500. Unknown record property / related component "algorithm" on "sfGuardUser"

After deploying, I get the error below after logging in.
Symfony 1.3, sfDoctrineGuardPlugin. And I have this schema.yml in config/doctrine:
Usuario:
  inheritance:
    extends: sfGuardUser
    type: simple
  columns:
    username:
      type: string(128)
      notnull: false
      unique: true
    nombre_apellidos: string(60)
    sexo: string(5)
    fecha_nac: date
    provincia: string(60)
    localidad: string(255)
    email_address: string(255)
    fotografia: string(255)
    avatar: string(255)
    avatar_mensajes: string(255)
  relations:
    Usuario:
      local: user1_id
      foreign: user2_id
      refClass: AmigoUsuario
      equal: true
500 | Internal Server Error | Doctrine_Record_UnknownPropertyException Unknown record property / related component "algorithm" on "sfGuardUser" stack trace
* at ()
in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record/Filter/Standard.php line 55 ...
52. */
53. public function filterGet(Doctrine_Record $record, $name)
54. {
55. throw new Doctrine_Record_UnknownPropertyException(sprintf('Unknown record property / related component "%s" on "%s"', $name, get_class($record)));
56. }
57. }
* at Doctrine_Record_Filter_Standard->filterGet(object('sfGuardUser'), 'algorithm')
in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record.php line 1382 ...
1379. $success = false;
1380. foreach ($this->_table->getFilters() as $filter) {
1381. try {
1382. $value = $filter->filterGet($this, $fieldName);
1383. $success = true;
1384. } catch (Doctrine_Exception $e) {}
1385. }
* at Doctrine_Record->_get('algorithm', 1)
in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record.php line 1337 ...
1334. return $this->$accessor($load);
1335. }
1336. }
1337. return $this->_get($fieldName, $load);
1338. }
1339.
1340. protected function _get($fieldName, $load = true)
* at Doctrine_Record->get('algorithm')
in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/record/sfDoctrineRecord.class.php line 212 ...
209. return call_user_func_array(
210. array($this, $verb),
211. array_merge(array($entityName), $arguments)
212. );
213. } else {
214. $failed = true;
215. }
* at sfDoctrineRecord->__call(array(object('sfGuardUser'), 'get'), array('algorithm'))
in n/a line n/a ...
* at sfGuardUser->getAlgorithm('getAlgorithm', array())
in SF_ROOT_DIR/plugins/sfDoctrineGuardPlugin/lib/model/doctrine/PluginsfGuardUser.class.php line 96 ...
93. */
94. public function checkPasswordByGuard($password)
95. {
96. $algorithm = $this->getAlgorithm();
97. if (false !== $pos = strpos($algorithm, '::'))
98. {
99. $algorithm = array(substr($algorithm, 0, $pos), substr($algorithm, $pos + 2));
* at PluginsfGuardUser->checkPasswordByGuard()
in SF_ROOT_DIR/plugins/sfDoctrineGuardPlugin/lib/model/doctrine/PluginsfGuardUser.class.php line 83 ...
80. }
81. else
82. {
83. return $this->checkPasswordByGuard($password);
84. }
85. }
86.
* at PluginsfGuardUser->checkPassword('m')
in SF_ROOT_DIR/lib/sfGuardValidatorUserByEmail.class.php line 28 ...
25. {
26. // password is ok?
27.
28. if ($user->checkPassword($password))
29. {
30.
31. //die("entro");
* at sfGuardValidatorUserByEmail->doClean('m')
Here you also have my frontend_dev.log:
Apr 15 07:15:23 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_signin" (/login)
Apr 15 07:15:23 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_signout" (/logout)
Apr 15 07:15:23 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_password" (/request_password)
Apr 15 07:15:23 symfony [info] {sfPatternRouting} Match route "sf_guard_signin" (/login) for /login with parameters array ('module' => 'sfGuardAuth', 'action' => 'signin',)
abr 15 07:15:23 symfony [info] {sfFilterChain} Executing filter "sfRenderingFilter"
abr 15 07:15:23 symfony [info] {sfFilterChain} Executing filter "sfCommonFilter"
abr 15 07:15:23 symfony [info] {sfFilterChain} Executing filter "sfExecutionFilter"
abr 15 07:15:23 symfony [info] {sfGuardAuthActions} Call "sfGuardAuthActions->executeSignin()"
abr 15 07:15:23 symfony [info] {Doctrine_Connection_Mysql} exec : SET NAMES 'UTF8' - ()
abr 15 07:15:23 symfony [info] {Doctrine_Connection_Statement} execute : SELECT s.id AS s__id, s.username AS s__username, s.nombre_apellidos AS s__nombre_apellidos, s.sexo AS s__sexo, s.fecha_nac AS s__fecha_nac, s.provincia AS s__provincia, s.localidad AS s__localidad, s.email_address AS s__email_address, s.fotografia AS s__fotografia, s.avatar AS s__avatar, s.avatar_mensajes AS s__avatar_mensajes FROM sf_guard_user s WHERE (s.email_address = ?) LIMIT 1 - (f#m.com)
abr 15 07:15:23 symfony [err] {Doctrine_Record_UnknownPropertyException} Unknown record property / related component "algorithm" on "sfGuardUser"
abr 15 07:15:23 symfony [info] {sfWebResponse} Send status "HTTP/1.1 500 Internal Server Error"
abr 15 07:15:23 symfony [info] {sfWebResponse} Send header "Content-Type: text/html; charset=utf-8"
abr 15 07:15:23 symfony [info] {sfWebDebugLogger} Configuration 16.42 ms (8)
abr 15 07:15:23 symfony [info] {sfWebDebugLogger} Factories 123.17 ms (1)
abr 15 07:15:23 symfony [info] {sfWebDebugLogger} Action "sfGuardAuth/signin" 211.86 ms (1)
abr 15 07:15:23 symfony [info] {sfWebDebugLogger} Database (Doctrine) 0.01 ms (2)
Any idea?
Javi
If you change the field names, the entire plugin will break. You will need to roll back those changes, or rewrite every call to $user->getId(), $user->getUsername(), etc.
