I've got a simple test that demonstrates odd behaviour of a sliding window when used with TestPipeline. Basically, a bunch of strings is fed to the input, accumulated in the sliding window, then a sum aggregation is applied to count the duplicates, and finally the output of the aggregation is logged. With a sliding window of 10 minutes duration and a 5 minute period, I expected only one window to be used to store all the elements (since the next one starts 5 minutes after the first)...
public class SlidingWindowTest {

    private static PipelineOptions options = PipelineOptionsFactory.create();
    private static final Logger LOG = LoggerFactory.getLogger(SlidingWindowTest.class);

    private static class IdentityDoFn extends DoFn<KV<String, Integer>, KV<String, Integer>>
            implements DoFn.RequiresWindowAccess {

        @Override
        public void processElement(ProcessContext processContext) throws Exception {
            KV<String, Integer> item = processContext.element();
            LOG.info("~~~~~~~~~~> {} => {}", item.getKey(), item.getValue());
            LOG.info("~~~~~~~~~~~ {}", processContext.window());
            processContext.output(item);
        }
    }

    @Test
    public void whatsWrongWithSlidingWindow() {
        Pipeline p = TestPipeline.create(options);

        p.apply(Create.of("cab", "abc", "a1b2c3", "abc", "a1b2c3"))
         .apply(MapElements.via((String item) -> KV.of(item, 1))
                 .withOutputType(new TypeDescriptor<KV<String, Integer>>() {}))
         .apply(Window.<KV<String, Integer>>into(SlidingWindows.of(Duration.standardMinutes(10))
                 .every(Duration.standardMinutes(5))))
         .apply(Sum.integersPerKey())
         .apply(ParDo.of(new IdentityDoFn()));

        p.run();
    }
}
But I got 8 windows being fired instead. Is there something wrong with TestPipeline or with my understanding of how sliding windows are supposed to work?
12:19:04.566 [main] DEBUG c.g.c.d.sdk.coders.CoderRegistry - Default coder for com.google.cloud.dataflow.sdk.values.KV<java.lang.String, java.lang.Integer>: KvCoder(StringUtf8Coder, VarIntCoder)
12:19:04.566 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> abc => 2
12:19:04.567 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T19:50:00.000Z..-290308-12-21T20:00:00.000Z)
12:19:04.567 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> abc => 2
12:19:04.567 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T19:55:00.000Z..-290308-12-21T20:05:00.000Z)
12:19:04.567 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> a1b2c3 => 2
12:19:04.567 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T20:00:00.000Z..-290308-12-21T20:10:00.000Z)
12:19:04.567 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> cab => 1
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T19:50:00.000Z..-290308-12-21T20:00:00.000Z)
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> a1b2c3 => 2
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T19:50:00.000Z..-290308-12-21T20:00:00.000Z)
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> cab => 1
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T19:55:00.000Z..-290308-12-21T20:05:00.000Z)
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> abc => 2
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T20:00:00.000Z..-290308-12-21T20:10:00.000Z)
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~> cab => 1
12:19:04.568 [main] INFO c.q.m.core.SlidingWindowTest - ~~~~~~~~~~~ [-290308-12-21T20:00:00.000Z..-290308-12-21T20:10:00.000Z)
P.S.: Dataflow SDK version: 1.8.0
The expected behavior is different from what you observe, but also different from what you expect:
First, you have three different keys, so if they all fell into a single window, then you would expect three outputs.
For sliding windows of 10 minutes with a 5 minute period, every element necessarily falls into two windows. If an element arrives at minute 1 it falls into both the window from 0 to 10 and the window from -5 to 5. So you should expect six output values, two per key. It is a common pitfall to think of windows as something that updates as a pipeline runs, when in fact they are simply calculated properties of the input data, not a property of its arrival time or the pipeline's execution.
The Create transform will output all values with a timestamp of BoundedWindow.TIMESTAMP_MIN_VALUE so they should all fall into the same two windows.
Your example seems to indicate a real bug. It should not be possible for "a1b2c3" to be in the two disjoint windows that it falls in, nor for "abc" to fall into three windows, two of which are disjoint.
Incidentally, though, you would benefit from checking out DataflowAssert (called PAssert now in Beam) for testing the contents of a PCollection in a consistent and cross-runner way.
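For illustration, here is a minimal sketch (assuming the 1.x SDK's DataflowAssert, i.e. com.google.cloud.dataflow.sdk.testing.DataflowAssert, and reusing the pipeline p from the test above) of asserting the windowed sums directly instead of logging them; the expected multiset simply follows the two-windows-per-element reasoning above, so each key appears twice:

// Sketch only: same pipeline as the question, minus the logging DoFn.
PCollection<KV<String, Integer>> sums = p
        .apply(Create.of("cab", "abc", "a1b2c3", "abc", "a1b2c3"))
        .apply(MapElements.via((String item) -> KV.of(item, 1))
                .withOutputType(new TypeDescriptor<KV<String, Integer>>() {}))
        .apply(Window.<KV<String, Integer>>into(SlidingWindows.of(Duration.standardMinutes(10))
                .every(Duration.standardMinutes(5))))
        .apply(Sum.integersPerKey());

// Every element is timestamped at TIMESTAMP_MIN_VALUE and falls into two sliding
// windows, so each key/sum pair should appear exactly twice across the output.
DataflowAssert.that(sums).containsInAnyOrder(
        KV.of("cab", 1), KV.of("cab", 1),
        KV.of("abc", 2), KV.of("abc", 2),
        KV.of("a1b2c3", 2), KV.of("a1b2c3", 2));

p.run();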
I am trying to establish a JDBC (Postgres) Source to Snowflake (JDBC compliant) Sink pipeline.
In order to integrate Snowflake I needed to patch the out-of-the-box JDBC Sink app according to the guide at https://docs.spring.io/spring-cloud-stream-app-starters/docs/Einstein.SR6/reference/htmlsingle/#_patching_pre_built_applications, which I did, and I registered the result on my local SCDF stack.
However, I get "binding errors" on the Sink side of this simple pipeline.
The stacktrace is:
2020-06-22 13:19:15.762 INFO 17866 --- [container-0-C-1] o.a.k.c.consumer.internals.Fetcher : [Consumer clientId=consumer-3, groupId=poc-stream-CCABUxRjhfLYIJvcqdRl] Resetting offset for partition poc-stream-CCABUxRjhfLYIJvcqdRl.source-0 to offset 0.
2020-06-22 13:19:15.767 INFO 17866 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : partitions assigned: [poc-stream-CCABUxRjhfLYIJvcqdRl.source-0]
2020-06-22 13:19:15.774 INFO 17866 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 28782 (http) with context path ''
2020-06-22 13:19:15.775 INFO 17866 --- [ main] c.u.p.s.s.SnowflakeSinkApplication : Started SnowflakeSinkApplication in 10.866 seconds (JVM running for 11.581)
2020-06-22 13:19:51.914 WARN 17866 --- [container-0-C-1] com.zaxxer.hikari.pool.ProxyConnection : HikariPool-1 - Connection net.snowflake.client.jdbc.SnowflakeConnectionV1#5b8398d7 marked as broken because of SQLSTATE(0A000), ErrorCode(200018)
net.snowflake.client.jdbc.SnowflakeSQLException: Data type not supported for binding: Object type: class [B.
    at net.snowflake.client.jdbc.SnowflakePreparedStatementV1.setObject(SnowflakePreparedStatementV1.java:526) ~[snowflake-jdbc-3.12.7.jar!/:3.12.7]
    at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.setObject(HikariProxyPreparedStatement.java) ~[HikariCP-3.2.0.jar!/:na]
Here is my Stream definition:
source: jdbc --trigger.time-unit=SECONDS --spring.datasource.username='' --trigger.fixed-delay=1 --spring.datasource.url='' --jdbc.query='SELECT * FROM public.source_table_1 WHERE active=false' --jdbc.update='UPDATE public.source_table_1 SET active=true WHERE id in (:id)' --spring.cloud.stream.bindings.output.contentType=text/plain --spring.datasource.password='*****' | snowflake-sink --jdbc.columns=id:payload,first_name:payload,last_name:payload --spring.datasource.driver-class-name=net.snowflake.client.jdbc.SnowflakeDriver --spring.cloud.stream.bindings.input.contentType=text/plain --jdbc.table-name=target_table_1
Changing jdbc.columns to:
--jdbc.columns=id,first_name,last_name
did not help either.
My target table in Snowflake:
CREATE TABLE target_table_1 (
    id bigint NOT NULL,
    first_name varchar,
    last_name varchar,
    primary key (id)
)
The source table DDL is the same.
Any help/pointers are appreciated.
Replying to myself, as this might also be useful for the others in the universe:
I added the contentType property for both source and sink, and declared the jdbc.columns property as shown below (as opposed to the reference guide: https://github.com/spring-cloud-stream-app-starters/jdbc/tree/master/spring-cloud-starter-stream-sink-jdbc):
@Bean
public StreamApplication jdbcSource() {
    return new StreamApplication("source: jdbc")
            .addProperties(Map.of("jdbc.query", "'SELECT * FROM public.source_table_1 WHERE imported=false'"))
            .addProperties(Map.of("jdbc.update", "'UPDATE public.source_table_1 SET imported=true WHERE id in (:id)'"))
            .addProperties(Map.of("spring.datasource.username", "postgres"))
            .addProperties(Map.of("spring.datasource.password", "mysecretpassword"))
            .addProperties(Map.of("spring.datasource.url", "'jdbc:postgresql://localhost:5432/content_types'"))
            .addProperties(Map.of("trigger.time-unit", "SECONDS"))
            .addProperties(Map.of("trigger.fixed-delay", "1"))
            .addProperties(Map.of("spring.cloud.stream.bindings.output.contentType", "application/json"));
}

@Bean
public StreamApplication sfSink() {
    return new StreamApplication("snowflake-sink")
            .addProperties(Map.of("jdbc.table-name", "target_table_1"))
            .addProperties(Map.of("jdbc.initialize", "true"))
            .addProperties(Map.of("spring.cloud.stream.bindings.input.contentType", "application/json"))
            .addProperties(Map.of("jdbc.columns", "'id:new String(payload).id,first_name:new String(payload).first_name,last_name:new String(payload).last_name'"))
            .addProperties(Map.of("spring.datasource.driver-class-name", "net.snowflake.client.jdbc.SnowflakeDriver"));
}
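In case it helps anyone wiring these up: a hedged sketch, assuming the SCDF Java DSL (Stream, StreamApplication, DataFlowTemplate/DataFlowOperations) and a Data Flow server at the default local URL, of how the two applications above could be composed and deployed. The server URL and the stream name poc-stream are assumptions for illustration, not part of my actual setup.

// Sketch only: composes the jdbcSource() and sfSink() beans defined above.
DataFlowOperations dataFlowOperations = new DataFlowTemplate(URI.create("http://localhost:9393"));

Stream stream = Stream.builder(dataFlowOperations)
        .name("poc-stream")        // illustrative stream name
        .source(jdbcSource())
        .sink(sfSink())
        .create()
        .deploy();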
I have the following:
Flux<String> flux = ...
Mono<Void> mono = ...
Mono<Void> combined = operation(flux, mono);
They represent operations happening in parallel.
Now, I would like to print all elements emitted by the flux to sysout until the mono completes. What's the right operator to use here?
I've tried:
final Disposable subscribe = flux.subscribe(System.out::println);
mono.doOnSuccessOrError((o, e) -> subscribe.dispose());
But it feels clumsy; I have a feeling there might be a better way to do this. Is there?
So, Project Reactor provides some methods to log emitted results:
doOnNext(Consumer onNext)
subscribe(Consumer consumer)
log()
Here's a code example showing how they work:
Flux<String> stringFlux = Flux.just("one", "two", "three")
        .doOnNext(s -> System.out.println("On next: " + s));

Mono<String> stringMono = Mono.just("four");

stringFlux = stringFlux.concatWith(stringMono)
        .map(s -> s + " hundred")
        .log();

stringFlux.subscribe(s -> System.out.println("On next subscriber: " + s));
Result:
15:32:18.984 [main] INFO reactor.Flux.Map.1 - onSubscribe(FluxMap.MapSubscriber)
15:32:18.986 [main] INFO reactor.Flux.Map.1 - request(unbounded)
On next: one
15:32:19.005 [main] INFO reactor.Flux.Map.1 - onNext(one hundred)
On next subscriber: one hundred
On next: two
15:32:19.005 [main] INFO reactor.Flux.Map.1 - onNext(two hundred)
On next subscriber: two hundred
On next: three
15:32:19.005 [main] INFO reactor.Flux.Map.1 - onNext(three hundred)
On next subscriber: three hundred
15:32:19.006 [main] INFO reactor.Flux.Map.1 - onNext(four hundred)
On next subscriber: four hundred
15:32:19.007 [main] INFO reactor.Flux.Map.1 - onComplete()
The first message is written by the doOnNext on stringFlux, then by the log() of the combined publishers, and finally by the subscriber.
P.S. You can also log other events such as onError and onComplete.
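For the asker's exact scenario (print elements only until the mono completes), here is a small self-contained sketch. It combines the doOnNext side effect shown above with takeUntilOther, which is my own suggestion rather than something from this answer; the interval and delay publishers are just stand-ins for the real flux and mono.

import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class PrintUntilMonoCompletes {

    public static void main(String[] args) {
        // Stand-ins for the asker's flux and mono (assumptions for this sketch).
        Flux<String> flux = Flux.interval(Duration.ofMillis(100)).map(i -> "element " + i);
        Mono<Void> mono = Mono.delay(Duration.ofSeconds(1)).then();

        Mono<Void> combined = flux
                .doOnNext(System.out::println)   // print each element, as in the answer above
                .takeUntilOther(mono)            // stop consuming once the mono signals completion
                .then();                         // expose overall completion as Mono<Void>

        combined.block();
    }
}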
I'm trying to use Spring LDAP's LdapTemplate to retrieve information from an LDAP source in a REST service implementation and, while I think I have a working configuration, we're noticing intermittent stalls of up to 15 minutes when the service is hit. Logging statements show that the stall happens during the ldapTemplate.search() call.
My beans:
contextSourceTarget(org.springframework.ldap.core.support.LdapContextSource) {
    urls = ["https://someldapsource.com"]
    userDn = 'uid=someaccount,ou=xxx,cn=users,dc=org,dc=com'
    password = 'somepassword'
    pooled = true
}

dirContextValidator(org.springframework.ldap.pool2.validation.DefaultDirContextValidator)

poolConfig(org.springframework.ldap.pool2.factory.PoolConfig) {
    testOnBorrow = true
    testWhileIdle = true
}

ldapContextSource(org.springframework.ldap.pool2.factory.PooledContextSource, ref('poolConfig')) {
    contextSource = ref('contextSourceTarget')
    dirContextValidator = ref('dirContextValidator')
}

ldapTemplate(LdapTemplate, ref('ldapContextSource')) {}
I expect this application could be hitting LDAP several times concurrently (via concurrent rest calls to this app) for retrieving data from different users. Here's the code that makes that call:
List attrs = ['uid', 'otherattr1', 'otherattr2']

// this just returns a Map containing the key/value pairs of the attrs passed in here
LdapNamedContextMapper mapper = new LdapNamedContextMapper(attrs)

log.debug("getLdapUser:preLdapSearch")
List<Map> results = ldapTemplate.search(
        'cn=grouproot,cn=Groups,dc=org,dc=com',
        'uniquemember=userNameImsearchingfor',
        SearchControls.SUBTREE_SCOPE,
        attrs as String[], mapper)
log.debug("getLdapUser:postLdapSearch")
Unfortunately, at random times it seems, the timestamp difference between the preLdapSearch and postLdapSearch logs is upwards of 15 minutes. Obviously, this is bad, and it would seem to be a pool management issue.
So I turned on debug logging for packages org.springframework.ldap and org.apache.commons.pool2
And now when this happens I get the following in the logs:
2018-09-20 20:18:46.251 DEBUG appEvent="getLdapUser:preLdapSearch"
2018-09-20 20:35:03.246 DEBUG A class javax.naming.ServiceUnavailableException - not explicitly configured to be a non-transient exception - encountered; ignoring.
2018-09-20 20:35:03.249 DEBUG DirContext 'javax.naming.ldap.InitialLdapContext#1f4f37b4' failed validation with an exception.
javax.naming.ServiceUnavailableException: my.ldaphost.com:636; socket closed
at com.sun.jndi.ldap.Connection.readReply(Connection.java:454)
at com.sun.jndi.ldap.LdapClient.getSearchReply(LdapClient.java:638)
at com.sun.jndi.ldap.LdapClient.getSearchReply(LdapClient.java:638)
at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:561)
at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1985)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1844)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:392)
(LOTS OF STACK TRACE REMOVED)
2018-09-20 20:35:03.249 DEBUG Closing READ_ONLY DirContext='javax.naming.ldap.InitialLdapContext#1f4f37b4'
2018-09-20 20:35:03.249 DEBUG Closed READ_ONLY DirContext='javax.naming.ldap.InitialLdapContext#1f4f37b4'
2018-09-20 20:35:03.249 DEBUG Creating a new READ_ONLY DirContext
2018-09-20 20:35:03.787 DEBUG Created new READ_ONLY DirContext='javax.naming.ldap.InitialLdapContext#5239386d'
2018-09-20 20:35:03.838 DEBUG DirContext 'javax.naming.ldap.InitialLdapContext#5239386d' passed validation.
2018-09-20 20:35:03.890 DEBUG appEvent="getLdapUser:postLdapSearch"
Questions:
How can I find out more? I've got debug logging turned on for org.springframework.ldap and org.apache.commons.pool2
Why does it seem to take 15+ minutes to determine that a connection is stale/unusable? How can I configure that to be much shorter?
There is a good chance that the underlying LDAP system is having connection issues.
You could try adding timeouts in the connection pool settings:
max-wait - default is -1
eviction-run-interval-millis - you may want to set this to control how often to check for problems
Docs: https://docs.spring.io/spring-ldap/docs/current/reference/#pool-configuration
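For illustration, a rough sketch in plain Java of wiring those two settings into the pooled context source. The setter names below are an assumption: they presume spring-ldap's PoolConfig mirrors the commons-pool2 property names (maxWaitMillis, timeBetweenEvictionRunsMillis), so double-check them against the PoolConfig Javadoc; the values are arbitrary examples.

// Sketch only: Java equivalent of the Grails beans above, plus the two timeouts.
PoolConfig poolConfig = new PoolConfig();
poolConfig.setTestOnBorrow(true);
poolConfig.setTestWhileIdle(true);
poolConfig.setMaxWaitMillis(5000);                    // fail fast instead of waiting forever (default -1)
poolConfig.setTimeBetweenEvictionRunsMillis(60000);   // run the evictor regularly to weed out stale contexts

PooledContextSource pooledContextSource = new PooledContextSource(poolConfig);
pooledContextSource.setContextSource(contextSourceTarget);   // the LdapContextSource from the question
pooledContextSource.setDirContextValidator(new DefaultDirContextValidator());

LdapTemplate ldapTemplate = new LdapTemplate(pooledContextSource);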
This question has been asked for log4j but not log4j2: Is it safe to use the same log file by two different appenders
Technically, you can create multiple appenders in Log4j2 that write to the same file. This seems to work well.
Here's my OS / JDK:
Oracle JDK 7u45
Ubuntu LTS 14.04
Here's a sample configuration (in YAML):
Configuration:
  status: debug
  Appenders:
    RandomAccessFile:
      - name: TestA
        fileName: logs/TEST.log
        PatternLayout:
          Pattern: "%msg%n"
      - name: TestB
        fileName: logs/TEST.log
        PatternLayout:
          Pattern: "%msg%n"
  Loggers:
    Root:
      level: trace
      AppenderRef:
        - ref: TestA
        - ref: TestB
My Java sample:
final Logger root = LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
root.trace("!!! Trace World !!!");
root.debug("!!! Debug World !!!");
root.info("!!! Info World !!!");
root.warn("!!! Warn World !!!");
root.error("!!! Error World !!!");
My log file result:
20150513T112956,819 TRACE "!!! Trace World !!!"
20150513T112956,819 TRACE "!!! Trace World !!!"
20150513T112956,819 DEBUG "!!! Debug World !!!"
20150513T112956,819 DEBUG "!!! Debug World !!!"
20150513T112956,819 INFO "!!! Info World !!!"
20150513T112956,819 INFO "!!! Info World !!!"
20150513T112956,819 WARN "!!! Warn World !!!"
20150513T112956,819 WARN "!!! Warn World !!!"
20150513T112956,819 ERROR "!!! Error World !!!"
20150513T112956,819 ERROR "!!! Error World !!!"
I would like to know if Log4j2 has been built with that in mind, or if things may break (crash, lose logs, etc.) at very high concurrency.
UPDATE:
I ran the benchmark test below and there are no missing logs. Still, I'm not sure this test fully settles the question:
public class Benchmark {

    private static final int nbThreads = 32;
    private static final int iterations = 10000;

    static List<BenchmarkThread> benchmarkThreadList = new ArrayList<>(nbThreads);

    private static Logger root;

    static {
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
    }

    public static void main(String[] args) throws InterruptedException {
        root = LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);

        // Create BenchmarkThreads
        for (int i = 1; i <= nbThreads; i++) {
            benchmarkThreadList.add(new BenchmarkThread("T" + i, iterations));
        }

        root.error("-----------------------------------------------------------");
        root.error("---------------------- WARMUP ---------------------------");
        root.error("-----------------------------------------------------------");

        // Warmup loggers
        doBenchmark("WARMUP", 100);
        Thread.sleep(100);

        root.error("-----------------------------------------------------------");
        root.error("--------------------- BENCHMARK -------------------------");
        root.error("-----------------------------------------------------------");

        // Execute Benchmark
        for (int i = 0; i < nbThreads; i++) {
            benchmarkThreadList.get(i).start();
        }
        Thread.sleep(100);

        root.error("-----------------------------------------------------------");
        root.error("---------------------- FINISHED -------------------------");
        root.error("-----------------------------------------------------------");
    }

    protected static void doBenchmark(String name, int iteration) {
        for (int i = 1; i <= iteration; i++) {
            root.error("{};{}", name, i);
        }
    }

    protected static class BenchmarkThread extends Thread {

        protected final int iteration;
        protected final String name;

        public BenchmarkThread(String name, int iteration) {
            this.name = name;
            this.iteration = iteration;
        }

        @Override
        public void run() {
            Benchmark.doBenchmark(name, iteration);
        }
    }
}
UPDATE:
I did not realize that you are already using Async Loggers. In that case this is indeed a log4j2 question. :-)
The answer is yes, log4j2 and especially Async Loggers are designed with very high concurrency in mind. Multiple loggers in multiple threads can log concurrently, and the resulting log messages are put on a lock-free queue for later processing by the background thread. There is a single background thread that calls all appenders sequentially, so even if multiple appenders write to the same file, no messages are dropped and messages from each logger thread are written fully before the next message is written (no partial writes).
In case of a crash, messages that were in the queue but have not been flushed to disk yet may be lost. This is a trade-off for performance and is the case with all asynchronous logging.
If you are logging synchronously (e.g. without using Async Loggers) it becomes a question of what file I/O atomicity guarantees the JVM and OS make.
PREVIOUS ANSWER:
This is more of a JVM/OS question than a log4j2 question. Rephrased: if multiple threads concurrently write to the same file, will the resulting file contain all messages (nothing is lost, and all messages are complete and correct)? (You may want to specify your JVM vendor and version and OS name and version.)
If you are looking for a safe log4j2 configuration, consider using Async Loggers.
With Async Loggers all appenders are called sequentially in the single shared background thread, so you are sure no corruption will occur. In addition you get nice performance benefits.
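For reference, a minimal sketch of turning on Async Loggers for all loggers; this simply mirrors the static initializer already used in the benchmark above, and the same key can alternatively be placed in a log4j2.component.properties file on the classpath. The class and message below are illustrative only.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AsyncLoggingExample {

    static {
        // Must be set before Log4j2 initializes, i.e. before the first getLogger() call.
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
    }

    private static final Logger LOG = LogManager.getLogger(AsyncLoggingExample.class);

    public static void main(String[] args) {
        LOG.info("All loggers are now asynchronous; appenders run on a single background thread.");
    }
}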
I have created a simple BIRT report that selects data from two DB tables using two different DataSets. It also has got two parameters that are passed into DataSet queries.
My problem is that when I run the report from the Eclipse+BIRT environment it fetches all the data as it should, but when I run it from the Grails birt-report service, it prints only the static skeleton without the data from the DB.
First I thought that something was wrong with the parameter values I passed to the service, so I added a line to the report that printed those values. The params were passed and printed as they should be.
Then I removed the WHERE clause from my queries, but still nothing happened. The report is filled in when run from Eclipse and is empty when run through the birt-report plugin.
This is the code I use to print a report:
params.remove('action')
params.remove('controller')
params.remove('name')
params.put("userId", 10) //parameter that should be passed
params.put("resumeId", 5) //parameter that should be passed
println params;
def options = birtReportService.getRenderOption(request, 'doc')
def result=birtReportService.runAndRender(reportName, params, options)
response.setHeader("Content-disposition", "attachment; filename=" +reportName + "."+reportExt);
response.contentType = 'application/doc'
response.outputStream << result.toByteArray()
Any ideas what might be the reason?
UPD: How can I debug it to see what's really going on inside?
UPD 2: It looks like it has something to do with the app, because when I launch the report in a fresh Grails application it works fine.
The only difference between successful and failed launches that I found when looking through debug logs is this:
my app log:
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery] [219] ENTRY org.eclipse.birt.data.engine.impl.DataEngineImpl#1ed14cb8 org.eclipse.birt.data.engine.api.querydefn.QueryDefinition#729b6def org.eclipse.birt.data.engine.impl.OdaDataSetAdapter#200ecb05 {org.eclipse.datatools.connectivity.oda.util_consumerResourceIds=org.eclipse.datatools.connectivity.oda.util.ResourceIdentifiers#2b32c5b4, OdaJDBCDriverPassInConnection=Transaction-aware proxy for target Connection from DataSource [org.apache.commons.dbcp.BasicDataSource#39547eb6], HTML_RENDER_CONTEXT=org.eclipse.birt.report.engine.api.HTMLRenderContext#1251c294, org.eclipse.birt.data.query.ResultBufferSize=100}
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.DataEngineImpl] [219] PreparedQuery starts up.
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.aggregation.AggregateTable] [219] ENTRY
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.aggregation.AggregateTable] [219] RETURN
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.aggregation.AggregateTable] [219] ENTRY C:\Users\SZAGOR~1\AppData\Local\Temp\DataEngine_517033144_2\ [object Object] []
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.aggregation.AggregateTable] [219] RETURN
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.DataEngineImpl] [219] Start to prepare a PreparedQuery.
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.aggregation.AggrRegistry] [219] ENTRY 0 -1 true org.eclipse.birt.core.script.ScriptContext#57b32816
[26/03/2013 13:12:36,909] [org.eclipse.birt.data.engine.impl.aggregation.AggrRegistry] [219] RETURN
//these lines are suspicious
[26/03/2013 13:12:36,909][org.eclipse.birt.data.engine.expression.InvalidExpression] [219] InvalidExpression starts up
[26/03/2013 13:12:36,912] [org.eclipse.birt.data.engine.expression.InvalidExpression] [219] InvalidExpression starts up
[26/03/2013 13:12:36,912] [org.eclipse.birt.data.engine.impl.GroupBindingColumn] [219] ENTRY null 0 {first_name=org.eclipse.birt.data.engine.api.querydefn.Binding#f6678eba, last_name=org.eclipse.birt.data.engine.api.querydefn.Binding#77fdce94, coalesce(resume.mobile_phone, " - ")=org.eclipse.birt.data.engine.api.querydefn.Binding#9beadc7d}
and the new app log:
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG impl.PreparedDataSourceQuery - ENTRY org.eclipse.birt.data.engine.impl.DataEngineImpl#79bff971 org.eclipse.birt.data.engine.api.querydefn.QueryDefinition#1799e2e2 org.eclipse.birt.data.engine.impl.OdaDataSetAdapter#1aec9361 {org.eclipse.datatools.connectivity.oda.util_consumerResourceIds=org.eclipse.datatools.connectivity.oda.util.ResourceIdentifiers#21bfd316, OdaJDBCDriverPassInConnection=Transaction-aware proxy for target Connection from DataSource [org.apache.commons.dbcp.BasicDataSource#38bb5aa9], HTML_RENDER_CONTEXT=org.eclipse.birt.report.engine.api.HTMLRenderContext#143d2a58, org.eclipse.birt.data.query.ResultBufferSize=100}
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG impl.DataEngineImpl - PreparedQuery starts up.
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG aggregation.AggregateTable - ENTRY
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG aggregation.AggregateTable - RETURN
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG aggregation.AggregateTable - ENTRY C:\Users\SZAGOR~1\AppData\Local\Temp\DataEngine_2042624369_1\ [object Object] []
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG aggregation.AggregateTable - RETURN
2013-03-26 12:51:08,352 [http-bio-8080-exec-1] DEBUG impl.DataEngineImpl - Start to prepare a PreparedQuery.
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG aggregation.AggrRegistry - ENTRY 0 -1 true org.eclipse.birt.core.script.ScriptContext#560fc912
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG aggregation.AggrRegistry - RETURN
// here it's different
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG expression.ColumnReferenceExpression - ENTRY first_name
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG expression.ColumnReferenceExpression - RETURN
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG expression.ColumnReferenceExpression - ENTRY last_name
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG expression.ColumnReferenceExpression - RETURN
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG expression.ColumnReferenceExpression - ENTRY coalesce(resume.mobile_phone, " - ")
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG expression.ColumnReferenceExpression - RETURN
2013-03-26 12:51:08,362 [http-bio-8080-exec-1] DEBUG impl.GroupBindingColumn - ENTRY null 0 {first_name=org.eclipse.birt.data.engine.api.querydefn.Binding#f6678eba, last_name=org.eclipse.birt.data.engine.api.querydefn.Binding#77fdce94, coalesce(resume.mobile_phone, " - ")=org.eclipse.birt.data.engine.api.querydefn.Binding#9beadc7d}
The same report file was used in both cases.