What's the JSON equivalent to this Log4J2 configuration?

This is an XML configuration example from Log4J2's site:
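(Presumably the MarkerPatternSelector example from the manual; reconstructed here from the JSON attempt below, it looks like this:)
<PatternLayout>
  <MarkerPatternSelector defaultPattern="[%-5level] %c{1.} %msg%n">
    <PatternMatch key="FLOW" pattern="[%-5level] %c{1.} ====== %C{1.}.%M:%L %msg ======%n"/>
  </MarkerPatternSelector>
</PatternLayout>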
I have tried a few different JSON variations, and all of them fail with a NullPointerException or something similar.
For example, the following configuration fails:
"PatternLayout": {
"MarkerPatternSelector": {
"defaultPattern": [%-5level] %c{1.} %msg%n",
"PatternMatch": {
"key": "FLOW",
"pattern": "[%-5level] %c{1.} ====== %C{1.}.%M:%L %msg ======%n"
}
}
}
What's the correct JSON equivalent for this XML configuration?

The problem was due to the version of Log4J2 I was using (2.3); 2.6.2 supports this feature. To be more precise, here's the factory method in 2.6.2:
@PluginFactory
public static PatternLayout createLayout(
        @PluginAttribute(value = "pattern", defaultString = DEFAULT_CONVERSION_PATTERN) final String pattern,
        @PluginElement("PatternSelector") final PatternSelector patternSelector,
        @PluginConfiguration final Configuration config,
        @PluginElement("Replace") final RegexReplacement replace,
        // LOG4J2-783 use platform default by default, so do not specify defaultString for charset
        @PluginAttribute(value = "charset") final Charset charset,
        @PluginAttribute(value = "alwaysWriteExceptions", defaultBoolean = true) final boolean alwaysWriteExceptions,
        @PluginAttribute(value = "noConsoleNoAnsi", defaultBoolean = false) final boolean noConsoleNoAnsi,
        @PluginAttribute("header") final String headerPattern,
        @PluginAttribute("footer") final String footerPattern) {
    return newBuilder()
            .withPattern(pattern)
            .withPatternSelector(patternSelector)
            .withConfiguration(config)
            .withRegexReplacement(replace)
            .withCharset(charset)
            .withAlwaysWriteExceptions(alwaysWriteExceptions)
            .withNoConsoleNoAnsi(noConsoleNoAnsi)
            .withHeader(headerPattern)
            .withFooter(footerPattern)
            .build();
}
And here is how it looked in 2.3 (note the missing PatternSelector plugin element):
@PluginFactory
public static PatternLayout createLayout(
        @PluginAttribute(value = "pattern", defaultString = DEFAULT_CONVERSION_PATTERN) final String pattern,
        @PluginConfiguration final Configuration config,
        @PluginElement("Replace") final RegexReplacement replace,
        @PluginAttribute(value = "charset", defaultString = "UTF-8") final Charset charset,
        @PluginAttribute(value = "alwaysWriteExceptions", defaultBoolean = true) final boolean alwaysWriteExceptions,
        @PluginAttribute(value = "noConsoleNoAnsi", defaultBoolean = false) final boolean noConsoleNoAnsi,
        @PluginAttribute("header") final String header,
        @PluginAttribute("footer") final String footer) {
    return newBuilder()
            .withPattern(pattern)
            .withConfiguration(config)
            .withRegexReplacement(replace)
            .withCharset(charset)
            .withAlwaysWriteExceptions(alwaysWriteExceptions)
            .withNoConsoleNoAnsi(noConsoleNoAnsi)
            .withHeader(header)
            .withFooter(footer)
            .build();
}
The API docs site's URL (https://logging.apache.org/log4j/2.x) gave me the impression that all 2.x versions are compatible.
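For completeness, a full log4j2.json built around the corrected snippet above might look like this. This is an untested sketch for 2.6.2+; the Console appender named STDOUT and the root level are illustrative choices, not from the original question:
{
  "configuration": {
    "appenders": {
      "Console": {
        "name": "STDOUT",
        "PatternLayout": {
          "MarkerPatternSelector": {
            "defaultPattern": "[%-5level] %c{1.} %msg%n",
            "PatternMatch": {
              "key": "FLOW",
              "pattern": "[%-5level] %c{1.} ====== %C{1.}.%M:%L %msg ======%n"
            }
          }
        }
      }
    },
    "loggers": {
      "root": {
        "level": "info",
        "AppenderRef": { "ref": "STDOUT" }
      }
    }
  }
}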


How to override the default encrypted property prefix/suffix?

Jasypt expects encrypted properties to be wrapped in "ENC(...)".
I'm looking for a way to override the default jasypt encrypted-property prefix and suffix with custom ones like "secure[...]".
Is there a way to achieve this?
If you use the plain jasypt library, you have to write your own version of the existing EncryptableProperties class, as the default one only supports "ENC(" as the prefix and ")" as the suffix.
import java.util.Properties;
import org.jasypt.encryption.StringEncryptor;

public class EncryptableProperties extends Properties {

    private final String prefix;
    private final String suffix;
    private final StringEncryptor encryptor;

    public EncryptableProperties(final Properties defaults, final StringEncryptor encryptor,
                                 final String prefix, final String suffix) {
        super(defaults);
        this.encryptor = encryptor;
        this.prefix = prefix;
        this.suffix = suffix;
    }

    public EncryptableProperties(final Properties defaults, final StringEncryptor encryptor) {
        this(defaults, encryptor, "ENC(", ")");
    }

    @Override
    public String getProperty(String key) {
        String value = super.getProperty(key);
        return decode(value);
    }

    // Decrypt the value only if it is wrapped in the configured prefix/suffix.
    private String decode(String value) {
        if (value == null) {
            return value;
        }
        String decryptedValue = value;
        if (value.startsWith(prefix) && value.endsWith(suffix)) {
            int start = prefix.length();
            int end = value.length() - suffix.length();
            String encryptedValue = value.substring(start, end);
            decryptedValue = encryptor.decrypt(encryptedValue);
        }
        return decryptedValue;
    }
}
Usage:
Properties encryptableProperties = new EncryptableProperties(properties, stringEncryptor, "CUSTOM_PREFIX(", ")");
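A fuller sketch of the wiring, assuming a jasypt StandardPBEStringEncryptor and a hypothetical application.properties file (the master password and file name are placeholders, not from the question):
import java.io.FileInputStream;
import java.util.Properties;
import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

// Configure a standard jasypt string encryptor.
StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
encryptor.setPassword("my-master-password"); // hypothetical master password

// Use the custom secure[...] markers from the question.
Properties props = new EncryptableProperties(new Properties(), encryptor, "secure[", "]");
props.load(new FileInputStream("application.properties"));

// A property stored as db.password=secure[G6N7...] is decrypted transparently.
String dbPassword = props.getProperty("db.password");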
If you use the Spring Boot integration, you can do this with the following two properties:
jasypt.encryptor.property.prefix
jasypt.encryptor.property.suffix
The defaults are:
jasypt.encryptor.property.prefix=ENC(
jasypt.encryptor.property.suffix=)
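To get the secure[...] style from the question, the override would presumably be:
jasypt.encryptor.property.prefix=secure[
jasypt.encryptor.property.suffix=]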
These properties are available with the following library:
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>2.0.0</version>
</dependency>

DynamicDestinations in Apache Beam

I have a PCollection<String>, say "X", that I need to dump into a BigQuery table.
The table destination and its schema are in a PCollection<TableRow>, say "Y".
How do I accomplish this in the simplest manner?
I tried extracting the table and schema from "Y" and saving them in static global variables (tableName and schema, respectively). But oddly, BigQueryIO.writeTableRows() always sees the variable tableName as null, although it does get the schema. I tried logging the values of those variables and I can see the values are there for both.
Here is my pipeline code:
static String tableName;
static TableSchema schema;

PCollection<String> read = p.apply("Read from input file",
        TextIO.read().from(options.getInputFile()));

PCollection<TableRow> tableRows = p.apply(
        BigQueryIO.read().fromQuery(NestedValueProvider.of(
                options.getfilename(),
                new SerializableFunction<String, String>() {
                    @Override
                    public String apply(String filename) {
                        return "SELECT table,schema FROM `BigqueryTest.configuration` WHERE file='" + filename + "'";
                    }
                })).usingStandardSql().withoutValidation());

final PCollectionView<List<String>> dataView = read.apply(View.asList());

tableRows.apply("Convert data read from file to TableRow",
        ParDo.of(new DoFn<TableRow, TableRow>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                tableName = c.element().get("table").toString();
                String[] schemas = c.element().get("schema").toString().split(",");
                List<TableFieldSchema> fields = new ArrayList<>();
                for (int i = 0; i < schemas.length; i++) {
                    fields.add(new TableFieldSchema()
                            .setName(schemas[i].split(":")[0]).setType(schemas[i].split(":")[1]));
                }
                schema = new TableSchema().setFields(fields);
                // My code to convert data to TableRow format.
            }
        }).withSideInputs(dataView));

tableRows.apply("write to BigQuery",
        BigQueryIO.writeTableRows()
                .withSchema(schema)
                .to("ProjectID:DatasetID." + tableName)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Everything else works fine; only the BigQueryIO.write operation fails with the error that TableId is null.
I also tried using a SerializableFunction and returning the value from there, but I still get null.
Here is the code that I tried for it:
tableRows.apply("write to BigQuery",
BigQueryIO.writeTableRows()
.withSchema(schema)
.to(new GetTable(tableName))
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
public static class GetTable implements SerializableFunction<String,String> {
String table;
public GetTable() {
this.table = tableName;
}
#Override
public String apply(String arg0) {
return "ProjectId:DatasetId."+table;
}
}
I also tried using DynamicDestinations, but I get an error saying the schema is not provided. Honestly, I'm new to the concept of DynamicDestinations and I'm not sure I'm doing it correctly.
Here is the code that I tried for it:
tableRows2.apply(BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, TableRow>() {
            private static final long serialVersionUID = 1L;

            @Override
            public TableDestination getTable(TableRow dest) {
                List<TableRow> list = sideInput(bqDataView); // bqDataView contains table and schema
                String table = list.get(0).get("table").toString();
                String tableSpec = "ProjectId:DatasetId." + table;
                String tableDescription = "";
                return new TableDestination(tableSpec, tableDescription);
            }

            public String getSideInputs(PCollectionView<List<TableRow>> bqDataView) {
                return null;
            }

            @Override
            public TableSchema getSchema(TableRow destination) {
                return schema; // schema is getting added from the global variable
            }

            @Override
            public TableRow getDestination(ValueInSingleWindow<TableRow> element) {
                return null;
            }
        }.getSideInputs(bqDataView)));
Please let me know what I'm doing wrong and which path I should take.
Thank you.
Part of the reason you're having trouble is the two stages of pipeline execution. First, the pipeline is constructed on your machine; this is when all of the applications of PTransforms occur. In your first example, this is when the following lines are executed:
BigQueryIO.writeTableRows()
.withSchema(schema)
.to("ProjectID:DatasetID."+tableName)
The code within a ParDo however runs when your pipeline executes, and it does so on many machines. So the following code runs much later than the pipeline construction:
@ProcessElement
public void processElement(ProcessContext c) {
    tableName = c.element().get("table").toString();
    ...
    schema = new TableSchema().setFields(fields);
    ...
}
This means that neither the tableName nor the schema fields will be set when the BigQueryIO sink is created.
Your idea to use DynamicDestinations is correct, but you need to move the code that actually generates the schema and the destination into that class, rather than relying on global variables that aren't available on all of the machines, as sketched below.
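A minimal sketch of that shape, reusing the question's bqDataView side input and its single-row table/schema layout (both assumptions carried over from the code above; untested):
tableRows.apply(BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, String>() {
            @Override
            public List<PCollectionView<?>> getSideInputs() {
                // Register the side input so it is available on the workers.
                return Collections.<PCollectionView<?>>singletonList(bqDataView);
            }

            @Override
            public String getDestination(ValueInSingleWindow<TableRow> element) {
                // Look up the table name at execution time, on the worker.
                List<TableRow> config = sideInput(bqDataView);
                return config.get(0).get("table").toString();
            }

            @Override
            public TableDestination getTable(String tableName) {
                return new TableDestination("ProjectId:DatasetId." + tableName, "");
            }

            @Override
            public TableSchema getSchema(String tableName) {
                // Rebuild the schema from the side input instead of a global variable.
                List<TableRow> config = sideInput(bqDataView);
                String[] specs = config.get(0).get("schema").toString().split(",");
                List<TableFieldSchema> fields = new ArrayList<>();
                for (String spec : specs) {
                    String[] parts = spec.split(":");
                    fields.add(new TableFieldSchema().setName(parts[0]).setType(parts[1]));
                }
                return new TableSchema().setFields(fields);
            }
        })
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));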

Execute read operations in sequence - Apache Beam

I need to execute the operations below in sequence, as given:
PCollection<String> read = p.apply("Read Lines", TextIO.read().from(options.getInputFile()))
        .apply("Get fileName", ParDo.of(new DoFn<String, String>() {
            ValueProvider<String> fileReceived = options.getfilename();

            @ProcessElement
            public void processElement(ProcessContext c) {
                fileName = fileReceived.get().toString();
                LOG.info("File: " + fileName);
            }
        }));

PCollection<TableRow> rows = p.apply("Read from BigQuery",
        BigQueryIO.read()
                .fromQuery("SELECT table,schema FROM `DatasetID.TableID` WHERE file='" + fileName + "'")
                .usingStandardSql());
How to accomplish this in Apache Beam/Dataflow?
It seems that you want to apply BigQueryIO.read().fromQuery() to a query that depends on a value available via a property of type ValueProvider<String> in your PipelineOptions, and the value is not accessible at pipeline construction time - i.e. you are invoking your job from a template.
In that case, the proper solution is to use NestedValueProvider:
PCollection<TableRow> tableRows = p.apply(BigQueryIO.read().fromQuery(
        NestedValueProvider.of(
                options.getfilename(),
                new SerializableFunction<String, String>() {
                    @Override
                    public String apply(String filename) {
                        return "SELECT table,schema FROM `DatasetID.TableID` WHERE file='" + filename + "'";
                    }
                })));

Invalid signature when connecting to VitaDock API with Scribe library

I am using the Grails plugin oauth 2.1.0 to connect to OAuth APIs.
VitaDock requires HMAC-SHA256 to sign the base signature string, so I created a HMACSha256SignatureService.groovy to do it and customized TargetScaleApi.groovy.
HMACSha256SignatureService.groovy
import javax.crypto.*
import javax.crypto.spec.*
import org.apache.commons.codec.binary.*
import org.scribe.exceptions.*
import org.scribe.services.SignatureService
import org.scribe.utils.*

public class HMACSha256SignatureService implements SignatureService {
    private static final String EMPTY_STRING = "";
    private static final String CARRIAGE_RETURN = "\r\n";
    private static final String UTF8 = "UTF-8";
    private static final String HMAC_SHA256 = "HMACSHA256";
    private static final String METHOD = "HMAC-SHA256";

    /**
     * {@inheritDoc}
     */
    public String getSignature(String baseString, String apiSecret, String tokenSecret) {
        try {
            println baseString
            Preconditions.checkEmptyString(baseString, "Base string cannot be null or empty string");
            Preconditions.checkEmptyString(apiSecret, "Api secret cannot be null or empty string");
            return doSign(baseString, OAuthEncoder.encode(apiSecret) + '&' + OAuthEncoder.encode(tokenSecret));
        }
        catch (Exception e) {
            throw new OAuthSignatureException(baseString, e);
        }
    }

    private String doSign(String toSign, String keyString) throws Exception {
        SecretKeySpec key = new SecretKeySpec((keyString).getBytes(UTF8), HMAC_SHA256);
        Mac mac = Mac.getInstance(HMAC_SHA256);
        mac.init(key);
        byte[] bytes = mac.doFinal(toSign.getBytes(UTF8));
        String a = new String(Base64.encodeBase64(bytes)).replace(CARRIAGE_RETURN, EMPTY_STRING)
        println a
        return a;
    }

    public String getSignatureMethod() {
        return METHOD;
    }
}
TargetScaleApi.groovy
import org.scribe.builder.api.DefaultApi10a
import org.scribe.model.Token
import org.scribe.services.SignatureService

class TargetScaleApi extends DefaultApi10a {
    private static final String AUTHORIZE_URL = "https://vitacloud.medisanaspace.com/auth?oauth_token=%s"

    @Override
    public String getAccessTokenEndpoint() {
        return "https://vitacloud.medisanaspace.com/auth/accesses/verify"
    }

    @Override
    public String getAuthorizationUrl(Token requestToken) {
        return String.format(AUTHORIZE_URL, requestToken.getToken());
    }

    @Override
    public String getRequestTokenEndpoint() {
        return "https://vitacloud.medisanaspace.com/auth/unauthorizedaccesses"
    }

    @Override
    public SignatureService getSignatureService() {
        return new HMACSha256SignatureService();
    }
}
But I received an error message: invalid signature.
message: Invalid signature (jBbmlITCOBuIN3KfVB8glzv1sftrx1v7MvNyAJkiGTU%3D, expected: Ia21vjqskdBXrRE%2BngpHqaP4GJV3hfUGOt0ksGVcgk0%3D)
[Base Parameter String: oauth_consumer_key=V5BiK7kzVcefBVfJ1htu13vfreWZNDPnkzx4DG67UBG6lNe0dZ1DUClKk5XM1Y1L&oauth_nonce=897870535&oauth_signature_method=HMAC-SHA256&oauth_timestamp=1372069427&oauth_version=1.0]
[Base Signature String: POST&https%3A%2F%2Fvitacloud.medisanaspace.com%2Fauth%2Funauthorizedaccesses&oauth_consumer_key%3DV5BiK7kzVcefBVfJ1htu13vfreWZNDPnkzx4DG67UBG6lNe0dZ1DUClKk5XM1Y1L%26oauth_nonce%3D897870535%26oauth_signature_method%3DHMAC-SHA256%26oauth_timestamp%3D1372069427%26oauth_version%3D1.0]
[authorization = OAuth oauth_callback="http%3A%2F%2Flocal.mydatainnet.axonactive.vn%3A8080%2Faa-mdin-web-client-2.0.1%2Foauth%2Fcallback%3Fprovider%3Dtargetscale", oauth_signature="jBbmlITCOBuIN3KfVB8glzv1sftrx1v7MvNyAJkiGTU%3D", oauth_version="1.0", oauth_nonce="897870535", oauth_signature_method="HMAC-SHA256", oauth_consumer_key="V5BiK7kzVcefBVfJ1htu13vfreWZNDPnkzx4DG67UBG6lNe0dZ1DUClKk5XM1Y1L", oauth_timestamp="1372069427", content-type = application/x-www-form-urlencoded, cache-control = no-cache, pragma = no-cache, user-agent = Java/1.6.0_25, host = vitacloud.medisanaspace.com, accept = text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2, connection = keep-alive, content-length = 0, ]
Thanks for any help
Hang Dinh
I believe that VitaDock implements OAuth 1.0 (https://github.com/Medisana/vitadock-api/wiki/Definitions). If you are using a plugin geared towards OAuth 2, maybe that could be the source of the error.
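If the plugin turns out to be the issue, one option might be to drive the OAuth 1.0a flow directly with Scribe's own ServiceBuilder and the custom API class. A sketch; consumerKey, consumerSecret, and verifierCode are placeholders, not values from the question:
import org.scribe.builder.ServiceBuilder;
import org.scribe.model.Token;
import org.scribe.model.Verifier;
import org.scribe.oauth.OAuthService;

// Build a service around the custom API class; it will pick up
// getSignatureService() and sign requests with HMAC-SHA256.
OAuthService service = new ServiceBuilder()
        .provider(TargetScaleApi.class)
        .apiKey(consumerKey)       // placeholder consumer key
        .apiSecret(consumerSecret) // placeholder consumer secret
        .build();
Token requestToken = service.getRequestToken();
String authUrl = service.getAuthorizationUrl(requestToken);
// After the user authorizes, exchange the verifier for an access token.
Token accessToken = service.getAccessToken(requestToken, new Verifier(verifierCode));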

Executing stored procedures in Firebird using JPA NamedStoredProcedureQuery

EntityManager em = getEntityManager();
EntityTransaction etx = em.getTransaction();
etx.begin();
Query query = em.createNamedQuery("login_procedure").setParameter("param1", "user").setParameter("param2", "pw");
Integer result = 23;
try {
    System.out.println("query = " + query.getSingleResult());
} catch (Exception e) {
    result = null;
    e.printStackTrace();
}
etx.commit();
em.close();
Executing this code, I get:
[EL Warning]: 2011-02-10 17:32:16.846--UnitOfWork(1267140342)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.2.0.v20091016-r5565): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.firebirdsql.jdbc.FBSQLException: GDS Exception. 335544569. Dynamic SQL Error SQL error code = -104 Token unknown - line 1, column 36 =
Error Code: 335544569
Call: EXECUTE PROCEDURE LOGIN_PROCEDURE(USER_NAME = ?, USER_PASSWORD = ?) bind => [user, pw]
Query: DataReadQuery(name="login_procedure")
The -104 SQL error usually indicates a SQL syntax error.
Everything is processed without any error until query.getSingleResult() is called. Calling query.getResultList() doesn't change anything. I've tried several 1.x and 2.x EclipseLink versions. The Firebird DB version is 2.1.
The JPA2 declaration is:
@Entity
@NamedStoredProcedureQuery(
        name = "login_procedure",
        resultClass = void.class,
        procedureName = "LOGIN_PROCEDURE",
        returnsResultSet = false,
        parameters = {
                @StoredProcedureParameter(queryParameter = "param1", name = "USER_NAME", direction = Direction.IN, type = String.class),
                @StoredProcedureParameter(queryParameter = "param2", name = "USER_PASSWORD", direction = Direction.IN, type = String.class)
        }
)
@Table(name = "USERS")
public class Login implements Serializable {
    @Id
    private Long id;
}
UPDATE:
After tinkering a little more, I believe there might be an error in the EclipseLink implementation, as EXECUTE PROCEDURE LOGIN_PROCEDURE(USER_NAME = ?, USER_PASSWORD = ?) isn't valid Firebird 2.1 syntax for calling procedures.
By specifying name = "USER_NAME" you are making EclipseLink use the USER_NAME = ? syntax instead of just passing in the unnamed parameter. Try removing the name definition, as sketched below.
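The parameter declarations would then presumably look like this (adapted from the annotation above; untested):
@StoredProcedureParameter(queryParameter = "param1", direction = Direction.IN, type = String.class),
@StoredProcedureParameter(queryParameter = "param2", direction = Direction.IN, type = String.class)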
Inspired by this post, I've found a solution/workaround:
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

import org.eclipse.persistence.logging.SessionLog;
import org.eclipse.persistence.platform.database.FirebirdPlatform;
import org.eclipse.persistence.queries.DataReadQuery;
import org.eclipse.persistence.queries.StoredProcedureCall;
import org.eclipse.persistence.sessions.DatabaseLogin;
import org.eclipse.persistence.sessions.DatabaseRecord;
import org.eclipse.persistence.sessions.DatabaseSession;
import org.eclipse.persistence.sessions.Project;
import org.eclipse.persistence.sessions.Session;

public class JPATest {

    final Session session;

    JPATest() {
        final String DATABASE_USERNAME = "SYSDBA";
        final String DATABASE_PASSWORD = "masterkey";
        final String DATABASE_URL = "jdbc:firebirdsql:dbServer/3050:e:/my/db.fdb";
        final String DATABASE_DRIVER = "org.firebirdsql.jdbc.FBDriver";
        final DatabaseLogin login = new DatabaseLogin();
        login.setUserName(DATABASE_USERNAME);
        login.setPassword(DATABASE_PASSWORD);
        login.setConnectionString(DATABASE_URL);
        login.setDriverClassName(DATABASE_DRIVER);
        login.setDatasourcePlatform(new FirebirdPlatform());
        login.bindAllParameters();
        final Project project = new Project(login);
        session = project.createDatabaseSession();
        session.setLogLevel(SessionLog.FINE);
        ((DatabaseSession) session).login();
    }

    public static void main(String[] args) {
        final JPATest jpaTest = new JPATest();
        jpaTest.run();
    }

    protected void run() {
        testProcCursor();
    }

    /*
     * Run proc with scalar input and cursor output.
     */
    @SuppressWarnings("unchecked")
    private void testProcCursor() {
        final StoredProcedureCall call = new StoredProcedureCall();
        call.setProcedureName("LOGIN");
        call.addUnamedArgument("USER_NAME"); // .addNamedArgument doesn't work
        call.addUnamedArgument("USER_PASSWORD");
        final DataReadQuery query = new DataReadQuery();
        query.setCall(call);
        query.addArgument("USER_NAME");
        query.addArgument("USER_PASSWORD");
        final List<String> queryArgs = new ArrayList<String>();
        queryArgs.add("onlinetester");
        queryArgs.add("test");
        final List outList = (List) session.executeQuery(query, queryArgs);
        final ListIterator<DatabaseRecord> listIterator = ((List<DatabaseRecord>) outList).listIterator();
        while (listIterator.hasNext()) {
            final DatabaseRecord databaseRecord = listIterator.next();
            System.out.println("Value -->" + databaseRecord.getValues());
        }
    }
}
Apparently named parameters aren't supported in my specific configuration; using unnamed parameters in the annotations didn't solve the problem either. However, using unnamed parameters via the session API, as shown above, solved the problem for me.