Error on variable passing in Groovy to read JSON array - jenkins

In my Jenkins Groovy file I read the app name from a JSON file. If I assign the app name manually, the value is read, but if I supply it through a variable, the lookup fails and throws errors.
With the app name assigned manually, this works:
"${commonConfig.configuration.Cache_list.app_name}".contains("tw")
How can I use a variable in place of app_name? I tried the following, which gave me errors:
def app = "app_name"
"${commonConfig.configuration.Cache_list.$app}".contains("tw")
Sample JSON :
"Cache_list": {"app_name": ["au","sg","tw","jp"],"app2": ["tw","id"]},
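In Groovy, a dynamic key can be accessed either by quoting the GString property or by using subscript notation. A minimal sketch (parsing the sample JSON above with JsonSlurper; the `commonConfig` structure is reconstructed here for illustration):

```groovy
import groovy.json.JsonSlurper

def commonConfig = new JsonSlurper().parseText(
    '{"configuration": {"Cache_list": {"app_name": ["au","sg","tw","jp"], "app2": ["tw","id"]}}}'
)

def app = "app_name"

// A bare .$app is not valid property access; the dynamic property name
// must be a quoted GString, or use subscript notation on the map:
assert commonConfig.configuration.Cache_list."${app}".contains("tw")
assert commonConfig.configuration.Cache_list[app].contains("tw")
```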

Related

Camel stored procedure call can't use variables?

Trying to build a generic REST-to-stored-procedure bridge along this approach:
from("jetty:http://0.0.0.0:8080/{procedure}")
.to("sql-stored:${header.procedure}()");
Which gives the error
org.apache.camel.component.sql.stored.template.generated.ParseException: Encountered " <SIMPLE_EXP_TOKEN> "${header.procedure} "" at line 1, column 1.
Was expecting:
<IDENTIFIER> ...
at org.apache.camel.component.sql.stored.template.generated.SSPTParser.generateParseException(SSPTParser.java:370)
at org.apache.camel.component.sql.stored.template.generated.SSPTParser.jj_consume_token(SSPTParser.java:308)
at org.apache.camel.component.sql.stored.template.generated.SSPTParser.parse(SSPTParser.java:27)
at org.apache.camel.component.sql.stored.template.TemplateParser.parseTemplate(TemplateParser.java:41)
... 38 more
I have seen examples using header variables with sql-stored in many places, but always for binding parameters. How can I set the name of the stored procedure dynamically?
You are trying to send the message to a dynamic endpoint: the destination URI depends on the content of ${header.procedure}.
From Camel 2.16 onwards you can use "toD" to tell Camel that your destination endpoint is dynamic.
There is more information here http://camel.apache.org/how-to-use-a-dynamic-uri-in-to.html and here http://camel.apache.org/message-endpoint.html
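Applied to the route above, the fix is a one-line change (a sketch of the same route fragment; everything else stays as in the question):

```java
from("jetty:http://0.0.0.0:8080/{procedure}")
    // toD evaluates the simple expression against each exchange,
    // so the stored procedure name is resolved per message at runtime
    .toD("sql-stored:${header.procedure}()");
```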

Google dataflow: AvroIO read from file in google storage passed as runtime parameter

I want to read Avro files in my Dataflow pipeline using the Java SDK 2.x.
I schedule the Dataflow job from a Cloud Function that is triggered by files uploaded to the bucket.
Following is the code for options:
ValueProvider<String> getInputFile();
void setInputFile(ValueProvider<String> value);
I am trying to read this input file using following code:
PCollection<user> records = p.apply(
AvroIO.read(user.class)
.from(String.valueOf(options.getInputFile())));
I get the following error while running the pipeline:
java.lang.IllegalArgumentException: Unable to find any files matching RuntimeValueProvider{propertyName=inputFile, default=gs://test_bucket/user.avro, value=null}
Same code works fine in case of TextIO.
How can I read the Avro file whose upload triggers the Cloud Function that in turn starts the Dataflow pipeline?
Please try ...from(options.getInputFile())) without converting it to a string.
For simplicity, you could even define your option as simple string:
String getInputFile();
void setInputFile(String value);
You need to use simply from(options.getInputFile()): AvroIO explicitly supports reading from a ValueProvider.
Currently the code takes options.getInputFile(), which is a ValueProvider, calls Java's toString() method on it (yielding the human-readable debug string "RuntimeValueProvider{propertyName=inputFile, default=gs://test_bucket/user.avro, value=null}"), and passes that as a filename for AvroIO to read. Of course this string is not a valid filename, which is why the code currently doesn't work.
Also note that the whole point of ValueProvider is that it is placeholder for a value that is not known while constructing the pipeline and will be supplied later (potentially the pipeline will be executed several times, supplying different values) - so extracting the value of a ValueProvider at pipeline construction time is impossible by design, because there is no value. At runtime though (e.g. in a DoFn) you can extract the value by calling .get() on it.
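To illustrate the failure mode, here is a minimal, hypothetical stand-in (not Beam's actual class) for a runtime value provider. It shows why stringifying the provider yields a debug string rather than the path, while calling get() at runtime yields the real value:

```java
// Hypothetical stand-in for Beam's RuntimeValueProvider, for illustration only.
// At pipeline-construction time the value is null, so toString() can only
// produce a debug representation -- which is exactly what gets passed along
// as a "filename" when the provider is stringified instead of handed to AvroIO.
class StubRuntimeValueProvider {
    private final String value; // null until supplied at runtime

    StubRuntimeValueProvider(String value) {
        this.value = value;
    }

    // What a runtime consumer would call to obtain the real path
    String get() {
        if (value == null) {
            throw new IllegalStateException("value is only available at runtime");
        }
        return value;
    }

    @Override
    public String toString() {
        return "RuntimeValueProvider{propertyName=inputFile, value=" + value + "}";
    }
}
```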

“Inject environment variables” Jenkins 2.0

I recently upgraded to Jenkins 2.0.
I’m trying to add an "Inject environment variables" build step to a Jenkins job along the lines of this SO post, but it’s not showing up as an option.
Is this feature not present in Jenkins 2.0 (or has it always been a separate plugin)? Do I have to install another plugin, such as EnvInject?
If you are using Jenkins 2.0,
you can load a properties file (containing all the required environment variables along with their values), read every variable listed there, and inject them into the env object Jenkins provides.
Here is a method that performs the action described above.
def loadProperties(path) {
    def properties = new Properties()
    File propertiesFile = new File(path)
    // withInputStream makes sure the stream is closed after loading
    propertiesFile.withInputStream { stream ->
        properties.load(stream)
    }
    // Copy each key/value pair into the Jenkins-provided env object
    properties.each { key, value ->
        env."${key}" = "${value}"
    }
}
To call this method, pass the path of the properties file as a string.
For example, in a Jenkinsfile it can be called like this:
path = "${workspace}/pic_env_vars.properties"
loadProperties(path)
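For example, a properties file like the following (names and values are purely illustrative) would, after loadProperties runs, make env.DEPLOY_ENV and env.API_URL available in the pipeline:

```properties
# pic_env_vars.properties -- example contents
DEPLOY_ENV=staging
API_URL=https://example.com/api
```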
Please ask if you have any doubts.

Why doesn't the default attribute for number fields work for Jenkins jelly configurations?

I'm working on a Jenkins plugin where we make a call out to a remote service using Spring's RestTemplate. To configure the timeout values, I'm setting up some fields in the global configuration via the plugin's global.jelly file, with a number field as shown here:
<f:entry title="Read Timeout" field="readTimeout" description="Read timeout in ms.">
<f:number default="3000"/>
</f:entry>
Now, this works to save and retrieve the values without a problem, so it looks like everything is set up correctly for my BuildStepDescriptor. However, when I first install the update to a Jenkins instance, instead of getting 3000 in the field by default as I would expect, I am getting 0. This is the same for all the fields I'm using.
Given that the Jelly tag reference library says this attribute should be the default value, why do I keep seeing 0 when I first install the plugin?
Is there some more Java code that needs to be added to my plugin to tie the default in Jelly back to the global configuration?
I would think that when Jenkins starts, it goes to read the plugin configuration XML, fails to find a value, and falls back to a default of 0.
I have got round this in the past by setting a default in the descriptor (in Groovy); that value is then saved into the global config the first time through, and is also available if the user never visits the config page.
@Extension
static class DescriptorImpl extends AxisDescriptor {
    final String displayName = 'Selenium Capability Axis'
    String server = 'http://localhost:4444'
    Boolean sauceLabs = false
    String sauceLabsName
    Secret sauceLabsPwd
    String sauceLabsAPIURL =
        'http://saucelabs.com/rest/v1/info/platforms/webdriver'
    String sauceLabsURL = 'http://ondemand.saucelabs.com:80'
}

Net::SFTP Errors

I have been trying to download a file using Net::SFTP and it keeps failing with an error.
The file is partially downloaded, and is only 2.1 MB, so it's not a huge file. I removed the loop over the files and even tried just downloading the one file and got the same error:
yml = YAML.load_file Rails.root.join('config', 'ftp.yml')
Net::SFTP.start(yml["url"], yml["username"], password: yml["password"]) do |sftp|
  sftp.dir.glob(File.join('users', 'import'), '*.csv').each do |f|
    sftp.download!(File.join('users', 'import', f.name), Rails.root.join('processing_files', 'download_files', f.name), read_size: 1024)
  end
end
NoMethodError: undefined method `close' for #<Pathname:0x007fc8fdb50ea0>
from /[my_working_ap_dir]/gems/net-sftp-2.1.2/lib/net/sftp/operations/download.rb:331:in `on_read'
I have prayed to Google all I can and am not getting anywhere with it.
Rails.root returns a Pathname object, but the sftp code doesn't check whether it got a Pathname or a file handle; it just runs with it. When it reaches entry.sink.close it crashes, because Pathname doesn't implement close.
Pathnames are great for manipulating paths to files and directories, but they're not substitutes for file handles. You could probably tack on to_s which would return a string.
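As a quick illustration (using a hand-built Pathname here rather than Rails.root, which behaves the same way), to_s turns the Pathname into the plain String path the download call can safely use:

```ruby
require 'pathname'

# Stand-in for Rails.root; any Pathname behaves the same way
root = Pathname.new("/app")
local = root.join("processing_files", "download_files", "users.csv")

local.class  # Pathname -- what sftp.download! chokes on
local.to_s   # "/app/processing_files/download_files/users.csv" -- a plain String
```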
Here's a summary of the download call from the documentation that hints that the expected parameters should be a String:
To download a single file from the remote server, simply specify both the
remote and local paths:
downloader = sftp.download("/path/to/remote.txt", "/path/to/local.txt")
I suspect that if I dig into the code, it checks whether the parameters are strings and, if not, assumes they are IO handles.
See ri Net::SFTP::Operations::Download for more info.
Here's an excerpt from the current download! code, and you can see how the problem occurred:
def download!(remote, local=nil, options={}, &block)
require 'stringio' unless defined?(StringIO)
destination = local || StringIO.new
result = download(remote, destination, options, &block).wait
local ? result : destination.string
end
local was passed in as a Pathname. The code checks whether something was passed in, but not what it is; if nothing is passed in, it falls back to a StringIO for in-memory caching, and otherwise it assumes the object has IO-like behaviour.
Apparently you can't pass the result of Rails.root.join directly, which was causing the problem. It is odd, though, because it still managed to download part of the file.
Changed:
sftp.download!(File.join('users', 'import', f.name), Rails.root.join('processing_files', 'download_files', f.name))
To:
sftp.download!(File.join('users', 'import', f.name), File.join('processing_files', 'download_files', f.name))
The remote argument can be a Pathname object, while the local argument, when given, should be a String or else an object that responds to the #write method.
Below is the working code
local_stringified_path = Rails.root.join('processing_files', f.name).to_s
sftp.download!(Pathname.new('/users/import'), local_stringified_path)
For the curious, read on to understand this behaviour.
The NoMethodError: undefined method `close' for #<Pathname:0x007fc8fdb50ea0> is raised
in the #on_read method; below is the snippet with the relevant statements.
if response.eof?
  update_progress(:close, entry)
  entry.sink.close # the line that errors out: at EOF, the IO handle held in sink is supposed to be closed
WHAT IS entry.sink ?
We already know that the #download! method takes two args, as below:
sftp.download!(remote, local)
The given args remote and local are converted into an Entry object:
[Entry.new(remote, local, recursive?)]
and Entry is nothing but a Struct:
Entry = Struct.new(:remote, :local, :directory, :size, :handle, :offset, :sink)
Okay, then what is the sink attribute? We will get to that right away.
Once the remote file has been opened for reading, the #on_open method sets this sink attribute to a file handle.
Find the snippet below:
entry.sink = entry.local.respond_to?(:write) ? entry.local : ::File.open(entry.local, "wb")
Note that File.open is used only when the given local object doesn't implement its own #write method. In our scenario, Pathname does respond to #write, so the Pathname itself becomes the sink, and it later fails on close.
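This is easy to verify directly: Pathname responds to #write (so it passes the sink check) but not to #close (so the cleanup at EOF blows up):

```ruby
require 'pathname'

path = Pathname.new("/tmp/download.csv")

path.respond_to?(:write)  # true  -> the Pathname itself is kept as entry.sink
path.respond_to?(:close)  # false -> entry.sink.close raises NoMethodError
```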
Below are some snippets of console output I inspected between download-chunk calls while debugging; they show entry and entry.sink holding the objects discussed above.
Here I chose remote to be a Pathname object and local to be a String path, which gives entry.sink a proper File value, so the download succeeds.
0> entry
=> #<struct Net::SFTP::Operations::Download::Entry remote=#<Pathname:214010463.xml>, local="214010463.xml", directory=nil, size=nil, handle="1", offset=32000, sink=#<File:214010463.xml>>
0> entry.sink
=> #<File:214010463.xml>
