Values in Informix 4GL Language

I have a field called 'UCN' which has 6 characters.
This field can contain both character and numeric values, like "A123Y5", "12345Y", or "G23561".
We need to print the data with a pipe between each character, like A|1|2|3|Y|5.
I am able to format an integer with the USING keyword, but unable to handle both together.
Please, if anyone can help.
Mukesh

I don't think there's a short cut. You need:
PRINT ucn[1], '|', ucn[2], '|', ucn[3], '|', ucn[4], '|', ucn[5], '|', ucn[6]
If it was marginally longer, you might use a loop instead; that has its own fiddliness.

If you are working for who I think you are working for, I can give you an answer using some Genero extensions to 4GL. Create a library function like ...
FUNCTION insert_between_each_char(str, delimiter)
    DEFINE str STRING
    DEFINE delimiter CHAR(1)
    DEFINE sb base.StringBuffer
    DEFINE i INTEGER

    LET sb = base.StringBuffer.create()
    CALL sb.append(str)
    FOR i = sb.getLength() TO 2 STEP -1
        CALL sb.insertAt(i, delimiter)
    END FOR
    RETURN sb.toString()
END FUNCTION
... and then your code becomes
PRINT insert_between_each_char(ucn,"|")

Here is code for the loop that Jonathan mentioned:
DEFINE
    l_result CHAR(512),
    l_sel    LIKE table.UCN,
    i        INTEGER

LET l_result = " "
LET l_sel = "A123Y5"    # or SELECT the value INTO l_sel
FOR i = 1 TO LENGTH(l_sel)
    IF i < LENGTH(l_sel) THEN
        LET l_result = l_result CLIPPED, l_sel[i], "|"
    ELSE
        LET l_result = l_result CLIPPED, l_sel[i]
    END IF
END FOR
PRINT l_result CLIPPED

Related

Can you use a colon in a key for a Lua table?

So I'm writing a program for the Minecraft mod ComputerCraft. I wanted to know if it was possible to do something like this:
tbl = {}
var = "minecraft:dirt"
tbl[var] = {pos ={0,0,0,1}}
For some reason it doesn't seem to save this table correctly, so when I go to do
print(tbl["minecraft:dirt"].pos[4])
it errors.
Can you use colons in keys?
tbl = {}
var = "minecraft:dirt"
tbl[var] = {pos ={0,0,0,1}}
print(tbl["minecraft:dirt"].pos[4])
prints 1
This is syntactically correct and should not result in any error message.
The only thing that won't work with a colon is the syntactic sugar tbl.minecraft:dirt, as Lua names may not contain colons. But if you index it as tbl["minecraft:dirt"], the colon is perfectly fine.
Long story short: yes, you can use colons in table keys.
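For instance (a minimal illustration, not the asker's actual code):
local tbl = { ["minecraft:dirt"] = { pos = {0, 0, 0, 1} } }

print(tbl["minecraft:dirt"].pos[4])   --> 1
-- print(tbl.minecraft:dirt)          -- syntax error: a Lua name cannot contain ':'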

string.format variable number of arguments

Lua's string.format is pretty straightforward, if you know what to format.
However, I'm stuck writing a function which takes a format string and a variable number of arguments to put into it.
Example:
str = " %5s %3s %6s %6s",
val = {"ttyS1", "232", "9600", "230400"}
Formatting that by hand is pretty easy:
string.format( str, val[1], val[2], val[3], val[4] )
Which is the same as:
string.format(" %5s %3s %6s %6s", "ttyS1, "232", "9600","230400")
But what if I wan't to have a fifth or sixth argument?
For example:
string.format(" %1s %2s %3s %4s %5s %6s %7s %", ... )
How can I implement a string.format with an variable number of arguments?
I want to avoid appending the values one by one because of performance issues.
The application runs on embedded MCUs.
Generate an arbitrary number of repeats of whatever format you want with string.rep if the format is the same for all arguments, or fill a table with the individual formats and use table.concat. Remember that you don't need to specify the index of an argument in the format if you don't want to reorder them.
If you just need to concatenate strings separated by spaces, use the more suitable tool: table.concat(table_of_strings, ' ').
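For example, a minimal sketch of both ideas, assuming Lua 5.2+ for table.unpack (use unpack on 5.1/LuaJIT) and made-up column widths:
local val = { "ttyS1", "232", "9600", "230400" }

-- Same format for every value: repeat one fragment, then expand the table into the call.
local fmt = string.rep(" %6s", #val)
print(string.format(fmt, table.unpack(val)))

-- Different widths per column: keep the fragments in a table and join them.
local widths = { 5, 3, 6, 6 }
local parts = {}
for i, w in ipairs(widths) do
    parts[i] = " %" .. w .. "s"
end
print(string.format(table.concat(parts), table.unpack(val)))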
You can create a table using varargs:
function foo(fmt, ...)
    local t = {...}
    return t[6] -- might be nil
end
P.S. Don't use # on the table if you expect the argument list might contain nil; use select("#", ...) instead.
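A tiny illustration of the difference, assuming the caller can pass nil in the middle of the list:
local function count_args(...)
    local t = {...}
    -- With nil holes, #t may legally report 1 or 3 here,
    -- while select("#", ...) always reports the real argument count.
    print(#t, select("#", ...))
end

count_args("a", nil, "c")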

Ruby on Rails: Checking for valid regex does not work properly, high false rate

In my application I've got a procedure which should check if an input is valid or not. You can set up a regex for this input.
But in my case it returns false instead of true. And I can't find the problem.
My code looks like this:
gaps.each_index do |i|
  if gaps[i].first.starts_with? "~"
    # regular expression
    begin
      regex = gaps[i].first[1..-1]
      # a pipe is used to separate the regex from the solution string
      if regex.include? "|"
        puts "REGEX FOUND ------------------------------------------"
        regex = regex.split("|")[0..-2].join("|")
      end
      reg = Regexp.new(regex, true)
      unless reg.match(data[i])
        puts "REGEX WRONGGGG -------------------"
        #wrong_indexes << i
      end
    rescue
    end
  else
    # normal string
    if data[i].nil? || data[i].strip != gaps[i].first.strip
      #wrong_indexes << i
    end
  end
end
An example would be:
[[~berlin|berlin]]
The part before the pipe is the regex, and the part after the pipe is the correct solution.
This easy input should return true, but it doesn't.
Does anyone see the problem?
Thank you all
EDIT
Somewhere in these lines must be the problem:
if regex.include? "|"
  puts "REGEX FOUND ------------------------------------------"
  regex = regex.split("|")[0..-2].join("|")
end
reg = Regexp.new(regex, true)
unless reg.match(data[i])
Update: Result without ~
The whole point is that you are initializing the regex using the Regexp constructor:
Constructs a new regular expression from pattern, which can be either a String or a Regexp (in which case that regexp’s options are propagated, and new options may not be specified (a change as of Ruby 1.8).
However, when you pass the regex (obtained with regex.split("|")[0..-2].join("|")) to the constructor, it is a string, so reg = Regexp.new(regex, true) receives ~berlin (or /berlin/i) as a literal string pattern. Thus, it actually searches for something you do not expect.
See, regex = "[[/berlin/i|berlin]]" only finds a literal /berlin/i text (see demo).
Also, you need to get the pattern from the [[...]], so strip these brackets with regex = regex.gsub(/\A\[+|\]+\z/, '').split("|")[0..-2].join("|").
Note you do not need to specify the case insensitive options, since you already pass true as the second parameter to Regexp.new, it is already case-insensitive.
If you are performing whole word lookup, add word boundaries: regex= "[[\\bberlin\\b|berlin]]" (see demo).
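Putting those pieces together, a minimal sketch of the whole check (the gap string and the user input are placeholder values, and the stored [[~regex|solution]] shape is assumed from the question):
gap   = "[[~berlin|berlin]]"   # stored gap definition (assumed format)
input = "Berlin"               # user's answer to validate

pattern = gap.gsub(/\A\[+|\]+\z/, "")          # strip the [[...]] wrapper
pattern = pattern.sub(/\A~/, "")               # drop the leading ~ marker
pattern = pattern.split("|")[0..-2].join("|")  # keep the regex part before the last pipe

reg = Regexp.new(pattern, true)                # true => case-insensitive
puts(reg.match(input) ? "valid" : "invalid")   # => valid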

Apply modification only to substring in Ruby

I have a string of the form "award.x_initial_value.currency" and I would like to camelize everything except the leading "x_" so that I get a result of the form: "award.x_initialValue.currency".
My current implementation is:
a = "award.x_initial_value.currency".split(".")
b = a.map{|s| s.slice!("x_")}
a.map!{|s| s.camelize(:lower)}
a.zip(b).map!{|x, y| x.prepend(y.to_s)}
I am not very happy with it, since it's neither fast nor elegant, and performance is key because this will be applied to large amounts of data.
I also googled it but couldn't find anything.
Is there a faster/better way of achieving this?
Since "performance is key" you could skip the overhead of ActiveSupport::Inflector and use a regular expression to perform the "camelization" yourself:
a = "award.x_initial_value.currency"
a.gsub(/(?<!\bx)_(\w)/) { $1.capitalize }
#=> "award.x_initialValue.currency"
▶ "award.x_initial_value.x_currency".split('.').map do |s|
"#{s[/\Ax_/]}#{s[/(\Ax_)?(.*)\z/, 2].camelize(:lower)}"
end.join('.')
#⇒ "award.x_initialValue.x_currency"
or, with one gsub iteration:
▶ "award.x_initial_value.x_currency".gsub(/(?<=\.|\A)(x_)?(.*?)(?=\.|\z)/) do |m|
"#{$~[1]}" << $~[2].camelize(:lower)
end
#⇒ "award.x_initialValue.x_currency"
In the latter version we use global substitution:
$~ is a shorthand for the global variable storing the last regexp match that occurred;
$~[1] is the first captured group, corresponding to (x_)?; because of the ? it is either the matched string or nil, which is why we use string interpolation: with nil, "#{nil}" results in an empty string;
finally, we append the camelized second capture to the string discussed above.
NB: instead of $~ for the last match, one might use Regexp::last_match.
Could you try something like this:
'award.x_initial_value.currency'.gsub(/(\.|\A)x_/,'\1#').camelize(:lower).gsub('#','x_')
# => award.x_initialValue.currency
NOTE: instead of the # character you can use any character that does not otherwise appear in the current name/character space.

Funny CSV format help

I've been given a large file with a funny CSV format to parse into a database.
The separator character is a semicolon (;). If one of the fields contains a semicolon it is "escaped" by wrapping it in doublequotes, like this ";".
I have been assured that there will never be two adjacent fields with trailing/leading doublequotes, so this format should technically be OK.
Now, for parsing it in VBScript I was thinking of
1. Replacing each instance of ";" with a GUID,
2. Splitting the line into an array on semicolons,
3. Running back through the array, replacing the GUIDs with ";".
It seems to be the quickest way. Is there a better way? I guess I could use substrings but this method seems to be acceptable...
Your method sounds fine, with the caveat that there must be absolutely no possibility that your GUID will occur in the text itself.
One approach I've used for this type of data before is to just split on the semicolons regardless and then, if two adjacent fields end and start with a quote, combine them.
For example:
Pax;is;a;good;guy";" so;says;his;wife.
becomes:
0 Pax
1 is
2 a
3 good
4 guy"
5 " so
6 says
7 his
8 wife.
Then, when you discover that fields 4 and 5 end and start (respectively) with a quote, you combine them by replacing the field 4 closing quote with a semicolon and removing the field 5 opening quote (and joining them of course).
0 Pax
1 is
2 a
3 good
4 guy; so
5 says
6 his
7 wife.
In pseudo-code, given:
input: A string, first character is input[0]; last
character is input[length]. Further, assume one dummy
character, input[length+1]. It can be anything except
; and ". This string is one line of the "CSV" file.
length: positive integer, number of characters in input
Do this:
set start = 0
if input[0] = ';':
    you have a blank field in the beginning; do whatever with it
    set start = 1
endif
for each c between 1 and length:
    next iteration unless input[c] = ';'
    if input[c-1] ≠ '"' or input[c+1] ≠ '"': // test for escape sequence ";"
        found a field consisting of the half-open range [start,c); do whatever
        with it. Note that in the case of empty fields, start≥c, leaving
        an empty range
        set start = c+1
    endif
end foreach
Untested, of course. Debugging code like this is always fun…
The special case of input[0] is to make sure we don't ever look at input[-1]. If you can make input[-1] safe, then you can get rid of that special case. You can also put a dummy character in input[0] and then start your data—and your parsing—from input[1].
One option would be to find instances of the regex:
[^"];[^"]
and then break the string apart with substring:
List<string> ret = new List<string>();
Regex r = new Regex(@"[^""];[^""]");
Match m;
while ((m = r.Match(line)).Success)
{
    ret.Add(line.Substring(0, m.Index + 1));
    line = line.Substring(m.Index + 2);
}
ret.Add(line);  // whatever remains after the last separator is the final field
(Sorry about the C#, I don't know VBScript.)
Using quotes is normal for .csv files. If you have quotes within a field, you may see the opening, closing, and embedded quotes all strung together, two or three in a row.
If you're using SQL Server you could try using T-SQL to handle everything for you.
SELECT * INTO MyTable FROM OPENDATASOURCE('Microsoft.JET.OLEDB.4.0',
'Data Source=F:\MyDirectory;Extended Properties="text;HDR=No"')...
[MyCsvFile#csv]
That will create and populate "MyTable". Read more on this subject here on SO.
I would recommend using RegEx to break up the strings.
Find every ';' that is not part of ";" and change it to something else that does not appear in your fields. Then go through and replace ";" with ;. Now you have your fields with the correct data.
Most importers can swap out separator characters pretty easily.
This is basically your GUID idea. Just make sure the GUID is unique to your file before you start and you will be fine. I tend to start using 'Z'. After enough 'Z's, you will be unique (sometimes as few as 1-3 will do).
Jacob
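For reference, a minimal VBScript sketch of that token-swap idea (the token constant is arbitrary; any string guaranteed never to appear in the data, such as a GUID, will do):
' Hide the escaped separator ";", split on the real semicolons, then restore it.
Const TOKEN = "@@SEMICOLON@@"   ' placeholder; a GUID works just as well

Function SplitFunnyCsv(strLine)
    Dim arrFields, i
    arrFields = Split(Replace(strLine, """;""", TOKEN), ";")
    For i = 0 To UBound(arrFields)
        arrFields(i) = Replace(arrFields(i), TOKEN, ";")
    Next
    SplitFunnyCsv = arrFields
End Function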
