Adding an IF statement to a DXL script to check object type? (ibm-doors)

I'm currently working on a DXL script that exports to a .tex file, which I then run through TeXworks to format the export into a PDF. My issue is that both my tables and figures in DOORS are OLE objects. I was wondering whether I can use an IF statement to distinguish between the two, and how I would go about doing this. I'm not sure what the syntax for checking an object's type is in DOORS DXL scripting.
void writeFigureHeadAndExport(Stream &oss, Object img, string outputDir)
{
    Module mod = module(img)
    // build the export file name from the module prefix and the object's absolute number
    string n = (mod."Prefix" "") (img."Absolute Number" "") ".png"
    string s = exportPicture(img, outputDir "\\" n, formatPNG)
    if ((img."Object Type" "") == "Figure")
    {
        oss << "\\begin{figure}[ht]\n"
        oss << "\\centering\n"
        oss << "\\includegraphics[width=\\textwidth]{" n "}\n"
    }
    else
    {
        oss << "\\begin{table}[ht]\n"
        oss << "\\centering\n"
        oss << "\\includegraphics[width=\\textwidth]{" n "}\n"
    }
}

It is possible to discover the type of an OLE item, but it is very difficult to implement. I would suggest adding an attribute that specifies whether an object contains a table or a figure, and then using that attribute in the IF statement.

Related

In DXL, how to get the handle of a module that I didn't open myself in the DXL script

I have a DXL script that opens (read or edit) modules and puts them in a skip list (so that I can close them at the end).
The skip list stores the module handle of each module read or edited:
if (MODIF_OTHER_MODULES)
{
    modSrc = edit(modSrc_Name, false)
}
else
{
    modSrc = read(modSrc_Name, false)
}
put(skp_openmodule, modSrc, modSrc)
But sometimes modules are already open outside my DXL script, so the following check fails:
mvSource = sourceVersion lr
modSrc_data = data mvSource
modSrc_Name = fullName(source lr)
if (null modSrc_data)
    "read/edit the modSrc_Name module and add it to the skip list" : OK, DONE
else
    "the module is already open, but maybe I didn't open it myself"
    "so I WANT TO CHECK whether the module is already in the skip list and ADD the module of modSrc_data to the existing skip list if it isn't" : I DON'T KNOW HOW!
Is there a way to get the module of modSrc_data so that it can be added to skp_openmodule if it is not already present in the list?
I don't want to read/edit it again, because I don't know in which mode it was opened previously, and I would prefer to avoid it because I will do this for each object and each link!
It would also be great if I could retrieve the information about how the module was opened (read or edit).
I tried :
module (modSrc_data)
and
module(modSrc_Name)
but it doesn't work.
Not sure if this is due to the excerpt you posted, but you should always turn off the autodeclare option and make sure you use the correct types for your variables, either by checking the DXL manual or by using alternative sources like the "undocumented perms list". data performed on a ModuleVersion returns type Module, so you already have what you need. An alternative would be the perm bool open(ModName_ modRef). And note that the perm module does not return a Module, but a ModName_.
Also, in addition to bool isRead(Module m), bool isEdit(Module m) and bool isShare(Module m) (!!), when you really want to close modules that have been opened previously, you might want to check bool unsaved(Module m):
ModuleVersion mvSource = sourceVersion lr
Module modSrc_data = data mvSource
string modSrc_Name = fullName(source lr)
if (null modSrc_data)
    print "read/edit modSrc_Name module and add module in the skip list"
else
{
    print "the module is already open but maybe I don't open it myself"
    if isRead (modSrc_data) then print " - read"
    if isEdit (modSrc_data) then print " - edit"
    if isShare (modSrc_data) then print " - shared mode"
    if unsaved (modSrc_data) then print " - do not silently close me, otherwise the user might be infuriated"
}

Jenkins multibranch pipeline

I have a branch called feature/xyz.
Now I have to rename a file from filename.exe to filename_$BRANCH_NAME.exe.
The problem is that my branch name contains a forward slash, so it throws an error.
So how can I name my file filename_feature_xyz?
Example code below. Essentially you can just use the string replace method, but I went a bit further to cater for an unknown filename conforming to the convention you laid out in your example.
#!groovy
// Setup vars to replicate your questions specs
env.BRANCH_NAME = "feature/xyz"
String file = 'filename.exe'
// Replace any forward slash with an underscore
String branchName = (env.BRANCH_NAME).replace('/', '_')
// Split apart your current filename
List fileParts = file.tokenize('.')
// Construct the original filename, catering for multiple period usecases
String originalFileName = fileParts[0..-2].join('.')
// Separate the extension for use later
String originalExtension = fileParts[-1]
// Combine into the desired filename as per your requirements
String newFileName = "${originalFileName}_${branchName}.${originalExtension}"

"Not that kind of map" exception with Jenkins and Groovy

I have a string in Groovy that I want to convert into a map. When I run the code on my local computer through a Groovy script for testing, I have no issues and a lazy map is returned. I can then convert that to a regular map and life goes on. When I try the same code through my Jenkins DSL pipeline, I run into the exception
groovy.json.internal.Exceptions$JsonInternalException: Not that kind of map
Here is the code chunk in question:
import groovy.json.*
String string1 = "{value1={blue green=true, red pink=true, gold silver=true}, value2={red gold=false}, value3={silver brown=false}}"
def stringToMapConverter(String stringToBeConverted){
    formattedString = stringToBeConverted.replace("=", ":")
    def jsonSlurper = new JsonSlurper().setType(JsonParserType.LAX)
    def mapOfString = jsonSlurper.parseText(formattedString)
    return mapOfString
}
def returnedValue = stringToMapConverter(string1)
println(returnedValue)
returned value:
[value2:[red gold:false], value1:[red pink:true, gold silver:true, blue green:true], value3:[silver brown:false]]
I know that Jenkins and Groovy differ in various ways, but from searches online others suggest that I should be able to use the LAX JsonSlurper parser within my Groovy pipeline. I am trying to avoid hand-rolling my own string-to-map converter and would prefer to use a library if one exists. What could be the difference here that causes this behavior?
Try to use:
import groovy.json.*

//@NonCPS
def parseJson(jsonString) {
    // Would like to use the readJSON step, but it requires a context, even for parsing just text.
    def lazyMap = new JsonSlurper().setType(JsonParserType.LAX).parseText(jsonString.replace("=", ":").normalize())
    // JsonSlurper returns a non-serializable LazyMap, so copy it into a regular map before returning
    def m = [:]
    m.putAll(lazyMap)
    return m
}

String string1 = "{value1={blue green=true, red pink=true, gold silver=true}, value2={red gold=false}, value3={silver brown=false}}"
def returnedValue = parseJson(string1)
println(returnedValue)
println(JsonOutput.toJson(returnedValue))
You can find information about normalize here.

How do I manage multi-project dependencies (directed acyclic graph) with IBM RAD Ant?

I am working on an Ant script to build Java projects developed with IBM RAD 7.5.
The Ant script calls the IBM RAD Ant extension API. I am using a task to load the project set file (*.psf) into memory, and calling another task to compile the projects listed by the projectSetImport.
The problem is that the projects listed in the psf file are not ordered by project dependency, so the compilation fails because the dependency order is incorrect.
Is there any API or method to manage the dependencies automatically? The psf files I am handling are quite big, with 200+ projects in each file, and they are constantly changing (e.g. some projects get removed and some new projects are added each week).
Here is a more detailed description of the question.
The project dependencies are:
1) project A depends on B and D.
2) project B depends on C.
3) project E depends on F.
A -> B -> C
A -> D
E -> F
The sample.psf file just lists all projects:
A
B
C
D
E
F
The script loads sample.psf, which gives the project list [A, B, C, D, E, F], and builds the projects from that list.
The build fails at A, because A needs B and D to be built first.
My current solution is to reorder the sample.psf manually, e.g.
sample.psf file:
C
B
D
A
F
E
but this is hard to maintain, because there are 200+ projects in a psf file and they are constantly changing.
One way to attack this issue is to write a parser that reads the .project file for each project (the dependency projects are listed in the "projects" tag), then implement a directed-acyclic-graph algorithm to reorder the dependencies. This approach might be overkill. This must be a common issue for teams building IBM Java projects; is there a solution?
Finally, I wrote some Python code to compute the dependencies. The logic is listed below:
1) Read the psf file into a list; the psf file is an XML file, and the project names are stored in tags.
2) For each project in the list, go to the project source code and read the .project file and the .classpath file; these two files contain the dependency projects. For the .project file (XML), fetch the project names from the tag; for the .classpath file, fetch the lines with the attribute kind='src'.
3) Now you have [source] -> [depended_project_list]; implement a directed acyclic graph (see the attached code).
4) Load the [source] -> [depended_project] pairs into the AdjecentListDigraph and call topoSort() to return the dependency order.
5) Generate a new ordered psf file.
/////////////////////// dap_graph.py/////////////////////////////
# -*- coding: utf-8 -*-
'''Use directed acyclic path to calculate the dependency'''
class Vertex:
    def __init__(self, name):
        self._name = name
        self.visited = True


class InValidDigraphError(RuntimeError):
    def __init__(self, arg):
        self.args = (arg,)


class AdjecentListDigraph:
    '''represent a directed graph by adjacency list'''

    def __init__(self):
        '''use a table to store edges,
        the key is the vertex name, value is vertex list
        '''
        self._edge_table = {}
        self._vertex_name_set = set()

    def __addVertex(self, vertex_name):
        self._vertex_name_set.add(vertex_name)

    def addEdge(self, start_vertex, end_vertex):
        if start_vertex._name not in self._edge_table:
            self._edge_table[start_vertex._name] = []
        self._edge_table[start_vertex._name].append(end_vertex)
        # populate vertex set
        self.__addVertex(start_vertex._name)
        self.__addVertex(end_vertex._name)

    def getNextLeaf(self, vertex_name_set, edge_table):
        '''pick up a vertex which has no end vertex. return vertex.name.
        algorithm:
        for v in vertex_set:
            get vertexes not in edge_table.keys()
            then get vertex whose end_vertex is empty
        '''
        # TODO: validate this is a connected tree
        leaf_set = vertex_name_set - set(edge_table.keys())
        if len(leaf_set) == 0:
            if len(edge_table) > 0:
                raise InValidDigraphError("Error: Cyclic directed graph")
        else:
            vertex_name = leaf_set.pop()
            vertex_name_set.remove(vertex_name)
            # remove any occurrence of vertex_name in edge_table
            for key, vertex_list in list(edge_table.items()):
                if vertex_name in vertex_list:
                    vertex_list.remove(vertex_name)
                # remove the vertex which has no end vertex left from edge_table
                if len(vertex_list) == 0:
                    del edge_table[key]
            return vertex_name

    def topoSort(self):
        '''topological sort, return list of vertex names. Throw error if it is
        a cyclic graph'''
        sorted_vertex = []
        edge_table = self.dumpEdges()
        vertex_name_set = set(self.dumpVertexes())
        while len(vertex_name_set) > 0:
            next_vertex = self.getNextLeaf(vertex_name_set, edge_table)
            sorted_vertex.append(next_vertex)
        return sorted_vertex

    def dumpEdges(self):
        '''return a copy of the edge table (names only) for traversal/debugging'''
        edge_table = {}
        for key in self._edge_table:
            edge_table[key] = [v._name for v in self._edge_table[key]]
        return edge_table

    def dumpVertexes(self):
        return self._vertex_name_set
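For illustration, here is a small driver (not part of the original modules; the file name build_order_demo.py is made up) that feeds the question's example dependencies (A depends on B and D, B on C, E on F) into AdjecentListDigraph and prints a valid build order:
# build_order_demo.py -- hypothetical driver for dap_graph.py
from dap_graph import AdjecentListDigraph, Vertex

# dependency map from the question: project -> projects it depends on
dependency_map = {'A': ['B', 'D'], 'B': ['C'], 'E': ['F']}

graph = AdjecentListDigraph()
for source, dependencies in dependency_map.items():
    for dependency in dependencies:
        # an edge points from a project to one of its dependencies
        graph.addEdge(Vertex(source), Vertex(dependency))

# topoSort() returns dependency-free projects first,
# e.g. ['C', 'D', 'F', 'B', 'E', 'A'] or another valid ordering
print(graph.topoSort())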
//////////////////////projects_loader.py///////////////////////
# -*- coding: utf-8 -*-
'''
This module will load the dependencies of every project in the psf and compute
the directed acyclic graph.
Dependencies are loaded into a map structured as below:
dependency_map{"project_A": set(A1, A2, A3),
               "A1": set(B1, B2, B3)}
The algorithm is:
1) read the project list
2) call readProjectDependency(project_name) for each project
'''
import os, xml.dom.minidom
from utils.setting import configuration


class ProjectsLoader:

    def __init__(self, application_name):
        self.dependency_map = {}
        self.source_dir = configuration.get('Build', 'base.dir')
        self.application_name = application_name
        self.src_filter_list = configuration.getCollection('psf',
                                                           'src.filter.list')

    def loadDependenciesFromProjects(self, project_list):
        for project_name in project_list:
            self.readProjectDependency(project_name)

    def readProjectDependency(self, project_name):
        project_path = self.source_dir + '\\' + self.application_name + '\\' \
            + project_name
        project_file_path = os.path.join(project_path, '.project')
        projects_from_project_file = self.readProjectFile(project_file_path)
        classpath_file_path = os.path.join(project_path, '.classpath')
        projects_from_classpath_file = self.readClasspathFile(classpath_file_path)
        projects = (projects_from_project_file | projects_from_classpath_file)
        if project_name in self.dependency_map:
            self.dependency_map[project_name] |= projects
        else:
            self.dependency_map[project_name] = projects

    def loadDependencyByProjectName(self, project_name):
        project_path = self.source_dir + '\\' + self.application_name + '\\' \
            + project_name
        project_file_path = os.path.join(project_path, '.project')
        projects_from_project_file = self.readProjectFile(project_file_path)
        classpath_file_path = os.path.join(project_path, '.classpath')
        projects_from_classpath_file = self.readClasspathFile(classpath_file_path)
        projects = list(projects_from_project_file | projects_from_classpath_file)
        self.dependency_map[project_name] = projects
        for project in projects:
            self.loadDependencyByProjectName(project)

    def readProjectFile(self, project_file_path):
        DOMTree = xml.dom.minidom.parse(project_file_path)
        projects = DOMTree.documentElement.getElementsByTagName('project')
        return set([project.childNodes[0].data for project in projects])

    def readClasspathFile(self, classpath_file_path):
        dependency_projects = set([])
        if os.path.isfile(classpath_file_path):
            DOMTree = xml.dom.minidom.parse(classpath_file_path)
            projects = DOMTree.documentElement.getElementsByTagName('classpathentry')
            for project in projects:
                if (project.hasAttribute('kind')
                        and project.getAttribute('kind') == 'src'
                        and project.hasAttribute('path')
                        and project.getAttribute('path') not in self.src_filter_list):
                    project_name = project.getAttribute('path').lstrip('/')
                    dependency_projects.add(project_name)
        return dependency_projects

    def getDependencyMap(self):
        return self.dependency_map
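To tie the two modules together end to end, a rough glue script along the following lines could be used. This is only a sketch: it assumes the configuration that ProjectsLoader reads is already set up, the application name 'MyApplication' and the output file name are placeholders, and the project list has already been extracted from sample.psf; writing a real .psf file back out is not shown.
# order_psf_demo.py -- hypothetical glue script (names are placeholders)
from dap_graph import AdjecentListDigraph, Vertex
from projects_loader import ProjectsLoader

# placeholder for the project names read from sample.psf
project_list = ['A', 'B', 'C', 'D', 'E', 'F']

loader = ProjectsLoader('MyApplication')
loader.loadDependenciesFromProjects(project_list)

graph = AdjecentListDigraph()
for source, dependencies in loader.getDependencyMap().items():
    for dependency in dependencies:
        graph.addEdge(Vertex(source), Vertex(dependency))

ordered = graph.topoSort()
# projects with no dependencies and no dependents never enter the graph, so append them at the end
ordered += [p for p in project_list if p not in ordered]

# write the ordered names out as a plain list
with open('ordered_projects.txt', 'w') as output:
    output.write('\n'.join(ordered))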

Best choices for analyzing custom log files

I have written a logger for my projects. It logs to text files and, as you may guess, there's a timestamp, namespace, class, method... and finally a log message. Like this:
TestNamespace.MyProject.exe Error: 0 :
11/11/2010 10:24:11 AM
Assembly: TestNamespace.MyProject.exe
Class: myClass
Method: Test
This is a log message !
TestNamespace.MyProject.exe Error: 0 :
11/11/2010 10:24:12 AM
Assembly: TestNamespace.MyProject.exe
Class: myClass
Method: Test2
This is another log message !
I'm looking for a free tool for analyzing my log files (some tables, graphs etc).
Thanks in advance.
Since you are outputting log messages in a custom format, you practically need a custom parser for it.
Python
import datetime
from collections import namedtuple
from itertools import islice

# 'class' is a reserved word in Python, so that field is named class_
Record = namedtuple('Record', 'file, level, number, datetime, assembly, class_, method, message')

def block_iter(theFile):
    # yield the file in fixed-size blocks; adjust the block size (9 here) to match your log layout
    file_iter = iter(theFile)
    while True:
        items = list(islice(file_iter, 9))
        if not items:
            break
        yield items

def record_iter(blocks):
    for items in blocks:
        file, level, number = items[0].split(":")
        dt = datetime.datetime.strptime(items[1].strip(), "%m/%d/%Y %I:%M:%S %p")
        _, _, asm = items[2].partition(":")
        _, _, cls = items[3].partition(":")
        _, _, mth = items[4].partition(":")
        txt = "".join(items[5:])
        yield Record(file, level.strip(), number.strip(), dt, asm.strip(), cls.strip(), mth.strip(), txt)

with open("someapp.log", "r") as source:
    for log in record_iter(block_iter(source)):
        print(log)
Something like that might help get you started.
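Building on the parser sketch above (the same hypothetical someapp.log file name and the Record fields defined there), a quick summary can then be produced with collections.Counter, for example counting how many log entries each class and method generated:
from collections import Counter

# count how many log entries each Class.Method pair produced
per_method = Counter()
with open("someapp.log", "r") as source:
    for log in record_iter(block_iter(source)):
        per_method[log.class_ + "." + log.method] += 1

for name, count in per_method.most_common():
    print("%6d  %s" % (count, name))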
Microsoft has LogParser, which is very flexible with any log format. The downside is that it's a command-line tool and has not been updated since 2005 (version 2.2). You can write SQL queries against your log file and it will generate tables/charts for you. Some GUI tools have been written for it.
