I need to translate the SAP BO InfoView interface. The language we require doesn't have a native BO language pack.
I discovered a bunch of .properties files and translated them, but not all of the text was converted.
Maybe someone could give me some advice, or some ideas for a solution?
Thanks.
I would think any text not in the properties or xml config files is in two main places:
Hard-coded in the JSP files. A general search and replace should work for finding where these are (a rough search script is sketched after this list).
The Java applet WebI controller. This one is a compiled and packaged jar. I believe this is where the text that you haven't been able to translate yet resides. This one will be trickier to update, and changes to it will most likely end any support from SAP if you run into trouble.
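For the JSP case, a crude way to locate candidates is to scan every .jsp under the deployed webapp for literal text sitting between tags. A minimal Python sketch, assuming a typical Tomcat-style deployment; the path and the heuristic are my own assumptions, not anything BO-specific, so expect false positives:

    import os
    import re

    JSP_ROOT = "/opt/bo/tomcat/webapps/InfoViewApp"  # example path - point at your deployment

    # Literal text between '>' and '<' that contains at least one letter
    TEXT_BETWEEN_TAGS = re.compile(r">([^<>]*[A-Za-z][^<>]*)<")

    for dirpath, _dirs, files in os.walk(JSP_ROOT):
        for name in files:
            if not name.endswith(".jsp"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    for match in TEXT_BETWEEN_TAGS.finditer(line):
                        candidate = match.group(1).strip()
                        # skip EL expressions and scriptlet leftovers
                        if candidate and not candidate.startswith("${") and "%" not in candidate:
                            print(f"{path}:{lineno}: {candidate}")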
I have a large application to manage consisting of three or four executables and as many as fifty .dlls. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, version numbers, copyrights, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat) (easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of StackOverflow forbid me to ask for recommended software, but if anyone has had positive experiences with continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
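To make that concrete, here is a hedged Python sketch that pulls the module names out of a detailed .map file (the M=<unit> tokens in the segment map; the exact layout varies between Delphi versions) and intersects them with the .pas files that svn diff --summarize reports as changed between two revisions. Treat it as a starting point, not a finished tool:

    import re
    import subprocess
    import sys

    MODULE_TOKEN = re.compile(r"\bM=([\w.]+)")

    def units_in_map(map_path):
        # Collect every unit name mentioned in the detailed segment map
        with open(map_path, encoding="latin-1") as f:
            return {m.group(1).lower() for m in MODULE_TOKEN.finditer(f.read())}

    def changed_units(rev_a, rev_b, repo_path="."):
        # svn diff --summarize lists the paths changed between two revisions
        out = subprocess.run(
            ["svn", "diff", "--summarize", "-r", f"{rev_a}:{rev_b}", repo_path],
            capture_output=True, text=True, check=True).stdout
        changed = set()
        for line in out.splitlines():
            if not line.strip():
                continue
            path = line.split()[-1].replace("\\", "/")
            if path.lower().endswith(".pas"):
                changed.add(path.rsplit("/", 1)[-1][:-4].lower())
        return changed

    if __name__ == "__main__":
        map_file, rev_a, rev_b = sys.argv[1:4]
        affected = units_in_map(map_file) & changed_units(rev_a, rev_b)
        print("Units changed between revisions AND linked into this executable:")
        for unit in sorted(affected):
            print("  " + unit)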
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label, or some such. But it is asking too much to expect a computer to make such a judgement. At some point you need a human to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years for commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool that supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion - based on experience with Maven - is to design the general software architecture to be as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing the file size is a bad idea, as the size can stay the same despite lots of changes (as long as there is the same amount of text in a .pas file, its size won't change). So instead you could check the last modification time of each file or compute a hash value such as an MD5 hash for comparison (which can be quite slow).
Then you need to generate a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with such an approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
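A minimal Python sketch of the change-detection step described above, using MD5 snapshots plus a hand-maintained unit-to-project map (the file names and the DEPENDENCIES table below are purely illustrative):

    import hashlib
    import json
    import os

    SOURCE_DIR = "src"            # example source root
    SNAPSHOT_FILE = "hashes.json"

    # Which projects pull in which units; as noted above, this map has to be
    # maintained by hand unless you generate it from the .dpr/.dproj files.
    DEPENDENCIES = {
        "CustomerUtils.pas": ["Billing.exe", "Reports.exe"],
        "MainForm.pas": ["Billing.exe"],
    }

    def md5_of(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def current_hashes():
        # Hash every Delphi source file under SOURCE_DIR, keyed by file name
        hashes = {}
        for dirpath, _dirs, files in os.walk(SOURCE_DIR):
            for name in files:
                if name.lower().endswith((".pas", ".dfm", ".inc")):
                    hashes[name] = md5_of(os.path.join(dirpath, name))
        return hashes

    def main():
        old = {}
        if os.path.exists(SNAPSHOT_FILE):
            with open(SNAPSHOT_FILE) as f:
                old = json.load(f)
        new = current_hashes()
        changed = {name for name, digest in new.items() if old.get(name) != digest}
        to_rebuild = set()
        for name in changed:
            to_rebuild.update(DEPENDENCIES.get(name, []))
        print("Changed units:", sorted(changed))
        print("Projects to rebuild/retest:", sorted(to_rebuild))
        with open(SNAPSHOT_FILE, "w") as f:
            json.dump(new, f, indent=2)

    if __name__ == "__main__":
        main()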
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that a proper integration of Git into the project manager itself could be quite powerful, due to Git's support for branching/sub-branching (each project is its own branch, each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, done through SVN, but this unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.
I want to open libraries because I currently want to see the algorithms used for drawing, modify them and implement them in my program. For example, I have tried to create an algorithm on my own for lines. But I failed. And even if I had succeeded, I fear that it might not give the same result as the algorithm in the libraries. And I don't want this to happen. That's why I want to copy the algorithms used for the methods in libraries. And I really hope that this will help me create the application I'm currently working on and other applications in the future.
I tried to open the libraries with a code editor, but I had trouble finding them - I don't really know where they are located or in which files their code is stored.
How do I open a Java library? Or is there a place on the Internet where the code is available?
It sounds like what you want is to get inside the standard Java libraries (so you can see the code for methods like Graphics.drawLine()).
You can download the source files from the same place you got the JDK, if you are on Windows or Linux. For the Mac, see this question. You can even set up Eclipse so that you can debug into that source as if it were your own code.
However, you will probably not find line-drawing code in Java in these libraries - the Graphics implementation will almost certainly use native methods, and may just call existing methods in the OS.
If you are specifically looking for line drawing algorithms, another option would be to look at the Wikipedia page for the Bresenham (aliased) or Wu (antialiased) algorithm.
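For example, here is the aliased Bresenham algorithm as a small Python sketch (the standard integer error-term form described on that Wikipedia page); it only generates the pixel coordinates, and drawing them is left to whatever graphics surface you use:

    # Bresenham's line algorithm (aliased), integer arithmetic only.
    def bresenham_line(x0, y0, x1, y1):
        points = []
        dx = abs(x1 - x0)
        dy = -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy          # combined error term for both axes
        while True:
            points.append((x0, y0))
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:       # step in x
                err += dy
                x0 += sx
            if e2 <= dx:       # step in y
                err += dx
                y0 += sy
        return points

    print(bresenham_line(0, 0, 6, 3))
    # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]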
Edit:
The part of a Graphics2D call that actually puts pixels on the screen is probably inside a system call and therefore the source would not be available.
A Java vector graphics library like Batik might have source for some of these algorithms, but probably relies on the Graphics2D calls for most of them. So, you might look for a comprehensive vector graphics library written in a language other than Java, where those graphics calls do not already exist by default.
Alternately, checking the table of contents for a computer graphics book might point you at a variety of algorithms that you could look up on Wikipedia.
For any given library:
Make sure to obey all licenses when using another's code
If you are referring to the Java SDK source code, you can find it here: http://grepcode.com/
If the project is open source, you can usually just get the source from the project website. No problem, though make sure to obey their license.
If the project is NOT open source, well, then you're in a pickle licensing-wise, so I do NOT endorse this; however, you would need to use a Java decompiler such as JD-Gui.
As far as what drawing algorithms to use, there are so many different ones (obviously, people have been trying to draw quickly for many, many years) that your best bet is to figure out exactly what you need to do and then search for that specific need separately. There isn't really a good repository of ALL of them, except maybe Wikipedia.
If you are using the libraries, they are on your classpath. Check out how to figure out your classpath in whichever IDE you are using and you can find the JARs you depend on. If they are packaged with sources, all you need to do is unjar them and look at the sources.
If you don't have access to the sources you can get the code using a Java Decompiler.
If you are trying to look at a standard Java library, see the other answers about getting the source to the JDK.
If you are interested in an open source library (such as something maintained by the Apache project), look on the site of the project for a 'source jar' which you can open with a standard zip utility.
If the library you want is not open source or you cannot find the source for it, you can try to decompile it. If you are using Eclipse, try this decompiler.
I have a .po file where most translated strings are identical to the original ones. However, a few are different. How do I quickly find the ones that differ from the original?
Use podiff.
I used it, and it worked for me. It's in C, so you have to compile it. make is your friend.
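If you would rather script the check than compile a C tool, the same idea is a few lines of Python with the third-party polib package (pip install polib); the file name below is just an example:

    import polib

    po = polib.pofile("messages.po")  # example file name
    for entry in po:
        # print only entries whose translation differs from the original string
        if entry.msgstr and entry.msgstr != entry.msgid:
            print("msgid: ", entry.msgid)
            print("msgstr:", entry.msgstr)
            print("-" * 40)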
I would suggest using one of the many web-based localization management platforms. To name a few:
Amanuens (disclaimer: my company builds this product)
Web Translate It
Transifex
GetLocalization
This kind of platform allows you to keep your resource files in sync, edit them in a web-based editor (useful for non-technical translators) and, most importantly, to highlight/see only the changed/untranslated strings.
We have a multilingual site that is currently using 2 languages, but with several others coming soon. The site is localized primarily by resx files, but with some localized data in a database.
We need to find some tools to manage localization of the site - something that picks up on changes in resx files so translators will only need to translate new or updated texts.
Any ideas or recommendations? We're also interested in any articles about the logistics of localization if anyone has some.
I'm researching this area as we speak and I came across your post. I'm not sure if this is any help, but in the past we used RCWinTrans for our localization. This was for multiple C++/MFC products, although it does support .NET. We would have an RCWinTrans project for each language/product we intended to support, although you could have multiple languages in a single project. It kept track of state (i.e. not translated, translated, changed, etc.) and would allow us to export the strings to an Excel spreadsheet which we could then send on to a translator. They would update the spreadsheet and we would re-import the data.
Hope this helps; apologies if I'm on the wrong track and I'm teaching grandma to suck eggs. I will be creating another thread today with a similar requirement to this btw, but with a few more snagettes - might be worth a look to see if I get any answers. Cheers, Roger
An idea might be to have all the localization data in the database (or a localization database); this way one could build the localization tools/interface as part of the application and potentially use it for many applications.
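If you do end up rolling your own, the core check - which keys in the neutral resx are still missing from a translated resx - is small. A hedged Python sketch, with example file names and assuming string-only resx entries:

    import xml.etree.ElementTree as ET

    def resx_strings(path):
        # Map data-name -> value for the <data> entries of a .resx file
        strings = {}
        for data in ET.parse(path).getroot().findall("data"):
            name = data.get("name")
            if name and data.get("type") is None:  # skip non-string resources
                strings[name] = data.findtext("value", default="")
        return strings

    neutral = resx_strings("Strings.resx")        # example file names
    translated = resx_strings("Strings.de.resx")

    missing = sorted(set(neutral) - set(translated))
    print("Keys still to translate:", missing)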
There is a free tool called Resource Translation Helper ( http://www.winking.be/resource-translation-helper )
Install it
Point it to your project directory
Export to Excel (.xls)
Give the Excel file to your translators and let them update the columns.
Import the updated .xls file again
The tool also creates a file that maps the resx translations to the Excel rows. Don't remove it or you will not be able to import again.
Works very well, but back up your data before adjusting things ;-)
You can try this Visual Studio extension: https://visualstudiogallery.msdn.microsoft.com/2676967b-0516-4f5f-b312-6873e2f9d219.
It allows you to export/import your project resources to/from Excel and to add new cultures.
Old question, but I would like to add http://www.zeta-resource-editor.com/index.html
Free tool for .NET resource files.
It edits the resource files directly; there is no need to export/import.
Looking at our codebase, some code is included in a project explicitly and some is pulled in from the search path. Does anyone have an opinion as to which is best practice and why?
Update:
I thought I would clarify my question. All our paths are relative, so we can have multiple branches that all refer to code within their own branch. So I'm not asking about relative paths, but whether units should be listed in the .dpr or picked up via the search path, which is why the previously asked questions don't quite answer my needs. Thanks to everyone.
I have a very basic way of determining this... If the code is specific to the project (not used elsewhere) I include it explicitly. All shared code gets pulled from the library path.
best regards,
don
I don't think I can count the number of times I've helped someone who discovered that the compiler was finding a duplicate copy of a unit somewhere on their search path where they did not expect to find it. They couldn't understand why they were changing their code in the editor (on a copy of the units not found in the search path) and not seeing any change in the behavior of the application. Explicitly including the unit and not setting a search path means there can only be one copy of the unit found by the compiler.
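A quick sanity check for exactly that situation is to scan the directories on the search path and report any unit name that appears more than once. A hedged Python sketch; the SEARCH_PATH entries are examples, paste in your own:

    import os
    from collections import defaultdict

    SEARCH_PATH = [r"C:\dev\project\src", r"C:\dev\shared", r"C:\dev\thirdparty"]  # examples

    # Record every location of each .pas file name across the search path
    locations = defaultdict(list)
    for root in SEARCH_PATH:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower().endswith(".pas"):
                    locations[name.lower()].append(os.path.join(dirpath, name))

    for name, paths in sorted(locations.items()):
        if len(paths) > 1:
            print(f"Duplicate unit {name}:")
            for p in paths:
                print("   ", p)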
This has been covered here before:
In Delphi, should I add shared units to my projects, to a shared package, or neither?
What is the best way to share Delphi source files among projects?
My answer to the first question is also my answer to your question.
My libraries are in SVN, and I usually check them out (branch them) into a project at ../libraries relative to the project. This keeps the scope of the include dirs small and to the point.
In the real source (.pas), paths are totally forbidden.
No project-related paths in the global Delphi search path (only per project, unless they are truly universally shared sources/components).
I hate polluting source with hard-coded paths, so I usually have only a few units in the project, always with relative paths. Not the VSS w:\ drive substitution hack, please! Typically these are the units that pull in framework parts or are needed due to visual inheritance or form initialization.
Unfortunately, relative paths can be dangerous with Delphi, because they are relative to the working directory, which can change according to Delphi dialogs (e.g. Open). The solution is simple: have an include file with a unique name in the main project.
The shared code vs specific code is a good rule.
I use VSSConnextion a lot, so files that I usually need to check out/in together naturally belong to the same project.
After upgrading Delphi twice and moving my project to new computers twice, I've learned that hard-coded paths are evil because root directories tend to change. Doubly so if you're working on a shared project.
I had the same problem. The blue dots were not showing up in the gutter.
Simple solution (one of):
Menu > Project > Compiler > Build Configuration... set it to Debug instead of Release.
Delphi 2007
PS: well, I thought I was done coding. Someone had requested a new feature. :)