In short, how can I make both MSVCRT and MinGW MSYS share the TZ environment variable without conflicts? Or, how can I make both support timezones without conflicts?
Further information
In order to have the date command of MSYS display the correct local time, and because MSYS uses its own C runtime instead of MSVCRT, I have set the TZ environment variable according to the GNU C library documentation:
export TZ="BRT+3BRST,M10.3.0/0,M2.3.0/0"
Unfortunately, this conflicts with the Microsoft C runtime specification, which dictates for the DST name part (dzn):
If daylight saving time is never in effect in the locality, set TZ without a value for dzn. The C run-time library assumes the United States' rules for implementing the calculation of daylight saving time (DST)
Hence, the mere presence of a DST name in the TZ variable causes programs relying on _tzset to malfunction outside the USA. This is my case with the Bazaar DVCS, where I have been getting wrong commit times, one hour off, because MSVCRT assumes I have already entered the DST period based on the TZ setting. If I leave TZ empty, MSYS date displays UTC time, but MSVCRT (and Bazaar) works just fine. If I set TZ as above, then MSVCRT adds one hour to commit times, but MSYS date displays local time.
Bazaar is affected because it uses Python, which in turn uses MSVCRT under Windows. Even though I could remove everything from the DST name onward, that would break the date command in MSYS. I have tried several other values of TZ as well; MSYS seems to lack any timezone support beyond what is described in the GNU reference above. Also, I would like to avoid unsetting TZ only when invoking Bazaar, or setting it only when invoking the date command; I'm after a more general solution.
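To make the mismatch concrete, here is a rough sketch of how to see both interpretations side by side (it assumes an MSVCRT-based Python is on the PATH; any other MSVCRT program that prints local time would do):
export TZ="BRT+3BRST,M10.3.0/0,M2.3.0/0"

# MSYS's own runtime parses the full POSIX rule, so this prints local time:
date

# Python on Windows goes through MSVCRT's _tzset(); the mere presence of a
# DST name makes it apply the US DST rules, so outside the US DST window the
# hour can differ from the date output above:
python -c "import time; print(time.strftime('%Y-%m-%d %H:%M:%S %Z'))"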
Alternative formats and zoneinfo database
There is an alternative format for TZ, the third one in the GNU documentation above, but it does not seem to be supported by MSYS, as stated:
But the POSIX.1 standard only specifies the details of the first two formats,
It seems this third format is just IANA's Time Zone Database, whose documentation describes a different format for TZ that does not seem to be supported by MSYS either, as stated:
To use the database on an extended POSIX implementation set the TZ environment variable to the location's full name, e.g., TZ="America/New_York".
Despite the above, I tried to install IANA's zoneinfo manually onto MSYS, without success. I'm not sure whether these statements are correct and those formats simply aren't recognized by MSYS, or whether I just failed to install the zoneinfo data files correctly. I couldn't find a compiled version, nor could I compile one myself, so I just tried the tzdata package from Ubuntu.
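As a quick sanity check of whether the location-name form is recognised at all, something like this can be tried (my assumption being that, without usable zoneinfo data, the offset stays at +0000 instead of -0300):
# If Area/Location names worked, this would print BRT or BRST with a
# -0300/-0200 offset; on my MSYS installation it does not.
TZ="America/Sao_Paulo" date +"%Z %z"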
Something odd to me, though, is that the GNU C library documentation above states that it already comes with a timezone database (which sounds to me like zoneinfo). However, as said, I can't find any kind of timezone database installed anywhere in MSYS, nor can I find any mingw-get package related to timezones. I wonder if the developers simply removed it from the release. This is what the documentation says:
The GNU C Library comes with a large database of time zone information for most regions of the world, which is maintained by a community of volunteers and put in the public domain.
So, in sum, if I could make zoneinfo or some similar alternative work in MSYS, then I could abandon my current approach of setting TZ as above. I can't find, however, any good information about timezone support in MSYS.
The simple solution is to set your Windows environment variable as you expect, then change the variable in your ~/.profile file for MSYS to what POSIX expects.
I have figured out that MSYS should actually detect the timezone automatically, without TZ. The following bug report documents the problem: MSYS can't handle timezones in localized Windows.
In the meantime
The best solution I have found so far, while this bug remains unfixed, is the following:
Create a wrapper script, named for example runcrt.sh, available from the system path and containing:
#!/bin/bash
env -u TZ "$(basename "$0").exe" "$@"
Create NTFS symlinks to this script, one for each MSVCRT program I intend to run in MSYS, named without the .exe extension and placed before the actual executables in the system path; for example ruby, python, bzr, etc.
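For illustration, the symlinks could be created roughly like this (a sketch only: C:\msys\1.0\local\bin is an assumed location for runcrt.sh and the links, and mklink needs an elevated shell):
# Run from MSYS bash with administrative rights; cmd //c avoids the MSYS
# path conversion of /c, and mklink creates real NTFS symlinks.
for prog in ruby python bzr; do
    cmd //c "mklink C:\\msys\\1.0\\local\\bin\\$prog C:\\msys\\1.0\\local\\bin\\runcrt.sh"
done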
For performance reasons, cache the TZ configuration (the dynamically generated export statement) in /etc/profile.d/timezone.sh at Windows start-up (in fact TZ only needs updating about once a year, which is how often the DST period may ever change, but whatever).
Set BASH_ENV to /etc/profile.d/timezone.sh, so that not just interactive bash sessions but also shell scripts can be timezone aware.
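For reference, a minimal sketch of what the cached file might contain (the TZ value is the one from above; how it gets regenerated at Windows start-up is omitted):
# Contents of /etc/profile.d/timezone.sh, rewritten at Windows start-up:
export TZ="BRT+3BRST,M10.3.0/0,M2.3.0/0"
BASH_ENV itself has to be defined somewhere that is already in effect when bash starts, e.g. in the Windows environment or in msys.bat.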
This way, either interactively or from a shell script, calls to bzr commit, for instance, will get the right commit date into the code repository, because that command is being executed with TZ unset. Similarly, TZ is set for all the other commands such as date and ls, so they print the correct local time as well. I have dropped the opposite approach of setting TZ only for MSYS commands and leaving it blank for anything else because the unset operation is much faster than sourcing the export.
MSYS2 as an alternative
MSYS2, however, is not affected by this bug and recognizes the timezone correctly. In fact, it has proper timezone support:
$ tzset
America/Sao_Paulo
$ date +"%T, timezone %Z (%z)"
10:18:12, timezone BRT (-0300)
$ TZ=America/Los_Angeles date +"%T, timezone %Z (%z)"
06:18:14, timezone PDT (-0700)
In your environment you have no problems running pure Windows, without MSYS, and no problems running pure MSYS as long as you don't access native Windows apps like Bazaar. That's what I assume from your question.
Special wrapper scripts that unset TZ for every Windows command run under MSYS seem reasonable, and I guess that's what you do. I know this is not the answer you expected, or one you couldn't have written yourself, but since there are no other answers, I thought at least one should be here :) The least evil.
Finally, I imagine three levels of setting TZ:
A value for Microsoft set in Windows system settings
A value for Msys set in /etc/profile or msys.bat
A value for Microsoft again in wrappers to run Windows commands within Msys, like:
#!/bin/bash
export TZ=; /usr/bin/bazaar "$@"
in the file /usr/local/bin/bazaar
I can't imagine a more general solution. How could a shell know which version of the TZ variable is preferred for a given command?
I am using Ghostscript to convert PDF 1.3 to PDF/A-1b using this command:
gs -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE -sColorConversionStrategy=sRGB -sDEVICE=pdfwrite -sOutputFile=output.pdf PDFA_def.ps input.pdf
The PDFA_def.ps is customized to use the sRGB ICC profile. Apart from that change, it is the standard def file that comes with GS 9.26.
Now comes the tricky part:
1- Running this conversion locally on Ubuntu 18.10 with GS 9.26 works fine and I get a valid PDF/A.
2- Running the same command in a Docker container (Ubuntu 18.10, GS 9.26) creates a PDF/A as well, which is also considered valid.
However, in the first scenario I can process the file using Mustang (https://github.com/ZUGFeRD/mustangproject) to create a valid electronic invoice. In the second scenario (Docker container) this fails, since the file is not considered a valid PDF by Mustang.
Checking both PDF files, I would have expected them to be identical since I am running the same conversion on the same input. However, they are not: the PDF created in the Docker container is 10 bytes smaller and shows some different meta information in the file itself.
I suspect that there must be some "hidden dependencies" that make GS act differently on my host system compared to the Docker container, but that feels entirely wrong and I am running out of ways to debug further.
Does anyone know whether GS has some more dependencies that might cause the same command to produce different results?
The answer is 'maybe'. It depends on how Ghostscript was built for starters.
I assume you are using a package, and not building from source yourself. In that case there are a number of dependencies, including FreeType, LibJPEG, JBIG2dec, LibTIFF, JPEG-XR, LCMS2, LibPNG, OpenJPEG, Expat, zlib, and potentially IJS, CUPS and X-Windows, depending on what devices were built in.
Any or all of these could be system shared libraries instead of being built using the specific version shipped by Artifex. They could also be different versions on the two systems.
That said, I think it's 'unlikely' that this is your problem. However, without seeing the PDF files I can't tell you why there is a difference. Differences in the metadata are to be expected, since that includes a date/time stamp.
I'd really need to see examples of the original and the two output PDF files to be able to comment further.
[Edit]
Looking at the files, they have been produced compressed (unsurprisingly), which can obviously lead to differences in size if there are small differences in the input streams. So the first task was to decompress the files.
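(For anyone wanting to repeat the comparison: one way, not necessarily the one used here, is to rewrite both files in uncompressed QDF form with qpdf and diff them; the file names below are placeholders.)
qpdf --qdf --object-streams=disable host.pdf host-qdf.pdf
qpdf --qdf --object-streams=disable docker.pdf docker-qdf.pdf
# -a forces a text diff even though some streams still contain binary data
diff -a host-qdf.pdf docker-qdf.pdf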
That done, I see there are, essentially, no differences between them. One of the operating systems is using a time zone 7 hours behind UTC, the other is in UTC, so where one of the systems is time-stamping with (e.g.)
2019-04-26T19:49:01Z
The other is using
2019-04-26T12:51:35-07:00
So instead of Z (for UTC) you get -07:00 which is where the extra 10 bytes are coming from. Other than that, the Unique IDs are (as you would imagine) different, the Length values for the streams are different for the streams containing dates, and the startxref is different because the streams are different lengths.
Both files claim to be compatible with PDF/A-1b. In short, I can see no real differences between them. Since you are using a tool to further process the files, I'd suggest you take the PDF file from your working system and try processing it on the non-working system, and vice versa; it seems to me that the problem may be the later processing rather than the PDF file itself. Perhaps you have different versions of that tool on the two systems.
For what it may be worth, Ghostscript can be induced into creating a ZUGFeRD file directly, see this bug report and this commit to the repository.
I'm pretty new to Informix and I'm trying to run a screen with sperform, but it's just seg faulting when I try to query. So far I have:
installed Ubuntu server 12 (64bit)
installed the Dev suite and runtime 7.50
installed the Informix engine 12.10
verified it was all up and running; can connect with dbaccess
created an example database & table and inserted a couple rows
generated a form using isql from the table
ran the generated form with sperform
As soon as I attempt to query with the form, I get a "Segmentation fault (core dumped)" and it exits. Can anyone help me understand why? Isn't this as basic as it gets?
Preliminary answer
Yes; that is as basic as it gets. No; it should not crash. There are essentially no circumstances under which that sequence should crash. You should probably file a bug report with IBM.
The only thing that might conceivably be an issue is that ISQL may have been built with an older version of the CSDK than the server installs and there may be an unexpected incompatibility. It should work, but occasionally flaws creep in. If you want to explore how to prove this possibility, say so. It is a little fiddly, but may get you up and running while the problem is resolved formally.
Extended answer
YES! I'd love to try to fix this.
The first step, it seems to me, is to see whether ISQL (Informix SQL) runs correctly when installed on its own — rather than when mixed with the Informix server code. It should work in both environments, but it is possible that the new server code has changed something that is causing the older tools code to break.
So, reinstall Informix SQL (and the other dev tools if you want, but you could save those until you've got a POC with just ISQL) into a new directory. Let's suppose your server is installed in /opt/informix; you could install your tools in /opt/isql instead. (No need to uninstall the tools from under /opt/informix yet.)
Copy the server sqlhosts file (from /opt/informix/etc/sqlhosts) to the new /opt/isql/etc/sqlhosts.
Change INFORMIXDIR=/opt/isql.
Add the new value to the front of your path (PATH=$INFORMIXDIR/bin:$PATH).
Worry about the setting of LD_LIBRARY_PATH — you want to pick up libraries from under /opt/isql/lib in preference to those under /opt/informix/lib.
Leave INFORMIXSERVER unchanged; you'll still be talking to the same database server.
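Putting those steps together, the tools environment might be set up along these lines (a sketch; the exact lib subdirectories vary between releases, so adjust to what is actually installed under /opt/isql):
# Environment for the separately installed tools; the server itself stays
# under /opt/informix and INFORMIXSERVER is left untouched.
export INFORMIXDIR=/opt/isql
export PATH=$INFORMIXDIR/bin:$PATH
export LD_LIBRARY_PATH=$INFORMIXDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
# sqlhosts is then found via $INFORMIXDIR/etc/sqlhosts, i.e. the copy made above.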
You should now try to (re)generate the form file and run it. With a small modicum of luck, it will work now.
OK, that works! Don't know if that's a good thing or not, but we're going to try to get that change into production.
It gets you going; that's good. It's also a relief to me that the fundamentals of the QA process for the tools release didn't break down. The product works when run in the environment it was developed for.
It's a nuisance that a later release of the server changed something so that the older build of the tools no longer works with the newer server. It is supposed to be OK. However, running with separate INFORMIXDIR values for tools and server is not unheard of. If the server was on a separate machine, the segregation would be inevitable — the tools would use a separate INFORMIXDIR from the one used by the server (ignoring NFS file systems, etc)
Is it possible that there's some aspect to my steps that cause something to be overwritten?
No. The classic 'Rule of TEN (Tools, Engine, Network)' — install tools before the server (before the network-enabled version of the server) more or less applies and is what you did. The separate network-enabled version of the server ceased to be relevant about 20 years ago, but tools before engine (the 'Rule of TE' just doesn't cut it) is normally correct.
Since the workaround works, we need to look ahead a bit: what does it mean for you?
You have a solution that will work pro tem.
You will need to be careful with environment setting when you run programs.
Programs using the tools (Informix 4GL, Informix SQL) will be run with INFORMIXDIR=/opt/isql and consequential environment settings.
Programs installed by the server (DB-Export, DB-Import, ON-Stat, etc) will be run with INFORMIXDIR=/opt/informix and consequential environment settings.
If you wish, you can set up scripts in /opt/isql/bin for the programs from /opt/informix/bin that you want developers or users to use.
The scripts in /opt/isql/bin will set the environment correctly for the server and then exec the server program.
The scripts in /opt/informix/bin will similarly set the environment correctly for the tools and then exec the tools program.
In each directory, assuming you're careful, you have a single script that actually sets the environment and runs the other program; the program names are simply (symbolic?) links to the master script.
You have two separate master scripts — one to set the server environment, one to set the tools environment.
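A sketch of what one such master script might look like (the name run-server-program and the exact paths are illustrative, not prescriptive):
#!/bin/bash
# Hypothetical /opt/isql/bin/run-server-program: every server program name
# in /opt/isql/bin is a (symbolic) link to this script.
export INFORMIXDIR=/opt/informix
export PATH=$INFORMIXDIR/bin:$PATH
export LD_LIBRARY_PATH=$INFORMIXDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
exec "$INFORMIXDIR/bin/$(basename "$0")" "$@"
The twin script under /opt/informix/bin does the same with the values swapped, setting INFORMIXDIR=/opt/isql before exec'ing the tools program.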
You should report the problem to IBM (Informix) Technical Support. You can outline what you've had to do to work around the problem. The fact that you have a workaround will lower the urgency, but it is still a problem that should, ideally, be fixed. (The world isn't ideal though, just in case you hadn't noticed; it may take time for the fix to be delivered.)
Is there a way to tell a Delphi project that builds a DLL to use, as its host application, an executable located in the same directory as the output directory of the DLL being built?
something like this:
One thing is, I'm using option sets with Delphi XE2, so in the dproj for the DLL I'm building I don't even have a DCC_ExeOutput directory set; I'm not sure if that matters.
Allowing this would seriously decomplicate some issues we've run into trying to migrate from VSS to SVN.
Also, what do you call the $(thing)'s?
The $(name) things are environment variables. I tried setting the host application to .\$(Platform)\$(Config)\Test.exe and received this error message:
Could not find program, '.\%Platform%\%Config%\Test.exe'.
Note how the $(...) was turned into environment variable syntax.
I also tried with $(systemdrive)\Test.exe and received this error message:
Could not find program, 'C:\Test.exe'.
So clearly environment variables will be substituted with their values, if they exist. I think it is reasonable to conclude that the environment used to start a host application does not define the special Delphi-specific environment variables.
So I think the answer to your question is that you cannot use indirection like this for the host application setting.
On the other hand, environment variables are substituted so perhaps you could use that to make things easier. In other words you could define some environment variables of your own. I've no idea whether that may be of help to you since I don't know the precise details of your problem.
I am having some trouble updating UMDF drivers using "devcon" during a standard code-deploy-debug cycle. The problem is that "devcon update" isn't really updating anything unless the version number or the date of the DLL file and the INF file has changed from what is stored in the system's driver cache folder. After a maddening series of experiments I've discovered that one way to force the thing to use the latest files is by doing the following:
1. Change the parameters passed to "stampinf.exe" in "makefile.inc" by explicitly setting a version with the "-v" option.
2. Modify the resource script file ("DRIVER_NAME.rc") to first define VER_USE_OTHER_MAJOR_MINOR_VER before including "ntverp.h" and then explicitly define VER_PRODUCTMAJORVERSION and VER_PRODUCTMINORVERSION. You'll note that this scheme does not allow us to change the build and revision numbers; on Win7 these seem to be fixed at 7600 and 16385 in "ntverp.h". Is this by design?
So, I first modify "makefile.inc" and set the "-v" option to something like "1.1.7600.16385", manually incrementing the minor version for every single build, and then modify the RC file and update VER_PRODUCTMINORVERSION with the same number.
Alternatively, if I run a command prompt under the SYSTEM account and go and delete the driver cache folder in "C:\windows\system32\DriverStore\FileRepository\DRIVER FOLDER" before running "devcon", then that works too.
Now, I am thinking I am missing something fairly basic here, as this seems to be a rather painful way of doing it. Please help! Thanks!
Why can't you just unplug the device and replace the unloaded DLL? You shouldn't need to reinstall the driver, just replace the module. Note that you shouldn't do this during production or anything that has to do with customers, but if you're writing a driver, just slam in the new module with the same version number.
On Win7 this seems to be fixed at 7600 and 16385 in "ntverp.h". Is this by design?
Yep, at least until the next service pack
As Paul Betts has suggested above, the way to go seems to be to simply replace the UMDF DLL directly in the driver folder (e.g. c:\windows\system32\drivers\umdf\) after disabling the device, either in Device Manager or using "devcon". I'd asked this question on Microsoft's device drivers newsgroup before posting here but hadn't got a satisfactory response; however, some folks ended up responding there after I posted here! So I'll put up a link to that post as well:
http://bit.ly/6PDxKT
We need to get data out of an older accounting system. We have received a dll that gives us access to the data we need. It includes a type library that we have imported.
If we run our test application from the same directory as the accounting system, everything works fine. If we try to run our application from a different directory, we get the following error:
Dynamically Bound RTS
Runtime DLL 'OOPS', version 3.1, entry point oops
not recorded in registry, not found or incompatible with requirements
of dynamically bound COBOL program. Dynamic binding of RTS requires:
Runtime DLL 'OOLSM', at least Version 3.1
Can anybody provide some helpful information on this?
Are we supposed to have some kind of cobol runtime in our directory? Or in the path? Or registered in the registry?
Thanks,
-Vegar
Updates:
Setting the system %path% to include the path to the accounting system seems to do the trick. Including it as a user variable did not have the same effect for some reason.
Which COBOL are you using?
I have done this for years with Micro Focus Net Express 3.1, and it all works just fine.
I write COBOL DLLs to access COBOL data files, and also write Delphi DLLs to add new features to old COBOL systems.
And yes, I do set the runtime path, that is, an environment variable called COBDIR; there are others, but usually %PATH% and %COBDIR% are enough.
If you give more details about which COBOL compiler you are using, and what the DLL interface you are calling looks like, it will be easier to help you.
And maybe "Dependency Walker" can help you identify which runtime files are missing, if any.
http://www.dependencywalker.com/
If it works from the accounting app's directory, but not a different one, the first thing I'd try is adding that directory to your path.
Unless it is already loaded into memory, Windows looks for DLLs that a program is requesting in every location listed in its PATH environment variable, and also in the directory the application is located in.