I'm pretty new to Informix and I'm trying to run a screen with sperform, but it's just seg faulting when I try to query. So far I have:
installed Ubuntu server 12 (64bit)
installed the Dev suite and runtime 7.50
installed the Informix engine 12.10
verified it was all up and running; can connect with dbaccess
created an example database & table and inserted a couple rows
generated a form using isql from the table
ran the generated form with sperform
As soon as I attempt to query with the form, I get a "Segmentation fault (core dumped)" and it exits. Can anyone help me understand why? Isn't this as basic as it gets?
Preliminary answer
Yes; that is as basic as it gets. No; it should not crash. There are essentially no circumstances under which that sequence should crash. You should probably file a bug report with IBM.
The only thing that might conceivably be an issue is that ISQL may have been built with an older version of the CSDK than the server installs and there may be an unexpected incompatibility. It should work, but occasionally flaws creep in. If you want to explore how to prove this possibility, say so. It is a little fiddly, but may get you up and running while the problem is resolved formally.
Extended answer
YES! I'd love to try to fix this.
The first step, it seems to me, is to see whether ISQL (Informix SQL) runs correctly when installed on its own — rather than when mixed with the Informix server code. It should work in both environments, but it is possible that the new server code has changed something that is causing the older tools code to break.
So, reinstall Informix SQL (and the other dev tools if you want, but you could save those until you've got a POC with just ISQL) into a new directory. Let's suppose your server is installed in /opt/informix; you could install your tools in /opt/isql instead. (No need to uninstall the tools from under /opt/informix yet.)
Copy the server sqlhosts file (from /opt/informix/etc/sqlhosts) to the new /opt/isql/etc/sqlhosts.
Change INFORMIXDIR=/opt/isql.
Add the new value to the front of your path (PATH=$INFORMIXDIR/bin:$PATH).
Worry about the setting of LD_LIBRARY_PATH — you want to pick up libraries from under /opt/isql/lib in preference to those under /opt/informix/lib.
Leave INFORMIXSERVER unchanged; you'll still be talking to the same database server.
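As a concrete sketch, the environment for a tools session might end up looking like this (paths as in the example above; the server name demo_on is a placeholder for whatever your INFORMIXSERVER already is):

    # Hypothetical environment for running ISQL from /opt/isql against
    # the server installed under /opt/informix; adjust paths to your install.
    export INFORMIXDIR=/opt/isql
    export INFORMIXSERVER=demo_on        # unchanged; use your existing server name
    export PATH=$INFORMIXDIR/bin:$PATH
    # Tools libraries first, server libraries second:
    export LD_LIBRARY_PATH=$INFORMIXDIR/lib:/opt/informix/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}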
You should now try to (re)generate the form file and run it. With a modicum of luck, it will work now.
OK, that works! Don't know if that's a good thing or not, but we're going to try to get that change into production.
It gets you going; that's good. It's also a relief to me that the fundamentals of the QA process for the tools release didn't break down. The product works when run in the environment it was developed for.
It's a nuisance that a later release of the server changed something so that the older build of the tools no longer works with the newer server. It is supposed to be OK. However, running with separate INFORMIXDIR values for tools and server is not unheard of. If the server was on a separate machine, the segregation would be inevitable; the tools would use a separate INFORMIXDIR from the one used by the server (ignoring NFS file systems, etc.).
Is it possible that there's some aspect of my steps that causes something to be overwritten?
No. The classic 'Rule of TEN (Tools, Engine, Network)' — install tools before the server (before the network-enabled version of the server) more or less applies and is what you did. The separate network-enabled version of the server ceased to be relevant about 20 years ago, but tools before engine (the 'Rule of TE' just doesn't cut it) is normally correct.
Since the workaround works, we need to look ahead a bit: what does it mean for you?
You have a solution that will work pro tem.
You will need to be careful with environment setting when you run programs.
Programs using the tools (Informix 4GL, Informix SQL) will be run with INFORMIXDIR=/opt/isql and consequential environment settings.
Programs installed by the server (DB-Export, DB-Import, ON-Stat, etc) will be run with INFORMIXDIR=/opt/informix and consequential environment settings.
If you wish, you can set up scripts in /opt/isql/bin for the programs from /opt/informix/bin that you want developers or users to use.
The scripts in /opt/isql/bin will set the environment correctly for the server and then exec the server program.
The scripts in /opt/informix/bin will similarly set the environment correctly for the tools and then exec the tools program.
In each directory, assuming you're careful, you have a single script that actually sets the environment and runs the other program; the program names are simply (symbolic?) links to the master script.
You have two separate master scripts — one to set the server environment, one to set the tools environment.
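For example, the master script in /opt/isql/bin might look something like this (the layout is the one described above; the idea is that onstat, dbexport, etc. in /opt/isql/bin are symlinks to it):

    #!/bin/sh
    # Hypothetical /opt/isql/bin master script: resets the environment for the
    # server and execs the real program of the same name from /opt/informix/bin.
    INFORMIXDIR=/opt/informix
    PATH=$INFORMIXDIR/bin:$PATH
    LD_LIBRARY_PATH=$INFORMIXDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
    export INFORMIXDIR PATH LD_LIBRARY_PATH
    exec "$INFORMIXDIR/bin/$(basename "$0")" "$@"

The mirror-image script in /opt/informix/bin does the same thing with INFORMIXDIR=/opt/isql.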
You should report the problem to IBM (Informix) Technical Support. You can outline what you've had to do to work around the problem. The fact that you have a workaround will lower the urgency, but it is still a problem that should, ideally, be fixed. (The world isn't ideal though, just in case you hadn't noticed; it may take time for the fix to be delivered.)
Related
Can anyone suggest the steps to apply a patch (patch upgrade) to an IBM Informix database? Please suggest the best practices available. If possible, share a URL or any documents.
It's a big topic. A good deal depends on how the server is currently set up; there are setups that make it hard and others that make it easier. Another major factor is your level of risk aversion. You need to assess the amount of down-time you can afford. Also, consider how often you have practiced recovery from backups; it probably won't be necessary, but you need to cover your bases.
I am assuming you're using Informix Dynamic Server, not Informix Standard Engine (SE). Upgrading SE is much, much simpler.
Preparation
Before you install, make sure you have a good, recent, level 0 archive of your system.
Also, make sure you know where your software is installed, and which disks and files it uses.
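For example, something along these lines (this assumes ontape is your archive tool; substitute ON-Bar if that is what you use):

    ontape -s -L 0                   # take a level-0 archive
    onstat -d > chunk-layout.txt     # record dbspaces and chunk paths
    echo $INFORMIXDIR                # note where the software lives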
Route 1: Simple, but potentially risky
Make sure you have a backup copy of $INFORMIXDIR.
Take down the servers that are currently running using this $INFORMIXDIR.
Install the new version of the software over the existing software.
Restart the server.
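At the command level, the sequence is roughly this (a sketch only; the backup path is a placeholder and the install step depends on how your media is packaged):

    tar cf /backup/informix-dir.tar $INFORMIXDIR   # backup copy of $INFORMIXDIR
    onmode -ky                                     # take the server down
    # ...install the new software over $INFORMIXDIR here...
    oninit -v                                      # bring the server back up
    onstat -                                       # confirm it is on-line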
Why is this risky? At issue is what happens if anything goes wrong, and also the length of time the server(s) is (are) down. If you bring up the server and decide something is wrong and you wish to go back to the old version, you have to take servers down, reinstall the old software (copy off the backup?), and then bring the (old version of the) server back up. This takes time. This isn't often a problem, but it has happened on occasion over the last couple of decades.
Route 2: Parallel install
This is the way I do it, but I ensure that my system is set up so that this is easy to do. In particular, the file names used to identify the chunks used by the server are symlinks to the actual storage. This makes it easier to move or replace storage when necessary — you change the symlink instead of having to modify the server configuration.
Create a new directory (e.g. /opt/informix.new) and install the new version of the software in it.
Copy the configuration files from the current $INFORMIXDIR (e.g. /opt/informix) into the new one.
Ensure any other files or directories under the old $INFORMIXDIR that are needed for the new one are copied across or recreated empty.
Review the parallel setup; as best you can, make sure that when you're ready to switch, everything will work.
Take the old server down.
Move the old $INFORMIXDIR to a new name (i.e. mv /opt/informix /opt/informix.old).
Move the new $INFORMIXDIR to the working name (i.e. mv /opt/informix.new /opt/informix).
Restart the server.
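The switch-over itself then looks something like this (paths as in the example above):

    onmode -ky                           # take the old server down
    mv /opt/informix /opt/informix.old   # park the old INFORMIXDIR
    mv /opt/informix.new /opt/informix   # promote the new one
    oninit -v                            # restart; any conversion happens automatically
    onstat -                             # confirm the server is on-line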
Why is this less risky? The primary advantage is that the old software is still on the machine and switching back to the old version is therefore simply a question of undoing the original pair of move commands. Another major advantage is that the down-time for the system is limited to the time taken to stop, switch directories, and restart the system.
What are the potential downsides? If you weren't careful enough about copying the necessary files from the old to the new system, you can find yourself missing something critical.
Note that if your chunks are not symlinks, and especially if they are cooked files stored under the old $INFORMIXDIR, you can run into problems. These are not insuperable; you just have more work to do than simply moving directories. Do not (repeat not) move or copy chunks while the server is running. They will not (necessarily) be consistent.
Variations? I usually needed multiple versions of Informix around, so I'd use sets of directories like /work3/informix/ids-12.10.FC1 and /work3/informix/ids-11.70.FC4 as the real directories. I'd then use a standard symlink name as $INFORMIXDIR, such as /opt/informix, which would link to the current version-specific INFORMIXDIR under /work3/informix in this example. (Actually, there were some extra levels of complexity in my setups, but my requirements as an Informix developer were different from those of most customers.) But the key point is that instead of moving directories, I switched a symlink: rm /opt/informix; ln -s /work4/informix/ids-12.10.FC3 /opt/informix to use 12.10.FC3 instead of 12.10.FC1, for example.
Post-installation
Run a new level 0 archive.
General observations
Informix upgrades are usually seamless and smooth. If there is conversion work to do on the upgrade, the server does it automatically when the new version is brought up.
Be aware of the mechanisms for reverting to an older version of the server if that is found to be necessary.
I've done presentations and/or papers on this in years past at IIUG conferences. Check out the IIUG web site, and the IBM Informix documentation.
After Googling, I found a lot of HipHop documentation, but much of it was posted between 2011 and 2013.
Earlier this year, a new version of HipHop was launched that even supports Drupal and includes a lot of improvements...
I've always used Zend Guard to deploy my commercial applications, but now I've started to seriously consider using HipHop in production, which brings me to the question:
Can we run an application using only the HHBC bytecode (without the .php source code)?
Here is the reference from my research:
https://github.com/facebook/hhvm/wiki/FAQ
The question may seem very obvious, but it is not so easy to find this answer in the project documentation.
Thanks in advance!
Well, yes and no.
HHVM has a so-called RepoAuthoritative mode in which HHVM will no longer check the existence of the PHP files or how up to date they are; instead, it will retrieve the HHBC directly from its cache.
Theoretically, you can follow these steps:
pre-generate the HHBC for all your PHP files and insert that HHBC in HHVM's cache. This is the so-called pre-analysis phase (if you ever see it in HHVM documentation, this is what they mean by it)
turn on RepoAuthoritative mode (it's just 1 line in HHVM's config)
delete your PHP code
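In outline, and with the caveat that the exact flags and ini keys differ between HHVM releases (treat the names below as assumptions to verify against your version's documentation), the pre-analysis plus RepoAuthoritative setup looks roughly like this:

    # Build the bytecode repo (pre-analysis) for everything under /var/www:
    hhvm --hphp -t hhbc --input-dir /var/www --output-dir /var/cache/hhvm
    # Then point the server at the repo in its ini/config, for example:
    #   hhvm.repo.authoritative = true
    #   hhvm.repo.central.path  = /var/cache/hhvm/hhvm.hhbc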
This way your PHP applications will run just fine without the source code being present. Doing a server restart won't change this since HHVM's bytecode cache lives on disk (it's implemented as an SQLite database).
However, it will be kind of a headache if you:
want to change something in your code. You would have to copy your code, make the change and repeat the pre-analysis phase.
want to upgrade HHVM to a newer version. HHVM uses its build ID as part of the cache key so, if you upgrade it, the bytecode cache becomes unreachable and, since you'll be running in RepoAuthoritative mode, your application will be reduced to a bunch of HTTP 404 errors. To fix this, you would have to repeat the pre-analysis phase as well.
Bottom line: no upside, big downside. There's just no point in doing it.
PS: I hope I answered your question. It's also possible that I misunderstood what you asked; if that's the case, please let me know in a comment.
My computer crashed recently. We have a Delphi app that takes a lot of work to get running.
One of my co-workers has it all installed still. Is there a way to copy the stuff stored in the palette? And the library paths?
I am using Delphi 5 (I know it is very very very old)
That information is stored in the Registry. I don't know exactly how Delphi 5 does it, but try looking for a key called HKEY_CURRENT_USER\Software\Borland\Delphi\5 or something like that. You'll find all the registration information under that key, including a list of installed packages. You can export the keys to a registry file, copy it to the new computer and import it.
Standard disclaimer: Mucking around in the registry manually can be risky if you don't know what you're doing. Be very careful, and if this solution causes your computer to crash, your house to burn down, or demons to come flying out your nose, it's not my fault.
Try CNWizards, which has an export functionality for your IDE settings. You can use the same tool to restore them on the new machine. We use it to get the same settings on every development machine. That way we can ensure that all builds are the same, regardless of who built them.
Based on my experience of having done this a few times(!), the most important registry keys are:
HKEY_CURRENT_USER\Software\Borland\Delphi\5.0\Known Packages
HKEY_CURRENT_USER\Software\Borland\Delphi\5.0\Library
and possibly
HKEY_CURRENT_USER\Software\Borland\Delphi\5.0\Known IDE Packages
and maybe
HKEY_CURRENT_USER\Software\Borland\Delphi\5.0\Palette
HKEY_CURRENT_USER\Software\Borland\Delphi\5.0\Palette Defaults
So long as you have done a standard D5 installation first.
It's easier/more reliable to let the IDE fill in the other bits as you start using it and change options as appropriate. Some component packages, e.g. madExcept, DevExpress, etc., are often best re-installed using their own installers anyway.
Unless you're going to have multiple users on the same machine using Delphi then the HKLM stuff isn't really all that important - I don't think.
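If you prefer scripting the transfer to clicking around in regedit, reg.exe can export and re-import those keys (a sketch; adjust the key path if you export a different level of the tree):

    reg export "HKCU\Software\Borland\Delphi\5.0" delphi5.reg
    reg import delphi5.reg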
As a related aside - I have learned that a good way to handle this is to build a FinalBuilder script (or similar) to set up my Delphi environment each time I decide to use a new machine/installation. I copy/download/checkout (which can be done in FB too) all package source, then use FB to compile it, copy it, create dirs, and fill in the appropriate registry keys etc. I always get a consistent environment, and it makes it much easier to rebuild individual components or packages as and when they get upgraded too. The items can also be put into the script in 'dependency order' so that you know to re-compile a dependent package if something else changes. I now have a single FB script that builds D5, D2007, D2009, and D2010 environments and packages of all my main components, depending on which compiler(s) I'm interested in, which I indicate with a simple variable. Well worth it.
This seems to have just worked for me on Win 7 SP1 with Delphi 5.
Logged in as the user with Delphi & 3rd party components installed.
Registry export of HKEY_CURRENT_USER\Software\Borland (no other Borland products installed, so I selected Borland rather than Borland\Delphi\5.0).
Logged into the PC as a new user.
Did not start Delphi 5 (i.e. it had never been started for this user).
Regedit: File, Import.
Started Delphi; all components, including lots of 3rd party ones, were present.
Project compiled as expected under new user.
My current Rails development environment is Aptana + RadRails plugin on Windows XP and it's a little slow running tests, rake, and generators.
If you've evolved and proven your Windows Ruby on Rails development environment into something you're happy with and is fast, please share the details below.
Many thanks,
Eliot
Try this guide: http://www.akitaonrails.com/2009/1/13/the-best-environment-for-rails-on-windows
To add on to Omar: instead of dealing with VMWare, you could install Portable Ubuntu, which runs inside Windows. Though you will get a performance hit from doing so, it will give you a Linux environment to work in and you won't have to worry about installing another operating system.
Although I work primarily with Ubuntu now, I was using a Windows machine with Vim on it. Vim has a plugin called rails.vim. It understands the Rails structure very well. These are the things I found very useful:
Navigation between model, controller, unit test, and functional test within 3-4 keystrokes using :RModel, :RUnittest, :RFunctionaltest, :RController.
Ability to run a unit/functional/integration test right away using :Rake
A quick jump to console using :RConsole
A quick jump to helpers using :RHelper
The 'goto file' shortcut ('gf') now behaves in a predictable manner. It even looks up files inside gems you have installed.
The video on the site hardly does it justice. If you are not a Vim user, then I would suggest the E text editor. It is not free, but worth every penny you pay.
I am led to believe that Rails (well, Ruby, really) on Windows is generally slow, compared to *n[iu]x, but since I haven't experienced the latter, I remain blissfully ignorant. In particular, there's a lag while the Rails environment loads that is tedious even on a fairly fast (3GHz Xeon) box.
On top of that, there's the overhead that an IDE brings. Of the more recent ones, I've tried NetBeans and RubyMine. Both are very capable and a little slow, compared to my normal working environment of command line and text editor, which pretty much suffices 95% of the time: I find I don't need much IDE support when I'm developing test-first. I still find myself mostly using SciTE, largely because of the "Run" command being easily accessible. With a little tweak to the "require test_helper" line in my tests, a single test execution is no more than an F5 away, and the whole suite is available from the command line with a quick "rake".
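For reference, the command-line equivalents amount to something like this (the test file name is a placeholder, assuming the usual Rails 2-era layout):

    ruby -Itest test/unit/user_test.rb   # run a single test file
    rake                                 # run the whole suite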
If I need to debug into the framework to clear up (usually) some misunderstanding on my part, then I currently lean towards NetBeans, where the debugger seems a little more intuitive. I suspect RubyMine may have more power, but I haven't found myself needing it yet.
Irrespective of all the above though, the key to performance on Windows is the time to execute 'environment.rb', and that's not an easy nut to crack. (Here's hoping I'm totally wrong and I've missed something super-cool, btw.)
I would seriously advise against Rails development on Windows, and my reasoning is that you won't be using a Windows machine in production.
You will most likely be running some sort of Linux machine, because Passenger won't work on Windows, mongrel_cluster (last time I checked) also doesn't run on Windows, and IIS is a nightmare. Trust me, consistency between development and production is a huge bonus.
If you must run Windows, then I would recommend running Rails inside a Virtual Machine with a Linux distribution of your choice. That way you could use something like e-texteditor (which comes highly recommended as a great alternative to Textmate) and have a Samba share to a git/svn repository on your Virtual Machine.
Check out VMware Server and install CentOS / Ubuntu. It's free and will give you an insight into development on Linux, which is ultimately where you want to be.
I'd recommend JRuby for Windows.
Ruby in Steel isn't bad if you want to use Visual Studio.
It's got its issues, but it's not as "slow" as the Eclipse variants I've tried.
RadRails so far has the most complete code completion I've seen, as it knows about your models and such far more than Ruby in Steel. Even if it's slow to load the data for it, at least it's there.
If there are no immutable reasons that you are using Windows XP, you should just switch to Linux. There are none of the weird compatibility issues that arise on Windows. If your application will eventually be deployed to a Linux machine, it's easier to develop on. Plus, it would solve your performance issues.
https://help.ubuntu.com/community/RubyOnRails
If there are constraints that make Windows absolutely necessary, please revise and specify.
My problem is creating an Ant target to automate running our installer in console mode.
The installer is created using InstallAnywhere 2008, which UniversalExtractor recognizes as a 7-zip archive. Once I have the archive unpacked, it appears that the task can use an input file to drive the console (at the very least, it appears that emitting a quit shuts everything down correctly, and output is captured).
So it looks to me as though I have all of the pieces I need to prove out this idea, except a clean way to perform the self-extraction and then stop. Searching for a command-line argument to stop the automatic execution has not produced a likely candidate, and the only suitable Ant task I've found ( http://www.pharmasoft.be/7z/ ) isn't so clearly documented that I have a lot of confidence in it.
The completed solution is expected to work on Windows, Linux, and a small handful of other Unix environments.
What's the best practice to use here?
Since you control the installer creation, can you run the self-extraction step on your machine, package the results in a ZIP file before the installer is launched, etc., and use that instead of the single-file executable? Not very elegant, but it may work.
Also, I am a bit hesitant to blatantly promote my project :) but since it has been a while since you asked the question and nobody has answered, have you considered an alternative? Our project InstallBuilder allows you to install in unattended mode directly, without having to autoextract the contents. Just invoke the executable with --mode unattended, pass any additional options you may need from the command line or an external file and you are good to go. We have a lot of ex-InstallAnywhere customers :)
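For completeness, an unattended InstallBuilder run looks something like this (the installer name and option file are placeholders; check the InstallBuilder documentation for the full list of options):

    ./myapp-1.0-linux-installer.run --mode unattended --optionfile install-options.txt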