I have a couple of SQL files containing scripts to alter different stored procedures. The Advantage Server is installed and running on the machine, but the Architect (ARC32.exe) isn't.
How would I be able to run those scripts without the architect?
(I have updated my question to make it clear to everyone, although I have already got the answer from @Mark Wilkins.)
If you are using v11.x, the SQL command line utility is another possibility. It is a standalone application that should be simple to copy from one place to another. Assuming that some Advantage client is installed on the machine in question, I believe you would only need the command line utility binary itself (asqlcmd.exe).
A simple way of using it would be to put the ALTER PROCEDURE statement in a text file and then run a command such as:
asqlcmd -CS "Data Source=\\server\path\thedatabase.add;User ID=adssys" -i somefile.sql
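The somefile.sql here would simply contain the ALTER PROCEDURE text you already have. Purely as an illustration (the procedure name, parameter, and body below are made-up placeholders, not anything from your actual scripts):
/* hypothetical contents of somefile.sql; substitute your real procedure */
ALTER PROCEDURE UpdatePrices ( Factor DOUBLE )
BEGIN
  UPDATE Products SET Price = Price * Factor;
END;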
We are using Pentaho Data Integration V7, working with multiple data sources and an Oracle DWH destination.
We have stored all the connection access data in a parametrization table, let's call it D_PARAM. All the connections are configured using parameters (${database_name}, etc.).
At the beginning of every job, we have a transformation with a "Set Variables" step which reads the right parameters from D_PARAM.
This all works fine; my problem is:
Every time we want to edit a single transformation, or during the development of a new one, we can't use the parametrized connections because the parameters haven't been set. We then need to use "hardcoded" connections during development.
Is there a better way to manage this situation? The idea of having the connections parametrized is to avoid errors and simplify connection management, but if in the end we need both kinds of connections, I don't see much benefit in them.
There's no simple answer. You could rotate your kettle.properties file to change the default values; keep all the values in the file:
D_PARAM = DBN
D_PARAM_DB1 = DB1
D_PARAM_DB2 = DB2
...
And just update D_PARAM with the value you need from the different D_PARAM_DBN entries before starting PDI. It's a hassle to be constantly updating the kettle.properties file, but it works out of the box.
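If you'd rather not edit the file by hand each time, a small wrapper around the Spoon launcher can do the swap for you. A rough sketch for Linux, assuming the default kettle.properties location and a hypothetical DB1 value:
# swap the default value before launching Spoon (path and value are assumptions)
sed -i 's/^D_PARAM *=.*/D_PARAM = DB1/' ~/.kettle/kettle.properties
./spoon.sh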
You could also try working with environments. For this you would have to install a plugin available on GitHub: https://github.com/mattcasters/kettle-environment. It was created by a former PDI developer; I don't know if it works with v7 (it was updated to work with 8.2), but it probably would. To test it, you can install your PDI version in another directory on your PC and install the plugin there (along with any other plugins you have in your current installation), so you don't break your setup. This blog entry gives you details on how to use environments: http://diethardsteiner.github.io/pdi/2018/12/16/Kettle-Environment.html
I don't know if the environments plugin would solve your problem, because you can't change the environment in the middle of a job, but for me, using the maitre script with environments when I run a job or transformation has made it easier to work with different projects/paths in my setup.
In Spoon you can click on the "Edit" menu and "Set environment variables". It lists all variables currently in use and you can set their values. The transformation will then use those values when you run it.
This also works in Preview, but it's somewhat buggy; it doesn't always pick up updated values.
I have created a script, with your help, to back up and restore an Informix database HERE. As I'm new to Informix, I used dbimport and dbexport.
Now I found out about the onpladm command and I found that it can be used to create, load, or unload all tables in a database.
Can anyone explain how it works and what are the benefits of using it instead of dbimport/dbexport?
I've built and installed the "apoc" procedures according to the GitHub page (the apoc-1.0.0-SNAPSHOT.jar file was copied into the plugins directory after the server was stopped, and then I started the server again), but when I try to call any of the procedures, I get an error message.
ex:
$ call apoc.help('search') ;
"There is no procedure with the name apoc.help registered for this
database instance. Please ensure you've spelled the procedure name
correctly and that the procedure is properly deployed."
I have come across the issue on both macOS and Windows installations. I'm running Neo4j 3.0.0 as a server (locally on port 7474).
Have I missed any of the settings?
Thanks,
Babak.
I had to manually add this line to the .neo4j.conf file:
dbms.directories.plugins=/Applications/Neo4j\ Community\ Edition.app/Contents/Resources/app/plugins
(assuming that's where you dropped the APOC jar) and then restart the server.
(It's a little confusing as there's an option in the management app to configure this path, but it seems not actually to enable plug-ins on the server.)
For Windows users it should look like this:
dbms.directories.plugins=c:/Program\ Files/Neo4j\ CE\ 3.0.0/plugins
(assuming you have Neo4j installed under "Neo4j CE 3.0.0").
Now (2023) the procedure seems to be different.
There are potentially two files needed to run APOC (https://community.neo4j.com/t5/neo4j-graph-platform/unable-to-see-some-apoc-load-functions/m-p/64154)
Some functions may be disabled by default and need to be enabled in the relevant database config, e.g. dbms.security.procedures.allowlist=apoc.coll.*,apoc.load.*,apoc.periodic.*
Consider turning this on and then off again after use, for security.
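Put together, the relevant neo4j.conf lines would look roughly like this (the plugins path and the procedure patterns are examples only; adjust them to your installation and to the APOC functions you actually use):
# example only - point this at the directory holding the APOC jar(s)
dbms.directories.plugins=/path/to/neo4j/plugins
# enable only the APOC procedures you need
dbms.security.procedures.allowlist=apoc.coll.*,apoc.load.*,apoc.periodic.*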
OK, due to requirements I have a main PowerShell script that calls child PowerShell scripts using the & operator. In two of my child scripts I use the Excel COM object to either read an Excel file and/or create an Excel file. If I run these scripts locally they run great, no problems. If I run them through a scheduler (in this case the Tidal scheduling tool) I have issues.
Issue 1:
The first child script reads an Excel file to get the names of the worksheets, then uses a worksheet name to query the Excel file using OleDB. The query function is in a utilities module and gives an error that it cannot find the file or that it is locked by another process. I've killed the Excel process and it still wouldn't let me query the file. As a test I commented out the portion of the script that reads the file and hard-coded the worksheet name, and it works fine, so somehow the child script is not able to release the handle on the COM object/file.
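(For reference, fully releasing the Excel COM objects before querying the file with OleDB usually takes something like the sketch below; the variable names and file path are placeholders, not the actual script.)
# sketch only - read the worksheet names, then fully release Excel so the file is not locked
$excel = New-Object -ComObject Excel.Application
$excel.DisplayAlerts = $false
$workbook = $excel.Workbooks.Open("C:\temp\input.xls")
$sheetNames = @($workbook.Worksheets | ForEach-Object { $_.Name })
$workbook.Close($false)
$excel.Quit()
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($workbook)
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($excel)
[GC]::Collect()
[GC]::WaitForPendingFinalizers()
# only query the file with OleDB after this point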
Issue 2:
From a second child script I create an Excel spreadsheet. I'm creating a CSV file which I then save as an XLS file. Again, it works fine when running locally, but when I run it through the scheduler I get an error when attempting to run the following line: [void]$worksheet.QueryTables.item($connector.name).Refresh and the error that I get is:
Exception calling "Refresh" with "0" argument(s): "Excel cannot find the text file to refresh this external data range. Check to make sure the text file has not been moved or renamed, then try the refresh again."
Again, I'm calling these child scripts using the & operator (i.e. & \scriptpath\script.ps1).
Anyone seen this before and know how I can make this work?
Thanks!
I have resolved this issue. It looks like the Tidal scheduler has agents, and some of the agents set up at my client will run my scripts with no problems, while others will not create files or will lock files, with no real errors given. Anyway, sorry I don't have more than that, but PowerShell is working fine. :)
I am running a Java application that transfers the files I need to import to the server DB2 is on. The Java application then creates a JDBC connection to the database and runs:
CALL SYSPROC.ADMIN_CMD('import from <filename> of del modified by decpt, coldel; messages on server insert into <view>')
The problem I have seems somehow connected to the charset of either the database or the user DB2 uses to import the files (via the ADMIN_CMD stored procedure). The problem is:
Umlauts like ä, ö, ü get messed up by this import. I had this sort of problem in the past, and the solution was always to set the LC_CTYPE of the user importing the data to de_DE.iso88591.
What I already ruled out as the source of the problem:
- The file transfer to the database server (umlauts are still OK after that)
- The JDBC connection (I simply inserted a line via an SQL command instead of reading from a file)
The thing is, I don't know what user DB2 uses to import files through ADMIN_CMD. And I also don't believe it could be connected to the DB2 settings, since every other way of inserting, loading, etc. data into it works fine.
And yes, I need to use ADMIN_CMD. The DB2 command line tool is a performance nightmare.
The best approach (for sanity):
Create all databases as UTF-8
Make sure all operating system locales are UTF-8
Get rid of all applications that don't handle their data as UTF-8
Slaughter every developer and vendor not adhering to UTF-8. Repeat and rinse until 100% completed.
DB2 indeed attempts to be smart and convert your input data for you (the IMPORT command basically pipes your data into INSERT statements, which always get handled like that). The link I gave outlines the basic principle and gives you a few commands to try out. There is also an official explanation of something similar; according to it, you could try setting the db2codepage environment variable to correspond with your delimited data files, and that should help. Also, IXF format exports might work better, since they carry encoding-related information in every file.
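For example, on the machine running the import you could try something along these lines (1252 is only an example here; it should match the codepage your delimited files were actually written in):
# set the DB2 application code page to match the data files (example value)
db2set DB2CODEPAGE=1252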
Thanks for your response.
I finally fixed the issue by adding
MODIFIED BY CODEPAGE=1252
to my JDBC ADMIN_CMD import command. This seems to override whatever codepage setting the DB was using before. It also appears the default codepage of the database didn't matter, since it is set to 1252. The only thing I can think of right now as the reason for all this is some Linux setting DB2 uses when importing through ADMIN_CMD.
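For reference, the full call then looks roughly like this (the filename and view placeholders are kept from the original command; modifier order within MODIFIED BY shouldn't matter):
CALL SYSPROC.ADMIN_CMD('import from <filename> of del modified by decpt, coldel; codepage=1252 messages on server insert into <view>')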