Yesterday I installed Oracle 12c Enterprise Edition on my laptop. When I tried to connect to the DB via SQL*Plus, I got the error below:
C:\Users\USER>sqlplus
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 28 14:12:46 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Enter user-name: userdb
Enter password:
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
I tried all the tricks mentioned on the internet but couldn't get rid of this error.
I also tried the following:
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 1543503872 bytes
Fixed Size 3045984 bytes
Variable Size 989857184 bytes
Database Buffers 536870912 bytes
Redo Buffers 13729792 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10:
'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF'
I also tried the following, but I am still getting errors:
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area 1543503872 bytes
Fixed Size 3045984 bytes
Variable Size 989857184 bytes
Database Buffers 536870912 bytes
Redo Buffers 13729792 bytes
SQL> alter database mount;
Database altered.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10:
'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF'
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 10:
'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF'
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10:
'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF'
Can someone help here? Thanks!
I've had the same problem and just want to share my solution, in case anybody else gets the ORA-01033: ORACLE initialization or shutdown in progress error in an Oracle 12c database. My database showed me the error every time I tried to connect as a user of a sample schema (e.g. hr).
The following worked for me:
SQLPlus> connect sys as sysdba
SQLPlus> alter pluggable database all open;
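If the pluggable databases show as closed again after the next instance restart, you can also persist the open state. A minimal sketch, assuming release 12.1.0.2 or later (SAVE STATE is not available in earlier 12.1 patch sets):
SQLPlus> connect sys as sysdba
SQLPlus> alter pluggable database all open;
SQLPlus> alter pluggable database all save state;
You can verify with select name, open_mode from v$pdbs; each PDB should then report READ WRITE.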
I had the same error and I solved it.
The problem you got is due to a change in the location of one or a few datafiles; in your case it is datafile 10. The error is:
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 10:
'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF'
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10:
'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF'
The solution is to find this datafile on your hard drive and put it back at the path Oracle expects, which in your case is: C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF
Note that you should do this for every datafile whose directory you have changed. Alternatively, you can repoint Oracle at the file's current location, as sketched below. I hope this will be helpful.
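If you would rather leave the datafile where it currently is, a sketch of the alternative is to rename it in the control file while the database is mounted (D:\NEWLOCATION below is only a placeholder for wherever the file actually ended up):
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database rename file 'C:\ORACLEDB12C\APP\USERNAME\ORADATA\ORCL\PDBORCL\EXAMPLE01.DBF' to 'D:\NEWLOCATION\EXAMPLE01.DBF';
SQL> alter database open;
SQL> alter pluggable database all open;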
I have an ASP.NET MVC application which is trying to open the OLE DB connection below:
string conString = #"Provider=Advantage OLE DB Provider;Data Source=" + dbfFilePath + ";Extended Properties=dBASE IV;";
using (dBaseConnection = new OleDbConnection(conString))
{
dBaseConnection.Open();
// Some stuff
}
I have installed the provider package from here.
I am using this provider to access a DBF file (the path in the dbfFilePath variable) and later add some information to it. When I call Open in the code snippet above, I get the exception message below:
Error 6420: The 'Discovery' process for the Advantage Database Server failed. Unable to connect to the Advantage Database Server. axServerConnect AdsConnect.
Previously I was using the VFPOLEDB.4 provider, and it worked fine for reading and modifying the DBF file. The problem is that it is only available in 32-bit (there is no 64-bit version), and I now need 64-bit, so I decided to use the Advantage OLE DB provider, which is available in 64-bit and, as far as I know, does the same as VFPOLEDB.
What am I doing wrong?
UPDATE 2020/11/16:
If I add some parameters to the connection string:
string conString = @"Provider=Advantage OLE DB Provider;Data Source=" + dbfFilePath + ";ServerType=ADS_LOCAL_SERVER;TableType=ADS_VFP;Extended Properties=dBASE IV;";
then when opening the connection I get the exception below:
Error 7077: The Advantage Data Dictionary cannot be opened. axServerConnect AdsConnect
UPDATE 2020/11/20:
var dbfFilePath = @"C:\MyApp\Temp"; // using C:\MyApp\Temp\myTable.dbf does not work (the Open call below fails)
string conString = @"Provider=Advantage OLE DB Provider;Data Source=" + dbfFilePath + ";ServerType=ADS_LOCAL_SERVER; TableType=ADS_VFP;";
using (dBaseConnection = new OleDbConnection(conString))
{
dBaseConnection.Open();
OleDbCommand insertCommand = dBaseConnection.CreateCommand();
insertCommand.CommandText = "INSERT INTO [myTable] VALUES (2,100)";
insertCommand.ExecuteNonQuery();
}
Note: [myTable] has the same name as the DBF file within C:\MyApp\Temp.
Now Open works, but insertCommand.ExecuteNonQuery() gets stuck (it does nothing).
UPDATE 2020/11/27:
OK, I think I have detected what is happening. It works fine when using the Advantage OLE DB provider in 32-bit; with the 64-bit provider it does not work. In both cases I use it on Windows Server 2012 R2 Standard 64-bit, and the Advantage OLE DB provider is version 11.10.
I have checked this using LINQPad 5, and it works, but when performing
insertCommand.ExecuteNonQuery()
... a warning modal window appears before the insert into the DBF file is done, waiting for you to click its 'Accept' button. Once you click the button, the insert is done in the DBF file correctly.
So I guess that when running my web application (an ASP.NET MVC app) in the production environment, this warning modal window still blocks the insert into the DBF file, but since the window is not actually shown, I cannot click its button; consequently ExecuteNonQuery never ends (it stalls), waiting indefinitely for that click.
How can I solve this? Can I modify ads.ini in some way to stop this warning from appearing, so the application can work?
I see you removed the VFP tag, which I think is the most relevant one for this question :)
I tested this again with the following code as a sample, and it worked without a glitch:
void Main()
{
string dbfFilesPath = @"C:\PROGRAM FILES (X86)\MICROSOFT VISUAL FOXPRO 9\SAMPLES\Data";
string conString = $@"Provider=Advantage OLE DB Provider;Data Source={dbfFilesPath};ServerType=ADS_LOCAL_SERVER;TableType=ADS_VFP;";
DataTable t = new DataTable();
using (OleDbConnection cn = new OleDbConnection(conString))
using (OleDbCommand cmd = new OleDbCommand(@"insert into Customer
(cust_id, company, contact)
values
(?,?,?)", cn))
{
cmd.Parameters.Add("#cId", OleDbType.VarChar);
cmd.Parameters.Add("#company", OleDbType.VarChar);
cmd.Parameters.Add("#contact", OleDbType.VarChar);
cn.Open();
for (int i = 0; i < 10; i++)
{
cmd.Parameters["#cId"].Value = $"XYZ#{i}";
cmd.Parameters["#company"].Value = $"Company XYZ#{i}";
cmd.Parameters["#contact"].Value = $"Contact XYZ#{i}";
cmd.ExecuteNonQuery();
}
t.Load(new OleDbCommand("select * from Customer order by cust_id desc", cn).ExecuteReader());
cn.Close();
}
t.Dump(); // tested in LinqPad AnyCPU version
}
Here is the partial result I got:
XYZ#9 Company XYZ#9 Contact XYZ#9 0.0000
XYZ#8 Company XYZ#8 Contact XYZ#8 0.0000
XYZ#7 Company XYZ#7 Contact XYZ#7 0.0000
XYZ#6 Company XYZ#6 Contact XYZ#6 0.0000
XYZ#5 Company XYZ#5 Contact XYZ#5 0.0000
XYZ#4 Company XYZ#4 Contact XYZ#4 0.0000
XYZ#3 Company XYZ#3 Contact XYZ#3 0.0000
XYZ#2 Company XYZ#2 Contact XYZ#2 0.0000
XYZ#1 Company XYZ#1 Contact XYZ#1 0.0000
XYZ#0 Company XYZ#0 Contact XYZ#0 0.0000
XXXXXX Linked Server Company 0.0000
WOLZA Wolski Zajazd Zbyszek Piestrzeniewicz Owner ul. Filtrowa 68 Warszawa 01-012 Poland (26) 642-7012 (26) 642-7012 3694
WINCA Wenna Wines Vladimir Yakovski Owner 0.0000
WILMK Wilman Kala Matti Karttunen Owner/Marketing Assistant Keskuskatu 45 Helsinki 21240 Finland 90-224 8858 90-224 8858 4400
WHITC White Clover Markets Karl Jablonski Owner 305 - 14th Ave. S., Suite 3B Seattle WA 98128 USA (206) 555-4112 (206) 555-4115 38900
WELLI Wellington Importadora Paula Parente Sales Manager Rua do Mercado, 12 Resende SP 08737-363 Brazil (14) 555-8122 3600
WARTH Wartian Herkku Pirkko Koskitalo Accounting Manager Torikatu 38 Oulu 90110 Finland 981-443655 981-443655 24200
As it is not working for you, I think it might have to do with:
Access rights. You say you use this from ASP.NET MVC; I wonder if the connecting account has only read access? In IIS basic settings there is, as I remember, a 'Connect As' setting. You might at least set it to connect with an account that has full rights to that directory.
Sharing. The file might be open in shared mode elsewhere, and for some reason your insert is waiting to acquire a lock?
SET NULL ON? Another slight possibility. You might need to execute this first, on the same connection: if there are fields that you are not supplying a value for in the insert but that are 'not null' in the table structure, that would otherwise cause it to fail (see the sketch after this list).
You might start by testing the same file from, say, within LINQPad running with administrator rights, to eliminate the access-rights questions altogether (if the data directory is under Program Files or Program Files (x86), that is a problem by itself).
I would expect an immediate error message, but who knows, maybe the driver is waiting for a timeout in the case of a write-access failure?
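For the SET NULL ON point, this is the shape of what I mean, executed on the same open connection right before your insert (a sketch only; [myTable] and the values come from your update, and whether this applies depends on your table structure):
SET NULL ON;
INSERT INTO [myTable] VALUES (2, 100);
In code, you would send SET NULL ON; with its own ExecuteNonQuery on the connection before running the insert command.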
Some ideas (the way I do it):
Instead of trying to use VFP data with 64-bit access, you might create a server that runs in a 32-bit IIS application pool (or does its own web serving) and handles the data access via a REST API (or WCF). I have been using 32-bit ASP.NET MVC applications for years with success, using VFPOLEDB itself.
If you take the REST API path, you might either use ASP.NET Core (which is fast, unlike pre-Core ASP.NET) or use something else, say Go, for building it. Go with the Iris framework, for example, is an excellent fit for building a REST API server for your data overnight (you are unlikely to think of Go, but if you do, remember to compile for the x86 architecture).
I was doing some R&D on table field alterations. So, I needed a clone of an table.
I ran the command "create table <table name> as select * from <old table>" and it worked.
However, when I ran the second time, I cancelled the command in between and after that I am getting the below error.
$ select count(*) from my_table_copy;
SQL -211: Cannot read system catalog (systables).
ISAM -154: ISAM error: Lock Timeout Expired
SQLSTATE: IX000 at /dev/stdin:1
When I try to fetch the DB through OpenAdmin, I also get this error:
256 : Database query failed: -
Error: -244 [Informix][Informix ODBC Driver][Informix]Could not do a
physical-order read to fetch next row. sqlerrm(systables)
(SQLExecute[-244] at
How do I resolve this?
Thanks.
You must be getting these lock errors because the engine is rolling back your clone-table transaction.
Check with "onstat -x" whether there is a transaction with an R in the flags column; the est. rb_time column shows an estimate of how far the rollback has progressed.
My suggestion? If you don't need exactly the same committed data in the new table, you can put a "SET ISOLATION TO DIRTY READ;" right before your create table command, as sketched below.
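A minimal sketch, reusing the clone command from your question (my_table stands in for your original table; note that DIRTY READ means the copy can include uncommitted rows):
SET ISOLATION TO DIRTY READ;
CREATE TABLE my_table_copy AS SELECT * FROM my_table;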
In our unit testing suite, we create and destroy a large number of SQLite databases that use the path of ":memory:". Occasionally, and only when running on the iOS simulator, the creation of those databases fails with the rather enigmatic message:
Database ":memory:": unable to open database file
99% of the time, these requests succeed. (Subsequent tests within the same test run typically succeed after this failure occurs.) But when you're using this in an automated build-acceptance test, you want 100%.
We've instrumented for memory consumption (it's within normal limits) and disk-space availability (20GB+ available).
Any ideas?
UPDATE: Captured this happening with extra logging per Richard's suggestion below. Here's the log output:
SQLITE ERROR: (28) attempt to open "/Users/xxx/Library/Developer/CoreSimulator/Devices/CF762060-7D23-4C79-A466-7F20AB6233E7/data/Containers/Data/Application/582E1ED0-81E0-4CC7-A6F6-DBEBC101BBE8/tmp/etilqs_1ghbf1MSTa8ilSj" as
SQLITE ERROR: (14) cannot open file at line 30595 of [f66f7a17b7]
SQLITE ERROR: (14) os_unix.c:30595: (17) open(/Users/xxx/Library/Developer/CoreSimulator/Devices/CF762060-7D23-4C79-A466-7F20AB6233E7/data/Containers/Data/Application/582E1ED0-81E0-4CC7-A6F6-DBEBC101BBE8/tmp/etilqs_1ghbf1MST
I've noticed that even a :memory: database will create files on disk if you create a temporary table. The temporary file names on Unix systems are generated by a PRNG, so there is a non-zero chance of a name collision if lots and lots of temporary files are created simultaneously. Or, if the disk is full, the create could fail. Or the Unix temp directory may not be accessible, either because it has been deleted or because its permissions are invalid.
For example, I turned on several loggers in the sqlite3 command-line shell by building it with these llvm-gcc flags: -DSQLITE_DEBUG_OS_TRACE=1 -DSQLITE_TEST=1 -DSQLITE_DEBUG=1. Then I observed a temp file being created from the command line using this SQL:
$ ./sqlite3
SQLite version 3.8.8.2 2015-01-30 14:30:45
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create temporary table t( x );
OPENX 3 /var/folders/nf/l1cw8sn1707b73zy5nqycrpw0000gn/T//etilqs_fvwR6KbMm518S4w 01002
OPEN 3
WRITE 3 512 0 0
OPENX 4 /var/folders/nf/l1cw8sn1707b73zy5nqycrpw0000gn/T//etilqs_OJJJ1lrTtQIFnUO 05402
OPEN 4
WRITE 4 1024 0 0
WRITE 4 1024 1024 0
WRITE 3 28 0 0
sqlite>
No ideas. But perhaps if you turn on the error and warning log, it will give some clues.
I removed (or decommissioned, I can't remember) a DSE Analytics node (with IP 10.14.5.50) a couple of months ago. When I now try to execute a DSE Shark query (CREATE TABLE ccc AS SELECT ...), I receive:
15/01/22 13:23:17 ERROR parse.SharkSemanticAnalyzer: org.apache.hadoop.hive.ql.parse.SemanticException: 0:0 Error creating temporary folder on: cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db. Error encountered near token 'TOK_TMP_FILE'
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1256)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1053)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8342)
at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:105)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
at shark.SharkDriver.compile(SharkDriver.scala:215)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:347)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at shark.SharkCliDriver$.main(SharkCliDriver.scala:240)
at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: java.io.IOException: Error connecting to node 10.14.5.50:9160 with strategy STICKY.
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:216)
at org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:270)
at org.apache.hadoop.hive.ql.Context.getExternalTmpFileURI(Context.java:363)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1253)
... 12 more
I guess the above error is due to my keyspace referring to the old node:
shark> DESCRIBE DATABASE mykeyspace;
OK
mykeyspace cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db
Time taken: 0.997 seconds
Is there any way for me to fix this incorrect database path?
Tried (but failed) workaround to recreate the database: in cqlsh I created a keyspace thekeyspace and added a table thetable. I then opened up dse hive (and noticed that DESCRIBE DATABASE thekeyspace gives me a correct cfs path). However, I am unable to drop the database using DROP DATABASE thekeyspace.
Additional information:
I have no external tables in my keyspace.
Making the SELECT against the tables works.
Setting -hiveconf cassandra.host=WORKING_NODE_IP does not help.
The following commands return proper IPs (i.e. not X.X.X.50):
dsetool listjt
dsetool jobtracker
dsetool sparkmaster
I am getting the same error when I execute the query using dse hive.
No Shark variable is referring to X.X.X.50 when I execute set; in its REPL.
I am running DSE 4.5.
I stumbled across this page, which says you need to TRUNCATE "HiveMetaStore"."MetaStore" (in cqlsh) after removing Hive nodes. That did the trick.
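For completeness, the statement is run from cqlsh (note that it empties the Hive metastore table; my understanding is that DSE re-registers keyspaces and tables on the next Hive/Shark use, but treat that as an assumption and take a backup first):
TRUNCATE "HiveMetaStore"."MetaStore";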
Context: SQL Server 2012, Windows Server 2008 R2 (64bit), 2 cores, lots of RAM and HD space,
ADODB via JScript (not .NET)
These really simple stored procedures keep timing out. It's not like I have a lot of records either (from a server point of view): 100,000 or so.
CREATE PROCEDURE [dbo].[Transfer_Part1]
AS
SET NOCOUNT ON
INSERT INTO Primary.dbo.Post
SELECT *
FROM Secondary.dbo.Pre
WHERE HrefProcessed = (-1)
AND ReferrerProcessed = (-1);
CREATE PROCEDURE [dbo].[Transfer_Part3]
AS
SET NOCOUNT ON
DELETE
FROM Secondary.dbo.Pre
WHERE HrefProcessed = (-1)
AND ReferrerProcessed = (-1);
Is there anything I can do to stop getting these messages?
[Microsoft][ODBC SQL Server Driver]Query timeout expired
EXEC Transfer_Part1
[Microsoft][ODBC SQL Server Driver]Query timeout expired
EXEC Transfer_Part3
Do you have a two-column index on HrefProcessed and ReferrerProcessed? If not, try something like the sketch below.
Also see Bulk DELETE on SQL Server 2008 (Is there anything like Bulk Copy (bcp) for deleting data?) for the second one.
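A sketch of both suggestions (the index name and the 5000-row batch size are arbitrary choices of mine; the table and column names come from your procedures):
-- two-column index so the WHERE clause can seek instead of scanning the whole table
CREATE NONCLUSTERED INDEX IX_Pre_HrefProcessed_ReferrerProcessed
    ON Secondary.dbo.Pre (HrefProcessed, ReferrerProcessed);

-- delete in batches so each transaction stays short and is less likely to time out
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM Secondary.dbo.Pre
    WHERE HrefProcessed = (-1)
      AND ReferrerProcessed = (-1);

    IF @@ROWCOUNT = 0 BREAK;
END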