I upgraded Opscenter from 5.1.2 to 5.2.0 yesterday, and now none of the graphs on the Dashboard are showing any statistics.
My cluster is DataStax Enterprise 4.5.1 with the following versions:
cqlsh 4.1.1 | Cassandra 2.0.8.39 | CQL spec 3.1.1 | Thrift protocol 19.39.0
I'm using this cluster for a search workload with Solr. The agent.log is filled with the following:
INFO [qtp1313948736-24] 2015-08-06 12:30:52,211 New JMX connection (127.0.0.1:7199)
INFO [qtp1313948736-24] 2015-08-06 12:30:52,214 New JMX connection (127.0.0.1:7199)
INFO [qtp1313948736-24] 2015-08-06 12:30:52,218 HTTP: :get /cluster/solr-cores {} - 200
INFO [jmx-metrics-1] 2015-08-06 12:30:57,186 New JMX connection (127.0.0.1:7199)
INFO [jmx-metrics-1] 2015-08-06 12:30:57,191 New JMX connection (127.0.0.1:7199)
ERROR [cassandra-processor-4] 2015-08-06 12:31:15,082 Error when proccessing cassandra call
com.datastax.driver.core.exceptions.InvalidQueryException: Unknown identifier timestamp
Any ideas?
The schema migration performed by the upgrade failed. You can look through the schema with DESCRIBE KEYSPACE "OpsCenter" and find the tables that were not upgraded to 5.2.0 (check each table's comment). The changes the migration makes are here:
https://gist.github.com/philip-doctor/2b7c87f551a35a5c7c79
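For example, a minimal check in cqlsh (assuming the default "OpsCenter" keyspace name) is:
DESCRIBE KEYSPACE "OpsCenter";
-- tables that were upgraded successfully carry a comment like
--   comment = '{"info": "OpsCenter management data.", "version": [5, 2, 0]}'
-- tables left behind show an older version in that comment, and the rollups tables
-- still have a column named column1 instead of timestamp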
-- depending on how far through the migration you progressed, parts of this may fail
-- this assumes you're using the default name of "OpsCenter" for the opscenter keyspace, otherwise
-- you'll have to rename the "OpsCenter" part.
ALTER TABLE "OpsCenter"."events" ADD message text;
ALTER TABLE "OpsCenter"."events" ADD column_family text;
ALTER TABLE "OpsCenter"."events" ADD target_node text;
ALTER TABLE "OpsCenter"."events" ADD event_source text;
ALTER TABLE "OpsCenter"."events" ADD "keyspace" text;
ALTER TABLE "OpsCenter"."events" ADD api_source_ip text;
ALTER TABLE "OpsCenter"."events" ADD user text;
ALTER TABLE "OpsCenter"."events" ADD source_node text;
ALTER TABLE "OpsCenter"."events" with comment = '{"info": "OpsCenter management data.", "version": [5, 2, 0]}';
ALTER TABLE "OpsCenter"."rollups60" RENAME column1 to timestamp;
ALTER TABLE "OpsCenter"."rollups60" with comment = '{"info": "OpsCenter management data.", "version": [5, 2, 0]}';
ALTER TABLE "OpsCenter"."rollups300" RENAME column1 to timestamp;
ALTER TABLE "OpsCenter"."rollups300" with comment = '{"info": "OpsCenter management data.", "version": [5, 2, 0]}';
ALTER TABLE "OpsCenter"."rollups7200" RENAME column1 to timestamp;
ALTER TABLE "OpsCenter"."rollups7200" with comment = '{"info": "OpsCenter management data.", "version": [5, 2, 0]}';
ALTER TABLE "OpsCenter"."rollups86400" RENAME column1 to timestamp;
ALTER TABLE "OpsCenter"."rollups86400" with comment = '{"info": "OpsCenter management data.", "version": [5, 2, 0]}';
Related
I have a Java web application running on WebLogic; the database is Informix.
The process is as follows:
The user requests data.
A serial (ID) is created.
A stored procedure is called with that serial; its content is roughly:
INSERT INTO reporttable SELECT data FROM table1;
INSERT INTO reporttable SELECT data FROM table2;
IF (count of rows in reporttable) = 0 THEN
    INSERT INTO reporttable VALUES ('NO DATA');
END IF;
reporttable is then queried using the serial.
The results are shown on the web.
The key problem:
table1 contains 10 rows (data1, data2, ..., data10),
but reporttable ends up with only 3 rows (data1, data2, NO DATA), which should be impossible.
Important: the implementation does not handle any exceptions.
Once the problem occurs, every query on the data shows the same wrong result.
But after I restart WebLogic (with the same parameters), the query returns correct results.
I have no idea how to solve this; can you help?
I found the cause of the error.
As a test, I renamed a table.
The stored procedure uses table1, table2, and table3.
For an unknown reason (possibly an abnormal connection), the driver throws:
java.sql.SQLSyntaxErrorException: [FMWGEN][Informix JDBC Driver][Informix]The
specified table (table1) is not in the database.
The error is only raised on the first execution.
Executing the SP again raises no error, but the execution silently skips table1.
After WebLogic is restarted, the JNDI connection is re-established,
and executing the SP produces normal results again.
I'm having trouble with an application using H2, seemingly due to database connections being left open. I'm trying to understand if it's H2 leaking the connection or me (I'm aware that it's more likely to be me).
The application has Component1, which creates an in-memory H2 database and starts a TCP server. The H2 database contains a few linked tables which connect to a separate Oracle database using its JNDI identifier. Component2 connects to this database using JDBC with URL jdbc:h2:tcp://127.0.1.1:5521/mem:db1;.
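For context, here is a rough sketch of what Component1 does; the class name and setup details are illustrative, not the real code (in the real application this runs inside the container so the JNDI name resolves):
// Component1 (sketch): create the in-memory database and expose it over TCP
import java.sql.Connection;
import java.sql.DriverManager;
import org.h2.tools.Server;

public class Component1 {
    public static void main(String[] args) throws Exception {
        // keep the in-memory database alive for the lifetime of the JVM
        Connection keepAlive = DriverManager.getConnection("jdbc:h2:mem:db1;DB_CLOSE_DELAY=-1");

        // start the TCP server that Component2 connects to
        Server server = Server.createTcpServer("-tcpPort", "5521", "-tcpAllowOthers").start();

        // the linked tables to Oracle are then created against this database
        // via the JNDI data source, using CREATE LINKED TABLE as shown below
    }
}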
In the exception below I've highlighted where I'm getting confused. The trace seems to say that a query was issued by a TCP client (Component2) that required a query of a linked table, and that the query on the linked table is what left the connection open. I do not control how H2 connects to the linked table via JNDI, so I'm inclined to think this is not my fault. I've included a linked table definition below the exception to show how they are created.
Can someone please help me understand who - between me and H2 - is responsible for this abandoned connection, and why?
Exception
org.apache.tomcat.dbcp.dbcp.AbandonedTrace$AbandonedObjectException: DBCP object created 2019-12-10 05:46:16 by the following code was never closed:
at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.setStackTrace(AbandonedTrace.java:139)
at org.apache.tomcat.dbcp.dbcp.AbandonedObjectPool.borrowObject(AbandonedObjectPool.java:81)
at org.apache.tomcat.dbcp.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.h2.util.JdbcUtils.getConnection(JdbcUtils.java:311)
at org.h2.util.JdbcUtils.getConnection(JdbcUtils.java:274)
at org.h2.table.TableLinkConnection.open(TableLinkConnection.java:89)
at org.h2.table.TableLinkConnection.open(TableLinkConnection.java:76)
at org.h2.engine.Database.getLinkConnection(Database.java:2708)
--> at org.h2.table.TableLink.connect(TableLink.java:96)
at org.h2.table.TableLink.execute(TableLink.java:535)
at org.h2.index.LinkedIndex.find(LinkedIndex.java:129)
at org.h2.index.BaseIndex.find(BaseIndex.java:132)
at org.h2.index.IndexCursor.find(IndexCursor.java:163)
at org.h2.table.TableFilter.next(TableFilter.java:475)
at org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1882)
at org.h2.result.LazyResult.hasNext(LazyResult.java:101)
at org.h2.result.LazyResult.next(LazyResult.java:60)
at org.h2.command.dml.Select.queryFlat(Select.java:742)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:884)
at org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:151)
at org.h2.command.dml.Query.query(Query.java:435)
at org.h2.command.dml.Query.query(Query.java:397)
at org.h2.command.CommandContainer.query(CommandContainer.java:145)
at org.h2.command.Command.executeQuery(Command.java:202)
at org.h2.server.TcpServerThread.process(TcpServerThread.java:335)
--> at org.h2.server.TcpServerThread.run(TcpServerThread.java:175)
at java.lang.Thread.run(Thread.java:745)
Linked Table
CREATE LINKED TABLE ora_table (
'javax.naming.InitialContext',
'java:comp/env/jdbc/oradb',
NULL,
NULL,
'(
SELECT field
FROM table
)'
)
I am currently testing importing information from a text file to update an existing database called serverstate. I am trying to follow the InfluxDB documentation, but to no avail, as I am new to such systems.
Contents of the file ServerInfluxdb.txt:
ServerState,state=1 value=1
Command used to import file:
influx -database=serverstate -import -path=ServerInfluxdb.txt
Error Produced:
2019/02/07 10:39:40 error: error parsing query: found ServerState, expected SELECT, DELETE, SHOW, CREATE, DROP, EXPLAIN, GRANT, REVOKE, ALTER, SET, KILL at line 1, char 1
Any help is appreciated.
Thank you in advance,
Regards,
Luke
The -import option is used for importing an exported database. That is, your file (ServerInfluxdb.txt) will need to include the DDL for creating the database.
E.g.
# DDL
CREATE DATABASE pirates
CREATE RETENTION POLICY oneday ON pirates DURATION 1d REPLICATION 1
# DML
# CONTEXT-DATABASE: pirates
# CONTEXT-RETENTION-POLICY: oneday
treasures,captain_id=dread_pirate_roberts value=801 1439856000
treasures,captain_id=flint value=29 1439856000
treasures,captain_id=sparrow value=38 1439856000
treasures,captain_id=tetra value=47 1439856000
treasures,captain_id=crunch value=109 1439858880
See: https://docs.influxdata.com/influxdb/v1.7/tools/shell/#import-data-from-a-file-with-import
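Applied to your case, the file would need to look roughly like this (since the database already exists the CREATE DATABASE is harmless, but it could also be dropped; with no timestamp the current time is used):
# DDL
CREATE DATABASE serverstate
# DML
# CONTEXT-DATABASE: serverstate
ServerState,state=1 value=1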
I am working on a CFML script to back up some data from an Informix database into a CSV file. The problem is that the table has a lot of records (286,906) and my script times out (even when I set it not to). The most I could successfully export was 260,000 rows with:
SELECT FIRST 260000
APE1, APE2, CALLE, CODPOSTAL, DNI, FCADU, FENACI, LOCALIDAD, NOMBRE, NSS, PROV, TELEFONO
FROM mytable
WHERE FCADU IS NOT NULL AND FENACI IS NOT NULL
Is there any way to select the next 260,000 rows, and then the rest? I tried:
SELECT SKIP 260000 FIRST 520000
APE1, APE2, CALLE, CODPOSTAL, DNI, FCADU, FENACI, LOCALIDAD, NOMBRE, NSS, PROV, TELEFONO
FROM mytable
WHERE FCADU IS NOT NULL AND FENACI IS NOT NULL
but I get: Error Executing Database Query. A syntax error has occurred.
You can use the UNLOAD statement to create a file from the database:
UNLOAD TO 'mytable.txt' SELECT * FROM mytable;
This may not work in a CFML environment, though. In that case you can create a stored procedure which unloads your data.
See: unload statement in stored procedure
Is it your script timing out or your database connection? From your question it sounds to me like it's not the ColdFusion template that is timing out but the cfquery connection to the database. There is a timeout attribute for the cfquery tag. However, it is apparently not reliable; a better option is to configure the timeout in the advanced section of the datasource within the ColdFusion Administrator.
Charlie Arehart blogged about this feature here:
http://www.carehart.org/blog/client/index.cfm/2010/7/14/hidden_gem_in_cf9_admin_querytimeout
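For reference, the per-query attribute (which, as noted, may not always be honored) looks like this; the datasource name and timeout value are placeholders:
<cfquery name="exportData" datasource="myInformixDsn" timeout="900">
    SELECT APE1, APE2, CALLE, CODPOSTAL, DNI, FCADU, FENACI, LOCALIDAD, NOMBRE, NSS, PROV, TELEFONO
    FROM mytable
    WHERE FCADU IS NOT NULL AND FENACI IS NOT NULL
</cfquery>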
Novice here! I'm currently creating an application using Ruby on Rails.
This particular application uses binary data for content. Apparently, SQL Server is the best way to go because of the FILESTREAM feature. From what I found in the documentation, this basically stores binary objects larger than 1 MB on the file system.
With that said, I am using Ruby on Rails and preparing to set up the activerecord-sqlserver-adapter, but I need to know how I can specify that a column should use FILESTREAM when setting up the database with an Active Record migration. Would I just edit the column to accept FILESTREAM in SQL Server Management Studio? (This is obviously after enabling FILESTREAM in SQL Server.)
So the setup I predict is:
1. Install SQL Server and all supporting components.
2. Install the activerecord-sqlserver-adapter gem.
3. Create a varbinary(max) database column (for the binary file) in a migration.
4. Specify in SQL Server that said column uses FILESTREAM.
All in all, how do I specify the use of FILESTREAM when creating a column in a database using Rails/Ruby?
No, that's not all: every table that has a varbinary(max) column stored as FILESTREAM must also have a ROWGUIDCOL column (of type uniqueidentifier).
Here is a sample that I've used for attachments:
CREATE TABLE [dbo].[Attachment](
[Attachment_Id] [uniqueidentifier] ROWGUIDCOL NOT NULL,
[ContentLength] [int] NULL,
[ContentType] [nvarchar](100) NULL,
[Contents] [varbinary](max) FILESTREAM NULL,
[DateAdded] [datetime] NULL,
[FileName] [nvarchar](255) NULL,
[Title] [nvarchar](255) NULL,
PRIMARY KEY CLUSTERED
(
[Attachment_Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] FILESTREAM_ON [filestream]
) ON [PRIMARY] FILESTREAM_ON [filestream]
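If you'd rather keep this in a Rails migration than run the DDL by hand, one option is to issue the raw SQL with execute. This is only a sketch (the column list is trimmed from the sample above), and it assumes the adapter's standard column helpers don't expose FILESTREAM:
class CreateAttachments < ActiveRecord::Migration
  def up
    # raw DDL, since FILESTREAM isn't covered by the usual column helpers (assumption)
    execute <<-SQL
      CREATE TABLE [dbo].[Attachment](
        [Attachment_Id] [uniqueidentifier] ROWGUIDCOL NOT NULL,
        [Contents] [varbinary](max) FILESTREAM NULL,
        [FileName] [nvarchar](255) NULL,
        PRIMARY KEY CLUSTERED ([Attachment_Id] ASC)
      ) ON [PRIMARY] FILESTREAM_ON [filestream]
    SQL
  end

  def down
    execute "DROP TABLE [dbo].[Attachment]"
  end
end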