Getting error "Cannot achieve consistency level ONE" while trying to insert record in DSE Cassandra - datastax-enterprise

I am using DSE Cassandra and want to use solr_query, so I created the keyspace as follows:
create keyspace demo with replication = {'class': 'NetworkTopologyStrategy', 'Solr': 3};
Created the following table:
create table demo.onlinetransactions
( unique_tran_id text, user_id text, account_type text,
account_id text, create_ts timestamp, data text,
primary key (unique_tran_id) );
However, when I try to insert a record into this table, I get the error below:
insert into demo.onlinetransactions (unique_tran_id, user_id,
account_type, account_id, create_ts, data)
values ('trans1', 'user1', 'creditcard',
'1234567890123451', '2015-01-01 09:00:00', '{amount:100.00,vendor:Amazon}');
Error:
NoHostAvailable: ('Unable to complete the operation against any hosts',
{<Host: 127.0.0.1 dc0>: Unavailable('Error from server: code=1000
[Unavailable exception]
message="Cannot achieve consistency level ONE"
info={\'required_replicas\': 1, \'alive_replicas\': 0,
\'consistency\': \'ONE\'}',)})
What configuration settings do I need to verify so that I can insert records into a keyspace that uses "NetworkTopologyStrategy" with the "Solr" setting?
Also, I am able to insert records when the keyspace is defined as follows (however, this does not use Solr, which I want):
CREATE KEYSPACE user WITH replication =
{'class': 'SimpleStrategy', 'replication_factor': '3'}
AND durable_writes = true;

After specifying the correct data center name in the keyspace definition, the solr_query worked with NetworkTopologyStrategy as the replication strategy.
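For anyone hitting the same thing: with NetworkTopologyStrategy, the keys in the replication map must be actual data center names exactly as reported by nodetool status, not arbitrary labels; if the name does not match, the keyspace has zero live replicas and every write fails with "Cannot achieve consistency level ONE". A minimal sketch of the fix, where 'SearchDC' is a placeholder for whatever data center name your nodetool status output actually shows:
-- First check the real data center name (from a shell):
--   nodetool status
-- Then create the keyspace against that exact name:
create keyspace demo with replication =
{'class': 'NetworkTopologyStrategy', 'SearchDC': 3};
-- Or repair an existing keyspace in place:
alter keyspace demo with replication =
{'class': 'NetworkTopologyStrategy', 'SearchDC': 3};
After altering replication on an existing keyspace, run nodetool repair so data already written gets replicated according to the new settings.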

Related

Create stored procedure to update table from another server

I have two servers. I copied the table "Response_Master_Incident" from the cadarchive server into the ucpdapps2 server and named the copy "Master_Incident_For_ProQA". When I duplicated it, I selected only certain columns, since I didn't need all the columns from "Response_Master_Incident".
Now I am trying to create a stored procedure that updates "Master_Incident_For_ProQA" from "Response_Master_Incident", pulling over only those selected columns.
create procedure UpdateProQATable
as
begin
Select [ID]
,[Master_Incident_Number]
,[Response_Date]
,[Problem]
,[MethodOfCallRcvd]
,[CallTaking_Performed_By]
,[EMD_Used]
,[Determinant]
,[ProQa_CaseNumber]
,[ProQa_CaseNumber_Fire]
,[ProQa_CaseNumber_Police]
,[MachineName]
into Master_Incident_For_ProQA
from Response_Master_Incident where EMD_Used = '1'
end
When I run this stored procedure, I get this error:
"Msg 208, Level 16, State 1, Procedure UpdateProQATable, Line 4 [Batch Start Line 2]
Invalid object name 'Response_Master_Incident'."
How do I resolve this error? And is there a way to have the procedure update the table with only the rows where "Response_Date" is yesterday's date, rather than all the data from the "Response_Master_Incident" table?
So I figured it out. I needed to go to Server Objects > Linked Servers in SSMS Object Explorer and link the server that has the data I needed to the server that was getting the data.
After I did that, I corrected my script as suggested in the comment above, but also added statements to delete all the data currently in the table, insert the new data, and filter the new rows by the EMD_Used column and a Response_Date of yesterday. After doing this the desired result was met; the statements follow, with a sketch of the full procedure after them.
delete from Master_Incident_For_ProQA
insert into [ucpdapps2].[ProQAUsage].[dbo].[Master_Incident_For_ProQA]
([ID]
,[Master_Incident_Number]
,[Response_Date]
,[Problem]
,[MethodOfCallRcvd]
,[CallTaking_Performed_By]
,[EMD_Used]
,[Determinant]
,[ProQa_CaseNumber]
,[ProQa_CaseNumber_Fire]
,[ProQa_CaseNumber_Police]
,[MachineName])
Select [ID]
,[Master_Incident_Number]
,[Response_Date]
,[Problem]
,[MethodOfCallRcvd]
,[CallTaking_Performed_By]
,[EMD_Used]
,[Determinant]
,[ProQa_CaseNumber]
,[ProQa_CaseNumber_Fire]
,[ProQa_CaseNumber_Police]
,[MachineName]
from [cadarchive].[Reporting_System].[dbo].[Response_Master_Incident]
where [EMD_Used] = 1
and [Response_Date] >= DATEADD(day, -1, CONVERT(date, getdate()))
and [Response_Date] < CONVERT(date, getdate())
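Wrapped back into the stored procedure, the whole fix might look like the sketch below; it assumes the procedure runs on ucpdapps2 (so the target table can be referenced directly), reuses the linked-server names from above, and needs SQL Server 2016 SP1 or later for CREATE OR ALTER:
create or alter procedure UpdateProQATable
as
begin
-- Clear out the previous load so the procedure can be rerun safely.
delete from Master_Incident_For_ProQA;
-- Pull only yesterday's qualifying rows across the linked server.
insert into Master_Incident_For_ProQA
([ID], [Master_Incident_Number], [Response_Date], [Problem],
[MethodOfCallRcvd], [CallTaking_Performed_By], [EMD_Used],
[Determinant], [ProQa_CaseNumber], [ProQa_CaseNumber_Fire],
[ProQa_CaseNumber_Police], [MachineName])
select [ID], [Master_Incident_Number], [Response_Date], [Problem],
[MethodOfCallRcvd], [CallTaking_Performed_By], [EMD_Used],
[Determinant], [ProQa_CaseNumber], [ProQa_CaseNumber_Fire],
[ProQa_CaseNumber_Police], [MachineName]
from [cadarchive].[Reporting_System].[dbo].[Response_Master_Incident]
where [EMD_Used] = 1
and [Response_Date] >= DATEADD(day, -1, CONVERT(date, getdate()))
and [Response_Date] < CONVERT(date, getdate());
end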

Invalid table alias when joining multiple tables in Hive

Can someone identify why this multi-table join is not accepted? It works with two tables, but when I bring in the third it fails with an invalid table alias. I am not seeing what is wrong.
This works (two tables):
select
a.ri as `R_ID`
,oc3.name as `RET`
,a.rch as `RC`
from dev.sl a join dev.codes oc3
on (a.pk_business = oc3.pk_business
and a.pk_data_source = oc3.pk_data_source
and a.pk_frequency = oc3.pk_frequency
and oc3.pk_data_state = '123'
and oc3.code = a.ri and oc3.codeset = 'xyz')
Then I add a third table and it fails (three tables):
select
a.ri as `R_ID`
,oc3.name as `RET`
,a.rch as `RC`
from dev.sl a join dev.codes oc3
on (a.pk_business = oc3.pk_business
and a.pk_data_source = oc3.pk_data_source
and a.pk_frequency = oc3.pk_frequency
and oc3.pk_data_state = '123'
and oc3.code = a.ri and oc3.codeset = 'xyz') join dev.items b
on (b.pk_business = a.pk_business
and b.pk_data_source = a.pk_data_source
and b.pk_frequency = a.pk_frequency
and b.pk_data_state = '123'
and a.ii = b.item_id
and a.cc = b.country_code)
SemanticException [Error 10009]: Line 1:2920 Invalid table alias 'a':
An update: it seems this was caused by having one table created as a transactional table (TBLPROPERTIES ('transactional'='true')) and one without, combined with my session settings of:
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
This caused the problem. In another session without those settings, and repointing to an identical table "a" created as a non-ACID table, the multi-table join worked fine. I don't know enough about Hive to know why; I suspect that a transactional and a non-transactional table cannot be joined in the same "transaction" (select statement).
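To check whether a given table was created as transactional, its properties can be inspected from the Hive shell; a quick sketch, with dev.sl standing in for whichever table you suspect:
-- Full metadata dump; look for transactional=true under Table Parameters.
DESCRIBE FORMATTED dev.sl;
-- Or query the one property directly.
SHOW TBLPROPERTIES dev.sl ('transactional');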
One more update: it may not be due to the transactional table after all. With additional testing, I now see the error with non-transactional tables as well. The three-table join works when I execute it from a PuTTY session directly on the server, but when I use SQL Developer it produces the aforementioned error. It appears to be an issue with SQL Developer, but why is still unknown.

Doctrine Association Mapping Join Not Executing In ZF3

I am creating my first Association Mapping for a Join. This is also the first time I've used a Foreign Key in pgSQL.
I am working with ZF3. The error I am receiving is:
An exception occurred while executing 'SELECT p0_.reference AS reference_0, p0_.meta_keyword_reference AS meta_keyword_reference_1, p0_.add_date AS add_date_2, p0_.add_membership_reference AS add_membership_reference_3, p0_.remove_date AS remove_date_4, p0_.remove_membership_reference AS remove_membership_reference_5 FROM page_about_meta_keyword_link p0_ INNER JOIN meta_keyword m1_':
SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at end of input LINE 1: ...page_about_meta_keyword_link p0_ INNER JOIN meta_keyword m1_
The query I am trying to create is
SELECT MetaKeywords.Keyword FROM PageAboutMetaKeywordLink INNER JOIN MetaKeywords ON PageAboutMetaKeywordLink.MetaKeywordReference = MetaKeywords.Reference WHERE PageAboutMetaKeywordLink.RemoveDate IS NULL ORDER BY MetaKeywords.Keyword ASC
From my database experience, I expect it is producing the error due to the missing
ON p0_.meta_keyword_reference = m1_.reference
I don't understand how to communicate the join. Based on the documentation, I had expected this to be automatic. Maybe I misunderstood.
The columns I am trying to join are page_about_meta_keyword_link.meta_keyword_reference ON meta_keyword.reference.
This is the table structure for page_about_meta_keyword_link
CREATE TABLE public.page_about_meta_keyword_link
(
reference bigint NOT NULL DEFAULT nextval('page_about_meta_keyword_link_reference_seq'::regclass),
meta_keyword_reference bigint,
add_date timestamp with time zone DEFAULT now(), -- UTC
add_membership_reference bigint,
remove_date timestamp with time zone, -- UTC
remove_membership_reference bigint,
CONSTRAINT page_about_meta_keyword_link_pkey PRIMARY KEY (reference),
CONSTRAINT page_about_meta_keyword_link_fk FOREIGN KEY (meta_keyword_reference)
REFERENCES public.meta_keyword (reference) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT page_about_meta_keyword_link_reference_unique UNIQUE (reference)
)
This is the meta_keyword
CREATE TABLE public.meta_keyword
(
reference bigint NOT NULL DEFAULT nextval('meta_keyword_reference_seq'::regclass),
keyword text,
effective_date timestamp with time zone DEFAULT now(), -- UTC
membership_reference bigint,
CONSTRAINT meta_keyword_pkey PRIMARY KEY (reference),
CONSTRAINT meta_keyword_reference_unique UNIQUE (reference)
)
This is the query I've created in the Service; The complete Service is found here.
$repository = $this->entityManager->getRepository(PageAboutMetaKeywordLink::class);
$keywords = $this->entityManager->getRepository(MetaKeyword::class);
$qb = $repository->createQueryBuilder('l');
$qb ->join('\Application\Entity\MetaKeyword' , 'k')
->expr()->isNull('l.removeDate');
return $qb->getQuery()->getResult();
The Association Mapping I created is for meta_keyword_reference; The complete Entity is found here.
/**
* @var int|null
*
* @ORM\ManyToOne(targetEntity="MetaKeyword")
* @ORM\JoinColumn(name="meta_keyword_reference", referencedColumnName="reference")
* @ORM\Column(name="meta_keyword_reference", type="bigint", nullable=true)
*/
private $metaKeywordReference;
I have not made any changes to the MetaKeywords Entity. It is found here.
Overall the various sections of the web site will share the meta_keywords. If I understand correctly the connection I am trying to make is ManyToOne.
I want to leave a good reference for other newbies as they start their journey with Zend Framework 3 and Doctrine. Please advise on edits I should make to this post so it is clear, understandable, and concise, so that I receive the help I need and others benefit from it in the future.
You double-declared a column (meta_keyword_reference). Looking at the docs (the same page you linked in the question), you've made a mistake in your annotation: remove the ORM\Column line (the definition is already in JoinColumn). If you need it to be nullable (not required), add nullable=true to the JoinColumn; use either, not both:
/**
* @var int|null
*
* @ORM\ManyToOne(targetEntity="MetaKeyword")
* @ORM\JoinColumn(name="meta_keyword_id", referencedColumnName="id", nullable=true)
*/
private $metaKeywordReference;
Do not worry about declaring a "type"; Doctrine will automatically match it to the column you're referencing. Also, you should be referencing primary keys. I've assumed reference is not the PK, so I've changed it to id; change it to whatever it actually is.
Next, I think you're also using DBAL QueryBuilder instead of the ORM QueryBuilder.
The Query you need would be like this:
use Doctrine\ORM\Query\Expr\Join;
use Doctrine\ORM\QueryBuilder;
/** #var QueryBuilder $qb */
$qb = $this->entityManager->createQueryBuilder();
$qb->select('l')
->from(PageAboutMetaKeywordLink::class, 'l')
->join(MetaKeyword::class, 'k', Join::WITH, 'l.reference = k.id') // arbitrary entity joins in DQL take WITH, not ON; check these property names (NOT DB COLUMNS!)
->where('l.removeDate is null');
Might be a few small errors in there, but that should be about it.
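For reference, the end goal is the SQL from the top of the question; assuming the column names from the table definitions above, the generated query should come out along these lines (a sketch, not Doctrine's exact output):
SELECT k.keyword
FROM page_about_meta_keyword_link l
INNER JOIN meta_keyword k
ON l.meta_keyword_reference = k.reference -- the ON clause that was missing
WHERE l.remove_date IS NULL
ORDER BY k.keyword ASC;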

wrong number of arguments (1 for 2..3) for Active Record postgresql query (Rails 4/postgresql 9.4) [duplicate]

Right now I am in the middle of migrating from SQLite to PostgreSQL and I came across this problem. The following prepared statement works with SQLite:
id = 5
st = ActiveRecord::Base.connection.raw_connection.prepare("DELETE FROM my_table WHERE id = ?")
st.execute(id)
st.close
Unfortunately it is not working with PostgreSQL; it throws an exception at line 2.
I was looking for solutions and came across this:
id = 5
require 'pg'
conn = PG::Connection.open(:dbname => 'my_db_development')
conn.prepare('statement1', 'DELETE FROM my_table WHERE id = $1')
conn.exec_prepared('statement1', [ id ])
This one fails at line 3. When I print the exception like this
rescue => ex
ex contains this
{"connection":{}}
Executing the SQL in a command line works. Any idea what I am doing wrong?
Thanks in advance!
If you want to use prepare like that, then you'll need to make a couple of changes:
The PostgreSQL driver wants to see numbered placeholders ($1, $2, ...), not question marks, and you need to give your prepared statement a name:
ActiveRecord::Base.connection.raw_connection.prepare('some_name', "DELETE FROM my_table WHERE id = $1")
The calling sequence is prepare followed by exec_prepared:
connection = ActiveRecord::Base.connection.raw_connection
connection.prepare('some_name', "DELETE FROM my_table WHERE id = $1")
st = connection.exec_prepared('some_name', [ id ])
The above approach works for me with ActiveRecord and PostgreSQL, your PG::Connection.open version should work if you're connecting properly.
Another way is to do the quoting yourself:
conn = ActiveRecord::Base.connection
conn.execute(%Q{
delete from my_table
where id = #{conn.quote(id)}
})
That's the sort of thing that ActiveRecord is usually doing behind your back.
Directly interacting with the database tends to be a bit of a mess with Rails since the Rails people don't think you should ever do it.
If you really are just trying to delete a row without interference, you could use delete:
delete()
[...]
The row is simply removed with an SQL DELETE statement on the record’s primary key, and no callbacks are executed.
So you can just say this:
MyTable.delete(id)
and you'll send a simple delete from my_table where id = ... into the database.

Impala create external table, stored by Hive

I have been trying since yesterday to figure out why my table creation is not working. Since I can't link Impala to HBase, I can't run queries on my Twitter stream. :/
Do I need a special JAR, as with Hive, for the SerDe properties?
Here is my command:
CREATE EXTERNAL TABLE HB_IMPALA_TWEETS (
id int,
id_str string,
text string,
created_at timestamp,
geo_latitude double,
geo_longitude double,
user_screen_name string,
user_location string,
user_followers_count string,
user_profile_image_url string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" =
":key,tweet:id_str,tweet:text,tweet:created_at,tweet:geo_latitude,tweet:geo_longitude, user:screen_name,user:location,user:followers_count,user:profile_image_url"
)
TBLPROPERTIES("hbase.table.name" = "tweets");
But I got an error on the STORED BY clause:
Query: create EXTERNAL TABLE HB_IMPALA_TWEETS ( id int, id_str string, text string, created_at timestamp, geo_latitude double, geo_longitude double, user_screen_name string, user_location string, user_followers_count string, user_profile_image_url string ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ( "hbase.columns.mapping" = ":key,tweet:id_str,tweet:text,tweet:created_at,tweet:geo_latitude,tweet:geo_longitude, user:screen_name,user:location,user:followers_count,user:profile_image_url" ) TBLPROPERTIES("hbase.table.name" = "tweets")
ERROR: AnalysisException: Syntax error in line 1:
...image_url string ) STORED BY 'org.apache.hadoop.hive.h...
Encountered: BY
Expected: AS
CAUSED BY: Exception: Syntax error
For info, I followed this page:
https://github.com/AronMacDonald/Twitter_Hbase_Impala/blob/master/README.md
Thanks for helping me :)
Well, it seems that Impala still does not support custom SerDes (serialization/deserialization).
"You create the tables on the Impala side using the Hive shell,
because the Impala CREATE TABLE statement currently does not support
custom SerDes and some other syntax needed for these tables: You
designate it as an HBase table using the STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler' clause on the Hive
CREATE TABLE statement."
So just run the command in the Hive shell (or Hue's Hive editor); then, in Impala, type 'invalidate metadata', and you can see your table with 'show tables'.
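A minimal sketch of that workflow, reusing the table definition from the question:
-- In the Hive shell (hive or beeline), run the CREATE EXTERNAL TABLE
-- statement from the question as-is, since Impala's CREATE TABLE
-- does not accept STORED BY.

-- Then in impala-shell, refresh Impala's view of the metastore:
INVALIDATE METADATA;
SHOW TABLES; -- hb_impala_tweets should now be listed
SELECT id, text, user_screen_name FROM hb_impala_tweets LIMIT 10;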
So for this part the problem seems solved.
