So I'm trying to call a Redshift stored procedure from AWS Lambda, but am having no luck. I can get the Lambda function to create and drop tables if I edit the sql_text parameter to do that explicitly, but I can't get it to execute my procedure on Redshift. Any ideas on what I'm missing here?
import boto3
import json

def lambda_handler(event, context):
    # initiate redshift-data client in boto3
    client = boto3.client("redshift-data")
    # parameters for running execute_statement
    secretArn = 'arn:aws:secretsmanager:us-north-1:234567890123:secret:supersecret-dont-tell-a-soul'
    redshift_database = 'dbase'
    redshift_user = 'admin_user'
    sql_text = 'call public.myproc(''somerandomvalue'')'
    redshift_cluster_id = 'the-redshift-cluster'
    print("Executing: {}".format(sql_text))
    response = client.execute_statement(SecretArn=secretArn, Database=redshift_database, Sql=sql_text, ClusterIdentifier=redshift_cluster_id)
    return {'statusCode': 200, 'body': json.dumps('Lambdone!')}
Here is the response that I get from Lambda:
{
    'ClusterIdentifier': 'my-data-warehouse',
    'CreatedAt': datetime.datetime(2021, 7, 9, 18, 1, 59, 34000, tzinfo=tzlocal()),
    'Database': 'bidev',
    'Id': '011f00d0-bf9e-45cd-bb12-b045c9504c0b',
    'SecretArn': 'arn:aws:secretsmanager:us-north-1:234567890123:secret:redshiftqueryeditor-super-duper-secret-secret-keeper',
    'ResponseMetadata': {
        'RequestId': 'b2b867f0-d20c-4f2b-8e14-64942176bd6e',
        'HTTPStatusCode': 200,
        'HTTPHeaders': {
            'x-amzn-requestid': 'b2b867f0-d20c-4f2b-8e14-64942176bd6e',
            'content-type': 'application/x-amz-json-1.1',
            'content-length': '270',
            'date': 'Fri, 09 Jul 2021 18:01:59 GMT'
        },
        'RetryAttempts': 0
    }
}
Try changing your sql_text so that the Python string is surrounded by double quotes and the value inside keeps its single quotes. As written, the doubled single quotes just make Python concatenate the pieces, so the statement reaches Redshift as call public.myproc(somerandomvalue), with no quotes around the value at all:
"call public.myproc('somerandomvalue')"
This is related to a Google::Apis::CalendarV3::Event date format error.
When I execute my code to add an event to a Google calendar, I get this error message:
invalid: Start and end times must either both be date or both be dateTime.
I tried many ways; I am posting two of them below, but I can't make it work!
Is there any helper function in the Google API that consumes a DateTime and returns the correct string?
Thanks,
Gregoire
start = DateTime.new(2017, 12, 9, 12, 0, 0)
ende = DateTime.new(2017, 12, 9, 12, 0, 0)
event = Google::Apis::CalendarV3::Event.new(
  summary: 'test',
  description: 'desc',
  start: { datetime: start },
  end: { datetime: ende }
)

# event = Google::Apis::CalendarV3::Event.new(
#   summary: 'test',
#   description: 'desc',
#   start: { datetime: start.strftime("%Y-%m-%dT%l:%M:%S.000-07:00") },
#   end: { datetime: ende.strftime("%Y-%m-%dT%l:%M:%S.000-07:00") }
# )
result = calendar.insert_event('primary', event)
puts "Event created: #{result.html_link}"
PS: I have been writing software for more than 20 years now, and we still have the same date/time problems!
Finally, the solution is:
date_time: start.to_datetime.rfc3339,
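Putting it together, a minimal sketch of the working call under the same setup as above (note the key is date_time, not datetime, and the value is an RFC 3339 string):

event = Google::Apis::CalendarV3::Event.new(
  summary: 'test',
  description: 'desc',
  start: { date_time: start.to_datetime.rfc3339 },
  end: { date_time: ende.to_datetime.rfc3339 }
)
result = calendar.insert_event('primary', event)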
I want to import images into MongoDB along with an arbitrary dictionary. The dictionary should provide image tags, whose types, number, and names I can't know at the moment I define the schema.
I'm trying to add a dictionary in Eve without success:
curl -F"attr={\"a\":1}" -F "img_id=2asdasdasd" -F "img_data=#c:\path\
1.png;type=image/png" http://127.0.0.1:5000/images
{"_status": "ERR", "_issues": {"attr": "must be of dict type"}, "_error": {"message": "Insertion failure: 1 document(s)
contain(s) error(s)", "code": 422}}
My schema definition looks like this:
'schema': {
    # Fixed attributes
    'original_name': {
        'type': 'string',
        'minlength': 4,
        'maxlength': 1000,
    },
    'img_id': {
        'type': 'string',
        'minlength': 4,
        'maxlength': 150,
        'required': True,
        'unique': True,
    },
    'img_data': {
        'type': 'media'
    },
    # Additional attributes
    'attr': {
        'type': 'dict'
    }
}
Is it possible at all? Should the schema for dicts be fixed?
EDIT
I wanted to add the image first and the dictionary after it, but I get an error on the PATCH request:
C:\Windows\SysWOW64>curl -X PATCH -i -H "Content-Type: application/json" -d "{\"img_id\":\"asdasdasd\", \"attr\": {\"a\": 1}}" http://localhost:5000/images/asdasdasd
HTTP/1.0 405 METHOD NOT ALLOWED
Content-Type: application/json
Content-Length: 106
Server: Eve/0.7.4 Werkzeug/0.9.4 Python/2.7.3
Date: Wed, 28 Jun 2017 22:55:54 GMT

{"_status": "ERR", "_error": {"message": "The method is not allowed for the requested URL.", "code": 405}}
I have posted an issue on GitHub for the same situation. However, I have come up with a workaround.
Override the dict validator:
import json

from eve import Eve
from eve.io.mongo import Validator

class JsonValidator(Validator):
    def _validate_type_dict(self, field, value):
        # accept a real dict as-is; otherwise try to parse it as a JSON string
        if isinstance(value, dict):
            return
        try:
            json.loads(value)
        except (TypeError, ValueError):
            self._error(field, "Invalid JSON")

app = Eve(validator=JsonValidator)
Next, add an insert hook:
def multi_request_json_parser(documents):
    # decode the JSON string into a real dict before it is inserted
    for item in documents:
        if 'my_json_field' in item.keys():
            item['my_json_field'] = json.loads(item['my_json_field'])

app.on_insert_myendpoint += multi_request_json_parser
The dict validator must be overridden, because otherwise the insert hook will never be called, due to a validation error.
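With both pieces in place, the original multipart request should validate. A hypothetical client-side test (the requests library, endpoint, and field names are assumptions taken from the question; 'attr' travels as a JSON string and is decoded by the hook):

import json
import requests

# 'attr' is sent as a JSON string inside the multipart form
resp = requests.post(
    'http://127.0.0.1:5000/images',
    data={'img_id': '2asdasdasd', 'attr': json.dumps({'a': 1})},
    files={'img_data': open(r'c:\path\1.png', 'rb')},
)
print(resp.status_code, resp.json())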
I have two tables connected with a HABTM relation (through a join table).
Table1
  id: integer
  name: string

Table2
  id: integer
  name: string

Table3
  id: integer
  table1_id: integer
  table2_id: integer
I need to group Table1 records by similar records from Table2. Example:
userx = Table1.create()
user1.table2_ids = 3, 14, 15
user2.table2_ids = 3, 14, 15, 16
user3.table2_ids = 3, 14, 16
user4.table2_ids = 2, 5, 7
user5.table2_ids = 3, 5
The result of the grouping that I want is something like:
=> [[[1, 2], [3, 14, 15]], [[2, 3], [3, 14, 16]], [[1, 2, 3, 5], [3]]]
where the first array holds the user ids and the second holds the shared table2_ids.
Is there any possible SQL solution, or do I need to create some kind of algorithm?
Updated:
OK, I have code that works as I described. Maybe someone who can help me will find it useful for understanding my idea.
def self.compare
  hash = {}
  Table1.find_each do |table_record|
    Table1.find_each do |another_table_record|
      if table_record != another_table_record
        results = table_record.table2_ids & another_table_record.table2_ids
        hash["#{table_record.id}_#{another_table_record.id}"] = results if !results.empty?
      end
    end
  end
  # hash = hash.delete_if { |k, v| v.empty? }
  hash.sort_by { |k, v| v.count }.to_h
end
But you can imagine how long it takes to produce the output. For my 500 Table1 records it's somewhere near 1-2 minutes, and the time grows quadratically as records are added, so I need a more elegant solution or an SQL query.
Table1.find_each do |table_record|
  Table1.find_each do |another_table_record|
    ...
The code above has a performance issue: it queries the database N*N times, which can be optimized down to a single query.
# Query table3 once, constructing the data useful to us:
# { table1_id: [table2_ids], ... }
records = Table3.all.group_by { |t| t.table1_id }.map { |t1_id, t3_records|
  [t1_id, t3_records.map(&:table2_id)]
}.to_h
Then you could do exactly the same thing to records to get the final result hash.
UPDATE:
@AKovtunov You misunderstood me. My code is the first step. With records, which holds a {t1_id => t2_ids} hash, you could do something like this:
hash = {}
records.each do |t1_id, t2_ids|
  records.each do |tt1_id, tt2_ids|
    if t1_id != tt1_id
      inter = t2_ids & tt2_ids
      hash["#{t1_id}_#{tt1_id}"] = inter if !inter.empty?
    end
  end
end
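For what it's worth, the first step can also be written with pluck, which keeps it at one query and skips model instantiation entirely. A sketch (transform_values needs Ruby >= 2.4 or ActiveSupport):

# one query: load (table1_id, table2_id) pairs without instantiating models
records = Table3.pluck(:table1_id, :table2_id)
                .group_by(&:first)
                .transform_values { |pairs| pairs.map(&:last) }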
I had uploads to Amazon S3 working with AngularJS and NodeJS, but now I'm using Rails as the backend. So I figured all I'd have to do is translate the NodeJS code to Rails. Source: https://github.com/nukulb/s3-angular-file-upload/blob/master/lib/controllers/aws.js
and my conversion:
def aws_signature
  mime_type = "image/jpeg"
  expiration = Date.new(Time.now.year + 1, 01, 01) # Time.now.utc.strftime('%Y/%m/%d %H:%M:%S+00:00')
  s3_policy = {
    expiration: expiration,
    conditions: [
      ['starts-with', '$key', '/'],
      { bucket: ENV["BUCKET"] },
      { acl: 'public-read' },
      ['starts-with', '$Content-Type', mime_type],
      { success_action_status: '201' }
    ]
  }
  puts s3_policy.inspect

  string_policy = s3_policy.to_json
  puts string_policy.inspect

  base64_policy = URI.escape(Base64.encode64(string_policy).strip)
  puts base64_policy.inspect

  digest = OpenSSL::Digest::Digest.new('sha1')
  signature = OpenSSL::HMAC.digest(digest, ENV["S3_SECRET"], base64_policy)
  puts signature.inspect

  s3_credentials = {
    s3Policy: base64_policy,
    s3Signature: signature,
    AWSAccessKeyId: ENV["S3_KEY"]
  }
  render json: s3_credentials
end
Now I am getting a 304 response from Amazon with SignatureDoesNotMatch in the XML.
Did I miss something in the conversion to Rails code?
Is there a way to compare the unencrypted params received by Amazon?
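One thing worth checking: for V2 browser-based uploads, AWS documents the signature as Base64(HMAC-SHA1(secret, Base64(policy))), i.e. the plain Base64 policy is signed with no URI escaping, and the binary HMAC digest is Base64-encoded before being sent. A hedged sketch of that variant, reusing the variable names from the code above:

# sign the raw Base64 policy (no URI.escape), then Base64-encode the HMAC
base64_policy = Base64.encode64(string_policy).gsub("\n", '')

digest = OpenSSL::Digest.new('sha1')
signature = Base64.encode64(
  OpenSSL::HMAC.digest(digest, ENV["S3_SECRET"], base64_policy)
).strip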
I'm working with revision 12 of the ember-data RESTAdapter and using the rails-api gem.
I have these models:
App.TransportDocumentRow = DS.Model.extend
  productName: DS.attr 'string'
  transportDocument: DS.belongsTo('App.TransportDocument')

App.TransportDocument = DS.Model.extend
  number: DS.attr 'string'
  transportDocumentRows: DS.hasMany('App.TransportDocumentRow')
configured in this way:
DS.RESTAdapter.map 'App.TransportDocument', {
  transportDocumentRows: { embedded: 'always' }
}
(I'm using embedded: always because if I don't, my document rows are committed with document_id = 0, as asked here.)
Consider that I have already created a transport document (id: 1) with 2 rows. If I delete a row (with id: 1), the result is a PUT request to /transport_documents/1.
The JSON sent with this PUT would be something like this:
{"transport_document"=>
{"number"=>"1", "transport_document_rows"=>
[
{"id"=>2, "product_name"=>"aaaa", "transport_document_id"=>1}
]
}, "id"=>"1"
}
while Rails would expect something like this:
{"transport_document" =>
  {"number" => "1",
   "transport_document_rows" => [
     {"id" => 1, "_delete" => 1},
     {"id" => 2, "product_name" => "aaaa", "transport_document_id" => 1}
   ]},
 "id" => "1"}
Is there a way specified in active_model_serializers to do this?
Or should I make some manual transformations in my controller?
Or should I change the payload so that Ember produces the correct request?
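If the controller route is acceptable, here is a hypothetical sketch of the manual transformation. It assumes the Rails model declares accepts_nested_attributes_for :transport_document_rows, allow_destroy: true, whose conventional keys are *_attributes and _destroy; normalize_rows! and its argument names are made up for illustration:

# Rewrite the Ember payload into the nested-attributes shape Rails expects:
# rows missing from the payload are marked for destruction.
def normalize_rows!(doc_params, document)
  rows = doc_params.delete(:transport_document_rows) || []
  kept_ids = rows.map { |r| r[:id].to_i }
  deleted_ids = document.transport_document_row_ids - kept_ids
  doc_params[:transport_document_rows_attributes] =
    rows + deleted_ids.map { |id| { id: id, _destroy: '1' } }
end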