How to create a BigQuery table and import from Cloud Storage using the Ruby API - ruby-on-rails

I'm trying to create a table on BigQuery. I have a single dataset and need to use the API to add a table and import data (json.tar.gz) from Cloud Storage. I need to be able to use the Ruby client to automate the whole process. I have two questions:
I have read the docs and tried to get it to upload (code below) and have not been successful, and I have absolutely no idea what I'm doing wrong. Could somebody please enlighten me or point me in the right direction?
Once I make the request, how do I know when the job has actually finished? From the API, I presume I'm meant to use a jobs.get request? Having not completed the first part, I have been unable to look at this aspect.
This is my code below.
config = {
  'configuration' => {
    'load' => {
      'sourceUris' => ["gs://person-bucket/person_json.tar.gz"],
      'schema' => {
        'fields' => [
          { 'name' => 'person_id',    'type' => 'integer' },
          { 'name' => 'person_name',  'type' => 'string' },
          { 'name' => 'logged_in_at', 'type' => 'timestamp' },
        ]
      },
      'destinationTable' => {
        'projectId' => "XXXXXXXXX",
        'datasetId' => "personDataset",
        'tableId'   => "person"
      },
      'createDisposition' => 'CREATE_IF_NEEDED',
      'maxBadRecords' => 10,
    }
  },
  'jobReference' => { 'projectId' => 'XXXXXXXXX' }
}
multipart_boundary = "xxx"
body  = "--#{multipart_boundary}\n"
body += "Content-Type: application/json; charset=UTF-8\n\n"
body += "#{config.to_json}\n"
body += "--#{multipart_boundary}\n"
body += "Content-Type: application/octet-stream\n\n"
body += "--#{multipart_boundary}--\n"

param_hash = { :api_method => bigquery.jobs.insert }
param_hash[:parameters] = { 'projectId' => 'XXXXXXXX' }
param_hash[:body] = body
param_hash[:headers] = { 'Content-Type' => "multipart/related; boundary=#{multipart_boundary}" }

result = @client.execute(param_hash)
puts JSON.parse(result.response.body)
I get the following error:
{"error"=>{"errors"=>[{"domain"=>"global", "reason"=>"wrongUrlForUpload", "message"=>"Uploads must be sent to the upload URL. Re-send this request to https://www.googleapis.com/upload/bigquery/v2/projects/XXXXXXXX/jobs"}], "code"=>400, "message"=>"Uploads must be sent to the upload URL. Re-send this request to https://www.googleapis.com/upload/bigquery/v2/projects/XXXXXXXX/jobs"}}
From the request header, it appears to be going to the same URI the error says it should go to, and I am quite at a loss for how to proceed. Any help would be much appreciated.
Thank you and have a great day!

Since this is a "media upload" request, there is a slightly different protocol for making the request. The Ruby docs here http://rubydoc.info/github/google/google-api-ruby-client/file/README.md#Media_Upload describe it in more detail. I'd use resumable upload rather than multipart because it is simpler.
Yes, as you suspected, the way to know when it is done is to do a jobs.get() to look up the status of the running job. The job id will be returned in the response from jobs.insert(). If you want more control, you can pass your own job id, so that in the event that the jobs.insert() call returns an error you can find out whether the job actually started.
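For reference, the status check can be sketched against the raw BigQuery v2 REST endpoint. This is a minimal polling sketch (written in Python for brevity; the flow is identical from the Ruby client), where project_id, job_id, and token are placeholders you would fill from the jobs.insert() response and your OAuth credentials:

import time
import requests

def wait_for_job(project_id, job_id, token, poll_seconds=5):
    # jobs.get: GET /bigquery/v2/projects/{projectId}/jobs/{jobId}
    url = f"https://www.googleapis.com/bigquery/v2/projects/{project_id}/jobs/{job_id}"
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        job = requests.get(url, headers=headers).json()
        status = job["status"]
        if status["state"] == "DONE":
            # A DONE job may still have failed; errorResult is set on failure
            return status.get("errorResult")
        time.sleep(poll_seconds)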

Thank you for that. Answer resolved. Please see here:
How to import a json from a file on cloud storage to Bigquery
I think that the line of code in the docs for the resumable uploads section (http://rubydoc.info/github/google/google-api-ruby-client/file/README.md#Media_Upload) should read:
result = client.execute(:api_method => drive.files.insert,
Otherwise, this line will throw an error with 'result' undefined:
upload = result.resumable_upload

Related

How to make a POST request to Twilio to call a number with urequests

I have recently started doing things with a Raspberry Pi Zero W and wanted to be able to call a number from it.
Unfortunately, it seems very hard to use the normal Twilio library because the Pi uses MicroPython, so I have to use the raw API.
I also gathered that the API uses the content type x-www-form-urlencoded, which urequests seems to have a hard time interacting with.
This is my code so far:
url = f"https://api.twilio.com/2010-04-01/Accounts/{TWILIO_USER}/Calls.json"

def call() -> None:
    body = {"To": CALLING_NUMBER, "From": CALLER_NUMBER}
    response = urequests.post(url, json=body, auth=(TWILIO_USER, TWILIO_KEY))
    print("Status Code", response.status_code)
    print("JSON Response ", response.json())
However, I get the error:
{'code': 21201, 'more_info': 'https://www.twilio.com/docs/errors/21201', 'message': "No 'To' number is specified", 'status': 400}
I have tried a ton of stuff, like URL-encoding the body myself, passing it as the data parameter, and json.dumps-ing it; nothing seemed to work. Any help would be greatly appreciated.
Please remember I am limited to the Pico's standard libraries.
It seems I was close. All I had to do was copy-paste urllib's urlencode and use it on body; then it ended up working.
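For anyone else stuck here, a minimal sketch of that fix, assuming the official urequests module and a hand-ported quote/urlencode (the function names and safe-character set below are my own simplification, not urllib's exact code):

def quote(s):
    # Percent-encode everything outside the unreserved set, so "+" becomes "%2B"
    safe = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-~"
    return "".join(c if c in safe else "%{:02X}".format(ord(c)) for c in s)

def urlencode(d):
    return "&".join("{}={}".format(quote(k), quote(str(v))) for k, v in d.items())

def call() -> None:
    body = {"To": CALLING_NUMBER, "From": CALLER_NUMBER}
    response = urequests.post(
        url,
        data=urlencode(body),  # form-encoded, as the Twilio API expects
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        auth=(TWILIO_USER, TWILIO_KEY),
    )
    print("Status Code", response.status_code)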

KrakenD: Use previous backend response data to populate POST body using Lua

I am new to KrakenD but quite excited about it. What I am currently trying to do is chain two sequential backends, where I would like to use parts of the response from the first backend to populate the body of the second request using Lua. The only issue I am having is getting hold of the data that I can access via {resp0} when I use it outside the Lua scripts.
This is the part of the krakend.json where I would like to access the previous response:
"extra_config": {
"modifier/lua-backend": {
"sources": [ "/etc/krakend/config/lua/enrichRequestBody.lua" ],
"pre": " populate_request_body(request.load(), {resp0});",
"live": false
}
Any input or suggestions would be much appreciated!
Thanks
I have tried finding {resp0}, or the corresponding value, while debugging, but no luck. I haven't found any documentation or examples for this in the KrakenD documentation.

Microsoft Graph subscription to Outlook / Drive not working: "InvalidRequest"

I created an app on Azure and want to use it to subscribe to business emails of certain users. However, I cannot get it to work and I'm wondering if it is even possible this way. The code is as follows:
url = 'https://graph.microsoft.com/beta/subscriptions'
payload = {
    'changeType': 'updated',
    'notificationUrl': 'http://<server_ip>/webhook',
    'resource': "users/<my_user>@<my_domain>/mailFolders('inbox')/messages",
    'expirationDateTime': '2022-07-23T11:52:20',
}
headers = {
    'Authorization': f'Bearer {access_token}',
    'Content-Type': 'application/json'
}
response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.status_code)
print(response.reason)
print(response.text)
400
Bad Request
{"error":{"code":"InvalidRequest","message":"Could not process subscription creation payload. Are all property names spelled and camelCased properly? Also are the dateTimeOffest properties in a valid internet Date and Time format?","innerError":{"date":"2022-07-22T09:12:11","request-id":"317989f4-c921-44ce-a701-c57a660aad3b","client-request-id":"317989f4-c921-44ce-a701-c57a660aad3b"}}}
I also want to subscribe to changes to our drive; for this I substitute the resource with f'drives/{drive_id}/root'. This gives the same error message.
I have read all the relevant docs and feel that this should be the correct approach, but the error message is not helping me find the issue.
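For what it's worth, the error text calls out the dateTimeOffset format, and two details in the payload differ from what Graph documents: expirationDateTime must carry an explicit timezone offset (e.g. a trailing Z for UTC), and notificationUrl must be HTTPS. A sketch with those two details adjusted (whether they are the actual cause here is an assumption on my part):

import json
import requests

payload = {
    'changeType': 'updated',
    # Graph only delivers notifications to HTTPS endpoints
    'notificationUrl': 'https://<server_ip>/webhook',
    'resource': "users/<my_user>@<my_domain>/mailFolders('inbox')/messages",
    # DateTimeOffset values need a timezone, e.g. Z for UTC
    'expirationDateTime': '2022-07-23T11:52:20Z',
}
headers = {
    'Authorization': f'Bearer {access_token}',
    'Content-Type': 'application/json'
}
response = requests.post('https://graph.microsoft.com/beta/subscriptions',
                         headers=headers, data=json.dumps(payload))
print(response.status_code, response.text)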

Get Response from CREATE_STREAM

We upload a document from SAPUI5 to our SAP system using the CREATE_STREAM method of the OData service in ABAP. The creation of the document works fine.
What we would like to achieve is to get the response back to SAPUI5, especially when there is an error during the creation of the document in the backend.
In the frontend we use the UploadSet control.
...oUploadSet.uploadItem(oItem);
In the backend we create a message with
...lo_message_container->add_message(
       iv_msg_type               = /iwbep/cl_cos_logger=>error
       iv_msg_number             = '018'
       iv_msg_id                 = lv_msg_id
       iv_add_to_response_header = abap_true
   )....
We can find the created message in the error log of our gateway server (/IWFND/ERROR_LOG). But how can this message be retrieved in SAPUI5 and used in the MessageManager control?
We tried the onUploadCompleted event but we can't find any response data there.
Can somebody explain how the response, or a message header, from the CREATE_STREAM method can be used in SAPUI5?
The "new" UploadSet control is kinda half-baked imo. The response will get lost in some internal method. This internal method will then trigger onUploadCompleted and you get nothing but useless information.
Lucky for us we can easily overwrite this internal stuff. UploadSet has an aggregation Uploader. We have to provide our own Uploader. Problem solved. Here is the line that needs to be modified.
sap.ui.define([
    "sap/m/upload/Uploader",
    ...
], function (Uploader, ...) {
    return Uploader.extend("my.custom.control.Uploader", {
        uploadItem: function (oItem, aHeaders) {
            // beginning of the method: take it from the official sources
            oXhr.onreadystatechange = function () {
                const oHandler = that._mRequestHandlers[oItem.getId()];
                if (this.readyState === window.XMLHttpRequest.DONE && !oHandler.aborted) {
                    // we need to return the xhr object. it contains the response!
                    that.fireUploadCompleted({ item: oItem, xhr: oXhr });
                }
            };
            // .. rest of the method
        }
    });
});
Use it like this:
<mvc:View xmlns:custom="my.custom.control" ....>
    <UploadSet items="....">
        .....
        <uploader>
            <custom:Uploader uploadUrl="......"
                uploadCompleted=".onUploadCompleted"
                uploadStarted=".onUploadStarted" />
        </uploader>
    </UploadSet>
Edit: Your own uploader also means implementing your own event handlers (uploadAborted, uploadCompleted, uploadProgressed, uploadStarted). See the official documentation for more information about the events.

How to retrieve Medium stories for a user from the API?

I'm trying to integrate Medium blogging into an app by showing some cards with post images and links to the original Medium publication.
From the Medium API docs I can see how to retrieve publications and create posts, but they don't mention retrieving posts. Is retrieving posts/stories for a user currently possible using Medium's API?
The API is write-only and is not intended to retrieve posts (Medium staff told me)
You can simply use the RSS feed as such:
https://medium.com/feed/@your_profile
You can simply get the RSS feed via GET, then if you need it in JSON format just use an NPM module like rss-to-json and you're good to go.
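If you'd rather do the same from Python, a minimal sketch using the feedparser library (my suggestion, not part of the original answer; replace @your_profile with your handle):

import feedparser  # pip install feedparser

feed = feedparser.parse("https://medium.com/feed/@your_profile")
for entry in feed.entries:
    # Each RSS item carries at least a title and a link to the story
    print(entry.title, entry.link)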
Edit:
It is possible to make a request to the following URL and you will get the response. Unfortunately, the response is in RSS format which would require some parsing to JSON if needed.
https://medium.com/feed/@yourhandle
⚠️ The following approach is not applicable anymore as it is behind Cloudflare's DDoS protection.
If you're planning to get it from the client side using JavaScript, jQuery, Angular, etc., then you need to build an API gateway or web service that serves your feed. With PHP, RoR, or any other server-side stack, that should not be necessary.
You can get it directly in JSON format as given beneath:
https://medium.com/@yourhandle/latest?format=json
In my case, I made a simple web service in an Express app and hosted it on Heroku. A React app hits the API exposed over Heroku and gets the data.
const MEDIUM_URL = "https://medium.com/@yourhandle/latest?format=json";

router.get("/posts", (req, res, next) => {
    request.get(MEDIUM_URL, (err, apiRes, body) => {
        if (!err && apiRes.statusCode === 200) {
            // Medium prefixes its JSON with an anti-hijacking guard;
            // skip ahead to the first "{" to get the actual JSON
            let i = body.indexOf("{");
            const data = body.substr(i);
            res.send(data);
        } else {
            res.status(500).json(err);
        }
    });
});
Nowadays this URL:
https://medium.com/@username/latest?format=json
sits behind Cloudflare's DDoS protection service, so instead of consistently being served your feed in JSON format, you will usually receive an HTML page that asks you to complete a reCAPTCHA, leaving you with no data from an API request.
And the following:
https://medium.com/feed/@username
has a limit of the latest 10 posts.
I'd suggest this free Cloudflare Worker that I made for this purpose. It works as a facade, so you don't have to worry about how the posts are obtained from the source, reCAPTCHAs, or pagination.
Full article about it.
Live example. To fetch the following items, add the query param ?next= with the value of the JSON field next, which the API provides.
const MdFetch = async (name) => {
    const res = await fetch(
        `https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/${name}`
    );
    return await res.json();
};

const data = await MdFetch('@chawki726');
To get your posts as JSON objects, replace @USERNAME with your username:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/@USERNAME
With that REST method you would do this: GET https://api.medium.com/v1/users/{{userId}}/publications and this would return the title, image, and the item's URL.
Further details: https://github.com/Medium/medium-api-docs#32-publications.
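A minimal sketch of that call in Python, assuming you already have a Medium integration token and the user's id (user_id and integration_token are placeholders):

import requests

resp = requests.get(
    f"https://api.medium.com/v1/users/{user_id}/publications",
    headers={"Authorization": f"Bearer {integration_token}"},
)
# Per the linked docs, each returned publication carries fields
# such as name, url, and imageUrl
print(resp.json())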
You can also add "?format=json" to the end of any URL on Medium and get useful data back.
Use this URL; it will give the posts in JSON format. Replace studytact with your feed name:
https://api.rss2json.com/v1/api.json?rss_url=https://medium.com/feed/studytact
I have built a basic function using AWS Lambda and AWS API Gateway, if anyone is interested. A detailed explanation is found in this blog post here, and the repository for the Lambda function built with Node.js is found here on GitHub. Hopefully someone here finds it useful.
(Updating the JS Fiddle and the Clay function that explains it as we updated the function syntax to be cleaner)
I wrapped the GitHub package @mark-fasel was mentioning below into a Clay microservice that enables you to do exactly this:
Simplified Return Format: https://www.clay.run/services/nicoslepicos/medium-get-user-posts-new/code
I put together a little fiddle, since a user was asking how to use the endpoint in HTML to get the titles for their last 3 posts:
https://jsfiddle.net/h405m3ma/3/
You can call the API as:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"nicolaerusan"}' https://clay.run/services/nicoslepicos/medium-get-users-posts-simple
You can also use it easily in your node code using the clay-client npm package and just write:
Clay.run('nicoslepicos/medium-get-user-posts-new', { "profile": "profileValue" })
    .then((result) => {
        // Do what you want with the returned result
        console.log(result);
    })
    .catch((error) => {
        console.log(error);
    });
Hope that's helpful!
Check this one; you will get all the info about your own posts:
mediumController.getBlogs = (req, res) => {
    // `parser` is an RSS-parsing helper (its definition is not shown here)
    parser('https://medium.com/feed/@profileName', function (err, rss) {
        if (err) {
            console.log(err);
        }
        var stories = [];
        for (var i = rss.length - 1; i >= 0; i--) {
            var new_story = {};
            new_story.title = rss[i].title;
            new_story.description = rss[i].description;
            new_story.date = rss[i].date;
            new_story.link = rss[i].link;
            new_story.author = rss[i].author;
            new_story.comments = rss[i].comments;
            stories.push(new_story);
        }
        console.log('stories:');
        console.dir(stories);
        res.status(200).json({
            Data: stories
        });
    });
};
I have created a custom REST API to retrieve the stats of a given post on Medium. All you need is to send a GET request to my custom API and you will retrieve the stats as a JSON object as follows:
Request:
curl https://endpoint/api/stats?story_url=THE_URL_OF_THE_MEDIUM_STORY
Response:
{
    "claps": 78,
    "comments": 1
}
The API responds within a reasonable time (< 2 sec); you can find more about it in the following Medium article.
