How to build the paths and responses objects for a URL - Swagger

I would like to create the path object for the following URL:
url: "http://127.0.0.1:5000"
In the code, when the user enters the above-mentioned URL followed by a slash (/), a method (startCoord2() in the code below) is called and executed. When called, that method receives a response from the backend in the form of a JSON file and returns it.
How do I build the path and responses objects for such an action?
Code:
from flask import Flask, jsonify
import urllib.request
import json

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def startCoord2():
    baseURL = "https://api.openrouteservice.org/v2/directions/driving-car?api_key=5b3ce3597851110001cf62480ecf8c403567479a87de01df5da651fb&start=8.681436,49.414554&end=8.681561,49.414747"
    with urllib.request.urlopen(baseURL) as response:
        print("str(response): " + str(response))
        print("response.getcode(): " + str(response.getcode()))
        data = response.read()
        print("response.read(): " + str(data))
    dataAsJSONObject = json.loads(data)
    distance = str(dataAsJSONObject["features"][0]['properties']['segments'][0]['distance'])
    return jsonify(dataAsJSONObject)
YAML file:
openapi: '3.0.2'
info:
  title: &apiTitle "Distance micro-service API."
  description: "This micro-service receives four comma-separated GPS coordinates in the following order: startLongitude, startLatitude, endLongitude, endLatitude in the URL. It then receives a response in the form of a JSON file. The JSON file is parsed for the distance, which represents the distance between the aforementioned GPS coordinates."
  version: '1.0'
  termsOfService: "http://127.0.0.1:5000/getDistance/terms"
  contact:
    name: " "
    url: "http://127.0.0.1:5000"
    email: "me@provider.com"
  license:
    name: "CC Attribution-ShareAlike 4.0 (CC BY-SA 4.0)"
    url: "https://openweathermap.org/price"
servers:
  - url: http://127.0.0.1:5000
    description: Local host test server. When accessed it will return the distance from Berlin to Alexandria in Egypt in kilometers.
    variables:
      originAndDestinationGPSCoords:
        default: 13.404954,52.520008,29.924526,31.205753
        description: Local host test server. When accessed it will return the distance from Berlin to Alexandria in Egypt in kilometers.
paths:
  /:
    get:
      tags:
        - "get response data in json format"
      summary: get the distance between 8.681436,49.414554 and 8.681561,49.414747
      description: >
        to display data in JSON format
      responses:
        '200':
          description: Distance fetched successfully.


Using Google Colab, how to list more than 1,000 files from Google Drive with drive.files().list()

About once a month I get a Google Drive folder with lots of videos in it (usually around 700-800) and a spreadsheet whose column A gets populated with the names of all of the video files, in order of the timestamp in the video file name. I've already got the code that does this (posted below), but this time I have about 8,400 video files in the folder, and this algorithm has a pageSize limit of 1,000 (it was originally 100; I changed it to 1,000, but that's the highest it will accept). How do I change this code to accept more than 1,000 files?
This is the part that initializes everything
!pip install gspread_formatting
import time
import gspread
from gspread import urls
from google.colab import auth
from datetime import datetime
from datetime import timedelta
from gspread_formatting import *
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials
from google.auth import default
folder_id = '************************' # change to whatever folder the required videos are in
base_dir = '/Example/drive/videofolder' # change this to whatever folder path you want to grab videos from same as above
file_name_qry_filter = "name contains 'mp4' and name contains 'cam'"
file_pattern="cam*.mp4"
spreadSheetUrl = 'https://docs.google.com/spreadsheets/d/SpreadsheetIDExample/edit#gid=0'
data_drive_id = '***********' # This is the ID of the shared Drive
auth.authenticate_user()
creds, _ = default()
gc = gspread.authorize(creds)
#gc = gspread.authorize(GoogleCredentials.get_application_default())
wb = gc.open_by_url(spreadSheetUrl)
sheet = wb.worksheet('Sheet1')
And this is the main part of the code
prevTimeStamp = None
prevHour = None

def dateChecker(fileName, prevHour):
    strippedFileName = fileName.replace(".mp4", "")  # get rid of the .mp4 from the end of the file name
    parsedFileName = strippedFileName.split("_")  # split the file name into an array of (0 = Cam#, 1 = yyyy-mm-dd, 2 = hh-mm-ss)
    timeStamp = parsedFileName[2]  # grab specifically the hh-mm-ss time section from the original file name
    parsedTimeStamp = timeStamp.split("-")  # split the time stamp into an array of (0 = hour, 1 = minute, 2 = second)
    hour = int(parsedTimeStamp[0])
    minute = int(parsedTimeStamp[1])
    second = int(parsedTimeStamp[2])  # set hour, minute, and second to their own variables
    commentCell = "Reset"
    if prevHour is None:
        commentCell = " "
        prevHour = hour
    else:
        if 0 <= hour < 24:
            if hour == 0:
                if prevHour == 23:
                    commentCell = " "
                else:
                    commentCell = "Missing Video1"
            else:
                if hour - prevHour == 1:
                    commentCell = " "
                else:
                    commentCell = "Missing Video2"
        else:
            commentCell = "Error hour is not between 0 and 23"
    if minute != 0 or 1 < second < 60:
        commentCell = "Check Length"
    prevHour = hour
    return commentCell, prevHour
# Drive query variables
parent_folder_qry_filter = "'" + folder_id + "' in parents"  # you shouldn't ever need to change this
query = file_name_qry_filter + " and " + parent_folder_qry_filter
drive_service = build('drive', 'v3')

# Build request and call Drive API
page_token = None
response = drive_service.files().list(q=query,
                                      corpora='drive',
                                      supportsAllDrives='true',
                                      includeItemsFromAllDrives='true',
                                      driveId=data_drive_id,
                                      pageSize=1000,
                                      fields='nextPageToken, files(id, name, webViewLink)',  # you can add extra fields in files() if you need more information about the files you're grabbing
                                      pageToken=page_token).execute()

i = 1
array = [[], []]

# Parse/print results
for file in response.get('files', []):
    array.insert(i - 1, [file.get('name'), file.get('webViewLink')])  # if you add extra fields above, this is where you have to start changing the code to accommodate them
    i = i + 1

array.sort()
array_sorted = [x for x in array if x]  # remove any empty placeholder lists left over in the array
arrayLength = len(array_sorted)
print(arrayLength)
commentCell = 'Error'

# for file_name in array_sorted:
#     date_gap, start_date, end_date = date_checker(file_name[0])
#     if prev_end_date == None:
#         print('hello')
#     elif start_date != prev_end_date:
#         date_gap = 'Missing Video'

for file_name in array_sorted:
    commentCell, prevHour = dateChecker(file_name[0], prevHour)
    time.sleep(0.3)
    # insertRow = [file_name[0], "Not Processed", " ", date_gap, " ", " ", " ", " ", base_dir + '/' + file_name[0], " ", file_name[1], " ", " ", " "]
    insertRow = [file_name[0], "Not Processed", " ", commentCell, " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " "]
    sheet.append_row(insertRow, value_input_option='USER_ENTERED')
Now I know the problem has to do with the
page_token = None
response = drive_service.files().list(q=query,
                                      corpora='drive',
                                      supportsAllDrives='true',
                                      includeItemsFromAllDrives='true',
                                      driveId=data_drive_id,
                                      pageSize=1000,
                                      fields='nextPageToken, files(id, name, webViewLink)',  # you can add extra fields in files() if you need more information about the files you're grabbing
                                      pageToken=page_token).execute()
in the middle of the main part of the code. I've obviously already tried just changing the pageSize limit to 10,000, but I knew that wouldn't work, and I was right; it came back with:
HttpError: <HttpError 400 when requesting https://www.googleapis.com/drive/v3/files?q=name+contains+%27mp4%27+and+name+contains+%27cam%27+and+%271ANmLGlNr-Cu0BvH2aRrAh_GXEDk1nWvf%27+in+parents&corpora=drive&supportsAllDrives=true&includeItemsFromAllDrives=true&driveId=0AF92uuRq-00KUk9PVA&pageSize=10000&fields=nextPageToken%2C+files%28id%2C+name%2C+webViewLink%29&alt=json returned "Invalid value '10000'. Values must be within the range: [1, 1000]". Details: "Invalid value '10000'. Values must be within the range: [1, 1000]">
The one idea I have is to have multiple pages with 1,000 each and then iterate through them, but I barely understood how this part of the code worked a year ago when I set it up, and since then I haven't touched Google Colab except to run this algorithm. Every time I try to google how to do this, or look up the Google Drive API, everything always comes back with how to download and upload a couple of files, whereas what I need is just to get a list of the names of all the files.
The documentation explains how to use the pageToken for pagination (the page is for the Calendar API, but it works the same way in Drive):
In order to retrieve the next page, perform the exact same request as previously and append a pageToken field with the value of nextPageToken from the previous page. A new nextPageToken is provided on the following pages until all the results are retrieved.
Essentially you want a loop where you run files.list(), retrieve the pageToken, and run it again while feeding it the previous token until you stop getting tokens.
For your specific scenario you can try to replace the "problem" snippet with the following:
page_token = ""
filelist = {}
while True:
response = drive_service.files().list(q=query,
corpora='drive',
supportsAllDrives='true',
includeItemsFromAllDrives='true',
driveId=data_drive_id,
pageSize=1000,
fields='nextPageToken, files(id, name, webViewLink)',
pageToken=page_token).execute()
page_token = response.get('nextPageToken', None)
filelist.setdefault("files",[]).extend(response.get('files'))
if (not page_token):
break
response = filelist
This does as I described, looping files.list() and adding the results to the filelist variable, then breaking the loop when the API stops returning page tokens. At the end I just assigned the value of filelist to the response variable since that's what you're using in the rest of your code. It should parse the same way but with the full list of results this time.
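If you prefer not to track the token yourself, the google-api-python-client can also build the follow-up request for you via files().list_next(). A minimal sketch of that variant, reusing the same query variables defined earlier in your notebook (pageSize and fields are just carried over from your code):
request = drive_service.files().list(q=query,
                                     corpora='drive',
                                     supportsAllDrives='true',
                                     includeItemsFromAllDrives='true',
                                     driveId=data_drive_id,
                                     pageSize=1000,
                                     fields='nextPageToken, files(id, name, webViewLink)')
filelist = {"files": []}
while request is not None:
    response = request.execute()
    # collect this page's results
    filelist["files"].extend(response.get('files', []))
    # list_next() returns None once there are no more pages
    request = drive_service.files().list_next(request, response)
response = filelist
Either way, the rest of your script keeps working on response as before.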
Sources:
Page through list of resources
Files.list()
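One side note on the sheet-writing part rather than the pagination: with roughly 8,400 results, calling sheet.append_row() once per file (plus a 0.3-second sleep) will be slow and can run into Sheets API quotas. If your gspread version provides Worksheet.append_rows() (recent releases do; this is an assumption about your environment), you can build the rows in memory and write them in one call, reusing your dateChecker() and array_sorted. A sketch:
rows = []
for file_name in array_sorted:
    commentCell, prevHour = dateChecker(file_name[0], prevHour)
    # same 18-column row layout as your insertRow, built in memory instead of appended one by one
    rows.append([file_name[0], "Not Processed", " ", commentCell] + [" "] * 14)
sheet.append_rows(rows, value_input_option='USER_ENTERED')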

QuickFIX/J DataDictionary validation issue

I'm new to Java and working with QuickFIX/J.
I'm trying to load a custom XML file using DataDictionary and later parse a message string with this DataDictionary. I'm not sure what is wrong with the DataDictionary; it doesn't throw any error whether I pass a correct fix.xml file path or an incorrect one.
Later, when I pass the message string to a Message object, it only parses the initial 3 tags, irrespective of whatever XML file I specify in the DataDictionary.
Please give me some pointers to resolve this.
Adding code snippet:
public Message createMsg(String type, String message) throws InvalidMessage
{
    message = "35=" + type + "\001" + message;
    message = "8=FIX4.2\0019=" + message.length() + "\001" + message;
    String msgCharStr = message.replaceAll("\\|", "\001");
    int checksum = MessageUtils.checksum(msgCharStr);
    msgCharStr = msgCharStr + "\00110=" + (1 + MessageUtils.checksum(msgCharStr)) + "\001";
    quickfix.Message msg = new Message();
    msg.fromString(msgCharStr, null, true);
    return msg;
}
This code works fine for regular NewOrder or QuoteRequest messages.
But when I pass a QuoteRequest message with a custom repeating group, which I loaded via the UseDictionary config setting, it gives an undesired output message.
For example, let's say I pass the input
"35=R|146=1|...|453=2|448=XX|447=G|452=1|448=YY|447=D|452=2|......"
The parser gives the output:
"35=R|146=1|...|448=XX|447=G|452=1|453=2|448=YY|447=D|452=2|......"
---- Code rebuild steps for the new custom FIX42.xml ----
Downloaded quickfixj-2.3.1.zip (from https://github.com/quickfix-j/quickfixj/releases).
Updated the XML at quickfixj-messages/quickfixj-messages-fix42/src/main/resources/FIX42.xml.
Ran the mvn build command.
Maven rebuilt the jars below:
quickfixj-all
quickfixj-core
quickfixj-messages
quickfixj-codegenerator
quickfixj-dictgenerator
Tried using all of the above jar files in my Robot Framework setup, with the assumption that it would now refer to the modified QuoteRequest message, which has repeating-group support.
To my surprise, the output was exactly the same. No changes.
This was after the QuickFIX/J rebuild for the customised XML file.
Final version of code snippet:
public Message createMsg(String type, String message) throws InvalidMessage, ConfigError
{
    message = "35=" + type + "\001" + message;
    message = "8=FIX4.2\0019=" + message.length() + "\001" + message;
    String msgCharStr = message.replaceAll("\\|", "\001");
    int checksum = MessageUtils.checksum(msgCharStr);
    msgCharStr = msgCharStr + "\00110=" + (1 + MessageUtils.checksum(msgCharStr)) + "\001";
    DataDictionary dd = new DataDictionary("FIX42.xml");
    quickfix.Message msg = new Message();
    msg.fromString(msgCharStr, dd, true);
    return msg;
}

Escaping string variables in GraphQL query in ruby graphql-client?

I have a GraphQL query with a data node that requires all quotation marks to be escaped, like so:
data: "{\"access_token\": \"SOME_TOKEN\"}"
However, in the program I am writing, SOME_TOKEN is dynamic, so I need to use a variable to reference the access token. Because the token is a string, it has quotation marks as part of the variable. I would usually do something like access_token_var.replace('"', '\"'), but because this is graphql-client, the entire thing is wrapped in a string.
I am getting the error Variable $access_token is declared by __anonymous__ but not used.
How can I escape the quotes that are included in the string variable?
Full query:
CREATE_AUTHENTICATION_MUTATION ||= <<-'GRAPHQL'
  mutation($access_token: String!, $refresh_token: String!, $user_id: ID!, $created_at: Int!, $__created: Int!, $expires_in: Int!){
    createUserAuthentication(input: {
      name: "my_test",
      data: "{\"access_token\": $access_token, \"refresh_token\": $refresh_token, \"user_id\": $user_id, \"instance_url\": \"REDACTED\", \"created_at\": $created_at, \"__created\": $__created, \"token_type\": \"bearer\", \"expires_in\": $expires_in}"
    }){
      authenticationId
      clientMutationId
    }
  }
GRAPHQL
Your problem isn't the quotes; it's that data is just a string: no interpolation is taking place, so your resulting value will just be the string literal {"access_token": $access_token, ...}.
If your GraphQL server is built on graphql-js (JavaScript), you can probably do something like the following:
...
data: "{\"access_token\": \"" + $access_token + "\", \"refresh_token\": \"" + $refresh_token + "\", \"user_id\": \"" + $user_id + "\", \"instance_url\": \"REDACTED\", \"created_at\": \"" + $created_at + "\", \"__created\": \"" + $__created + "\", \"token_type\": \"bearer\", \"expires_in\": \"" + $expires_in + "\"}"
...
Or even:
...
data: JSON.stringify({access_token: $access_token, refresh_token: $refresh_token, user_id: $user_id, instance_url: "REDACTED", created_at: $created_at, __created: $__created, token_type: "bearer", expires_in: $expires_in})
...
As per the example here: https://graphql.org/graphql-js/mutations-and-input-types/ (search for 'stringify').

403 Invalid token error on GET user info

UPDATE: I thought I had to pass the parameters as a JSON string in the request body, but actually I need to put them on the URL (the endpoint string), so it's working now.
I'm new to Valence. I have some Salesforce Apex code (written by someone else) that creates a D2L user. The code is working fine.
I want to add an Apex method to retrieve info for an existing D2L user using the userName parameter. I've copied the existing method, changed it to a GET, set the query parameter to userName, and kept everything else the same.
When I call my method, I get a 403 Invalid Token error.
Do I need to use different authorization parameters for a GET? For example, do I still need to include a timestamp?
Here's a portion of the Salesforce Apex code:
public static final String USERS = '/d2l/api/lp/1.0/users/';
String TIMESTAMP_PARAM_VALUE = String.valueOf(Datetime.now().getTime()).substring(0,10);
String method = GETMETHOD;
String action = USERS;
String signData = method + '&' + action + '&' + TIMESTAMP_PARAM_VALUE;
String userSignature = sign(signData,USER_KEY);
String appSignature = sign(signData,APP_KEY);
String SIGNED_USER_PARAM_VALUE = userSignature;
String SIGNED_APP_PARAM_VALUE = appSignature;
String endPoint = DOMAIN + action + '?' +
    APP_ID_PARAM + '=' + APP_ID + '&' +
    USER_ID_PARAM + '=' + USER_ID + '&' +
    SIGNED_USER_PARAM + '=' + SIGNED_USER_PARAM_VALUE + '&' +
    SIGNED_APP_PARAM + '=' + SIGNED_APP_PARAM_VALUE + '&' +
    TIMESTAMP_PARAM + '=' + TIMESTAMP_PARAM_VALUE;
HttpRequest req = new HttpRequest();
req.setMethod(method);
req.setTimeout(30000);
req.setEndpoint(endPoint);
req.setBody('{ "orgDefinedId"' + ':' + '"' + person.Id + '" }');

jQuery $.post - do I have to encode the URL parameter?

I'm making an AJAX call with $.post(url, cb). The URL I'm passing in could potentially have weird characters like spaces, &, ? and so on.
Do I have to use $.post(encodeURIComponent(url), cb)?
url is something like /foo/weird-char§.
Do I have to use $.post(encodeURIComponent(url), cb)?
You will have to use encodeURIComponent() but not on the entire URI, only on the data part (weird and chars in your example). The URL and the ? & separating the parameters must stay intact. If you encode the entire URI, it will become unusable.
If you add the data as POST data using the data parameter:
url = "/foo/possible";
$.post(url, { "weird": "f2(90§§$", "chars": "ß1028490" });
jQuery's Ajax functions would take care of URL encoding the data automatically.
Yes, you would need to encode the keys and values in the query string (but not the ? which separates the path from the query arguments and the & which separates the query arguments). This is built into jQuery if you use the data parameter of the $.post, like so:
$.post(url, { name: "John", time: "2pm" }, cb);
I'm using MVC3/Entity Framework as the back end, and the front end consumes all of my project's controllers via jQuery. Posting directly (using $.post) doesn't require encoding the data when you pass the params directly rather than hardcoding them into the URL.
I already tested several characters; I even sent a URL (this one: http://www.ihackforfun.eu/index.php?title=update-on-url-crazy&more=1&c=1&tb=1&pb=1) as a parameter and had no issue at all. That said, encodeURIComponent works great when you pass all the data within the URL (hardcoded).
Hardcoded URL, i.e.:
var encodedName = encodeURIComponent(name);
var url = "ControllerName/ActionName/" + encodedName + "/" + keyword + "/" + description + "/" + linkUrl + "/" + includeMetrics + "/" + typeTask + "/" + project + "/" + userCreated + "/" + userModified + "/" + status + "/" + parent;
Otherwise, don't use encodeURIComponent and instead pass the params within the AJAX post method:
var url = "ControllerName/ActionName/";
$.post(url,
    { name: nameVal, fkKeyword: keyword, description: descriptionVal, linkUrl: linkUrlVal, includeMetrics: includeMetricsVal, FKTypeTask: typeTask, FKProject: project, FKUserCreated: userCreated, FKUserModified: userModified, FKStatus: status, FKParent: parent },
    function (data) {.......});
