Can't add quandl gem to my Rails app - ruby-on-rails

Good afternoon, my dear Rails developers. I am Clement, a Ruby on Rails beginner, and I just started working on my first project: an online business platform. My problem was how to add a stock chart to my app, and fortunately I discovered the quandl gem, which I hear is very good to work with. My only remaining problem is getting the gem working in the app: creating the view and controller and configuring it to render the stock charts. Can anyone help me with the code for this and walk me through it with a quick tutorial, from configuring the gem to getting the feeds rendered in the view? Your help will be appreciated. Below is the GitHub README for the
Quandl Ruby Client
The official ruby gem for all your data needs! The Quandl client can be used to interact with the latest version of the Quandl RESTful API.
Deprecation of old package
With the release of our v3 API we are officially deprecating version 2 of the quandl_client ruby gem. We have re-written the package from the ground up and will be moving forward with a 1.x.x package named quandl that relies on version 3 of our RESTful API. During this transitional period you can continue to use the old package here:
https://rubygems.org/gems/quandl_client
Installation
gem 'quandl'
Configuration
Option        Explanation                                    Example
api_key       Your access key                                tEsTkEy123456789
api_version   The version you wish to access the API with   2015-04-09
require 'quandl'
Quandl::ApiConfig.api_key = 'tEsTkEy123456789'
Quandl::ApiConfig.api_version = '2015-04-09'
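In a Rails app like the one described in the question, this configuration would typically live in an initializer so it runs at boot. A minimal sketch, assuming a hypothetical config/initializers/quandl.rb and an API key stored in an environment variable:
# config/initializers/quandl.rb (hypothetical path)
require 'quandl'

# Read the key from the environment rather than hard-coding it.
Quandl::ApiConfig.api_key = ENV['QUANDL_API_KEY']
Quandl::ApiConfig.api_version = '2015-04-09'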
Retrieving Data
Dataset
Retrieving dataset data can be done in a similar way to Databases. For example, to retrieve a dataset, use its full code:
require 'quandl'
Quandl::Dataset.get('WIKI/AAPL')
=> ... dataset ...
You can also retrieve a list of datasets associated to a database by using the Database helper method:
Quandl::Database.get('WIKI').datasets
=> ... datasets results ...
By default, each list query will return page 1 of the first 100 results (Please see the API Documentation for more detail)
Data
Dataset data can be queried through a dataset. For example:
require 'quandl'
Quandl::Dataset.get('WIKI/AAPL').data
=> ... data ...
You can access the data much like you would other lists. In addition, all the data column fields are mapped to their column_names for convenience:
Quandl::Dataset.get('WIKI/AAPL').data.first.date
=> ... date ...
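To wire this into the Rails app from the question, one approach is to fetch the rows in a controller action and hand them to a view for charting. A minimal sketch, assuming a hypothetical StocksController and the WIKI/AAPL dataset used in the examples above; the chart rendering itself is left to whatever charting library you prefer:
# app/controllers/stocks_controller.rb (hypothetical)
class StocksController < ApplicationController
  def show
    # Fetch the most recent 30 rows for the example dataset.
    dataset = Quandl::Dataset.get('WIKI/AAPL')
    @rows = dataset.data(params: { limit: 30 })
  end
end
In the matching view you could then iterate @rows and feed row.date and row.close (see the data_fields list further down) into the chart.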
Database
To retrieve a database, simply use its code with the get method:
require 'quandl'
Quandl::Database.get('WIKI')
=> ... wiki database ...
You can also retrieve a list of databases by using:
Quandl::Database.all
=> ... results ...
Download Entire Database (Bulk Download)
To get the URL for downloading all dataset data of a database:
require 'quandl'
Quandl::ApiConfig.api_key = 'tEsTkEy123456789'
Quandl::Database.get('ZEA').bulk_download_url
=> "https://www.quandl.com/api/v3/databases/ZEA/data?api_key=tEsTkEy123456789"
To bulk download all dataset data of a database:
Quandl::ApiConfig.api_key = 'tEsTkEy123456789'
Quandl::Database.get('ZEA').bulk_download_to_file('/path/to/destination/file_or_folder')
The file or folder path can either be specified as a string or as a File.
For bulk download of premium databases, please ensure that a valid api_key is set, as authentication is required.
For both bulk_download_url and bulk_download_to_file, an optional download_type query parameter can be passed in:
Quandl::Database.get('ZEA').bulk_download_to_file('.', params: {download_type: 'partial'})
If download_type is not specified, a complete bulk download will be performed. Please see the API Documentation for more detail.
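As a usage sketch for the Rails app in the question, the destination could be the application's tmp directory; the ZEA code and the partial download_type are simply the README's own examples:
require 'quandl'

Quandl::ApiConfig.api_key = ENV['QUANDL_API_KEY']

# Download only the incremental data into Rails' tmp directory.
destination = Rails.root.join('tmp').to_s
Quandl::Database.get('ZEA').bulk_download_to_file(destination, params: { download_type: 'partial' })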
Working with results
Instance
All data, once retrieved, is abstracted into custom classes. You can get a list of the fields in each class by using the data_fields method.
require 'quandl'
database = Quandl::Database.get('WIKI')
database.data_fields
=> ["id", "name", "database_code", "description", "datasets_count", "downloads", "premium", "image"]
You can then use these methods in your code. Additionally, you can access the data by using the equivalent hash lookup.
database = Quandl::Database.get('WIKI')
database.database_code
=> 'WIKI'
database['database_code']
=> 'WIKI'
In some cases the names of the fields returned by the API may not be compatible with Ruby syntax. These will be converted into compatible field names.
data = Quandl::Dataset.get('WIKI/AAPL').data(params: { limit: 1 }).first
data.column_names
=> ["Date", "Open", "High", "Low", "Close", "Volume", "Ex-Dividend", "Split Ratio", "Adj. Open", "Adj. High", "Adj. Low", "Adj. Close", "Adj. Volume"]
data.data_fields
=> ["date", "open", "high", "low", "close", "volume", "ex_dividend", "split_ratio", "adj_open", "adj_high", "adj_low", "adj_close", "adj_volume"]
List
Most list queries will return a paginated list of results. You can check whether the resulting list has more data by using the more_results? method. By default, each list query will return page 1 of the first 100 results (please see the API Documentation for more detail). Depending on the results, you can pass additional params to filter the data:
require 'quandl'
databases = Quandl::Database.all
=> ... results ...
databases.more_results?
=> true
Quandl::Database.all(params: { page: 2 })
=> ... more results ...
Lists also function as arrays and can be iterated through. Note, however, that these features only work on the current page of data you have locally. You will need to keep fetching results and iterating again to loop through the full result set (a small paging sketch follows the example below).
databases = Quandl::Database.all
databases.each { |d| puts d.database_code }
=> ... print database codes ...
databases.more_results?
=> true
Quandl::Database.all(params: { page: 2 }).each { |d| puts d.database_code }
=> ... print more database codes ...
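A minimal sketch of walking every page this way, using only the more_results? helper and the page param shown above (the loop shape is an illustration, not part of the gem's documented API):
require 'quandl'

# Walk through every page of databases, printing each code.
page = 1
loop do
  databases = Quandl::Database.all(params: { page: page })
  databases.each { |d| puts d.database_code }
  break unless databases.more_results?
  page += 1
end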
Lists also return metadata associated with the request. This can include things like the current page, total results, etc. Each of these fields can be accessed through a hash or convenience method.
Quandl::Database.all.current_page
=> 1
Quandl::Database.all['current_page']
=> 1
As a convenience, lists can also return their data in CSV form. To do this, simply call the .to_csv method on a list:
databases = Quandl::Database.all.to_csv
=> "Id,Name,Database Code,Description,Datasets Count,Downloads,Premium,Image,Bundle Ids,Plan ...
Additional Links
Quandl
Quandl Tools
API Docs
License

Related

How do I insert data to Astra DB using GraphQL API?

I am trying to follow this YouTube tutorial.
I am getting stuck at inserting the first piece of data. Ania demonstrates it at 20:46 as follows:
mutation insertGenres {
  action: insertreference_list(value: { label: "genre", value: "action" }) {
    value {
      value
    }
  }
}
When I try this, I get an error that says:
{
  "errors": [
    {
      "message": "Validation error of type FieldUndefined: Field 'insertreference_list' in type 'Mutation' is undefined # 'insertreference_list'",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "extensions": {
        "classification": "ValidationError"
      }
    }
  ]
}
When I google the error, a lot of responses tell people to use mutations instead of queries - but I've started from a mutation. I would like to know how to resolve the error, but I'd also like to find the skills to improve my search strategy for finding answers.
When I look at the documentation for using GraphQL with DataStax, I see a different format to the write structure, which is as follows:
insertbook(value: bookInput!, ifNotExists: Boolean, options: UpdateOptions): bookMutationResult
It has a colon and a fragment of text after it. It also explicitly states the ifNotExists: Boolean and options. I don't know whether DataStax has changed since Ania recorded the tutorial, meaning it is no longer a current demonstration of how to use the tool, or whether there is an answer and I just haven't found it yet.
You didn't provide details of how you've configured your Astra DB for Ania's Netflix Clone tutorial so I'm going to assume that you've named your keyspace as netflix.
It seems as though you haven't followed the instructions correctly and have missed steps. I can replicate the error you reported if I skip at least one of the steps in the tutorial.
In step 5 of the tutorial, you needed to do the following:
✅ In the GraphQL playground, change tabs to now use graphql. Edit the end of the URL to change from system to the name of your keyspace: netflix
✅ Populate the HTTP HEADER variable x-cassandra-token at the bottom of the page with your token as shown below (again, yes, this is not the same tab)
Switch tabs
In order to insert data, you need to switch to the graphql tab.
If you try to insert the data in the graphql-schema tab, you will get the error you reported.
Set keyspace
You need to update the URI of your GraphQL playground in the graphql tab to use the keyspace name netflix instead of system. For example:
https://db_id-us-west1.apps.astra.datastax.com/api/graphql/system
change to:
https://db_id-us-west1.apps.astra.datastax.com/api/graphql/netflix
If you try to insert data into the system keyspace, you will get the error you reported because the reference_list table does not exist in that keyspace. Cheers!

Rails magento API with savon - complex filters

I am trying to import orders from a Magento store to a rails app using Savon and Magento API. So far here is my code:
require 'savon'

client = Savon.client(wsdl: "http://mywebsite.com/api/v2_soap?wsdl")

session_id = client.call(:login,
  message: {
    username: "myapiuser",
    api_key: "myapipassword"
  }).body[:login_response][:login_return]

orders = client.call(:sales_order_list,
  message: {
    session_id: session_id,
    complex_filters: [{
      key: "created_at", operator: "gt", value: '2014-10-14 00:00:00'
    }]
  })
I need to use a complex filter to find orders created after a certain date. The reason for this is if I try to pull all the orders at once it overloads the server. I tried using the complex filter above, but it still tries to pull all the orders. Am I passing the filter in an improper way? Any idea on how to make this work?
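No answer is recorded here, but as a hedged sketch only: Magento's v2 SOAP API generally expects complex filters wrapped in a filters element, with each filter's operator nested under value rather than passed as a flat operator key. Whether this nesting matches your store's WSDL is an assumption worth checking against the generated request XML:
# Sketch only; the filters/complex_filter nesting is an assumption about
# Magento's v2 SOAP WSDL, not something confirmed by the question.
orders = client.call(:sales_order_list,
  message: {
    session_id: session_id,
    filters: {
      complex_filter: [
        item: {
          key: 'created_at',
          value: { key: 'gt', value: '2014-10-14 00:00:00' }
        }
      ]
    }
  })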

How do I setup geocoder with google_premier?

I've read the docs for the geocoder gem which state you can set a key, client and channel when using Google Premier.
According to some other posts I've read here, it's now possible to use an API key and still not pay as long as you're below the free threshold. We need to do this as we host with Heroku and we keep hitting our daily limit. We're not heavy users ourselves, but without any other form of identification we're probably hitting a limit tied to an IP shared with other Heroku sites. Using a key will help identify us and therefore keep us from hitting that limit.
However, when I look at the sign up pages for the Google API, there are a baffling array of client ids, api keys and secrets, for installed apps, web apps and so on. Which combination is the one required to make geocoder burst into life?
To answer the question:
When subscribing to Google Premier, you should have received a client id starting with gme- and a key (see https://developers.google.com/maps/documentation/business/articles/prelaunch_checklist#welcome_letter).
The third argument needed by geocoder is the channel, which can be any kind of string (see https://developers.google.com/maps/documentation/business/guide#Channels).
You also need to add the list of authorised URLs from which requests originate in the Google Portal (see https://developers.google.com/maps/documentation/business/guide#URLs).
From the Geocoder doc, you can use a setting like:
# -*- encoding : utf-8 -*-
Geocoder.configure do |config|
  config.lookup = :google_premier
  config.api_key = ["gme-client-id", "key", "channel"]
  config.timeout = 10
  config.units = :km
end
But it would probably be a better choice to use client-side geocoding, as recommended here: https://developers.google.com/maps/articles/geocodestrat?hl=fr#client
This worked for me:
Geocoder.configure(
  :lookup  => :google_premier,
  :api_key => ['GOOGLE_CRYPTO_KEY', 'GOOGLE_CLIENT_ID', 'GOOGLE_CHANNEL'],
  :timeout => 5,
  :units   => :km,
)
You'll need to substitute in the corresponding values from your Google Maps for Business welcome email. Channel is a value of your choosing.
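Once configured, a quick way to check that the lookup is actually being used is to geocode a known address from the Rails console. Geocoder.search is part of the gem's public API; the address below is just an example:
# From `rails console`, after the configuration above has loaded.
results = Geocoder.search("1600 Amphitheatre Parkway, Mountain View, CA")
puts results.first.coordinates.inspect if results.any?  # => [latitude, longitude]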

Can't get views by insightTrafficSourceType — YouTube Analytics API

So I'm using the 'google-api-client' gem with Rails, and I'm attempting to call the URL below in order to get video views by day and insightTrafficSourceType. This is a call that appears to be allowable from the Available Reports documentation page.
Additionally, I found that I was able to make this call by using the API Explorer tool provided by Google.
URL:
https://www.googleapis.com/youtube/analytics/v1beta1/reports?metrics=views&ids=channel==CHANNEL_ID&dimensions=day,insightTrafficSourceType&filter=video==VIDEO_ID&start-date=2013-01-15&end-date=2013-01-16&start-time=1970-01-01
Result:
{
  :error => {
    "errors" => [
      {
        "domain"  => "global",
        "reason"  => "invalid",
        "message" => "Unknown identifier (insightTrafficSourceType) given in field parameters.dimensions."
      }
    ],
    "code"    => 400,
    "message" => "Unknown identifier (insightTrafficSourceType) given in field parameters.dimensions."
  }
}
I'm not sure what extra data I can provide in the initial description of this bug, but as stated before I am making the call to the API with the Google::APIClient Ruby library. The actual code itself looks like this:
client.execute(
  :api_method => api.reports.query,
  :parameters => options
)
You are still referencing the old beta API, i.e., in your URL you have 'v1beta1' where you should have 'v1'. Try replacing that and running it again. Also, you can look at the API Explorer to see the exact URL that should be generated in live examples with your account (once you enable OAuth) here:
https://developers.google.com/youtube/analytics/v1/
(Look at the bottom of the page.)
Finally, start-time isn't a parameter listed on the production version of the API, so you will want to remove that as well.
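Putting those fixes together, the options hash might look roughly like the sketch below. It assumes the client is already authorized and that api points at the v1 (production) Analytics service; the parameter names mirror the query string in the question, and the 'filters' spelling (rather than 'filter') is an assumption based on the production API:
# Hypothetical sketch of the corrected query against the v1 API.
options = {
  'ids'        => 'channel==CHANNEL_ID',
  'metrics'    => 'views',
  'dimensions' => 'day,insightTrafficSourceType',
  'filters'    => 'video==VIDEO_ID',
  'start-date' => '2013-01-15',
  'end-date'   => '2013-01-16'
  # note: no 'start-time' parameter on the production API
}

result = client.execute(
  :api_method => api.reports.query,
  :parameters => options
)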

Simultaneously get multiple resources by ID

There exists a DocsClient.get_resource_by_id function to get the document entry for a single ID. Is there a similar way to obtain (in a single call) multiple document entries given multiple document IDs?
My application needs to efficiently download the content from multiple files for which I have the IDs. I need to get the document entries to access the appropriate download URL (I could manually construct the URLs, but this is discouraged in the API docs). It is also advantageous to have the document type and, in the case of spreadsheets, the document entry is required in order to access individual worksheets.
Overall I'm trying to reduce I/O waits, so if there's a way I can bundle the doc ID lookup, it will save me some I/O expense.
[Edit] Backporting AddQuery to gdata v2.0 (from Alain's solution):
client = DocsClient()
# ...
request_feed = gdata.data.BatchFeed()
request_entry = gdata.data.BatchEntry()
request_entry.batch_id = gdata.data.BatchId(text=resource_id)
request_entry.batch_operation = gdata.data.BATCH_QUERY
request_feed.add_batch_entry(entry=request_entry, batch_id_string=resource_id, operation_string=gdata.data.BATCH_QUERY)
batch_url = gdata.docs.client.RESOURCE_FEED_URI + '/batch'
rsp = client.batch(request_feed, batch_url)
rsp.entry is a collection of BatchEntry objects, which appear to refer to the correct resources, but which differ from the entries I'd normally get via client.get_resource_by_id().
My workaround is to convert gdata.data.BatchEntry objects into gdata.docs.data.Resource objects like this:
entry = atom.core.parse(entry.to_string(), gdata.docs.data.Resource)
You can use a batch request to send multiple "GET" requests to the API using a single HTTP request.
Using the Python client library, you can use this code snippet to accomplish that:
def retrieve_resources(gd_client, ids):
  """Retrieve Documents List API Resources using a batch request.

  Args:
    gd_client: authorized gdata.docs.client.DocsClient instance.
    ids: Collection of resource id to retrieve.

  Returns:
    ResourceFeed containing the retrieved resources.
  """
  # Feed that holds the batch request entries.
  request_feed = gdata.docs.data.ResourceFeed()
  for resource_id in ids:
    # Entry that holds the batch request.
    request_entry = gdata.docs.data.Resource()
    self_link = gdata.docs.client.RESOURCE_SELF_LINK_TEMPLATE % resource_id
    request_entry.id = atom.data.Id(text=self_link)
    # Add the request entry to the batch feed.
    request_feed.AddQuery(entry=request_entry, batch_id_string=resource_id)
  # Submit the batch request to the server.
  batch_url = gdata.docs.client.RESOURCE_FEED_URI + '/batch'
  response_feed = gd_client.Post(request_feed, batch_url)
  # Check the batch request's status.
  for entry in response_feed.entry:
    print '%s: %s (%s)' % (entry.batch_id.text,
                           entry.batch_status.code,
                           entry.batch_status.reason)
  return response_feed
Make sure to sync to the latest version of the project repository.
