Problems with MailChimp API in Ruby Error Code: -90 - ruby-on-rails

I am using the following code in my MailChimp controller to submit simple newsletter data. When the form is submitted, I receive the error "Method is not exported by this server -90". I have attached my controller code below. I am using this controller for a simple newsletter signup form (Name, Email).
class MailchimpController < ApplicationController
  require "net/http"
  require "uri"

  def subscribe
    if request.post?
      mailchimp = {}
      mailchimp['apikey'] = 'f72328d1de9cc76092casdfsd425e467b6641-us2'
      mailchimp['id'] = '8037342dd1874'
      mailchimp['email_address'] = "email@gmail.com"
      mailchimp['merge_vars[FNAME]'] = "FirstName"
      mailchimp['output'] = 'json'

      uri = URI.parse("http://us2.api.mailchimp.com/1.3/?method=listSubscribe")
      response = Net::HTTP.post_form(uri, mailchimp)
      mailchimp = ActiveSupport::JSON.decode(response.body)

      if mailchimp['error']
        render :text => mailchimp['error'] + "code:" + mailchimp['code'].to_s
      elsif mailchimp == 'true'
        render :text => 'ok'
      else
        render :text => 'error'
      end
    end
  end
end

I highly recommend the Hominid gem: https://github.com/tatemae-consultancy/hominid

The problem is that Net::HTTP.post_form is not passing the "method" GET parameter. Not being a big Ruby user, I'm not certain what the proper way to do that with Net::HTTP is, but this works:
require "net/http"
data="apikey=blahblahblah"
response = nil
Net::HTTP.start('us2.api.mailchimp.com', 80) {|http|
response = http.post('/1.3/?method=lists', data)
}
p response.body
That's the lists() method (for simplicity), and you'd have to build up (and URL-encode your values!) the full POST params rather than simply providing the hash.
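A minimal sketch of building and URL-encoding the full POST body by hand, assuming Ruby 1.9+ where URI.encode_www_form is available; the key and list id below are placeholders:
require "net/http"
require "uri"

# Build the POST body by hand; URI.encode_www_form escapes the values for us.
params = {
  'apikey'            => 'PLACEHOLDER-us2',
  'id'                => 'PLACEHOLDER',
  'email_address'     => 'email@example.com',
  'merge_vars[FNAME]' => 'FirstName',
  'output'            => 'json'
}
data = URI.encode_www_form(params)

response = nil
Net::HTTP.start('us2.api.mailchimp.com', 80) { |http|
  response = http.post('/1.3/?method=listSubscribe', data)
}
p response.body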
Did you take a look at the many gems already available for ruby?
http://apidocs.mailchimp.com/downloads/#ruby
The bigger problem, and main reason I'm replying to this, is that your API Key is not obfuscated nearly well enough. Granted I'm used to working with them, but I was able to guess it very quickly. I would suggest immediately going and disabling that key in your account and then editing the post to actually have completely bogus data rather than anything close to the correct key. The list id on the other hand, doesn't matter at all.

You'll be able to use your hash if you convert it to JSON before passing it to Net::HTTP. The combined code would look something like:
require "net/http"
require "json"

mailchimp = {}
mailchimp['apikey'] = 'APIKEYAPIKEYAPIKEYAPIKEY'
mailchimp['id'] = '8037342dd1874'
mailchimp['email_address'] = "email@gmail.com"
mailchimp['merge_vars[FNAME]'] = "FirstName"
mailchimp['output'] = 'json'

response = nil
Net::HTTP.start('us2.api.mailchimp.com', 80) { |http|
  response = http.post('/1.3/?method=listSubscribe', mailchimp.to_json)
}
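From there you can decode the body much like in the original controller; a minimal, hedged sketch, assuming the output=json setting above so error responses come back as a JSON hash:
require "json"

result = JSON.parse(response.body) rescue response.body

if result.is_a?(Hash) && result['error']
  puts "#{result['error']} (code #{result['code']})"
else
  puts "subscribed: #{result.inspect}"
end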

Related

Check if certain keywords available in users email using gmail api in ruby on rails

I am trying to check/filter through a user's inbox emails and check for the keywords "offer" and "letter".
Calling messages on the Gmail API returns the message ids and threads, which you can use to get the message contents; I have added those to an array in my example below:
def keyword_check
  client = Signet::OAuth2::Client.new(access_token: session[:access_token])
  service = Google::Apis::GmailV1::GmailService.new
  service.authorization = client
  messages = service.list_user_messages('me')
  @messages_json = []
  messages.messages.each do |m|
    response = HTTParty.get("https://www.googleapis.com/gmail/v1/users/me/messages/#{m.id}?access_token=#{session[:access_token]}")
    res = JSON.parse(response.body)
    @messages_json << res
  end
  filter = HTTParty.get("https://www.googleapis.com/gmail/v1/users/me/messages?q=offer&access_token=#{session[:access_token]}")
  mes = JSON.parse(filter.body)
  render json: @messages_json.to_json
end
This returns all the messages in an array, but I am finding it difficult to filter the array, check for the particular keywords, and return both a boolean (true or false) and the matching message itself in the array.
I think you should check whether the keywords are present in the response.body before adding them to the array:
def keyword_check
  client = Signet::OAuth2::Client.new(access_token: session[:access_token])
  service = Google::Apis::GmailV1::GmailService.new
  service.authorization = client
  messages = service.list_user_messages('me')
  @messages_json = []
  messages.messages.each do |m|
    response = HTTParty.get("https://www.googleapis.com/gmail/v1/users/me/messages/#{m.id}?access_token=#{session[:access_token]}")
    res = JSON.parse(response.body)
    @messages_json << res if matches_keywords(response.body)
  end
  filter = HTTParty.get("https://www.googleapis.com/gmail/v1/users/me/messages?q=offer&access_token=#{session[:access_token]}")
  mes = JSON.parse(filter.body)
  render json: @messages_json.to_json
end

def matches_keywords(content)
  return true if content.include?('offer')
  return true if content.include?('letter')
  false
end
Edit: Be aware that the body probably contains all the HTML formatting, CSS code, etcetera. Suppose, for instance, that there's a CSS rule with 'letter-spacing': the new function will then always return true, so please check whether you can get the TEXT content instead. For this, have a look at the documentation for the Gmail API.
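As a rough sketch of what checking the TEXT content could look like with the JSON already fetched above (field names follow the Gmail API message resource; plain_text_of is a made-up helper, and nested multipart messages aren't handled):
require "base64"

# Pull only the text/plain parts out of a Gmail API message hash
# (res = JSON.parse(response.body) from the loop above).
def plain_text_of(res)
  payload = res['payload'] || {}
  parts   = payload['parts'] || [payload]
  parts
    .select { |part| part['mimeType'] == 'text/plain' && part['body'] && part['body']['data'] }
    .map    { |part| Base64.urlsafe_decode64(part['body']['data']) }
    .join("\n")
end

# Then match keywords against the decoded text instead of the whole body:
# @messages_json << res if matches_keywords(plain_text_of(res))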
Another approach could be to use Kaminara (or an equivalent) to really dive into the HTML structure, and only check the part that holds the actual text (a paragraph or some other specific element).

RSpec tests for RestClient::Request.execute: Any way to see the request?

I am using the RestClient gem to build an API client, and the calls to the API are processed by this method:
def call(api_name, api_endpoint, token = nil, *extra_params)
  endpoint = fetch_endpoint(api_name, api_endpoint)
  params = {}
  endpoint['params'].each_with_index { |p, i| params[p] = endpoint['values'][i] }
  puts params
  if token.nil?
    response = RestClient::Request.execute(method: endpoint['method'], url: endpoint['url'], params: params.to_json)
  else
    response = RestClient::Request.execute(method: endpoint['method'], url: endpoint['url'], headers: {"Authorization" => "Bearer #{token}"}, params: params.to_json)
  end
  response
end
As you can see, all I do is build a hash of parameters/values for the call and invoke RestClient::Request#execute to get a response.
It happens that some of my tests, like this one
it 'request_autorization' do
  obj = SpotifyIntegration.new
  response = obj.call('spotify', 'request_authorization', nil, state: obj.random_state_string)
  myjson = JSON.parse(response.body)
  expect(myjson).to eq({})
end
are returning a 400 Bad request error, and I really don't know why. Other tests, like this one
it 'my_playlists (with no authorization token)' do
  obj = SpotifyIntegration.new
  expect {
    response = obj.call('spotify', 'my_playlists')
  }.to raise_error(RestClient::Unauthorized, '401 Unauthorized')
end
processed by the same method, run perfectly fine.
Is there any way to see the request sent? I mean, to see how RestClient is building/sending my request to the corresponding API? Maybe that way I could understand what is happening.
By "see the request" I mean something like
puts RestClient::Request.prepared_request
or
puts RestClient::Request.prepared_url
I've searched the RestClient documentation and found nothing similar, but maybe some of you know how to do this.
You might try using RestClient.log to get more information. Something like:
RestClient.log = STDOUT
WebMock is also a great test framework for HTTP requests. The tests for rest-client itself make a lot of use of WebMock.
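A minimal sketch combining both ideas in a spec; the stubbed Spotify URL is an assumption, while stub_request, have_requested and RestClient.log are the real WebMock/RestClient APIs:
require 'webmock/rspec'
require 'rest-client'

RSpec.describe 'SpotifyIntegration#call' do
  before do
    # Log every request RestClient sends (method, URL, headers) to stdout.
    RestClient.log = STDOUT

    # Stub the outgoing call so the spec can assert on what was sent.
    stub_request(:get, %r{accounts\.spotify\.com/authorize}).
      to_return(status: 200, body: '{}', headers: { 'Content-Type' => 'application/json' })
  end

  it 'sends the expected request' do
    obj = SpotifyIntegration.new
    obj.call('spotify', 'request_authorization', nil, state: obj.random_state_string)

    # WebMock can also assert on the request it captured.
    expect(WebMock).to have_requested(:get, %r{accounts\.spotify\.com/authorize})
  end
end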

Getting data out of a JSON Response in Rails 3

So I am trying to pull tweets off of Twitter and put them into a Rails app (note: because this is an assignment I can't use the Twitter gem), and I am pretty confused.
I can get the tweets I need in the form of a JSON string, but I'm not sure where to go from there. I know that the Twitter API call I'm making returns a JSON array with a bunch of tweet objects, but I don't know how to get at those tweet objects. I tried JSON.parse but was still unable to get the required data out (I'm not sure what it returns). Here's the code I have so far; I've made it pretty clear with comments/strings what I'm trying for. I'm super new to Rails, so this may be way off for what I'm trying to do.
def get_tweets
  require 'net/http'
  uri = URI("http://search.twitter.com/search.json?q=%23bieber&src=typd")
  http = Net::HTTP.new(uri.host, uri.port)
  request = Net::HTTP::Get.new(uri.request_uri)
  response = http.request(request)
  case response
  when Net::HTTPSuccess then
    # to get: text -> "text", date: "created_at", tweeted by: "from_user", profile img url: "profile_img_url"
    JSON.parse(response.body)
    # Here I need to loop through the JSON array and make n tweet objects with the indicated fields
    t = Tweet.new(:name => "JSON array item i with field from_user", :text => "JSON array item i with field text", :date => "as before")
    t.save
  when Net::HTTPRedirection then
    location = response['location']
    warn "redirected to #{location}"
    fetch(location, limit - 1)
  else
    response.value
  end
end
Thanks!
The JSON.parse method returns a Ruby hash or array representing the JSON object.
In your case, the JSON is parsed as a hash, with the "results" key (inside that you have your tweets) and some metadata: "max_id", "since_id", "refresh_url", etc. Refer to the Twitter documentation for a description of the fields returned.
So again with your example it would be:
parsed_response = JSON.parse(response.body)
parsed_response["results"].each do |tweet|
  t = Tweet.new(:name => tweet["from_user_name"], :text => tweet["text"], :date => tweet["created_at"])
  t.save
end

How to make a PUT request from a Rails model to controller?

I want to update a model via a PUT request in a Rails app. What would be the best way to do that?
Basically I have:
def method
  ...
  @relation = Relation.find(34)
  @relation.name = "new_name"
  @relation.save
end
This gives me errors in SQLite ("cannot start a transaction within a transaction").
Switching to PUT/POST should, I guess, solve the problem. What would be the right way to do it?
So after some time, I actually found the solution. Here is the code for the Resque worker that updates the Relation model via PUT. Using this method I don't get SQLite busy exception errors.
require "net/http"
require "json"

class VideoCollector
  def self.perform(rel_id)
    @relation = Relation.find_by_id(rel_id)
    @url = Rails.application.routes.url_helpers.relation_url(@relation)
    @uri = URI(@url)
    @body = {"collected" => "true"}.to_json
    request = Net::HTTP::Put.new(@uri.path, initheader = {'Content-Type' => 'application/json'})
    request.body = @body
    response = Net::HTTP.new(@uri.host, @uri.port).start { |http| http.request(request) }
  end
end
Maybe that will be useful to someone.
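For completeness, a hedged usage sketch; Resque.enqueue is the standard Resque API, and the queue name is an assumption:
# Assuming the worker declares a queue, e.g. `@queue = :video_collection`,
# it can be enqueued from a controller or model callback like this:
Resque.enqueue(VideoCollector, relation.id)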

Using OpenUri, how can I get the contents of a redirecting page?

I want to get data from this page:
http://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?trackingNumber=0656887000494793
But that page forwards to:
http://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?execution=eXs1
So, when I use open from OpenURI to try and fetch the data, it throws a RuntimeError saying "HTTP redirection loop".
I'm not really sure how to get that data after it redirects and throws that error.
You need a tool like Mechanize. From its description:
The Mechanize library is used for automating interaction with websites. Mechanize automatically stores and sends cookies, follows redirects, can follow links, and submit forms. Form fields can be populated and submitted. Mechanize also keeps track of the sites that you have visited as a history.
which is exactly what you need. So,
sudo gem install mechanize
then
require 'mechanize'
agent = WWW::Mechanize.new
page = agent.get "http://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?trackingNumber=0656887000494793"
page.content # Get the resulting page as a string
page.body # Get the body content of the resulting page as a string
page.search(".somecss") # Search for specific elements by XPath/CSS using nokogiri
and you're ready to rock 'n' roll.
The site seems to be doing some of the redirection logic with sessions. If you don't send back the session cookies they are sending on the first request you will end up in a redirect loop. IMHO it's a crappy implementation on their part.
However, I tried to pass the cookies back to them, but I didn't get it to work, so I can't be completely sure that that is all that's going on here.
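For reference, a minimal sketch of that "pass the cookies back" approach with plain Net::HTTP; fetch_with_cookies is a made-up helper, and I haven't verified it against Canada Post's flow:
require "net/http"
require "uri"

# Follow redirects manually, sending back any cookies the server set.
def fetch_with_cookies(url, limit = 5)
  uri = URI.parse(url)
  cookies = []
  response = nil
  limit.times do
    response = Net::HTTP.start(uri.host, uri.port) do |http|
      request = Net::HTTP::Get.new(uri.request_uri)
      request["Cookie"] = cookies.join("; ") unless cookies.empty?
      http.request(request)
    end
    break unless response.is_a?(Net::HTTPRedirection)
    # Keep the "name=value" part of each Set-Cookie header and follow the redirect.
    cookies |= Array(response.get_fields("Set-Cookie")).map { |c| c.split(";").first }
    uri = URI.join(uri.to_s, response["location"])
  end
  response
end

page = fetch_with_cookies("http://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?trackingNumber=0656887000494793")
puts page.body if page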
While Mechanize is a wonderful tool, I prefer to "cook" my own thing.
If you are serious about parsing, you can take a look at this code. It is used to crawl thousands of sites on an international level every day, and as far as I have researched and tweaked, there isn't a more stable approach to this that also allows you to highly customize it later to your needs.
require "open-uri"
require "zlib"
require "nokogiri"
require "sanitize"
require "htmlentities"
require "readability"
def crawl(url_address)
self.errors = Array.new
begin
begin
url_address = URI.parse(url_address)
rescue URI::InvalidURIError
url_address = URI.decode(url_address)
url_address = URI.encode(url_address)
url_address = URI.parse(url_address)
end
url_address.normalize!
stream = ""
timeout(8) { stream = url_address.open(SHINSO_HEADERS) }
if stream.size > 0
url_crawled = URI.parse(stream.base_uri.to_s)
else
self.errors << "Server said status 200 OK but document file is zero bytes."
return
end
rescue Exception => exception
self.errors << exception
return
end
# extract information before html parsing
self.url_posted = url_address.to_s
self.url_parsed = url_crawled.to_s
self.url_host = url_crawled.host
self.status = stream.status
self.content_type = stream.content_type
self.content_encoding = stream.content_encoding
self.charset = stream.charset
if stream.content_encoding.include?('gzip')
document = Zlib::GzipReader.new(stream).read
elsif stream.content_encoding.include?('deflate')
document = Zlib::Deflate.new().deflate(stream).read
#elsif stream.content_encoding.include?('x-gzip') or
#elsif stream.content_encoding.include?('compress')
else
document = stream.read
end
self.charset_guess = CharGuess.guess(document)
if not self.charset_guess.blank? and (not self.charset_guess.downcase == 'utf-8' or not self.charset_guess.downcase == 'utf8')
document = Iconv.iconv("UTF-8", self.charset_guess, document).to_s
end
document = Nokogiri::HTML.parse(document,nil,"utf8")
document.xpath('//script').remove
document.xpath('//SCRIPT').remove
for item in document.xpath('//*[translate(#src, "ABCDEFGHIJKLMNOPQRSTUVWXYZ", "abcdefghijklmnopqrstuvwxyz")]')
item.set_attribute('src',make_absolute_address(item['src']))
end
document = document.to_s.gsub(/<!--(.|\s)*?-->/,'')
self.content = Nokogiri::HTML.parse(document,nil,"utf8")
end
