In our structured data area (in <head>) we have this
"potentialAction": {
"#type": "SearchAction",
"target": "<%= search_url(search: {q: "{search_term_string}" }) %>",
"query-input": "required name=search_term_string"
}
Watch "target". It show me the link in this way
It show me
https://www.mywebsite.com/search?utf8=%E2%9C%93&search%5Bq%5D=%7Bsearch_term_string%7D
It doesn't show { or } but %7B and %7D
How how to solve?
I already tried to fix it using
"{search_term_string}".html_safe
or
%({search_term_string})
or
%({search_term_string}).html_safe
but nothing worked.
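The braces are percent-encoded by the URL helper itself when it builds the query string; html_safe only affects HTML escaping in the view, so it cannot help here. A quick console check (using plain Ruby's CGI module, which escapes the same characters) shows the same encoding:

require 'cgi'

# { and } are not URL-safe characters, so the query-string builder escapes them
CGI.escape('{search_term_string}')
# => "%7Bsearch_term_string%7D"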
You need to produce a template of a URL (not necessarily a valid URL itself, but a template that can be used to produce a valid URL). You know exactly what you need to replace, you can be pretty certain that it will only occur in one place, and you know that everything is safe because you're controlling everything and GoogleBot knows what it is looking for (presumably).
I'd go ahead and patch up the encoding by hand:
search_url(search: { q: "{search_term_string}" }).sub('%7Bsearch_term_string%7D', '{search_term_string}')
If you think the pattern will appear more than once (which is highly unlikely), use gsub instead of sub.
You could also be more explicit about what you're doing:
.sub('%7Bsearch_term_string%7D') { |encoded| URI.decode(encoded) }
Or put it all in a helper (say search_url_for_microdata) so that you can leave a note to your future self about why you're doing this:
# Untangle URL encoding issues with json+ld microdata for GoogleBot.
def search_url_for_microdata
  search_url(search: { q: '{search_term_string}' })
    .sub('%7Bsearch_term_string%7D') { |encoded| URI.decode(encoded) }
end
or even:
# Untangle URL encoding issues with json+ld microdata for GoogleBot.
def search_url_for_microdata
  decoded = '{search_term_string}'
  encoded = URI.encode(decoded)
  search_url(search: { q: decoded }).sub(encoded, decoded)
end
or:
DECODED = '{search_term_string}'
ENCODED = URI.encode(DECODED)
# Untangle URL encoding issues with application/ld+json microdata for GoogleBot.
def search_url_for_microdata
  search_url(search: { q: DECODED }).sub(ENCODED, DECODED)
end
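With a helper like that in place, the structured data block can call it instead of search_url directly. A sketch of how that might look (the root_url call and the surrounding WebSite properties are just placeholders for whatever your page already emits):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "<%= root_url %>",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "<%= search_url_for_microdata %>",
    "query-input": "required name=search_term_string"
  }
}
</script>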
I'm attempting to send a file to OneDrive using the following code:
$uri = "/me/drive/items/$folderId/children('{$fileName}')/content";
$graph = $this->graph->create($user);
$client = $this->graph->createClient();
$item = $graph->createRequest("PUT", $uri)
    ->attachBody($fileContent)
    ->setReturnType(Model\DriveItem::class)
    ->execute($client);
This works great if $fileName is something like Test.doc
But for some reason, when the filename contains a hash (#), I get an error:
object(Microsoft\Graph\Model\DriveItem)#1509 (1) {
  ["_propDict":protected]=>
  array(1) {
    ["error"]=>
    array(3) {
      ["code"]=>
      string(10) "BadRequest"
      ["message"]=>
      string(36) "Bad Request - Error in query syntax."
      ["innerError"]=>
      array(2) {
        ["request-id"]=>
        string(36) "ff3fe15f-b1ee-4e92-8abd-2400b1c1b5cf"
        ["date"]=>
        string(19) "2018-10-04T14:30:51"
      }
    }
  }
}
Can someone possibly clarify whether this is a bug or actual behaviour (i.e. you cannot have a # in a filename)?
Thanks
I guess you are utilizing the Microsoft Graph Library for PHP; special characters such as # need to be escaped.
So, either replace the hash with %23 (percent encoding) or use the rawurlencode function as shown below:
$fileName = rawurlencode("Guide#.docx");
$requestUrl = "https://graph.microsoft.com/v1.0/drives/$driveId/root:/$fileName:/content";
try {
    $item = $client->createRequest("PUT", $requestUrl)
        ->attachBody($fileContent)
        ->setReturnType(Model\DriveItem::class)
        ->execute();
} catch (\Microsoft\Graph\Exception\GraphException $ex) {
    print $ex;
}
Although # is now a supported character in file names, that does not mean the product team updated the API (or adjusted the existing API) at the same time; the API you are using may not have been fully adjusted to the latest naming rules. So it should be considered actual behaviour for now rather than a bug, or you can treat it as a not-yet-implemented feature.
There is a related issue in the SharePoint dev issue list; although it isn't the same one, the suggestion is the same: vote for the existing feature request or submit a new one on UserVoice.
I want to look at the full URL the HTTParty gem has constructed from my parameters, either before or after it is submitted; it doesn't matter.
I would also be happy grabbing this from the response object, but I can’t see a way to do that either.
(Bit of background)
I’m building a wrapper for an API using the HTTParty gem. It’s broadly working, but occasionally I get an unexpected response from the remote site, and I want to dig into why – is it something I’ve sent incorrectly? If so, what? Have I somehow malformed the request? Looking at the raw URL would be good for troubleshooting but I can’t see how.
For example:
HTTParty.get('http://example.com/resource', query: { foo: 'bar' })
Presumably generates:
http://example.com/resource?foo=bar
But how can I check this?
In one instance I did this:
HTTParty.get('http://example.com/resource', query: { id_numbers: [1, 2, 3] })
But it didn't work. Through experimenting I was able to produce this, which worked:
HTTParty.get('http://example.com/resource', query: { id_numbers: [1, 2, 3].join(',') })
So clearly HTTParty’s default approach to forming the query string didn’t align with the API designers’ preferred format. That’s fine, but it was awkward to figure out exactly what was needed.
You didn't pass the base URI in your example, so it wouldn't work.
Correcting that, you can get the entire URL like this:
res = HTTParty.get('http://example.com/resource', query: { foo: 'bar' })
res.request.last_uri.to_s
# => "http://example.com/resource?foo=bar"
Using a class:
class Example
  include HTTParty
  base_uri 'example.com'

  def resource
    self.class.get("/resource", query: { foo: 'bar' })
  end
end
example = Example.new
res = example.resource
res.request.last_uri.to_s
# => "http://example.com/resource?foo=bar"
You can see all of the information of the requests HTTParty sends by first setting:
class Example
  include HTTParty
  debug_output STDOUT
end
Then it will print the request info, including URL, to the console.
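If you only need this for a one-off call, the same option can also (as far as I know, in reasonably recent HTTParty versions) be passed per request instead of at the class level:

require 'httparty'

# Prints the raw request and response, including the full URL, to the console
HTTParty.get('http://example.com/resource',
             query: { foo: 'bar' },
             debug_output: $stdout)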
As explained here, if you need to get the URL before making the request, you can do
HTTParty::Request.new(:get, '/my-resources/1', query: { thing: 3 }).uri.to_s
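And if the question is specifically how HTTParty will serialize a query hash (as with the id_numbers example above), its HashConversions module can be called directly, without making a request at all; the exact escaping may differ slightly between versions:

require 'httparty'

# Default serialization uses Rails-style array parameters...
HTTParty::HashConversions.to_params(id_numbers: [1, 2, 3])
# => "id_numbers[]=1&id_numbers[]=2&id_numbers[]=3" (or similar)

# ...whereas joining first yields a single comma-separated value:
HTTParty::HashConversions.to_params(id_numbers: [1, 2, 3].join(','))
# => "id_numbers=1%2C2%2C3"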
I know how to access a header in Rails
request.headers["HEADER_NAME"]
However, I want to get all the headers passed by the browser. I see that I can enumerate them:
request.headers.each { |header| ... }
However, this will spit out both headers and other environment variables. Is there a way to get only headers?
Update 1
My problem isn't iteration. My problem is distinguishing between environment variables and headers. Both of them are reported when iterating with each or keys.
Solution
By convention, headers do not usually contain dots. Nginx even rejects requests with dots in header names by default. So I think it's quite a safe assumption to go with.
By contrast, all the Rails environment garbage is namespaced, e.g. action_dispatch.show_exceptions, rack.input, etc.
These two facts conveniently suggest a way to distinguish external headers from internal variables:
request.headers.env.reject { |key| key.to_s.include?('.') }
Works neat.
Benchmarking a bit
Note that the include?('.') implementation runs about 4 times faster than matching with =~ /\./ (hsh below is the headers hash from above):
Benchmark.measure { 500000.times { hsh.reject { |key| key.to_s =~ /\./ } } }
=> real=2.09
Benchmark.measure { 500000.times { hsh.reject { |key| key.to_s.include?('.') } } }
=> real=0.58
Hope that helps.
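If you want this available across controllers, one way to package it (the browser_headers name is just an illustration) is a small helper method:

class ApplicationController < ActionController::Base
  private

  # External request headers only; internal Rack/Rails keys are namespaced
  # with dots (rack.input, action_dispatch.show_exceptions, ...) and get dropped.
  def browser_headers
    request.headers.env.reject { |key| key.to_s.include?('.') }
  end
end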
By using
request.headers.each { |key, value| }
This iterates over the request headers as (key + value) pairs, but if you want a specific value you have to use a key name like HTTP_KEYNAME, because when the HTTP request comes in, HTTP_ is prepended to the keys. Also be sure to use uppercase, because it is case sensitive.
For example:
If we have passed auth_token as a request header and want to access it, we can use this:
request.headers["HTTP_AUTH_TOKEN"]
You can try this to get only the list of headers from the request:
request.headers.first(50).to_h.keys
It converts the request.headers object into an array and then into a hash, giving you the list of all keys in the request, which can then be used as
request.headers["keyname"]
It might not be very efficient, but I think it can do the job.
Hope this helps.
Not sure if this is of any help, but I ended up using this brute-force approach:
request.env.select { |k, v|
  k.match("^HTTP.*|^CONTENT.*|^REMOTE.*|^REQUEST.*|^AUTHORIZATION.*|^SCRIPT.*|^SERVER.*")
}
You are probably looking for:
request.env
This will basically create a Ruby Hash of the whole request object.
For more details, check this question:
How do I see the whole HTTP request in Rails
If you just want headers:
request.headers.to_h.select { |k, v|
  ['HTTP', 'CONTENT', 'AUTHORIZATION'].any? { |s| k.to_s.starts_with? s }
}
If you want everything that's not an env var:
request.headers.to_h.select { |k, v|
  ['HTTP', 'CONTENT', 'REMOTE', 'REQUEST', 'AUTHORIZATION', 'SCRIPT', 'SERVER'].any? { |s|
    k.to_s.starts_with? s
  }
}
You should be able to do
request.headers.each { |key, value| }
In general, when iterating over a hash, Ruby looks at the arity of your block and gives you either a pair (key + value) or separate variables. (The hash in this case is an object internal to the headers object.)
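A quick illustration of that arity behaviour with a plain hash:

h = { 'Content-Type' => 'text/html', 'Host' => 'example.com' }

# One block parameter: each entry arrives as a two-element [key, value] array.
h.each { |pair| puts pair.inspect }               # ["Content-Type", "text/html"] ...

# Two block parameters: the pair is destructured into key and value.
h.each { |key, value| puts "#{key}: #{value}" }   # Content-Type: text/html ...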
The raw email is
abc+2@gmail.com
In the Haml file, I have JavaScript like this:
$('#Search_email').on('click', function(e) {
  var val = encodeURIComponent($('#search_email').val());
  window.location.search = 'email=' + val;
});
In the query URL, it shows up correctly as
url?email=abc%2B2%40gmail.com
However, in the controller, when I use the debugger to inspect it, it shows only
params[:email] = "abc 2@gmail.com"
Does anyone know what is happening, and why Rails decodes it this way? Thanks.
I have a rails application which processes incoming emails via IMAP. Currently a method is used that searches the parts of a TMail object for a given content_type:
def self.search_parts_for_content_type(parts, content_type = 'text/html')
  parts.each do |part|
    if part.content_type == content_type
      return part.body
    else
      if part.multipart?
        if body = self.search_parts_for_content_type(part.parts, content_type)
          return body
        end
      end
    end
  end
  return false
end
These emails are generally in response to a html email it sent out in the first place. (The original outbound email is never the same.) The body text the method above returns contains the full history of the email and I would like to just parse out the reply text.
I'm wondering whether it's reasonable to place some '---please reply above this line---' text at the top of the mail as I have seen in a 37 signals application.
Is there another way to ignore the client-specific additions to the email, other than writing a multitude of regular expressions (which I haven't yet attempted) for each and every mail client? They all seem to tack on their own bit at the top of any replies.
I have to do email reply parsing on a project I'm working on right now. I ended up using pattern matching to identify the response part, so users wouldn't have to worry about where to insert their reply.
The good news is that the implementation really isn't too difficult. The hard part is just testing all the different email clients and services you want to support and figuring out how to identify each one. Generally, you can use either the message ID or the X-Mailer or Return-Path header to determine where an incoming email came from.
Here's a method that takes a TMail object and extracts the response part of the message and returns that along with the email client/service it was sent from. It assumes you have the original message's From: name and address in the constants FROM_NAME and FROM_ADDRESS.
def find_reply(email)
  message_id = email.message_id('')
  x_mailer = email.header_string('x-mailer')
  # For optimization, this list could be sorted from most popular to least popular email client/service
  rules = [
    [ 'Gmail', lambda { message_id =~ /.+gmail\.com>\z/ }, /^.*#{FROM_NAME}\s+<#{FROM_ADDRESS}>\s*wrote:.*$/ ],
    [ 'Yahoo! Mail', lambda { message_id =~ /.+yahoo\.com>\z/ }, /^_+\nFrom: #{FROM_NAME} <#{FROM_ADDRESS}>$/ ],
    [ 'Microsoft Live Mail/Hotmail', lambda { email.header_string('return-path') =~ /<.+@(hotmail|live).com>/ }, /^Date:.+\nSubject:.+\nFrom: #{FROM_ADDRESS}$/ ],
    [ 'Outlook Express', lambda { x_mailer =~ /Microsoft Outlook Express/ }, /^----- Original Message -----$/ ],
    [ 'Outlook', lambda { x_mailer =~ /Microsoft Office Outlook/ }, /^\s*_+\s*\nFrom: #{FROM_NAME}.*$/ ],
    # TODO: other email clients/services
    # Generic fallback
    [ nil, lambda { true }, /^.*#{FROM_ADDRESS}.*$/ ]
  ]
  # Default to using the whole body as the reply (maybe the user deleted the original message when they replied?)
  notes = email.body
  source = nil
  # Try to detect which email service/client sent this message
  rules.find do |r|
    if r[1].call
      # Try to extract the reply. If we find it, save it and cancel the search.
      reply_match = email.body.match(r[2])
      if reply_match
        notes = email.body[0, reply_match.begin(0)]
        source = r[0]
        next true
      end
    end
  end
  [notes.strip, source]
end
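Usage would be something along these lines (assuming raw_email holds the raw message source):

email = TMail::Mail.parse(raw_email)
reply_text, client = find_reply(email)

puts "Detected client: #{client || 'unknown'}"
puts reply_text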
I think you will be stuck on this one. I have been doing some stuff with emails myself in TMail recently, and what you will generally find is that an email with an HTML part is structured like this:
part 1 - multipart/mixed
  sub part 1 - text/plain
  sub part 2 - text/html
end
The email clients I have played with (Outlook and Gmail) both generate replies in this format, and they generally just quote the original email inline in the reply. At first I thought that the 'old' parts of the original email would be separate parts, but they are actually not - the old part is just merged into the reply part.
You could search the part for a line that begins with 'From: ' (as most clients generally place a header at the top of the original email text detailing who sent it, etc.), but it's probably not guaranteed.
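A rough sketch of that heuristic, cutting the body at the first quoted 'From:' header (just an illustration; I wouldn't rely on it for every client):

def strip_quoted_history(body)
  # Cut at the first line that looks like a quoted header block ("From: ...").
  cut_at = body =~ /^From: /
  cut_at ? body[0...cut_at].rstrip : body
end

reply_text = strip_quoted_history(email_body)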
I don't really see anything wrong with a '--- please reply above this line ---' marker generally; it's not that invasive, and it could make things a lot simpler.