StreamedResponse not working in Lumen on other servers

I am using Lumen for a set of APIs, using the StreamedResponse class built into Symfony:
use Symfony\Component\HttpFoundation\StreamedResponse;
protected function getFileResponseHeaders($filename)
{
    return [
        'Cache-Control' => 'must-revalidate, post-check=0, pre-check=0',
        'Content-Type' => 'text/csv; charset=utf-8',
        'Content-Disposition' => 'attachment; filename='.$filename,
        'Expires' => '0',
        'Pragma' => 'public'
    ];
}
protected function streamFile($callback, $headers)
{
    $response = new StreamedResponse($callback, 200, $headers);
    $response->send();
}
I am using this approach in a scenario where I want to stream data from the command line in chunks of 2,000 rows; I have up to 7 million rows to stream.
This works completely fine on a server with the following specs:
PHP 7.3.27
CentOS 7
Apache 2.4.41
MySQL 8
But I have other servers where this stream only returns the first batch. The other servers are identical to each other, with the following specs:
PHP 7.4
CentOS 8
Apache 2.4.47
MySQL 8
I would like guidance on getting this stream to run on all the servers. I have compared php.ini and everything else I can think of.
Thanks in advance.

PHP ZTS (Zend Thread Safety) was missing from all the servers except the one where the streamed response was working.
Installing the ZTS build of PHP on the other servers finally fixed the problem for me.
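For anyone comparing servers the same way: thread safety shows up in the phpinfo() output, so a quick check from the shell (assuming the php CLI binary matches the build your web server uses) is:
php -i | grep "Thread Safety"
# Thread Safety => enabled   (ZTS build)
# Thread Safety => disabled  (NTS build)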

Related

Savon adds paragraphs to base64 string

For half a year now I've been trying to add a Base64-encoded PDF from my Rails app to an order in Plentymarkets via SOAP with Savon. This didn't work as easily as described, so I contacted the Plenty support team, who told me the error was caused by my Base64 string containing newlines.
So I did:
file = open(@kvas.pdf_attachment.url).read
@data = Base64.encode64(file).gsub(/\n/, '')
But even though I tried strict_encode64, urlsafe_encode64, and several variations of .gsub("this", 'that'), and read tons of threads about Base64 encoding, I always end up with line breaks or paragraphs in the Base64 string shown in the XML request sent via Savon.
Gedit shows the string copied from the request equally damaged, with newlines starting at + (each + provokes a newline) or / (here it's more sporadic), until I switch off automatic line wrapping.
Does Savon insert line breaks into the Base64 string, and can I switch that behaviour off?
Here is my Savon call:
client = Savon.client(
  :wsdl => @settings.wsdladdr,
  :soap_header => {
    "verifyingToken" => {
      "UserID" => @tokens.userid,
      "Token" => @tokens.token
    }
  },
  :open_timeout => 20,
  :read_timeout => 20,
  :pretty_print_xml => false,
  :log => false,
  :mime_multipart => true
)
response = client.call(:add_document, message: {
  :oPlentySoapRequest_AddDocument => {
    "DocumentList" => {
      "item" => {
        "OrderDocumentType" => "RepairBill",
        "Document" => { "FileData" => "#{@data}", "FileEnding" => ".pdf", "FileName" => "66667" },
        "OrderID" => "4009",
        "CallItemsLimit" => "1"
      }
    }
  }
})
Well, after half a year of trying everything up and down the net, I found an answer (one day after posting this question).
It's all easy:
Base64.encode64(Base64.encode64(file)).gsub(/\n/, '')
This did the trick; now the Base64 string is a one-liner.
Who would have thought of that? A double-encoded Base64 string?!
I have to say, good documentation saves you from a lot of trouble.
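For reference, the standard way to get a newline-free Base64 string in Ruby is Base64.strict_encode64, which never inserts line breaks. A minimal sketch (the file path is a placeholder, and whether the Plentymarkets endpoint accepts a single-encoded payload is a separate question):
require 'base64'
file = File.binread('document.pdf')   # placeholder path; read the PDF as raw bytes
data = Base64.strict_encode64(file)   # RFC 4648 encoding: no newlines inserted
data.include?("\n")                   #=> false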

Ruby Telnet: How to send an HTTP request using telnet

I am using the Ruby telnet library to make an HTTP GET request (http://127.0.0.1:3000/test), but I am not able to get a response from my server.
Below is the code that I am trying:
require 'net/telnet'
webserver = Net::Telnet::new('Host' => '127.0.0.1', 'Port' => 3000, 'Telnetmode' => false)
size = 0
webserver.cmd("GET / HTTP/1.1\nHost: 127.0.0.1/test") do |c|
print c
end
Please let me know what I am doing wrong here.
You need to terminate each header line with a carriage return and line feed, and end the request with a blank line; otherwise the HTTP server will keep waiting for more headers:
webserver.cmd("GET /test HTTP/1.1\r\nHost: 127.0.0.1\r\n\r\n") do |c|
print c
end
But telnet really isn't the right tool for this (unless you're just experimenting). If you want to make HTTP requests from a real-world program, you should definitely use a proper HTTP library (net/http at the least, or something like Faraday would be even better). HTTP seems simple, but there are many hidden complexities that make writing a request writer/response parser from scratch a lot of work.
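For comparison, a minimal net/http sketch of the same request (using the localhost URL from the question):
require 'net/http'
uri = URI.parse('http://127.0.0.1:3000/test')
response = Net::HTTP.get_response(uri)   # builds the request line, headers, and CRLFs for you
puts response.code
puts response.body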

SOAP request in Ruby with authentication/credentials

I am trying to send XML/SOAP data to a server using an HTTP POST request, starting by converting a working Perl script to Ruby on Rails. Using some resources, I have written some preliminary code, but I am unsure how to add user authentication (running this code causes a connection timeout error).
My code so far:
http = Net::HTTP.new(' [host] ', [port])
path = '/rest/of/the/path'
data = [ XML SOAP string ]
headers = {
  'Content-Type' => 'application/atom+xml',
  'Host' => ' [host] '
}
resp, data = http.post(path, data, headers)
Adding http.basic_auth 'user', 'pass' gave me a NoMethodError.
Perl code for supplying credentials:
my $ua = new LWP::UserAgent(keep_alive=>1);
$ua->credentials($netloc, '', "$user", "$pwd");
...
my $request = POST ($url, Content_Type=> 'application/atom+xml', Content=> $soap_req);
my $response = $ua->request($request);
The server uses NTLM, so maybe there is a gem you could recommend (like this?). It looks like the Perl script is using a user agent, so I would like to do something similar in Ruby. In summary: how do I add user authentication to my request?
Have you looked at the savon gem, https://github.com/savonrb/savon?
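As a side note on the NoMethodError above: in net/http, basic_auth is defined on the request object, not on the Net::HTTP session. A minimal sketch (host and path are the question's placeholders; this covers Basic auth only, and NTLM itself needs a dedicated gem such as rubyntlm):
require 'net/http'
uri = URI.parse('http://example.com/rest/of/the/path')   # placeholder host/path
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new(uri.request_uri)
request['Content-Type'] = 'application/atom+xml'
request.basic_auth('user', 'pass')   # lives on the request, hence the NoMethodError on http
request.body = '<xml soap string>'   # placeholder payload
response = http.request(request)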

Ruby aws-sdk - timeout error

I am trying to upload a file to S3 with the following simple code:
bucket.objects.create("sitemaps/event/#{file_name}", open(file))
I get the following:
Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
What could be going wrong? Any tips will be appreciated.
This timeout generally happens when the content length cannot be correctly determined from the opened file. S3 is waiting for additional bytes that aren't coming. The fix is pretty simple: just open your file in binary mode.
Ruby 1.9
bucket.objects.create("sitemaps/event/#{file_name}", open(file, 'rb', :encoding => 'BINARY'))
Ruby 1.8
bucket.objects.create("sitemaps/event/#{file_name}", open(file, 'rb'))
The aws-sdk gem will handle this for you if you pass in the path to the file:
# use a Pathname object
bucket.objects.create(key, Pathname.new(path_to_file))
# or just the path as a string
bucket.objects.create(key, :file => path_to_file)
Also, you can write to an object in S3 before it exists, so you could also do:
# my favorite syntax
obj = s3.buckets['bucket-name'].objects['object-key'].write(:file => path_to_file)
Hope this helps.
Try modifying the timeout parameters and see if the problem persists.
From the AWS website: http://aws.amazon.com/releasenotes/5234916478554933 (New Configuration Options)
# the new configuration options are:
AWS.config.http_open_timeout #=> new session timeout (15 seconds)
AWS.config.http_read_timeout #=> read response timeout (60 seconds)
AWS.config.http_idle_timeout #=> persistent connections idle longer than this are closed (60 seconds)
AWS.config.http_wire_trace #=> When true, HTTP wire traces are logged (false)
# you can update the timeouts (in seconds)
AWS.config(:http_open_timeout => 5, :http_read_timeout => 120)
# log to the rails logger
AWS.config(:http_wire_trace => true, :logger => Rails.logger)
# send wire traces to standard out
AWS.config(:http_wire_trace => true, :logger => nil)

Twitter search API blocked from Amazon EC2 in Ruby only, not curl... is this Net::HTTP?

This is a weird one that anyone can repro at home (I think). I am trying to write a simple service, hosted on EC2, that runs searches on Twitter. Twitter returns errors 100% of the time when the request is made from Ruby, but not from other languages, which would indicate it's not an IP-blocking issue. Here is an example:
admin@ec2-xx-101-152-xxx-production:~$ irb
irb(main):001:0> require 'net/http'
=> true
irb(main):002:0> res = Net::HTTP.post_form(URI.parse('http://search.twitter.com/search.json'), {'q' => 'twitter'})
=> #<Net::HTTPBadRequest 400 Bad Request readbody=true>
irb(main):003:0> exit
admin@ec2-xx-101-152-xxx-production:~$ curl http://search.twitter.com/search.json?q=twitter
{"results":[{"text":"\"Social Media and SE(Search Engine) come side by side to help promote your business and bran...<snip/>
As you can see, curl works but irb does not. When I run it in irb on my local Windows box, it succeeds:
$ irb
irb(main):001:0> require 'net/http'
=> true
irb(main):002:0> res = Net::HTTP.post_form(URI.parse('http://search.twitter.com/search.json'), {'q' => 'twitter'})
=> #<Net::HTTPOK 200 OK readbody=true>
This is confusing... if there were some kind of core bug in Net::HTTP, I would expect it to show up on both Windows and Linux, and if I were being blocked by IP, then curl shouldn't work either. I also tried this on a fresh Amazon instance with a fresh IP address.
Anyone should be able to repro this, because I'm using the ec2onrails AMI:
ec2-run-instances ami-5394733a -k testkeypair
Just SSH in after that and run the simple lines above. Anyone have ideas about what's going on?
Thanks!
Check the Twitter API changelog. They are blocking requests from EC2 that don't have a User-Agent header in the HTTP request because people are using EC2 to find terms to spam.
Twitter recommends setting the User-Agent to your domain name, so they can check out sites that are causing problems and get in touch with you.
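A minimal net/http sketch of adding the header (the User-Agent value is a placeholder; Net::HTTP.post_form doesn't take headers, so the request is built explicitly):
require 'net/http'
uri = URI.parse('http://search.twitter.com/search.json?q=twitter')
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Get.new(uri.request_uri)
request['User-Agent'] = 'yourdomain.com'   # placeholder: use your own domain
response = http.request(request)
puts response.code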
Twitter returns the HTTP 400 error when a single client exceeds the maximum number of requests per hour. I don't know how your EC2 instance is configured, so I don't know whether your requests are identified by a shared Amazon IP or a custom IP. In the first case, it's reasonable to think the limit is reached in a very short amount of time.
More details are available in the Twitter API documentation:
error codes
rate limiting
For more details about the reason for the error, read the response body and headers. You should find an error message and some X-RateLimit Twitter headers:
require 'net/http'
response = Net::HTTP.post_form(URI.parse('http://search.twitter.com/search.json'), {'q' => 'twitter'})
p response.to_hash   # Net::HTTPResponse has no #headers method; #to_hash returns the header fields
p response.body
Thanks for the info. Putting my domain in the User-Agent header fixed the same problem for me. I'm running http://LocalChirps.com on EC2 servers.
cURL code snippet (PHP):
$twitter_api_url = 'http://search.twitter.com/search.atom?rpp='.$count.'&page='.$page;
$ch = curl_init($twitter_api_url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, 'LocalChirps.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$twitter_data = curl_exec($ch);
$httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if ($httpcode != 200) {
    //echo 'error calling twitter';
    return;
}
