My Rails application uses AWS SDK v3 to invoke Lambda functions as follows:
lambda_client = Aws::Lambda::Client.new(client_config)
lambda_return_value = lambda_client.invoke(
  {
    function_name: function_name,
    invocation_type: 'RequestResponse',
    log_type: 'None',
    payload: generated_payload,
  }
)
Most of my Lambda functions execute successfully, but the ones that take longer than ~60 seconds result in the following exception on the Ruby side, even though the Lambda executes completely:
A Seahorse::Client::NetworkingError occurred in background at 2019-07-11 00:47:18 -0500 :
Net::ReadTimeout
I have gone through the documentation and cannot find a way to set a longer timeout for my Lambda invocation. Any ideas how to get Ruby to wait for the invocation and not time out?
Hi, the Aws::Lambda::Client default read timeout is 60 seconds, but you can change this when creating a new client. Set :http_read_timeout in your client_config:
client_config = {
  # ...
  http_read_timeout: 100
}
then create a new client:
lambda_client = Aws::Lambda::Client.new(client_config)
For more reference: https://docs.aws.amazon.com/sdkforruby/api/Aws/Lambda/Client.html
I hope that helps.
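The exception itself comes from Ruby's Net::HTTP read timeout, which is what :http_read_timeout configures under the hood. A minimal sketch that reproduces the same Net::ReadTimeout locally, with no AWS involved (the throwaway server and the 0.2-second timeout are purely illustrative):

```ruby
require 'net/http'
require 'socket'

# A throwaway local server that accepts the connection but never responds,
# simulating a Lambda invocation that outlives the client's read timeout.
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]
Thread.new { server.accept }

http = Net::HTTP.new('127.0.0.1', port)
http.read_timeout = 0.2 # seconds; :http_read_timeout sets this on the SDK's connection

begin
  http.get('/')
rescue Net::ReadTimeout => e
  puts "raised #{e.class}" # the same exception class reported in the question
end
```

Raising the read timeout above your function's worst-case duration (or switching to an 'Event' invocation and polling for the result) makes this go away.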
I have to call a microservice (M1) from within another microservice (M2). Since there will be a lot of HTTP requests to M1, I am using a connection pool via the persistent_http gem; please check out the link https://www.rubydoc.info/gems/persistent_http/2.0.3.
I have defined the two methods in the class as self.send_get_message and self.send_post_message, so whenever I have to make a request I call the method directly on the class. Is this the correct way of defining the pool and using the GET and POST methods?
class HttpClientPool
  @@persistent_http = PersistentHTTP.new(
    name: 'MyHTTPClient',
    logger: Rails.logger,
    pool_size: 10,
    warn_timeout: 0.25,
    force_retry: true,
    url: "http://m1.com/",
    read_timeout: 2,
    open_timeout: 1,
  )

  @@x = 1

  def self.send_get_message(path)
    puts "--path = #{path}"
    @@x = @@x + 1
    puts "---var is #{@@x}"
    request = Net::HTTP::Get.new(path)
    @@persistent_http.request(request)
  end
end
Now, whenever I call HttpClientPool.send_get_message to send a GET request and print @@x, the value should be incremented. On my local machine this seems to work fine, but when I deploy to a remote server the value of @@x comes out seemingly at random (mostly 2, 3, 4, 5, 6) and does not increase consistently.
What type of server do you have?
On your local machine, Ruby's internal thread lock might make the variable increase consistently, but in a multi-threaded environment it could be accessed and incremented in a "random" way. Also, if the server runs multiple worker processes (e.g. Puma or Unicorn in clustered mode), each process has its own copy of @@x, which would also explain the low, inconsistent values.
By the way: x is a terrible name for a variable, and, given a basically persistent HTTP request architecture, wouldn't it be more suitable to use WebSockets or another type of connection architecture?
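If the counter really is needed, the thread-safety half of the problem can be fixed with a Mutex around the increment. A minimal sketch (class and variable names mirror the question; the PersistentHTTP setup is omitted, and nothing here addresses the separate multi-process issue):

```ruby
# Sketch: a class-level counter guarded by a Mutex so concurrent
# threads cannot interleave the read-increment-write sequence.
class HttpClientPool
  @@x = 0
  @@lock = Mutex.new

  def self.send_get_message(path)
    @@lock.synchronize { @@x += 1 }
    # ... build and send the request here, as in the question ...
  end

  def self.request_count
    @@lock.synchronize { @@x }
  end
end

# Hammer the counter from several threads; with the lock, no increment is lost.
threads = 8.times.map do
  Thread.new { 250.times { HttpClientPool.send_get_message('/ping') } }
end
threads.each(&:join)
puts HttpClientPool.request_count # => 2000
```

Without the Mutex, two threads can both read the same old value of @@x and write back the same incremented value, losing a count.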
I just want to test the Front-End part. So, here is my problem:
Background
I have a robust Ruby on Rails (v3.2) backend app and an entirely new and separate front-end app in ReactJS (v16.4).
Problem
We have begun testing the React app with the help of selenium-webdriver and Jest. We managed to test several views, but a problem arose when we made POST requests to the Rails API.
I don't want to fill my development database with garbage because of the tests.
Ex: What happens when I want to test the creation of a new user?
Possible solutions thought
I was thinking of 3 solutions:
Intercept the API calls and mock them by imitating their responses (ex: at submit click, using selenium-webdriver).
Make use of the Rails test environment through React.
Just revert each API call by doing the opposite, which would mean creating often-undesirable actions in the controller (ex: doing a DELETE for each POST).
It depends if you want to test the whole stack (frontend/backend) or only the frontend part.
Frontend tests
If you only want to test the frontend part, go with your first solution: mock the API calls.
You will be limited if you just use selenium-webdriver directly. I would recommend using Nightwatch or TestCafe. TestCafe does not depend on Selenium; this is also optional in the latest versions of Nightwatch.
Testcafe includes a Request mocking API : http://devexpress.github.io/testcafe/documentation/test-api/intercepting-http-requests/mocking-http-responses.html
With Nightwatch you could use nock. See Nightwatch Mock HTTP Requests
Full stack tests
If you want to test the whole stack, you may use this approach: implement a custom API endpoint that resets your database to a clean state before or after test execution (like "/myapi/clean").
You should disable access to this endpoint in production environments.
You can then implement test hooks (before/after) that call your custom API endpoint:
http://nightwatchjs.org/guide#using-before-each-and-after-each-hooks
http://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html#test-hooks
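As a sketch of the "reset endpoint" idea in plain Ruby (all names here are hypothetical; in a real Rails app this logic would live in a controller action and delegate to something like DatabaseCleaner):

```ruby
# Hypothetical guard for a test-only "/myapi/clean" endpoint: the reset
# action runs only when the app is in the test environment, and the
# endpoint answers 404 everywhere else, so it is dead in production.
class TestResetEndpoint
  def initialize(environment)
    @environment = environment
  end

  # `cleaner` stands in for whatever actually wipes the database,
  # e.g. a call into DatabaseCleaner or a series of TRUNCATE statements.
  def call(cleaner)
    return 404 unless @environment == 'test'
    cleaner.call
    200
  end
end

cleaned = false
puts TestResetEndpoint.new('test').call(-> { cleaned = true })     # => 200
puts cleaned                                                       # => true
puts TestResetEndpoint.new('production').call(-> { cleaned = :no }) # => 404
```

The essential point is the environment guard: the cleanup code path must be unreachable outside of test.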
You could have a test environment. From my experience, garbage data generated by tests is not such a big deal. You can periodically clean it up. Or you can spin up a new environment for every test run.
Finally, I decided to use Enzyme with Jest and Sinon.
example code:
import React from "react";
import { mount } from "enzyme";
import sinon from "sinon";

let server;
let wrapper;

beforeAll(() => {
  server = sinon.fakeServer.create();
  const initialState = {
    example: ExampleData,
    auth: AuthData
  };
  wrapper = mount(
    <Root initialState={initialState}>
      <ExampleContainer />
    </Root>
  );
});

it("example description", () => {
  server.respondWith("POST", "/api/v1/example", [
    200,
    { "Content-Type": "application/json" },
    '{ "message": "Example message OK" }'
  ]);
  server.respond();
  expect(wrapper.find(".response").text()).toEqual("Example message OK");
});
In the code above we can see how to intercept API calls using the test DOM created by Enzyme, and then mock the API responses using Sinon.
I'm running an application that uses mechanize to fetch some data every so often from an RSS feed.
It runs as a heroku worker and after a day or so I'm receiving the following error:
Errno::EMFILE: Too many open files - socket(2)
I wasn't able to find a "close" method within mechanize, is there anything special I need to be doing in order to close out my browser sessions?
Here is how I create the browser + read information:
def mechanize_browser
  @mechanize_browser ||= begin
    agent = Mechanize.new
    agent.redirect_ok = true
    agent.request_headers = {
      'Accept-Encoding' => "gzip,deflate,sdch",
      'Accept-Language' => "en-US,en;q=0.8",
    }
    agent
  end
end
And actually fetching information:
response = mechanize_browser.get(url)
And then closing after the response:
def close_mechanize_browser
  @mechanize_browser = nil
end
Thanks in advance!
Since you can't manually close each instance of Mechanize, you can try invoking Mechanize with a block. According to the docs:
After the block executes, the instance is cleaned up. This includes closing all open connections.
So, rather than abstracting Mechanize.new into a custom function, try running Mechanize via the start class method, which automatically closes all your connections when the block completes:
Mechanize.start do |m|
  m.get("http://example.com")
end
I ran into this same issue. The Mechanize.start example by @zeantsoi is the answer that I ended up following, but there is also a Mechanize#shutdown method if you want to do this manually without the block.
There is also the option of adding a lambda to post_connect_hooks:
Mechanize.new.post_connect_hooks << lambda { |agent, uri, response, response_body| agent.shutdown }
I am using NuSOAP in my PHP application to call a .NET web service.
The issue is that in some cases the .NET web service takes longer than usual for some requests, so I want to increase the time my SOAP call waits for the response.
Is there any function, or any way, to keep the NuSOAP call waiting until I get a response from the web service?
Thanks,
Rama
The NuSOAP default timeout is 30 seconds.
Increase the response timeout to solve this problem:
// creates an instance of the SOAP client object
$client = new nusoap_client($create_url, true);
// creates a proxy so that WSDL methods can be accessed directly
$proxy = $client->getProxy();
// Set timeouts, nusoap default is 30
$client->timeout = 0;
$client->response_timeout = 100;
Note: these settings also didn't work for me at times, so I went directly into nusoap.php and changed $response_timeout = 120 (by default this value is set to 30 seconds).
It is solved now :)
References : Time out settings - Second reference
When you create the instance of nusoap_client, try:
$client = new nusoap_client($create_url, true, false, false, false, false, 0, 300);
where all the false parameters default to false, 0 is the timeout, and 300 is the response_timeout.
Thanks
I'm using Clearance for authentication in my Rails app. Does anyone know of a way to configure a session timeout? It's been logging me out within 5 minutes of when I login and I can't seem to locate anything that specifies how to set the timeout.
When you installed Clearance, it should have added a config/initializers/clearance.rb file. You can configure the session timeout there via the cookie_expiration config. From their docs, it can look like this:
# example
Clearance.configure do |config|
  config.mailer_sender = 'me@example.com'
  config.cookie_expiration = lambda { 2.weeks.from_now.utc }
  config.password_strategy = MyPasswordStrategy
  config.user_model = MyNamespace::MyUser
end
So, just give cookie_expiration a lambda that returns a Time object sometime in the future.
Looking at the RDoc, there's a cookie_expiration method on the Clearance Configuration class. Here it is; look at the source for the method.
By default, it looks like it's 1 year:
def initialize
  @mailer_sender = 'donotreply@example.com'
  @cookie_expiration = lambda { 1.year.from_now.utc }
end
So I'd look at overriding that in the configuration.
http://rdoc.info:8080/github/thoughtbot/clearance/Clearance/Configuration#cookie_expiration-instance_method
If you're unable to find it, you can sometimes ask in the Thoughtbot IRC channel #thoughtbot on Freenode. The developers sometimes hang out there and will field questions.