I'm running tests with Capybara and Chrome (not headless, to make debugging easier), and some of my tests fail with a timeout error:
Net::ReadTimeout
However, those pages work fine outside of tests.
When the testing browser is opened on those failing pages, I can always see it waiting for S3 images, which might be causing the timeout.
This issue appeared recently; it used to work perfectly fine.
Any idea how to fix this?
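In the meantime I'm considering raising the driver's HTTP read timeout so slow S3 responses don't kill the test. A minimal sketch of what I mean (the driver name and the 120-second value are arbitrary choices on my part):

```ruby
require "capybara"
require "selenium-webdriver"

# Hypothetical driver name; the 120-second timeout is an arbitrary value to tune.
Capybara.register_driver :chrome_long_timeout do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.read_timeout = 120 # seconds; Net::ReadTimeout fires when the default (60) is exceeded
  Capybara::Selenium::Driver.new(app, browser: :chrome, http_client: client)
end

Capybara.javascript_driver = :chrome_long_timeout
```

That would only paper over the slow S3 requests, though, so I'd still like to understand the underlying cause.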
I have an app running at http://localhost/ which loads its assets from a Vite dev server running at http://localhost:3000/. Cypress hits the main app correctly, but it doesn't load the assets, leaving me with a blank page because the page received no response for the http://localhost:3000/app.js file.
Everything works fine if I visit the app directly in the browser. The browser is able to load the assets from port 3000 without problems. They only fail when requested via the Cypress test runner.
I tried cy.visit('http://localhost:3000/') and it seems like the Vite dev server is refusing connections. I checked the Vite documentation to see if there was something that could be blocking this, but nothing caught my eye. The strange part is that the assets are only blocked when requested via the test runner.
I'm running Cypress on WSL2 and my app and the Vite devserver are running through Docker mapping to the addresses above. Is there any additional configuration I'm missing?
If I build the assets and serve them from the main app address (http://localhost/dist/app.js), everything works and my tests pass. So I'm guessing there's some sort of configuration I need to do to allow Cypress to request assets from other hosts?
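One thing I plan to try is making Vite listen on all interfaces instead of only the container's loopback, since the dev server runs inside Docker. A sketch of what I mean (assuming an otherwise default vite.config.js):

```js
// vite.config.js (a sketch; assumes an otherwise default Vite setup)
export default {
  server: {
    // Listen on all interfaces instead of only the container's loopback,
    // so requests originating outside the container (e.g. from the Cypress
    // browser on WSL2) are not refused.
    host: '0.0.0.0',
    port: 3000,
    strictPort: true,
  },
};
```

I'm not sure this explains why a regular browser can reach port 3000 while Cypress can't, though.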
I'm seeing some strange behaviour in my test setup. I have some acceptance tests written with Codeception that run on a Jenkins server every 5 minutes. 95% of them pass without any problems, but 5% fail with two different errors. Most of the time it is:
[Facebook\WebDriver\Exception\UnknownServerException] Error Message => 'URL 'http://www.waldhelden.de/' didn't load. Error: 'TypeError: 'undefined' is not a function (evaluating 'e.getImageData(16,16,1,1).data.toString()')'
and sometimes:
[Facebook\WebDriver\Exception\UnknownServerException] Error communicating with the remote browser. It may have died.
This setup is running on an Amazon EC2 server. At first it was a t2.small; after reading that these errors can be caused by underpowered servers, I upgraded to a t2.medium (2 cores / 4 GB), but the errors are still there.
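One thing I'm going to try next is restarting the browser between tests, so a dying session doesn't take the following tests down with it. Something like this in the suite config (a sketch; the browser value is a guess on my part):

```yaml
# acceptance.suite.yml (a sketch, not a confirmed fix)
modules:
  enabled:
    - WebDriver:
        url: 'http://www.waldhelden.de/'
        browser: firefox
        restart: true   # fresh browser session for every test
        wait: 5         # implicit wait in seconds
```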
Any ideas what to do to get this fixed?
Thanks,
Udo
I have a Rails 4 application running on Heroku. For one type of request I make an HTTP call to an external service and then return the response to the client.
As I can see from the logs, the request to the external service is taking too long, resulting in Heroku's H12 error, where the router sends a 503 after 30 seconds. The HTTP request I am making to the external service eventually comes back with a Net::ReadTimeout after some more time (60 seconds).
However, if I run heroku run console and make the same HTTP call (through the same Ruby code), it works just fine. The request completes in a second or two at most.
I am unable to understand why this request is timing out when run from the web process while it works seamlessly in the heroku run console.
I am running Puma as my web server. I followed the guidelines given here: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server
I also tried the basic WEBrick server to see if that helps, but to no avail.
Has anyone faced this issue? Any hints on how to debug this?
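For context, the call is plain Net::HTTP. Here's a minimal sketch of what it looks like with explicit timeouts added (the URL is a placeholder for the real service), which at least keeps the request from outliving Heroku's 30-second router window:

```ruby
require "net/http"
require "uri"

# Hypothetical endpoint standing in for the real external service.
uri = URI("https://external-service.example.com/lookup")

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = uri.scheme == "https"
http.open_timeout = 5   # fail fast if we can't even connect
http.read_timeout = 25  # stay under Heroku's 30-second H12 router limit

response = http.get(uri.request_uri)
```

That turns the H12 into a faster, catchable error, but it doesn't explain why the same call is instant from the console.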
I'm trying to run my Cucumber tests and they seem to stop randomly. A new page is visited but nothing renders on the page except the text "Retry later".
I'm on OS X 10.9.3 with Chrome 35.0.1916.114, running with bundle exec cucumber. It happens in Firefox as well if I change the JavaScript driver.
The problem was not with Chrome, Cucumber, or Capybara. It was Rack::Attack. 127.0.0.1 was whitelisted, but according to this GitHub issue it wasn't whitelisting IPv6 and transitional loopback IP addresses.
To simplify things I just moved Rack::Attack to be production-only.
tl;dr
Rack::Attack was to blame. Unless you need it in your test environment, just make the gem production-only.
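For example, a sketch of that workaround:

```ruby
# Gemfile: load rack-attack only in production, per the workaround above
group :production do
  gem "rack-attack"
end
```

An alternative would be keeping the gem everywhere and whitelisting both loopback forms (127.0.0.1 and ::1) in the initializer, but the production-only route was simpler for us.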
Could anyone tell me what is going on with my AWS server? For the past three weeks, whenever I deploy my RoR app to AWS (using the Elastic Beanstalk tool), I run into a strange issue.
Deployment time is reasonable (about 10-15 minutes) and the server health stays green. But after that, the server is inaccessible, and this state lasts about 3-4 hours! Then everything is OK and the server runs fast and smoothly. I don't understand why the health status stays unchanged while this error is happening. All I can do is refresh the browser periodically until it works.
I don't think my application is big enough to justify a deployment time like that; it only takes about 20 minutes locally (in production mode).
Here are some errors I found while the server was hanging:
"An error occured while starting up the preloader."
"Gateway timeout" when loading application.js (using chrome debug)
"Bad gateway" when loading application.js (using chrome debug)
Please give me some advice on how to solve this. I have been stuck on this issue for a long time.
Thanks