Pages have different behavior in manual testing and automated testing - Docker

I'm using the Playwright framework with GitLab CI.
My problem is that when I navigate to the project's page manually, everything loads fine, but when the autotest performs the same actions, the content does not load.
What could be the troublemaker? There are no cookie or cache problems and no GET, POST, etc. problems during the testing; for some reason it just doesn't load and that's it.
Docker image I'm using: https://registry.hub.docker.com/r/atools/chrome-headless/tags
I have tried headed and headless modes and changing the Docker image; nothing helps.
Could it be a GitLab problem? There are no special tokens or access rights required for the page that isn't loading, so it all makes no sense.

Related

Capybara RSpec: how to refresh the page and see changes?

I have one project where I can use binding.pry to pause execution during Capybara RSpec testing, and continually refresh the page in the browser while making changes to the code for that page.
This is EXTREMELY helpful for me.
I have a new project that won't pick up new changes to my code when I refresh the page.
I can't tell what is different between the projects, but how can I configure my app so that a refresh while debugging in Capybara picks up changes?
Different server, different config? HELP!
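One difference that commonly explains this (a guess, since neither project's config is shown) is how the Rails test environment caches code between requests. Below is a minimal sketch assuming a standard Rails setup; the option names are the stock Rails ones, not anything project-specific:

```ruby
# config/environments/test.rb - sketch assuming a standard Rails app.
Rails.application.configure do
  # With class caching on (the usual test-environment default), neither Ruby code
  # nor view templates are re-read between requests, so refreshing the browser at a
  # binding.pry breakpoint keeps serving the old code.
  config.cache_classes = false
  config.action_view.cache_template_loading = false
end
```

If the two projects were generated on different Rails versions, these defaults can differ out of the box, which would explain why only one of them picks up edits on refresh.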

Unknown format in feature testing with Rails and Capybara

I am writing Capybara tests. There is a link in my view; when I click the link, it opens a pop-up JS warning. I have configured JS in Capybara using PhantomJS and the Poltergeist gem.
Without the requested information it's impossible to give an exact answer, but the error you are seeing means the app is requesting a non-JS response (probably HTML). This could be occurring for a couple of reasons:
You're not actually running the test with a JS-supporting driver. I don't see any js metadata on your scenarios, so depending on how you've configured Capybara/RSpec this could be your issue. To confirm, swap from Poltergeist to Selenium with Chrome or Firefox (non-headless while trying to debug) so you can see whether the browser actually starts; a minimal driver setup is sketched below.
You have a JS error preventing your JS from running, so a normal request is being made instead of an XHR. This could be because you actually have a bug in your JS, or because you're using Poltergeist/PhantomJS, which is massively out of date in its JS/CSS support. To test this, swap to Selenium with Chrome or Firefox and look in the developer console.
Your link isn't correctly configured to make an AJAX request. This is impossible to tell without the HTML of the link.
Additionally, neither of the tests shown in your image is actually asserting/expecting anything, so it's very unclear what exactly you're trying to test.
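To make the first two checks concrete, here is a minimal sketch of registering a non-headless Selenium Chrome driver, assuming capybara and selenium-webdriver are in the Gemfile; the file path is an assumption:

```ruby
# spec/support/capybara.rb - sketch: swap the JS driver to visible Chrome for debugging
require 'capybara/rspec'
require 'selenium-webdriver'

Capybara.register_driver :debug_chrome do |app|
  # Non-headless so you can watch the browser start and open its developer console
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end
Capybara.javascript_driver = :debug_chrome
```

A scenario only gets that driver if it carries the js metadata, and it should end with an expectation; the path, link text, and messages here are illustrative, not taken from the question:

```ruby
RSpec.feature 'Delete link' do
  scenario 'shows the JS confirmation', js: true do
    visit '/items'                                # illustrative path
    accept_confirm do                             # handles a native JS confirm pop-up
      click_link 'Delete'                         # illustrative link text
    end
    expect(page).to have_content('Item deleted')  # illustrative expectation
  end
end
```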

Using pub serve with ASP.NET Core backend

I'm using Dart to build JS applications that are loaded on web pages hosted from an ASP.NET Core application, and I'm trying to establish a development workflow with either pub serve or potentially pub build that allows for debugging. I've seen some related posts, but I'm still stuck. This is what I've tried:
I used pub build with dart2js and the --mode=debug flag set to generate Dart sources and a sourceMap, and then used Chrome to load and debug the web pages. The problem here, apart from long compile times, is that the sourceMaps don't seem to work well for debugging. Lines in the .dart files are often unavailable for debugging, and stepping over function calls doesn't work well, instead diving into framework code. I'm also unable to see variable values reported reliably.
I used pub get with the --packages-dir flag to copy in dependencies and then loaded the web pages with Dartium hosted by the IIS Express server. This loads pages fine and lets me develop, but I was unable to get breakpoints working at all in Dartium unless I used the debugger() statement directly in my code. I'm also concerned about this approach in general because Dartium is no longer being updated and the Dart team's plan is to move away from it.
As an offshoot of #2, I also tried simply changing my script tag URLs in my ASP.NET pages to point to the resources on the pub serve dev server. This is blocked because pub serve apparently only serves over HTTP, and the ASP.NET application is hosted via HTTPS locally. I tried to change the backend to load over HTTP, but now I'm running into issues with authentication/authorization not working in my .NET app. Also, I had hoped to be able to use dartdevc with this approach, but that gave me 404 errors with requirejs, I think because it was trying to load it from the IIS Express server instead of pub serve (I'm really not sure about that).
I've found some mentions in other StackOverflow posts of setting up some sort of proxying behavior so that the backend server requests resources from pub serve, but I have no idea how this might be done or whether it applies to this situation, and I can't find any further information.
What strategies are people using for this, and is there a best practice going forward with Dart 2.0 and dartdevc?

Swashbuckle won't load resources

I have a .NET MVC application that includes a web service.
I have added Swashbuckle to the web service project and on my local machine everything works fine.
When I move the code to our TEST environment I begin to get random 404 errors for the various JavaScript and CSS libraries.
Sometimes the swagger/ui/index page itself throws a 404. Sometimes everything loads.
I've thought about downloading all of these files and placing them in my project for Swagger to use, but based on what I've read, and the way my local environment works, that doesn't seem to be how Swashbuckle is designed to work, so I'm at a loss.
I have very limited access to the TEST environment, so any server configuration will be an issue. My hope is that the swagger.config file can be updated to make everything play nice.
I discovered that my TFS build server was NOT overwriting the previous build, which led to differences between my load-balanced servers and to files that were not being updated with my changes as I tried to get Swashbuckle to work.

Why is RSpec/Capybara not reloading my JavaScript?

I am trying to write some integration tests in RSpec/Capybara/Selenium for my Rails 5 app. I have recently started testing some JavaScript features in the app with headless Chrome.
I hit a problem where I was unable to choose a select element on the page with Capybara, despite this working fine when I loaded the site manually in Chrome in the development environment. After some investigating I figured out that the select element was not currently visible. It is hidden when the page loads but should then be made visible immediately on load by my JavaScript.
I disabled headless mode and paused my test with a quick and dirty sleep 60. I then looked at the JavaScript file in Chrome's developer tools and discovered that it had loaded an old version of the file with none of my recent changes. The file it loaded no longer exists in my app, so it must be being cached somewhere. Any ideas how this might be occurring and how I can fix it?
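One place an old JavaScript file that no longer exists in the app can survive in a Rails project is a previously precompiled bundle under public/assets (plus the Sprockets cache under tmp/cache/assets), which the server driven by Capybara will serve in preference to your current source. A hedged sketch of clearing both before the suite, assuming Sprockets; the hook location is an assumption:

```ruby
# spec/rails_helper.rb - sketch, not a drop-in fix: clear stale compiled assets so the
# test server can't keep serving an out-of-date JavaScript bundle.
require 'fileutils'

RSpec.configure do |config|
  config.before(:suite) do
    FileUtils.rm_rf(Rails.root.join('public', 'assets'))        # old precompiled bundles
    FileUtils.rm_rf(Rails.root.join('tmp', 'cache', 'assets'))   # Sprockets build cache
  end
end
```

Running bin/rails assets:clobber by hand does the same job if you'd rather not wire it into the suite.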
