Playwright webServer remains open

I tried to run the basic example provided by Playwright.
The example runs and the web server starts; however, when the test finishes, the web server that points to our company site (not the URL provided in the example) remains open.
I created my basic Playwright setup using init. Afterwards I added a webServer configuration with an npm run start script.
Can I fix this somehow? Is it related to the different URLs?
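For reference, this is roughly what my webServer configuration looks like (a sketch; the port and baseURL are assumptions, adjust to whatever npm run start actually serves). One thing I'm checking is whether the url here matches the address the company site is actually served on, and whether reuseExistingServer is quietly reusing a server that was already running, since Playwright will not shut down a server it did not start itself:

// playwright.config.ts (sketch; the port is an assumption)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: { baseURL: 'http://localhost:3000' },
  webServer: {
    command: 'npm run start',
    url: 'http://localhost:3000',          // should match the address the started server actually listens on
    reuseExistingServer: !process.env.CI,  // if a server is already up at this URL, Playwright reuses it and leaves it running afterwards
  },
});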

Related

What is the correct way to use the gitlab-pipelines CI when testing a node.js website with a Java Webdriver project?

I've been given a small node.js website to test.
Initially, I tried to keep it all JavaScript and even managed to write a few tests and weave those into a CI YAML file that instructs GitLab to deploy the container, build the site, run the tests...
However, my tests are getting more and more complicated and I must resort to my Java skills.
Now, the problem is that I don't know how to form the CI tasks: there is no single container that has all the needed technology (nor is that what containers are for, anyway).
On the other hand, I don't know how to get more than one image into the task.
My mind imagines I could deploy it as follows: one container has the node.js stuff and builds and runs the site, exposing an endpoint.
Another container has the Java-Maven-Chrome stuff and builds and runs the tests, which access the site via the exposed endpoint.
Or maybe I have the whole concept wrong?
I would appreciate learning what the professional solution is here. Surely I am not the first Java QA guy trying to test a node.js website!
I would really appreciate an example of the YAML file, because I can only imagine it having a single "image" field at the beginning - and then that's where my container goes, with no room for another.
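To make the two-container idea above concrete, here is roughly what I imagine the job could look like if GitLab's services: keyword can run a second container alongside the job container (the image names, the alias and the Maven command are guesses on my part, not a working pipeline):

# .gitlab-ci.yml (sketch; image names, alias and test command are guesses)
e2e-tests:
  image: maven:3.9-eclipse-temurin-17                   # job container: Java + Maven + the WebDriver tests
  services:
    - name: registry.example.com/my-node-site:latest    # hypothetical pre-built image of the node.js site
      alias: webapp                                      # the tests would reach it at http://webapp:3000
  variables:
    SITE_URL: "http://webapp:3000"
  script:
    - mvn -B test -Dsite.url=$SITE_URL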

Live reload and two-way communication for Expo in a docker container under new local CLI

I'm using the "new" (SDK 46) project-scoped Expo CLI in a docker container. Basic flow is:
Dockerfile from node:latest runs the Expo npx project creation script, then copies in some app-specific files
CMD is npx expo start
Using docker-compose to create an instance of the above image with port 19000 mapped to local (on a Mac), and EXPO_PACKAGER_PROXY_URL set to my host local IP (see below). I've also mounted a network volume containing my components to the container to enable live edits on those source files.
If you google around, you'll see a few dozen examples of how to run Expo in a docker container (a practice I really believe should be more industry-standard to get better dev-time consistency). These all make reference to various environment variables used to map URLs correctly to the web-based console, etc. However, as of the release of the new (non-global) CLI, these examples are all out of date.
Using the Expo Go app I've been able to successfully connect to Metro running on the container, after setting EXPO_PACKAGER_PROXY_URL such that the QR code showing up in the terminal directs the Go app to my host on 19000, and then through to the container.
What is not working is live reloading, or even reloading the app at all. To get a change reflected in the app I need to completely restart my container. For whatever reason, Metro does not push an update to the Go app when files are changed (although, weirdly, I do get a little note on Go saying "Refreshing...", which shows it knows a file has changed). Furthermore, it seems like a lot of the interaction between the app and the container console is also not happening; for example, when the Go app loads the initial JS bundle, loading progress is not shown in the console as it is when I run Expo outside of Docker.
At this point my working theory is that this may have something to do with websockets not playing nicely with the container. Unfortunately Expo has so much wrapped under it that it's tough for me to figure out exactly why.
Given that I'm probably not the only one who will encounter this as more people adopt the new CLI and want a consistent dev environment, I'm hoping to crowdsource some debugging ideas to try to get this working!
(Additional note -- wanted to try using a tunnel to see if this fixes things, but ngrok is also quite a pain to get working correctly through docker, so really trying to avoid that if possible!)
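For reference, the relevant parts of my setup boil down to something like this (a sketch; the host IP and mounted path are placeholders). My suspicion is that either the websocket traffic on 19000 or the file-change notifications on the mounted volume are getting lost somewhere along this path:

# docker-compose.yml (sketch; IP and paths are placeholders)
services:
  expo:
    build: .                # Dockerfile based on node:latest, runs the Expo project creation script
    command: npx expo start
    ports:
      - "19000:19000"       # Metro dev server; the reload websocket upgrades go over this same port
    environment:
      # host LAN IP, so the QR code sends Expo Go to the host, which forwards into the container
      - EXPO_PACKAGER_PROXY_URL=http://192.168.1.50:19000
    volumes:
      - ./components:/app/components   # live-edited source; change events may not propagate over a network mount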

How can I open VS Code "in container" without it restarting itself and losing shell settings when "Reopen in container" is invoked?

I have a development use-case where I use a script to configure a shell with docker-machine or other environment and then open a directory containing source and settings (/.vscode/, .devcontainer/) that I can edit/build/debug in the VS code Remote Containers extension.
In short, I'm looking to implement the following sequence when a "start-development.sh" script/hook runs:
Set up host-side env or remote resources (reverse sshfs to mount source to a remote docker-machine, modprobe, docker buildx, xhost for x-passthrough, etc.)
Run VS Code in that shell, so settings aren't thrown away, with a specified directory (which may be mounted via sshfs or other means) opened in the container, not just on the host
Run cleanup scripts to clean-up and/or destroy real resources (unmount, modprobe -r, etc.) when the development container is stopped (by either closing VS Code or rebuilding the container).
See this script for a simple example of auto-configuring a shell with an AWS instance via docker-machine. I'll be adding a few more examples to this repository over the coming day or so.
It's easy enough to open VS Code in that directory (code -w -n --folder-uri /path/here) and wait for it to quit (so I can perform cleanup steps like taking down the remote docker-machine, un-mounting reverse-sshfs mounted code or disabling kernel mods I use for development, etc.).
However, VS Code currently opens in "host mode", and when I choose "Reopen in container" or "Rebuild container" via the UI or command palette, it kills that process and opens another top-level(?) process, quitting the shell and throwing away my configuration and/or prematurely running the cleanup portion of my script, so it has the wrong environment when it finally launches in-container. Sadness.
So finally, my question is:
Is there a way to tell VS Code to open a folder "in-container"? This would solve a ton of problems for me, instead of a janky dev cycle where I have to ensure that the code instance isn't restarting itself and messing things up - whenever I rebuild the container, for example.
Alternatively, it would be great not to quit the top-level code process I started at all, enabling me to wait on it, or perhaps monitor it in other ways I'm not aware of, to prevent the erasure of my settings and the premature run of my cleanup script.
Thanks in advance!
PS: Please read the entire question before flagging it as "not related to development". If the idea of a zero-install development environment for a complex native project, live on-device development/debugging or deep learning using cloud instances with giant GPUs for Docker where you don't have to manually manage everything and write pages of readmes appeals to you - this is very much about programming.
After all weekend of trying different things, I finally figured it out! The key was this section in the awesome articles about advanced container configuration.
I put that into a bash script and used jq to merge docker.host and other docker env settings into .vscode/settings.json. See this example here.
After running a script that generates this file, the user only needs to reload/relaunch VS Code in that workspace folder (where the settings were created) and, yay, everything works as expected.
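The merge step itself is only a few lines; a minimal sketch of it, assuming a docker-machine named my-aws-machine and that docker.host is the setting being injected (other keys can be merged the same way):

# sketch of the settings merge (the machine name is a placeholder)
DOCKER_HOST_URL="$(docker-machine url my-aws-machine)"   # e.g. tcp://<instance-ip>:2376
mkdir -p .vscode
[ -f .vscode/settings.json ] || echo '{}' > .vscode/settings.json
# merge the docker.host entry into the existing workspace settings without clobbering other keys
jq -s '.[0] * .[1]' .vscode/settings.json <(echo "{\"docker.host\": \"$DOCKER_HOST_URL\"}") > .vscode/settings.json.tmp
mv .vscode/settings.json.tmp .vscode/settings.json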
I plan to add some actual samples now that I have the basics working. Unfortunately, I had to separate my create and teardown as separate activate and deactivate hooks. Not a bad workflow still, IMO.

Jhipster application development with Docker and gulp

I am working on a JHipster application and am running it using Docker. This works, but it's very cumbersome.
I would like to be able to make UI changes (text, CSS, HTML, etc.) and benefit from gulp and BrowserSync; that is, for example, make a change to an HTML file, save the file, and have the browser automatically refresh and show the change.
However, the only way I can get changes visible in the browser is to:
1. Stop the Docker container
2. Stop gulp
3. Rebuild the Docker image
4. Run the Docker container
5. View, again, in the browser
As you see, it is not optimal.
How can I streamline this, so that I can either quickly deploy changes into the running Docker container, or use gulp to refresh the browser with the changed frontend files?
In the file gulp/config.js, you can change the values of uri and apiPort to point at your JHipster app running in a Docker container.
For example, my Docker uses the IP 192.168.99.100, so I would change uri to match that value. Note that the uri needs to include a colon at the end of it.
uri: 'http://192.168.99.100:',
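For context, that part of gulp/config.js would end up looking roughly like this (a sketch; the apiPort value is an assumption, use whatever port your containerized JHipster backend is exposed on):

// gulp/config.js (sketch; only the relevant keys shown)
module.exports = {
    // ...other options left as generated by JHipster...
    uri: 'http://192.168.99.100:',   // note the trailing colon; the port is appended separately
    apiPort: 8080                    // port the JHipster backend is exposed on by Docker
};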

Rebuild container after each change?

The Docker documentation suggests using the ONBUILD instruction if you have the following scenario:
For example, if your image is a reusable python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can't just call ADD and RUN now, because you don't yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
Basically, this all sounds well and good, but it does mean that I have to re-create the app container every single time I change something, even if it's only a typo.
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
How do you deal with this?
does mean that I have to re-create the app container every single time I change something, even if it's only a typo
Not necessarily; you could use the -v option of the docker run command to mount your project files into a container, so you would not have to rebuild the Docker image.
Note that the ONBUILD instruction is meant for cases where a Dockerfile inherits FROM a parent Dockerfile. The ONBUILD instructions found in the parent Dockerfile would be run when Docker builds an image of the child Dockerfile.
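To illustrate the pattern the documentation describes, a minimal sketch of such a parent/child pair might look like this (the image and file names are placeholders, not a recommended setup):

# Dockerfile of the reusable parent ("builder") image
FROM node:18
WORKDIR /app
# these do NOT run when the parent image is built; they run when a child image is built FROM it
ONBUILD COPY package*.json ./
ONBUILD RUN npm install
ONBUILD COPY . .

# Dockerfile of an application image using that builder
# FROM registry.example.com/node-app-builder:latest   <- the three ONBUILD steps fire here
# CMD ["npm", "start"]

Either way, the ONBUILD steps only fire at image build time, which is why the -v bind mount mentioned above is the better fit for an edit/save/refresh loop.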
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
If you are using a Docker container to serve a web application while you are iterating on that application's code, then I suggest you make a special Docker image which contains everything needed to run your app except the app code itself.
Then share the directory that contains your app code on your host machine with the directory from which the application files are served within the docker container.
For instance, if I'm developing a static web site and my workspace is at /home/thomas/workspace/project1/, then I would start a container running nginx with:
docker run -d -p 80:80 -v /home/thomas/workspace/project1/:/usr/local/nginx/html:ro nginx
That way I can change files in /home/thomas/workspace/project1/ and the changes are reflected live without having to rebuild the docker image or even restart the docker container.
