JHipster application development with Docker and gulp

I am working on a JHipster application and am running it with Docker. This works, however it's very cumbersome.
I would like to be able to make UI changes (text, CSS, HTML, etc.) and benefit from gulp and BrowserSync: for example, make a change to an HTML file, save it, and have the browser automatically refresh and show the change.
However, the only way I can get changes to show up in the browser is to:
1. Stop the Docker container
2. Stop gulp
3. Rebuild the Docker image
4. Run the Docker container
5. View the result in the browser again
As you can see, this is not optimal.
How can I streamline this, so that I can either quickly deploy changes into the running Docker container, or use gulp to refresh the browser with the changed frontend files?

In the file gulp/config.js, you can change the values of uri and apiPort to point at your JHipster app running in a Docker container.
For example, my Docker host uses the IP 192.168.99.100, so I would change uri to match that value. Note that the uri needs to end with a colon.
uri: 'http://192.168.99.100:',
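With that in place, the workflow could be: run the packaged app in Docker, then run the gulp dev server on the host so that BrowserSync serves your local frontend files and proxies API calls to the container. A rough sketch, assuming the Docker Compose file JHipster generates under src/main/docker and the default gulp task names (adjust to however you currently start the container):
# start the packaged JHipster app via the generated compose file (path is the generator default)
docker-compose -f src/main/docker/app.yml up -d
# start the live-reload dev server on the host (or just "gulp", depending on your generator version);
# BrowserSync now proxies API calls to the container configured above
gulp serve
Edits to HTML/CSS/JS on the host are then picked up by BrowserSync without rebuilding the Docker image.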

Related

Live reload and two-way communication for Expo in a docker container under new local CLI

I'm using the "new" (SDK 46) project-scoped Expo CLI in a docker container. Basic flow is:
Dockerfile from node:latest runs the Expo npx project creation script, then copies in some app-specific files
CMD is npx expo start
Using docker-compose to create an instance of the above image with port 19000 mapped to the host (on a Mac), and EXPO_PACKAGER_PROXY_URL set to my host's local IP (see below). I've also mounted a network volume containing my components into the container to enable live edits on those source files.
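For reference, a docker run equivalent of that compose service would look roughly like this (the image name, host IP, and mount path are simplified placeholders, not my exact setup):
docker run -it --rm \
  -p 19000:19000 \
  -e EXPO_PACKAGER_PROXY_URL=http://192.168.1.50:19000 \
  -v "$PWD/components:/app/components" \
  my-expo-app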
If you google around, you'll see a few dozen examples of how to run Expo in a Docker container (a practice I really believe should be more industry-standard, for better dev-time consistency). These all reference various environment variables used to map URLs correctly to the web-based console, etc. However, as of the release of the new (non-global) CLI, these examples are all out of date.
Using the Expo Go app I've been able to successfully connect to Metro running on the container, after setting EXPO_PACKAGER_PROXY_URL such that the QR code showing up in the terminal directs the Go app to my host on 19000, and then through to the container.
What is not working is live reloading, or even reloading the app at all. To get a change reflected in the app I need to completely restart my container. For whatever reason, Metro does not push an update to the Go app when files are changed (although, weirdly, I do get a little note in Go saying "Refreshing...", which shows it knows a file has changed). Furthermore, it seems like a lot of the interaction between the app and the container console is also not happening; for example, when the Go app loads the initial JS bundle, loading progress is not shown in the console as it is when I run Expo outside of Docker.
At this point my working theory is that this may have something to do with websockets not playing nicely with the container. Unfortunately Expo has so much wrapped under it that it's tough for me to figure out exactly why.
Given that I'm probably not the only one who will encounter this as more people adopt the new CLI and want a consistent dev environment, I'm hoping to crowdsource some debugging ideas to try to get this working!
(Additional note: I wanted to try using a tunnel to see if that fixes things, but ngrok is also quite a pain to get working correctly through Docker, so I'm really trying to avoid that if possible!)

Hosting multiple Single Page Apps with Docker

We have a couple of single page apps that we want to host on a single web server. I'm only talking about the frontend part (Angular, React). The APIs run elsewhere. Each app is basically just a directory with a collection of static files (js, html, css, etc.) generated by the CI process. In fact, the build process creates one Docker image per app. Each image basically just contains a directory that contains the build artifacts.
All apps should appear in different folders on the same website:
/app1
/app2
/app3
What would be the best practice for deploying the apps? We've come up with a few strategies.
1. A single image / container
We could build a final web server image (e.g. Apache) and merge all the directories from the app images into it.
Cons: Versioning sounds like hell. Each new version of an app causes a new version of the final image. What if we want to revert to an older version of an app while a newer version of another app has already been deployed?
2. Multiple containers with a front-end reverse proxy
We could build each app image with its own built-in web server. And then route them all together with a front-end reverse proxy (nginx, traefik, etc.).
Cons: Waste of resources running multiple web servers.
3. One web server container and multiple data-only containers for the apps
Deploy each app in a separate container that provides its app directory as a volume but does nothing else. Then there is a separate web server container that shares the same volumes in order to have access to all the files.
So far I like the third variant best. Whenever a new version of an app needs to be deployed, we simply do a docker pull of the new version of its image. But it still seems hacky: volumes must be deleted manually, otherwise they will not be seeded with the new content. Also, having containers that do nothing isn't the Docker way, is it?
A Docker container wraps a process, but your compiled front-end applications are static files. That is, the setup you're describing here doesn't really match Docker's model.
Without Docker you could imagine deploying these to a single directory
/var/www/
  app1/
    index.html
    css/app.css
  app2/
    index.html
    css/app2.css
    js/main.js
and serve these with a single HTTP server; you would not typically run a separate server for each front-end application.
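In Docker terms that can be a single container; a minimal sketch, assuming the layout above and the official nginx image (whose document root is /usr/share/nginx/html):
docker run -d -p 80:80 \
  -v /var/www:/usr/share/nginx/html:ro \
  nginx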
A totally reasonable option, in fact, is to completely ignore Docker here. Even if your back-end applications are being served from containers, you can publish your front-end code (again, compiled to static files) via whatever hosting service you have conveniently available. Things like Webpack's file hashing can help support deploying updated versions of the application without breaking existing clients.
If I were using Docker, I'd use either of your first two options, but not the third. Running a combined all-the-front-ends HTTP server is the same pattern already discussed, except the HTTP server is in a container instead of on the host. Running a dedicated HTTP server for each front-end application lets you use Docker's image versioning, and the incremental cost of an additional HTTP server isn't that high.
I would avoid any approach that involves named volumes or "data-only containers". Nothing ever automatically copies content into a volume, except for one specific corner case (on native Docker only, using named volumes but not any other kind of mount, and only the first time you use the volume, never when its content is updated), so you'd have to manually write code to copy content out of an image into a shared hosting location; that's more complicated and doesn't really gain you anything over running Webpack directly on the host.
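To make that concrete, the volume route ends up needing an explicit copy step along these lines (the volume name, image name, and path inside the image are hypothetical):
docker volume create appdata
docker run --rm -v appdata:/static app1-image \
  sh -c 'mkdir -p /static/app1 && cp -a /app/dist/. /static/app1/'
A separate web server container would then mount appdata; it works, but it is more moving parts than simply copying the built files to a host directory.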

How to deliver dockerized app to client?

My app consists of web server (node.js), multiple workers (node.js) and Postgres database. Normally I would just create app on heroku with postgres addon and push app there with processes defined in Procfile.
However, the client wants the app to be delivered to his private server with Docker. So the flow should look like this: I make some changes in my node.js app (in the web server or the workers), "push" the changes to a repo (Docker Hub?), and the client, when he is ready, "pulls" the changed app (images?) to his server, and the app (Docker containers?) restarts with the new, updated code.
I am new to Docker, and even after reading a few articles/tutorials I am not sure how I can use it...
So ideally there would be one Docker image (on Docker Hub) containing my app code and database, and the client could just pull it somehow and run it... Is this possible with Docker?
The standard strategy is to pack each component of your system into a separate Docker image (this is called a microservice architecture) and then create an "orchestration": a set of scripts for deployment, start/stop, and update.
For example:
deployment script pulls images from docker repo (Docker Hub or your private repo) and calls start script
start script just does docker run for each component
stop script calls docker stop for each component
update script calls stop script, then updates images from repo, then calls start script
There are software projects on the internet intended to simplify the orchestration, e.g. this SO answer has a comprehensive list. But usually plain bash scripts work just fine.
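For example, a minimal sketch of a combined stop/update/start script, assuming a hypothetical web/worker/db split and images under myrepo (networking and environment wiring omitted):
#!/bin/bash
# stop and remove the old containers (ignore errors if they do not exist yet)
docker stop web worker db || true
docker rm web worker db || true
# pull the updated images
for img in web worker db; do
  docker pull "myrepo/$img:latest"
done
# start everything again
docker run -d --name db myrepo/db:latest
docker run -d --name worker myrepo/worker:latest
docker run -d --name web -p 80:3000 myrepo/web:latest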

Rebuild container after each change?

The Docker documentation suggests using the ONBUILD instruction if you have the following scenario:
For example, if your image is a reusable python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can't just call ADD and RUN now, because you don't yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
Basically, this all sounds nice and good, but that does mean that I have to re-create the app container every single time I change something, even if it's only a typo.
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
How do you deal with this?
does mean that I have to re-create the app container every single time I change something, even if it's only a typo
Not necessarily; you could use the -v option of the docker run command to inject your project files into a container, so you would not have to rebuild the Docker image.
Note that the ONBUILD instruction is meant for cases where a Dockerfile builds FROM a parent image. The ONBUILD instructions found in the parent image's Dockerfile are run when Docker builds an image from the child Dockerfile.
This doesn't seem to be very efficient, e.g. when creating web applications where you are used to change something, save, and hit refresh in the browser.
If you are using a Docker container to serve a web application while you are iterating on that application's code, then I suggest you make a special Docker image that contains everything needed to run your app except the app code itself.
Then share the directory that contains your app code on your host machine with the directory from which the application files are served within the docker container.
For instance, if I'm developing a static web site and my workspace is at /home/thomas/workspace/project1/, then I would start a container running nginx with:
docker run -d -p 80:80 -v /home/thomas/workspace/project1/:/usr/share/nginx/html:ro nginx
That way I can change files in /home/thomas/workspace/project1/ and the changes are reflected live without having to rebuild the docker image or even restart the docker container.

What would be a good docker webdev workflow?

I have a hunch that docker could greatly improve my webdev workflow - but I haven't quite managed to wrap my head around how to approach a project adding docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
Git
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push the container image to the remote host (one option is sketched after this list)
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
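For the "push the container image" step, one registry-free option (sketched here with made-up image and host names) is docker save/load over SSH:
docker save my-lamp-image | gzip > my-lamp-image.tar.gz
scp my-lamp-image.tar.gz user@remote-host:
ssh user@remote-host 'gunzip -c my-lamp-image.tar.gz | docker load'
Pushing to a registry (Docker Hub or a private one) and pulling on the remote host works just as well if a registry is available.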
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run sql-dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the containers /var/www/-directory? Should I utilize data-volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map MySQL's data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
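For example, a development MySQL container with its data directory mapped to the host could be started like this (the host path, password, and image tag are placeholders):
docker run -d --name dev-mysql \
  -e MYSQL_ROOT_PASSWORD=devpassword \
  -v /home/user/dev/mysql-data:/var/lib/mysql \
  mysql:5.7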
Yes, I think you should have a separate container for db.
I am using just a basic script:
#!/bin/bash
# start both containers detached and capture their container IDs
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySql=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount the directory read-only, so no changes will be made to it if you want (your app should store its data somewhere else anyway).
docker run -v=/home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for final deployment, I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
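Assuming a docker-compose.yml that defines your web and db services (service names are up to you), day-to-day usage boils down to:
docker-compose up -d      # start both containers together
docker-compose logs -f    # follow their output
docker-compose down       # stop and remove them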
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a data-only container; the data container does not need to be actively running.
Yes, I would recommend having separate containers for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this links a specific folder on the host to a folder in the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turn key solution achieving all of the needs you have listed.
I've put together an easy to use docker compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with Xdebug
The README.md should be clear enough to get you started.
