Strategy to run Grunt in Docker

We're developing an application for which I've set up three Docker containers (web, db and build), which we run with docker-compose.
I've configured Docker so that the host's folder (html) is shared as a writable folder with web and build.
Inside the build container a Grunt watch task runs as the user node, which has the UID 1000.
As the Grunt task regularly builds CSS and JavaScript files, these files end up belonging to UID 1000. Since our whole team uses this setup to develop, on every teammate's machine the generated files belong to a different ("random") user, namely whichever local user happens to have UID 1000.
What would be the best strategy to avoid this problem? I could imagine running the Grunt task with the UID of the host user who started the container, but how would I accomplish that?
I should mention that we don't need the generated files in version control, so it would be fine if they stayed local to the Docker container. But as the locations in which these files are generated are spread across the whole application, I don't see how I could solve the problem with a read-only volume.
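
One approach that comes up for this (just a sketch, untested against this exact setup; the variable name LOCAL_UID and the paths are placeholders) is to let docker-compose start the build service with the UID of the calling host user:

# docker-compose.yml (sketch, assuming a recent docker-compose)
services:
  build:
    build: ./build
    user: "${LOCAL_UID:-1000}"   # falls back to 1000 when the variable is unset
    volumes:
      - ./html:/app/html

# start the watcher so that files written by Grunt belong to the invoking host user
LOCAL_UID=$(id -u) docker-compose up build

Each teammate then gets files owned by their own UID, because the Grunt process inside the container runs as that UID.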

Related

Shared folder for a Docker container

This is certainly a basic question, but I don't know how to deal with this issue.
I am creating a simple Docker image that executes Python scripts and will be deployed on different users' Windows laptops. It needs a specific shared folder in order to write its outputs at the end of the process.
The users are not able to manage any technical stuff like Docker or even a simple terminal.
So they run it with a .bat file in which I put the docker command with the -v option.
But obviously the users' paths are different on each laptop. How can I create a standard image that avoids this machine-specific mount path?
Thanks a lot.
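
For what it's worth, one way around per-user paths (a rough sketch; the image name, output folder and container path below are made up) is to anchor the mount on an environment variable that exists on every Windows machine, such as %USERPROFILE%:

REM run.bat (sketch): mount a folder under the current user's profile as the output directory
docker run --rm -v "%USERPROFILE%\Documents\my-app-output:/output" my-python-image

The same image and the same .bat file then work on every laptop; only the resolved host path differs.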

Build a Docker Linux Container with Jenkins including dynamically added password

I need to build a complex Docker container based on Alpine Linux with SFTP, vsftpd and a special application that relies on input via SFTP/FTPS from a list of customers.
However, this container acts more like a full-blown VM... anyway, I have to dynamically create the container including all user accounts with their passwords.
I managed it with a bash script, supervisord and a nice Dockerfile:
Result -> it works like a charm.
While I copy my script plus all the config files, app data, etc. into the container, I also provide a classic CSV list to add my users with their group memberships and passwords.
So far so good... OK, now the problem. The whole content should be stored in a Git repo, and absolutely no passwords should be in there... and:
- no git encryption
- no docker compose
- no docker services etc.
Now my task is to provide all passwords via environment variables through Jenkins (not Dockerfile ENVs)... I have no idea how I can do this. I mean... environment variables live in the Jenkins environment, not inside the container, don't they? Any recommendations?
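
One common pattern (only a sketch; ACCOUNT_PASSWORDS, create-users.sh and the image name are placeholders, not part of the setup described above) is to let Jenkins hand the secrets to docker build or docker run on the command line, so they never live in the repo:

# Option A: pass the Jenkins environment variable in at build time.
# Note: build args end up in the image history, so this only keeps them out of Git.
docker build --build-arg ACCOUNT_PASSWORDS="$ACCOUNT_PASSWORDS" -t customer-box .
# ...with matching lines in the Dockerfile:
#   ARG ACCOUNT_PASSWORDS
#   RUN echo "$ACCOUNT_PASSWORDS" | /opt/setup/create-users.sh

# Option B: keep the image password-free and create the accounts at container start,
# by passing the variable at run time and reading it in the entrypoint script.
docker run -d -e ACCOUNT_PASSWORDS="$ACCOUNT_PASSWORDS" --name customer-box customer-box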

Docker separation of concerns / services

I have a Laravel project which I am using with Docker. Currently I am using a single container to host all the services (Apache, MySQL, etc.) as well as the dependencies my project needs (project files, git, Composer, etc.).
From what I am reading, the current best practice is to put each service into a separate container. So far this seems simple enough, since these services are designed to run at length (the Apache server, the MySQL server). When I spin up these 'service' containers using -d they remain running (docker ps), since their main process runs continuously.
However, when I remove all the services from my project container, there is no main process left to run continuously. This means my container exits immediately once spun up.
I have read about the 'hacks' of running other processes like tail -f /dev/null or sleep infinity, using interactive mode, installing supervisord (which I assume would end up watching no processes in such containers?) and even leaving the container running in the foreground (taking up a terminal console...).
How do I network such a container so it keeps running like the abstracted services, but detached, without these hacks? I cannot seem to find much information on this in the official Docker docs, nor can I find any examples from other projects (please link any).
EDIT: I am not talking about volumes / storage containers to store the data my project processes, but rather about how I can use a container to hold the project itself and its dependencies that aren't services (project files, git, Composer).
When you run the container, try running it with the flags ...
docker run -dt ..... etc
You might even try .....
docker run -dti ..... etc
Let me know if this brings any joy. It has certainly worked for me on occasions.
I know you wanted to avoid hacks, but if the above fails then also add ...
CMD cat
to the end of your Dockerfile - it is a hack, but it is the cleanest hack :)
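
For context, a minimal sketch of where that line would sit (the base image and packages are only examples, not taken from the question); note that cat only keeps blocking while a TTY is attached, so it pairs with the -t flag above:

FROM php:cli
RUN apt-get update && apt-get install -y git
WORKDIR /var/www
# cat waits on its (tty) stdin forever, keeping the container alive under docker run -dt
CMD cat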
So after reading this a few times, along with Joachim Isaksson's comment, I finally get it. Tools don't need their containers to run continuously in order to be usable. Proper separation of the project files, the services (MySQL, Apache) and the tools (git, Composer) is handled differently for each.
The project files are persisted in a data volume container. The services are networked, since they expose ports. The tools live in their own containers, which share the project files' data volume - they are not networked. Logs, databases and other output can be persisted in separate volumes.
When you wish to run one of these tools, you spin up the tool container, passing the relevant command to docker run. The tool then manipulates the data within the directory persisted in the shared volume. The container only lives as long as the command manipulating that data takes to run, and then it stops.
I don't know why this took me so long to grasp, but this is the aha moment for me.
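
To make that concrete, here is a rough sketch of the pattern with the docker CLI (the image names and paths are illustrative, not a recommendation):

# data volume container holding the project files
docker create -v /var/www --name project-data busybox
# networked service container sharing that volume
docker run -d --volumes-from project-data --name web -p 8080:80 php:apache
# tool container: runs one command against the shared volume, then exits and is removed
docker run --rm --volumes-from project-data -w /var/www composer install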

Is it a bad idea to use docker to run a front end build process during development?

I have an angular project I'm working on containerizing. It currently has enough build tooling that a front-end developer could (and this is how it currently works) just run gulp in the project root, edit source files in src/, and the build tooling handles running traceur, templates and libsass and so forth, spitting content into a build directory. That build directory is served with a minimal server in development, and handled by nginx in production.
My goal is to build a docker-based environment that replicates this workflow. My users are developers who are on different kinds of boxes, so having build dependencies frozen in a dockerfile seems to make sense.
I've gotten close enough to this- docker mounts the host volume, the developer edits the file on the local disk, and the gulp build watcher is running in the docker container instance and rebuilds the site (and triggers livereload, etc etc).
The issue I have is with wrapping my head around how filesystem layers work. Is this process of rebuilding files in the container's build/frontend directory generating a ton of extraneous saved layers? That's not something I'd really like, because I'm not wild about monotonically growing this instance as developers tweak and rebuild and tweak and rebuild. It'd only grow locally, but having to go through the "okay, time to clean up and start over" process seems tedious.
Is this process of rebuilding files in the container's build/frontend directory generating a ton of extraneous saved layers?
Nope, the only way to stack up extra layers is to commit a container with changes to a new image then use that new image to create the next container. Rinse, repeat.
Filesystem layers are saved when a container is committed to a new image (docker commit ...). When a container is running there will be a single read/write layer on top that contains all of the changes made to the container since it was created.
having to go through the "okay, time to clean up and start over" process seems tedious.
If you run the build container with docker run --rm ... then you'll get the cleanup for free. The build container will be created fresh from the image each time.
Also, data volumes bypass the union filesystem so there's a good chance you won't write to the container's filesystem at all.
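
For instance (a sketch; the image name and paths are placeholders), a throwaway build container along those lines could look like:

# the source tree is bind-mounted from the host, so the build output lands on the host,
# and --rm deletes the container (and its read/write layer) as soon as the watcher stops
docker run --rm -it -v "$(pwd)":/app -w /app frontend-tooling gulp watch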

Docker: One image per user? Or one image for all users?

Question about Docker best practices/intended use:
I have Docker working, and it's cool. I run a PaaS company, and my intent is perhaps to use Docker to run individual instances of our service for a given user.
So now I have an image that I've created that contains all the stuff for our service... and I can run it. But once I want to set it up for a specific user, there's a set of config files that I will need to modify for each user's instance.
So... the question is: should that be part of my image filesystem, meaning I create a new image (based on my current image, but with their specific config files inside it) for each user?
Or should I put those config files on the host filesystem in a set of directories and map them into the correct running container for each user (hence having only one image shared among all users)?
Modern PaaS systems favour building an image for each customer, creating versioned copies of both software and configuration. This follows the "Build, release, run" recommendation of the 12-factor app website:
http://12factor.net/
A Docker-based example is Deis. It uses Heroku buildpacks to customize the software application environment, and the environment settings are also baked into a Docker image. At run time these images are run by Chef on each application server.
This approach works well with Docker because images are easy to build. The challenge, I think, is managing the Docker images, something the Docker registry is designed to support.
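
As a rough illustration of the per-customer image approach (the image names and paths here are invented for the example):

# Dockerfile.acme (sketch): bake one customer's configuration into a versioned image
FROM mycompany/service:1.4.2
COPY customers/acme/config/ /etc/myservice/

# built and tagged per customer, then pushed to a registry
docker build -f Dockerfile.acme -t registry.example.com/service-acme:1.4.2 .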

Resources