Build a Docker Linux container with Jenkins, including dynamically added passwords

I need to build a complex Docker container based on Alpine Linux that runs sftp, vsftpd and a special application which relies on sftp/ftps input from a list of customers.
Admittedly, this container acts more like a full-blown VM. Anyway, I have to create the container dynamically, including all user accounts with their passwords.
I managed it with a bash script, supervisord and a nice Dockerfile:
Result -> it works like a charm.
While copying my script plus all the config files, app data etc. into the container, I also provide a classic CSV list to add my users along with their group memberships and passwords.
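For illustration, here is a minimal sketch of what such a CSV-driven user-creation step might look like on Alpine; the file name users.csv and the column order username,group,password are assumptions, not taken from the original setup:

#!/bin/sh
# Hypothetical sketch: create accounts from a CSV copied into the image.
# Assumed format per line: username,group,password
while IFS=, read -r user group pass; do
    addgroup "$group" 2>/dev/null || true     # ignore groups that already exist
    adduser -D -G "$group" "$user"            # BusyBox adduser, no password prompt
    echo "$user:$pass" | chpasswd             # set the password non-interactively
done < /tmp/users.csv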
So far, so good. OK, now the problem: the whole content should be stored in a Git repo, and absolutely no passwords should be in there. Additionally:
- no git encryption
- no docker compose
- no docker services etc.
Now my task is to provide all passwords via environment variables through Jenkins (not Dockerfile ENVs), and I have no idea how to do this. I mean, the environment variables live in Jenkins' environment, not inside the container, right? Any recommendations?
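One common pattern, sketched below under the assumption that Jenkins exposes the secrets as environment variables in the build step (for example via the Credentials plugin): strip the password column from the CSV that lives in Git, and pass the passwords into the container at run time with -e, so the user-creation script reads them from the container's environment instead of from the file. The variable names FTP_PASS_ALICE and FTP_PASS_BOB are invented for the example.

# In the Jenkins job (shell step) -- the values come from Jenkins credentials,
# never from the repository:
docker build -t customer-sftp .                 # Dockerfile and repo contain no secrets
docker run -d \
  -e FTP_PASS_ALICE="$FTP_PASS_ALICE" \
  -e FTP_PASS_BOB="$FTP_PASS_BOB" \
  customer-sftp

# In the container's startup script, look the password up in the environment;
# the CSV in Git now contains only user,group:
while IFS=, read -r user group; do
    var="FTP_PASS_$(echo "$user" | tr '[:lower:]' '[:upper:]')"
    eval "pass=\$$var"                          # indirect lookup of FTP_PASS_<USER>
    echo "$user:$pass" | chpasswd
done < /tmp/users.csv

An --env-file passed to docker run works the same way if the list of variables gets long; the file can be generated by Jenkins in the workspace and discarded after the run.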

Related

Docker multiple machine configurations

I want to know whether there is a suggested approach for configuring Docker machines using configuration files. I have a service that I configure for several users; it is basically a Django app.
Until now I had a shared base image and a bunch of scripts. When I need to create a new machine for a new user, I create it in Google Cloud Engine using the base image. Then I:
SSH into it
Launch a script that downloads everything via git and launches all services
Copy required credential files using scp
Is there a way to optimize some steps with Docker (using secrets or some external config management tool)?
Thanks!
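As a rough sketch of how those three manual steps could map onto plain docker commands, with the credentials mounted from outside rather than copied in (the image name and paths below are assumptions):

# Build once: the Dockerfile clones the app and installs its dependencies,
# replacing the per-machine "SSH in and run a script" step.
docker build -t my-django-app:base .

# Per user: start a container and mount the credential files from the host
# (the scp step) instead of baking them into the image.
docker run -d \
  --name user-alice \
  -v /srv/creds/alice:/app/credentials:ro \
  my-django-app:base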

Nexus repository configuration with dockerization

Is it possible to configure Nexus repository manager (3.9.0) in a way which is suitable for a Docker based containerized environment?
We need a customized Docker image which contains basic configuration for the Nexus repository manager, like project-specific repositories and LDAP-based authentication for users. We found that most of the Nexus configuration lives in the database (OrientDB) used by Nexus. We also found that Nexus offers a REST interface for handling configuration from third parties, but no configuration exporter/importer capabilities besides backup (directory servers have LDIF, application servers have command-line scripts, etc.).
Right now we export the configuration as backup files, and during the customized Docker image build we copy those backup files back into the file system of the container:
FROM sonatype/nexus3:latest
[...]
# Copy backup files
COPY backup/* ${NEXUS_DATA}/backup/
When the container starts up, it picks up the backup files and Nexus ends up configured the way we need. However, it would be much better if there were a way that allowed us to handle these configurations via a set of config files.
All that data is stored under /nexus-data, so you can create an initial Docker container with a Docker volume or a host directory that keeps all that data. After you have preconfigured that instance, you can distribute your customized Docker image together with the volume containing the Nexus data. Or, if you used a host directory, you can simply copy all that data over in a similar fashion as you do now, but use the /nexus-data directory instead.
You can find more information at DockerHub under Persistent Data.
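A minimal sketch of that approach, assuming the stock sonatype/nexus3 image and its default /nexus-data location:

# Keep all Nexus configuration and blob data outside the container.
docker volume create nexus-data
docker run -d --name nexus \
  -p 8081:8081 \
  -v nexus-data:/nexus-data \
  sonatype/nexus3:latest

# Alternatively, use a host directory instead of a named volume; the official
# image runs as a non-root user, so the directory may need matching ownership:
#   -v /srv/nexus-data:/nexus-data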

How do I finalize my Docker setup and how is it actually called?

I am pretty new to Docker. After reading specifically what I needed, I figured out how to create a pretty nice Docker setup: one in which I can start up multiple systems using a single docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up different containers, most referring to the local Dockerfiles and some referring to online image names. When I run docker-compose up, all containers start as expected with the configured network configuration, and I'm able to use the setup as desired.
First of all, I would like to know what this setup is called. Is this a "Docker swarm", or is such a setup called something else?
Secondly, I'd like to make one "compiled/combined" artifact (image, container, swarm, engine, machine, or whatever it is called) out of this, which I can save without having to depend on external sources again. Of course the docker-compose.yml file will work as long as all the referenced external sources are still available, but I'd like to publish my fully configured setup as is. How do I do that?
You can publish the built images to a Docker registry; you can set up your own or use a third-party service.
After that, you need to prefix your image names with your registry's IP/DNS name in docker-compose.yml. This way you can deploy it anywhere docker-compose is installed (docker-compose itself can be run as a Docker container too); you just need to copy your docker-compose.yml file there.
docker-machine is a tool for deploying to multiple machines, as is Docker swarm.
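For one of the services in the question, the workflow could look roughly like this; the registry host name is an example, not something from the question:

# Tag and push a locally built image to your registry, so the compose file
# no longer depends on the local build context:
docker build -t registry.example.com/php72:latest ./php72
docker push registry.example.com/php72:latest

# In docker-compose.yml, reference the pushed image instead of a build context:
#   php72:
#     image: registry.example.com/php72:latest

# On any machine with access to the registry, only docker-compose.yml is needed:
docker-compose pull && docker-compose up -d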

strategy to run Grunt in Docker

We're developing an application for which I've set up three Docker containers (web, db and build), which we run with docker-compose.
I've configured Docker so that the host's folder (html) is shared as a writable folder with web and build.
Inside the build container, a Grunt watch task runs as the user node, which has uid 1000.
As the Grunt task regularly builds CSS and JavaScript files, these files end up owned by uid 1000. Since our whole team uses this setup for development, on each teammate's machine the generated files belong to a different ("random") user: whichever local user happens to have uid 1000.
What would be the best strategy to avoid this problem? I could imagine running the Grunt task with the user ID of the host user who started the container, but how would I accomplish that?
I should mention that we don't need those generated files in version control, so it would be okay if the generated files stayed local to the Docker container. But since the locations where these files are generated are spread across the whole application, I don't see how I could solve the problem with a read-only volume.
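One way to do what the question itself suggests, sketched under the assumption that the build container is started via docker-compose, the service is called build, and grunt is on its PATH: pass the calling host user's uid/gid in when the task is started, so files written into the shared html volume are owned by that user.

# Run the Grunt watch task as the invoking host user instead of the
# baked-in uid 1000 of the "node" user:
docker-compose run --rm --user "$(id -u):$(id -g)" build grunt watch

The same effect can be had permanently by setting the user key of the build service in docker-compose.yml from environment variables exported on each developer's machine.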

Docker: One image per user? Or one image for all users?

Question about Docker best practices/intended use:
I have docker working, and it's cool. I run a PaaS company, and my intent is maybe to use docker to run individual instances of our service for a given user.
So now I have an image that I've created that contains all the stuff for our service... and I can run it. But once I want to set it up for a specific user, there's a set of config files that I will need to modify for each user's instance.
So... the question is: Should that be part of my image filesystem, and hence, I then create a new image (based on my current image, but with their specific config files inside it) for each user?
Or should I put those on the host filesystem in a set of directories, and map the host filesystem config files into the correct running container for each user (hence, having only one image shared among all users)?
Modern PaaS systems favour building an image for each customer, creating versioned copies of both software and configuration. This follows the "Build, release, run" recommendation of the twelve-factor app website:
http://12factor.net/
A Docker-based example is Deis. It uses Heroku buildpacks to customize the application environment, and the environment settings are also baked into a Docker image. At run time these images are run by Chef on each application server.
This approach works well with Docker, because images are easy to build. The challenge, I think, is managing the Docker images, which is something the Docker registry is designed to support.
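For comparison, a rough sketch of the two options from the question; the image names, tag scheme and config paths are invented for the example:

# Option 1: bake the customer's config into a versioned, per-customer image
# (the "build, release, run" style described above). Assumes each customer
# directory contains a Dockerfile based on the shared service image:
docker build -t paas/service:customerA-1.0 customers/customerA/
docker run -d paas/service:customerA-1.0

# Option 2: one shared image for all users, with the per-customer config
# mounted from the host at run time:
docker run -d \
  -v /srv/config/customerA:/app/config:ro \
  paas/service:latest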
