Authentication information such as database connection strings or passwords should almost never be stored in version control systems.
It looks like the only method of specifying environment variables for an app hosted on OpenShift is to commit them to the Git repository. There is a discussion about this on the OpenShift forums, but no useful workarounds are suggested there.
Is there another approach I can use to add authentication information to my app without having to commit it to the repository?
SSH into your application and navigate to your data directory:
cd app-root/data
In this directory, create a file with your variables (e.g. ".myenv") with content like:
export MY_VAR="something"
Then, in your repository, add this line to ".openshift/action_hooks/pre_start":
source ${OPENSHIFT_DATA_DIR}/.myenv
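Putting the pieces together, a minimal sketch of the whole approach (the DB_PASSWORD variable is just an illustrative name):

# On the gear, over SSH -- the file lives only in the data dir, never in Git:
$ cat > ${OPENSHIFT_DATA_DIR}/.myenv <<'EOF'
export MY_VAR="something"
export DB_PASSWORD="s3cret"
EOF

# In your repository, .openshift/action_hooks/pre_start contains:
#   #!/bin/bash
#   source ${OPENSHIFT_DATA_DIR}/.myenv

# Make sure the hook is executable before committing it:
$ chmod +x .openshift/action_hooks/pre_start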
OpenShift now supports setting environment variables with the rhc command-line tool, like this:
rhc set-env HEROKU_POSTGRESQL_DB_URL='jdbc:postgresql://myurl' -a myapp
I think that's way easier than all the other answers...
See: https://blog.openshift.com/taking-advantage-of-environment-variables-in-openshift-php-apps/
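If you go this route, the other rhc env subcommands (which, as far as I recall, include list and unset) make it easy to inspect or remove variables again, roughly like this:

# Set, list, and remove environment variables on the app's gear
$ rhc env set HEROKU_POSTGRESQL_DB_URL='jdbc:postgresql://myurl' -a myapp
$ rhc env list -a myapp
$ rhc env unset HEROKU_POSTGRESQL_DB_URL -a myapp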
Adding .openshift/action_hooks/pre_start_* is not very cool, because you have to modify your repository in addition to adding a file over SSH.
For Node.js, editing nodejs/configuration/node.env works well for a few days, but I found the file got reverted several times, so it is not stable.
I found a much better solution.
echo -n foobar > ~/.env/user_vars/MY_SECRET
This works perfectly.
(Maybe this is what is done with rhc set-env ...?)
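For completeness, here is a small sketch of how I use it (variable names are just examples; as far as I can tell, everything under ~/.env/user_vars is exported as environment variables when the gear starts):

# On the gear, over SSH: one file per variable, the file name is the variable name
$ echo -n 'foobar' > ~/.env/user_vars/MY_SECRET
$ echo -n 'jdbc:postgresql://myurl' > ~/.env/user_vars/DB_URL

# From your local machine, restart the app so the new variables are picked up
$ rhc app restart -a myapp

# Back on the gear, verify
$ echo $MY_SECRET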
Hope this helps!
Your other option is to create an openshift branch of your project on your local machine. You can create a folder or files for the private information that live only in your openshift branch. You would still need to source the files in your pre_start hook, with something like source ${OPENSHIFT_REPO_DIR}/.private.
Then develop in your master branch, merge into your openshift branch, and push from your openshift branch to OpenShift's master branch. This sounds convoluted at first, but it makes for a very easy workflow, especially if your origin is shared.
This would be the workflow if your origin was on GitHub.
github/master <--> local/master --> local/openshift --> openshift/master
Notice the only bidirectional link is between github and your local master, so there should be no reason for your credentials to "escape".
This approach also has the added benefit of being able to keep any OpenShift specific changes confined to the openshift branch (like for Gemfiles, ENV variables, paths, etc).
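A rough sketch of that workflow (the remote and file names are just how I would name them):

# One-time setup: a local openshift branch that carries the private files
$ git checkout -b openshift master
$ echo 'export DB_PASSWORD="s3cret"' > .private
$ git add .private && git commit -m "private OpenShift config"

# Day to day: develop on master, merge, and push only to OpenShift
$ git checkout master                      # work and sync with github as usual
$ git checkout openshift
$ git merge master                         # bring in the latest application code
$ git push openshift openshift:master      # assumes the OpenShift remote is named "openshift"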
As for security, on the OpenShift server, the repo should have the same security as your $OPENSHIFT_DATA_DIR, so you're not really exposing yourself any more.
Caveat:
Depending on your framework, the files in your $OPENSHIFT_REPO_DIR may be directly accessible via HTTP. You should be able to prevent this with an .htaccess file.
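For example (a sketch, assuming your cartridge's Apache honours .htaccess overrides and using the older Apache 2.2-style directives), you could commit something like this next to the .private file in your openshift branch:

$ cat > .htaccess <<'EOF'
<Files ".private">
    Order allow,deny
    Deny from all
</Files>
EOF
$ git add .htaccess && git commit -m "block HTTP access to .private"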
I'm exploring how best to use Github Codespaces for my organization. Our dev environment consists of a Docker dev environment that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an SSH key and storing it as a user codespace secret, then setting up the ssh-agent with that key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because "it will not typically have access to user-scoped assets or secrets." To reiterate, this works for automated building, but not for prebuilding.
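A rough sketch of what I mean (CODESPACE_SSH_KEY is just the name I gave the user secret, and the repo name is made up; the script is what I point postCreateCommand at in devcontainer.json):

#!/usr/bin/env bash
# post-create.sh -- runs as the codespace user, so user secrets are available
set -euo pipefail

mkdir -p ~/.ssh
# the user-level codespace secret holds a private key
printf '%s\n' "$CODESPACE_SSH_KEY" > ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# now the other private repos can be cloned over ssh
git clone git@github.com:my-org/private-repo.git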
From this Github issue it looks like cloning via ssh is a complete no-go with prebuilds because ssh will need a user-defined ssh key, which isn't available from the onCreateCommand. The only potential workaround I can see for this is having an organization-wide read-only ssh-key... which seems potentially even sketchier than having user-created ssh keys as user secrets.
The other possibility I can think of is switching to HTTPS for the git clones. This would require adding access to the other repos, which is no big deal. But I can't quite see how to get access from within the Docker image. When I tried this, I got errors because I was asked for a username and password when I ran a git clone from within Docker... even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens GitHub uses for access to other repos into the docker build process? Is there a way to have user-generated tokens passed into the docker build process and use those for access instead?
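To make that second idea concrete, this is roughly what I imagine (the GH_TOKEN build arg and repo name are made up, and I'm not sure this is the intended way): pass a token into the build and rewrite ssh URLs to HTTPS inside the image:

# Sketch: forward a token into the image build
$ docker build --build-arg GH_TOKEN="$GITHUB_TOKEN" -t devimage .

# ...and inside the Dockerfile, something along the lines of:
#   ARG GH_TOKEN
#   RUN git config --global \
#         url."https://x-access-token:${GH_TOKEN}@github.com/".insteadOf "git@github.com:" \
#    && git clone https://github.com/my-org/private-repo.git
# (build args end up in the image history, so a BuildKit secret mount would be safer)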
Thoughts and roasts welcome.
I'm new to docker so I have a very simple question: Where do you put your config files?
Say you want to install MongoDB. You install it, but then you need to create/edit a config file. I don't think such files belong on GitHub since they're used for deployment, though it's not a bad place to store them.
I was just wondering if docker had any support for storing such config files so you can add them as part of running an image.
Do you have to use swarms?
Typically you'll store the configuration files on the Docker host and then use volumes to bind mount your configuration files in the container. This allows you to separately manage the configuration file from the running containers. When you make a change to the configuration, you can just restart the container.
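For the MongoDB example from the question, a minimal sketch of that pattern could look like this (paths and image tag are just examples):

# Keep the config file on the Docker host (and in version control, minus any secrets)
$ ls /srv/mongo/mongod.conf

# Bind mount it read-only into the container
$ docker run -d --name mongo \
    -v /srv/mongo/mongod.conf:/etc/mongod.conf:ro \
    mongo --config /etc/mongod.conf

# After editing the file on the host, just restart the container
$ docker restart mongo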
You can then use a configuration management tool like Salt, Puppet, or Chef to manage copying/storing the configuration file onto the Docker host. Things like passwords can be managed by the secrets capabilities of the tool. When set up this way, changing a configuration file just means you need to restart your container and not build a new image.
Yes, in most cases you definitely want to keep your Dockerfiles in version control. If your org (or you personally) use GitHub for this, that's fine, but stick them wherever your other repos are. One of the main ideas in DevOps is to treat infrastructure as code. In fact, one of the main benefits of something like a Dockerfile (or a chef cookbook, or a puppet file, etc) is that it is "used for deployment" but can also be version-controlled, meaningfully diffed, etc.
[Screenshot: my docker-compose file for WordPress]
Last week I learned how to deploy 3 containers: WordPress, phpMyAdmin, and MySQL. They work fine. The containers were connected to each other, using a volume and the same network. Docker was configured from a docker-compose (.yml) file. I used the Git of my native operating system to version the changes.
But then I found another way to do the same:
I installed a Debian image, then added git, apache2, mariadb, and phpmyadmin, connected everything, and used "docker commit" to save the changes of my development every time.
Then a coworker told me to use a Dockerfile, add volumes, and use Git for versioning.
Which is the best way?
What problems have the first and second ways?
Is there another way?
From my point of view, you are searching for the optimal deployment structure; it's a long way to go and to find information about. Here are my opinions:
1. I wouldn't recommend this way, because the mix of operating systems (Windows/Linux) can cause big problems, for example with line breaks and folder/file names.
But the docker-compose idea is the right way to set up the test and dev environment locally.
2. This is outside of Git, which is not optimal, but it is a good solution if you want to save everything.
3. This is alright, but you have already done that with docker-compose. Here the usage of volumes can cause the same problems as in 1. You can use Git versioning in command-line mode to develop, but I don't recommend it.
Alternative Ways
Use software that is able to deploy remotely to the PHP server, like PHPStorm, Eclipse, or WinSCP. Develop the application locally and link it to the Apache/PHP machine or container over FTP/SFTP. You work locally and transfer the changed files into the running machine or container. The Git versioning is done on the local machine. You can also use MySQL tools to back up the database locally, so if the Docker container breaks you can easily set it up again.
Make sure you also save the config files for Apache, PHP, and MySQL in Git; that makes it easy to set the Docker container up again.
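A small sketch of what I mean, with the config files kept in the Git repository and mounted into the container (paths and image tag are only examples):

# Repository layout (sketch)
#   config/apache/000-default.conf
#   config/php/php.ini
#   config/mysql/my.cnf
#   docker-compose.yml

# Mount the versioned config files into the running container, e.g.:
$ docker run -d --name web \
    -v "$(pwd)/config/php/php.ini:/usr/local/etc/php/php.ini:ro" \
    -v "$(pwd)/config/apache/000-default.conf:/etc/apache2/sites-enabled/000-default.conf:ro" \
    php:apache

# If the container breaks, remove it and run it again from the same versioned files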
Use (GitLab & GitLab CI), (Bitbucket & Bamboo), or (Git & Jenkins) to deploy your PHP changes to the servers or Docker containers.
It is best to read some articles about continuous delivery and continuous integration.
This option is suitable for rolling out to customer, dev, or beta systems.
Just setting up a new Rails app and I have my Vagrant files along with a folder full of dev machine provisioning files for Ansible. These allow me to spin up a dev virtual machine, provision it and have everything up and running really quickly.
My question is, should all that be in my project's version control repository? I will be working on this project across several machines, so having it accessible and synced would be useful; on the other hand, I don't want those items to be deployed when I finally deploy to production. Also, having those files committed would keep a history of them, which would be nice.
What would you recommend?
This is very much a thing of your personal preference.
Some people keep everything in a single self-contained repo. Other people keep application code in a separate repo from their configuration/provisioning/deployment code.
Either way has its own benefits and drawbacks, and there's no wrong way of doing it as long as you keep it in some version control system.
When I set up new projects I create a directory structure along the lines of:
/<application_name>
./src
./deployment
./docs
Actual source code goes in src, any deployment-specific scripts (e.g. Ansible playbook dirs, Vagrant files) go in deployment and of course any documentation goes in docs.
Then I commit all this to source control. The deployment scripts are then written to be executed from their directory but change into the src directory to perform their actions.
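For example, a typical session with that layout looks something like this (the repository URL is just a placeholder):

# Everything lives in one repository
$ git clone git@example.com:me/application_name.git
$ cd application_name/deployment

# Bring up and provision the dev VM from here; the Vagrantfile and playbooks
# reach back into ../src for the application code
$ vagrant up
$ vagrant provision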
If you are making a service with a Dockerfile, is it preferred to build an image with the Dockerfile and push it to the registry, rather than distribute the Dockerfile (and repo) for people to build their own images?
Which use cases favour Dockerfile+repo distribution, and which favour registry distribution?
I'd imagine the same question could be applied to source code versus binary package installs.
Pushing to a central shared registry allows you to freeze and certify a particular configuration and then make it available to others in your organisation.
At DevTable we were initially using a Dockerfile that was run when we deployed our servers in order to generate our Docker images. As our Docker image became more complex and gained more dependencies, it was taking longer and longer to generate the image from the Dockerfile. What we really needed was a way to generate the image once and then pull the finished product to our servers.
Normally, one would accomplish this by pushing their image to index.docker.io, however we have proprietary code that we couldn't publish to the world. You may also end up in such a situation if you're planning to build a hosted product around Docker.
To address this need in the community, we built Quay, which aims to be the Github of Docker images. Check it out and let us know if it solves a need for you.
Private repositories on your own server are also an option.
To run the server, clone https://github.com/dotcloud/docker-registry onto your own server.
To use your own server, prefix the tag with the address of the registry's host. For example:
# Tag to create a repository with the full registry location.
# The location (e.g. localhost.localdomain:5000) becomes
# a permanent part of the repository name
$ sudo docker tag 0u812deadbeef your_server.example.com:5000/repo_name
# Push the new repository to its home location on your server
$ sudo docker push your_server.example.com:5000/repo_name
(see http://docs.docker.io.s3-website-us-west-2.amazonaws.com/use/workingwithrepository/#private-registry)
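Once an image has been pushed, any other machine that can reach your registry host can pull and run it the same way:

$ sudo docker pull your_server.example.com:5000/repo_name
$ sudo docker run your_server.example.com:5000/repo_name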
I think it depends a little bit on your application, but I would prefer the Dockerfile:
A Dockerfile...
... in the root of a project makes it super easy to build and run; it is just one command (see the sketch after this list).
... can be changed by a developer if needed.
... is documentation about how to build your project
... is very small compared with an image, which is useful for people with a slow internet connection
... is in the same location as the code, so when people checkout the code, they will find it.
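That "one command" point looks like this in practice (the image name is just an example):

# Build straight from the checked-out repository, then run it
$ docker build -t myproject .
$ docker run -d myproject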
An Image in a registry...
... is already built and ready!
... must be maintained. If you commit new code or update your application you must also update the image.
... must be crafted carefully: Can the configuration be changed? How do you handle the logs? How big is it? Do you package an NGINX within the image, or is this part of the outer world? As #Mark O'Connor said, you will freeze a certain configuration, but that's maybe not the configuration someone else wants to use.
This is why I would prefer the Dockerfile. It is the same with a Vagrantfile: I would prefer the Vagrantfile to the VM image. And it is the same with an Ant or Maven script: I would prefer the build script to the packaged artifact (at least if I want to contribute code to the project).