I have a web API built with Flask and I am using AWS Elastic Beanstalk to serve my app.
I was integrating Jenkins for CI/CD and this is what my pipeline does:
Fetch the code
Build Docker image
Push Docker image to Docker Hub
Deploy Docker image to AWS (Docker Hub to AWS).
All the steps above work as expected, but I have a question related to the .env variables.
If I wanted to have different environments (production/development), where should I place the .env file that my web API uses? For development everyone can have their own .env file, but for production not everyone should have access to these variables. Having said that, where could I place this .env file so that when my pipeline starts I can get these variables to deploy?
Thanks.
You will have to configure the environment variables in the AWS Elastic Beanstalk dashboard of your created app. You should see Environment variables under the Software configuration section.
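If you would rather set them from your pipeline than click through the console, the AWS CLI can update the same settings; here is a minimal sketch (the environment name and variable name are illustrative):

  aws elasticbeanstalk update-environment \
      --environment-name my-prod-env \
      --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=SECRET_KEY,Value=$SECRET_KEY

Elastic Beanstalk then injects SECRET_KEY into the environment your Flask app runs in, so the production values never have to live in the repository.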
I am wondering how I should serve and deploy a Quasar app in Docker on different environments. The env is injected into the app during the build, so if I use docker-compose on different environments it will fail to change the env inside the app.
For now I have found a solution: create a function that adds my docker-compose env to my index.html, and then parse it with another function into variables in the program.
Is there a more elegant way to do this? For example, creating one env file with variables for all environments, and in docker-compose just pointing to which set of variables should be used in that environment? (Of course everything should work with docker-compose; I want to build only once and deploy to many environments.)
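A minimal sketch of the index.html injection approach I describe above, assuming the built app is served by nginx and the image ships an index.html.template containing a ${API_URL} placeholder (all names here are illustrative):

  #!/bin/sh
  # entrypoint.sh: substitute the runtime env var into index.html at container start
  envsubst '${API_URL}' < /usr/share/nginx/html/index.html.template \
      > /usr/share/nginx/html/index.html
  exec nginx -g 'daemon off;'

Because the substitution happens when the container starts rather than when the image is built, the same image works in every docker-compose environment.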
I've got this Docker image of my Vue.js app that fetches data from an API running on a Java backend. In production the API runs under app.example.com/api, in staging it will run under staging.example.com/api, and when running on my local computer the API will be at localhost:8081. When running the frontend on my computer I might be using vue cli serve in the project folder, or it might be started as a Docker image using docker-compose. The backend always runs as a Docker image.
I would like to be able to use the same Docker image for local docker-compose, deployment to staging, and deployment to production, but with a different URL for the backend API. As a bonus it would be nice to be able to use vue cli serve as well.
How can this be achieved?
You can use an environment variable containing the API URL and then use that environment variable in your Vue app.
The Vue CLI supports environment variables and lets you use environment variables that start with VUE_APP_ in your client-side code. So if you create an environment variable called VUE_APP_API_URL in the environment you're running the Vue CLI in (whether that is Docker or your host machine), you should be able to use process.env.VUE_APP_API_URL in your Vue code.
If you're running Vue CLI locally, you can just run export VUE_APP_API_URL="localhost:8081" before running vue cli serve.
Docker also supports environment variables. For example, if your SPA Docker service is called "frontend", you can add an environment variable to your Docker Compose file like this:
frontend:
  environment:
    - VUE_APP_API_URL
If you have the VUE_APP_API_URL environment variable set in the host you're running Docker from, it will be passed on to your "frontend" Docker container. So, for example, if you're running it locally you can run export VUE_APP_API_URL="localhost:8081" before running Docker Compose.
You can also pass environment variables through using an .env file. You can read more about environment variables in Docker Compose files in the Docker documentation if you're interested.
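For example, a minimal env_file entry might look like this (the file name is just a convention):

  frontend:
    env_file:
      - ./.env

Each KEY=value line in that file is then set as an environment variable inside the frontend container.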
You can create .env files: one env file for development, one for production, and so on. Check this out for more details:
https://dev.to/ratracegrad/how-to-use-environment-variables-in-vue-js-4ko7
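For instance, with Vue CLI you could keep one file per mode (the values here are borrowed from the question above):

  # .env.development, picked up by vue cli serve
  VUE_APP_API_URL=http://localhost:8081

  # .env.production, picked up by the production build
  VUE_APP_API_URL=https://app.example.com/api

Vue CLI loads the file matching the current mode, and any VUE_APP_-prefixed variable becomes available as process.env.VUE_APP_API_URL in client code.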
I am having some trouble with my Docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (via the SSH command line directly) on server A, it takes the environment variables from server A and puts them in my container, so I have the values of ENVA and ENVB from the host in my container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a Docker container on server B.
I can deploy the service via Jenkins (Job DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily), so it will run all docker (compose) commands on server A from the Jenkins container on server B. The service is up and running, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following in the Job DSL Groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on server B itself, but neither worked. It only works when I deploy manually directly on server A.
When I use docker inspect on the running container, I get the following output for the env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I would prefer to store the variables on server A, but if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that would allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of Docker. HashiCorp has their Vault product, which implements an encrypted key/value store where values are accessed with a time-limited token, and which offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it in countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a REST protocol that you can call from your entrypoint).
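As a rough sketch of what that entrypoint call could look like (assuming curl and jq are in the image; the Vault path, field name, and KV v2 layout are assumptions):

  # fetch ENVA from Vault with a short-lived token, then start the app
  export ENVA="$(curl -s -H "X-Vault-Token: ${VAULT_TOKEN}" \
      "${VAULT_ADDR}/v1/secret/data/some-service" | jq -r '.data.data.ENVA')"
  exec "$@"

Getting the token itself into the container securely (e.g. via a one-time response-wrapped token) is where most of the configuration effort goes.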
The latest option from Docker itself is secrets management, which requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file, using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
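A minimal version 3 compose entry for that could look like this (the secret name is illustrative and must first be created with docker secret create):

  version: '3.1'
  services:
    some-service:
      image: image/replacedvalues
      secrets:
        - enva
  secrets:
    enva:
      external: true

Note that inside the container the value appears as the file /run/secrets/enva rather than as an environment variable, so the application or its entrypoint script has to read it from there.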
I'm currently struggling with the deployment of my services and I wanted to ask what the proper way is when you have to deal with multiple repositories. The repositories are independent, but to run in production, everything needs to be launched.
My setup:
Git repository Backend:
  Rails backend project
  docker-compose: backend (expose 3000), db and redis
Git repository Frontend:
  Express.js server
  docker-compose: (expose 4200)
Both can be run independently, and tests can be executed by CI.
Git repository Nginx for production:
  needs to connect to the other two services (same Docker network)
  forwards requests to the right service
I have already tried including the two services as submodules in the Nginx repository and using the docker-compose of the Nginx repo, but I'm not really happy with it.
You can have your CI build and push images for each service you want to run, and have the production environment run all 3 containers.
Then, your production docker-compose.yml would look like this:
lb:
  image: nginx
  depends_on:
    - rails
    - express
  ports:
    - "80:80"
rails:
  image: yourorg/railsapp
express:
  image: yourorg/expressapp
Note that docker-compose isn't recommended for production environments; you should be looking at using Distributed Application Bundles (this is still an experimental feature, which will be released to core in version 1.13).
Alternatively, you can orchestrate your containers with a tool like Ansible or a Bash script; just make sure you create a Docker network and attach all three containers to it so they can find each other.
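A bare-bones sketch of that script approach (image names taken from the compose file above):

  docker network create app-net
  docker run -d --name rails --network app-net yourorg/railsapp
  docker run -d --name express --network app-net yourorg/expressapp
  docker run -d --name lb --network app-net -p 80:80 nginx

On a user-defined network the containers can resolve each other by name, which is what the nginx proxy_pass directives would point at.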
Edit: since Docker v17 and the deprecation of DABs in favour of the Compose file v3, it seems that for single-host environments docker-compose is a valid way of running multi-service applications. For multi-host/HA/clustered scenarios you may want to look into either Docker Swarm for a self-managed solution, or Docker Cloud for a more PaaS approach. In any case, I'd advise you to try it out in Play-with-Docker, the official online sandbox where you can spin up multiple hosts and play around with a swarm cluster without needing to spin up your own boxes.
I have my app inside a container and it's reading environment variables for passwords and API keys to access services. If I run the app on my machine (not inside Docker), I just export SERVICE_KEY='wefhsuidfhda98' and the app can use it.
What's the standard approach to this? I was thinking of having a secret file which would get added to the server with export commands, and then running source on that file.
I'm using Docker & fig.
The solution I settled on was the following: save the environment variables in a secret file and pass those on to the container using fig.
Have a secret_env file with secret info, e.g.

  export GEO_BING_SERVICE_KEY='98hfaidfaf'
  export JIRA_PASSWORD='asdf8jriadf9'

Have secret_env in my .gitignore.
Have a secret_env.template file for developers, e.g.

  export GEO_BING_SERVICE_KEY=''  # can leave empty if you wish
  export JIRA_PASSWORD=''  # write your pass

In my fig.yml I send the variables through:

  environment:
    - GEO_BING_SERVICE_KEY
    - JIRA_PASSWORD

Call source secret_env before building.
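So the full deploy sequence, with those pieces in place, is something like:

  source secret_env
  fig build
  fig up -d

Since fig.yml only names the variables without assigning them, fig copies their values from the shell that runs it, which is why sourcing the file first matters.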
docker run provides environment variables:
docker run -e SERVICE_KEY=wefsud your/image
Then your application would read SERVICE_KEY from the environment.
https://docs.docker.com/reference/run/
In fig, you'd use
environment:
  - SERVICE_KEY=wefsud
in your app spec. http://www.fig.sh/yml.html
From a security perspective, the former solution is no worse than running it on your host if your docker binary requires root access. If you're allowing 'docker' group users to run docker, it's less secure, since any docker user could docker inspect the running container. Running on your host, you'd need to be root to inspect the environment variables of a running process.