I am using Google Container Engine and Deis Workflow to run my Rails application. I do not have a Dockerfile; I just use the Heroku buildpack. I was able to deploy my app successfully, but I am not able to configure my database. I understand that my database cannot reside in my application; it has to be a separate service which will persist the data. I am thinking of using AWS RDS, and I think I will want to configure something like what is pointed to here on Heroku.
I am new to Kubernetes and this workflow; I would really appreciate it if someone could point out how to proceed and achieve this.
Have a look at the Helm Charts repository.
After installing Helm you can run:
helm install stable/postgresql
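Once the chart is installed, your Rails app still needs to be pointed at it. A minimal sketch, assuming hypothetical credentials and the in-cluster service hostname that the Helm release prints (substitute your own app name and values), would be to set DATABASE_URL on the Deis app, which Rails picks up the same way it does on Heroku:

# Hypothetical values: replace user, password, host and db name with
# the ones reported by your postgresql release.
deis config:set DATABASE_URL=postgres://myuser:mypassword@my-postgresql.default.svc.cluster.local:5432/mydb -a my-rails-app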
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check if there is one and how to correlate its digest with the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it myself, both automatically provisioned in my own cloud account and by running the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to the whole container imaging with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when setting up the specific App Engine instance with the shell script provided (located here), it doesn't really "load" a Docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
The way to check what Docker images your App Engine Flex instance uses is to SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or you can use this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSH'd into your instance, run the docker images command to list your Docker images.
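To correlate what is running with the upstream gtm-cloud-image, you can also list local digests and compare them against the tags in the public repository. A minimal sketch (the grep pattern is just an assumption to narrow the output, and the list-tags call assumes the repository is publicly readable):

# List local images together with their digests
docker images --digests | grep gtm-cloud-image

# Compare the digest against the tags published in the public repository
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image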
I have been developing a web application using Python, Flask, Docker(-Compose) and git/GitHub, and I am getting to the point where I am trying to figure out the best way/workflow to bring it to production. I have read some articles but am not sure which of the different approaches is best practice.
My current setup is purely development oriented:
Local Docker using docker-compose to build various service images (such as db, backend workers, webapp (Flask & uWSGI), nginx).
Using a .env file for docker-compose to pass configuration to the services
Source code is bind mounted from the local docker host
db data is stored in a named volume
Using local git for source control (I have connected it to a GitHub repository, but I have not been using it much since I am currently the only one developing the application)
From what I understand the steps to production could be the following:
Implement a docker-compose override to distinguish between dev and prod (see the sketch after this list)
Implement Dockerfile multi-stage builds to create prod images which include the source code in the image and do not include dev dependencies
Tag and push the production images to a registry (Docker Hub, Google Container Registry?), or better, push the git repository to GitHub?
[do security scans of the prod images]
deploy/pull the prod images from the registry (or build from github) on a service like GKE for instance
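A minimal sketch of the override idea, with a hypothetical webapp service and registry path: the shared definitions stay in docker-compose.yml, the dev-only bind mounts live in docker-compose.override.yml (picked up automatically by docker-compose up), and production-specific settings go in a separate file applied explicitly:

# docker-compose.prod.yml - hypothetical production override, applied with:
#   docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
version: "3.8"
services:
  webapp:
    # Pre-built multi-stage image pulled from a registry instead of a local build + bind mount
    image: myregistry.example.com/myapp/webapp:1.0.0
    env_file:
      - .env.prod
    restart: always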
Is this a common way to do it? Am I missing something?
How would I best go about using an integration/staging environment between dev and prod, so that I can first test new prod builds or debug prod images in integration?
Does GKE for instance offer an easy way to setup an integration environment? Or could I use the Docker installation on my NAS for that?
Any best practices for backing up production (like db data most importantly)?
Thanks in advance!
I am a newbie when it comes to docker.
I have a web app that contains 4 services. I managed to create a docker-compose file for it.
I would now like to publish it.
My plan is to
upload the whole repository, with the compose file and the source code, to a private repository on GitHub.
then create a droplet in DigitalOcean
I would like to be able to publish the code through GitHub only, so that it is automatically deployed to the server and the required services are restarted.
what would be the best approach?
Yes, there is. There is the App Platform in DigitalOcean. Once you use it to deploy your Docker image, whenever you update the image via GitHub your site will be rebuilt (CI/CD).
I hope this helps.
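A minimal sketch of an App Platform app spec, under the assumption that the service is built from a Dockerfile in your GitHub repo; the repo, branch, service name, and port are placeholders:

# .do/app.yaml - hypothetical App Platform spec
name: my-web-app
services:
  - name: web
    github:
      repo: your-user/your-repo      # placeholder repository
      branch: main
      deploy_on_push: true           # rebuild and redeploy on every push
    dockerfile_path: Dockerfile
    http_port: 8080

You can create the app from this spec with doctl apps create --spec .do/app.yaml, or paste it in the control panel when creating the app.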
I am trying to launch a Rails API on AWS. I have created an Elastic Beanstalk app, created a PostgreSQL RDS, and set up CodePipeline, but when I try to deploy I get an error that says "CannotConnectError: Error connecting to Redis". My app uses Redis to cache user login certificates, and when I run it locally I just type "redis-server" in the terminal before "rails s" and it works like a charm. I have tried creating an ElastiCache instance, but I can't figure out how to connect it to my app. I'm also unsure whether using ElastiCache for this might be overkill, and whether it might instead be better to somehow configure the app to start running Redis without it when it's deployed. Another possible solution I can think of: is there a way for me to run terminal commands on my Elastic Beanstalk app and just deploy Redis manually?
I am having a lot of trouble finding a clear explanation of what I am supposed to do to setup Redis to work with Elastic Beanstalk. Can anyone help explain this, or point me to a good resource?
is there a way for me to run terminal commands on my Elastic Beanstalk app and just deploy Redis manually?
Yes, you can do this. EB allows you to write customization scripts through .ebextensions. Using them, you can install and set up a local Redis server on the EB instance.
To install Redis, the following 10_install_redis.config in your .ebextensions directory could be used:
commands:
  10_install_redis:
    command: amazon-linux-extras install -y redis4.0
You would have to build upon the above to further set up and customize the Redis server to your needs.
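For instance, a minimal sketch of building on the above, assuming the Amazon Linux 2 platform where Redis runs under systemd (the command names and ordering are just an illustration):

commands:
  10_install_redis:
    command: amazon-linux-extras install -y redis4.0
  20_enable_redis:
    # Make Redis start on instance boot
    command: systemctl enable redis
  30_start_redis:
    # Start Redis immediately during deployment
    command: systemctl start redis

Your Rails app would then point at the local server, for example by setting a REDIS_URL environment property of redis://localhost:6379 in the EB configuration (assuming your code reads REDIS_URL).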
However, running Redis on your EB instance is not a very good practice. It would be better to have it outside of your EB environment, for example on a separate EC2 instance or in ECS containers, if you want to save on cost compared to running AWS-managed ElastiCache.
I want to set up Bitbucket on my server.
Bitbucket needs
a web server and
a PostgreSQL database server.
So can I set up a Docker container with all of these installed?
My goal is also to create a single script which can set up the above environment, without the need to give difficult instructions to someone or download binaries individually, set up start-up scripts, etc.
Please point me in the correct direction.
You can run both servers with official images:
Bitbucket with the image atlassian/bitbucket-server and
PostgreSQL with the image postgres.
Follow the descriptions there. Be aware that you need to use data-volumes to persist the Bitbucket repositories and the PostgreSQL database files.
To link both together, an easy way would be to use Docker Compose (a minimal sketch follows below). Or use
docker run --link postgres-container-name atlassian/bitbucket-server
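Since the goal is a single script, a docker-compose.yml along these lines could act as that script; the credentials, volume names, and database name here are placeholders you would change:

# docker-compose.yml - sketch combining both official images
version: "3.8"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: bitbucket           # placeholder credentials
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: bitbucket
    volumes:
      - pgdata:/var/lib/postgresql/data  # persist database files
  bitbucket:
    image: atlassian/bitbucket-server
    ports:
      - "7990:7990"                      # web UI
      - "7999:7999"                      # SSH for git
    volumes:
      - bitbucketdata:/var/atlassian/application-data/bitbucket
    depends_on:
      - postgres
volumes:
  pgdata:
  bitbucketdata:

Starting everything is then a single command, docker-compose up -d; in the Bitbucket setup wizard you would point the database connection at the hostname postgres.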