I have a repo with a piece of software, plus a Docker image for users who run into installation problems. I need to rebuild the image every time I publish a new version, and I also want to run automated tests afterwards. Docker Hub offers automated builds, but mine take too long and get killed by the timeout. I also can't run the tests there, since some of them need ~8 GB of RAM.
Are there any other services for these tasks? I'm fine with paying, but I don't want to spend time on lengthy configuration and maintenance (e.g. running my own build server).
Travis CI.
It's a hosted CI service that's fairly easy to get started with, and it's free as long as you keep the repo public.
It's well known and widely used, and you'll find thousands of helpful questions and answers under the [travisci] tag.
Here's a link to their documentation with an example of how to build an image from a Dockerfile:
https://docs.travis-ci.com/user/docker/#building-a-docker-image-from-a-dockerfile
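As a rough sketch (not taken from their docs), the build and test commands you'd put in the Travis "script" section can be as simple as the following; the image name and test command are placeholders for your own project:

    # Build the image from the repo's Dockerfile (myorg/myapp is a placeholder name)
    docker build -t myorg/myapp .
    # Run the test suite inside the freshly built image (adjust to your project's test command)
    docker run --rm myorg/myapp ./run_tests.sh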
I also tried to find their memory and build-time limits, but couldn't find anything in a quick search.
I've got a project with 4 components, and each component is hosted on Google Cloud Run, with separate deployments for testing and for production. I'm also using Google Cloud Build to handle building and deploying the components.
Due to the lack of good webhook events from the source system, I'm currently forced to rebuild all components in the project every time there is a new change. That means 8 different images to build and deploy, since testing and production use different build-time settings as well.
I've managed to optimize Cloud Build to handle the 8 concurrent builds quite nicely, but they all finish around the same time, and then all 8 are pushed to Cloud Run. Cloud Run often does not seem to like this at all and starts throwing errors that I've been unable to resolve.
The first and more serious problem is that often only about 4-6 of the 8 deployments go through as expected, and the remaining ones are either significantly delayed or simply fail: typically the first few go through fine, then a few arrive with significant delays, and the final 1-2 just fail. This seems to be caused by some "reconciliation request quota" being exhausted in the region (in this case europe-north1), as this is the error shown at the top of the Cloud Run service view:
Additionally, and mostly just annoyingly, the Cloud Run dashboard itself does not seem to cope with having 8 services deployed: simply sitting on the dashboard view that lists the services regularly throws another error, this time related to some read quotas:
I've tried contacting Google via their recommended "Send feedback" button, but have received no reply in over a week (I can't say exactly when I sent it, because they don't seem to confirm receipt).
One option to improve the situation would be to deploy the "testing" and "production" variants in different regions, but that would be less than optimal, and this feels like it should be a simple limit configurable somewhere. Are there other options for me to consider? Or should I just set up some synchronization so that not all deployments are fired at once?
Avoiding the need to build and deploy all components at once is not really an option in this case, since they share some code as well, and when that shared code changes it would still be necessary to rebuild everything.
This is an issue with Cloud Run. Developers are expected to be able to deploy many services in parallel.
The bug should be fixed within a few days or a couple of weeks.
[Update] The bug should now be fixed.
Make sure to use the --async flag if you want to deploy in parallel: gcloud run deploy $SERVICE --image $IMAGE --async
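For example, a hypothetical sketch of kicking off all deployments without waiting for each one to finish (the service names, project ID, image tag and region are placeholders):

    # Fire off every deployment asynchronously; gcloud returns as soon as the request is accepted.
    for SERVICE in frontend backend worker scheduler; do
      gcloud run deploy "$SERVICE" \
        --image "gcr.io/$PROJECT_ID/$SERVICE:$IMAGE_TAG" \
        --region europe-north1 \
        --async
    done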
I have been using Docker with docker-compose for quite some time in a development environment, and I love how easy it is.
Until now, I also used docker-compose to serve my apps on my own server, since I could afford the short downtime of a docker-compose restart.
But in my current project, we need rolling updates.
We still have a single node, and that will remain the case, as we don't expect scalability issues for quite some time.
I read that I need to use Docker Swarm. Fine, but when I look for tutorials on how to set it up alongside my existing docker-compose.yml files, I can't find any developer-oriented (rather than devops-oriented) resources that simply walk me through the steps. Even if I don't understand everything at first, that's OK; I'll learn along the way.
Are there any tutorials out there on how to set this up? If not, shouldn't we build one here?
I'm sure plenty of us have this issue, since Docker is now a must-have for devs, but we (devs) still don't want to dive too deep into the devops world.
Cheers, hope it gets attention instead of criticism.
After giving Docker Swarm multiple tries, I struggled a lot with concurrency and orchestration issues, so I decided to stick with docker-compose, which I'm much more comfortable with.
Here's how I achieved rolling updates: https://github.com/docker/compose/issues/1786#issuecomment-579794865
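In rough terms the idea looks like the sketch below (a sketch only; the linked comment may differ in details). Here "web" is a placeholder service name, and the service must not pin a container_name or a fixed host port:

    docker-compose build web                                         # or docker-compose pull web, depending on your setup
    docker-compose up -d --no-deps --scale web=2 --no-recreate web   # start a second instance on the new image
    sleep 15                                                         # give the new instance time to become healthy
    OLD=$(docker ps --filter "name=web" --format "{{.ID}}" | tail -n 1)  # docker ps lists newest containers first
    docker stop "$OLD" && docker rm "$OLD"                           # retire the old instance
    docker-compose up -d --no-deps --scale web=1 --no-recreate web   # settle the service back to one replica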
It actually works pretty nicely, though observers told me it's a similar strategy to what Swarm does by default.
So I guess most of my issues went away once I stopped replicating across nodes.
When I get time, I'll give Swarm another try. For now, compose does a great job for me.
Goal:
I have an application that runs inside a Docker container. I want continuous integration with push-to-deploy for it, using Bitbucket Pipelines to deploy to Google Cloud. I need access to a SQL database (preferably MariaDB) and some kind of caching system (memcached, Redis, or something else).
Problem:
I'm entirely unsure which services I need from the cloud provider to make this as simple as possible while still being cost effective. I looked into Google App Engine, but (unless I'm doing something odd) 1 vCPU with 1 GB of RAM and 10 GB of storage came out to USD $55/month, which is far more than I want to pay, and I'm not even sure it's what I need. I don't need much power (these are very small apps used by a very small number of people). Also, again, maybe I'm doing something wrong, but I was unable to find a caching solution from Google that wasn't insanely expensive (Memorystore). Basically, I'm completely overwhelmed by the number of options and am just looking for a cost-effective, simple solution for continuous delivery of a Docker application to Google Cloud.
Using the tools discussed here, you can set up an end-to-end continuous delivery pipeline covering the code, build, test, deploy, and monitor phases of software development across multi-cloud, hybrid, or on-premise environments. This is likely a good way to start.
I am not a developer, but I'm reading about CI/CD at the moment, and I'm wondering about good practices for automated code deployment. So far I have read a lot about deploying code to a pre-existing environment.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch whenever a new build is created, for example to test the newly created build and then delete the environment again after testing.
I know there are various plugins for interacting with AWS, Azure, etc. that could be used to build a job that deploys a virtual machine.
There are also plugins to trigger Puppet to deploy infrastructure as code, and plugins to invoke an infrastructure orchestrator.
So everything needed to deploy the infrastructure and middleware before deploying the code is available (with some extra effort, of course).
Is this something that is used in real life? How is it done?
The background of my question is my interest in fully automating development with as few clicks as possible, and in saving cost in a pay-per-use model by not having idle machines.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch whenever a new build is created
Yes, it is good practice to deploy an environment from scratch. As you say, Jenkins and Jenkins pipelines can certainly help kick off and orchestrate that process, depending on your specific requirements. Deploying a full environment from scratch is one of the hardest things to automate, and if that is automated, it implies that a lot of other things are automated too: infrastructure, application deployments, application configuration, and so on.
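As a rough illustration only (not something spelled out above), the kind of steps such a Jenkins job might shell out to could look like the following, where terraform stands in for whatever provisioning tool you use (Puppet, ARM templates, ...) and the two helper scripts are hypothetical:

    ENV_NAME="test-${BUILD_NUMBER}"                               # BUILD_NUMBER is provided by Jenkins

    terraform apply -auto-approve -var "env_name=${ENV_NAME}"     # create the environment from scratch
    ./deploy.sh "${ENV_NAME}"                                     # deploy the freshly built artifact into it (hypothetical script)
    ./run_integration_tests.sh "${ENV_NAME}"                      # run the tests against the new environment (hypothetical script)
    terraform destroy -auto-approve -var "env_name=${ENV_NAME}"   # tear everything down again when testing is done

In a real pipeline you would run the teardown in an "always runs" post step so that a failed test run still cleans up after itself.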
Is this something that is used in real life?
Yes, definitely. A lot of shops do this. The simpler your environments, the easier it is; a startup with one backend app would have relatively little trouble achieving this valhalla state. But even the creation of the most complex environments, with hundreds of interdependent applications, can be fully automated; it just takes more time and effort.
The background of my question is my interest in fully automating development with as few clicks as possible, and in saving cost in a pay-per-use model by not having idle machines.
Yes, definitely. The "spin up and destroy" strategy benefits all hosting models (since, after full automation, no one ever has to wait for someone to manually provision an environment), but those using public clouds see even larger benefits in terms of cost (vs always leaving AWS environments running, for example).
I appreciate your thoughts.
Not a problem. I will say that this question doesn't fit Stack Overflow's question-and-answer sweet spot very well, since it is quite general. In the future, I would recommend chatting with your developers, finding folks who are excited about this sort of thing, and formulating more specific questions when you all get stuck in the weeds on something. Welcome to Stack Overflow!
All of these are used in various combinations; the objective is to deliver continuous value to the end user. My two cents:
Build & Release
It depends on what you are using. I personally recommend using what comes with your toolset. For example, VSTS (Visual Studio Team Services) offers a complete CI/CD pipeline. But if you have a unique need that can only be served by Jenkins, then you should use that, and VSTS can integrate with it out of the box.
IaC (Infrastructure as Code)
In addition to Puppet etc., you can take advantage of Azure ARM (Azure Resource Manager) templates to build and destroy an environment. Again, see what is available out of the box with the toolset you have.
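A hypothetical sketch with the Azure CLI: spin up a throwaway environment from an ARM template into its own resource group, then delete the whole group when you're done (the group name, location and template path are placeholders):

    az group create --name rg-uat-test --location westeurope
    az deployment group create \
      --resource-group rg-uat-test \
      --template-file azuredeploy.json
    # ... deploy code and run tests against the environment ...
    az group delete --name rg-uat-test --yes --no-wait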
Pay-per-use
What I have personally used is Azure DevTest Labs, with code deployed to it via the CI/CD pipeline. You can then set up a shutdown policy on the VM so it auto-starts and auto-shuts-down at the times you specify. This is a great feature for saving cost on the resources being used and for replicating environments.
For example, the UAT environment might not be needed until QA has signed off. But using IaC you can quickly spin up the environment automatically and then have a one-click deployment set up to push code to UAT.
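As a hypothetical aside, for a plain Azure VM outside DevTest Labs a similar effect is available from the CLI's auto-shutdown command (resource group, VM name and time are placeholders):

    az vm auto-shutdown \
      --resource-group rg-uat-test \
      --name uat-vm \
      --time 1900     # shut the VM down at 19:00 UTC every day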
I work on a large open-source project based on Ruby on Rails. We use GitHub, Travis, Code Climate and others. Our test suite takes a long time to run, and we have many pull requests opened and updated throughout the day, which creates a large backlog. We even implemented a build killer in our bot to prevent unnecessary builds, but we still have a backlog. Is it possible for us to host our own runner to increase the number of workers?
There's Travis CI Enterprise (https://enterprise.travis-ci.com/), which lets people host their own runners, but that's probably only for paying customers. Have you switched over to the container-based builds? That might speed things up a bit. What's the project?