I am currently working for an enterprise and have been asked to use GitHub Actions instead of ADO/Azure Pipelines or Jenkins.
My objective is to create a self-service model where we have a basic CI/CD framework that teams can use as a starting place for their pipelines. It has all the security, quality, and governance rolled in, making life easier for devs and reducing duplication of effort. Thus reusable workflows are a must, and being able to launch containers from a build agent/runner/worker is a must.
Let me lay out my understanding of the situation with GitHub Actions:
GitHub Actions can do two things:
Launch a Container
Run JavaScript
Running a container within a container is considered a bad idea, and in fact, support for it is going away in the near future.
GitHub Hosted Runners run in a container.
GitHub Actions won't support Reusable Workflows until Q3 of 2021
If my understanding is correct, then I'm dead in the water:
A GitHub hosted runner for Actions is basically useless in my case unless I want to write JavaScript.
It looks like I'm back in the VM business, self-hosting a runner so that I can use it to run containers instead of running Docker within Docker
My ability to create a generalized framework for my dev teams is somewhat undermined until GitHub gets around to implementing Reusable Workflows. (I think this is the biggest assumption, most likely to be disproved with a workaround)
Before I push back for a different CI tool, somebody please tell me what I'm missing here or what workarounds make this do-able.
After additional research and some testing, my hypotheses were confirmed:
Using a self-hosted runner on a VM is the most straightforward way to solve the "container-in-a-container" problem. It also solves the problem of consuming private/self-hosted package feeds from the runner without whitelisting every IP range used by GitHub-hosted runners, which is something most enterprise InfoSec teams would be reluctant to do.
This question had a second one rolled in, "How to create a reusable CI/CD framework for an enterprise in GitHub Actions", which was bad form on my part. The most straightforward options are: a) wait until Reusable Workflows are fully implemented and worked out, or b) use a more mature orchestration tool like Jenkins, TeamCity, or Azure DevOps if you can't wait.
A couple of things to look at:
You might be able to create your own GitHub Actions to share behavior; the documentation explains how to create and call them.
For self-hosted runners, docker-github-actions-runner is a great starting point.
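Now that Reusable Workflows have shipped, the shape of the framework becomes concrete: a governed workflow in a central repo, called with one `uses:` line from team repos. A minimal sketch, assuming a hypothetical `my-org/ci-framework` repository and shared `build.yml` workflow (both YAML documents below are separate files, shown together for brevity):

```yaml
# File 1: .github/workflows/build.yml in my-org/ci-framework (names are hypothetical)
name: Shared build
on:
  workflow_call:
    inputs:
      language:
        required: true
        type: string
jobs:
  build:
    runs-on: [self-hosted, linux]   # self-hosted VM runner, as discussed above
    steps:
      - uses: actions/checkout@v4
      - run: echo "Building a ${{ inputs.language }} project"
---
# File 2: .github/workflows/ci.yml in a team repository
name: CI
on: [push, pull_request]
jobs:
  build:
    uses: my-org/ci-framework/.github/workflows/build.yml@v1
    with:
      language: node
```

The security, quality, and governance steps live in the shared workflow, so teams get them for free and duplication is avoided.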
We are planning to implement Jenkins for the whole organization. We want to go with a single Jenkins instance that will be used by many teams, with a controller/agent architecture. I want to know if there are any best practices for plugin management. Since teams will request different plugins, how can I manage these plugin installations?
Thanks in advance for all your help
I would install Docker on all agent machines and instruct teams to make use of Docker containers in their Pipelines as much as possible. That way you avoid having to install different programming languages and plugins on all your agents.
For plugins that really have to be added to Jenkins, I'd set up a test instance of your Jenkins server, to try out the plugin, and see whether it clashes with existing plugins. Try to keep the number of plugins low, suggest people to only use quality plugins that get recent upgrades, and remove plugins when you no longer need them.
One issue you will encounter is that Jenkins has no (as far as I can find) authorization strategy for plugins. Basically, all plugins are available to everyone. This may be an issue if teams have different, conflicting requirements (e.g., a team not allowed to use SSH or HTTP requests). If you have a homogeneous approach to software development, code, infra, tools, etc., then it becomes a matter of scale only.
In a large org, you may also have issues with a single Jenkins instance, if only in finding maintenance windows. It also creates a single point of failure. Are you OK with that, or do you need high availability?
You may benefit from several controllers (per business unit or product) and use JCasC (Jenkins Configuration as Code) to manage common configuration and make your life easier.
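As a minimal sketch of the JCasC approach (all values are illustrative), a `jenkins.yaml` kept in version control can pin the common configuration for each controller:

```yaml
# jenkins.yaml consumed by the Configuration as Code plugin (values are illustrative)
jenkins:
  systemMessage: "Managed by JCasC; manual changes will be overwritten"
  numExecutors: 0          # build only on agents, never on the controller
  authorizationStrategy:
    loggedInUsersCanDoAnything:
      allowAnonymousRead: false
unclassified:
  location:
    url: "https://jenkins.example.com/"
```

Each business unit's controller can then layer its own settings on top of this shared base, which keeps plugin and config drift in check.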
I've been learning Travis CI and I want to use it to help automate tests on a MEAN application, then deploy it. However, there are several ways to go about this.
After reading, I learned I can create two separate repositories, thus maintaining two separate applications: a client application and a backend application. Since they are separate repositories, I can have separate .travis.yml files on each and perform continuous integration on the client application and backend application. However, I need advice on this approach because I have questions:
For the client app, I have to write tests. Since I'll be using Angular, I want to test responsiveness and whether components are working as intended. The client application also has to communicate with the backend application, and I want to see if it is getting the correct results (such as clicking a button triggering a GET request and checking the response body). Since the client app is in a separate repository, when I build it on Travis CI, how will I connect it to the backend application that lives in the other repository?
I read around, and I could use Git submodules: the client application and the backend application would be submodules of a "master repository". In that case, how would the trigger in Travis CI work? Would I have separate .travis.yml files in each submodule, or would I have one in the master repository?
If I were to get everything to work properly and have the client application and backend application both successfully deploy and the two are hosted on different servers, how will I fix the cross-domain issue?
The other approach is to host the static files produced by ng build --prod and have the Node backend application serve them. When Travis CI is triggered, I can first build the Node backend application and run its tests, then run the tests on the Angular client application. After all of the tests pass, where do I deploy? I know I have to deploy the Node application since it will host the static files, so how exactly do I deploy the backend application in Travis CI?
I know this is going to push it, but I'll ask it anyway. In the future, I want to learn how to implement microservices, and I want to use Nginx for the purpose of load balancing. How will I go about that? Docker can help me create a production environment where I can see if the Nginx server and node application are working well, but how do I include that in Travis CI?
If my question is a bit vague, please let me know what parts of it are vague so I can edit it that way I can make more sense of what I'm asking for. Thank you, and I look forward to an answer :)
This question is very broad. You should solve one problem at a time, because by the time you solve 1 and 2, I doubt that 3 will be your only concern, and these issues are not really related.
Try spending a bit of time reading the Travis CI documentation, but also learning how to write tests and what the different types of tests will do for you. Your issue is less about Travis than about unit tests vs. integration tests. So write simple standalone tests for your frontend, simple standalone tests for your backend, maybe run integration tests manually for a while, and when it becomes a real issue, you will have a better understanding of how everything works together and will find a way. Long story short: there is no single best way to run integration tests, and it mostly depends on many things in your app (how it runs, what type of DB you use, etc.).
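For the standalone tests suggested above, a minimal `.travis.yml` per repository is enough to start with (the npm script names are assumptions about what your `package.json` defines):

```yaml
# .travis.yml in one repository (assumes npm scripts named lint/test exist)
language: node_js
node_js:
  - "12"
cache: npm
script:
  - npm run lint
  - npm test
```

The same shape works in the other repository; cross-repository integration tests can come later, once the simple pipelines are stable.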
You should read about submodules. Perhaps you need them, perhaps not; there is no way to tell. You can use submodules with Travis CI, but you can also do without them. It depends on what you want to achieve. Focus on the end goal for your architecture, not on what Travis CI needs!
What cross-domain issue? Again, this is a very different problem, and probably not the most prominent one you will face. Since I have no idea what server technology you will use, there is no way I can answer that question properly. If you use Express, this may be what you are looking for: https://expressjs.com/en/resources/middleware/cors.html
General bit of advice: all of your questions boil down to experience. Try solving one problem at a time, get started with your project and when you hit a specific issue, it's much much easier to solve than asking about "microservices". There are many ways to do microservices right, each solving a different issue. Without any knowledge of what your application is about and what issues you want to solve, microservices may or may not be what you are looking for, but there are also many other components that can affect your stack. Just get started and don't think about all of this too much for now - it's better to have something soon that you can test and learn upon, than think for weeks about something that you will never get because it is only theory.
I'm starting with Kubernetes (through GKE) and I want to set up GitLab Review Apps.
My use case is quite simple. I've read tons of articles but I could not find clear explanations and best practices on the way to do it. This is the reason why I'm asking here.
Here is what I want to achieve :
I have a PHP application, based on Symfony 4, versioned on my self-hosted GitLab CE instance
I set up my Kubernetes cluster on GKE and connected it to GitLab
I want, on each merge request, to deploy a new environment on my cluster where I can test the application and the new feature (this is the principle of Review Apps).
From what I've read, I've only found simple implementations. What I want is to deploy a new LAMP (or LEMP) environment to test my new feature.
The thing that I don't understand is how to proceed to deploy my application.
I'm able to write Dockerfiles, store them in my GitLab registry, etc.
In my use case, what is the best way to proceed?
In my application repository, do I have to store a Dockerfile that includes all my LAMP configuration (a complete image with my whole LAMP setup)? I don't like this approach; it seems strange to me.
Do I have to store different custom images (for Apache, MySQL, PHP-FPM, Redis) in my registry, then pull and deploy them on GKE during the review stage in my .gitlab-ci.yml file?
I'm a little bit stuck on that and I can't share code because it's more about the way to handle everything.
If you have any explanations to help me, it would be great!
I can, of course, explain a little bit more if needed!
Thanks a lot for your help.
Happy new year!
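To make the second approach above concrete, here is a hedged sketch of a review job in `.gitlab-ci.yml`, assuming the component images are already built and pushed to the registry; the `deploy.sh` script, image, and domain are placeholders:

```yaml
# .gitlab-ci.yml review job (script, image, and domain are placeholders)
review:
  stage: review
  image: google/cloud-sdk:latest
  script:
    # Deploy the separately built images (PHP-FPM, web server, MySQL, Redis)
    # into a namespace named after this merge request
    - kubectl create namespace "review-$CI_MERGE_REQUEST_IID" || true
    - ./deploy.sh "review-$CI_MERGE_REQUEST_IID"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
  only:
    - merge_requests
```

The `environment:` block is what makes GitLab treat this job as a Review App and show the environment URL on the merge request.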
After evaluating existing tools like Ansible Tower, Rundeck, and others, it seems that no tool can fulfill the needed requirements.
We have complex data-center servers: clusters of DB and web servers. The data center hosts many client systems (100+) and other tools like Solr, Redis, and Kafka deployed across the physical servers, not to mention that the same servers carry different accounts and Linux users (QA, staging, production, etc.). For now, the metadata about these environments, their web apps, the source code to be used, and the servers of each cluster are all defined in XML, and a bash script reads from that XML and is run manually to perform any operation/task (like checking out the source, building, deploying, starting, stopping, and other customized operations).
This system should be built by a developer and DevOps engineers together, but what I want to know is: are there any preferable frameworks that could be used for such a system? Are workflow frameworks usable in this case, e.g., Activiti BPMN? Ant is an option, but we need more than just an automation tool: a scheduler, logging, and a lot of other services.
And if this is not the right place, can you please point out where I can ask such a question?
What's required is to create a web-based system as an automation tool with:
A UI to define the specific operations to be done, like building, deploying a specific web app across the cluster on a specific environment, starting/stopping a specific web app on a specific machine, or any other customized operation, with multiple selections and a flexible, dynamic way of choosing options.
The front end should show the executing workflows and the operations within them.
A dynamic way to compose a set of operations into a single workflow, with the ability to set dependencies among them.
An interface between this system's back end and the existing bash scripts that do the actual tasks across the DC servers.
A scheduler able to organize these operations with respect to the defined, complicated dependencies between workflows.
Authentication and authorization services for the users, since there will be many customized roles for operations, environments, products, etc.
A logging system to save the operations' outputs.
Why not use a combination of Ansible, Docker, and Jenkins? Jenkins can do most of the stuff you described using Pipeline projects/multi-projects, and Ansible can cover the UI- and role-related details.
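As a sketch of how Ansible could wrap the existing bash tooling during a migration (inventory group, script path, and variables are hypothetical):

```yaml
# deploy.yml - wraps the existing XML-driven bash scripts (names are hypothetical)
- name: Deploy a web app to one environment of the cluster
  hosts: webservers
  become: true
  vars:
    app: my-webapp
    env: staging
  tasks:
    - name: Run the existing deploy script
      ansible.builtin.command: /opt/tools/deploy.sh {{ app }} {{ env }}
      register: deploy_out

    - name: Surface the script output (logging requirement)
      ansible.builtin.debug:
        var: deploy_out.stdout_lines
```

Jenkins can then invoke this playbook from a Pipeline stage, which covers the scheduling, dependency, and per-role authorization requirements without rewriting the bash scripts up front.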
The task you're describing is basic network orchestration, and there are a bunch of orchestration engines out there. However, the fact that you are asking this question suggests you're just starting out and not ready to invest in a full-fledged management product yet.
I strongly suggest that you take a look at Chef for your purposes.
Hope this helps.
I would recommend you take a look at jenkinsx.io if you are targeting Kubernetes and containers (Docker). As part of the Activiti BPM team, we are trying to align with such technologies to make it easy to integrate more complex workflows with the DevOps and operations side of your projects.
The basic flow of making a change to the code and then pushing it using the IDE or CLI to the CF instance seems quite inefficient. What other approaches exist, preferably with the following characteristics:
Ease of setting up the environment on a developer's machine, including cases such as onboarding a new team member, where replicability and consistency are important
Seamless development experience - change/deploy/test
Support for local services binding and mocking/stubbing
There was a similar question already, but it is quite outdated, and with the pace of change in the industry (e.g., the Docker explosion) there might be better alternatives to push.