I've been learning Travis CI and I want to use it to automate tests for a MEAN application and then deploy it. However, there are several ways to go about this.
After some reading, I learned that I can create two separate repositories and thus maintain two separate applications: a client application and a backend application. Since they are separate repositories, each can have its own .travis.yml file and run continuous integration independently. However, I need advice on this approach because I have questions:
For the client app, I have to write tests. Since I'll be using Angular, I want to test responsiveness and whether components work as intended. The client application also has to communicate with the backend application, and I want to verify that it gets the correct results (for example, that clicking a button triggers a GET request and returns the correct response body). Since the client app lives in a separate repository, how will I connect it to the backend application during a Travis CI build?
I read around and found that I can use git submodules, so the client application and the backend application could be submodules of a 'master repository'. How would the trigger in Travis CI work in that case? Would I have separate .travis.yml files in each submodule, or a single one in the master repository?
If I get everything working and the client application and backend application both deploy successfully but are hosted on different servers, how will I fix the cross-domain issue?
The other approach is to have the Node backend application host the static files produced by ng build --prod. When Travis CI is triggered, I can build the Node backend and run its tests first, then run the tests for the Angular client application. After all of the tests pass, where do I deploy? I know I have to deploy the Node application since it will host the static files, so how exactly do I deploy the backend application from Travis CI?
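Here is an untested sketch of what I imagine the .travis.yml for this single-deploy approach would look like (the provider and app name are just placeholders):

    language: node_js
    node_js:
      - "10"
    addons:
      chrome: stable                  # for headless Angular tests
    services:
      - mongodb                       # assuming the backend tests need a local MongoDB
    install:
      - npm ci                        # backend dependencies
      - (cd client && npm ci)         # Angular client dependencies
    script:
      - npm test                      # backend tests first
      - (cd client && npx ng test --watch=false --browsers=ChromeHeadless)
      - (cd client && npx ng build --prod)    # static files for the backend to host
    deploy:
      provider: heroku                # placeholder; any provider Travis supports
      api_key: $HEROKU_API_KEY        # stored encrypted in the Travis settings
      app: my-mean-app                # placeholder app name

Is this roughly the right shape?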
I know this is going to push it, but I'll ask anyway. In the future, I want to learn how to implement microservices, and I want to use Nginx for load balancing. How would I go about that? Docker can help me create a production-like environment where I can check that the Nginx server and the Node application work well together, but how do I include that in Travis CI?
If my question is a bit vague, please let me know which parts are vague so I can edit it to make more sense of what I'm asking. Thank you, and I look forward to an answer :)
This question is ultra-broad. You should solve one problem at a time: by the time you solve 1 and 2, I doubt that 3 will be your only concern, and all of these issues are not really related.
Try spending a bit of time reading the Travis CI documentation, but also learning how to write tests and what the different types of tests will do for you. Your issue is less about Travis than about what unit tests are vs. what integration tests are. So write simple standalone tests for your frontend, simple standalone tests for your backend, and maybe run integration tests manually for a while; when it becomes a real issue, you will have a better understanding of how everything works together and you will find a way. Long story short: there is no single best way to run integration tests, and it mostly depends on many, many things in your app (how it runs, what type of DB you use, etc.).
You should read about submodules. Perhaps you need them, perhaps not; there is no way to tell. You can use submodules with Travis CI, but you can also do without them. It depends on what you want to achieve. Focus on the end goal for your architecture, not on what Travis CI needs!
What cross-domain issue? Again, this is a very different problem, and probably not the most prominent one you will face. Since I have no idea what server technology you will use, there is no way I can answer that question properly. If you use Express, this may be what you are looking for: https://expressjs.com/en/resources/middleware/cors.html
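For example, a minimal sketch with that middleware (the origin is a placeholder for wherever your client ends up hosted):

    // server.js - minimal Express app allowing cross-origin requests from the client
    const express = require('express');
    const cors = require('cors');

    const app = express();

    // allow only the (placeholder) client origin instead of '*'
    app.use(cors({ origin: 'https://client.example.com' }));

    app.get('/api/health', (req, res) => {
      res.json({ status: 'ok' });
    });

    app.listen(3000);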
A general bit of advice: all of your questions boil down to experience. Try solving one problem at a time. Get started with your project, and when you hit a specific issue, it is much, much easier to solve than asking about "microservices" in the abstract. There are many ways to do microservices right, each solving a different issue. Without any knowledge of what your application is about and what problems you want to solve, microservices may or may not be what you are looking for, and there are many other components that can affect your stack. Just get started and don't think about all of this too much for now; it's better to have something soon that you can test and learn from than to think for weeks about something you will never build because it is only theory.
I am currently working for an enterprise and have been asked to use GitHub Actions instead of ADO/Azure Pipelines or Jenkins.
My objective is to create a self-service model where we have a basic CI/CD framework that teams can use as a starting place for their pipelines. It has all the security, quality, and governance rolled in, making life easier for devs and reducing duplication of effort. Thus reusable workflows are a must, as is the ability to launch containers from a build agent/runner/worker.
Let me lay out my understanding of the situation with GitHub Actions:
GitHub Actions can do two things:
Launch a Container
Run JavaScript
Running a container within a container is considered a bad idea, and in fact, support for it is going away in the near future.
GitHub Hosted Runners run in a container.
GitHub Actions won't support Reusable Workflows until Q3 of 2021.
If my understanding is correct, then I'm dead in the water:
A GitHub hosted runner for Actions is basically useless in my case unless I want to write JavaScript.
It looks like I'm back in the VM business to self-host a runner so that I can use it to host containers instead of running Docker within Docker.
My ability to create a generalized framework for my dev teams is somewhat undermined until GitHub gets around to implementing Reusable Workflows. (I think this is the biggest assumption, most likely to be disproved with a workaround)
Before I push back for a different CI tool, somebody please tell me what I'm missing here or what workarounds make this do-able.
After additional research and some testing, my hypotheses were confirmed:
Using a self-hosted runner on a VM is the most straightforward way to solve the "container-in-a-container" problem. It also solves the problem of consuming private/self-hosted package feeds from the runner without whitelisting every IP range used by GitHub-hosted runners, which is something most enterprise InfoSec teams would be reluctant to do.
This question had a second one rolled into it, "How to create a reusable CI/CD framework for an enterprise in GitHub Actions", which was bad form on my part. The most straightforward options are to a) wait until Reusable Workflows are fully implemented and worked out, or b) use a more mature orchestration tool like Jenkins, TeamCity, or Azure DevOps if you can't wait.
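For what it's worth, the announced reusable-workflow syntax looks roughly like this (untested sketch; the repo and file names are made up, and details may change before the feature is fully worked out):

    # .github/workflows/ci.yml in a central repo, marked as callable
    on:
      workflow_call:
        inputs:
          node-version:
            required: false
            type: string
            default: '14'
    jobs:
      build:
        runs-on: self-hosted              # per point 1 above, a VM-based runner
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-node@v2
            with:
              node-version: ${{ inputs.node-version }}
          - run: npm ci && npm test

    # A team repo would then consume it as a single job:
    #   jobs:
    #     ci:
    #       uses: my-org/central-workflows/.github/workflows/ci.yml@main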
A couple of things to look at:
You might be able to create your own GitHub Actions to share behavior; info on how to call them is here (see the sketch after this list).
For self-hosted runners, docker-github-actions-runner is a great starting point.
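To illustrate the first point, a composite action is a small action.yml in a shared repo that teams can call as one step (all names here are made up):

    # action.yml in a shared repo such as my-org/standard-build (hypothetical)
    name: Standard build
    description: Common build/test steps shared across team repos
    runs:
      using: composite
      steps:
        - run: npm ci
          shell: bash
        - run: npm test
          shell: bash

    # Consumed from any workflow as a normal step:
    #   - uses: actions/checkout@v2
    #   - uses: my-org/standard-build@v1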
This is sort of an open-ended question/request (hope that's allowed).
On my team we are using Karate for API testing on our project, which we love. The tests are easy to write and fairly understandable to people without coding backgrounds. The biggest problem we're facing is that these API tests have some inherent degree of flakiness (since the code we're testing makes calls to other systems). When running the tests locally on my machine, it's easy to see where a test failed. However, we're also using a Jenkins pipeline, and when the tests fail in Jenkins it's difficult to see why and how they failed. By default we get a message like this:
com.company.api.OurKarateTests > [crossdock] Find Crossdock Location.[1:7] LPN is invalid FAILED
com.intuit.karate.exception.KarateException
Basically, all this tells us is the file name and the starting line of the scenario that failed. We do have our pipeline set up so that we can pass in a debug flag and get more information. There are two problems with this, however: one is that you have to remember to pass the flag on every commit you want information about; the other is that we go from having not enough information to far too much (reading through a 24 MB log of the whole build).
What I'm looking for is suggestions on how to improve this process, preferably without making changes to the Jenkins pipeline (another team manages it, and changes will likely take a long time). Though if changing the pipeline is the only way to do this, I'd like to know that. I'm willing to "think outside the box" and entertain unorthodox solutions (like posting to a Slack integration).
We're currently on Karate version 0.9.3, but I plan to upgrade to 0.9.5 as part of this effort. I've read a bit about the changes. Would the "ExecutionHook" be a good way to do this? I will be experimenting with it on my own a bit.
Have other teams/devs faced this issue? What were your solutions? Again we really love Karate, just struggling with the integration of it to Jenkins.
Aren't you using the Cucumber Reporting library as described here? https://github.com/intuit/karate/tree/master/karate-demo#example-report
If you do, you will get an HTML report with all traffic (and anything you print) inline with the test steps, and of course error traces. Most teams find this sufficient for build troubleshooting; there is no need to dig through logs.
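For reference, the runner pattern from the linked demo looks like this for the 0.9.X line (class and project names are placeholders; check the demo for your exact version):

    // Parallel runner that emits Cucumber JSON, then builds the HTML report
    import com.intuit.karate.Results;
    import com.intuit.karate.Runner;
    import net.masterthought.cucumber.Configuration;
    import net.masterthought.cucumber.ReportBuilder;
    import org.apache.commons.io.FileUtils;
    import org.junit.Test;

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    import static org.junit.Assert.assertTrue;

    public class ParallelRunner {

        @Test
        public void testParallel() {
            Results results = Runner.parallel(getClass(), 5, "target/surefire-reports");
            generateReport(results.getReportDir());
            assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
        }

        static void generateReport(String karateOutputPath) {
            // collect all Cucumber JSON files written by the parallel run
            Collection<File> jsonFiles = FileUtils.listFiles(
                    new File(karateOutputPath), new String[] {"json"}, true);
            List<String> jsonPaths = new ArrayList<>(jsonFiles.size());
            jsonFiles.forEach(file -> jsonPaths.add(file.getAbsolutePath()));
            Configuration config = new Configuration(new File("target"), "our-project");
            new ReportBuilder(jsonPaths, config).generateReports();
        }
    }

Jenkins can then archive or publish the HTML from the target directory.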
Do try to upgrade as well, because we keep trying to improve the usefulness of the logs; you may see improvements if you fail in a JS block or in karate-config.js.
Else, yes, the ExecutionHook would be a good thing to explore, but I would be really surprised if the HTML report does not give you what you need.
I am very, very new to Docker and am trying to wrap my head around the concept, struggling a little bit along the way. I have not created any image yet, but my team is moving to Docker and I have very fundamental questions. Let me start with what I understand:
I can create an image of my application which can consist of an OS version, web server configuration, and application binaries.
However, what I do not understand is that an n-tier application involves far more than this, and I am struggling to find answers to a lot of questions. I just wanted to post some of them here and see if they can be clarified.
As I just mentioned above, an n-tier application involves far more than my binaries and web server settings. If my application has multiple layers (binaries), say one for services and one for the MVC client, do I need an image per layer?
What happens to .config files? One thing that confused me a lot is the claim that you can use the same image for testing and prod. Then something has to be different across these environments, right? Would that something be the config files? If yes, then why is this not mentioned anywhere?
What happens to the DB? Do we spin up another image for the DB?
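To make my mental model concrete, is something like this docker-compose sketch the general idea? (All names and values are made up.)

    # One image per layer; environment-specific values injected at run time
    version: "3"
    services:
      web:                                   # MVC client layer -> its own image
        image: myapp/web:1.0
        ports:
          - "80:80"
        environment:
          - SERVICES_URL=http://api:8080     # differs per environment; the image does not
      api:                                   # services layer -> its own image
        image: myapp/services:1.0
        environment:
          - DB_HOST=db
      db:                                    # the database is just another container
        image: postgres:12
        environment:
          - POSTGRES_PASSWORD=example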
I hope I am not very far off with my assumptions.
I find it easy to find resources on Amazon or online for learning new frameworks and programming languages. However, when it comes to web app deployment, I find it really hard to find a single resource that explains it all. Most of them just tell you what to do without explaining why.
I just did my second Rails app deployment with Unicorn and Nginx. Although I "kind of" knew what was going on while reading the tutorial, I would like to know more about the how and why of the steps that were given.
In short, I would like to read books/articles/tutorials (hopefully all-in-one) about best practices for web app deployment and for scaling/load-balancing your app when it gets more traffic (and hopefully the tutorial uses an Nginx/Unicorn setup).
Well, deployment and scaling are completely orthogonal topics, so I don't think you'll find an article/tutorial that covers both.
As for deployment, you had better automate your work. When you deploy to 100+ servers, the last thing you want is to do it manually (fetch from git, migrate the DB, restart servers, etc.). A few tools worth looking at:
Capistrano - a very well-known and widely adopted tool for deploying Rails.
Vlad - another tool that has been gaining attention lately.
Chef - a very powerful tool for server management. Several top Russian sites use it.
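To give a flavor of what the automation looks like, here is a minimal Capistrano recipe (v2-era syntax; every name and host below is a placeholder):

    # config/deploy.rb
    set :application, "myapp"
    set :repository,  "git@example.com:me/myapp.git"
    set :scm, :git
    set :deploy_to, "/var/www/myapp"

    role :web, "web1.example.com", "web2.example.com"   # nginx/unicorn hosts
    role :app, "web1.example.com", "web2.example.com"
    role :db,  "db1.example.com", :primary => true

    # then, from your machine:
    #   cap deploy:setup   (once)
    #   cap deploy         (every release)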
I come from a traditional programming background, by which I mean C, Java, C#, C++, and a little Python and VBA.
Now I'm trying to create a small CRUD application that takes a form and turns it into an online form and database for later querying.
My initial thinking leads me to Ruby on Rails, given that there is a lot of good material about it on the internet. My greatest strength is that I pick up languages fairly quickly, so the fact that I had never seen Ruby code until 10 minutes ago is no big deal. Having said that, I'm looking for cheap infrastructure to a) host an SVN repository and b) run a web server that lets me develop, learn, and eventually deploy.
In short:
1) Where can I go for this cheap infrastructure for the purposes of learning and eventual deployment?
2) Where should I go for infrastructure to host an SVN repository? (I haven't coded in a while, but I will be needing this for a multitude of things and am not in a position to run it off a home desktop/server.)
3) If you don't agree with my Ruby on Rails conclusion, what would you recommend and why?
Is SVN mandatory? If you don't mind trying something like git, you can get a small application up and running using Heroku in no time. And for free.
There might be a few stumbling blocks at first (getting your local development environment all set up), but you'll be able to get going fairly quickly.
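Roughly, the whole flow looks like this once the heroku gem is installed (the app name is generated for you):

    $ cd myapp
    $ git init
    $ git add . && git commit -m "initial commit"
    $ heroku create              # provisions the app and adds a 'heroku' git remote
    $ git push heroku master     # deploys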
Unfuddle has great SVN and Git hosting for free; you actually get private repos with SSH.
If you haven't given Git a try, I would. SVN was the first source control I used, but after messing with Git a little I immediately switched; it is SO much faster.