Server Development Tool? - windows-services

For my programming tasks I use about 2-3 remote servers to deploy and run my code under different conditions. This cannot be emulated locally, as the server configuration requires powerful hardware. Most of the time I need to stop the service, update the binaries, start the service, view the logs in real time, and download logs. Currently I'm doing this manually, and over time it has become a real pain in the ass, especially because the environment is not ideal in terms of network bandwidth, reliability, etc.
I just wonder whether other server programmers have similar problems and how you deal with them. Any special tools/hints/secrets?

Sounds like you could use Windows PowerShell scripting to automate a lot of the manual tasks you are performing now.
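For example, the stop/update/start/tail cycle can be collapsed into one script. A minimal sketch, not a drop-in solution: the server name, service name, and paths below are placeholders, and it assumes PowerShell Remoting is enabled on the server and that you have admin rights there.

    # Placeholders - replace with your own server, service, and paths
    $server  = "app-server-01"
    $service = "MyAppService"

    # Stop the service, push the new binaries over the admin share, start it again
    Invoke-Command -ComputerName $server -ScriptBlock { Stop-Service -Name $using:service }
    Copy-Item -Path ".\build\*" -Destination "\\$server\c$\Services\MyApp" -Recurse -Force
    Invoke-Command -ComputerName $server -ScriptBlock { Start-Service -Name $using:service }

    # Follow the service log in real time (like tail -f)
    Invoke-Command -ComputerName $server -ScriptBlock {
        Get-Content "C:\Services\MyApp\logs\app.log" -Wait -Tail 50
    }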

Unit testing and mocking help. Instead of having to run all the big, powerful stuff all the time, you can save that for a nightly build-and-smoke or CI server job (most of the time).
Similarly, using a distributed CI server like Hudson or Buildbot helps you script the distribution and testing across all the machines.

Related

Example of a web development local and production environment setup and workflow

I am working on moving our existing websites from shared hosting to a VPS, after which they will be redeveloped and improved using Laravel. My background is not software development; I do, however, have a decent understanding of web development (enough to make a blog, a CMS, etc.), BUT I have never worked in a web dev team, so I don't know how things should be done "properly".
Locally I have always used XAMPP and remotely it has always been as basic as publishing files via Filezilla.
Now I am required to:
Use version control - changes to the website should be reviewed by a second (non-technical) person before going live
Develop a system based on Laravel
What I am struggling to understand is how Ubuntu Server, Git, Docker, Kubernetes, NGINX, etc. all work together. Basically, I don't know what "the big picture" looks like or what a decent workflow should look like.
So far I have manually installed all the software necessary to run Laravel on the VPS (the LAMP stack), but soon after I started to run into problems (libraries that are enabled locally are not enabled remotely). It has also become clear that software updates and differences between my local environment and the remote (production) environment will make the issues worse over time.
Can someone explain, in VERY general terms, how things should fit together so that my setup is resilient, robust, and scalable? For example:
Install docker on the server and on your computer
Download such and such image
Connect Git in such and such a way
Enable unattended-upgrades
The more I read the more I get confused.
What I would like is a simple guide/idea on how things should be done properly.

I am working on creating a baseline developer setup that they can simply "plug and play". What would be the best option: a VM, containers, or something else?

I am trying to find the best way to achieve the following scenario.
I am currently working on a complex enterprise web application that consists of:
DB
BPM Engine
SOA Engine
Reporting Engine
Web Application Server
IDE
The application currently runs in non-prod and prod environments, but each environment is independent (no infrastructure as code, and deployments go from dev -> ... -> prod).
When a new developer comes in, they can't run the system on their local machine, as it involves too many components (I will come back to this later). So they do development on their local machine and, to test, they need to publish and deploy to dev. Test, rinse, and repeat.
I am currently reverse engineering the whole thing so I can get it working on my local machine, provided that I can install and run all the components. I am nearly there after fiddling with a lot of configuration, settings, etc.
I would like others to be able to use this work, so they can also run the project on their local machines. In fact, since we will be migrating soon, I would like to package the whole thing in a way that lets me deploy it anywhere (with the app already working and configured) and somehow parametrise whether it is DEV, SYS, UAT, or PROD. This, according to my understanding, is what a Docker image would do for you, correct? You do all the work and then you create an image out of it? Then you can have this image running in a container, and that way other people can "reuse" your work?
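For illustration, the build-once, run-anywhere workflow described above boils down to something like the following (the image name and environment variable are made up):

    # Build an image once, with the app and its configuration baked in
    docker build -t myapp:1.0 .
    # Run that same image in any environment, passing the environment name in
    docker run -e APP_ENV=UAT myapp:1.0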
Is this the correct way of doing it? Any hints/comments would be appreciated.
Apologies for my writing.

Why is it inadvisable to run Jenkins on the same computer one develops on?

I have read four tutorials about getting started with Jenkins, and whilst they say it is possible to run Jenkins on the same computer one develops on, they all also recommend installing it on a separate one, most commonly a Mac Mini. However, I only own a MacBook Pro, am short on cash, and am currently the only person contributing to my iOS projects (I want to learn Jenkins for future client work). So it would be better for me, for now, to use my MacBook for both purposes.
Whilst I appreciate this is somewhat a matter of opinion, I am wondering what the reason is for the recommended separation, and whether I might be able to run Jenkins on the MacBook for now.
Thank you for reading.
The reason it is advised to have a master server and a number of slave servers is only valid in a company (or big team) environment: build jobs can be CPU and memory intensive, and often many developers start jobs on the server. In cases like that, one machine (acting as both master and slave at once) will be slow. Not only will the jobs take longer to finish, but even the web interface may become unresponsive.
For learning the basic configuration steps, one machine is totally enough, and you can even run your builds with your Jenkins instance.
I'm not entirely sure what the reason for that is in those tutorials; however, I can suggest an easy way to get started with Jenkins for free (that's how I usually run Jenkins for personal use). You can create a free account with one of the cloud providers like AWS, GCP or Azure and run Jenkins there. For example, AWS offers a one-year free tier account where you can spin up some small servers at no cost. There are many tutorials online, like this one, which will show you step by step how to get started with Jenkins on AWS. Here are some high-level steps:
Create a free account in AWS (or any other cloud provider)
Spin up an EC2 instance - it can be any Linux version or Windows, whatever you are more comfortable with
SSH or RDP to the instance and install Jenkins - there are exact installation steps for every flavor of OS out there (a rough sketch for Ubuntu is shown below)
Once the installation is complete, you will be able to access Jenkins in your browser - in the case of AWS, that is the public IP of the server on the default port 8080
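As an illustration of the install step, on an Ubuntu instance it comes down to roughly the following. Treat it as a sketch rather than gospel: the repository key URL and required Java version change over time, so check the official Jenkins installation docs for the current commands.

    # Jenkins needs a Java runtime
    sudo apt-get update && sudo apt-get install -y openjdk-17-jre
    # Add the Jenkins apt repository and its signing key
    curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
    echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
    # Install and start Jenkins (it listens on port 8080 by default)
    sudo apt-get update && sudo apt-get install -y jenkins
    # Remember to open port 8080 in the instance's security group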

Ruby development environment (OS X vs. Ubuntu)

I'm a developer who uses RoR-CoffeeScript-Sass-Passenger-Apache. We use EC2 for our deployment and we have MacBook Airs for development. While the Rails community is very Mac-friendly, because of the difference between the dev and prod deployment stacks, I'm developing in VirtualBox+Ubuntu while my peers develop natively on OS X.
Developing natively on OS X adds more problems, as we have further dependencies in the stack (Solr, Beanstalk, MongoDB and more) that work well on Ubuntu.
I'm looking for suggestions on how Rails developers using Mac and Amazon EC2 can have their dev and prod environment setup.
I would also like feedback on the use of Vagrant for distributing development environments for this use case.
A common practice would be to replicate your stack as a "staging" environment. With EC2, you can just create AMIs of your existing machines and duplicate them, turning them on only to test deploys, and run your tests to make sure everything is working properly before deploying to production. Or you may wish to leave it on permanently so developers can quickly deploy updates or patches to test as need be.
Doing it this way ensures that you have an exact replica of your production system to test before rolling out, thereby eliminating any (catastrophic) issues pertaining to the deploy sneaking out into production.
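For a concrete flavour, that AMI round trip can be scripted. The sketch below uses the modern AWS CLI with made-up instance and image IDs, so treat it as an outline rather than a recipe.

    # Snapshot an existing machine as an AMI
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "staging-$(date +%Y%m%d)"
    # Launch a staging copy from that AMI only when you need to test a deploy
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m5.large --count 1
    # Terminate it again once the tests have passed
    aws ec2 terminate-instances --instance-ids i-0aaaabbbbccccdddd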
Our team has been developing on Macs and deploying to Ubuntu on EC2 for three years now with very few issues. Several things have helped make this a smooth process:
We can run the entire app stack** on a Mac. Between MacPorts, Homebrew, and building from source when necessary, we have managed to get every piece of technology that we run in prod working on our dev boxes. The way the pieces are configured and fit together is different locally (in prod, for example, we auto-discover our memcached instance, whereas locally it's hard-coded), but every integration can be tested on Macs first before going to prod.
Our continuous build system runs on the same setup as our prod boxes. This means that if you check in some code that depends on some piece of local magic, it's discovered quickly.
We run a soak (some people call this staging or integ) stack that is configured identically to production. This sometimes causes some development overhead but has so many benefits that it's well worth it. All code goes through this stack before being pushed to prod.
This setup has worked well enough that over time we've allowed more parts of the setup to drift apart. We used to run passenger locally (like we do in prod) but now use Pow. We regularly experiment with new ruby versions in development for some time before upgrading the rest of the stack.
I've had to develop using a virtualized environment for other projects (OS X + CentOS in VirtualBox) and definitely found it more painful than all-native. For one, it felt like managing two machines instead of one. Everything also felt sloooooowww.
If there's a piece of the stack that is painful to run on the Mac, I would definitely prefer to take the hit of either a) spending the time of getting it working locally or b) abstracting that piece away, rather than pay the tax of dealing with a virtual environment.
** I'm only including the Rails app and direct dependencies in this discussion. For example, we use puppet to configure our EC2 fleet, but don't run it on our dev boxes.

Proper continuous integration and continuous deployment with Git and Heroku

I am developing a Ruby on Rails website using Heroku and Git.
What tools and features should I use to set up the following simple development process?
CODE > CHECK-IN > AUTO TEST > AUTO DEPLOY
I check my code into my repository (preferred option: hosted Git like GitHub)
Tests are automatically run AND the website is deployed to my staging Heroku app
If the tests pass, the website is automatically deployed to my production Heroku app
If the tests fail, I want to be notified somehow.
How would you do this?
CircleCI offers exactly what you need. We'll run your tests on every push, deploy if they pass (to Heroku or using Capistrano/Fabric/anything, really), and send you notifications if they fail.
To preface, I am one of the founders of Codeship (https://codeship.io), which is a service that supports exactly this.
But more on topic: basically, there are two different ways I think this could be implemented (please keep in mind that all branch names I use are arbitrary and could be named totally differently):
staging/production in one go
Whenever you push to your master or a specific deploy branch, you run your tests; if all of them pass, you first deploy to your staging app and run separate tests there (Selenium or Sauce Labs is great for that), and if that works out, including migrations, you push to your production app.
This is great, as the latest version is always available in production, and we have used this cycle for a long time now. It works great for us. The downside is that pushing to a staging Heroku app takes some time, and if you want to run the migrations against a copy of your production data, it takes even more. It's not an eternity, but it does take a couple of minutes.
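Hand-rolled, that cycle is roughly the steps below; the CI service automates them for you, the app names are made up, and the commands are the standard Heroku git-push deploy and migration ones.

    # Deploy to the staging app and migrate it
    git push https://git.heroku.com/my-staging-app.git master
    heroku run rake db:migrate --app my-staging-app
    # Run the Selenium/acceptance tests against staging, and only if they pass:
    git push https://git.heroku.com/my-production-app.git master
    heroku run rake db:migrate --app my-production-app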
staging/production as separate steps
You could have separate staging/production branches which are deployed to the respective Heroku applications. This has the advantage of being faster, and you can control when to release certain parts. It works especially well for applications where you want external feedback before deploying to production.
We support all of that at Railsonfire, but we are currently working on a new version of our service which is way better. We integrate really well with Heroku, so you don't have to think about it (but you still have the option to do it yourself in any way you want).
We use Integrity. It is a pretty simple solution - it won't do everything under the sun, but it's quite easy to set up and handles the most common use cases/features. It's also pretty easy to hack on, if you want it to do more.
Integrity states: "Heroku is the easiest and fastest way to deploy Integrity." However, it also notes: "It isn't possible to access repositories over SSH on Heroku."
This is because your Integrity app will need an SSH key. It's not impossible, but there are definitely a few hoops to jump through. You'll need to give Integrity a private key, put it in the app, and then hack Integrity to use that SSH key when it initiates the git clone.
Of the things you listed, automatic deploy is probably the one most people would not expect their CI server to do (and Integrity does not provide it out of the box). You'll need to configure Git to use that SSH key and initiate a git push from the proper location (the checked-out repository).
Unfortunately I don't know the details of how to do this--we actually run Integrity on a VPS.
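For a rough idea of the shape of that deploy step, here is a hypothetical sketch run from the checked-out repository; the deploy-key path and app name are made up, it assumes a Git version that understands GIT_SSH_COMMAND, and it uses Heroku's old SSH-style remote.

    # Tell git which private key to use for this push
    export GIT_SSH_COMMAND="ssh -i /home/integrity/.ssh/deploy_key -o IdentitiesOnly=yes"
    # Push the checked-out build to the production app
    cd /var/integrity/builds/myapp && git push git@heroku.com:my-production-app.git HEAD:master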
There are many tools on the market that do this. SnapCI offers deployment pipelines that let you push every commit through tests and then on into staging and production as different stages of a deployment pipeline. We also have full support for test parallelization, building branches, and pull requests.
Well, there is Hudson, which provides a Git plugin as well as scripting support. The rest is configuration, I would guess.
Hudson: http://hudson.dev.java.net
Try Heroku-Bartender. A write-up here.
