Cannot visit Public IP of gitlab-aws-image

I followed https://about.gitlab.com/aws/ and I could not visit the "Public IP" of the AWS image; it said "This site can’t be reached". So I ssh'd into the instance and found there was no /etc/gitlab/gitlab.rb file, so I created one, pasted in the contents of https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template, and replaced external_url 'GENERATED_EXTERNAL_URL' with the public IP. It still doesn't work. Any tips?
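For reference, the edit was roughly this (with a placeholder IP here):

    # /etc/gitlab/gitlab.rb
    external_url 'http://203.0.113.10'   # my instance's public IP in place of GENERATED_EXTERNAL_URL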
Also, https://about.gitlab.com/aws/ says you should use a c4.large instance, but that sounds expensive -- can I just use a t2.micro?
I am used to GitHub, so I never worried about losing files, but now that I'm hosting it myself: what is the professional way to back up (e.g., in case the EC2 instance crashes)? Through S3, by following http://docs.gitlab.com/omnibus/settings/backups.html?
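From what I can tell from that backups page, the S3 piece is a short block in /etc/gitlab/gitlab.rb plus a rake task -- roughly this (bucket, region, and keys are placeholders):

    # /etc/gitlab/gitlab.rb
    gitlab_rails['backup_upload_connection'] = {
      'provider' => 'AWS',
      'region' => 'us-east-1',
      'aws_access_key_id' => 'AKIA...',
      'aws_secret_access_key' => '...'
    }
    gitlab_rails['backup_upload_remote_directory'] = 'my.gitlab.backups'

    # then take a backup (it gets uploaded to the bucket):
    sudo gitlab-rake gitlab:backup:create

Is that the right idea?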
Finally, the reason I need to host my own GitLab is that I need to run pre-receive git hooks. Is there an easier way to run pre-receive hooks without subscribing to an expensive enterprise service?
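For context, the kind of hook I mean is just an executable script sitting in the bare repository on the server; with an omnibus install it apparently lives somewhere like /var/opt/gitlab/git-data/repositories/<group>/<project>.git/custom_hooks/pre-receive. A trivial sketch:

    #!/bin/sh
    # pre-receive: stdin carries one "<old-sha> <new-sha> <refname>" line per pushed ref;
    # exiting non-zero rejects the whole push.
    while read oldrev newrev refname; do
        echo "received push to $refname" >&2
    done
    exit 0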

I believe https://about.gitlab.com/aws/ is broken. It's better to set up a default Ubuntu instance from Amazon (you can pick t2.medium or c4.large) and then just follow the installation instructions on gitlab.com for that version of Ubuntu. It's just four steps (don't do the "from source" install).
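For reference, the Omnibus install on Ubuntu is roughly the following (check gitlab.com for the exact current commands; this is a sketch from memory):

    sudo apt-get install -y curl openssh-server ca-certificates
    curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
    sudo apt-get install -y gitlab-ce
    # set external_url 'http://<your-public-ip>' in /etc/gitlab/gitlab.rb, then:
    sudo gitlab-ctl reconfigure

Note that editing /etc/gitlab/gitlab.rb does nothing until you run gitlab-ctl reconfigure, and if the site still can't be reached afterwards, check that the instance's security group allows inbound traffic on port 80.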

Related

Easy to set up docker-compose hosting

I am trying to find Docker hosting that is easy to set up. What I have is a private git repository with an application that I can get running locally just by checking it out and running docker-compose up -d. I am not looking for a production-ready solution at the moment, just for a way to get it running somewhere so that a few potential customers can see the progress, play with the app a little, and suggest improvements. So: any service where it is not too much hassle to get the app running and accessible from the web.
Solution 1
You could use play-with-docker, a free online Docker environment accessible via the web. The docker-compose tool is also available there. The only downside is that the environment expires after 4 hours. Another similar free online service is Katacoda.
Solution 2
Create an AWS account and deploy a Linux VM in the free tier. The free tier lets you run a VM with limited resources for one year.
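A rough sketch of what the setup on such a VM could look like (assuming Ubuntu; the repository URL is a placeholder):

    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose
    git clone https://example.com/you/your-app.git app
    cd app
    sudo docker-compose up -d

Remember to open the relevant port in the VM's security group so the app is reachable from the web.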
Solution 3
Prepare a VirtualBox VM with everything needed to run your application.
If you need them, I can provide further details about any of the above solutions.

Replicating Dev environment from one EC2 machine to another

So, I have been running into this problem constantly --
I usually spin up EC2 machines temporarily for running some benchmarks or small projects and then shut them down when the work is done.
However, every time I spin up a new machine, I have to set up my environment all over again.
Here are the things I want to be configured easily:
I have custom .vimrc, .tmux.conf, and .zshrc files that need to be set up
I have to re-install all the basic packages on Ubuntu
I have to re-install all the vim plugins
Sometimes I have to partition/format the hard disk and do other sysadmin work.
I've used Docker before, but I find Docker to be more invasive than what I need: it's additional software that I have to run, and I have to mount filesystems, set up an extra networking bridge, configure SSH in and out of it, and so on. So I would like to avoid Docker if possible for this.
I think Vagrant has similar problems.
I am wondering if I should just create an EC2 AMI for this. Is that the best solution to this problem?
Thanks!
Just dump your configuration to an S3 bucket and fetch it back later. You can also create init scripts on your machines to install things as you like (or fetch data directly as needed). You can get even more advanced, but I guess that's not needed.
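For example (bucket name is a placeholder):

    # save once, from a configured machine:
    aws s3 sync ~/ s3://my-dotfiles-bucket/dotfiles \
        --exclude "*" --include ".vimrc" --include ".tmux.conf" --include ".zshrc"

    # restore on a fresh machine:
    aws s3 sync s3://my-dotfiles-bucket/dotfiles ~/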
There are a few options you could use.
One option is to create a startup script that installs everything you like. You can then launch new machines that are fully configured. However, it takes a bit of work to get the script right.
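A minimal sketch of such a user-data script (the package list and dotfiles repo are assumptions):

    #!/bin/bash
    # EC2 user data: runs as root on first boot
    apt-get update
    apt-get install -y git vim tmux zsh
    # fetch dotfiles for the default ubuntu user from your own repo
    sudo -u ubuntu git clone https://example.com/you/dotfiles.git /home/ubuntu/dotfiles
    sudo -u ubuntu cp /home/ubuntu/dotfiles/.vimrc \
        /home/ubuntu/dotfiles/.tmux.conf \
        /home/ubuntu/dotfiles/.zshrc /home/ubuntu/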
Another option is to Stop the instance when not in use and Start it again later. You won't be charged for EC2 compute while it is stopped, but you will be charged for the EBS volume storage.
Or, you could create an AMI of the instance, then launch a new instance later from the AMI. This can be slightly cheaper because the AMI only keeps the storage blocks that are in-use and AMI/snapshot storage is cheaper than EBS storage.
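With the AWS CLI that flow looks roughly like this (all IDs are placeholders):

    # create an AMI from the configured instance
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-dev-env"

    # later, launch a fresh copy from it
    aws ec2 run-instances --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro --key-name my-key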

Docker and SSH for development with phpStorm

I am trying to set up a small development environment using Docker. The PhpStorm team is working hard on getting Docker integrated as a remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the one in this post, say it is not recommended. Others say to have a dedicated SSH Docker container, which I don't see how to fit into this environment.
I am already creating a docker-user user (check the repo here) for certain tasks, like running composer without root permissions. That user could easily be used for this SSH setup by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public ssh key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
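In a Dockerfile that could look roughly like this (it assumes a Debian-based image, that docker-user already exists as in the question, and that id_rsa.pub sits next to the Dockerfile):

    RUN apt-get update && apt-get install -y openssh-server sudo \
        && mkdir -p /var/run/sshd /home/docker-user/.ssh
    COPY id_rsa.pub /home/docker-user/.ssh/authorized_keys
    RUN chown -R docker-user:docker-user /home/docker-user/.ssh \
        && chmod 600 /home/docker-user/.ssh/authorized_keys \
        && echo 'docker-user ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
    # plus starting sshd in the container, e.g. CMD ["/usr/sbin/sshd", "-D"]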
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.
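As a sketch, that gateway might be nothing more than an sshd ForceCommand on the Docker host (the user and container names here are hypothetical):

    # /etc/ssh/sshd_config on the host
    Match User dev-gateway
        ForceCommand docker exec -i my-dev-container /bin/bash

Every SSH login as dev-gateway would then land directly in the container without the image running its own SSH daemon, though getting a proper TTY for interactive use may need extra care.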

How can I share my full app (.yml file) with others?

I created an app which consists of many components so I use docker-compose.
I published all my images into my private repository (but I also use public repos from other providers).
If I have many customers: how can they receive my full app?
I could send them my docker-compose.yml file by email, or, if I have access to the servers, I could scp the .yml file over.
But is there another solution to provide my full app without scp'ing a yml file?
Edit:
So I just read about docker-machine. This looks good, and I already linked it with an Azure subscription.
Now, what's the easiest way to deploy a new VM with my Docker application? Do I still have to scp my .yml file, ssh into the machine, and start docker-compose? Or can I point to a specific .yml during VM creation and have it run automatically?
There is no official distribution system specifically for Compose files, but there are many options.
The easiest option would be to host the Compose file on a website. You could even use GitHub or GitHub Pages. Once it is hosted by an HTTP server, you can curl it to download it.
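For example:

    curl -o docker-compose.yml https://example.com/myapp/docker-compose.yml
    docker-compose up -d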
There is also:
composehub, a community project that acts as a package manager for Compose files
Some related issues: #1597, #3098, #1818
The experimental DAB feature in Docker
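As for the docker-machine question in your edit: you don't need to scp the file. docker-compose reads your local .yml and talks to whatever Docker engine your environment points at, so once the VM exists you can deploy straight from your own machine; roughly (names are placeholders):

    docker-machine create --driver azure --azure-subscription-id <your-sub-id> my-app-vm
    eval $(docker-machine env my-app-vm)   # point the local docker client at the VM
    docker-compose up -d                   # runs on the VM, using your local .yml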

Use docker to migrate a system

I have an AWS EC2 account where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container system that takes a snapshot of a filesystem. Now, for some reason, I am forced to switch accounts, and I have opened a new AWS EC2 account. Can I use Docker to set up a container on my old virtual system, then take an image and deploy it on my new system? That way I can avoid the headache of having to install many components and configure nginx and all the applications on the new system. Can I do that? If so, how?
According to Docker best practices and its CaaS model, images are not supposed to "virtualize" a whole lot of services; quite the contrary. Docker does not aim to take a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (still unclear) question is "Can I virtualize my whole system into one image?", the answer is no.
What you can do is use one image per service (you'll find everything you need on Docker Hub) to keep the new system clean.
Another solution would be to list all the Linux packages installed on your old system, install them on the new one, and copy over all the configuration files.
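On Debian/Ubuntu, a sketch of that second approach:

    # on the old instance: record the installed packages
    dpkg --get-selections > packages.list

    # on the new instance: replay the selection
    sudo dpkg --set-selections < packages.list
    sudo apt-get -y dselect-upgrade

    # then copy the configuration across, e.g.:
    scp -r old-host:/etc/nginx ./nginx-config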
