How to install new software onto a GCP flexible environment VM - ruby-on-rails

I have a Ruby on Rails app running on a Google Cloud Platform VM in the App Engine flexible environment. It looks like most of the software is installed on the VM when I deploy the app with gcloud --project project-name preview app deploy; from reading the temporary Dockerfile it creates, I think that's where Rails and the other software get installed. It gets the info for that Dockerfile from the app.yaml file (I got this setup by following their tutorials).
This was working fine for me, but now I need to install ImageMagick on the server to manipulate images on the site. Normally you would do this by running sudo apt-get install imagemagick from the project directory. When I SSH onto the VM I can't find the project directory, so that doesn't work.
I have no idea how to get it to run sudo apt-get install imagemagick each time I make a new deploy to the site so the software is present on the new VM.
As you can probably tell, I'm not very good with the server side of things, and I want to know how I'm supposed to get new software onto the VM the right way so it's always there, like Ruby and Rails etc., each time I make a new deploy.

You should use a custom Dockerfile when you need additional configuration.
To install ImageMagick you have to set runtime: custom in your app.yaml, create a Dockerfile based on the default one, and add the following lines:
RUN apt-get update && \
    apt-get install -y imagemagick
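For illustration, a minimal custom Dockerfile might look like the sketch below. The FROM line is an assumption here; copy it from the Dockerfile generated by your previous deploy, since the Ruby base image name can vary:
FROM gcr.io/google_appengine/ruby
# install system packages early, before the app code is copied in
RUN apt-get update && \
    apt-get install -y imagemagick && \
    rm -rf /var/lib/apt/lists/*
# the rest of the generated Dockerfile (COPY, bundle install, entrypoint) follows here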

Related

Trying to log in to JIRA in Docker container and getting SSL: WRONG_SIGNATURE_TYPE error

I'm new to Docker and I'm not very experienced in SSL/certificates etc.
I'm working on a web application that lets the user log in to JIRA via JIRA API to do things. This works on my computer and I don't get any SSL errors. However, when I run it in a Docker container I get
ssl.SSLError: [SSL: WRONG_SIGNATURE_TYPE] wrong signature type error.
My friend ran the exact same Dockerfile on his computer and created a container, and it works, which is confusing.
I checked the requests library version on my computer and compared it to the one in Docker container but they are the same. What could be the problem? Thank you
Note: I use Windows
I faced a similar problem.
I found the solution here.
Just to sum up:
There are dependencies missing inside the container that your own system already has. You should install them inside Docker.
I'm not sure exactly why, but the pyopenssl library needs to be installed too.
So, you need to add:
RUN apt-get update \
 && apt-get install -y openssl ca-certificates
in your Dockerfile, and add:
pyopenssl==19.1.0
to your requirements.txt.
If you don't use a requirements.txt, just add:
RUN pip install pyopenssl
to your Dockerfile.
Hope it helps!
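Putting the pieces together, a minimal sketch of the relevant Dockerfile lines; the base image and the pinned pyopenssl version are illustrative assumptions, not the asker's actual setup:
FROM python:3.8-slim
RUN apt-get update \
 && apt-get install -y openssl ca-certificates \
 && rm -rf /var/lib/apt/lists/*
# either pin pyopenssl in requirements.txt or install it directly:
RUN pip install pyopenssl==19.1.0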
A similar issue happened to me: Python TLS requests were working fine on my host but failed with WRONG_SIGNATURE_TYPE once I dockerized my script.
The issue seems to stem from the requests lib using an older TLS stack with OpenSSL 1.1.1pre9.
I noticed that rolling back to python:3.5.3 stopped the error; however, for compatibility with newer versions, the solution posted in the GitHub issue thread worked fine.
i.e. import the TLSAdapter, use it to set up a requests session, then start making requests.

Would the Dockerfile apt-get cache cause non-identical Docker containers?

I am reading the Dockerfile documentation.
I saw it mention that Docker uses a build cache to speed up the build process.
The documentation recommends that if you RUN apt-get update, you merge it with the following package install, such as RUN apt-get update && apt-get install curl, to avoid installing outdated packages because of the cache.
I am wondering what happens if I download the same Dockerfile but build the Docker image on different computers at different times.
Because of the local cache on each computer, there is still a chance they build different Docker containers even though they run the same Dockerfile.
I haven't encountered this problem; I just wonder whether this is possible and how to prevent it.
Thanks.
Debian APT repositories are external resources that change regularly, so if you docker build on a different machine (or repeat a docker build --no-cache on the same machine) you can get different package versions.
On the one hand, this is hard to avoid. Both the Debian and Ubuntu repositories promptly delete old versions of packages: the reason to apt-get update and install in the same RUN command is that yesterday's package index can reference package files that no longer exist in today's repository. In principle you could work around this by manually downloading every .deb file you need and manually dpkg --install them, skipping the networked APT layer.
On the other, this usually doesn't matter. Once you're using a released version of Debian or Ubuntu, package updates tend to be limited to security updates and bug fixes; you won't get a different major version of a package on one system vs. another. This isn't something I've seen raised as an issue, except that having a cached apt-get update layer can cause you to miss a security update you might have wanted.
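If you want fresh package indexes on a rebuild (for example to pick up a pending security update), you can bypass the layer cache entirely; the image tag here is just a placeholder:
docker build --no-cache -t myimage .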
Only a built Docker image is unchangeable. To ensure that the Dockerfile generates the same image every time, you need to pin the exact software versions in your install commands.
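As a sketch, pinning looks like this in a Dockerfile. The version string is purely illustrative, and note that pins tend to break over time because, as described above, the Debian/Ubuntu repositories delete superseded package versions:
RUN apt-get update && \
    apt-get install -y curl=7.74.0-1.3+deb11u7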

Can I Install Docker Over cPanel?

Can I install Docker on a server with cPanel and CentOS 7 pre-installed? Since I'm not familiar with Docker, I'm not completely sure whether it will mess with cPanel or not. I already have a server with CentOS 7 and cPanel configured, and I want to know if I can install Docker on top of this configuration without messing anything up.
Yes, you can install Docker on cPanel/WHM just like on any other CentOS server or virtual machine.
Just follow these simple steps (as root):
1) yum install -y yum-utils device-mapper-persistent-data lvm2 (these should already be installed...)
2) yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3) yum install docker-ce
4) enable docker at boot (systemctl enable docker)
5) start docker service (systemctl start docker)
The guide above is for CentOS 7.x. Don't expect to find any references or options related to Docker in the WHM interface; you will be able to control Docker via the command line from an SSH shell.
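Once the service is running, a quick sanity check from the shell (assuming the server can reach Docker Hub) is:
docker run --rm hello-world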
I have some docker containers already running on my cPanel/WHM server and I have no issues with them. I basically use them for caching, proxying and other similar stuff.
And as long as you follow these instructions, you won't mess up any of your cPanel/WHM services/settings or current cPanel accounts/settings/sites/emails etc.
Not sure why you haven't tried this already!
I've been doing research and working on getting Docker working on cPanel. It's not just getting it to work on a CentOS 7 box but rather making it palatable for the cPanel crowd in the form of a plugin. So far I can confirm that it's absolutely doable. Here's what I've accomplished and how:
- Integrate Docker Compose with cPanel (which is somewhat a step further from WHM)
- Leverage the user-namespace kernel feature in Linux so Docker services can't escalate their privileges (see userns remap)
- Leverage Docker Compose so users can build complex services and start ready apps from the store with a click
- Make sure services started via Docker run on a non-public IP on the server. Everything gets routed via ProxyPass
cPanel has been gracious to provide a Slack channel for people to discuss this upcoming plugin. I'd be more than happy to invite you if you'd like to be kept updated or to contribute. Let me know!
FYI, there's more info at https://www.unixy.net/docker if you're interested. Please note that this plugin is in private beta, but I'm more than happy to let people use it!
Yes, you can; in fact, someone else has done it already: https://github.com/mirhosting/cPanel-docker

How to install docker-engine using docker binary without internet connection

I have downloaded the Docker binary, version 1.8.2, and copied it to my backup server (a CentOS server) which doesn't have internet connectivity. I have marked it as executable and started the Docker daemon as described in https://docs.docker.com/engine/installation/binaries/. But it doesn't seem to get installed as a Docker service. For every command, I have to run sudo ./docker-1.8.2 {command}. Is there a way to install docker-engine as a service? Currently sudo docker version shows command not found. I'm a newbie to Docker setup. Please advise.
Why not download the RPM package (there are also CentOS 6 packages), copy it to a USB stick and then to your server, and simply install it with the rpm command? That way you'd get the same installation as if you had run yum.
Of course you may have some dependencies missing, but you could download all of these as well.
Firstly, if you're downloading bare binaries on an enterprise Linux distribution, you're probably doing things in a very bad way. Immediately, you're breaking updates and consistency, and leaving your system in a risky, messy state.
Try using yumdownloader --resolve to get the Docker package and anything it needs.
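As a rough sketch of that approach (the package name is an assumption and depends on which repository you configure; for a 1.8.x install it was typically docker-engine from the Docker repo):
# on a CentOS machine with internet access and the Docker repo enabled
yumdownloader --resolve docker-engine
# copy the downloaded .rpm files to the offline server, then install them there:
sudo yum localinstall *.rpm
sudo systemctl enable docker && sudo systemctl start docker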
A better option may be to mirror the installation artifacts and grab them from the local mirror, but that's beyond the scope of this answer if you don't already do this.

Deploying Rails to Elastic Beanstalk, nodejs?

I want to host my app on a VPC EC2 instance with AWS, and this line in the documentation has me somewhat confused:
Install nodejs to allow the Rails server to run locally:
$ sudo apt-get install nodejs
I was just wondering why I need nodejs at all, let alone to allow the Rails server to run locally, since I thought that this was already handled by WEBrick.
Here's a link to the documentation in question:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html
Rails uses a JavaScript runtime for certain tasks (generating scaffolds and compiling templates, among others). You do not need to use nodejs; you can add other runtimes as gems in your Gemfile, e.g. therubyracer or therubyrhino. (YMMV on EC2; if the documentation suggests using nodejs, I would install it unless you have a good reason not to.)
The title of the question is unclear. Elastic Beanstalk uses EC2 instances, but you should never SSH directly into an EC2 server to make changes.
Elastic Beanstalk will destroy and create EC2 instances to scale with the web traffic coming to the application. Making changes to one particular instance does not guarantee the changes will be applied to all of the EC2 instances belonging to an Elastic Beanstalk application.
Also, that particular instance can be destroyed when deploying, rebuilding, or when the app scales back down.
If you're using the Ruby platform on Elastic Beanstalk, you'll need to use its built-in EB extensions (.ebextensions) to either run a command that installs Node or install it with the yum package manager.
Here's documentation that describes the yum method:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-packages
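As a sketch, a file such as .ebextensions/packages.config in your application source would look like the following. The file name is arbitrary, and whether a nodejs package is actually available from yum depends on the platform AMI and its enabled repositories:
packages:
  yum:
    nodejs: []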
