Jelastic - Zero downtime deployment with PM2

Is it possible not to stop a Node.js app when updating its source code from Git?
Currently Jelastic stops the server before fetching files from Git:
Stopping nodejs server:
[PM2] Applying action deleteProcessId on app [all](ids: 0,1)
[PM2] [app](0) ✓
[PM2] [app](1) ✓
[PM2] [v] All Applications Stopped
[PM2] [v] PM2 Daemon Stopped
[ OK ]
Is it possible not to stop the app? I'd like to call "pm2 reload app" instead of stopping it.
I tried to find the script responsible by using grep. Unfortunately, no files contain the "Stopping nodejs server:" line.

The Node.js service is stopped before the update from Git because pulling all the changes and installing the updated dependencies can take plenty of time, and the behavior of the application would be unpredictable during these operations.
Please also note that the update from Git is performed only if the last commit ID on the remote differs from the last local commit ID when the update process is triggered on Jelastic - in other words, there are no pulls and stop-starts if there are no changes on the remote.
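If the zero-downtime behavior matters more to you than safety during the pull, one workaround is to script the update yourself (for example over SSH) instead of relying on the built-in updater. A minimal sketch, assuming a PM2 process named "app" and a typical webroot path - both are assumptions, adjust to your environment:
cd /var/www/webroot/ROOT      # application directory (path is an assumption)
git pull origin master        # fetch the new code without stopping the app
npm install --production      # update dependencies in place
pm2 reload app                # graceful zero-downtime reload of all instances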

Related

How to work efficiently locally with Kubernetes/Docker?

I am new to Docker and I just made my first test with Kubernetes locally (with Minikube), and it sounds promising!
Now I would like to know how we are supposed to work with these tools efficiently when working on the code.
With Docker, I wasn't very satisfied with the process, but it wasn't so bad:
I make a change in the code
I stop the container
I rebuild the image
I run the image again
I guess there are ways/tools to avoid executing all these steps manually, but I was thinking about diving into that later.
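For instance, the loop above can be collapsed into a single shell line; the image name, container name, and port are illustrative:
# rebuild, remove the old container, start a new one
docker build -t myapp:dev . && docker rm -f myapp && docker run -d --name myapp -p 8080:8080 myapp:dev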
But now that I work with Kubernetes/Minikube, here is what the development process looks like (spelled out as commands below):
I make a change in the code
I delete the pod
I rebuild the image
I save it as a tar archive, then load it into Minikube
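Spelled out as shell commands (image, tag, and label names are illustrative), that loop is roughly:
docker build -t myapp:dev .          # rebuild the image
docker save myapp:dev -o myapp.tar   # export it as a tar archive
eval $(minikube docker-env)          # point the docker CLI at minikube's daemon
docker load -i myapp.tar             # load the archive inside minikube
kubectl delete pod -l app=myapp      # assuming a Deployment, the pod is recreated with the new image
# note: with docker-env active you could build directly inside minikube and skip the save/load step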
Executing all of these steps every time we make a change in the code slows down productivity significantly.
Is there a way to optimize/automate this process for every change we make to the code?
There are a bunch of third-party tools out there to help with this, such as Draft and gitkube.
I personally use Draft, which creates a Heroku-like workflow that makes pushing new applications much easier.
Using Draft with Minikube is fairly straightforward:
# enable some plugins
minikube addons enable registry
minikube addons enable ingress
# install helm
# depends on your workstation, I have a mac so:
brew install kubernetes-helm
# configure helm on your minikube
helm init
# install draft
brew tap azure/draft
brew install draft
draft init --auto-accept --ingress-enabled
# from your repo do:
draft create --app myapp
# run your app
draft up
More resources:
https://medium.com/@vittore/draft-on-k8s-part1-e5b046857df4
https://radu-matei.com/blog/real-world-draft/

Continuous deployment using LFTP gets "stuck" temporarily after about 10 files

I am using GitLab Community Edition and a GitLab Runner CI setup to deploy (synchronize) a bunch of JSON files to a server using LFTP. This job, however, seems to "freeze" for a few minutes roughly every 10 files. Having to synchronize roughly 400 files sometimes, this job simply crashes because it can take more than an hour to complete. The JSON files are all 1KB. Neither the source nor the target server should have any firewall rate-limiting the FTP. Both are hosted at OVH.
The following LFTP command is executed in order to synchronize everything:
lftp -v -c "set sftp:auto-confirm true; open sftp://$DEVELOPMENT_DEPLOY_USER:$DEVELOPMENT_DEPLOY_PASSWORD@$DEVELOPMENT_DEPLOY_HOST:$DEVELOPMENT_DEPLOY_PORT; mirror -Rev ./configuration_files configuration/configuration_files --exclude .* --exclude .*/ --include ./*.json"
The job is run in Docker, using this container to deploy everything. What could cause this?
For those of you coming from Google: we had the exact same setup. The way to get LFTP to stop hanging when running in Docker or some other CI is to use this command:
lftp -c "set net:timeout 5; set net:max-retries 2; set net:reconnect-interval-base 5; set ftp:ssl-force yes; set ftp:ssl-protect-data true; open -u $USERNAME,$PASSWORD $HOST; mirror dist / -Renv --parallel=10"
This does several things:
It makes it so lftp won't wait forever or get into a continuous loop when it can't execute a command. This should speed things along.
It makes sure we are using SSL/TLS. If you don't need this, remove those options.
It synchronizes one folder to the new location. The options -Renv are explained here: https://lftp.yar.ru/lftp-man.html
Lastly, in the GitLab CI I set the job to retry if it fails. This will spin up a new Docker instance that gets around any open-file or connection limitations. The above LFTP command will run again, but since we are using the -n flag it will only move over the files that were missed on the first attempt. This gets everything moved over without hassle. You can read more about CI job retries here: https://docs.gitlab.com/ee/ci/yaml/#retry
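For reference, the retry setting is just a keyword on the job in .gitlab-ci.yml; a minimal sketch, with the job name and script as placeholders:
deploy:
  script:
    - lftp -c "set net:timeout 5; ...; mirror dist / -Renv --parallel=10"
  retry: 2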
Have you looked at using rsync instead? I'm fairly sure you would benefit from its incremental copying of files, as opposed to copying the entire set over each time.
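A minimal sketch of an equivalent rsync invocation over SSH, reusing the variable names from the question (the remote paths are placeholders):
# mirror only the JSON files; unchanged files are skipped automatically
rsync -avz -e "ssh -p $DEVELOPMENT_DEPLOY_PORT" \
  --include='*/' --include='*.json' --exclude='*' \
  ./configuration_files/ \
  "$DEVELOPMENT_DEPLOY_USER@$DEVELOPMENT_DEPLOY_HOST:configuration/configuration_files/"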

Geth 1.8.3, can't sync to rinkeby.io - stuck at: IPC endpoint opened

I can't seem to get the latest version of Geth (1.8.3) working. It stops and never starts syncing.
I'm trying to get it onto the rinkeby.io testnet for smart contract testing.
I had success with 1.7.3 after downloading the files and using ADD to copy them into the image. But I would like to automate the build so that it can be used in Google Cloud with a deployment.yml.
Currently I'm testing on my local machine, since I know the firewall worked with 1.7.3 (no specific rules set).
The Dockerfile builds just fine and I can see that the node shows up in the rinkeby.io node list, but even after 1 hour not a single block has been synced.
It's stuck on: IPC endpoint opened
With 1.7.3 it took 10-15 seconds to start the sync.
Dockerfile
# ----- 1st stage build -----
FROM golang:1.9-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git curl
WORKDIR /
RUN curl -o rinkeby.json https://www.rinkeby.io/rinkeby.json
RUN git clone https://github.com/ethereum/go-ethereum.git
RUN cd /go-ethereum && make geth
# ----- 2nd stage build -----
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder rinkeby.json $HOME/rinkeby.json
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
VOLUME [ "/root/.rinkeby/geth/lightchaindata" ]
EXPOSE 8545 8546 30303 30304 30303/udp 30304/udp
RUN geth --datadir=$HOME/.rinkeby init rinkeby.json
CMD ["sh", "-c", "geth --networkid=4 --datadir=$HOME/.rinkeby --rpcaddr 0.0.0.0 --syncmode=fast --ethstats='Oxy:Respect my authoritah!#stats.rinkeby.io' --bootnodes=enode://a24ac7c5484ef4ed0c5eb2d36620ba4e4aa13b8c84684e1b4aab0cebea2ae45cb4d375b77eab56516d34bfbd3c1a833fc51296ff084b770b94fb9028c4d25ccf#52.169.42.101:30303?discport=30304"]
Output when I run the Docker image:
Console:
INFO [04-04|13:14:10] Maximum peer count ETH=25 LES=0 total=25
INFO [04-04|13:14:10] Starting peer-to-peer node instance=Geth/v1.8.4-unstable-6ab9f0a1/linux-amd64/go1.9.2
INFO [04-04|13:14:10] Allocated cache and file handles database=/root/.rinkeby/geth/chaindata cache=768 handles=1024
WARN [04-04|13:14:10] Upgrading database to use lookup entries
INFO [04-04|13:14:10] Database deduplication successful deduped=0
INFO [04-04|13:14:10] Initialised chain configuration config="{ChainID: 4 Homestead: 1 DAO: <nil> DAOSupport: false EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: 1035301 Constantinople: <nil> Engine: clique}"
INFO [04-04|13:14:10] Initialising Ethereum protocol versions="[63 62]" network=4
INFO [04-04|13:14:10] Loaded most recent local header number=0 hash=6341fd…67e177 td=1
INFO [04-04|13:14:10] Loaded most recent local full block number=0 hash=6341fd…67e177 td=1
INFO [04-04|13:14:10] Loaded most recent local fast block number=0 hash=6341fd…67e177 td=1
INFO [04-04|13:14:10] Regenerated local transaction journal transactions=0 accounts=0
INFO [04-04|13:14:10] Starting P2P networking
INFO [04-04|13:14:12] UDP listener up self=enode://6d27f79b944aa75787213835ff512b03ec51434b2508a12735bb365210e57b0084795e5275150974cb976525811c65a49b756ac069ca78e4bd6929ea4d609b65@[::]:30303
INFO [04-04|13:14:12] Stats daemon started
INFO [04-04|13:14:12] RLPx listener up self=enode://6d27f79b944aa75787213835ff512b03ec51434b2508a12735bb365210e57b0084795e5275150974cb976525811c65a49b756ac069ca78e4bd6929ea4d609b65@[::]:30303
INFO [04-04|13:14:12] IPC endpoint opened url=/root/.rinkeby/geth.ipc
I did find out what was wrong after rebuilding the Dockerfile from scratch.
At the end of my CMD line there is this ending:
?discport=30304
By removing it, everything works as intended.
This was a remnant from another user's code.
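With the suffix removed, the --bootnodes value from the CMD line above simply ends at the TCP port:
--bootnodes=enode://a24ac7c5484ef4ed0c5eb2d36620ba4e4aa13b8c84684e1b4aab0cebea2ae45cb4d375b77eab56516d34bfbd3c1a833fc51296ff084b770b94fb9028c4d25ccf@52.169.42.101:30303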
For me, the problem was that the fast sync kept running for hours - it takes forever and never stops on its own.
I ran this command:
geth --rinkeby --syncmode "fast" --rpc --rpcapi db,eth,net,web3,personal --cache=1024 --rpcport 8545 --rpcaddr 127.0.0.1 --rpccorsdomain "*"
It kept consuming storage while running (over 5GB), and sometimes I got an error: No space left on device. I deleted everything and started over.
When I ran eth.syncing, here's what I got:
https://imgur.com/a/fj2fvpy (after 3 hours of sync)
https://imgur.com/a/SqrnW8x (30 minutes after the first eth.syncing check)
I tried 100 * eth.syncing.currentBlock / eth.syncing.highestBlock and got 99.9969...; it keeps slowing down and decreasing and never reached 99.9970.
I am running Geth/v1.8.4-stable-2423ae01/linux-amd64/go1.10 on Ubuntu 16.04 LTS.
After going round and round, I discovered that using the --bootnodes flag is the simplest way to get your private peers to recognize each other across a network:
Geth.exe --bootnodes "enode://A@B:30303" --datadir="C:\Ledger" --nat=extip:192.168.0.9 ...
When spinning up your IPC pipe, specify a node that is likely to always be up (likely your first init'd node). You also need to be very explicit about IP addresses using the --nat flag. In the example above, A is the node ID and B is the IP address of the node that will be used to auto-detect all other nodes and build peer connections.
I derived this approach using information found in this link on the Geth GitHub site.

How to kill a uWSGI process

So I have finally gotten nginx + uWSGI running successfully for my Django install.
However, the problem I am having now is that when I make changes to the code, I need to restart the uWSGI process to view my changes.
I feel like I am running the correct commands here (I am very new to Linux as well, btw):
uwsgi --stop /var/run/uwsgi.pid
uwsgi --reload /var/run/uwsgi.pid
I get no errors when I run these commands, however my old code is still what loads.
I also know it's not a coding issue, because I ran my Django app in its development server and everything worked fine.
The recommended way to signal reloading of application data is to use the --touch-reload option. Sample syntax in a .ini file is:
touch-reload = /var/run/uwsgi/app/myapp/reload
where myapp is your application name. /var/run/uwsgi/app is the recommended place for such files (it could be anywhere). The reload file is an empty file whose timestamp is watched by uWSGI; whenever it changes (for example, by using touch), uWSGI detects the change and restarts the corresponding application instance.
So, whenever you update your code, you should touch that file in order to update the in-memory version of the application. For example, in bash:
sudo touch /var/run/uwsgi/app/myapp/reload
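For context, a minimal sketch of a matching myapp.ini; the module, socket path, and process count are illustrative:
[uwsgi]
; illustrative Django WSGI entry point
module = myproject.wsgi:application
master = true
processes = 4
socket = /var/run/uwsgi/app/myapp/socket
; uWSGI restarts this instance whenever the file's timestamp changes
touch-reload = /var/run/uwsgi/app/myapp/reload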
Note that --reload is an undocumented option in the current uWSGI version.

GitlabHQ - W denied for rails

At work I've been tasked with setting up our Git server with a front end, and I found GitlabHQ, which looks amazing.
I've installed it all semi-successfully, but I cannot push my repos at all, even though it tells me I need to push them.
Since I've never used GitLabHQ before, first:
You should push repository to proceed.
After push you will be able to browse code, commits etc.
Is this normal when adding projects?
And every time I run
git push -u origin master
I get this:
W access for focus DENIED to rails
(Or there may be no repository at the given path. Did you spell it correctly?)
fatal: The remote end hung up unexpectedly
Is anyone able to help, since I can't expect the team to keep SSH'ing?
Thanks.
EDIT:
Server: Ubuntu Server 11.10, fully updated. I followed these instructions: https://github.com/gitlabhq/gitlabhq/wiki/V2.0-easy-setup-for-ubuntu
This was fixed by re-running the install (it must have failed silently the first time) and killing the process once it had started with:
lsof -i :3000
kill -9 <PID returned by the command above>
Then I re-ran the bundle (the command differs depending on whether you run in production); I use:
bundle exec rails s -e production -d
