Re-bundle on changes with Docker or PM2

So I've deployed an app to a remote server. The app is quite large, so I've made some production configs for bundling the client files with webpack.
Currently I'm running the server with docker-compose, and in the server container I'm running pm2-docker (as there are some workers that need to be run).
To bundle the client, I'm just using a command like npm run build, so I'm wondering how I could rebuild the client whenever I push changes to the remote server.
To boil it down: whenever there are changes, run npm run build.
What would be the best way to do this? Can you run some kind of command specified in the PM2 .yml file or in the docker-compose .yml file?
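I was picturing something along these lines in the PM2 process file (an untested sketch on my part; the app names and script paths are made up, and passing --watch through assumes the build script invokes webpack):
apps:
  - name: server
    script: ./server.js
  - name: client-build
    script: npm
    args: "run build -- --watch"
    autorestart: false
With webpack running in watch mode as its own PM2 app, any changed files that land on the server would trigger a rebuild without extra hooks.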
Thankful for any help!

Related

How to set up dockerized binaries in VSCode

I have learned to use Docker as a development server (LAMP and MEAN), and now I feel I should take the next step: removing the PHP and Node binaries from my system and using the binaries from containers instead. So on a fresh Solus install, I set up containers for PHP, Node, Ruby, etc. Solus already recommends using containers for such tasks. But I got stuck on the first day.
I installed VS Code (Code-OSS) and installed extensions on it (Prettier, PHPCS, etc.), and they need the paths of the installed binaries (path/to/phpcs, path/to/node, etc.).
I initially set the configured binary path to
docker run -it --rm herloct/phpcs phpcs
based on https://gist.github.com/barraq/e7f85262bc7a0af2d8d8884d27b62d2c, but using a more up-to-date container. It didn't work, so I set it up as an alias, thinking that would fool VSCode into treating it as a native command, but that didn't work either. I have confirmed that running those commands directly from the terminal does work, but the VSCode PHP IntelliSense extension does not want to cooperate.
Any suggestions?
P.S. Any tips on keeping a container running in the background to avoid the boot-up delay every time I use PHPCS or javac from a container? I can keep the LAMP server running, but every time I invoke the terminal tools, a new container spins up to execute the command and is then killed, causing a delay for boot-up and teardown.
In case it is still relevant to someone: You might want to create a VS Code development container to use dockerized binaries.
For this to work, a .devcontainer.json is required, which could be as simple as:
{
"image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-12"
}
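As for the P.S. about boot-up delay: a common trick is to keep one long-lived container running and exec your tools into it (a sketch; the container name is arbitrary, and --entrypoint tail assumes the image's default entrypoint has to be bypassed):
docker run -d --name phpcs-daemon --entrypoint tail herloct/phpcs -f /dev/null
docker exec phpcs-daemon phpcs /path/to/file.php
The tail -f /dev/null keeps the container alive indefinitely, so each docker exec reuses the warm container instead of booting and tearing down a new one.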

First run of Docker -- Running makeitopen.com's F8 App

I'm reading through makeitopen.com and want to run the F8 app.
The instructions say to install the following dependencies:
Yarn
Watchman
Docker
Docker Compose
I've run brew install on all of these, and none appeared to indicate that any of them had already been installed. I have not done any config or setup or anything on any of these new packages.
The next step is to run yarn server and here's what I got from that:
$ docker-compose up
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
error Command failed with exit code 1.
Not having any experience with any of these packages, I don't really know what to do (googling brings up so many different scenarios). What do I do now?
P.S. Usually when I work with React Native I run npm start to start the Expo-ready app, but the F8 project doesn't respond to npm start.
UPDATE (sort of):
I ran docker-compose up, which appeared to run all the Docker tasks, and I'm assuming the server is running (although I haven't tried yarn server again).
I continued with the instructions, installing dependencies with yarn (which did throw some errors; quite a few, actually, but also a lot of successes).
I then ran yarn ios, and after I put the Facebook SDK in the right folder on my computer, the Xcode project opened.
The Xcode build failed. Surprise, right? It did make it through a lot of the tasks, but it can't find FBSDKShareKit/FBSDKShareKit.h (although that file does appear to exist in FBSDKShareKit/Headers/).
Any thoughts? Is there any way in the world I can just run this in expo?
If docker and docker-compose are installed properly, you either need root privileges or need to add yourself to the docker group:
usermod -aG docker your-username
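Note that the new group membership only takes effect in a new login session; to pick it up in the current shell, you can run:
newgrp docker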
Keep in mind that all members of the docker group effectively have root access on the host system. It's recommended to add only trusted users and to take precautionary measures against abuse, but that is another topic.
If Docker is not working properly, check whether its daemon is running and maybe restart the service:
# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2019-02-28 19:41:47 CET; 3 weeks 3 days ago
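If it is not active, restarting the service (on a systemd-based distribution) looks like this:
# systemctl restart docker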
Then create the container again using docker-compose up.
Why a simple npm start doesn't work
The package.json file shows that the relevant scripts exist, but the server script simply runs docker-compose up. Looking at the docker-compose.yml file, we see that it creates five containers: its Mongo database as well as GraphQL and the frontend/backend. Without Docker, it wouldn't be possible to set up that many services so quickly; you'd have to install and configure each of them manually.
In the end, your system can get cluttered with software when you play around with different stacks or develop for multiple open-source projects. Docker is a great way to deploy modern applications while keeping them flexible and isolated. It's worth getting started with the technology.
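For illustration only (this is not the F8 app's actual file; the service names and images are made up), a docker-compose.yml that wires several services together looks like this:
version: "3"
services:
  mongo:
    image: mongo
  graphql:
    build: ./server
    ports:
      - "8080:8080"
    depends_on:
      - mongo
A single docker-compose up then builds, creates, and starts all of the declared services in dependency order.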

Starting Rails server issue - Jenkins Pipeline

We are starting to integrate our build and automated testing process into a Jenkins pipeline, and I have an issue with starting the Rails server.
First of all, some context: our pipeline runs a Servers Configuration stage (with steps Config 0, 1, and 2) followed by a QA stage.
In each Config step (0, 1, 2), I start a different Rails app on a specific port using rails s -p XXXX -d, and right after that command I run lsof -i:XXXX and I DO see the server running.
But in the QA stage, I want to use the servers I started in the Servers Configuration stage, and I get connection refused in our tests; also, when I access the machines the apps ran on, I no longer see them running, even though I used -d to daemonize them.
Any ideas? It seems like the Rails servers ran only for the duration of the Servers Configuration stage and were then closed. Is that possible? And if so, how do I handle them?
Thanks!
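One likely culprit: Jenkins' ProcessTreeKiller kills every process spawned by a build step once that step finishes, which matches daemonized servers disappearing between stages. The usual workaround in a pipeline is to change the JENKINS_NODE_COOKIE environment variable before starting the server, so Jenkins no longer associates the process with the step (a sketch; the port placeholder is kept from the question):
sh 'JENKINS_NODE_COOKIE=dontKillMe rails s -p XXXX -d'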

Building an iOS app with Fastlane inside Docker

I'm trying to streamline my iOS development builds and read about Docker.
If I understood it right, I could create an image that would include all the dependencies and my fellow devs could just pull it and build inside it.
The point now is: does this also work with Fastlane (which I think uses the Xcode CLI tools) and Docker for Mac?
Also, I'm using React Native, which seems to start a second process for bundling the JavaScript that will be included in the native build later, and I read that Docker only allows one process. Is this a problem?
The problem with using Docker is that even if you use Docker for Mac, you won't have access to macOS-based images. Docker runs in a lightweight virtual machine called xhyve (at least if you install Docker via the Docker for Mac package) that runs Linux on your Mac.
Essentially, this means your Docker container is going to be limited to non-Xcode functionality. Here's what you definitely won't be able to do, at least not without a non-trivial amount of work:
Compile your app's native code
Take screenshots of your app or run your app in the Simulator
Sign the finished app with Apple's codesign
Here are things you could potentially use your Docker container for:
Building the JS code (I assume, since RN should work on Linux)
Uploading your app with iTMSTransporter (i.e. using fastlane's deliver)
Downloading/Creating certificates, provisioning profiles and push certificates (i.e. fastlane's match, cert, pem and sigh)
Working with git
All in all, you're probably going to be very limited. Instead, it would be advisable to use things like a Gemfile and a Brewfile to list all your dependencies, and to have a small setup.sh script that runs brew bundle and bundle install to install them on your colleagues' machines. You can also set it up to run those during the build (with Xcode's script build phases), so that no one can accidentally forget to install something that the build needs.
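A minimal sketch of such a setup.sh, assuming a Brewfile and a Gemfile at the repository root:
#!/bin/sh
# Install the system dependencies listed in the Brewfile
brew bundle
# Install the Ruby gems pinned in the Gemfile
bundle install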
That being said, there is a fastlane docker image that is being worked on here that is also available on the Docker Hub. Note that it has only ever been tested to run the fastlane tests (that don't depend on macOS-only software), so it doesn't actually claim to run fastlane reliably.
"I read Docker only allows one process"
Docker allows multiple processes; it just doesn't allow more than one main process. If your main process stops, everything else stops and the container stops with it. If you just want to use the container to install dependencies so that you can run one-off commands that use them, rather than hosting a long-running service, you can always do that with docker run:
docker run <repo/image:tag> <your_command>
Or launch an interactive shell into the container:
docker run -it <repo/image:tag> /bin/bash

Use RubyMine and Docker for development

I'm trying to develop a Rails project without having to install Ruby and all the server tools on my local Windows machine. I've created my Docker containers (Ruby and MySQL) and installed the Docker plugin in RubyMine 2016.1; however, it doesn't seem very practical for daily development use, I mean the develop, run, debug cycle that happens just before deployment to the test server.
Am I missing something that would make this workflow possible? Or is Docker not suggested for this step of the development process?
I don't develop under Windows, but here's how I handle this problem under Mac OS X. First off, my Rails project has a Guardfile set up that launches Rails (guard-rails gem) and also runs my tests whenever I make changes (guard-minitest gem). That's important for fast turnaround time in development.
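A sketch of what that Guardfile can look like (the gems are the ones named above; the watch patterns are assumptions):
guard 'rails', port: 3000 do
  watch('Gemfile.lock')
  watch(%r{^(config|lib)/.*})
end

guard :minitest do
  watch(%r{^test/(.*)_test\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
end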
I launch Docker daemonized, mounting a local directory into the container, exposing port 3000, and running a never-ending command:
docker run -d -v {local Rails root}:/home/{railsapp} -p 3000:3000 {image id} tail -f /dev/null
I do this so I can connect to it with an arbitrary number of shells, to do any activities I can only do locally.
Ruby 2.2.5, Rails 5, and a bunch of Unix developer tools (heroku toolbelt, gcc et al.) are installed in the container. I don't set up a separate database container, as I use SQLite3 for development and pg for production (heroku). Eventually when my database use gets more complicated, I'll need to set that up, but until then it works very well to get off the ground.
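The corresponding config/database.yml split is the standard Rails one (a sketch; Heroku injects DATABASE_URL in production):
development:
  adapter: sqlite3
  database: db/development.sqlite3

production:
  adapter: postgresql
  url: <%= ENV["DATABASE_URL"] %>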
I point RubyMine to the local rails root. With this, any changes are immediately reflected in the container. In another command line, I spin up ($ is host, # is container):
$ docker exec -it {container id} /bin/bash
# cd /home/{railsapp}
# bundle install
# bundle exec rake db:migrate
# bundle exec guard
bundle install is needed only when I've made Gemfile changes, or the first time.
bundle exec rake db:migrate is needed only when I've made DB changes, or the first time.
At this point I typically have a Rails instance that I can browse to at localhost:3000, and the RubyMine project is 'synchronized' to the Docker image. I then mostly make my changes in RubyMine, ignoring messages about not having various gems installed, etc., and focus on keeping my tests running cleanly as I develop.
For handling a console when I get exceptions, I need to add:
config.web_console.whitelisted_ips = ['172.16.0.0/12', '192.168.0.0/16']
to config/environments/development.rb in order for it to allow a web debug console when exceptions happen in development. (The 192.168/* might not be necessary in all cases, but some folks have run into problems that require it.)
I still can't debug using RubyMine, but I don't miss it anywhere near as much as I thought I would, especially with web consoles being available. Plus it allows me to run all the cool tools completely in the development environment, and not pollute my host system at all.
I spent a day or so trying to get the remote debugger to work, but the core problem appears to be that (the way ruby-debug works) you need to allow the debugging process, in the Docker container, to 'reach out' to the host's port to connect and send debugging information. Unfortunately, binding ports puts them 'in use', so you can't create a 'listen only' connection from the host/RubyMine to a specific container port. I believe it's just a limitation of Docker at present; a change either in the way Docker handles networking or in the way the ruby-debug-ide command transmits debugging information would help fix it.
The upshot of this approach is that it allows me very fast turnaround time for testing, and equally fast turnaround time for in-browser development. This is optimal for new app development, but might not work as well if you have a large, old, crufty, and less-tested codebase.
Most/all of these capabilities should be present in the Windows version of Docker, as well.
