Debugging a Rails app running in Docker with Vagrant - ruby-on-rails

I'm trying to figure out the best development workflow with Vagrant and Docker running a Rails app. In my Dockerfile I have this:
FROM quirky/rails:latest
RUN mkdir /opt/app
WORKDIR /opt/app
# Install gems
ADD ./Gemfile /opt/app/Gemfile
ADD ./Gemfile.lock /opt/app/Gemfile.lock
RUN bundle install
# Install npm packages
ADD ./package.json /opt/app/package.json
RUN npm install
# Expose directories and ports
VOLUME /opt/app
EXPOSE 3000
# Run the web server
WORKDIR /opt/app
CMD rm -f /opt/app/tmp/pids/server.pid && bundle exec rails s
My Vagrantfile looks like this:
config.vm.define "app" do |app|
app.vm.provider "docker" do |d|
d.build_dir = "."
d.link "db:db"
d.link "redis:redis"
d.link "solr:solr"
d.volumes = ["/app:/opt/app"]
d.ports = ["3000:3000"]
d.vagrant_vagrantfile = "./docker/Vagrantfile"
d.remains_running = true
end
end
config.vm.define "db" do |db|
db.vm.provider "docker" do |d|
d.image = "paintedfox/postgresql"
d.name = "db"
d.env = {USER: "vagrant", PASS: "password"}
d.vagrant_vagrantfile = "./docker/Vagrantfile"
end
end
config.vm.define "redis" do |redis|
redis.vm.provider "docker" do |d|
d.image = "dockerfile/redis"
d.name = "redis"
d.ports = ["6379:6379"]
d.vagrant_vagrantfile = "./docker/Vagrantfile"
end
end
config.vm.define "solr" do |solr|
solr.vm.provider "docker" do |d|
d.image = "quirky/solr"
d.name = "solr"
d.ports = ["8080:8080"]
d.vagrant_vagrantfile = "./docker/Vagrantfile"
end
end
Typically, if I want to debug something, I just stick a debugger statement in the code; since I'm running it as a local process, it hits the breakpoint and brings up pry (or whatever the debugger console is). How does this work inside a container inside Vagrant?
This is how I start my dev environment:
vagrant up app --provider=docker
It launches it in the background. There doesn't appear to be a way to launch it and attach to it. Am I missing a command or a flag I can pass to Vagrant?

You are looking for docker exec or nsenter. With one of these tools you can get a shell in the container without SSH and check your logs.
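For example (a rough sketch; run these on the machine where the Docker daemon lives, and substitute whatever name docker ps shows for the app container):
docker ps                                    # find the name/ID Vagrant gave the container
docker exec -it <container-name> /bin/bash   # open an interactive shell inside it
docker logs -f <container-name>              # or just follow the app's stdout/stderr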
If you want to debug Vagrant creating and running the Docker container, you can append the --debug flag like so:
vagrant up app --provider=docker --debug
But this won't give you any debug info from your Vagrantfile directly. If you still want to get debug messages out of your Vagrantfile, I recommend reading about Vagrant's UI class.
PS: Maybe you simply want puts statements, like so: puts "I'm here!"?
PPS: If you want to stick with Vagrant and SSH, setting the has_ssh value and running an SSH server in the container is the way to go.

Have you tried the has_ssh option for the Vagrant Docker provider? It states that:
If true, then Vagrant will support SSH with the container. This allows vagrant ssh to work, provisioners, etc. This defaults to false
As an aside, I haven't tried this myself. I'm using Docker with a CoreOS image and running docker containers manually (with provisioning).
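For what it's worth, a rough sketch of how that would look for the app container from the question, assuming d.has_ssh = true is added to its provider block and the image actually runs an SSH daemon:
vagrant up app --provider=docker   # build and start the container
vagrant ssh app                    # then SSH into it like any other Vagrant machine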

Related

Installing gerrit plugin in docker container

I'm running the gerritcodereview/gerrit docker container. Gerrit is installed within the /var/gerrit directory in the container. But when I try to install plugins by docker cp'ing a plugin .jar file, downloaded from https://gerrit-ci.gerritforge.com/job/plugin-its-jira-bazel-stable-2.16/, into the /var/gerrit/plugins directory, the plugins do not show up in the list of installed plugins, even though I restarted the container.
I ran gerrit with:
docker run -ti -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
And Gerrit is accessible via:
http://localhost:8080/admin/plugins
I also have a list of plugins in the plugin manager (http://localhost:8080/plugins/plugin-manager/static/index.html), but I don't know how to add more plugins to that list; I have tried using the gerrit-ci.gerritforge.com URL in [httpd].
My gerrit.config file looks like this:
[gerrit]
    basePath = git
    serverId = 62b710a2-3947-4e96-a196-6b3fb1f6bc2c
    canonicalWebUrl = http://10033a3fe5b7
[database]
    type = h2
    database = db/ReviewDB
[index]
    type = LUCENE
[auth]
    type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
    smtpServer = localhost
[sshd]
    listenAddress = *:29418
[httpd]
    listenUrl = http://*:8080/
    filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
    firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
    directory = cache
[plugins]
    allowRemoteAdmin = true
[container]
    javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
    javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
    user = gerrit
    javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
    javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
    enableSignedPush = false
[noteDb "changes"]
    autoMigrate = true
I am pretty sure that Gerrit runs from /var/gerrit, even for your version, as that is the version I used before.
Why don't you use docker-compose together with a custom Dockerfile? This way you can easily recreate your image and don't need to worry about adding plugins again after you upgrade your version.
I would suggest that you play around with these scripts and use them for your testing.
This is what my Dockerfile looks like for my previous 2.16 installation:
FROM gerritcodereview/gerrit:2.16.8
# Add custom plugins that are not downloaded from the web
COPY ./plugins/* /var/gerrit/plugins/
# Add logo
COPY ./static/* /var/gerrit/static/
ADD https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-avatars-gravatar-bazel-master-stable-2.16/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/avatars-gravatar/avatars-gravatar.jar /var/gerrit/plugins/
USER root
# Fix any permissions
RUN chown -R gerrit:gerrit /var/gerrit
USER gerrit
ENV CANONICAL_WEB_URL=https://gerrit.mycompany.net/r/
And below the docker-compose.yml
version: '3.4'
services:
  gerrit:
    build: .
    ports:
      - "29418:29418"
      - "8080:8080"
    restart: unless-stopped
    volumes:
      - /external/gerrit2.16/etc:/var/gerrit/etc
      - /external/gerrit2.16/git:/var/gerrit/git
      - /external/gerrit2.16/index:/var/gerrit/index
      - /external/gerrit2.16/cache:/var/gerrit/cache
      - /external/gerrit2.16/logs:/var/gerrit/logs
      - /external/gerrit2.16/.ssh:/var/gerrit/.ssh
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war init --install-all-plugins -d /var/gerrit
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war reindex -d /var/gerrit
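With those two files in place, picking up a newly added plugin jar is just a rebuild plus recreate (a sketch; run from the directory containing the Dockerfile and docker-compose.yml):
docker-compose up -d --build   # rebuild the image (copying in new plugins) and recreate the container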
Finally, I found a way that works for my use case:
Copy the content of your public key and add it in the web UI under your profile's SSH settings (for my_gerrit_admin_username).
Add the key to ssh-agent:
eval `ssh-agent`
ssh-add .ssh/id_rsa
From a terminal outside the container, run:
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin install -n its-base.jar https://gerrit-ci.gerritforge.com/job/plugin-its-base-bazel-stable-2.16/lastSuccessfulBuild/artifact/bazel-bin/plugins/its-base/its-base.jar
Check in the web browser that the plugin shows up among the installed plugins.
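You should also be able to verify from the command line (same admin user and SSH port as above) by listing the installed plugins over Gerrit's SSH interface:
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin ls   # lists currently installed plugins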

dockerd not running on nixos

I installed docker on nixos, using:
nix-env -i docker
after that, dockerd was not running, so I started the daemon manually with:
dockerd
and in the logs, I see:
WARN[2019-06-26T01:02:31.784701442Z] could not change group /var/run/docker.sock to docker: group docker not found
should I care about this warning?
When installing docker on NixOS, it's best to enable it in the NixOS configuration. Doing so will install docker as a system service.
Snippet for /etc/nixos/configuration.nix:
virtualisation.docker.enable = true;
# ...
users.users.YOU = { # merge this with your unix user definition, "YOU" is for illustration
  isNormalUser = true;
  # ...
  extraGroups = [
    # ...
    "docker"
  ];
};
Enabling the service this way also creates the docker group, which is exactly the group the warning is about (the daemon wants to hand /var/run/docker.sock over to it); adding your user to extraGroups = [ "docker" ] lets you talk to the daemon as that user.
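After editing /etc/nixos/configuration.nix you still have to apply it (and re-login so the new group membership takes effect):
sudo nixos-rebuild switch   # rebuild and activate the configuration, installing the docker service
getent group docker         # the group from the warning should now exist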

Vagrant docker-exec

I am running Vagrant on Mac OS X. I have created following Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/vagrant", disabled: true
config.ssh.insert_key = true
config.vm.provider "docker" do |doc|
doc.image = "httpd"
doc.ports = ["80:80"]
doc.name = 'apache'
doc.remains_running = true
doc.has_ssh = false
end end
It starts, however I can't execute the following command:
vagrant docker-exec -dt apache -- /bin/bash
I have also tried replacing apache with the container ID, but that failed too.
The container is running; I can verify it in VirtualBox.
I can only see that I have vagrant docker-logs and vagrant docker-run, but the Vagrant documentation says that there should be a docker-exec.
Any ideas?
-i / --interactive is required if you want a bash shell you can type in.
-d / --detach will not work for typing either, as the process will be started in the background.
Use vagrant docker-exec -it apache -- /bin/bash
Yep, that is correct. I also learned that you need to run vagrant list-commands to see this docker-exec command. Thank you.
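For reference, that command just dumps every subcommand the installed plugins and providers expose, including the Docker-specific ones:
vagrant list-commands   # shows docker-exec, docker-logs, docker-run, etc.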

Vagrant docker provider: create and start vs run

I'm new to Vagrant, using 1.7.4 with VirtualBox 5.0.10 on Windows 7, and trying to figure out how to get it to set up and run Docker containers the way I'd like, which is like so:
1. Start my docker host, which is already provisioned with the latest docker tools and boots with the cadvisor container started - I get this box from the publicly available williamyeh/ubuntu-trusty64-docker
2. If (for example) the mongo container I'd like to use has not been created on the docker host, just create it (don't start it)
3. Else, if the container already exists, start it (don't try to create it)
With my current setup, using the docker provider, after the first use of vagrant up, using vagrant halt followed by vagrant up will produce this error:
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
default: Docker host VM is already ready.
==> default: Warning: When using a remote Docker host, forwarded ports will NOT be
==> default: immediately available on your machine. They will still be forwarded on
==> default: the remote machine, however, so if you have a way to access the remote
==> default: machine, then you should be able to access those ports there. This is
==> default: not an error, it is only an informational message.
==> default: Creating the container...
default: Name: mongo-container
default: Image: mongo
default: Port: 27017:27017
A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.
Command: "docker" "run" "--name" "mongo-container" "-d" "-p" "27017:27017" "-d" "mongo"
Stderr: Error response from daemon: Conflict. The name "mongo-container" is already in use by container 7a436a4a3422. You have to remove (or rename) that container to be able to reuse that name.
Here is the Vagrantfile I'm using for the docker host:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 1.6.0"
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.hostname = "docker-host"
  config.vm.box_check_update = false
  config.ssh.insert_key = false
  config.vm.box = "williamyeh/ubuntu-trusty64-docker"
  config.vm.network "forwarded_port", guest: 27017, host: 27017
  config.vm.synced_folder ".", "/vagrant", disabled: true
end
...and here is the docker provider Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 1.6.0"
VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "../docker-host/Vagrantfile"
    docker.image = "mongo"
    docker.ports = ['27017:27017']
    docker.name = 'mongo-container'
  end
end
Well, I'm not sure what had gotten munged in my environment, but while reconfiguring my setup I deleted and restored my base docker host image, and from that point on, vagrant up, followed by vagrant halt, followed by vagrant up on the docker provider worked exactly as I was expecting it to.
At any rate, it seems this workflow is already supported by Vagrant.
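For anyone who hits the same name conflict again, the error message itself points at the workaround: remove (or rename) the stale container on the Docker host before the next vagrant up. Roughly (container name taken from the output above):
# from the docker host Vagrantfile's directory (or however you reach the host VM):
vagrant ssh
# then, inside the host:
docker rm mongo-container      # remove the stopped container holding the name
# or, if it is still running:
docker rm -f mongo-container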

Environment variables and PHP

I have an Ubuntu server with a handful of custom environment variables set in /etc/environment, as per the Ubuntu community recommendation.
When I use PHP from the command line I can use PHP's getenv() function to access these variables.
Also, if I run phpinfo() from the command line I see all of my variables in the ENVIRONMENT section.
However, when trying to access the same data inside processes run by php5-fpm, it is not available. All I can see in the ENVIRONMENT section of phpinfo() is:
USER www-data
HOME /var/www
I know the command line uses this ini:
/etc/php5/cli/php.ini
And fpm uses:
/etc/php5/fpm/php.ini
I've not managed to find any differences between the two that would explain why the ENV variables are not coming through in both.
Also, if I run:
sudo su www-data
and then echo the environment variables I'm expecting, they are indeed available to the www-data user.
What do I need to do to get my environment variables into the php processes run by fpm?
It turns out that you have to explicitly set the ENV vars in the php-fpm.conf
Here's an example:
[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
[www]
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
chdir = /
env[MY_ENV_VAR_1] = 'value1'
env[MY_ENV_VAR_2] = 'value2'
1. Setting environment variables automatically in php-fpm.conf
clear_env = no
2. Setting environment variables manually in php-fpm.conf
env[MY_ENV_VAR_1] = 'value1'
env[MY_ENV_VAR_2] = 'value2'
Note: both methods are described in php-fpm.conf itself:
; Clear environment in FPM workers
; Prevents arbitrary environment variables from reaching FPM worker processes
; by clearing the environment in workers before env vars specified in this
; pool configuration are added.
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
clear_env = no
; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are
; taken from the current environment.
; Default Value: clean env
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin:/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
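Whichever method you use, php-fpm has to be reloaded afterwards for the pool to pick up the new environment (the exact service name depends on your distribution and PHP version):
sudo service php5-fpm restart   # e.g. on the Ubuntu/php5-fpm setup from the question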
I found a solution in this GitHub discussion.
The problem is that when you run php-fpm, the process does not load the environment.
You can load it in the startup script.
My php-fpm was installed by apt-get.
So modify the
/etc/init.d/php5-fpm
and add (beware the space between the dot and the slash)
. /etc/profile
and modify the /etc/profile to add
. /home/user/env.sh
In env.sh you can export whatever environment variables you need.
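For illustration, env.sh is just a list of exports (the variable names here are the same placeholder ones used in the php-fpm.conf examples above):
#!/bin/sh
# /home/user/env.sh - sourced from /etc/profile as described above
export MY_ENV_VAR_1='value1'
export MY_ENV_VAR_2='value2'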
Then modify
php-fpm.conf
and add env[MY_ENV_VAR_1] = 'value1' under the [www] section.
Lastly, restart php-fpm. You'll get the environment loaded by FPM.
Adding on to the answers above: I was running php-fpm7 and nginx in an alpine:3.8 docker container. The problem I faced was that the env variables of USER myuser were not getting copied over to USER root.
My entrypoint for docker was
sudo nginx # Runs nginx as daemon
sudo php-fpm7 -F -O # Runs php-fpm7 in foreground
The solution for this was
sudo -E nginx
sudo -E php-fpm7 -F -O
The -E option of sudo copies all env variables of the current user over to root.
Of course, your php-fpm.d/www.conf file should have clear_env=no
And FYI, if you're using a daemon service like supervisord, it has its own setting to copy the env; for example, supervisord has a setting called copy_env=True.
