I'm deploying a Rails application using Google App Engine, and it takes a lot of time to reinstall libraries like rbenv, Ruby, etc.
Is there any way to prevent this? I just want to install the new libraries only.
Yeah... we're actively working on making this faster. In the interim, here's how you can speed it up. At the end of the day, all we're really doing with App Engine Flex is creating a Dockerfile for you and then doing a docker build. With Ruby, we try to play some fancy tricks, like letting you tell us what version of rbenv or Ruby you want to run. If you're fine hard-coding all of that, you can just use our base image.
To do that, first open the terminal and cd into the dir with your code. Then run:
gcloud beta app gen-config --custom
Follow along with the prompts. This is going to create a Dockerfile in your CWD. Go ahead and edit that file, and check out what it's doing. In the simplest form, you can delete most of it and end up with something like this:
# Start from the App Engine Ruby base image, which already provides rbenv and Ruby
FROM gcr.io/google_appengine/ruby

# Copy the application code into the image
COPY . /app/

# Install gems
RUN bundle install --deployment && rbenv rehash

# Standard Rails production settings
ENV RACK_ENV=production \
    RAILS_ENV=production \
    RAILS_SERVE_STATIC_FILES=true

# Precompile assets if this looks like a Rails app
RUN if test -d app/assets -a -f config/application.rb; then \
      bundle exec rake assets:precompile; \
    fi

ENTRYPOINT []
CMD bundle exec rackup -p $PORT
Most of the heavy lifting is already done in gcr.io/google_appengine/ruby, so you can essentially just add your code, perform any gem installs you need, and then set the entrypoint. You could also fork our base Docker image and create your own. After you have this file, you should do a build to test it:
docker build -t myapp .
Now go ahead and run it, just to make sure:
docker run -it -p 8080:8080 myapp
Visit http://localhost:8080 to make sure it's all looking good. Now when you run gcloud app deploy the next time, we're going to use this Dockerfile. It should be much, much faster.
Hope this helps!
This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests: basically, just a clean database it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but I fail to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already switched to its user (UID 1000), which doesn't have permission to start the daemon; only the root user can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
&& apt-get install -y git mariadb-client mariadb-server wget \
&& apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and it is chmod a+x'd; that's not the problem):
#! /bin/sh
service mysql start
exec "$#"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described before: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error is "eaten up" and not shown, but basically the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon and/or set this up somehow completely differently (external database?), but can't figure it out. I'm practically new to Docker and especially to Jenkins.
My suggestion is:
Run the docker run command without -u (as root)
Create a jenkins user inside the container (via the Dockerfile)
At the end of the entrypoint.sh, switch to the jenkins user with su - jenkins, as in the sketch below
One disadvantage is that every time you enter the container you will be the root user
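A rough sketch of what that could look like (the user name, UID, and privilege-drop details here are assumptions based on the question; adjust them to your setup). The Dockerfile gains a user:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
    && apt-get install -y git mariadb-client mariadb-server wget \
    && apt-get clean
# create a user matching the UID Jenkins uses on the host (assumed to be 1000)
RUN useradd -m -u 1000 jenkins
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
And the entrypoint starts the daemon as root, then drops privileges:
#!/bin/sh
# still running as root here, so the daemon can start
service mysql start
# switch to the jenkins user for the actual command; note that "$*"
# flattens the argument list, which is fine for simple commands like cat
exec su jenkins -c "$*"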
I have been facing this problem for a long time now. Whenever I try to install anything in Docker during a build that requires an interactive install, the build "hangs" at the interaction screen. For example, for a particular project I needed to install sddm in Docker (yeah, yeah, I know I am stupid). The build simply hangs at the step where I am supposed to select my keyboard layout. How do I go about such problems?
PS: Not all installation scripts are shell scripts that can be modified (like adding -y to apt install sddm).
PS: spawn and echo are not always helpful.
As I didn't completely understand the problem when I posted my first answer, here's another possible solution (a concrete sketch follows the steps):
run the base container
exec into the container with docker exec -it mycontainer bash
install the needed software interactively
create an image from the running container with docker commit mycontainer mytag
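For example (the base image and container names here are made up):
# start the base container and keep it running
docker run -d --name mycontainer ubuntu:22.04 sleep infinity
# get an interactive shell inside it
docker exec -it mycontainer bash
# ... run the interactive installer inside, answer the prompts, exit ...
# snapshot the container's current filesystem as a new image
docker commit mycontainer mytag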
Well, the answer was pretty easy, actually. The environment variable just had to be set properly:
ENV DEBIAN_FRONTEND=noninteractive
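In context, that might look like this (the base image and package are just examples taken from the question):
FROM ubuntu:22.04
# suppress all interactive debconf prompts during the build
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y sddm
One caveat: ENV persists into the running container, so some people prefer ARG DEBIAN_FRONTEND=noninteractive instead, which only applies at build time.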
I developed a few ROS packages and I want to put the packages in a Docker container because installing all the ROS packages all the time is tedious. Therefore I created a Dockerfile that uses a ROS base image, installed all the necessary dependencies, copied my workspace, built the workspace in the Docker container and sourced everything afterward. Here is the Dockerfile:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install -y locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
    && rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch sim_perception.launch"]
The problem is: when I run the Docker container with the run command, Docker doesn't seem to know that I sourced my new ROS workspace and therefore cannot automatically launch my launch file. If I run the container with run -it bash, I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my dockerfile correctly so I launch my .launch file automatically when I run the container? Thanks!
From the Docker docs:
Each RUN instruction runs independently and won't affect the next instruction, so by the time the last line runs, nothing you sourced from ROS is still in the environment.
You need to source .bashrc, or whichever environment file you need, first in the same shell.
You can wrap everything you want (the source commands and the roslaunch command) in a shell script, then just run that file at the end, as sketched below.
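For example, a minimal wrapper along these lines (paths and launch file taken from the question's Dockerfile):
#!/bin/bash
# source the base ROS environment and the workspace overlay, then launch
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
exec roslaunch master_launch sim_perception.launch
Copy it into the image, make it executable, and point CMD (or ENTRYPOINT) at it instead of calling roslaunch directly.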
If you review the convention of ros_entrypoint.sh, you can see how best to source the workspace you would like in the container. We're all so busy learning how to make Docker and ROS do the real things that it's easy to skip over the nuance of this interplay. This sucked forever for me; hope this is helpful for you.
I looked forever and found what seemed like only bad advice, and in the absence of an explicit standard or clear guidance I've settled into what seems like a sane approach that also lets you control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image, you want to set the start/launch behavior:
Towards the end, use a COPY (or ADD) line to insert your own ros_entrypoint.sh (example included below). Set it as the ENTRYPOINT, and then add a CMD so something runs by default when the container starts.
Note: you'll (obviously?) need to run the docker build process for these changes to take effect.
Dockerfile looks like this:
# all your other Dockerfile lines ^^
# .....
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
# basic ros environment
source "/opt/ros/$ROS_DISTRO/setup.bash"
else
#from environment variable; should be a absolute path to the appropriate workspaces's setup.bash
source $SETUP
fi
exec "$#"
Used in this way, the container will automatically source either the basic ROS bits, or, if you provide another workspace's setup.bash path in the SETUP environment variable, that workspace's environment will be used in the container.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
service-name:
network_mode: host
environment:
- SETUP=/absolute/path/to/the_workspace/devel/setup.bash #or whatever
command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
#command: /bin/bash # wake up and do something else
I'm new to docker and am trying to dockerize an app I have. Here is the dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or my dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring: the vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using my local changes. I will be moving to Go 1.11, which supports Go modules, to fix this.
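For reference, the move to modules is roughly this (the module path is taken from the Dockerfile's WORKDIR and is an assumption):
# inside the project directory
go mod init github.com/myuser/pkg   # create go.mod
go mod tidy                         # record dependencies; no dep/vendor needed anymore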
I see several things:
According to your Dockerfile
Maybe you need a dep init before dep ensure
Probably you need to check that the main.go path is correct.
According to Docker philosophy:
In my humble opinion, you should create the image with docker build -t <your_image_name> ., executing that where your Dockerfile is, but without the CMD line.
I would then execute your program with docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a and then check their logs with docker logs <your_CONTAINER_name/id>.
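Putting that together (image and container names here are placeholders):
docker build -t myapp .
docker run -d --name myapp-test myapp go run cmd/pkg/main.go
docker ps -a            # is the container still running, or did it exit?
docker logs myapp-test  # inspect its output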
Another way to check is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla
I'm experimenting with Ruby on Rails and Docker, following this tutorial. In the Build the Project section you can see that the Rails scaffold is run with
docker-compose run web rails new . --force --database=postgresql --skip-bundle
And immediately after that:
sudo chown -R $USER:$USER .
This is needed to get access to the generated files, because Docker creates them as root.
How can I avoid changing the permissions every time? Let's say I want to create a simple migration file using:
docker-compose run web rails g migration create_users
It seems impractical to modify the ownership after every simple command like this, but in every tutorial/source I've found, no one talks about it.
I've experienced the same issue, and the main problem for me was that Sublime Text was not able to edit the files owned by root. So my solution was to run Sublime Text as root so that I could comfortably edit any project files. To do this, execute in your shell:
gksu subl
That solved my problem. Hope it solved yours.
UPDATED
OK, I found a solution which doesn't depend on any Ruby editor or IDE. What you need is to add your current Ubuntu user to the docker group so that you can run Docker commands as your own user, not root.
First, you may need to add the docker group if it doesn't exist (but it should exist now). In your shell execute
sudo groupadd docker
Add your current user to docker group
sudo gpasswd -a ${USER} docker
Restart the docker daemon
sudo service docker restart
or
sudo service docker.io restart
(depends on your system)
Activate the group changes. To do this, either log out and log back in, or run in your shell:
newgrp docker
Then add to the end of your Dockerfile:
USER <your_user_name>
or
USER <your_user_id>
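For example (the user name and UID are assumptions; make the UID match your host user):
# create a user inside the image whose UID matches your host user
RUN useradd -m -u 1000 appuser
USER appuser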
Now you can execute commands like
docker-compose run web rails g migration SomeMigration
without sudo, but as your current user.
And the files created by those commands will be owned by your current user, not root.