English is not my native language, so I apologize in advance for any errors.
I'm new to Docker and Linux in general, and I'm trying to learn right now.
I have a task: I need to create a Dockerfile based upon tomcat:9.0-alpine and, in that file, clone a provided repository. Then I need to build an image from this Dockerfile, run a container, and visit index.html in a browser, where I should see a specific page.
This is how my Dockerfile looks:
FROM tomcat:9.0-alpine
RUN apk update
RUN apk add git
RUN git clone https://github.com/a1qatraineeship/docker_task.git $TOMCAT_HOME/webapps/whateverApp/
When I build an image from this, I see that the repo was cloned into the directory that I specified:
#7 [4/4] RUN git clone https://github.com/a1qatraineeship/docker_task.git $TOMCAT_HOME/webapps/whateverApp/
#7 sha256:72b802c3b98dad7151daeba7db257b7b1f1089dc26fb5809fee52f881e19edb5
#7 0.319 Cloning into '/webapps/whateverApp'...
#7 DONE 1.9s
But when I run the container and go to http://localhost:8888/whateverApp/, I get "404 - Not Found".
If I go to http://localhost:8888, I see the Tomcat default page, so Tomcat is definitely working.
If I bash into the container and go to /webapps, I don't see the folder that I specified (whateverApp) in there.
What am I missing? Everything seems to work, no errors are thrown, but I don't see the supposedly cloned repository. In the examples that I saw there was no mention of any access restrictions or the like. Even my teacher said that the whole Dockerfile would essentially consist of 4 lines, and the only thing I really need to figure out is where to clone the repo so that everything works properly. But if it doesn't clone at all, how can I verify that I placed the files into the right place?
The problem is your environment variable: TOMCAT_HOME is not defined in the tomcat:9.0-alpine image, so it is empty when the RUN instruction executes, and the repo ends up in /webapps at the filesystem root (as your build log shows) instead of Tomcat's webapps directory. Define it yourself:
FROM tomcat:9.0-alpine
ENV TOMCAT_HOME=/usr/local/tomcat
RUN apk update
RUN apk add git
RUN git clone https://github.com/a1qatraineeship/docker_task.git $TOMCAT_HOME/webapps/whateverApp/
With that ENV set it should work, good luck!
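If it helps, this is roughly how I would build and run it; the image tag is just an example, and the 8888-to-8080 port mapping is an assumption based on the URL in your question:
# build the image from the Dockerfile in the current directory
docker build -t tomcat-task .
# run it, mapping host port 8888 to Tomcat's default port 8080
docker run --rm -p 8888:8080 tomcat-task
After that, http://localhost:8888/whateverApp/ should serve the cloned files.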
I have a (private) repository at GitHub with my project and integrated GitHub Actions, which build a Docker image and push it directly to GHCR.
But I have a problem with storing and passing secrets to my build image. I have the following structure in my project:
.git (gitignored)
.env (gitignored)
config (gitignored) / config files (jsons)
src (git) / other folders and files
As you may see, I have .env file and config folder. Both of them store data or files, which are not in the repo but are required to be in the built environment.
So I'd like to ask, is there any option not to pass all these files to my main remote repo (even if it's private) but to link them during the build stage within the github-actions?
It's not a problem to publish env & configs somewhere else, privately, in another separate private remote-repo. The point is not to push these files to the main-private-repo, because RBAC logic doesn't allow me to restrict access to the selected files.
P.S. Any other advice about using GitLab CI or Bitbucket, if you know how to solve the problem, is also appreciated. Don't be shy to share it!
It seems that this question is a bit hot, so I have found an answer for it.
The example shown below is based on a Node.js (NestJS) app and on pulling a private remote repo from GitHub.
In my case, the scenario was about pulling config files and other secrets from a separate private repo and merging them with the project during the container build. This option isn't about the security of secrets inside the container itself; it is about making one part of the project (the repo with the business logic) available to other developers, who won't see the credentials and configs from the separate private repo in your development repo, while keeping a secret private repo with separate access permissions.
All you need is your personal access token (PAT); on GitHub you can find it in your account's developer settings.
As for GitLab, the flow is still the same: you'll need to pass a token from somewhere in the settings. Also, a piece of advice: create not just one but two Dockerfiles before testing it.
Why HTTPS instead of SSH? With SSH you would also need to pass SSH keys and configure the client correctly, which is a bit more complicated because of CRLF and LF formats, the crypto algorithms supported by SSH, and so on.
# it could be Go, PHP, whatever
FROM node:17
# you will need your GitHub token from settings
# we will pass it to the build env via a GitHub action
ARG CR_PAT
ENV CR_PAT=$CR_PAT
# placeholders for your GitHub user and the private repo to pull
ARG github_username
ARG repo_name
# update OS in build container
RUN apt-get update
RUN apt-get install -y git
# workdir app, it is a cd (directory)
WORKDIR /usr/src/app
# installing the nest CLI
RUN npm install -g @nestjs/cli
# config git with credentials
# we will use https since it is much easier to config than ssh
RUN git config --global url."https://${github_username}:${CR_PAT}@github.com/".insteadOf "https://github.com/"
# cloning the repo to WORKDIR
RUN git clone https://github.com/${github_username}/${repo_name}.git
# we move all files from the pulled repo to the root of WORKDIR,
# including files named with a dot at the beginning (like .env)
RUN mv ${repo_name}/* ${repo_name}/.[^.]* . && rmdir ${repo_name}/
# node.js stuff
COPY package.json ./
RUN yarn install
COPY . .
RUN nest build app
# start the built app (adjust the path to your build output)
CMD ["node", "dist/main.js"]
As a result, you'll get a fully working container with your code merged with the files and code from the other, separate repo that we pull from.
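For reference, here is a rough sketch of how the token can be passed to the build from a GitHub Actions workflow step; the secret name CR_PAT, the image tag and the build args are assumptions you would adapt to your own setup:
# inside a "run:" step of the workflow, with the PAT exposed to the step
# as an environment variable, e.g. CR_PAT: ${{ secrets.CR_PAT }}
docker build \
  --build-arg CR_PAT="$CR_PAT" \
  --build-arg github_username=your_username \
  --build-arg repo_name=your_private_repo \
  -t ghcr.io/your_username/your_app:latest .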
I have made a ROS workspace and, inside it, a package.
I did catkin_make and everything is working well.
I would like to give this package (or should I give the entire workspace?) to another person.
I am thinking of giving him a zip file of the files and folders (it contains launch files, Python scripts, RViz files, etc.), so I am expecting he will unzip it on his machine.
I would like him to be able to run the launch files without problems.
What does he need to do for this? (Of course he will have ROS installed, that is no problem.)
I am thinking perhaps he should do source devel/setup.bash, but is this enough?
When sharing a workspace with somebody, only the source space src has to be shared. It should contain all your packages with their launch files (*.launch), Python (*.py) and C++ nodes (*.cpp, *.hpp), YAML configuration files (*.yaml), RViz configurations (*.rviz) and robot descriptions (*.urdf, *.xacro), and it should describe how each node is compiled in a CMakeLists.txt. Additionally, you are supposed to keep track of all the Debian packages you install inside the package.xml file of each package.
If for some obscure reason there are things that have to be done that can't be accommodated in the standard installation procedure (see the receiver's steps below), I will actually write a bash script that performs these steps for me and add it either to the package itself or to the workspace. This way I can also automate more complex steps such as installing OpenCV or modifying the .bashrc. Here is a small example of what such a minimal script (I generally name them install_dependencies.sh) might look like:
#!/bin/bash
# Get current workspace
WS_DIR="$(dirname "$(dirname "$(readlink -fm "$0")")")"
# Check if script is run as root '$ sudo ...'
if [ "$EUID" -ne 0 ]
then
  echo "Error: This script has to be run as root '$ sudo ./install_dependencies.sh'"
  exit 1
fi
echo "Installing dependencies..."
# Modify .bashrc
echo "- Modifying '~/.bashrc'..."
echo "source ${WS_DIR}/devel/setup.bash" >> ~/.bashrc
echo ""
echo "Dependencies installed."
If for some reason even that is not possible, I always make sure to document it properly in a Markdown read-me: either in a /doc folder inside the package, in the README.md inside the base folder of the repository, or inside the root folder of the workspace.
The receiver then only has to do the following (a condensed command sketch follows this list):
Create a new workspace
Copy or clone the package files to its src folder
Install all the Debian package dependencies listed in the package.xml files with $ rosdep install
(If any: Execute the bash scripts I created by hand $ sudo ./install_dependencies.sh or perform the steps given in the documentation)
Build the workspace with $ catkin_make or $ catkin build from catkin-tools
Source the current environment variables with $ source devel/setup.bash
Make sure that the Python nodes are executable either by $ chmod +x <filename> or right-clicking the corresponding Python nodes (located in src or scripts of your package), selecting Properties/Permissions and enabling Allow executing file as program.
Run the desired Python or C++ nodes ($ rosrun <package_name> <executable_name>) and launch files ($ roslaunch <package_name> <launch_file_name>)
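As a rough illustration, the whole receiver procedure condensed into commands might look like this (the workspace path, package name and launch file name are placeholders, and the rosdep flags assume the usual workspace layout):
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws
# copy or clone the package repositories into src/ here
rosdep install --from-paths src --ignore-src -y
catkin_make                # or: catkin build
source devel/setup.bash
chmod +x src/<package_name>/scripts/*.py
roslaunch <package_name> <launch_file_name>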
It is up to you whether to share the code as a compressed file, in the form of a Git repository, or in a more advanced way (see below), but in the following paragraphs I will introduce some best practices that will pay off in the long run with larger packages.
Sharing a package or sharing a workspace?
One can either share a single package or an entire workspace. I personally think that most of the time one should share the entire workspace instead of the package alone even if you only cloned the other packages from a public Github repo. This might save the receiver a lot of headache e.g. when checking out the wrong branch.
Version control with Git
Arguably the best way to arrange your packages is by using Git. I'd actually make a repository for every package you create (if a couple of packages are virtually inseparable you can also bundle them into a single Git repo or, better, use metapackages). Then create an additional repository for your workspace and include your own packages and packages from other sources as submodules. This allows your code to be modular and re-usable: you can share only a package or the entire workspace!
As a first step I always add a .gitignore file to each package repository which excludes *.pyc files and another one to the workspace repository that ignores the build, devel and install folders.
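For reference, creating these two ignore files from the workspace root could look roughly like this (the package path is a placeholder):
# package repository: ignore compiled Python files
echo "*.pyc" > src/<package_name>/.gitignore
# workspace repository: ignore the generated folders
printf "build/\ndevel/\ninstall/\n" > .gitignore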
You can add a particular repository as submodule to your workspace Git repository by opening a console inside the src folder of your workspace repository and typing
$ git submodule add -b <branch_name> <git_url_to_package> <optional_directory_rename>
Note that you can actually track a particular branch of a repository that you include as a submodule. In case you need a submodule at some point follow this guide.
If you share the workspace repository with someone, they will have to have access to each individual submodule repository, and they will have to not only clone the repository but also pull in the submodules with
$ git clone --recurse-submodules <git_url_to_workspace_repository>
and potentially update them to the latest commit with
$ git submodule update --remote
After these two steps they should have a full version of the repository with submodules and they should be able to progress with the steps listed in the section above.
Unit testing and continuous integration
Before sharing a repository you will have to verify that everything is working correctly. This can take a decent amount of time, in particular if the code base is large and you are modifying it frequently. In the ideal case you would have to install it on a brand new machine or inside a virtual box in order to make sure that the set-up works which would take quite some time. This is where unit testing comes into play: For every class and function you program you will write a test. This way you can simply run these tests and make sure everything is working correctly. Generally these unit tests will be performed automatically and the working branches merged continuously. Generally the test routines are written with the libraries Boost::Test (C++), GoogleTest (generally used in ROS with C++), unittest (for Python) and QtTest (for GUIs). For ROS launch files there is additionally rostest. How this can be done in ROS is described here and here.
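For example, once the tests are registered in the CMakeLists.txt, they can be run for the whole workspace with catkin_make, or an individual rostest launch file can be executed directly (the package and test file names below are placeholders):
catkin_make run_tests
rostest <package_name> <name_of_test>.test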
ROSjects
If you do not even want the person you are sending the code to to go through the hassle of setting it up, you might consider sending them a ROSject. A ROSject is an online virtual ROS environment (by the people behind The Construct, the main source of ROS courses and ROS tutorials on YouTube) that can be created and shared very easily from your existing Git repository, as can be seen here. The simulation runs entirely in the cloud on a virtual machine. This way the potential for failure is very low, but it is not an option if your code is supposed to run on hardware and not only in simulation.
Docker
If your installation procedure is complex, you might as well use a container such as Docker.
More information about using Docker in combination with ROS can be found here. The Docker container might introduce a bit of overhead, though, and it is probably not an option for code which should run with real-time priority in combination with a real-time patched operating system.
Debian or snap package
Another way of sending somebody a ROS package is by packing it into a Debian or snap package. This process takes a while and is particularly favourable if you want to give your code to a large number of users who should be able to use the code out of the box. Instructions on how this can be done for Debian packages can be found here and here, while a guide for snap can be found here.
I have a Dockerfile and a TeX file in my repository. I use GitHub Actions to build an image (Ubuntu 18.10 with packages for PDFLaTeX) and run a container, which gets main.tex and produces main.pdf with PDFLaTeX. So far everything seems to work OK, but the problem is I can't copy the PDF from the container to the repository. I tried using docker cp:
docker cp pdf-creator:/main.tex .
But it doesn't seem to work, as the PDF doesn't appear in my repository. Can you think of any other way to solve this?
The docker cp command copies a file into the local filesystem. In the context of a GitHub action, this is just whatever virtual environment is being used to run your code: it has nothing to do with your repository.
The only way to add something to your repository is to git add the file, git commit the change, and git push the change to your repository (which of course requires providing your Action with the necessary credentials to push changes to your repository, probably using a GitHub Secret).
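For completeness, a minimal sketch of that approach inside a workflow run step might look like this, assuming the repository was checked out with credentials that allow pushing (the container name matches the one from the question, the rest is illustrative):
# copy the generated PDF out of the container onto the runner first
docker cp pdf-creator:/main.pdf .
# commit and push it back to the repository
git config user.name "github-actions"
git config user.email "github-actions@users.noreply.github.com"
git add main.pdf
git commit -m "Add generated PDF"
git push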
But rather than adding the file to your repository, maybe you want to look at the support for Artifacts? This lets you save files generated as part of your workflow and make them available for download.
The workflow step would look something like:
- name: Archive generated PDF file
  uses: actions/upload-artifact@v2
  with:
    name: main.pdf
    path: /main.pdf
See the linked docs for more information.
I am building a web application I'd like to deploy as a Docker container. The application depends on a set of assets stored in a separate Git repository. The reason for using a separate repository is that the history of that repository is much larger than the current checkout and we'd like to have a way to throw away that history without touching the history of the repository containing the source code.
In the example below, containing only the relevant parts, I'm passing the assets repository commit ID into the build process using a file:
FROM something:something
# [install Git and stuff]
COPY ["assets_git_id", "/root/"]
RUN git clone --bare git://lala/assets.git /root/assets.git \
    && mkdir -p /srv/app/assets \
    && git --git-dir=/root/assets.git --work-tree=/srv/app/assets checkout $(</root/assets_git_id) . \
    && rm -r /root/assets.git
# [set up the rest of the application]
The problem here is that whenever that ID changes, the whole repository is cloned during the build process and most of the data is thrown away.
What is the canonical way to reduce the wasted resources in such a case? Ideally I'd like to have access to a directory from inside the container during the build whose contents are kept between multiple runs of the same build. The RUN script could then just update the repository and copy the relevant data from it instead of cloning the whole repository each time.
Use multi-stage builds:
# Stage 1
FROM something:something as GitSource
# [install Git]
RUN git clone --bare git://lala/assets.git /root/assets.git
COPY ["assets_git_id", "/root/"]
# git pull does not work in a bare repository, so fetch the latest objects instead
RUN git --git-dir=/root/assets.git fetch origin
RUN mkdir -p /srv/app/assets
RUN git --git-dir=/root/assets.git --work-tree=/srv/app/assets checkout $(</root/assets_git_id) .
# Stage 2
FROM something:something
COPY --from=GitSource /srv/app/assets /srv/app/assets
# [set up the rest of the application]
The final image discards whatever you do in Stage 1, except what is copied into Stage 2.
If I want to install code from a release version on GitHub in a Docker image, how can I do that while taking up the least space possible in the image? Currently, I've done something like:
RUN wget https://github.com/some/repo/archive/v1.5.1.tar.gz
RUN tar -xvzf v1.5.1.tar.gz
WORKDIR /unzipped-1.5.1/
RUN make; make install
The issue here is that the final image will have the downloaded tar, the unzipped version, and everything that gets created during make. I don't need the vast majority of this. How do I install my library in my image without keeping all of this extra data?
This is the textbook definition of the problem that the docker multi-stage build aims to solve.
The idea is to use a separate build stage with all the build dependencies and copy only the final product from it into the image you actually ship.
Note that this is available only in newer versions of Docker (17.05 onwards).
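A rough sketch of what that could look like for the snippet above; the base image, the archive name and the DESTDIR trick are assumptions (the latter only works if the project's Makefile honours DESTDIR):
# Stage 1: build the library with all the build tools available
FROM ubuntu:18.04 as builder
RUN apt-get update && apt-get install -y wget build-essential
RUN wget https://github.com/some/repo/archive/v1.5.1.tar.gz \
    && tar -xvzf v1.5.1.tar.gz
WORKDIR /unzipped-1.5.1/
# install into a scratch prefix so only the results need to be copied
RUN make && make install DESTDIR=/install

# Stage 2: copy only the installed files; the tarball, sources and build
# artifacts from Stage 1 are not part of the final image
FROM ubuntu:18.04
COPY --from=builder /install/ /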