Set up Docker Jenkins with default plugins

I want to create a Jenkins-based image with some plugins installed, as well as npm. To do so, I have the following Dockerfile:
FROM jenkins:2.60.3
RUN install-plugins.sh bitbucket
USER root
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm --version
USER jenkins
That builds fine; however, when I run the image I have two problems:
It seems the plugins I tried to install didn't get persisted, for some reason.
I am prompted with the list of plugins to install, but I don't want to install anything else.
Am I missing anything in the Dockerfile configuration, or is what I want to achieve simply not possible?

Without seeing the contents of install-plugins.sh, I can't comment as to why the plugins aren't persisting. It is most likely caused by an incorrect installation destination; persistence shouldn't be an issue at this stage, since the plugin installation is built into the image itself.
As for the latter issue, you should be able to skip the installation wizard altogether by adding the line ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false
to your Dockerfile. Please note that this can be a security risk if the Jenkins image is exposed to the world at large, since this option disables the need for authentication.
EDIT: The default plugin directory for the Docker image is /var/jenkins_home/plugins
EDIT 2: According to the README on the Jenkins Docker repo, adding the line RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state should accomplish the same thing.

Things have changed since 2017, when the last answer was posted, and it no longer works. The current way is shown in the following Dockerfile snippet:
# Prevent setup wizard from running.
# WARNING: Jenkins will start with security disabled, without any password.
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt must contain the list of plugins to be installed
# (One plugin per line, e.g. sidebar-link:1.11.0)
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
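For completeness, a minimal sketch of a full Dockerfile combining both pieces might look like the following, assuming the jenkins/jenkins:lts base image and a plugins.txt file sitting next to the Dockerfile (adjust both to your setup):
FROM jenkins/jenkins:lts
# WARNING: starts Jenkins with the setup wizard (and therefore security) disabled
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt lists one plugin per line, e.g. sidebar-link:1.11.0
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
Building this with docker build -t my-jenkins . and running the resulting image should give you Jenkins with the listed plugins baked in and no setup wizard.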

Related

Can I modify the Docker image provided by Playwright to add a custom Node version?

I am using Playwright to test my frontend application at work; however, we use Node version 16.15.0 specifically. Looking at the Dockerfile provided by Playwright, I see that it installs the latest Node version, which causes issues when running in CircleCI.
Does anyone have any ideas for a workaround? Would I have to create a custom Docker image based on Playwright's image to tackle this and install the correct Node version?
Any help would be appreciated!
https://github.com/microsoft/playwright/blob/main/utils/docker/Dockerfile.focal
https://playwright.dev/docs/docker
Yes, that would be the way to go. The cleanest approach is to patch Dockerfile.focal with an ARG instruction; you can then pass a value to this argument with your docker build command, which also keeps maintenance easy. Edit Dockerfile.focal and add the variable like this:
# leave this blank or specify a default value.
ARG NODE_VERSION=
Then you can set the value for this argument at build time. The docker build command in the script in the repo changes as follows:
docker build --platform "${PLATFORM}" -t "$3" -f "Dockerfile.$2" --build-arg NODE_VERSION=16.15.0 .
This will inject this variable into the image when it is being built so you can have the correct version. Also, this will make it easier to maintain since you will not have to change the Dockerfile every time you upgrade the version of NodeJS in your image.
Now, finally, make the installation step use the version variable: in Dockerfile.focal, the line that installs Node.js (line 13 at the time of writing) becomes something like:
apt-get install -y nodejs="${NODE_VERSION}" && \
You can use apt search nodejs after running the setup script to verify the correct version of the package.
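Putting the pieces together, the relevant excerpt of a patched Dockerfile.focal might look roughly like the sketch below. The exact apt version string for the NodeSource nodejs package is an assumption (it often carries a distro suffix), so verify what is available with apt-cache madison nodejs first:
# Hypothetical excerpt of a patched Dockerfile.focal; the default can be overridden with --build-arg
# (assumes curl is already available in the base image)
ARG NODE_VERSION=16.15.0
RUN apt-get update && \
curl -sL https://deb.nodesource.com/setup_16.x | bash - && \
apt-get install -y nodejs="${NODE_VERSION}"
The image would then be built with, for example, docker build --build-arg NODE_VERSION=16.15.0 -f Dockerfile.focal .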

How can I update the composer version that is being used inside my ddev containers?

Currently my docker/ddev setup is running Composer version 1.10.6 2020-05-06 inside the container.
I would like to make the composer version inside the container be 1.10.7 2020-06-03.
I found one way to do it: ddev exec sudo composer self-update, but it's not permanent. The container reverts back to using 1.10.6 after a ddev restart.
In all of my searches, I can't find a way to update the documents that create the container so they update composer permanently. I don't need it to attempt to update every time I start my container, I just need to be able to tell it now to permanently change over to the version I want.
An additional piece: adding RUN sudo composer self-update to the .ddev/web-build/Dockerfile makes it attempt to update every time, which is not ideal. I want to update when I'm ready, as I also need to update my test servers to match versions.
I added that command to my Dockerfile and it updated to 1.10.7. I removed the command from my Dockerfile so that it doesn't update every time I restart ddev. When I restarted ddev (without that command in the Dockerfile) it reverted composer back to 1.10.6.
Where is it getting the instructions to use that version? I need to find that and tell it to use 1.10.7 instead. I don't want it to update itself every time I do ddev restart.
It's not normally necessary, but you can add a .ddev/web-build/Dockerfile with these contents:
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN composer self-update
And your composer will be updated during the image build process.
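If you prefer the upgrade to happen only when you decide to (rather than pulling the latest release on every image rebuild), a small variation of the same Dockerfile can pin an exact version; 1.10.7 below is just the release mentioned in the question:
ARG BASE_IMAGE
FROM $BASE_IMAGE
# Pin Composer to an exact release; change this line when you're ready to upgrade
RUN composer self-update 1.10.7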
Randy's suggestion worked well for me; however, I've also found an alternative solution that involves less typing.
The project config.yaml explains how the Composer version can be changed. This file is found at ~/yourprojectname/.ddev/config.yaml.
The first lines of the file are the configuration in use, and the remaining lines explain the available configuration alternatives. Enjoy :)
# if composer_version:"" it will use the current ddev default composer release.
# It can also be set to "1", to get most recent composer v1
# or "2" for most recent composer v2.
# It can be set to any existing specific composer version.
# After first project 'ddev start' this will not be updated until it changes
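So, for example, pinning the exact release from the question comes down to a single line in .ddev/config.yaml (any released Composer version should work here), followed by a ddev restart:
composer_version: "1.10.7"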

Manage apt-get based dependencies on Docker

We have a large C++ code base at my company and we are trying to move to a microservice infrastructure based on Docker.
We have a couple of in-house libraries that provide helper functions and utilities we regularly use in our code. Our idea was to create a base image for developers with these libraries already installed and make it available as our "base" image. This gives us the benefit of all our software always using the latest version of our own libraries.
My question is about Docker's cache system in relation to CI and external dependencies. Let's say we have a Dockerfile like this:
FROM ubuntu:latest
# Install External dependencys
RUN apt update && apt install -y\
boost-libs \
etc...
# Copy our software
...
# Build it
...
# Install it
...
If our code changes, we can trigger the CI and Docker will understand that it can reuse the cached image created before, up to the point where it copies our software. What happens if one of our external dependencies offers a newer version? Will the cache be invalidated automatically? How can we trigger a CI build in case any of our packages receives a new version?
In essence, how do we make sure we are always using the latest available packages for our external dependencies?
Please keep in mind the above Dockerfile is just an example to illustrate the question; we are also using other tricks from the playbook, such as a lighter base image (not Ubuntu) and multi-stage builds to avoid dev packages in our production containers.
Docker's caching algorithm is fairly simple. It looks at the previous state of the image build, and the string of the command you are running. If you are performing a COPY or ADD, it also looks at a hash of those files being copied. If a previous build is found on the server with the same previous state and command being run, it will reuse the cache.
That means an external change, e.g. pulling packages from an external repository, will not be detected and the cache will be reused instead of rerunning that line. There are two solutions that I've seen to this:
Option 1: Change your command by adding versions to the dependencies. When one of those dependencies changes, you'll need to update your build. This is added work, but also guarantees that you only pull in the versions that you are ready for. That would look like (fixing boost-libs to a 1.5 version number):
# Install External dependencys
RUN apt update && apt install -y\
boost-libs=1.5 \
etc...
Option 2: change a build arg. These are injected as environment variables into the RUN commands and docker sees a change in the environment as a different command to run. That would look like:
# Install External dependencys
ARG UNIQUE_VAR
RUN apt update && apt install -y\
boost-libs \
etc...
And then you could build the above with the following to trigger the cache to be recreated daily on that line:
docker build --build-arg "UNIQUE_VAR=$(date +%Y%m%d)" ...
Note, there's also the option to build without the cache any time you wish with:
docker build --no-cache ...
This would cause the cache to be ignored for all steps (except the from line).
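Note that --no-cache does not refresh the base image itself. If you also want Docker to check the registry for a newer ubuntu:latest (the FROM line mentioned above), you can combine it with the standard --pull flag:
docker build --pull --no-cache ...
That combination gives a fully fresh build: the newest base image plus freshly downloaded apt packages.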

envsubst command getting stuck in a container

I have a requirement that before an application runs, some part of it needs to read an environment variable. For this I have the following Dockerfile:
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The above envsubst line reads (or should read) an env variable $MYENV and substitutes it. But when I open the file app.js, it's empty.
I checked that the environment variable exists in the container, and it does. Any reason its value is not read and substituted?
I also tried the same command inside the container and it works. It only fails when I run the image.
This is likely because $MYENV is not available for envsubst when you run the image.
Each RUN command runs in its own shell.
From the Docker documentations:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You need to source your profile as well, for example if the $MYENV environment variable is available in the .bashrc file, you can modify your Dockerfile like this:
RUN source ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
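Another pattern worth sketching here, assuming the value is already known at build time, is to pass it in as a build argument so the RUN line can see it; MYENV=production below is a purely hypothetical value:
# Hypothetical sketch: make MYENV available during the image build
ARG MYENV
ENV MYENV=${MYENV}
RUN envsubst < src/js/envapp.js > src/js/app.js
The image would then be built with docker build --build-arg MYENV=production . so that envsubst has a value to substitute.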
I encountered the same issue, and after much research and fishing through the internet, I managed to find a few workarounds. Below I'll list them along with the risks identifiable at the time of this answer.
Solutions:
1.) apt-get install -y gettext — gettext is a standard GNU localization library; one of the tools it ships is envsubst, and I can confirm that it works for the Docker ubuntu:latest image and should work for every flavored version.
2.) npm install envsub — depending on the use case, this approach is better suited to Node-based projects.
3.) The custom envsubst CLI project — in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's another option.
Risk:
apt-get install -y gettext:
1.) gettext — this approach would NOT be ideal for VMs since, as with any package library, it requires maintenance and updates as time passes. However, this isn't an issue for Docker, because once a container is initialized and running we can create a bash script to add the package, substitute the env vars, and then remove the package.
2.) It's a bad idea for VMs because it can be used to execute arbitrary code.
npm install envsub:
1.) envsub — packages still need updating, and this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
NOTE:
There's also a PHP version for those developing a PHP application, and it seems to work with PHP's CLI if you need a custom environment.
Resources:
GetText package library info: https://www.gnu.org/software/gettext/
GetText Risk - https://ubuntu.com/security/notices/USN-3815-2
PHP-GetText - apt-get install -y php-gettext
Custom envsubst CLI: https://github.com/a8m/envsubst
I suggest that since you are using Node, you use the npm envsub module.
This module is well tested and is developed with docker in mind.
It avoids the need for relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
I had some issues with envsubst in Docker.
For some reason envsubst doesn't work when I try to write the output to the same file (the > redirection truncates file.conf before envsubst gets to read it). For example, this does not work:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file, the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
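If you really need to edit a file in place, another option (an alternative to the temp-file trick above, assuming the moreutils package is acceptable in your image) is sponge, which soaks up all of its input before writing the output file:
RUN apt-get update && apt-get install -y moreutils
RUN envsubst < file.conf | sponge file.conf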

How to run travis-ci locally

I'd rather not have to push every little change to .travis.yml and every little change I make to the source in order to run the build. With Jenkins you can download Jenkins and run it locally. Does Travis offer something like this?
Note: I've seen the travis-ci CLI and downloaded it, but all it seems to do is call their API, which then connects to my GitHub repo, so if I don't push, it won't matter that I restart the last build.
This process allows you to completely reproduce any Travis build job on your computer. Also, you can interrupt the process at any time and debug. Below is an example where I perfectly reproduce the results of job #191.1 on php-school/cli-menu.
Prerequisites
You have public repo on GitHub
You ran at least one build on Travis
You have Docker set up on your computer
Set up the build environment
Reference: https://docs.travis-ci.com/user/common-build-problems/
Make up your own temporary build ID
BUILDID="build-$RANDOM"
View the build log, open the show more button for WORKER INFORMATION and find the INSTANCE line, paste it in here and run (replace the tag after the colon with the newest available one):
INSTANCE="travisci/ci-garnet:packer-1512502276-986baf0"
Run the headless server
docker run --name $BUILDID -dit $INSTANCE /sbin/init
Run the attached client
docker exec -it $BUILDID bash -l
Run the job
You are now inside your Travis environment. Run su - travis to begin.
This step is well defined, but it is more tedious and manual. You will find every command that Travis runs in the environment. To do this, look for everything in the right column which has a tag like 0.03s.
On the left side you will see the actual commands. Run those commands, in order.
Result
Now is a good time to run the history command. You can restart the process and replay those commands to run the same test against an updated code base.
If your repo is private: ssh-keygen -t rsa -b 4096 -C "YOUR EMAIL REGISTERED IN GITHUB", then cat ~/.ssh/id_rsa.pub and add the key in your GitHub SSH key settings.
FYI: you can git pull from inside docker to load commits from your dev box before you push them to GitHub
If you want to change the commands Travis runs then it is YOUR responsibility to figure out how that translates back into a working .travis.yml.
I don't know how to clean up the Docker environment, it looks complicated, maybe this leaks memory
Travis CI offers a container-based infrastructure that uses Docker. This can be very useful if you're trying to troubleshoot a Travis CI build by reproducing it locally. This is taken from Travis CI's documentation.
Troubleshooting Locally in a Docker Image
If you're having trouble tracking down the exact problem in a build it often helps to run the build locally. To do this you need to be using our container based infrastructure (ie, have sudo: false in your .travis.yml), and to know which Docker image you are using on Travis CI.
Running a Container Based Docker Image Locally
Download and install the Docker Engine.
Select an image from Docker Hub. If you're not using a language-specific image pick ci-ruby. Open a terminal and start an interactive Docker session using the image URL:
docker run -it travisci/ubuntu-ruby:18.04 /bin/bash
Switch to the travis user:
su - travis
Clone your git repository into the / folder of the image.
Manually install any dependencies.
Manually run your Travis CI build command.
UPDATE: I now have a complete turnkey, all-in-one answer, see https://stackoverflow.com/a/49019950/300224. Only took 3 years to figure out!
According to the Travis documentation: https://github.com/travis-ci/travis-ci there is a concoction of projects that collude to deliver the Travis CI web service we know and love. The following subset of projects appears to allow local make test functionality using the .travis.yml in your project:
travis-build
travis-build creates the build script for each job. It takes the configuration from the .travis.yml file and creates a bash script that is then run in the build environment by travis-worker.
travis-cookbooks
travis-cookbooks holds the Chef cookbooks that are used to provision the build environments.
travis-worker
travis-worker is responsible for running the build scripts in a clean environment. It streams the log output to travis-logs and pushes state updates (build starting/finishing) to travis-hub.
(The other subprojects are responsible for communicating with GitHub, their web interface, email, and their API.)
Similar to Scott McLeod's answer, but this also generates a bash script to run the steps from .travis.yml.
Troubleshooting Locally in Docker with a generated Bash script
# choose the image according to the language chosen in .travis.yml
$ docker run -it -u travis quay.io/travisci/travis-jvm /bin/bash
# now that you are in the docker image, switch to the travis user
su - travis
# Install a recent ruby (default is 1.9.3)
rvm install 2.3.0
rvm use 2.3.0
# Install travis-build to generate a .sh out of .travis.yml
cd builds
git clone https://github.com/travis-ci/travis-build.git
cd travis-build
gem install travis
# to create ~/.travis
travis version
ln -s `pwd` ~/.travis/travis-build
bundle install
# Create project dir, assuming your project is `AUTHOR/PROJECT` on GitHub
cd ~/builds
mkdir AUTHOR
cd AUTHOR
git clone https://github.com/AUTHOR/PROJECT.git
cd PROJECT
# change to the branch or commit you want to investigate
travis compile > ci.sh
# You most likely will need to edit ci.sh as it ignores matrix and env
bash ci.sh
Use wwtd (what would travis do) ruby gem to run tests on your local machine roughly as they would run on travis.
It will recreate the build matrix and run each configuration, great to sanity check setup before pushing.
gem i wwtd
wwtd
tl;dr Use image specified at https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image in combination with https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
EDIT 2019-12-06
#troubleshooting-locally-in-a-docker-image section was replaced by #running-builds-in-debug-mode which also describes how to SSH to the job running in the debug mode.
EDIT 2019-07-26
#troubleshooting-locally-in-a-docker-image section is no longer part of the docs; here's why
https://github.com/travis-ci/docs-travis-ci-com/issues/2342
https://blog.travis-ci.com/2018-10-04-combining-linux-infrastructures
https://blog.travis-ci.com/2018-11-30-announcing-xenial-build-environment-for-enterprise
Though, it's still in git history: https://github.com/travis-ci/docs-travis-ci-com/pull/2193.
Look for (quite old, couldn't find newer) image versions at: https://travis-ci.org/travis-ci/docs-travis-ci-com/builds/230889063#L661.
I wanted to inspect why one of the tests in my build failed with an error I didn't get locally.
Worked.
What actually worked was using the image specified at Troubleshooting Locally in a Docker Image docs page. In my case it was travisci/ci-garnet:packer-1512502276-986baf0.
I was able to add travis compile following the steps described at https://github.com/travis-ci/travis-build#use-as-addon-for-travis-cli.
dm@z580:~$ docker run --name travis-debug -dit travisci/ci-garnet:packer-1512502276-986baf0 /sbin/init
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
travisci/ci-garnet packer-1512502276-986baf0 6cbda6a950d3 11 months ago 10.2GB
dm@z580:~$ docker exec -it travis-debug bash -l
root@912e43dbfea4:/# su - travis
travis@912e43dbfea4:~$ cd builds/
travis@912e43dbfea4:~/builds$ git clone https://github.com/travis-ci/travis-build
travis@912e43dbfea4:~/builds$ cd travis-build
travis@912e43dbfea4:~/builds/travis-build$ mkdir -p ~/.travis
travis@912e43dbfea4:~/builds/travis-build$ ln -s $PWD ~/.travis/travis-build
travis@912e43dbfea4:~/builds/travis-build$ gem install bundler
travis@912e43dbfea4:~/builds/travis-build$ bundle install --gemfile ~/.travis/travis-build/Gemfile
travis@912e43dbfea4:~/builds/travis-build$ bundler binstubs travis
travis@912e43dbfea4:~/builds/travis-build$ cd ..
travis@912e43dbfea4:~/builds$ git clone --depth=50 --branch=master https://github.com/DusanMadar/PySyncDroid.git DusanMadar/PySyncDroid
travis@912e43dbfea4:~/builds$ cd DusanMadar/PySyncDroid/
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ ~/.travis/travis-build/bin/travis compile > ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ sed -i 's,--branch\\=\\\x27\\\x27,--branch\\=master,g' ci.sh
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ bash ci.sh
Everything from .travis.yml was executed as expected (dependencies installed, tests ran, ...).
Note that before running bash ci.sh I had to change --branch\=\'\'\ to --branch\=master\ (see the second to last sed -i ... command) in ci.sh.
If that doesn't work, the command below will help identify the target line number so you can edit the line manually.
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$ cat ci.sh | grep -in branch
840: travis_cmd git\ clone\ --depth\=50\ --branch\=\'\'\ https://github.com/DusanMadar/PySyncDroid.git\ DusanMadar/PySyncDroid --echo --retry --timing
889:export TRAVIS_BRANCH=''
899:export TRAVIS_PULL_REQUEST_BRANCH=''
travis@912e43dbfea4:~/builds/DusanMadar/PySyncDroid$
Didn't work.
I followed the accepted answer for this question but didn't find the image (travis-ci-garnet-trusty-1512502259-986baf0) mentioned by INSTANCE at https://hub.docker.com/u/travisci/.
The build worker version points to a travis-ci/worker commit, and its travis-worker-install references quay.io/travisci/ as the image registry, so I tried that.
dm@z580:~$ docker run -it -u travis quay.io/travisci/travis-python /bin/bash
travis@370c23a773c9:/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
travis@370c23a773c9:/$
dm@z580:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/travisci/travis-python latest 753a216d776c 3 years ago 5.36GB
Definitely not Trusty (Ubuntu 14.04) and not small either.
You could try Trevor, which uses Docker to run your Travis build.
From its description:
I often need to run tests for multiple versions of Node.js. But I don't want to switch versions manually using n/nvm or push the code to Travis CI just to run the tests.
That's why I created Trevor. It reads .travis.yml and runs tests in all versions you requested, just like Travis CI. Now, you can test before push and keep your git history clean.
I'm not sure what was your original reason for running Travis locally, if you just wanted to play with it, then stop reading here as it's irrelevant for you.
If you already have experience with hosted Travis and you want to get the same experience in your own datacenter, read on.
Since Dec 2014 Travis CI offers an Enterprise on-premises version.
http://blog.travis-ci.com/2014-12-19-introducing-travis-ci-enterprise/
The pricing is part of the article as well:
The licensing is done per seats, where every license includes 20 users. Pricing starts at $6,000 per license, which includes 20 users and 5 concurrent builds. There's a premium option with unlimited builds for $8,500.
I wasn't able to use the answers here as-is. For starters, as noted, the Travis help document on running jobs locally has been taken down. All of the blog entries and articles I found are based on that. The new "debug" mode doesn't appeal to me because I want to avoid the queue times and the Travis infrastructure until I've got some confidence I have gotten somewhere with my changes.
In my case I'm updating a Puppet module and I'm not an expert in Puppet, nor particularly experienced in Ruby, Travis, or their ecosystems. But I managed to build a workable test image out of tips and ideas in this article and elsewhere, and by examining the Travis CI build logs pretty closely.
I was unable to find recent images matching the names in the CI logs (for example, I could find travisci/ci-sardonyx, but could not find anything with "xenial" or with the same build name). From the logs it appears images are now transferred via AMQP instead of a mechanism more familiar to me.
I was able to find an image travisci/ubuntu-ruby:16.04 which matches the OS I'm targeting for my particular case. It does not have all the components used in the Travis CI environment, so I built a new one based on it, with some components added to the image and others added in the container at runtime depending on the need.
So I can't offer a clear procedure, sorry. But what I did essentially boiled down to:
Find a recent Travis CI image in Docker Hub matching your target OS as closely as possible.
Clone the repository to a build directory, and launch the container with the build directory mounted as a volume, with the working directory set to the target volume
Now the hard work: go through the Travis build log and set up the environment. In my case, this meant setting up RVM, and then using bundle to install the project's dependencies. RVM appeared to be already present in the Travis environment but I had to install it; everything else came from reproducing the commands in the build log.
Run the tests.
If the results don't match what you saw in the Travis CI logs, go back to (3) and see where to go.
Optionally, create a reusable image.
Dev and test locally and then push and hopefully your Travis results will be as expected.
I know this is not concrete and may be obvious, and your mileage will definitely vary, but hopefully this is of some use to somebody. The Dockerfile and a README for my image are on GitHub for reference.
It is possible to SSH into the Travis CI environment via a bounce host. The feature isn't built into Travis CI, but it can be achieved with the following steps.
On the bounce host, create a travis user and ensure that you can SSH to it.
Put these lines in the script: section of your .travis.yml (e.g. at the end).
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
Where $bouncehostip is the IP/host of your bounce host, and $sshpassword is your defined SSH password. These variables can be added as encrypted variables.
Push the changes. You should be able to make an SSH connection to your bounce host.
Source: Shell into Travis CI Build Environment.
Here is the full example:
# sudo access is required for this approach (i.e. not the container-based infrastructure)
sudo: required
dist: trusty
language: python
python: "2.7"
script:
- echo travis:$sshpassword | sudo chpasswd
- sudo sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
- sudo service ssh restart
- sudo apt-get install sshpass
- sshpass -p $sshpassword ssh -R 9999:localhost:22 -o StrictHostKeyChecking=no travis@$bouncehostip
See: c-mart/travis-shell at GitHub.
See also: How to reproduce a travis-ci build environment for debugging
