Other database Docker images I've worked with (such as Postgres) have a mechanism to import initial data into an empty instance the first time the container starts. This usually takes the form of placing your SQL files in a specific folder.
I need to do the same for Neo4j: I want to build a Neo4j Docker image with some data already in it. What's the right way to do this?
This can be achieved with a custom Dockerfile.
There are two requirements:
set the initial password, which can be done with bin/neo4j-admin set-initial-password <password>, and then
import data from a file in Cypher format: cat import/data.cypher | NEO4J_USERNAME=neo4j NEO4J_PASSWORD=${NEO4J_PASSWD} bin/cypher-shell --fail-fast
A sample Dockerfile might look like this:
FROM neo4j:3.2
ENV NEO4J_PASSWD neo4jadmin
ENV NEO4J_AUTH neo4j/${NEO4J_PASSWD}
# The seed file lands in Neo4j's import folder; the CMD loop below picks it up
COPY data.cypher /var/lib/neo4j/import/
VOLUME /data
# Set the initial password, start the server, load every file in import/, then tail the log
CMD bin/neo4j-admin set-initial-password ${NEO4J_PASSWD} || true && \
bin/neo4j start && sleep 5 && \
for f in import/*; do \
[ -f "$f" ] || continue; \
cat "$f" | NEO4J_USERNAME=neo4j NEO4J_PASSWORD=${NEO4J_PASSWD} bin/cypher-shell --fail-fast && rm "$f"; \
done && \
tail -f logs/neo4j.log
Build the image:
sudo docker build -t neo4j-3.1:loaddata .
And run the container:
docker run -it --rm --name neo4jtest neo4j-3.1:loaddata
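One fragile spot in the CMD above is the fixed sleep 5: on a slow machine Neo4j may not be up yet when the import runs. A minimal readiness loop you could splice in instead of the sleep (a sketch, assuming cypher-shell exits non-zero until the server accepts connections):
until echo 'RETURN 1;' | NEO4J_USERNAME=neo4j NEO4J_PASSWORD=${NEO4J_PASSWD} bin/cypher-shell --fail-fast >/dev/null 2>&1; do
    echo 'waiting for neo4j to become available'
    sleep 1
done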
An example docker-compose file for Neo4j:
version: '3'
services:
# ...
neo4j:
image: 'neo4j:4.1'
ports:
- '7474:7474'
- '7687:7687'
volumes:
- '$HOME/data:/data'
- '$HOME/logs:/logs'
- '$HOME/import:/var/lib/neo4j/import'
- '$HOME/conf:/var/lib/neo4j/conf'
environment:
NEO4J_AUTH: 'neo4j/your_password'
# ...
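Note that with this compose file, $HOME/import is mounted at /var/lib/neo4j/import inside the container, so seed data can also be loaded after startup without baking it into the image. A sketch, assuming a data.cypher file sits in $HOME/import and the password from NEO4J_AUTH (cypher-shell in the 4.x images accepts a --file/-f argument):
docker-compose exec neo4j cypher-shell -u neo4j -p your_password -f /var/lib/neo4j/import/data.cypher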
I would like to build a minio docker container for integration test purposes.
I would like to do the following in my Dockerfile.
Create the minio container
Create a test bucket
Copy a small amount of test data into a test bucket
Start the minio service
Test Data
./test-data/foo.txt
./test-data/bar.txt
FROM minio/minio
RUN mkdir -p /buckets/my-bucket
COPY test-data /buckets/my-bucket/test-data
EXPOSE 9000 9001
CMD [ "minio", "server", "/buckets", "--address", ":9000", "--console-address", ":9001" ]
I know that I could run mc in a separate container to populate my bucket, but that requires a little bit of orchestration.
Is there a way that I could accomplish these steps in a Dockerfile?
A Dockerfile is just a collection of shell commands, so you can do pretty much anything you want. For example:
FROM docker.io/minio/minio:latest
# Borrow the mc client from the official mc image
COPY --from=docker.io/minio/mc:latest /usr/bin/mc /usr/bin/mc
RUN mkdir /buckets
# Start a temporary server in the background, wait until it answers,
# create and populate the bucket, then shut the server down again
RUN minio server /buckets & \
    server_pid=$!; \
    until mc alias set local http://localhost:9000 minioadmin minioadmin; do \
        sleep 1; \
    done; \
    mc mb local/bucket1; \
    echo this is file1 | mc pipe local/bucket1/file1; \
    echo this is file2 | mc pipe local/bucket1/file2; \
    kill $server_pid
CMD ["minio", "server", "/buckets", "--address", ":9000", "--console-address", ":9001"]
If we use the above Dockerfile to build an image named minio-demo, and then start a container like this:
$ docker run --rm -p 127.0.0.1:9000:9000 -p 127.0.0.1:9001:9001 minio-demo
We see:
$ mc alias set demo http://localhost:9000 minioadmin minioadmin
$ mc ls demo
[2022-07-07 22:01:35 EDT] 0B bucket1/
$ mc ls demo/bucket1
[2022-07-07 22:01:35 EDT] 14B STANDARD file1
[2022-07-07 22:01:35 EDT] 14B STANDARD file2
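If you would rather seed the bucket from the test-data files in your build context (as in the question) than pipe in literal strings, the same trick works with mc cp; a sketch, assuming test-data/ sits next to the Dockerfile:
COPY test-data /tmp/test-data
RUN minio server /buckets & \
    server_pid=$!; \
    until mc alias set local http://localhost:9000 minioadmin minioadmin; do sleep 1; done; \
    mc mb local/my-bucket; \
    mc cp --recursive /tmp/test-data/ local/my-bucket/test-data/; \
    kill $server_pid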
For my job, I would like to run Jenkins and Docker Rootless (with the sysbox runtime only for this container), all in Docker Rootless.
I would like this because I need a secure environment, given that I don't inspect the Jenkins pipelines that run there.
But when I run docker rootless in docker rootless, I get this error:
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 54 [0 1000 1 1 100000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
: exit status 1
I have tried many things but failed to get it to work. Does anyone have a solution for this, please?
Thanks for reading, have a nice day!
Edit 1
Hello, I'm taking the liberty of bumping this question: it is essential for the security of our environment, and my bosses remind me of it every day. Would someone have the answer to this problem, please?
Things get a little tricky when you want to use the docker build command inside a Jenkins container.
I stumbled upon this issue when I wanted to build Docker images without being root, under the user 'jenkins' instead.
I wrote the solution up in an article in which I explain in detail what is happening under the hood.
The key point is to figure out which GID the docker.sock socket is running under (it depends on the system). So here is what you have to do:
Run the command:
$ stat /var/run/docker.sock
Output:
jenkins@wsl-ubuntu:~$ stat /var/run/docker.sock
File: /var/run/docker.sock
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 17h/23d Inode: 552 Links: 1
Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1001/ docker)
Access: 2021-03-03 10:43:05.570000000 +0200
Modify: 2021-03-03 10:43:05.570000000 +0200
Change: 2021-03-03 10:43:05.570000000 +0200
Birth: -
In this case the GID is 1001, but it can also be 999 or something else on your machine.
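If you prefer to capture the GID in a script instead of reading it off the stat output, something like this works with GNU coreutils (on macOS/BSD the equivalent is stat -f '%g'):
DOCKER_HOST_GID=$(stat -c '%g' /var/run/docker.sock)
echo "$DOCKER_HOST_GID"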
Now create a Dockerfile and paste in the code below, replacing the ARG default with your own value from the stat output above:
FROM jenkins/jenkins:lts-alpine
USER root
# Replace the default with your own docker.sock GID
ARG DOCKER_HOST_GID=1001
ARG JAVA_OPTS=""
ENV DOCKER_HOST_GID $DOCKER_HOST_GID
ENV JAVA_OPTS $JAVA_OPTS
RUN set -eux \
&& apk --no-cache update \
&& apk --no-cache upgrade --available \
&& apk --no-cache add shadow \
&& apk --no-cache add docker curl --repository http://dl-cdn.alpinelinux.org/alpine/latest-stable/community \
&& deluser --remove-home jenkins \
&& addgroup -S jenkins -g $DOCKER_HOST_GID \
&& adduser -S -G jenkins -u $DOCKER_HOST_GID jenkins \
&& usermod -aG docker jenkins \
&& apk del shadow curl
USER jenkins
WORKDIR $JENKINS_HOME
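As a quick smoke test after building (assuming the image is tagged jenkins_master, as in the compose file below): the stock jenkins.sh entrypoint execs any arguments that don't look like Jenkins launcher options, so you can run the docker CLI inside the container directly against the mounted socket:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock jenkins_master docker version
If the GID mapping is right, this prints both client and server versions instead of a permission error.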
For the sake of a working example, here is a docker-compose file:
version: '3.3'
services:
jenkins:
image: jenkins_master
container_name: jenkins_master
hostname: jenkins_master
restart: unless-stopped
env_file:
- jenkins.env
build:
context: .
cpus: 2
mem_limit: 1024m
mem_reservation: 800M
ports:
- 8090:8080
- 50010:50000
- 2375:2376
volumes:
- ./jenkins_data:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
networks:
- default
volumes:
jenkins_data: {}
networks:
default:
driver: bridge
Now let's create the env variables:
cat > jenkins.env <<EOF
DOCKER_HOST_GID=1001
JAVA_OPTS=-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
EOF
Replace 1001 with your own docker.sock GID; env files do not support inline comments, so the value must stand alone on its line.
Lastly, run docker-compose up -d. It will build the image and run the container.
Then visit http://host_machine_ip:8090, and that's all.
If you run docker inspect --format '{{ .Config.Env }}' jenkins_master you will see that the first and second variables are the ones we set.
More details can be found here: How to run rootless docker in dockerized Jenkins installation
I am using the Couchbase Java client SDK 2.7.9 and am running into a problem while trying to run automated integration tests. In such tests we usually use random ports to be able to run the same thing on the same Jenkins slave (using Docker, for example).
But with the client we can specify many custom ports, just not 8092, 8093, 8094 and 8095.
The popular Testcontainers modules mention as well that those ports have to remain static in their Couchbase module: https://www.testcontainers.org/modules/databases/couchbase/
Apparently it is also possible to change those ports at the server level.
Example:
docker-compose.yml:
version: '3.0'
services:
rapid_test_cb:
build:
context: ""
dockerfile: cb.docker
ports:
- "8091"
- "8092"
- "8093"
- "11210"
The Docker image is couchbase:community-5.1.1.
Internally the ports are the ones written above, but externally they are random. At the client level you can set bootstrapHttpDirectPort and bootstrapCarrierDirectPort, but apparently the 8092 and 8093 ports are dictated by the server side (which does not know which external port was assigned to it).
I would like to ask you whether it is possible to change those ports at the client level and, if not, to seriously consider adding that feature.
As discussed with the Couchbase team here,
it is not really possible. So we found a way to make it work using Gradle's docker-compose plugin, and I imagine it would work in other setups too (Testcontainers could use a similar system).
docker-compose.yml:
version: '3.0'
services:
rapid_test_cb:
build:
context: ""
dockerfile: cb.docker
ports:
- "${COUCHBASE_RANDOM_PORT_8091}:${COUCHBASE_RANDOM_PORT_8091}"
- "${COUCHBASE_RANDOM_PORT_8092}:${COUCHBASE_RANDOM_PORT_8092}"
- "${COUCHBASE_RANDOM_PORT_8093}:${COUCHBASE_RANDOM_PORT_8093}"
- "${COUCHBASE_RANDOM_PORT_11210}:${COUCHBASE_RANDOM_PORT_11210}"
environment:
COUCHBASE_RANDOM_PORT_8091: ${COUCHBASE_RANDOM_PORT_8091}
COUCHBASE_RANDOM_PORT_8092: ${COUCHBASE_RANDOM_PORT_8092}
COUCHBASE_RANDOM_PORT_8093: ${COUCHBASE_RANDOM_PORT_8093}
COUCHBASE_RANDOM_PORT_11210: ${COUCHBASE_RANDOM_PORT_11210}
cb.docker:
FROM couchbase:community-5.1.1
COPY configure-node.sh /opt/couchbase
#HEALTHCHECK --interval=5s --timeout=3s CMD curl --fail http://localhost:8091/pools || exit 1
RUN chmod u+x /opt/couchbase/configure-node.sh
RUN echo "{rest_port, 8091}.\n{query_port, 8093}.\n{memcached_port, 11210}." >> /opt/couchbase/etc/couchbase/static_config
CMD ["/opt/couchbase/configure-node.sh"]
configure-node.sh:
#!/bin/bash
poll() {
# The argument supplied to the function is invoked using "$@"; we check the return value with $?
"$@"
while [ $? -ne 0 ]
do
echo 'waiting for couchbase to start'
sleep 1
"$@"
done
}
set -x
set -m
if [[ -n "${COUCHBASE_RANDOM_PORT_8092}" ]]; then
sed -i "s|8092|${COUCHBASE_RANDOM_PORT_8092}|g" /opt/couchbase/etc/couchdb/default.d/capi.ini
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_8091}" ]]; then
sed -i "s|8091|${COUCHBASE_RANDOM_PORT_8091}|g" /opt/couchbase/etc/couchbase/static_config
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_8093}" ]]; then
sed -i "s|8093|${COUCHBASE_RANDOM_PORT_8093}|g" /opt/couchbase/etc/couchbase/static_config
fi
if [[ -n "${COUCHBASE_RANDOM_PORT_11210}" ]]; then
sed -i "s|11210|${COUCHBASE_RANDOM_PORT_11210}|g" /opt/couchbase/etc/couchbase/static_config
fi
/entrypoint.sh couchbase-server &
poll curl -s localhost:${COUCHBASE_RANDOM_PORT_8091:-8091}
# Setup index and memory quota
curl -v -X POST http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/pools/default --noproxy '127.0.0.1' -d memoryQuota=300 -d indexMemoryQuota=300
# Setup services
curl -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/node/controller/setupServices --noproxy '127.0.0.1' -d services=kv%2Cn1ql%2Cindex
# Setup credentials
curl -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/settings/web --noproxy '127.0.0.1' -d port=${COUCHBASE_RANDOM_PORT_8091:-8091} -d username=Administrator -d password=password
# Load the rapid_test bucket
curl -X POST -u Administrator:password -d name=rapid_test -d ramQuotaMB=128 --noproxy '127.0.0.1' -d authType=sasl -d saslPassword=password -d replicaNumber=0 -d flushEnabled=1 -v http://127.0.0.1:${COUCHBASE_RANDOM_PORT_8091:-8091}/pools/default/buckets
fg 1
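Once the stack is up, you can sanity-check the node from the host using the same endpoints the script uses (assuming the COUCHBASE_RANDOM_PORT_* variables are exported in your shell, as the Gradle plugin below does for the compose run):
curl -u Administrator:password http://localhost:${COUCHBASE_RANDOM_PORT_8091}/pools/default/buckets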
Gradle docker-compose plugin configuration:
def findRandomOpenPortOnAllLocalInterfaces = {
new ServerSocket(0).withCloseable { socket ->
return socket.getLocalPort().intValue()
}
}
dockerCompose {
environment.put 'COUCHBASE_RANDOM_PORT_8091', findRandomOpenPortOnAllLocalInterfaces()
environment.put 'COUCHBASE_RANDOM_PORT_8092', findRandomOpenPortOnAllLocalInterfaces()
environment.put 'COUCHBASE_RANDOM_PORT_8093', findRandomOpenPortOnAllLocalInterfaces()
environment.put 'COUCHBASE_RANDOM_PORT_11210', findRandomOpenPortOnAllLocalInterfaces()
}
integTest.doFirst {
systemProperty 'com.couchbase.bootstrapHttpDirectPort', couchbase_random_port_8091
systemProperty 'com.couchbase.bootstrapCarrierDirectPort', couchbase_random_port_11210
}
Introduction
I am setting up a project where we try to use Docker for everything.
It's a PHP (Symfony) + npm project. We have a working and battle-tested docker-compose.yaml (we have been using this setup for more than a year on several projects).
But to make it friendlier for developers, I came up with a bin-docker folder which, using direnv, is placed first in the user's PATH:
./.envrc:
export PATH="$(pwd)/bin-docker:$PATH"
The folder contains files that are meant to replace the bin files with in-docker equivalents:
❯ tree bin-docker
bin-docker
├── _tty.sh
├── composer
├── npm
├── php
└── php-xdebug
E.g. the php file contains:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
if [ $(docker-compose ps php | grep Up | wc -l) -gt 0 ]; then
docker_compose_exec \
--workdir=/src${PWD:${#PROJECT_ROOT}} \
php php "$@"
else
docker_compose_run \
--entrypoint=/usr/local/bin/php \
--workdir=/src${PWD:${#PROJECT_ROOT}} \
php "$@"
fi
npm:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$DIR")"
source ${DIR}/_tty.sh
docker_run --init \
--entrypoint=npm \
-v "$PROJECT_ROOT":"$PROJECT_ROOT" \
-w "$(pwd)" \
-u "$(id -u "$USER"):$(id -g "$USER")" \
mangoweb/mango-cli:v2.3.2 "$@"
It works great: you can simply use Symfony's bin/console and it will "magically" run in the Docker container.
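For example, with the direnv hook active in the project directory, these ordinary-looking commands actually execute inside containers:
php bin/console cache:clear   # wrapped php, runs in the php service
npm install                   # wrapped npm, runs in the mango-cli image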
The problem
The only problem, and my question, is how to properly map the host user to the container's user, for all major OSes (macOS, Windows (WSL), Linux), because our developers use all of them. I will talk about npm, because it uses a public image anyone can download.
When I do not map the user at all, on Linux the files created in the mounted volume are owned by root, and users have to chmod them afterwards. Not ideal at all.
When I use -u "$(id -u "$USER"):$(id -g "$USER")" it breaks, because the in-container user no longer has the rights to create the cache folder inside the container; also, on macOS the standard UID is 501, which breaks everything.
What is the proper way to map the user, or is there a better way to do any part of this setup?
Attachments:
docker-compose.yaml (shortened; sensitive and unimportant parts removed):
version: '2.4'
x-php-service-base: &php-service-base
restart: on-failure
depends_on:
- redis
- elasticsearch
working_dir: /src
volumes:
- .:/src:cached
environment:
APP_ENV: dev
SESSION_STORE_URI: tcp://redis:6379
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
environment:
discovery.type: single-node
xpack.security.enabled: "false"
kibana:
image: docker.elastic.co/kibana/kibana:6.2.3
environment:
SERVER_NAME: localhost
ELASTICSEARCH_URL: http://elasticsearch:9200
depends_on:
- elasticsearch
redis:
image: redis:4.0.8-alpine
php:
<<: *php-service-base
image: custom-php-image:7.2
php-xdebug:
<<: *php-service-base
image: custom-php-image-with-xdebug:7.2
nginx:
image: custom-nginx-image
restart: on-failure
depends_on:
- php
- php-xdebug
_tty.sh (only there to properly pass the tty status to docker run):
if [ -t 1 ]; then
DC_INTERACTIVITY=""
else
DC_INTERACTIVITY="-T"
fi
function docker_run {
if [ -t 1 ]; then
docker run --rm --interactive --tty=true "$@"
else
docker run --rm --interactive --tty=false "$@"
fi
}
function docker_compose_run {
docker-compose run --rm $DC_INTERACTIVITY "$@"
}
function docker_compose_exec {
docker-compose exec $DC_INTERACTIVITY "$@"
}
This may answer your problem.
I came across a tutorial on how to set up user namespaces in Ubuntu. Note that the use case in the tutorial is nvidia-docker and restricting permissions. In particular, Dr. Kinghorn states in his post:
The main idea of a user namespace is that a process's UID (user ID) and GID (group ID) can be different inside and outside of a container's namespace. The significant consequence of this is that a container can have its root process mapped to a non-privileged user ID on the host.
Which sounds like what you're looking for. Hope this helps.
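For reference, enabling user-namespace remapping on a Linux Docker host generally amounts to something like this (a sketch; with "default" the daemon uses a dockremap user whose subordinate ID ranges come from /etc/subuid and /etc/subgid):
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# /etc/subuid and /etc/subgid need a matching range, e.g.:
#   dockremap:100000:65536
# then restart the daemon
sudo systemctl restart docker
After the restart, a container's root process maps to an unprivileged UID from that range on the host, which is exactly the property described above.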
I have a Jenkins Docker image and I want to relax the Jenkins Content Security Policy from the Docker environment.
I can do that from Jenkins script console:
System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "default-src 'self'; style-src 'self' 'unsafe-inline';")
System.getProperty("hudson.model.DirectoryBrowserSupport.CSP")
But not from the docker-compose environment: the Docker container then keeps restarting on run.
The Docker service is run by the 'jenkins.sh' script:
cat /usr/local/bin/jenkins.sh
#! /bin/bash -e
: "${JENKINS_HOME:="/var/jenkins_home"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find /usr/share/jenkins/ref/ -type f -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
# if `docker run` first argument start with `--` the user is passing jenkins launcher arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
# read JAVA_OPTS and JENKINS_OPTS into arrays to avoid need for eval (and associated vulnerabilities)
java_opts_array=()
while IFS= read -r -d '' item; do
java_opts_array+=( "$item" )
done < <([[ $JAVA_OPTS ]] && xargs printf '%s\0' <<<"$JAVA_OPTS")
jenkins_opts_array=( )
while IFS= read -r -d '' item; do
jenkins_opts_array+=( "$item" )
done < <([[ $JENKINS_OPTS ]] && xargs printf '%s\0' <<<"$JENKINS_OPTS")
exec java "${java_opts_array[@]}" -jar /usr/share/jenkins/jenkins.war "${jenkins_opts_array[@]}" "$@"
fi
# As argument is not jenkins, assume user want to run his own process, for example a `bash` shell to explore this image
exec "$@"
My Jenkins Dockerfile environment:
ENV JAVA_OPTS="-Xmx2048m"
ENV JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
My docker-compose.yml:
version: '2'
services:
jenkins:
build: jenkins
image: my-jenkins
container_name: my-jenkins
environment:
- JAVA_OPTS="-Xmx2048m"
# - JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# - JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war -Dhudson.model.DirectoryBrowserSupport.CSP=\"default-src 'self'; style-src 'self' 'unsafe-inline';\""
# - JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war -Dhudson.model.DirectoryBrowserSupport.CSP=default-src 'self'; style-src 'self' 'unsafe-inline';"
ports:
- "49001:8080"
- "50000:50000"
volumes:
- data-jenkins-home:/var/jenkins_home
restart: always
volumes:
data-jenkins-home:
The Jenkins container is broken (it restarts after about a second or two) if any of the commented rows above are uncommented. The run throws:
Mar 02, 2017 11:32:25 AM Main deleteWinstoneTempContents
WARNING: Failed to delete the temporary Winstone file /tmp/winstone/jenkins.war
I see that 'jenkins.sh' is rebuilding the JENKINS_OPTS array. Is it possible to set the env variable JENKINS_OPTS so that the service runs properly using that script?
You can set JENKINS_OPTS in the docker run command that creates the container.
E.g. this docker run command shows how JAVA_OPTS can be set; JENKINS_OPTS works the same way.
It also shows how the Jenkins GUI port can be mapped (from 8080 in the container to 9090 for the outside world here) and how the Jenkins home directory can be customized (a Docker volume mount).
JENKINS_PORT=9090
JENKINS_SLAVE_PORT=50000
JENKINS_DIR=jenkins
IMAGE=whatever
docker run -it \
-d \
--name jenkins42 \
--restart always \
-p $OMN_HOST_IP:$JENKINS_PORT:8080 \
-p $OMN_HOST_IP:$JENKINS_SLAVE_PORT:50000 \
--env JAVA_OPTS="-Dhudson.Main.development=true \
-Dhudson.footerURL=http://customurl.com \
-Xms800M -Xmx800M -Xmn400M \
" \
-v $JENKINS_DIR:/var/jenkins_home \
$VARGS \
$IMAGE
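One caveat specific to the CSP property: as the jenkins.sh shown above makes clear, JAVA_OPTS is placed before -jar (JVM arguments) and JENKINS_OPTS after it (launcher arguments such as --logfile), so -Dhudson.model.DirectoryBrowserSupport.CSP=... only takes effect as a system property when it travels in JAVA_OPTS. A sketch for docker-compose, using map syntax so that compose does not fold literal quotes into the value:
environment:
  JAVA_OPTS: -Xmx2048m -Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self'; style-src 'self' 'unsafe-inline';"
jenkins.sh splits JAVA_OPTS with xargs, which honors embedded quotes, so the quoted CSP value should come through as a single JVM argument; verify afterwards with System.getProperty("hudson.model.DirectoryBrowserSupport.CSP") in the script console.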