I am not able to run gitleaks docker image - docker

I am trying to run the gitleaks Docker image, but it is not scanning the code and gives me an error. If I do the same by installing gitleaks locally, it scans the entire codebase.
I am using WebGoat as the vulnerable code for scanning, and below is my sample command.
docker run -v /Users/<Path>/WebGoat/ zricethezav/gitleaks:latest detect -v --source .
I am trying to map the official gitleaks command, i.e.
docker run -v ${path_to_host_folder_to_scan}:/path zricethezav/gitleaks:latest [COMMAND] --source="/path"
It is giving me an error, i.e.
○
│╲
│ ○
○ ░
░ gitleaks
7:07PM ERR [git] fatal: not a git repository (or any of the parent directories): .git
7:07PM ERR git error encountered, see logs
7:07PM WRN partial scan completed in 60.5ms
7:07PM WRN no leaks found in partial scan

I think your command needs to be more like this:
docker run -v /Users/<Path>/WebGoat:/path zricethezav/gitleaks:latest detect -v --source /path
The format of the -v argument for Docker is {host_dir}:{container_dir}, telling Docker to mount the host_dir directory at the location of container_dir inside your running container. I presume the --source argument tells gitleaks which directory to scan; this should be the location where you've mounted the volume inside the container. Passing . like you did will make it scan the current working directory of the process running inside the container, I think.
You can find more details about volume mounting in the Docker documentation.
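As an aside, a quick way to sanity-check what the container actually sees at the mount point is to override the entrypoint (this assumes the image ships an ls binary, which may not hold for very minimal images):
docker run --rm --entrypoint ls -v /Users/<Path>/WebGoat:/path zricethezav/gitleaks:latest /path
If that lists your repository, including the .git directory, the mount is correct and gitleaks should be able to scan it.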

Related

What does running multiarch/qemu-user-static do before building a container?

Can someone explain to me in simple terms what
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes -c yes
does when called right before building a container from a Dockerfile with docker build?
I have the notion that it is to allow containers built for other architectures to run on the x86 architecture, but I am not sure I quite understand the explanation I found on some sites.
Does the presence of the above docker run instruction imply that the Dockerfile of the build stage is for another architecture?
I too had this question recently, and I don't have a complete answer, but here is what I do know, or at least believe:
Setup & Test
The setup magic, required once per reboot of the system, is just this:
# start root's docker (not via any `-rootless` scripts, obviously)
sudo systemctl start docker
# setup QEMU static executables formats
sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# test
docker run --rm -t arm64v8/ubuntu uname -m
# should expect:
# >> aarch64
# optional: shutdown root's docker
sudo systemctl stop docker
Note that the test example assumes that you are running your own personal "rootless" Docker, therefore as yourself, not as root (nor via sudo), and it works just dandy.
Gory Details
... which are important if you want to understand how/why this works.
The main sources for this info:
https://docs.docker.com/desktop/multi-arch/
https://github.com/multiarch/qemu-user-static (what we are using)
https://hub.docker.com/r/multiarch/qemu-user-static/ (using buildx to build multi/cross-arch images)
https://dbhi.github.io/qus/related.html (an alternative to qemu-user-static)
https://github.com/dbhi/qus (source repo for above)
https://dev.to/asacasa/how-to-set-up-binfmtmisc-for-qemu-the-hard-way-3bl4 (manual curation of same)
https://en.wikipedia.org/wiki/Binfmt_misc
The fundamental trick to making this work is to install new "magic" strings into the kernel process space so that when an (ARM) executable is run inside a Docker container, the kernel recognizes the binary format and uses the QEMU interpreter (from the multiarch/* Docker image) to execute it. Before we set up the binary formats, the contents look like this:
root@odysseus # mount | grep binfmt_misc
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=45170)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
root@odysseus # ls /proc/sys/fs/binfmt_misc/
jar llvm-6.0-runtime.binfmt python2.7 python3.6 python3.7 python3.8 register sbcl status
After we start (root's) dockerd and set up the formats:
root@odysseus # systemctl start docker
root@odysseus # docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
[...]
root@odysseus # ls /proc/sys/fs/binfmt_misc/
jar python3.8 qemu-armeb qemu-microblazeel qemu-mipsn32 qemu-ppc64le qemu-sh4eb qemu-xtensaeb
llvm-6.0-runtime.binfmt qemu-aarch64 qemu-hexagon qemu-mips qemu-mipsn32el qemu-riscv32 qemu-sparc register
python2.7 qemu-aarch64_be qemu-hppa qemu-mips64 qemu-or1k qemu-riscv64 qemu-sparc32plus sbcl
python3.6 qemu-alpha qemu-m68k qemu-mips64el qemu-ppc qemu-s390x qemu-sparc64 status
python3.7 qemu-arm qemu-microblaze qemu-mipsel qemu-ppc64 qemu-sh4 qemu-xtensa
Now we can run an ARM version of ubuntu:
root@odysseus # docker run --rm -t arm64v8/ubuntu uname -m
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
aarch64
The warning is to be expected, since the host platform is amd64, not arm64; it can be silenced by specifying the platform to Docker:
root@odysseus # docker run --rm --platform linux/arm64 -t arm64v8/ubuntu uname -m
aarch64
How does this really work?
At the base of it is just QEMU's ability to interpose a DBM (dynamic binary modification) interpreter to translate the instruction set of one system to that of the underlying platform.
The only trick is to tell the underlying system where to find those interpreters. That's what the qemu-user-static image does in registering the binary-format magic strings / interpreters. So, what's in those binfmts?
root@odysseus # cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64-static
flags: F
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
Huh - that's interesting, especially because on the host system there is no /usr/bin/qemu-aarch64-static, and it's not in the target image either, so where does this thing live? It's in the qemu-user-static image itself, under a tag of the form <HOST-ARCH>-<GUEST-ARCH>, as in multiarch/qemu-user-static:x86_64-aarch64.
# Not on the local system
odysseus % ls /usr/bin/qemu*
ls: cannot access '/usr/bin/qemu*': No such file or directory
# Not in the target image
odysseus % docker run --rm --platform linux/arm64 -t arm64v8/ubuntu bash -c 'ls /usr/bin/qemu*'
/usr/bin/ls: cannot access '/usr/bin/qemu*': No such file or directory
# where is it?
odysseus % docker run --rm multiarch/qemu-user-static:x86_64-aarch64 sh -c 'ls /usr/bin/qemu*'
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown.
# Hmm, no `sh` in that image - let's try directly...
odysseus % docker run --rm multiarch/qemu-user-static:x86_64-aarch64 /usr/bin/qemu-aarch64-static --version
qemu-aarch64 version 7.0.0 (Debian 1:7.0+dfsg-7)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
# AHA - there it is.
That's the real magic that I don't yet quite understand. Somehow Docker is, I believe, using that image to spin up the QEMU interpreter and then feeding it the code from the actual image/container you want to run, as in the uname example from earlier. Some web-searching left me unsatisfied as to how this magic is achieved, but I'm guessing if I kept following links from here I might find the true source of that sleight-of-hand.
https://docs.docker.com/desktop/multi-arch/
To complement @crimson-egret's answer: the fix-binary flag in binfmt_misc is what makes the statically compiled QEMU emulator work across different namespaces/chroots/containers.
In the doc for binfmt_misc you can find the explanation of the fix-binary flag:
F - fix binary
The usual behaviour of binfmt_misc is to spawn the binary lazily when the misc format file is invoked. However, this doesn’t work very well in the face of mount namespaces and changeroots, so the F mode opens the binary as soon as the emulation is installed and uses the opened image to spawn the emulator, meaning it is always available once installed, regardless of how the environment changes.
This bug report also explained:
...
The fix-binary flag of binfmt is meant to specifically deal with this. The interpreter file (e.g. qemu-arm-static) is loaded when its binfmt rule is installed instead of when a file that requires it is encountered. When the kernel then encounters a file which requires that interpreter it simply execs the already open file descriptor instead of opening a new one (IOW: the kernel already has the correct file descriptor open, so possibly divergent roots no longer play into finding the interpreter thus allowing namespaces/containers/chroots of a foreign architecture to be run like native ones).
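For illustration, here is roughly what registering such a rule by hand would look like, using the magic/mask shown above (a sketch, needing root; the multiarch image does all of this for you via its register script, and the kernel itself parses the \x escapes):
# Format: :name:type:offset:magic:mask:interpreter:flags
echo ':qemu-aarch64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F' > /proc/sys/fs/binfmt_misc/register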
If you use the qemu-user-static image without the -p yes option, the fix-binary flag won't be added, and running the arm64 container won't work because now the kernel will actually try to open the qemu emulator in the container's root filesystem:
$ docker run --rm --privileged multiarch/qemu-user-static --reset
[...]
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64-static
flags:
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
$ docker run --rm -t arm64v8/ubuntu uname -m
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
exec /usr/bin/uname: no such file or directory
failed to resize tty, using default size

Run docker load inside RPM file

I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple:
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
When executing the docker load and docker-compose up commands, I get this error:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked whether the RPM scriptlets are executed as root; they are...
If I run the RPM on my dev machine, it works; if I execute the commands in a script that is not part of the RPM, it works...
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it is; this is a comment turned into an answer), some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import (see the sketch after this list).
Maybe udica will help. I don't know enough about it to tell.
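For reference, a sketch of the usual audit2allow workflow; the module name mypolicy is illustrative:
# Turn the logged denials into a compiled policy module and install it:
grep rpm_script_t /var/log/audit/audit.log | audit2allow -M mypolicy
semodule -i mypolicy.pp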
I tried the first solution and it worked!
grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
The problem came from the fact that the RPM scripts didn't have access to the container_runtime_exec_t:file entrypoint permission, which, I suppose, allows them to run container runtimes like docker.
Thanks a lot for the tip!

How to serve a tensorflow model using docker image tensorflow/serving when there are custom ops?

I'm trying to use the tf-sentencepiece operation in my model, found here: https://github.com/google/sentencepiece/tree/master/tensorflow
There is no issue building the model and getting a saved_model.pb file with variables and assets. However, if I try to use the tensorflow/serving Docker image, it says:
Loading servable: {name: model version: 1} failed:
Not found: Op type not registered 'SentencepieceEncodeSparse' in binary running on 0ccbcd3998d1.
Make sure the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing
(e.g.) `tf.contrib.resampler` should be done before importing the graph,
as contrib ops are lazily registered when the module is first accessed.
I am unfamiliar with how to build anything manually and was hoping that I could do this without many changes.
One approach would be to:
Pull a docker development image
$ docker pull tensorflow/serving:latest-devel
In the container, make your code changes
$ docker run -it tensorflow/serving:latest-devel
Modify the code to add the op dependency here.
In the container, build TensorFlow Serving
container:$ bazel build -c opt tensorflow_serving/model_servers:tensorflow_model_server && cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/local/bin/
Use the exit command to exit the container
Look up the container ID:
$ docker ps
Use that container ID to commit the development image:
$ docker commit <container_id> $USER/tf-serving-devel-custom-op
Now build a serving container using the development container as the source
$ mkdir /tmp/tfserving
$ cd /tmp/tfserving
$ git clone https://github.com/tensorflow/serving .
$ docker build -t $USER/tensorflow-serving --build-arg TF_SERVING_BUILD_IMAGE=$USER/tf-serving-devel-custom-op -f tensorflow_serving/tools/docker/Dockerfile .
You can now use $USER/tensorflow-serving to serve your model, following the Docker instructions.
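A typical way to run the resulting image would be something like the following (the model path and name are illustrative, and the standard tensorflow/serving conventions of REST port 8501 and the MODEL_NAME variable are assumed):
docker run --rm -p 8501:8501 -v /path/to/my_model:/models/my_model -e MODEL_NAME=my_model $USER/tensorflow-serving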

Docker TICK Sandbox does not provide UDF Python functionality

I'm running this docker image to use the TICK Kapacitor locally.
The problem I face is that when I try to use User Defined Functions, e.g. any of these examples, I get the error message that /usr/bin/python2 does not exist.
I add the following to the kapacitor.conf:
[udf.functions]
[udf.functions.tTest]
prog = "/usr/bin/python2"
args = ["-u", "/tmp/kapacitor_udf/mirror.py"]
timeout = "10s"
[udf.functions.tTest.env]
PYTHONPATH = "/tmp/kapacitor_udf/kapacitor/udf/agent/py"
Further attempts on my side, including altering the image used to build Kapacitor so that it installs Python, work, but the agent seems to fail to compile anyway.
Is there anyone who managed to get UDFs running using the Kapacitor Docker image?
Thanks
The Docker image from the official repository (docker pull kapacitor) does not have Python installed. You can verify this by running a shell in the container:
PS> docker exec -it kapacitor bash
and executing one of the following commands:
$ python --version
$ python: command not found
or
$ readlink -f $(which python) | xargs -I% sh -c 'echo -n "%:"; % -V'
$ readlink: missing operand
or
$ find / -type f -executable -iname 'python*'
returns nothing. Conversely, if Python is available, the commands return the version and a list of executable files.
Note: here and below, all host command snippets are given for PowerShell on Windows, and all command snippets inside the Docker container are given for the bash shell, since Linux images are used.
Basically, there are two options to get a kapacitor image with Python inside to execute UDFs:
Install Python in the kapacitor image, i.e. build a new Docker image from the kapacitor image itself.
An example can be found here:
Build a new version of the kapacitor image from one of the official Python images.
The second option is more natural, as you get a consistent Python installation and avoid redoing the work of installing Python, which has already been done by the Docker community.
So, following option 2, we'll perform these steps:
Examine the Dockerfile of the official kapacitor image
Choose an appropriate python image
Create new project and Dockerfile for kapacitor
Build and Check the kapacitor image
Examine the Dockerfile of the official kapacitor image
General note:
For any image, the original Dockerfiles can be obtained in this way:
https://hub.docker.com/
-> Description Tab
-> Supported tags and respective Dockerfile links section
-> each of the tags is a link that leads to the Dockerfile
So for kapacitor everything is in the influxdata-docker git repository
Then in the Dockerfile we find that the image is created based on
FROM buildpack-deps:stretch-curl
here:
buildpack-deps
the image provided by the project of the same name https://hub.docker.com/_/buildpack-deps
curl
This variant includes just the curl, wget, and ca-certificates packages. This is perfect for cases like the Java JRE, where downloading JARs is very common and necessary, but checking out code isn't.
stretch
short version name of the OS, in this case Debian 9 stretch https://www.debian.org/News/2017/20170617
The buildpack-deps images are in turn built based on
FROM debian:stretch
and the Debian images from the minimal Docker image
FROM scratch
Choose appropriate python image
Among the Python images, for example 3.7, you can find similar versions inheriting from buildpack-deps:
FROM buildpack-deps:stretch
Following the inheritance, we'll see:
FROM buildpack-deps:stretch
FROM buildpack-deps:stretch-scm
FROM buildpack-deps:stretch-curl
FROM debian:stretch
In other words, the python:3.7-stretch image only adds functionality on top of the same Debian base that the kapacitor image uses.
This means that we can rebuild the kapacitor image on top of the python:3.7-stretch image with no risk of incompatibility.
Docker context folder preparation
Clone the repository
https://github.com/influxdata/influxdata-docker.git
Create the folder influxdata-docker/kapacitor/1.5/udf_python/python3.7
Copy the following three files into it from influxdata-docker/kapacitor/1.5/:
Dockerfile
entrypoint.sh
kapacitor.conf
In the copied Dockerfile, replace FROM buildpack-deps:stretch-curl with FROM python:3.7-stretch
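If you prefer to script that swap, something like this would do it (paths taken from the steps above):
cd influxdata-docker/kapacitor/1.5/udf_python/python3.7
sed -i 's|^FROM buildpack-deps:stretch-curl|FROM python:3.7-stretch|' Dockerfile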
Be careful! If you work on Windows and, out of scientific curiosity, open the entrypoint.sh file in the project folder, be sure to check that it does not change the line endings from the Linux style (LF) to the Windows variant (CR LF).
Otherwise, when you start the container later, you get an error in the container log:
exec: bad interpreter: No such file or directory
or, if you start debugging and run the container with bash, you will see:
root@d4022ac550d4:/# exec /entrypoint_.sh
bash: /entrypoint_.sh: /bin/bash^M: bad interpreter: No such file or directory
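If the file has already been converted, one way to normalize the line endings back to LF (dos2unix would also work):
sed -i 's/\r$//' entrypoint.sh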
Building
Run PS> docker build -f .\Dockerfile -t kapacitor_python_udf .
Again, in case of Windows environment
If during the build execution an error occurs of the form:
E: Release file for http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease is not valid yet (invalid for another 9h 14min 10s). Updates for this repository will not be applied.
then your computer clock probably went out of sync and/or Docker Desktop incorrectly initialized the time after the system returned from sleep (see the issue).
To fix it, restart Docker Desktop and/or go to Windows Settings -> Date and time settings -> Clock synchronization -> perform Sync.
You can also read more here
Launch and check
Launch the container with the same actions as for the standard image. Example:
PS> docker run --name=kapacitor -d `
--net=influxdb-network `
-h kapacitor `
-p 9092:9092 `
-e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 `
-v ${PWD}:/var/lib/kapacitor `
-v ${PWD}/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro `
kapacitor
Check:
PS> docker exec -it kapacitor bash
$ python --version
$ Python 3.7.7
$ readlink -f $(which python) | xargs -I% sh -c 'echo -n "%:"; % -V'
$ /usr/local/bin/python3.7: Python 3.7.7

How can you cache gradle inside docker?

I'm trying to cache the things that my Gradle build currently downloads each time. For that, I try to mount a volume with the -v option, like -v gradle_cache:/root/.gradle
The thing is, each time I rerun the build with the exact same command, it still downloads everything again. The full command I use to run the image is
sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar
I also checked the directory where Docker saves the volume's content, at /var/lib/docker/volumes/gradle_cache/_data, but it is also empty.
my console log
What am I missing to make this working?
Edit: As per request, I reran the command with the --scan option.
And also with a different Gradle home:
$ sudo docker run --rm -v gradle_cache:/root/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar --gradle-user-home /root/.gradle
FAILURE: Build failed with an exception.
* What went wrong:
Failed to load native library 'libnative-platform.so' for Linux amd64.
After looking at the Dockerfile for the container I'm using, I found out that the right option to use is -v gradle_cache:/home/gradle/.gradle.
What made me think the files were cached in /root/.gradle is that the Dockerfile also sets that path up as a symlink to /home/gradle/.gradle:
ln -s /home/gradle/.gradle /root/.gradle
So inspecting the filesystem after a build made it look like the files were stored there.
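So the working command from the question, with the cache volume mounted at Gradle's actual home, becomes:
sudo docker run --rm -v gradle_cache:/home/gradle/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar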
Since 6.2.1, Gradle supports a shared, read-only dependency cache for this scenario:
It’s a common practice to run builds in ephemeral containers. A container is typically spawned to only execute a single build before it is destroyed. This can become a practical problem when a build depends on a lot of dependencies which each container has to re-download. To help with this scenario, Gradle provides a couple of options:
copying the dependency cache into each container
sharing a read-only dependency cache between multiple containers
https://docs.gradle.org/current/userguide/dependency_resolution.html#sub:ephemeral-ci-cache describes the steps to create and use the shared cache.
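A rough sketch of the read-only cache approach (the paths and volume names are illustrative; GRADLE_RO_DEP_CACHE is the environment variable described in those docs):
# 1. Seed a dependency cache once with a normal build:
docker run --rm -v gradle-home:/home/gradle/.gradle -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar
# 2. Copy the modules-2 directory from that volume to a read-only host location,
#    then point each ephemeral build container at it:
docker run --rm -v /opt/gradle-ro-cache:/ro-cache:ro -e GRADLE_RO_DEP_CACHE=/ro-cache -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:jdk8-alpine gradle jar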
Alternatively to have more control on the cache directory you can use this:
ENV GRADLE_USER_HOME /path/to/custom/cache/dir
VOLUME $GRADLE_USER_HOME
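With that in your image's Dockerfile, mounting a named volume at that same path keeps the cache across runs (the volume and image names here are illustrative):
docker run --rm -v my_gradle_cache:/path/to/custom/cache/dir my-gradle-image gradle build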
