initialAdminPassword file not present inside /var/jenkins_home/secrets folder - docker

Hi all, I am trying to run Jenkins on my (Linux) machine using a Docker container.
The Jenkins container starts successfully,
but it does not show the initialAdminPassword in the console.
So I tried checking the password in the initialAdminPassword file inside the
Jenkins container using the docker exec command, but couldn't find the file.
I even tried as the superuser, but the result was the same.
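For reference, a typical way to make that check (a sketch; the container name jenkins is a placeholder) is to read the file directly or pull the password from the container logs:

# read the generated password from the secrets directory
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
# or look for it in the startup log output
docker logs jenkins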

Related

Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized them to suit our needs. This is running very well and successfully executing the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container, and I'm happy with the results. However, I am having an issue running a small utility container inside the agent with a bind-mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So in theory, this runs npm install against the local directory mounted into the container (npm is the command in the ENTRYPOINT, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, it doesn't seem to mount the directory properly: if I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
I have inspected the container, and it confirms there is a volume created of type bind; I can also see it when I list the Docker volumes.
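For reference, that inspection can be done with something like this (the container name is a placeholder):

# show the mounts Docker believes are attached to the container
docker inspect --format '{{ json .Mounts }}' my-npm-container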
Is there something fundamental I am doing wrong here? I would really appreciate some help, please.

How to copy (log) files from build container to host if the build fails?

How do I copy some log files from the container to the host while docker build is running the commands in the Dockerfile? As soon as the build fails, the build container disappears.
One way is, after every RUN command, to swallow the non-zero exit code, print the logs to stdout, and then re-raise the original exit code. But that doesn't scale: if we want to copy a whole directory, we're not going to zip it and dump it to the console :P
Is there a potential solution? Maybe connecting a file from the host to the container, or mounting a directory during the build process?
If you write the logs to stdout, you can retrieve them with these commands:
LOG_PATH=$(docker inspect --format='{{.LogPath}}' <the-name-of-your-docker>)
echo ================= Docker log start ===========================
sudo cat $LOG_PATH
echo ================= Docker log end =============================
If you want container data to persist after the container exits (e.g. logs, databases, or other state), then you should mount a local drive/folder into the container and write whatever you need to persist to the mounted location.
In your case, mount a local folder (on the host) into the container and write the build logs to that folder.
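A minimal sketch of that suggestion (the image name, build script, and paths are illustrative):

# create a host folder for the logs and bind-mount it into the container
mkdir -p "$(pwd)/build-logs"
# run the build inside the container, teeing its output to the mounted folder
docker run --rm -v "$(pwd)/build-logs:/logs" my-build-image \
  sh -c './build.sh 2>&1 | tee /logs/build.log'

The logs then survive in ./build-logs on the host even if the build fails.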
Also see the answer to this question for more options: Docker: Copying files from Docker container to host

docker-compose docker-entrypoint-initdb.d Permission denied

I am trying to run the Puppet Pupperware suite (all 3 servers: Puppet server, PuppetDB, and the DB server).
I am using the official Yaml file provided by puppetlabs for docker compose : https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with docker-compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
As a result, the build fails (only the Puppet server comes up, but not the others).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, and I am not sure how to use a Dockerfile and docker-compose simultaneously.
Thank you for your help.
The docker-compose file is from the Puppet 6 era. The Docker images that the Pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
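In the compose file, that amounts to pinning the image: lines, roughly like this (the service names here are assumptions and may differ in your copy of the pupperware docker-compose.yml):

services:
  puppet:
    image: puppet/puppetserver:6.14.1
  postgres:
    image: postgres:9.6
  puppetdb:
    image: puppet/puppetdb:6.13.1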
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the Docker daemon uses to build the image, including the commands run by the Linux environment inside the container. You then point the service in your docker-compose.yml at it with a build: directive instead of an image: one, and docker-compose builds the image before bringing the services up.
So to solve the permission problem, you should add a RUN instruction, which executes a Linux command in the shell at build time, and use it to fix the ownership/permissions of the folder.
Also look at this answer.
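A minimal sketch of that approach (the base image and ownership target are assumptions based on the error above):

# Dockerfile
FROM postgres:9.6
# give the postgres user access to the init-scripts directory
RUN chown -R postgres:postgres /docker-entrypoint-initdb.d

# docker-compose.yml (excerpt): build from the Dockerfile instead of pulling
postgres:
  build: .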

Using docker-compose on a Kubernetes instance with Jenkins - mounting empty volumes

I have a Jenkins instance set up using Google's Jenkins on Kubernetes solution. I have not changed any of the settings of the Kubernetes pod.
When I trigger a new job, I am successfully able to get everything up and running until the point of my tests.
My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I can optimize this by using an image that already includes these, but I am still at the proof-of-concept stage).
When I run the docker-compose up command, everything works and the services start their initialization scripts. However, the mounts are empty. I have verified that the files exist on the Jenkins slave, and the mount is created inside the Docker service when I run docker-compose; however, it is empty.
Some information:
In order to get around file permissions, I am using /tmp as the Jenkins workspace. I am using SCM to pull my files (successfully), and in the docker-compose file I specify version: '2' and the mount paths as absolute paths. The volumes section of the service that fails looks like this:
volumes:
- /tmp/automation:/opt/automation
I changed the command that is run in the service to ls /opt/automation and the result is an empty directory.
What am I missing? I just want to mount a directory into my docker-compose service. This works perfectly from Windows, Ubuntu, and CentOS devices. Why won't it work on the Kubernetes instance?
I found the reason it fails here:
A Docker container in a Docker container uses the parent HOST's Docker daemon, and hence any volumes that are mounted in the "docker-in-docker" case are still referenced from the HOST, and not from the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted to a new Docker container inside a container.
So it seems it is impossible to mount something from the outer Docker into the inner Docker, and another solution must be found.
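One common workaround (a sketch; the image and container names are placeholders, the paths are from the question) is to copy the files into the container instead of bind-mounting a path that only exists inside the Jenkins container:

# create the container without starting it
docker create --name automation-tmp my-service-image
# docker cp streams files through the daemon's API, so the source path
# is read from where the CLI runs (the Jenkins container), not the host
docker cp /tmp/automation/. automation-tmp:/opt/automation
docker start automation-tmp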

Docker services in image not launching

I am new to Docker. I've prepared a Docker image, on which I've installed an application as the root user. On launch, root's .bashrc contains some lines that are executed. On the machine where I prepared the image, everything ran correctly. I saved the image to a tar file with the docker save command. Using that tar file, I loaded the image on another machine. When I start the image there using docker run, it doesn't execute root's .bashrc. When I execute it manually using source .bashrc, it runs, but there are services which fail. Any idea why this is happening? I was thinking that the moment you have the image and load it into a container, it is expected to work the same everywhere.
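For context, .bashrc is only sourced by interactive shells; docker run executes the image's ENTRYPOINT/CMD directly, so nothing reads it. A sketch of starting the services explicitly instead (the script name is an assumption):

# Dockerfile (sketch)
COPY start-services.sh /usr/local/bin/start-services.sh
RUN chmod +x /usr/local/bin/start-services.sh
# run the startup script as PID 1 instead of relying on .bashrc
ENTRYPOINT ["/usr/local/bin/start-services.sh"]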
