SonarQube credentials in docker container - docker

How do I set my own SonarQube credentials while building in a Docker container? By default it takes admin:admin credentials.
I am wondering whether there is any Sonar CLI command which I can run in the Dockerfile.
Any suggestions?

You can use the api/users/create web service to create a new user once the web server is started, but not before.
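As a sketch of how that could look from a script (SONAR_URL, ci-user, and the passwords below are placeholders, not values from the question):

```shell
# Sketch: create a SonarQube user through the Web API once the server is up.
SONAR_URL="http://localhost:9000"
NEW_USER="ci-user"
NEW_PASS="s3cret-password"

# Create the new account (the default admin:admin still works at this point):
curl -fs -u admin:admin -X POST "$SONAR_URL/api/users/create" \
  -d "login=$NEW_USER" -d "name=$NEW_USER" -d "password=$NEW_PASS" \
  || echo "user creation failed - is the web server up yet?"

# Then replace the default admin password so admin:admin stops working:
curl -fs -u admin:admin -X POST "$SONAR_URL/api/users/change_password" \
  -d "login=admin" -d "previousPassword=admin" -d "password=$NEW_PASS" \
  || echo "password change failed"
```

In a Dockerfile you would typically run something like this from an entrypoint script after starting the server, since the API is not available before the web server is up.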

Related

How to login docker account in Gitlab-ci

I have subscribed to a Pro plan of Docker account to increase the rate limit in my self-hosted GitLab CI jobs. Then I successfully logged in using this command on the server:
$ sudo docker login -u user -p *******
This is my .gitlab-ci.yml file:
image: edbizarro/gitlab-ci-pipeline-php:7.3-alpine

unittest:
  stage: testing
  services:
    - mysql:latest
  script:
    - ./vendor/bin/phpunit --colors --stop-on-failure
But when jobs get started, I'm still getting this error:
Running with gitlab-runner 13.6.0 (8fa89735)
on fafa-group-runner n7oiBzAk
Preparing the "docker" executor
Using Docker executor with image edbizarro/gitlab-ci-pipeline-php:7.3-alpine ...
Starting service mysql:latest ...
Pulling docker image mysql:latest ...
ERROR: Preparation failed: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit (docker.go:142:4s)
Am I missing something?
You performed the docker login as the root user on the host. However, the images are being pulled by the GitLab runner, which runs as another user, possibly containerized.
The instructions for configuring runner registry credentials have several options, including setting DOCKER_AUTH_CONFIG in either the project's .gitlab-ci.yml or the runner's config.toml. That variable contains the content of the ~/.docker/config.json with the registry credentials inside.
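For example, the DOCKER_AUTH_CONFIG value can be generated like this (a sketch; user:p4ssw0rd is a placeholder for your real Docker Hub credentials):

```shell
# Sketch: generate the DOCKER_AUTH_CONFIG value for Docker Hub.
# "user:p4ssw0rd" is a placeholder - substitute your own credentials.
AUTH=$(printf '%s' "user:p4ssw0rd" | base64)
printf '{"auths":{"https://index.docker.io/v1/":{"auth":"%s"}}}\n' "$AUTH"
```

Paste the printed JSON into a CI/CD variable named DOCKER_AUTH_CONFIG (or into the runner's config.toml), and the runner will use it when pulling images for jobs and services.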
Check also GitLab 13.9 (February 2021)
Automatically authenticate when using the Dependency Proxy
By proxying and caching container images from Docker Hub, the Dependency Proxy helps you to improve the performance of your pipelines.
Even though the proxy is intended to be heavily used with CI/CD, to use the feature, you had to add your credentials to the DOCKER_AUTH_CONFIG CI/CD variable or manually run docker login in your pipeline. These solutions worked fine, but when you consider how many .gitlab-ci.yml files that you need to update, it would be better if the GitLab Runner could automatically authenticate for you.
Since the Runner is already able to automatically authenticate with the integrated GitLab Container Registry, we were able to leverage that functionality to help you automatically authenticate with the Dependency Proxy.
Now it’s easier to use the Dependency Proxy to proxy and cache your container images from Docker Hub and start having faster, more reliable builds.
See Documentation and Issue.

Google Cloud can't find default credentials when trying to run docker image

I am trying to run a Docker image through a Google Cloud proxy and despite my best efforts Google Cloud continues giving me this error:
Can't create logging client: google: could not find default
credentials. See
https://developers.google.com/accounts/docs/application-default-credentials
for more information.
Whenever I try to run my Docker image using this command:
sudo docker run dc701c583cdb
I have tried updating my GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of my key file.
I have successfully logged in to Google Cloud using the gcloud auth application-default login command.
I've defined and associated my project in Google Cloud.
I am attempting this in order to run an open source project. I'm quite sure I created the Docker image correctly. I have a feeling the issue is coming from the fact that I am not correctly connecting the existing project to my Google Cloud.
Any advice would be greatly appreciated. I am using Docker 18.06.1-ce and Google Cloud-SDK 219.0.1. Running on a virtual linux machine with Ubuntu 18.04.
When running the google/cloud-sdk container from Docker Hub in a newly-created Ubuntu 18.04 instance, the container's gcloud automatically inherits the instance's user configuration. Give it a try: run that container and then run gcloud info inside of it.
As such, I believe you might be doing something wrong. I recommend you take a look at the aforementioned container to see how that can be made to work.
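One other thing worth checking, beyond the answer above: an environment variable set in the host shell is not visible inside the container. A common approach (a sketch; the key path is hypothetical, the image ID is the one from the question) is to mount the key file and set the variable on the container itself:

```shell
# Hypothetical key location; dc701c583cdb is the image ID from the question.
KEY="$HOME/keys/key.json"
sudo docker run \
  -v "$KEY:/tmp/key.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/key.json \
  dc701c583cdb \
  || echo "docker run failed - check that Docker is running and the key exists"
```

The path given to GOOGLE_APPLICATION_CREDENTIALS must be the path inside the container (the mount target), not the path on the host.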

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
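Putting the two commands together, the full save, copy, and load flow looks like this (a sketch; the image name, remote user, and hostname are placeholders):

```shell
IMAGE="myapp:latest"        # placeholder image name
REMOTE="deploy@server-b"    # placeholder user@host for Server B

# On Server A: save the image to a tar archive.
docker image save -o myapp.tar "$IMAGE" || echo "save failed"

# Copy it to Server B, then load and run it there.
scp myapp.tar "$REMOTE:/tmp/myapp.tar" || echo "copy failed"
ssh "$REMOTE" "docker image load -i /tmp/myapp.tar && docker run -d $IMAGE" \
  || echo "remote load/run failed"
```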
This save-scp-load method is rarely used. The common approach is to set up a private Docker registry behind your firewall. And push images to or pull from that private registry. This doc describes how to deploy a container registry. Or you can choose registry service provided by a third party, such as Gitlab's container registry.
When using Docker repositories, you only push/pull the layers which have been changed.
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make HTTP requests. You can run Docker commands directly on a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
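If you do expose the daemon over TCP, a safer variant (a sketch, assuming you have already provisioned TLS certificates for the daemon) is to use the TLS port 2376 together with Docker's standard TLS environment variables:

```shell
export DOCKER_HOST="tcp://your-remote-server.org:2376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/certs"   # hypothetical cert directory
# Every docker command in this shell now targets the remote daemon over TLS,
# e.g. `docker ps` lists containers running on your-remote-server.org.
```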
Another method is to use SSH Agent Plugin in Jenkins.

Is it possible to log into Gitlabs container registry without using the CI runner?

Is it possible to log into the Gitlab registry automatically from a script outside of the context of their CI runner?
I have a very simple deployment process with a home-baked script which does the following in a nutshell:
build container image
push to registry.gitlab.com
log into the target server
pull the container image from registry.gitlab.com
run the container
Each interaction with Gitlab's registry requires the following:
docker login registry.gitlab.com
which will prompt for my username / password. I would prefer to be able to do something like this:
docker login -u <username> -p <password> registry.gitlab.com
so that I can achieve true automation of my deployments.
I've looked at Gitlab's documentation and spent some time searching on Google but all references I've found relate to using Gitlab's CI runner, which I do not need for my use-case.
Is what I'm trying to achieve possible?
Absolutely!
When you do a docker login your.docker.registry.com, what happens is:
Your machine attempts to log in to the docker registry specified
If login is successful, it creates a file at $HOME/.docker/config.json
This is outlined in the official docker documentation.
The config.json is pretty easy to understand and pretty easy to echo in a shell script. Once you've logged in successfully, you can view your generated config.json.
Essentially this means you can execute something like this in a bash script:
echo '{"auths":{"registry.gitlab.com":{"auth":"base64encodedcredentials"}}}' > ~/.docker/config.json
Once the machine has this file in the right location, it shouldn't prompt you for a login or password when pulling images.
Just remember that your username and password are stored in this file in a base64 encoded format. You may want another set of dedicated credentials if this bothers you.
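The echoed config.json can also be generated rather than hand-typed (a sketch; myuser:mytoken is a placeholder, and a GitLab deploy token is preferable to your account password here):

```shell
# base64-encode "username:token" exactly as docker login would.
AUTH=$(printf '%s' "myuser:mytoken" | base64)
CONFIG_DIR="${DOCKER_CONFIG:-$HOME/.docker}"   # docker's config location
mkdir -p "$CONFIG_DIR"
printf '{"auths":{"registry.gitlab.com":{"auth":"%s"}}}\n' "$AUTH" \
  > "$CONFIG_DIR/config.json"
```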

What is Docker URL

I am building a Gradle project in Jenkins, and the client has asked to build the project in a Docker image. I am new to Jenkins and Docker (I am able to build the project normally on Jenkins). I have installed the Docker plugin, and now it asks for a Docker URL and Docker API version under cloud settings. What are those, and how do I configure Docker? I am running Jenkins on a remote server, which was set up by another person. I don't have shell access; I have to use Docker to build the project. Also, what is a Dockerfile, and how do I build one and what do I put in it?
As per the Jenkins Docker plugin page:
The URL to use to access your Docker server API (e.g. http://172.16.42.43:4243)
But I would recommend using https://your_local_docker_machine_ip_here:2376
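Since you also asked what a Dockerfile is: it is a plain text file named Dockerfile describing how to build an image, one instruction per line. A minimal one for a Gradle project might look like this (a sketch; the base image tag and project layout are assumptions):

```dockerfile
# Build the project inside an official Gradle image (tag is an assumption).
FROM gradle:7-jdk11
WORKDIR /app
COPY . .
RUN gradle build --no-daemon
```

You would then build it with `docker build -t myproject .` from the project root.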
