Hi, I am trying to enable CloudWatch logging in my Docker container on my Mac machine.
Docker version:
Version: 18.03.1-ce
API version: 1.37
I am getting the following error every time I start the container:
Error response from daemon: failed to initialize logging driver: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have tried the following approaches:
Exporting AWS_ACCESS_KEY_ID (etc.) in /etc/default/docker
Mounting ~/.aws/credentials into the container
Passing AWS credentials as environment variables
But every time I get the same error.
docker run -d -p 5801:8080 --env AWS_REGION=us-west-2 -v /Users/me/.aws/credentials:/root/.aws/credentials:ro --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=perf-log-group --log-opt awslogs-create-group=true --log-opt awslogs-stream=awslogs-ing imageId
Could you please suggest what I am missing here? If I remove the logging options, the application works fine and I am able to access the AWS API from within the application.
I came here after searching the net for an answer, as I too was unable to understand what the docs wanted me to do. Finally, this tutorial helped me work through it: https://transang.me/configure-docker-to-send-log-to-aws/. Although I am on Ubuntu 20.04, I assume we both face the same difficulty understanding where to put the environment information.
You have to provide the credentials to the Docker daemon of your local machine, not to the docker build or docker run command. The awslogs driver runs inside the daemon process, so it reads the daemon's environment, not the container's.
According to the tutorial, put the config here:
# cat /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"
followed by the commands
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
to flush the changes and reload the Docker daemon. Alternatively, stop the daemon and feed the environment variables to it directly:
$ sudo systemctl stop docker
$ sudo env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key /usr/bin/dockerd
Pitfall: if you have MFA enabled you may need to provide the session token, too (at least I stumbled over this). The daemon invocation then becomes:
$ sudo env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN /usr/bin/dockerd
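Once the daemon itself can see the credentials, the original run command from the question should work without mounting ~/.aws/credentials into the container (a sketch reusing the asker's options; imageId is a placeholder):
$ docker run -d -p 5801:8080 --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=perf-log-group --log-opt awslogs-create-group=true --log-opt awslogs-stream=awslogs-ing imageId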
From my local machine I would like to run docker-compose against my remote machine. I have found two ways that should accomplish this but am running into errors.
First, I am running the following versions:
me:api$ docker-compose -v
docker-compose version 1.29.2, build unknown
me:api$ docker -v
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2
The first way, and the way I would prefer for this to work, is to use the --context flag. Based on this blog post it should be possible.
I created the context like so:
docker context create prod --docker "host=ssh://user@host.com"
I can then run the following and get an output of running containers
docker --context prod ps
However, running with docker-compose, the command fails:
docker-compose --context prod -f docker-compose.yml -f docker-compose.prod.yml up -d
ERROR: Context 'prod' not found
The other option was to use the -H flag, based on this SO answer, to set the host that I want to execute the commands on. Yes, I can SSH into my machine with the user I am using.
docker-compose -H ssh://user@host -f docker-compose.yml -f docker-compose.prod.yml up -d
/bin/sh: 1: ssh: Permission denied
Well, it does not look great. I have not been able to get any of these methods to work with docker-compose; however, there is a new (at the time of writing) docker compose replacement from Docker. The instructions are here. I was hoping that the --context flag would work, but it is not found in the new version, which leads me to believe it did not work before either. The -H flag led to the same Permission denied error as before.
The only way I was able to get this to work was by setting the DOCKER_HOST environment variable.
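For reference, a minimal sketch of that workaround (user@host is a placeholder for your own SSH target, as in the commands above):
$ export DOCKER_HOST=ssh://user@host
$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d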
I'm on a fresh Fedora CoreOS which comes with Docker version 19.03.11.
My core user is in the docker group:
[core@localhost ~]$ groups
core adm wheel sudo systemd-journal docker
Following the deployment instructions for portainer, I create a new Portainer container like this (as core or root, it doesn't even matter):
$ docker volume create portainer_data
$ docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
And when I try to connect to the local daemon from the Portainer UI, I get a permission denied error.
Permissions of /var/run/docker.sock:
[core@localhost ~]$ ll /var/run/docker.sock
srw-rw----. 1 root docker 0 Aug 2 10:02 /var/run/docker.sock
Even if I chmod o+rw /var/run/docker.sock, it doesn't work. This indicates that the problem might be in the container itself, so I tried to access it, but I can't:
[core@localhost ~]$ docker exec -it portainer sh
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
All the resources I have found so far suggest adding the user to the docker group (which I did), rebooting the machine (which I did), or setting 666 on /var/run/docker.sock (which I did, but would prefer not to). Nothing helped.
Any idea what's wrong and how to fix it?
If it is an SELinux issue, first try to follow portainer/portainer issue 849:
The correct way is to add :z to the volume mapping, so you're not defeating the purpose of Docker.
Like so:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock:z portainer/portainer
Also, we need a way to add the z or Z flag in Portainer for new containers. This has been a Docker feature since 1.7, i.e. 2015.
That, or using dpw/selinux-dockersock
Thanks to MrPaperbag on the Portainer Discord, I found out it's because of a restriction by SELinux.
Found the solution here: https://nanxiao.me/en/selinux-cause-permission-denied-issue-in-using-docker/
Either run docker run with --privileged, or set the SELinux mode to permissive using setenforce 0. There is probably a way to properly configure SELinux instead of just circumventing it; however, for my use case this is good enough.
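For reference, a minimal sketch of both workarounds (same image and socket mount as above; setenforce 0 is temporary and reverts on reboot):
$ docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
$ sudo setenforce 0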
For 2 days I have been trying to run Docker inside an Ubuntu container:
docker run -it ubuntu bash
I installed Docker following the instructions at https://docs.docker.com/engine/install/ubuntu/ and/or https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
Finally, I got Docker installed:
root@e65411d2b70a:/# docker -v
Docker version 19.03.6, build 369ce74a3c
But when I try to run docker run hello-world, I have a problem:
root@5ac21097b6f6:/# docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
Docker is not in the service list:
root@5ac21097b6f6:/# service docker start
docker: unrecognized service
root@5ac21097b6f6:/# service --status-all
[ - ] apparmor
[ + ] cgroupfs-mount
[ - ] dbus
[ ? ] hwclock.sh
[ - ] procps
[ ? ] ubuntu-fan
When I try to run dockerd:
root@5ac21097b6f6:/# dockerd
INFO[2020-04-23T07:01:11.622627006Z] Starting up
INFO[2020-04-23T07:01:11.624389266Z] libcontainerd: started new containerd process pid=154
INFO[2020-04-23T07:01:11.624460438Z] parsed scheme: "unix" module=grpc
INFO[2020-04-23T07:01:11.624477203Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-04-23T07:01:11.624532871Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>} module=grpc
INFO[2020-04-23T07:01:11.624560679Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-04-23T07:01:11.664827037Z] starting containerd revision= version="1.3.3-0ubuntu1~18.04.2"
ERRO[2020-04-23T07:01:11.664943052Z] failed to change OOM score to -500 error="write /proc/154/oom_score_adj: permission denied"
...
INFO[2020-04-23T07:01:11.816951247Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.6.1: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
I don't understand why permission is denied if the user is root.
I installed sudo and added root to the sudo group, but it didn't help:
apt-get install sudo
usermod -a -G sudo root
sudo dockerd has the same problem.
How can I make Docker work inside an Ubuntu container? Do you have any ideas?
P.S. I know about docker-in-docker; I need Docker specifically inside an Ubuntu container.
P.P.S. I know about -v /var/run/docker.sock:/var/run/docker.sock, but I need an independent Docker service inside the Ubuntu container.
When running docker in docker, the container must use the docker engine on your host.
Here is a simple working setup:
1) Create a Dockerfile with the docker CLI installed. I am using the official compose image, so you also have docker-compose:
FROM docker/compose:1.25.5
WORKDIR /app
ENTRYPOINT ["/bin/sh"]
2) When running it, mount the Docker socket:
$ docker build -t dind .
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock dind
From within the container, you now have docker. Try running docker ps.
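For example (a sketch; assuming the compose image ships the docker CLI as described above, the host's containers should be listed, because the CLI inside talks to the host daemon through the mounted socket):
/app # docker ps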
If you want to do docker in docker without -v /var/run/docker.sock:/var/run/docker.sock then I am afraid that there is no good way to do this.
Sharing the docker socket from host is the classic way to make docker containers run within another docker container.
I tried my best to run containers within containers, just like you, for the past few days, and wasted many hours. So far most people advised me to use Docker's DIND image, which is not applicable in my case, as I need the main container to be Ubuntu OS, or to run some privileged command and map the daemon socket into the container, like -v /var/run/docker.sock:/var/run/docker.sock
(This never worked for me on any Ubuntu OS I tried. The reason is that the main container, being based on Ubuntu OS, does not come with systemd, which is important for running Docker containers conveniently, as on a usual local machine.)
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to use, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options.
The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it instead of bothering with all the tedious setup other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers; they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container securely function exactly like a virtual machine. You can literally SSH into your Ubuntu main container without the ability to access anything on the main machine. From your main container you can create all kinds of containers, just like a normal local system does. That systemd is very important for setting up Docker conveniently inside the container.
One simple, common command to run a container with Sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more at their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple Sysbox runtime environment container:
https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
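For instance, to get an inner Docker without mounting the host socket (a hedged sketch; nestybox/ubuntu-focal-systemd-docker is, as far as I know, one of the images they publish with systemd and Docker preinstalled):
$ docker run --runtime=sysbox-runc -it nestybox/ubuntu-focal-systemd-docker
Inside that container, systemd boots and starts its own independent Docker daemon, so docker run hello-world works without touching the host's /var/run/docker.sock.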
I am trying to configure a container running outside of GCP to log to Google Cloud Platform (Stackdriver). One requirement is that the Docker daemon is able to locate the environment variable GOOGLE_APPLICATION_CREDENTIALS so it can authenticate. One would assume that the following would work, but it doesn't:
GOOGLE_APPLICATION_CREDENTIALS=/usr/local/keys/project-1.json docker run --log-driver=gcplogs ...
That outputs:
ERROR: for api Cannot start service api:
failed to initialize logging driver: google: could not find default credentials.
See https://developers.google.com/accounts/docs/application-default-credentials
for more information.
I haven't found any documentation on how to set that directly in daemon.json, but I don't want that either, because I might have different containers logging to different GCP projects.
I've tried this on Mac (docker desktop) and Debian.
This is a question that keeps coming back. What is happening here is that the environment variable GOOGLE_APPLICATION_CREDENTIALS must be loaded by the system Docker daemon. System daemons don't see the environment variables set in your user login session. What you need to do is set GOOGLE_APPLICATION_CREDENTIALS at the system level.
Here is how to do that on Ubuntu (systemd):
$ sudo mkdir -p /etc/systemd/system/docker.service.d
Create /etc/systemd/system/docker.service.d/env.conf with the following content:
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/path/to/file.json"
Apply the changes.
$ sudo systemctl daemon-reload
Once done, restart the docker/containerd daemons:
$ sudo systemctl restart containerd
$ sudo systemctl restart docker
Test the gcplogs driver
docker run --log-driver=gcplogs --log-opt gcp-project="my-project" hello-world
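Note that gcp-project is a per-container log option, so even with a single daemon-level credential, different containers can still log to different GCP projects, which addresses the concern above (a sketch; my-other-project and my-image are placeholders):
docker run --log-driver=gcplogs --log-opt gcp-project="my-other-project" my-image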
I would like my CentOS 7 container to log messages to /var/log/messages:
[root@gen-r-vrt-057-009 ~]# docker exec -it rsyslog_base_centos7 "/bin/bash"
[root@gen-r-vrt-057-009 /]# logger "lior"
[root@gen-r-vrt-057-009 /]# cat /var/log/messages
[root@gen-r-vrt-057-009 /]#
I installed rsyslog and tried running the container in several ways:
docker run -dit --name rsyslog_base_centos7 --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host -v /dev/log:/dev/log --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
But nothing seems to do the trick.
Container OS and Docker version:
[root@gen-r-vrt-057-009 /]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@gen-r-vrt-057-009 /]# exit
[root@gen-r-vrt-057-009 ~]# docker -v
Docker version 17.03.2-ce, build f5ec1e2
Any ideas?
Thanks
If I understand you correctly, you want to run rsyslog inside the container but want to make rsyslog log data from the host machine. By default, this is not possible due to isolation.
It is an interesting use case, and we should probably add an issue for it to the tracker at https://github.com/rsyslog/rsyslog-docker.
You can probably achieve your goal by mounting /dev/log into the container, but depending on the host OS that requires some extra work there as well.
The rsyslog/rsyslog_base_centos7 image is designed with the intent of providing a base container that you can use to make applications inside the container use rsyslog logging.
Please also have a look at this Twitter conversation: https://twitter.com/rgerhards/status/978183898776686592 - doc updates will be upcoming once we have the actual procedure.
Note: This answer was completely rewritten as I originally totally missed the point.
Smart people from rsyslog put the following Docker image together:
https://hub.docker.com/r/rsyslog/rsyslog_base_centos7
It allows for your use case:
c) want to run a client machine where rsyslog processes log messages
(the default CentOS 7 config does NOT work inside a container, but
this container has a corrected config!)
Here is a URL to a patch you can apply to the CentOS 7 Docker config to make it work:
https://gist.github.com/oleksandriegorov/2718a7e35b8d17ada934b651d627ab97
Of course, restart rsyslogd to apply changes.
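Inside a container booted with /usr/sbin/init as in the run commands above, that restart is simply (a sketch, assuming systemd is PID 1 in the container):
[root@gen-r-vrt-057-009 /]# systemctl restart rsyslog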