Error using the `gcloud auth application-default login` command in gcloud CLI - docker

I am learning Kubernetes on GCP. To deploy the cluster to Google Cloud, I used Skaffold. The following is the YAML file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  # local:
  #   push: false
  googleCloudBuild:
    projectId: ticketing-dev-368011
  artifacts:
    - image: gcr.io/{project-id}/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: "src/**/*.ts"
            dest: .
In the Google Cloud CLI, when I run the command skaffold dev, an error pops up saying:
getting cloudbuild client: google: could not find default credentials.
Then I ran the command gcloud auth application-default login in my local terminal. I was prompted to give consent in the browser; I gave consent, and the page redirected to a "successful authentication" page. But when I look at my terminal, there is an error message showing:
Error saving Application Default Credentials: Unable to write file [C:\Users\Admin\AppData\Roaming\gcloud\application_default_credentials.json]: [Errno 9] Bad file descriptor
And I found that no such file was created in the above directory. Can someone please help me figure out what I have done wrong?
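The thread ends without an accepted fix, but one hedged suggestion: on Windows, an [Errno 9] Bad file descriptor from gcloud is sometimes caused by the terminal the command is run in (e.g. Git Bash/MinTTY) rather than by gcloud itself. Rerunning from cmd.exe or PowerShell, optionally with the --no-launch-browser flag (which prints the auth URL instead of opening a browser), may be worth a try:

```shell
# Assumption: the write failure comes from the terminal environment, not gcloud.
# Run this from cmd.exe or PowerShell instead of Git Bash/MinTTY:
gcloud auth application-default login --no-launch-browser

# Then confirm the credentials file was actually written:
dir %APPDATA%\gcloud\application_default_credentials.json
```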

Related

Drone CI - docker plugin - parse error. Why can't Drone parse it?

Could somebody help me, please?
I am trying to build and publish my image to a private Docker registry:
kind: pipeline
name: default

steps:
  - name: docker
    image: plugins/docker
    settings:
      username: ****
      password: ****
      repo: https://*****.com:5000/myfirstimage
      registry: https://*****.com:5000
      tags:
        - latest
But I got the following error:
Error parsing reference: "https://*****.com:5000/myfirstimage:latest" is not a valid repository/tag: invalid reference format
15921 time="2020-10-18T17:52:20Z" level=fatal msg="exit status 1"
But when I push manually, everything works. What am I doing wrong? I will be grateful for any help.
You should not include the scheme in the addresses of the Docker registry and repo; an image reference containing https:// is not a valid repository/tag, which is exactly what the error says.
Try writing those addresses without https://, as in:
repo: <registry-hostname>.com:5000/myrepo/myfirstimage
registry: <registry-hostname>.com:5000
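Applied to the pipeline above, the fix might look like this (hostnames left masked as in the question; the masked values are quoted here only so that the leading * remains valid YAML):

```yaml
kind: pipeline
name: default

steps:
  - name: docker
    image: plugins/docker
    settings:
      username: "****"
      password: "****"
      repo: "*****.com:5000/myfirstimage"
      registry: "*****.com:5000"
      tags:
        - latest
```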

GCloud Build - Error: Dial tcp -- no such host

Hi, I have the following skaffold.yaml file:
apiVersion: skaffold/v2beta5
kind: Config
build:
  artifacts:
    - image: us.gc.io/directed-relic-285313/auth
      context: auth
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
      docker:
        dockerfile: Dockerfile
  googleCloudBuild:
    projectId: directed-relic-285313
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
I set everything up to be able to leverage Google Cloud for development. I.e., I downloaded the Google Cloud SDK, installed the Google Cloud context, configured kubectl to use it, and so on.
Now, when I run
skaffold dev
I see the following error
Successfully built 4a849a25796b
Successfully tagged us.gc.io/directed-relic-285313/auth:latest
PUSH
Pushing us.gc.io/directed-relic-285313/auth:latest
The push refers to repository [us.gc.io/directed-relic-285313/auth]
Get https://us.gc.io/v2/: dial tcp: lookup us.gc.io on 169.254.169.254:53: no such host
ERROR: push attempt 1 detected failure, retrying: step exited with non-zero status: 1
Pushing us.gc.io/directed-relic-285313/auth:latest
The push refers to repository [us.gc.io/directed-relic-285313/auth]
Get https://us.gc.io/v2/: dial tcp: lookup us.gc.io on 169.254.169.254:53: no such host
ERROR: push attempt 2 detected failure, retrying: step exited with non-zero status: 1
Pushing us.gc.io/directed-relic-285313/auth:latest
Any idea where to start debugging this? Or what could be causing the error?
I had a typo: the registry host should be us.gcr.io, not us.gc.io.
Thanks tarun_khosla
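In other words, only the registry host in the skaffold.yaml above needed to change; the artifact entry becomes:

```yaml
build:
  artifacts:
    - image: us.gcr.io/directed-relic-285313/auth
      context: auth
```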

GitLab and Docker registry on separate servers

I'm a little bit desperate and starting to go mad.
I tried to configure my GitLab instance (Omnibus) to work with an external private Docker image registry. Initially I thought it would be a relatively easy task. But now I am totally confused.
My initial installation looked like this:
generate a self-signed cert
clean instance of a Docker registry on Ubuntu 18.04 with docker-compose and nginx, secured with Let's Encrypt on registry.domain.com
I used the following compose file:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: token
      REGISTRY_AUTH_TOKEN_REALM: https://registry.domain.com:5000/auth
      REGISTRY_AUTH_TOKEN_SERVICE: "Docker registry"
      REGISTRY_AUTH_TOKEN_ISSUER: "gitlab-issuer"
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /etc/gitlab/registry-certs/registry-auth.crt
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./auth:/auth
      - ./data:/data
clean instance of GitLab on Ubuntu 18.04, secured with Let's Encrypt on gitlab.domain.com
some changes in gitlab.rb, like:
registry_external_url 'https://registry.domain.com/'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "registry.domain.com"
gitlab_rails['registry_port'] = "5000"
gitlab_rails['registry_api_url'] = "htps://registry.prismstudio.space:5000"
gitlab_rails['registry_key_path'] = "/etc/gitlab/registry-certs/registry-auth.key"
gitlab_rails['registry_issuer'] = "gitlab-issuer"
After gitlab-ctl reconfigure I receive a Let's Encrypt error:
letsencrypt_certificate[gitlab.domain.net] (letsencrypt::http_authorization line 5) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 25) had an error: RuntimeError: ruby_block[create certificate for gitlab.domain.net] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/acme/resources/certificate.rb line 108) had an error: RuntimeError: [gitlab.domain.com] Validation failed, unable to request certificate
I have literally tried everything, but nothing helps.
Is there any straightforward way to set up a GitLab server with an external Docker registry? How do I configure it properly? I'm open to burning everything to the ground and building it again with a working configuration.
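One detail worth checking first (an observation about the gitlab.rb excerpt above, not a confirmed fix for the Let's Encrypt error): the registry_api_url scheme is misspelled htps:// and points at a different hostname than registry_host. Assuming both are unintended, a cleaned-up version of those settings might look like:

```ruby
registry_external_url 'https://registry.domain.com/'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "registry.domain.com"
gitlab_rails['registry_port'] = "5000"
# Hypothetical correction: same host as registry_host, scheme spelled https://
gitlab_rails['registry_api_url'] = "https://registry.domain.com:5000"
gitlab_rails['registry_key_path'] = "/etc/gitlab/registry-certs/registry-auth.key"
gitlab_rails['registry_issuer'] = "gitlab-issuer"
```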

Filebeat not running using docker-compose: setting 'filebeat.prospectors' has been removed

I'm trying to launch Filebeat using docker-compose (I intend to add other services later on), but every time I run the docker-compose.yml file, the filebeat service ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I build and run the image directly with the docker command, the filebeat instance works just fine: it uses my config file and identifies the output.json file as well.
I'm currently using version 7.2 of Filebeat, and I know that filebeat.prospectors isn't being used. I also know for sure that this specific configuration isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is accessing another configuration file instead of the one copied into the container by the Dockerfile, but so far I haven't been able to figure out how, why, or how I can fix it...
Here's my docker-compose.yml file:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']

output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile file:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from the successful executions I get when running it as a single container.
The ERROR is expected because I don't have Logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the location of the config file and logs directory in the docker-compose file, using the volumes tag:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to run the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d

Filebeat 7.2 - Save logs from Docker containers to Logstash

I have a few Docker containers running on my ec2 instance.
I want to save logs from these containers directly to Logstash (Elastic Cloud).
When I installed Filebeat manually, everything worked all right.
I have downloaded it using
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
I have unpacked it, changed filebeat.yml configuration to
filebeat.inputs:
  - type: log
    enabled: true
    fields:
      application: "myapp"
    fields_under_root: true
    paths:
      - /var/lib/docker/containers/*/*.log

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
This worked just fine, I could find my logs after searching application: "myapp" in Kibana.
However, when I tried to run Filebeat from Docker, no success.
This is filebeat part of my docker-compose.yml
filebeat:
  image: docker.elastic.co/beats/filebeat:7.2.0
  container_name: filebeat
  volumes:
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock # needed for autodiscover
My previous filebeat.yml from the manual execution doesn't work, so I have tried many examples, but nothing worked. This is one example which I think should work, but it doesn't. The Docker container starts without problems, but somehow it can't read the log files.
filebeat.autodiscover:
  providers:
    - type: docker

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/lib/docker/containers/*/*.log
    json.keys_under_root: true
    json.add_error_key: true
    fields_under_root: true
    fields:
      application: "myapp"

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
I have also tried something like this
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        config:
          - type: docker
            containers.ids:
              - "*"

filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"

processors:
  - add_docker_metadata:

fields:
  application: "myapp"
fields_under_root: true

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
I have no clue what else to try; the Filebeat logs still show
"harvester":{"open_files":0,"running":0}}
I am 100% sure that the container logs are under /var/lib/docker/containers/*/*.log ... As I said, Filebeat worked when installed manually, but not as a Docker image.
Any suggestions?
Output log from Filebeat
2019-07-23T05:35:58.128Z INFO instance/beat.go:292 Setup Beat: filebeat; Version: 7.2.0
2019-07-23T05:35:58.128Z INFO [index-management] idxmgmt/std.go:178 Set output.elasticsearch.index to 'filebeat-7.2.0' as ILM is enabled.
2019-07-23T05:35:58.129Z INFO elasticsearch/client.go:166 Elasticsearch url: https://123456789.us-east-1.aws.found.io:443
2019-07-23T05:35:58.129Z INFO [publisher] pipeline/module.go:97 Beat name: e3e5163f622d
2019-07-23T05:35:58.136Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2019-07-23T05:35:58.142Z INFO instance/beat.go:421 filebeat start running.
2019-07-23T05:35:58.142Z INFO registrar/migrate.go:104 No registry home found. Create: /usr/share/filebeat/data/registry/filebeat
2019-07-23T05:35:58.142Z INFO registrar/migrate.go:112 Initialize registry meta file
2019-07-23T05:35:58.144Z INFO registrar/registrar.go:108 No registry file found under: /usr/share/filebeat/data/registry/filebeat/data.json. Creating a new registry file.
2019-07-23T05:35:58.146Z INFO registrar/registrar.go:145 Loading registrar data from /usr/share/filebeat/data/registry/filebeat/data.json
2019-07-23T05:35:58.146Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2019-07-23T05:35:58.146Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-07-23T05:35:58.146Z WARN [cfgwarn] docker/input.go:49 DEPRECATED: 'docker' input deprecated. Use 'container' input instead. Will be removed in version: 8.0.0
2019-07-23T05:35:58.150Z INFO log/input.go:148 Configured paths: [/var/lib/docker/containers/*/*.log]
2019-07-23T05:35:58.150Z INFO input/input.go:114 Starting input of type: docker; ID: 11882227825887812171
2019-07-23T05:35:58.150Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-07-23T05:35:58.150Z WARN [cfgwarn] docker/docker.go:57 BETA: The docker autodiscover is beta
2019-07-23T05:35:58.153Z INFO [autodiscover] autodiscover/autodiscover.go:105 Starting autodiscover manager
2019-07-23T05:36:28.144Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s
{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":17}},"total":{"ticks":40,"time":{"ms":52},"value":40},"user":{"ticks":30,"time":{"ms":35}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"4427db93-2943-4a8d-8c55-6a2e64f19555","uptime":{"ms":30111}},"memstats":{"gc_next":4194304,"memory_alloc":2118672,"memory_total":6463872,"rss":28352512},"runtime":{"goroutines":34}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":1},"load":{"1":0.31,"15":0.03,"5":0.09,"norm":{"1":0.31,"15":0.03,"5":0.09}}}}}}
Hmm, I don't see anything obvious in the Filebeat config to explain why it's not working; I have a very similar config running on a 6.x Filebeat.
I would suggest doing a docker inspect on the container and confirming that the mounts are there; maybe also check permissions, though errors would probably have shown up in the logs.
Also, could you try looking into using the container input? I believe it is the recommended method for container logs in 7.2+: https://www.elastic.co/guide/en/beats/filebeat/7.2/filebeat-input-container.html
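A minimal sketch of what that suggested container input could look like in filebeat.yml, reusing the path and fields from the question (untested; the cloud values are the question's placeholders):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
    fields:
      application: "myapp"
    fields_under_root: true

cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
```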
