Google Cloud can't find default credentials when trying to run docker image - docker

I am trying to run a Docker image through a Google Cloud proxy. Whenever I try to run the image with this command:
sudo docker run dc701c583cdb
Google Cloud keeps giving me this error:
Can't create logging client: google: could not find default
credentials. See
https://developers.google.com/accounts/docs/application-default-credentials
for more information.
I have tried updating my GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of my key file.
I have successfully logged in to Google Cloud using the gcloud auth application-default login command.
I've defined and associated my project in Google Cloud.
I am attempting this in order to run an open source project. I'm quite sure I created the Docker image correctly; I have a feeling the issue comes from not correctly connecting the existing project to my Google Cloud project.
Any advice would be greatly appreciated. I am using Docker 18.06.1-ce and Google Cloud SDK 219.0.1, running on a virtual Linux machine with Ubuntu 18.04.

When running the google/cloud-sdk container from Docker Hub on a newly created Ubuntu 18.04 instance, the container's gcloud automatically inherits the instance's user configuration. Give it a try: run that container and then run gcloud info inside it.
As such, I believe you might be doing something wrong. I recommend you take a look at the aforementioned container to see how that can be made to work.
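One thing worth checking: environment variables set on the host are not visible inside a container, so GOOGLE_APPLICATION_CREDENTIALS has to be passed in explicitly. A minimal sketch, assuming your service-account key lives at /path/to/key.json on the host (both paths are placeholders):
sudo docker run \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/key.json \
-v /path/to/key.json:/tmp/keys/key.json:ro \
dc701c583cdb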

Related

How to supply env file for a docker GCP CloudRun Service

I have a .env file for my docker-compose setup, and was able to run it using "docker-compose up".
Now I have pushed the image to the container registry and want to run it on Cloud Run.
How can I supply the various environment variables?
I did create secrets in Secret Manager, but how can I integrate the two, so that my container starts reading all the secrets it needs?
Note: my docker-compose setup is an app with a database, but I can split them into 2 containers if needed; they will still need secrets.
Edit: Added secret references.
EDIT:
I am unable to run my container.
If the env file has X=x, and the docker-compose environment has app.prop=${X},
should I create secret X or x?
Does Cloud Run use the Dockerfile or docker-compose? The image I pushed was built from docker-compose only. Sorry, I am getting confused (not assuming trivial things, as it is not working).
It is not possible to use docker-compose on Cloud Run, as it is designed for individual stateless containers. My suggestion is to create an image from your application service, upload the image to Google Container Registry in order to use it for your Cloud Run service, and connect it to Cloud SQL following the attached documentation. You can store database credentials with Secret Manager and pass them to your Cloud Run service as environment variables (check this documentation).
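For reference, secrets can be exposed to a Cloud Run service as environment variables at deploy time. A minimal sketch, assuming a hypothetical service name and image path, and a secret named db-password already created in Secret Manager:
gcloud run deploy my-service \
--image gcr.io/PROJECT_ID/my-app \
--set-env-vars APP_ENV=production \
--set-secrets DB_PASSWORD=db-password:latest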

wso2 api manager Docker image needs paid subscription

I am planning to use WSO2 API Manager for a client, and planning to use the API Manager Docker image for hosting it.
But it looks like using the API Manager Docker image requires a paid subscription once the trial period ends.
The link https://wso2.com/api-management/install/docker/get-started/ says:
"In order to use WSO2 product Docker images, you need an active WSO2 subscription."
Is that the case?
Can't I have the image running on the client's premises without any subscription?
You can build it yourself using their official Dockerfiles, which are hosted on GitHub, and then push it to your own registry.
The Dockerfiles for the other WSO2 products can be found under the same GitHub account.
The following steps describe how to build an image and run WSO2 API Manager, taken from this README.md file.
Check out this repository to your local machine using the following Git command.
git clone https://github.com/wso2/docker-apim.git
The local copy of the dockerfiles/ubuntu/apim directory will be referred to as AM_DOCKERFILE_HOME from this point onwards.
Add WSO2 API Manager distribution and MySQL connector to <AM_DOCKERFILE_HOME>/files.
Download WSO2 API Manager v2.6.0
distribution and extract it to <AM_DOCKERFILE_HOME>/files.
Download MySQL Connector/J
and copy that to <AM_DOCKERFILE_HOME>/files.
Once all of these are in place, it should look as follows:
<AM_DOCKERFILE_HOME>/files/wso2am-2.6.0/
<AM_DOCKERFILE_HOME>/files/mysql-connector-java-<version>-bin.jar
Please refer to the WSO2 Update Manager documentation to obtain the latest bug fixes and updates for the product.
Build the Docker image.
Navigate to <AM_DOCKERFILE_HOME> directory.
Execute the docker build command as shown below.
docker build -t wso2am:2.6.0 .
Running the Docker image.
docker run -it -p 9443:9443 wso2am:2.6.0
Here, only port 9443 (HTTPS servlet transport) has been mapped to a Docker host port.
You may map other container service ports, which have been exposed to Docker host ports, as desired.
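For example, to also expose the HTTP/HTTPS pass-through transports (8280 and 8243 in a stock 2.6.0 distribution; verify these ports against your own configuration):
docker run -it -p 9443:9443 -p 8280:8280 -p 8243:8243 wso2am:2.6.0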
Accessing management console.
To access the management console, use the docker host IP and port 9443.
https://<DOCKER_HOST>:9443/carbon
Here, <DOCKER_HOST> refers to the hostname or IP of the host machine on which the containers are spawned.
How to update configurations
Configuration files reside on the Docker host machine and can be volume-mounted into the container.
As an example, the steps required to change the port offset using carbon.xml are as follows.
Stop the API Manager container if it's already running. In the WSO2 API Manager 2.6.0 product distribution, the carbon.xml configuration file can be found at <DISTRIBUTION_HOME>/repository/conf. Copy the file to a suitable location on the host machine, referred to as <SOURCE_CONFIGS>/carbon.xml, and change the offset value under ports to 1.
Grant read permission to other users for <SOURCE_CONFIGS>/carbon.xml
chmod o+r <SOURCE_CONFIGS>/carbon.xml
Run the image by mounting the file into the container as follows.
docker run \
-p 9444:9444 \
--volume <SOURCE_CONFIGS>/carbon.xml:<TARGET_CONFIGS>/carbon.xml \
wso2am:2.6.0
Here, <TARGET_CONFIGS> refers to the /home/wso2carbon/wso2am-2.6.0/repository/conf directory of the container.
The steps above are for Ubuntu; for other distributions, check the corresponding directory in the same repository and read the README.md file inside.
You can build the docker images yourself. Follow the instructions given at https://github.com/wso2/docker-apim/tree/master/dockerfiles/ubuntu/apim#how-to-build-an-image-and-run.
The caveat is that you will not get any bug fixes if you do not have a subscription.

Google Cloud Composer - Deploying Docker Image

Definitely missing something, and could use some quick assistance!
Simply, how do you deploy a Docker image to an Airflow DAG for running jobs? Does anyone have a simple example of deploying a Google container and running it via Airflow/Composer?
You can use the Docker Operator, included in the core Airflow repository.
If pulling an image from a private registry, you'll need to set a connection config with the relevant credentials and pass it to the docker_conn_id param.
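On Cloud Composer, one way to create that connection is through the Airflow CLI via gcloud. A rough sketch, assuming an Airflow 1.10-era CLI, a hypothetical environment name, and a gcr.io registry authenticated with a service-account key (the flags differ between Airflow versions, so treat this as a starting point only):
gcloud composer environments run my-environment \
--location us-central1 \
connections -- --add \
--conn_id=gcr_docker \
--conn_type=docker \
--conn_host=gcr.io \
--conn_login=_json_key \
--conn_password="$(cat key.json)"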

How to run Arangodb on Openshift?

While different database images are available for OpenShift Container Platform users as explained here, others, including ArangoDB, are not yet available. I tried to install the official ArangoDB container from Docker Hub by running the following command via the OpenShift CLI:
oc new-app arangodb
but it does not run successfully, throwing the following error:
chown: changing ownership of '/var/lib/arangodb3': Operation not permitted
It is related to permissions. By default, OpenShift runs containers using an arbitrarily assigned user ID and not as root, as documented in the Support Arbitrary User IDs section. I tried to change the permissions of the directories and files that may be written to by processes in the image, so that they are owned by the root group and are read/writable by that group, in the Dockerfile:
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
This time it throws the following error:
FATAL cannot set uid 'arangodb': Operation not permitted
By looking at the script that initializes ArangoDB (the arangod script), ArangoDB runs as arangodb:arangodb, which should (or may!) be arangodb:0 in the case of OpenShift.
Now, I am really confused. I've read and searched a lot:
Getting any Docker image running in your own OpenShift cluster
User namespaces have arrived in Docker!
new-app fails on some official Docker images due to chown permissions
I also tried reverse engineering by looking at the mongodb image provided by OpenShift. But in the end, I got more confused.
I also do not want to ask the cluster administrators to allow the project to run as root using:
# oadm policy add-scc-to-user anyuid -z default
The more I read, the more confused I get. Has anybody done this before who can provide a Docker container I can run on OpenShift?
With ArangoDB 3.4 the Docker image has been migrated to an Alpine-based image, and its core now shouldn't invoke chown/chgrp anymore when invoked in the right way.
This should satisfy one of the requirements to get it working on OpenShift.
If you still have problems running ArangoDB on OpenShift, use the GitHub issue tracker with the specific problems you see. You may also want to propose changes to the Dockerfile, so it can be improved.
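A quick way to re-test once you're on a 3.4+ image; the image tag and the root-password variable here are assumptions based on the official Docker Hub image, so adjust them to your setup:
oc new-app arangodb/arangodb:3.4 --name=arangodb -e ARANGO_ROOT_PASSWORD=changeme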

How do I run Docker on Google Compute Engine?

What's the procedure for installing and running Docker on Google Compute Engine?
Until the recent GA release of Compute Engine, running Docker was not supported on GCE (due to kernel restrictions), but with the newly announced ability to deploy and use custom kernels, that restriction is no longer in place and Docker now works great on GCE.
Thanks to proppy, the instructions for running Docker on Google Compute Engine are now documented for you here: http://docs.docker.io/en/master/installation/google/. Enjoy!
They now have a VM image with Docker pre-installed.
$ gcloud compute instances create instance-name \
--image projects/google-containers/global/images/container-vm-v20140522 \
--zone us-central1-a \
--machine-type f1-micro
https://developers.google.com/compute/docs/containers/container_vms
A little late, but I wanted to add an answer with a more detailed workflow and links, since answers are still rather scattered:
1. Create a Docker image
a. Locally
b. Using Google Container Builder
2. Push the local Docker image to Google Container Registry
docker tag <current name>:<current tag> gcr.io/<project name>/<new name>
gcloud docker -- push gcr.io/<project name>/<new name>
UPDATE
If you have upgraded to a Docker client version above 18.03, gcloud docker commands are no longer supported. Instead of the push above, use:
docker push gcr.io/<project name>/<new name>
If you have issues after upgrading, see more here.
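For plain docker push to work against gcr.io, Docker first needs to be wired up with your Google credentials; as far as I know, this one-time setup does it:
gcloud auth configure-docker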
3. Create a compute instance.
This process actually hides a number of steps. It creates a virtual machine (VM) instance using Google Compute Engine, which uses a Google-provided, container-optimized OS image. The image includes Docker and additional software responsible for starting our Docker container. Our container image is then pulled from the Container Registry and run using docker run when the VM starts. Note: you still need to use docker attach even though the container is running. It's worth pointing out that only one container can be run per VM instance; use Kubernetes to deploy multiple containers per VM (the steps are similar). Find more details on all the options in the links at the bottom of this post.
gcloud beta compute instances create-with-container <desired instance name> \
--zone <google zone> \
--container-stdin \
--container-tty \
--container-image <google repository path>:<tag> \
--container-command <command (in quotes)> \
--service-account <e-mail>
Tip: you can view the available gcloud projects with gcloud projects list.
4. SSH into the compute instance.
gcloud beta compute ssh <instance name> \
--zone <zone>
5. Stop or delete the instance. If an instance is stopped, you will still be billed for resources such as static IPs and persistent disks. To avoid being billed at all, delete the instance.
a. Stop
gcloud compute instances stop <instance name>
b. Delete
gcloud compute instances delete <instance name>
Related Links:
More on deploying containers on VMs
More on zones
More create-with-container options
As of now, for just Docker, the Container-optimized OS is certainly the way to go:
gcloud compute images list --project=cos-cloud --no-standard-images
It comes with Docker and Kubernetes preinstalled. The only thing it lacks is the Cloud SDK command-line tools. (It also lacks python3, despite Google's announcement of the Python 2 sunset on 2020-01-01. Well, there are still 27 days to go...)
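To spin up an instance from that family, something like the following should work (the instance name is a placeholder):
gcloud compute instances create my-cos-vm \
--image-family cos-stable \
--image-project cos-cloud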
As an additional piece of information I wanted to share, I was searching for a standard image that would offer both docker and gcloud/gsutil preinstalled (and found none, oops). I do not think I'm alone in this boat, as gcloud is the one tool you can hardly go without on GCE¹.
My best find so far was the Ubuntu 18.04 image that comes with its own (non-Debian) package manager, snap. The image ships with the Cloud SDK preinstalled, and Docker installs literally in a snap: 11 seconds on an F1 instance in an initial test, about 6 seconds on an n1-standard-1. The only snag I hit was an error message saying the Docker authorization helper was not available; an attempt to add it with gcloud components install failed because the SDK was installed as a snap, too. However, the helper is actually there, only not in the PATH. The following got me both tools available in a single transient builder VM with the least amount of setup-script runtime, starting from the supported Ubuntu 18.04 LTS image²:
snap install docker
ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/bin
gcloud -q auth configure-docker
¹ I needed both for a Daisy workflow imaging a disk with artifacts from GS buckets and a couple of huge, 2 GB+ library images from the local gcr.io registry that were shared between the build (as cloud builder layers) and the runtime (where I had to create and extract containers to the newly built image). But that's beside the point; one may need both tools for a multitude of possible reasons.
² Use gcloud compute images list --uri | grep ubuntu-1804 to get the most current one.
Google's GitHub site now offers a GCE image including Docker: https://github.com/GoogleCloudPlatform/cloud-sdk-docker-image
It's as easy as:
creating a Compute Engine instance
curl https://get.docker.io | bash
Using docker-machine is another way to host your Google Compute Engine instance with Docker.
docker-machine create \
--driver google \
--google-project $PROJECT \
--google-zone asia-east1-c \
--google-machine-type f1-micro $YOUR_INSTANCE
If you want to log in to this machine, just use docker-machine ssh $YOUR_INSTANCE.
Refer to the docker-machine GCE driver documentation.
There is now improved support for containers on GCE:
Google Compute Engine is extending its support for Docker containers. This release is an Open Preview of a container-optimized OS image that includes Docker and an open source agent to manage containers. Below, you'll find links to interact with the community interested in Docker on Google, open source repositories, and examples to get started. We look forward to hearing your feedback and seeing what you build.
Note that this is currently (as of 27 May 2014) in Open Preview:
This is an Open Preview release of containers on Virtual Machines. As a result, we may make backward-incompatible changes and it is not covered by any SLA or deprecation policy. Customers should take this into account when using this Open Preview release.
Running Docker on a plain GCE instance was not supported; the instance goes down and you are not able to log in again. You can instead use the Docker image provided by GCE to create an instance.
If your Google Cloud virtual machine is based on Ubuntu, use the following command to install Docker:
sudo apt install docker.io
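After installing, it's common to add your user to the docker group so you can run Docker without sudo (log out and back in for this to take effect), and to verify the installation with the hello-world image:
sudo usermod -aG docker $USER
sudo docker run hello-world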
You may use this link: https://cloud.google.com/cloud-build/docs/quickstart-docker#top_of_page.
The link explains how to use Cloud Build to build a Docker image and push the image to Container Registry. You will first build the image using a Dockerfile, and then build the same image using Cloud Build's build configuration file.
It's better to set this up while creating the compute instance:
Go to the VM instances page.
Click the Create instance button to create a new instance.
Under the Container section, check Deploy container image.
Specify a container image name under Container image and configure options to run the container if desired. For example, you can specify gcr.io/cloud-marketplace/google/nginx1:1.12 for the container image.
Click Create.
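The same can be done from the command line; a sketch using the example image above (the instance name is a placeholder; older SDKs may need gcloud beta):
gcloud compute instances create-with-container my-nginx-vm \
--container-image gcr.io/cloud-marketplace/google/nginx1:1.12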
Installing Docker on GCP Compute Engine VMs:
This is the link to GCP documentation on the topic:
https://cloud.google.com/compute/docs/containers#installing
It links to the Docker install guide; follow the instructions depending on what type of Linux you have running in the VM.
