I'm trying to create a Superset instance in Rancher, but it did not prompt me to create an admin user and I can't get past the login. I already tried admin/admin, but it shows "login failed".
Is there any step that I'm skipping?
My .yml file is based on https://howchoo.com/kubernetes/how-to-install-apache-superset-on-a-gke-kubernetes-cluster.
You can use Helm to install Superset on a Kubernetes cluster.
Once you have updated the values.yaml file for the chart, you can apply those changes.
Helm chart: https://github.com/helm/charts/tree/master/stable/superset
In this chart, the init container sets up the user and database details:
initFile: |-
  /usr/local/bin/superset-init --username admin --firstname myfirstname --lastname mylastname --email admin@fab.org --password mypassword
  superset load_examples
  superset runserver
You can read the chart's documentation on GitHub; that is where the username and password are set.
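A minimal sketch of how that could look end to end, assuming Helm 3, the archived stable charts repo at https://charts.helm.sh/stable, and a release name of superset (these details are my assumptions, not from the answer above):

# add the (archived) stable repo that hosts the chart and refresh the index
helm repo add stable https://charts.helm.sh/stable
helm repo update
# install (or upgrade) the release with your customised values.yaml,
# which contains the initFile block shown above
helm upgrade --install superset stable/superset -f values.yaml

After the init step has run, you should be able to log in with the username and password you put into initFile.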
I did it without using Helm.
I found this documentation and was able to run Superset properly after executing the commands it describes.
I've created a Bitnami Dokuwiki Docker container on my Mac using:
curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-dokuwiki/master/docker-compose.yml > docker-compose.yml
docker-compose up -d
I can connect to it in my browser, and there is a login link, but I can't find any way to create a user.
Is there a default user, hopefully an admin user? Or do I need to create a user another way?
The default login is superuser / bitnami1.
This superuser account is set up when you initially create the container. You can change the username and password it uses by passing the environment variables DOKUWIKI_USERNAME and DOKUWIKI_PASSWORD to docker with -e.
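Since the question uses docker-compose rather than docker run, the same override would go into the environment section of the service; a rough sketch (the username and password values are just examples, and the ports/volumes are whatever Bitnami's compose file already defines):

services:
  dokuwiki:
    image: bitnami/dokuwiki:latest
    environment:
      # override the default superuser credentials on first container creation
      - DOKUWIKI_USERNAME=myadmin
      - DOKUWIKI_PASSWORD=mysecretpassword
    # ports and volumes as in Bitnami's docker-compose.yml

Note that these variables only take effect when the container (and its data volume) is created for the first time.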
I'm deploying a Flask app using Docker Machine on AWS. The credentials file is located in ~/.aws/:
[default]
aws_access_key_id=AKIAJ<NOT_REAL>7TUVKNORFB2A
aws_secret_access_key=M8G9Zei4B<NOT_REAL_EITHER>pcml1l7vzyedec8FkLWAYBSC7K
region=eu-west-2
Running it as follows:
docker-machine create --driver amazonec2 --amazonec2-open-port 5001 sandbox
According to the Docker docs this should work, but I'm getting this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
Before you ask: yes, I set permissions in such a way that Docker is allowed to access the credentials file.
What should I do?
The solution was found here: https://www.digitalocean.com/community/questions/ssh-not-available-when-creating-docker-machine-through-digitalocean
The problem was running Docker as a snap (from Ubuntu's repo) instead of the official build from Docker. As soon as I uninstalled the Docker snap and installed the official build, Docker Machine was able to find the credentials file immediately.
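For reference, the switch looks roughly like this on Ubuntu; this is my reconstruction following Docker's install documentation, not steps spelled out in the answer:

# remove the snap-packaged Docker
sudo snap remove docker
# add Docker's official apt repository (see docs.docker.com/engine/install/ubuntu)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# install the official packages
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io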
So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use Docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
Please see the image attached. I can list my images if I define "eu.gcr.io/..." as my project path. However, the machine seems to run on ".gcr.io" and is therefore unable to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a way to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure this is the issue.
Maybe it is an authentication issue with Docker?
A VM basically cannot be "on .gcr.io"; it may run in a non-European region/zone, but that shouldn't be a problem.
From the GCP access-control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
Check whether the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM but do have gcloud configured there, you can use gcloud as a Docker credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
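Put together for the image from the question, the flow on the VM would roughly be (the image path is the one from the question):

# register gcloud as a Docker credential helper for gcr.io / eu.gcr.io
gcloud auth configure-docker
# note: the helper config is written to the invoking user's ~/.docker/config.json,
# so if you normally run docker via sudo, run this step as root too
docker pull eu.gcr.io/my-project-name/example001
docker run eu.gcr.io/my-project-name/example001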
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the registry for use within my VM.
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either using the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the Google Cloud SDK, run gcloud init to initialize it: open a terminal, type gcloud init, and configure your profile. After that, install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries which you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper by running the command:
gcloud auth configure-docker
After that you can pull or push images to and from your registry using the gcloud command with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest
I am a bit confused about how I can authenticate the gcloud sdk on a docker container. Right now, my docker file includes the following:
# Install the Google Cloud SDK
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud init
However, I am confused about how I would authenticate. When I run gcloud auth application-default login on my machine, it opens a new tab in Chrome which prompts me to log in. How would I enter my credentials in the Docker container if it tries to open a new tab in Chrome inside the container?
You might consider using the deb packages when setting up your Docker container, as is done for the image on Docker Hub.
That said, you should NOT run gcloud init or gcloud auth application-default login or gcloud auth login... those are interactive commands which launch a browser. To provide credentials to the container, supply it with a service account key file.
You can download one from the Cloud Console (https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT) or create it with the gcloud command
gcloud iam service-accounts keys create
see the reference guide.
Either way, once you have the key file, ADD it to your container and run
gcloud auth activate-service-account --key-file=MY_KEY_FILE.json
You should now be set, but if you want to use it as Application Default Credentials (ADC), that is, in the context of other libraries and tools, you need to set the following environment variable to point to the key file:
export GOOGLE_APPLICATION_CREDENTIALS=/the/path/to/MY_KEY_FILE.json
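As a sketch, the relevant Dockerfile additions could look like this; the key file name my-key.json and the /secrets path are my assumptions, and the gcloud path is the one from the question's Dockerfile:

# copy the service account key into the image (mounting it at runtime is safer,
# since anyone with the image can read a baked-in key)
COPY my-key.json /secrets/my-key.json
# authenticate gcloud with the service account
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=/secrets/my-key.json
# expose the key as Application Default Credentials for other libraries and tools
ENV GOOGLE_APPLICATION_CREDENTIALS=/secrets/my-key.json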
One thing to point out here is that the gcloud tool itself does not use ADC, so if you later change your account to something else, for example via
gcloud config set core/account my_other_login@gmail.com
other tools and libraries will continue using the old account via the ADC key file, while gcloud will now use the different account.
You can map your local Google SDK credentials into the image. [Source].
Begin by signing in using:
$ gcloud auth application-default login
Then add the following to your docker-compose.yaml:
volumes:
  - ~/.config/gcloud:/root/.config/gcloud
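In context, a docker-compose.yaml using this could look roughly like the following; the service and image names are placeholders:

services:
  app:
    image: my-app-image:latest   # placeholder for the image that needs gcloud credentials
    volumes:
      # share the host's gcloud credentials (including ADC) with the container
      - ~/.config/gcloud:/root/.config/gcloud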
I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?
I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
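A slightly fuller sketch of that job, assuming the runner image has gcloud preinstalled and is already authenticated to GCP (the image name and the docker run step are my assumptions, not part of the original excerpt):

deploy:
  stage: deploy
  image: google/cloud-sdk:slim   # assumed: an image with gcloud preinstalled
  before_script:
    # log the remote host's Docker daemon in to the GitLab Registry using the ephemeral job token
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
  script:
    # pull and run the freshly built image on the remote host
    - gcloud compute ssh my-instance-name --command "docker run -d registry.gitlab.com/my-group/my-project:tag"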
The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (in this case simply being stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
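If you want to generate that auth value yourself instead of letting docker login write it, it is simply the base64 of "<username>:<password>", for example:

# -n avoids a trailing newline sneaking into the encoded value (credentials are examples)
echo -n "myusername:mypassword" | base64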
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
Regarding the SSH login per se: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and talk directly to the remote server's Docker API from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
We had the same "problem" with other hosting providers. Our solution is a custom script which runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever).
That way you can trigger the remote host to do the docker login and upgrade your service without granting SSH access via gitlab-ci.