How do I configure the Sqitch Snowflake Docker image?

I have pulled the Sqitch Docker repository from https://github.com/sqitchers/docker-sqitch and rebuilt it for Snowflake as per the instructions at https://github.com/sqitchers/docker-sqitch/tree/main/snowflake.
Currently, I am following this tutorial at https://sqitch.org/docs/manual/sqitchtutorial-snowflake/.
I haven't found any documentation on how to use the rebuilt Docker image for Sqitch. Could you please guide me?
I have tried myself to create a new project, but I am getting errors and warnings while using it.
I am using Sqitch from my rebuilt repository and getting the error below:
../docker-sqitch/sqitch deploy 'db:snowflake://myusername@snowflakeaccout/database?Driver=Snowflake;warehouse=my_warehouse_name'
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
DBD::ODBC 1.59 required to manage Snowflake
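No answer is recorded here, but the DBD::ODBC error suggests the deploy is running the stock sqitch/sqitch image (which does not ship Snowflake support) rather than the locally rebuilt Snowflake image. Below is a hedged sketch of what usually helps, assuming the wrapper script honours the SQITCH_IMAGE environment variable as described in the docker-sqitch README; the tag sqitch-snowflake:local and the project path are illustrative, not official names.

# Sketch only; tag name and paths are illustrative.
cd docker-sqitch/snowflake
# Per the repo instructions, the Snowflake ODBC driver and SnowSQL installer
# must be downloaded into this directory before building.
docker build -t sqitch-snowflake:local .

# From the project directory, point the wrapper script at the freshly built
# image instead of the default sqitch/sqitch image, which lacks DBD::ODBC.
cd /path/to/my-project        # illustrative path
export SQITCH_IMAGE=sqitch-snowflake:local
../docker-sqitch/sqitch deploy 'db:snowflake://myusername@snowflakeaccout/database?Driver=Snowflake;warehouse=my_warehouse_name'

Building the image locally on an Apple Silicon machine should also produce an arm64-native image and silence the platform warning, provided the driver packages referenced in the Dockerfile are available for that architecture.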

Related

docker: Error response from daemon: manifest for gcr.io/google_containers/hyperkube-amd64:v1.24.2 not found

Following this guide:
https://jamesdefabia.github.io/docs/getting-started-guides/docker/
and both
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
and
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)
fail at the docker run stage with a "not found" error, e.g.:
docker: Error response from daemon: manifest for gcr.io/google_containers/hyperkube-amd64:v1.24.2 not found: manifest unknown: Failed to fetch "v1.24.2" from request "/v2/google_containers/hyperkube-amd64/manifests/v1.24.2".
Any suggestions?
Check the repo of hyperkube and use an available tag:
https://console.cloud.google.com/gcr/images/google-containers/global/hyperkube-amd64
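As a hedged way to see which tags (if any) are still published there, the registry can be queried from an authenticated gcloud CLI; note the image has since been discontinued, so this may return nothing useful.

# Sketch: list tags still published for the hyperkube image.
gcloud container images list-tags gcr.io/google-containers/hyperkube-amd64 --limit=10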
As mentioned by @zerkms and @vladtkachuk, Google's hyperkube image is not available anymore. As stated in the documentation:
Hyperkube, an all-in-one binary for Kubernetes components, is now deprecated and will not be built by the Kubernetes project going forward. Several, older beta API versions are deprecated in 1.19 and will be removed in version 1.22. We will provide a follow-on update since this means 1.22 will likely end up being a breaking release for many end users.
Setting up a local Kubernetes cluster as your development environment is the recommended option, no matter your situation, because it makes for a safe and agile application-deployment process.
Fortunately, there are multiple platforms that you can try out to run Kubernetes locally, and they are all open source and available under the Apache 2.0 license.
Minikube's primary goal is to be the best tool for local Kubernetes application development, supporting all Kubernetes features that fit.
kind runs local Kubernetes clusters using Docker container "nodes."
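For completeness, here is a minimal, hedged sketch of standing up such a local cluster with either tool; the cluster name is arbitrary.

# kind: creates a local cluster whose "nodes" are Docker containers.
kind create cluster --name dev
kubectl cluster-info --context kind-dev

# minikube: a local cluster using the Docker driver.
minikube start --driver=docker
kubectl get nodes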

Looking for Docker Image octopusdeploy/octo for Apple M1

I'm running Docker Desktop for Mac (v4.4.2). Part of the company's deployment script needs to use Octopus, and the Docker image for it is not available for the Apple M1 platform, according to the error message:
Unable to find image 'octopusdeploy/octo:latest' locally
latest: Pulling from octopusdeploy/octo
/usr/bin/docker: no matching manifest for linux/arm64/v8 in the manifest list entries.
I've also tried to manually download the latest image and update the tag to match 'octopusdeploy/octo:latest', but encountered many other issues; most importantly, the build artifact was not what our DevOps engineer expected.
I understand there is no straight answer to this, as it's not officially supported. Any pointers in the right direction are greatly appreciated.
Here is the source code for the octo Docker images; they don't have any tags matching linux/arm64/v8, so you might have to build it yourself (Dockerfile) and publish it somewhere.
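Until an official arm64 tag exists, two workarounds are commonly tried; this is a hedged sketch, and the registry and tag names below are placeholders, not Octopus-provided images.

# Workaround 1: run the published amd64 image under emulation on the M1.
docker run --rm --platform linux/amd64 octopusdeploy/octo version

# Workaround 2: build an arm64 image yourself from the Dockerfile in the octo
# source repo and push it to a registry you control (names are placeholders).
docker buildx build --platform linux/arm64 -t registry.example.com/octo:arm64 --push .

The emulated image may be slow and, as the question notes, may not produce the artifact your pipeline expects, so test both paths.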

Kubernetes deployments in IBM Cloud fail for me

I'm trying to deploy an app in a Kubernetes cluster, following these instructions:
https://cloud.ibm.com/docs/containers?topic=containers-cs_apps_tutorial#cs_apps_tutorial
Then I build the image, following the instructions, with ibmcloud cr build -t registry..bluemix.net//hello-world:1 .
The output looks good, except for a security warning:
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
But as this was just a test I did not worry.
At the next stage, running this command per the instructions
kubectl run hello-world-deployment --image=registry..bluemix.net//hello-world:1
I get the following error
error: failed to discover supported resources: Get http://localhost:8080/apis/apps/v1?timeout=32s: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
As you can see in the message, it looks like kubectl is trying to connect to my local PC rather than to IBM Cloud. What have I missed?
As @N Fritze mentioned in the comments, in order to access the Kubernetes cluster you may need to set the KUBECONFIG environment variable, which points to the kubeconfig file(s) providing the API server address and authentication details.
You can find more information about managing the Kubernetes Service in the official IBM Cloud documentation. As the issue has already been solved, this answer is composed for future readers' research.
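As a hedged sketch of that fix (exact subcommands differ between ibmcloud CLI versions, and the cluster name is a placeholder):

# Log in and download the cluster's kubeconfig so kubectl stops defaulting
# to localhost:8080 (cluster name is a placeholder).
ibmcloud login
ibmcloud ks cluster config --cluster my_cluster

# Older CLI versions instead print an export line to run manually, e.g.:
#   ibmcloud cs cluster-config my_cluster
#   export KUBECONFIG=/path/printed/by/that/command.yml

# Verify kubectl now targets the IBM Cloud cluster rather than localhost.
kubectl config current-context
kubectl get nodes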

Trouble using Visual Studio to publish Docker image to GitLab container registry

I'm using VS 2017 15.7.3 with Docker enabled for an ASP.NET Core 2.1 project that has been committed to a private GitLab server I'm running on site. I turned on the registry features of GitLab, and I can connect and log in to the server from my box.
So, with regard to VS2017, I created a new solution (Solution1) that shares its name with my GitLab repo. I have configured the publish settings for this project with my credentials and the push location of https://mygitlabserver.example.com:4567/solution1/solution1/.
The profile type I selected is Container Registry->Custom. I'm trying to push out an image for the first project in the solution (Project1). I have not modified the VS project properties Package tab settings, so the package ID remains the same as it started - Project1.
When I publish, I get a generic error in a tmp file in %LOCALAPPDATA%\Temp whose contents are as follows:
System.Exception: Running the docker.exe push command failed.
at Microsoft.VisualStudio.Web.Azure.Publish.ContainerRegistryProfileVisual.<PostPublishAsync>d__24.MoveNext()
I confirmed I can tag this image with Docker on the command line with the above URL and push it out successfully. I'm not sure if VS 2017 has some other settings I need to use, but the documentation is light for working with a private server; Microsoft seems to be pushing Azure, and I'm finding very little documentation outside of that.
Can anyone give any guidance or the location of more detailed logfiles?
Location of log files:
C:\Users\User\AppData\Local\Temp
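Since the command-line push works for the asker, a hedged sketch of the equivalent docker CLI steps against the registry URL from the question can serve as a fallback, or as a baseline to compare with what Visual Studio runs; the image name and tag below are illustrative.

# Log in to the private GitLab registry with the same credentials used in VS.
docker login mygitlabserver.example.com:4567

# Tag the locally built image for the project's registry path and push it
# ("project1" and "latest" are illustrative).
docker tag project1 mygitlabserver.example.com:4567/solution1/solution1/project1:latest
docker push mygitlabserver.example.com:4567/solution1/solution1/project1:latest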

gcloud ERROR: (gcloud.app.deploy) Error Response: [3]

After running gcloud app deploy, I am getting the following error while trying to deploy my application to a container using gcloud and the Google Cloud API.
Step 5 : CMD npm start
---> Running in cb3b29e90183
---> 296d95a6ac52
Removing intermediate container cb3b29e90183
Successfully built 296d95a6ac52
PUSH
The push refers to a repository [us.gcr.io/<ID_PROJECT>/appengine/default.20160906t225412] (len: 1)
296d95a6ac52: Preparing
296d95a6ac52: Pushing
296d95a6ac52: Pushed
d6a5f487b829: Preparing
d6a5f487b829: Pushing
d6a5f487b829: Pushed
b71be5d9c21a: Preparing
b71be5d9c21a: Pushing
b71be5d9c21a: Pushed
75d5a58c171b: Preparing
75d5a58c171b: Pushing
75d5a58c171b: Pushed
9ff051f37ab2: Image already exists
363507e00b22: Image already exists
818131a74c7c: Image already exists
cc57a274adf5: Image already exists
c7c7a273971f: Image already exists
b21b3e3bc691: Image already exists
latest: digest:sha256:70668fb04a90187c890eb6ba3119b6af46838a5518f7a96e8996f1d5fda6dc52 size: 33255
DONE
Updating service [default]...failed.
ERROR: (gcloud.app.deploy) Error Response: [3] Docker image us.gcr.io/<PROJET_ID>/appengine/default.20160906t225412:latest was either not found, or you do not have access to it.
I just recently updated my Google Cloud SDK from version 122.0.0 to version 124.0.0. I am running this on my local machine (macOS); this is the complete version list:
gcloud --version
Google Cloud SDK 124.0.0
bq 2.0.24
bq-nix 2.0.24
core 2016.08.29
core-nix 2016.08.29
gcloud
gsutil 4.21
gsutil-nix 4.21
I found the error and the solution: apparently the gcloud SDK upgrade from 122.0.0 to 124.0.0 corrupted my project ID in the gcloud portal.
I tried to switch back from 124.0.0 to 122.0.0 unsuccessfully, and also to upgrade again to 126.0.0, but finally I found that creating a new project and migrating all my containers did the trick, and once there everything worked correctly.
I have to say, gcloud is a very useful and powerful tool, but an error like this, combined with finding out how little support is actually provided by Google, makes me think about moving back to AWS.
App Engine no longer supports Docker V1 format images for new deployments. It looks like the error message doesn't really convey this.
Here are the docs on how to tell which docker format an image is in:
https://cloud.google.com/container-registry/docs/ui
We'll work on getting the error message fixed. Sorry about the hassle.
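As a hedged, CLI-side alternative to the Console check described in those docs, the manifest of a pushed image can be inspected directly; a schemaVersion of 1 indicates the old Docker V1 format that App Engine no longer accepts. The image path below is a placeholder.

# Sketch: inspect the stored manifest of a pushed image (path is a placeholder).
docker manifest inspect us.gcr.io/my-project/appengine/default.20160906t225412:latest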
