I can see that my Docker images are available in Google Container Registry after running the following commands:
docker tag sutechnology/transcode eu.gcr.io/supereye/transcode
docker push eu.gcr.io/supereye/transcode
gcloud auth configure-docker
docker push eu.gcr.io/supereye/transcode
Although I can see the images, I haven't been able to use one while creating a new instance in Google Compute Engine. How can I use an image that I see in Container Registry while creating a new VM instance? Here is my full config:
machine_type = "zones/europe-west2-b/machineTypes/n1-standard-1"
disk_type = "zones/europe-west2-b/diskTypes/pd-standard"

config = {
    'name': name,
    'machineType': machine_type,

    # Specify the boot disk and the image to use as a source.
    'disks': [
        {
            'boot': True,
            'autoDelete': True,
            'initializeParams': {
                'sourceImage': source_disk_image,
            }
        }
    ],

    # Specify a network interface with NAT to access the public
    # internet.
    'networkInterfaces': [{
        'network': 'global/networks/default',
        'accessConfigs': [
            {'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}
        ]
    }],

    # Allow the instance to access cloud storage and logging.
    'serviceAccounts': [{
        'email': 'default',
        'scopes': [
            'https://www.googleapis.com/auth/devstorage.read_write',
            'https://www.googleapis.com/auth/logging.write'
        ]
    }],

    # Metadata is readable from the instance and allows you to
    # pass configuration from deployment scripts to instances.
    # Each entry must be its own {'key': ..., 'value': ...} item.
    'metadata': {
        'items': [
            {
                # Startup script is automatically executed by the
                # instance upon startup.
                'key': 'startup-script',
                'value': startup_script
            },
            {
                'key': 'VIDEOPATH',
                'value': videopath
            }
        ]
    }
}
And the instance creation call below:
compute.instances().insert(
    project=project,
    zone=zone,
    body=config).execute()
Google Container Registry (GCR) stores Docker images, which are used to create containers, NOT Compute Engine machines.
For Compute Engine, use either the public images or custom images / snapshots of existing machines.
Reference: https://cloud.google.com/container-registry
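In other words, sourceImage in the config above must point to a Compute Engine image, not a GCR image. As a minimal sketch (using the same compute client as in the question), you could resolve a public Debian image like this:

# Resolve a public Debian image to use as the boot disk source.
image_response = compute.images().getFromFamily(
    project='debian-cloud', family='debian-11').execute()
source_disk_image = image_response['selfLink']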
Hope this helps
I am trying to use CDK for Terraform (CDKTF) to build and push a Docker image to AWS ECR. I have decided to use the Terraform Docker provider for it. Here is my code:
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
// Imports assumed for the prebuilt CDKTF provider packages.
import * as aws from "@cdktf/provider-aws";
import * as docker from "@cdktf/provider-docker";

class MyStack extends TerraformStack {
constructor(scope: Construct, name: string) {
super(scope, name);
const usProvider = new aws.AwsProvider(this, "us-provider", {
region: "us-east-1",
defaultTags: {
tags: {
Project: "CV",
Name: "CV",
},
},
});
const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
provider: usProvider,
repositoryName: "cv",
forceDestroy: true,
});
const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
this,
"auth-token",
{
provider: usProvider,
}
);
new docker.DockerProvider(this, "docker-provider", {
registryAuth: [
{
address: repo.repositoryUri,
username: authToken.userName,
password: authToken.password,
},
],
});
new docker.RegistryImage(this, "image-on-public-ecr", {
name: repo.repositoryUri,
buildAttribute: {
context: __dirname,
},
});
}
}
But during deployment, I get this error: Unable to create image, image not found: unable to get digest: Got bad response from registry: 400 Bad Request. Yet it still manages to push to the registry; I can see it in the AWS console.
I can't seem to find any mistake in my code, and I don't understand the error. I hope you can help.
The Terraform execution model is built so that Terraform first gathers all the information it needs about the current state of your infrastructure, and then, in a second step, calculates the plan of changes that need to be applied to bring the current state to the one you described through your configuration.
This poses a problem here: the provider you declare uses information that is only available once the plan is being put into action; there is no repo URL / auth token before the ECR repo has been created.
There are different ways to solve this problem. You can make use of the cross-stack references / multi-stack feature and split the ECR repo creation into a separate TerraformStack that is deployed beforehand; you can then pass a value from that stack into your other stack and use it to configure the provider (see the sketch below).
Another way to solve this is by building and pushing your image outside of the Terraform provider, through the null provider with a local provisioner, as is done in the docker-aws-ecs E2E example.
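For the multi-stack option, here is a minimal sketch (class and property names are illustrative; CDKTF wires up the cross-stack reference automatically when a value from one stack is used in another):

class RepoStack extends TerraformStack {
  public readonly repositoryUri: string;

  constructor(scope: Construct, name: string) {
    super(scope, name);
    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
    });
    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      provider: usProvider,
      repositoryName: "cv",
    });
    this.repositoryUri = repo.repositoryUri;
  }
}

class ImageStack extends TerraformStack {
  constructor(scope: Construct, name: string, repositoryUri: string) {
    super(scope, name);
    // Configure the DockerProvider and RegistryImage here using
    // repositoryUri, as in the original stack.
  }
}

const app = new App();
const repoStack = new RepoStack(app, "ecr-repo");
new ImageStack(app, "docker-image", repoStack.repositoryUri);
app.synth();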
Hi, I am trying to get the hostname into my Azure module by reading it from the environment variables. The module is written in C# and .NET Core 3.1:
var deviceId = Environment.GetEnvironmentVariable("HOST_HOSTNAME");
I have tried to add the variable in the deployment template:
"createOptions": {
"Cmd": [
"-e HOST_HOSTNAME=(hostname)"
]
}
The result is
deviceId == null
Can you try using "env" in your deployment template? You should add it at the same level as the "settings" JSON object. Something like:
"env": {
"HOS_HOSTNAME": {
"value": "<valuehere>"
}
}
You can also do this in the Azure Portal. See, for example, how it is done in the tutorial Give modules access to a device's local storage.
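For placement, a minimal sketch of a module definition in the deployment manifest (the module name and image are placeholders):

"modules": {
    "MyModule": {
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
            "image": "<registry>/mymodule:1.0",
            "createOptions": "{}"
        },
        "env": {
            "HOST_HOSTNAME": {
                "value": "my-edge-device"
            }
        }
    }
}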
I have created a Docker image that I'd like to run in GCP using Terraform. I have tagged and pushed the image to GCR like this:
docker tag carlspring/hello-spring-boot:1.0 eu.gcr.io/${PROJECT_ID}/carlspring/hello-spring-boot:1.0
docker push eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0
I have the following code:
provider "google" {
// Set this to CREDENTIALS
credentials = file("credentials.json")
// Set this to PROJECT_ID
project = "carlspring"
region = "europe-west2"
zone = "europe-west2-a"
}
resource "google_compute_network" "vpc_network" {
name = "carlspring-terraform-network"
}
resource "google_compute_instance" "docker" {
count = 1
name = "tf-docker-${count.index}"
machine_type = "f1-micro"
zone = var.zone
tags = ["docker-node"]
boot_disk {
initialize_params {
image = "carlspring/hello-spring-boot"
}
}
}
After doing:
terraform init
terraform plan
terraform apply
I get:
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_instance.docker[0]: Creating...
Error: Error resolving image name 'carlspring/hello-spring-boot': Could not find image or family carlspring/hello-spring-boot
on main.tf line 18, in resource "google_compute_instance" "docker":
18: resource "google_compute_instance" "docker" {
The examples I've seen online either use K8s, or start a VM image running Linux with Docker installed, in which an image is then run. Can't I simply use my own container to start the instance?
google_compute_instance expects a VM image, not a Docker image. If you want to deploy Docker images to GCP, the easiest option is Cloud Run. To use it with Terraform you need the google_cloud_run_service resource.
For example:
resource "google_cloud_run_service" "default" {
name = "cloudrun-srv"
location = "us-central1"
template {
spec {
containers {
image = "eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0"
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
Note that I used eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0 and not carlspring/hello-spring-boot. You must use the fully qualified name, as the short one points to Docker Hub, where your image will not be found.
Terraform can be used to create a GCP VM instance with a Docker image.
Here is an example: https://github.com/terraform-providers/terraform-provider-google/issues/1022#issuecomment-475383003
Hope this helps.
The following line indicates the image does not exist:
Error: Error resolving image name 'carlspring/hello-spring-boot': Could not find image or family carlspring/hello-spring-boot
You should tag the image as eu.gcr.io/carlspring/hello-spring-boot:1.0.
Alternatively, change the image reference in the boot_disk block to eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0.
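For the retagging option, based on the commands in the question, that would look like:

docker tag carlspring/hello-spring-boot:1.0 eu.gcr.io/carlspring/hello-spring-boot:1.0
docker push eu.gcr.io/carlspring/hello-spring-boot:1.0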
You can do this using a VM in GCE whose operating system is based on a Google-supplied Container-Optimized OS image. You can then use this Terraform module, which facilitates fetching and running a container image.
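As a minimal sketch (assuming the module in question is terraform-google-modules/container-vm/google; names, version, and network settings are illustrative):

module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "~> 3.0"

  container = {
    image = "eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0"
  }
}

resource "google_compute_instance" "docker" {
  name         = "tf-docker-0"
  machine_type = "f1-micro"
  zone         = var.zone

  boot_disk {
    initialize_params {
      // Container-Optimized OS image selected by the module.
      image = module.gce-container.source_image
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  metadata = {
    // Tells Container-Optimized OS which container to run.
    gce-container-declaration = module.gce-container.metadata_value
  }
}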
For reasons of precise control over our builds, we are using the new BuildKit (moby/buildkit) directly, i.e. without a Dockerfile.
We are creating a script like this example: https://github.com/moby/buildkit/blob/master/examples/buildkit0/buildkit.go
While it works (great), documentation is lacking.
How do I add an entrypoint (i.e. the default command to run)?
How do I set the default workdir for when the container starts?
How do I set which ports to expose?
The LLB layer in BuildKit does not deal with images; it's one specific exporter for the build result. If you use a frontend like Dockerfile, it prepares an image config for the exporter as well as invoking the LLB build. If you are using LLB directly, you need to create an image config yourself as well. If you use buildctl, this would look something like:
buildctl build --output 'type=docker,name=test,"containerimage.config={""Config"":{""Cmd"":[""bash""]}}"'
In the Go API you would pass this with ExportEntry (https://godoc.org/github.com/moby/buildkit/client#ExportEntry) attributes. The image format is documented at https://github.com/moby/moby/blob/master/image/spec/v1.2.md.
Note that you don't need to fill RootFS in the image config; BuildKit will fill this in automatically. More background info: https://github.com/moby/buildkit/issues/1041
Tõnis' answer actually helped me solve it. I'm also posting an example here of how to do it.
// Config and ImgConfig are user-defined structs mirroring the image
// config format from the moby image spec linked above.
config := Config{
    Cmd:        cmd,
    WorkingDir: "/opt/company/bin",
    ExposedPorts: map[string]struct{}{
        "80/tcp":   {},
        "8232/tcp": {},
    },
    Env: []string{"PATH=/opt/company/bin:" + system.DefaultPathEnv},
}
imgConfig := ImgConfig{
    Config: config,
}
configStr, _ := json.Marshal(imgConfig)

// The marshalled config is passed via the Exports field of client.SolveOpt:
Exports: []client.ExportEntry{
    {
        Type: "image",
        Attrs: map[string]string{
            "name":                  manifest.Tag,
            "push":                  "true",
            "push-by-digest":        "false",
            "registry.insecure":     strconv.FormatBool(insecureRegistry),
            "containerimage.config": string(configStr),
        },
    },
},
I am trying to push a Docker container image to the Google Container Registry:
$ sudo gcloud docker push gcr.io/<my_project>/node
The push refers to a repository [gcr.io/<my_project>/node] (len: 1)
24c43271ae0b: Image already exists
70a0868daa56: Image already exists
1d86f57ee56d: Image already exists
a46b473f6491: Image already exists
a1ac8265769f: Image already exists
290bb858e1c3: Image already exists
d6fc4b5b4917: Image already exists
3842411e5c4c: Image already exists
7a01cc5f27b1: Image already exists
dbacfa057b30: Image already exists
latest: digest: sha256:02be2e66ad2fe022f433e228aa43f32d969433402820035ac8826880dbc325e4 size: 17236
Received unexpected HTTP status: 500 Internal Server Error
I cannot make the command more verbose. Neither with:
$ sudo gcloud docker push gcr.io/<my_project>/node --verbosity info
nor with this command, which should work:
$ sudo gcloud docker --log-level=info push gcr.io/sigma-cairn-99810/node
usage: gcloud docker [EXTRA_ARGS ...] [optional flags]
ERROR: (gcloud.docker) unrecognized arguments: --log-level=info
according to the documentation (see EXTRA_ARGS), and --log-level=info is a valid docker option:
SYNOPSIS
gcloud docker [EXTRA_ARGS ...] [--authorize-only, -a]
[--docker-host DOCKER_HOST]
[--server SERVER,[SERVER,...], -s SERVER,[SERVER,...]; default="gcr.io,us.gcr.io,eu.gcr.io,asia.gcr.io,b.gcr.io,bucket.gcr.io,appengine.gcr.io"]
[GLOBAL-FLAG ...]
...
POSITIONAL ARGUMENTS
[EXTRA_ARGS ...]
Arguments to pass to docker.
I am using the default service account that GCP installs on my container-vm machine instance. I have also given it Owner permissions to all resources in <my_project>.
UPDATE:
Running sudo gsutil ls -bL gs://artifacts.<my_project>.appspot.com I get:
gs://artifacts.<my_project>.appspot.com/ :
Storage class: STANDARD
Location constraint: US
Versioning enabled: None
Logging configuration: None
Website configuration: None
CORS configuration: None
Lifecycle configuration: None
ACL: []
Default ACL: []
If I do the same thing after authenticating with my non-service account, I get both ACL and Default ACL:
ACL:
[
{
"entity": "project-owners-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "owners"
},
"role": "OWNER"
},
{
"entity": "project-editors-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "editors"
},
"role": "OWNER"
},
{
"entity": "project-viewers-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "viewers"
},
"role": "READER"
}
]
Default ACL:
[
{
"entity": "project-owners-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "owners"
},
"role": "OWNER"
},
{
"entity": "project-editors-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "editors"
},
"role": "OWNER"
},
{
"entity": "project-viewers-262451203973",
"projectTeam": {
"projectNumber": "262451203973",
"team": "viewers"
},
"role": "READER"
}
]
Can you run sudo gsutil ls -bL gs://artifacts.<my_project>.appspot.com and see if you have access to the GCS bucket? This will verify the storage permissions for the Docker image.
While I think you should have permission by being added as owner, this will verify whether you do or not.
As for the EXTRA_ARGS, I think --log-level="info" is only valid for the docker daemon command; docker push does not recognize --log-level="info".
UPDATE
From reviewing the logs again: you are pushing a mostly existing image, as the "Image already exists" log entries indicate. It failed on the first new write step. That suggests the problem is likely that the instance you started with originally had only a read-only scope.
Can you please run this command and share the output:
curl -H "Metadata-Flavor:Google" http://metadata/computeMetadata/v1/instance/service-accounts/default/scopes
We are looking for the scope https://www.googleapis.com/auth/devstorage.read_write.
What might have happened is that the instance was not originally created with this scope, and as the scopes on an instance cannot be modified, it remains only able to read.
If this is the case, the solution would likely be creating a new instance.
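For example, a sketch with gcloud (the instance name and zone are illustrative; storage-rw is the alias for the devstorage.read_write scope):

gcloud compute instances create my-new-instance \
    --zone us-central1-a \
    --scopes storage-rw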
We will file a bug to ensure better messaging is provided in situations like this.