Using docker buildkit's go client, how do I add an entrypoint?

For precise control of our builds, we are using the new BuildKit (moby/buildkit) directly, i.e. without a Dockerfile.
We are creating a script like this example: https://github.com/moby/buildkit/blob/master/examples/buildkit0/buildkit.go
While it works (great), documentation is lacking.
How do I add an entrypoint? (i.e. default command to run)
and
How do I set the default workdir for when the container starts?
and
How do I set which ports to expose?

The LLB layer in BuildKit does not deal with images; an image is just one specific exporter for the build result. If you use a frontend like Dockerfile, it prepares an image config for the exporter as well as invoking the LLB build. If you are using LLB directly, you need to create an image config yourself as well. With buildctl this would look something like buildctl build --output 'type=docker,name=test,"containerimage.config={""Config"":{""Cmd"":[""bash""]}}"'
In the Go API you pass this via the ExportEntry attributes (https://godoc.org/github.com/moby/buildkit/client#ExportEntry). The image config format is documented at https://github.com/moby/moby/blob/master/image/spec/v1.2.md .
Note that you don't need to fill in RootFS in the image config; BuildKit will fill it in automatically. More background info: https://github.com/moby/buildkit/issues/1041
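For reference, a config that answers all three questions above (default command, working directory, exposed ports) might look like the following; the field names come from the image spec linked above, and the values here are only examples:
{
  "Config": {
    "Entrypoint": ["/opt/app/server"],
    "Cmd": ["--default-flag"],
    "WorkingDir": "/opt/app",
    "ExposedPorts": {
      "80/tcp": {}
    }
  }
}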

Tõnis' answer helped me solve it. I'm posting an example here to show how to do it.
// Assumed imports: encoding/json, strconv, client "github.com/moby/buildkit/client",
// and "github.com/docker/docker/pkg/system" (for DefaultPathEnv).

// Config mirrors the "Config" object of the image spec
// (entrypoint, default command, working directory, ports, environment).
type Config struct {
	Entrypoint   []string
	Cmd          []string
	WorkingDir   string
	ExposedPorts map[string]struct{}
	Env          []string
}

// ImgConfig is the top-level image config wrapper.
type ImgConfig struct {
	Config Config
}

config := Config{
	Cmd:        cmd,
	WorkingDir: "/opt/company/bin",
	ExposedPorts: map[string]struct{}{
		"80/tcp":   {},
		"8232/tcp": {},
	},
	Env: []string{"PATH=/opt/company/bin:" + system.DefaultPathEnv},
}
imgConfig := ImgConfig{
	Config: config,
}
configStr, err := json.Marshal(imgConfig)
if err != nil {
	return err
}

// Pass the marshalled config to the image exporter.
solveOpt := client.SolveOpt{
	Exports: []client.ExportEntry{
		{
			Type: "image",
			Attrs: map[string]string{
				"name":                  manifest.Tag,
				"push":                  "true",
				"push-by-digest":        "false",
				"registry.insecure":     strconv.FormatBool(insecureRegistry),
				"containerimage.config": string(configStr),
			},
		},
	},
}
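For completeness, a minimal sketch of how the SolveOpt above is then used; c is assumed to be a *client.Client from client.New, def the marshalled LLB definition, and ch a progress channel (all placeholder names):
// Hypothetical wiring of the pieces above into a build call.
resp, err := c.Solve(ctx, def, solveOpt, ch)
if err != nil {
	return err
}
_ = resp // resp carries the exporter response, e.g. the image digest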

Related

SwaggerUI Docker support for two api files (locally)

I'm trying to create a Swagger UI configuration to show several of my APIs. They are not hosted publicly; the definition files are on my local file system. I'm using Swagger UI with Docker, and I run it with the following command:
docker run -p 8080:8080 -v $(pwd)/_build:/spec swaggerapi/swagger-ui
The _build directory is where I have my YAML spec files. This is the swagger-config.yaml config file:
urls:
  - /spec/openapi2.yaml
  - /spec/openapi.yaml
plugins:
  - topbar
I have also tried:
urls:
  - url: /spec/openapi2.yaml
    name: API1
  - url: /spec/openapi.yaml
    name: API2
plugins:
  - topbar
After running it, all I see is the default example API of Swagger UI (screenshot omitted), so I suppose there's an error in my configuration. I have tried several things, but they have not worked, and I can't find any good documentation about the swagger-config.yaml configuration file.
Any idea how to make it work with several APIs?
According to the comments in the Swagger UI issue tracker, the Docker version needs the config file in the JSON format rather than YAML.
Try using this swagger-config.json:
{
  "urls": [
    {
      "url": "/spec/openapi2.yaml",
      "name": "API1"
    },
    {
      "url": "/spec/openapi.yaml",
      "name": "API2"
    }
  ],
  "plugins": [
    "topbar"
  ]
}
Also add -e CONFIG_URL=/path/to/swagger-config.json to the docker run command.
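Combined with the volume mount from the question (and assuming swagger-config.json is also placed in _build, so it is reachable as /spec/swagger-config.json), the full command might look like:
docker run -p 8080:8080 -e CONFIG_URL=/spec/swagger-config.json -v $(pwd)/_build:/spec swaggerapi/swagger-ui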

How to control Docker Image Version using Terraform while pushing images to ECR

I am new to Terraform so please be kind.
During the build process, Terraform pushes the Docker image to AWS ECR with a new name on every build.
Because the image name is different, we need to create a new task definition for each new build.
Is there a way to handle this issue in Terraform?
Any help is appreciated.
If you are OK with replacing the task definition each time with the new image, you can simply update the image name used in the task definition and Terraform will handle the update for you.
If you need to generate a new task definition each time and leave the old ones in place, then read on.
If you do not need to keep the task definition in the Terraform state, then you could remove it after deployment so that next time a new one will be created.
The state rm command removes a resource from the state:
terraform state rm aws_ecs_task_definition.service
If you do need to keep each task definition in the Terraform state, you can use the for_each meta-argument to generate a separate task definition resource for each container definition file.
As an example, you could save the container definitions of each task definition to a JSON file within a folder. Each file looks something like this:
[
  {
    "name": "a-service",
    "image": "my-image",
    "cpu": 10,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
Use the fileset function to list files in the folder and generate a new resource for each file using for_each:
resource "aws_ecs_task_definition" "service" {
  family                = "service"
  for_each              = fileset(path.module, "task-definitions/*.json")
  container_definitions = file("${path.module}/${each.key}")
}
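Note that as written, every generated resource shares the same family. If each file should map to its own family, a variant that derives the family from the file name (a sketch using Terraform's basename and trimsuffix functions) could look like:
resource "aws_ecs_task_definition" "service" {
  for_each              = fileset(path.module, "task-definitions/*.json")
  # e.g. "task-definitions/web.json" -> family "web"
  family                = trimsuffix(basename(each.key), ".json")
  container_definitions = file("${path.module}/${each.key}")
}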

How can I deploy VM images from Container Registry using Google Cloud

I can see that my VM images are available in Google Container Registry after executing the following commands:
docker tag sutechnology/transcode eu.gcr.io/supereye/transcode
docker push eu.gcr.io/supereye/transcode
gcloud auth configure-docker
docker push eu.gcr.io/supereye/transcode
Although I can see the images, I haven't been able to use them while creating a new instance in Google Compute Engine. How can I use an image that I see in Container Registry while creating a new VM instance? Below is my full config:
machine_type = "zones/europe-west2-b/machineTypes/n1-standard-1"
disk_type = "zones/europe-west2-b/diskTypes/pd-standard"
config = {
    'name': name,
    'machineType': machine_type,
    # Specify the boot disk and the image to use as a source.
    'disks': [
        {
            'boot': True,
            'autoDelete': True,
            'initializeParams': {
                'sourceImage': source_disk_image,
            }
        }
    ],
    # Specify a network interface with NAT to access the public
    # internet.
    'networkInterfaces': [{
        'network': 'global/networks/default',
        'accessConfigs': [
            {'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}
        ]
    }],
    # Allow the instance to access cloud storage and logging.
    'serviceAccounts': [{
        'email': 'default',
        'scopes': [
            'https://www.googleapis.com/auth/devstorage.read_write',
            'https://www.googleapis.com/auth/logging.write'
        ]
    }],
    # Metadata is readable from the instance and allows you to
    # pass configuration from deployment scripts to instances.
    'metadata': {
        'items': [{
            # Startup script is automatically executed by the
            # instance upon startup.
            'key': 'startup-script',
            'value': startup_script,
            'VIDEOPATH': videopath
        }]
    }
}
And the instance creation call below:
compute.instances().insert(
    project=project,
    zone=zone,
    body=config).execute()
Google Container Registry (GCR) is used for storing Docker images, which are then used to create containers, NOT Compute Engine machines.
For Compute Engine, use either the public images or custom images/snapshots of existing machines.
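For example, once you have a custom image (created from a disk or snapshot), a VM can be created from it like this (image and project names here are hypothetical):
gcloud compute instances create my-vm \
    --zone=europe-west2-b \
    --image=my-custom-image \
    --image-project=supereye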
For reference: https://cloud.google.com/container-registry
Hope this helps

How do you properly pass a command to a container when using "azure-arm-containerinstance" from the Azure Node SDK?

Just looking for some guidance on how to properly invoke a command when a container starts, when creating it via the azure-arm-containerinstance package. There is very little documentation on this specific part, and I wasn't able to find any examples on the internet.
return client.containerGroups
  .beginCreateOrUpdate(process.env.AZURE_RESOURCE_GROUP, containerInstanceName, {
    tags: ['server'],
    location: process.env.AZURE_INSTANCE_LOCATION,
    containers: [
      {
        image: process.env.CONTAINER_IMAGE,
        name: containerInstanceName,
        command: ["./some-executable","?Type=Fall?"],
        ports: [
          {
            port: 1111,
            protocol: 'UDP',
          },
        ],
        resources: {
          requests: {
            cpu: Number(process.env.INSTANCE_CPU),
            memoryInGB: Number(process.env.INSTANCE_MEMORY),
          },
        },
      },
    ],
    imageRegistryCredentials: [
      {
        server: process.env.CONTAINER_REGISTRY_SERVER,
        username: process.env.CONTAINER_REGISTRY_USERNAME,
        password: process.env.CONTAINER_REGISTRY_PASSWORD,
      },
    ],
  });
Specifically, is the part below correct? Just an array of strings? Are there any good examples anywhere? (I tried both Google and Bing.) Is this the equivalent of Docker's CMD ["command","argument"]?
command: ["./some-executable","?Type=Fall?"],
Most of what you have done is right, but there are a few points to pay attention to.
One is that the command property overwrites the CMD setting in the Dockerfile. So if the command does not keep running, the container will end up in a terminated state when the command finishes executing.
Second, the command property is an array of strings, and they execute like a shell script. So I suggest setting it like this:
command: ['/bin/bash','-c','echo $PATH'],
It's best to keep the first two strings unchanged and only change what comes after.
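Applied to the executable from the question, that pattern would look something like this (assuming the binary path is valid inside the image):
command: ['/bin/bash', '-c', './some-executable "?Type=Fall?"'],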
If you have any more questions please let me know. Or if it's helpful you can accept it :-)

How to get started with dockerode

I am planning on running my app in Docker. I want to dynamically start, stop, build, and run commands on Docker containers. I found a tool named dockerode; here is the project repo. The project has docs, but I am not understanding them very well. I would like to understand a few things. Here is how to create and start a container:
docker.createContainer({Image: 'ubuntu', Cmd: ['/bin/bash'], name: 'ubuntu-test'}, function (err, container) {
  container.start(function (err, data) {
    //...
  });
});
Is it possible to do RUN apt-get update like when we use a Dockerfile, or ADD /path/host /path/docker during the build? And how do I move my app into the container after the build?
Let's look at this code:
//tty:true
docker.createContainer({ /*...*/ Tty: true /*...*/ }, function(err, container) {
  /* ... */
  container.attach({stream: true, stdout: true, stderr: true}, function (err, stream) {
    stream.pipe(process.stdout);
  });
  /* ... */
});
How can I know which params I can put in { /*...*/ Tty: true /*...*/ }?
Has anyone tried this package? Please help me get started.
Dockerode is just a Node.js wrapper for the Docker API. You can find all the params you can use for each command in the API docs.
For example, docker.createContainer will call POST /containers/create (docs here: https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/create-a-container).
Check the files in the lib folder of the dockerode repo to see which API command is wrapped by each dockerode method.
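For example, since createContainer maps to POST /containers/create, any field from that endpoint's request body can go into the options object. A sketch (the extra fields are illustrative, taken from the API docs):
var Docker = require('dockerode');
var docker = new Docker({ socketPath: '/var/run/docker.sock' });

docker.createContainer({
  Image: 'ubuntu',
  Cmd: ['/bin/bash'],
  name: 'ubuntu-test',
  Tty: true,
  Env: ['FOO=bar'],                // "Env" from the API request body
  ExposedPorts: { '80/tcp': {} },  // "ExposedPorts" from the API request body
  HostConfig: { PortBindings: { '80/tcp': [{ HostPort: '8080' }] } }
}, function (err, container) {
  if (err) return console.error(err);
  container.start(function (err, data) {
    // container is now running with the settings above
  });
});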
