I am trying to create a container with a memory limit using the Docker Go client - https://godoc.org/github.com/docker/docker/client#Client.ContainerCreate
However, I cannot figure out where to add these parameters in the function. The equivalent CLI command is:
docker run -m 250m --name test repo/tag
In the Docker API this comes under the HostConfig structure, but in the Go docs I saw the option under Resources, which is used in HostConfig - https://godoc.org/github.com/docker/docker/api/types/container#HostConfig
I am calling it like this:
import (
    ....
    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/events"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/client"
    "github.com/docker/go-connections/nat"
)
...
resp, err1 := cli.ContainerCreate(ctx,
    &container.Config{
        User:         strconv.Itoa(os.Getuid()), // avoid permission issues
        Image:        cfg.ImageName,
        AttachStdin:  false,
        AttachStdout: true,
        AttachStderr: true,
        Tty:          true,
        ExposedPorts: exposedPorts,
        Labels:       labels,
        Env:          envVars,
    },
    &container.HostConfig{
        Binds:       binds,
        NetworkMode: container.NetworkMode(cfg.Network),
        PortBindings: nat.PortMap{
            "1880": []nat.PortBinding{
                {
                    HostIP:   "",
                    HostPort: "1880",
                },
            },
        },
        AutoRemove: true,
        Memory:     262144000, // this does not work
    },
    nil, // &network.NetworkingConfig{},
    name,
)
This fails with: unknown field 'Memory' in struct literal of type container.HostConfig. Since the Resources field has no field name, only a type, I have no idea how to add resources to HostConfig. Any help is appreciated - I am a newbie at Go and am trying to tweak an open-source project I was using, redzilla, to fit my system's resource constraints.
You can define the memory limit using the Resources field of the HostConfig struct:
Resources: container.Resources{ Memory:3e+7 }
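The reason the literal in the question fails is that Resources is an embedded field of HostConfig: Go promotes an embedded struct's fields for reads and writes, but a composite literal must name the embedded type explicitly. A minimal sketch with simplified stand-in types (not the real docker structs) illustrates the rule:

```go
package main

import "fmt"

// Simplified stand-ins for the real docker types, only to illustrate
// Go's embedded-field rules; these are NOT the actual API structs.
type Resources struct {
	Memory int64 // memory limit in bytes
}

type HostConfig struct {
	AutoRemove bool
	Resources  // embedded: fields are promoted for access, not in literals
}

func main() {
	// In a composite literal the embedded type must be named explicitly:
	hc := HostConfig{
		AutoRemove: true,
		Resources:  Resources{Memory: 262144000}, // ~250 MiB, as in -m 250m
	}
	// ...but after construction the promoted field is reachable directly:
	fmt.Println(hc.Memory) // prints 262144000
}
```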
Related
I am trying to run Docker inside a container using the Go Docker Engine API. I can mount a folder from the host system into the container, but the mount shows up as an empty directory inside the inner container. Please help me out if there is an alternative for this. I am starting my container using the following command:
docker run --rm -v C:\Users\user\source\repos\game:/app/myrepo -v /var/run/docker.sock:/var/run/docker.sock testimage
Attached is the piece of code.
Go Docker SDK code to start container
resp, err := cli.ContainerCreate(ctx, &container.Config{
    Image: "hello-image",
    Cmd:   []string{"ls"}, // the actual cmd would look different
    Tty:   true,
}, &container.HostConfig{
    Binds: []string{
        "/app/myrepo:/myrepo",
    },
}, nil, nil, containername)
if err != nil {
    panic(err)
}
Updated code using mounts with an absolute path:
resp, err := cli.ContainerCreate(ctx, &container.Config{
    Image: "hello-image",
    Cmd:   []string{"ls"}, // the actual cmd would look different
    Tty:   true,
}, &container.HostConfig{
    Mounts: []mount.Mount{
        {
            Type:   mount.TypeBind,
            Source: "/app/myrepo",
            Target: "/myrepo",
        },
    },
}, nil, nil, containername)
if err != nil {
    panic(err)
}
As discussed in the comments, the OP is running an app in a container. The app connects to the Docker daemon on the host (via the shared /var/run/docker.sock) and attempts to create a container. The issue is that the request includes a mount point whose source, /app/myrepo, is a path that is valid within the container but not on the host.
To understand why this is an issue, consider how the API request is made. Your code generates a JSON-formatted request, which will include something like this:
...
"HostConfig": {
    ...
    "Mounts": [
        {
            "Source": "/app/myrepo",
            "Destination": "/myrepo"
        }
    ]
}
It's important to note that the Source path is passed as a string, and the Docker daemon interprets it in the host's context (e.g. the Windows box). When it attempts to locate the requested path (/app/myrepo) it will not find it, because that path does not exist on the host. To correct this you need to send a valid host path, e.g.
Mounts: []mount.Mount{
    {
        Type:   mount.TypeBind,
        Source: "c:/Users/user/source/repos/game",
        Target: "/myrepo",
    },
}
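Because the host-side path cannot be derived from inside the container, one option is to pass it in explicitly (for instance through an environment variable set at docker run time) and normalize the separators before building the mount. A small sketch; the HOST_REPO_PATH variable name is made up for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// hostPath normalizes a Windows-style path so it can be sent as the
// Source of a bind mount in the API request that the host daemon reads.
func hostPath(p string) string {
	return strings.ReplaceAll(p, `\`, "/")
}

func main() {
	// In practice the value could arrive via something like
	// `docker run -e HOST_REPO_PATH=%CD% ...` (variable name made up here).
	p := hostPath(`C:\Users\user\source\repos\game`)
	fmt.Println(p) // C:/Users/user/source/repos/game
}
```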
One note of caution: accessing the Docker API this way (bind-mounting /var/run/docker.sock) is convenient, but if someone gains access to the container, they also gain full control of all containers (because they can access the Docker API). You may want to consider putting a proxy in front of the socket (for example).
I'm trying to create a docker network for my application written in Golang.
I'm aware that I can use the NetworkCreate function, but I'm not sure how to specify the network options.
In the regular terminal console, I can just create the network with
docker network create -d bridge --subnet=174.3.12.5/16 mynet
But how do I use NetworkCreate() as the equivalent of this network creation?
package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/network"
    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        fmt.Println(err)
    }
    newnetwork := types.NetworkCreate{IPAM: &network.IPAM{
        Driver: "default",
        Config: []network.IPAMConfig{{
            Subnet: "174.3.12.5/16",
        }},
    }}
    res, err := cli.NetworkCreate(context.Background(), "test", newnetwork)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(res)
}
This is a minimal runnable example; the driver name is "default".
You can specify the network options via the NetworkCreate options struct.
If you want to convert the command docker network create -d bridge --subnet=174.3.12.5/16 mynet to its Go equivalent, it'll look something like this:
networkResponse, err := client.NetworkCreate(context.Background(), "mynet", types.NetworkCreate{
    Driver: "bridge",
    IPAM: &network.IPAM{
        Config: []network.IPAMConfig{{
            Subnet: "174.3.12.5/16",
        }},
    },
})
Alternatively, you can shell out to the docker CLI and use exec.Command(...).CombinedOutput().
Background
I need to set up a docker-compose file with a RabbitMQ service and my application. This RabbitMQ service needs three things to work properly:
a user named "user1" with full permissions
a vhost named "vhost1"
inside "vhost1", I need an exchange called "Pizza"
What we tried
To achieve this we tried creating a folder in our project called rabbitmq with the following files:
definitions.json
{
    "rabbit_version": "3.6.6",
    "users": [
        {
            "name": "user1",
            "password_hash": "pass1",
            "hashing_algorithm": "rabbit_password_hashing_sha256",
            "tags": "administrator"
        }
    ],
    "vhosts": [
        {
            "name": "\/vhost1"
        }
    ],
    "permissions": [
        {
            "user": "user1",
            "vhost": "\/vhost1",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
        }
    ],
    "parameters": [],
    "policies": [],
    "queues": [],
    "exchanges": [],
    "bindings": []
}
rabbitmq.conf
loopback_users.guest = false
listeners.tcp.default = 5672
We mount this folder using the volumes directive in the following docker-compose file:
version: '3'
services:
  rabbit:
    image: rabbitmq:management
    ports:
      - "8080:15672"
      - "5672:5672"
    volumes:
      - ${PWD}/rabbitmq:/etc/rabbitmq
Problems
We are facing two issues at the moment:
we are not creating the exchange called "Pizza".
we cannot access the RabbitMQ management UI via localhost:8080 even though we specify the mapping of this port in our docker-compose file.
Questions
How do we define an exchange for a vhost in the definitions.json file? (Where can I read about this?)
Why can't we access the UI? What are we doing wrong?
Solutions
1. Exchange creation
The first issue is easily solvable. The reason the exchange is not being created is that the "exchanges" field in the definitions.json file is empty. To fix this, add an exchange object to that list:
"exchanges": [
    {
        "name": "Pizza",
        "vhost": "\/vhost1",
        "type": "fanout",
        "durable": true,
        "auto_delete": false,
        "internal": false,
        "arguments": {}
    }
],
One can read more about this in this blog post:
https://devops.datenkollektiv.de/creating-a-custom-rabbitmq-container-with-preconfigured-queues.html
2. Accessing the management UI
Here there were several problems with the configuration. First, I was overwriting the contents of the original /etc/rabbitmq folder in the container with those of my local folder. This was not intended; the fix for this issue can be found here:
Unknown variable "management.load_definitions" in rabbitmq rabbit.conf file
The second issue was in the rabbitmq.conf file. We were missing the field that tells the application to load our definitions file. The following is the correct version of the rabbitmq.conf file:
loopback_users.guest = false
listeners.tcp.default = 5672
management.load_definitions = /etc/rabbitmq/definitions.json
The third (and final) issue was with the user's password, specifically the password_hash field, which must be produced by a specific algorithm and encoded in a specific format. More about this can be read in RabbitMQ's official documentation:
https://www.rabbitmq.com/passwords.html
To skip the pain of dealing with the salting, hashing and encoding, if all you want is to test a setup for integration purposes like we did, just go with the password test12 from the example:
"users": [
    {
        "name": "user1",
        "password_hash": "kI3GCqW5JLMJa4iX1lo7X4D6XbYqlLgxIs30+P6tENUV2POR",
        "hashing_algorithm": "rabbit_password_hashing_sha256",
        "tags": "administrator"
    }
]
If, however, it is really important for you to know how to generate user passwords that RabbitMQ will accept, here is a bash script, created through the blood and tears of a colleague:
#!/bin/bash
PWD_HEX=$(echo -n $1 | xxd -p)
SALT="908D C60A"
HEX="$SALT $PWD_HEX"
SHA256=$(echo -n $HEX | xxd -r -p | sha256sum)
# This is the pwd to be inserted in your rabbit load_definitions file
echo "908D C60A $SHA256" | xxd -r -p | base64
Usage: ./my_script userpass1
Conclusion
And with all this out of the way one should be able to create users, vhosts and exchanges while also having access to the management UI, all via a docker image.
I couldn't find such a specific command anywhere on the internet, so I kindly ask for your help with this one :)
Context
I have defined a podTemplate with a few containers, using the containerTemplate method:
ubuntu:trusty (14.04 LTS)
postgres:9.6
and finally, wurstmeister/kafka:latest
Doing some Groovy coding in the Pipeline, I install several dependencies into my ubuntu:trusty container, such as the latest Git, Golang 1.9, etc., and I also check out my project from GitHub.
After all the dependencies are dealt with, I manage to compile, run migrations (which means Postgres is up and running and my app is connected to it), and spin up my app just fine, until it complains that Kafka is not running because it couldn't connect to any broker.
Debugging sessions
After some debugging sessions I ran ps aux in each and every container to make sure all the services I needed were running in their respective containers:
container(postgres) {
    sh 'ps aux' // Shows Postgres, as expected
}
container(linux) {
    sh 'ps aux | grep post' // Does not show Postgres, as expected
    sh 'ps aux | grep kafka' // Does not show Kafka, as expected
}
container(kafka) {
    sh 'ps aux' // Does NOT show any Kafka running
}
I have also set the KAFKA_ADVERTISED_HOST_NAME variable to 127.0.0.1, as explained in the image docs, without success, with the following code:
containerTemplate(
    name: kafka,
    image: 'wurstmeister/kafka:latest',
    ttyEnabled: true,
    command: 'cat',
    envVars: [
        envVar(key: 'KAFKA_ADVERTISED_HOST_NAME', value: '127.0.0.1'),
        envVar(key: 'KAFKA_AUTO_CREATE_TOPICS_ENABLE', value: 'true'),
    ]
)
Questions
The image documentation (https://hub.docker.com/r/wurstmeister/kafka/) is explicit about starting a Kafka cluster with docker-compose up -d.
1) How do I actually do that with this Kubernetes plugin + Docker + Groovy + Pipeline combo in Jenkins?
2) Do I actually need to do that? The Postgres image docs (https://hub.docker.com/_/postgres/) also mention running the instance with docker run, but I didn't need to do that at all, which makes me think containerTemplate is probably doing it automatically. So why is it not doing the same for the Kafka container?
Thanks!
So... the problem is with this image and the way Kubernetes works with it.
Kafka does not start because you override the image's CMD with command: 'cat', which means start-kafka.sh never runs.
Because of that, I suggest using a different image. The template below worked for me.
containerTemplate(
    name: 'kafka',
    image: 'quay.io/jamftest/now-kafka-all-in-one:1.1.0.B',
    resourceRequestMemory: '500Mi',
    ttyEnabled: true,
    ports: [
        portMapping(name: 'zookeeper', containerPort: 2181, hostPort: 2181),
        portMapping(name: 'kafka', containerPort: 9092, hostPort: 9092)
    ],
    command: 'supervisord -n',
    envVars: [
        containerEnvVar(key: 'ADVERTISED_HOST', value: 'localhost')
    ]
),
I'm using Rancher on top of Kubernetes to create our test/dev environment. First of all, it's a great tool and I'm amazed at how it simplifies the management of such environments.
That said, I have an issue (which probably stems from my lack of knowledge of Rancher). I am trying to automate the deployment via Jenkins, and as we will have several stacks in our test environment, I want to dynamically update the load balancer instances from Jenkins with the Rancher CLI, adding routes for each new environment.
At the moment, I just try to run this command :
rancher --url http://myrancher_server:8080 --access-key <key> --secret-key <secret> --env dev-test stack create kubernetes-ingress-lbs -r loadbalancer-rancher-service.yml
My docker-compose.yml file looks like the following:
version: '2'
services:
  frontend:
    image: 172.19.51.97:5000/frontend
  dev-test-lb:
    image: rancher/load-balancer-service
    ports:
      - 82:8086
    links:
      - frontend:frontend
My rancher compose file is like this:
version: '2'
services:
  dev-test-lb:
    scale: 4
    lb_config:
      port_rules:
        - source_port: 82
          path: /products
          target_port: 8086
          service: products
        - source_port: 82
          path: /
          target_port: 4201
          service: frontend
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
Now when I execute this I get the following response:
Bad response statusCode [422]. Status [422 status code 422]. Body: [code=NotUnique, fieldName=name, baseType=error] from [http://myrancher_server:8080/v2-beta/projects/1a21/stacks]
Obviously I can't edit an existing stack with a service that already exists. Do you know if this is the best-practice way to do it? I checked the CLI help, and I only see the "create" action for "rancher stack", so I'm wondering whether an update is possible.
My rancher server is v1.5.10 and all my rancher agents and Kubernetes drivers are up-to-date.
Thanks a lot for your help fellows :)
OK, just for information, I found that this is possible via the Rancher REST API.
Check the following link: http://docs.rancher.com/rancher/v1.2/en/api/v2-beta/api-resources/service/
I didn't find it at first because my Googling was all about the Rancher CLI. As the CLI is still in beta, it cannot do everything the REST API can.
Basically, just send an update resource query :
PUT rancherserver/v2-beta/projects/1a12/services/
{
    "description": "Loadbalancer for our test env",
    "lbConfig": {
        "portRules": [
            {
                "hostname": "",
                "protocol": "http",
                "sourcePort": "80",
                "targetPort": "4200",
                "path": "/"
            }
        ]
    },
    "name": "kubernetes-ingress-lbs"
}