I have a Kubernetes v1.18.3 cluster, and the workers run a Docker v19.03.6 daemon.
I'm looking for a way to automatically inject the HTTP_PROXY and HTTPS_PROXY environment variables into every container that Kubernetes creates.
I tried creating a ~/.docker/config.json file, but it didn't work.
What would be the proper way to accomplish this?
I was interested in your case and even reproduced it with the same Docker and Kubernetes versions.
I used the official Configure Docker to use a proxy server documentation to set the proxy for Docker in ~/.docker/config.json:
Configure the Docker client
On the Docker client, create or edit the file ~/.docker/config.json in the home directory of the user which starts containers. Add JSON such as the following, substituting the type of proxy with httpsProxy or ftpProxy if necessary, and substituting the address and port of the proxy server. You can configure multiple proxy servers at the same time.
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://127.0.0.1:3001",
      "httpsProxy": "http://127.0.0.1:3001",
      "noProxy": "*.test.example.com,.example2.com"
    }
  }
}
Save the file.
When you create or start new containers, the environment variables are set automatically within the container.
My config was:
{
  "proxies": {
    "default": {
      "httpProxy": "http://user:pass@my.proxy.domain.com",
      "httpsProxy": "http://user:pass@my.proxy.domain.com"
    }
  }
}
So basically, after setting the above in ~/.docker/config.json, the proxy server will be used automatically when starting brand-new containers.
In my case that worked; I can verify it by using the CLI and creating, e.g., a busybox container:
$ docker container run --rm busybox env
HTTP_PROXY=http://user:pass@my.proxy.domain.com
http_proxy=http://user:pass@my.proxy.domain.com
HTTPS_PROXY=http://user:pass@my.proxy.domain.com
https_proxy=http://user:pass@my.proxy.domain.com
HOME=/root
Please keep in mind that there can be issues with the next part:
On the Docker client, create or edit the file ~/.docker/config.json in
the home directory of the user which starts containers.
Be careful which user you use, and make sure its HOME environment variable is set to the correct home directory.
Links to an almost identical GitHub issue and ways to resolve it:
1) https://github.com/kubernetes/kubernetes/issues/45487#issuecomment-312042754
I dug into this a bit and the issue for me was that the HOME environment variable was empty when kubelet was launched through a systemd unit. While it's not documented this way, loading the configuration from /root/.docker/config.json or /root/.dockercfg requires that HOME=/root.
Setting User=root in the [Service] declaration fixed it up for me.
2) https://github.com/kubernetes/kubernetes/issues/45487#issuecomment-378116386
3) https://github.com/kubernetes/kubernetes/issues/45487#issuecomment-464516064 (partial info)
(3) vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add User=root
The file looks something like this:
[Service]
User=root
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file the user can use for overrides of the kubelet args; KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
(4) Reload and restart kubelet
systemctl daemon-reload
systemctl restart kubelet
In my case everything worked fine from scratch, so read carefully and check the points I highlighted. Most probably you have a very tiny problem or typo, because this works as expected.
I hope my investigation will help you.
This sounds right and logical, but it does not work for me.
I am running Kubernetes version v1.18.6; data below. This approach fails, but setting the same HTTP proxy as an environment variable for dockerd works (see the sketch after the logs below).
admin@str-s6000-acs-13:/etc/systemd/system/kubelet.service.d$ sudo cat /proc/$(pidof kubelet)/environ | tr '\0' '\n'
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOME=/root
LOGNAME=root
USER=root
SHELL=/bin/sh
INVOCATION_ID=fd58e75d7be64758b01e2d8d63fdf7f6
JOURNAL_STREAM=9:11737012
KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
admin@str-s6000-acs-13:/etc/systemd/system/kubelet.service.d$ sudo cat /root/.docker/config.json
{
  "proxies":
  {
    "default":
    {
      "httpProxy": "http://20.72.201.152:3128",
      "httpsProxy": "http://20.72.201.152:3128"
    }
  }
}
admin@str-s6000-acs-13:/etc/systemd/system/kubelet.service.d$
Apr 12 01:14:49 str-s6000-acs-13 dockerd[27360]: time="2021-04-12T01:14:49.630282308Z" level=error msg="Handler for POST /images/create returned error: Get https://sonicanalytics.azurecr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
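For completeness, here is a minimal sketch of the dockerd-side workaround that did work, assuming the same proxy address as in the config above; it follows the standard systemd drop-in approach from Docker's documentation. Note that this sets the proxy for the daemon itself (e.g. for image pulls); it does not inject the variables into containers:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://20.72.201.152:3128"
Environment="HTTPS_PROXY=http://20.72.201.152:3128"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker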
I have one Docker container that is running pyppeteer.
It has a memory leak, so it stops within 24 hours.
I need some auto-healing system; I think Kubernetes can do that. No load balancing, just one instance, one container. Is it suitable?
++++
Finally, I selected docker-py and manage the container using containers.run and containers.prune.
It is working for me.
If your container has no state, and you know it is going to run out of memory every 24 hours, I would say a cronjob is the best option (a sketch follows below).
You can do what you want on k8s, but that's overkill. An entire k8s cluster for one container doesn't sound right to me.
Another thing is if you have more apps or containers: k8s can run lots of services independently of one another, so you would not be wasting resources.
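As a sketch of the cronjob idea (the container name and schedule are hypothetical), a crontab entry on the host could simply restart the container before it runs out of memory:

# hypothetical crontab entry: restart the leaky container once a day at 04:00
0 4 * * * docker restart pyppeteer-container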
There are several options for your use case, one of them being Kubernetes. But you should consider the overhead in resources and the maintenance burden of running Kubernetes just for a single container.
I suggest you explore having systemd restart your container in case it crashes, or just simply use Docker itself: with the --restart=always parameter, the Docker daemon ensures the container is running. Note: even after restarting the system, Docker will ensure the container is restarted in that case. So --restart=on-failure might be a better option.
See this page for more information: https://docs.docker.com/config/containers/start-containers-automatically/#use-a-restart-policy
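For illustration, a minimal sketch of the restart-policy approach; the image and container names are placeholders:

docker run -d --name pyppeteer-app --restart=on-failure my-pyppeteer-image

With on-failure, Docker restarts the container whenever it exits with a non-zero status, but leaves it stopped if you stop it deliberately.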
I didn't work with Puppeteer, but after a short research I found this:
By default, Docker runs a container with a /dev/shm shared memory space 64MB. This is typically too small for Chrome and will cause Chrome to crash when rendering large pages. To fix, run the container with docker run --shm-size=1gb to increase the size of /dev/shm. Since Chrome 65, this is no longer necessary. Instead, launch the browser with the --disable-dev-shm-usage flag:
const browser = await puppeteer.launch({
  args: ['--disable-dev-shm-usage']
});
This will write shared memory files into /tmp instead of /dev/shm.
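For Chrome versions before 65, a minimal sketch of the --shm-size alternative mentioned above (the image name is a placeholder):

docker run --shm-size=1gb my-pyppeteer-image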
Hope this helps.
It is possible to use the Kubernetes auto-healing feature without creating a full-scale Kubernetes cluster. It's only required to install compatible versions of the docker and kubelet packages. It may also be helpful to install the kubeadm package.
Kubelet is the component of Kubernetes that takes care of keeping Pods in a healthy condition. It runs as a systemd service and creates static Pods using YAML manifest files from /etc/kubernetes/manifests (the location is configurable).
All other application troubleshooting can be done using regular docker commands:
docker ps ...
docker inspect ...
docker logs ...
docker exec ...
docker attach ...
docker cp ...
A good example of this approach from the official documentation is running external etcd cluster instances. (Note: Kubelet configuration part may not work as expected with recent kubelet versions. I've put more details on that below.)
Also, kubelet can take care of Pod resource usage by applying the limits part of a Pod spec: you can set a memory limit, and when the container reaches this limit, kubelet will restart it.
Kubelet can also health-check the application in the Pod if a liveness probe section is included in the Pod spec. If you can create a command that checks your application's condition more precisely, kubelet can restart the container when the command returns a non-zero exit code several times in a row (configurable). A combined example is sketched below.
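For illustration, a minimal static Pod manifest combining a memory limit and an exec liveness probe; the image name and health-check command are hypothetical. Dropped into /etc/kubernetes/manifests, it is picked up by kubelet automatically:

# /etc/kubernetes/manifests/pyppeteer.yaml -- hypothetical example
apiVersion: v1
kind: Pod
metadata:
  name: pyppeteer
spec:
  restartPolicy: Always
  containers:
  - name: pyppeteer
    image: my-pyppeteer-image        # placeholder image
    resources:
      limits:
        memory: "512Mi"              # container is OOM-killed and restarted at this limit
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "/opt/healthcheck.sh"]   # hypothetical check command
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 3            # restart after 3 consecutive failures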
If kubelet refuses to start, you can check kubelet logs using the following command:
journalctl -e -u kubelet
Kubelet can refuse to start mostly because of:
absence of the kubelet initial config. It can be generated using a kubeadm command: kubeadm init phase kubelet-start. (You may also need to generate the CA certificate /etc/kubernetes/pki/ca.crt mentioned in the kubelet config. It can be done using kubeadm: kubeadm init phase certs ca.)
different cgroup driver settings for docker and kubelet. Kubelet works fine with both the cgroupfs and systemd drivers. Docker's default driver is cgroupfs, and kubeadm also generates a kubelet config with the cgroupfs driver, so just ensure that they are the same. Docker's cgroup driver can be specified in the service definition file, e.g. /lib/systemd/system/docker.service or /usr/lib/systemd/system/docker.service:
# add the cgroup driver option to ExecStart:
ExecStart=/usr/bin/dockerd \
  --exec-opt native.cgroupdriver=systemd   # or cgroupfs
To change the cgroup driver for a recent kubelet version, it's required to specify a kubelet config file for the service, because the corresponding command-line options are deprecated now:
sed -i 's/ExecStart=\/usr\/bin\/kubelet/ExecStart=\/usr\/bin\/kubelet --config=\/var\/lib\/kubelet\/config.yaml/' /lib/systemd/system/kubelet.service
Then change the cgroupDriver line in the kubelet config. A couple more options also require changes. Here is the kubelet config that I've used for the same purpose:
address: 127.0.0.1 # changed, was 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: false # changed, was true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt # kubeadm init phase certs ca
authorization:
  mode: AlwaysAllow # changed, was Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs # could be changed to systemd or left as is, as docker's default driver is cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
Restart docker/kubelet services:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create a config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Running the command to start the image:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Entering the bash terminal of the image:
docker exec -it containerId /bin/sh
Running the following inside:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
That works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
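A minimal sketch of the corrected local.json, keeping the rest of the original configuration as-is:

{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}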
I'm trying to run the Vault Docker image in server mode as described here. This is the command I'm using to run Vault:
docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/home/jwahba/PycharmProjects/work/vault/vault.json"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server
And this is the vault.json configuration file
storage "inmem" {}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
disable_mlock = true
The container comes up successfully.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
55100205d2ab vault "docker-entrypoint..." 6 minutes ago Up 6 minutes 8200/tcp stoic_blackwell
However, when I try to execute
docker exec stoic_blackwell vault status
I get the below error:
Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: dial tcp 127.0.0.1:8200: connect: connection refused
There is a similar question here, but unfortunately I couldn't figure out what I misconfigured.
Any suggestions, please?
The VAULT_LOCAL_CONFIG parameter specifies the configuration of your Vault; with the {"backend": {"file": ... part you set a file backend as the storage backend.
So, in VAULT_LOCAL_CONFIG you should directly include what you wrote in your configuration file (vault.json); see the sketch below.
Side note: the configuration file you wrote is in the HCL language, not JSON.
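As a sketch (an illustration of inlining the whole configuration, not necessarily the exact fix), the command could include the listener and storage sections directly, with the listener bound to 0.0.0.0 and the port published:

docker run --cap-add=IPC_LOCK -p 8200:8200 \
  -e 'VAULT_LOCAL_CONFIG={"storage": {"inmem": {}}, "listener": [{"tcp": {"address": "0.0.0.0:8200", "tls_disable": 1}}], "disable_mlock": true}' \
  vault server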
Please try it with the command below:
vault status -tls-skip-verify
I have a set of dockerized applications scattered across multiple servers, and I'm trying to set up production-level centralized logging with ELK. I'm OK with the ELK part itself, but I'm a little confused about how to forward the logs to my Logstash instances.
I'm trying to use Filebeat because of its load-balancing feature.
I'd also like to avoid packing Filebeat (or anything else) into all my containers, and keep it separate, dockerized or not.
How can I proceed?
I've been trying the following. My containers log to stdout, so with a non-dockerized Filebeat configured to read from stdin I do:
docker logs -f mycontainer | ./filebeat -e -c filebeat.yml
That appears to work at the beginning. The first logs are forwarded to my Logstash (the cached ones, I guess). But at some point it gets stuck and keeps sending the same event.
Is that just a bug or am I headed in the wrong direction? What solution have you setup?
Here's one way to forward docker logs to the ELK stack (requires docker >= 1.8 for the gelf log driver):
Start a Logstash container with the gelf input plugin to read from gelf and output to an Elasticsearch host (ES_HOST:PORT):
docker run --rm -p 12201:12201/udp logstash \
logstash -e 'input { gelf { } } output { elasticsearch { hosts => ["ES_HOST:PORT"] } }'
Now start a Docker container and use the gelf Docker logging driver. Here's a dumb example:
docker run --log-driver=gelf --log-opt gelf-address=udp://localhost:12201 busybox \
/bin/sh -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Load up Kibana and things that would've landed in docker logs are now visible. The gelf source code shows that some handy fields are generated for you (hat-tip: Christophe Labouisse): _container_id, _container_name, _image_id, _image_name, _command, _tag, _created.
If you use docker-compose (make sure to use docker-compose >= 1.5), add the appropriate settings in docker-compose.yml after starting the logstash container:
log_driver: "gelf"
log_opt:
  gelf-address: "udp://localhost:12201"
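For reference, with compose file format 2 and later the equivalent settings live under a logging key per service (a sketch; the service name is a placeholder):

services:
  myapp:
    image: busybox
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"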
Docker allows you to specify the log driver in use. This answer does not care about Filebeat or load balancing.
In a presentation I used syslog to forward the logs to a Logstash (ELK) instance listening on port 5000.
The following command constantly sends messages through syslog to Logstash:
docker run -t -d --log-driver=syslog --log-opt syslog-address=tcp://127.0.0.1:5000 ubuntu /bin/bash -c 'while true; do echo "Hello $(date)"; sleep 1; done'
Using Filebeat, you can just pipe docker logs output as you've described. The behavior you are seeing definitely sounds like a bug, but it can also be the partial-line-read configuration hitting you (resending partial lines until a newline symbol is found).
A problem I see with piping is possible back pressure in case no Logstash is available. If Filebeat cannot send any events, it will buffer events internally and at some point stop reading from stdin. I have no idea how (or whether) Docker protects against stdout becoming unresponsive. Another problem with piping might be the restart behavior of Filebeat plus Docker if you are using docker-compose: docker-compose by default reuses images and image state, so when you restart, you will ship all the old logs again (given that the underlying log file has not been rotated yet).
Instead of piping, you can try to read the log files written by Docker to the host system. The default Docker log driver is the json-file log driver. You can and should configure the json-file log driver to do log rotation and keep some old files (for buffering on disk); see the max-size and max-file options, sketched below. The json driver puts one line of 'json' data for every line to be logged. On the Docker host system, the log files are written to /var/lib/docker/containers/container_id/container_id-json.log. These files will be forwarded by Filebeat to Logstash. If Logstash or the network becomes unavailable, or Filebeat is restarted, it continues forwarding log lines where it left off (given the files have not been deleted due to log rotation), so no events will be lost. In Logstash you can use the json_lines codec or filter to parse the json lines and a grok filter to gain some more information from your logs.
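A minimal sketch of enabling rotation for the json-file driver in /etc/docker/daemon.json (the size and file counts are illustrative; restart the docker daemon afterwards):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}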
There has been some discussion about using libbeat (used by Filebeat for shipping log files) to add a new log driver to Docker. Maybe it will be possible to collect logs via dockerbeat in the future by using the docker logs API (I'm not aware of any plans to utilise the logs API, though).
Using syslog is also an option. Maybe you can get some syslog relay on your Docker host to load-balance log events, or have syslog write log files and use Filebeat to forward them. I think rsyslog has at least some failover mode: you can use the Logstash syslog input plugin and rsyslog to forward logs to Logstash with failover support in case the active Logstash instance becomes unavailable.
I created my own docker image using the Docker API to collect the logs of the containers running on the machine and ship them to Logstash thanks to Filebeat. No need to install or configure anything on the host.
Check it out and tell me if it suits your needs: https://hub.docker.com/r/bargenson/filebeat/.
The code is available here: https://github.com/bargenson/docker-filebeat
Just to help others who need to do this: you can simply use Filebeat to ship the logs. I would use the container by @brice-argenson, but I needed SSL support, so I went with a locally installed Filebeat instance.
The prospector from filebeat is (repeat for more containers):
- input_type: log
  paths:
    - /var/lib/docker/containers/<guid>/*.log
  document_type: docker_log
  fields:
    dockercontainer: container_name
It sucks a bit that you need to know the GUIDs, as they could change on updates.
On the Logstash server, set up the usual Filebeat input source for Logstash, and use a filter like this:
filter {
  if [type] == "docker_log" {
    json {
      source => "message"
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      rename => { "log" => "message" }
    }
    date {
      match => [ "time", "ISO8601" ]
    }
  }
}
This will parse the JSON from the Docker logs, and set the timestamp to the one reported by Docker.
If you are reading logs from the nginx Docker image, you can add this filter as well:
filter {
  if [fields][dockercontainer] == "nginx" {
    grok {
      match => { "message" => "(?m)%{IPORHOST:targethost} %{COMBINEDAPACHELOG}" }
    }
    mutate {
      convert => { "[bytes]" => "integer" }
      convert => { "[response]" => "integer" }
    }
    mutate {
      rename => { "bytes" => "http_streamlen" }
      rename => { "response" => "http_statuscode" }
    }
  }
}
The converts/renames are optional, but they fix an oversight in the COMBINEDAPACHELOG expression, where these values are not cast to integers, making them unavailable for aggregation in Kibana.
I verified what erewok wrote above in a comment:
According to the docs, you should be able to use a pattern like this
in your prospectors.paths: /var/lib/docker/containers/*/*.log – erewok
Apr 18 at 21:03
The docker container GUIDs, represented by the first '*', are correctly resolved when Filebeat starts up. I do not know what happens as containers are added.
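A sketch of the wildcard variant of the prospector shown earlier, which avoids hard-coding container GUIDs:

- input_type: log
  paths:
    - /var/lib/docker/containers/*/*.log
  document_type: docker_log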
I'm trying to get my head around something that has been working on CentOS + Vagrant, but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart gives this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts okay enough, but images cannot run; a mounting-failed error is shown when I try, and the startup log also gives a warning or two. Regarding the kernel warning: CentOS gives the same one and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Has anyone had any success overcoming this problem, or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) check cgconfig status
# /etc/init.d/cgconfig status
if it has stopped, restart it:
# /etc/init.d/cgconfig restart
and check that cgconfig is running.
(2) check cgconfig is on
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
if cgconfig is off, turn it on:
# chkconfig cgconfig on
(3) if it still does not work, maybe some cgroup modules are missing. In the kernel .config file, via make menuconfig, add those modules to the kernel, then recompile and reboot.
After that, it should be OK.
I ended up asking the same question at Google Groups and in the end found a solution with some help. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same. Later there was a problem with network connections for the containers; it took too much time to solve, and I had to give up.
So I spent the whole day trying to get Docker to work on my VPS, and I was running into this same error. Basically, what it came down to was the fact that OpenVZ didn't support Docker containers until a couple of months ago, specifically this RHEL update:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to
/cgroup/cpu_and_mem: Device or resource busy /usr/sbin/cgconfigparser;
error loading /etc/cgconfig.conf: Cgroup mounting failed Failed to
parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and mounted the cgroups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it:
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf |tail|grep "="|awk '{print "mount -t cgroup -o",$1,$1,$NF}'>cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;.
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options, or comment out memory = /cgroup/memory; in cgconfig.conf.
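For example, the comment-out option could be applied like this (a sketch; check the exact line in your cgconfig.conf first):

# comment out the memory controller line in /etc/cgconfig.conf
sed -i 's|^\s*memory\s*=\s*/cgroup/memory;|# &|' /etc/cgconfig.conf
service cgconfig restart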
The cgconfig service startup uses mount and umount, which requires an extra privilege bump from Docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image
Tested on CentOS 6 and CentOS 6.5.