I was trying to run a container with a seccomp profile using the Docker SDK for Python, and this error showed up:
"Decoding seccomp profile failed: invalid character 'd' looking for beginning of value"
Below is the code:
import docker
client = docker.from_env()
security_opt = [
    "seccomp=default.json"
]
container = client.containers.run(
    image="nginx",
    name="pikachu2",
    security_opt=security_opt,
    detach=True,
)
I've tried searching other sources (this and this, for example), but none of them worked.
I'd like to know what exactly is the problem and how to fix it. Thank you.
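What the error suggests: the Docker CLI reads a seccomp=<file> profile from disk before talking to the daemon, but the Python SDK passes the string through unchanged, so the daemon tries to JSON-decode the literal text "default.json" and fails on the leading 'd'. A minimal sketch of a likely fix (assuming default.json sits next to the script) is to pass the profile contents instead of the path:

import docker

client = docker.from_env()

# Read the profile ourselves and hand the daemon the JSON text;
# the daemon decodes it, so the file must contain valid JSON.
with open("default.json") as f:
    profile = f.read()

container = client.containers.run(
    image="nginx",
    name="pikachu2",
    security_opt=["seccomp=" + profile],
    detach=True,
)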
VSCode Version:
1.62.2
Local OS Version:
Windows 10.0.18363
Reproduces in: Remote - Containers
Name of Dev Container Definition with Issue:
/vscode/devcontainers/typescript-node
In our company we use a proxy that terminates SSL connections. When I try to start any devcontainer (the workspace is on the WSL2 filesystem), I get the following error message:
Installing VS Code Server for commit 3a6960b964327f0e3882ce18fcebd07ed191b316
[2021-11-12T17:01:44.400Z] Start: Downloading VS Code Server
[2021-11-12T17:01:44.400Z] 3a6960b964327f0e3882ce18fcebd07ed191b316 linux-x64 stable
[2021-11-12T17:01:44.481Z] Stop (81 ms): Downloading VS Code Server
[2021-11-12T17:01:44.499Z] Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)
In the Dockerfile I copy the company certificates and update the CA store:
ADD ./certs /usr/local/share/ca-certificates
RUN update-ca-certificates 2>/dev/null
The proxy environment variables are also set correctly. Out of desperation I also tried to disable the certificate check for wget:
RUN su node -c "echo check_certificate=off >> ~/.wgetrc"
In the devcontainer configuration I have also set the proxy and disabled the certificate check for VS Code via the settings:
// Set *default* container specific settings.json values on container create.
"settings": {
"http.proxy": "http://<proxy.url>:8080",
"http.proxyStrictSSL": false
},
I have tried many other things, like setting NODE_TLS_REJECT_UNAUTHORIZED=0 as an env variable inside the Dockerfile, unfortunately without any success. Outside the company network, without the proxy, everything works fine.
Maybe one of you has an idea how I can solve this problem?
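One way to narrow this down (a diagnostic sketch of mine, assuming Python 3 is available inside the container) is to try an HTTPS request using the container's system CA store and see whether verification fails. Note that the VS Code Server download runs under Node.js, which ships its own CA bundle and, as far as I know, only picks up extra certificates via NODE_EXTRA_CA_CERTS, so a passing check here only proves that update-ca-certificates fixed the OpenSSL store:

import urllib.error
import urllib.request

# Fetch over HTTPS using the system OpenSSL CA store (urllib also
# honours the https_proxy environment variable). A verification
# error here means the proxy's CA is still missing from the store.
try:
    urllib.request.urlopen("https://update.code.visualstudio.com",
                           timeout=10)
    print("TLS verification succeeded")
except urllib.error.HTTPError as err:
    # Any HTTP status still means the TLS handshake itself succeeded.
    print("TLS verification succeeded (HTTP %d)" % err.code)
except urllib.error.URLError as err:
    print("TLS/connection failed:", err.reason)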
A working, if not so nice, solution to the problem is to add HTTPS exceptions for the following domains:
https://update.code.visualstudio.com
https://az764295.vo.msecnd.net
A list of common hostnames can be found here:
https://code.visualstudio.com/docs/setup/network
Hi, I'm new to using AppArmor. So I created a simple script on my Debian 10 machine to see how AppArmor works:
#!/bin/sh
echo "hi from Apparmor" > /tmp/hi.txt
cat /tmp/hi.txt
rm /tmp/hi.txt
Then I saved the file as s.sh and tried to generate a profile, but the command failed.
Please tell me how I can solve this problem.
Thanks for any answer!
This is a known bug in Debian Buster.
You can solve it by creating the missing files until it works.
Source:
In the following example, we will thus try to create a profile for /sbin/dhclient. For this we will use aa-genprof dhclient. In Debian Buster there is a known bug[6] that makes the previous command fail with the following error: ERROR: Include file /etc/apparmor.d/local/usr.lib.dovecot.deliver not found. To fix it create the missing files with touch file. It will invite you to use the application in another window and when done to come back to aa-genprof to scan for AppArmor events in the system logs and convert those logs into access rules. For each logged event, it will make one or more rule suggestions that you can either approve or further edit in multiple ways:
https://debian-handbook.info/browse/fr-FR/stable/sect.apparmor.html
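If you would rather pre-create all of the stubs at once instead of touching one file per error, a small sketch along these lines should work (an assumption on my part: Debian's convention is that every profile under /etc/apparmor.d includes a same-named file in /etc/apparmor.d/local; run as root):

import pathlib

apparmor_dir = pathlib.Path("/etc/apparmor.d")
local_dir = apparmor_dir / "local"
local_dir.mkdir(exist_ok=True)

# aa-genprof aborts on a missing local include, so create an empty
# stub for every profile file that lacks one; touch() leaves existing
# files alone apart from their timestamps.
for profile in apparmor_dir.iterdir():
    if profile.is_file():
        (local_dir / profile.name).touch()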
I have Nextcloud installed and working fine in a Docker container, but I want fail2ban to monitor the log files for brute-force attempts. I know Nextcloud has its own baked-in protection, but that only throttles login attempts, and I would like to ban offenders outright (I also have this problem with other containers). The docker-compose file is set to create the nextcloud.log file at /mnt/nextcloud/log/nextcloud.log. I followed this guide to create the jail:
https://www.c-rieger.de/nextcloud-installation-guide-ubuntu/#c06
fail2ban is running on the host machine; however, it fails to start with:
[447]: ERROR Failed during configuration: Have not found any log file for nextcloud jail
[447]: ERROR Async configuration of server failed
Thinking it was simply a permission issue, I chowned everything to root and tried to start it again, but the service still won't start. What am I doing wrong?
Thanks for the help!
The docker-compose is set to create the nextcloud.log file to /mnt/nextcloud/log/nextcloud.log
Be sure this file really exists and that your jail.local has the correct logpath entry (a sketch for pre-creating the file follows the config below):
[nextcloud]
...
logpath = /mnt/nextcloud/log/nextcloud.log
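If the container has not written its first log line yet, the file will not exist and the jail cannot start. A minimal sketch for pre-creating it (assuming the path from the question; run with enough privileges to write under /mnt/nextcloud):

import pathlib

# fail2ban will not start a jail whose logpath does not exist, so
# create an empty log file at the expected location ahead of time.
logpath = pathlib.Path("/mnt/nextcloud/log/nextcloud.log")
logpath.parent.mkdir(parents=True, exist_ok=True)
logpath.touch()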
You can also check resulting config using dump:
fail2ban-client -d | grep 'nextcloud.*logpath'
But I'm still not sure the error message you provided was thrown by fail2ban, because its error messages look different; see https://github.com/fail2ban/fail2ban/commit/27947407bc7910f0f50972113218ebc73c4a22c7
It should be something like:
-have not found a log file for nextcloud log
+Have not found any log file for nextcloud jail
I built an image with Google Cloud Build using Docker Compose. In my cloudbuild.yml file I have the following steps:
1. Build the Docker image using Docker Compose
2. Tag the built image
3. Create an instance template
4. Create an instance group
Now here is the problem: every time a new instance gets built, the container created from the image keeps restarting and never actually boots up. Despite this, I can build the image and start it as a container on the instance by hand, independently of the image from Cloud Build.
I managed to find some clues from the logs:
E1219 19:13:52 7f28dce6d700 api_server.cc:184 Metadata request unsuccessful: Server responded with 'Forbidden' (403): Transport endpoint is not connected
oauth2.cc:289 Getting auth token from metadata server docker
I also got a clue by running the following on the instance:
docker start -a -i <container_id>
Output: Unrecognized input header: 99
The cloudbuild.yml file looks like (I've replaced some variables with ...):
#cloudbuild.yaml
steps:
- name: 'docker/compose:1.22.0'
args: ['-f', 'docker/docker-compose.tb.prod.yml', 'up', '-d']
- name: 'gcr.io/cloud-builders/docker'
args: ['tag', 'tb:latest', '...']
- name: 'gcr.io/cloud-builders/gcloud'
args: [
'beta', 'compute', '--project=...', 'instance-templates', 'create-with-container',
'tb-app-staging-${COMMIT_SHA}',
'--machine-type=n1-standard-2', '--network=...', '--network-tier=PREMIUM', '--metadata=google-logging-enabled=true',
'--maintenance-policy=MIGRATE', '--service-account=...',
'--scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append',
'--tags=http-server,https-server', '--image=cos-stable-69-10895-62-0', '--image-project=cos-cloud', '--boot-disk-size=20GB', '--boot-disk-type=pd-standard',
'--container-restart-policy=always', '--labels=container-vm=cos-stable-69-10895-62-0',
'--boot-disk-device-name=...',
'--container-image=...',
]
- name: 'gcr.io/cloud-builders/gcloud'
args: [
'beta', 'compute', '--project=...', 'instance-groups',
'managed', 'rolling-action', 'start-update',
'tb-app-staging',
'--version',
'template=...',
'--zone=europe-west1-b',
'--max-surge=20',
'--max-unavailable=9999'
]
images: ['...']
timeout: 1200s
I found the issue, and I'll answer my own question in case someone else runs into the same thing.
The problem was that in my docker-compose.yml I had stdin_open and tty set to true, but my cloudbuild.yml file did not accept those options and failed silently (annoying!).
To fix the issue you will need to use the flags --container-stdin and --container-tty on the create-with-container command.
More details can be found in the Google docs: https://cloud.google.com/compute/docs/containers/configuring-options-to-run-containers
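To make that concrete, here is a minimal sketch of the amended call (the template and image names are hypothetical placeholders; the two flags are the point, mirroring the create-with-container step in the cloudbuild.yml above):

import subprocess

# The two extra flags carry docker-compose's stdin_open/tty settings
# over to the instance template's container declaration.
subprocess.run([
    "gcloud", "beta", "compute", "instance-templates",
    "create-with-container", "tb-app-staging-example",  # hypothetical name
    "--container-image=gcr.io/my-project/tb:latest",    # hypothetical image
    "--container-stdin",  # equivalent of stdin_open: true
    "--container-tty",    # equivalent of tty: true
], check=True)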
I had a similar issue; the reason was setting USER in the Dockerfile. I was changing the user to 'node', which is available in the official Node.js images, but that does not work on Google Cloud's container VMs.
FROM node:current-buster-slim
USER node
According to the documentation at bazelbuild/rules_docker, it should be possible to work with these container images on OSX, and it also claims that it's possible to do so without docker.
These rules do not require / use Docker for pulling, building, or pushing images. This means:
They can be used to develop Docker containers on Windows / OSX without boot2docker or docker-machine installed.
They do not require root access on your workstation.
How do I do that? Here's a simple rule:
go_image(
name = "helloworld_image",
importpath = "github.com/nictuku/helloworld",
library = ":go_default_library",
visibility = ["//visibility:public"],
)
I can build the image with bazel build :helloworld_image. It produces a tarball in bazel-bin, but I can't run it:
INFO: Running command line: bazel-bin/helloworld_image
Loaded image ID: sha256:08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852
Tagging 08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852 as bazel:helloworld_image
standard_init_linux.go:185: exec user process caused "exec format error"
ERROR: Non-zero return code '1' from command: Process exited with status 1.
It's trying to run the Linux binary, but this is OSX, which is silly.
I also tried doing a "docker load" on the .tar content but it doesn't seem to like that format.
$ docker load -i bazel-bin/helloworld_image-layer.tar
open /var/lib/docker/tmp/docker-import-330829602/app/json: no such file or directory
Help? Thanks!
You are building for your host platform by default, so you need to build for the container platform if you want to do that.
Since you are using a Go binary, you can cross-compile by specifying --cpu=k8 on the command line, e.g. bazel build --cpu=k8 :helloworld_image. Ideally we would be able to just say that the Docker image needs a Linux binary (so there would be no need for the --cpu flag), but this is still a work in progress in Bazel.