How to run docker container as privileged within Cloud Run - docker

I have a docker container that needs to run with --privileged to establish a VPN connection once it boots up.
I am migrating it to Cloud Run using Cloud Build.
I tried --container-privileged, but that seems to only work for GCE. I also added the following to the args for the gcloud run deploy call in cloudbuild.yaml, but it complains with the error: Invalid command "docker run --privileged": file not found anywhere in PATH.
- --command
- docker run --privileged
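For context, --command overrides the entrypoint executed inside the container rather than wrapping the container in another docker invocation, which is why the literal string "docker run --privileged" is looked up in PATH and not found. A hedged sketch of how the flag is normally used (service, image, and entrypoint names here are placeholders, not from the question):
# --command replaces the container's entrypoint with a binary that must exist in the image
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --command /app/start.sh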

Google Cloud Run does not use Docker to run containers; it uses gVisor.
Cloud Run does not support privileged containers.

Related

aws sam: How to view the command run while running the docker image

I am checking out AWS SAM.
I have set up a HelloWorld example for the python3.7 runtime.
I have started the server:
sam local start-api
When I try to access the endpoint localhost:3000/hello, it downloads the image public.ecr.aws/sam/emulation-python3.7:rapid-1.40.0-x86_64
and mounts the build folder inside the container:
Mounting /home/testuser/sam-app/.aws-sam/build/HelloWorldFunction as /var/task:ro,delegated inside runtime container
I see the container is run using the command:
"/var/rapid/aws-lambda-rie --log-level error"
What does this do?
How can I see the full docker run command run by SAM?
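Not from the original thread, but one common way to inspect what sam local actually started is to look at the running container while the endpoint is being served (the container ID below is a placeholder):
docker ps                                                          # find the emulation container started by sam local
docker inspect --format '{{.Path}} {{json .Args}}' <container_id>  # show its entrypoint and arguments
sam local start-api --debug                                        # sam's debug logging also prints more detail about the invocation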

Run docker inside of docker on AWS Fargate

I created a task definition on Amazon ECS and want to run it with Fargate. I set up my task; the network mode is awsvpc. I created a new container with a docker image (a simple "Hello world" project) on Amazon ECR. I ran the task and everything works fine. Now I need to run a docker container from hub.docker.com as part of the task.
Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install ...
ADD script.sh /script.sh
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]
script.sh
#!/bin/bash
# ... prepare data
docker run --rm some_container_from_docker_hub
# ... continue processing data
Initially, I got a "command not found" error. OK, I installed Docker into my image. Now I've got "Cannot connect to the Docker daemon".
My question: is there any way to run a docker container inside of another docker container on Amazon Fargate?
You can't run a container from another container using Fargate.
Running a container from another one, like in your case, would mean that you would need access to the Docker daemon. Access to the Docker daemon means root access to the host machine. This breaks container isolation and is unsafe.
Depending on your use case, I suggest you use an EC2 instance, use CodeBuild, or build an operator that is able to talk to the API to spawn containers.
[Edit]: It seems that there is an open issue on this topic [ECS,Fargate]: Support for building Docker containers #95
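Not from the original answer, but as a rough sketch of the "talk to the API to spawn containers" alternative: the first task can ask ECS to run the Docker Hub image as a separate Fargate task instead of nesting it. Cluster, task definition, and subnet values below are placeholders, and the task role would need ecs:RunTask permission.
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition hub-image-task \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'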

Running a bitcoind docker

Thanks to the bitcoin.stack community, I have successfully launched a bitcoind docker container with an external volume which holds the block data.
Currently it's 100% synced, but I am facing an issue getting information using bitcoin-cli: I need to run bitcoind -reindex and then add txindex=1 to bitcoin.conf.
As I pulled the docker image from Docker Hub, I do not have any control over its Dockerfile, and I have 140GB+ of blockchain data that I do not want to discard and start over.
How do I run --reindex on a docker container?
While your container is running you can run docker exec -it <mybitcoindcontainer> /bin/sh. This should give you a shell inside your running container. You can then run your choice of commands at the shell prompt.
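A hedged illustration of that approach (the container name is a placeholder, and it assumes bitcoin-cli is present in the image):
docker exec -it mybitcoindcontainer /bin/sh                          # interactive shell inside the running container
docker exec -it mybitcoindcontainer bitcoin-cli getblockchaininfo   # or run a single command directly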

how to deploy specific docker container just by docker run?

https://github.com/getsentry/onpremise
mkdir -p data/{sentry,postgres} - Make our local database and sentry config directories.
This directory is bind-mounted with postgres so you don't lose state!
docker-compose run --rm web config generate-secret-key - Generate a secret key.
Add it to docker-compose.yml in base as SENTRY_SECRET_KEY.
docker-compose run --rm web upgrade - Build the database.
Use the interactive prompts to create a user account.
docker-compose up -d - Lift all services (detached/background mode).
Access your instance at localhost:9000!
I'm new to docker.
I tried to run sentry container locally, succeeded.
But when I was trying to deploy it on a cloud container service platform, I ran into some problems.
The platform only provides one way to run docker: docker run xxx, unlike AWS, where you can use the CLI.
So how can I deploy it on that platform? Thanks.
Additionally, I must use that platform because it's my company's product lol.
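Not an answer from the original thread, but as a rough illustration of what the docker-compose setup is doing, the individual services can be approximated with plain docker run commands, which is the form such a platform might accept. Image names, links, ports, and environment variables below are assumptions, not taken from the repository:
docker run -d --name sentry-redis redis
docker run -d --name sentry-postgres -v "$(pwd)/data/postgres:/var/lib/postgresql/data" postgres
docker run -d --name sentry-web -p 9000:9000 \
  --link sentry-postgres:postgres --link sentry-redis:redis \
  -e SENTRY_SECRET_KEY='<generated-key>' \
  sentry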

How to properly start Docker inside Jenkins that is also running in Docker

I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found today is to build my own Jenkins image based on the official Jenkins image but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file, which used "exec java $JAVA_OPTS ..." but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is to perhaps use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question, since Docker should be executed as a service in the same container that Jenkins is running in?). My concern with this approach is that I'm going to lose the EntryPoint specified in the Jenkins Dockerfile, which allows me to pass arguments to the Jenkins container when starting it, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker in Docker doesn't mean you have to run Docker inside Docker with three levels: host > first-level container > second-level container.
In fact, you just need to share Docker with the host, and it is your host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this mount, when you run docker inside your Jenkins container, the Docker client will communicate with the Docker daemon on your host in order to run the new container.
You should also run your Jenkins container with the --privileged flag:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/
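Not part of the original answer, but applied to the Jenkins command from the question, the socket-mount approach might look roughly like this (it still assumes a docker CLI binary is available inside the container, e.g. installed in the image):
docker run -d -p 8080:8080 \
  -v /your/home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --privileged \
  jenkins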
