I need to use a container with gcsfuse in Google Cloud Composer. As far as I know, the --privileged flag is needed when running a container that uses gcsfuse. How can I use the --privileged flag when running the container via the KubernetesPodOperator in Google Cloud Composer?
Looking at the KubernetesPodOperator and its internal implementation, I don't think securityContext is exposed or configurable. You may want to file an Apache Airflow JIRA issue to track this as a feature request.
I have built a Docker image that, when run, registers itself as a GitHub Runner. This runner will, amongst other things, be used to build and push images to GitHub Container Registry. I don't want to deploy the containers to GKE or Compute Engine, as I don't want the overhead of managing those resources; I would prefer to deploy the containers to Google Cloud Run. I've scoured the docs for help but I can't seem to find answers to the following questions:
Can I run 'docker in docker' when the container is deployed to GCP Cloud Run?
How do I specify the volume mount required when deploying the container to Google Cloud Run, i.e. the usual mapping with docker run would be:
-v /var/run/docker.sock:/var/run/docker.sock
I never tested it, but it's possible that the current Cloud Run sandbox prevents this kind of use. And I don't really see the use case for this!
You can't mount volumes in Cloud Run; it's stateless. You only have an in-memory file system in the /tmp directory (and since it's in memory, size your Cloud Run instance's memory to take this into account). You can connect your instance to third-party products, such as Google Cloud Storage or databases, but there are no mountable volumes on Cloud Run (for now).
If you have these requirements, you could have a look at GKE Autopilot and deploy your container directly on fully managed Kubernetes.
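For orientation, creating an Autopilot cluster and deploying an image could look roughly like this (the cluster name, region and image are placeholders I made up, not from the original question):

gcloud container clusters create-auto my-autopilot-cluster --region us-central1    # create a fully managed Autopilot cluster
gcloud container clusters get-credentials my-autopilot-cluster --region us-central1
kubectl create deployment github-runner --image=ghcr.io/your-org/your-runner:latest    # deploy the runner image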
I am new to the world of containers, Docker and Kubernetes, and I am investigating the requirements for implementing my distributed middleware project. I have taken some key container courses on Docker and Kubernetes.
But I would like to ask those who have more experience: in a production environment (or even just for executing and instantiating modules, where each module would be a container), what are the dependencies for executing a container?
Is it mandatory for me to have Docker itself and its dependency packages installed for this? And to just bring up pods and services with Kubernetes, is it also mandatory to have kubectl installed on my host?
Note: for local development and for deployment using Google Cloud, I have already done some testing and I know these are necessary there.
To set up Docker and Kubernetes on your system you need the things below (a rough install sketch follows this list).

If you are going to set up Kubernetes with Docker:
docker-ce (or docker)
kubelet
kubectl
curl and wget

If you are going to set up Kubernetes with Minikube:
minikube
a hypervisor or VM driver (e.g. VirtualBox)
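As a rough illustration for a Debian/Ubuntu host (package sources and names are assumptions; adjust for your distribution):

sudo apt-get update && sudo apt-get install -y curl wget    # basic tooling
curl -fsSL https://get.docker.com | sh                      # convenience script that installs docker-ce
# kubelet/kubectl come from the Kubernetes package repository, which must be added first:
sudo apt-get install -y kubelet kubectl
# for the Minikube route instead:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube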
I feel you need to be more specific about what exactly you want to know.
There are multiple container technologies in existence currently. To install Docker specifically, your Linux machine should have kernel version > 3.10.
If you want to install Kubernetes on your Linux machines:
you need to modify OS-level settings (like the firewall, swap, etc.);
you need to install a container runtime and the other Kubernetes packages (kubelet, kubeadm, kubectl), then set up container networking (a sketch of these steps follows below).
Here you can find clear instructions to install Kubernetes via kubeadm.
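For orientation, the typical kubeadm flow looks roughly like this (a sketch; the pod CIDR, CNI manifest URL and join parameters are assumptions, so check the docs for current values):

sudo swapoff -a                                          # kubeadm refuses to run with swap enabled
sudo kubeadm init --pod-network-cidr=10.244.0.0/16       # on the control-plane node
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config          # make kubectl work for your user
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml    # install a CNI plugin, e.g. Flannel
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>     # on each worker node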
I'm unable to run a health check other than 'process' on a Docker image deployed to Pivotal Cloud Foundry.
I can deploy fine with health-check-type=process, but that isn't terribly useful. Once the container is up and running I can access the health check HTTP endpoint at /nuxeo/runningstatus, but PCF doesn't seem to be able to check that endpoint, presumably because I'm deploying a pre-built Docker container rather than an app via source or a jar.
I've modified the timeout to be way longer than it needs to be, so that isn't the problem. Is there some other way of monitoring Docker containers deployed to PCF?
The problem was that the Docker container exposed two ports: one on which the health check endpoint was accessible, and another that could be used for debugging. PCF always chose to run the health check against the debug port.
There is no way to tell PCF which port the health check should run against. It chooses among the exposed ports and, for a reason I don't know, always chose the one intended for debugging.
I tried reordering the ports in the Dockerfile, but that had no effect. Ultimately I just removed the debug port from the Dockerfile's EXPOSE instruction and things worked as expected.
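To illustrate the fix (the port numbers here are made up, not from the original setup), the Dockerfile change amounted to:

# Before: PCF picked the debug port (8787) for its health check
EXPOSE 8080 8787
# After: expose only the port that serves /nuxeo/runningstatus
EXPOSE 8080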
When working locally I often use the docker exec command to look around and debug containers.
Is there a way to do this from my PC when the containers are deployed on docker-cloud?
I realize there is a terminal tab on the docker-cloud GUI but I'm finding it a bit limited.
Yes, if you can open an SSH session to your Docker Cloud service (which is probably possible).
Or, more likely, if you run and access your container through a Docker Cloud Agent, which allows you to use any Linux host ("bring your own host") as a node on which you can then deploy containers.
Otherwise, no: the socket used by the Docker Cloud session is not exposed over the internet and is only used locally on the remote cloud server.
You can use the approach described here (https://docs.docker.com/docker-cloud/infrastructure/ssh-into-a-node/#ssh-into-docker-cloud-node); it has worked for me.
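Once you are on the node, debugging works exactly as it does locally (the hostname and container name below are placeholders):

ssh root@<your-node>.node.dockerapp.io    # SSH into the Docker Cloud node
docker ps                                 # list the running containers to find yours
docker exec -it <container-name> /bin/sh  # open a shell inside the container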
I want to see the logs from my Docker Swarm service, not only because I want all my logs to be collected for the usual reasons, but also because I want to work out why the service is crashing with "task: non-zero exit (1)".
I see that there is work to implement docker logs in the pipeline, but is there a way to access logs for production services? Or is Docker Swarm not ready for production with respect to logging?
With Docker 17.03 you can now access the logs of a multi-instance swarm service via the command line:
docker service logs -f {NAME_OF_THE_SERVICE}
You can get the name of the service with:
docker service ls
Note that this is an experimental feature (not production-ready), and in order to use it you must enable experimental mode on the daemon:
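One way to do that (assuming a standard Linux install where the daemon config lives at /etc/docker/daemon.json):

echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json    # overwrites the file; merge by hand if you already have settings
sudo systemctl restart docker
docker version --format '{{.Server.Experimental}}'                  # should print: true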
Update: docker service logs is now a standard feature of docker >= 17.06. https://docs.docker.com/engine/reference/commandline/service_logs/#parent-command
similar question: How to log container in docker swarm mode
What we've done successfully is utilize Graylog. If you look at the docker run documentation, you can specify a log driver and log options that allow you to send all console messages to a Graylog cluster.
docker run... --log-driver=gelf --log-opt gelf-address=udp://your.gelf.ip.address:port --log-opt tag="YourIdentifier"
You can also technically configure it at the global level for the Docker daemon, but I would advise against that: it won't let you add the "tag" option, which is exceptionally useful for filtering down your results.
Docker service definitions also support a log driver and log options, so you can use docker service update to adjust the logging of your services without destroying them.
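For example (the GELF address and service name are placeholders):

docker service update \
  --log-driver gelf \
  --log-opt gelf-address=udp://your.gelf.ip.address:12201 \
  --log-opt tag="YourIdentifier" \
  your-service-name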
As the documentation says:
docker service logs [OPTIONS] SERVICE|TASK
resource: https://docs.docker.com/engine/reference/commandline/service_logs/