How do I use the Stripe CLI as a Docker service?

Thanks in advance for all your support!
I have a dockerized Django application in which I want to keep the stripe-cli webhook command running:
stripe listen --forward-to http://host-name:8000/webhook/...
by running it (stripe-cli) as a Docker service. What I have done to try to make it work is shown below:
services:
  stripe-cli:
    image: stripe/stripe-cli
    container_name: stripe-cli
    command: "listen --api-key=my_stripe_login_key --forward-to http://localhost:8000/stripe/webhook"
But when I try to bring the compose file up, I get an error:
stripe-cli | exec /bin/stripe: exec format error
stripe-cli exited with code 1
I would appreciate any feedback!
Other information:
I have an Ubuntu Docker base image
I am using an Arch-based Linux distribution, XeroLinux, on my host machine

This is a known issue with the Stripe CLI and the latest version of the Docker container. If you pin the container version to v1.12.4 you should be able to run your Docker service following the guidance for the CLI here.
I would watch the issue to get notified when it is resolved. You can also bump the issue by commenting on it.
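As a minimal sketch, pinning the tag in the compose file from the question would look like this (the API key placeholder is kept from the question):
services:
  stripe-cli:
    image: stripe/stripe-cli:v1.12.4
    container_name: stripe-cli
    command: "listen --api-key=my_stripe_login_key --forward-to http://localhost:8000/stripe/webhook"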

Related

Docker build error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I need a simple solution to build a docker image, push it to ECR, and deploy it to ECS.
The final part, which deploys the ECR image to ECS, is working. (I'm using a short deploy.py script that uses Python's AWS boto3 SDK; I found it easier than making the ECS Orb work.)
However, I'm struggling with the first part and need help. I just need to automate the simple docker build, docker tag and docker push steps. It's very simple, but I don't know what I'm doing wrong.
Can anyone help me? The code follows; I'm running it locally for debugging purposes:
version: '2.1'
jobs:
  build:
    docker:
      - image: cimg/python:3.8
    environment:
      AWS_ACCESS_KEY_ID: yadayadayada
      AWS_SECRET_ACCESS_KEY: yadayadayada
      AWS_DEFAULT_REGION: yadayadayada
    steps:
      - checkout
      - run: |
          docker build -t myimg .
          docker tag myimg:latest asdf.dkr.ecr.asdf.amazonaws.com/asddf:latest
          docker push asdf.dkr.ecr.asdf.amazonaws.com/asdf:latest
          pip install boto3
          python deploy.py
Learning CircleCI is really frustrating; there are no good resources for beginners...
Thanks in advance!
You need to use the setup_remote_docker special step in order to get a remote Docker engine running so that your Docker commands will work.
Learning CircleCI is really frustrating; there are no good resources for beginners...
Really? You can find my answer (the "setup_remote_docker" step) and how to use it right in the CircleCI Docs, in a guide called Running Docker Commands.
I hope this helps. Also, you'll see that setting a Docker version is optional, but I strongly suggest you set one; the default version is very old.
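As a rough sketch, adding the step to the config from the question might look like this (the Docker version number is only illustrative; pick a current one from the docs):
version: '2.1'
jobs:
  build:
    docker:
      - image: cimg/python:3.8
    steps:
      - checkout
      - setup_remote_docker:
          version: 20.10.14
      - run: |
          docker build -t myimg .
          docker tag myimg:latest asdf.dkr.ecr.asdf.amazonaws.com/asdf:latest
          docker push asdf.dkr.ecr.asdf.amazonaws.com/asdf:latest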

Can’t sign plugin using Grafana with Docker

I tried to install this plugin into Grafana from GitHub:
https://github.com/Vertamedia/chtable
I cloned this repository into the plugins folder, then added the plugin to my Grafana container:
grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  environment:
    - GF_PATHS_CONFIG="grafana/etc/grafana.ini"
    - GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource,vertamedia-chtable
    - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel,vertamedia-clickhouse-datasource,vertamedia-chtable
Then, when I tried to create a new dashboard panel using this plugin, I got an error with the message:
An unexpected error happened TypeError: Cannot read property ‘emit’ of
undefined
Grafana version: Grafana v7.4.3 (010f20c1c8)
My plugin is unsigned. How can I fix this error and use this plugin?
Here I will list the steps I used to install the Zabbix plugin into a Grafana container. You can try the same approach for this plugin.
First I downloaded the grafana-zabbix plugin files from the official GitHub releases:
wget https://github.com/alexanderzobnin/grafana-zabbix/releases/download/v4.1.4/alexanderzobnin-zabbix-app-4.1.4.zip
Extract that zip file.
Then, in grafana.ini, you have to uncomment allow_loading_unsigned_plugins; by default it is commented out.
To get this grafana.ini file, I ran docker run grafana/grafana:latest, connected to that running Grafana container, and copied /etc/grafana/grafana.ini out of it.
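For example, the config can be copied out of a running container like this (the container name here is only illustrative):
docker run -d --name grafana-tmp grafana/grafana:latest
docker cp grafana-tmp:/etc/grafana/grafana.ini .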
[plugins]
allow_loading_unsigned_plugins = alexanderzobnin-zabbix-app
Dockerfile
FROM grafana/grafana:latest
COPY grafana.ini /etc/grafana/grafana.ini
COPY alexanderzobnin-zabbix-app /var/lib/grafana/plugins/
Using @SachithMuhandiram's answer I was able to get a signed plugin into a running Grafana container. I realize this doesn't answer the question asked (allowing unsigned plugins), but I landed on this thread while researching that problem, so I'll leave the answer here; some may find it useful.
# start Grafana
docker run -d -p 3000:3000 grafana/grafana
# look up the container id
docker ps -a
# copy the plugin into Grafana's plugin directory
docker cp relative/path-to/sample_plugin [container_id]:/var/lib/grafana/plugins/
# restart so Grafana picks up the plugin
docker restart [container_id]

Docker on Windows10 home - inside docker container connect to the docker engine

When creating a Jenkins Docker container, it is very useful to be able to connect to the Docker daemon. That way, I can run docker commands inside the Jenkins container.
For example, after starting the Jenkins Docker container, I would like to 'docker exec -it container-id bash' into it and run 'docker ps'.
On Linux you can bind-mount /var/run/docker.sock. On Windows this seems not to be possible; the solution is to use 'named pipes'. So, in my docker-compose.yml file, I tried to create a named pipe.
version: '2'
services:
  jenkins:
    image: jenkins-docker
    build:
      context: ./
      dockerfile: Dockerfile_docker
    ports:
      - "8080:8080"
      - "50000:50000"
    networks:
      - jenkins
    volumes:
      - jenkins_home:/var/jenkins_home
      - \\.\pipe\docker_engine:\\.\pipe\docker_engine
      # - /var/run/docker.sock:/var/run/docker.sock
      # - /path/to/postgresql/data:/var/run/postgresql/data
      # - etc.
Starting docker-compose with this file, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
How can I set up the docker-compose file so that I can use docker.sock (or Docker in general) inside the started container?
On Linux you can use something like volumes: /var/run/docker.sock:/var/run/docker.sock. This does not work in a Windows environment: when you add this folder (/var) to Oracle VM VirtualBox, it never gets an IP address, as many posts confirm.
You can expose the daemon on tcp://localhost:2375 without TLS in the settings. This way you can configure Jenkins to use the Docker API instead of the socket. I encourage you to read this article by Nick Janetakis about "Understanding how the Docker Daemon and the Docker CLI work together".
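As a sketch of that approach, the Jenkins service could point at the exposed daemon via the DOCKER_HOST environment variable (host.docker.internal is Docker Desktop's hostname for the host machine; this assumes the daemon was exposed on port 2375 as described above):
services:
  jenkins:
    image: jenkins-docker
    environment:
      - DOCKER_HOST=tcp://host.docker.internal:2375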
There are also several Docker plugins for Jenkins that allow this connection, and you can find additional information in the Docker plugin documentation on wiki.jenkins.io:
def dockerCloudParameters = [
    connectTimeout: 3,
    containerCapStr: '4',
    credentialsId: '',
    dockerHostname: '',
    name: 'docker.local',
    readTimeout: 60,
    serverUrl: 'unix:///var/run/docker.sock', // <-- Replace here by the tcp address
    version: ''
]
EDIT 1:
I don't know if it is useful, but the Docker daemon on Windows is located at C:\ProgramData\docker, according to the Docker daemon configuration doc.
EDIT 2:
You need to explicitly tell the container to use the host network, because you want to expose both Jenkins and the Docker API.
Following this documentation, you only have to add --network=host (or network_mode: 'host' in docker-compose, as sketched below) to your container/service. For further information, you can read this article to understand the purpose of this network mode.
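In compose terms, that suggestion would look something like this (service name taken from the question):
services:
  jenkins:
    network_mode: 'host'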
My first try was to start a Docker environment using the "Docker Quickstart terminal". This is a good solution for running Docker commands within that environment.
But installing a complete CI/CD Jenkins environment via Docker means that WITHIN the Jenkins Docker container you need to access the Docker daemon, and after trying many solutions and reading many posts, this did not work. @Paul Rey, thank you very much for trying all kinds of routes.
A good solution is to get an Ubuntu virtual machine and install it via Oracle VM VirtualBox. It is then VERY IMPORTANT to install Docker via this official description.
Before installing Docker, of course, you need to install curl, Git, etc.

HttpException: -404 Failed to connect to remote server on mac while running Docker

I am getting the error HttpException: -404 Failed to connect to remote server when running a jar file inside Docker with the command docker exec -it Test_docker java -jar TestDocker.jar.
Note: I created this setup on Windows, where my docker machine IP is 192.168.99.100, and there the docker exec command runs successfully. On Windows I access the SPARQL endpoint using the URL http://192.168.99.100:8890/sparql and this works perfectly. But when I use the same setup on Mac, it gives me the error mentioned above. I have also tried changing the SPARQL endpoint in my code to http://localhost:8890/sparql, but that does not work either; the URL works fine in the Chrome browser on Mac, yet executing through the command still gives me the error.
Here my docker-compose file,
version: "3"
services:
jardemo_test:
container_name: Test_docker
image: "java:latest"
working_dir: /usr/src/myapp
volumes:
- /docker/test:/usr/src/myapp
tty: true
depends_on:
- virtuoso
virtuoso:
container_name: virtuoso_docker
image: openlink/virtuoso_opensource
ports:
- "8890:8890"
- "1111:1111"
environment:
DB_USER_NAME: dba
DBA_PASSWORD: dba
DEFAULT_GRAPH: http://localhost:8890/test
volumes:
- /docker/virtuoso-test/:/data
Note: I have tried setting the default graph URL environment variable in the docker-compose file with every combination of addresses listed below, but it won't work; I still get the same error.
DEFAULT_GRAPH: http://localhost:8890/test
DEFAULT_GRAPH: http://127.0.0.1:8890/test
DEFAULT_GRAPH: http://0.0.0.0:8890/test
Below is my docker-compose ps result:
$ docker-compose ps
Name              Command                          State   Ports
---------------------------------------------------------------------------------------------------------
Test_docker       /bin/bash                        Up
virtuoso_docker   /opt/virtuoso-opensource/b ...   Up      0.0.0.0:1111->1111/tcp, 0.0.0.0:8890->8890/tcp
Below is the code I am using:
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://localhost:8890/sparql", queryString);
ResultSet results1 = qexec.execSelect();
Info: Once the containers are running successfully, I can access http://localhost:8890/sparql in the browser on the Mac without problems.
Can anybody please help me solve this issue? Suggestions and thoughts are also welcome. Thanks for your help and your time in advance.
As my colleague suggested, the problem is that the code inside the Docker container sees the container itself as localhost. The IP address 192.168.99.100 is also not known there, because the Mac doesn't have it. To solve this, Docker uses its own network, in which the docker-compose service names serve as hostnames. So instead of using http://localhost:8890/sparql, you should use http://virtuoso:8890/sparql, since virtuoso is the service name; see the adjusted line below.
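Applied to the code from the question, only the endpoint URL changes:
QueryExecution qexec = QueryExecutionFactory.sparqlService("http://virtuoso:8890/sparql", queryString);
ResultSet results1 = qexec.execSelect();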
I tried this and it solved my problem.

Docker error : No access to /dev/mem. Try running as root

I have a Raspberry Pi and I have installed Docker on it. I have written a Python script that reads the GPIO status. When I run the command below,
sudo docker run -it --device /dev/gpiomem app-image
it runs perfectly and shows the GPIO status. Now I have created a docker-compose.yml file, as I want to deploy this app.py to the swarm cluster I have created.
Below is the content of docker-compose.yml
version: "3"
services:
app:
image: app-image
deploy:
mode: global
restart: always
privileged: true
When I start the deployment using the sudo docker stack deploy command, the image is deployed but it gives this error:
No access to /dev/mem. Try running as root
So it says it does not have access to /dev/mem, but this is very strange: I am passing --device, so why does the service not have access? It also says to try running as root, but I think the containers already run as root. I also tried giving full permissions to the file by running chmod 777 /dev/gpiomem in the code, but it still shows this error.
My main question is: when it runs normally using the docker run... command, why does it show this error when deploying with sudo docker stack deploy, and how can I resolve it?
Thanks
Adding devices, adding capabilities, and using privileged mode are not supported in swarm mode. Those options in the yml file exist for docker-compose rather than docker stack deploy. You can track the progress on getting these features added to swarm mode in GitHub issue #24862.
Since all you need is access to a device, you may have luck adding the device file as a volume, but that's a shot in the dark:
volumes:
  - /dev/gpiomem:/dev/gpiomem
As stated in the docker-compose devices documentation:
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
The devices option is ignored in swarm mode. You can use privileged: true, which will give access to all devices.
