Here is my docker-compose.yml file:
version: "3.3"
services:
  tutorial:
    image: fiware/tutorials.context-provider
    hostname: iot-sensors
    container_name: fiware-tutorial
    networks:
      - default
    expose:
      - "3000"
      - "3001"
    ports:
      - "3000:3000"
      - "3001:3001"
    environment:
      - "DEBUG=tutorial:*"
      - "PORT=3000"
      - "IOTA_HTTP_HOST=iot-agent"
      - "IOTA_HTTP_PORT=7896"
      - "DUMMY_DEVICES_PORT=3001"
      - "DUMMY_DEVICES_API_KEY=4jggokgpepnvsb2uv4s40d59ov"
And this is the result (screenshot: docker-compose.yml execution):
Why can't I run it on a Raspberry Pi 3 (OS: Debian 11 bullseye)? Please help!
Thank you very much for your time!
As the error message suggests, you are hitting an exec format error when trying to run the dockerization of fiware/tutorials.context-provider on a Raspberry Pi, since the compiled binaries are built for the amd64 architecture.
As can be seen from the answer to this question, that won't work on an ARM based machine, since Docker is a virtualisation platform, not an emulator.
Since no image based on your architecture is currently available, if you need it, you will have to build an ARM version yourself. The current code and Dockerfile can be found here: https://github.com/FIWARE/tutorials.NGSI-v2/tree/master/docker
So I would assume you will need to amend the dockerization and rebuild the binaries to overcome the exec format error - this seems to be a common issue with Raspberry Pi.
However, I'm still unsure why creating an ARM dockerization is necessary, as all you are attempting to do is containerize and run code emulating dummy IoT devices on a Raspberry Pi. A Raspberry Pi itself can send a stream of data directly as a real device - it doesn't need a device emulator to be a device, it is one.
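If you do go the self-build route, a rough sketch is below: clone the tutorial repository onto the Pi and point the compose service at a local build instead of the prebuilt amd64 image, so the binaries get compiled for ARM. The clone location used for `context` is an assumption, not something from the tutorial.

```yaml
version: "3.3"
services:
  tutorial:
    # Build locally on the Pi instead of pulling the prebuilt amd64 image,
    # so the resulting binaries match the ARM architecture.
    build:
      context: ./tutorials.NGSI-v2/docker   # assumed location of the cloned repo
    hostname: iot-sensors
    container_name: fiware-tutorial
    ports:
      - "3000:3000"
      - "3001:3001"
```

Running docker-compose build followed by docker-compose up should then produce an ARM-native image, assuming the image's dependencies install cleanly on ARM.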
Related
I'm currently working on a custom project involving Docker and JBoss. The first 2-3 times that I run the command docker-compose up, the log gets stuck at a different part of the build each time. After those 2-3 attempts, the command works correctly. I'm working on a MacBook Pro 2021 with macOS Ventura 13.0.
The docker-compose file is the following:
version: '2'
services:
  webapp:
    environment:
      - SCRIPT_DEBUG=false
      - DEBUG=${WEBAPP}
    image: "${REGISTRY}/ispdev/jboss:743GA-jdk1.8-V2"
    ports:
      - "${WEBAPP_PORT}:8080"
      - "${WEBAPP_DEBUG_PORT}:8787"
    volumes:
      - "${WS_ROOT_DIR}/${APPL_ROOT}/${WEBAPP_EAR_DIR}/${WEBAPP_FINAL_ARTIFACT}:/opt/eap/standalone/deployments/${WEBAPP_FINAL_ARTIFACT}"
      - "${WS_ROOT_DIR}/${COMPOSE_ROOT}/resources/jboss.yml:/usr/local/y2j/jboss.yml"
So far I have just tried running docker-compose up several times.
What version of Docker do you have? Check with docker -v.
You may have to reinstall Docker. Here is a link:
https://docs.docker.com/desktop/install/mac-install/
Is it possible to run a Docker container with redis:2.8 on macOS with an M1?
(screenshot: docker log)
Setup:
macOS M1
docker-compose.yml with redis 2.8
version: '2'
services:
  redis:
    image: redis:2.8
    ports:
      - "6379:6379"
Install Docker through the official documentation: https://docs.docker.com/desktop/install/mac-install/#mac-with-apple-silicon
Run the Docker application
Go into the redis:2.8 image
Run the container through the button in the top-right corner
I got this error
In another way through
docker-compose up redis
I got this error
If you google the error you are getting, runtime:
failed to create new OS thread (have 2 already; errno=22)
you will see a hit on another Stack Overflow question, Failed to create new OS thread (have 2 already; errno=22). That looks to provide the answer - the platform of your M1 Mac is not compatible with that specific image, so you need to find an image that will work on your M1 Mac.
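One thing that may be worth trying first (an assumption on my part, not guaranteed to work for an image this old) is pinning the service to the amd64 platform so Docker Desktop runs it under emulation; the platform key requires compose file format 2.4 or later:

```yaml
version: '2.4'
services:
  redis:
    image: redis:2.8
    platform: linux/amd64   # run the amd64 image under emulation on Apple Silicon
    ports:
      - "6379:6379"
```

If that still fails, a newer multi-arch tag such as redis:6 ships native arm64 images and should run without emulation.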
I've been running into some issues trying to get Docker to work properly with the GPU. A bit of background on what I'm trying to do - I'm currently trying to run Open3D within Docker (I've been able to run it fine on my local machine), but I've been running into the issue of giving my Docker container GPU access.
I'm not entirely sure what is needed, and most of the guides and details have been about Nvidia and Ubuntu, without much detail on how to get it to work with a Mac.
I've tried a few things with the docker-compose file - here it is right now, though I feel like I'm going in the wrong direction.
services:
  cmake:
    container_name: cmake_container
    build:
      context: .
      dockerfile: Dockerfile
    tty: true
    command: /bin/bash
    volumes:
      - ./testdata_files:/app/testdata_files
    deploy:
      resources:
        reservations:
          devices:
            - driver: amd
              count: 1
              capabilities: [gpu]
Here are my graphic details :
AMD Radeon Pro 5500M 8GB
Intel UHD Graphics 630 1536 MB
I can successfully bring up a CosmosDb Emulator instance within docker-compose, but the data I am trying to seed has more than 25 static containers, which is more than the default emulator allows. Per https://learn.microsoft.com/en-us/azure/cosmos-db/emulator-command-line-parameters#set-partitioncount you can set this partition count higher with a parameter, but I am unable to find a proper entrypoint into the compose that accepts that parameter.
I have found nothing in my searches that affords any insight into this as most people have either not used compose or not even used Docker for their Cosmos Emulator instance. Any insight would be appreciated.
Here is my docker-compose.yml for CosmosDb:
services:
  cosmosdb:
    container_name: "azurecosmosemulator"
    hostname: "azurecosmosemulator"
    image: 'mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator'
    platform: windows
    tty: true
    mem_limit: 2GB
    ports:
      - '8081:8081'
      - '8900:8900'
      - '8901:8901'
      - '8902:8902'
      - '10250:10250'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
      - '10255:10255'
      - '10256:10256'
      - '10350:10350'
    networks:
      default:
        ipv4_address: 172.16.238.246
    volumes:
      - '${hostDirectory}:C:\CosmosDB.Emulator\bind-mount'
I have attempted to add a command in there for starting the container, but it does not accept any arguments I have tried.
My answer for this was a workaround. Ultimately, running Windows and Linux containers side by side was a sizeable pain. Recently, Microsoft put out a Linux container version of the emulator, which allowed me to provide an environment variable for the partition count and run the process far more efficiently.
Reference here: https://learn.microsoft.com/en-us/azure/cosmos-db/linux-emulator?tabs=ssl-netstd21
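A minimal sketch of that Linux-emulator setup, assuming the AZURE_COSMOS_EMULATOR_PARTITION_COUNT variable described in the linked docs (the partition count value and trimmed port list here are illustrative):

```yaml
services:
  cosmosdb:
    container_name: "azurecosmosemulator"
    image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
    environment:
      # Raise the partition count above the default so more than 25
      # static containers can be seeded.
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=50
    ports:
      - '8081:8081'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
```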
I would like to run 2 docker images with docker-compose.
one image should run with nvidia-docker and the other with docker.
I've seen this post use nvidia-docker-compose launch a container, but exited soon
but this is not working for me (not even when running only one image)...
any idea would be great.
UPDATE : please check nvidia-docker 2 and its support of docker-compose first
https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#do-you-support-docker-compose
(I'd first suggest adding the nvidia-docker tag).
If you look at the nvidia-docker-compose code here, it only generates a specific docker-compose file after querying the nvidia configuration on localhost:3476.
You can also write this docker-compose file by hand, as they turn out to be quite simple. Follow this example, replace 375.66 with your nvidia driver version, and put as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):
services:
  exampleservice0:
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
    environment:
      - EXAMPLE_ENV_VARIABLE=example
    image: company/image
    volumes:
      - ./disk:/disk
      - nvidia_driver_375.66:/usr/local/nvidia:ro
version: '2'
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
Then just run this hand-made docker-compose file with a classic docker-compose command.
Maybe you can then compose with non-nvidia containers by skipping the nvidia-specific stuff in the other services.
In addition to the accepted answer, here's my approach, a bit shorter.
I needed to use the old docker-compose file version (2.3) because of the required runtime: nvidia (it won't necessarily work with version: 3 - see this). Setting NVIDIA_VISIBLE_DEVICES=all will make all the GPUs visible.
version: '2.3'
services:
  your-service-name:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    # ...your stuff
My example is available here.
Tested on NVIDIA Docker 2.5.0, Docker CE 19.03.13 and NVIDIA-SMI 418.152.00 and CUDA 10.1 on Debian 10.
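On newer Docker and Compose versions, the runtime: nvidia approach has largely been superseded by device reservations in the compose spec; a minimal sketch (the CUDA image tag is an assumption, substitute your own image):

```yaml
services:
  your-service-name:
    image: nvidia/cuda:11.4.3-base-ubuntu20.04   # assumed image, replace with yours
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1             # or "all" to expose every GPU
              capabilities: [gpu]
```

This requires the NVIDIA Container Toolkit to be installed on the host, just as the runtime-based approach does.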