Today I had to move my Domoticz/jadahl/Synology setup to one that runs in a Docker container. While the move itself went fine, I have one issue. Domoticz allows scripts to be executed when a switch is toggled. I have been running PHP scripts this way for years, and I was wondering whether it is possible to run a script located on the Synology from inside the Docker container. I'm totally new to Docker, so forgive any stupid questions.
If not, any tips on how to approach this so I can get back to my day job?
Solved this by creating my own image:
FROM domoticz/domoticz:latest
# Tools my switch scripts rely on: wake-on-LAN, HTTP clients and the PHP CLI
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y etherwake wget curl php-cli php-xml php-soap
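A minimal sketch of how the image can then be built and run, assuming the scripts live under /volume1/domoticz/scripts on the Synology (the paths, port and image name are examples, not part of the original setup):
docker build -t domoticz-php .
docker run -d --name domoticz \
    -p 8080:8080 \
    -v /volume1/domoticz/scripts:/scripts \
    domoticz-php
With the folder bind-mounted like this, a switch action can reference the script via its path inside the container, e.g. script:///scripts/toggle.sh.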
The full error is:
ERROR: libmount/2.33.1: Error in source() method, line 26
tools.get(**self.conan_data["sources"][self.version])
FileExistsError: [Errno 17] File exists: './util-linux-2.33.1/tests/expected/libmount/context-X-mount.mkdir'
My setup is a dockerized Conan where the container is built like this:
FROM gcc:10.2.0
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y cmake
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install conan
RUN conan remote add bincrafters https://api.bintray.com/conan/bincrafters/public-conan
CMD ["/bin/bash"]
My base path contains the folder build/conan, and there is a conanfile.txt in the base path.
The conanfile.txt contains:
[requires]
sdl2/2.0.12@bincrafters/stable
The motivation to dockerize is to get a stable build environment across all my machines.
build/conan is mounted to store all cached files between builds, or so I hope it will once this works; see the sketch below.
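Roughly the invocation I have in mind (image name and flags are illustrative; since the gcc base image runs as root, the Conan cache lives in /root/.conan):
docker build -t conan-builder .
docker run --rm -it \
    -v "$PWD":/src \
    -v "$PWD/build/conan":/root/.conan \
    -w /src \
    conan-builder \
    conan install . --install-folder build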
I made this into a repository so you can check out this example.
EDIT: I modified the repo as I went on investigating - the original is in the commit history.
https://github.com/Aypahyo/dockerized-conan-shows-fileexistserror-errno-17-file-exists-util-linux-2.git
What I want is to run conan install from within a container against a directory mounted from the host, with the cache stored on the host machine.
My obvious question is: What is happening here and how do I fix it?
The issue seems to stem from the volume mount on my system.
I followed user uilianries' advice and built a container based on an official conan-docker-tools image, as well as moving the volume into a Docker-managed volume.
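For reference, the Docker-managed-volume variant looks roughly like this (conanio/gcc10 is one of the official conan-docker-tools images; the mount point assumes the cache lives in /home/conan/.conan, as it does in those images):
docker volume create conan-cache
docker run --rm -it \
    -v conan-cache:/home/conan/.conan \
    -v "$PWD":/src -w /src \
    conanio/gcc10 \
    conan install . --install-folder build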
This error message is gone now, although it looks like this approach in general may not fit what I want to do.
I modified the repository for this question with what I ended up with. https://github.com/Aypahyo/dockerized-conan-shows-fileexistserror-errno-17-file-exists-util-linux-2
Caching does not work the way I want it to, but that is not what this question was about.
I am preparing test automation that requires me to install NetworkManager so that the code's API (which uses python3-networkmanager) can be tested.
In the Dockerfile, I tried installing:
RUN apt-get install -y dbus \
    network-manager
and started receiving the error:
networkmanager.systems do not have hostname property.
I looked for solutions, but it appears that fixing this would require either:
a privileged user (cannot use a privileged user; project requirement), or
a reboot after installing (this is Docker, hence I can't reboot).
This leaves me with only one option: mocking the Debian NetworkManager in a way that still lets it communicate with python3-networkmanager.
I am trying to figure out how I can mock it.
RUN apt-get update && apt-get install -y \
network-manager
worked for me.
I would like to contribute, as I had to spend some time getting this to work.
Inside the Dockerfile you have to add:
RUN apt-get update && apt-get install -y network-manager dbus
Also, I added a script to start NetworkManager:
#!/bin/bash
# D-Bus must be running before NetworkManager can register on the system bus
service dbus start
service NetworkManager start
Then in the Dockerfile you call this start script at the end (the chmod makes sure the script is executable inside the image):
COPY start_script.sh /etc/init/
RUN chmod +x /etc/init/start_script.sh
ENTRYPOINT ["/etc/init/start_script.sh"]
Now you can build your container and run it like this:
docker run --net="host" -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket container
For me, this is enough to work with an Orange Pi and Docker without a privileged container.
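To check from the host that the container can actually reach NetworkManager, something like this should work (container is whatever name you gave it):
docker exec -it container nmcli general status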
I'm trying to set up a local GoCD CI server using Docker for both the base server and the agents. I can get everything running fine, but issues spring up when I try to make sure the agent containers have everything installed that I need to build my projects.
I want to preface this with: I'm aware that I might not be using these technologies correctly, but I don't know much better at the moment. If there are better ways of doing things, I'd love to learn.
To start, I'm using the official GoCD docker image and that works just fine.
Creating a blank agent also works just fine.
However, one of my projects requires node, yarn and webpack to build (good ol' React site).
Of course a standard agent container has nothing but the agent installed on it, so I've had a shot at using a Dockerfile to install all the tech I need to build my projects.
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
SHELL ["/bin/bash", "-c"]
USER root
RUN apt-get update
RUN apt-get install -y git curl wget build-essential ca-certificates libssl-dev htop openjdk-8-jre python python-pip
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn
# This user is created in the base agent image
USER go
ENV NVM_DIR /home/go/.nvm
ENV NODE_VERSION 10.17.0
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& npm install -g webpack webpack-cli
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
This is the current version of the file, but I've been through many, many iterations of frustration where a globally installed npm package is never on the path and thus not conveniently available.
The docker build works fine; it's just that in this iteration of the Dockerfile, webpack is not found when the agent tries running a build.
My question is:
Is a Dockerfile the right place to do things like installing yarn, node, webpack, etc.?
If so, how can I ensure everything I install through npm is actually available?
If not, what are the current best practices about this?
Any help, thoughts and anecdotes are fully welcomed and appreciated!
Cheers~!
You should run gocd-server and gocd-agent in separate containers.
Pull the images:
docker pull gocd/gocd-server:v18.10.0
docker pull gocd/gocd-agent-alpine-3.8:v18.10.0
Run them and check that everything is OK. Then open a bash shell in the agent container:
docker exec -it gocd-agent bash
Install the binaries using the Alpine package manager:
apk add --no-cache nodejs yarn
Then log out and commit the updated container to a new image. Now you have an image with the needed packages.
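The commit step can be done from the host roughly like this (the image name is an example):
docker commit gocd-agent gocd-agent-node:v18.10.0
New agent containers can then be started from gocd-agent-node:v18.10.0 instead of the stock image.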
You have two options with GoCD agents.
The first one is that the agent uses Docker and creates other containers for whatever the pipeline needs. You can have a lot of agents with this option, and the rules and definitions live in the pipeline; the agent only executes.
The second one is an agent with every program you need installed. I use this one. For this case, you write a Dockerfile with everything and generate the image for all the agents; see the sketch below.
For example, I have an agent with gcloud, kubectl, sonar-scanner and JMeter, which tests with Sonar before the deploy, then deploys to GCP, and as a last step tests with JMeter after the deploy.
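A sketch of what such an all-in-one agent image can look like (the tool selection, version and download URL are illustrative, not my exact file):
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
USER root
# Example: install curl plus a kubectl binary for deploy steps
RUN apt-get update && apt-get install -y curl && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl && \
    install -m 0755 kubectl /usr/local/bin/kubectl && \
    rm kubectl
# Drop back to the unprivileged user the base agent image defines
USER go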
I need to know how to set up Docker to implement a container that can run an Odoo 10.0 ERP environment.
I'm looking for references or setup guides, and I don't mind if you paste the CLI below. I'm currently developing on Ubuntu.
Thanks in advance!
@NaNDuzIRa This is quite simple. I suggest that when you want to learn how to do something, even if you need it very fast, you look into the "man page" of the tool you are trying to use to package your application. In this case, it is Docker.
Create a file named Dockerfile or dockerfile.
Now that you know the OS flavor you want to use, include that at the beginning of the Dockerfile.
Then add the steps that install your application in that OS.
Finally, you include the installation steps for Odoo, for which I have added a link at the bottom of this post.
#OS of the image, Latest Ubuntu
FROM ubuntu:latest
#Privilege raised to install the application or package as a root user
USER root
#Some packages that will be used for the installation
RUN apt update && apt -y install wget
#installing Odoo
RUN wget -O - https://nightly.odoo.com/odoo.key | apt-key add -
RUN echo "deb http://nightly.odoo.com/10.0/nightly/deb/ ./" >> /etc/apt/sources.list.d/odoo.list
RUN apt-get -y update && apt-get -y install odoo
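To try the image out, something along these lines should be enough (the image name is an example; 8069 is Odoo's default HTTP port, and you still need a PostgreSQL database for Odoo to connect to):
docker build -t odoo10 .
docker run -it -p 8069:8069 odoo10 odoo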
References
Docker
Dockerfile
Odoo
So currently I'm building an API in PHP as different (micro)services which run on nginx.
I've followed all the Docker fundamentals videos and went through the docs, but I still can't figure out how to implement it.
Do I need a server where I push my code to and deploy on the containers (with CI or so)?
Does the container volume get pushed to the hub as well? So my code will be in the container itself?
I think you have mixed up a bit what a container is and what an image is. An image is something you build on disk to run; a container is an image running on the computer, serving and doing things.
Do I need a server where I push my code to and deploy on the containers
No, you don't. You build an image from some base image and a Dockerfile. So make a working directory, put the Dockerfile there, copy your PHP sources there as PHPAPI, and in the Dockerfile add commands to copy the PHP code into the image, along these lines:
FROM ubuntu:15.04
MAINTAINER guidsen
# ubuntu:15.04 ships PHP 5, so install php5-cli for running scripts
RUN apt-get update && \
    apt-get install -y nginx && \
    apt-get install -y php5-cli && \
    apt-get autoremove -y; apt-get clean; apt-get autoclean
RUN mkdir -p /root/PHPAPI
COPY PHPAPI /root/PHPAPI
WORKDIR /root/PHPAPI
CMD ["php", "main.php"]
Does the container volume get pushed to the hub as well? So my code will be in the container itself?
That depends on what you use to run containers from the image. AWS, I think, requires the image to be pulled from Docker Hub, so you have to push it there first. Some other cloud providers or private clouds require you to push the image directly to them. And yes, your code would be in the image and will run in the container.
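If you go the Docker Hub route, the build and push boil down to something like this (myuser/phpapi is a placeholder for your own repository; run docker login first):
docker build -t myuser/phpapi:latest .
docker push myuser/phpapi:latest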