I need to know how to set up Docker to run an Odoo 10.0 ERP environment in a container.
I'm looking for references or setup guides, and I don't mind if you paste the CLI commands below. I'm currently developing on Ubuntu.
Thanks in advance!
@NaNDuzIRa This is quite simple. When you want to learn how to do something, even if you need it very fast, I suggest looking at the man page or documentation of the tool you are using to package your application. In this case, that is Docker.
Create a file named Dockerfile (or dockerfile).
Now that you know the OS flavor you want to use, include it at the beginning of the Dockerfile.
Then add the steps to install your application on that OS.
Finally, include the installation steps for Odoo, for which I have added a link at the bottom of this post.
# OS of the image: latest Ubuntu
FROM ubuntu:latest
# Become root to install the application and packages
USER root
# Some packages that will be used for the installation
RUN apt-get update && apt-get install -y wget
# Install Odoo from its nightly package repository
RUN wget -O - https://nightly.odoo.com/odoo.key | apt-key add -
RUN echo "deb http://nightly.odoo.com/10.0/nightly/deb/ ./" >> /etc/apt/sources.list.d/odoo.list
RUN apt-get update && apt-get install -y odoo
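To try it out, a minimal build-and-run sketch might look like the following (the odoo10 tag is just an example). Note that this Dockerfile defines no CMD, so you would still need to start the odoo service yourself or add a CMD line, and Odoo also expects a reachable PostgreSQL database; its web interface listens on port 8069 by default.
# Build the image from the directory containing the Dockerfile
docker build -t odoo10 .
# Run it, publishing Odoo's default web port, and start a shell inside
docker run -it -p 8069:8069 odoo10 /bin/bash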
References
Docker
Dockerfile
Odoo
Related
Today I had to move my Domoticz/jadahl/Synology setup to one that runs in a Docker container. While this didn't cause any problems, I have one issue: Domoticz allows scripts to be executed when a switch is toggled. I have been running PHP scripts this way for years, and I was wondering whether it is possible to run a script located on the Synology from inside the Docker container. Totally new to Docker, so forgive any stupid questions.
If not, any tips on how to approach this so I can get back to my day job?
Solved this by creating my own image:
FROM domoticz/domoticz:latest
RUN apt-get update \
 && apt-get upgrade -y \
 && apt-get install -y etherwake wget curl php-cli php-xml php-soap
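To let the container reach the scripts stored on the Synology itself, one common approach is to bind-mount the directory that holds them when starting the container. A sketch, where /volume1/scripts is a hypothetical path on the Synology and domoticz-custom is the image built above:
# Build the custom image
docker build -t domoticz-custom .
# Run it with the Synology script directory mounted at /scripts
docker run -d -v /volume1/scripts:/scripts domoticz-custom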
If I understand correctly, on standard Ubuntu systems, for example, root certificates are provided by the ca-certificates package and get updated when the package itself is updated.
But how can the root certificates be updated when using Docker containers? Is there a common preferred way of doing this, or must the containers be redeployed with an up-to-date Docker image?
The containers must be redeployed with an up-to-date image.
The Docker Hub base images like ubuntu actually get updated fairly regularly, and if you look at the tag list you can see that there are several date-stamped variants of the images. So one approach that will get you pretty close to current is to always (have your CI system) pull the base image before you build.
docker pull ubuntu:18.04
docker build .
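As an aside, docker build has a --pull flag that forces the same behavior in a single step:
docker build --pull .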
If you can't do that, or if you're working from some sort of derived image that updates less frequently, you can just manually run apt-get upgrade in your Dockerfile. Doing this in the same place you're otherwise installing packages makes sense. It needs to be in the same RUN line as a matching apt-get update, and you might need some way to force Docker to not cache that update line to get current updates.
FROM python:3.8-slim
# Have an option to force rebuilds; the RUN line won't be
# cacheable if the dependency_stamp option changes
ARG dependency_stamp
ENV dependency_stamp=${dependency_stamp:-unknown}
RUN touch /dependencies.${dependency_stamp}
# Update base OS packages and install other things we need
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get upgrade --assume-yes \
&& DEBIAN_FRONTEND=noninteractive apt-get install \
--no-install-recommends --assume-yes \
...
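With this pattern, passing a fresh value for the build argument invalidates the cache from the touch line onward, so the upgrade actually reruns; for example:
# A date-stamped value busts the cache at most once per day
docker build --build-arg dependency_stamp=$(date +%Y%m%d) .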
If you find yourself doing this routinely, it can help to maintain your own base images that are upgraded to current packages but don't have anything else installed. If you go that route, you have more control over the process, and you may get smaller images by building an image FROM ubuntu and installing e.g. Python yourself, rather than building an image FROM python and then installing updates over it.
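A minimal sketch of such a base image, assuming Ubuntu 18.04 and Python 3 as the example toolchain:
FROM ubuntu:18.04
# Bring the OS packages up to date, then add only the toolchain we need
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get upgrade --assume-yes \
 && DEBIAN_FRONTEND=noninteractive apt-get install --assume-yes --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*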
I'm trying to set up a local GoCD CI server using Docker for both the base server and the agents. I can get everything running fine, but issues spring up when I try to make sure the agent containers have everything installed in them that I need to build my projects.
I want to preface this by saying I'm aware that I might not be using these technologies correctly, but I don't know much better atm. If there are better ways of doing things, I'd love to learn.
To start, I'm using the official GoCD docker image and that works just fine.
Creating a blank agent also works just fine.
However, one of my projects requires Node, Yarn, and webpack to be built (good ol' React site).
Of course, a standard agent container has nothing but the agent installed on it, so I've had a go at using a Dockerfile to install all the tech I need to build my projects.
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
SHELL ["/bin/bash", "-c"]
USER root
RUN apt-get update
RUN apt-get install -y git curl wget build-essential ca-certificates libssl-dev htop openjdk-8-jre python python-pip
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y yarn
# This user is created in the base agent image
USER go
ENV NVM_DIR /home/go/.nvm
ENV NODE_VERSION 10.17.0
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default \
&& npm install -g webpack webpack-cli
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
This is the current version of the file, but I've been through many, many frustrating iterations where a globally installed npm package is never on the PATH and thus not conveniently available.
The Docker build works fine; it's just that with this iteration of the Dockerfile, webpack is not found when the agent tries to run a build.
My question is:
Is a Dockerfile the right place to do things like installing yarn, node, webpack, etc.?
If so, how can I ensure everything I install through npm is actually available?
If not, what are the current best practices about this?
Any help, thoughts and anecdotes are fully welcomed and appreciated!
Cheers~!
You should separate gocd-server and gocd-agent into different containers.
Pull images:
docker pull gocd/gocd-server:v18.10.0
docker pull gocd/gocd-agent-alpine-3.8:v18.10.0
Build and run them, and check that everything is OK. Then open a shell in the agent container:
docker exec -it gocd-agent bash
Install the binaries using the Alpine package manager:
apk add --no-cache nodejs yarn
Then log out and commit the container to a new image (for example with docker commit). Now you have an image with the needed packages.
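A sketch of that commit step, assuming the container is named gocd-agent and using my-gocd-agent as a hypothetical image name:
# Persist the changes made inside the running container as a new image
docker commit gocd-agent my-gocd-agent:v18.10.0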
You have two options with GoCD agents.
The first one is that the agent uses Docker and creates other containers for whatever the pipeline needs. You can have a lot of agents with this option; the rules and definitions live in the pipeline, and the agent only executes them.
The second one is an agent with every program you need already installed. I use this one. For this case, you write a Dockerfile with everything on it and generate the image for all the agents.
For example, I have an agent with gcloud, kubectl, SonarQube Scanner, and JMeter, which runs the Sonar analysis before the deploy, then deploys to GCP, and as a last step runs the JMeter tests after the deploy.
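For the second option, a minimal Dockerfile sketch in the same spirit (the base image tag, tool version, and download URL are illustrative assumptions):
FROM gocd/gocd-agent-ubuntu-18.04:v19.11.0
USER root
# Bake a pipeline tool, e.g. kubectl, directly into the agent image
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSLo /usr/local/bin/kubectl https://dl.k8s.io/release/v1.19.0/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl
# Drop back to the agent user created by the base image
USER go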
I am new to using Docker, so this may be obvious to some. I am running Ubuntu 18.04 LTS.
I want to install the package "python3-protobuf" inside an image. I try to do this with the following line in the Dockerfile:
...
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3-protobuf \
<some other packages to be installed>
...
When I run 'docker build -t myimagename .', I get the message:
E: Unable to locate package python3-protobuf
There are many packages that I am installing but this is the only one that is creating a problem for me.
I know that the package name is correct because when I apt search for it in a terminal, it is found. Additionally, in the Dockerfile I do the recommended update and install steps, so it should find the package. Any ideas why it does not?
@banuj answered this question.
The package "python3-protobuf" became available from Ubuntu 18.04 and onward. The base image I took is using ubuntu 16.04.
I have two way to solve this:
Use a base image that is with ubuntu 18.04 (or later)
Use pip to install the package.
I ended up using option two.
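For reference, a minimal sketch of option two, assuming python3-pip is not already in the base image:
# Install pip, then pull protobuf from PyPI instead of apt
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip \
 && pip3 install protobuf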
I am trying to use nano/vim inside a Docker container to edit the Tomcat config files, but I am getting an error that nano/vim is an unknown command. I tried yum install, but yum is also an unknown command. How do I go about it?
The most common editor is vi. To install packages into your container, you have to know its base image. Most distros create a special file in /etc/ with all the necessary information, named something-release; you can find it with this command:
cat /etc/*release
And then use the package manager of that distro:
for Alpine it will be apk update && apk add vim.
for Ubuntu/Debian - apt update && apt install vim.
for CentOS/RedHat/Fedora - yum install vim.
etc.
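If the container runs as a non-root user, installing packages may also fail with a permissions error; in that case you can open a root shell first (the container name is an example):
# -u 0 starts the shell as root inside the running container
docker exec -u 0 -it my-container sh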