When I configure Nextcloud (which runs in a Docker container) using environment variables, I can't visit the site afterwards and have to configure it manually by connecting to the container with bash.
How can I solve this problem or make it automatic without creating my own Docker image?
The environment variable only gets picked up and applied to the config when building a brand new instance. If you've already created a config.php file which is mapped in that volume, that environment variable will not override it.
If you want to keep your existing config intact, you need to SSH into your NAS and go to your Nextcloud Docker folder and find /config/config.php. For me this was located at: /docker/nextcloud/config/www/nextcloud/config
Then type: sudo nano config.php
Quick vi refresher (if you use vi instead): i to insert, Esc to leave insert mode, and :wq to write and quit; in this instance you may need :wq!
To add a new domain, append a new entry to the trusted_domains PHP array:
'trusted_domains' =>
  array (
    0 => '192.168.0.29',
    1 => 'cloud.example.com',
  ),
Reference: https://help.nextcloud.com/t/howto-add-a-new-trusted-domain/26
That sounds like an issue with Trusted Domains.
If you have a look at their repository readme at https://github.com/nextcloud/docker you will see an environment variable called NEXTCLOUD_TRUSTED_DOMAINS which you can set in your Docker environment.
Alternatively, you will find it in the {app}/config/config.php
The default value set for it, in my experience, is only 'localhost', which enables connecting to Nextcloud from localhost at the very least.
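For example, with the official image you can supply the list (space-separated) at run time; as noted above, this only takes effect when the instance is first created. A minimal sketch, where the port mapping and domains are placeholders:

docker run -d -p 8080:80 \
  -e NEXTCLOUD_TRUSTED_DOMAINS="192.168.0.29 cloud.example.com" \
  nextcloud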
Hope this helps.
I have seen some similar questions, but none of them appear to solve my problem. I want to add a user to a Docker container, and in my Dockerfile I define the username with:
ARG USERNAME="some_user"
Instead, I want the username to be the current user's computer username, as obtained by running the command whoami in the local terminal.
So what I would like to have is something like
ARG USERNAME=$(whoami)
This $(whoami) should be obtained from the local system environment, and not from the docker container.
Is there a way to do this in Dockerfiles? I have thought of .env and docker-compose solutions, but as far as I know these also require each user to set their own username.
There is no integrated way to execute arbitrary commands on the host directly outside of a container using just docker build / docker-compose build.
So to execute an arbitrary command to get or generate the required information, you'll need to provide a custom script or use another build system that calls docker/docker-compose with the respective flags, or perhaps generate the .env file from a template or interactively.
If you only need the current user name you may want to use the $USER / $LOGNAME environment variables that are set by the system in many default configurations. But since these are just normal environment variables their values may be incorrect / empty / manually changed by the user, see this question.
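If a wrapper script is acceptable, it can evaluate whoami on the host and forward the result as a build argument; a minimal sketch, where the image name is a placeholder:

#!/bin/sh
# build.sh - runs on the host, so $(whoami) is expanded here, not in the container.
# The value overrides the default declared by ARG USERNAME in the Dockerfile.
docker build --build-arg USERNAME="$(whoami)" -t myimage .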
I am trying to pass environment variables to be read from an XML file inside a Docker container running a WildFly app service, hosted inside a RHEL 7 image.
What I've done so far:
I've created an environment file with key-value pairs, for example: FILESERVICE_MAX_POOL_SIZE=5
I am running docker by referencing the environment file: docker run -d --env-file ./ENV_VARIABLES <myImage>
In the Dockerfile I copy the xml template I need: COPY dockerfiles/standalone.xml /opt/wildfly/standalone/configuration/standalone.xml
Inside the XML template I'm trying to reference the environment variable: <max-pool-size>${env.FILESERVICE_MAX_POOL_SIZE}</max-pool-size>
I can see those environment variables inside the running container as root, but not as the wildfly user which needs them. How can I make an environment variable visible to a specific user other than root?
Clearly I'm doing something fundamentally wrong here, I'm just not sure what.
Thanks in advance for your help.
Problem solved: WildFly couldn't see the variables because in my startup script I didn't add the -E flag to sudo to preserve environment variables.
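In other words, the startup script has to preserve the environment when dropping privileges, roughly like this (a sketch; the wildfly user and the standalone.sh path are taken from the setup described above and may differ):

#!/bin/sh
# Without -E, sudo strips the environment, so ${env.FILESERVICE_MAX_POOL_SIZE}
# in standalone.xml would resolve to nothing for the wildfly user.
sudo -E -u wildfly /opt/wildfly/bin/standalone.sh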
While setting up and configuring some Docker containers, I asked myself how I could automatically edit config files inside the container after the containerized service has finished installing (since the config files are created during installation).
I have tried this using a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found is running a script that edits the config manually after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
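For reference, such an editconfig.sh can be as small as a few sed calls; the option name and config path below are hypothetical, just to illustrate the shape:

# editconfig.sh - piped into the container via:
#   docker exec -i mycontainer sh < editconfig.sh
# Replace the key and path with the real, installation-dependent values.
sed -i 's|^someOption=.*|someOption=newValue|' /xy/myConfig.conf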
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify them). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick its settings from environment variables. That would be consistent with the twelve-factor app methodology; see here.
As I understand your case, you need to start your container by creating the config from some template.
I see a number of options to do it:
Use a script that generates the config from a template plus arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and then starts the application. To change the config, you will have to restart the service (container). A sketch of such an entrypoint follows this list.
Run a service that stores the configuration and renders your config at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed "on the fly". Your container will have two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
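As an illustration of the first option, here is a minimal entrypoint sketch using envsubst (from gettext) instead of a full template engine; the template path, target path, and handoff to CMD are assumptions:

#!/bin/sh
# entrypoint.sh - render the config from environment variables, then
# hand control over to the actual service process (passed as CMD).
set -e
# envsubst replaces ${VAR} references in the template with their current values.
envsubst < /templates/myConfig.conf.template > /xy/myConfig.conf
exec "$@"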
I've successfully run the Ignite Docker image with the parameter CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-cache.xml.
But I want to enable persistence and use a custom config file, passed in instead of that remote CONFIG_URI.
Is there a way to pass a config file from the host with the docker run command?
On your docker run command, you can use the -v parameter (or the equivalent in the Dockerfile) to map a local directory into the container.
Then you'd move your configuration file in there and set your CONFIG_URI to point to that, something like CONFIG_URI=file:///opt/etc/ignite.xml.
Of course you'll need to create a volume of some kind for the persistent files; you don't want to be storing them inside the container.
As antkr notes, if you're using Kubernetes, you can use a config map and StatefulSets, but you'd still need to set CONFIG_URI in the same way.
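Putting that together, the run command might look something like this (a sketch; the host paths and the container-side storage path are placeholders, and the storage path must match the work/storage directory configured in ignite.xml):

docker run -d \
  -v /host/ignite/config:/opt/etc \
  -v /host/ignite/storage:/storage \
  -e CONFIG_URI=file:///opt/etc/ignite.xml \
  apacheignite/ignite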
Since you are going to use persistence, configure a persistent volume according to the following documentation:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
Mount it to your pod and read the configuration file from the volume using the CONFIG_URI parameter.
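A hypothetical sketch of that flow with kubectl, where the pod name and paths are placeholders:

# Copy the custom config onto the volume mounted into the pod:
kubectl cp ignite-config.xml ignite-0:/opt/ignite/work/ignite-config.xml
# Then, in the pod spec, point Ignite at it:
#   env:
#   - name: CONFIG_URI
#     value: file:///opt/ignite/work/ignite-config.xml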
So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled, is that correct?
According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say it explicitly, like Python shown here [2].
Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; then all my environment-specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder named .ebextensions
Create a .config file in that folder
Fill in the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with the Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk
To see the result, execute the command docker inspect CONTAINER_ID inside the EC2 instance, and you will see the environment variable.
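To narrow that output down to just the environment, docker inspect's --format flag helps; CONTAINER_ID is a placeholder as above:

docker inspect --format '{{ .Config.Env }}' CONTAINER_ID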
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach: instead of exporting the vars to the shell, I used the ebextension to create a .env file, which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file, or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
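If the consumer is a shell entrypoint rather than Python, sourcing the file also works, since the get-config command above writes plain KEY="value" lines; a sketch assuming a POSIX sh entrypoint:

#!/bin/sh
set -a                      # auto-export every variable assigned below
. /var/app/current/.env     # the KEY="value" lines written by the ebextension
set +a
exec "$@"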
I don't think the docs are a miss, as Rohit Banga's answer suggests. Though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open: you specify whatever variables you want. They make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc., because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.