I'm new to Docker, and I've been told ddev is a simple way to set up a local container to run a TYPO3 project.
But I'm confused; I'm not familiar with all these containers yet. How should I proceed to get a grip?
The tutorial is based on https://docs.typo3.org/m/typo3/guide-contributionworkflow/master/en-us/Appendix/SettingUpTypo3Ddev.html, but note that that page is a step-by-step manual for contributing to the TYPO3 core. If you want to run your own site, the «Clone TYPO3» section doesn't apply.
So start like this:
Install Docker (Desktop App is fine) from
https://www.docker.com/products/docker-desktop
Install ddev: https://ddev.readthedocs.io/en/latest/#installation (Mac: brew tap drud/ddev && brew install ddev)
Create a directory where you want to run the site: mkdir mysite; cd mysite
Configure ddev: run ddev config
There's not much to choose from in the wizard. You can set the web root (e.g. public_html, so you have one more level above it) and choose from a few CMS presets. They don't change much; in the case of TYPO3, the preset manages the DB connection and some nginx settings.
The file .ddev/config.yaml will be created. In it you can find a lot of options.
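If you prefer to skip the wizard, newer ddev versions also accept the same answers as flags (check ddev config --help for your version). For example, for a TYPO3 project with public_html as the web root:
ddev config --project-type typo3 --docroot public_html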
Add your site (and, if necessary, run composer)
Run ddev with ddev start
Check whether mkcert is installed; if not, follow the provided instructions (this makes sure you can use self-signed certificates, at least in Firefox). (Mac: brew install mkcert nss; mkcert -install)
ddev will output some information: where to find your site, which port it runs on, where phpMyAdmin is, etc.
ddev help gives you more commands
If you want to log into the container, use ddev ssh. This is NOT for changing files; the files are mirrored into the container automatically! But you can log in to install binaries etc. Let's try that.
Some commands you may need:
What system are we running? uname -a -> linuxkit
Update available packages: sudo apt-get update
Search for a package: apt-cache search packagename
Install the PDF tools (pdftotext, pdfinfo, ...): sudo apt-get install poppler-utils
Get the path to ImageMagick (if it's already installed): whereis convert (remember, ImageMagick is a collection; convert is one of its tools)
Log out from the container, back to your system: exit
Now, how to connect to the database which lives inside the docker container?
run ddev describe and you will get the login data. It’s basically db for everything.
For TYPO3, ddev's setup provides an AdditionalConfiguration.php file that can be used. It's missing two important settings though: the system maintainers and the Install Tool password. Here's an example.
$GLOBALS['TYPO3_CONF_VARS']['SYS']['trustedHostsPattern'] = '.*';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'] = array_merge($GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'], [
'dbname' => 'db',
'host' => 'db',
'password' => 'db',
'port' => '3306',
'user' => 'db',
]);
// This mail configuration sends all emails to mailhog
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport'] = 'smtp';
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport_smtp_server'] = 'localhost:1025';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['devIPmask'] = '*';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['displayErrors'] = 1;
// add these
$GLOBALS['TYPO3_CONF_VARS']['SYS']['systemMaintainers'] = [123,456];
$GLOBALS['TYPO3_CONF_VARS']['BE']['lockSSL'] = 1; // optional
$GLOBALS['TYPO3_CONF_VARS']['BE']['installToolPassword'] = '123';
But what if you want to access the database with a separate tool instead of the preconfigured phpMyAdmin? If you use Sequel Pro, simply run ddev sequelpro and your database will be opened automagically in Sequel Pro.
You can also do this manually; then you need to expose the DB port so it can be accessed externally. Do this in .ddev/config.yaml by adding, for example, host_db_port: "32778". Now we can set up a DB management tool like this (and store the bookmark):
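For example, from the command line with the stock MySQL client (the same values go into a GUI tool's bookmark; the credentials are the db/db/db values from ddev describe, the port is the one chosen above):
mysql -h 127.0.0.1 -P 32778 -u db -pdb db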
Remember: PHP will still use the default Port 3306!
OK, here we go. ddev is already started, so make sure you're in your local directory (the one containing .ddev/) and run ddev describe to see the parameters again. If you go to https://mysite.ddev.local, you should find everything from your web root working.
When done, finish with ddev stop. I'm not yet sure how databases are persisted when ddev is stopped, so you may want to take a dump first with ddev snapshot.
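A hedged sketch of that (check ddev help snapshot for the exact syntax in your ddev version):
ddev snapshot                        # dump the current database state
ddev snapshot restore <snapshot-name>  # restore it later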
Explore many more possibilities of ddev with ddev help.
While setting up and configuring some Docker containers, I asked myself how I could automatically edit some config files inside the container after the containerized service finished installing (since the config files are created during the installation).
I have tried that using a shell file and adding it as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option, because the config contains some installation-dependent options.
The most reasonable solution I have found was running a script that edits the config manually after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
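For illustration, a minimal editconfig.sh along those lines might look like the sketch below; the config path reuses the mount example above, and the option being patched is just a placeholder:
#!/bin/sh
# Hypothetical editconfig.sh: patch an option in a config file that the
# installer generated inside the container. Path and option are placeholders.
CONFIG=/xy/myConfig.conf
sed -i "s/^some_option=.*/some_option=new_value/" "$CONFIG"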
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick up its settings from environment variables. That would be consistent with the 12-factor app methodology (see here).
As I understand your case, you need to generate the config from some kind of template when the container starts.
I see a number of options to do it:
Use a script that generates the config from a template and arguments from the command line or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint generates the config from the template and then starts the application (see the sketch after this list). To change the config, you will have to restart the service (container).
Run a service that stores the configuration and renders your configuration at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed "on the fly". Your container will then run two processes: the application and the consul-template daemon. Obviously, you will also need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
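As a sketch of the first option, an entrypoint that renders the config from a template using environment variables might look like this (envsubst comes from the gettext package; the template path, config path and application command are assumptions, not part of any official image):
#!/bin/sh
# Hypothetical entrypoint: render the config from a template, then start the app.
TEMPLATE=/templates/myConfig.conf.tpl
CONFIG=/xy/myConfig.conf

# Replace ${VAR} placeholders in the template with the container's environment variables.
envsubst < "$TEMPLATE" > "$CONFIG"

# Hand over to the real application process so it receives signals directly.
exec /usr/local/bin/myapp --config "$CONFIG"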
I'm trying to get a task defined in ConEmu to run multiple instance of Ubuntu bash using the WSL layer of Windows 10.
I followed the examples to set up a task to split the UI the way I want, and that part works great. My problem is that I'm trying to use environment variables to pass through commands to run after logging in, and I want different things to run in each panel.
Here is the task command I'm using:
set "STARTUP_CMD='gfp && make server' " & set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -cur_console:p -cur_console:d:C:\xxx\yyy
On the Linux side I have code in my ~/.bash_aliases file that looks for the STARTUP_CMD env var and tries to execute it. I found code that can pull env vars from the Windows side, which is where the 'set' commands appear to be storing things. Problem is, Windows doesn't know what to do with these, and it tries to expand them when they are read, so it all blows up.
I had this working before, but had to wipe and rebuild my machine recently, and unfortunately didn't have the working command backed up anywhere.
I thought this was the recommended way to run bash with WSL, but I would rather have a way to send stuff directly to the Linux layer as env vars (or if someone has a better way to queue up different commands for each pane, I'm all for that too). Any help would be much appreciated.
Thanks!
Of course I find the answer right after posting the question... posting it here to help others that hit the same issue (or my future self if I forget and have to wipe my machine again).
set "PATH=%ConEmuBaseDirShort%\wsl;%PATH%" & %ConEmuBaseDirShort%\conemu-cyg-64.exe --wsl -eSTARTUP_CMD="gfp && make server" -cur_console:p -cur_console:d:C:\xxx\yyy
You just have to prefix the env var you want with -e and pass it as a param to conemu-cyg. It goes through without any modification on the Windows side and you can read it just like any other env var on the Linux side.
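For completeness, the Linux-side consumer can stay very simple; a sketch of the ~/.bash_aliases part (the variable name matches the command above, everything else is an assumption):
# Run the command handed over by ConEmu, if any.
if [ -n "$STARTUP_CMD" ]; then
  eval "$STARTUP_CMD"
fi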
I am using Chocolatey to install Docker.
When I originally run the following command:
choco install docker
and try to run the "docker --version" command, everything goes as expected.
Docker version 17.10.0-ce, build f4ffd25
When I try to run the "dockerd" command, it shows as not being part of my PATH.
'dockerd' is not recognized as an internal or external command,
Looking at the PATH variable and navigating to where Chocolatey stores the executables, dockerd.exe is not present while docker.exe is. Am I missing something to make Chocolatey add dockerd?
The reason I need the dockerd executable is so that I can limit the number of concurrent downloads, as shown in the Docker documentation.
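For reference, that limit is passed on the dockerd command line (or via the max-concurrent-downloads key in daemon.json); the value 3 below is just an example:
dockerd --max-concurrent-downloads 3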
This is a decision that the package maintainer(s) for Docker have made. If you have a look here:
https://chocolatey.org/packages/docker#files
You will see that there is a dockerd.exe.ignore file. This file instructs Chocolatey to explicitly not create what is referred to as a shim file, which would make dockerd work from the command line in the same way docker does.
My best suggestion would be to reach out to the maintainers of that package to ask them why this was done, and to perhaps get it changed. You can do this by clicking on the Contact Maintainers link on this page:
https://chocolatey.org/packages/docker
As a workaround, you could add the following path to your Windows PATH environment variable, which would allow it to work:
C:\ProgramData\chocolatey\lib\docker\tools\docker
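For the current command prompt session, that can be done like this (to make it permanent, use the System Properties dialog instead):
set PATH=%PATH%;C:\ProgramData\chocolatey\lib\docker\tools\docker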
The firewall I'm behind is running Microsoft ISA Server in NTLM-only mode. Has anyone had success getting their Ruby gems to install/update via the Ruby SSPI gem or another method?
... or am I just being lazy?
Note: rubysspi-1.2.4 does not work.
This also works for "igem", part of the IronRuby project
For the Windows OS, I used Fiddler to work around the issue.
Install/Run Fiddler from www.fiddler2.com
Run gem:
$ gem install --http-proxy http://localhost:8888 $gem_name
I wasn't able to get mine working from the command-line switch but I have been able to do it just by setting my HTTP_PROXY environment variable. (Note that case seems to be important). I have a batch file that has a line like this in it:
SET HTTP_PROXY=http://%USER%:%PASSWORD%@%SERVER%:%PORT%
I set the four referenced variables before I get to this line obviously. As an example if my username is "wolfbyte", my password is "secret" and my proxy is called "pigsy" and operates on port 8080:
SET HTTP_PROXY=http://wolfbyte:secret@pigsy:8080
You might want to be careful how you manage that because it stores your password in plain text in the machine's session but I don't think it should be too much of an issue.
This totally worked:
gem install --http-proxy http://COMPANY.PROXY.ADDRESS $gem_name
I've been using cntlm (http://cntlm.sourceforge.net/) at work. Configuration is very similar to ntlmaps.
gem install --http-proxy http://localhost:3128 _name_of_gem_
Works great, and also allows me to connect my Ubuntu box to the ISA proxy.
Check out http://cntlm.wiki.sourceforge.net/ for more information
I tried some of these solutions, and none of them worked. I finally found a solution that works for me:
gem install -p http://proxy_ip:proxy_port rails
using the -p parameter to pass the proxy. I'm using Gem version 1.9.1.
Create a .gemrc file (either /etc/gemrc or ~/.gemrc, or, for example for Chef's embedded gem, /opt/chef/embedded/etc/gemrc) containing:
http_proxy: http://proxy:3128
Then you can gem install as usual.
This solved my problem perfectly:
gem install -p http://proxy_ip:proxy_port compass
You might need to add your user name and password to it:
gem install -p http://[username]:[password]@proxy_ip:proxy_port compass
If you are having problems getting authenticated through your proxy, be sure to set the environment variables in exactly the format below:
set HTTP_PROXY=some.proxy.com
set HTTP_PROXY_USER=user
set HTTP_PROXY_PASS=password
The user:password@ syntax doesn't seem to work, and there are also some badly named environment variables floating around on Stack Overflow and various forum posts.
Also be aware that it can take a while for your gems to start downloading. At first I thought it wasn't working but with a bit of patience they started downloading as expected.
Quick answer: add the proxy configuration parameter to both install and update:
gem install --http-proxy http://host:port/ package_name
gem update --http-proxy http://host:port/ package_name
I tried all the above solutions, but none of them worked. If you're on Linux/macOS, I highly suggest using tsocks over an SSH tunnel. What you need to get this setup working is a machine you can log into via SSH, plus a program called tsocks installed locally.
The idea here is to create a dynamic tunnel via SSH (a SOCKS5 proxy). We then configure tsocks to use this tunnel and start our applications through it, in this case:
tsocks gem install ...
or to account for rails 3.0:
tsocks bundle install
A more detailed guide can be found under:
http://blog.byscripts.info/2011/04/bypass-a-proxy-with-ssh-tunnel-and-tsocks-under-ubuntu/
Despite being written for Ubuntu, the procedure should be applicable to all Unix-based machines. An alternative to tsocks for Windows is FreeCap (http://www.freecap.ru/eng/). A viable SSH client on Windows is PuTTY.
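For illustration, the tunnel itself is just a dynamic SSH forward; the host name and port below are placeholders:
# Open a SOCKS5 proxy on localhost:1080 through a host you can reach via SSH.
ssh -D 1080 -f -C -q -N user@reachable-host
# Point tsocks at 127.0.0.1:1080 in /etc/tsocks.conf, then prefix your commands:
tsocks gem install rails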
Posts abound regarding this topic, and to help others save hours of trying different solutions, here is the final result of my hours of tinkering.
The three solutions around the internet at the moment are:
rubysspi
apserver
cntlm
rubysspi only works from a Windows machine, AFAIK, as it relies on the Win32Api library. So if you are on a Windows box trying to run through a proxy, this is the solution for you. If you are on a Linux distro, you're out of luck.
apserver seems to be a dead project. The link listed in the posts I've seen leads to a 404 page on SourceForge, and a search for "apserver" on SourceForge returns nothing.
The sourceforge link for cntlm that I've seen redirects to http://cntlm.awk.cz/, but that times out. A search on sourceforge turns up this link, which does work: http://sourceforge.net/projects/cntlm/
After downloading and configuring cntlm I have managed to install a gem through the proxy, so this seems to be the best solution for Linux distros.
A workaround is to install http://web.archive.org/web/20060913093359/http://apserver.sourceforge.net:80/ on your local machine, configure it and run gems through this proxy.
Install: Just download apserver 097 (and not the experimental 098!) and unpack.
Configure: Edit the server.cfg file and put the values for your MS proxy in PARENT_PROXY and PARENT_PROXY_PORT. Enter the values for DOMAIN and USER. Leave PASSWORD blank (nothing after the colon) – you will be prompted when launching it.
Run apserver: cd aps097; python main.py
Run gems: gem install --http-proxy http://localhost:5865/ library
I am working behind a proxy and just installed SASS by downloading directly from http://rubygems.org.
I then ran sudo gem install [path/to/downloaded/gem/file]. I cannot say this will work for all gems, but it may help some people.
This worked for me in a Windows box:
set HTTP_PROXY=http://server:port
set HTTP_PROXY_USER=username
set HTTP_PROXY_PASS=userpassword
set HTTPS_PROXY=http://server:port
set HTTPS_PROXY_USER=username
set HTTPS_PROXY_PASS=userpassword
I have a batch file with these lines that I use to set environment values when I need them.
The trick, in my case, was setting the HTTPS_PROXY variables. Without them, I always got a 407 proxy authentication error.
If you are on a *nix system, use this:
export http_proxy=http://${proxy.host}:${port}
export https_proxy=http://${proxy.host}:${port}
and then try:
gem install ${gem_name}
rubysspi-1.3.1 worked for me on Windows 7, using the instructions from this page:
http://www.stuartellis.eu/articles/installing-ruby/
If you want to use SOCKS5 proxy, you may try rubygems-socksproxy https://github.com/gussan/rubygems-socksproxy.
It works for me on OSX 10.9.3.
If you are behind a proxy, you can navigate to the Ruby downloads page and click Download, which will download the specified update (or gem) to a desired location.
Next, via the Ruby command line, navigate to the downloaded location using: pushd [directory]
e.g.: pushd D:\Setups
Then run the following command: gem install [update name] --local
e.g.: gem install rubygems-update --local
Tested on Windows 7 with Ruby update version 2.4.1.
To check, use the following command: ruby -v
Rather than editing batch files (which you may have to do for other Ruby gems, e.g. Bundler), it's probably better to do this once, and do it properly.
On Windows, behind my corporate proxy, all I had to do was add the HTTP_PROXY environment variable to my system.
Start -> right click Computer -> Properties
Choose "Advanced System Settings"
Click Advanced -> Environment Variables
Create a new System variable named "HTTP_PROXY", and set the Value to your proxy server
Reboot or log out and back in again
Depending on your authentication requirements, the HTTP_PROXY value can be as simple as:
http://proxy-server-name
Or more complex, as others have pointed out:
http://username:password@proxy-server-name:port-number
For anyone tunnelling with SSH: you can create a version of the gem command that uses a SOCKS proxy:
Install socksify with gem install socksify (you'll need to be able to do this step without proxy, at least)
Copy your existing gem exe
cp $(command which gem) /usr/local/bin/proxy_gem
Open it in your favourite editor and add this at the top (after the shebang)
if ENV['SOCKS_PROXY']
  require 'socksify'
  host, port = ENV['SOCKS_PROXY'].split(':')
  TCPSocket.socks_server = host || 'localhost'
  TCPSocket.socks_port = port ? port.to_i : 1080
end
Set up your tunnel
ssh -D 8123 -f -C -q -N user@proxy
Run your gem command with proxy_gem
SOCKS_PROXY=localhost:8123 proxy_gem push mygem