How to set up a TYPO3 site with docker and ddev?

I'm new to docker and I've been told ddev is a simple way to set up a local container to run a TYPO3 project.
But I'm confused. I'm not familiar with all these containers yet. How should I proceed to get a grip?

The tutorial is based on https://docs.typo3.org/m/typo3/guide-contributionworkflow/master/en-us/Appendix/SettingUpTypo3Ddev.html – but note that it is a step-by-step manual for contributing to the TYPO3 core. If you want to run your own site, the «Clone TYPO3» section doesn’t apply.
So start like this:
Install Docker (Desktop App is fine) from
https://www.docker.com/products/docker-desktop
Install ddev: https://ddev.readthedocs.io/en/latest/#installation (Mac: brew tap drud/ddev && brew install ddev)
Create a directory where you want to run the site: mkdir mysite; cd mysite
Configure ddev: run ddev config
There’s not much to choose from in the wizard. You can set the web root (e.g. public_html, so you have an extra level above it) and choose from a few CMS presets. They don’t change too much; in the case of TYPO3 the preset manages the db connection and some nginx settings.
The file .ddev/config.yaml will be created. In it you can find a lot of options.
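For example, for a TYPO3 project you could run the wizard non-interactively (a sketch; the exact flag set may differ between ddev versions, and the docroot name public is just an assumption about your project layout):
ddev config --project-type=typo3 --docroot=public --create-docroot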
Add your site (and, if necessary, run composer)
Run ddev with ddev start
Check whether mkcert is installed; if not, follow the provided instructions (this makes sure you can use self-signed certificates, at least in Firefox) (Mac: brew install mkcert nss; mkcert -install)
ddev will output some information: where you can find your site, which port, where phpMyAdmin is, etc.
ddev help gives you more commands
If you want to log into the container, use ddev ssh. This is NOT used to change files etc. The files are mirrored automatically into the container! But you can log in to install binaries etc. Let’s try that.
Some commands you may need:
What system are we running? uname -a -> linuxkit
Update available packages: sudo apt-get update
Search for a package: apt-cache search packagename
Install Pdftools (pdftotext, pdfinfo..): sudo apt-get install poppler-utils
Get the path to imagemagick (if it’s already installed): whereis convert (remember, imagemagick is a collection, convert is one of the tools)
Log out from the container, back to your system: exit
Now, how to connect to the database which lives inside the docker container?
run ddev describe and you will get the login data. It’s basically db for everything.
For TYPO3, the ddev setup provides an AdditionalConfiguration.php file that can be used. It’s missing two important settings though: systemMaintainers and the install tool password. Here’s an example:
$GLOBALS['TYPO3_CONF_VARS']['SYS']['trustedHostsPattern'] = '.*';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'] = array_merge($GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default'], [
    'dbname' => 'db',
    'host' => 'db',
    'password' => 'db',
    'port' => '3306',
    'user' => 'db',
]);
// This mail configuration sends all emails to mailhog
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport'] = 'smtp';
$GLOBALS['TYPO3_CONF_VARS']['MAIL']['transport_smtp_server'] = 'localhost:1025';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['devIPmask'] = '*';
$GLOBALS['TYPO3_CONF_VARS']['SYS']['displayErrors'] = 1;
// add these
$GLOBALS['TYPO3_CONF_VARS']['SYS']['systemMaintainers'] = [123,456];
$GLOBALS['TYPO3_CONF_VARS']['BE']['lockSSL'] = 1; // optional
$GLOBALS['TYPO3_CONF_VARS']['BE']['installToolPassword'] = '123';
But what if you want to access the database with a separate tool instead of the preconfigured phpMyAdmin? If you use Sequel Pro, simply run ddev sequelpro and your database will be opened automagically in Sequel Pro.
You can also do this manually; then you need to expose the db port so it can be reached externally. Do this in .ddev/config.yaml by adding (for example) host_db_port: "32778". Now we can point an external db management tool at that port (and store the bookmark).
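As a quick check from the host you could also use the mysql command line client (a sketch; 32778 is just the example port from above, and db/db/db are the standard ddev credentials shown by ddev describe):
mysql -h 127.0.0.1 -P 32778 -u db -pdb db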
Remember: PHP inside the container will still use the default port 3306!
Ok, here we go. ddev is already started, so make sure you’re in your project directory (the one containing .ddev/) and run ddev describe to see the parameters again. If you go to https://mysite.ddev.local, you should find everything from your webroot working.
When done, finish with ddev stop. I’m not yet sure where the databases are persisted when ddev is stopped, so you may want to get a dump first with ddev snapshot.
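A minimal sketch of that wrap-up, assuming a current ddev release (ddev snapshot stores a snapshot inside the project’s .ddev directory, and ddev export-db writes a gzipped SQL dump to stdout by default):
ddev snapshot
ddev export-db > dump.sql.gz
ddev stop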
Explore many more possibilities of ddev with ddev help.

Related

How can I update the composer version that is being used inside my ddev containers?

Currently my docker/ddev setup is running Composer version 1.10.6 2020-05-06 inside the container.
I would like to make the composer version inside the container be 1.10.7 2020-06-03.
I found one way to do it: ddev exec sudo composer self-update, but it's not permanent. The container reverts back to using 1.10.6 after a ddev restart.
In all of my searches, I can't find a way to update the files that create the container so they update composer permanently. I don't need it to attempt to update every time I start my container, I just need to be able to tell it now to permanently change over to the version I want.
An additional piece: adding RUN sudo composer self-update to the .ddev/web-build/Dockerfile makes it attempt to update every time, which is not ideal. I want to update when I'm ready, as I also need to update my test servers to match versions.
I added that command to my Dockerfile and it updated to 1.10.7. I removed the command from my Dockerfile so that it doesn't update every time I restart ddev. When I restarted ddev (without that command in the Dockerfile) it reverted composer back to 1.10.6.
Where is it getting the instructions to use that version? I need to find that and tell it to use 1.10.7 instead. I don't want it to update itself every time I do ddev restart.
It's not normally important to pin the composer version, but you can add a .ddev/web-build/Dockerfile with these contents:
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN composer self-update
And your composer will be updated during the image build process.
Randy's suggestion worked well for me; however, I've also found an alternative solution which involves less typing.
Read the project config.yaml and it explains how the Composer version can be changed.
This file is found in ~/yourprojectname/.ddev/config.yaml.
The first lines of the file are the configuration used and the remaining lines of the file explain the configuration alternatives available. Enjoy :)
# if composer_version:"" it will use the current ddev default composer release.
# It can also be set to "1", to get most recent composer v1
# or "2" for most recent composer v2.
# It can be set to any existing specific composer version.
# After first project 'ddev start' this will not be updated until it changes
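For example, to pin the version from the question, you could set this in .ddev/config.yaml (a sketch; per the comments above, any existing specific composer version should work):
composer_version: "1.10.7"
Then run ddev restart so the web container is rebuilt with that version.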

How to remove a snap application (docker) completely

I made the mistake of installing Docker via Snap... Once I realised that snap didn't have permission to run in my working directory (on a different partition), I removed it. Now I can't use docker after installing it via apt-get.
Please help.
I've done sudo snap remove docker but when I sudo apt install docker and run via docker, I get bash: /snap/bin/docker: No such file or directory
The command you are looking for is:
sudo apt install docker.io
i.e. it's docker.io, not just docker
On Ubuntu, the package docker is described as a "System tray for KDE3/GNOME2 applications", which is probably not what you want!
I had the same problem. This worked for me:
sudo snap remove docker
sudo reboot
The point is to restart the instance or the terminal. I hope this method helps.
I did the same and just restarting the instance fixed it.
The problem is simply that your bash shell caches the locations of known executables, in order to avoid having to scan through your executables search path (that is, the directories listed in $PATH) every time you type a command. Because you have removed the executable from one directory (/snap/bin) and added it to another directory (/usr/bin), this cache is now out of date. This means that it will look in the wrong location if you try to invoke the executable simply by typing docker rather than its full path.
It is possible to fix it simply by starting a new bash shell, for example open a new terminal window and type the command in there.
Alternatively if you wish to refresh the cache in the terminal session that you are already using, type:
hash -r
It is not necessary to restart your computer (although this would also work).
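A short sketch of how to inspect and refresh that cache in bash:
# show what bash has cached for this session
hash
# forget only the stale docker entry
hash -d docker
# or clear the whole cache
hash -r
# confirm docker now resolves to its new location
type docker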

Separating Docker files and application source files to optimize production environment

I have a bunch of (Ruby) scripts stored on a server. Up until now, my team has used them by opening an accessor app that launches a list of the script names, and they select the script they want to run in that instance on the files in their working folder. The scripts are run directly from the server, so updates made to the script files are automatically reflected when a user runs the script.
The scripts require a fair amount of specific dependencies, so I'm trying to move to a Docker-based workflow to eliminate the problems we encounter with incongruent computer environments. I've been able to successfully build an image with our script library and run an instance of it on my computer.
However, all of the documentation and tutorials include the application source files when building an image, so that all the files are copied over by the Dockerfile. From my understanding, this means that any time the code in the application files needs to be updated, all the users will need to rebuild the image before trying to run anything. I would very rarely ever need to make changes to the environment settings/dependencies, but the app code is changed relatively frequently, so it seems like having every user rebuild an image every single time a line of app code is changed would actually slow down everyone's workflow considerably.
My question is this: Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored? And does a new container need to be created every single time a user wants to run any one of the scripts? (The users are not tech-savvy.)
Generally you'd do this by using a Docker image instead of the checked-out tree of scripts. You can use a Docker registry to store a built copy of the image somewhere on the network; Docker Hub works for this, most large public-cloud providers have some version of this (AWS ECR, Google GCR, Azure ACR, ...), or you can run your own. The workflow for using this would generally look like
# Get any updates to the "latest" version of the image
# (can be run infrequently)
docker pull ourorg/scripts
# Actually run the script, injecting config files and credentials
docker run --rm \
  -v $PWD/config:/config \
  -v $HOME/.ssh:/config/.ssh \
  ourorg/scripts \
  some_script.rb
# Nothing in this example actually requires a local copy of the scripts
I'm envisioning a directory that has kind of a mix of scripts and support files and not a lot of organization to it. Still, you could write a simple Dockerfile that looks like
FROM ruby:2.7
WORKDIR /opt/scripts
# As of Bundler 2.1, there is no compatibility between Bundler
# versions; this must match exactly what is in Gemfile.lock
RUN gem install bundler -v 2.1.4
# Copy the scripts in and do basic installation
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
ENV PATH /opt/scripts:$PATH
# Prefix all commands with...
ENTRYPOINT ["bundle", "exec"]
# The default command to run is...
CMD ["ls"]
On the back end you'd need a continuous integration service (Jenkins is popular, if a little unwieldy; there is a large selection of cloud-hosted ones) that can rebuild the Docker image whenever there's a commit to the source repository. You can generally rig this up so that it happens automatically whenever anybody pushes anything.
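The CI job itself boils down to something like the following (a sketch; ourorg/scripts is the example image name used above, and your registry may require a docker login step first):
docker build -t ourorg/scripts:latest .
docker push ourorg/scripts:latest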
This process makes more sense if most people are just using the set of scripts and few of them are developing them. It's also a little bit difficult to discover what the scripts are (though you might be able to docker run --rm ourorg/scripts ls).
Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored?
This always strikes me as an ineffective use of Docker. You have all of the fiddly steps of your current workflow that require everyone to run a git pull or equivalent routinely, but you also have to inject the host source tree into the container. If there are OS incompatibilities in, for example, native gems in the vendor tree, you have to work around that.
# You still need to do this periodically
git pull
# And you also need to
sudo docker run \
  --rm \
  -v $PWD:/app \
  -v $HOME/config:/config \
  -v $HOME/.ssh:/config/.ssh \
  -w /app \
  ruby:2.7 \
  bundle exec ./some_script.rb
Some of these details (especially the config file and credentials) you'd have to deal with even if you did build an image; some others of the details you could improve by building an image. Inside the image you need to correct the ownership and permissions on the ssh keys and replace the $PWD/vendor tree with something the container can run, without modifying the mounted host directories.
Is it not possible to have Docker simply create the environment that a user must have to run the applications, but have the applications themselves still run directly off the server where they were originally stored?
You can build an image with all the environment already installed then mount the directory with the scripts so the container can read the scripts from the host. Something like
docker run -it --rm -v /opt/myscripts:/myscripts myimage somescript.rb
Then your image Dockerfile would end with:
WORKDIR /myscripts
ENTRYPOINT ["/usr/bin/ruby"]
And does a new container need to be created every single time a user wants to run any one of the scripts?
Of course, but a container is just an isolated process managed by Docker, so starting one per run is cheap. You could make a wrapper so the users wouldn't need to type the full docker run command.
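For example, a minimal wrapper script (a sketch; the script name is hypothetical, and the image name and mount path are taken from the example above):
#!/bin/sh
# run-script.sh: hide the docker invocation from non-technical users
exec docker run -it --rm -v /opt/myscripts:/myscripts myimage "$@"
Users would then just call ./run-script.sh somescript.rb.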

docker install with tcp enabled 0.0.0.0

Wondering if anyone knows how to install docker with tcp enabled? Something like below?
yum install docker --tcp-enabled --host 0.0.0.0
I understand I can go and manually change OPTIONS in /etc/sysconfig/docker.
I am trying to provision a server with a fresh docker install through scripts, and I do not want to log onto the box and make these changes every time a new version comes out. I also understand I can just use a script with sed/awk to do this, but I'm wondering if there is an easier way, without having to maintain a script.
My preferred solution is to use /etc/docker/daemon.json. This will let you add options to just about any install.
Note that I don't believe this will unset options that were defined on the command line; it's designed to let you use both. Those command-line options are defined by your startup script, which from your description is systemd on a RedHat/CentOS environment with environment variables injected from /etc/sysconfig/docker (you won't see this on other platforms like Debian). So if you need to remove an option, you'll still need to update your /etc/sysconfig/docker.
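A minimal sketch of such a /etc/docker/daemon.json (exposing the daemon on 0.0.0.0 without TLS is insecure, and on systemd-based installs a hosts entry here conflicts with an -H flag already present in the service definition):
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
After changing it, restart the daemon, e.g. sudo systemctl restart docker.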

Non-privileged, non-root, user to start or restart webserver server such as nginx without root or sudo

I'm using capistrano to deploy a rails web app. I want to give the deploy user on the webserver as few privileges as I can. I was able to do everything I need to do as a non-privileged user except restart the webserver.
I'm doing this on an ubuntu server, but this problem is not specific to my use case (rails, capistrano, deployment), and I've seen a lot of approaches to this problem that seem to involve poor security practices. Wondering whether others can vet my solution and advise whether it's secure?
First (not strictly necessary): I have no idea why /etc/init.d/nginx would need any (even read) access by other users. If they need to read it, they can become root (via sudo or other means). So I do:
chmod 750 /etc/init.d/nginx
Since the ownership is user root, group root (or can be set so with chown root:root /etc/init.d/nginx), only root, or a user with the proper sudo rights, can read, change or run /etc/init.d/nginx, and I'm not going to give my deploy user any such broad rights. Instead, I'm only going to give the deploy user the specific sudo right to run the control script /etc/init.d/nginx. They will not be able to run an editor to edit it, because they will only have the ability to execute that script. That means that if someone gets access to my box as the deploy user, they can restart and stop, etc., the nginx process, but they cannot do more, like change the script to do lots of other, evil things.
Specifically, I'm doing this:
visudo
visudo is a specific tool used to edit the sudoers file, and you have to have sudoer privileges to access it.
Using visudo, I add:
# Give deploy the right to control nginx
deploy ALL=NOPASSWD: /etc/init.d/nginx
Check the sudo man page, but as I understand this, the first column is the user being given the sudo rights, in this case, “deploy”. The ALL gives deploy access from all types of terminals/logins (for example, over ssh). The end, /etc/init.d/nginx, ONLY gives the deploy user root access to run /etc/init.d/nginx (and in this case, the NOPASSWD means without a password, which I need for an unattended deployment). The deploy user cannot edit the script to make it evil, they would need FULL sudo access to do that. In fact, no one can unless they have root access, in which case there's a bigger problem. (I tested that the user deploy could not edit the script after doing this, and so should you!)
What do you folks think? Does this work? Are there better ways to do this? My question is similar to this and this, but provides more explanation than I found there, sorry if it's too duplicative, if so, I'll delete it, though I'm also asking for different approaches.
The best practice is to use /etc/sudoers.d/myuser
The /etc/sudoers.d/ folder can contain multiple files that allow users to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user can run without having to specify a password. Such as
sudo service nginx restart
Note that we are running the command using sudo. Without the sudo the sudoers.d/myuser file will never be used.
An example of such a file is
myuser ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
This will allow the myuser user to call all start, stop and restart for the nginx service.
You could add another line with another service or continue to append them to the comma separated list, for more items to control.
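For example, an additional line for a second service could look like this (the service name php7.4-fpm is purely an illustration):
myuser ALL=(ALL) NOPASSWD: /usr/sbin/service php7.4-fpm restart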
Also make sure you have run the command below to secure things:
chmod 0440 /etc/sudoers.d/myuser
This is also the way I start and stop services for my own upstart scripts that live in /etc/init.
It can be worth checking that out if you want to be able to run your own services easily.
Instructions:
In all commands, replace myuser with the name of your user that you want to use to start, restart, and stop nginx without sudo.
Open sudoers file for your user:
$ sudo visudo -f /etc/sudoers.d/myuser
Editor will open. There you paste the following line:
myuser ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting ctrl+o. It will ask where you want to save, simply press enter to confirm the default. Then exit out of the editor with ctrl+x.
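To verify the rule took effect, you can list the user's sudo privileges (as root or another sudoer) and then try the restart as that user:
sudo -l -U myuser
# then, logged in as myuser:
sudo service nginx restart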
