Docker stack deploy error about top-level object mappings - docker

Following along with the Docker getting started guide, https://docs.docker.com/get-started/part3/#your-first-docker-composeyml-file, I'm running into an issue. I've created the docker-compose.yml file and verified that the contents are correct:
version: "3"
services:
web:
image: joshuabelden/get-started:part2
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
I also verified that I can run my image outside of a swarm. After running the command:
docker stack deploy -c docker-compose.yml getstartedlab
I'm getting the following error:
Top-level object must be a mapping
I can't seem to find any information on the error message.

What I did to solve this was replace the double quotes with single quotes, changing
version: "3" -> version: '3'
This removed the error for me; do the same for every double-quoted value in the file.

You probably didn't save after modifying the docker-compose.yml file. So if you run 'docker compose up' without having saved, you get the error about top-level object mappings.

You have to add a "volumes" entry pointing to where your code should be mounted:
version: "3"
services:
web:
image: iconkam/get-started:part2
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
volumes:
- .:/app
ports:
- "80:80"
networks:
- webnet
networks:
webnet:

I am using Symfony 6 on Linux Ubuntu and I had the same type of problem but not with "docker stack".
I had this problem simply when running:
docker-compose up
$ docker-compose up
Top-level object must be a mapping
I searched for a long time for a solution; none of the solutions described here worked.
In fact, the problem was caused by the presence of a default file that Symfony 6 generates:
docker-compose.override.yml
Commenting out all of its contents was not enough.
Renaming the file made the "docker-compose up" command work.
Deleting the file is also a solution ;-)
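If you want to keep the override file around, another option (assuming a standard docker-compose setup) is to point docker-compose at the main file explicitly; the automatic merge of docker-compose.override.yml only happens when no -f flag is given:
docker-compose -f docker-compose.yml up   # loads only the named file, skipping the override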

This happens when Docker is running in Kubernetes mode rather than Swarm mode.
I fixed it by switching the orchestrator back to Swarm under Settings > Kubernetes.

This error can arise from the encoding of the file. Try converting the file to UTF-8 and you will be able to run the docker stack deploy command. The double quotes are not the issue here.
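For example, on Linux or macOS you could check the current encoding and convert it with the standard file and iconv tools (the UTF-16 source encoding below is only a guess; files written by some Windows editors or PowerShell redirects often end up as UTF-16 with a BOM):
file docker-compose.yml                                    # shows the detected encoding
iconv -f UTF-16 -t UTF-8 docker-compose.yml > docker-compose.utf8.yml
mv docker-compose.utf8.yml docker-compose.yml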

In my case, I wrapped all the values in double quotes except the replicas, and that fixed it. Like so:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: "image details"
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: "50M"
restart_policy:
condition: "on-failure"
ports:
- "4000:80"
networks:
- "webnet"
networks:
webnet:

Sometimes it's just the formatting in the file. I recommend selecting the text in the compose file and checking whether you have a trailing blank space somewhere.
In my case, I had a space right after the image tag.
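If you'd rather not hunt for it by eye, a quick grep can flag lines with trailing whitespace:
grep -nE ' +$' docker-compose.yml   # prints line numbers of lines ending in spaces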

Just close and reopen the terminal and run the command again.
P.S. I'm using Windows with a WSL terminal; from time to time I randomly see the same error.

Probably you forgot to save the docker compose file.

Another reason this may be happening while you're pulling your hair out:
You have the environment variable COMPOSE_FILE set, and it refers to a file that is empty or contains invalid YAML.
In my case it was set to docker-compose.combined.yml while running docker compose config > docker-compose.combined.yml, which first created an empty file to write to, and then reported the error Top-level object must be a mapping while trying to read that same empty file.
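A quick way to rule this out is to check whether the variable is set in your shell and unset it if it points at the wrong file:
echo "$COMPOSE_FILE"   # empty output means the variable is not set
unset COMPOSE_FILE     # fall back to the default docker-compose.yml lookup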

I just had this problem and the reason was that I had an empty docker-compose.override.yml

Related

Memory resources for a docker container running from docker-compose

I’m running Sonarqube on Docker compose and my file looks like this:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
- "5432:5432"
links:
- db:db
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
- SONARQUBE_JDBC_USERNAME=postgres
- SONARQUBE_JDBC_PASSWORD=sonar
volumes:
- ..../Work/tools/_SonarQube_home/conf:/opt/sonarqube/conf
# - sonarqube_data:/opt/sonarqube_new/data
- ...../Work/tools/_SonarQube_home/data:/opt/sonarqube/data
- ....../Work/tools/_SonarQube_home/extensions:/opt/sonarqube/extensions
- ..../Work/tools/_SonarQube_home/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=sonar
- POSTGRES_DB=sonar
volumes:
- .../Work/tools/_PostgreSQL_data:/var/lib/postgresql
# This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
- ..../Work/tools/_PostgreSQL_data/data:/var/lib/postgresql/data
Everything works, and that's great. At some point I noticed the Sonarqube instance had started to act slowly, so I checked docker stats. It looks like this:
| CPU   | Mem Usage / Limit |
|-------|-------------------|
| 5.39% | 1.6GiB / 1.952GiB |
How do I give the server more RAM, say 4 GB? Previously this was done with mem_limit, but in version 3 it no longer exists.
What would be a good solution for that?
Thanks!
If you are deploying to Swarm, then you can use the resources keyword in your Compose file (it's described under Resources in the file reference, https://docs.docker.com/compose/compose-file/).
So you can do something like this in Swarm:
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
If you are using Compose, then you have the option to go back to Compose file version 2.0, as described in the Compose file reference by Docker.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the deploy key and swarm mode. If you want to set resource constraints on non swarm
deployments, use Compose file format version 2 CPU, memory, and other
resource options. If you have further questions, refer to the
discussion on the GitHub issue docker/compose/4513.
I'm not familiar with the Sonarqube memory issue, but you may want to have a look at this: https://docs.sonarqube.org/display/SONARqube71/Java+Process+Memory.
In Compose file version 3, resource limits moved under a deploy: {resources: ...} key, but they are also only documented to work in Swarm mode. So to actually set them you need to switch to a mostly-compatible version 2 Compose file.
version: '2'
services:
  sonarqube:
    mem_limit: 4g
The default is for the container to be able to use an unlimited amount of memory. If you're running in an environment where Docker sits inside a Linux VM (anything based on Docker Toolbox or Docker Machine, or Docker for Mac), it's limited by the memory size of the VM.
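As a side note, newer docker-compose releases (1.20+) also accept a --compatibility flag that makes a best-effort translation of the version 3 deploy.resources limits into their non-swarm equivalents, so keeping the version 3 file may be an option too:
docker-compose --compatibility up -d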

How do I connect containers using container name with docker-compose?

I am trying to understand how containers can reach each other by container name, specifically when using a pgadmin container that connects to a postgresql container through DNS.
In docker-compose v3 I cannot link them, nor does networks: seem to be available either.
The main reason I need this is that the containers don't get a static IP address when they spin up, so in pgadmin I can't connect to the postgresql DB using the same IP every time; a DNS name (i.e. the container name) would work better.
Can we do this with docker-compose, or at least set a static IP address for a specific container?
I have tried creating a user defined network:
networks:
  backed:
and then using it in the service:
app:
  networks:
    - backend
This causes a docker-compose error regarding an invalid option of "networks" in the app.
docker-compose.yml
version: "0.1"
services:
devapi:
container_name: devapi
restart: always
build: .
ports:
- "3000:3000"
api-postgres-pgadmin:
container_name: api-postgres-pgadmin
image: dpage/pgadmin4:latest
ports:
- "5050:80"
environment:
- PGADMIN_DEFAULT_EMAIL=stuff#stuff.com
- PGADMIN_DEFAULT_PASSWORD=12345
api-postgres:
container_name: api-postgres
image: postgres:10
volumes:
- ./data:/data/db
ports:
- "15432:5432"
environment:
- POSTGRES_PASSWORD=12345
Actually, I spot one immediate problem:
version: "0.1"
Why are you doing this? The current version of the compose file format is 3.x. E.g:
version: "3"
See e.g. the Compose file version 3 reference.
The version determines which features are available. It's entirely possible that by setting version: "0.1" you are explicitly disabling support for the networks parameter. You'll note that the reference shows examples using the networks attribute.
As an aside, unless there is a particular reason you need it, I would drop the use of container_name in your compose file, since this makes it impossible to run multiple instances of the same compose file on your host.
networks are available from docker-compose file format version 3, but you are using version: "0.1" in your docker-compose file.
Change version: "0.1" to version: "3" in docker-compose.yml.
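For illustration, a minimal sketch (not the complete file from the question) of how it might look with version: "3" and a user-defined network; the backend network name is an assumption, and with this in place pgadmin can reach Postgres at the hostname api-postgres on port 5432:
version: "3"
services:
  api-postgres-pgadmin:
    image: dpage/pgadmin4:latest
    ports:
      - "5050:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=stuff@stuff.com
      - PGADMIN_DEFAULT_PASSWORD=12345
    networks:
      - backend
  api-postgres:
    image: postgres:10
    environment:
      - POSTGRES_PASSWORD=12345
    networks:
      - backend
networks:
  backend: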

Docker stack deploy command error running compose yml file docker-compose.yml

[root@d1 docker]# docker stack deploy -c docker-compose.yml getstartedlab
yaml: line 12: did not find expected key
Here is the docker-compose.yml file I am using; please see it listed below:
version: "3"
services:
web:
image: pragneshpanchal/httpdsrv
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition:on-failure
ports:
- "4000:80"
networks:
- webnet
networks:
webnet:
You are missing a space between condition: and on-failure. Please add a space and see if it works.
Did you put your YAML through a YAML validator?
YAML is really sensitive to indentation, so one space too many somewhere can give you strange behavior even with otherwise valid content.
I put your file through this validator (http://www.yamllint.com/) and it came back invalid when copy/pasted into it. Try to stick to 2 spaces per level when working on a child item, like this:
services:
  web:
    image: pragneshpanchal/httpdsrv
At the moment I can see multiple kinds of spacing, and this will always get you errors, for example in ports and the first networks entry.
The second networks key should be at the same level as services.
And lastly, as stated in Mark's answer, there is a missing space in your restart condition.
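If you prefer a local check over an online validator, docker-compose itself can parse the file and report what is wrong:
docker-compose -f docker-compose.yml config   # prints the fully-resolved file, or an error describing the problem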

Docker Deploy stack extra hosts ignored

docker stack deploy isn't respecting the extra_hosts parameter in my compose file. When I do a simple docker-compose up, the entry is created in /etc/hosts; however, when I do docker stack deploy --compose-file docker-compose.yml myapp it ignores extra_hosts. Any insights?
Below is the docker-compose.yml:
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://dbhost:5432/postgres
    ports:
      - 9002:9002
    extra_hosts:
      - "dbhost: ${DB_HOST}"
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
Running docker-compose config also displays the compose file properly.
This may not be a direct answer to the question, as it doesn't use env variables, but what I found was that the extra_hosts block in the compose file was ignored in swarm mode if entered in the format above.
I.e. this works for me and puts entries in /etc/hosts in the container:
extra_hosts:
  retisdev: 10.48.161.44
  retistesting: 10.48.161.44
whereas when entered in the other format it gets ignored when deploying as a service
extra_hosts:
  - "retisdev=10.48.161.44"
  - "retistesting=10.48.161.44"
I think it's an ordering issue. The ${} variable you've got in the compose file is resolved during YAML processing, before the service definition is created. Then stack deploy processes the .env file for use inside the container as env vars, but the YAML variable is needed first...
To fix that, you should run the docker-compose config command first to process the YAML, and then send its output to docker stack deploy.
docker-compose config will show you the output you're likely wanting.
Then use a pipe to get a one-liner:
docker-compose config | docker stack deploy -c - myapp
Note: ideally you wouldn't use extra_hosts at all, but rather put the env var directly in the connection string. Your way seems like unnecessary complexity and isn't the usual way I see a connection string built.
e.g.
version: '3'
services:
  web:
    image: user-service
    deploy:
      labels:
        - the label
    build:
      context: ./
    environment:
      DATABASE_URL: jdbc:postgresql://${DB_HOST}:5432/postgres
    ports:
      - 9002:9002
    networks:
      - wellness_swarm
    env_file:
      - .env
networks:
  wellness_swarm:
    external:
      name: wellness_swarm
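Either way, you can confirm whether the dbhost entry actually made it into the running container's /etc/hosts (the container name below is a placeholder; use docker ps to find yours):
docker ps --format '{{.Names}}'               # list running container names
docker exec <container-name> cat /etc/hosts   # look for the dbhost entry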
From https://github.com/moby/moby/issues/29133 it seems this is by design: the compose command takes into consideration the environment variables mentioned in the .env file, whereas the deploy command ignores them :( Why is that so? Pretty lame reasons!

Docker Compose file is invalid, additional properties not allowed

This is my sample docker-compose.yml file.
version: '2'
config-server:
  image: ccc/config-server
  restart: always
registration-server:
  image: ccc/registration-server
  restart: always
  ports:
    - 1111:1111
When I use docker-compose up -d I get an error:
"ERROR: The Compose file './docker-compose.yml' is invalid because:
Additional properties are not allowed ('registration-server', 'config-server' were unexpected)
You might be seeing this error because you're using the wrong Compose file version. Either specify a version of "2" (or "2.0") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
You are missing the services keyword; your corrected .yml is:
version: '2'
services:
  config-server:
    image: ccc/config-server
    restart: always
  registration-server:
    image: ccc/registration-server
    restart: always
    ports:
      - 1111:1111
This can also happen if one of the keys is misspelled. In my case, memory was spelled incorrectly.
After fixing it:
version: "3.8"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
Adding to the "this can also happen if" list: it can also happen if the global-level volumes key is indented incorrectly, e.g. not starting at column 0.
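For illustration, a hypothetical layout showing the distinction: a per-service volumes list is indented under the service, while the global volumes key starts at column 0, at the same level as services:
version: "3.8"
services:
  redis:
    image: redis:alpine
    volumes:
      - redis-data:/data   # per-service mount, indented under the service
volumes:                   # global volume definitions start at column 0
  redis-data: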
