I want to create a simple database model. I'm using the postgresql-provider package (major: 1, minor: 1). I've followed the instructions to create a model, and I've added the preparations and the resource to my Droplet object. The message I receive after running is:
No command supplied, defaulting to serve...
Database prepared
Server 'default' starting at 0.0.0.0:8080
Can someone help me with the problem?
Regarding the message No command supplied, defaulting to serve: this appears because the binary executable is expecting a 'command':
vapor run [command]
.build/[configuration]/App [command]
There are a variety of commands available, such as vapor run prepare to run your database preparations, or vapor run serve to begin the HTTP server. You can even add your own commands.
When the executable is run without any commands, it assumes you meant to run the serve command, which is the meaning of your message No command supplied, defaulting to serve.
To suppress this, simply use vapor run serve or .build/[configuration]/App serve to run your Vapor project.
Notice how the message also said Database prepared. That is because all the tables you've specified in your models already exist.
If you've made changes to your models, you'll first need to revert your changes. Vapor has a set of commands just for preparing a database.
vapor run prepare --revert
and
vapor run prepare
The --revert command will run whatever code you've put in the revert function on your models (usually people just delete the table), and then the second command will run the prepare functions and create your models' tables from scratch again.
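In other words, the typical cycle after changing a model is just those two commands back to back:
vapor run prepare --revert   # runs each model's revert(), usually dropping its table
vapor run prepare            # runs prepare() again, recreating the tables from scratch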
I'm new to Docker, and I'm not quite sure how to deal with this situation.
So I'm trying to run a docker container in order to replicate some results from a research paper, specifically from here: https://github.com/danhper/bigcode-tools/blob/master/doc/tutorial.md
(image link: https://hub.docker.com/r/tuvistavie/bigcode-tools/).
I'm using a Windows machine, and every time I try to run the docker image (via docker run -p 80:80 tuvistavie/bigcode-tools), it instantly closes. I've tried running other images, such as getting-started, and that image doesn't close instantly.
I've looked at some other potential workarounds, like using -dit, but since the instructions require setting an alias/doskey for a docker run command, using the alias and chaining it with other commands multiple times just queues up runs of the docker container, since the port is tied to the alias.
As in the instructions from the GitHub link, I'm trying to set an alias/doskey to make API calls to pull data, but I'm not getting any data, nor any errors, when performing the calls from the command prompt.
Sorry for the long question, and thank you for your time!
Going in order of the instructions:
0. I can run this, it added the image to my Docker Desktop
1.
Since I'm using a Windows machine, I had to use 'set' instead of 'export'.
I'm not exactly sure what the $ is meant for in UNIX, and whether or not it has significant meaning, but from my understanding, the whole purpose is to create a directory named 'bigcode-workspace'.
Instead of 'alias', I needed to use doskey.
Since -dit prevented my image from instantly closing, I added that in as well, but I'm not 100% sure what it means. Running docker run (...) without it resulted in the docker image instantly closing.
When it came to using the doskey alias + another command, I've tried:
(doskey macro) (another command)
(doskey macro) ^& (another command)
(doskey macro) $T (another command)
This also seemed to be using a GitHub API call, so I also added --token=(github_token), but that didn't change anything either.
Because the later steps require the data pulled from here, I am unable to progress any further.
It looks like this image is designed to be used as a command-line utility, so it should not run continuously; instead you run it via the docker-bigcode alias for your tasks.
$BIGCODE_WORKSPACE is an environment variable expansion here, so on a Windows machine it's %BIGCODE_WORKSPACE%. You might want to set this variable in Settings->System->About->Advanced System Settings, because variables set with the SET command apply to the current command prompt session only. Or you can specify the path directly, without an environment variable.
As for the alias, I would just create a batch file with the following content:
docker run -p 6006:6006 -v %BIGCODE_WORKSPACE%:/bigcode-tools/workspace tuvistavie/bigcode-tools %*
This will run the specified command, appending the batch file's parameters at the end. You might need to add double quotes (for example around the -v argument) if the BIGCODE_WORKSPACE path contains spaces.
I have a multi-process web app. The processes are contributed by different buildpacks. The default process will start the web application. I have a use case in which a given shell script should be executed before the default process invocation.
I have tried the following approach:
Create a custom-buildpack
Create a script that needs to be executed and invoke the web process in it.
Create a new process based on the above shell script by specifying it in the launch.toml definition
Make the buildpack launchable
The entrypoint.sh
#!/usr/bin/env bash
# Some fancy stuff..
#Invoke the web process
/cnb/process/web
Create launch.toml from the build script of the custom buildpack and make the entrypoint process the default one:
cat > "$layers_dir/launch.toml" << EOL
[[processes]]
type = "entrypoint"
command = "bash"
args = ["$scriptlayer/bin/entrypoint.sh"]
default = true
EOL
echo -e '[types]\nlaunch = true' > "$layers_dir/assembly-scripts.toml"
Truncated pack inspect-image output
Processes:
TYPE SHELL COMMAND ARGS
entrypoint (default) bash bash /layers/gw_assembly-scripts/assembly-scripts/bin/entrypoint.sh
task bash catalina.sh run
tomcat bash catalina.sh run
web bash catalina.sh run
Is there any better CNB native approach to achieve this use case?
You have a couple of options here:
The simplest option would be to add a .profile script to the root of your application. It's a bash script, so anything you can write in bash can be done there; however, it's primarily intended for initializing your app and setting additional environment variables.
This file runs prior to the command in your process type. I looked for documentation on this behavior, but only found it briefly mentioned in the buildpacks spec.
As an example, if I put a .profile file in the root of my application and write echo 'Hello World!' inside it, I'll see Hello World! printed before any of my process types execute.
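A minimal sketch of that file (the exported variable is made up, only to show that you can also set environment variables here):
# .profile in the root of the application; run by the launcher before the selected process type starts
echo 'Hello World!'
export GREETING_SHOWN=true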
If you want to create a buildpack, you can achieve something similar to the .profile script by having your buildpack include an exec.d binary.
This is a binary that's part of your launch image and gets run prior to any of your process types. It allows you to take actions to initialize an application and set additional environment variables dynamically before your application starts.
This mechanism is often used by buildpack authors to provide dynamic behavior at runtime based on changes to environment variables or Kubernetes service bindings. For example, turning on/off features like APM tools, debugging, and metrics.
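A minimal sketch of what such an exec.d executable could look like, assuming it's written as a bash script (the file name and variable are made up). Per the spec, anything the executable writes to file descriptor 3 as TOML key/value pairs becomes environment variables for the app process:
#!/usr/bin/env bash
# Hypothetical <layer>/exec.d/configure.sh, installed and marked executable by the buildpack
cat >&3 <<'EOF'
APM_ENABLED = "true"
EOF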
A few other miscellaneous notes.
Neither of the options above allows you to change the actual process type. The process type that will be executed is selected prior to these options (.profile and exec.d) running and you cannot influence that from within. You can only use them to run things prior to the process type running.
The buildpack spec does not allow for a buildpack to modify the process types for another buildpack. So you cannot create a buildpack that wraps or modifies process types set by another buildpack. That said, a buildpack can override the process types set by another buildpack. Buildpacks that are later in the order group will override earlier buildpacks.
From the spec: A combined processes list derived from all launch.toml files such that process types from later buildpacks override identical process types from earlier buildpacks.
With buildpacks, the entrypoint is always the launcher. The launcher is a process that runs and implements the application side of the buildpack specification. It runs .profile and exec.d binaries, sets up buildpack-provided environment variables, and eventually launches the specified process type.
If you override the entrypoint for a container then the launcher won't run and none of the things it is supposed to do will happen. Sometimes this is desired, like if you're troubleshooting, but usually you want the launcher to be the entrypoint.
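As an example of the troubleshooting case, you can still go through the launcher explicitly so that .profile, exec.d, and the buildpack environment are applied. A sketch, where the image name is a placeholder and /cnb/lifecycle/launcher is where CNB images normally keep the launcher:
docker run --rm -it --entrypoint /cnb/lifecycle/launcher my-app-image bash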
I frequently seem to have to write Dockerfiles like this (line numbers added for clarity):
1. FROM somebase
2. RUN cp /some/local/stuff /some/docker/container/path
3. RUN some-other-local-commands
4. RUN wget http://some.remote.server/some.remote.path.for.example.json
5. RUN some-other-local-commands-which-may-depend-on-the-json
On line (4), I'm fetching a remote resource. Let's assume for now that's a JSON file. It might change from time-to-time, maybe not on every build, but perhaps every few hours or days.
What this means is that every time I build my container, I want to ensure the freshest JSON file is fetched. One way to force this is to add the --no-cache parameter to my docker build command, but this forces all of the lines/layers to rebuild, including (1)-(3), where that is likely not necessary. Is there a pattern or technique to automatically 'taint' or 'mark' line (4) so that Docker knows it always has to re-run the wget (presumably this would also force a rebuild of line 5), whilst still getting the layer caching behaviour for lines (1)-(3) when Docker detects the prerequisite files haven't changed?
If the specific thing you're trying to trigger rebuilds on is the result of RUN wget ... against a specific URL, Docker does actually have native support for this.
There are two similar commands to copy files into a container. COPY only copies files from the build context. ADD can also fetch external URLs and unpack local archives (but not both at the same time). The general recommendation is to use COPY, unless you need one of the specific things ADD does differently.
So you should be able to say
ADD http://some.remote.server/some.remote.path.for.example.json .
RUN some-other-local-commands-which-may-depend-on-the-json
and the RUN command will use the Docker layer cache based on the contents of the fetched file.
If this approach doesn't work for you (maybe you need special authentication to fetch the file) you can also fetch the file outside of Docker before you run docker build, and then COPY it in. Again, it will work like any other file you COPY in, and layer caching will take effect based on whether the file has changed or not.
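A sketch of that pre-fetch variant, reusing the URL from the question (the image tag is a placeholder):
# Fetch the file outside of Docker, then COPY it in so layer caching keys off its content
wget -O some.remote.path.for.example.json http://some.remote.server/some.remote.path.for.example.json
docker build -t my-image .
# In the Dockerfile, line (4) then becomes:
#   COPY some.remote.path.for.example.json .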
I'm trying to figure out how to run an npm script using docker-compose but I only want to run it once (if the data volume hasn't yet been created -- e.g. the VERY first time I docker-compose build && docker-compose up).
The script uses the Sequelize CLI to run a seed file for the database, but if this is run more than once, it'll error in my database because of a duplicate key constraint violation.
This is because I'm using a data volume (so if it's been run before, it's already persisted).
Oh, and this needs to be run after another script has run (the migration script).
So in order:
npm run db:migrate <-- this can run every time docker-compose up is run
npm run db:seed <-- this can only run once as long as the persistent volume hasn't been created
any other scripts can now run (to start my server)
Are there any concepts like this that can be used with docker-compose?
Which database are you using?
In many cases (such as MariaDB or MongoDB) you can use the directory /docker-entrypoint-initdb.d.
Every mounted file will be executed in alphabetical order when the container starts.
To run your operation only on the first start, the first part of your script should check whether a database already exists.
EDIT: Take a look at the documentation to see which file types are supported. .sql and .js should work in most cases, but for npm I'm not sure.
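If the image you're using doesn't support an init directory, another way to apply the same "check whether it already ran" idea is a small wrapper script guarded by a marker file on the persistent volume. This is only a sketch; the paths, script names, and start command are made up to match the ordering in the question:
#!/usr/bin/env bash
# Hypothetical container entrypoint: migrate on every start, seed exactly once, then start the app
set -e
npm run db:migrate                 # safe to repeat on every docker-compose up
if [ ! -f /data/.seeded ]; then    # /data is assumed to be the persistent volume
  npm run db:seed
  touch /data/.seeded              # the marker persists across restarts via the volume
fi
exec npm start                     # replace with whatever starts your server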
I have a Solr setup with two cores. I want to schedule a core (core1, backend) for a full import frequently (e.g. every 5 minutes), then swap it with the live (core0, serving) core via a shell command through a scheduler.
For the full-import command, I am using the following shell command:
wget -o - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
This works fine. If I do a core swap from the browser by hitting
http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest update instantly in search. But if I schedule this URL as a shell command, similar to the data import, it doesn't do the swap.
Did you try with curl from the shell?
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
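The quotes are the important part: without them the shell treats each & as a background operator and truncates the URL, which would explain why the scheduled swap never happens. A cron-able sketch combining the import and the swap (host, port, and core names taken from the question; waiting for the import to finish is left as a placeholder):
#!/usr/bin/env bash
# Quoting each URL keeps '&' from being interpreted by the shell
curl -s "http://localhost:8080/solr/core1/dataimport?command=full-import"
# ...poll the dataimport status here until the full import has completed...
curl -s "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"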
There is a catch with SWAPs.
Apache Solr allows you to swap two cores around in non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, a core's name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to use the name of the other core. Usually that property is used to give the core an alternative name for when the directory naming conventions are not suitable.
The gotcha is - of course - that you still have two directories with right-looking names for the cores you see in the Admin UI. So, it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's - or even your own old - setup.