How to force pull Docker images in DC/OS?

For Docker orchestration, we are currently using Mesos and Chronos to schedule job runs.
Now we have dropped Chronos and are trying to set this up via DC/OS, using Mesos and Metronome.
In Chronos, I could activate force pulling a Docker image via its YAML config:
container:
  type: docker
  image: registry.example.com:5001/the-app:production
  forcePullImage: true
Now, in DC/OS using Metronome and Mesos, I also want to force it to always pull the up-to-date image from the registry instead of relying on a cached version.
Yet the JSON config for docker seems limited:
"docker": {
"image": "registry.example.com:5001/the-app:production"
},
If I push a new image to the production tag, the old image is still used for the job run on Mesos.
Just for the sake of it, I tried adding the flag:
"docker": {
"image": "registry.example.com:5001/my-app:staging",
"forcePullImage": true
},
yet on the PUT request, I get an error:
http PUT example.com/service/metronome/v1/jobs/the-app < app-config.json
HTTP/1.1 422 Unprocessable Entity
Connection: keep-alive
Content-Length: 147
Content-Type: application/json
Date: Fri, 12 May 2017 09:57:55 GMT
Server: openresty/1.9.15.1
{
  "details": [
    {
      "errors": [
        "Additional properties are not allowed but found 'forcePullImage'."
      ],
      "path": "/run/docker"
    }
  ],
  "message": "Object is not valid"
}
How can I make DC/OS always pull the up-to-date image? Or do I have to update the job definition with a unique image tag for every build?

The Metronome API doesn't support this yet; see the job spec schema: https://github.com/dcos/metronome/blob/master/api/src/main/resources/public/api/v1/schema/jobspec.schema.json

As this is currently not possible, I created a feature request asking for it.
In the meantime, I built a workaround to update the image tag for all registered jobs, using TypeScript and the request-promise library.
Basically, I fetch all jobs from the Metronome API, filter them by an id starting with my app name, change the Docker image, and then issue a PUT request to the Metronome API for each changed job to update its config.
Here's my solution:
const targetTag = 'stage-build-1501'; // currently hardcoded, should be set via jenkins run
const app = 'my-app';
const dockerImage = `registry.example.com:5001/${app}:${targetTag}`;

interface JobConfig {
  id: string;
  description: string;
  labels: object;
  run: {
    cpus: number;
    mem: number;
    disk: number;
    cmd: string;
    env: any;
    placement: any;
    artifacts: any[];
    maxLaunchDelay: 3600;
    docker: { image: string };
    volumes: any[];
    restart: any;
  };
}

const rp = require('request-promise');

const BASE_URL = 'http://example.com';
const METRONOME_URL = '/service/metronome/v1/jobs';
const JOBS_URL = BASE_URL + METRONOME_URL;

const jobsOptions = {
  uri: JOBS_URL,
  headers: {
    'User-Agent': 'Request-Promise',
  },
  json: true,
};

const createJobUpdateOptions = (jobConfig: JobConfig) => {
  return {
    method: 'PUT',
    body: jobConfig,
    uri: `${JOBS_URL}/${jobConfig.id}`,
    headers: {
      'User-Agent': 'Request-Promise',
    },
    json: true,
  };
};

rp(jobsOptions).then((jobs: JobConfig[]) => {
  // I don't want to change the image of all jobs, only those of the same application
  const filteredJobs = jobs.filter((job: any) => {
    return job.id.includes('job-prefix.');
  });
  filteredJobs.forEach((job: JobConfig) => {
    job.run.docker.image = dockerImage;
  });
  filteredJobs.forEach((updateJob: JobConfig) => {
    console.log(`${updateJob.id} to be updated!`);
    const requestOption = createJobUpdateOptions(updateJob);
    rp(requestOption).then((response: any) => {
      console.log(`Updated schedule for ${updateJob.id}`);
    });
  });
});

I had a similar problem where my image repo was authenticated and I could not provide the necessary auth info using the Metronome syntax. I worked around this by specifying two commands instead of directly referencing the image:
docker --config /etc/.docker pull
docker --config /etc/.docker run
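In a Metronome job definition, that approach would go into run.cmd instead of run.docker. A sketch of what that might look like (the image name is taken from the question for illustration; it assumes the Docker CLI and the auth config at /etc/.docker are available on the agent):

{
  "id": "the-app",
  "run": {
    "cpus": 0.1,
    "mem": 128,
    "disk": 0,
    "cmd": "docker --config /etc/.docker pull registry.example.com:5001/the-app:production && docker --config /etc/.docker run --rm registry.example.com:5001/the-app:production"
  }
}

A side effect is that the explicit pull always runs, which also addresses the original force-pull problem: the tag is re-pulled before each run.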

I think "forcePullImage": true should work within the docker dictionary.
Check https://mesosphere.github.io/marathon/docs/native-docker.html and look at the "force pull" option.
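For reference, that option in a Marathon app definition looks like the snippet below; note, though, that the question is about Metronome, whose schema rejects the property (a sketch based on the Marathon docs):

{
  "id": "/the-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com:5001/the-app:production",
      "forcePullImage": true
    }
  }
}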

Related

NEXTJS 404 in deployment to docker, but not in dev environment

For some reason, I'm getting a 404 on a route that actually works locally.
This is my next config:
const nextConfig = {
  reactStrictMode: true,
  experimental: {
    appDir: true,
    output: 'standalone',
  }
}
package.json: "next": "13.1.1"
When the app loads, I get this error:
Invalid next.config.js options detected:
  - The value at .experimental has an unexpected property, output
What can I do? I'm using appDir, and again, it's working locally.
My Docker image is based on FROM node:16-alpine.
Thanks
You need to place output not in experimental, but at the first level of module.exports:
module.exports = {
  output: 'standalone',
  experimental: {
    appDir: true
  },
}
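As a side note for the Docker deployment: with output: 'standalone', Next writes a self-contained server to .next/standalone, and the static assets have to be copied in separately. A sketch of the final stage of a multi-stage Dockerfile, following the pattern from the Next.js docs (the builder stage name and /app paths are illustrative):

FROM node:16-alpine
WORKDIR /app
# the self-contained server produced by `next build` with standalone output
COPY --from=builder /app/.next/standalone ./
# static assets are not included in the standalone output
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
CMD ["node", "server.js"]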

Nuxt Proxy Issue when deploying using Docker (Github Action)

I am trying to deploy my Nuxt app using GitHub Actions. I tried to run the app built in a Docker container in my local environment, but it doesn't work: when I open the application in a browser, I see nothing but the background image I set using CSS.
I believe it might be an issue related to the proxy or serverMiddleware that I set up in nuxt.config.js.
The serverMiddleware is for managing the session, and the proxy server is used to avoid CORS issues when getting data from an external API server.
nuxt.config.js
proxy: {
  '/api/v1/': {
    target: 'http://192.168.219.101:8082',
    pathRewrite: {'^/api/v1/cryptolive/': '/'},
    changeOrigin: true,
  },
}
serverMiddleware: [
  // bodyParser.json(),
  session({
    secret: 'super-secret-key',
    resave: false,
    saveUninitialized: false,
    cookie: {
      maxAge: 60000,
    },
  }),
  '~/apis',
],
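One thing worth checking when this config moves into a container: the proxy target is a hardcoded LAN address, which may not be reachable from inside Docker. A sketch of the same proxy block with the target made configurable per environment (API_PROXY_TARGET is an illustrative variable name, not part of the original setup):

proxy: {
  '/api/v1/': {
    // fall back to the original LAN address when no env var is set
    target: process.env.API_PROXY_TARGET || 'http://192.168.219.101:8082',
    pathRewrite: {'^/api/v1/cryptolive/': '/'},
    changeOrigin: true,
  },
}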

Terraform docker_registry_image error: 'unable to get digest: Got bad response from registry: 400 Bad Request'

I am trying to use CDKTF (CDK for Terraform) to build and push a Docker image to AWS ECR. I have decided to use the Terraform Docker provider for it. Here is my code:
class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    const usProvider = new aws.AwsProvider(this, "us-provider", {
      region: "us-east-1",
      defaultTags: {
        tags: {
          Project: "CV",
          Name: "CV",
        },
      },
    });

    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      provider: usProvider,
      repositoryName: "cv",
      forceDestroy: true,
    });

    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(
      this,
      "auth-token",
      {
        provider: usProvider,
      }
    );

    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repo.repositoryUri,
          username: authToken.userName,
          password: authToken.password,
        },
      ],
    });

    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repo.repositoryUri,
      buildAttribute: {
        context: __dirname,
      },
    });
  }
}
But during deployment, I get this error: Unable to create image, image not found: unable to get digest: Got bad response from registry: 400 Bad Request. Yet it is still able to push to the registry; I can see the image in the AWS console.
I can't seem to find any mistake in my code, and I don't understand the error. I hope you can help.
The Terraform execution model is built so that Terraform first gathers all the information it needs about the current state of your infrastructure, and then, in a second step, calculates the plan of changes that need to be applied to bring the current state to the one you described through your configuration.
This poses a problem here: the provider you declare uses information that is only available once the plan is being put into action; there is no repo URL / auth token before the ECR repo has been created.
There are different ways to solve this problem. You can make use of the cross-stack references / multi-stack feature and split the ECR repo creation into a separate TerraformStack that deploys beforehand; you can pass a value from that stack into your other stack and use it to configure the provider, as sketched below.
Another way to solve this is by building and pushing your image outside of the Terraform provider, through the null provider with a local provisioner, as is done in the docker-aws-ecs E2E example.
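A minimal sketch of the multi-stack variant (class and construct names are illustrative; the aws and docker provider bindings are assumed to be the same imports as in the question):

import { App, TerraformStack } from "cdktf";
import { Construct } from "constructs";

// Stack 1: creates the repo; deployed first.
class RepoStack extends TerraformStack {
  public readonly repositoryUri: string;

  constructor(scope: Construct, name: string) {
    super(scope, name);
    new aws.AwsProvider(this, "us-provider", { region: "us-east-1" });
    const repo = new aws.ecr.EcrpublicRepository(this, "docker-repo", {
      repositoryName: "cv",
      forceDestroy: true,
    });
    this.repositoryUri = repo.repositoryUri;
  }
}

// Stack 2: receives the repo URI as a cross-stack reference, so the
// Docker provider is only configured once the repo actually exists.
class ImageStack extends TerraformStack {
  constructor(scope: Construct, name: string, repositoryUri: string) {
    super(scope, name);
    new aws.AwsProvider(this, "us-provider", { region: "us-east-1" });
    const authToken = new aws.ecr.DataAwsEcrpublicAuthorizationToken(this, "auth-token", {});
    new docker.DockerProvider(this, "docker-provider", {
      registryAuth: [
        {
          address: repositoryUri,
          username: authToken.userName,
          password: authToken.password,
        },
      ],
    });
    new docker.RegistryImage(this, "image-on-public-ecr", {
      name: repositoryUri,
      buildAttribute: { context: __dirname },
    });
  }
}

const app = new App();
const repoStack = new RepoStack(app, "repo");
new ImageStack(app, "image", repoStack.repositoryUri);
app.synth();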

how do you properly pass a command to a container when using "azure-arm-containerinstance" from azure node sdk?

Just looking for some guidance on how to properly invoke a command when a container starts, when creating it via the azure-arm-containerinstance package. There is very little documentation on this specific part and I wasn't able to find any examples out there on the internet.
return client.containerGroups
  .beginCreateOrUpdate(process.env.AZURE_RESOURCE_GROUP, containerInstanceName, {
    tags: ['server'],
    location: process.env.AZURE_INSTANCE_LOCATION,
    containers: [
      {
        image: process.env.CONTAINER_IMAGE,
        name: containerInstanceName,
        command: ["./some-executable", "?Type=Fall?"],
        ports: [
          {
            port: 1111,
            protocol: 'UDP',
          },
        ],
        resources: {
          requests: {
            cpu: Number(process.env.INSTANCE_CPU),
            memoryInGB: Number(process.env.INSTANCE_MEMORY),
          },
        },
      },
    ],
    imageRegistryCredentials: [
      {
        server: process.env.CONTAINER_REGISTRY_SERVER,
        username: process.env.CONTAINER_REGISTRY_USERNAME,
        password: process.env.CONTAINER_REGISTRY_PASSWORD,
      },
    ],
  });
Specifically the part below: is this correct? Just an array of strings? Are there any good examples anywhere (tried both Google and Bing)? Is this the equivalent of Docker's CMD ["command","argument"]?
command: ["./some-executable","?Type=Fall?"],
With your issue, most of what you have done is right, but there are a few points to pay attention to.
One is that the command property will overwrite the CMD setting in the Dockerfile. So if the command does not keep running, the container will be in a terminated state when the command finishes executing.
Second, the command property is an array of string members, and they will execute like a shell script. So I suggest you set it like this:
command: ['/bin/bash','-c','echo $PATH'],
And you'd better keep the first two strings unchanged and only change what comes after.
If you have any more questions, please let me know. Or if it's helpful, you can accept it :-)
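Applied to the executable from the question, that pattern might look like the line below (a sketch; it assumes the binary sits in the container's working directory and that ?Type=Fall? should be passed through as a single argument):

command: ['/bin/bash', '-c', './some-executable "?Type=Fall?"'],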

How to get started with dockerode

I am planning on running my app in Docker. I want to dynamically start, stop, and build containers, run commands, and so on. I found a tool named dockerode; here is the project repo. The project has docs, but I am not understanding them very well. I would like to understand a few things. This is how to create and start a container:
docker.createContainer({Image: 'ubuntu', Cmd: ['/bin/bash'], name: 'ubuntu-test'}, function (err, container) {
  container.start(function (err, data) {
    //...
  });
});
Is it possible to do RUN apt-get update as when we use a Dockerfile, or ADD /path/host /path/docker, during the build? How do I move my app into the container after the build?
Let's look at this code:
//tty:true
docker.createContainer({ /*...*/ Tty: true /*...*/ }, function(err, container) {
  /* ... */
  container.attach({stream: true, stdout: true, stderr: true}, function (err, stream) {
    stream.pipe(process.stdout);
  });
  /* ... */
});
How can I know which params I can put in { /*...*/ Tty: true /*...*/ }?
Has someone tried this package? Please help me get started with it.
Dockerode is just a Node wrapper for the Docker API. You can find all the params you can use for each command in the API docs.
For example, docker.createContainer will call POST /containers/create (docs are here: https://docs.docker.com/engine/reference/api/docker_remote_api_v1.24/#/create-a-container).
Check the files in the lib folder of the dockerode repo to see which API command is wrapped by each dockerode method.
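For instance, the create-container endpoint accepts fields like Env, Tty, and HostConfig.Binds in addition to Image and Cmd, which also answers the RUN/ADD question above: package installation belongs in the image's Dockerfile, while at runtime you can pass a command and mount host paths. A sketch combining a few of these (the socket path and the bind mount are illustrative assumptions):

const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

docker.createContainer({
  Image: 'ubuntu',
  name: 'ubuntu-test',
  // run an arbitrary command at startup, comparable to CMD in a Dockerfile
  Cmd: ['/bin/bash', '-c', 'apt-get update && sleep 300'],
  Env: ['MY_VAR=value'],
  Tty: true,
  HostConfig: {
    // closest runtime equivalent to ADD: mount a host path into the container
    Binds: ['/path/host:/path/docker'],
  },
}, function (err, container) {
  if (err) { throw err; }
  container.start(function (startErr) {
    // container is running; attach or exec as needed
  });
});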
