tar: could not chdir to 'D:\a\1\docker' - docker

I am trying to use the Cache task in Azure Pipelines for my Docker setup. According to the documentation I need to set the following parameters:
Key (Required)
Path (Required)
RestoreKeys (Optional)
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | cache'
    path: '$(Pipeline.Workspace)/docker'
Unfortunately, the post-job step for the Cache task is always failing with the error below. Any suggestions?
Starting: Cache
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.1
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- docker [string]
- "Windows_NT" [string]
- cache [string]
Resolved to: docker|"Windows_NT"|cache
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session xxxx
Getting a pipeline cache artifact with one of the following fingerprints:
Fingerprint: `docker|"Windows_NT"|cache`
There is a cache miss.
tar: could not chdir to 'D:\a\1\docker'
ApplicationInsightsTelemetrySender correlated 1 events with X-TFS-Session xxxx
##[error]Process returned non-zero exit code: 1
Finishing: Cache
Update: After creating the directory as suggested in the answer, the cache is now hit, but its size is 0.0 MB. Do we need to take care of copying the contents ourselves?
Starting: Cache
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.1
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- docker [string]
- "Windows_NT" [string]
- cache [string]
Resolved to: docker|"Windows_NT"|cache
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session xxxxxx
Getting a pipeline cache artifact with one of the following fingerprints:
Fingerprint: `docker|"Windows_NT"|cache`
There is a cache hit: `docker|"Windows_NT"|cache`
Used scope: 3;xxxx;refs/heads/master;xxxx
Entry found at fingerprint: `docker|"Windows_NT"|cache`
7-Zip 19.00 (x64) : Copyright (c) 1999-2018 Igor Pavlov : 2019-02-21
Extracting archive:
Expected size to be downloaded: 0.0 MB
Downloaded 0.0 MB out of 0.0 MB (214%).
Downloaded 0.0 MB out of 0.0 MB (214%).
Download statistics:
Total Content: 0.0 MB
Physical Content Downloaded: 0.0 MB
Compression Saved: 0.0 MB
Local Caching Saved: 0.0 MB
Chunks Downloaded: 3
Nodes Downloaded: 0
--
Path =
Type = tar
Code Page = UTF-8
Everything is Ok

I could reproduce the same issue when the docker folder is not created before the Cache task.
You need to create the folder before the Cache task runs, or use an existing folder.
Here is an example:
pool:
  vmImage: windows-latest
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'New-Item -ItemType directory -Path $(Pipeline.Workspace)/docker'
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | cache'
    path: '$(Pipeline.Workspace)/docker'
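Note that the Cache task only archives whatever is in the cached path at the end of a successful job; it does not populate the folder for you. For Docker images that means saving them into the cached folder yourself and loading them back on a hit. A minimal sketch of that pattern, assuming a hypothetical image name myimage (the script steps and tar file name are illustrative, not from the question; cacheHitVar is a documented Cache@2 input that is set to 'true' on a cache hit):
steps:
- task: Cache@2
  inputs:
    key: 'docker | "$(Agent.OS)" | cache'
    path: '$(Pipeline.Workspace)/docker'
    cacheHitVar: 'DOCKER_CACHE_RESTORED'
# assumes the directory-creation step from the example above has already run
- script: docker load -i $(Pipeline.Workspace)/docker/myimage.tar
  condition: eq(variables.DOCKER_CACHE_RESTORED, 'true')
  displayName: Load image from cache
- script: |
    docker build -t myimage .
    docker save myimage -o $(Pipeline.Workspace)/docker/myimage.tar
  condition: ne(variables.DOCKER_CACHE_RESTORED, 'true')
  displayName: Build image and save it for the cache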

I got the same issue; after creating the cache path folder before the Cache task, the error is resolved.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'New-Item -ItemType directory -Path $(Pipeline.Workspace)/docker'
As mentioned, the cache itself still didn't work as expected. I modified the cache folder, cache key, and cache path to different values, since the cache is immutable. The cache key and restoreKeys are set to the same value.
pool:
  vmImage: windows-2019
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/testcache1/.m2/repository
  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'New-Item -ItemType directory -Path $(MAVEN_CACHE_FOLDER)'
- task: Cache@2
  inputs:
    key: mykeyazureunique
    restoreKeys: mykeyazureunique
    path: $(MAVEN_CACHE_FOLDER)
  displayName: Cache Maven local repo
- task: MavenAuthenticate@0
  displayName: Authenticate Maven to Artifacts feed
  inputs:
    artifactsFeeds: artifacts-maven
    #mavenServiceConnections: serviceConnection1, serviceConnection2 # Optional
- task: Maven@3
  displayName: Maven deploy into Artifact feed
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'clean install'
    mavenOptions: '-Xmx3072m $(MAVEN_OPTS)'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false
Note: The cache will be saved only if the job succeeds.
If the cache is saved successfully, you will see a message like the one below in the post-job Cache step:
Content upload statistics:
Total Content: 41.3 MB
Physical Content Uploaded: 17.9 MB
Logical Content Uploaded: 20.7 MB
Compression Saved: 2.8 MB
Deduplication Saved: 20.7 MB
Number of Chunks Uploaded: 265
Total Number of Chunks: 793
Now that the cache is set properly, we have to make sure the cache location is picked up during execution. First, verify that the cache is restored properly. The log below is displayed when the restore succeeds:
There is a cache hit: `mykeyazureunique`
Extracting archive:
Expected size to be downloaded: 20.7 MB
Downloaded 0.0 MB out of 20.7 MB (0%).
Downloaded 20.7 MB out of 20.7 MB (100%).
Downloaded 20.7 MB out of 20.7 MB (100%).
Then the cache location has to be communicated to the target runner. In my case I used Maven, so I set the cache location in MAVEN_OPTS:
MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
mavenOptions: '-Xmx3072m $(MAVEN_OPTS)'
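As an aside, instead of a fixed key like mykeyazureunique, the documented convention for Maven caching hashes the POM, so the cache is refreshed whenever dependencies change. A sketch following that documented pattern:
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
steps:
- task: Cache@2
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'
    restoreKeys: |
      maven | "$(Agent.OS)"
      maven
    path: $(MAVEN_CACHE_FOLDER)
  displayName: Cache Maven local repo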


Bazel's container_pull failing to pull aws-cli image

tldr; When I try to pull an AWS-CLI image from Docker Hub using Bazel, I'm getting odd 404 errors. Pulling other images in the same way works fine.
I'm trying to use Bazel in my monorepo to (among many other things) create several Docker images. One of the Docker images I'm creating uses the verified AWS CLI image as a base.
I'm following along with the rules_docker documentation along with examples provided in that repo.
WORKSPACE File:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_docker",
sha256 = "b1e80761a8a8243d03ebca8845e9cc1ba6c82ce7c5179ce2b295cd36f7e394bf",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
)
load(
"#io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)
container_repositories()
load("#io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")
container_deps()
load(
"#io_bazel_rules_docker//container:container.bzl",
"container_pull",
)
load("#io_bazel_rules_docker//contrib:dockerfile_build.bzl",
"dockerfile_image")
container_pull(
name = "alpine_linux_amd64",
digest = "sha256:954b378c375d852eb3c63ab88978f640b4348b01c1b3456a024a81536dafbbf4",
registry = "index.docker.io",
repository = "library/alpine",
# tag field is ignored since digest is set
tag = "3.8",
)
container_pull(
name = "aws_cli",
digest = "sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99",
registry = "index.docker.io",
repository = "library/amazon",
# tag field is ignored since digest is set
tag = "2.9.9",
)
http_file(
name = "sam_archive",
downloaded_file_path = "aws-sam-cli-linux-x86_64.zip",
sha256 = "74264b224f133461e324e7877ed8218fe38ac2320ba498024f0c297de7bb3e95",
urls = [
"https://github.com/aws/aws-sam-cli/releases/download/v1.67.0/aws-sam-cli-linux-x86_64.zip",
],
)
And BUILD file:
load("#io_bazel_rules_docker//container:container.bzl", "container_image", "container_layer")
load("#io_bazel_rules_docker//contrib:test.bzl", "container_test")
load("#io_bazel_rules_docker//docker/util:run.bzl", "container_run_and_commit")
# Includes the aws-cli installation archive
container_image(
name = "aws_cli",
base = "#aws_cli//image"
)
container_image(
name = "basic_alpine",
base = "#alpine_linux_amd64//image",
cmd = ["Hello World!"],
entrypoint = ["echo"],
)
Building basic_alpine works fine:
$ bazel build //:basic_alpine
INFO: Analyzed target //:basic_alpine (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:basic_alpine up-to-date:
bazel-bin/basic_alpine-layer.tar
INFO: Elapsed time: 1.140s, Critical Path: 0.99s
INFO: 50 processes: 16 internal, 34 linux-sandbox.
INFO: Build completed successfully, 50 total actions
Admittedly I'm new to Bazel and maybe I'm not doing this correctly, but building aws_cli fails:
$ bazel build //:aws_cli
INFO: Repository aws_cli instantiated at:
/home/jdibling/repos/stream-ai.io/products/filedrop/monorepo/WORKSPACE:38:15: in <toplevel>
Repository rule container_pull defined at:
/home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/io_bazel_rules_docker/container/pull.bzl:294:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'aws_cli':
Traceback (most recent call last):
File "/home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/io_bazel_rules_docker/container/pull.bzl", line 240, column 13, in _impl
fail("Pull command failed: %s (%s)" % (result.stderr, " ".join([str(a) for a in args])))
Error in fail: Pull command failed: 2022/12/23 08:31:25 Running the Image Puller to pull images from a Docker Registry...
2022/12/23 08:31:29 Image pull was unsuccessful: reading image "index.docker.io/library/amazon@sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99": GET https://index.docker.io/v2/library/amazon/manifests/sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/amazon Type:repository]]
(/home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/go_puller_linux_amd64/file/downloaded -directory /home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/aws_cli/image -os linux -os-version -os-features -architecture amd64 -variant -features -name index.docker.io/library/amazon@sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99)
ERROR: /home/jdibling/repos/stream-ai.io/products/filedrop/monorepo/WORKSPACE:38:15: fetching container_pull rule //external:aws_cli: Traceback (most recent call last):
File "/home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/io_bazel_rules_docker/container/pull.bzl", line 240, column 13, in _impl
fail("Pull command failed: %s (%s)" % (result.stderr, " ".join([str(a) for a in args])))
Error in fail: Pull command failed: 2022/12/23 08:31:25 Running the Image Puller to pull images from a Docker Registry...
2022/12/23 08:31:29 Image pull was unsuccessful: reading image "index.docker.io/library/amazon@sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99": GET https://index.docker.io/v2/library/amazon/manifests/sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/amazon Type:repository]]
(/home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/go_puller_linux_amd64/file/downloaded -directory /home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/aws_cli/image -os linux -os-version -os-features -architecture amd64 -variant -features -name index.docker.io/library/amazon@sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99)
ERROR: /home/jdibling/repos/stream-ai.io/products/filedrop/monorepo/BUILD:6:16: //:aws_cli depends on @aws_cli//image:image in repository @aws_cli which failed to fetch. no such package '@aws_cli//image': Pull command failed: 2022/12/23 08:31:25 Running the Image Puller to pull images from a Docker Registry...
2022/12/23 08:31:29 Image pull was unsuccessful: reading image "index.docker.io/library/amazon@sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99": GET https://index.docker.io/v2/library/amazon/manifests/sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/amazon Type:repository]]
(/home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/go_puller_linux_amd64/file/downloaded -directory /home/jdibling/.cache/bazel/_bazel_jdibling/4ce73e7de2c4ac9889a94fb9b2da25fc/external/aws_cli/image -os linux -os-version -os-features -architecture amd64 -variant -features -name index.docker.io/library/amazon@sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99)
ERROR: Analysis of target '//:aws_cli' failed; build aborted: Analysis failed
INFO: Elapsed time: 4.171s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)
Just a quick sanity check: should that be library/amazonlinux? AFAICT library/amazon does not exist. However, that one does not have a tag with the sha256 that you specify.
The link you have in the intro is for the amazon/aws-cli image, which does have that digest, so maybe that's the one you meant to pull?
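If amazon/aws-cli is indeed the image you meant, the container_pull just needs its repository changed. A sketch, keeping the digest from the question on the assumption that it is the amazon/aws-cli digest you looked up:
container_pull(
    name = "aws_cli",
    # the official image lives under amazon/, not library/
    repository = "amazon/aws-cli",
    registry = "index.docker.io",
    digest = "sha256:abb7e318502e78ec99d85bfa0121d5fbc11d8c49bb95f7f12db0b546ebd5ff99",
    # tag field is ignored since digest is set
    tag = "2.9.9",
)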

How to access resources in a Quarkus native image?

I started playing with Quarkus and GraalVM. I added files (txt and jpg) to the resources in the project (src/main/resources/). To be sure that I have access to this file in the controller, I display its size:
URL url = Thread.currentThread().getContextClassLoader().getResource("/Resource2.txt");
File file = new File(url.toURI());
return "Hello My Friend! File size in bytes = " + file.length();
and when I run it with Maven (mvn quarkus:dev) it works. The controller code is here.
The problem occurred when I created a native Quarkus application and tried to run it inside Docker.
To be sure that the file is included in the native image, I added a big jpg file (3.3 MB) and created resources-config.json:
{ "resources":
{ "includes": [
{ "pattern": "IMG_3_3M\\.jpg$"},
{ "pattern": "Resources2\\.txt$"}
]
}}
and in application.properties added:
quarkus.native.additional-build-args = -H:ResourceConfigurationFiles=resources-config.json
The native runner size increased from:
39M Mar 21 12:48 hello-quarkus-1.0-SNAPSHOT-runner
to: 44M Mar 21 12:19 hello-quarkus-1.0-SNAPSHOT-runner
So I assume the jpg file was included, but when I run the native application inside the Docker image I still get an NPE:
Caused by: java.lang.NullPointerException
at it.tostao.quickstart.GreetingResource.hello(GreetingResource.java:24)
where line 24 is url.toURI().
Any idea how I can read resources in a native image? Is something missing in the configuration?
Here is a sample project to reproduce the problem; all commands needed to build and run the native image are in README.MD:
https://github.com/sleski/hello-quarkus
So far I have checked these URLs and still was not able to find resources in the native image:
• How to include classpath resources in a Quarkus native image?
• How to read classpath resources in Quarkus native image?
• https://quarkus.io/guides/writing-native-applications-tips
• Read txt file from resources folder on maven Quarkus project From Docker Container
First, fix the json:
{
  "resources": [
    { "pattern": "Resource2.txt" }
  ]
}
or you can use *.txt as the pattern, as in the docs.
https://github.com/oracle/graal/blob/master/docs/reference-manual/native-image/Resources.md says to use
InputStream resource = ModuleLayer.boot().findModule(moduleName).getResourceAsStream(resourcePath);
When I tried that, I had issues. You can see the working code for your project below:
#Path("/hello")
public class GreetingResource {
#GET
#Produces(MediaType.TEXT_PLAIN)
public String hello() throws IOException {
String moduleName = "java.base";
String resourcePath = "/Resource2.txt";
Module resource = ModuleLayer.boot().findModule(moduleName).get();
InputStream ins = resource.getResourceAsStream(resourcePath);
if (ins == null) {
System.out.println("module came empty, now trying to load from GreetingResource");
ins = GreetingResource.class.getResourceAsStream(resourcePath);
}
if (ins != null) {
StringBuilder sb = new StringBuilder();
for (int ch; (ch = ins.read()) != -1; ) {
sb.append((char) ch);
}
return "Hello My Friend! File size in bytes = " + sb;
}
return "empty";
}
}
GreetingResource.class.getResourceAsStream(resourcePath) is what actually loads the resource here. I think this feature may change in the future, so I left the ModuleLayer lookup in the code too. I used GraalVM 17-21.3.0.
You can find the build log below:
[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildRunner] C:\Program Files\GraalVM\graalvm-ce-java17-21.3.0\bin\native-image.cmd -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=3 -J-Duser.language=en -J-Duser.country=GB -J-Dfile.encoding=UTF-8 -H:-ParseOnce -J--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED -J--add-opens=java.base/java.text=ALL-UNNAMED -H:ResourceConfigurationFiles=resources-config.json -H:+PrintAnalysisCallTree -H:Log=registerResource:verbose -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy\$BySpaceAndTime -H:+JNI -H:+AllowFoldMethods -J-Djava.awt.headless=true -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:EnableURLProtocols=http -H:-UseServiceLoaderFeature -H:+StackTrace -J--add-exports=java.management/sun.management=ALL-UNNAMED hello-quarkus-1.0-SNAPSHOT-runner -jar hello-quarkus-1.0-SNAPSHOT-runner.jar
[hello-quarkus-1.0-SNAPSHOT-runner:20428] classlist: 2,920.35 ms, 0.94 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (cap): 1,493.84 ms, 0.94 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] setup: 2,871.07 ms, 0.94 GB
[Use -Dgraal.LogFile=<path> to redirect Graal log output to a file.]
[thread:1] scope: main
[thread:1] scope: main.registerResource
ResourcesFeature: registerResource: Resource2.txt
14:23:38,709 INFO [org.jbo.threads] JBoss Threads version 3.4.2.Final
[thread:1] scope: main.registerResource
ResourcesFeature: registerResource: java/lang/uniName.dat
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (clinit): 475.20 ms, 5.14 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (typeflow): 2,931.83 ms, 5.14 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (objects): 24,294.27 ms, 5.14 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (features): 2,979.07 ms, 5.14 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] analysis: 32,083.24 ms, 5.14 GB
# Printing call tree to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\call_tree_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142406.txt
# Printing list of used methods to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\used_methods_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142407.txt
# Printing list of used classes to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\used_classes_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142407.txt
# Printing list of used packages to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\used_packages_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142407.txt
# Printing call tree for vm entry point to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_vm_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
# Printing call tree for methods to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_methods_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
# Printing call tree for virtual methods to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_virtual_methods_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
# Printing call tree for entry points to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_entry_points_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
# Printing call tree for direct edges to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_direct_edges_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
# Printing call tree for overriden by edges to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_override_by_edges_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
# Printing call tree for virtual edges to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\reports\csv_call_tree_virtual_edges_hello-quarkus-1.0-SNAPSHOT-runner_20220324_142408.csv
[hello-quarkus-1.0-SNAPSHOT-runner:20428] universe: 1,547.28 ms, 5.14 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (parse): 4,919.32 ms, 4.87 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (inline): 7,013.78 ms, 5.83 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] (compile): 27,387.04 ms, 5.56 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] compile: 41,595.59 ms, 5.56 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] image: 2,515.22 ms, 5.56 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] write: 858.79 ms, 5.56 GB
[hello-quarkus-1.0-SNAPSHOT-runner:20428] [total]: 90,068.97 ms, 5.56 GB
# Printing build artifacts to: C:\Users\ozkan\tmp\hello-quarkus\target\hello-quarkus-1.0-SNAPSHOT-native-image-source-jar\hello-quarkus-1.0-SNAPSHOT-runner.build_artifacts.txt
[INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 94323ms
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:37 min
[INFO] Finished at: 2022-03-24T14:24:56Z
[INFO] ------------------------------------------------------------------------
This is how I solved the problem; an example with the changes is in this branch: https://github.com/sleski/hello-quarkus/tree/solution_for_native
Explanations:
The image is in src/main/resources/images and its name is IMG_3_3M.jpg.
In application.properties I added an images.location variable:
images.location=src/main/resources/images/
%prod.images.location=/work/images/
and in the Java controller I added:
@ConfigProperty(name = "images.location")
String imageLocation;
In Dockerfile.native I added: COPY target/classes/images/*.jpg /work/images/
When I start the application with quarkus:dev it picks the image up from src/main/resources/images, and when I run the native image, from /work/images.
In both cases it works: file size = 3412177.
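The answer above does not show how the configured location is read in the controller; here is a minimal sketch of the idea, assuming MicroProfile Config injection (the imageSize method name and file handling are illustrative, not from the linked repo):
import java.io.IOException;
import java.nio.file.Files;

import javax.ws.rs.GET;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.config.inject.ConfigProperty;

public class GreetingResource {

    // src/main/resources/images/ in dev, /work/images/ in the prod (native) profile
    @ConfigProperty(name = "images.location")
    String imageLocation;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String imageSize() throws IOException {
        // Reads the image from the filesystem path configured per profile,
        // sidestepping classpath resource lookup in the native image.
        return "File size is = " + Files.size(java.nio.file.Paths.get(imageLocation, "IMG_3_3M.jpg"));
    }
}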

SaltStack: getting No top file or master_tops data matches found

I am new to SaltStack, following some tutorials, and trying to execute state.apply, but I get the error below:
# salt "host2" state.apply
host2:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or external nodes data matches found
Started:
Duration:
Changes:
Summary for host2
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
I am able to test.ping the host successfully.
Here is the directory structure:
/etc/salt/srv/salt/states
|- top.sls
|- installations
   |- init.sls
file_roots entry in the master config:
file_roots:
  base:
    - /srv/salt/states
top.sls ->
base:
  '*':
    - installations
init.sls ->
install_apache:
  pkg.installed:
    - name: apache2
You need to change the path to your states, or move them to the path set in file_roots.
The file_roots option is where you should place your files; you should have the following tree:
# tree /srv/salt/
/srv/salt/
|-- installations
|   `-- init.sls
`-- top.sls
Or you could change your file_roots, but I wouldn't do it, since /srv/salt/ seems to be a sort of "standard".
Have a look at the tutorials, if you haven't already: https://docs.saltstack.com/en/getstarted/fundamentals/
I changed the
file_roots:
  base:
    - /etc/salt/srv/salt/states
and it works for me. Looks like it wasn't picking up the path correctly.
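After changing file_roots, a few standard Salt commands can confirm that the master actually sees the states (host2 is the minion from the question):
# restart the master so the new file_roots takes effect
systemctl restart salt-master
# list the files the master's fileserver can serve
salt-run fileserver.file_list
# show which states the top file assigns to the minion
salt 'host2' state.show_top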

Adding Local Files in Beeline (Hive)

I'm trying to add local files via the Beeline client, but I keep running into an issue where it tells me the file does not exist.
[test@test-001 tmp]$ touch /tmp/m.py
[test@test-001 tmp]$ stat /tmp/m.py
File: ‘/tmp/m.py’
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 801h/2049d Inode: 34091464 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1036/ test) Gid: ( 1037/ test)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2017-02-27 22:04:06.527970709 +0000
Modify: 2017-02-27 22:04:06.527970709 +0000
Change: 2017-02-27 22:04:06.527970709 +0000
Birth: -
[test@test-001 tmp]$ beeline -u jdbc:hive2://hs2-test:10000/default -n r-zubis
Connecting to jdbc:hive2://hs2-test:10000/default
Connected to: Apache Hive (version 1.2.1.2.3.0.0-2557)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1 by Apache Hive
0: jdbc:hive2://hs2-test:10000/def> ADD FILE '/tmp/m.py';
Error: Error while processing statement: '/tmp/m.py' does not exist (state=,code=1)
0: jdbc:hive2://hs2-test:10000/def>
What's the issue?
You can only add files on the box HiveServer2 is running on (and I needed to remove the quotes). I found this via a blog comment on Cloudera. Not sure why it isn't in the Beeline docs.
If, like me, you are stuck in the position where HiveServer2 is running remotely, Beeline will let you load the files from HDFS:
hdfs dfs -put /tmp/m.py
then
beeline> add file hdfs:/user/homedir/m.py;
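Putting the two answers together, the end-to-end flow looks roughly like this (paths are the example ones from above; LIST FILES just confirms the resource was registered in the session):
# copy the script into your HDFS home directory
hdfs dfs -put /tmp/m.py /user/homedir/m.py
# then, inside Beeline (note: no quotes around the path):
#   ADD FILE hdfs:/user/homedir/m.py;
#   LIST FILES;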

Hadoop only launches a local job by default, why?

I have written my own Hadoop program and I can run it in pseudo-distributed mode on my laptop. However, when I put the program on a cluster that can run the Hadoop example jar, it launches a local job by default even though I pass HDFS file paths. The output is below; any suggestions?
./hadoop -jar MyRandomForest_oob_distance.jar hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
12/03/16 16:21:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/16 16:21:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 16:21:25 INFO mapred.JobClient: Running job: job_local_0001
12/03/16 16:21:25 INFO mapred.MapTask: io.sort.mb = 100
12/03/16 16:21:25 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/16 16:21:25 INFO mapred.MapTask: record buffer = 262144/327680
12/03/16 16:21:25 WARN mapred.LocalJobRunner: job_local_0001
java.io.FileNotFoundException: File /user/randomforest/input/genotype1.txt does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at Data.Data.loadData(Data.java:103)
at MapReduce.DearMapper.loadData(DearMapper.java:261)
at MapReduce.DearMapper.setup(DearMapper.java:332)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/03/16 16:21:26 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 16:21:26 INFO mapred.JobClient: Job complete: job_local_0001
12/03/16 16:21:26 INFO mapred.JobClient: Counters: 0
Total Running time is: 1 secs
LocalJobRunner has been chosen because your configuration most probably has the mapred.job.tracker property set to local, or not set at all (in which case the default is local). To check, go to "wherever you extracted/installed hadoop"/etc/hadoop/ and see if the file mapred-site.xml exists (for me it did not; a file called mapred-site.xml.template was there). In that file (or create it if it doesn't exist), make sure it has the following property:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
See the source for org.apache.hadoop.mapred.JobClient.init(JobConf)
What is the value of this configuration property in the Hadoop configuration on the machine you are submitting from? Also confirm that the hadoop executable you are running references this configuration (and that you don't have 2+ installations configured differently): type which hadoop and trace any symlinks you come across.
Alternatively, you can override this when you submit your job, if you know the JobTracker host and port, using the -jt option:
hadoop jar MyRandomForest_oob_distance.jar -jt hostname:port hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
If you're using Hadoop 2 and your job is running locally instead of on the cluster, ensure that you have set up mapred-site.xml to contain the mapreduce.framework.name property with a value of yarn. You also need to set up an aux-service in yarn-site.xml.
Check out the Cloudera Hadoop 2 operator migration blog for more information.
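The aux-service referred to above is the MapReduce shuffle handler; the standard yarn-site.xml entries for it are:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>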
I had the same problem: every MapReduce v2 (MRv2) / YARN task only ran with the mapred.LocalJobRunner:
INFO mapred.LocalJobRunner: Starting task: attempt_local284299729_0001_m_000000_0
The ResourceManager and NodeManagers were accessible and mapreduce.framework.name was set to yarn.
Setting HADOOP_MAPRED_HOME before executing the job fixed the problem for me:
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
