I am writing CI for GitHub Actions with Rust. I want to execute docker-compose from Rust for various reasons. In the Command, I set the current working directory to the folder where the docker-compose.yml is located. Here is some code:
let docker_compose_file = current_dir.parent().unwrap().to_owned();
if docker_compose_file.join("docker-compose.yml").exists() {
    println!(
        "Found docker-compose file, full path: {:#?}",
        docker_compose_file
    );
    // DEBUG CODE
    let ls_result = Command::new("ls")
        .arg(docker_compose_file.to_str().unwrap())
        .output()
        .unwrap()
        .stdout;
    let y = String::from_utf8(ls_result).unwrap();
    println!("LS'ing gives: {:#?}", y);
} else {
    panic!(
        "Wrong file, current working dir: {:#?}",
        docker_compose_file
    );
}
let result = Command::new("docker-compose")
    .current_dir(&docker_compose_file)
    .args(&["up", "-d"])
    .status()
    .unwrap();
In the GitHub Actions run I see the following logging:
Found docker-compose file, full path:
"/Users/runner/work/something/something/server"
LS'ing gives:
"Cargo.lock\nCargo.toml\napi\nbatch_jobs\nci_setup\ncommon\ndatabase\ndeployment.md\ndocker-compose.yml\nrustfmt.toml\nserver\ntarget\n"
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', ci_setup/src/main.rs:137:14
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
I am confused. ls clearly shows that the docker-compose.yml file is present. Why do I get the Rust error saying it cannot find the file or directory?
main.rs:137 refers to the .unwrap() call at the bottom of the code example.
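For debugging, here is a sketch of the same call that reports the spawn error instead of unwrapping (it reuses the docker_compose_file variable from above):
match Command::new("docker-compose")
    .current_dir(&docker_compose_file)
    .args(&["up", "-d"])
    .status()
{
    Ok(status) => println!("docker-compose exited with: {}", status),
    // Note: with std::process::Command, an Os NotFound error from status()/spawn()
    // can also mean the docker-compose executable itself was not found on PATH,
    // not only that a path passed to it is missing.
    Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
        panic!("could not spawn docker-compose (missing binary or path?): {}", e)
    }
    Err(e) => panic!("failed to run docker-compose: {}", e),
}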
I added this import: const { addMatchImageSnapshotPlugin } = require("cypress-image-snapshot/plugin");
into the file cypress/plugins/index.js.
An error is thrown while running on Docker:
Your `pluginsFile` is set to `/cypress/plugins/index.js`, but either the file is missing, it contains a syntax error, or threw an error when required. The `pluginsFile` must be a `.js`, `.ts`, or `.coffee` file.
It works perfectly when I run the specs locally, but strangely it fails when the specs are run on the Docker image.
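For context, the whole plugins file is roughly this (the registration shape suggested by the cypress-image-snapshot README):
// cypress/plugins/index.js
const { addMatchImageSnapshotPlugin } = require("cypress-image-snapshot/plugin");

module.exports = (on, config) => {
  // register the image-snapshot plugin hooks
  addMatchImageSnapshotPlugin(on, config);
};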
So I'm trying to run the indexer on localnet, following the official tutorial: https://docs.near.org/docs/tutorials/near-indexer
However, when I run cargo run -- init to generate the localnet JSON config, I get this error:
Finished dev [unoptimized + debuginfo] target(s) in 17.62s
Running `target/debug/example-indexer init`
thread 'main' panicked at 'Failed to deserialize config: Error("expected value", line: 1, column: 1)', /home/francois/.cargo/git/checkouts/nearcore-5bf7818cf2261fd0/a44be20/nearcore/src/config.rs:499:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
At some point it seems the JSON is not created, or not created properly. The function crashing at config.rs line 499 is:
impl From<&str> for Config {
    fn from(content: &str) -> Self {
        serde_json::from_str(content).expect("Failed to deserialize config")
    }
}
It's quite difficult to debug since cargo run -- init uses some internal NEAR functions (also, I'm new to Rust).
The config.json file is created, but it seems the permissions are not set properly by the script; the content of config.json is:
"<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message> ... "
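Just to illustrate where that panic message comes from (a standalone sketch of my own, not nearcore code): feeding text that starts with XML to serde_json fails on the very first character with exactly this error.
// requires the serde_json crate
fn main() {
    // config.json starts with "<?xml ...", which is not valid JSON
    let content = r#"<?xml version="1.0" encoding="UTF-8"?><Error>...</Error>"#;
    let parsed: Result<serde_json::Value, _> = serde_json::from_str(content);
    // prints: Error("expected value", line: 1, column: 1)
    println!("{:?}", parsed.unwrap_err());
}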
If anyone from the community has encountered this problem or has a hint, it would be great! Thanks a lot!
In the tutorial you referenced, it mentions a similar error, and suggests the following:
Open your config.json located in the .near folder in the root of your home directory. ( ~/.near/config.json )
In this file, locate: "tracked_shards": [] and change the value to [0].
Save the file and try running your indexer again.
So I had the wrong config, with download_config: true.
It should be download_config: false for localnet use.
I've been stuck on this for a while. I would like Docker to ignore a particular directory when building an image, because my user account does not have permission to read that directory. I cannot move it, so that's not an alternative.
This is the structure of my project. docker/data is the directory that I do not have permissions to read, and docker/node-express.dockerfile is the image I'm trying to build.
Running docker build --no-cache --tag node-express --file ./docker/node-express.dockerfile . in the root directory outputs the error
error checking context: 'can't stat '/home/anthony/Repositories/Anthony-Monterrosa/aws-postgres-node-stack/docker/data''.
After this error and a bit of googling I learned about .dockerignore files and made one in the root directory. The following is the file's content:
docker/data
I ran the command again but got an identical error. A bit more googling and I found out about image-specific .dockerignore files, so I set DOCKER_BUILDKIT to 1 and created docker/node-express.dockerfile.dockerignore with the following content:
data
docker/data
(I am not sure how relative paths work with image-specific .dockerignore files, so I added both.) I ran the command again, but still got the same error.
So, I don't seem to have ignores working correctly with either .dockerignore file, or both. What am I missing here?
The error is:
error checking context: 'can't stat '/home/anthony/Repositories/Anthony-Monterrosa/aws-postgres-node-stack/docker/data''.
So it looks like some operation happens before .dockerignore takes effect.
As there is no content in your docker folder that you need in the build context, I suggest you just add docker to .dockerignore.
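That is, the .dockerignore at the root of the build context would contain just:
# .dockerignore (in the same directory you run docker build from)
docker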
This way, although an error is still printed, the build will continue, like this:
shubuntu1@shubuntu1:~/trial2020/trial$ docker build -t abcd:1 -f docker/Dockerfile .
ERRO[0000] Tar: Can't stat file /home/shubuntu1/trial2020/trial to tar: open
/home/shubuntu1/trial2020/trial/docker/data: permission denied
Sending build context to Docker daemon 3.072kB
Step 1/1 : FROM ubuntu:18.04
---> 3556258649b2
Successfully built 3556258649b2
Successfully tagged abcd:1
UPDATE, explaining why (per your comments):
You may want to have a look at the docker-ce source code, build.go & context.go:
build.go:
if err := build.ValidateContextDirectory(contextDir, excludes); err != nil {
    return errors.Errorf("error checking context: '%s'.", err)
}
context.go:
func ValidateContextDirectory(srcPath string, excludes []string) error {
    contextRoot, err := getContextRoot(srcPath)
    if err != nil {
        return err
    }
    pm, err := fileutils.NewPatternMatcher(excludes)
    if err != nil {
        return err
    }
    return filepath.Walk(contextRoot, func(filePath string, f os.FileInfo, err error) error {
        if err != nil {
            if os.IsPermission(err) {
                return errors.Errorf("can't stat '%s'", filePath)
            }
            if os.IsNotExist(err) {
                return errors.Errorf("file ('%s') not found or excluded by .dockerignore", filePath)
            }
            return err
        }
        // skip this directory/file if it's not in the path, it won't get added to the context
        if relFilePath, err := filepath.Rel(contextRoot, filePath); err != nil {
            return err
        } else if skip, err := filepathMatches(pm, relFilePath); err != nil {
            return err
        } else if skip {
            if f.IsDir() {
                return filepath.SkipDir
            }
            return nil
        }
        ......
    })
}
Before the docker daemon tars the build context, it first tries to validate the context directory:
docker/data in .dockerignore:
It uses Walk to traverse everything under docker; when it comes to docker/data, the following code makes the build exit, so no image gets generated:
if os.IsPermission(err) {
    return errors.Errorf("can't stat '%s'", filePath)
}
docker in .dockerignore:
Same as above; the difference is that the following code takes effect when the walk reaches docker, which matches the docker entry in .dockerignore:
return filepath.SkipDir
This makes the Walk skip the subfolders of docker, so docker/data never gets traversed and no permission error occurs there.
The ERRO[0000] Tar: Can't stat file message comes from a later step, which does not abort the image build.
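To see the difference in isolation, here is a minimal standalone sketch (my own example, not the docker source) of the same Walk/SkipDir behaviour, skipping a directory named docker under the current path:
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    root := "." // stand-in for the build context

    _ = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            // With only docker/data excluded, Walk still stats docker/data itself
            // and ends up here with the permission error.
            fmt.Println("walk error:", err)
            return err
        }
        if info.IsDir() && filepath.Base(path) == "docker" {
            // With "docker" excluded, the validator returns SkipDir for the parent
            // entry, so docker/data is never visited at all.
            return filepath.SkipDir
        }
        return nil
    })
}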
I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the services, and I am able to access them and browse the file system using localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
hdfs_client = HdfsClient(hosts = 'localhost:9870')
# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'
# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)
hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen this other post on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
It works fine for me after adding a forward slash to the paths:
import pyhdfs
fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'
if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)
fs.create(output_hdfs_path + '/data.json', data = 'This is test.')
# check that it's present
list(fs.walk(output_hdfs_path))
[('/path/to/test/dir', [], ['data.json'])]
I have recreated a simple example in this tiny GitHub repo. I am attempting to use symfony/dependency-injection to configure monolog/monolog to write logs to php://stderr. I am using a YAML file called services.yml to configure dependency injection.
This all works fine if my YAML file looks like this:
parameters:
  log.file: 'php://stderr'
  log.level: 'DEBUG'

services:
  stream_handler:
    class: \Monolog\Handler\StreamHandler
    arguments:
      - '%log.file%'
      - '%log.level%'
  log:
    class: \Monolog\Logger
    arguments: [ 'default', ['@stream_handler'] ]
However, my goal is to read the path of the log file and the log level from the environment variables $APP_LOG and LOG_LEVEL, respectively. According to the Symfony documentation on external parameters, the correct way to do that in the services.yml file is like this:
parameters:
  log.file: '%env(APP_LOG)%'
  log.level: '%env(LOGGING_LEVEL)%'
In my sample app I verified PHP can read these environment variables with the following:
echo "Hello World!\n\n";
echo 'APP_LOG=' . (getenv('APP_LOG') ?? '__NULL__') . "\n";
echo 'LOG_LEVEL=' . (getenv('LOG_LEVEL') ?? '__NULL__') . "\n";
which writes the following to the browser when I use my original services.yml with hard-coded values:
Hello World!
APP_LOG=php://stderr
LOG_LEVEL=debug
However, if I use the %env(VAR_NAME)% syntax in services.yml, I get the following error:
Fatal error: Uncaught UnexpectedValueException: The stream or file "env_PATH_a61e1e48db268605210ee2286597d6fb" could not be opened: failed to open stream: Permission denied in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php:107 Stack trace: #0 /var/www/vendor/monolog/monolog/src/Monolog/Handler/AbstractProcessingHandler.php(37): Monolog\Handler\StreamHandler->write(Array) #1 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(337): Monolog\Handler\AbstractProcessingHandler->handle(Array) #2 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(532): Monolog\Logger->addRecord(100, 'Initialized dep...', Array) #3 /var/www/html/index.php(17): Monolog\Logger->debug('Initialized dep...') #4 {main} thrown in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php on line 107
What am I doing wrong?
OK, you need a few things here. First of all, you need version 3.3 of Symfony, which was still in beta at the time (3.2 was the released version when I encountered this). Second, you need to "compile" the environment variables.
Edit your composer.json with the following values and run composer update. You might need to update other dependencies. You can substitute ^3.3 with dev-master.
"symfony/config": "^3.3",
"symfony/console": "^3.3",
"symfony/dependency-injection": "^3.3",
"symfony/yaml": "^3.3",
You will likely have to do this for symfony/__WHATEVER__ if you have other symfony components.
Now, in your code, after you load your YAML configuration into your dependency container, you compile it.
So after your lines here (perhaps in bin/console):
$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__ . DIRECTORY_SEPARATOR . '..'));
$loader->load('services.yml');
Do this:
$container->compile(true);
Your IDE's IntelliSense might tell you that compile takes no parameters. That's OK; compile() grabs its arguments indirectly via func_get_arg():
public function compile(/*$resolveEnvPlaceholders = false*/)
{
    if (1 <= func_num_args()) {
        $resolveEnvPlaceholders = func_get_arg(0);
    } else {
        . . .
}
References
Github issue where this was discussed
Pull request to add compile(true)
Using this command after loading your services.yaml file should help.
$containerBuilder->compile(true);
Your file also gets validated by the proper-configuration checks that this method performs. The parameter is $resolveEnvPlaceholders, which makes environment variables accessible to the YAML services configuration.
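To tie the two answers together, here is a minimal bootstrap sketch (file locations and the log service id are assumed from the question's setup, so adjust them to your project):
<?php
use Symfony\Component\Config\FileLocator;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Loader\YamlFileLoader;

require __DIR__ . '/vendor/autoload.php';

$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__));
$loader->load('services.yml');

// true == $resolveEnvPlaceholders: %env(...)% parameters are replaced with the
// actual environment values instead of the env_* placeholder strings.
$container->compile(true);

$container->get('log')->debug('Container compiled with env placeholders resolved');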