I'm trying to parse an uploaded file as follows:
lib/thingy_web/controllers/things_controller.ex
def create(conn, %{"data" => %Plug.Upload{content_type: "application/octet-stream", filename: basename, path: dirname}}) do
things_params = dirname <> "/" <> basename
|> File.stream!
|> NimbleCSV.RFC4180.parse_stream
|> Enum.map(&AllThings.create_things_params/1)
|> Enum.map(&AllThings.create_things/1)
conn
|> put_status(:created)
end
However, when I try a POST with a test file:
curl -F 'data=@/root/test' http://localhost:4000/api/thing
I get the error:
[debug] Processing with ThingyWebWeb.ThingsController.create/2
Parameters: %{"data" => %Plug.Upload{content_type: "application/octet-stream", filename: "test", path: "/tmp/plug-1514/multipart-1514490176-65282591343221-1"}}
Pipelines: [:api]
[info] Sent 500 in 55ms
[error] #PID<0.544.0> running ThingyWeb.Endpoint terminated
Server: localhost:4000 (http)
Request: POST /api/thing
** (exit) an exception was raised:
** (File.Error) could not stream "/tmp/plug-1514/multipart-1514490176-65282591343221-1/test": not a directory
(elixir) lib/file/stream.ex:79: anonymous fn/2 in Enumerable.File.Stream.reduce/3
(elixir) lib/stream.ex:1270: anonymous fn/5 in Stream.resource/3
(elixir) lib/stream.ex:806: Stream.do_transform/8
Subsequent inspection of /tmp/plug-1514/ reveals that it is indeed an empty directory.
Is the uploaded file short-lived, and can it be configured to be long-lived, or am I missing something altogether here?
path contains the full path to the uploaded file. filename is just the name of the file that the user selected in the browser (or, in this case, curl); the uploaded file is not stored under that name. You only need to pass path to File.stream!/1:
things_params =
path
|> File.stream!
|> ...
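Put together, a minimal sketch of the corrected action (keeping the AllThings calls from the question; the json/2 response at the end is my addition, not from the original post):

def create(conn, %{"data" => %Plug.Upload{content_type: "application/octet-stream", path: path}}) do
  things =
    path
    |> File.stream!
    |> NimbleCSV.RFC4180.parse_stream
    |> Enum.map(&AllThings.create_things_params/1)
    |> Enum.map(&AllThings.create_things/1)

  # The upload's temp file only lives for the duration of the request,
  # so consume it here (or File.cp!/2 it elsewhere) before returning.
  conn
  |> put_status(:created)
  |> json(%{created: length(things)})
end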
I've been trying to upload images into the /public directory. This code works fine locally (Windows OS):
import getConfig from "next/config";
import fs from "fs";
import path from "path"; // needed for path.join below

const address = path.join(getConfig().serverRuntimeConfig.PROJECT_ROOT, `/public/uploads/users/${username}`);
if (!fs.existsSync(address)) {
  fs.mkdirSync(address, { recursive: true });
}
I'm using multer for file uploads from the client side. The above code works fine locally on Windows, but after deployment to Vercel it throws this error:
2022-03-21T16:05:16.872Z 693e7f44-12d9-4f4e-90cf-f030a299f918 ERROR Unhandled Promise Rejection
{"errorType":"Runtime.UnhandledPromiseRejection",
 "errorMessage":"Error: ENOENT: no such file or directory, mkdir '/vercel/path0/public/uploads/users/saif'",
 "reason":{"errorType":"Error",
   "errorMessage":"ENOENT: no such file or directory, mkdir '/vercel/path0/public/uploads/users/saif'",
   "code":"ENOENT","errno":-2,"syscall":"mkdir",
   "path":"/vercel/path0/public/uploads/users/saif",
   "stack":["Error: ENOENT: no such file or directory, mkdir '/vercel/path0/public/uploads/users/saif'",
     " at Object.mkdirSync (fs.js:1013:3)",
     " at DiskStorage.destination [as getDestination] (/var/task/.next/server/pages/api/User/index.js:155:55)",
     " at processTicksAndRejections (internal/process/task_queues.js:95:5)",
     " at runNextTicks (internal/process/task_queues.js:64:3)",
     " at processImmediate (internal/timers.js:437:9)"]},
 "promise":{},
 "stack":["Runtime.UnhandledPromiseRejection: Error: ENOENT: no such file or directory, mkdir '/vercel/path0/public/uploads/users/saif'",
   " at process. (/var/runtime/index.js:35:15)",
   " at process.emit (events.js:412:35)",
   " at processPromiseRejections (internal/process/promises.js:245:33)",
   " at processTicksAndRejections (internal/process/task_queues.js:96:32)",
   " at runNextTicks (internal/process/task_queues.js:64:3)",
   " at processImmediate (internal/timers.js:437:9)"]}
Unknown application error occurred
Vercel as a platform does not allow persistent file storage, since these are serverless functions; they encourage uploading to a bucket like S3 instead:
https://vercel.com/docs/concepts/solutions/file-storage
Create a Serverless Function to return a presigned URL.
From the front end, call your Serverless Function to get the presigned POST URL.
Allow the user to upload a file on the front end, and forward the file to the POST URL.
Note: the presigned URL is an S3 location that you designate as the upload target.
They also provide multiple examples using S3 or Google Cloud Storage buckets.
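As a rough sketch of the first step (the route name pages/api/presign.js and the UPLOAD_BUCKET/AWS_REGION environment variables are placeholders of mine, not from the Vercel docs), a Next.js API route could generate the presigned POST URL with the AWS SDK:

// pages/api/presign.js — hypothetical route
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const client = new S3Client({ region: process.env.AWS_REGION });

export default async function handler(req, res) {
  // Create a short-lived POST policy for the user's upload key.
  const { url, fields } = await createPresignedPost(client, {
    Bucket: process.env.UPLOAD_BUCKET, // placeholder bucket name
    Key: `uploads/users/${req.query.username}`,
    Expires: 60, // policy stays valid for 60 seconds
  });
  res.status(200).json({ url, fields });
}

The front end then POSTs the file to url, including fields as form data, instead of writing to the serverless function's filesystem.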
So I'm trying to run the indexer on localnet, following the official tutorial: https://docs.near.org/docs/tutorials/near-indexer
However, when I run cargo run -- init to generate the localnet JSON config, I get this error:
Finished dev [unoptimized + debuginfo] target(s) in 17.62s
Running `target/debug/example-indexer init`
thread 'main' panicked at 'Failed to deserialize config: Error("expected value", line: 1, column: 1)', /home/francois/.cargo/git/checkouts/nearcore-5bf7818cf2261fd0/a44be20/nearcore/src/config.rs:499:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
At some point the JSON is not created, or not created properly, I guess. The function crashing at config.rs line 499 is:
impl From<&str> for Config {
    fn from(content: &str) -> Self {
        serde_json::from_str(content).expect("Failed to deserialize config")
    }
}
It's quite difficult to debug, since cargo run -- init uses some inner NEAR functions (also, I'm new to Rust).
The config.json file is created, but it seems the permissions are not set properly by the script; the content of config.json is:
"<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message> ... "
If anyone from the community has encountered this problem or has a hint, it would be great! Thanks a lot!
The tutorial you referenced mentions a similar error and suggests the following:
Open your config.json located in the .near folder in the root of your home directory. ( ~/.near/config.json )
In this file, locate: "tracked_shards": [] and change the value to [0].
Save the file and try running your indexer again.
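For example, the relevant line in ~/.near/config.json changes from:

"tracked_shards": []

to:

"tracked_shards": [0]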
So I had the wrong config, with download_config: true; it should be download_config: false for localnet use.
I have a simple Erlang application and I am trying to start it, to no avail, getting a bad_return error:
{error,
 {bad_return,
  {{mapp,start,[normal,[]]},
   {'EXIT',
    {undef,
     [{mapp,start,[normal,[]],[]},
      {application_master,start_it_old,4,
       [{file,"application_master.erl"},{line,277}]}]}}}}}
.app
{application, mapp,
 [{vsn, "1.0.0"},
  {description, "some description"},
  {mod, {mapp, []}},
  {modules, [mapp, m1]}]}.
Folder structure:
-root
-mapp.app
-src
-m1.erl
-mapp.erl
-include
-state.hrl
-ebin
Application
-module(mapp).
-behaviour(application).
-export([start/2, stop/1]).

start(normal, _Args) ->
    Pid = m1:start_link(),
    {ok, Pid}.

stop(_State) -> ok.
Module
-module(m1).
-include("state.hrl"). %% the record file shown below is state.hrl
-export([start_link/0]).
-export([serv/1]). %% must be exported to be spawned by MFA

start_link() ->
    %% spawn_link/3 takes the arguments as a list
    Pid = spawn_link(?MODULE, serv, [#state{count = 2}]),
    Pid.

serv(State = #state{count = C}) ->
    receive
        {From, MSG} ->
            From ! {ok, MSG},
            serv(State#state{count = C + 1})
    end.
.hrl
-record(state,{
count=0
}).
So my m1 module returns a Pid. I comply with the application:start/2 return type and return {ok, Pid}.
What is wrong here? I have tried returning both Pid and {ok, Pid}, to no avail.
The error states that mapp:start/2 is undef. Seeing that your mapp.erl exports it, I suspect that the module mapp is not loaded.
How are you running the application? I suspect you're not using a release tool like rebar3 or erlang.mk, because usually the app files are inside src.
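Without a build tool, one way to get everything compiled and loaded is roughly this (a sketch, assuming the .app file is moved into ebin, where the code server looks for it):

# compile the modules into ebin, picking up the record from include/
erlc -I include -o ebin src/*.erl
cp mapp.app ebin/
# start a shell with ebin on the code path, then start the app
erl -pa ebin
1> application:start(mapp).

Alternatively, restructure the project as a rebar3 app (with src/mapp.app.src) and run rebar3 shell.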
I have the following endpoint initialization in lib/flashcards_web/endpoint.ex:
#doc """
Callback invoked for dynamically configuring the endpoint.
It receives the endpoint configuration and checks if
configuration should be loaded from the system environment.
"""
def init(_key, config) do
  if config[:load_from_system_env] do
    port = System.get_env("PORT") || raise "expected the PORT environment variable to be set"

    jwt_token_ttl_minutes =
      "USER_SESSION_MINUTES"
      |> System.get_env
      |> String.to_integer
      || raise "expected the USER_SESSION_MINUTES environment variable to be set"

    config =
      config
      |> Keyword.put(:http, [:inet6, port: port])
      |> Keyword.put(:jwt_token_ttl_minutes, jwt_token_ttl_minutes)

    {:ok, config}
  else
    {:ok, config}
  end
end
and the required load_from_system_env: true line in config/dev.exs:
# For development, we disable any cache and enable
# debugging and code reloading.
#
# The watchers configuration can be used to run external
# watchers to your application. For example, we use it
# with brunch.io to recompile .js and .css sources.
config :flashcards, FlashcardsWeb.Endpoint,
http: [port: 4000],
debug_errors: true,
code_reloader: true,
check_origin: false,
watchers: [node: ["node_modules/brunch/bin/brunch", "watch", "--stdin",
cd: Path.expand("../assets", __DIR__)]],
load_from_system_env: true
However, when running
PORT=4000 USER_SESSION_MINUTES=1 iex -S mix phx.server
I get:
iex(1)> Application.get_env(:flashcards, FlashcardsWeb.Endpoint)[:jwt_token_ttl_minutes]
nil
Am I missing something here?
Found the solution for accessing the dynamic endpoint configuration.
The docs mention that a config/2 function is automatically generated on the endpoint. (Application.get_env/2 returns nil because init/2 only rewrites the configuration handed to the endpoint at startup; it is never written back to the application environment.)
The dynamic endpoint configuration can therefore be accessed as follows:
iex(2)> FlashcardsWeb.Endpoint.config(:jwt_token_ttl_minutes)
1
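The same lookup works from application code, for example (Flashcards.Tokens is a hypothetical module of mine, not from the original app):

defmodule Flashcards.Tokens do
  # Read the dynamically configured TTL from the running endpoint.
  def ttl_minutes do
    FlashcardsWeb.Endpoint.config(:jwt_token_ttl_minutes)
  end
end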
I want to build a Yeoman generator that needs to unzip a file.
From their documentation, it seems this is done using this.registerTransformStream(...). It says it accepts any gulp plugin, so I tried gulp-unzip (link).
Here's my code:
// index.js
...
writing: function() {
var source = this.templatePath('zip'); // the folder where the zipped file is
var destination = this.destinationRoot();
this.fs.copy(source, destination);
this.registerTransformStream(unzip() );
}
...
The result seems promising: first it lists all the files, then I get a write after end error.
Here's the dump:
create license.txt
create readme.html
create config.php
...
...
events.js:141
throw er; // Unhandled 'error' event
^
Error: write after end
at writeAfterEnd (C:\Users\myname\Documents\project\generator-test\node_modules\gulp-unzip\node_modules\readable-stream\lib\_stream_writable.js:144:12)
at Transform.Writable.write (C:\Users\myname\Documents\project\generator-test\node_modules\gulp-unzip\node_modules\readable-stream\lib\_stream_writable.js:192:5)
at DestroyableTransform.ondata (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_readable.js:531:20)
at emitOne (events.js:77:13)
at DestroyableTransform.emit (events.js:169:7)
at readableAddChunk (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_readable.js:198:18)
at DestroyableTransform.Readable.push (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_readable.js:157:10)
at DestroyableTransform.Transform.push (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_transform.js:123:32)
at DestroyableTransform._transform (C:\Users\myname\Documents\project\generator-test\node_modules\mem-fs-editor\lib\actions\commit.js:34:12)
at DestroyableTransform.Transform._read (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_transform.js:159:10)
The destination folder is empty after this. It seems the stream tried to write the unzipped files but failed.
Has anyone solved this problem before? Or is there an alternative way, using just the built-in fs?
Thanks a lot