I'm trying to run Google's usd_from_gltf utility inside AWS Lambda using a custom Docker image. The setup works locally, but when the same Lambda runs in AWS, it fails for certain input files.
Minimal app
https://github.com/petrbroz/glb-to-usdz-test
This is a minimalistic AWS SAM app with a Lambda function called GlbToUsdzFunction that downloads a GLB file from a specified URL and converts it to USDZ. The Lambda function uses a custom Docker image (https://github.com/leon/docker-gltf-to-udsz) and Python's subprocess module to run the usd_from_gltf tool that handles the conversion.
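The handler itself boils down to a download plus a subprocess call. Here is a minimal sketch of that flow (illustrative only, not the repository's exact code; the handler name and the use of urllib are assumptions, and the event shape matches the sam local invoke payload shown below):

import subprocess
import urllib.request

def handler(event, context):
    # Download the GLB to the only writable path in Lambda
    print("Downloading file")
    urllib.request.urlretrieve(event["url"], "/tmp/input.glb")
    # Shell out to usd_from_gltf, which the glb-to-usdz image puts on PATH
    print("Converting file")
    subprocess.run(["usd_from_gltf", "/tmp/input.glb", "/tmp/output.usdz"], check=True)
    return {"status": "success"}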
Sample file URLs
https://petrbroz.s3.us-west-1.amazonaws.com/glb-to-usdz-issues/snowmobile.glb
https://petrbroz.s3.us-west-1.amazonaws.com/glb-to-usdz-issues/wall-e.glb
When running locally
The Lambda function succeeds for both snowmobile.glb and wall-e.glb. Here's an example output for the former:
$ sam build
$ echo "{ \"url\": \"https://petrbroz.s3.us-west-1.amazonaws.com/glb-to-usdz-issues/snowmobile.glb\" }" | sam local invoke "GlbToUsdzFunction" --event -
Reading invoke payload from stdin (you can also pass it from file with --event)
Invoking Container created from glbtousdzfunction:glb-to-usdz-lambda
Building image.................
Skip pulling image and use local one: glbtousdzfunction:rapid-1.46.0-x86_64.
START RequestId: 720b6b49-e36c-4429-96fb-9e0e5c02c09b Version: $LATEST
Downloading file
Converting file
Warning: extensionsUsed: Extension is in extensionsUsed but not actually referenced: KHR_texture_transform [GLTF_WARN_EXTENSION_UNREFERENCED]
END RequestId: 720b6b49-e36c-4429-96fb-9e0e5c02c09b
REPORT RequestId: 720b6b49-e36c-4429-96fb-9e0e5c02c09b Init Duration: 0.22 ms Duration: 19997.59 ms Billed Duration: 19998 ms Memory Size: 1024 MB Max Memory Used: 1024 MB
{"status": "success"}
When running in AWS
The Lambda function succeeds for snowmobile.glb but fails for wall-e.glb. Here's the output for the latter:
START RequestId: b1bdc496-ec12-430e-a641-2574af354d60 Version: $LATEST
Downloading file
Converting file
ERROR: USD: Insufficient permissions to write to destination directory '/var/tmp' (Replace) [UFG_ERROR_USD]
ERROR: USD: Failed to map '/var/tmp/output.usdc': No such file or directory (AddFile) [UFG_ERROR_USD]
Warning: USD: Failed to add temporary layer at '/var/tmp/output.usdc' to the package at path 'output.usdz'. (_CreateNewUsdzPackage) [UFG_WARN_USD]
ERROR: Cannot write USD: "/tmp/output.usdz" [UFG_ERROR_IO_WRITE_USD]
Command '['usd_from_gltf', '/tmp/input.glb', '/tmp/output.usdz']' returned non-zero exit status 255.
END RequestId: b1bdc496-ec12-430e-a641-2574af354d60
REPORT RequestId: b1bdc496-ec12-430e-a641-2574af354d60 Duration: 2039.96 ms Billed Duration: 5166 ms Memory Size: 1024 MB Max Memory Used: 101 MB Init Duration: 3125.71 ms
Has anyone run into this? Am I doing something wrong here, or is this perhaps a bug on the AWS side, or on the usd_from_gltf side?
Some USD conversions cause the library to write intermediate files, and it looks like the tool is built to use /var/tmp for this purpose. Since Lambda functions can only write to /tmp, the workaround we came up with is to link /var/tmp to /tmp:
In the glb-to-usdz Dockerfile, add a line like RUN rm -rf /var/tmp && ln -s /tmp /var/tmp
This allows your second example to succeed.
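If you want to confirm from inside the running function that the symlink actually made it into the deployed image, a small diagnostic like the following can be added to the handler (purely illustrative; the function name is made up):

import os

def report_var_tmp():
    # Diagnostic only: show where /var/tmp points and whether it is writable,
    # to verify the symlink baked into the image survived deployment.
    print("/var/tmp is a symlink:", os.path.islink("/var/tmp"))
    print("/var/tmp resolves to:", os.path.realpath("/var/tmp"))
    print("/var/tmp writable:", os.access("/var/tmp", os.W_OK))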
Related
I'm trying to get a function running in AWS Lambda that uses Selenium and Firefox/geckodriver. I've decided to go the route of creating a container image and then uploading and running that instead of using a pre-configured runtime. I was able to create a Dockerfile that correctly installs Firefox and Python, downloads geckodriver, and installs my test code:
FROM alpine:latest
RUN apk add firefox python3 py3-pip
RUN pip install requests selenium
RUN mkdir /app
WORKDIR /app
RUN wget -qO gecko.tar.gz https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz
RUN tar xf gecko.tar.gz
RUN mv geckodriver /usr/bin
COPY *.py ./
ENTRYPOINT ["/usr/bin/python3","/app/lambda_function.py"]
The Selenium test code:
#!/usr/bin/env python3
import util
import os
import sys
import requests


def lambda_wrapper():
    api_base = f'http://{os.environ["AWS_LAMBDA_RUNTIME_API"]}/2018-06-01'
    response = requests.get(api_base + '/runtime/invocation/next')
    request_id = response.headers['Lambda-Runtime-Aws-Request-Id']
    try:
        result = selenium_test()
        # Send result back
        requests.post(api_base + f'/runtime/invocation/{request_id}/response', json={'url': result})
    except Exception as e:
        # Error reporting
        import traceback
        requests.post(api_base + f'/runtime/invocation/{request_id}/error', json={'errorMessage': str(e), 'traceback': traceback.format_exc(), 'logs': open('/tmp/gecko.log', 'r').read()})
        raise


def selenium_test():
    from selenium.webdriver import Firefox
    from selenium.webdriver.firefox.options import Options

    options = Options()
    options.add_argument('-headless')
    options.add_argument('--window-size 1920,1080')
    ffx = Firefox(options=options, log_path='/tmp/gecko.log')
    ffx.get("https://google.com")
    url = ffx.current_url
    ffx.close()
    print(url)
    return url


def main():
    # For testing purposes, currently not using the Lambda API even in AWS so that
    # the same container can run on my local machine.
    # Call lambda_wrapper() instead to get geckodriver logs as well (not informative).
    selenium_test()


if __name__ == '__main__':
    main()
I'm able to successfully build this container on my local machine with docker build -t lambda-test . and then run it with docker run -m 512M lambda-test.
However, the exact same container crashes with an error when I try to upload it to Lambda and run it there. I set the memory limit to 1024 MB and the timeout to 30 seconds. The traceback says that Firefox was unexpectedly killed by a signal:
START RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30 Version: $LATEST
/app/lambda_function.py:29: DeprecationWarning: use service_log_path instead of log_path
ffx = Firefox(options=options, log_path='/tmp/gecko.log')
Traceback (most recent call last):
File "/app/lambda_function.py", line 45, in <module>
main()
File "/app/lambda_function.py", line 41, in main
lambda_wrapper()
File "/app/lambda_function.py", line 12, in lambda_wrapper
result = selenium_test()
File "/app/lambda_function.py", line 29, in selenium_test
ffx = Firefox(options=options, log_path='/tmp/gecko.log')
File "/usr/lib/python3.8/site-packages/selenium/webdriver/firefox/webdriver.py", line 170, in __init__
RemoteWebDriver.__init__(
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: Process unexpectedly closed with status signal
END RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30
REPORT RequestId: 52adeab9-8ee7-4a10-a728-82087ec9de30 Duration: 20507.74 ms Billed Duration: 21350 ms Memory Size: 1024 MB Max Memory Used: 131 MB Init Duration: 842.11 ms
Unknown application error occurred
I had it upload the geckodriver logs as well, but there wasn't much useful information in there:
1608506540595 geckodriver INFO Listening on 127.0.0.1:41597
1608506541569 mozrunner::runner INFO Running command: "/usr/bin/firefox" "--marionette" "-headless" "--window-size 1920,1080" "-foreground" "-no-remote" "-profile" "/tmp/rust_mozprofileQCapHy"
*** You are running in headless mode.
How can I even begin to debug this? The fact that the exact same container behaves differently depending upon where it's run seems fishy to me, but I'm not knowledgeable enough about Selenium, Docker, or Lambda to pinpoint exactly where the problem is.
Is my docker run command not accurately recreating the environment in Lambda? If so, then what command would I run to better simulate the Lambda environment? I'm not really sure where else to go from here, seeing as I can't actually reproduce the error locally to test with.
If anyone wants to take a look at the full code and try building it themselves, the repository is here - the lambda code is in lambda_function.py.
As for prior research, this question (a) is about ChromeDriver and (b) has had no answers for over a year. The link from that one only has information about how to run a container in Lambda, which I'm already doing. This answer is almost my problem, but I know there's not a version mismatch, because the container works on my laptop just fine.
I have exactly the same problem and a possible explanation.
I think what you want is not possible for the time being.
According to the AWS DevOps Blog, Firefox relies on the fallocate system call and /dev/shm.
However, AWS Lambda does not mount /dev/shm, so Firefox crashes when trying to allocate memory. Unfortunately, this behavior cannot be disabled for Firefox.
However, if you can live with Chromium, chromedriver has a --disable-dev-shm-usage option that disables the use of /dev/shm and writes shared-memory files to /tmp instead.
chromedriver works fine for me on AWS Lambda, if that is an option for you.
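For completeness, here is a minimal sketch of that setup with Selenium's Python bindings (assuming Chromium and a matching chromedriver are installed in the image; the --no-sandbox flag is an extra flag commonly needed in containers, not something from the blog post):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')
options.add_argument('--no-sandbox')             # often required when running as root inside a container
options.add_argument('--disable-dev-shm-usage')  # write shared memory files to /tmp instead of /dev/shm
options.add_argument('--window-size=1920,1080')

driver = webdriver.Chrome(options=options)
driver.get("https://google.com")
print(driver.current_url)
driver.quit()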
According to the AWS DevOps Blog, you can also use AWS Fargate to run Firefox/geckodriver.
There is an entry in the AWS forum from 2015 requesting that /dev/shm be mounted in Lambdas, but nothing has happened since then.
I have a pipeline which uses a global singularity image and rule-based conda wrappers.
However, some of the tools don't have wrappers (e.g. htslib's bgzip and tabix).
Now I need to learn how to run jobs in containers.
In the official documentation (link) it says:
"Allowed image urls entail everything supported by singularity (e.g., shub:// and docker://)."
Now I've tried the following image from singularity hub but I get an error:
minimal reproducible example:
config.yaml
# Files
REF_GENOME: "c_elegans.PRJNA13758.WS265.genomic.fa"
GENOME_ANNOTATION: "c_elegans.PRJNA13758.WS265.annotations.gff3"
Snakefile
# Directories------------------------------------------------------------------
configfile: "config.yaml"

# Setting the names of all directories
dir_list = ["REF_DIR", "LOG_DIR", "BENCHMARK_DIR", "QC_DIR", "TRIM_DIR", "ALIGN_DIR", "MARKDUP_DIR", "CALLING_DIR", "ANNOT_DIR"]
dir_names = ["refs", "logs", "benchmarks", "qc", "trimming", "alignment", "mark_duplicates", "variant_calling", "annotation"]
dirs_dict = dict(zip(dir_list, dir_names))

GENOME_INDEX=config["REF_GENOME"]+".fai"
VEP_ANNOT=config["GENOME_ANNOTATION"]+".gz"
VEP_ANNOT_INDEX=config["GENOME_ANNOTATION"]+".gz.tbi"

# Singularity with conda wrappers
singularity: "docker://continuumio/miniconda3:4.5.11"

# Rules -----------------------------------------------------------------------
rule all:
    input:
        expand('{REF_DIR}/{GENOME_ANNOTATION}{ext}', REF_DIR=dirs_dict["REF_DIR"], GENOME_ANNOTATION=config["GENOME_ANNOTATION"], ext=['', '.gz', '.gz.tbi']),
        expand('{REF_DIR}/{REF_GENOME}{ext}', REF_DIR=dirs_dict["REF_DIR"], REF_GENOME=config["REF_GENOME"], ext=['','.fai']),

rule download_references:
    params:
        ref_genome=config["REF_GENOME"],
        genome_annotation=config["GENOME_ANNOTATION"],
        ref_dir=dirs_dict["REF_DIR"]
    output:
        os.path.join(dirs_dict["REF_DIR"],config["REF_GENOME"]),
        os.path.join(dirs_dict["REF_DIR"],config["GENOME_ANNOTATION"]),
        os.path.join(dirs_dict["REF_DIR"],VEP_ANNOT),
        os.path.join(dirs_dict["REF_DIR"],VEP_ANNOT_INDEX)
    resources:
        mem=80000,
        time=45
    log:
        os.path.join(dirs_dict["LOG_DIR"],"references","download.log")
    singularity:
        "shub://biocontainers/tabix"
    shell: """
        cd {params.ref_dir}
        wget ftp://ftp.wormbase.org/pub/wormbase/releases/WS265/species/c_elegans/PRJNA13758/c_elegans.PRJNA13758.WS265.genomic.fa.gz
        bgzip -d {params.ref_genome}.gz
        wget ftp://ftp.wormbase.org/pub/wormbase/releases/WS265/species/c_elegans/PRJNA13758/c_elegans.PRJNA13758.WS265.annotations.gff3.gz
        bgzip -d {params.genome_annotation}.gz
        grep -v "#" {params.genome_annotation} | sort -k1,1 -k4,4n -k5,5n -t$'\t' | bgzip -c > {params.genome_annotation}.gz
        tabix -p gff {params.genome_annotation}.gz
        """

rule index_reference:
    input:
        os.path.join(dirs_dict["REF_DIR"],config["REF_GENOME"])
    output:
        os.path.join(dirs_dict["REF_DIR"],GENOME_INDEX)
    resources:
        mem=2000,
        time=30,
    log:
        os.path.join(dirs_dict["LOG_DIR"],"references", "faidx_index.log")
    wrapper:
        "0.64.0/bio/samtools/faidx"
Error
Building DAG of jobs...
Pulling singularity image shub://biocontainers/tabix.
WorkflowError:
Failed to pull singularity image from shub://biocontainers/tabix:
ESC[31mFATAL: ESC[0m While pulling shub image: failed to get manifest for: shub://biocontainers/tabix: the requested manifest was not found in singularity hub
File "/home/moldach/anaconda3/envs/snakemake/lib/python3.7/site-packages/snakemake/deployment/singularity.py", line 88, in pull
~
It appears this is a problem with the container?
(snakemake) [moldach#arc CONTAINER_TROUBLESHOOT]$ singularity pull shub://biocontainers/tabix
FATAL: While pulling shub image: failed to get manifest for: shub://biocontainers/tabix: the requested manifest was not found in singularity hub
In fact, I experience this problem with other biocontainers containers.
For example, I also need to use a container for bowtie2 indexing, and this is the error I get from biocontainers/bowtie2 versus another developer's container of the same tool, comics/bowtie2:
^C(snakemake) [moldach#arc CONTAINER_TROUBLESHOOT]$ singularity pull docker://biocontainers/bowtie2
FATAL: While making image from oci registry: failed to get checksum for docker://biocontainers/bowtie2: Error reading manifest latest in docker.io/biocontainers/bowtie2: manifest unknown: manifest unknown
(snakemake) [moldach#arc CONTAINER_TROUBLESHOOT]$ singularity pull docker://comics/bowtie2
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob a02a4930cb5d done
Does anyone know why?
BioContainers does not allow latest as a tag for their containers, so you need to specify the tag to be used.
From their doc:
The BioContainers community had decided to remove the latest tag. Then, the following command docker pull biocontainers/crux will fail. Read more about this decision in Getting started with Docker
When no tag is specified, it defaults to the latest tag, which of course is not allowed here. See here for bowtie2's tags. Usage like this will work:
singularity pull docker://biocontainers/bowtie2:v2.4.1_cv1
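Applied to the Snakefile, that simply means pinning the tag in the rule's container directive instead of relying on latest. A minimal, hypothetical rule as a sketch (the rule name and output are made up):

rule bowtie2_version:
    output:
        "logs/bowtie2_version.txt"
    container:
        "docker://biocontainers/bowtie2:v2.4.1_cv1"
    shell:
        "bowtie2 --version > {output}"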
Using another container solves the issue; however, the fact that I'm getting errors from BioContainers is troubling, given that these containers are very common and used as examples in the literature, so I will award the top answer to whoever can solve that specific issue.
As it were, using stackleader/bgzip-utility solved the issue of actually running this rule in a container.
container:
"docker://stackleader/bgzip-utility"
Once again, for those coming to this post, it's probably best to test any container first before running snakemake, e.g. singularity pull docker://stackleader/bgzip-utility.
I am configuring Jenkins to automatically deploy my successful builds to my Kubernetes cluster. I have manually set up the KUBECONFIG file in /var/lib/jenkins/.kube/config.
But my Jenkins job keeps giving the same error:
+ kubectl config --kubeconfig=/var/lib/jenkins/.kube/config view
error: error loading config file "/var/lib/jenkins/.kube/config": v1.Config.Contexts: \
[]v1.NamedContext: Clusters: []v1.NamedCluster: v1.NamedCluster.Name: Cluster: v1.Cluster.Server: \
CertificateAuthorityData: decode base64: illegal base64 data at input byte 47, error found in #10 byte of ... \
|ASXY9gkN$","server":|..., bigger context ...|"LS3tGS1PR0dJTiBDRVJUMLPAR0FURS0tLS0tCk1JSUV5gkN$", \
"server":"https://clsx-cloud-d734ef-0b|...
I copied the kube config file manually from my SSH-accessible account, i.e.:
cat home/username/.kube/config
You most likely copied the wrong screen output from the terminal, or from an editor such as nano.
The $ characters are illegal in base64 data and are the result of truncated file viewing in the terminal; make sure that you copy the real file data correctly.
For example:
xclip -sel clip < home/username/.kube/config
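If you want to verify the copied file rather than eyeball it, a short Python check works too (a sketch, assuming PyYAML is available and the Jenkins path from above):

import base64
import yaml  # pip install pyyaml

# A corrupted copy (e.g. one containing '$' characters) will fail to decode here.
with open("/var/lib/jenkins/.kube/config") as f:
    cfg = yaml.safe_load(f)

for cluster in cfg.get("clusters", []):
    data = cluster["cluster"].get("certificate-authority-data", "")
    base64.b64decode(data, validate=True)
    print(cluster["name"], "certificate-authority-data OK")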
I have fixed this error by removing files in this directory:
/var/lib/jenkins/.kube/
Note: take a backup of this directory first.
I am trying to use the Yocto tools for the first time, for my BeagleBone Black.
First I run this bash script to set up Yocto:
#!/bin/bash
WKDIR=/work
mkdir -p $WKDIR/beaglebone-black/yocto/sources
mkdir -p $WKDIR/beaglebone-black/yocto/builds
cd $WKDIR/beaglebone-black/yocto/sources
git clone -b morty git://git.yoctoproject.org/poky.git poky-morty
cd $WKDIR/beaglebone-black/yocto/
source sources/poky-morty/oe-init-build-env builds/build-bbb-morty
Then I edited the file local.conf in the "build-bbb-morty/conf" directory:
MACHINE ?= "beaglebone"
and added
DL_DIR ?= "${TOPDIR}/../dl"
IMAGE_INSTALL_append = " kernel-modules kernel-devicetree"
Then I ran bitbake: bitbake core-image-minimal
After about 8 hours on my fifth-generation Core i7, I got the following output in my terminal, and I have no idea what I need to do to fix it:
bitbake core-image-minimal
Parsing recipes: 100% |########################################################################################################| Time: 0:02:55
Parsing of 864 .bb files complete (0 cached, 864 parsed). 1318 targets, 67 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.32.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "Ubuntu-16.04"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "beaglebone"
DISTRO = "poky"
DISTRO_VERSION = "2.2.1"
TUNE_FEATURES = "arm armv7a vfp neon callconvention-hard cortexa8"
TARGET_FPU = "hard"
meta
meta-poky
meta-yocto-bsp = "morty:a3fa5ce87619e81d7acfa43340dd18d8f2b2d7dc"
NOTE: Fetching uninative binary shim from http://downloads.yoctoproject.org/releases/uninative/1.4/x86_64-nativesdk-libc.tar.bz2;sha256sum=101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca
--2017-01-18 15:51:09-- http://downloads.yoctoproject.org/releases/uninative/1.4/x86_64-nativesdk-libc.tar.bz2
Resolving downloads.yoctoproject.org (downloads.yoctoproject.org)... 198.145.20.127
Connecting to downloads.yoctoproject.org (downloads.yoctoproject.org)|198.145.20.127|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2473216 (2.4M) [application/octet-stream]
Saving to: ‘/work/beaglebone-black/yocto/builds/build-bbb-morty/../dl/uninative/101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca/x86_64-nativesdk-libc.tar.bz2’
2017-01-18 15:51:18 (297 KB/s) - ‘/work/beaglebone-black/yocto/builds/build-bbb-morty/../dl/uninative/101ff8f2580c193488db9e76f9646fb6ed38b65fb76f403acb0e2178ce7127ca/x86_64-nativesdk-libc.tar.bz2’ saved [2473216/2473216]
Initialising tasks: 100% |#####################################################################################################| Time: 0:00:14
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
WARNING: attr-native-2.4.47-r0 do_fetch: Failed to fetch URL http://download.savannah.gnu.org/releases/attr/attr-2.4.47.src.tar.gz, attempting MIRRORS if available
WARNING: libpng-native-1.6.24-r0 do_fetch: Failed to fetch URL http://distfiles.gentoo.org/distfiles/libpng-1.6.24.tar.xz, attempting MIRRORS if available
ERROR: core-image-minimal-1.0-r0 do_image_wic: Function failed: do_image_wic (log file is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788)
ERROR: Logfile of failure stored in: /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788
Log data follows:
| DEBUG: Executing python function set_image_size
| DEBUG: Python function set_image_size finished
| DEBUG: Executing shell function do_image_wic
| Checking basic build environment...
| Done.
|
| Build artifacts not found, exiting.
| (Please check that the build artifacts for the machine
| selected in local.conf actually exist and that they
| are the correct artifacts for the image (.wks file))
|
| The artifact that couldn't be found was kernel-dir:
| /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/deploy/images/beaglebone
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_image_wic (log file is located at /work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788)
ERROR: Task (/work/beaglebone-black/yocto/sources/poky-morty/meta/recipes-core/images/core-image-minimal.bb:do_image_wic) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1771 tasks of which 6 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/work/beaglebone-black/yocto/sources/poky-morty/meta/recipes-core/images/core-image-minimal.bb:do_image_wic
Summary: There were 2 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
While I'm not sure this is the cause of the problem, the preferred method of adding packages to the image in the local.conf context is the CORE_IMAGE_EXTRA_INSTALL variable.
Therefore change:
IMAGE_INSTALL_append = " kernel-modules kernel-devicetree"
to
CORE_IMAGE_EXTRA_INSTALL += "kernel-modules kernel-devicetree"
I think there is no problem with your approach.
It seems to be a build environment problem; the error log should confirm this.
Your log is located at "/work/beaglebone-black/yocto/builds/build-bbb-morty/tmp/work/beaglebone-poky-linux-gnueabi/core-image-minimal/1.0-r0/temp/log.do_image_wic.23788".
Your error log indicates that fetching binaries from the URL failed.
You can try tunneling through a proxy, or simply run bitbake again, since fetches can occasionally fail due to network conditions.
I have used spiffsimg to create a single file containing multiple lua files:
# ./spiffsimg -f lua.img -c 262144 -r lua.script
f 4227 init.lua
f 413 cfg.lua
f 2233 setupWifi.lua
f 7498 configServer.lua
f 558 cfgForm.htm
f 4255 setupConfig.lua
f 14192 main.lua
#
I then use esptool.py to flash the NodeMCU firmware and the file containing the lua files to the esp8266 (NodeMCU dev kit):
c:\esptool-master>c:\Python27\python esptool.py -p COM7 write_flash -fs 32m -fm dio 0x00000 nodemcu-dev-9-modules-2016-07-18-12-06-36-integer.bin 0x78000 lua.img
esptool.py v1.0.2-dev
Connecting...
Running Cesanta flasher stub...
Flash params set to 0x0240
Writing 446464 @ 0x0... 446464 (100 %)
Wrote 446464 bytes at 0x0 in 38.9 seconds (91.9 kbit/s)...
Writing 262144 @ 0x78000... 262144 (100 %)
Wrote 262144 bytes at 0x78000 in 22.8 seconds (91.9 kbit/s)...
Leaving...
I then run ESPLorer to check the status and get:
PORT OPEN 115200
Communication with MCU..Got answer! AutoDetect firmware...
Can't autodetect firmware, because proper answer not received.
NodeMCU custom build by frightanic.com
branch: dev
commit: b21b3e08aad633ccfd5fd29066400a06bb699ae2
SSL: true
modules: file,gpio,http,net,node,rtctime,tmr,uart,wifi
build built on: 2016-07-18 12:05
powered by Lua 5.1.4 on SDK 1.5.4(baaeaebb)
lua: cannot open init.lua
>
----------------------------
No files found.
----------------------------
>
Total : 3455015 bytes
Used : 0 bytes
Remain: 3455015 bytes
The NodeMCU firmware flashed correctly, but the lua files can't be located.
I have tried flashing to other locations (0x84000, 0x7c000), but I am just guessing at these locations based on reading threads on github.
I used the NodeMCU file.fscfg() routine to get the flash address and size. If I only flash the NodeMCU firmware I get the following:
print (file.fscfg())
524288 3653632
524288 is 0x80000, so I tried flashing only the spiffsimg file (lua.img) to 0x80000, then ran the same print statement and got:
print (file.fscfg())
786432 3391488
The flash address incremented by exactly the number of bytes in lua.img, which I don't understand: why would the flash address change? Is the first number returned by file.fscfg not the starting flash address, but the ending flash address?
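The arithmetic behind that observation, just using the numbers above:

# file.fscfg() output before and after flashing lua.img
start_before, size_before = 524288, 3653632   # start 0x80000
start_after, size_after = 786432, 3391488     # start 0xC0000

print(hex(start_before))            # 0x80000
print(start_after - start_before)   # 262144, exactly the size of lua.img (-c 262144)
print(size_before - size_after)     # 262144 as well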
What is the correct address for flashing an image file, contain lua files, that was created by spiffsimg?
The version of spiffsimg found here will provide the correct address for flashing an image file that contains lua files.
Do not use this version of spiffsimg as it is out of date.
To install the spiffsimg utility, you need to download and install the entire nodemcu-firmware package into a Linux environment and build it with make. (Note: make on my Debian box generated an error, but I was able to go to the ../tools/spiffsimg subdirectory and run make on the Makefile found there to create the utility.)
The spiffsimg instructions found here are quite clear, with one exception: the file name you specify with the -f parameter needs to include the characters %x, which will be replaced with the address the image file should be flashed to.
For example, the command
spiffsimg -f %x-luaFiles.img -S 4MB -U 465783 -r lua.script
will create a file in the local directory with a name like 80000-luaFiles.img, which means you should flash that image file at address 0x80000 on the ESP8266.
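As a small illustration of how the encoded offset can be turned into the matching esptool.py call (hypothetical helper, reusing the COM port from the question):

import re

# Parse the flash offset that spiffsimg encoded into the image file name,
# e.g. "80000-luaFiles.img" -> 0x80000, and print the matching flash command.
name = "80000-luaFiles.img"
offset = int(re.match(r"([0-9A-Fa-f]+)-", name).group(1), 16)
print(f"esptool.py -p COM7 write_flash 0x{offset:X} {name}")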
I've never done that myself but I'm reasonably confident the correct answer can be extracted from the docs.
-f specifies the filename for the disk image. '%x' will be replaced
by the calculated offset of the file system.
And a bit further down
The disk image file is placed into the bin directory and it is named
0x<offset>-<size>.bin where the offset is the location where it
should be flashed, and the size is the size of the flash part.
However, there's a slight mismatch between the two statements; we may have a bug in the docs. If '%x' will be replaced, then I'd expect the final name not to contain 0x anymore.
Furthermore, it is possible to define a fixed SPIFFS position when you build the firmware.
#define SPIFFS_FIXED_LOCATION 0x100000
This specifies that the SPIFFS filesystem starts at 1Mb from the start of the flash. Unless
otherwise specified, it will run to the end of the flash (excluding
the 16k of space reserved by the SDK).