Thanks for sharing! It's exciting to see littlefs getting adopted.
You may actually want to try reducing the read size / prog size. I found that ~64B is an optimal value for 4KiB-erase-size NOR flash chips (tested on the MX25R).
littlefs's caches are fairly dumb, so even if it only needs to read a small value, it will still read an entire cache line (prog/read size). This means that larger caches can end up costing you, which is a bit counterintuitive. (This is being fixed in 2.0 with a separate cache size option + smarter caching.)
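Concretely, the suggestion maps onto the 1.x config roughly like this (just a sketch: the flash_* callbacks are placeholders for whatever read/prog/erase/sync hooks the port already provides, and the block_count is only an example for a 2MiB partition):

```c
#include "lfs.h"

// Placeholder prototypes for the port's existing flash hooks.
extern int flash_read(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, void *buffer, lfs_size_t size);
extern int flash_prog(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, const void *buffer, lfs_size_t size);
extern int flash_erase(const struct lfs_config *c, lfs_block_t block);
extern int flash_sync(const struct lfs_config *c);

static const struct lfs_config tuned_cfg = {
    .read  = flash_read,
    .prog  = flash_prog,
    .erase = flash_erase,
    .sync  = flash_sync,

    .read_size   = 64,    // smaller cache line -> less over-reading on small accesses
    .prog_size   = 64,    // NOR can program small chunks, so small progs are cheap
    .block_size  = 4096,  // keep equal to the chip's sector erase size
    .block_count = 512,   // example only: 2MiB partition / 4096B blocks
    .lookahead   = 128,   // littlefs 1.x: must be a multiple of 32
};
```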
Though I wouldn't be surprised if SPIFFS is faster in certain situations. SPIFFS and littlefs have very different approaches.
Most notably, I think SPIFFS will outperform littlefs if erases are much, much slower than the storage's bus speed. (There are also several improvements on this front in 2.0, such as static wear-leveling in metadata pairs.)
Hi guys,
At the Whitecat team we recently added support for littlefs to our Lua RTOS for the ESP32 SoC. Can anyone recommend the block_size, read_size, and prog_size for a Winbond flash chip?
The reason is that, with our current configuration, writes seem to be slower than under SPIFFS, while from what I have read, littlefs performance should be better.
Our current configuration is:
block size = 4096
read size = 1024
prog size = 1024
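For context, those values plug into struct lfs_config roughly like this (a simplified sketch, not the actual port code: it assumes the legacy ESP-IDF spi_flash API, and FS_BASE_ADDR / FS_PART_SIZE are made-up names for the filesystem partition's offset and size):

```c
#include "lfs.h"
#include "esp_spi_flash.h"

#define FS_BASE_ADDR 0x180000      // hypothetical, sector-aligned partition offset
#define FS_PART_SIZE (512 * 4096)  // hypothetical 2MiB partition

static int lfs_esp32_read(const struct lfs_config *c, lfs_block_t block,
                          lfs_off_t off, void *buffer, lfs_size_t size) {
    return spi_flash_read(FS_BASE_ADDR + block * c->block_size + off,
                          buffer, size) == ESP_OK ? 0 : LFS_ERR_IO;
}

static int lfs_esp32_prog(const struct lfs_config *c, lfs_block_t block,
                          lfs_off_t off, const void *buffer, lfs_size_t size) {
    return spi_flash_write(FS_BASE_ADDR + block * c->block_size + off,
                           buffer, size) == ESP_OK ? 0 : LFS_ERR_IO;
}

static int lfs_esp32_erase(const struct lfs_config *c, lfs_block_t block) {
    // The littlefs block size matches the ESP32's 4KiB flash sector size here.
    return spi_flash_erase_sector(FS_BASE_ADDR / c->block_size + block)
               == ESP_OK ? 0 : LFS_ERR_IO;
}

static int lfs_esp32_sync(const struct lfs_config *c) {
    return 0;  // progs go straight to flash, nothing to flush
}

static const struct lfs_config cfg = {
    .read  = lfs_esp32_read,
    .prog  = lfs_esp32_prog,
    .erase = lfs_esp32_erase,
    .sync  = lfs_esp32_sync,

    .read_size   = 1024,
    .prog_size   = 1024,
    .block_size  = 4096,               // matches the 4KiB NOR sector erase
    .block_count = FS_PART_SIZE / 4096,
    .lookahead   = 128,                // littlefs 1.x: multiple of 32
};

static lfs_t lfs;

// Mount, formatting on first use.
void fs_init(void) {
    if (lfs_mount(&lfs, &cfg) < 0) {
        lfs_format(&lfs, &cfg);
        lfs_mount(&lfs, &cfg);
    }
}
```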
If anyone is interested in the port, you can find it at:
https://github.com/whitecatboard/Lua-RTOS-ESP32/blob/master/components/sys/vfs/lfs.c
Thanks in advance!