SPIFFS I/O writes (from example) get exponentially slower with size? #1360
SPIFFS is not really a file system; it is a pile of data. There is no central index, allocation table, or directory. Each 4 KB block is divided into 256-byte pages, and each block has one 256-byte page reserved for status and control. When a page is changed, the change is written to a new page and the old page is marked as deleted. Only when all 15 usable pages in a block are marked deleted can the whole block be erased, and even then the erase is NOT done until all 'fresh' pages have been used; this is how the flash is wear-leveled. So to find a file, the whole pile has to be searched, one block at a time, looking at every block until the file is found or the entire pile has been read.
When flash is being accessed, the CPUs are stalled, because SPIFFS uses the same flash chip that code is executed from, and the flash chip can only do one thing at a time.
Chuck
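To put rough numbers on that layout, here is a small host-side C++ sketch (the 1 MB partition size is just an assumed example; the page and block sizes are the ones described above):
// Rough arithmetic for the layout described above (the partition size is an
// assumed example; page/block sizes are the defaults mentioned in this thread).
#include <stdio.h>

int main() {
    const unsigned partitionBytes    = 1 * 1024 * 1024;             // assumed 1 MB SPIFFS partition
    const unsigned blockBytes        = 4096;                        // one "page block"
    const unsigned pageBytes         = 256;                         // one logical page
    const unsigned pagesPerBlock     = blockBytes / pageBytes;      // 16
    const unsigned dataPagesPerBlock = pagesPerBlock - 1;           // one page kept for status/control
    const unsigned blocks            = partitionBytes / blockBytes; // 256
    const unsigned dataPages         = blocks * dataPagesPerBlock;  // 3840

    printf("%u blocks, %u pages/block (%u usable), %u usable data pages\n",
           blocks, pagesPerBlock, dataPagesPerBlock, dataPages);
    // With no central index, locating a file's pages (or its tail when
    // appending) can mean scanning page headers across all of those blocks,
    // so the cost of each operation grows with how much data is stored.
    return 0;
}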
Thank you, this is quite interesting!
But does this explain why sequentially writing / appending to a file in fixed-size increments (as the given example does) takes exponentially more time as the file grows?
(It might, but exponential growth still seems a bit much.)
The relevant code is
void testFileIO(fs::FS &fs, const char * path){
    Serial.printf("Testing file I/O with %s\r\n", path);

    static uint8_t buf[512];
    size_t len = 0;
    File file = fs.open(path, FILE_WRITE);
    if(!file){
        Serial.println("- failed to open file for writing");
        return;
    }

    size_t i;
    Serial.print("- writing" );
    uint32_t start = millis();
    for(i=0; i<2048; i++){
        if ((i & 0x001F) == 0x001F){
            Serial.print(".");
        }
        file.write(buf, 512);
    }
    Serial.println("");
    uint32_t end = millis() - start;
    Serial.printf(" - %u bytes written in %u ms\r\n", 2048 * 512, end);
    file.close();
}
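For anyone reproducing this, a small variation of that loop (a sketch using the same fs::FS API as the example; the function name and reporting interval are arbitrary) times each 512-byte chunk individually, which makes it easy to see whether the per-write latency itself grows as the file grows:
// Sketch: same write loop, but report the duration of every 128th 512-byte
// chunk so the growth of per-write latency becomes visible.
void testChunkTiming(fs::FS &fs, const char *path){
    static uint8_t buf[512];
    File file = fs.open(path, FILE_WRITE);
    if(!file){
        Serial.println("- failed to open file for writing");
        return;
    }
    for(size_t i = 0; i < 2048; i++){
        uint32_t t0 = millis();
        file.write(buf, 512);
        uint32_t dt = millis() - t0;
        if((i & 0x7F) == 0){                 // every 128 chunks
            Serial.printf("chunk %u: %u ms\r\n", (unsigned)i, (unsigned)dt);
        }
    }
    file.close();
}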
Let's say the file is 0 bytes long and you append 512 bytes; it took […]
Now what happens when you delete a file? Each of the individual pages has to be marked as deleted; they are not erased until needed. So if 1/2 of the SPIFFS partition is currently […]
Here is an optimization discussion on a SPIFFS implementation for Cortex CPUs: "SPIFFS implementation for Cortex CPU".
Chuck
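If each append has to re-walk the pages already written, the total write time grows roughly quadratically with file size, which can easily look "exponential" in practice. A toy model (host-side C++; the per-chunk constants are made up, only the shape matters):
// Toy model (not SPIFFS code): if appending chunk i costs a fixed part plus a
// part proportional to the i chunks already on flash, the total for n chunks
// grows like n^2 even though each individual append is only O(n).
#include <stdio.h>

int main() {
    const double fixedMs = 1.0;    // assumed per-chunk base cost
    const double scanMs  = 0.02;   // assumed extra cost per already-written chunk
    for (int n = 512; n <= 2048; n *= 2) {
        double total = 0;
        for (int i = 0; i < n; i++) total += fixedMs + scanMs * i;
        printf("%d chunks -> ~%.0f ms (model)\n", n, total);
    }
    return 0;
}
With these made-up constants, writing 4x as many chunks takes roughly 14x as long, which is the kind of blow-up reported in this issue.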
Does SD card access on the esp32 perform better than SPIFFS in this respect?
SD cards perform somewhat better, but flush() is missing in SDFS (which I mixed up with SPIFFS in the comment above). So with an SD card you have to work around the missing flush() by closing and reopening the file to really commit the data to the medium.
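A minimal sketch of that workaround (the function name and the 512-chunk interval are made up for illustration; it assumes the same fs::FS/File API used elsewhere in this thread):
// Periodically close and reopen the file in append mode so data actually
// reaches the card, since flush() is reported missing in SDFS above.
void appendWithPeriodicClose(fs::FS &fs, const char *path, const uint8_t *buf,
                             size_t chunkLen, size_t chunks){
    File file = fs.open(path, FILE_APPEND);
    if(!file) return;
    for(size_t i = 0; i < chunks; i++){
        file.write(buf, chunkLen);
        if((i % 512) == 511){            // arbitrary interval: close to commit, then reopen
            file.close();
            file = fs.open(path, FILE_APPEND);
            if(!file) return;
        }
    }
    file.close();
}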
SPIFFS on the ESP32's flash is excruciatingly slow. I just opened a new issue about it: creating a file with 28 bytes of text inside takes about 130-150 milliseconds, and reading those 28 bytes takes more than 1 second. Slower than 90's hard disks :)
So I just added a "fix" that disables the timeout in the Stream portion of the File API. That means any operation from the Stream class that used timeoutRead or timeoutPeek no longer waits for the default 1 second once the end of the file has been reached.
I just ran the example SPIFFS_Test. It hung the first time. I uploaded it again, and the test finished successfully.
But the write performance seems extremely slow:
So, 45.8 seconds for 1MB of data.
Is this normal?
When I change the example to do only 1024 iterations of the write, I get:
And when I put it down to 512 iterations:
So it seems to get exponentially slower the more data it has to write.