storage: Storage area #79665
Conversation
Force-pushed from 0414d9a to 3846101
@andrisk-dev I couldn't add you as a reviewer, but your comments are welcome anyhow.
I like that erase is supported as part of the API (i.e. isn't an optional method) and that both write and erase block sizes are easily accessible.
```c
int storage_area_read(const struct storage_area *area, size_t start,
		      const struct storage_area_chunk *ch, size_t cnt);
```
Rather than read / dread and prog / dprog, it might be better to use readv / read and writev / write as verbs, since they're more conventional.
Thank you for this comment. I've never come across readv/writev (never stop learning...) but you are completely right. I wonder if I should also use `iovec` instead of `storage_area_chunk` (another self-invented terminology).
Maybe - I would definitely still prefix the functions with `storage_...`. One possibility is to use the `zvfs_` prefix for a native `struct zvfs_iovec`.
Was discussing this today at work - it seems like it would be good for users. That would be a major improvement for e.g. SPI NOR and mapped flash memory too. The current API really does not do the user any favours in that sense.
I had only a brief look and did not take a look at storage_area_store (which should actually be a separate PR, because it is again bundling ideas), but I think that the idea of Storage Area as such can work out.
I still think that the flash-f/m/rram division by the existence of erase is meh, because, as I have mentioned, in flash erase is hampering software while in ram devices it is added for the benefit of the user - the usage is different, and a strict division neglects this.
Nevertheless: this seems very modular, has low coupling and high cohesion, does not skip layers; it seems that this will work out as an intermediate layer for storage.
This can actually give lower layers space and time to improve raw access and allow rework without immediately passing all the changes up, and allow us to address them only when used directly for design purposes.
I need to admit that just from the brief look I am impressed @Laczen.
Why not simply pass a
I like to limit the number of arguments to a function to 4, so they can pass through the registers. What is your opinion on adding the auto_erase as a property (
Personally, I know that at least some would prefer if storage devices automatically erased blocks all the time with a write or copy. If (writeable) devices don't support an explicit erase operation, they could simply succeed or return a specific error value that would indicate they don't require erase (e.g.
I have updated the PR to use A storage_area now also has a property
Impressive work!
subsys/storage/storage_area/Kconfig (Outdated)

```
config STORAGE_AREA_STORE_READONLY
	bool "Disable writing"
	select STORAGE_AREA_STORE_DISABLECOMPACT
```
Should this be an option per instance instead of a global option?
The Kconfig options have been created to allow generating a minimal image size. These Kconfig options are associated with the library and not an instance.
In a typical application it is my expectation that there will be 3 different uses at the same time:
A. A read-only store containing some factory settings,
B. A circular buffer (without persistence requirement) type store that is used to store measurement data,
C. A store with persistence requirements, e.g. to store settings.
These 3 uses are supported at the same time and share code.
In a less typical application only the circular buffer might be used; the code that is associated with the moving and persistence can be removed.
A different application might only want to use the store solution in a read-only way (e.g. a bootloader); in that case all code associated with writing can be removed.
It might be better to put this into DT bindings. That way, it could be per-instance for anyone who has an unusual setup, but could easily be the normal setup too, with relatively little effort.
There are no DT bindings, it should be passed to the macro definitions IMO.
Regarding DT bindings, this is intentional. IMHO this is a software layer and not a hardware layer and thus should not be configured by DT, but instead by the multi-instance configuration system that is being worked on.
It is however possible to use the provided macros with data coming from DT, so if it is wanted to create storage areas based on DT this addition could be added easily (it is just a FOREACH_INST_...) for some backends (e.g. flash, ram); for other backends (eeprom, disk) it would need some kind of partitioning definition that is not available in DT. It is not the intention of this PR to go into DT changes.
Making the READ_ONLY, DISABLE_COMPACT instance based would mean splitting up the `storage_area_store` structs as well as all read/write routines and would duplicate code. This would result in a smaller (but dedicated) `storage_area_store` struct, but a much larger code size.
No objection from me to having a global Kconfig option, but at the same time, it should be possible to have per-instance flags.
I don't see anything wrong with changing the macro implementation based on Kconfig settings - e.g. `COND_CODE_1()`. Similarly, a static inline wrapper could just use an `#ifdef` to simplify things.
It was not the intention to create a read-only storage solution (nvs, fcb or zms also don't provide this). But since there are two potential users that think it would be good that a storage area store instance has a "flag" read-only I will have a look at how to best integrate this.
Yeah, I also agree that this is an impressive change.
```c
enum storage_area_properties_mask {
	SA_PROP_READONLY = 0x0001,
	SA_PROP_FOVRWRITE = 0x0002, /* full overwrite (ram, rram, ...) */
	SA_PROP_LOVRWRITE = 0x0004, /* limited overwrite (nor flash) */
```
This should be SA_PROP_NOVRWRITE - no overwrite.
If I correctly consider this layer like fopen to open (flash/eeprom), then this is not the place to play with the device in any other way than with erase or without.
To use the LOVERWRITE feature you already have to descend to understanding the device below, and recognize not only whether such a device is flash with erase vs without, but also whether there is ecc in microbit memories, where there is no-overwrite at the write block level; which means you directly need to go for the Flash API for such features and knowledge.
The same thing happens with various otp or uicr things in devices.
Also spi flash devices may have otp (like the mx25 series), and there are otp eeprom devices designed for data recording that neither have erase nor the possibility to update, and they can still be used with the layer.
@pdgendt, thank you for looking into this PR in so much detail. I will integrate your proposals. @pdgendt, regarding the @pdgendt, @cfriedt regarding the
Force-pushed from ffdb12d to 5555e88
```c
 * There following methods are exposed:
 * storage_area_read(), ** read data **
 * storage_area_readv(), ** read data vector **
 * storage_area_write(), ** write data **
 * storage_area_writev(), ** write data vector **
 * storage_area_erase(), ** erase (in erase block addressing) **
 * storage_area_ioctl() ** used for e.g. getting xip addresses **
```
You may want to check how the doxygen docs render (you can run `make doxygen` from the doc folder to quickly test locally).
See https://builds.zephyrproject.io/zephyr/pr/79665/docs/doxygen/html/storage__area_8h.html#details
This section in particular is missing bullet points and the bold text is not correctly formatted.
There are already great docs so it would be a shame if they would render poorly :)
Will try to review once more once it's been cleaned up.
I have cleaned up the documentation, this should be a lot better now.
```c
 * area, ...
 *
 * A storage area is defined e.g. for flash:
 * struct storage_area_flash fa =
```
Needs to be in a Doxygen C code block.
done
The example and settings backend will be added at a later stage.
- Not enough comments, which will make it hard for the community to maintain.
- Some hidden logic.
- Unstable stack pumping due to dynamic stack allocation, including some violation of the C standard regarding array size.
- Re-casting pointers to remove `const`.
- Not completely tested functions (buffer overflow).
- Some minor logic problems where values returned from subsystems get mixed up with values used by this subsystem.
```c
hbuf[0] = SAS_MAGIC;
hbuf[1] = data->wrapcnt;
sys_put_le16((uint16_t)store_iovec_size(iovec, iovcnt), &hbuf[2]);
wr[0].data = hbuf;
wr[0].len = sizeof(hbuf);
wr[iovcnt + 1].data = cbuf;
wr[iovcnt + 1].len = sizeof(cbuf);
```
The header should have a structure defined, as this is defining the on-storage data structure, and should not be packed ad hoc.
It makes it harder to navigate and maintain the code, as instead of searching for where the header is used you have to search for the magic.
The header is used in only 2 places in the code; it is respectively packed or unpacked in these routines.
I don't see the benefit of a struct. Whatever struct is used would need to be converted into a `uint8_t header[4]` because this ensures how it is written regardless of endianness.
Renamed the used buffer to `header[]`, this can now be navigated easily.
Every time something needs to change in the header, you literally have to go through the code and repack the header.
Second issue: structures are documentation; you are not documenting the structure of the storage.
Third issue: as far as I understand, callbacks get a `storage_area_record` which points to the header, and they have to skip that, so a callback actually has a header but has to manually parse it.
```c
		break;
	}

	memcpy(buf + (rpos - apos), data8, modlen);
```
Buffer overflow.
How ?
```c
		record->size = 0U;
	}

	if ((rc != 0) && (rc != -ENOENT)) {
```
Underlying driver can also return -ENOENT which means that failed read at line 149 will not be here recognized as a problem.
There is no flash, eeprom or disk driver that returns -ENOENT, but I will change it. BTW this would also be a problem for nvs.
fixed
> There is no flash, eeprom or disk driver that returns -ENOENT, but I will change it. BTW this would also be a problem for nvs.

Maybe then it should be reported as a bug to fix.
```c
size_t rsize = (size_t)sys_get_le16(&buf[2]);
size_t avail =
	store->sector_size - rdpos - SAS_CRCSIZE - SAS_HDRSIZE;
bool size_ok = ((rsize > 0U) && (rsize < avail));
```
There is hidden logic here that makes `store_record_next_in_sector` work on erased devices, regardless of erase value, even though it may seem that this tests a value that has been explicitly written by the user writing data.
This should be clearly commented.
There is no hidden logic, the system does not care about the erase-value.
I could add a comment that the MAGIC was chosen so as not to be found on a system with erase-value 0x00 or 0xff.
fixed
Oh, I thought this works as planned, and you just have not noticed:

0. crc_ok = true, record->size = don't care, wrapchk = false
- at line 150, rdoff = some sector offset + rdpos (don't care)
- at line 151, the header of the record gets read
- at line 156, without confirming that this is a valid record header, the size is read with an upper limit of 0xffff

Now at line 157 `avail` is calculated, which I guess is limited at whatever `size_t` can handle - SAS_CRCSIZE + SAS_HDRSIZE.
Now assume that avail is < 0xffff; at line 159 `size_ok` will be `false` on both types of devices, erasing to 0xff and 0x00, as `((rsize > 0U) && (rsize < avail))` fails for `rsize == 0x0000` or `rsize == 0xffff` - helping with detecting that there is no record, kinda.
But as soon as `sector_size` gets bigger than 0xffff this becomes true for any `rsize` != 0, as `rsize` is limited at 0xffff. The above and any junk that fits in the range will make `size_ok == true`.
Now in line 165, the wrapcnt of the record gets overwritten.
The condition in line 169 is at this point false, because, from point 0, `crc_ok == true` and from the above `size_ok == true`, which means that `store_record_valid` is not run on the record.
If there is now junk equivalent to MAGIC in the position of the magic, then from the above `crc_ok == true`, `size_ok == true`, the wrap counter is the same as set in line 165, and the function returns that it found a record, even though it has no idea what it has found.
Hi @de-nordic, sorry for the late reply.
The aim of this routine is to find the next "valid" entry. A valid entry will have the following properties:
- The header magic and wrapcnt are correct,
- The size is valid (> 0 and does not cross a sector boundary),
When everything is OK also the crc32 will be correct, but for speed reasons it is desired to postpone the crc32 verification to the user. In "normal" situations the crc32 is not checked. However if the walking needs to recover from bad data (in magic, wrapcnt, size) no matter how they have appeared, the crc32 check is enabled as an extra check.
I am reworking the logic a little.
```c
const struct storage_area_store *store = record->store;
struct storage_area_store_data *data = store->data;

if ((cb->move == NULL) || (!cb->move(record))) {
```
Pointless `cb->move == NULL` check; you have already checked it in the caller, which refuses to work if it is not provided.
While it is true that this is already checked, it is good practice to evaluate a cb before doing it.
Then check cb for not being NULL too.
Fixed
```c
const size_t sector_size = store->sector_size;
const size_t align = area->write_size;
const struct storage_area_record dest = {
	.store = (struct storage_area_store *)store,
```
Discarding const qualifier is never a good idea.
fixed
```c
const size_t wrpos = data->sector * sector_size + data->loc;
const size_t alsize =
	SAS_ALIGNUP(SAS_HDRSIZE + record->size + SAS_CRCSIZE, align);
uint8_t buf[SAS_MAX(SAS_MINBUFSIZE, align)];
```
That pumps the stack for no reason and actually brings in instability and makes it hard to profile stack usage, because now you have stack size requirements that are basically the max align of all drivers in the device, or SAS_MINBUFSIZE, if that is smaller.
Depending on how deep in the stack your frame is, you may start failing the system in a hard to predict way.
This saves nothing, the stack requirements remain the same: "as much as the thing needs", and all you get here is just a different value being subtracted from the base pointer.
This is actually stack limiting. For devices with small alignment the reads are combined to one larger read, for devices with large alignment the reads are limited to the align size. The aim here is to work on all devices, including these with a write-block-size equal to 512.
That is NOT how stack works, and it is not limiting anything. CONFIG_MAIN_STACK_SIZE does limiting for main thread, whatever it is set to, and that is not available for anything else. And that "dynamic" allocation allocates from that limit anyway, by decreasing base pointer by the size used in function frame, and moving all incoming frames more or less down the address space.
With "this is actually stack limiting" I intended to say "this is stack usage limiting": it will only use what is required by the alignment, unless the alignment is small, in which case it will use the `SAS_MINBUFSIZE`.
This does imply Variable Length Array (VLA), which should be avoided if possible IMO.
> With "this is actually stack limiting" I intended to say "this is stack usage limiting", it will only use what is required by the alignment, unless the alignment is small in which case it will use the `SAS_MINBUFSIZE`.

So it is a pointless gimmick that functionally does nothing, just makes you look at the code for longer to figure out what the size of the buffer will be, when it is capped at some specified max anyway.
VLAs, as @pdgendt named this, have no use in systems with a capped stack over non-pageable memory; when you have paging, then, yes, this may make programs working on smaller sets of data trigger less IO on paging, where stack allocation commits another page. In Zephyr, on most devices here, you have constant time access to any address.
I haven't named this: Variable Length Arrays
Variable Length Arrays should not be used in Zephyr as per Coding Guidelines
> I haven't named this: Variable Length Arrays

I did not imply you invented the name, but that you brought it into the discussion as the proper name for the mechanism.
@pdgendt @kartben The use of a vla was to limit the size of the buffer to only the required size. I will rework this using a Kconfig value. To size this value correctly for all possible use cases (including disks and flashes with a large write-block-size) this will probably result in a larger than required default stack usage. I will try to make it as simple as possible from a user perspective, but a check for the config value will need to be added to the mount procedure.
Force-pushed from 2f7e43e to 083acf2
more than happy to review my nack, documentation looks much better, thanks!
```c
struct storage_area_record {
	struct storage_area_store *store;
	size_t sector;
	size_t loc;
	size_t size;
};
```
This is literally passed to the user callback for use and completely lacks documentation.
Does loc point to the header or past the data?
Does size contain the header? The CRC? The skipped crc?
Force-pushed from 5be3740 to b629598
The commit introduces a new way of working with storage for zephyr.

A storage area is an area on flash, eeprom, ram, disk that describes how this area will be used. It defines the write block size, erase block size and size of the area and provides methods to read, write, erase and an ioctl function. The write block and erase block sizes that are specified are not limited to the sizes of the underlying device but they can be a multiple. A verification option is available to avoid defining area properties that are not supported by the backing device. The storage area provides a simple way of working with images and allows the creation of "matching" images on different devices with different properties (e.g. erase-block sizes). The storage area definitions can be extended to provide e.g. encrypted storage areas or storage areas that are a mix of flash and ram. The reading and writing routines not only support a classical direct write, but also writing in chunks, which allows stack saving when writing or reading data (e.g. when data that consists of multiple elements needs to be written).

On top of the storage area a storage area store is introduced that divides the storage area into sectors and stores data as records in a simple and space saving format and ensures data integrity by using a crc32. The storage area store is not a storage solution, but rather a building block for storing id-data, key-data, ... solutions. The storage area store adds a configurable "cookie" at the beginning of each sector that can be used to describe what information can be found in the store as well as a version, ... The storage area store also provides support for modifying the first byte of data in a record. This can be used to invalidate records.

Signed-off-by: Laczen JMS <laczenjms@gmail.com>
Add tests for storage area and storage area store Signed-off-by: Laczen JMS <laczenjms@gmail.com>
Hi @Laczen do you plan to come back to this PR? Would be nice to have this in zephyr!
Hi @pdgendt IMO all remarks have been addressed. Regarding documentation, the routine naming was carefully chosen to avoid the need to document. Documentation in code tends to get out of sync with the code, and avoiding unnecessary documentation in code avoids this.
No it is not, at least structures to be used by users are not documented:
Your code is hard to read in the first place. Second, we are reviewing things, so we may as well point out that documentation needs fixing: there is no excuse for not properly documenting things.
```c
const off_t eoff = flash->doffset + sblk * area->erase_size;
const size_t esize = bcnt * area->erase_size;

rc = flash_erase(flash->dev, eoff, esize);
```
Not all flash devices support erase. Note also that it will not be required for flash devices to have erase, as the flash api starts to support devices of various capabilities - including lack of erase. This means that you have to have code here checking if erase is supported by the device and call flash_flatten for devices that do not have erase.
Generally, if the erase here is not mandated by a device requirement to erase prior to write, then you can just call flash_flatten without checks, as the function itself will do the data swipe.
```c
 * the write size of the storage area. Records can be written to a sector
 * until space is exhausted (write return `-ENOSPC`).
 *
 * To create space for new record the storage area store can be `advanced` or
```
Not addressed yet.
```c
 * defines, but they might differ slightly.
 *
 * The write_size, erase_size, ... are declarations of how the storage_area
 * will be used The write_size is limited to a power of 2, erase_size should
```
Suggested change:
```diff
- * will be used The write_size is limited to a power of 2, erase_size should
+ * will be used. The write_size is limited to a power of 2, erase_size should
```
```c
 *
 * see @ref STORAGE_AREA_DISK_RW_DEFINE for parameters
 *
 * remark: the write-size and erase-size are not used in read-only, however
```
To make these remarks stand out in the generated documentation, consider using the `@remark`, `@important` or `@note` doxygen directives.
```c
 * until space is exhausted (write return `-ENOSPC`).
 *
 * To create space for new record the storage area store can be `advanced` or
 * `compacted`. The `advance` method will simply take into use a next sector.
```
Suggested change:
```diff
- * `compacted`. The `advance` method will simply take into use a next sector.
+ * `compacted`. The `advance` method will simply take the next sector into use.
```
```c
 * data to the front of the buffer to ensure records are persistent,
 * - Simple circular buffer: data overwrites old data when space is exhausted,
 * - Read-only record store: when only reading is required,
 * .
```
Suggested change (remove the stray line):
```diff
- * .
```
```c
#define SAS_MIN(a, b) (a < b ? a : b)
#define SAS_MAX(a, b) (a < b ? b : a)
#define SAS_ALIGNUP(num, align) (((num) + ((align) - 1)) & ~((align) - 1))
#define SAS_ALIGNDOWN(num, align) ((num) & ~((align) - 1))
```
Use the existing macros `MIN`/`MAX`/`ROUND_UP`/`ROUND_DOWN`.
```c
store_give_semaphore(const struct storage_area_store *store)
{
#ifdef CONFIG_STORAGE_AREA_STORE_SEMAPHORE
	(void)k_sem_give(&store->data->semaphore);
```
`k_sem_give()` has return type `void`, so the cast is unnecessary:
```diff
- 	(void)k_sem_give(&store->data->semaphore);
+ 	k_sem_give(&store->data->semaphore);
```
```c
	.store = record->store,
	.sector = data->sector,
	.loc = data->loc,
	.size = record->size};
```
Trailing commas in structs allow adding lines without the need to touch other lines:
```diff
- 	.size = record->size};
+ 	.size = record->size,
+ };
```
```c
const size_t wrpos = data->sector * sector_size + data->loc;
const size_t alsize =
	SAS_ALIGNUP(SAS_HDRSIZE + record->size + SAS_CRCSIZE, align);
uint8_t buf[SAS_MAX(SAS_MINBUFSIZE, align)];
```

This does imply a Variable Length Array (VLA), which should be avoided if possible IMO.
@Laczen any updates on this? Will those be moved forward?
Yes, I will be working on this. But to be honest it is at the end of my todo list. The zephyr community has a problem. People are allowed in public to make personal attacks accusing me of trying to plaster my copyright (no, this doesn't fall off when you copy a file), making me look bad because I try to avoid nvs being used in a bad way for users, insinuating that the solution for data storage presented in this PR is not well thought through, insinuating that calling data that is added by a higher up system a cookie needs a copyright notice...
Personal attacks are certainly not OK as per our Code of Conduct, and in no way are they "allowed" in the project. I would encourage you to reach out to conduct@zephyrproject.org if there are issues you would like to report.
The PR introduces a new way of working with storage for zephyr.

A `storage_area` is an area on flash, eeprom, ram, disk that describes how this area will be used. It defines the write block size, erase block size and size of the area and provides methods to read, write, erase and an ioctl function. The write block and erase block sizes that are specified are not limited to the sizes of the underlying device but they can be a multiple. A verification option is available to avoid defining area properties that are not supported by the backing device.

The `storage_area` provides a simple way of working with images and allows the creation of "matching" images on different devices with different properties (e.g. erase-block sizes). The `storage_area` definitions can be extended to provide e.g. an encrypted `storage_area` or a `storage_area` that is a mix of flash and ram. The reading and writing routines not only support a classical direct write, but also writing in chunks, which allows stack saving when writing or reading data (e.g. when data that consists of multiple elements needs to be written).

On top of the `storage_area` a `storage_area_store` is introduced that divides the `storage_area` into sectors and stores data as records in a simple and space saving format and ensures data integrity by using a crc32. The `storage_area_store` is not a storage solution, but rather a building block for storing id-data, key-data, ... solutions. The `storage_area_store` adds a configurable "cookie" at the beginning of each sector that can be used to describe what information can be found in the store as well as a version, ... The `storage_area_store` provides support for a storage solution with or without persistence requirements. The `storage_area_store` also provides support for modifying the first part of data in a record that is excluded from the crc32 calculation. This can be used to invalidate records.

Solves #79661