How to Only Reset Matter and Matter related nvs data (CON-1529) #1259
Can you elaborate more on "only reset matter and matter related nvs data"? Do you mean to only erase the Matter-specific data, or are you expecting anything more? esp_matter::factory_reset() only erases the Matter-specific data, but it also resets (i.e. reboots) the device. |
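For context, here is a minimal sketch of how this API is typically invoked from application code. The callback name and the button trigger are assumptions for illustration only; the call itself is the esp_matter::factory_reset() discussed above.

```cpp
#include <esp_log.h>
#include <esp_matter.h>

// Hypothetical callback wired to a "factory reset" button in the application.
static void app_factory_reset_button_cb(void *arg, void *data)
{
    ESP_LOGI("app", "Triggering Matter factory reset");
    // Erases the Matter-specific NVS data and then reboots the device.
    esp_matter::factory_reset();
}
```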
@shubhamdp I am using that same API. When I flash the code the first time and run TC-RR-1.1, it passes the test, but the next time, after I perform a factory reset using esp_matter::factory_reset(), the test case fails with a timeout error on step 14. |
Can you please try this? Replace the ...; the patch below does the same...
|
@shubhamdp Thanks, I will test it tomorrow and let you know. |
@bilalahmaddev did you get a chance to try this out? |
@bilalahmaddev Please close the issue if resolved |
@shubhamdp I tried this but it did not work: diff --git a/components/esp_matter/esp_matter_core.cpp b/components/esp_matter/esp_matter_core.cpp
|
@shubhamdp Can you please help me fix this? When I flash the code the first time and run TC-RR-1.1, it passes, but the next time, after performing a factory reset with this API, the test case fails with a timeout error on step 14. However, if I perform erase_flash and flash the code again, TC-RR-1.1 passes.
@bilalahmaddev Can you share the esp-matter and esp-idf commits you are on, please? Please also share the TH and DUT logs. |
esp-idf commit: c9763f62dd00c887a1a8fafe388db868a7e44069 TH logs: |
@bilalahmaddev Can you try bumping the timeout? I gave it a shot with 10000 and it works every time.
|
@bilalahmaddev Please compare the NVS state on a fresh flash and after calling esp_matter::factory_reset(). |
@shubhamdp I tried with --timeout 10000 but it still fails: |
I think you should analyze the NVS content on a fresh flash as well as after a factory reset. You can read the NVS using esptool.py; please make sure you check the NVS address and size.
I usually use this script for the analysis: https://github.com/AFontaine79/Espressif-NVS-Analyzer/blob/main/analyze_nvs.py
See if you find any differences. Dumb question though: did you update the submodules? We shipped the groups-related fix in one of them recently.
|
@shubhamdp Ok let me try. I am on this commit for connectedhomeip: 593d5c6f63a62e017e4ced43183049f2805a9db8 |
@shubhamdp We have 14 endpoints on this device. Our last device has 12 endpoints, passed this test after every factory reset, and also passed the ATL certification tests. What do you think about the NVS size for 14 endpoints?
```
# Note: Firmware partition offset needs to be 64K aligned, initial 36K (9 sectors) are reserved for bootloader and partition table
Name,            Type, SubType, Offset,  Size,   Flags
esp_secure_cert, 0x3F, ,        0xD000,  0x2000, encrypted
```
|
Our recommendation is to have 48K for 2 endpoints. I think we can still pass TC-RR-1.1 for 8 endpoints with this. You have an NVS of 512K, so this should be good enough, I guess. I don't have a number for the per-endpoint NVS overhead (will need to get this number). I can suggest one more thing: you can write your own factory reset which erases the complete NVS (assuming you don't have any data that needs to persist across a factory reset, and that such data is stored in "fctry"). This approach is used in our esp-rainmaker framework: https://github.com/espressif/esp-rainmaker-common/blob/6398f401f2d4333cf0ed712d51f8fce3830cadf6/src/utils.c#L76 I suspect two problems here:
|
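A rough sketch of the kind of application-level factory reset suggested above, which wipes the entire default "nvs" partition before rebooting. It assumes nothing in "nvs" needs to survive a factory reset and that factory data lives in a separate "fctry" partition; the function name is made up for the example.

```cpp
#include <nvs_flash.h>
#include <esp_system.h>
#include <esp_err.h>

// Hypothetical full factory reset: erase the whole default "nvs" partition.
static void app_full_factory_reset(void)
{
    nvs_flash_deinit();                   // ignore the result; NVS may not be initialized
    ESP_ERROR_CHECK(nvs_flash_erase());   // erase every namespace in "nvs"
    esp_restart();                        // reboot so all stacks start from a clean state
}
```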
@shubhamdp I tried nvs_flash_erase(), but I think it also erases the factory data, and the device becomes uncommissionable. Increasing the NVS size works sometimes but not every time. For testing, I increased the NVS size to 700KB; now TC-RR-1.1 passes the first two times and starts failing after that. That suggests esp_matter::factory_reset() is not clearing "nvs" -> esp_matter_kvs and is creating fragmentation. With analyze_nvs.py, I see differences in the nvs.net80211 namespace.
@shubhamdp I was able to resolve it by explicitly clearing all NVS namespaces before calling esp_matter::factory_reset(), even without increasing the NVS size from 512KB. I tested it 3 times. |
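A sketch of what this namespace-by-namespace approach can look like, using standard ESP-IDF NVS APIs. The namespace list matches the one given later in this thread, "storage" stands in for the application-defined namespace, and the wrapper function name is hypothetical.

```cpp
#include <nvs.h>
#include <nvs_flash.h>
#include <esp_matter.h>

// Erase every key in one NVS namespace of the default "nvs" partition.
static void erase_nvs_namespace(const char *ns)
{
    nvs_handle_t handle;
    if (nvs_open(ns, NVS_READWRITE, &handle) == ESP_OK) {
        nvs_erase_all(handle);
        nvs_commit(handle);
        nvs_close(handle);
    }
}

// Hypothetical application-level factory reset wrapper.
static void app_factory_reset(void)
{
    const char *namespaces[] = {"esp_matter_kvs", "storage", "nvs.net80211",
                                "chip-config", "CHIP_KVS"};
    for (const char *ns : namespaces) {
        erase_nvs_namespace(ns);
    }
    esp_matter::factory_reset();  // erases the remaining Matter data and reboots
}
```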
@bilalahmaddev Thanks for the update. Can you please tell us which NVS namespaces you erased? Also, a suggestion: your factory data should not be stored in the "nvs" partition but in "fctry", so that you can simply erase the complete "nvs" partition on factory reset. |
@shubhamdp I have erased these: esp_matter_kvs, storage (our application-defined), nvs.net80211, chip-config, and CHIP_KVS. To write factory data to the factory partition (fctry), we should use the address specified in our partition table, which in our case is 0xAE0000. But how does the application automatically find whether the data is in the fctry partition? We need to set this, right? |
To this list [esp_matter_kvs, storage, nvs.net80211, chip-config, CHIP_KVS], you should add ... Ideally, what you should do is erase your application data first (the storage namespace) and then call ... Yes, you will need to write to ... More info can be found at the links below. |
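As an aside (not from the thread itself), this is roughly how an application can read its own data out of a separate "fctry" NVS partition with plain ESP-IDF calls. The partition label, namespace name ("app-factory"), and key are assumptions for illustration; how the Matter stack itself locates the factory partition is configured through its Kconfig options rather than through code like this.

```cpp
#include <nvs.h>
#include <nvs_flash.h>

// Read a string value from a hypothetical "app-factory" namespace stored in
// the "fctry" NVS partition declared in the partition table.
static esp_err_t read_factory_string(const char *key, char *out, size_t out_len)
{
    esp_err_t err = nvs_flash_init_partition("fctry");
    if (err != ESP_OK) {
        return err;
    }
    nvs_handle_t handle;
    err = nvs_open_from_partition("fctry", "app-factory", NVS_READONLY, &handle);
    if (err != ESP_OK) {
        return err;
    }
    err = nvs_get_str(handle, key, out, &out_len);  // nvs_get_str expects a size_t* buffer length
    nvs_close(handle);
    return err;
}
```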
Is there any API to reset only Matter and Matter-related NVS data?