All Media View is slow on big accounts #4867

Open
Simon-Laux opened this issue Mar 28, 2025 · 0 comments
Labels: bug, performance

Comments


Simon-Laux commented Mar 28, 2025

It takes seconds until the images are displayed.

We identified the following issues (ordered by impact, highest is on top):

  • it loads all messages
    -> the solution would be an infinite scroll/loading list in addition to the virtual list
  • the jsonrpc request `get_messages` loads messages sequentially
  • the jsonrpc request makes more requests per message than it needs to
my rough notes about the jsonrpc inefficiency:
  • quote: another message is loaded (+contact) -> 1-2 db requests
  • sender: the contact is loaded -> db request
  • webxdc info (loads the message again, opens the webxdc zip) -> db and filesystem requests
  • reactions -> db request
  • parent_id -> db request
  • get_original_msg_id -> db request
  • get_saved_msg_id -> db request
  • file bytes (only used for the file tab; this could be cached in core) -> filesystem request


So in total the jsonrpc API `get_messages` makes 7-8 extra db calls and at least 2 filesystem requests for information that is not needed for the gallery view. These extra calls happen for every message.
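
For comparison, a gallery tile presumably only needs a handful of fields. A rough sketch of what a trimmed-down result type for a gallery-specific call could look like (the type and field names are illustrative assumptions, not the actual core API):

```rust
use serde::Serialize;

// Hypothetical slim result type for a gallery-only jsonrpc call.
// Field names are illustrative; the real set would follow whatever
// the All Media View actually renders per tile.
#[derive(Serialize)]
struct GalleryItem {
    msg_id: u32,
    /// Path to the image/thumbnail, enough to render the grid tile.
    file_path: Option<String>,
    /// File name, for the file tab.
    file_name: Option<String>,
    /// Message view type (image, video, file, webxdc, ...).
    view_type: String,
    /// Sort key for the gallery.
    timestamp: i64,
}
```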

Solution

  • make it an infinite / virtual list
    • infinite list
    • show placeholder/skeleton items for messages that are still loading
  • make a new jsonrpc call that is optimised to load only what is needed and use that one.
  • load each image message in its own request, or make the jsonrpc call load all messages in parallel instead of sequentially
    • I tested both by loading all messages each way:
      • loading each image in its own request is the slowest
      • loading messages in parallel in the jsonrpc method only resulted in almost a 1 second speedup, while pushing CPU usage to 100%
    • so neither of those is a real improvement
the jsonrpc method for loading in parallel instead of sequentially:
```rust
async fn get_messages2(
    &self,
    account_id: u32,
    message_ids: Vec<u32>,
) -> Result<HashMap<u32, MessageLoadResult>> {
    let ctx = self.get_context(account_id).await?;
    let mut messages: HashMap<u32, MessageLoadResult> = HashMap::new();

    // Load all messages concurrently instead of one after the other.
    let results = futures::future::join_all(message_ids.into_iter().map(|message_id| {
        let cloned_ctx = ctx.clone();
        async move {
            (
                message_id,
                MessageObject::from_msg_id(&cloned_ctx, MsgId::new(message_id)).await,
            )
        }
    }))
    .await;

    // Convert each load result into the same shape `get_messages` returns.
    for (message_id, message_result) in results {
        messages.insert(
            message_id,
            match message_result {
                Ok(Some(message)) => MessageLoadResult::Message(message),
                Ok(None) => MessageLoadResult::LoadingError {
                    error: "Message does not exist".to_string(),
                },
                Err(error) => MessageLoadResult::LoadingError {
                    error: format!("{error:#}"),
                },
            },
        );
    }
    Ok(messages)
}
```
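
If the parallel approach were revisited, the 100% CPU spike could likely be avoided by capping concurrency instead of starting everything at once via `join_all`. A minimal, untested sketch of the loading step using `buffer_unordered` from the `futures` crate (same assumed types as above):

```rust
use futures::stream::{self, StreamExt};

// Load at most 8 messages concurrently: still much faster than
// sequential loading, but bounds CPU and db pressure.
let results: Vec<(u32, _)> = stream::iter(message_ids)
    .map(|message_id| {
        let cloned_ctx = ctx.clone();
        async move {
            (
                message_id,
                MessageObject::from_msg_id(&cloned_ctx, MsgId::new(message_id)).await,
            )
        }
    })
    .buffer_unordered(8)
    .collect()
    .await;
```

The concurrency limit of 8 is an arbitrary example value; the right number would need measuring.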
@Simon-Laux added the bug and performance labels Mar 28, 2025
@Simon-Laux self-assigned this Mar 28, 2025
@Simon-Laux removed their assignment Mar 28, 2025