Can AI be used to optimize the performance a bit? It always feels not smooth enough. #7215
-
🥰 Requirement description: Hope it can be as silky smooth as nextchat. 🧐 Proposed solution: lobechat always feels laggy; is there a technical reason? 📝 Additional information: No response
Replies: 47 comments
-
👀 @sscc9 Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
-
If it lags during chatting, check whether the current topic has accumulated too many messages, and start a new topic before continuing the conversation.
-
@sscc9 Could you specify where exactly it feels laggy?
-
The send button often fails to update its state promptly after the AI has finished replying; it stays in the "stop" or unclickable state, and it takes a long time before it returns to the "send" state.
-
@18651619390 This is probably because the database interaction is slow, possibly combined with operations like summarizing the historical messages.
-
That is still too slow and really hurts the experience. My database is even a local one. Shouldn't the summarization happen in the background on its own?
-
I also feel it is laggy. Moreover, I selected deepseek running on local ollama, and after it finishes answering, why does it still call the official deepseek key to ask another question? That leaves the page stuck there. I reported performance issues before and was told it was due to database overhead; from what I can see, that is not the case.
-
This is likely the automatic summarization of historical messages; it used to bother me too. You can try turning it off in the session settings.
-
@minemine-m Yes, it can be turned off in the system agent settings. But keeping the summary usually improves the effectiveness of knowledge base search.
-
OK, thanks
-
After turning off the history summary and the knowledge-base retrieval summary, it is much faster. But the first time a topic is used, a session summary is still triggered once; I guess that cannot be changed.
-
That happens when creating the topic. In V2 we plan to merge creating the topic and creating the first message into a single request, which removes one round of overhead. As for generating the topic title with AI, we currently do not plan to add a switch for that. I feel most people don't have the habit of naming topics, so it is better to auto-summarize the title once.
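For illustration, here is a minimal sketch of the kind of change described above, using hypothetical in-memory service functions (not LobeChat's actual API), where each call stands in for one network round trip:

```ts
// Hypothetical services, for illustration only; not LobeChat's real API.
interface Topic { id: string; title?: string }
interface Message { id: string; topicId: string; content: string }

const fakeLatency = (ms: number) => new Promise((r) => setTimeout(r, ms));

// One round trip: create an empty topic.
async function createTopic(): Promise<Topic> {
  await fakeLatency(150);
  return { id: crypto.randomUUID() };
}

// One round trip: create a message inside an existing topic.
async function createMessage(topicId: string, content: string): Promise<Message> {
  await fakeLatency(150);
  return { id: crypto.randomUUID(), topicId, content };
}

// Hypothetical combined endpoint: topic + first message in one request.
async function createTopicWithMessage(content: string) {
  await fakeLatency(150);
  const topic: Topic = { id: crypto.randomUUID() };
  const message: Message = { id: crypto.randomUUID(), topicId: topic.id, content };
  return { topic, message };
}

// Current flow: two sequential round trips before the reply can start.
async function sendFirstMessageV1(content: string) {
  const topic = await createTopic();
  return createMessage(topic.id, content);
}

// V2-style flow: a single round trip.
async function sendFirstMessageV2(content: string) {
  return createTopicWithMessage(content);
}
```

The only point is that the first message of a new topic then waits on one round trip instead of two.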
-
The scrolling performance of the topic list on the right is very poor; this virtual list has serious problems.
-
It should be an issue with antd's Typography component, which causes a huge number of rerenders. I will optimize this later.
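For illustration, a minimal sketch of one way to contain that kind of rerender, using a hypothetical topic row (not the actual LobeChat component): wrapping the row in `React.memo` keeps a list-level update from rerendering every `Typography` instance in the list.

```tsx
import { memo } from 'react';
import { Typography } from 'antd';

interface TopicItemProps {
  id: string;
  title: string;
  active: boolean;
  onSelect: (id: string) => void;
}

// Hypothetical list row. With memo, scrolling or updating one row no longer
// rerenders every Typography instance in the list, as long as the props passed
// to the untouched rows stay referentially stable.
const TopicItem = memo(({ id, title, active, onSelect }: TopicItemProps) => (
  <div onClick={() => onSelect(id)} style={{ opacity: active ? 1 : 0.7 }}>
    <Typography.Text ellipsis>{title}</Typography.Text>
  </div>
));

export default TopicItem;
```

For this to help, callbacks such as `onSelect` need to come from the parent with a stable identity, for example via `useCallback`.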
-
Although I have never used React and have only used Compose Multiplatform, my advice for component optimization is to check for unnecessary rerenders. It may be that the state is too large and needs derived state, or that unnecessary parameters are passed in and trigger rerenders. You can also split a UI function into several smaller UI functions that render independently.
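For illustration, a small React sketch of the derived-state and component-splitting advice above, with hypothetical component names: the large message array is collapsed into the primitives the children need, and each child is a small memoized component that renders independently.

```tsx
import { memo, useMemo } from 'react';

interface ChatMessage {
  id: string;
  role: 'user' | 'assistant';
  content: string;
}

// Hypothetical presentational pieces, split out so each renders independently
// and only receives the small, derived values it actually needs.
const MessageCount = memo(({ count }: { count: number }) => <span>{count} messages</span>);

const LastReplyPreview = memo(({ preview }: { preview: string }) => <em>{preview}</em>);

export function ChatHeader({ messages }: { messages: ChatMessage[] }) {
  // Derived state: collapse the large message array into primitives. Because
  // the children are memoized and receive only primitives, editing a single
  // message no longer rerenders them unless the derived values change.
  const count = messages.length;
  const preview = useMemo(() => {
    const last = [...messages].reverse().find((m) => m.role === 'assistant');
    return last ? last.content.slice(0, 40) : '';
  }, [messages]);

  return (
    <header>
      <MessageCount count={count} />
      <LastReplyPreview preview={preview} />
    </header>
  );
}
```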
-
@CXwudi Exactly right, and that is in fact how we write things day to day. It's just that some antd components have issues that cause lag, and there is really not much we can do about that for now. Hopefully the V6 release will bring a significant performance boost.
-
@18651619390 @minemine-m There is a trade-off to consider here:
If the automatic summary did not block sending the message, it could happen that the summary is not finished yet and the old history summary gets attached, which could hurt the quality of the results.
The current implementation is designed to guarantee that the history summary is up to date, so that the latest history summary plus the most recent N messages form the most complete context.
My suggestion is to pick a history-summary model with very fast token generation (for example openai gpt4o-mini, or even groq's Deepseek R1 Distill Llama 70B), so you get both quality and performance.
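For illustration, a rough sketch of the two sides of that trade-off, with hypothetical helpers standing in for the real summarization pipeline:

```ts
// Rough sketch of the trade-off; the helper types below are stand-ins, not the
// actual implementation.
interface ChatMessage { role: 'user' | 'assistant'; content: string }

type Summarize = (history: ChatMessage[]) => Promise<string>;
type CallModel = (summary: string, recent: ChatMessage[]) => Promise<string>;

// Current behaviour: wait for a fresh summary so the context is never stale.
// Quality is protected, but the send button stays blocked until the summary
// model finishes, so its speed directly determines perceived latency.
async function sendBlocking(
  history: ChatMessage[],
  recent: ChatMessage[],
  summarize: Summarize,
  callModel: CallModel,
) {
  const freshSummary = await summarize(history); // blocks the send
  return callModel(freshSummary, recent);
}

// Alternative: send immediately with whatever summary is cached and refresh it
// in the background. Faster to respond, but the reply may be built on an
// outdated summary of the history.
async function sendNonBlocking(
  history: ChatMessage[],
  recent: ChatMessage[],
  cachedSummary: string,
  summarize: Summarize,
  callModel: CallModel,
  onSummaryUpdated: (s: string) => void,
) {
  void summarize(history).then(onSummaryUpdated); // fire and forget
  return callModel(cachedSummary, recent); // may use a stale summary
}
```

The current behaviour corresponds to the first variant, which is why a summary model with fast token generation directly improves the perceived latency.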