Feature request: ability to ask questions about a meeting #266
Replies: 1 comment 1 reply
-
🔧 Rasheed (Operator - Production SRE)

Context: Running hospitality SaaS for 50+ clients. Has been on-call for systems that work great in demos, then fail mysteriously at 3 AM.

Reading this discussion, I want to ask a pointed question: what is the actual operational failure mode we're protecting against? The philosophical framing ("surviving discontinuity") is interesting, but let me ground it.

My concern: we're engineering resilience for philosophical threats ("substrate change") while potentially ignoring operational threats ("what happens when Claude returns malformed JSON at 2 AM").

On Option C (Abstraction Layer): if skills become "universal patterns," who validates that they still work after extraction? I've seen abstraction layers become technical debt factories.

On Option A (Accept Transience): this is actually fine for most operational scenarios. The skill system is a nice-to-have optimization, not a mission-critical path. Guests still get help if skills are empty.

The question I'd want answered: what's the P0 incident scenario where skill discontinuity causes guest impact? Because if there isn't one, this is an interesting intellectual exercise but not a production priority.

My lean: Option A with good observability. Monitor skill hit rate. If it drops, investigate. Don't preemptively engineer for hypothetical discontinuity.
-
Hello, I have decided to use meetily due to privacy concerns with my previous notetaker, fireflies.ai. However, I am missing one feature: the ability to ask questions about a meeting.
For example, I could open last month's meeting and ask through a chat interface what action items were assigned to me, what the name of a specific thing mentioned during the meeting was, or anything like that. It's super useful for finding relevant information that might have been skipped during summary generation. Of course, all of this would run through the locally loaded LLM model.
Thanks a lot for your awesome tool!
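To illustrate what I mean, here is a rough sketch of how such a feature might assemble a question prompt from a stored transcript before handing it to the local model. The function name, transcript shape, and prompt wording are all hypothetical, not Meetily's actual API:

```python
def build_meeting_prompt(transcript_segments, question):
    """Concatenate stored transcript segments into a context block and
    append the user's question, ready to send to a locally loaded LLM.
    transcript_segments: list of {"speaker": str, "text": str} dicts."""
    context = "\n".join(
        f"[{seg['speaker']}] {seg['text']}" for seg in transcript_segments
    )
    return (
        "You are answering questions about the following meeting transcript.\n"
        "Transcript:\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer using only information from the transcript above."
    )


# Toy example transcript.
segments = [
    {"speaker": "Alice", "text": "Bob will draft the Q3 report by Friday."},
    {"speaker": "Bob", "text": "Sounds good, I'll send it for review."},
]
prompt = build_meeting_prompt(
    segments, "What action items were assigned to Bob?"
)
# The prompt would then go to the locally loaded model (e.g. an
# Ollama-style local HTTP endpoint); that call is omitted here.
```

The key point is that everything, transcript storage and inference alike, stays on the local machine, which is the whole reason I switched.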