
[Observability AI Assistant] Create first Starter prompts #178907

Merged
merged 18 commits into from
Apr 12, 2024

Conversation


@CoenWarmer CoenWarmer commented Mar 18, 2024

Summary

This introduces starter prompts for users to get the conversation going.

Allows Kibana app developers to create starter prompts at any time using the setScreenContext method.

Up to 4 starter prompts will be shown to the user. Starter prompts added by an app take precedence over the default starter prompts.

Screen.Recording.2024-04-05.at.15.15.48.mov

Default Starter Prompts (visible in all Observability apps when no additional prompts have been registered by app / page)

- Give me examples of questions I can ask here.
- Can you explain this page?
- Do I have any alerts?
- What are SLOs?

Starter Prompts added to different specific Observability apps

| App | Where | Condition | Prompt |
| --- | --- | --- | --- |
| Observability | Overview | HasDataProvider returns no data for an app | Why don't I see any data for the {appsWithoutData} sections? |
| Observability | Alerts | none | Can you explain the rule types that are available? |
| Observability | Rules | none | Can you explain the rule types that are available? |
| APM | All | No data returned | Why don't I see any data? |
| Infra | All | No data returned | Why don't I see any data? |
| Metrics | All | No data returned | Why don't I see any data? |
| Synthetics | All | No data returned | Why don't I see any monitors? |
| UX | All | No data returned | Why don't I see any data? |
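The precedence and 4-prompt limit described above can be sketched as follows. This is a hypothetical illustration, not the actual Kibana implementation; the `pickStarterPrompts` helper and the icon names are made up for the example.

```typescript
// Hypothetical sketch of the selection behaviour: app-registered prompts come
// first, default prompts fill the remaining slots, and at most 4 are shown.
interface StarterPrompt {
  title: string;
  prompt: string;
  icon: string;
}

const MAX_STARTER_PROMPTS = 4;

function pickStarterPrompts(
  appPrompts: StarterPrompt[],
  defaultPrompts: StarterPrompt[]
): StarterPrompt[] {
  // App prompts take precedence over the defaults.
  return [...appPrompts, ...defaultPrompts].slice(0, MAX_STARTER_PROMPTS);
}

const defaults: StarterPrompt[] = [
  { title: 'Examples', prompt: 'Give me examples of questions I can ask here.', icon: 'sparkles' },
  { title: 'Explain', prompt: 'Can you explain this page?', icon: 'inspect' },
  { title: 'Alerts', prompt: 'Do I have any alerts?', icon: 'bell' },
  { title: 'SLOs', prompt: 'What are SLOs?', icon: 'bullseye' },
];

const appPrompts: StarterPrompt[] = [
  { title: 'No data', prompt: "Why don't I see any data?", icon: 'sparkles' },
];

// One app prompt plus three defaults fill the four visible slots.
const shown = pickStarterPrompts(appPrompts, defaults);
console.log(shown.map((p) => p.title));
```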

Guidance for Kibana engineers

The team owning a plugin can add a starter prompt to the Assistant with the following code:

plugins.observabilityAiAssistant.service.setScreenContext({
  starterPrompts: [
    {
      title: i18n.translate('xpack.app.foo.bar', { defaultMessage: 'Explain' }),
      prompt: i18n.translate('xpack.app.foo.baz', { defaultMessage: 'How does feature X work?' }),
      icon: 'sparkles', // EuiIconType
    },
  ],
});

You can use the optional screenDescription and data keys in setScreenContext to pass additional information along to the LLM, which may help it answer the starter prompt that you configure. For example:

setScreenContext({
  // Doesn't need to be translated: this is passed to the LLM, which doesn't need a translated string.
  screenDescription: 'The user is looking at a no data page.',
  data: [
    {
      name: 'config',
      description: 'The index configuration of the app',
      value: config,
    },
  ],
});

As a rule of thumb, the more generic or 'high level' the starter prompt is, the higher in the React app tree it should be added.

More specific starter prompts that are relevant for pages (or even sections inside pages) should be registered by calling setScreenContext further down in the React app tree, for instance in page components or even deeper. Be aware that only 4 starter prompts will be displayed; if you register more than that across components, the surplus will not be shown.
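A minimal sketch of how this per-level registration could work, assuming (as the cleanup pattern in Kibana suggests) that setScreenContext returns an unregister function suitable for a React useEffect cleanup. The `createScreenContextService` factory below is invented for the illustration and is not the real service:

```typescript
// Each setScreenContext call pushes a context and returns a cleanup function,
// which is what a page component's useEffect would call on unmount.
type ScreenContext = {
  starterPrompts?: Array<{ title: string; prompt: string; icon: string }>;
};

function createScreenContextService() {
  const contexts: ScreenContext[] = [];
  return {
    setScreenContext(context: ScreenContext): () => void {
      contexts.push(context);
      return () => {
        const idx = contexts.indexOf(context);
        if (idx !== -1) contexts.splice(idx, 1);
      };
    },
    getContexts: () => contexts.slice(),
  };
}

const service = createScreenContextService();

// Generic prompt registered near the root (e.g. in renderApp):
const unregisterRoot = service.setScreenContext({
  starterPrompts: [{ title: 'SLOs', prompt: 'What are SLOs?', icon: 'bullseye' }],
});

// Page-specific prompt registered in a page component's useEffect:
const unregisterPage = service.setScreenContext({
  starterPrompts: [
    { title: 'Rules', prompt: 'Can you describe the different rule types?', icon: 'sparkles' },
  ],
});

// When the page unmounts, only the page-level context is removed:
unregisterPage();
console.log(service.getContexts().length); // 1
```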

For instance, in the case of the Observability app, the guidance would be:

| Starter prompt | Place to add in the React app |
| --- | --- |
| What is Observability? | renderApp |
| What are SLOs? | renderApp |
| How do I set up Alerts? | renderApp |
| Can you describe the different rule types? | Rules page |
| Can you help me configure an SLO? | SLO List |
| Do I have any misconfigured SLOs? | SLO List |

Please validate that the response given by the LLM is correct and helpful for the user.
If it is not, please contact the Observability AI Assistant team: there are multiple techniques to make the LLM's answers smarter, and we can work with you to find the one that is most effective.

@apmmachine

🤖 GitHub comments

Expand to view the GitHub comments

Just comment with:

  • /oblt-deploy : Deploy a Kibana instance using the Observability test environments.
  • /oblt-deploy-serverless : Deploy a serverless Kibana instance using the Observability test environments.
  • run elasticsearch-ci/docs : Re-trigger the docs validation. (use unformatted text in the comment!)

@CoenWarmer CoenWarmer marked this pull request as ready for review March 18, 2024 20:29
@CoenWarmer CoenWarmer requested review from a team as code owners March 18, 2024 20:29
@CoenWarmer CoenWarmer added the release_note:skip Skip the PR/issue when compiling release notes label Mar 18, 2024
@botelastic botelastic bot added Team:obs-knowledge Observability Experience Knowledge team Team:obs-ux-management Observability Management User Experience Team labels Mar 19, 2024
@elasticmachine

Pinging @elastic/obs-knowledge-team (Team:obs-knowledge)

@elasticmachine

Pinging @elastic/obs-ux-management-team (Team:obs-ux-management)

@CoenWarmer CoenWarmer force-pushed the feat/starter-prompts branch from 2d71c50 to 7567cf7 Compare March 19, 2024 10:30
@mgiota mgiota self-requested a review March 19, 2024 10:53

@mgiota mgiota left a comment


LGTM!

@CoenWarmer CoenWarmer force-pushed the feat/starter-prompts branch from d288c27 to 7643392 Compare March 19, 2024 15:42
@CoenWarmer CoenWarmer requested a review from dgieselaar March 19, 2024 15:43
@CoenWarmer

Updated the design, it's now:

Screenshot 2024-03-19 at 16 44 47

@CoenWarmer CoenWarmer requested review from a team as code owners April 4, 2024 13:54

const {
ruleTypesState: { data: ruleTypes },
} = useLoadRuleTypesQuery({

@CoenWarmer CoenWarmer Apr 5, 2024


useLoadRuleTypesQuery() returns ids and a name, but no description

});

return useMemo(() => {
const ruleTypesFromRuleTypeRegistry = ruleTypeRegistry.list();

@CoenWarmer CoenWarmer Apr 5, 2024


ruleTypeRegistry.list() returns ids and descriptions, but no name 😅

@CoenWarmer CoenWarmer force-pushed the feat/starter-prompts branch from 6f9eeb6 to 819ff0c Compare April 5, 2024 17:04

const starterPrompts = uniq(
[...contexts]
.reverse()
Member


I don't think you want to order this. We cannot depend on order, because components get mounted and unmounted, which changes the order in which things are registered.

Contributor Author


Interesting. While working on and testing the feature I haven't seen it break. Can you give an example of when this does not work?


@dgieselaar dgieselaar Apr 6, 2024


What do you expect the order to be based on? You can probably reproduce this fairly easily by adding useEffect hooks that register starter prompts, invalidating the dependencies for one of them on every render, and noticing that that one will always come up on top.
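The reproduction described above can be sketched as follows. This is a hedged, simplified model (the `register` helper is invented for the illustration): if contexts are read in reverse registration order, a component that re-registers on every render always ends up first, regardless of its position in the tree.

```typescript
// Simplified model of a reversed-registration-order prompt list.
const contexts: string[] = [];

function register(name: string): () => void {
  contexts.push(name);
  return () => {
    const i = contexts.indexOf(name);
    if (i !== -1) contexts.splice(i, 1);
  };
}

// B mounts before A:
let unregisterB = register('B');
register('A');

// Simulate a useEffect whose dependencies invalidate on a re-render:
// B unregisters and re-registers, moving it to the end of the list.
unregisterB();
unregisterB = register('B');

// Reading in reverse order now puts B first, even though it mounted before A.
console.log([...contexts].reverse());
```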

@@ -118,21 +119,52 @@ export function ApmMainTemplate({
isServerless: config?.serverlessOnboarding,
});

let screenDescription = '';
Contributor


Could we move the AI assistant code to its own file?

Contributor


Same comments for the other files. I think we could clean up the components.

Contributor Author


Not really. This functionality is intended for other plugins to give contextual information to the assistant about what the user is looking at right now.

Member


I think what @cauemarcondes means is that it could be a separate hook or something similar. I've also done that in other places in APM (e.g. getThroughputScreenContext).

Contributor Author


I saw those and didn't think they were super readable. But if it's important to you, I will oblige.

Contributor


IMO it keeps the component clean and small making it easy to maintain.


@CoenWarmer CoenWarmer Apr 9, 2024


IMO jumping between files hinders readability and leads engineers to just leave code alone because they no longer see its contents. Many, many examples of this abound in the Kibana codebase.

As stated though I will update.

Contributor


IMO jumping between files hinders readability and leads engineers to just leave it alone because they don't see the contents anymore

I tend to agree and disagree with that 😄. The screen context/description is very specific to what the AI assistant needs to know about this page/component. As a developer who is actively working on these components/pages, I don't care what the assistant is doing. I know the contexts are there, though. What I want is a clean component, without a bunch of static text in my way 😆.

For you, I think it'll be even easier as you'll have a clean file with only AI stuff.

@CoenWarmer CoenWarmer force-pushed the feat/starter-prompts branch from 3316cbf to 37e89df Compare April 8, 2024 14:32

@tonyghiani tonyghiani left a comment


infra changes LGTM

return setScreenContext?.({
screenDescription: hasNoLocations
? 'The user has no locations configured.'
: `The user has ${locations.length} locations configured: ${JSON.stringify(locations)}`,
Member


any idea how many tokens this is?

return setScreenContext?.({
data: ruleTypesWithDescriptions.map((rule) => ({
name: rule.id,
value: `${rule.name} ${rule.description}`,
Member


You can use value: rule. In this case it is probably overkill to use data (I wasn't aware of the shape, sorry). But it's also fine to leave it as is; there aren't a lot of tokens, so it gets sent over automatically.

@dgieselaar

@elasticmachine merge upstream

@dgieselaar

I've created a follow-up issue for the ordering issue: #180698.

@kibana-ci

💚 Build Succeeded

Metrics [docs]

Module Count

Fewer modules lead to a faster build time.

| id | before | after | diff |
| --- | --- | --- | --- |
| apm | 1675 | 1676 | +1 |
| observability | 507 | 508 | +1 |
| observabilityAIAssistant | 88 | 89 | +1 |
| observabilityAIAssistantApp | 221 | 223 | +2 |
| total | | | +5 |

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app.

| id | before | after | diff |
| --- | --- | --- | --- |
| apm | 3.2MB | 3.2MB | +572.0B |
| infra | 1.4MB | 1.4MB | +1002.0B |
| observability | 282.7KB | 285.6KB | +2.9KB |
| observabilityAIAssistantApp | 141.9KB | 142.5KB | +630.0B |
| slo | 660.8KB | 661.4KB | +603.0B |
| synthetics | 848.7KB | 849.3KB | +592.0B |
| ux | 165.6KB | 166.3KB | +750.0B |
| total | | | +7.0KB |

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb.

| id | before | after | diff |
| --- | --- | --- | --- |
| observabilityAIAssistant | 44.6KB | 45.7KB | +1.1KB |

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

cc @CoenWarmer

@dgieselaar dgieselaar enabled auto-merge (squash) April 12, 2024 13:47

@cauemarcondes cauemarcondes left a comment


Obs UX LGTM

@dgieselaar dgieselaar merged commit 5816d1a into elastic:main Apr 12, 2024
20 checks passed
@kibanamachine kibanamachine added v8.14.0 backport:skip This commit does not require backporting labels Apr 12, 2024
Labels
backport:skip This commit does not require backporting release_note:skip Skip the PR/issue when compiling release notes Team:Obs AI Assistant Observability AI Assistant Team:obs-knowledge Observability Experience Knowledge team Team:obs-ux-infra_services Observability Infrastructure & Services User Experience Team Team:obs-ux-management Observability Management User Experience Team v8.14.0