
feat(route): add Inception Labs blog route#21386

Open
zdenek-stursa wants to merge 1 commit into DIYgod:master from zdenek-stursa:feat/add-inceptionlabs-blog

Conversation

@zdenek-stursa

Involved Issue

Close #

Example for the Proposed Route(s)

/inceptionlabs/blog

New RSS Route Checklist

  • New Route
  • Anti-bot or rate limit
    • If yes, does the code handle it?
  • Date and time
    • Parsed
    • Correct time zone
  • New package added
  • Puppeteer

Note

Adds RSS feed for https://www.inceptionlabs.ai/blog/

Inception Labs is an AI research company building diffusion-based large language models (dLLMs), known for its Mercury model.

The blog is built with Framer and rendered server-side (SSR), so a plain HTTP fetch with a browser User-Agent returns the full HTML; no Puppeteer is needed.
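As a rough illustration of that point, the fetch could look like the sketch below. The blog URL comes from this PR; the User-Agent string and the helper name are made up for the example and are not the route's actual code.

```typescript
// Sketch only: plain HTTP fetch of the SSR listing page with a browser-like
// User-Agent. The UA string and helper name are illustrative.
const BLOG_URL = 'https://www.inceptionlabs.ai/blog/';
const BROWSER_UA =
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36';

async function fetchListing(): Promise<string> {
  const res = await fetch(BLOG_URL, { headers: { 'User-Agent': BROWSER_UA } });
  if (!res.ok) {
    throw new Error(`Unexpected HTTP ${res.status} from ${BLOG_URL}`);
  }
  // The article markup is present in the response body (SSR), so no
  // headless browser is required.
  return res.text();
}
```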

Two card layouts exist on the listing page:

  • Standard card – date and category in [data-framer-name="Date"] elements
  • Featured card – date inside <mark>, category in [data-framer-name="Category"]

Each item includes: title, publication date, author, category, and the full article body extracted from [data-framer-name="Content"].
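The two-layout handling can be sketched roughly as follows. This is a minimal, dependency-free illustration: regexes stand in for real DOM selectors (the actual route would use a proper HTML parser such as cheerio), and the function name is hypothetical.

```typescript
// Minimal sketch of the dual-layout metadata extraction described above,
// assuming the simplified markup shapes named in the PR description.
function extractCardMeta(cardHtml: string): {
  date: string | null;
  category: string | null;
} {
  // Standard card: both date and category live in elements named "Date".
  const dateEls = [...cardHtml.matchAll(/data-framer-name="Date"[^>]*>([^<]+)</g)]
    .map((m) => m[1].trim());
  // Featured card: date inside <mark>, category in a "Category" element.
  const mark = cardHtml.match(/<mark[^>]*>([^<]+)<\/mark>/);
  const catEl = cardHtml.match(/data-framer-name="Category"[^>]*>([^<]+)</);

  const date = dateEls[0] ?? (mark ? mark[1].trim() : null);
  const category = catEl ? catEl[1].trim() : dateEls[1] ?? null;
  return { date, category };
}
```

Falling back from the "Date"-named elements to the <mark> element keeps a single code path covering both card types.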

@github-actions github-actions bot added the route label Mar 14, 2026
@zdenek-stursa zdenek-stursa changed the title feat(routes): add Inception Labs blog route feat(route): add Inception Labs blog route Mar 14, 2026
@github-actions
Contributor

Auto Review

No clear rule violations found in the current diff.

@github-actions github-actions bot added auto: DO NOT merge Docker won’t even start and removed auto: DO NOT merge Docker won’t even start labels Mar 14, 2026
@github-actions
Contributor

Auto Review

No clear rule violations found in the current diff.

@github-actions github-actions bot added auto: ready to review and removed auto: DO NOT merge Docker won’t even start labels Mar 14, 2026
@github-actions
Contributor

Successfully generated as follows:

http://localhost:1200/inceptionlabs/blog - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Inception Labs Blog</title>
    <link>https://www.inceptionlabs.ai/blog</link>
    <atom:link href="http://localhost:1200/inceptionlabs/blog" rel="self" type="application/rss+xml"></atom:link>
    <description>Latest posts from the Inception Labs blog about diffusion LLMs and AI research - Powered by RSSHub</description>
    <generator>RSSHub</generator>
    <webMaster>contact@rsshub.app (RSSHub)</webMaster>
    <language>en</language>
    <lastBuildDate>Sat, 14 Mar 2026 13:06:07 GMT</lastBuildDate>
    <ttl>5</ttl>
    <item>
      <title>Introducing Mercury 2</title>
      <description>&lt;div class=&quot;framer-xg880z&quot; data-framer-name=&quot;Content&quot; style=&quot;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;h2 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;The fastest reasoning LLM, powered by diffusion&lt;/h2&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Today, we&#39;re introducing Mercury 2 — the world&#39;s fastest reasoning language model, built to make production AI feel instant.&lt;/p&gt;&lt;h3 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1jgxk8t&quot;&gt;Why speed matters more now&lt;/h3&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Production AI isn&#39;t one prompt and one answer anymore. It&#39;s loops: agents, retrieval pipelines, and extraction jobs running in the background at volume. In loops, latency doesn’t show up once. It compounds across every step, every user, every retry.&lt;/p&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Yet current LLMs still share the same bottleneck: autoregressive, sequential decoding. One token at a time, left to right.&lt;/p&gt;&lt;video autoplay=&quot;&quot; loop=&quot;&quot; muted=&quot;&quot; playsinline=&quot;&quot; src=&quot;https://framerusercontent.com/images/s6ZXW6bHWKk8JRpp28Pr0Yr8D9o.mp4&quot; class=&quot;framer-text framer-image&quot;&gt;&lt;/video&gt;&lt;h3 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1jgxk8t&quot;&gt;A new foundation: Diffusion for real-time reasoning&lt;/h3&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury 2 doesn&#39;t decode sequentially. It generates responses through parallel refinement, producing multiple tokens simultaneously and converging over a small number of steps. Less typewriter, more editor revising a full draft at once. 
The result: &amp;gt;5x faster generation with a fundamentally different speed curve.&lt;/p&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;That speed advantage also changes the reasoning trade-off. Today, higher intelligence means more test-time compute — longer chains, more samples, more retries — bought at the direct expense of latency and cost. Diffusion-based reasoning gets you reasoning-grade quality inside real-time latency budgets.&lt;/p&gt;&lt;h2 dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Mercury 2 at a glance&lt;/h2&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury 2 shifts the quality-speed curve for production deployments:&lt;/p&gt;&lt;ul dir=&quot;auto&quot; class=&quot;framer-text&quot;&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Speed:&lt;/strong&gt; 1,009 tokens/sec on NVIDIA Blackwell GPUs&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Price:&lt;/strong&gt; $0.25/1M input tokens · $0.75/1M output tokens&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Quality&lt;/strong&gt;: competitive with leading speed-optimized models&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Features&lt;/strong&gt;: tunable reasoning · 128K context 
· native tool use · schema-aligned JSON output&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;img alt=&quot;&quot; width=&quot;973&quot; height=&quot;391&quot; src=&quot;https://framerusercontent.com/images/KUQ2ijMteh6pTFCieisdOx9s.png&quot; srcset=&quot;https://framerusercontent.com/images/KUQ2ijMteh6pTFCieisdOx9s.png?scale-down-to=512&amp;amp;width=1947&amp;amp;height=783 512w,https://framerusercontent.com/images/KUQ2ijMteh6pTFCieisdOx9s.png?scale-down-to=1024&amp;amp;width=1947&amp;amp;height=783 1024w,https://framerusercontent.com/images/KUQ2ijMteh6pTFCieisdOx9s.png?width=1947&amp;amp;height=783 1947w&quot; class=&quot;framer-text framer-image framer-styles-preset-1teh2bg&quot; style=&quot;aspect-ratio:1947 / 783&quot; sizes=&quot;(min-width: 1024px) 100vw, (min-width: 768px) and (max-width: 1023.98px) 100vw, (max-width: 767.98px) 100vw&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;img alt=&quot;&quot; width=&quot;1246&quot; height=&quot;598&quot; src=&quot;https://framerusercontent.com/images/vUhManfsMjNnhb8We1IfhRGbfr8.png&quot; srcset=&quot;https://framerusercontent.com/images/vUhManfsMjNnhb8We1IfhRGbfr8.png?scale-down-to=512&amp;amp;width=2493&amp;amp;height=1197 512w,https://framerusercontent.com/images/vUhManfsMjNnhb8We1IfhRGbfr8.png?scale-down-to=1024&amp;amp;width=2493&amp;amp;height=1197 1024w,https://framerusercontent.com/images/vUhManfsMjNnhb8We1IfhRGbfr8.png?scale-down-to=2048&amp;amp;width=2493&amp;amp;height=1197 2048w,https://framerusercontent.com/images/vUhManfsMjNnhb8We1IfhRGbfr8.png?width=2493&amp;amp;height=1197 2493w&quot; class=&quot;framer-text framer-image framer-styles-preset-1teh2bg&quot; style=&quot;aspect-ratio:2493 / 1197&quot; sizes=&quot;(min-width: 1024px) 100vw, (min-width: 768px) and (max-width: 1023.98px) 100vw, (max-width: 767.98px) 100vw&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;p dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;We optimize for speed users actually feel: responsiveness in 
the moments users experience - p95 latency under high concurrency, consistent turn-to-turn behavior, and stable throughput when systems get busy.&lt;/p&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“Inception’s Mercury 2 demonstrates what’s possible when new model architecture meets NVIDIA AI infrastructure. 
Surpassing 1,000 tokens per second on NVIDIA GPUs underscores the performance, scalability, and versatility of our platform to power the full spectrum of AI workloads.”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Shruti Koparkar, Senior Manager of Product, Accelerated Computing Group at NVIDIA&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h2 dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;What Mercury 2 unlocks in production&lt;/h2&gt;&lt;p dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury 2 excels in latency-sensitive applications where the user experience is non-negotiable.&lt;/p&gt;&lt;h4 dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-rx577o&quot;&gt;1. 
Coding and editing&lt;/h4&gt;&lt;p dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Autocomplete, next-edit suggestions, refactors, interactive code agents - workflows where the developer is in the loop and any pause breaks flow.&lt;/p&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“Suggestions land fast enough to feel like part of your own thinking, not something you have to wait for.”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Max Brunsfeld, Co-Founder, Zed&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h4 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-rx577o&quot;&gt;2. 
Agentic loops&lt;/h4&gt;&lt;p dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Agentic workflows chain dozens of inference calls per task. Cutting latency per call doesn&#39;t just save time, it changes how many steps you can afford to run, and how good the final output gets.&lt;/p&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“We’re now leveraging the latest Mercury model to intelligently optimize campaign execution at scale. By surfacing insights and dynamically enhancing delivery in real time, we’re driving stronger performance, greater efficiency, and a more resilient, AI-powered advertising ecosystem. 
This advancement reinforces our commitment to autonomous advertising, where intelligent systems continuously refine execution to deliver measurable outcomes for our clients.”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Adrian Witas, SVP, Chief Architect, Viant&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“We’ve been evaluating Mercury 2 because of its unparalleled latency and quality, especially valuable for real time transcript cleanup and interactive HCI applications. 
No other model has come close to the speed Mercury can provide!”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Sahaj Garg, CTO &amp;amp; Co-Founder, Wispr Flow&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;&quot;Mercury 2 is at least twice as fast as GPT-5.2, which is a game changer for us.&quot;&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; 
style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Suchintan Singh, CTO &amp;amp; Co-Founder, Skyvern&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h4 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-rx577o&quot;&gt;3. Real-time voice and interaction&lt;/h4&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Voice interfaces have the tightest latency budget in AI. Mercury 2 makes reasoning-level quality viable within natural speech cadences.&lt;/p&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“We build lifelike AI video avatars that hold real-time conversations with real people, so low latency isn&#39;t a nice-to-have, it&#39;s everything. 
Mercury 2 has been a big unlock in our voice stack: fast, consistent text generation that keeps the whole experience feeling natural and human.”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Max Sapo, CEO &amp;amp; Co-Founder, Happyverse AI&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“Mercury 2 quality is excellent, and the model’s low latency enables more responsive voice agents.”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; 
style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Oliver Silverstein, CEO &amp;amp; Co-Founder, OpenCall&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h4 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-rx577o&quot;&gt;4. Search and RAG pipelines&lt;/h4&gt;&lt;p dir=&quot;auto&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Multi-hop retrieval, reranking, and summarization latencies stack fast. Mercury 2 lets you add reasoning to the search loop without blowing your latency budget.&lt;/p&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant&quot;&gt;&lt;div class=&quot;framer-UFJdD framer-1p2nwfj framer-v-1p2nwfj&quot; data-framer-name=&quot;Variant 1&quot;&gt;&lt;div class=&quot;framer-el7yon&quot; style=&quot;--extracted-r6o4lv:rgba(0, 0, 0, 0.8);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;opacity:0.8;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVtSXRhbGlj;--framer-font-size:17px;--framer-font-style:italic;--framer-font-weight:500;--framer-line-height:1.4em;--framer-text-color:var(--extracted-r6o4lv, rgba(0, 0, 0, 0.8))&quot; class=&quot;framer-text&quot;&gt;“Our partnership with Inception makes real-time AI for our search product practical. 
Every SearchBlox customer, across customer support, compliance, risk, analytics, and e-commerce, benefits from sub-second intelligence across all of their data.”&lt;/p&gt;&lt;/div&gt;&lt;div class=&quot;framer-1yi1l8z&quot; style=&quot;--extracted-r6o4lv:rgba(4, 20, 20, 0.5);--framer-link-text-color:rgb(0, 153, 255);--framer-link-text-decoration:underline;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p dir=&quot;auto&quot; style=&quot;--font-selector:SW50ZXItTWVkaXVt;--framer-font-size:14px;--framer-font-weight:500;--framer-text-color:var(--extracted-r6o4lv, rgba(4, 20, 20, 0.5))&quot; class=&quot;framer-text&quot;&gt;Timo Selvaraj, Chief Product Officer, SearchBlox&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h2 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Get started&lt;/h2&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury 2 is available now.&lt;/p&gt;&lt;ul dir=&quot;auto&quot; class=&quot;framer-text&quot;&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;https://platform.inceptionlabs.ai/&quot; rel=&quot;noopener&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Try the Mercury 2 API&lt;/strong&gt;&lt;/a&gt;&lt;!--/$--&gt;&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;https://chat.inceptionlabs.ai/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Try Mercury 2 in 
Chat&lt;/strong&gt;&lt;/a&gt;&lt;!--/$--&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury 2 is OpenAI API compatible. Drop into your existing stack - no rewrites required.&lt;/p&gt;&lt;p dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;If you’re doing an enterprise evaluation, we’ll partner with you on workload fit, eval design, and performance validation under your expected serving constraints.&lt;/p&gt;&lt;h3 dir=&quot;ltr&quot; class=&quot;framer-text framer-styles-preset-1jgxk8t&quot;&gt;Mercury 2 is live. Welcome to diffusion.&lt;/h3&gt;&lt;/div&gt;</description>
      <link>https://www.inceptionlabs.ai/blog/introducing-mercury-2</link>
      <guid isPermaLink="false">https://www.inceptionlabs.ai/blog/introducing-mercury-2</guid>
      <pubDate>Mon, 23 Feb 2026 16:00:00 GMT</pubDate>
      <author>Stefano Ermon</author>
      <category>Product</category>
    </item>
    <item>
      <title>SearchBlox + Inception: Real-Time GenAI Search at Enterprise Scale</title>
      <description>&lt;div class=&quot;framer-xg880z&quot; data-framer-name=&quot;Content&quot; style=&quot;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;about:blank&quot; rel=&quot;noopener&quot;&gt;SearchBlox&lt;/a&gt;&lt;!--/$--&gt; is bringing fast, contextual search and RAG to enterprises across ecommerce, customer service, knowledge management, legal, and digital platforms. To support AI at enterprise scale, companies need solutions that deliver speed, accuracy, and predictable costs.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;SearchBlox SearchAI now integrates &lt;strong class=&quot;framer-text&quot;&gt;Inception’s Mercury dLLM&lt;/strong&gt;, unlocking ultra-low-latency, cost-efficient GenAI at scale. This partnership provides SearchBlox customers sub-second GenAI responses even on enterprise workloads.&lt;/p&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;The Challenge: Speed and Cost at Production Scale&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Enterprises deploying GenAI search face a fundamental tension: traditional autoregressive LLMs deliver quality but at latencies and costs that make real-time applications slow and expensive. 
For use cases like ecommerce product Q&amp;amp;A, customer support, or employee knowledge assistants, response time directly impacts satisfaction.&amp;nbsp;&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;SearchBlox needed a solution that could handle real-time queries without sacrificing accuracy or exploding infrastructure costs.&lt;/p&gt;&lt;blockquote class=&quot;framer-text framer-styles-preset-xmlmn6&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;em class=&quot;framer-text&quot;&gt;“Speed is now the defining differentiator for enterprise AI. Our partnership with Inception makes real-time GenAI a practical reality. Whether it’s customer support, compliance, risk, analytics, or e-commerce, every SearchAI customer benefits from sub-second intelligence across all of their data.”&lt;br class=&quot;framer-text&quot;&gt;- Timo Selvaraj, Chief Product Officer, SearchBlox&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;The Solution: Mercury Inside SearchAI&#39;s RAG Pipeline&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury made sense within SearchAI&#39;s pipeline for delivering fast, accurate inference at scale on unstructured text. Diffusion generation gives Mercury a parallel refinement path that enables enterprise search to run with consistent sub-second latency.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;SearchBlox evaluated other lightweight models, but they carried higher latency or couldn&#39;t maintain quality across the range of tasks SearchAI handles. 
Mercury&#39;s architecture gives predictable performance even under bursty enterprise workloads, which is critical for customer-facing applications where inconsistent response times degrade user experience.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;SearchBlox integrated Mercury into its SearchAI architecture as a model endpoint within their LLM abstraction. The integration required no refactoring or pipeline rewrites.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury slots into the existing RAG pipeline alongside:&lt;/p&gt;&lt;ul class=&quot;framer-text&quot;&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Hybrid search&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Context builder and metadata enrichment&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Query rewriting&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;SearchAI PreText NLP&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Vector and keyword fusion&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;SmartFAQs&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p 
class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Commerce search experiences&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;The integration preserves everything SearchAI customers already rely on while dramatically improving speed and lowering cost.&lt;/p&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Outcomes for Enterprise Search&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Speed that scales.&lt;/strong&gt; Mercury delivers sub-second inference even under heavy enterprise workloads. For SearchBlox customers, this means product Q&amp;amp;A, smart FAQs, and knowledge retrieval that feel instant, whether handling ten queries or ten thousand.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;60 to 90% lower inference costs.&lt;/strong&gt; Mercury reduced SearchBlox’s compute costs dramatically compared to autoregressive LLMs. Combined with SearchBlox&#39;s fixed-cost licensing model, enterprises get sustainable GenAI search economics.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Secure deployments.&lt;/strong&gt; SearchAI customers can deploy Mercury on AWS Bedrock or Azure Foundry, ensuring that data never leaves their private cloud instance and that they inherit the full security, compliance, and governance frameworks of AWS or Azure.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Better quality through context.&lt;/strong&gt; With SearchAI&#39;s RAG and metadata automation, Mercury gains higher grounding accuracy, better domain-aware summarization, personalized output for each user and workflow, multilingual support, and higher tolerance for noisy enterprise data. 
The result is hyper-relevant responses in every interaction.&lt;/p&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Use Cases Now Production-Ready&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;With Mercury powering SearchAI, enterprises can confidently deploy SearchBlox’s GenAI solutions across:&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;eCommerce:&lt;/strong&gt; Faster product Q&amp;amp;A, instant comparison summaries, real-time personalization, intelligent search suggestions.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Customer Support:&lt;/strong&gt; Instant smart answers, agent assist and summarization, knowledge retrieval with no lag, 24/7 multilingual support.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Enterprise Knowledge and RAG:&lt;/strong&gt; Contract and policy summarization, compliance checks, legal Q&amp;amp;A, employee knowledge assistants.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Digital Experience Search:&lt;/strong&gt; Lightning-fast site search, contextual recommendations, AI-powered content discovery.&lt;/p&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Takeaway&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Whether handling ecommerce queries, customer support, legal research, or knowledge management, the combination of SearchAI&#39;s RAG pipeline and Mercury&#39;s diffusion generation process turns GenAI search from a cost center into a competitive advantage.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;The future of enterprise search is real-time, contextual, and cost-efficient. 
SearchBlox and Inception are delivering it today.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Visit the link below for details on SearchAI + Mercury Diffusion LLMs, and schedule a personalized demo.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;https://www.searchblox.com/partners/searchai-and-inception-mercury-diffusion-models&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.searchblox.com/partners/searchai-and-inception-mercury-diffusion-models&lt;/a&gt;&lt;!--/$--&gt;&lt;/p&gt;&lt;/div&gt;</description>
      <link>https://www.inceptionlabs.ai/blog/searchblox-and-inception</link>
      <guid isPermaLink="false">https://www.inceptionlabs.ai/blog/searchblox-and-inception</guid>
      <pubDate>Sun, 11 Jan 2026 16:00:00 GMT</pubDate>
      <author>Sawyer Birnbaum</author>
      <category>Customers</category>
    </item>
    <item>
      <title>Mercury Diffusion LLM Now Available on Azure AI Foundry</title>
      <description>&lt;div class=&quot;framer-xg880z&quot; data-framer-name=&quot;Content&quot; style=&quot;transform:none&quot; data-framer-component-type=&quot;RichTextContainer&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Today, we&#39;re thrilled to announce that &lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;https://ai.azure.com/explore/models/Mercury/version/1/registry/azureml-inceptionlabs&quot; rel=&quot;noopener&quot;&gt;Mercury is available on Azure AI Foundry&lt;/a&gt;&lt;!--/$--&gt;, bringing the first commercial-scale diffusion large language model (dLLM) to enterprise developers. This release combines Inception&#39;s breakthrough diffusion architecture with Azure&#39;s enterprise-ready infrastructure, giving developers access to a model that delivers both exceptional speed and quality.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Built by the team behind foundational AI technologies including Flash Attention, Direct Preference Optimization, and the original diffusion models for images, Mercury represents a fundamental shift in how language models generate text. Developers on Azure AI Foundry now have access to a model that redefines what&#39;s possible in real-time AI applications.&lt;/p&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;A New Architecture for Language Generation&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Traditional language models generate text sequentially, one token at a time. This creates an inherent bottleneck where each token must wait for all previous tokens to be generated. Mercury uses a diffusion-based architecture to generate multiple tokens in parallel, enabling dramatically faster inference. 
The result: Mercury runs up to 10x faster than comparable autoregressive models.&lt;/p&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Performance: Speed &amp;amp; Quality&lt;/h2&gt;&lt;img alt=&quot;&quot; width=&quot;993&quot; height=&quot;551&quot; src=&quot;https://framerusercontent.com/images/UCjAOJmgmA03AehCu9GKUmBRNkM.jpg&quot; srcset=&quot;https://framerusercontent.com/images/UCjAOJmgmA03AehCu9GKUmBRNkM.jpg?scale-down-to=512&amp;amp;width=1986&amp;amp;height=1103 512w,https://framerusercontent.com/images/UCjAOJmgmA03AehCu9GKUmBRNkM.jpg?scale-down-to=1024&amp;amp;width=1986&amp;amp;height=1103 1024w,https://framerusercontent.com/images/UCjAOJmgmA03AehCu9GKUmBRNkM.jpg?width=1986&amp;amp;height=1103 1986w&quot; class=&quot;framer-text framer-image framer-styles-preset-1teh2bg&quot; style=&quot;aspect-ratio:1986 / 1103&quot; sizes=&quot;(min-width: 1024px) 100vw, (min-width: 768px) and (max-width: 1023.98px) 100vw, (max-width: 767.98px) 100vw&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury provides frontier quality with unparalleled speeds. 
Across knowledge, coding, instruction following, and mathematical benchmarks, Mercury performs on par with models like Gemini 2.5 Flash and Claude 4.5 Haiku, while running up to 10x faster.&lt;br class=&quot;framer-text&quot;&gt;&lt;br class=&quot;framer-text trailing-break&quot;&gt;&lt;/p&gt;&lt;img alt=&quot;&quot; width=&quot;1000&quot; height=&quot;555&quot; src=&quot;https://framerusercontent.com/images/8VTiMEooyYdNqQ6c1hLt6yvHgE.png&quot; srcset=&quot;https://framerusercontent.com/images/8VTiMEooyYdNqQ6c1hLt6yvHgE.png?scale-down-to=512&amp;amp;width=2000&amp;amp;height=1110 512w,https://framerusercontent.com/images/8VTiMEooyYdNqQ6c1hLt6yvHgE.png?scale-down-to=1024&amp;amp;width=2000&amp;amp;height=1110 1024w,https://framerusercontent.com/images/8VTiMEooyYdNqQ6c1hLt6yvHgE.png?width=2000&amp;amp;height=1110 2000w&quot; class=&quot;framer-text framer-image framer-styles-preset-1teh2bg&quot; style=&quot;aspect-ratio:2000 / 1110&quot; sizes=&quot;(min-width: 1024px) 100vw, (min-width: 768px) and (max-width: 1023.98px) 100vw, (max-width: 767.98px) 100vw&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Enterprise-Ready on Azure&lt;/h2&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury on Azure AI Foundry is production-ready out of the box. It features a 128K token context window for processing large documents and maintaining extensive conversations, with native tool calling and structured output support using JSON schemas for building agentic workflows. 
The API is OpenAI-compatible, making integration with existing codebases seamless.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Azure AI Foundry provides enterprise-grade infrastructure, including network isolation, data privacy guarantees ensuring your data stays in your Azure environment and is never used for training, Azure compliance standards including SOC2 and HIPAA, and comprehensive observability through Azure Monitor and Application Insights.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Real-World Use Cases&lt;/strong&gt;&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury&#39;s speed-quality combination makes applications faster and more responsive:&lt;/p&gt;&lt;ul class=&quot;framer-text&quot;&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Coding assistants:&lt;/strong&gt; Stay in flow with responsive autocomplete, intelligent tab suggestions, fast chat responses, and more.&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Real-time voice agents:&lt;/strong&gt; Engage naturally with AI for customer support, translation, and beyond.&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Seamless enterprise workflows:&lt;/strong&gt; Automate complex routing, analytics, and decision processes with ultra-responsive AI.&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text 
framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Rapid enterprise search:&lt;/strong&gt; Instantly surface the right data from across your organization’s knowledge base.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 class=&quot;framer-text framer-styles-preset-68tgnr&quot;&gt;Deploy on Azure AI Foundry&lt;/h2&gt;&lt;img alt=&quot;&quot; width=&quot;1216&quot; height=&quot;712&quot; src=&quot;https://framerusercontent.com/images/SutoNLjnjduUm5EcBHJG9ljd6ms.png&quot; srcset=&quot;https://framerusercontent.com/images/SutoNLjnjduUm5EcBHJG9ljd6ms.png?scale-down-to=512&amp;amp;width=2432&amp;amp;height=1424 512w,https://framerusercontent.com/images/SutoNLjnjduUm5EcBHJG9ljd6ms.png?scale-down-to=1024&amp;amp;width=2432&amp;amp;height=1424 1024w,https://framerusercontent.com/images/SutoNLjnjduUm5EcBHJG9ljd6ms.png?scale-down-to=2048&amp;amp;width=2432&amp;amp;height=1424 2048w,https://framerusercontent.com/images/SutoNLjnjduUm5EcBHJG9ljd6ms.png?width=2432&amp;amp;height=1424 2432w&quot; class=&quot;framer-text framer-image framer-styles-preset-1teh2bg&quot; style=&quot;aspect-ratio:2432 / 1424&quot; sizes=&quot;(min-width: 1024px) 100vw, (min-width: 768px) and (max-width: 1023.98px) 100vw, (max-width: 767.98px) 100vw&quot; referrerpolicy=&quot;no-referrer&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury is available in the US and Canada regions through Azure AI Foundry. Deployment is straightforward—provision your model endpoint and configure your infrastructure through Azure AI Foundry&#39;s unified catalog. 
The Mercury software license costs $0.78/hour, with compute costs billed separately through your Azure account based on the resources you provision.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Azure AI Foundry Integration:&lt;/strong&gt;&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Mercury integrates seamlessly with the broader Azure ecosystem. Deploy using Azure AI Foundry&#39;s model catalog, apply Azure AI Content Safety for content filtering, monitor performance and costs in real-time, manage access with Azure RBAC and managed identities, and build multi-model applications using Azure&#39;s agent framework.&lt;/p&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;strong class=&quot;framer-text&quot;&gt;Get Started in Three Steps&lt;/strong&gt;&lt;/p&gt;&lt;ol class=&quot;framer-text&quot;&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Navigate to Azure AI Foundry’s &lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;https://ai.azure.com/explore/models&quot; rel=&quot;noopener&quot;&gt;Model Catalog&lt;/a&gt;&lt;!--/$--&gt;, search for &lt;strong class=&quot;framer-text&quot;&gt;Mercury,&lt;/strong&gt; and click on the model card. 
Alternatively, you can visit our model card &lt;!--$--&gt;&lt;a class=&quot;framer-text framer-styles-preset-1j7uuqm&quot; href=&quot;https://ai.azure.com/explore/models/Mercury/version/1/registry/azureml-inceptionlabs&quot; rel=&quot;noopener&quot;&gt;here&lt;/a&gt;&lt;!--/$--&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;Click on &lt;strong class=&quot;framer-text&quot;&gt;Use this model,&lt;/strong&gt; choose a project, and follow the UI prompts to provision your desired capacity.&lt;/p&gt;&lt;/li&gt;&lt;li data-preset-tag=&quot;p&quot; class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;While the default configuration should work for most developers, we recommend choosing &lt;strong class=&quot;framer-text&quot;&gt;ND-H100-v5&lt;/strong&gt; instances for optimal speeds. The deployment should take a few minutes to finish.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p class=&quot;framer-text framer-styles-preset-1o2d8l&quot;&gt;That’s it! 
You should now be able to start building with your newly provisioned API endpoint:&lt;/p&gt;&lt;div class=&quot;framer-text framer-text-module&quot; style=&quot;width:100%;height:auto&quot; data-width=&quot;fill&quot;&gt;&lt;div class=&quot;ssr-variant hidden-zi3r5f hidden-loc3ag&quot;&gt;&lt;div class=&quot;framer-cb&quot; style=&quot;--cb-colors-surface1-light:#161820;--cb-colors-surface2-light:#252525;--cb-colors-surface3-light:#2f2f2f;--cb-colors-clickable-light:#999999;--cb-colors-base-light:#d92121;--cb-colors-disabled-light:#4d4d4d;--cb-colors-hover-light:#c5c5c5;--cb-colors-accent-light:#0099ff;--cb-colors-error-light:#ff3366;--cb-colors-errorSurface-light:#ffe0e8;--cb-syntax-color-plain-light:#eeeeee;--cb-syntax-color-comment-light:#666666;--cb-syntax-fontStyle-comment-light:italic;--cb-syntax-color-keyword-light:#00BBFF;--cb-syntax-color-tag-light:#00BBFF;--cb-syntax-color-punctuation-light:#999999;--cb-syntax-color-definition-light:#ffcc66;--cb-syntax-color-property-light:#77dddd;--cb-syntax-color-static-light:#ff8866;--cb-syntax-color-string-light:#bb88ff;--cb-color-scheme-light:dark;--cb-colors-surface1-dark:#161820;--cb-colors-surface2-dark:#252525;--cb-colors-surface3-dark:#2f2f2f;--cb-colors-clickable-dark:#999999;--cb-colors-base-dark:#d92121;--cb-colors-disabled-dark:#4d4d4d;--cb-colors-hover-dark:#c5c5c5;--cb-colors-accent-dark:#0099ff;--cb-colors-error-dark:#ff3366;--cb-colors-errorSurface-dark:#ffe0e8;--cb-syntax-color-plain-dark:#eeeeee;--cb-syntax-color-comment-dark:#666666;--cb-syntax-fontStyle-comment-dark:italic;--cb-syntax-color-keyword-dark:#00BBFF;--cb-syntax-color-tag-dark:#00BBFF;--cb-syntax-color-punctuation-dark:#999999;--cb-syntax-color-definition-dark:#ffcc66;--cb-syntax-color-property-dark:#77dddd;--cb-syntax-color-static-dark:#ff8866;--cb-syntax-color-string-dark:#bb88ff;--cb-color-scheme-dark:dark;position:relative;width:100%;height:100%&quot;&gt;&lt;div class=&quot;sp-919074184 sp-c-fVPbOs sp-c-fVPbOs-LrWkf-variant-dark sp-wrapper&quot; style=&quot;height:100%&quot;&gt;&lt;div class=&quot;sp-c-ikJbEZ sp-layout&quot; style=&quot;height:100%;--sp-layout-height:100%;--cb-padding:30px;border-style:none;border-color:unset;border-top-width:0px;border-bottom-width:0px;border-left-width:0px;border-right-width:0px;background-color:var(--sp-colors-surface1);border-radius:15px;transform:unset;overflow:hidden&quot;&gt;&lt;div class=&quot;sp-c-euXojQ sp-editor sp-stack&quot; style=&quot;letter-spacing:0em;font-style:normal;font-weight:400&quot;&gt;&lt;div aria-labelledby=&quot;/example.jsx-:R7anldblop:-tab&quot; class=&quot;sp-c-gtcpyq sp-code-editor cb-code-editor&quot; id=&quot;/example.jsx-:R7anldblop:-tab-panel&quot; role=&quot;tabpanel&quot;&gt;&lt;div aria-autocomplete=&quot;list&quot; aria-label=&quot;Code Editor for example.jsx&quot; aria-multiline=&quot;true&quot; class=&quot;sp-pristine sp-javascript sp-c-jOWzsE sp-c-jkvvao sp-cm&quot; role=&quot;textbox&quot; tabindex=&quot;0&quot; translate=&quot;no&quot;&gt;&lt;pre class=&quot;sp-c-fWymNx sp-pre-placeholder&quot; style=&quot;margin-left:var(--sp-space-4)&quot;&gt;&lt;span class=&quot;sp-syntax-plain&quot;&gt;curl&lt;/span&gt; --&lt;span class=&quot;sp-syntax-plain&quot;&gt;location &lt;/span&gt;&lt;span class=&quot;sp-syntax-string&quot;&gt;&#39;https://mercury-endpoint-ftecx.eastus.inference.ml.azure.com/v1/chat/completions&#39;&lt;/span&gt; \
        --&lt;span class=&quot;sp-syntax-plain&quot;&gt;header &lt;/span&gt;&lt;span class=&quot;sp-syntax-string&quot;&gt;&#39;azureml-model-deployment: mercury-1&#39;&lt;/span&gt; \\
        --&lt;span class=&quot;sp-syntax-plain&quot;&gt;header &lt;/span&gt;&lt;span class=&quot;sp-syntax-string&quot;&gt;&#39;Content-Type: application/json&#39;&lt;/span&gt; \\
        --&lt;span class=&quot;sp-syntax-plain&quot;&gt;header &lt;/span&gt;&lt;span class=&quot;sp-syntax-string&quot;&gt;&#39;Authorization: Bearer &amp;lt;your-api-key&amp;gt;&#39;&lt;/span&gt; \\
        --&lt;span class=&quot;sp-syntax-plain&quot;&gt;data &lt;/span&gt;&lt;span class=&quot;sp-syntax-string&quot;&gt;&#39;{
        &lt;/span&gt; &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;model&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;mercury&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;messages&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-punctuation&quot;&gt;[&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-punctuation&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;role&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;user&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;content&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;Hello!&quot;&lt;/span&gt; &lt;span class=&quot;sp-syntax-punctuation&quot;&gt;}&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-punctuation&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;stream&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-static&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;temperature&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-static&quot;&gt;0.0&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;,&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-string&quot;&gt;&quot;max_tokens&quot;&lt;/span&gt;&lt;span class=&quot;sp-syntax-punctuation&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;sp-syntax-static&quot;&gt;512&lt;/span&gt;
        &lt;span class=&quot;sp-syntax-punctuation&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sp-syntax-string&quot;&gt;&#39;&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;
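The same request can be sketched in Python. This is a minimal illustration, not part of the PR: the endpoint URL, `azureml-model-deployment` header, and API-key placeholder are taken verbatim from the curl example above, and the payload is assumed to be OpenAI-compatible as that example suggests.

```python
import json
from urllib import request

# Endpoint and deployment name copied from the curl example; placeholders, not
# guaranteed to be live.
ENDPOINT = "https://mercury-endpoint-ftecx.eastus.inference.ml.azure.com/v1/chat/completions"


def build_request(api_key: str, prompt: str) -> request.Request:
    """Build (but do not send) the same chat-completions request as the curl example."""
    payload = {
        "model": "mercury",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,       # stream tokens as they are generated
        "temperature": 0.0,   # deterministic sampling
        "max_tokens": 512,
    }
    headers = {
        "azureml-model-deployment": "mercury-1",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )


# To actually send it (requires a valid key and network access):
# with request.urlopen(build_request("your-api-key", "Hello!")) as resp:
#     for line in resp:  # streamed response chunks
#         print(line.decode())
```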

@github-actions
Contributor

Auto Review

No clear rule violations found in the current diff.

