Building Effective Agentic AI Systems: Lessons from the Field

Made with LangGraph · Colab Ready · License: GPL v3

This repository contains the presentation and Python Jupyter notebooks for my DataHack Summit 2025 conference talk, Building Effective Agentic AI Systems: Lessons from the Field. Drawing from my experience building and deploying Agentic AI systems, we’ll focus on three pillars: Architecting, Optimizing, and Observability for Agentic AI Systems.

Session Details


Everyone is building AI agents, but how do you design Agentic AI Systems that are truly reliable in the real world?

Agentic AI systems can plan tasks, use tools, reflect on results, and even collaborate with other agents.
But building them at scale brings challenges:

  • ⚙️ Choosing the right agent architecture
  • 🧠 Handling memory & context efficiently
  • ⚡ Reducing latency
  • 📊 Monitoring and evaluating agents effectively

This session draws from my personal experience building and deploying Agentic AI systems over the past year.
We’ll focus on three pillars: Architecting, Optimizing, and Observability for Agentic AI Systems.


📅 Agenda

1. Introduction

  • What are AI Agents?
  • Common challenges in building Agentic AI Systems
  • AI Agents vs. AI Workflows

2. Architecting Effective Agentic AI Systems

  • Popular Tools & Frameworks – key players and my recommendations
  • Why LangGraph Matters – benefits for building AI agents (a minimal agent sketch follows this list)
  • Agent Design Patterns – tool use, planning, reflection, multi-agent (with practical recommendations)
  • Single-Agent vs. Multi-Agent Systems – real-world hands-on example & recommendations
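
As a taste of the LangGraph-based demos, here is a minimal, standalone sketch of a single tool-using agent built with LangGraph's prebuilt ReAct-style agent. The model name and the `get_weather` tool are illustrative placeholders, not the talk's actual example.

```python
# Minimal single-agent sketch: one LLM, one tool, LangGraph's prebuilt ReAct loop.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It is sunny in {city} today."


llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model that supports tool calling
agent = create_react_agent(llm, tools=[get_weather])

result = agent.invoke({"messages": [("user", "What's the weather in Bengaluru?")]})
print(result["messages"][-1].content)
```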

3. Optimizing Agentic AI Systems

  • Context Engineering – what it is and popular approaches
  • Agentic RAG – integrating RAG with agents
  • Router Agentic RAG – real-world hands-on example (see the routing sketch after this list)
  • MCP & A2A – separating hype from value
  • Proven Multi-Server MCP Architecture – real-world hands-on example
  • Memory Management – long-term vs. short-term
  • Tools & Frameworks for Memory – key players
  • Memory Context Engineering – hands-on examples for Agentic AI
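
To make the routing idea concrete, here is a minimal sketch (not the session's actual notebook) of a Router Agentic RAG flow in LangGraph: a router node decides whether a question needs retrieval before answering. The keyword-based router and stubbed retrieve/generate nodes are illustrative assumptions; in practice an LLM classifier and a vector store would fill those roles.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class RAGState(TypedDict):
    question: str
    context: str
    answer: str


def route_question(state: RAGState) -> str:
    # An LLM classifier would normally pick the branch; a keyword check keeps
    # this sketch self-contained.
    return "retrieve" if "docs" in state["question"].lower() else "answer_directly"


def retrieve(state: RAGState) -> dict:
    # Placeholder for a vector-store similarity search.
    return {"context": "…retrieved passages for: " + state["question"]}


def generate(state: RAGState) -> dict:
    # Placeholder for an LLM call grounded in the retrieved context.
    return {"answer": f"Answer using context: {state.get('context', '')}"}


def answer_directly(state: RAGState) -> dict:
    return {"answer": "Answer from the model's own knowledge."}


graph = StateGraph(RAGState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.add_node("answer_directly", answer_directly)
graph.add_conditional_edges(
    START, route_question, {"retrieve": "retrieve", "answer_directly": "answer_directly"}
)
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
graph.add_edge("answer_directly", END)
app = graph.compile()

print(app.invoke({"question": "What do the docs say about retries?"})["answer"])
```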

4. Observability (Monitoring & Evaluation) for Agentic AI Systems

  • Agent Observability – what it is and why it matters
  • Observability Tools & Frameworks – key players
  • Monitoring Metrics – token usage, latency, cost, tool calls, errors, etc.
  • Evaluation Metrics – goal accuracy, reasoning quality, trajectory accuracy, etc.
  • Hands-On Monitoring – tracing and dashboarding agent behavior (a tracing sketch follows this list)
  • Hands-On Evaluation – building datasets and running evaluations with metrics
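
As a preview of the hands-on monitoring demo, the sketch below shows one common way to get traces flowing into LangSmith: set the tracing environment variables and decorate a function with `@traceable`. The project name and the traced function are illustrative placeholders.

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"          # enable tracing
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-key>"
os.environ["LANGCHAIN_PROJECT"] = "agentic-ai-demo"  # traces grouped under this project

from langsmith import traceable


@traceable(name="summarize_step")
def summarize(text: str) -> str:
    # Any function (LLM call, tool call, full agent step) can be traced;
    # inputs, outputs, and latency show up in the LangSmith dashboard.
    return text[:100]


summarize("Observability means tracing token usage, latency, cost, and errors.")
```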

📝 Throughout the Session

  • ✅ Best practices and caveats for real-world readiness
  • 💻 Hands-on code demos using LangGraph, FastMCP, LangMem, and LangSmith (a standalone FastMCP server sketch follows below)
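
For context on the FastMCP demos, here is a minimal sketch of a single MCP server exposing one tool; a multi-server architecture simply runs several such servers (search, database, files, …) that the agent connects to. The server name and `add` tool are illustrative, not the talk's actual servers.

```python
from fastmcp import FastMCP

mcp = FastMCP("math-server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


if __name__ == "__main__":
    # Serves over stdio by default; an MCP-aware client or agent
    # can then discover and call the `add` tool.
    mcp.run()
```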

📒 Hands-On Notebooks


🎯 Key Takeaways

  • 🚨 Learn about the top challenges that cause Agentic AI systems to fail or underperform in production.
  • 🧩 Discover proven agent design patterns – tool use, planning, reflection, and multi-agent workflows – with clear guidance on when each works best.
  • 🏗️ Understand how to architect and compare single-agent vs. multi-agent systems using real-world examples.
  • 🧠 Learn about context engineering and memory management strategies (short-term & long-term) to improve accuracy and efficiency – see the memory sketch after this list.
  • 🔀 Combine RAG & routing with agents to build powerful Router Agentic RAG systems.
  • 🌐 Evaluate the practical value of MCP and A2A, and learn how to design a multi-server MCP architecture.
  • 📈 Implement observability best practices – monitor runtime metrics (latency, cost, tool usage, errors) and evaluation metrics (goal accuracy, reasoning quality, trajectory accuracy).
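
To ground the memory takeaway above, here is a minimal sketch of the two layers using LangGraph primitives: a checkpointer for short-term, per-thread conversation state and a key-value store for long-term, cross-session memories. The thread ID, namespace, and stored preference are illustrative placeholders.

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.store.memory import InMemoryStore

# Short-term: compile a graph/agent with a checkpointer and reuse the same
# thread_id so every turn in a session sees the prior messages.
checkpointer = MemorySaver()
config = {"configurable": {"thread_id": "user-42-session-1"}}
# agent = create_react_agent(llm, tools, checkpointer=checkpointer)
# agent.invoke({"messages": [("user", "hi")]}, config)

# Long-term: a namespaced key-value store that persists across threads/sessions.
store = InMemoryStore()
store.put(("users", "user-42"), "preferences", {"tone": "concise"})
print(store.get(("users", "user-42"), "preferences").value)  # {'tone': 'concise'}
```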
