pydantic-resolve is a framework for composing complex data structures with an intuitive, declarative, resolver-based architecture.
It supports:
- pydantic v1
- pydantic v2
- dataclass (`from pydantic.dataclasses import dataclass`)
```python
from typing import Optional
from pydantic_resolve import LoaderDepend

# BaseTask, BaseUser, BaseStory and the DataLoaders are defined in the steps below
class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id) if self.assignee_id else None

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)
```
With the snippet above, pydantic-resolve can easily transform plain stories into stories enriched with related details:
`BaseStory`:

```json
[
  { "id": 1, "name": "story - 1" },
  { "id": 2, "name": "story - 2" }
]
```
`Story`:

```json
[
  {
    "id": 1,
    "name": "story - 1",
    "tasks": [
      {
        "id": 1,
        "name": "design",
        "user": {
          "id": 1,
          "name": "tangkikodo"
        }
      }
    ]
  },
  {
    "id": 2,
    "name": "story - 2",
    "tasks": [
      {
        "id": 2,
        "name": "add ut",
        "user": {
          "id": 2,
          "name": "john"
        }
      }
    ]
  }
]
```
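Producing the enriched output is then a single call (a sketch; `query_stories` is a hypothetical fetch helper, not part of the library):

```python
from pydantic_resolve import Resolver

stories = [Story(**s) for s in await query_stories()]  # hypothetical fetch helper
data = await Resolver().resolve(stories)
```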
If you have experience with GraphQL, this article provides comprehensive insights: Resolver Pattern: A Better Alternative to GraphQL in BFF.
Persisted queries in GraphQL can be easily transformed into the pydantic-resolve pattern for a performance improvement.
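For illustration, a persisted query like the hypothetical one below maps directly onto resolver classes (the query text and the `TaskView`/`StoryView` names are ours, not from the library):

```python
# Hypothetical persisted GraphQL query:
#   query MyStories { stories { id name tasks { id name } } }
# A sketch of the equivalent pydantic-resolve composition, reusing the
# StoryTaskLoader defined later in this README:
from pydantic import BaseModel
from pydantic_resolve import LoaderDepend

class TaskView(BaseModel):
    id: int
    name: str

class StoryView(BaseModel):
    id: int
    name: str

    tasks: list[TaskView] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)
```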
Extend your data models by implementing `resolve_field` methods for data fetching and `post_field` methods for transformations, enabling node creation, in-place modification, and cross-node data aggregation (a minimal sketch follows).
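A minimal sketch of the two hook styles (the `Summary` model and its fields are illustrative):

```python
from pydantic import BaseModel

class Summary(BaseModel):
    items: list[int] = []
    def resolve_items(self):  # data fetching: may return a value or an awaitable
        return [1, 2, 3]

    total: int = 0
    def post_total(self):  # transformation: runs after resolve methods complete
        return sum(self.items)
```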
Seamlessly integrates with modern Python web frameworks including FastAPI, Litestar, and Django-ninja.
```shell
pip install pydantic-resolve
```
Starting from pydantic-resolve v1.11.0, both pydantic v1 and v2 are supported.
- Documentation: https://allmonday.github.io/pydantic-resolve/v2/introduction/
- Demo Repository: https://github.com/allmonday/pydantic-resolve-demo
- Composition-Oriented Pattern: https://github.com/allmonday/composition-oriented-development-pattern
Building complex data structures requires only 3 systematic steps; let's take an Agile Story as an example.
1. Establish entity relationships as foundational data models (stable, serves as an architectural blueprint)

```python
from typing import Optional
from pydantic import BaseModel

class BaseStory(BaseModel):
    id: int
    name: str
    assignee_id: Optional[int]
    report_to: Optional[int]

class BaseTask(BaseModel):
    id: int
    story_id: int
    name: str
    estimate: int
    done: bool
    assignee_id: Optional[int]

class BaseUser(BaseModel):
    id: int
    name: str
    title: str
```
```python
from aiodataloader import DataLoader
from pydantic_resolve import build_list, build_object

class StoryTaskLoader(DataLoader):
    async def batch_load_fn(self, keys: list[int]):
        tasks = await get_tasks_by_story_ids(keys)
        return build_list(tasks, keys, lambda x: x.story_id)

class UserLoader(DataLoader):
    async def batch_load_fn(self, keys: list[int]):
        users = await get_users_by_ids(keys)
        return build_object(users, keys, lambda x: x.id)
```
DataLoader implementations support flexible data sources, from database queries to microservice RPC calls, and can be swapped out in future optimizations.
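For instance, a loader can just as well wrap an RPC client (a sketch; `user_rpc.fetch_users` is a hypothetical client call):

```python
class UserRpcLoader(DataLoader):
    async def batch_load_fn(self, keys: list[int]):
        # One batched RPC round-trip instead of N single calls.
        users = await user_rpc.fetch_users(ids=keys)  # hypothetical RPC client
        return build_object(users, keys, lambda x: x.id)
```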
2. Based on specific business logic, create domain-specific data structures through selective schemas and relationship DataLoaders (stable, reusable across use cases)

```python
from typing import Optional
from pydantic_resolve import LoaderDepend

class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id) if self.assignee_id else None

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id) if self.assignee_id else None

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.report_to) if self.report_to else None
```
Utilize the `ensure_subset` decorator for field validation and consistency enforcement:
```python
from pydantic_resolve import ensure_subset, LoaderDepend

@ensure_subset(BaseStory)
class Story(BaseModel):
    id: int
    assignee_id: int
    report_to: int

    tasks: list[BaseTask] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)
```
Once business models are validated, consider replacing DataLoaders with specialized queries for enhanced performance, as sketched below.
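For example, once the `Story` shape is settled, a single joined query could populate `tasks` directly, making `resolve_tasks` and the loader unnecessary (a sketch; `query_stories_with_tasks` is a hypothetical one-shot query):

```python
class Story(BaseStory):
    tasks: list[BaseTask] = []  # populated directly by the query, no DataLoader involved

async def get_stories() -> list[Story]:
    rows = await query_stories_with_tasks()  # hypothetical single JOIN query returning dicts
    return [Story(**row) for row in rows]
```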
3. Apply presentation-specific modifications and data aggregations (flexible, context-dependent)
Leverage `post_field` methods for ancestor data access, node transfers, and in-place transformations.

```python
from typing import Optional
from pydantic_resolve import LoaderDepend, Collector

class Task(BaseTask):
    __pydantic_resolve_collect__ = {'user': 'related_users'}  # propagate user to collector: 'related_users'

    user: Optional[BaseUser] = None
    def resolve_user(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.report_to)

    # ---------- Post-processing ------------
    related_users: list[BaseUser] = []
    def post_related_users(self, collector=Collector(alias='related_users')):
        return collector.values()
```

```python
class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.report_to)

    # ---------- Post-processing ------------
    total_estimate: int = 0
    def post_total_estimate(self):
        return sum(task.estimate for task in self.tasks)
```
```python
from typing import Optional
from pydantic_resolve import LoaderDepend

class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

    # ---------- Post-processing ------------
    def post_name(self, ancestor_context):  # access story.name from the parent context
        return f"{ancestor_context['story_name']} - {self.name}"

class Story(BaseStory):
    __pydantic_resolve_expose__ = {'name': 'story_name'}

    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

    assignee: Optional[BaseUser] = None
    def resolve_assignee(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

    reporter: Optional[BaseUser] = None
    def resolve_reporter(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.report_to)
```
```python
from pydantic_resolve import Resolver

stories: list[Story] = await query_stories()
await Resolver().resolve(stories)
```
Complete!
The framework significantly reduces complexity in data composition by maintaining alignment with entity-relationship models, resulting in enhanced maintainability.
Utilizing an ER-oriented modeling approach delivers 3-5x development efficiency gains and 50%+ code reduction.
Leveraging pydantic's capabilities, it enables GraphQL-like hierarchical data structures while providing flexible business logic integration during data resolution.
Seamlessly integrates with FastAPI to construct frontend-optimized data structures and generate TypeScript SDKs for type-safe client integration.
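A minimal FastAPI endpoint sketch, assuming the `Story` class and the `query_stories` helper from the steps above:

```python
from fastapi import FastAPI
from pydantic_resolve import Resolver

app = FastAPI()

@app.get("/stories", response_model=list[Story])
async def list_stories():
    stories: list[Story] = await query_stories()  # assumed to return Story objects
    return await Resolver().resolve(stories)
```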
The core architecture provides `resolve` and `post` method hooks for pydantic and dataclass objects:

- `resolve`: handles data fetching operations
- `post`: executes post-processing transformations
This implements a recursive resolution pipeline that completes when all descendant nodes are processed.
Consider the Sprint, Story, and Task relationship hierarchy:
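A minimal sketch of that hierarchy, assuming a hypothetical `SprintStoryLoader` built the same way as `StoryTaskLoader` above:

```python
from pydantic import BaseModel
from pydantic_resolve import LoaderDepend

class Sprint(BaseModel):
    id: int
    name: str

    stories: list[Story] = []  # Story as defined in the steps above
    def resolve_stories(self, loader=LoaderDepend(SprintStoryLoader)):
        return loader.load(self.id)
```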
Upon object instantiation with defined methods, pydantic-resolve traverses the data graph, executes resolution methods, and produces the complete data structure.
DataLoader integration eliminates N+1 query problems inherent in multi-level data fetching, optimizing performance characteristics.
DataLoader architecture enables modular class composition and reusability across different contexts.
Additionally, the framework provides expose and collector mechanisms for sophisticated cross-layer data processing patterns.
Run the tests and inspect coverage:

```shell
tox                    # run the test suite
tox -e coverage        # generate the coverage report
python -m http.server  # serve the HTML coverage report locally
```
Current test coverage: 97%
Benchmark: `ab -c 50 -n 1000`, based on FastAPI.

strawberry-graphql (including the cost of parsing query statements):
```
Server Software:        uvicorn
Server Hostname:        localhost
Server Port:            8000

Document Path:          /graphql
Document Length:        5303 bytes

Concurrency Level:      50
Time taken for tests:   3.630 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      5430000 bytes
Total body sent:        395000
HTML transferred:       5303000 bytes
Requests per second:    275.49 [#/sec] (mean)
Time per request:       181.498 [ms] (mean)
Time per request:       3.630 [ms] (mean, across all concurrent requests)
Transfer rate:          1460.82 [Kbytes/sec] received
                        106.27 kb/s sent
                        1567.09 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       1
Processing:    31  178  14.3    178     272
Waiting:       30  176  14.3    176     270
Total:         31  178  14.4    179     273
```
pydantic-resolve:
```
Server Software:        uvicorn
Server Hostname:        localhost
Server Port:            8000

Document Path:          /sprints
Document Length:        4621 bytes

Concurrency Level:      50
Time taken for tests:   2.194 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      4748000 bytes
HTML transferred:       4621000 bytes
Requests per second:    455.79 [#/sec] (mean)
Time per request:       109.700 [ms] (mean)
Time per request:       2.194 [ms] (mean, across all concurrent requests)
Transfer rate:          2113.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       1
Processing:    30  107  10.9    106     138
Waiting:       28  105  10.7    104     138
Total:         30  107  11.0    106     140
```