
Commit 1bb4dab

Merge pull request #1127 from aparekh02/demo_docs
[DOCS] Job Finding, M&A, and Real Estate Swarm Examples
2 parents f5b9544 + d894366 commit 1bb4dab

File tree

4 files changed: +1377 −1 lines changed


docs/examples/job_finding.md

Lines changed: 396 additions & 0 deletions
@@ -0,0 +1,396 @@
# Job Finding Swarm

## Overview

The Job Finding Swarm is a multi-agent system that automates and streamlines the job search process using the Swarms framework. It coordinates specialized AI agents to analyze user requirements, execute job searches, and curate relevant opportunities, turning traditional job hunting into a collaborative, iterative process.

## Key Components

The Job Finding Swarm consists of three specialized agents, each responsible for a critical stage of the job search process:

| Agent Name | Role | Responsibilities |
| :-------------------------- | :-------------------------- | :----------------------------------------------------------------------------------------------- |
| **Sarah-Requirements-Analyzer** | Clarifies requirements | Gathers and analyzes user job preferences; generates 3-5 search queries. |
| **David-Search-Executor** | Runs job searches | Calls the `get_jobs` tool; analyzes and categorizes job results by match strength. |
| **Lisa-Results-Curator** | Organizes results | Filters, prioritizes, and presents jobs; provides top picks and refines the search with user input. |
## Step 1: Setup and Installation

### Prerequisites

| Requirement |
|-----------------------|
| Python 3.8 or higher |
| pip package manager |

1. **Install dependencies:**

    Use the following commands to install all dependencies.

    ```bash
    # Install Swarms framework
    pip install swarms

    # Install environment and logging dependencies
    pip install python-dotenv loguru

    # Install HTTP client and tools
    pip install httpx swarms_tools
    ```

2. **Set up API Keys:**

    The agents run on OpenAI models, so an `OPENAI_API_KEY` is required. The `get_jobs` tool queries the JSearch API on RapidAPI, so you will also need a RapidAPI key; it is set in the `x-rapidapi-key` header inside the code below.

    Create a `.env` file in the root directory of your project (or wherever your application loads environment variables) and add your API key:

    ```
    OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
    ```

    Replace `"YOUR_OPENAI_API_KEY"` with your actual API key.
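Once the keys are in the `.env` file, loading them at startup is a single `load_dotenv()` call with python-dotenv. As a rough illustration of what that call does, here is a pure-stdlib sketch of a minimal `.env` loader; `load_env_file` is a hypothetical helper written for this example, not part of any library:

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv():
    parse KEY=VALUE lines and export them, without overriding
    variables already set in the environment."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))


load_env_file()
print("OPENAI_API_KEY set:", bool(os.getenv("OPENAI_API_KEY")))
```

In practice, prefer `from dotenv import load_dotenv; load_dotenv()` at the top of your script; the sketch only shows the mechanics.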
## Step 2: Running the Job Finding Swarm
```python
from swarms import Agent, SequentialWorkflow
import http.client
import json
import urllib.parse


def get_jobs(query: str, limit: int = 10) -> str:
    """
    Fetches real-time jobs from the JSearch API based on role, location,
    and experience, using http.client.
    """
    # Prepare query string for URL
    encoded_query = urllib.parse.quote(query)
    path = f"/search?query={encoded_query}&page=1&num_pages=1&country=us&limit={limit}&date_posted=all"

    conn = http.client.HTTPSConnection("jsearch.p.rapidapi.com")

    headers = {
        "x-rapidapi-key": "",  # <-- Add your RapidAPI key here; requests fail without it
        "x-rapidapi-host": "jsearch.p.rapidapi.com",
    }

    conn.request("GET", path, headers=headers)

    res = conn.getresponse()
    data = res.read()
    decoded = data.decode("utf-8")
    try:
        result_dict = json.loads(decoded)
    except Exception:
        # Fallback for unexpected output
        return decoded

    results = result_dict.get("data", [])
    jobs_list = [
        {
            "title": job.get("job_title"),
            "company": job.get("employer_name"),
            "location": job.get("job_city") or job.get("job_country"),
            "experience": job.get("job_required_experience", {}).get("required_experience_in_months"),
            "url": job.get("job_apply_link"),
        }
        for job in results
    ]
    return json.dumps(jobs_list)


REQUIREMENTS_ANALYZER_PROMPT = """
You are the Requirements Analyzer Agent for Job Search.

ROLE:
Extract and clarify job search requirements from user input to create optimized search queries.

RESPONSIBILITIES:
- Engage with the user to understand:
  * Desired job titles and roles
  * Required skills and qualifications
  * Preferred locations (remote, hybrid, on-site)
  * Salary expectations
  * Company size and culture preferences
  * Industry preferences
  * Experience level
  * Work authorization status
  * Career goals and priorities

- Analyze user responses to identify:
  * Key search terms and keywords
  * Must-have vs nice-to-have requirements
  * Deal-breakers or constraints
  * Priority factors in job selection

- Generate optimized search queries:
  * Create 3-5 targeted search queries based on user requirements

OUTPUT FORMAT:
Provide a comprehensive requirements analysis:
1. User Profile Summary:
   - Job titles of interest
   - Key skills and qualifications
   - Location preferences
   - Salary range
   - Priority factors

2. Search Strategy:
   - List of 3-5 optimized search queries, formatted EXACTLY for linkedin.com/jobs/search/?keywords=...
   - Rationale for each query
   - Expected result types

3. Clarifications Needed (if any):
   - Questions to refine search
   - Missing information

IMPORTANT:
- Always include ALL user responses verbatim in your analysis
- Format search queries clearly for the next agent and fit directly to LinkedIn search URLs
- Be specific and actionable in your recommendations
- Ask follow-up questions if requirements are unclear
"""

SEARCH_EXECUTOR_PROMPT = """
You are the Search Executor Agent for Job Search.

ROLE:
Your job is to execute a job search by querying the tool EXACTLY ONCE using the following required format (FILL IN WHERE IT HAS [ ] WITH THE QUERY INFO):

The argument for the query is to be provided as a plain text string in the following format (DO NOT include technical addresses, just the core query string):

[job role] jobs in [geographic location/remote or in-person or hybrid]

For example:
developer jobs in chicago
senior product manager jobs in remote
data engineer jobs in new york hybrid

TOOLS:
You have access to one tool:
- get_jobs: finds open job opportunities for your specific job and requirements.

RESPONSIBILITIES:
- Run ONE single query, in the above format, as the argument to get_jobs.
- Analyze search results for:
  * Job title match
  * Skills alignment
  * Location compatibility
  * Salary range fit
  * Company reputation
  * Role responsibilities
  * Growth opportunities

- Categorize each result into one of:
  * Strong Match (80-100% alignment)
  * Good Match (60-79% alignment)
  * Moderate Match (40-59% alignment)
  * Weak Match (<40% alignment)

- For each job listing, extract:
  * Job title and company
  * Location and work arrangement
  * Key requirements
  * Salary range (if available)
  * Application link or contact
  * Match score and reasoning

OUTPUT FORMAT:
1. Search Execution Summary:
   - The query executed (write ONLY the string argument supplied, e.g., "developer jobs in chicago" or "software engineer jobs in new york remote")
   - Total results found
   - Distribution by match category

2. Detailed Job Listings (grouped by match strength):
   For each job:
   - Company and Job Title
   - Location and Work Type
   - Key Requirements
   - Why it's a match (or not)
   - Match Score (percentage)
   - Application link
   - Source (specify get_jobs)

3. Search Insights:
   - Common trends/themes in the results
   - Gaps between results and requirements
   - Market observations

INSTRUCTIONS:
- Run only the single query in the format described above, with no extra path, no technical addresses, and no full URLs.
- Use the get_jobs tool with that exact query argument.
- Clearly cite which results come from which source.
- Be objective in match assessment.
- Provide actionable, structured insights.
"""

RESULTS_CURATOR_PROMPT = """
You are the Results Curator Agent for Job Search.

ROLE:
Filter, organize, and present job search results to the user for decision-making.

RESPONSIBILITIES:
- Review all search results from the Search Executor
- Filter and prioritize based on:
  * Match scores
  * User requirements
  * Application deadlines
  * Job quality indicators

- Organize results into:
  * Top Recommendations (top 3-5 best matches)
  * Strong Alternatives (next 5-10 options)
  * Worth Considering (other relevant matches)

- For top recommendations, provide:
  * Detailed comparison
  * Pros and cons for each
  * Application strategy suggestions
  * Next steps

- Engage user for feedback:
  * Present curated results clearly
  * Ask which jobs interest them
  * Identify what's missing
  * Determine if new search is needed

OUTPUT FORMAT:
Provide a curated job search report:

1. Executive Summary:
   - Total jobs reviewed
   - Number of strong matches
   - Key findings

2. Top Recommendations (detailed):
   For each (max 5):
   - Company & Title
   - Why it's a top match
   - Key highlights
   - Potential concerns
   - Recommendation strength (1-10)
   - Application priority (High/Medium/Low)

3. Strong Alternatives (brief list):
   - Company & Title
   - One-line match summary
   - Match score

4. User Decision Point:
   Ask the user:
   - "Which of these jobs interest you most?"
   - "What's missing from these results?"
   - "Should we refine the search or proceed with applications?"
   - "Any requirements you'd like to adjust?"

5. Next Steps:
   Based on user response, either:
   - Proceed with selected jobs
   - Run new search with adjusted criteria
   - Deep dive into specific opportunities

IMPORTANT:
- Make it easy for users to make decisions
- Be honest about job fit
- Provide clear paths forward
- Always ask for user feedback before concluding
"""


def main():
    # User input for job requirements
    user_requirements = """
    I'm looking for a senior software engineer position with the following requirements:
    - Job Title: Senior Software Engineer or Staff Engineer
    - Skills: Python, distributed systems, cloud architecture (AWS/GCP), Kubernetes
    - Location: Remote (US-based) or San Francisco Bay Area
    - Salary: $180k - $250k
    - Company: Mid-size to large tech companies, prefer companies with strong engineering culture
    - Experience Level: 7+ years
    - Industry: SaaS, Cloud Infrastructure, or Developer Tools
    - Work Authorization: US Citizen
    - Priorities: Technical challenges, work-life balance, remote flexibility, equity upside
    - Deal-breakers: No pure management roles, no strict return-to-office policies
    """

    # Define the agents in a list
    agents = [
        Agent(
            agent_name="Sarah-Requirements-Analyzer",
            agent_description="Analyzes user requirements and creates optimized job search queries.",
            system_prompt=REQUIREMENTS_ANALYZER_PROMPT,
            model_name="gpt-4.1",
            max_loops=1,
            temperature=0.7,
        ),
        Agent(
            agent_name="David-Search-Executor",
            agent_description="Executes job searches and analyzes results for relevance.",
            system_prompt=SEARCH_EXECUTOR_PROMPT,
            model_name="gpt-4.1",
            max_loops=1,
            temperature=0.7,
            tools=[get_jobs],
        ),
        Agent(
            agent_name="Lisa-Results-Curator",
            agent_description="Curates and presents job results for user decision-making.",
            system_prompt=RESULTS_CURATOR_PROMPT,
            model_name="gpt-4.1",
            max_loops=1,
            temperature=0.7,
        ),
    ]

    # Set up the SequentialWorkflow pipeline
    workflow = SequentialWorkflow(
        name="job-search-sequential-workflow",
        agents=agents,
        max_loops=1,
        team_awareness=True,
    )

    workflow.run(user_requirements)


if __name__ == "__main__":
    main()
```
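To see what `get_jobs` hands back to the Search Executor without spending API calls, you can run its field-extraction logic over a mocked JSearch-style payload. The sample record below is fabricated for illustration; only the field names mirror the ones `get_jobs` reads:

```python
import json

# Hypothetical JSearch-style payload, trimmed to the fields get_jobs reads.
sample_response = {
    "data": [
        {
            "job_title": "Senior Software Engineer",
            "employer_name": "Acme Cloud",
            "job_city": "San Francisco",
            "job_country": "US",
            "job_required_experience": {"required_experience_in_months": 84},
            "job_apply_link": "https://example.com/apply/123",
        }
    ]
}

# The same field extraction get_jobs performs before returning JSON.
jobs_list = [
    {
        "title": job.get("job_title"),
        "company": job.get("employer_name"),
        "location": job.get("job_city") or job.get("job_country"),
        "experience": job.get("job_required_experience", {}).get("required_experience_in_months"),
        "url": job.get("job_apply_link"),
    }
    for job in sample_response.get("data", [])
]
print(json.dumps(jobs_list, indent=2))
```

Each job becomes a flat dict with `title`, `company`, `location`, `experience` (in months), and `url`, which is what the Search Executor agent scores and categorizes.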
Upon execution, the swarm will:

1. Analyze the provided `user_requirements`.
2. Generate a search query and execute it with the `get_jobs` tool.
3. Curate and present the results in a structured format, including top recommendations and a prompt for user feedback.

The output is printed to the console, showing the progression of the agents through each phase of the job search.
## Workflow Stages

The workflow processes the job search in sequential phases:

1. **Phase 1: Analyze Requirements**: The `Sarah-Requirements-Analyzer` agent processes the user's input to extract job criteria and generate optimized search queries.
2. **Phase 2: Execute Searches**: The `David-Search-Executor` agent takes the generated queries, uses the `get_jobs` tool to find job listings, and analyzes their relevance against the user's requirements.
3. **Phase 3: Curate Results**: The `Lisa-Results-Curator` agent reviews, filters, and organizes the search results, presenting top recommendations and asking for user feedback to guide further iterations.
4. **Phase 4: Get User Feedback**: In a full implementation, this stage gathers explicit user feedback to determine whether the search should be refined or concluded. The example above runs a single pass; an interactive application would loop through these phases until the search is concluded or a maximum number of iterations is reached.
371+
## Customization
372+
373+
You can customize the `JobSearchSwarm` by modifying the `JobSearchSwarm` class parameters or the agents' prompts:
374+
375+
* **`name` and `description`**: Customize the swarm's identity.
376+
* **`user_name`**: Define the name of the user interacting with the swarm.
377+
* **`output_type`**: Specify the desired output format for the conversation history (e.g., "json" or "list").
378+
* **`max_loops`**: Control the number of internal reasoning iterations each agent performs (set during agent initialization).
379+
* **`system_prompt`**: Modify the `REQUIREMENTS_ANALYZER_PROMPT`, `SEARCH_EXECUTOR_PROMPT`, and `RESULTS_CURATOR_PROMPT` to refine agent behavior and output.
380+
* **`max_iterations`**: Limit the total number of search cycles the swarm performs.
381+
## Best Practices

To get the most out of the Job Finding Swarm:

* **Provide Clear Requirements**: Start with a detailed and unambiguous `user_requirements` input to help the Requirements Analyzer generate effective queries.
* **Iterate and Refine**: In a live application, leverage the user feedback loop to continuously refine search criteria and improve result relevance.
* **Monitor Agent Outputs**: Regularly review the outputs from each agent to ensure they are performing as expected and to identify areas for prompt improvement.
* **Manage API Usage**: Be mindful of your RapidAPI and OpenAI usage, especially when experimenting with repeated searches or a large number of queries.
## Limitations

* **Prompt Engineering Dependency**: The quality of the search results depends heavily on the clarity and effectiveness of the agents' `system_prompt`s and the initial user input.
* **Search Scope**: The `get_jobs` tool's effectiveness is tied to the coverage and freshness of the JSearch API's job listings.
* **Iteration Control**: The current `end` method in `examples/demos/apps/job_finding.py` is simplified for demonstration. A robust production system would require a more sophisticated user interaction mechanism to determine when to stop or refine the search.
* **Verification Needed**: All AI-generated outputs, including job matches and summaries, should be independently verified by the user.
