Commit d54b07f

Merge pull request #143 from stevo1403/fix/dashboard-ui-profile-from-main
Fix dashboard ui profile build by restoring dashboard sources
2 parents afc7db1 + 14aa313

32 files changed: +10479 −1 lines
README.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -195,6 +195,13 @@ Optional: include the dashboard service profile:
 docker compose --profile ui up --build -d
 ```
+
+Using Doppler-managed config (recommended for hosted dashboard/API URLs):
+
+```bash
+cd OpenMemory
+tools/ops/compose_with_doppler.sh up -d --build
+```
 
 Check service status:
 
 ```bash
````

dashboard/.env.local.example

Lines changed: 4 additions & 0 deletions

```bash
# OpenMemory Dashboard Configuration
NEXT_PUBLIC_API_URL=http://localhost:8080
# Set this if your backend has OM_API_KEY configured for authentication
NEXT_PUBLIC_API_KEY=your
```

dashboard/CHAT_SETUP.md

Lines changed: 159 additions & 0 deletions

# Chat Interface Setup

The chat interface is now connected to the OpenMemory backend and can query memories in real-time.

## Features

- **Memory Querying**: searches your memory database for relevant content
- **Salience-based Results**: shows top memories ranked by relevance
- **Memory Reinforcement**: click the + button to boost memory importance
- **Real-time Updates**: live connection to the backend API
- **Action Buttons**: quick actions after assistant responses

## Setup Instructions

### 1. Start the Backend

First, make sure the OpenMemory backend is running:

```bash
cd backend
npm install
npm run dev
```

The backend will start on `http://localhost:8080`.

### 2. Configure Environment (Optional)

The dashboard is pre-configured to connect to `localhost:8080`. If your backend runs on a different port, create a `.env.local` file:

```bash
# dashboard/.env.local
NEXT_PUBLIC_API_URL=http://localhost:8080
```

### 3. Start the Dashboard

```bash
cd dashboard
npm install
npm run dev
```

The dashboard will start on `http://localhost:3000`.

### 4. Add Some Memories

Before chatting, you need to add some memories to your database. You can do this via:

**Option A: API (Recommended for Testing)**

```bash
curl -X POST http://localhost:8080/memory/add \
  -H "Content-Type: application/json" \
  -d '{
    "content": "JavaScript async/await makes asynchronous code more readable",
    "tags": ["javascript", "async"],
    "metadata": {"source": "learning"}
  }'
```

**Option B: Use the SDK**

```javascript
// examples/js-sdk/basic-usage.js
import OpenMemory from '../../sdk-js/src/index.js';

const om = new OpenMemory('http://localhost:8080');

await om.addMemory({
  content: 'React hooks revolutionized state management',
  tags: ['react', 'hooks'],
});
```

**Option C: Ingest a Document**

```bash
curl -X POST http://localhost:8080/memory/ingest \
  -H "Content-Type: application/json" \
  -d '{
    "content_type": "text",
    "data": "Your document content here...",
    "metadata": {"source": "document"}
  }'
```

## How It Works

### Memory Query Flow

1. **User Input**: you ask a question in the chat
2. **Backend Query**: the dashboard POSTs to `/memory/query` with your question
3. **Vector Search**: the backend searches the HSG memory graph
4. **Results**: the top 5 memories are returned with salience scores
5. **Response**: the chat generates an answer based on the retrieved memories
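Step 4 above boils down to a sort-and-slice over salience scores. A minimal sketch (the `salience` field name follows this document; the actual response shape returned by `/memory/query` may differ):

```javascript
// Sketch of the ranking step: given query results, keep the top N
// memories by salience score, highest first. Field names are
// illustrative, based on the salience scores described above.
function topMemories(results, limit = 5) {
  return [...results]
    .sort((a, b) => b.salience - a.salience)
    .slice(0, limit);
}

const results = [
  { content: 'async/await', salience: 0.4 },
  { content: 'react hooks', salience: 0.9 },
  { content: 'closures', salience: 0.5 },
];
console.log(topMemories(results, 2).map((m) => m.content));
// → [ 'react hooks', 'closures' ]
```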
### Memory Reinforcement

Clicking the **+** button on a memory card:

- Sends a POST to `/memory/reinforce`
- Increases the memory's salience by 0.1
- Makes it more likely to appear in future queries
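The update itself is a small additive bump. A minimal sketch of that step (the 0.1 increment is from the list above; capping salience at 1.0 is an assumption for illustration, not confirmed backend behavior):

```javascript
// Sketch of the reinforcement update: each boost adds 0.1 to the
// memory's salience. The cap at 1.0 is an assumption, not confirmed
// behavior of the backend.
function reinforce(memory, boost = 0.1) {
  return { ...memory, salience: Math.min(1.0, memory.salience + boost) };
}

const m = { id: 'abc', salience: 0.5 };
console.log(reinforce(m).salience); // → 0.6
```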
## Current Features

- ✅ Real-time memory querying
- ✅ Salience-based ranking
- ✅ Memory reinforcement (boost)
- ✅ Sector classification display
- ✅ Error handling with backend status

## Coming Soon

- 🚧 LLM integration (OpenAI, Ollama, Gemini)
- 🚧 Conversation memory persistence
- 🚧 Export chat to memories
- 🚧 WebSocket streaming responses
- 🚧 Quiz generation from memories
- 🚧 Podcast script generation

## Troubleshooting

### "Failed to query memories"

- Ensure the backend is running: `npm run dev` in `backend/`
- Check the backend is on port 8080: `curl http://localhost:8080/health`
- Verify CORS is enabled (already configured)

### "No memories found"

- Add memories using the API or SDK (see setup above)
- Try broader search terms
- Check that memory content exists: `GET http://localhost:8080/memory/all`

### Connection refused

- Backend not started
- Wrong port in `.env.local`
- Firewall blocking the connection

## API Endpoints Used

```typescript
POST /memory/query      // Search memories
POST /memory/add        // Add new memory
POST /memory/reinforce  // Boost memory salience
GET  /memory/all        // List all memories
GET  /memory/:id        // Get specific memory
```

## Next Steps

1. Add LLM integration for intelligent responses
2. Implement conversation memory storage
3. Add streaming response support
4. Create memory export feature
5. Build quiz/podcast generators

dashboard/Dockerfile

Lines changed: 12 additions & 1 deletion

```diff
@@ -3,6 +3,12 @@ FROM node:20-alpine AS builder
 
 WORKDIR /app
 
+# Build-time public vars for Next.js client bundle
+ARG NEXT_PUBLIC_API_URL=http://localhost:8080
+ARG NEXT_PUBLIC_API_KEY=
+ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
+ENV NEXT_PUBLIC_API_KEY=${NEXT_PUBLIC_API_KEY}
+
 # Install dependencies
 COPY package*.json ./
 RUN npm install
@@ -18,14 +24,19 @@ FROM node:20-alpine AS production
 
 WORKDIR /app
 
+ARG NEXT_PUBLIC_API_URL=http://localhost:8080
+ARG NEXT_PUBLIC_API_KEY=
+ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
+ENV NEXT_PUBLIC_API_KEY=${NEXT_PUBLIC_API_KEY}
+
 # Install only production dependencies
 COPY package*.json ./
 RUN npm install --omit=dev
 
 # Copy built assets from builder
 COPY --from=builder /app/.next ./.next
 COPY --from=builder /app/public ./public
-COPY --from=builder /app/next.config.ts ./next.config.ts
+COPY --from=builder /app/next.config.js ./next.config.js
 
 # Create a dedicated non-root user for security
 RUN addgroup -g 1001 -S nodejs \
```
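Because `NEXT_PUBLIC_*` values are inlined into the Next.js client bundle at build time, the `ARG` defaults above can be overridden per build. A sketch of the invocation (the image tag and URL are illustrative, not taken from this repo's tooling):

```shell
# NEXT_PUBLIC_* vars are baked into the client bundle when `next build`
# runs, so pass them as --build-arg rather than runtime -e flags.
# Tag and URL below are illustrative.
docker build \
  --build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
  --build-arg NEXT_PUBLIC_API_KEY="$OM_API_KEY" \
  -t openmemory-dashboard \
  dashboard/
```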

dashboard/README.md

Lines changed: 36 additions & 0 deletions

This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
Lines changed: 109 additions & 0 deletions

```typescript
import { NextResponse } from 'next/server'
import fs from 'fs'
import path from 'path'

const ENV_PATH = path.resolve(process.cwd(), '../.env')

function parseEnvFile(content: string): Record<string, string> {
  const result: Record<string, string> = {}
  const lines = content.split('\n')

  for (const line of lines) {
    const trimmed = line.trim()
    if (!trimmed || trimmed.startsWith('#')) continue

    const equalIndex = trimmed.indexOf('=')
    if (equalIndex === -1) continue

    const key = trimmed.substring(0, equalIndex).trim()
    const value = trimmed.substring(equalIndex + 1).trim()
    result[key] = value
  }

  return result
}

function serializeEnvFile(updates: Record<string, string>): string {
  const lines: string[] = []

  for (const [key, value] of Object.entries(updates)) {
    lines.push(`${key}=${value}`)
  }

  return lines.join('\n')
}

export async function GET() {
  try {
    if (!fs.existsSync(ENV_PATH)) {
      return NextResponse.json({
        exists: false,
        settings: {}
      })
    }

    const content = fs.readFileSync(ENV_PATH, 'utf-8')
    const settings = parseEnvFile(content)

    const masked = { ...settings }
    if (masked.OPENAI_API_KEY) masked.OPENAI_API_KEY = '***'
    if (masked.GEMINI_API_KEY) masked.GEMINI_API_KEY = '***'
    if (masked.AWS_SECRET_ACCESS_KEY) masked.AWS_SECRET_ACCESS_KEY = '***'
    if (masked.OM_API_KEY) masked.OM_API_KEY = '***'

    return NextResponse.json({
      exists: true,
      settings: masked
    })
  } catch (e: any) {
    console.error('[Settings API] read error:', e)
    return NextResponse.json(
      { error: 'internal', message: e.message },
      { status: 500 }
    )
  }
}

export async function POST(request: Request) {
  try {
    const updates = await request.json()

    if (!updates || typeof updates !== 'object') {
      return NextResponse.json(
        { error: 'invalid_body' },
        { status: 400 }
      )
    }

    let content = ''
    let envExists = false

    if (fs.existsSync(ENV_PATH)) {
      content = fs.readFileSync(ENV_PATH, 'utf-8')
      envExists = true
    } else {
      const examplePath = path.resolve(process.cwd(), '../.env.example')
      if (fs.existsSync(examplePath)) {
        content = fs.readFileSync(examplePath, 'utf-8')
      }
    }

    const existing = content ? parseEnvFile(content) : {}
    const merged = { ...existing, ...updates }
    const newContent = serializeEnvFile(merged)

    fs.writeFileSync(ENV_PATH, newContent, 'utf-8')

    return NextResponse.json({
      ok: true,
      created: !envExists,
      message: 'Settings saved. Restart the backend to apply changes.'
    })
  } catch (e: any) {
    console.error('[Settings API] write error:', e)
    return NextResponse.json(
      { error: 'internal', message: e.message },
      { status: 500 }
    )
  }
}
```
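The merge semantics of the POST handler can be exercised in isolation: posted keys override existing ones, and comments and blank lines are dropped on rewrite, since `serializeEnvFile` only emits `key=value` pairs. A JS restatement of the two helpers above:

```javascript
// JS restatement of the route's parseEnvFile/serializeEnvFile helpers,
// illustrating the POST merge: posted values win, comments are lost.
function parseEnvFile(content) {
  const result = {};
  for (const line of content.split('\n')) {
    const trimmed = line.trim();
    // Skip blank lines and comments, same as the route handler.
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return result;
}

function serializeEnvFile(updates) {
  return Object.entries(updates)
    .map(([key, value]) => `${key}=${value}`)
    .join('\n');
}

const existing = parseEnvFile('# comment\nOM_PORT=8080\nOM_API_KEY=old');
const merged = { ...existing, OM_API_KEY: 'new' };
console.log(serializeEnvFile(merged));
// → OM_PORT=8080
// → OM_API_KEY=new
```

One consequence of this round-trip worth noting: any comments a user added by hand to `.env` disappear the first time settings are saved through the dashboard.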
