# multimodal_processing.yaml
# Multimodal Processing Pipeline
# Demonstrates image, audio, and video processing capabilities

id: multimodal_processing
name: Multimodal Content Processing Pipeline
description: Process various media types with AI-powered analysis
version: "1.0.0"

parameters:
  input_image:
    type: string
    default: "samples/test_image.jpg"
  input_audio:
    type: string
    default: "samples/test_speech.wav"
  input_video:
    type: string
    default: "samples/test_video_real.mp4"
  output_dir:
    type: string
    default: ""
steps:
  # Image Processing Section
  - id: analyze_image
    tool: image-analysis
    action: execute
    parameters:
      image: "{{ parameters.input_image }}"
      analysis_type: "describe"
      detail_level: "high"
      output_format: "json"

  - id: detect_objects
    tool: image-analysis
    action: execute
    parameters:
      image: "{{ parameters.input_image }}"
      analysis_type: "detect_objects"
      confidence_threshold: 0.7
      prompt_suffix: "List objects directly without conversational language. Use bullet points."
    dependencies:
      - analyze_image

  - id: generate_variations
    tool: image-generation
    action: execute
    parameters:
      prompt: "A colorful abstract geometric design with rectangles and frames on a gradient background, modern digital art style"
      size: "1024x1024"
      style: "vivid"
      num_images: 3
      output_format: "file"
      output_path: "{{ output_path }}/generated_images"
    dependencies:
      - analyze_image
  # Audio Processing Section
  - id: transcribe_audio
    tool: audio-processing
    action: execute
    parameters:
      audio: "{{ parameters.input_audio }}"
      operation: "transcribe"
      language: "en"

  - id: analyze_audio
    tool: audio-processing
    action: execute
    parameters:
      audio: "{{ parameters.input_audio }}"
      operation: "analyze"
    dependencies:
      - transcribe_audio
  # Video Processing Section
  - id: analyze_video
    tool: video-processing
    action: execute
    parameters:
      video: "{{ parameters.input_video }}"
      operation: "analyze"

  - id: extract_key_frames
    tool: video-processing
    action: execute
    parameters:
      video: "{{ parameters.input_video }}"
      operation: "extract_frames"
      frame_interval: 0.5
      output_path: "{{ output_path }}/video_frames"
    dependencies:
      - analyze_video

  - id: analyze_key_frames
    tool: image-analysis
    action: execute
    parameters:
      image: "{{ extract_key_frames.frames[0] }}"
      analysis_type: "describe"
      detail_level: "medium"
    dependencies:
      - extract_key_frames
    condition: "{{ extract_key_frames.frames | length > 0 }}"

  # Copy original image for report
  - id: copy_original_image
    tool: filesystem
    action: copy
    parameters:
      path: "{{ parameters.input_image }}"
      destination: "{{ output_path }}/test_image.jpg"
    dependencies:
      - analyze_key_frames
  # Combined Analysis
  - id: generate_summary_report
    tool: filesystem
    action: write
    parameters:
      path: "{{ output_path }}/analysis_report.md"
      content: |
        # Multimodal Analysis Results

        ## 📸 Image Analysis

        ### Original Image
        ![Original Image](test_image.jpg)
        ### Description
        {{ analyze_image.analysis.result }}

        ### Detected Objects
        {% set objects_text = detect_objects.analysis.result %}
        {% set lines = objects_text.split('\n') %}
        {% for line in lines %}
        {% if line and not 'I can identify' in line and not 'In this image' in line and not 'appears to be' in line %}
        {{ line }}
        {% endif %}
        {% endfor %}

        ### Generated Variations
        {% if generate_variations.success and generate_variations.images %}
        Created {{ generate_variations.images | length }} artistic variations using DALL-E 3:
        {% for image in generate_variations.images %}
        ![Variation {{ loop.index }}]({{ image }})
        {% endfor %}
        {% else %}
        *Image generation was not successful or no images were generated.*
        {% endif %}

        ## 🎵 Audio Analysis

        ### File Information
        - **File**: `{{ parameters.input_audio }}`
        - **Format**: {{ analyze_audio.analysis.format }}
        - **Duration**: {{ analyze_audio.analysis.duration }} seconds
        - **Sample Rate**: {{ analyze_audio.analysis.sample_rate }} Hz
        - **Channels**: {{ analyze_audio.analysis.channels }}

        ### Transcription
        > "{{ transcribe_audio.transcription }}"

        ### Audio Characteristics
        - **Volume Level**: {{ analyze_audio.analysis.analysis.volume_level }}
        - **Noise Level**: {{ analyze_audio.analysis.analysis.noise_level }}
        - **Tempo**: {{ analyze_audio.analysis.analysis.tempo_bpm }} BPM
        - **Peak Amplitude**: {{ analyze_audio.analysis.analysis.peak_amplitude | round(4) }}
        - **RMS Energy**: {{ analyze_audio.analysis.analysis.rms_energy | round(4) }}

        ### Spectral Analysis
        - **Spectral Centroid**: {{ analyze_audio.analysis.analysis.spectral_centroid_hz | round(2) }} Hz
        - **Spectral Rolloff**: {{ analyze_audio.analysis.analysis.spectral_rolloff_hz | round(2) }} Hz
        - **Spectral Bandwidth**: {{ analyze_audio.analysis.analysis.spectral_bandwidth_hz | round(2) }} Hz
        - **Zero Crossing Rate**: {{ analyze_audio.analysis.analysis.zero_crossing_rate | round(6) }}

        ## 🎬 Video Analysis

        ### Video Information
        - **File**: `{{ parameters.input_video }}`
        - **Duration**: {{ analyze_video.analysis.video_info.duration }} seconds
        - **Resolution**: {{ analyze_video.analysis.video_info.resolution }}
        - **Frame Rate**: {{ analyze_video.analysis.video_info.fps }} FPS
        - **Total Frames**: {{ (analyze_video.analysis.video_info.duration * analyze_video.analysis.video_info.fps) | int }}

        ### Content Analysis
        {{ analyze_video.analysis.summary }}

        ### Scene Detection
        - **Total Scene Changes**: {{ analyze_video.analysis.scene_changes | length }}
        - **Scene Change Timestamps**: {{ analyze_video.analysis.scene_changes | join(', ') }} seconds
        - **Detected Objects**: {{ analyze_video.analysis.detected_objects | join(', ') }}
        - **Dominant Colors**: {{ analyze_video.analysis.dominant_colors | join(', ') }}

        ### Extracted Key Frames

        #### Frame at 0.0s

        #### Frame at 0.5s

        #### Frame at 1.0s

        #### Frame at 1.5s

        #### Frame at 2.0s

        #### Frame at 2.5s

        ### Frame Analysis
        {% set frame_text = analyze_key_frames.analysis.result %}
        {% set frame_text = frame_text | regex_replace('This image shows ', '') %}
        {% set frame_text = frame_text | regex_replace('The image shows ', '') %}
        {% set frame_text = frame_text | regex_replace('The overall composition is ', 'Overall composition: ') %}
        {{ frame_text }}

        ## 📊 Processing Summary
        - **Total Media Files Processed**: 3 (1 image, 1 audio, 1 video)
        - **Generated Images**: {{ generate_variations.images | length }}
        - **Extracted Video Frames**: {{ extract_key_frames.frames | length }}
        - **Status**: Completed successfully

        ---
        *Report generated on {{ timestamp }}*
    dependencies:
      - generate_variations
      - analyze_audio
      - analyze_key_frames
      - copy_original_image

outputs:
  image_analysis: "{{ analyze_image.analysis }}"
  audio_transcription: "{{ transcribe_audio.transcription }}"
  video_summary: "{{ analyze_video.analysis.summary }}"
  generated_images: "{{ generate_variations.images }}"
  report_location: "{{ parameters.output_dir }}/analysis_report.md"