Please note - out of frustration with not being able to figure out my Wyze cam v3 issues, I decided to give Codex access to my cams today, along with the source code of Thingino, to try and determine if it could find anything concerning that causes my issues. This is what it wanted me to provide. Hopefully this isn't a wild goose chase, it really does seem like there's something wrong.
Summary
I have two Wyze Cam v3 units running Thingino on Ingenic T31.
One unit is a relatively healthy reference:
- `192.168.1.72` ("front"), main stream running at 25 fps

The other unit is the failing one:
- `192.168.1.74` ("patio"), main stream running at 30 fps
The strongest conclusion from read-only testing is that the patio problem is upstream of Frigate and upstream of go2rtc. The failing camera is emitting a materially worse RTSP/H.264 stream at the camera itself.
The strongest camera-local signal is this runtime log on the patio unit:
```
ispcore: irq-status 0x00000600, err is 0x200,0x3f8,084c is 0x0
```
At the same time, the patio stream shows:
- direct RTSP reports `30 fps` but only `15 tbr`
- `Frame num gap` during direct probing
- stronger live decode corruption than the front camera
- RTSP sender backlog on the camera side
- go2rtc can connect, but downstream often cannot recover usable video dimensions from the patio restream
My current best explanation is:
- the patio unit has a real T31 ISP/capture instability on its active `30 fps` path
- the shipped stable `prudynt` branch then amplifies that instability into a malformed RTSP stream because of small blocking queueing and sender-side timestamp behavior
Environment
- Hardware: Wyze Cam v3
- SoC: Ingenic T31
- Firmware family: Thingino
- RTSP server: `prudynt`
- NVR stack used during testing: Frigate + go2rtc
Reproduction / Comparison Setup
I compared the two cameras side by side using:
- direct SSH inspection of runtime state and logs
- direct `ffprobe`/`ffmpeg` against each camera RTSP URL
- go2rtc API and restream probing
- source inspection of Thingino packaging and `prudynt-t`
I did not change settings, restart services, or flash firmware during the main investigation.
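As a sketch of the direct-probe step, this is roughly how the fps/tbr mismatch can be flagged from an `ffprobe`/`ffmpeg` stream summary line. The sample line and the RTSP URL in the comment are illustrative, not the exact camera output:

```python
import re

# Parse the "fps" and "tbr" fields out of an ffmpeg/ffprobe stream summary
# line. The SAMPLE line below is illustrative; the real lines come from a
# direct probe such as: ffprobe -hide_banner rtsp://<camera-ip>:554/ch0
SAMPLE = "Stream #0:0: Video: h264, yuvj420p, 1920x1080, 30 fps, 15 tbr, 90k tbn"

def fps_tbr(line: str):
    """Return (fps, tbr) parsed from a stream summary line, or None."""
    m = re.search(r"([\d.]+)\s*fps,\s*([\d.]+k?)\s*tbr", line)
    if not m:
        return None
    return m.group(1), m.group(2)

fps, tbr = fps_tbr(SAMPLE)
print(fps, tbr)        # 30 15
print(fps != tbr)      # True -> the patio-style timing mismatch
```

On the front camera the same check yields `25 fps, 25 tbr` and the comparison is clean.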
POST-RESTART VALIDATION (March 8, 2026)
Both cameras were manually restarted to see if a clean state resolved the issues.
Summary of post-restart state (11m uptime):
- Logs: The `ispcore: irq-status 0x00000600` error did not immediately reappear on the patio unit after 11 minutes of uptime.
- Network: All `Send-Q` backlogs were cleared (reset to 0).
- Timing mismatch (persistent): Despite the restart and clean logs, the patio camera still reports a timing mismatch: `30 fps / 15 tbr`.
- Stream integrity: Both cameras failed strict `ffmpeg` decode checks immediately after restart (returning exit code 183), suggesting the underlying H.264 stream instability is present even when the kernel isn't yet reporting ISP errors.
This confirms that the 30 fps / 15 tbr mismatch and stream corruption are independent of the long-term uptime ISP error, though that error likely exacerbates the degradation over time.
Key Runtime Differences Between The Two Cameras
Front reference camera
- saved main-stream config: `stream0.fps=25`, `gop=25`
- live sensor / ISP path reports `25`
- direct RTSP reports `1920x1080, 25 fps, 25 tbr, 90k tbn`
- no matching `ispcore: irq-status 0x00000600` log seen
- RTSP sessions do not build the same send backlog
- go2rtc restream remains usable as normal `1920x1080` video
Patio failing camera
- saved main-stream config: `stream0.fps=30`, `gop=30`
- live sensor / ISP path reports `30`
- direct RTSP reports `1920x1080, 30 fps, 15 tbr, 90k tbn`
- direct RTSP probing prints `Frame num gap`
- kernel/runtime logs contain: `ispcore: irq-status 0x00000600, err is 0x200,0x3f8,084c is 0x0`
- live decode is noticeably worse and also shows RTSP/session trouble
- one active RTSP connection on the camera showed `Send-Q 4022`
- go2rtc can stay attached, but downstream `ffprobe` often sees:

```
Could not find codec parameters for stream 0 ... unspecified size
Video: h264, none, 90k tbr, 90k tbn
```
Direct Camera-Side Evidence
Patio direct RTSP is internally inconsistent
The patio unit is not merely configured to 30 fps in a JSON file. Its live runtime state also reports a true 30 fps path.
But direct probing of the patio stream shows:
- `30 fps`
- only `15 tbr`
- `Frame num gap`
That looks like a real timing / access-unit pacing problem on the camera stream itself, not just an NVR ingest issue.
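To make the mismatch concrete, here is a toy model (assumptions only, not ffmpeg's actual estimator) of why a stream delivering 30 frames per second can still probe as `15 tbr`: tbr is derived from observed timestamp deltas, so if presentation timestamps only advance in 1/15 s steps the guessed rate halves even though frames keep arriving at 30 fps:

```python
# Illustrative model of rate guessing from timestamps (not ffmpeg code).
def estimate_rate(timestamps):
    """Guess a frame rate from the most common nonzero inter-frame delta."""
    deltas = [round(b - a, 6) for a, b in zip(timestamps, timestamps[1:])]
    nonzero = [d for d in deltas if d > 0]
    most_common = max(set(nonzero), key=nonzero.count)
    return round(1.0 / most_common)

# healthy: every frame gets its own 1/30 s timestamp
healthy = [i / 30.0 for i in range(60)]
# degraded: frames arrive at 30 fps but timestamps advance in 1/15 s steps
degraded = [(i // 2) / 15.0 for i in range(60)]

print(estimate_rate(healthy))    # 30
print(estimate_rate(degraded))   # 15
```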
Patio direct decode is materially worse
Direct ffmpeg decode of the patio stream shows stronger corruption than the front reference unit.
Representative patio-side errors included:
```
error while decoding MB 103 62, bytestream -11
CSeq 9 expected, 0 received.
Invalid data found when processing input
```
The front camera is not perfectly pristine in every sample, but it still behaves like a normal usable 1080p stream end-to-end. The patio stream is clearly worse.
Camera-side RTSP send backlog appears on patio
On the patio camera I observed an active RTSP connection with Send-Q 4022, while front-side sessions were not showing the same backlog.
That fits a camera that is falling behind while serving RTSP because the encoded stream or its timing is already unstable.
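For reference, the backlog figure came from a netstat/ss-style socket listing on the camera; the `Send-Q` column is bytes the kernel has queued for the peer but not yet delivered. A minimal parsing sketch (the sample line and addresses below are placeholders, not the exact listing):

```python
# Extract the Send-Q column from a netstat-style line. The line here is
# illustrative; on the camera it came from the live socket table.
LINE = "tcp   0  4022  192.168.1.74:554  192.168.1.10:49152  ESTABLISHED"

def send_q(line: str) -> int:
    """Return the Send-Q field (bytes queued, not yet acked by the client)."""
    fields = line.split()
    return int(fields[2])  # netstat column order: Proto Recv-Q Send-Q ...

backlog = send_q(LINE)
print(backlog)        # 4022
print(backlog > 0)    # the RTSP sender is falling behind this client
```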
Why I Think This Is Upstream Of Frigate / go2rtc
I first noticed this through Frigate, but the stronger evidence came from direct camera probing:
- direct RTSP from the patio camera already shows the malformed `30 fps / 15 tbr` behavior
- direct patio decode already shows corruption before the restream layer
- the patio kernel/runtime log already shows the `ispcore` fault locally on the camera
So even though Frigate/go2rtc are where the failure becomes obvious operationally, the fault appears to originate earlier in the camera firmware / ISP / encoder path.
Source-Level Findings That May Matter
Thingino appears to package prudynt-t stable, not current master
From the current Thingino packaging:
`package/prudynt-t/prudynt-t.mk` pins `prudynt-t` to stable commit `d8e97072b6e45fece965ee6f4954ce9d0874f4fb`.
My local standalone prudynt checkout was a newer master, but the deployed behavior on the cameras looks much closer to that packaged stable branch.
That matters because current upstream master already contains newer timestamp/audio work that may not be present in the shipped camera build.
Stable prudynt behavior that seems relevant
In the packaged stable branch:
- `stream0.fps` is used to drive the sensor / ISP side FPS
- video is queued through a very small blocking queue (`MSG_CHANNEL_SIZE 20`)
- RTSP delivery timestamps are based on delivery time in `IMPDeviceSource`, not preserved encoder timestamps

Relevant files in the packaged stable branch:
- `src/IMPSystem.cpp`
- `src/VideoWorker.cpp`
- `src/MsgChannel.hpp`
- `src/IMPDeviceSource.cpp`
- `src/globals.hpp`
This seems relevant because it gives a plausible explanation for the full failure pattern:
- patio's true 30 fps path becomes unstable
- once sender-side backpressure begins, the stable branch has very little headroom
- client-visible timing then degrades into `30 fps / 15 tbr`, frame gaps, and a backed-up RTSP stream instead of recovering cleanly
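The "very little headroom" point can be sketched with a toy simulation (my assumptions, not prudynt's actual code): a producer pushes frames at 30 fps into a small bounded queue with a `MSG_CHANNEL_SIZE`-like capacity of 20, while the sender drains at only 15 frames/s once backpressure starts. The queue fills within about a second and roughly every other frame is lost from then on:

```python
from collections import deque

def simulate(seconds=10, produce_fps=30, drain_fps=15, capacity=20):
    """Toy bounded-queue model: returns (delivered, dropped) frame counts."""
    queue = deque()
    dropped = delivered = 0
    for tick in range(seconds * produce_fps):   # one tick per produced frame
        if len(queue) >= capacity:
            dropped += 1                        # no headroom: frame is lost
        else:
            queue.append(tick)
        # the stalled sender only drains at half the production rate
        if tick % (produce_fps // drain_fps) == 0 and queue:
            queue.popleft()
            delivered += 1
    return delivered, dropped

delivered, dropped = simulate()
print(delivered, dropped)   # 150 frames in 10 s (~15 fps delivered), 130 lost
```

With only 20 slots of buffer, the delivered rate collapses to the drain rate almost immediately instead of riding out a transient stall.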
Current upstream appears to have already improved some of these areas
Current master has newer work including:
- unified timestamp management
- explicit RTP presentation-time propagation
- shared audio/video RTP timestamp base
- documented Opus timestamp fixes
I am not claiming that these changes alone would fix the patio issue, because the patio unit also appears to have a real ISP/runtime fault. But they do seem relevant to why the shipped stable branch turns that fault into such a broken RTSP stream.
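To make the timestamp-source point concrete, here is a toy contrast (assumptions only, not code from either prudynt branch) between stamping packets with wall-clock delivery time and preserving encoder PTS. When the sender backs up and then flushes, delivery times bunch together, while preserved encoder timestamps keep a clean 1/30 s cadence:

```python
# Toy contrast: encoder PTS vs. delivery-time stamping under a sender stall.
ENCODE_PTS = [i / 30.0 for i in range(6)]            # clean 30 fps capture
# sender stalls for one frame interval, then flushes two frames back-to-back
DELIVERY = [0.0, 1/30, 2/30, 4/30, 4/30 + 0.001, 5/30]

def spacing(ts):
    """Inter-packet timestamp deltas, rounded to milliseconds."""
    return [round(b - a, 3) for a, b in zip(ts, ts[1:])]

print(spacing(ENCODE_PTS))   # uniform: [0.033, 0.033, 0.033, 0.033, 0.033]
print(spacing(DELIVERY))     # irregular once the backlog flushes
```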
Vendor Kernel / ISP Code Visibility
I specifically looked into whether the T31 ISP code is available.
What is public:
- Thingino kernel/build wiring for T31
- public Ingenic SDK wrapper/debug code for T31 ISP
- T31 ISP build recipe in the public SDK
What is not public as normal C source:
- the actual T31 ISP core is linked from a blob/archive in the public SDK: `4.4/sdk/t31/1.1.5.2/libt31-firmware-540.a`
That means the T31 ISP path is only partially open-source.
However, that blob still contains useful symbols/strings, including the exact patio log format:
`ispcore: irq-status 0x%08x, err is 0x%x,0x%x,084c is 0x%x`
It also contains nearby VIC/frame-channel/overflow-related strings such as:
```
Err [VIC_INT] : frame asfifo ovf!!!!!
Err [VIC_INT] : dma syfifo ovf!!!
Err [VIC_INT] : image syfifo ovf !!!
Err [VIC_INT] : mipi fid asfifo ovf!!!
Err [VIC_INT] : dma arb trans done ovf!!!
Err [VIC_INT] : dma chid ovf !!!
```
And symbol names such as:
`ispcore_interrupt_service_routine`, `isp_irq_handle`, `isp_irq_thread_handle`, `isp_vic_interrupt_service_routine`, `vic_framedone_irq_function`, `vic_mdma_irq_function`, `isp_overflow`
So even without the full source, it is possible to say with high confidence that this patio log is coming from the vendor ISP core itself.
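The strings above were recovered with a `strings`-style scan of the archive. A minimal sketch of that scan (the blob bytes below are synthetic; on the real archive you would read the file contents instead):

```python
import re

# Mini `strings`: find printable-ASCII runs in a binary blob. The sample
# blob is synthetic; the real input was the SDK archive
# 4.4/sdk/t31/1.1.5.2/libt31-firmware-540.a read as bytes.
def printable_strings(blob: bytes, min_len: int = 8):
    """Return printable ASCII runs at least min_len bytes long."""
    return [m.group().decode() for m in
            re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

blob = (b"\x00\x01ispcore: irq-status 0x%08x, err is 0x%x,0x%x,084c is 0x%x"
        b"\x00\xffErr [VIC_INT] : frame asfifo ovf!!!!!\x00")
hits = [s for s in printable_strings(blob)
        if "irq-status" in s or "VIC_INT" in s]
print(hits)
```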
What I Think irq-status 0x00000600 Means
I want to be careful here and avoid overclaiming.
What seems high confidence:
- this is an internal ISP/VIC fault path, not a Frigate-side error
- the patio log string definitely comes from the T31 ISP core blob
- older open Ingenic ISP code shows `ispcore_interrupt_service_routine` handling an internal ISP interrupt-state register, while top-level VIC IRQ aggregation is handled separately
So my current interpretation is:
- `irq-status 0x00000600` is very likely an internal `ispcore`/VIC status word with bits `0x200` and `0x400` set
- it is probably in the family of FIFO / frame-channel / MIPI interface faults indicated by the nearby blob strings
- the exact bit-to-error mapping still needs vendor-side source or proper reverse engineering
I would be especially interested to know whether 0x200 and 0x400 correspond to a known VIC overflow / DMA / frame-channel / MIPI condition on T31.
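The bit decomposition itself is mechanical; only the bit-to-fault mapping is the open question. A quick check that `0x00000600` is exactly `0x200 | 0x400`:

```python
# Decompose the observed status word into its set bits. This only shows
# which bits are set; mapping each bit to a specific VIC/ISP fault is the
# open question above and is NOT established here.
def set_bits(word: int):
    """Return the individual set bits of a status word, as hex strings."""
    return [hex(1 << i) for i in range(word.bit_length()) if word >> i & 1]

print(set_bits(0x00000600))   # ['0x200', '0x400']
```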
Things I Do Not Think Are The Main Cause
Backchannel SDP / multiple audio tracks
The camera advertises extra AAC / PCMU / PCMA sendonly tracks in SDP. I confirmed this is intentional prudynt behavior for backchannel support, not random corruption.
So I do not think the extra audio tracks in SDP are the primary patio bug.
Standalone daynightd
I did not find a running standalone daynightd service on either camera during the targeted comparison, so this does not currently look like a day/night daemon conflict.
My Best Current Root Cause Statement
The best-supported explanation I have right now is:
The patio unit has a real T31 ISP/capture instability on its true 30 fps path. On the currently packaged stable `prudynt` branch, that instability is then amplified by small blocking video queueing and sender-side timing behavior, which turns it into a malformed client-visible RTSP stream (`30 fps / 15 tbr`, frame gaps, decode trouble, backlog) instead of a cleanly degraded or recovered stream.
Questions For Maintainers
- Does `ispcore: irq-status 0x00000600, err is 0x200,0x3f8,084c is 0x0` map to a known T31 ISP/VIC fault?
- Do `0x200` and `0x400` correspond to a documented VIC overflow / DMA / frame-channel / MIPI error condition?
- Is there a known instability on Wyze Cam v3 / T31 when `stream0.fps=30` compared with `25`?
- Is the currently shipped Thingino build for this target still using `prudynt-t` stable `d8e97072b6e45fece965ee6f4954ce9d0874f4fb`?
- If so, is there already a known reason to prefer a newer `prudynt` revision for timestamping / RTSP stability on T31?
- Is there any recommended debug knob or proc/debugfs readout I should capture next time this patio unit enters the bad state?
Extra Notes
- OPUS audio may be a stressor on the shipped stable branch, but I do not think it explains the patio-only ISP fault by itself because both cameras were using OPUS.
- I have additional local artifacts from direct `ffprobe`, `ffmpeg`, go2rtc probing, and source inspection if a maintainer wants a narrower follow-up dataset.