How We Test
Our methodology is built to answer one question: will this camera deliver timely, usable evidence with minimal hassle and minimal data exposure? We recreate real environments, measure what matters, and publish our assumptions so you can judge our results.
Real‑World Scenarios
- Apartment entry: narrow hallways, shared Wi‑Fi congestion, privacy zones for neighbors.
- Front porch with street glare: headlight and sun‑angle challenges, package detection, and talking through the camera.
- Backyard at distance: identification at 20–50 ft, motion blur, IR coverage, and lens FOV trade‑offs.
- Small retail: continuous recording, multi‑camera sync, cash‑wrap coverage, and audit‑ready exports.
Core Metrics and How We Measure Them
- Alert precision and recall: we stage controlled events (person, vehicle, animal, package) and log true/false positives and missed detections across three runs per scenario; see the measurement sketch after this list.
- Notification latency: median and 95th‑percentile time from motion start to actionable push alert, computed as in the measurement sketch after this list. We use NTP‑synced clocks and varied networks (strong/weak Wi‑Fi, PoE, simulated WAN jitter).
- Night‑time identification: readability of faces, clothing, and license plates at set distances under 0.1–3 lux, with IR on/off and HDR/WDR variations.
- Reliability: pre‑roll/buffered capture, gap frequency during continuous recording, stability across power/Wi‑Fi drops, and recovery time.
- Total cost of ownership (TCO): camera price, local storage (microSD/NVR/HDD), optional subscriptions, expected lifespan, and energy draw; see the TCO sketch after this list.
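The measurement sketch referenced above is a minimal illustration of how staged-trial logs become the published precision, recall, and latency numbers. The field names (`ground_truth`, `alerted`) are hypothetical stand-ins, not our internal schema.

```python
from statistics import median, quantiles

def precision_recall(trials):
    """trials: per-run records with 'ground_truth' (a real staged event
    occurred) and 'alerted' (the camera pushed an alert for it)."""
    tp = sum(t["ground_truth"] and t["alerted"] for t in trials)
    fp = sum(t["alerted"] and not t["ground_truth"] for t in trials)
    fn = sum(t["ground_truth"] and not t["alerted"] for t in trials)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def latency_summary(latencies_s):
    """latencies_s: seconds from motion start to actionable push alert,
    measured against NTP-synced clocks. Returns (median, 95th percentile)."""
    p95 = quantiles(latencies_s, n=100)[94]  # 95th-percentile cut point
    return median(latencies_s), p95
```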
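And the TCO sketch: a plain sum over an assumed ownership period. The five-year lifespan and $0.15/kWh electricity rate below are illustrative assumptions, not constants from our reviews.

```python
def total_cost_of_ownership(
    camera_price: float,
    storage_cost: float,          # microSD / NVR / HDD, one-time
    monthly_subscription: float,  # 0 if a plan is optional and skipped
    watts: float,                 # measured average power draw
    lifespan_years: float = 5.0,  # assumed ownership period
    kwh_price: float = 0.15,      # assumed electricity rate, $/kWh
) -> float:
    energy = watts / 1000 * 24 * 365 * lifespan_years * kwh_price
    subscription = monthly_subscription * 12 * lifespan_years
    return camera_price + storage_cost + subscription + energy

# Example: $60 camera, $20 microSD, no subscription, 4 W continuous draw.
print(round(total_cost_of_ownership(60, 20, 0, 4), 2))  # 106.28
```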
Tools and Controls
We use standard test charts, lux meters, backlight/headlight rigs, motion tracks, and network emulation. Each test logs firmware/app versions, bitrates, codecs, resolution, shutter/IR behavior, and ambient temperature. We repeat key trials and average results.
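As a rough illustration, each trial's logged context looks something like the record below. The field names are hypothetical stand-ins for the variables listed above, not an exact export of our tooling.

```python
from dataclasses import dataclass

@dataclass
class TrialRecord:
    camera: str
    firmware: str
    app_version: str
    codec: str              # e.g. H.264 / H.265
    resolution: str         # e.g. "2560x1440"
    bitrate_kbps: int
    shutter_mode: str       # auto / fixed exposure behavior
    ir_active: bool         # IR cut filter state during the trial
    ambient_temp_c: float
    lux: float
    network_profile: str    # strong Wi-Fi, weak Wi-Fi, PoE, WAN jitter
    run_index: int          # repeated trials are averaged
```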
Scoring Model
Baseline weights for general‑purpose cameras: 35% alert accuracy, 20% notification latency, 20% low‑light identification, 15% reliability, 10% TCO. We publish category‑specific weights (e.g., battery cams, indoor‑only, PoE) with each review when they differ.
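In code form, the baseline score is just a weighted sum of sub-scores, assumed here to be normalized to a 0–100 scale; category-specific reviews substitute their published weights.

```python
BASELINE_WEIGHTS = {
    "alert_accuracy": 0.35,
    "notification_latency": 0.20,
    "low_light_identification": 0.20,
    "reliability": 0.15,
    "tco": 0.10,
}

def overall_score(subscores, weights=BASELINE_WEIGHTS):
    """Weighted sum of 0-100 sub-scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * subscores[k] for k in weights)

# Example: strong on accuracy, slow to notify.
print(overall_score({
    "alert_accuracy": 90,
    "notification_latency": 60,
    "low_light_identification": 80,
    "reliability": 85,
    "tco": 70,
}))  # 79.25
```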
Privacy & Cost Index
A simple index summarizes local‑first options, end‑to‑end encryption, privacy zones and consent cues, offline functionality, data export rights, and long‑term cost. We show what’s optional vs. required so you can avoid unnecessary subscriptions.
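The index itself isn't a published formula, so treat the following as a hypothetical checklist-style tally: each criterion scored 0 (absent), 1 (optional extra), or 2 (standard/default), then scaled to 0–10.

```python
PRIVACY_COST_CRITERIA = [
    "local_first_storage",
    "end_to_end_encryption",
    "privacy_zones_and_consent_cues",
    "offline_functionality",
    "data_export_rights",
    "no_required_subscription",
]

def privacy_cost_index(scores):
    """scores: per-criterion values in {0, 1, 2}. Returns a 0-10 index."""
    raw = sum(scores.get(c, 0) for c in PRIVACY_COST_CRITERIA)
    return round(10 * raw / (2 * len(PRIVACY_COST_CRITERIA)), 1)
```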
Reproducibility and Retesting
We calibrate daily, test across at least two placements per scenario, and retest after major firmware releases. When reader evidence contradicts our findings, we investigate and publish updates or caveats.
Evidence You Can See
Where possible, we share anonymized clips and stills illustrating detection quality, night clarity, and motion blur at distance. We watermark and time‑sync media to make it audit‑ready and comparable across models.
If you have a scenario you want us to add—or a variable you think we should measure—email [email protected].