Behavioral Metrics
How to collect touch precision, scroll velocity, form timing, click precision, mouse velocity, typing speed, and other behavioral signals for age assurance across mobile and desktop.
Overview
Behavioral metrics carry the highest weight (43%) in the Signal Fusion engine. They measure how a user physically interacts with your app — motor control, scrolling patterns, form-filling speed, and interaction behavior. These signals are difficult to fake because they reflect neuromuscular development that differs measurably between children and adults.
The engine supports two sets of behavioral signals depending on the interaction mode:
- Touch mode (mobile/tablet) — touch precision, touch scroll velocity, touch pressure variance, multi-touch frequency
- Pointer mode (desktop/laptop) — click precision, mouse velocity, mouse path straightness, hover dwell time, pointer scroll velocity, typing speed, keystroke interval variance
- Hybrid mode (convertible devices) — all signals from both modes
Universal signals (form completion time, face estimation) work in all modes.
Set `interaction_mode` on the request, or omit it and the engine will auto-detect the mode from which signals are present. When only touch signals are provided, the engine infers touch mode (backward compatible); when only pointer signals are provided, it infers pointer mode.
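The auto-detection rule can be pictured with a small sketch. This is an illustration only — the field names match the tables below, but the inference logic (including treating shared signals like `scroll_velocity` as the backward-compatible touch default) is an assumption, not the engine's actual implementation:

```typescript
type Metrics = Record<string, number | boolean | undefined>;

// Mode-exclusive field names from the reference tables below.
const TOUCH_KEYS = ['avg_touch_precision', 'touch_pressure_variance', 'multi_touch_frequency'];
const POINTER_KEYS = [
  'avg_click_precision', 'mouse_velocity_mean', 'mouse_path_straightness',
  'hover_dwell_time_ms', 'typing_speed_wpm', 'keystroke_interval_variance',
];

function inferInteractionMode(m: Metrics): 'touch' | 'pointer' | 'hybrid' {
  const hasTouch = TOUCH_KEYS.some((k) => m[k] !== undefined);
  const hasPointer = POINTER_KEYS.some((k) => m[k] !== undefined);
  if (hasTouch && hasPointer) return 'hybrid';
  // Assumed: touch is the backward-compatible default when no
  // mode-exclusive signal is present.
  return hasPointer ? 'pointer' : 'touch';
}
```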
All behavioral signals are computed on-device from raw browser/OS events. You transmit only aggregate scores (ratios, averages, counts) — never raw touch coordinates, mouse paths, timestamps, or event streams.
Touch-mode Fields Reference
| Parameter | Type | Required | Description |
|---|---|---|---|
| `avg_touch_precision` | number (0–1) | Optional | Average touch-target precision. 0 = every tap misses, 1 = perfect accuracy. Touch-mode only — omit for pointer-mode interactions. |
| `scroll_velocity` | number (px/s) | Optional | Average scroll/flick velocity in pixels per second. The engine applies different thresholds for touch mode (0–5000+ px/s) versus pointer mode (0–1500 px/s). |
| `form_completion_time_ms` | number (ms) | Optional | Average time to complete a form field, in milliseconds. Works identically in both touch and pointer modes. |
| `is_autofill_detected` | boolean | Optional | Whether browser/OS autofill was detected. When true, form completion time is neutralized in scoring. |
| `touch_pressure_variance` | number (0–1) | Optional | Normalized variance of touch pressure readings across the session. Touch-mode only. |
| `multi_touch_frequency` | number (events/min) | Optional | Accidental multi-touch events per minute. Touch-mode only. |
| `face_estimation_result` | object | Optional | Third-party face estimation fallback. See the Face Estimation section below. |
Pointer-mode Fields Reference (Desktop/Laptop)
| Parameter | Type | Required | Description |
|---|---|---|---|
| `avg_click_precision` | number (0–1) | Optional | Average click-target precision — the pointer-mode equivalent of `avg_touch_precision`. 0 = missed completely, 1 = dead center. Children overshoot targets; adults click precisely. |
| `mouse_velocity_mean` | number (px/s) | Optional | Average mouse/trackpad cursor velocity. Children move erratically; adults move smoothly. |
| `mouse_path_straightness` | number (0–1) | Optional | Ratio of direct distance to actual cursor path distance. 0 = very curved/wobbly, 1 = perfectly straight. Children produce corrective paths; adults move efficiently. |
| `hover_dwell_time_ms` | number (ms) | Optional | Average time between the cursor entering a clickable target and the click. Adults deliberate; children click impulsively. |
| `typing_speed_wpm` | number | Optional | Average typing speed in words per minute. Adults: 40–80 WPM; children: 10–25 WPM. Strongest on physical keyboards. |
| `keystroke_interval_variance` | number (0–1) | Optional | Variance of inter-keystroke timing. 0 = perfectly uniform rhythm, 1 = maximally erratic. Adults develop consistent rhythms; children are variable. |
| `scroll_velocity` | number (px/s) | Optional | Average scroll velocity — shared with touch mode but scored with pointer-calibrated thresholds (300/800/1500 px/s instead of 800/2000/3500 px/s). |
| `form_completion_time_ms` | number (ms) | Optional | Same as touch mode — works identically for both modes. |
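The sections below walk through collectors for the touch-mode signals; the pointer-mode aggregates can be computed analogously from `pointermove` and `keydown` samples. Here is a hedged sketch of three of those computations as pure functions — the normalization scales (e.g. the variance divisor) are illustrative assumptions, not documented engine constants:

```typescript
interface Point { x: number; y: number }

// mouse_path_straightness: direct distance divided by actual path length (0–1).
function pathStraightness(points: Point[]): number | null {
  if (points.length < 2) return null;
  let pathLen = 0;
  for (let i = 1; i < points.length; i++) {
    pathLen += Math.hypot(points[i].x - points[i - 1].x, points[i].y - points[i - 1].y);
  }
  if (pathLen === 0) return 1; // no movement: treat as perfectly direct
  const last = points[points.length - 1];
  const direct = Math.hypot(last.x - points[0].x, last.y - points[0].y);
  return direct / pathLen;
}

// keystroke_interval_variance: variance of inter-keystroke gaps, normalized
// to 0–1. The 250_000 divisor (a ~500 ms standard deviation) is an assumed scale.
function keystrokeIntervalVariance(intervalsMs: number[]): number | null {
  if (intervalsMs.length < 5) return null;
  const mean = intervalsMs.reduce((a, b) => a + b, 0) / intervalsMs.length;
  const variance = intervalsMs.reduce((s, v) => s + (v - mean) ** 2, 0) / intervalsMs.length;
  return Math.min(variance / 250_000, 1);
}

// typing_speed_wpm: standard 5-characters-per-word convention.
function typingSpeedWpm(charCount: number, elapsedMs: number): number | null {
  if (elapsedMs <= 0 || charCount === 0) return null;
  return (charCount / 5) / (elapsedMs / 60_000);
}
```

As with the touch collectors, each function returns null when too few samples exist — omit the field rather than sending a fabricated value.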
Touch Precision
What it measures: How accurately the user taps interactive elements (buttons, links, form fields). Children have less developed fine motor control and consistently miss touch targets by wider margins.
Signal strength: A touch precision below 0.40 is a strong child indicator. Above 0.80 suggests adult-level motor control.
How to Collect
Listen for pointerdown events on interactive elements, measure the distance
from the touch point to the element's center, and normalize to a 0–1 score
where 1 is a perfect center hit.
```tsx
import { useEffect, useRef, useCallback } from 'react';

export function useTouchPrecision() {
  const scores = useRef<number[]>([]);

  useEffect(() => {
    function handlePointer(e: PointerEvent) {
      const target = (e.target as HTMLElement).closest(
        'button, a, input, select, textarea, [role="button"]'
      );
      if (!target) return;
      const rect = target.getBoundingClientRect();
      const cx = rect.left + rect.width / 2;
      const cy = rect.top + rect.height / 2;
      const dist = Math.sqrt((e.clientX - cx) ** 2 + (e.clientY - cy) ** 2);
      const maxDist = Math.sqrt(rect.width ** 2 + rect.height ** 2) / 2;
      scores.current.push(maxDist > 0 ? 1 - Math.min(dist / maxDist, 1) : 1);
    }
    document.addEventListener('pointerdown', handlePointer);
    return () => document.removeEventListener('pointerdown', handlePointer);
  }, []);

  const getScore = useCallback(() => {
    const s = scores.current;
    if (s.length === 0) return null;
    return s.reduce((a, b) => a + b, 0) / s.length;
  }, []);

  return getScore;
}

// Usage in a component:
// const getTouchPrecision = useTouchPrecision();
// const avg_touch_precision = getTouchPrecision();
```

Tip: Collect at least 10–15 taps before computing the average. Fewer samples produce noisy scores. If the user hasn't interacted with enough elements, the tracker returns null — omit `behavioral_metrics` entirely rather than sending a fabricated value.
All behavioral_metrics fields are optional — send whichever signals
your tracker has collected. Touch-mode integrations typically send
avg_touch_precision, scroll_velocity, and form_completion_time_ms;
pointer-mode (desktop) integrations send avg_click_precision,
mouse_velocity_mean, and mouse_path_straightness instead. If a tracker
has insufficient data, omit the field rather than sending zeros — a
velocity of 0 or a form time of 0 would produce misleading scores.
Edge Cases
- Mouse users produce near-perfect precision (0.95+). This is expected — mouse precision correlates with adult desktop usage.
- Stylus users also produce high precision. The engine accounts for this naturally since stylus usage is rare among young children.
- Budget devices have lower-quality digitizers that reduce raw precision. Send `device_model` in `device_context` to enable hardware-tier normalization.
Scroll Velocity
What it measures: Average scroll/flick speed in pixels per second. Children tend to scroll rapidly ("scan and flick"), while adults scroll more deliberately.
Signal strength: Velocities above 3,500 px/s are strong child indicators. Below 800 px/s suggests adult browsing patterns.
How to Collect
Track scroll position changes over time using scroll events and compute the
average velocity across the session.
```tsx
import { useEffect, useRef, useCallback } from 'react';

export function useScrollVelocity() {
  const velocities = useRef<number[]>([]);
  const lastY = useRef(0);
  const lastTime = useRef(0);
  const rafId = useRef(0);

  useEffect(() => {
    lastY.current = window.scrollY;
    lastTime.current = performance.now();

    function handleScroll() {
      cancelAnimationFrame(rafId.current);
      rafId.current = requestAnimationFrame(() => {
        const now = performance.now();
        const dt = (now - lastTime.current) / 1000;
        if (dt > 0.01) {
          const dy = Math.abs(window.scrollY - lastY.current);
          velocities.current.push(dy / dt);
        }
        lastY.current = window.scrollY;
        lastTime.current = now;
      });
    }

    window.addEventListener('scroll', handleScroll, { passive: true });
    return () => {
      window.removeEventListener('scroll', handleScroll);
      cancelAnimationFrame(rafId.current);
    };
  }, []);

  const getScore = useCallback(() => {
    const v = velocities.current;
    if (v.length === 0) return null;
    return v.reduce((a, b) => a + b, 0) / v.length;
  }, []);

  return getScore;
}
```

Edge Cases

- No scroll data — if the page doesn't scroll (short pages, single-screen views), the tracker returns null. Omit the `scroll_velocity` field rather than sending 0 — a velocity of 0 would be scored as `deliberate_scroll_pattern` (a strong adult indicator), skewing the assessment.
- Programmatic scrolling (smooth scroll, `scrollTo()`) can produce artificially low velocities. Filter out scroll events that occur without user interaction if possible.
- Infinite scroll pages naturally accumulate more scroll data — this is fine; more samples improve accuracy.
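The mode-specific thresholds described above (800/2000/3500 px/s for touch, 300/800/1500 px/s for pointer) can be sketched as a banding function. The band names besides `deliberate_scroll_pattern` are hypothetical labels for illustration — the engine's actual scoring curve is internal:

```typescript
type Mode = 'touch' | 'pointer';

// Illustrative bands only; the engine's real scoring is not published.
function scrollVelocityBand(
  pxPerSec: number,
  mode: Mode
): 'deliberate' | 'typical' | 'fast' | 'scan_and_flick' {
  // Thresholds from the field reference: touch 800/2000/3500, pointer 300/800/1500.
  const [low, mid, high] = mode === 'touch' ? [800, 2000, 3500] : [300, 800, 1500];
  if (pxPerSec < low) return 'deliberate';
  if (pxPerSec < mid) return 'typical';
  if (pxPerSec < high) return 'fast';
  return 'scan_and_flick'; // strong child indicator
}
```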
Form Completion Time
What it measures: Average time a user spends completing individual form fields, in milliseconds. Children tend to fill forms impulsively (< 2 seconds per field), while adults take more time (> 5 seconds).
Signal strength: Under 2,000 ms is a strong child indicator. Above 15,000 ms suggests careful, adult-like form interaction.
How to Collect
Timestamp when each form field receives focus and when it loses focus or the value changes. Average across all observed fields.
```tsx
import { useEffect, useRef, useCallback } from 'react';

export function useFormTiming(formSelector = 'form') {
  const timings = useRef<number[]>([]);
  const fieldStart = useRef(new Map<HTMLElement, number>());

  useEffect(() => {
    function handleFocusIn(e: Event) {
      const target = e.target as HTMLElement;
      if (['INPUT', 'TEXTAREA', 'SELECT'].includes(target.tagName)) {
        fieldStart.current.set(target, performance.now());
      }
    }
    function handleFocusOut(e: Event) {
      const target = e.target as HTMLElement;
      const start = fieldStart.current.get(target);
      if (start !== undefined) {
        const elapsed = performance.now() - start;
        if (elapsed > 100) timings.current.push(elapsed);
        fieldStart.current.delete(target);
      }
    }

    const forms = document.querySelectorAll(formSelector);
    forms.forEach((form) => {
      form.addEventListener('focusin', handleFocusIn);
      form.addEventListener('focusout', handleFocusOut);
    });
    return () => {
      forms.forEach((form) => {
        form.removeEventListener('focusin', handleFocusIn);
        form.removeEventListener('focusout', handleFocusOut);
      });
    };
  }, [formSelector]);

  const getScore = useCallback(() => {
    const t = timings.current;
    if (t.length === 0) return null;
    return t.reduce((a, b) => a + b, 0) / t.length;
  }, []);

  return getScore;
}
```

Autofill Detection
What it measures: Whether the browser or OS autofilled form fields. This is critical because adults frequently use password managers and browser autofill, which produces artificially fast form completion times. When autofill is detected, the engine neutralizes the form completion signal to prevent false child classification of adult power users.
How to Collect
Detect autofill using the CSS :-webkit-autofill pseudo-class (via animation
events) or by detecting rapid multi-field population.
```tsx
import { useEffect, useRef, useCallback } from 'react';

export function useAutofillDetection() {
  const detected = useRef(false);

  useEffect(() => {
    // Method 1: CSS animation trigger on :-webkit-autofill
    const style = document.createElement('style');
    style.textContent =
      '@keyframes a3-autofill-detect { from { opacity: 1 } to { opacity: 1 } }' +
      'input:-webkit-autofill { animation-name: a3-autofill-detect; }';
    document.head.appendChild(style);

    function onAnimation(e: AnimationEvent) {
      if (e.animationName === 'a3-autofill-detect') detected.current = true;
    }
    document.addEventListener('animationstart', onAnimation);

    // Method 2: Rapid multi-field population (fallback)
    let changeCount = 0;
    let firstChange = 0;
    const inputs = document.querySelectorAll('input');
    function onChange() {
      if (changeCount === 0) firstChange = performance.now();
      changeCount++;
      if (changeCount >= 3 && performance.now() - firstChange < 100) {
        detected.current = true;
      }
    }
    inputs.forEach((i) => i.addEventListener('change', onChange));

    return () => {
      document.removeEventListener('animationstart', onAnimation);
      inputs.forEach((i) => i.removeEventListener('change', onChange));
      style.remove();
    };
  }, []);

  const isDetected = useCallback(() => detected.current, []);
  return isDetected;
}
```

Always send `is_autofill_detected: true` when autofill is detected. Without this flag, a fast form completion time from an adult using a password manager would be scored as impulsive (child-like) behavior.
Touch Pressure Variance
What it measures: How consistently the user applies pressure to the screen. Children exhibit erratic, inconsistent pressure (high variance > 0.7), while adults maintain stable, consistent pressure (low variance < 0.3).
Availability: Requires PointerEvent.pressure support (most modern touch devices). Hardware without a pressure sensor reports a constant 0.5 for every active pointer (the Pointer Events default), so no usable variance can be measured there. This signal is optional — omit it if your target devices don't support pressure.
How to Collect
```tsx
import { useEffect, useRef, useCallback } from 'react';

export function useTouchPressure() {
  const pressures = useRef<number[]>([]);

  useEffect(() => {
    function handlePointer(e: PointerEvent) {
      // Ignore 0 (no reading), 1 (saturated), and the constant 0.5 default
      // reported by hardware without a pressure sensor.
      if (
        e.pointerType === 'touch' &&
        e.pressure > 0 &&
        e.pressure < 1 &&
        e.pressure !== 0.5
      ) {
        pressures.current.push(e.pressure);
      }
    }
    document.addEventListener('pointerdown', handlePointer);
    document.addEventListener('pointermove', handlePointer);
    return () => {
      document.removeEventListener('pointerdown', handlePointer);
      document.removeEventListener('pointermove', handlePointer);
    };
  }, []);

  const getScore = useCallback(() => {
    const p = pressures.current;
    if (p.length < 10) return null;
    const mean = p.reduce((a, b) => a + b, 0) / p.length;
    const variance = p.reduce((sum, v) => sum + (v - mean) ** 2, 0) / p.length;
    return Math.min(variance / 0.25, 1); // 0.25 is the max variance of values in [0, 1]
  }, []);

  return getScore;
}
```

Edge Cases

- Devices without pressure sensors report `pressure: 0` or a constant `pressure: 0.5` for all events. The tracker above filters both out (at the cost of dropping the occasional genuine 0.5 reading). If fewer than 10 valid readings are collected, it returns null — omit the field.
- Apple Pencil and other styluses produce very consistent pressure — this naturally scores as adult-like behavior.
Multi-Touch Frequency
What it measures: How often the user accidentally triggers simultaneous touches. Children produce 2–4x more accidental multi-touch events per minute than adults due to palm contact and less precise finger placement.
Signal strength: Above 6 events/min is a strong child indicator. Below 1 event/min suggests intentional, adult-like touch patterns.
How to Collect
Track pointerdown events and count cases where multiple pointers are active
simultaneously.
```tsx
import { useEffect, useRef, useCallback } from 'react';

export function useMultiTouchFrequency() {
  const activePointers = useRef(new Set<number>());
  const count = useRef(0);
  const startTime = useRef(0);

  useEffect(() => {
    startTime.current = performance.now();

    function handleDown(e: PointerEvent) {
      if (e.pointerType !== 'touch') return;
      // A touch starting while another pointer is still down counts as a
      // simultaneous-touch event.
      if (activePointers.current.size > 0) count.current++;
      activePointers.current.add(e.pointerId);
    }
    function handleUp(e: PointerEvent) {
      activePointers.current.delete(e.pointerId);
    }

    document.addEventListener('pointerdown', handleDown);
    document.addEventListener('pointerup', handleUp);
    document.addEventListener('pointercancel', handleUp);
    return () => {
      document.removeEventListener('pointerdown', handleDown);
      document.removeEventListener('pointerup', handleUp);
      document.removeEventListener('pointercancel', handleUp);
    };
  }, []);

  const getScore = useCallback(() => {
    const elapsedMin = (performance.now() - startTime.current) / 60_000;
    // Under ~6 seconds of observation the rate is meaningless — return null
    // and omit the field rather than reporting a fabricated 0 events/min.
    if (elapsedMin < 0.1) return null;
    return count.current / elapsedMin;
  }, []);

  return getScore;
}
```

Face Estimation (Fallback)
What it measures: A third-party face estimation provider's age range estimate. This is a fallback signal designed for the transition period (Jan–Jul 2027) when the OS age signal may not be available, or for web apps that cannot access the OS signal.
Face estimation is not required for most integrations. Use it when `os_signal` is `not-available` and you want a stronger PROVISIONAL assessment.
Face estimation uses a third-party provider, which means the API response
will set internal_evidence_only: false when this field is present. If your
compliance posture requires strictly internal evidence under
§1798.501(b)(2)(B), omit face_estimation_result.
Supported Providers
| Provider | Value | Notes |
|---|---|---|
| Yoti | yoti | Age estimation SDK |
| Privado ID | privado | Privacy-preserving age verification |
| FaceTec | facetec | 3D liveness + age estimation |
Fields
| Parameter | Type | Required | Description |
|---|---|---|---|
| `estimation_provider` | enum | Required | The third-party provider: `yoti`, `privado`, or `facetec`. |
| `estimated_age_lower` | number (0–150) | Required | Lower bound of the provider's estimated age range. |
| `estimated_age_upper` | number (0–150) | Required | Upper bound of the estimated age range. Must be >= `estimated_age_lower`. |
| `confidence` | number (0–1) | Required | The provider's confidence in their estimate. |
Integration Flow
- Integrate the provider's client SDK (Yoti Age Estimation, FaceTec Browser SDK, etc.)
- Perform face estimation on-device — the raw image never leaves the user's device
- Receive the age range and confidence from the provider
- Pass the result to A3 inside `behavioral_metrics.face_estimation_result`:
```json
{
  "behavioral_metrics": {
    "avg_touch_precision": 0.45,
    "scroll_velocity": 2800,
    "form_completion_time_ms": 3200,
    "face_estimation_result": {
      "estimation_provider": "yoti",
      "estimated_age_lower": 10,
      "estimated_age_upper": 14,
      "confidence": 0.78
    }
  }
}
```
The engine blends the face estimation with behavioral signals, weighted by provider confidence. Lower confidence estimates are blended toward neutral (0.5) to prevent overreliance on uncertain results.
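The confidence weighting can be pictured as a linear blend toward neutral. This is an assumption about the blend shape for illustration — the engine's actual formula is not published:

```typescript
// Hypothetical linear blend: low-confidence estimates pull toward neutral 0.5.
// faceScore is an internal 0–1 "child-likelihood" derived from the age range;
// confidence is the provider's reported confidence.
function blendFaceEstimate(faceScore: number, confidence: number): number {
  return confidence * faceScore + (1 - confidence) * 0.5;
}
```

At confidence 0 the estimate contributes nothing (result is neutral 0.5); at confidence 1 it passes through unchanged.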
Putting It All Together
Here's a complete example that collects all behavioral metrics during a user
session and assembles the behavioral_metrics object:
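Before the raw request, here's a hedged sketch of assembling the trackers' outputs into that object, dropping null results rather than sending zeros (the key names come from the field references above; the helper itself is illustrative, not part of any SDK):

```typescript
// Drop null/undefined tracker results instead of sending fabricated zeros.
// Returns null when nothing was collected, so the caller can omit
// behavioral_metrics from the request entirely.
function buildBehavioralMetrics(
  raw: Record<string, number | boolean | null | undefined>
): Record<string, number | boolean> | null {
  const out: Record<string, number | boolean> = {};
  for (const [key, value] of Object.entries(raw)) {
    if (value !== null && value !== undefined) out[key] = value;
  }
  return Object.keys(out).length > 0 ? out : null;
}
```

For example, `buildBehavioralMetrics({ avg_touch_precision: getTouchPrecision(), scroll_velocity: getScrollVelocity() })` keeps only the signals whose trackers returned a value; note that a legitimate `false` (e.g. `is_autofill_detected`) is preserved.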
```bash
# Complete behavioral_metrics example payload:
curl -X POST https://api.a3api.io/v1/assurance/assess-age \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "os_signal": "18-plus",
    "user_country_code": "US",
    "behavioral_metrics": {
      "avg_touch_precision": 0.82,
      "scroll_velocity": 950,
      "form_completion_time_ms": 12000,
      "is_autofill_detected": false,
      "touch_pressure_variance": 0.15,
      "multi_touch_frequency": 0.3
    }
  }'
```

Next Steps
- Input Complexity — the second-highest weighted category (28%)
- Device Context — enables hardware normalization for touch precision
- Full API Reference — complete endpoint documentation