Siri, Gemini, and TypeScript: Building Privacy‑Aware Assistant Integrations for iOS Web Apps
2026-02-06 12:00:00
10 min read

How TypeScript PWAs can safely integrate with Siri/Gemini: typed responses, privacy-first proxies, and graceful fallbacks for 2026.

Hook: Your PWA speaks TypeScript — can it talk to Siri and Gemini without exposing your users?

Shipping a Progressive Web App or React/Vue/Next.js site in 2026 often means three simultaneous demands: integrate with platform assistants (Siri/Gemini-powered experiences), preserve type and runtime safety (so assistant responses map to well-typed UI actions), and maintain user privacy under increasingly strict OS-level and regulatory guardrails. If you build in TypeScript, you already have an advantage — but the assistant surface area is fragmented and full of privacy landmines. This article gives you a pragmatic, TypeScript-first playbook for assistant integrations that are typed, privacy-aware, and degrade gracefully when OS-level hooks aren't available.

Why this matters in 2026

Two industry trends converged in 2024–2026 that affect web-first apps today:

  • Siri’s modernization — Apple’s work to graft advanced generative models (including partnerships that route some requests to models like Gemini) means Siri can now run richer assistant flows, but most of those flows surface via native App Intents and privacy-preserving channels.
  • Edge & private compute — vendors and platforms push on-device and private cloud compute to protect PII. That changes how you design API proxies and what data you send to third-party LLMs.

In practice: web apps can participate, but you must design a layered architecture that handles typed responses, validates everything, and falls back cleanly when assistant hooks are missing or user privacy settings block access.

High-level integration patterns

There are three practical patterns to integrate a TypeScript web app / PWA with mobile assistants like Siri (Gemini-powered or otherwise):

  1. Native companion (best experience) — small native wrapper exposes App Intents / Siri Shortcuts and deep-links into the PWA. The native layer performs capability negotiation and (optionally) minimal processing for privacy-sensitive data before handing control back to the TypeScript app.
  2. Web-first progressive integration (web-only) — rely on Web APIs (Web Speech, Web Share Target, deep links) with careful feature detection and local validation. No native code required, but experiences are more limited on iOS.
  3. Server-mediated assistant proxy — your TypeScript backend (Node/Next.js) calls a generative model (Gemini, LLM provider) and returns strongly-typed JSON to the client or to a native wrapper. This gives the most control over privacy, validation, and auditing. See our notes on building and hosting micro-apps to keep service boundaries small.
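
These three paths don't have to live only in prose. A minimal sketch of how a client might model and select among them; every name here is illustrative, not a published API:

// utils/integration-mode.ts: illustrative only; pickIntegrationMode and its inputs are assumptions
export type AssistantIntegrationMode =
  | { kind: 'native-companion'; universalLinkBase: string } // App Intents wrapper deep-links into the PWA
  | { kind: 'web-first'; useWebSpeech: boolean }            // Web Speech / Share Target, no native code
  | { kind: 'server-proxy'; proxyEndpoint: string };        // backend mediates every model call

export function pickIntegrationMode(hasNativeWrapper: boolean, speechAvailable: boolean): AssistantIntegrationMode {
  if (hasNativeWrapper) return { kind: 'native-companion', universalLinkBase: 'https://example.com/assistant' };
  if (speechAvailable) return { kind: 'web-first', useWebSpeech: true };
  return { kind: 'server-proxy', proxyEndpoint: '/api/assistant' };
}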

Core principles

  • Type everything — define schemas for assistant intents and outputs and validate them at runtime (Zod/io-ts) to avoid surprising runtime behavior.
  • Respect capability detection — detect assistant and platform features at runtime and choose a safe fallback. Don’t assume native assistant hooks are present.
  • Minimize PII — only send what’s required to remote models; prefer on-device transforms when possible.
  • Audit & log safely — redact or hash sensitive values before logging. Keep an audit trail that proves compliance without exposing user data.

Practical TypeScript patterns: typed responses + validation

Assistant responses are often semi-structured text. Treat them as data-first: define TypeScript types, and use a runtime validator to enforce them.

Example intent + response type

// types/assistant.ts
import { z } from 'zod';

export const AppointmentSchema = z.object({
  id: z.string().uuid(),
  title: z.string(),
  date: z.string().refine(s => !Number.isNaN(Date.parse(s)), { message: 'invalid date' }),
  location: z.string().optional(),
  private: z.boolean().default(false),
});

export type Appointment = z.infer<typeof AppointmentSchema>;

export const AssistantActionSchema = z.discriminatedUnion('type', [
  z.object({ type: z.literal('create_appointment'), payload: AppointmentSchema }),
  z.object({ type: z.literal('open_url'), payload: z.object({ url: z.string().url() }) }),
]);

export type AssistantAction = z.infer<typeof AssistantActionSchema>;

Use these schemas at the server boundary and in the client to guarantee shape and catch malformed model outputs.
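
On the client, the same schema guards whatever comes back from your proxy. A minimal usage sketch, assuming the /api/assistant proxy described in the next section and trimming error handling:

// client-side: validate the proxy response before dispatching any UI action
import { AssistantActionSchema, type AssistantAction } from '@/types/assistant';

export async function requestAssistantAction(prompt: string): Promise<AssistantAction | null> {
  const res = await fetch('/api/assistant', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) return null; // the proxy already returned a structured error

  const parsed = AssistantActionSchema.safeParse(await res.json());
  return parsed.success ? parsed.data : null; // never dispatch an unvalidated action
}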

Server-side proxy (Next.js/Node + Gemini-style model)

Example: Next.js /app/api route that forwards a user prompt to a model provider and validates the assistant JSON response.

// app/api/assistant/route.ts (Next.js App Router)
import type { NextRequest } from 'next/server';
import { AssistantActionSchema } from '@/types/assistant';

export async function POST(req: NextRequest) {
  const { prompt, context } = await req.json();

  // Minimal privacy filter: redact emails and phone numbers before the prompt leaves your infrastructure
  const sanitizedPrompt = prompt
    .replace(/[\w.-]+@[\w.-]+/g, '[REDACTED]')
    .replace(/\+?\d[\d -]{7,}\d/g, '[REDACTED]');

  // callGeminiLikeModel wraps your provider-specific client (Gemini or any other LLM API)
  const modelResponse = await callGeminiLikeModel(sanitizedPrompt, context);

  // Attempt to parse a JSON payload out of the model's free-form text
  const parsed = safeExtractJson(modelResponse.text);

  const result = AssistantActionSchema.safeParse(parsed);
  if (!result.success) {
    // Return a structured error: the client can fall back to a human-friendly reply
    return new Response(JSON.stringify({ error: 'invalid_model_output', details: result.error.format() }), { status: 422 });
  }

  // Do not log raw user content; log only the intent type plus a hashed id for auditing
  const idForAudit = result.data.type === 'create_appointment' ? result.data.payload.id : '';
  auditLog({ type: result.data.type, idHash: hash(idForAudit) });

  return new Response(JSON.stringify(result.data), { headers: { 'Content-Type': 'application/json' } });
}
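
Two of the helpers used above, safeExtractJson and hash, can be small utilities; callGeminiLikeModel and auditLog remain provider- and app-specific, so they are left out. A sketch of one plausible shape for the first pair:

// utils/assistant-server.ts: illustrative helpers assumed by the route above
import { createHash } from 'node:crypto';

// Pull the first JSON object out of free-form model text; return null if nothing parses.
export function safeExtractJson(text: string): unknown {
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]);
  } catch {
    return null;
  }
}

// One-way hash so raw identifiers never land in log storage.
export function hash(value: string): string {
  return createHash('sha256').update(value).digest('hex');
}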

Notes:

  • Do runtime schema validation with Zod; never assume model output is correct.
  • Sanitize PII before sending it to third-party LLMs.
  • Prefer server-side calls when you need to add rate-limiting, consent checks, or redaction.
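
That last point is cheap to prototype. A sketch of a server-side gate, using an in-memory window for illustration that a real deployment would replace with a shared store such as Redis; all names are assumptions:

// consent + rate-limit gate: in-memory for illustration only
const WINDOW_MS = 60_000;
const MAX_CALLS = 10;
const callLog = new Map<string, number[]>();

export function allowAssistantCall(userId: string, hasConsented: boolean): boolean {
  if (!hasConsented) return false; // consent gate comes first
  const now = Date.now();
  const recent = (callLog.get(userId) ?? []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_CALLS) return false;
  recent.push(now);
  callLog.set(userId, recent);
  return true;
}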

Client-side: capability detection and graceful degradation

Your PWA must detect the platform assistant surface and choose a path. Here’s an example TypeScript utility you can reuse.

// utils/assistant.ts
export function canUseSpeechRecognition(): boolean {
  // Feature-detect both the unprefixed and webkit-prefixed constructors; support varies by browser and platform
  return typeof (window as any).SpeechRecognition !== 'undefined' || typeof (window as any).webkitSpeechRecognition !== 'undefined';
}

export function canUseMediaSession(): boolean {
  return 'mediaSession' in navigator;
}

export function isIosPwaInstalled(): boolean {
  // heuristic: standalone display-mode + iOS UA
  return window.matchMedia('(display-mode: standalone)').matches && /iP(hone|ad|od)/.test(navigator.userAgent);
}

Use these checks to show a different UI or instruct the user to install a companion native helper for the best Siri integration.
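
For example, a small helper can turn those checks into a UX decision; the return values here are illustrative:

// utils/assistant-ux.ts: pick a UX path from the capability checks above (illustrative)
import { canUseSpeechRecognition, isIosPwaInstalled } from '@/utils/assistant';

export function assistantUxHint(): 'voice' | 'suggest-native-helper' | 'text-only' {
  if (canUseSpeechRecognition()) return 'voice';            // full web speech flow
  if (isIosPwaInstalled()) return 'suggest-native-helper';  // prompt to add the App Intents companion
  return 'text-only';                                       // plain keyboard fallback
}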

React component: assistant button with fallback

// components/AssistantButton.tsx
import React from 'react';
import { canUseSpeechRecognition } from '@/utils/assistant';

export default function AssistantButton({ onAssistantAction }: { onAssistantAction: (action: any) => void }) {
  const handleClick = async () => {
    if (canUseSpeechRecognition()) {
      // start web speech flow
      startWebSpeechFlow(onAssistantAction);
      return;
    }

    // fallback: open messaging/contact flow or show keyboard
    showFallbackInput(onAssistantAction);
  };

  return <button type="button" onClick={handleClick}>Ask the assistant</button>;
}
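
startWebSpeechFlow and showFallbackInput are left to the app. A rough sketch of the speech path, assuming the server proxy from earlier and trimming error handling:

// utils/speech.ts: rough sketch of the web speech path
export function startWebSpeechFlow(onAction: (action: unknown) => void): void {
  const Ctor = (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!Ctor) return; // caller already feature-detected, but stay defensive

  const recognition = new Ctor();
  recognition.lang = navigator.language;
  recognition.interimResults = false;

  recognition.onresult = async (event: any) => {
    const transcript = event.results[0][0].transcript;
    // Forward the transcript to the server-side proxy, which returns a validated action
    const res = await fetch('/api/assistant', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: transcript }),
    });
    if (res.ok) onAction(await res.json());
  };

  recognition.start();
}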

Native companion: Siri App Intents routed to your PWA

If you need deep Siri integration, the practical route in 2026 is a lightweight native wrapper exposing App Intents that deep-link to your PWA. Keep native logic minimal — only for intent routing, consent, and local privacy transformations. The wrapper can do:

  • Receive App Intent from Siri
  • Perform local PII redaction or transform (e.g., replace contact names with resource IDs)
  • Open your PWA via a universal link with signed temporary tokens

Advantages: you keep your core product in TypeScript while offering a first-class assistant experience.
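
The "signed temporary tokens" step deserves a concrete shape. One illustrative scheme is an HMAC over a timestamped payload; production systems may prefer signed JWTs via a library such as jose. A sketch:

// api sketch: verify the short-lived token minted by the native wrapper
// Assumption: payload is `${issuedAtMs}.${nonce}` and the wrapper shares the secret out of band.
import { createHmac, timingSafeEqual } from 'node:crypto';

const TOKEN_TTL_MS = 2 * 60_000; // two minutes is plenty for a deep-link handoff

export function verifyEphemeralToken(payload: string, signature: string, secret: string): boolean {
  const [issuedAtRaw] = payload.split('.');
  const issuedAt = Number(issuedAtRaw);
  if (!Number.isFinite(issuedAt) || Date.now() - issuedAt > TOKEN_TTL_MS) return false;

  const expected = createHmac('sha256', secret).update(payload).digest();
  const given = Buffer.from(signature, 'hex');
  return given.length === expected.length && timingSafeEqual(given, expected);
}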

Privacy checklist for assistant interactions

Before shipping assistant features, validate each item below:

  • Consent — users must explicitly opt into assistant features that forward personal data to third-party models.
  • Minimize — strip PII (emails, phone numbers, contact names) before sending prompts; a reusable redaction sketch follows this checklist.
  • On-device vs. cloud — choose on-device transforms for the most sensitive data. Use cloud proxies only when necessary and document retention policies.
  • Audit — keep a privacy-preserving audit log (hashes, timestamps, nonces) for compliance requests.
  • Rate limit — prevent accidental data leaks from runaway assistant calls.
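
Several of these items reduce to small, testable helpers. For instance, the "Minimize" step can live in one reusable function; the patterns below are illustrative and deliberately simple, since real PII detection needs more than regexes:

// privacy/redact.ts: illustrative patterns only
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.-]+/g;
const PHONE_RE = /\+?\d[\d\s().-]{7,}\d/g;

export function redactPII(text: string): string {
  return text.replace(EMAIL_RE, '[REDACTED_EMAIL]').replace(PHONE_RE, '[REDACTED_PHONE]');
}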

Testing and CI: mock assistant responses

Model outputs change. Bake assumptions into tests:

  • Use golden JSON fixtures for expected assistant actions and validate them with your Zod schemas (a test sketch follows this list).
  • Run fuzz tests where randomly generated (but valid) assistant outputs are run through your client code to ensure stable parsing and graceful error handling.
  • Test privacy flows: verify that PII is removed before hitting external endpoints.
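
A golden-fixture test for the first point might look like this, assuming Vitest as the runner and a hand-maintained fixtures file:

// tests/assistant-actions.test.ts: golden-fixture sketch (assumes Vitest and resolveJsonModule)
import { describe, it, expect } from 'vitest';
import { AssistantActionSchema } from '@/types/assistant';
import golden from './fixtures/assistant-actions.json';

describe('assistant action contract', () => {
  it('accepts every golden fixture', () => {
    for (const fixture of golden as unknown[]) {
      expect(AssistantActionSchema.safeParse(fixture).success).toBe(true);
    }
  });

  it('rejects output with a missing payload', () => {
    expect(AssistantActionSchema.safeParse({ type: 'open_url' }).success).toBe(false);
  });
});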

Real-world example: Booking flow that Siri can trigger

Imagine a booking PWA written in Next.js + TypeScript. Users can say "Hey Siri, book a hair appointment with Vio on Friday". The steps:

  1. Siri App Intent captures the utterance and maps it to a normalized intent with entities (name, service, date).
  2. Native wrapper performs a privacy check (has the user allowed assistant bookings?). If yes, it replaces contact names with account IDs and opens your PWA with a signed ephemeral token and the normalized intent JSON in the query string or via a short-lived POST.
  3. Your Next.js API route validates the intent shape (Zod), checks slot availability, reserves a provisional slot, and returns a typed action (AssistantActionSchema) that the client renders as a confirmation card.
  4. If anything fails (validation, or the model returned invalid JSON), the client shows a clear human-friendly error and allows manual completion.

This approach gives the fastest user flow while ensuring the assistant never receives raw PII beyond what the user consented to.
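
The "normalized intent JSON" in step 2 is easiest to keep honest with its own schema. A sketch of what that payload might look like; the field names are assumptions, not a spec:

// types/booking-intent.ts: sketch of the normalized intent the wrapper hands to the PWA
import { z } from 'zod';

export const BookingIntentSchema = z.object({
  accountId: z.string(),   // contact names already swapped for IDs by the native wrapper
  service: z.string(),     // e.g. 'haircut'
  requestedDate: z.string().refine(s => !Number.isNaN(Date.parse(s)), { message: 'invalid date' }),
  token: z.string(),       // short-lived, signed by the wrapper
});

export type BookingIntent = z.infer<typeof BookingIntentSchema>;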

What's next for assistant integrations

Look to these trends when planning assistant integrations:

  • Assistant orchestration layers — Many platforms now route assistant requests through an orchestration layer (private compute + model selection). Expect APIs that return structured outputs by contract. Design your schema-first pipeline accordingly.
  • WebNN and on-device reasoning — emerging WebNN and on-device model runtimes will enable more local transforms in JavaScript/TypeScript; keep abstraction layers so you can swap between local and cloud processing (see the seam sketched after this list).
  • Privacy-preserving ML primitives — look for Privacy SDKs that provide tokenization, redaction, and local differential privacy helpers to include in your TypeScript pipeline.
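
One cheap way to prepare for that local/cloud swap today is a thin seam you can re-implement later without touching callers. A sketch:

// privacy/transformer.ts: a seam so on-device (WebNN-era) and cloud redaction can be swapped freely
export interface PiiTransformer {
  redact(text: string): Promise<string>;
}

// Today: a simple regex-based implementation (illustrative pattern only)
export const regexTransformer: PiiTransformer = {
  async redact(text) {
    return text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[REDACTED]');
  },
};

// Later: an on-device or provider-SDK-backed implementation can replace regexTransformer behind the same interface.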

Common pitfalls and how to avoid them

  • Trusting the model — never trust assistant text to be valid JSON. Always parse and validate with a schema.
  • Leaking PII in logs — sanitize before logging or use one-way hashes for audit purposes.
  • Assuming assistant hooks exist — feature-detect and design clean fallbacks (explicit user input, web speech, or email link flows).
  • Mixing responsibilities — separate intent routing (native wrapper or assistant) from business logic (server-side TypeScript) to make privacy reviews easier.

Actionable checklist — ship assistant support in 4 sprints

  1. Prototype: add a server-side proxy that calls the model provider and returns a validated AssistantActionSchema. Add a simple React button to request an assistant action.
  2. Privacy hardening: implement redaction, consent screens, and audit logging. Add schema validation end-to-end.
  3. Native companion: build a minimal wrapper that routes Siri/App Intents to your PWA with short-lived tokens. Test on-device behavior.
  4. Polish & scale: add rate-limiting, telemetry (privacy-safe), e2e tests, and a feature flag for toggling assistant integrations per region/user.

Practical rule: schema + redaction + fallback beats attempting a single perfect integration.

Resources & libraries to consider (2026)

  • Zod / io-ts — runtime validation for TypeScript schemas.
  • Web Speech API — client-side voice capture for web-first flows.
  • WebAuthn / MSAL / OIDC libraries — for exchanging short-lived tokens between the native and web parts.
  • Privacy SDKs — look for provider SDKs offering PII redaction or private compute hooks.

Closing: a pragmatic, TypeScript-first future for assistants

By 2026, assistants like Siri are richer and sometimes run on or alongside models such as Gemini. That creates an opportunity for PWAs and TypeScript-first apps to participate — but only if you accept three constraints: validate everything, minimize the data you share, and provide graceful fallbacks when the platform doesn't cooperate. Use TypeScript types and runtime validators to turn fuzzy model outputs into safe, auditable actions. Prefer server-side proxies for sensitive operations, and keep the native surface minimal to benefit from OS-level assistant features.

Takeaways

  • Design intent schemas first and validate them at runtime (Zod/io-ts).
  • Sanitize and redact before calling external models — add consent gates.
  • Use a minimal native companion if you need Siri App Intent parity; otherwise iterate with web-first flows.
  • Test with realistic model outputs and add graceful degradation for every assistant path.

Call to action

Start by drafting the assistant schemas for one user flow in your app. Clone a small Next.js + TypeScript starter, wire a model proxy with Zod validation, and toggle a feature flag for assistant mode. If you want a ready-made checklist and starter repo, grab the companion repo (TypeScript starter, Zod schemas, and example Next.js API proxy) on our GitHub and subscribe for updates — we'll publish a sample native wrapper and testing harness for Siri App Intents this quarter.


