Seven‑Day App: A TypeScript Playbook for Building a Small App with AI Assistants
A practical 7‑day TypeScript plan combining scaffolding, typed APIs, and prompt engineering to ship a micro app quickly with LLM help.
Build a usable, typed app in seven days — even with an LLM as your pair programmer
If you’re a developer or tech lead who needs to ship a small, reliable app fast, you know the pain: uncertain types, shaky API contracts, flaky AI outputs, and a million tiny setup decisions that slow you way down. In 2026 those frictions matter less — if you plan for them. This playbook gives a pragmatic, day-by-day TypeScript plan to build a small “Seven‑Day” app (think: a dining recommender) that uses typed APIs, scaffolding generators, and LLM-assisted development to iterate rapidly and safely.
Why this approach matters in 2026
Micro apps and “vibe coding” exploded after 2023–2025, as non‑developers and lean teams rapidly prototyped useful personal apps with AI assistance. At the same time, LLMs have grown more powerful and multimodal, and the major providers (OpenAI, Anthropic, and Google’s Gemini family) all offer function calling and structured output. The winning pattern in 2026 is not “AI writes everything” — it’s “AI + strong types + scaffolding”: the LLM drafts, the types verify.
“Vibe-coding” and micro apps are proof that speed + guardrails win. Your job: add TypeScript guardrails so those ideas are reliable.
What you’ll get from this plan
- A concrete 7‑day schedule focused on TypeScript-first decisions
- Examples for typed domain models, Zod validation, and tRPC-style typed APIs
- Prompt engineering patterns that produce structured JSON you can validate with types
- Scaffolding and automation steps so an LLM (or junior dev) can follow the plan
- Deployment, cost control, and safety checks for LLM usage
Quick overview: the seven days
- Day 1: Define the MVP and domain types — get your types on paper and in code.
- Day 2: Scaffold repo and developer tooling (TS, ESLint, Prettier, CI).
- Day 3: Build typed API endpoints and Zod schemas; wire LLM server integration.
- Day 4: Iterate prompts and implement structured function-calling (JSON/schema outputs).
- Day 5: UI with typed hooks (tRPC or typed fetch), and UX for assistant prompts.
- Day 6: Tests and AI-assisted scaffolding for repeatability (Playwright, unit tests, codegen).
- Day 7: Polish, deploy, monitor, and set cost & safety guardrails.
Day 1 — Plan the MVP & author domain types (60–120 min)
Start by writing the simplest user story: “Help a small group pick a restaurant.” From that, design a minimal domain model. Capturing this as TypeScript upfront prevents a thousand tiny bugs later.
Deliverables
- TypeScript domain interfaces
- High-level API contract (endpoints and payload shapes)
- Prompt templates for the assistant (first pass)
Example types
export type Cuisine = 'american' | 'mexican' | 'japanese' | 'italian' | 'vegetarian';

export interface UserPreferences {
  userId: string;
  partySize: number;
  preferredCuisines: Cuisine[];
  maxDistanceKm?: number;
  priceRange?: 1 | 2 | 3; // 1 = $, 3 = $$$
}

export interface Restaurant {
  id: string;
  name: string;
  lat: number;
  lng: number;
  cuisine: Cuisine;
  priceRange: 1 | 2 | 3;
  score?: number; // computed
}
Actionable: Save these files in src/types or src/domain as ground truth. If an LLM produces any data, you will parse and validate it against these types.
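For the API contract deliverable, one shared file that names each endpoint next to its payload shapes works well. A minimal sketch (the endpoint set and names here are illustrative, not prescriptive):

// src/domain/api-contract.ts: names every endpoint and its payload shapes
import type { UserPreferences, Restaurant } from './types';

// POST /api/recommend
export type RecommendRequest = UserPreferences;
export interface RecommendResponse {
  recommendations: Restaurant[];
}

// POST /api/feedback (lets users approve or reject a pick)
export interface FeedbackRequest {
  userId: string;
  restaurantId: string;
  liked: boolean;
}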
Day 2 — Scaffold repo & developer tooling (2–4 hours)
Scaffold with a single command and add linting, type checks, and pre-commit hooks. The goal is a reproducible environment for humans and LLMs alike.
Scaffold suggestion (Vite + React + TypeScript)
npm create vite@latest seven-day-app -- --template react-ts
cd seven-day-app
npm install
Essential tooling
- TypeScript: strict mode (see tsconfig below)
- ESLint + TypeScript plugin
- Prettier
- Husky + lint-staged
- CI: GitHub Actions for tsc, lint, tests
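For the CI item, a minimal GitHub Actions workflow might look like this (a sketch: adjust the Node version and script names to match your package.json):

# .github/workflows/ci.yml
name: ci
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx tsc --noEmit
      - run: npx eslint .
      - run: npm test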
Recommended tsconfig (key bits)
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "strict": true,
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "noUncheckedIndexedAccess": true
  }
}
Tip: Keep strict set to true. If you're worried about initial velocity, use targeted, temporary suppressions (// @ts-expect-error, or disabling an ESLint rule for a single line) rather than loosening the config; your long-term velocity improves if you fix type errors early.
Day 3 — Typed APIs and the LLM server integration (4–8 hours)
Now convert your domain model into a typed API. I recommend one of two patterns:
- tRPC + Zod for fully type-safe server-to-client.
- OpenAPI + openapi-typescript to generate types from server schemas.
Example server route using Zod + tRPC (simplified)
// server/src/schemas.ts
import { z } from 'zod';

export const UserPreferencesSchema = z.object({
  userId: z.string(),
  partySize: z.number().int().min(1),
  preferredCuisines: z.array(z.string()),
  maxDistanceKm: z.number().optional(),
  priceRange: z.union([z.literal(1), z.literal(2), z.literal(3)]).optional(),
});
export type UserPreferences = z.infer<typeof UserPreferencesSchema>;

// server/src/router/recommend.ts
import { t } from '../trpc';
import { UserPreferencesSchema, RestaurantListSchema } from '../schemas';

export const recommend = t.procedure
  .input(UserPreferencesSchema)
  .query(async ({ input, ctx }) => {
    // call to LLM happens here (server-side)
    const llmResponse = await ctx.llmClient.recommendRestaurants(input);
    // validate with Zod (defensive); RestaurantListSchema lives alongside UserPreferencesSchema
    const parsed = RestaurantListSchema.safeParse(llmResponse);
    if (!parsed.success) throw new Error('LLM returned invalid shape');
    return parsed.data;
  });
Actionable: Ensure the server exposes only validated shapes. Even when the LLM is the source of truth for recommendations, your code must validate the output.
Day 4 — Prompt engineering for structured, typed outputs
Day 4 is about transforming free-form LLM replies into reliable, parseable JSON. Instead of asking “recommend restaurants,” ask the model to return JSON matching a schema and include an example. Use function-calling or JSON Schema where possible.
Prompt template (structured output)
System: You are a restaurant recommender. Always respond with JSON exactly matching the schema.
Schema: {
  "type": "object",
  "properties": {
    "recommendations": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "id": { "type": "string" },
          "name": { "type": "string" },
          "lat": { "type": "number" },
          "lng": { "type": "number" },
          "cuisine": { "type": "string" },
          "priceRange": { "type": "integer" },
          "score": { "type": "number" }
        },
        "required": ["id", "name", "lat", "lng", "cuisine", "priceRange"]
      }
    }
  },
  "required": ["recommendations"]
}
User: Input preferences: { ... }
Assistant: (only JSON matching the schema)
Prefer function-calling APIs (OpenAI/Anthropic/Gemini styles) when available; they reduce hallucinations and make parsing straightforward.
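For concreteness, here is roughly what that looks like with the OpenAI Node SDK's tools API (a sketch: the model id and function name are placeholders, and Anthropic and Gemini offer analogous mechanisms):

import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// the JSON Schema from the prompt template above, abbreviated here
const recommendationJsonSchema = {
  type: 'object',
  properties: {
    recommendations: { type: 'array', items: { type: 'object' } },
  },
  required: ['recommendations'],
};

export async function recommendViaFunctionCall(prefs: unknown) {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini', // placeholder: use your provider's current model
    messages: [
      { role: 'system', content: 'You are a restaurant recommender.' },
      { role: 'user', content: `Preferences: ${JSON.stringify(prefs)}` },
    ],
    tools: [
      {
        type: 'function',
        function: {
          name: 'return_recommendations',
          description: 'Return ranked restaurant recommendations',
          parameters: recommendationJsonSchema,
        },
      },
    ],
    // force the model to call the function instead of replying in prose
    tool_choice: { type: 'function', function: { name: 'return_recommendations' } },
  });

  const call = completion.choices[0]?.message.tool_calls?.[0];
  if (!call) throw new Error('Model did not call the function');
  // arguments arrives as a JSON string; still validate it with Zod on arrival
  return JSON.parse(call.function.arguments);
}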
Validate LLM output with Zod
import { z } from 'zod';

const RestaurantSchema = z.object({
  id: z.string(),
  name: z.string(),
  lat: z.number(),
  lng: z.number(),
  cuisine: z.string(),
  priceRange: z.number().int(),
  score: z.number().optional(),
});

const RecommendationResponse = z.object({
  recommendations: z.array(RestaurantSchema),
});

// usage
const parsed = RecommendationResponse.safeParse(JSON.parse(llmText));
if (!parsed.success) { /* handle gracefully */ }
Takeaway: Your LLM prompts should treat structured output as non-negotiable. Validate everything on arrival.
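One refinement worth adding: when validation fails, feed the Zod error back to the model and retry once before falling back. A minimal sketch, where callLLM is a placeholder for your own client returning raw model text:

import type { ZodSchema } from 'zod';

export async function askWithRetry<T>(
  schema: ZodSchema<T>,
  prompt: string,
  callLLM: (prompt: string) => Promise<string>, // placeholder for your LLM client
  maxAttempts = 2,
): Promise<T> {
  let lastError = '';
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const fullPrompt = lastError
      ? `${prompt}\nYour previous reply failed validation: ${lastError}\nReturn corrected JSON only.`
      : prompt;
    const text = await callLLM(fullPrompt);
    try {
      const parsed = schema.safeParse(JSON.parse(text));
      if (parsed.success) return parsed.data;
      lastError = parsed.error.message;
    } catch {
      lastError = 'Response was not valid JSON';
    }
  }
  throw new Error(`LLM output failed validation after ${maxAttempts} attempts`);
}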
Day 5 — Build the UI with typed hooks and offline flows
Day 5 is where you connect typed APIs to the UI. Use typed hooks so your components get autocomplete and compile-time guarantees.
Example React hook with tRPC or typed fetch
// client/src/hooks/useRecommendations.ts
import { useQuery } from '@tanstack/react-query';
import type { UserPreferences } from 'shared/types';
import { RecommendationResponse } from 'shared/schemas'; // the Zod schema from Day 4

export function useRecommendations(prefs: UserPreferences) {
  return useQuery({
    queryKey: ['recommendations', prefs],
    queryFn: async () => {
      const res = await fetch('/api/recommend', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(prefs),
      });
      const data = await res.json();
      // runtime validation (again)
      return RecommendationResponse.parse(data);
    },
    staleTime: 1000 * 60 * 5,
  });
}
Expose a simple assistant UI pattern: a small chat-like box that lets the user refine preferences via a guided prompt (checkboxes for cuisines, sliders for distance). The assistant should accept the changes and re-run the recommendation flow.
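A minimal version of that pattern might look like the sketch below; the component and prop names are illustrative, and changing any control re-runs the query automatically because prefs is part of the query key:

// client/src/components/PreferencePanel.tsx (illustrative)
import { useState } from 'react';
import type { Cuisine, UserPreferences } from 'shared/types';
import { useRecommendations } from '../hooks/useRecommendations';

const CUISINES: Cuisine[] = ['american', 'mexican', 'japanese', 'italian', 'vegetarian'];

export function PreferencePanel({ userId }: { userId: string }) {
  const [prefs, setPrefs] = useState<UserPreferences>({
    userId,
    partySize: 2,
    preferredCuisines: [],
    maxDistanceKm: 5,
  });
  const { data, isLoading } = useRecommendations(prefs);

  const toggleCuisine = (c: Cuisine) =>
    setPrefs((p) => ({
      ...p,
      preferredCuisines: p.preferredCuisines.includes(c)
        ? p.preferredCuisines.filter((x) => x !== c)
        : [...p.preferredCuisines, c],
    }));

  return (
    <div>
      {CUISINES.map((c) => (
        <label key={c}>
          <input type="checkbox" checked={prefs.preferredCuisines.includes(c)} onChange={() => toggleCuisine(c)} />
          {c}
        </label>
      ))}
      <input
        type="range"
        min={1}
        max={25}
        value={prefs.maxDistanceKm}
        onChange={(e) => setPrefs((p) => ({ ...p, maxDistanceKm: Number(e.target.value) }))}
      />
      {isLoading ? <p>Thinking…</p> : <ul>{data?.recommendations.map((r) => <li key={r.id}>{r.name}</li>)}</ul>}
    </div>
  );
}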
Day 6 — Tests, AI-assisted scaffolding & automation
At this point your app is mostly functional. Spend Day 6 automating quality and using the LLM to scaffold repetitive bits like tests and e2e flows.
Use the LLM to generate tests — then review manually
- Ask an LLM to generate unit tests for parsing logic
- Ask it to generate Playwright or Cypress flows for the primary user story
- Always review generated tests for false positives/negatives
// Example Jest test for parser
import { RecommendationResponse } from '../schemas';

test('parses valid llm response', () => {
  const sample = { recommendations: [{ id: '1', name: 'Sushi', lat: 0, lng: 0, cuisine: 'japanese', priceRange: 2 }] };
  expect(() => RecommendationResponse.parse(sample)).not.toThrow();
});
Automate codegen
Use tools to keep client and server types in sync:
- tRPC's end-to-end type inference (types flow from server to client without a codegen step)
- openapi-typescript if using OpenAPI
- zod-to-ts for converting validation schemas to TS interfaces
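For instance, a tiny codegen script with zod-to-ts (its zodToTs/printNode API is assumed here from the package README) can emit a plain type for docs or non-TypeScript consumers:

// scripts/gen-types.ts (sketch)
import { writeFileSync } from 'node:fs';
import { zodToTs, printNode } from 'zod-to-ts';
import { UserPreferencesSchema } from '../server/src/schemas';

const { node } = zodToTs(UserPreferencesSchema, 'UserPreferences');
writeFileSync('generated/user-preferences.d.ts', `export type UserPreferences = ${printNode(node)};\n`);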
Day 7 — Polish, deploy, monitor, and guardrails
Finish with reliability: deployment, observability, and LLM cost controls. Without these, rapid prototypes burn money quickly.
Deployment
- Deploy frontend to Vercel/Netlify/Cloudflare Pages
- Deploy server to serverless functions or a small Node container (Fly.io, Render, or Vercel Serverless)
- Store LLM keys in secrets and limit scopes
Monitoring & Observability
- Track latency, tokens per call, and success rate (Zod validation failures)
- Instrument with OpenTelemetry or a light APM (Sentry/Datadog)
- Log LLM inputs and normalized outputs for audit (redact PII)
LLM cost and safety guardrails
- Cache results aggressively for identical inputs
- Limit max tokens and reduce model temperature in production
- Provide rule-based fallback (simple distance + rating sort) if the LLM fails or returns bad data
Example fallback: run a deterministic ranking using POI data already in your database and present it if the LLM fails schema validation.
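A minimal sketch of that fallback, assuming your stored POI rows carry a rating and an optional user location is available (the llmClient example below imports it):

// server/src/fallback.ts (sketch; adjust import paths to your layout)
import type { Restaurant, UserPreferences } from './types';

// great-circle distance in km between two lat/lng points
function haversineKm(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLng = toRad(bLng - aLng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLng / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

export function fallbackRecommendations(
  input: UserPreferences,
  candidates: (Restaurant & { rating: number })[] = [], // rows from your POI table
  userLocation?: { lat: number; lng: number },
) {
  return {
    recommendations: candidates
      .filter((r) => input.preferredCuisines.length === 0 || input.preferredCuisines.includes(r.cuisine))
      .map((r) => ({
        ...r,
        distanceKm: userLocation ? haversineKm(userLocation.lat, userLocation.lng, r.lat, r.lng) : 0,
      }))
      .filter((r) => input.maxDistanceKm == null || !userLocation || r.distanceKm <= input.maxDistanceKm)
      .sort((a, b) => a.distanceKm - b.distanceKm || b.rating - a.rating)
      .slice(0, 5),
  };
}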
Blueprint prompts an LLM can follow to scaffold your repo
Want an LLM to act as your scaffold generator? Provide a clear checklist and strict output requirements. Example instruction prompt:
"You are a TypeScript scaffolding assistant. Output a JSON object listing shell commands to run, files to create (path + content), and any environment variables needed. Only output JSON with keys: commands, files, env. Use minimal explanations."
By returning JSON, the LLM becomes easily scriptable. You can feed that JSON into a small Node script to apply the changes.
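That applier script can be very small. A sketch, assuming the plan JSON lives in a file you have already reviewed:

// scripts/apply-scaffold.ts (sketch)
import { mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { dirname } from 'node:path';
import { z } from 'zod';

// mirrors the contract in the instruction prompt above
const Plan = z.object({
  commands: z.array(z.string()),
  files: z.array(z.object({ path: z.string(), content: z.string() })),
  env: z.record(z.string()),
});

const plan = Plan.parse(JSON.parse(readFileSync(process.argv[2] ?? 'plan.json', 'utf8')));

for (const file of plan.files) {
  mkdirSync(dirname(file.path), { recursive: true });
  writeFileSync(file.path, file.content);
}

// print rather than execute, so a human reviews the commands first
console.log('Run these commands:\n' + plan.commands.join('\n'));
console.log('Set these env vars: ' + Object.keys(plan.env).join(', '));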
Advanced patterns and 2026 trends to adopt
- Local and hybrid LLMs: use on-device or proxied models for privacy-sensitive operations and cloud LLMs for heavy reasoning.
- Model orchestration: route trivial prompts to smaller models and complex reasoning to larger models to save cost (see the router sketch after this list).
- Function-calling + typed bindings: the dominant pattern in 2025–2026 for safety and correctness.
- Tool-using agents: let the assistant call external APIs (maps, reservations) but restrict the scope and validate every tool response.
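A model router from the orchestration bullet can start as a one-line heuristic (a sketch; the model ids are placeholders):

// route cheap extraction to a small model, heavier reasoning to a large one
type ModelTier = 'small' | 'large';

const MODEL_BY_TIER: Record<ModelTier, string> = {
  small: 'small-model-id', // placeholder
  large: 'large-model-id', // placeholder
};

export function modelFor(prompt: string, needsReasoning = false): string {
  // crude heuristic: long prompts or flagged tasks go to the large model
  const tier: ModelTier = needsReasoning || prompt.length > 2000 ? 'large' : 'small';
  return MODEL_BY_TIER[tier];
}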
Common pitfalls and how to avoid them
- Relying on raw LLM output: Always validate and sanitize. Use Zod + runtime checks.
- Skipping strict TypeScript: Looser typing in the name of speed adds debt. Keep strict mode and treat types as living documentation.
- Not capping token usage: Set per-user and per-request limits and cache heavily (a minimal cache sketch follows this list).
- Poor testing: Use LLMs to suggest tests, but always review and run them locally/CI.
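The caching pitfall above is cheap to address. A minimal in-process cache keyed by a hash of the request (a sketch; for multi-instance deployments use Redis or similar):

import { createHash } from 'node:crypto';

const cache = new Map<string, { value: unknown; expiresAt: number }>();

export async function cached<T>(input: unknown, ttlMs: number, compute: () => Promise<T>): Promise<T> {
  // note: JSON.stringify is key-order sensitive, so normalize inputs for best hit rates
  const key = createHash('sha256').update(JSON.stringify(input)).digest('hex');
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const value = await compute();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

Wrapping the Day 3 endpoint is then one line: cached(prefs, 5 * 60_000, () => callRecommendLLM(prefs)).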
Real-world example: recommendation endpoint (concise)
// server/src/llmClient.ts (simplified)
import { RecommendationResponse, type UserPreferences } from './schemas';
import { fallbackRecommendations } from './fallback'; // the Day 7 deterministic ranking

export async function callRecommendLLM(input: UserPreferences) {
  const prompt = `Return JSON matching the schema: ... \nUser: ${JSON.stringify(input)}`;
  // Node 18+ ships a global fetch; on older runtimes, import node-fetch
  const res = await fetch(process.env.LLM_API_URL!, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LLM_KEY}`,
    },
    body: JSON.stringify({ prompt, max_tokens: 800, temperature: 0.2 }),
  });
  const text = await res.text();
  let json: unknown;
  try {
    json = JSON.parse(text);
  } catch {
    return fallbackRecommendations(input); // model returned non-JSON
  }
  const parsed = RecommendationResponse.safeParse(json);
  if (!parsed.success) {
    // fallback deterministic ranking (pass your POI rows as the second argument)
    return fallbackRecommendations(input);
  }
  return parsed.data;
}
Actionable checklist to finish this week
- Day 1: Commit domain types and an API contract file.
- Day 2: Scaffold and add CI with tsc check + lint.
- Day 3: Implement server endpoint and Zod validation.
- Day 4: Implement function-calling prompt and validate responses.
- Day 5: Wire to UI via typed hooks and create a basic assistant UI.
- Day 6: Add tests and AI-assisted scaffolding to generate remaining boilerplate.
- Day 7: Deploy, add observability, and set token/cost budgets.
Why this reduces long-term risk
By combining TypeScript with runtime validation, generator-driven scaffolding, and deliberate prompt engineering, you keep the velocity benefits of LLMs while avoiding their most costly risks: hallucinations, schema drift, and runaway costs. The pattern in 2026 is “fast + safe” — build quickly, but build with typed guardrails.
Final notes on developer productivity and team handoffs
Create a README that documents the type contracts and prompt templates. If you use an LLM to scaffold, include the exact prompt used so future engineers can reproduce or iterate. Consider adding a tiny “playbook” route in the repo: /PLAYBOOK.md that lists the seven-day steps, environment variables, and run commands.
Call to action
Try this playbook on your next micro app. Start Day 1 right now: commit the domain types and a one-line README that explains the single user story. If you want a ready-made starter, clone the companion repo (look for "seven-day-typescript-playbook" on GitHub) and use the included prompt templates. Share your results with the TypeScript community — and if you want, paste a generated prompt or an LLM output and I’ll review how to make it type-safe.