# Simon Quick

## Product-focused lead software engineer · Applied AI/RAG · Full-stack delivery

[hi@siquick.com](mailto:hi@siquick.com) · [Resume](/resume) · [LinkedIn](https://www.linkedin.com/in/simonquick/)

## About

I'm a product-focused lead software engineer based in Sydney, Australia, with 15+ years’ experience delivering products from idea to scale across startups, SMEs, and global commerce leaders. Skilled at combining user empathy, product sense, and technical excellence with applied AI. I'm an experienced collaborator who thrives in high-growth and cross-disciplinary teams.

## Current role

**Lead Product Engineer** - Pollen (award-winning design & product studio)

Lead engineering and AI strategy; hands-on across web, mobile, backend, and cloud.

## Career Highlights

- **Clara - AI companion for arthritis (Webby-nominated)** - Evidence-based support; 87% specialist approval and 90% would recommend [link](https://clara.arthritis.org.au/)
- **PollenAI - Agentic AI Search** - Adopted by the Australian federal government, healthcare providers, and national infrastructure; processing millions of queries annually [link](https://pollen.ventures/pollenai)
- **Rebuilt - Part-time CTO** - Australia’s first self-service platform for verified Product Carbon Footprints (PCFs) [link](https://rebuilt.eco)
- **Sound Shelter - Founder** - Vinyl marketplace scaled to ~100k users and 30+ partner stores [link](https://soundshelter.net)
- **Youcheck - Founding engineer** - Startup awarded Google DNI funding to combat online misinformation using NLP [link](https://www.compasslist.com/insights/youcheck-by-precept-the-fact-checking-app-fighting-misinformation)
- **HomeAway.com** - First technical hire in APAC for HomeAway.com (prior to its $3.9B acquisition by Expedia Inc.)
[link](https://www.expediagroup.com/media/media-details/2015/Expedia-To-Acquire-HomeAway-Inc/default.aspx)

## Focus areas

- Applied Generative AI, RAG, and agents
- Product engineering, UX/UI collaboration
- Cloud architecture, reliability, and evals

## Currently

- **Interests:** Shipping user-facing AI features including RAG, agents, and workflows; measurement & evals; cost/performance tuning
- **Availability:** Open to select collaborations and advisory on applied AI and product engineering

[Work with me](mailto:hi@siquick.com) · [See my career](https://www.linkedin.com/in/simonquick/)

# Career

I am a product-focused lead software engineer with over 15 years of experience delivering products from concept to scale for startups, SMEs, and global leaders. Based in Sydney, I have dual Australian/UK citizenship and have led teams in Melbourne, London, and Barcelona. My work combines technical excellence, user empathy, and applied AI.

## Key Career Highlights

- Led the build of a Webby-nominated AI companion for arthritis sufferers, **Clara**, achieving **87% approval** from specialists and **90% recommending it to patients**.
- Delivered an agentic AI search adopted by the Australian federal government, healthcare providers, and national infrastructure, on track to answer nearly **2 million questions per year**.
- Founded **Sound Shelter**, a vinyl marketplace scaled to **100k users** and **30+ partner stores**.
- Part-time CTO of **Rebuilt**, Australia’s first self-service platform for generating verified Product Carbon Footprints (PCFs).
- Engineered and integrated platforms for **Apple, Vodafone, Expedia, Puma, and CSIRO**.
- Founding engineer at **Precept**, awarded **Google DNI funding** to combat online misinformation using NLP.

---

## Experience

### Lead Product Engineer — Pollen

*Jan 2022 – Present · Sydney, Australia*

Pollen is an award-winning digital design, UX, and product studio. I lead the engineering team, shaping technical direction and AI strategy.
Promoted from Senior Engineer to Lead in March 2023.

#### Selected Achievements

- **Clara – AI Companion for Arthritis (Webby-nominated):** Architected and led the build of an iOS/Android/Web app supporting 3.7M Australians with arthritis. Designed a secure RAG pipeline to surface contextual answers, achieving **87% approval** from subject matter experts and **90% specialist recommendation**. Featured on 9News, Sydney Morning Herald, and The Age.
- **Agentic AI Search:** Architected and launched a production AI search product, now adopted by the Australian federal government, healthcare providers, and national infrastructure organisations. Handles millions of queries annually with a pipeline including **PII redaction, LLM-as-judge classification, dynamic query rewrite, hybrid semantic/vector search, and LLM summarisation**.
- **Rebuilt (Part-time CTO):** Leading technical direction for Australia’s first self-service platform enabling manufacturers to generate and publish verified PCFs. Designed and launched the platform to make trusted carbon data accessible at scale.

#### Additional Contributions

- Leadership team member driving technical & AI strategy.
- Mentored two full-time engineers plus contractors.
- Designed infrastructure across AWS, GCP, Vercel, and Expo using IaC (Pulumi).
- Built discovery-phase proofs of concept whose technical depth helped win multi-million-dollar client projects.

**Stack:** TypeScript, Python, React/Next.js, Node.js, Django, Prisma, Postgres, TailwindCSS, React Native/Expo, AWS, GCP, Pulumi, RAG, LlamaIndex, Langfuse, OpenAI, Anthropic

---

### Founder / Engineer — Sound Shelter

*Apr 2013 – Jan 2024 · Sydney, Australia*

- Built and scaled a vinyl marketplace to **100k users** and **30+ partner stores**.
- Designed recommendation algorithms and built infrastructure to pull catalogues via APIs, feeds, and scraping.
- Created and launched a native iOS app.
**Stack:** React/Next.js, Node.js, Prisma, MySQL, Tailwind, React Native, AWS

---

### Senior Software Engineer — Endeavour

*Jan 2020 – Jan 2022 · Sydney, Australia*

- Migrated the events platform to React + Django, serving thousands of prospective students.
- Built a student onboarding platform used by hundreds per term.
- Developed a clinic booking front-end handling hundreds of instant payments weekly.

**Stack:** React, Django, Postgres, Tailwind, AWS

---

### Senior Software Engineer — Precept

*Aug 2018 – Aug 2019 · Barcelona, Spain*

Precept (YouCheck) received Google DNI funding to improve online information environments.

- Built backend APIs for ML-driven misinformation detection in text and images.
- Led a team of two on a React/Node platform connecting journalists with experts.
- Managed DevOps and code review.

**Stack:** React, Next.js, Node.js, Python, Django, Google Cloud

---

### Integration Engineer — Partnerize

*Dec 2016 – Apr 2018 · Sydney, Australia*

- APAC technical lead integrating global clients (Apple, Expedia, Vodafone, Nike, Emirates).
- Built custom integrations with third-party APIs for partner marketing infrastructure.
- Pre-sales/post-sales consultant on multi-million-dollar deals.

**Stack:** Python, MySQL

---

### Sales Engineer — HomeAway.com (Expedia Inc.)

*Jul 2012 – Aug 2016 · Melbourne & Sydney, Australia*

- First technical hire in APAC.
- Built feed parsing infrastructure powering ~20,000 property listings for two years.
- Led technical consulting for APAC pre- and post-sales.
**Stack:** Python

---

## Technical Skills

- **Languages:** TypeScript, JavaScript, Python, SQL
- **Front-end:** React, Next.js, React Native, Tailwind
- **Back-end:** Node.js, Hono, Express, Django, FastAPI, GraphQL, Prisma, Drizzle, Postgres
- **AI:** RAG, Semantic/Hybrid search, Vector databases, Prompt engineering, OpenAI, Anthropic, LlamaIndex, Langfuse, Vercel AI SDK, Agents / Workflows
- **Infrastructure:** AWS, Google Cloud, Vercel, Docker, CI/CD

---

## Education

**BSc (Hons) Internet Computing**
Northumbria University — Newcastle upon Tyne, UK

## Working Rights

Australian citizen (dual Australian/UK)

# Claudette Patterns for TypeScript: A Guide to the AI SDK

**TL;DR:** Claudette gives Python developers an ergonomic way to work with Claude, featuring a stateful chat object, an automatic tool loop, and structured outputs. This guide shows how to recreate those same powerful patterns in TypeScript using the Vercel AI SDK.

**Acknowledgement:** Claudette is an Answer.AI project that teaches through literate notebooks. Credit to its maintainers for a clean, well‑explained design. ([claudette.answer.ai](https://claudette.answer.ai/))

## Recreating Claudette's Core Features in TypeScript

| Pattern | Claudette (Python) | AI SDK (TypeScript) Implementation |
| :--- | :--- | :--- |
| **Multi-step Tools** | A `Chat.toolloop()` runs calls until a task is done. | Use `generateText` with a `stopWhen` condition. |
| **Structured Output** | `Client.structured()` returns a typed Python object. | Use `generateObject` with a Zod or JSON schema. |
| **Prompt Caching** | Helpers mark cacheable parts of a prompt. | Use `providerOptions` to enable caching with a TTL. |
| **Server Tools** | Wires up tools like Text Editor and Web Search. | Attach provider tools for Text Editor, Web Search, etc. |

---

## 1. Pattern: Automatic Multi-step Tool Use

A key feature in Claudette is the `toolloop`, which automatically executes tool calls and feeds the results back to the model until a task is complete. You can build the same loop in the AI SDK by defining tools and using `generateText` or `streamText` with a `stopWhen` condition. This tells the SDK to re-invoke the model with tool results until your condition is met, preventing runaway loops.

```ts
// pnpm add ai @ai-sdk/anthropic zod
import { streamText, tool, stepCountIs } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const add = tool({
  description: 'Add two integers',
  inputSchema: z.object({ a: z.number(), b: z.number() }),
  execute: async ({ a, b }) => a + b,
});

const result = await streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  tools: { add },
  stopWhen: stepCountIs(5), // Stop after 5 steps
  prompt: 'What is (12345 + 67890) * 2? Use tools and explain.',
});

for await (const chunk of result.textStream) process.stdout.write(chunk);
```

## 2. Pattern: Strongly Typed Structured Outputs

Claudette's `structured()` method is a convenient way to get typed Python objects from the model. The AI SDK provides `generateObject` for the same purpose. You provide a Zod schema, and the SDK handles sending the schema to the model, validating the response, and returning a typed object.

```ts
// pnpm add ai @ai-sdk/openai zod
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const Person = z.object({
  first: z.string(),
  last: z.string(),
  birth_year: z.number(),
});

const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: Person,
  prompt: 'Extract data for Ada Lovelace.',
});

console.log(object); // validated against the Person schema
```

## 3. Pattern: Effective Prompt Caching

Claudette's documentation highlights how to cache large, repeated prompt sections to save on costs. In the AI SDK, you can achieve this using `providerOptions.anthropic.cacheControl`.
This marks parts of a message as cacheable. Remember that Anthropic enforces minimum token thresholds, so this is most effective for large system prompts or RAG context. You can verify caching was successful by checking the `providerMetadata`.

```ts
// pnpm add ai @ai-sdk/anthropic
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  messages: [
    {
      role: 'system',
      content: 'Long, reusable instructions...',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    { role: 'user', content: 'User-specific question...' },
  ],
});

console.log(result.providerMetadata?.anthropic?.cacheCreationInputTokens);
```

## 4. Pattern: Using Anthropic's Server Tools

The AI SDK also provides access to Anthropic's server-side tools, like Text Editor and Web Search, which are explained in the Claudette notebooks.

### Implementing the Text Editor

The Text Editor tool requires careful sandboxing. Your `execute` function is the safety boundary and must validate all paths and commands.

```ts
// app/api/edit/route.ts
// pnpm add ai @ai-sdk/anthropic
import { NextRequest } from 'next/server';
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import path from 'node:path';

const ROOT = path.resolve(process.cwd(), 'repo');

const safe = (p: string) => {
  const abs = path.resolve(ROOT, p);
  if (!abs.startsWith(ROOT)) throw new Error('Path outside allowed root');
  return abs;
};

const textEditor = anthropic.tools.textEditor_20250429({
  execute: async ({ command, path: p, ...args }) => {
    const abs = safe(p);
    // ... safe implementation for 'create', 'view', 'str_replace' ...
    return 'unsupported command';
  },
});

export async function POST(req: NextRequest) {
  const { prompt } = await req.json();
  const result = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    tools: { str_replace_based_edit_tool: textEditor },
    prompt,
  });
  return new Response(result.text);
}
```

### Implementing Web Search

To use Web Search, enable it in your Anthropic Console and then attach the provider-defined tool in your code.

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const webSearch = anthropic.tools.webSearch_20250305({ maxUses: 3 });

const result = await generateText({
  model: anthropic('claude-opus-4-1-20250805'),
  prompt: 'Summarise the latest TypeScript release notes.',
  tools: { web_search: webSearch },
});
```

# Hello World

This is where I write about my learning, experiments, and other things.

# How I built this site

This site runs on a small, reliable stack for a personal profile and a Markdown blog. I prioritised clarity over novelty: predictable content files, simple rendering, strong SEO defaults, and tools to make writing fast.

## Architecture & Content

The site uses the Next.js App Router with TypeScript and PNPM. All content is stored in the repository, organised by type: `posts/page.md` for the homepage, `posts/career/page.md` for my career history, `posts/testimonials/page.md` for testimonials, and `posts/blog//` for each blog post. A post consists of two files: `meta.json` for metadata and `post.md` for the content. The first `H1` in `post.md` is used as the page title. A build process handles drafts and scheduled posts, hiding them in production but showing them with status badges in development. The site is deployed on Vercel.

## Pipeline & Tooling

- **Rendering**: Markdown is rendered using `unified`, `remark`, and `rehype`.
- **Title Extraction**: A script parses the Markdown AST to extract the first `H1` as the page title, rendering the rest as the body.
- **Scaffolding**: `pnpm run new:post` is a small CLI script that prompts for a title, generates a clean slug, and creates the `meta.json` and `post.md` files.
- **Aggregation**: A `/llms.txt` route serves a plain-text version of all site content for AI models.

## Writing a new post

1. Run `pnpm run new:post` and provide a title.
2. The script creates a new folder in `posts/blog/` with a `meta.json` file (marked as unpublished) and a `post.md` file.
3. Edit the metadata and write the post.
4. Set `published: true` to publish.

## Styling and SEO

Styling uses Tailwind CSS, including the `@tailwindcss/typography` plugin for readable article text. The font, Public Sans, is loaded via `next/font` to prevent layout shift. I use Base UI for unstyled, accessible components like the main navigation menu. The Next.js Metadata API generates page titles, descriptions, and Open Graph cards. A `sitemap.xml` and `robots.txt` are also generated dynamically.

## Why this approach

This setup optimises for writing speed and simplicity. Content lives with the code, the rendering path is testable, and the site avoids common issues like broken links or missing metadata. It's a solid foundation that can be extended later.

## Working with AI

I used OpenAI Codex to accelerate routine work, keeping final decisions in my hands. I set the direction and constraints, then asked for focused changes like scaffolding pages, building the post generator, or drafting documentation. I reviewed every patch for consistency and correctness. For example:

- The AI assistant implemented the content loaders and the `unified`/`remark`/`rehype` pipeline based on my specifications.
- It wrote the initial `new:post` CLI, which I then refined.
- It also fixed bugs, such as a type error on the `/resume` route.
- This post was drafted by an AI from my project notes and commit history, then edited by me for clarity.

The result is a small, extensible codebase.
The AI accelerated the work, but the design, constraints, and final review were human-led.

# Introducing @purepageio/fetch-engines: reliable web fetching

Extracting content from websites is unreliable. Plain HTTP requests miss content rendered by JavaScript, and bot detection can block automated traffic. Developers often rebuild the same glue code for retries, proxies, and headless browsers.

`@purepageio/fetch-engines` packages these patterns into a robust API. It provides a lightweight `FetchEngine` for simple pages and a smart `HybridEngine` that starts with a fast request and automatically escalates to a full browser when needed. It simplifies fetching HTML, Markdown, or even raw files like PDFs.

[**@purepageio/fetch-engines on npm**](https://www.npmjs.com/package/@purepageio/fetch-engines)

## Features

- **Smart Engine Selection**: Use `FetchEngine` for speed on static sites or `HybridEngine` for reliability on complex, JavaScript-heavy pages.
- **Unified API**: Fetch processed web pages with `fetchHTML()` or raw files with `fetchContent()`.
- **Automatic Escalation**: The `HybridEngine` tries a simple fetch first and only falls back to a full browser (Playwright) if the request fails or the response looks like an empty SPA shell.
- **Built-in Stealth & Retries**: The browser-based engine integrates stealth measures to avoid common bot detection, and all engines have configurable retries.
- **Content Conversion**: `fetchHTML()` can be configured to return clean Markdown instead of HTML.
- **Raw File Handling**: `fetchContent()` retrieves any type of file - PDFs, images, APIs - returning the raw content as a Buffer or string.

## Quick start

First, install the package and its browser dependencies.

```bash
pnpm add @purepageio/fetch-engines
pnpm exec playwright install
```

This example uses the `HybridEngine` to reliably fetch a potentially complex page.

```ts
import { HybridEngine, FetchError } from "@purepageio/fetch-engines";

// Initialise the engine.
// HybridEngine is best for general use.
const engine = new HybridEngine();

async function main() {
  try {
    const url = "https://quotes.toscrape.com/"; // A JS-heavy site
    const result = await engine.fetchHTML(url);
    console.log(`Fetched ${result.url}`);
    console.log(`Title: ${result.title}`);
    console.log(`HTML (excerpt): ${result.content.substring(0, 150)}...`);
  } catch (error) {
    if (error instanceof FetchError) {
      console.error(`Fetch failed: ${error.message} (Code: ${error.code})`);
    }
  } finally {
    // Shut down the browser instance managed by the engine.
    await engine.cleanup();
  }
}

main();
```

## Fetching Markdown and Raw Files (like PDFs)

To get clean prose from an article, configure the engine to return Markdown. To download a PDF, use `fetchContent()` to get the raw file buffer.

```ts
import { HybridEngine } from "@purepageio/fetch-engines";
import { writeFileSync } from "fs";

const engine = new HybridEngine();

async function fetchDocuments() {
  // 1. Fetch an article and convert it to Markdown
  const article = await engine.fetchHTML("https://example.com/blog/post", {
    markdown: true,
  });
  if (article.content) {
    console.log(article.content);
  }

  // 2. Fetch a raw PDF file
  const pdf = await engine.fetchContent("https://example.com/report.pdf");
  if (pdf.content instanceof Buffer) {
    // The library returns the raw file; parsing it is up to you
    writeFileSync("report.pdf", pdf.content);
    console.log("Downloaded report.pdf");
  }

  await engine.cleanup();
}

fetchDocuments();
```

## Choosing an engine

- **`FetchEngine`**: Best for speed with trusted, static sites or APIs that return HTML.
- **`HybridEngine`**: The recommended default. It offers the speed of a simple fetch with the reliability of a full browser fallback for dynamic sites.

This project is open source. If you use it, please report issues and share ideas on the [GitHub repository](https://github.com/purepageio/fetch-engines) to help guide its development.
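For intuition, the "escalate to a browser?" decision can be pictured as a small heuristic: treat a failed request, or an HTML body with almost no visible text, as a signal to retry with a full browser. The sketch below is a hand-rolled illustration of that idea only - it is not the library's actual implementation, and the `shouldEscalate` helper and its threshold are invented for the example.

```typescript
// Illustration only: a simplified version of the fall-back check a
// HybridEngine-style fetcher performs. The real library's heuristics differ.
type PlainFetchResult = { status: number; html: string };

function shouldEscalate(result: PlainFetchResult): boolean {
  // A failed plain request always warrants a browser retry.
  if (result.status >= 400) return true;

  // Strip scripts and tags, then measure how much visible text remains.
  const body = result.html.match(/<body[^>]*>([\s\S]*?)<\/body>/i);
  const visibleText = (body ? body[1] : result.html)
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, "")
    .trim();

  // Almost no text usually means an empty SPA shell waiting on JavaScript.
  return visibleText.length < 50;
}
```

Under this heuristic, an empty `<div id="root">` plus a script tag triggers escalation, while a server-rendered article with real body text does not.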
# A Pragmatic AI Workflow for Software Engineers

AI models are powerful but unreliable co-authors. Treating them as a single tool leads to inconsistent results and subtle errors. A better approach is a structured workflow that uses specialised tools for distinct phases of development: planning, validation, and execution. This guide outlines a production-ready process that leverages AI's speed without sacrificing engineering rigour.

## Phase 1: High-Level Planning with a Frontier Model

The first step is to translate a product requirement into a technical plan. For this, I use a powerful, large-context model like Google Gemini 2.5. Its strengths in agentic reasoning and its ability to process large amounts of information make it ideal for architectural tasks.

My process:

1. **Provide Context:** I give the model the full feature request, relevant existing code, and any key constraints (e.g., "this must be a serverless function," "use this library").
2. **Request a Plan:** I ask for a detailed implementation plan, including suggested data models, API endpoints, and a file-by-file breakdown of changes.
3. **Debate and Refine:** I challenge the initial plan, asking about edge cases, alternative libraries, or potential performance bottlenecks. This iterative dialogue produces a much stronger starting point.

The goal is to use the AI as a high-level architect to map out the solution before any code is written.

## Phase 2: Cross-Validation with a Different Model

No single model is infallible. To reduce the risk of hallucination or suboptimal design, I take the plan from Phase 1 and present it to a different model, such as GPT-5 Thinking or Claude Sonnet 4. I ask it to critique the plan: "What are the weaknesses of this approach? What could go wrong? Suggest an alternative design."

This cross-validation step is crucial.
It is especially useful for catching subtle flaws, such as references to non-existent libraries, and often uncovers a simpler implementation path or identifies dependencies that the first model missed. This is the AI equivalent of a design review.

## Phase 3: Execution with Specialised Tools

With a validated plan, I move to implementation, using a set of specialised, task-specific AI tools.

- **Cursor for Code Implementation:** For the inner loop of writing and refactoring code, Cursor is my primary tool. Its tight integration with the editor means I can apply AI assistance - from generating functions to refactoring blocks - without breaking my flow.
- **Vercel v0 for UI Scaffolding:** To quickly generate React components, I use v0. It excels at turning high-level descriptions ("a settings page with a toggle for dark mode") into clean, production-ready JSX with Tailwind CSS.
- **Perplexity for Grounded Research:** When the plan requires new knowledge (e.g., "how to use a specific feature of a new library"), I use Perplexity. Its focus on providing cited, up-to-date answers is more reliable for technical research than a general-purpose chat model.
- **Injecting Up-to-Date Library Documentation:** A model's knowledge of a library is only as current as its last training run. To get accurate answers about a specific library's API, I use the `/llms.txt` pattern. Many modern libraries, like the Vercel AI SDK, now publish an endpoint (e.g., `https://ai-sdk.dev/llms.txt`) that provides their entire documentation as a single text file. I feed this to the model as context before asking implementation questions. This prevents it from hallucinating deprecated functions or incorrect usage patterns. The main trade-off is context window size: this file can be large, so the technique is best used with large-context models. (This site uses the same approach to provide its own content to models - you can see it at `/llms.txt`.)
- **AI-Powered Code Review and Documentation:** Once the code is ready for review, I use AI to automate the first pass. This can be integrated directly into a CI/CD pipeline. When a pull request is opened, a GitHub Action triggers a script that sends the diff to a model like Claude Opus 4.1, along with the project's coding rules. The model's review - checking for bugs, style violations, or unclear logic - is then automatically posted as a comment on the PR. This catches simple errors and lets human reviewers focus on the architecture. The same process can generate docstrings and comments for the new code.

## The Foundation: A Detailed System Prompt

This entire workflow is underpinned by one principle: **define the rules before you ask for work.** Instead of a simple one-line instruction, I provide AI models with a detailed "system prompt" or a rules document that acts as a project constitution. It is the single most effective way to improve the quality and consistency of AI-generated output.

My rules document typically includes:

- **The Full Stack:** Be explicit. "This project uses Next.js (App Router), TypeScript, Tailwind CSS, and PNPM on Node 22." This prevents the model from suggesting incompatible technologies or outdated patterns.
- **Directory Structure and Naming Conventions:** Define the source of truth for code organisation. "All page components are in `app/`. Use `kebab-case` for files, `PascalCase` for React components, and `camelCase` for functions and variables."
- **Architectural Patterns:** Enforce non-negotiable design decisions. "All data fetching must occur in Server Components. State should be managed with simple props or React context; do not introduce a global state manager."
- **Coding Style and Philosophy:** Go beyond a linter config. "Style components using Tailwind utility classes directly in the JSX; do not create separate CSS files. All functions must be typed, and any use of `any` must be justified with a comment."
- **Language and Tone:** Set clear rules for prose, both in documentation and UI copy. "All user-facing text must use UK English. Avoid em-dashes (`—`); use a spaced hyphen (` - `) or a colon instead. Remove filler phrases like 'it's important to note' or 'in today's world'. Prefer the active voice."
- **Process and Workflow:** Define how the model should interact with your development process. "All git commits must follow the Conventional Commits specification (e.g., `feat: ...`, `fix: ...`). All new business logic must be accompanied by unit tests written with Vitest."

Providing these constraints up front dramatically increases the quality of the AI's output, turning it from a creative but erratic partner into a reliable executor that understands your project's specific needs.
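To make the bullets above concrete, here is a condensed sketch of what such a rules document can look like when assembled. The project details are drawn from the examples in this post and are illustrative only; adapt each section to your own stack and conventions.

```markdown
# Project rules

## Stack
- Next.js (App Router), TypeScript, Tailwind CSS, PNPM, Node 22.

## Structure & naming
- Page components live in `app/`.
- `kebab-case` files, `PascalCase` React components, `camelCase` functions.

## Architecture
- Fetch data in Server Components only.
- No global state manager; use props or React context.

## Style
- Tailwind utility classes in JSX; no separate CSS files.
- All functions typed; justify any `any` with a comment.

## Language & tone
- UK English; no em-dashes; no filler phrases; active voice.

## Process
- Conventional Commits (`feat: ...`, `fix: ...`).
- New business logic ships with Vitest unit tests.
```

A short, scannable document like this can be pasted into a system prompt, a Cursor rules file, or a CI review script, so every tool in the workflow enforces the same constitution.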