AI Prototype Studio: Landing Page Experiment

Summary
AI Prototype Studio is a structured experiment designed to evaluate how effectively AI tools can be used to rapidly prototype product ideas through a conversion-focused landing page. The project simulates a real service offering—building low-cost, non-production prototypes for startups or individuals—while testing multiple AI-assisted development workflows. Multiple variants of the landing page are generated and iterated on using combinations of local LLMs (such as Gemma via Ollama) and Claude Code, enabling direct comparison of output quality, speed, cost, and usability.
The Challenge
The challenge lies in balancing realistic product positioning with controlled experimentation. The project must function as a credible landing page with clear messaging and conversion intent, while also acting as a test environment for comparing AI tooling approaches. Additional complexity comes from evaluating prompt strategies, managing API integrations such as Resend for lead capture, and considering security risks when exposing AI-generated outputs and endpoints.
Product Rationale
Experiment-led product build
The project is structured as a controlled experiment, using multiple landing page variants to compare AI tooling performance rather than building a single static implementation.
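Comparing variants implies recording the same metrics for each build. A minimal sketch of such an experiment log, in Python — the variant names, field names, and figures below are illustrative, not real results:

```python
from dataclasses import dataclass

@dataclass
class VariantResult:
    """One AI-assisted build of the landing page (all values illustrative)."""
    variant: str          # e.g. "gemma-ollama" or "claude-code"
    build_minutes: float  # wall-clock time to a working page
    cost_usd: float       # API/compute spend for the build

def cheapest(results: list[VariantResult]) -> VariantResult:
    """Pick the lowest-cost variant as one axis of comparison."""
    return min(results, key=lambda r: r.cost_usd)

results = [
    VariantResult("gemma-ollama", build_minutes=95, cost_usd=0.0),
    VariantResult("claude-code", build_minutes=40, cost_usd=6.50),
]
```

Keeping results in a structured form like this makes speed/cost/quality trade-offs comparable across runs rather than anecdotal.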
Local LLM evaluation
Running Gemma models locally via Ollama allows testing of cost-efficient, self-hosted AI workflows and comparison against more guided tools such as Claude Code.
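Ollama exposes a local REST API (by default on port 11434), so a self-hosted Gemma model can be queried with plain HTTP. A minimal sketch, assuming the Ollama daemon is running and the model has been pulled — the prompt and model tag are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "gemma") -> dict:
    """Payload for Ollama's /api/generate; stream=False returns one JSON body."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "gemma") -> str:
    """Send the prompt to the locally hosted model and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running daemon with the model available, e.g. `ollama pull gemma`,
# then: generate("Write a hero headline for a rapid-prototyping service")
```

Because everything stays on the local machine, per-request cost is effectively zero, which is the trade-off this part of the experiment is designed to measure against managed tools.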
Conversion-focused design
Despite being experimental, the landing page is designed with clear messaging, structured sections, and strong call-to-action patterns to reflect real-world product positioning.
Security and prompt testing
The project explores risks associated with AI-assisted development, including prompt injection, endpoint exposure, and safe handling of user input through API integrations.
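One cheap first line of defence against prompt injection is screening user input before it ever reaches a model. A sketch of such a heuristic in Python — the patterns are illustrative examples, and a pattern list like this is a screen, not a robust defence:

```python
import re

# Illustrative injection phrasings only; real deployments need layered defences
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"system prompt",
    r"you are now",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

Flagged input can be rejected or routed for review instead of being interpolated into a prompt or an AI-generated endpoint.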
Key Decisions
Multi-variant experiment structure: Building several variations of the landing page allows direct comparison of different AI workflows rather than relying on a single subjective result.
Local-first AI tooling: Prioritised locally hosted models via Ollama to explore cost, performance, and control trade-offs compared to fully managed AI services.
Real-world service framing: Positioned the experiment as a plausible product offering to ensure outputs are evaluated against realistic user expectations and conversion standards.
Email capture integration: Incorporated Resend to simulate real lead capture flows and test API integration within an AI-generated application context.
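Resend's email API takes a JSON POST to its `/emails` endpoint with an API-key bearer token. A minimal sketch of the lead-capture flow in Python — the addresses and subject line are placeholders, and the input is validated and escaped before being embedded in HTML:

```python
import html
import json
import os
import re
import urllib.request

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # coarse validity check

def lead_notification(lead_email: str) -> dict:
    """Build a Resend /emails payload announcing a new lead.
    Sender/recipient addresses are placeholders for illustration."""
    if not EMAIL_RE.match(lead_email):
        raise ValueError(f"invalid email: {lead_email}")
    return {
        "from": "studio@example.com",
        "to": ["founder@example.com"],
        "subject": "New prototype lead",
        # Escape user input before embedding it in HTML
        "html": f"<p>New lead captured: {html.escape(lead_email)}</p>",
    }

def send(payload: dict) -> None:
    """POST the payload to Resend (requires RESEND_API_KEY in the environment)."""
    req = urllib.request.Request(
        "https://api.resend.com/emails",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['RESEND_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)
```

Validating and escaping the captured address before it reaches the email body is the same safe-input-handling concern raised under Security and prompt testing above.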
Project Notes
No two projects solve the same problem, so each case study emphasises different aspects of delivery depending on what was most relevant to the challenge. Supporting visuals and implementation details are included here to provide additional context behind the final outcome.
Visuals

