ON TIME
A photorealistic CGI rabbit named WREN sits at a bus stop in an ordinary American suburb, waiting.
The bus does not come. WREN does not know this.
ON TIME is a short film about waiting — not as anxiety, but as a state of being. The kind of waiting that has gone on long enough to become invisible. The kind we carry without noticing.
The film asks a simple question: what are you waiting for? And is it coming at all?
ON TIME is an AI filmmaking experiment — an attempt to push generative video tools toward cinematic language. Each shot was authored in Veo 3.1, with the character WREN developed through photorealistic CGI image prompting. The project explores what it means to direct without a camera, to write for a tool, and to find stillness inside a medium defined by motion.
Tools: Google Veo 3.1 / Nanobanana / A24 cinematic reference / Terrence Malick lighting direction
Role: Concept, Direction, Prompt Design, Edit
UNAFRAID
A cinematic AI short film. A woman walks through an ocean where flowers bloom from the water. She does not hurry. She does not look away.
This project is a personal exploration of AI-assisted filmmaking — testing how far generative tools can be pushed toward a cohesive cinematic language. The visual aesthetic is inspired by A24 films and the work of Terrence Malick: golden-hour light, anamorphic lenses, and slow, meditative pacing. The entire film was built from a single hero image.
WORKFLOW
01 — Hero Image
A single hero image was generated in ComfyUI. This image established the visual world: the character, the light, the ocean, the mood.
02 — Character Sheet
Using only the hero image as reference, a full character sheet and costume detail sheet were created in Nanobanana (Google Flow), capturing the character's appearance, proportions, and clothing from the front, side, and back.
03 — Video Production
Using only the hero image and the two character sheets as visual references — and prompt engineering alone — all video sequences were generated in Veo 3.1 via Google Flow. No additional images were created for video generation. Prompts only.
TOOLS USED
ComfyUI — hero image generation
Nanobanana / Google Flow — character sheet & costume reference
Veo 3.1 / Google Flow — all video sequences
All prompts were written and iterated manually. No pre-made templates. No automation.
One hero image. Two reference sheets. Prompts only.
A short AI-driven cinematic study exploring magical realism in everyday suburban spaces.
Ordinary objects — a mailbox, a bicycle, and a quiet sidewalk — gradually transform as flowers begin to bloom and spread through the environment.
The piece was directed as a sequence of small visual moments, focusing on natural light, slow camera movement, and a calm observational tone inspired by A24 and Terrence Malick.
Images were generated using Z-Image, then animated using Veo to create subtle transformations and cinematic motion.
Images generated in ComfyUI; video generated in Veo 3.1. Music by Pavel Bekirov from Pixabay.
A generative fashion editorial exploring sculptural silhouettes, bold monochrome environments, and reflective materials.
Created with ComfyUI, this project investigates how AI pipelines can be used as a creative tool for developing high-fashion imagery and visual concepts.
Images generated in ComfyUI; video generated in Veo 3.1.
An exploration of disappointment within love, family, and intimacy.
This film was created from images generated in Z-Image and videos generated with Veo 3.1, then edited in After Effects.
BGM: Sound of Walking Away by Illenium & Kerli
A surreal dance study exploring the contrast between physical realism and dreamlike space. A single dancer moves through a minimal, otherworldly landscape where floating geometric forms remain perfectly still while fabric, breath, and movement react naturally to gravity and air.
The piece is designed as a continuous camera move, gradually traveling from a wide shot into an intimate close-up, allowing the viewer to experience the motion of the body, the weight of the cloth, and the subtle interaction between movement and atmosphere.
Created using an AI-assisted visual pipeline combining generative image creation and video synthesis, the project explores how cinematic camera language and choreography can be translated into generative filmmaking workflows.
An AI video created with Z-Image Turbo and Veo 3.1, edited in After Effects. Music by Clavier Clavier from Pixabay.