On the morning of his wedding, a man thinks of his mother — who is no longer here. The hands that tied his tie. The hands that smoothed his collar. The quiet, wordless embrace before letting him go.
At the ceremony, he leaves one seat empty. Just in case. On it, a single candle and a flower.
After the vows, he turns back — and the flame flickers. No wind. No reason. But he knows.
She came.
He smiles, and the wedding continues.
This short film was made entirely without traditional production: the visuals were created with AI image and video generation tools, edited in Adobe After Effects, with original music from Suno.
Tools: Nanobanana · Kling · Cinema Studio 2.5 · Adobe After Effects · Suno
THE RETURN
WHAT IS THIS FILM?
A man walks through an old palace — alone, in silence. He moves through grand hallways, empty banquet halls, and fog-filled courtyards as if he has been here before. His skin is chrome. The figures standing in the walls watch him as he passes. He touches the stone. He looks out the window. He stands at a door and does not open it. That is the whole film.
WHAT WORLD IS THIS?
Medieval Europe and the future, merged into one. The palace is ancient — Gothic arches, stone floors, candlesticks. But look closer: thin cables run along the walls, circuit patterns are woven into the carpets, and the people standing in the alcoves are not quite statues. The two worlds have coexisted for so long that no one questions them anymore. The chrome man does not explain himself either.
PROCESS
This film was built entirely through AI generation — Nanobanana for character and location design, Veo via Google Flow for video generation, and Suno for the original score. Every frame began as a text prompt, iterated through dozens of revisions to arrive at the precise material quality, camera language, and emotional register the film required. The production pipeline moved from character sheet to location sheet to keyframe image to video prompt — a structured process that treats AI tools less like filters and more like a cinematographer you have to learn to speak to very precisely.
ON TIME
A photorealistic CGI rabbit named WREN sits at a bus stop in an ordinary American suburb, waiting.
The bus does not come. WREN does not know this.
ON TIME is a short film about waiting — not as anxiety, but as a state of being. The kind of waiting that has gone on long enough to become invisible. The kind we carry without noticing.
The film asks a simple question: what are you waiting for? And is it coming at all?
ON TIME is an AI filmmaking experiment — an attempt to push generative video tools toward cinematic language. Each shot was authored through Veo 3.1 for video generation, with the character WREN developed through photorealistic CGI image prompting. The project explores what it means to direct without a camera, to write for a tool, and to find stillness inside a medium defined by motion.
Tools: Google Veo 3.1 / Nanobanana / A24 cinematic reference / Terrence Malick lighting direction
Role: Concept, Direction, Prompt Design, Edit
UNAFRAID
A cinematic AI short film. A woman walks through an ocean where flowers bloom from the water. She does not hurry. She does not look away. This project is a personal exploration of AI-assisted filmmaking — testing how far generative tools can be pushed to achieve a cohesive cinematic language. The visual aesthetic is inspired by A24 films and the work of Terrence Malick: golden-hour light, anamorphic lenses, and slow, meditative pacing. The entire film was built from a single hero image.
WORKFLOW
01 — Hero Image
A single hero image was generated in ComfyUI. This image established the visual world: the character, the light, the ocean, the mood.
02 — Character Sheet
Using only the hero image as reference, a full character sheet and costume detail sheet were created in Nanobanana (Google Flow) — capturing the character's appearance, proportions, and clothing from front, side, and back.
03 — Video Production
Using only the hero image and the two character sheets as visual references — and prompt engineering alone — all video sequences were generated in Veo 3.1 via Google Flow. No additional images were created for video generation. Prompts only.
TOOLS USED
ComfyUI — hero image generation
Nanobanana / Google Flow — character sheet & costume reference
Veo 3.1 / Google Flow — all video sequences
All prompts were written and iterated manually. No pre-made templates. No automation.
One hero image. Two reference sheets. Prompts only.
A short AI-driven cinematic study exploring magical realism in everyday suburban spaces.
Ordinary objects — a mailbox, a bicycle, and a quiet sidewalk — gradually transform as flowers begin to bloom and spread through the environment.
The piece was directed as a sequence of small visual moments, focusing on natural light, slow camera movement, and a calm observational tone inspired by A24 and Terrence Malick.
Images were generated using Z-Image, then animated using Veo to create subtle transformations and cinematic motion.
Generated images from ComfyUI and videos from Veo 3.1. Music by Pavel Bekirov from Pixabay.
A generative fashion editorial exploring sculptural silhouettes, bold monochrome environments, and reflective materials.
Created with ComfyUI, this project investigates how AI pipelines can be used as a creative tool for developing high-fashion imagery and visual concepts.
Generated images from ComfyUI and videos from Veo 3.1.