AI as a Sketchpad
It All Begins Here
AI has become an amazing sketchpad for translating ideas into visuals. I’m using it to bridge my outdoor painting practice, where I explore light, colour, and composition on canvas, with my adventure photography in real landscapes. Here, I’m reimagining portraits I want to shoot, from the smoky kitchens of Nepal, rich in texture and atmosphere, to a relaxed, dressed-up awards night, full of elegance and mood. Capturing these scenes in reality would take immense time and effort. AI lets me externalise the ideas in my head, test compositions, lighting, and colour palettes, and turn fleeting mental impressions into concrete visual concepts ready to inspire the final shoot.
Gemini Nano-Banana exploring alternate versions of my photos
Refilming the Missing Shot
On my first trip to Nepal, I envisioned a cinematic moment with absolute clarity. The high Himalaya loomed in the background, cold, immense, unforgiving. The camera pressed low against the mountain track, almost fused with the earth itself. Then came a distant rumble. A herd of goats appeared on the horizon, their movement building into a living wave, hooves thundering over the lens, a fleeting perspective that didn’t just observe the landscape but immersed me in it.
Recently, I wanted to push the boundaries of AI in adventure filmmaking. Using Veo and Nano-Banana, I generated a still frame and then breathed life into it as a moving shot. Every step, from idea to visual sketch, happened on my phone, in that quiet, pre-dawn space where concepts feel sharpest and most cinematic.
Watching the result later on a 4K screen, the limitations were clear: detail softened, textures were incomplete, and the illusion wavered. But that was never the goal. As a director’s sketch, it captured the mood, the framing, the intent. It turned a half-formed vision into something tangible, something ready for the real world.
Now it no longer lives solely in my imagination. It’s a blueprint: a shot waiting to be chased, hunted, and captured amidst the wild places of the world.
Compiling Ideas into Reality
Modern tooling has dramatically reduced the latency between an idea and a working executable. The compile–test–deploy loop is now tight enough that prototyping, validation, and production hardening can happen almost continuously. As the old programming principle goes, eating your own dog food is the fastest way to surface edge cases and ensure tools are production-ready.
My current focus is on performance-oriented tooling for Nuke, the industry-standard compositing platform. The stack has evolved over time. I started with TCL expressions and Nuke gizmos, moved into Python tooling using NumPy and SciPy for vectorised computation, and have since pushed into C++ plugins to reduce interpreter overhead, improve cache locality, and achieve deterministic performance gains. Working closer to the hardware unlocks measurable improvements in execution time, memory efficiency, and scalability, particularly for high-density image buffers and deep data.
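As a toy illustration of the vectorised style that motivated the move from per-pixel scripting to NumPy (the function and values here are hypothetical, not taken from my actual tools), a simple gain-and-gamma grade can be applied to an entire image buffer in a few array operations rather than millions of Python-level loop iterations:

```python
import numpy as np

def grade(img: np.ndarray, gain: float, gamma: float) -> np.ndarray:
    """Apply a simple gain + gamma grade across a whole image buffer.

    NumPy broadcasts the arithmetic over every pixel at once, so even a
    large float buffer is processed in a handful of vectorised passes.
    """
    # Clip negatives after the gain so the gamma power stays well-defined.
    return np.clip(img * gain, 0.0, None) ** (1.0 / gamma)

# A tiny synthetic RGB buffer stands in for a Nuke image plane.
frame = np.linspace(0.0, 1.0, 12, dtype=np.float32).reshape(2, 2, 3)
graded = grade(frame, gain=1.2, gamma=2.2)
```

Pushing the same arithmetic into a C++ plugin removes the interpreter from the loop entirely, which is where the cache-locality and determinism gains come from.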
The tools I’ve built address real production bottlenecks in VFX pipelines. These include a Roto Exploder, designed to manage large outsourced roto datasets without degrading graph performance, and a Deep File Optimiser, which reduces deep sample counts to improve computational efficiency, disk I/O, storage use, and network performance. These optimisations translate directly into faster renders, smaller deep EXRs, and more responsive compositing workflows.
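The core idea behind reducing deep sample counts can be sketched in a few lines. This is a deliberately simplified stand-in for illustration only, not the shipped optimiser: real deep EXR data would be read through a deep-image API, and the visibility threshold below is an arbitrary assumed value.

```python
import numpy as np

# Hypothetical per-pixel deep samples as (depth, alpha) pairs, front to back.
samples = np.array([
    [1.0, 0.40],
    [1.1, 0.002],   # negligible contribution: candidate for pruning
    [2.5, 0.30],
    [2.6, 0.001],   # negligible contribution: candidate for pruning
    [4.0, 0.25],
])

def prune_deep_samples(samples: np.ndarray, min_alpha: float = 0.01) -> np.ndarray:
    """Drop samples whose alpha falls below a visibility threshold.

    Fewer samples per pixel means smaller deep EXRs, less disk I/O,
    and faster downstream deep merges, at the cost of a bounded error
    in the reconstructed alpha.
    """
    return samples[samples[:, 1] >= min_alpha]

pruned = prune_deep_samples(samples)  # 5 samples reduced to 3
```

In production, the pruning criterion would be tuned per show, since the acceptable alpha error depends on how the deep data is merged downstream.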
This work builds on years of practical production experience, including beta testing and feedback for vendors such as SideFX, creators of Houdini, and for AtomKraft’s 3Delight, a production-grade path-tracing renderer.
My programming foundations were laid growing up around mainframe systems and later evolved into creative coding with p5.js at the University of Sydney, where code became a medium for visual experimentation and real-time interaction. That intersection of graphics, mathematics, and code continues to inform how I approach tool development today: treating software as both a technical and a creative instrument.
Looking forward, AI-assisted development is accelerating iteration and lowering implementation friction, making it easier to move from concept to compiled tool. While current systems like Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT rely on large transformer architectures, emerging approaches suggest a shift toward more efficient and structured reasoning.
For now, the impact is clear: development cycles are shorter, optimisation is deeper, and ideas are easier to realise. It feels like a full-circle moment, connecting early experimentation with today’s GPU-accelerated, performance-driven, production-focused tooling.
Nukepedia:
https://www.nukepedia.com/search/?q=Marty%20Blumen
GitHub:
https://github.com/bratgot/