Updated Fall 2025
What can I say? I love training models. The dataset building part is laborious and at the same time rewarding. It is definitely a process.
Vector graphics model
I trained a model on a custom dataset of a few thousand screenshots from vector games a while ago and am just now getting around to posting the output. I was really only interested in whether I could capture the blooming “effect” from the old monitors, which it seems to do fairly well. I’ve never seen a vector game that didn’t have blooming. As I understand it, blooming is the result of an overvolted or aging CRT combined with the contrast between extremely bright traces and a black background. I always thought it looked cool and ethereal. To be clear, the dataset wasn’t composed of simulated bloom but of actual bloom captured during gameplay. People have said it looks like the light cycle scene from “Tron” (1982), which has roughly 15 minutes of true rendered 3D vector graphics in the film, so that makes sense. This is a FLUX model.
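
If you want to try something similar, a LoRA like this can be applied at inference time with Hugging Face diffusers. Here is a minimal sketch; the LoRA filename, prompt, and settings are placeholders rather than the actual files from this project.

# A minimal sketch of applying a FLUX LoRA with Hugging Face diffusers.
# The LoRA filename, prompt, and settings are placeholders, not the actual project artifacts.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("vector_bloom_lora.safetensors")  # hypothetical LoRA file
pipe.enable_model_cpu_offload()  # helps on cards with limited VRAM
image = pipe(
    prompt="vector graphics arcade scene, glowing wireframe geometry, CRT bloom",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("vector_bloom.png")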

WAN/Video
I trained my bleach bypass dataset into LoRAs for WAN 2.2, both the 5B and 14B variants. Considering how much smaller the 5B model is and how much faster its inference runs, I’ve been leaning on it a lot more for quick outputs. In my experience making sub-60-second videos, the difference in quality isn’t much. It’s great for Halloween stuff.
This is the 5B example of applying the LoRA. Considering there are only about six movies filmed entirely in bleach bypass, I figured I’d keep the theme of one of them.
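
For anyone curious about the workflow, here is a rough sketch of what the 5B inference step looks like with diffusers. The checkpoint ID, LoRA filename, and generation settings are assumptions, not my exact configuration.

# Rough sketch of applying a bleach bypass LoRA to the WAN 2.2 5B model with diffusers.
# Checkpoint ID, LoRA filename, and settings are assumptions, not the exact setup used here.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # assumed Diffusers-format checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("bleach_bypass_wan5b.safetensors")  # hypothetical LoRA file
pipe.to("cuda")
frames = pipe(
    prompt="desaturated high-contrast street scene at dusk, bleach bypass film look",
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "bleach_bypass_clip.mp4", fps=16)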
FLUX/SDXL
These are some examples of fine-tunes and LoRAs I purpose-built.



Examples of synthetic images generated to augment “light” datasets. These end up as hybrid illustrative/photoreal panels like the ones above. They were generated from a LoRA trained on hundreds of original photographs from “down the bayou”.
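
The augmentation step itself is just batch generation from the style LoRA. Here is a hedged sketch of that loop using SDXL; the base model, LoRA file, and prompts are stand-ins, since the actual dataset and LoRA aren’t posted here.

# Hedged sketch of batch-generating synthetic augmentation images from a style LoRA.
# Base model, LoRA file, and prompts are placeholders, not the real project files.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("bayou_style_lora.safetensors")  # hypothetical LoRA file
pipe.to("cuda")
prompts = [
    "shrimp boat on a misty bayou at dawn",
    "cypress trees and spanish moss along a narrow channel",
    "weathered dock with crab traps under an overcast sky",
]
# A few variations per prompt help pad out a small ("light") training set.
for i, prompt in enumerate(prompts):
    for j in range(4):
        image = pipe(prompt, num_inference_steps=30, guidance_scale=6.0).images[0]
        image.save(f"synthetic_{i:02d}_{j:02d}.png")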



TBD: nighttime-trained LoRA examples (Kodachrome, APX100, Bleach Bypass, etc.) and models trained on poor low-light imagery.