There's not much reason to do this, since ffmpeg has a much faster and nicer scaler, but maybe we want to write Common Lisp to handle an effect. When I introduced img-genner, I wasn't sure this was the direction to take, but now it seems appropriate, if nothing else as a demonstration.
You will most likely want to consult the installation instructions if you do not yet have img-genner set up.
So let's start out with the following to load the libraries we need, as well as start the ffmpeg processes.
(ql:quickload '(:img-genner :pngload))
(defparameter *ffmpeg-reader* (uiop:launch-program "ffmpeg -i wander.mp4 -c:v png -pix_fmt rgb24 -f image2pipe -" :output :stream))
(defparameter *ffmpeg-writer* (uiop:launch-program "ffmpeg -f png_pipe -i - -y -b:v 1M hello4.webm" :input :stream))
We need pngload because it handles reading from the pipe correctly, whereas the libpng binding I have been using does not seem to. I chose to output a webm because it can provide playback sooner, but of course there's no reason you can't use whatever funky codec you prefer.
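As a quick sanity check that the decode side works, you can pull a single frame off the pipe and inspect it. This is just a sketch, assuming the *ffmpeg-reader* defined above is running:

```lisp
;; Read one PNG frame from the decoder pipe and print its dimensions.
;; Assumes *ffmpeg-reader* is running as defined above.
(let* ((png (pngload:load-stream (uiop:process-output *ffmpeg-reader*)))
       (data (pngload:data png)))
  (format t "~dx~d, ~d channel(s)~%"
          (pngload:width png)
          (pngload:height png)
          (if (= (array-rank data) 3) (array-dimension data 2) 1)))
```

With the rgb24 pixel format requested above, you should see three channels per pixel.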
Okay, so overall, the flow should look like this:
scaler --> b
Hmm, that's not very interesting though, so let's add the implicit stuff and a neat function I recently wrote.
a(*ffmpeg-reader*) --> decode(pngload:decode) --> data(pngload:data) --> scaler;
scaler --> colorizer(colorize-naive);
There, that's an excuse to use this library!
In Common Lisp I couldn't remember whether there was a way to check for EOF explicitly, so instead I just have it handle the end-of-file condition to terminate the loop.
(loop with colors = (loop repeat 16 collect (img-genner:get-random-color))
      for input = (handler-case
                      (pngload:data (pngload:load-stream (uiop:process-output *ffmpeg-reader*)))
                    (end-of-file () nil))
      while input
      for output = (img-genner:colorize-naive
                    (img-genner:upscale-image-by-factor input 0.5 0.5)
                    colors)
      do (img-genner:save-image output (uiop:process-input *ffmpeg-writer*)))
Before you run the code like this, though, you'll need to add this at the end:
(close (uiop:process-input *ffmpeg-writer*))
(uiop:wait-process *ffmpeg-writer*)
This closes the input stream, causing ffmpeg to close as gracefully as possible, and then collects its exit code in order to let the process be cleaned up.
Okay, but that's not very efficient: it's fairly slow, and it's also parallelizable. You can use the pcall library that img-genner already depends on to make it faster. However, this requires some significant changes to make it work well: we need to turn the work into tasks and then wait for them in order.
We want to load as many frames as possible at once, but memory is limited, and if you're using SBCL it can be a rather painful experience once you hit the heap limit. So we can't load the entire video at once unless it is very small, or you have a monstrous amount of memory and have configured your Lisp to use it.
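If you're on SBCL, you can check how much heap you actually have to work with. This is SBCL-specific, and the size is fixed at startup:

```lisp
;; SBCL-specific: report the dynamic-space (heap) size, fixed at startup.
;; Launch with e.g.: sbcl --dynamic-space-size 4096   ; size in megabytes
#+sbcl
(format t "heap: ~d MiB~%"
        (floor (sb-ext:dynamic-space-size) (* 1024 1024)))
```

The 20 MB budget used below is just a conservative figure well under a default heap; adjust it to taste.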
(loop with colors = (loop repeat 16 collect (img-genner:get-random-color))
      for jobs = (loop with memory-used = 0
                       for input = (handler-case
                                       (pngload:data
                                        (pngload:load-stream
                                         (uiop:process-output *ffmpeg-reader*)))
                                     (end-of-file () nil))
                       while input
                       do (incf memory-used (array-total-size input))
                       collect (let ((input input))
                                 (pcall:pexec
                                   (img-genner:colorize-naive
                                    (img-genner:upscale-image-by-factor input 0.75 0.75)
                                    colors)))
                       until (> memory-used 20000000))
      while jobs
      do (loop for job in jobs
               do (img-genner:save-image (pcall:join job)
                                         (uiop:process-input *ffmpeg-writer*))))
This is a very substantial change to the control flow, as you can see. One important thing to note: in pexec calls, you need to close over the current value of any variable that changes while you use it inside the task; otherwise it may be modified mid-use, and quite simply, that's not ideal.
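Here's a minimal illustration of the pitfall, independent of img-genner. The standard leaves it implementation-dependent whether LOOP gives each iteration a fresh binding or mutates a single one, so without the inner LET every task may see whatever value the variable holds when the task actually runs:

```lisp
;; Potentially wrong: all tasks may read the same, mutated binding of X.
(loop for x in '(1 2 3)
      collect (pcall:pexec (* x 10)))
;; Safe: LET captures the value of X at the time the task is created.
(loop for x in '(1 2 3)
      collect (let ((x x))
                (pcall:pexec (* x 10))))
```

This is why the frame-processing loop above rebinds input before handing it to pexec.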
This is what could be called a "bulk sequential process" design. While it uses all the processors on your system effectively, utilization tends to rise and fall unless work is dispatched very carefully. In this case in particular, even if we did everything perfectly, we're just communicating with ffmpeg over a pipe, and there's a lot of overhead there. Even then, it is possible to outpace the encoder ffmpeg uses, even from Common Lisp (this is more likely with AVIF or VP8/VP9 than with H.264/H.265, in my experience), and during those stretches your program sits around doing nothing. While ffmpeg might be keeping your computer busy at those times, it likely as not won't be able to parallelize well on images of a useful size, so for work with unpredictable runtimes it may very well be a bottleneck.
Now, theoretically it should also be possible to splice the audio from the first file into the second with something like a named pipe, or, failing that, by exporting the audio on its own and then using it in the encoding command, but that sounds like a project in itself.
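Still, the fallback route can be sketched in a couple of uiop:run-program calls. The intermediate and final file names here are my own invention, and I mux into plain Matroska rather than WebM because WebM only permits Vorbis/Opus audio, so copying an arbitrary source track into it would fail:

```lisp
;; Extract the audio track as-is from the source file...
(uiop:run-program
 '("ffmpeg" "-y" "-i" "wander.mp4" "-vn" "-c:a" "copy" "audio.mka"))
;; ...then mux it with the processed video, copying both streams.
(uiop:run-program
 '("ffmpeg" "-y" "-i" "hello4.webm" "-i" "audio.mka"
   "-map" "0:v" "-map" "1:a" "-c" "copy" "final.mkv"))
```

If you want to keep the .webm extension, re-encode the audio with -c:a libopus instead of copying it.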