iain's development activities. May contain z80, Cocoa, or whatever.
Metal in Cyclorama
The original version of Cyclorama just used multiple video layers and adjusted their alpha values. That worked for a two-layer system, but it’s not the most efficient approach, especially if I want to blend the layers together using the screen mode, so I thought I’d look into using Metal.
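For anyone unfamiliar with it, screen is the standard blend mode result = 1 − (1 − a)(1 − b), applied per channel. In Metal this would live in a fragment shader, but the maths is easy to sketch as a plain Swift function:

```swift
// Screen blend: result = 1 - (1 - a) * (1 - b), applied per channel.
// Unlike a plain alpha crossfade, screen only ever brightens, which is
// why layered film loops glow rather than greying each other out.
// (Illustrative sketch - the real thing would run per-pixel in a
// Metal fragment shader.)
func screen(_ a: Double, _ b: Double) -> Double {
    return 1.0 - (1.0 - a) * (1.0 - b)
}

screen(0.5, 0.5)  // 0.75 - brighter than either input
screen(0.0, 0.3)  // ≈ 0.3 - black is the identity for screen
```

Because black is the identity, dark areas of one layer let the other layer show through untouched, which is exactly the behaviour you want when stacking film loops.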
A few years ago, when I was doing live gigs, I needed an app that could do simple projections for me and that I could manipulate quickly while on stage. I wrote a very basic application called Cyclorama, in C# and Xamarin.Mac, that let you play two videos and crossfade between them, with an optional static image as an overlay. It worked pretty well for what I needed, but I felt it could be improved.
When it comes to projections and videos, my style is very much inspired by the experimental filmmaker and Godspeed You! Black Emperor projectionist Karl Lemieux, who layers loops of 16mm footage on top of each other and uses subtle manipulations and film burns to create striking scenes.
I’ve recreated this style in FCPX and DaVinci Resolve, but neither of those is simple to use, or feasible in a live setting, so I’ve decided to revisit my earlier application and rewrite it, this time in Swift, recreating this aesthetic rather than just crossfading between videos.
It can create some beautiful images. I’ll write some more in the future about what is happening.
I’ve been slowly working on Fieldwork while other things have been going on. I’ve spent a lot of time watching SwiftUI videos so I could do SwiftUI “the right way”, and I think I’ve figured it out.
I’m not convinced SwiftUI works well with the MVVM pattern that people want to push on it, at least not the standard way of thinking about MVVM. I see people creating a ViewModel class for each of their views and then setting values on it in .onAppear or some similar View method; or the parent creates a ViewModel, populates it, and then passes it to the View - but that makes @State awkward, because who really owns that object?
But it also doesn’t feel like it works well with the other way people seem to be using it: one huge, monolithic “ViewModel” object that gets passed into every SwiftUI view. That’s not really a ViewModel; it’s just the kind of global model we would have worked with in the old C days.
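To make the ownership problem concrete, here’s a sketch of the first pattern with made-up names (none of this is anyone’s real code):

```swift
import SwiftUI

// Hypothetical example of the "ViewModel per view" pattern.
class RecordingListViewModel: ObservableObject {
    @Published var recordings: [String] = []
}

struct RecordingListView: View {
    // Should this be @StateObject (the view owns it) or
    // @ObservedObject (the parent owns it)? If the parent creates and
    // populates the object, @StateObject is wrong; if the view is meant
    // to own its own state, having it passed in from outside is wrong.
    @ObservedObject var viewModel: RecordingListViewModel

    var body: some View {
        List(viewModel.recordings, id: \.self) { Text($0) }
            .onAppear {
                // Populating the model from .onAppear - the awkward bit.
                viewModel.recordings = ["take-01.wav", "take-02.wav"]
            }
    }
}
```

Neither property wrapper is obviously right here, which is exactly the ambiguity I mean.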
Another problem I’ve had with both of these patterns is that they make SwiftUI’s previews really, really awkward to develop.
One thing I read on Mastodon was that in SwiftUI the Views themselves are the “view model”, and that makes more sense to me.
So I’ve rewritten a lot of Fieldwork to do away with ViewModels; instead, each view only gets passed the simple types it needs.
As an example, the InfoView displays the name of the file, the length of the sample, the bitrate, the number of channels, etc. - metadata type stuff. Previously it took a Recording class as a parameter and did some digging through it to find the right metadata: the name came from Recording.metadata, the bitrate and number of frames from elsewhere in the object. This meant that to create a preview, I needed to create a Recording, which meant setting up the FileService and faking lots of things that the preview didn’t need to care about. The preview doesn’t need to know about the Recording class; it just wants a few strings to display. It doesn’t even need to care if the bitrate is a number - it just wants a string to display.
So now the InfoView just takes 4 string parameters when it is created, and it becomes its own ViewModel.
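In sketch form, that ends up looking something like this (the parameter names and preview values are my guesses, not the actual Fieldwork code):

```swift
import SwiftUI

// InfoView only needs strings to display - no Recording, no FileService.
struct InfoView: View {
    let name: String
    let length: String
    let bitrate: String
    let channels: String

    var body: some View {
        VStack(alignment: .leading) {
            Text(name).font(.headline)
            Text("Length: \(length)")
            Text("Bitrate: \(bitrate)")
            Text("Channels: \(channels)")
        }
    }
}

// The preview needs nothing but string literals.
#Preview {
    InfoView(name: "dawn-chorus.wav",
             length: "2:34",
             bitrate: "24-bit / 96 kHz",
             channels: "2")
}
```

The parent does the formatting once, and the preview becomes four literals instead of a faked-up service stack.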
Swinging this back to the source of truth: the source of truth lives at the highest level it needs to be. There is a large monolithic class at the very top holding the things views need to know about, but anything that’s needed further down the view hierarchy is passed in as either a simple type parameter, or a binding to a simple type. No view needs to know about the large monolithic class holding everything together.
And now Fieldwork is fairly easy to understand, and every view can be previewed fairly easily.
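The overall shape, again with hypothetical names rather than the real Fieldwork types, is something like:

```swift
import SwiftUI

// The single source of truth at the top of the hierarchy.
// (Hypothetical - not the actual Fieldwork model.)
@Observable class AppModel {
    var fileName = "dawn-chorus.wav"
    var gain = 1.0
}

struct RootView: View {
    @State private var model = AppModel()

    var body: some View {
        VStack {
            // Children only ever see simple types, or bindings to
            // simple types - never AppModel itself.
            TitleView(title: model.fileName)
            GainSlider(gain: $model.gain)
        }
    }
}

struct TitleView: View {
    let title: String
    var body: some View { Text(title) }
}

struct GainSlider: View {
    @Binding var gain: Double
    var body: some View { Slider(value: $gain, in: 0...2) }
}
```

Only RootView knows AppModel exists; everything below it takes a String or a Binding&lt;Double&gt; and can be previewed with literals.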
A change of direction, mostly because I’m disappointed my Sam Coupé platform scroller was a bust - I’ve returned to an application I was writing near the end of last year.
Fieldwork is a field recording organiser / editor - think Lightroom for audio. It currently looks like this:
The main window is SwiftUI, the editor display is AppKit, and it’s written in a mix of Swift, ObjC and C, although I’d like to reduce the amount of ObjC over time and leave it as a simple layer that Swift can call to access the sample data.
It’s using a lot of the code from my previous sample editor project, Marlin. It can handle very large files and operate on them very quickly, and at the moment I’m working out how to integrate it all with a declarative SwiftUI interface.
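I don’t know yet exactly what the bridge will look like, but hosting an AppKit editor view inside a SwiftUI window presumably means the standard NSViewRepresentable wrapper - something like this sketch, where SampleEditorView and its sampleURL property are placeholders for whatever the real editor class exposes:

```swift
import SwiftUI
import AppKit

// Stand-in for the real AppKit editor view - names are hypothetical.
final class SampleEditorView: NSView {
    var sampleURL: URL? {
        didSet { needsDisplay = true }  // redraw when the sample changes
    }
}

// SwiftUI wrapper around the AppKit view.
struct SampleEditor: NSViewRepresentable {
    let sampleURL: URL

    func makeNSView(context: Context) -> SampleEditorView {
        SampleEditorView(frame: .zero)
    }

    func updateNSView(_ nsView: SampleEditorView, context: Context) {
        // Push SwiftUI state changes down into the AppKit view.
        nsView.sampleURL = sampleURL
    }
}
```

SwiftUI calls updateNSView whenever the wrapper’s inputs change, so the AppKit side only has to react to property sets.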
Plans for the future include writing a UIKit version of the editor display and seeing if it runs on iOS/iPadOS.
I’ve got a player sprite displayed on the screen, moving left and right. I made a simplified single-screen map with 128 tiles on it, and with the character moving around while it redraws every frame, it is unbelievably slow. I’m starting to rethink this, because having it scroll and draw sprites at the same time is just going to be impossible.
Which is a shame - I had wanted to make a Super Mario-style scrolling platformer, but I may have to fall back on a static-screen platformer.