
🎞️ 2022.12.23 - beta!

After much work, Ansel is in beta. Oh right, not sure if I mentioned that earlier but that’s what I’m calling this project. I set up a lil bit of a site at https://ansel.film with some info, download, and purchase link (using Stripe’s prebuilt checkout URL functionality).
 
I’m still using Nuitka to create builds for Ansel, and it’s honestly been much easier than I could’ve even hoped for. I did run into some complications, specifically around OS versioning. I ended up buying a Mac Mini (which just broke a couple of hours ago, so I need to get another Intel Mac) to build x86 versions of Ansel for macOS. I have (had) Catalina running on it because, despite my best efforts to the contrary, I was not able to get Nuitka to produce macOS builds that would run on macOS versions older than the one they were built on. Windows builds were fine: I had a spare (albeit pretty anemic) Windows laptop which spits out builds (needing literally zero code changes to work) in around an hour. I am still in awe at the efficacy and ease of use of Nuitka. One command per platform builds a portable one-file .exe on Windows and a (not one-file) macOS executable. I turned to packaging the macOS .app manually so that I could have more control over resources, the Info.plist, etc.
 
Notarization on macOS was a doozie but didn’t take more than an evening to figure out. Every time I work with things of the Apple developer experience persuasion, I always question why Apple hates developers. Like, they provide one of (if not the) largest development platforms in the world, and yet, the experience of creating for those platforms is rough - particularly in the documentation department. So far, I’ve implemented/worked with Apple’s SSO, push notifications, wallet functionality, and now - notarization. And each time I did so, I found all the necessary information in some random Medium article, forum, or blog post - never the official Apple docs.
 
Anyway, enough blabbering for now. I’ve pushed up 4 versions (each with small fixes and incremental updates) to GitHub and Ansel is currently on pre-release version 0.1.4. While I do testing and gather feedback, I’ll be making small tweaks until I feel Ansel is ready for the 1.0.0 stamp of production and public readiness. Keep an eye out on the wiki and releases for when that happens!
 

 

🎞️ 2022.10.29 - color spaces 🤕

 
Oof color spaces are hard. Every time I dive into anything color space related and try to learn more about them I get more confused than when I started, and with far more questions.
 
I’ve been looking into performing the manipulations for color correction and general editing in a variety of color spaces in order to get more consistent and reliable results. That is to say, if you adjust the saturation by two different amounts, the resulting changes in saturation should look consistent to the eye. Additionally, working in different color spaces enables certain changes to be done more efficiently. Converting between color spaces, however, can be expensive and lossy — so it’s a bit of a balancing act.
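As a toy illustration of what I mean (a hypothetical sketch assuming an 8-bit RGB image, not Ansel’s actual pipeline): scaling saturation becomes trivial and predictable once you hop into HSV, and the round trip through cv2.cvtColor is exactly the kind of conversion cost I’m trying to balance.

import cv2
import numpy as np

def boost_saturation(rgb: np.ndarray, factor: float) -> np.ndarray:
    # hop into HSV, scale the saturation channel, hop back to RGB
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

The round trip is also where the “expensive and lossy” part shows up: every conversion costs time and quantizes the 8-bit values a little.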
 

 

🎞️ 2022.09.29 - gallery performance

 
I’ve been dealing with gallery performance issues for some time - primarily due to my inexperience with Qt. I’ve been using a QGridLayout inside a QScrollArea to lay out all the image previews. The issue is that QGridLayout does not support resizing/rearranging elements when the window resizes. So, in an effort to fix that, I implemented a custom resizeEvent on my gallery window which would clear the gallery and refill it. The problem was that this took as long as the initial load to paint the previews, and because it had to happen in the GUI thread, it would freeze the window while it was being resized. Overall — pretty janky.
 
So I’ve instead implemented a custom FlowLayout which you can also check out here.
 
import math

from PySide6 import QtWidgets, QtCore


class FlowLayout(QtWidgets.QLayout):
    item_list: list[QtWidgets.QLayoutItem]
    horizontal_space: int
    vertical_space: int
    margin: int
    current_size: tuple[int, int]
    item_width: int
    item_height: int

    def __init__(self, parent: QtWidgets.QWidget, item_width: int, item_height: int):
        super().__init__(parent)

        if parent is not None:
            self.setContentsMargins(0, 0, 0, 0)

        # spaces between each item
        self.horizontal_space = 5
        self.vertical_space = 5

        self.item_width = item_width
        self.item_height = item_height

        self.item_list = []
        self.current_size = (0, 0)

    def __del__(self):
        item = self.takeAt(0)
        while item:
            item = self.takeAt(0)

    def addItem(self, item):
        self.item_list.append(item)

    def count(self) -> int:
        return len(self.item_list)

    def itemAt(self, index) -> QtWidgets.QLayoutItem | None:
        if index >= 0 and index < len(self.item_list):
            return self.item_list[index]
        return None

    def takeAt(self, index: int) -> QtWidgets.QLayoutItem | None:
        if index >= 0 and index < len(self.item_list):
            return self.item_list.pop(index)
        return None

    def addWidget(self, widget: QtWidgets.QWidget):
        super().addWidget(widget)

    def expandingDirections(self) -> QtCore.Qt.Orientations:
        return QtCore.Qt.Orientations(QtCore.Qt.Orientation(0))

    def hasHeightForWidth(self) -> bool:
        return True

    def heightForWidth(self, width: int) -> int:
        if width == 0:
            return -1

        # TODO take into account the right most horizontal_space
        column_count = max([width // self.item_width, 1])
        row_count = math.ceil(len(self.item_list) / column_count)
        height = row_count * (self.item_height + self.horizontal_space)

        # if even, remove last spacing
        if len(self.item_list) % 2 == 0:
            height -= self.vertical_space

        return height

    def setGeometry(self, rect: QtCore.QRect):
        super().setGeometry(rect)
        self.place_items(rect)

    def sizeHint(self) -> QtCore.QSize:
        return self.minimumSize()

    def minimumSize(self) -> QtCore.QSize:
        size = QtCore.QSize()
        for item in self.item_list:
            size = size.expandedTo(item.minimumSize())
        return size

    def place_items(self, rect: QtCore.QRect):
        # skip the work entirely if the layout's size hasn't actually changed
        if (rect.width(), rect.height()) == self.current_size:
            return

        self.current_size = (rect.width(), rect.height())

        column_count = max([rect.width() // self.item_width, 1])
        centering_offset = (rect.width() - (column_count * self.item_width)) // 2

        row = 0
        column = 0
        for item in self.item_list:
            x_offset = column * self.item_width + centering_offset
            y_offset = row * self.item_height

            column += 1
            if column == column_count:
                column = 0
                row += 1

            item.setGeometry(QtCore.QRect(x_offset, y_offset, self.item_width, self.item_height))
 
With this layout, and knowing that all child widgets will have the same size, I can very efficiently recompute placement on resize by changing the geometry of each child widget at a low level. Most of these methods are standard QLayout overrides, but the magic happens within place_items. I don’t really know why, but QLayout.setGeometry is called an absurd number of times — I think once for every time a child is added, but with a debounce, so only some time after the last child is added.
 
Anyway, in order to avoid iterating over every child once per child added, place_items only does its work when the geometry of the layout itself changes — i.e. when the window is resized.
 

 

🎞️ 2022.09.28 - sauce and white balance

 
Did some more image processing/color-sciency work today.
 
I implemented the “sauce” — the term lab techs use for the custom CMYK changes made to a negative to give it its look and feel — in the processing panel. Here’s what it looks like within the Noritsu scanner.
 
notion image
 
I wanted to give users a bit more control, though, so it’s not only possible to change the CMYK values globally, but also across different luminosity ranges — for now, highlights, midtones, and shadows. Right now, this works by creating a LUT for each channel which maps existing channel values to ones dictated by a gamma curve. In order to target luminosity ranges, the gamma curve is modified with a combination of logistic curves, which mix the gamma with a straight y=x line so it affects only one portion of the image. The resulting luminosity masks look like so when plotted:
 
global gamma 2.2

shadows gamma 2.2

mids gamma 2.2

highs gamma 2.2
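In rough sketch form (hypothetical code, not the exact curves Ansel uses), the LUT construction looks something like this: build a gamma curve, then blend it with the identity y=x line using logistic weights so the adjustment only bites in one luminosity band.

import numpy as np

def targeted_gamma_lut(gamma: float, center: float, steepness: float = 12.0) -> np.ndarray:
    # hypothetical sketch: a 256-entry LUT that applies a gamma curve only around
    # a chosen luminosity (center is 0..1), easing back to y = x everywhere else
    x = np.linspace(0.0, 1.0, 256)
    gamma_curve = x ** (1.0 / gamma)

    # two logistic curves multiplied together form a smooth "band" weight
    # (shadows, mids, or highs depending on where center sits)
    rising = 1.0 / (1.0 + np.exp(-steepness * (x - (center - 0.25))))
    falling = 1.0 / (1.0 + np.exp(steepness * (x - (center + 0.25))))
    weight = rising * falling

    # mix the gamma curve with the straight y = x line using that weight
    blended = weight * gamma_curve + (1.0 - weight) * x
    return np.round(blended * 255.0).astype(np.uint8)

# applying it to a channel is just an indexed lookup:
# channel = lut[channel]   # channel is a uint8 ndarray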
 
This isn’t yet perfect, and I’ll continue tweaking how I apply the logistic curve to target different luminosity regions and whether a gamma function is best used in the first place, but for now it’ll suffice.
 
Additionally, I switched from a subtract-base methodology to a white-balance methodology. That is, rather than subtracting the base to remove the color cast, I instead white balance against it. I find that this better preserves luminosity ranges and handles a much wider variety of film bases and scan methods.
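Roughly, the difference looks like this (a simplified sketch with made-up names, not Ansel’s actual code): instead of subtracting the sampled base color from every pixel, each channel gets a gain that maps the base itself to neutral.

import numpy as np

def remove_cast_by_white_balance(negative: np.ndarray, base_rgb: np.ndarray) -> np.ndarray:
    # scale each channel so the sampled film base comes out neutral, instead of
    # subtracting the base color from every pixel
    img = negative.astype(np.float64)
    gains = base_rgb.mean() / base_rgb.astype(np.float64)  # per-channel gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)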
 

 

🎞️ 2022.09.27 - packaging for distribution cont’d

 
Yeehaw, I’ve done it. I’ve tweaked some more things, uncommented the rest of the code, and was able to package the app to create a macOS executable. Nuitka also has a setting to automatically create a .app bundle, but it’s lacking in configurability so I’ll just be bundling it myself.
 
At this point, I was undeterred - I grabbed my girlfriend’s spare Windows laptop and got to work setting up a new user, installing powershell, chocolatey, winget, git, pyenv, poetry, et al. This was my first time developing on Windows since, like, 2014 and it was cool to see that tooling has really improved and getting a dev environment up and running has gotten way easier.
 
I was amazed — it. just. worked.
 
So, I set up two GitHub Actions on macos-latest and windows-latest to build macOS and Windows executables and upload the artefacts. There was a slight caveat with the Windows run, as nuitka needed to download some extra exes. Luckily, I was able to inspect where they were downloaded on my local Windows computer, commit them to my repo, and just copy them to the right locations during the Actions run (nuitka asks for an interactive y/n response during the build, so there was no way for me to download the files as part of the build, hence downloading them beforehand).
 
I’m so stoked. I’ll probably be building a Linux app for this at some point, but for the time being I’m back to feature development. Stay tuned for some more frequent UI updates and accompanying screenshots!
 

 

🎞️ 2022.09.25 - packaging for distribution

 
I spent a bit of time fixing bugs from yesterday and refining bits of the Process UI/UX. However, the majority of today has been spent trying to figure out packaging the app for distribution (specifically for macOS). I’ll work on Windows/Linux packaging once I’m able to prove out that packaging works on at least one platform.
 
I’ve chosen to use nuitka over other options such as PyInstaller, py2exe/py2app, and PyOxidizer, primarily because nuitka gathers and links app code to its dependencies, transpiles the Python code to C, and compiles it all down to machine code. All other Python distribution solutions merely embed the Python interpreter (to varying degrees) alongside Python bytecode. Because of nuitka’s ability to actually compile down to machine code we get 1) better performance and 2) a proper machine-code based binary that should be pretty tough to reverse engineer.
 
I’m testing packaging the app piecewise, basically uncommenting the app code piece by piece and building a macOS app bundle each time, to see where things break. I started with just the empty Gallery/Process windows to test that PySide6 works. Then added some trivial numpy/PIL code to test those libraries. So far, I’m able to create an app bundle that
  • opens the Gallery
  • loads up all the imported folders in the explorer
  • displays images in the Gallery for selected folders
  • opens the Process page for a selected image
 
The app bundle is coming out at around 150 megabytes. That’s not too bad, given all the modules and libraries that need to be bundled to get all this to work. And since this will end up as a pretty well-equipped image editor, I think people won’t be too bothered by the size of the app.
 
Currently, I’m working on (and failing at) getting the image to show on the Process page. I’m not yet sure which specific part of that codepath is causing issues. nuitka is still able to build the bundle, but upon opening the .app, nothing happens.
 
I need to investigate how to be able to view debug logs and stack traces for crashes, as I’m kinda flying blind here. Figuring out better visibility and getting the selected image to show in the Process page is going to be my continued goal for tomorrow.
 
Wish me luck 👋
 

 

🎞️ 2022.09.24 - project board

 
Feels good to get back into it.
 
I decided to set some soft deadlines for myself:
  1. late October for an MVP I can share to get beta testing going
  2. mid November for a 1.0 product launch (including the infrastructure around distributing the app)
 
These were kinda arbitrary, but I figured a month to get an MVP in a good spot and then a few more weeks to test, get feedback, and polish sounded reasonable. In an effort to keep myself on track, motivated, and well equipped to deal with unknown-unknowns, I set up a GitHub Project.
 
notion image
 
I’ve filed and groomed a bunch of issues I know I have to get done, but will go through a more serious planning session tomorrow and start establishing a timeline. I’ve found that the best way to conceptualize a project is to 1) define what the end state is 2) break it down into the features necessary to support that end state and 3) break down those features into atomic steps necessary for them to exist. Once you’ve done that, you basically have a step by step recipe on how to get to your end state. And, when you’re following a recipe, you mostly only have to concern yourself with the task at hand - which is a known quantity - it’s much more difficult to get overwhelmed or demoralized.
 
Anyway, moving on to some progress reports. Today, I built out a few more enhancements.
 
notion image
 
Disclaimer - obviously the UI is in a rough state. I’m going to go through the design process of coming up with a cohesive design language and creating the design system at a later date. For the time being, though, processing is coming along nicely. You can now
  • Rotate images
  • Set the sample space
  • Set the “sauce” - individual tweaks to the cyan, magenta, and yellows across highlights, midtones, shadows, and globally
 
For any change, your “recipe” - as I’m calling it - and your thumbnail are recomputed and persisted in your catalog (sqlite db). This means you can go back to the gallery at any time, see your updated changes in the thumbnail of the image you just processed, and then reopen it to have changes right where you left off - even if you completely quit the app in between. One of my main goals with this app was to make sure that every single change was nondestructive. i.e. pixel changes are never baked in, only the reproducible recipe is stored.
 
This auto-save feature is a little bit buggy right now. It’s done in a new QThread kicked off right before returning from the process function. In doing so, I’m able to return the pixel data to the GUI thread to update the user’s screen as soon as possible, while updating the catalog in the background. Unfortunately, I’m occasionally running into a SQLite OperationalError sqlite3.OperationalError: cannot commit transaction - SQL statements in progress. I’ll have to update my ORM library (p3orm) to better handle transaction commits. Right now, each statement is automatically committed just after execution. I’m going to add the ability to manually commit transactions instead. Then, it’ll just be a matter of sequencing SQL transactions across multiple threads so we don’t get statement execution/connection commit overlap.
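Until then, one generic way to sequence those writes (a sketch using plain sqlite3 and a queue with made-up table/column names, not p3orm’s eventual API) is to funnel every catalog write through a single writer thread so statements and commits can never interleave:

import queue
import sqlite3
import threading

# all catalog writes go through this queue
write_queue: "queue.Queue[tuple[str, tuple]]" = queue.Queue()

def catalog_writer(db_path: str) -> None:
    # single writer thread: executes and commits queued statements one at a time,
    # so no two threads ever have statements in flight on the same connection
    conn = sqlite3.connect(db_path)
    while True:
        sql, params = write_queue.get()
        conn.execute(sql, params)
        conn.commit()
        write_queue.task_done()

threading.Thread(target=catalog_writer, args=("catalog.db",), daemon=True).start()

# from the processing thread, instead of writing directly:
# write_queue.put(("UPDATE recipe SET sauce = ? WHERE image_id = ?", (sauce_json, image_id)))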
 
There are a few other bugs, like some things not persisting right, switching directly between gallery and processing page causing some changes to rollback, etc. There’s a whole host of things I need to tackle as you could tell from my project board - but it’s getting real late here, so I’m going to head to sleep and tackle more tomorrow morning.
 

 

⏳ 2022.09.24 - getting back into it

 
Whew lad, it’s been a minute since I’ve had bandwidth for personal projects.
 
Back to it now, and I’m feeling pretty confident that with a few hours this week I can get image processing into a fully working (albeit with bad color science) prototype state.
 
In general, I should have a lot more free time in the foreseeable future to actively work on projects and keep this log active.
 

 

💰 2022.09.16 - film cameras + redispatcher updates

 
Another short update for today. Was sick much of last week and busy otherwise, so didn’t spend too much time working on the image editor. I did have time to make some updates on two other bits of projects though.
 
First off, I finally cleaned up redispatcher - a small open source library I made to help with processing distributed/background/asynchronous/however-you-want-to-call-it work in Python. It’s similar to celery or dramatiq but with a way smaller footprint, and with a very declarative, intellisense-ful, and strongly typed API. It’s backed by Redis as its message broker. I’ve had it hanging in a half-updated state since April, so the past day or two I’ve spent just cleaning things up, reorganizing the API, updating documentation, etc. I still have a bit of a ways to go to bring the documentation up to spec so that it covers everything I want it to, including a nifty little monitoring script that you can use to view stats on worker queues, publish/ack rates, etc. Anyway, if you’re interested in such a library and don’t want to mess with setting up rabbit nor with the overhead of something like celery, check out redispatcher.
 
 
A few months back, as price inflation of everything from groceries to used cars started hitting the front page of the news, I wanted to figure out what the deal would be with analog camera prices.
 
 
 

 

🎞️ 2022.09.07 - quick updates

 
Quick updates for today because it’s late.
 
Started on the Process window, and added a window manager to switch between windows with hotkeys.
 
notion image
 
The Process page shows the image nice and big, includes a CMY histogram (that I want to tweak some more), and will have the main 3 process tools (film base, sample crop, and sauce).
 
Fixed my p3orm library to finally fully support SQLite, added a full suite of tests, and I’m getting ready to publish version 0.6.0, pending updated docs. The main issues here were how the IN clause was being added, as well as the order of parameters/arguments getting out of whack with UPDATE queries. Now I can update the Library page to more efficiently fetch folders/images.
 
I played around a bit with Nuitka for packaging the app. There were some errors, specifically around PIL, I think, so I’m going to have to look into that further. Would suck to finish this up and then not be able to package it as a standalone app for distribution.
 

 

🎞️ 2022.09.05 - gallery progress

 
Been hard at work this past weekend learning more of Qt’s general usage and design patterns. I’ve gotten a very rough skeleton of the gallery page done, though, so that’s exciting!
 
notion image
 
Now, it’s not pretty, but it’s got some functionality and I’m finally grasping how to use Qt.
So far, the Catalog screen
  • Imports (with a sick progress bar) any folders, their children, and contained images
  • Persists them in a SQLite database
  • Displays the images of any selected folder and its children
 
Importing images and fetching images for display are both done in a separate thread, so as not to block the GUI. During import, I connect a signal from the background thread to a slot in the GUI thread in order to update the progress bar. I pre-walk the directory to get a count of all images, and then walk it a second time to do the actual processing. Since the initial walk is practically instant, the user sees an immediate indication that the action took place and the progress bar can be reasonably accurate. Once done, I send a signal to the GUI thread, which kicks off another background thread that fetches all folders, emits a signal with the results, and the GUI thread updates the file explorer QTreeView.
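In sketch form (hypothetical names, not the actual Ansel code), the pattern looks like this: count first so the progress bar has a denominator, then emit a progress signal per processed image and let Qt’s queued connections deliver it to the GUI thread.

from pathlib import Path

from PySide6 import QtCore

class ImportWorker(QtCore.QThread):
    progress = QtCore.Signal(int, int)  # (images done, total images)
    done = QtCore.Signal()              # named to avoid clashing with QThread.finished

    def __init__(self, root: Path):
        super().__init__()
        self.root = root

    def run(self):
        # first walk: just count, so the progress bar has a denominator right away
        images = [p for p in self.root.rglob("*") if p.suffix.lower() in {".jpg", ".tif", ".dng"}]
        total = len(images)

        # second walk: the actual (slow) work - previews, catalog rows, etc.
        for i, path in enumerate(images, start=1):
            self.import_one(path)          # hypothetical per-image import
            self.progress.emit(i, total)   # delivered to the GUI thread's slot

        self.done.emit()

    def import_one(self, path: Path) -> None:
        ...

# worker.progress.connect(progress_bar_slot) keeps the GUI responsive while importing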
 
Next up, will be selecting an image and opening it up in the Process page.
 
As I’ve said, this page is nowhere near done, and is just a skeleton. There are two categories of changes this page will require: cleanups/optimizations and feature/UIX improvements.
 
In terms of cleanup, optimization, and bug fixes the main things I want to tackle are:
  • I want to optimize importing images. With a large import (my entire “Scans” folder of 1500+ images), each image takes ~0.25 seconds to complete, so the whole thing takes a while. The bulk of this time is spent generating the preview image. I will have the process thread kick off further threads for image resizing.
  • When showing images for the gallery, I currently need to hackily fetch all the folders and all the images, the retrieval of which takes some time. I believe that it’s due to a bug in p3orm that doesn’t correctly construct parametrized queries for SQLite. I’ll need to fix this so I can more efficiently fetch just the folders I want.
 
For feature/UIX improvements
  • I need to make things look pretty. I’m going to ask for someone with product design skills to help me come up with a theme, palette, and overall design system.
  • Toolbar on the right side - image and folder metadata like film type, format, etc; batch processing tools; exporting; and more.
  • Navigation between screens and a proper menubar (particularly for macOS)
 

 

🎞️ 2022.09.02 - foundation

 
Just going to be spitballing here today, mostly for myself, to figure out how I want to structure things. I’ve spent the past couple of days poring through Qt documentation to learn all about the tools that PySide6 provides - and it’s a lot. I thought I’d have to build out my own solutions for settings persistence, passing off work to threads, a model/view/data framework, etc. Turns out Qt has really powerful solutions for all of these, and more that I’m still discovering. It’s really cool and comforting to know that I won’t have to build these all out myself, but it does introduce the hitch that I’ll likely be (equally) slowed down by having to learn the Qt way of doing things. Not a problem, as I’m sure the folks behind Qt have figured these things out over the years better than I would for my first time building a desktop app.
 

Initialization

Upon launch, first thing the app must do is ensure persisted settings exist and are set to sane values. Next, we’ll have to do the same with the catalog db, creating it and its schema if it doesn’t exist. Only then can we load up the gallery view and populate it with image previews.
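A minimal sketch of that startup order (assuming QSettings for preferences and a plain sqlite3 connection for the catalog; the table layout here is just illustrative):

import sqlite3
from pathlib import Path

from PySide6 import QtCore

def ensure_settings() -> QtCore.QSettings:
    # QSettings persists to the platform-native location (plist/registry/ini)
    settings = QtCore.QSettings("raf", "film-editor")  # org/app names are placeholders
    if not settings.contains("catalog_path"):
        settings.setValue("catalog_path", str(Path.home() / "film-catalog.db"))
    return settings

def ensure_catalog(path: str) -> None:
    # create the catalog schema if it doesn't exist yet (tables are illustrative)
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS folder (id INTEGER PRIMARY KEY, path TEXT UNIQUE)")
    conn.execute("CREATE TABLE IF NOT EXISTS image (id INTEGER PRIMARY KEY, folder_id INTEGER, filename TEXT)")
    conn.commit()
    conn.close()

settings = ensure_settings()
ensure_catalog(settings.value("catalog_path"))
# ...only then load the gallery view and populate it with previews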
 

Long running work

I’ll have to set up a generic (probably) QThread-based worker class. My thinking is I’ll set it up to take any arbitrary function/coroutine to run and pass back data to the main GUI thread with result, error, and progress signals. I’m leaning toward QThreads over regular Python threads because they have support for Qt-specific workings. While I’m not 100% sure where I’ll use them, I’m willing to take the tradeoff of potentially slightly higher overhead. Ultimately, QThreads and Python threads both wrap the same underlying OS threads, so it’s not like I’m choosing between two vastly different solutions.
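Something along these lines (a sketch of the pattern, not committed code): a QThread subclass that wraps an arbitrary callable and reports back through result, error, and progress signals.

import traceback
from typing import Any, Callable

from PySide6 import QtCore

class Worker(QtCore.QThread):
    result = QtCore.Signal(object)
    error = QtCore.Signal(str)
    progress = QtCore.Signal(int)

    def __init__(self, fn: Callable[..., Any], *args: Any, **kwargs: Any):
        super().__init__()
        self.fn = fn
        self.args = args
        self.kwargs = kwargs

    def run(self):
        try:
            # the wrapped callable gets a progress callback it can call with 0-100
            value = self.fn(*self.args, progress_cb=self.progress.emit, **self.kwargs)
        except Exception:
            self.error.emit(traceback.format_exc())
        else:
            self.result.emit(value)

# usage (load_previews is a hypothetical function that accepts progress_cb):
# worker = Worker(load_previews, folder)
# worker.result.connect(gallery.show_previews)  # slot runs back in the GUI thread
# worker.start()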
 
For now, I’m working with a really shoddy UI, bunch of hard-codes, little preferences/configurability. Basically, trying to get
 

 

🎞️ 2022.08.31 - sqlite support in p3orm

 
Well, that wasn’t too painful. I’ve wrapped up adding support for SQLite (with aiosqlite as the driver) in p3orm. I’ve not yet published a new package version as I’ll need to first add thorough testing and update the documentation, but I can continue on building the film editor off the master GitHub branch of p3orm.
 
If you’re interested in checking out p3orm for your async ORM needs, it’s available on PyPi. If you want to try it out with SQLite support you can pip/poetry it from GitHub.
 

 

🎞️ 2022.08.30 - slight change of plans

 
Not a long update today, as I’ve been doing a bit of rethinking on my strategy for when I release this project. I’ve come to realize that a web-based solution will not be the best way to distribute this image editor, particularly because most people who use it may not want to be forced to store their content in the cloud (as my original direction would have necessitated), nor to pay the recurring cost that would likely come with that. Instead, I plan on building out the full-featured editor as a desktop app I’ll ship for macOS, Windows, and Linux.
 
I’ve looked into solutions for building native desktop apps, and I’ve settled on Qt for Python, specifically PySide6 (as opposed to PyQT due to PySide6 having a more generous LGPL license).
 
Some of the other options I’ve considered (and ruled out) are
  • electron or nw.js - fairly bulky, but also would require figuring out how to simultaneously spin up a Python server to do the image processing and that just seemed messy
  • Eel - actually not super against this one, but I wanted to explore more than web technology for UIs (though I see myself coming back to this in the future)
  • Tkinter - just looks ugly
  • Kivy - still not huge on kvlang (lack of intellisense, mostly) and also, because it’s basically all drawn with OpenGL and doesn’t use native UI bindings, it won’t look truly native on each platform
  • PyQT - has an unfavorable-for-commercial GPL v3 license
 
I’ve already ported the existing project I’ve had to PySide6 over the past two days (although it doesn’t look as pretty) and going forward I’ll be focusing on building out the basis of the remaining UI flows - specifically DAM and Editing, before I start refining them and Processing.
 
That, however, brought me to the realization that I’ll need to figure out a solution for storing image and library metadata for the Digital Asset Management piece. I’m pretty against the Darktable solution of a sidecar .xmp file for each image, and much more in favor of a singular library file like Lightroom has.
 
SQLite is the perfect solution for this. Unfortunately (as I’m sure you’ll eventually come to realize) I don’t like the state of ORMs within Python. That’s why I built p3orm, to have a Postgres ORM for my web projects. I never thought I’d need to have p3orm support non-Postgres drivers, as I’m a web developer through and through, but alas here we are. Before I continue on with building out the image editor, I’ll first be adding sqlite support (with aiosqlite as my driver) to p3orm. I’ve made some small foundational changes to p3orm and should have this wrapped up with a new version release in a couple of days.
 
P.S. I need to figure out pagination for this devlog - it’s getting kinda big for just one page, especially since it’s all SSR’d right now
 

 

🚴 2022.08.28 - CitiBike, an addendum

 
 
Sadly, they no longer provide Bike ID in recent datasets 😕. It would’ve been really cool to, for example, follow along with a single bike across this past month. Though, up until (at least) 2016, they did include it, so there may still be something worthwhile there, just not as current.
 

 

🚴 2022.08.28 - CitiBike

 
I live in NYC. I’m sure many of you have heard a tale or stereotype of someone’s taxi getting stolen out from under them by someone else in a rush. That’s never happened to me… with a taxi. But goddamnit it’s happened to me so many times with a Citi Bike dock.
 
I’m really curious about what Citi Bike trends look like over the course of a day and week. I’m interested to see how work commutes change the bike distribution across the boroughs each morning and afternoon. More notably for my purposes, though - I want to know how long on average I’d have to wait for a dock to open up if a station is full.
 
So, in order to do this, I set up a little script to record the number of bikes, e-bikes, and docks at each of the 1703 Citi Bike stations. It currently runs as a daemon every 5 minutes, and I’ll be letting it run for about the next week or two. Once I’ve collected enough data, I’m going to try out some fun data visualizations - so, stay tuned for that! Unfortunately, this API (as far as I could tell) doesn’t return the IDs of bikes at each station. It’d have been super cool if I could actually track the migration of individual bikes. I’ll dig through their GraphQL schema some more to see if it’s at all possible.
 
You can check out the project on my GitHub here. I’ll be updating it with data-vis tools/scripts, the raw data I collect, and of course the visualization results themselves.
 
The “scraper” is really simple and only took a few minutes to put together since, luckily, you only need to make one HTTP request to CitiBike’s GraphQL API.
 
import asyncio
from datetime import datetime, timedelta

import httpx
import uvloop
from p3orm import Porm

from citibike.models.db import Run, Station, StationStatus
from citibike.models.gql import CitiBikeResponse
from citibike.settings import Settings

request = httpx.Request(
    "POST",
    "https://account.citibikenyc.com/bikesharefe-gql",
    json={
        "query": "query {supply {stations {stationId stationName siteId bikesAvailable ebikesAvailable bikeDocksAvailable location {lat lng}}}}"
    },
)

run_count = 1


async def run():
    global run_count

    run_time = datetime.utcnow()
    print(f"""starting run {run_count} @ {(run_time - timedelta(hours=4)).strftime("%c")}""")

    await Porm.connect(dsn=Settings.DATABASE_URL)

    run = await Run.insert_one(Run(time=run_time))

    async with httpx.AsyncClient() as client:
        response = await client.send(request)

    data = CitiBikeResponse.parse_obj(response.json())

    for cb_station in data.data.supply.stations:
        station = await Station.fetch_first(Station.citibike_id == cb_station.station_id)

        if not station:
            station = await Station.insert_one(
                Station(
                    citibike_id=cb_station.station_id,
                    site_id=cb_station.site_id,
                    name=cb_station.station_name,
                    latitude=cb_station.location.lat,
                    longitude=cb_station.location.lng,
                )
            )

        await StationStatus.insert_one(
            StationStatus(
                station_id=station.id,
                bikes_available=cb_station.bikes_available,
                ebikes_available=cb_station.ebikes_available,
                docks_available=cb_station.bike_docks_available,
                run_id=run.id,
            )
        )

    await Porm.disconnect()


async def daemon():
    global run_count

    while True:
        asyncio.ensure_future(run())
        await asyncio.sleep(5 * 60)
        run_count += 1


if __name__ == "__main__":
    uvloop.install()
    asyncio.run(daemon())
 
I use a loop to spit out asyncio futures that run in the background and wait 5 minutes before starting the next one - this way I get an exact 5 minute interval between each run, regardless of how long it takes for run() to complete each time. I’m using p3orm as my ORM of choice (give it a try if you agree with its philosophy) to persist the station and status of each station.
 
I’ve got it running in a simple tmux session I’m keeping open on my 7 year old DigitalOcean droplet. It’d have been more professional to set it up as a proper systemd service but, tmux was quicker and it’s like 2AM. Looking forward to getting some insights on this in a couple of weeks and spitting out some sweet r/DataIsBeautiful GIFs.
 

 

📓 2022.08.27 - devlog CDN

 
Notion’s not a great image CDN, as its images are just stored in S3 at full size. This means the images I upload here take pretty long to fetch and draw to the screen. Because of this, I spun up a quick “CDN” to stand in front of my uploaded images.
 
It’s just a simple FastAPI route hosted on Vercel that intercepts any image request and recompresses it to 80%-quality JPEG before caching it in memory and returning it. It reduces size ~10-fold for some of the larger screenshots I’ve posted and significantly speeds up fetching.
 
The app itself is super simple.
 
from io import BytesIO

import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from PIL.Image import open as pil_open

app = FastAPI()

MEMORY: dict[str, BytesIO] = {}


@app.get("/")
async def get_resource(image_url: str):
    if image_url in MEMORY:
        buffer = MEMORY[image_url]
    else:
        async with httpx.AsyncClient() as client:
            response = await client.get(image_url)

        image = pil_open(BytesIO(response.content))
        image = image.convert("RGB")

        buffer = BytesIO()
        image.save(buffer, format="jpeg", quality=80)

        MEMORY[image_url] = buffer

    buffer.seek(0)
    return StreamingResponse(buffer, media_type="image/jpeg")
 
Unfortunately, there’s still ~1s of waiting before the image is actually served by the FastAPI app.
notion image
I’ll look into that another time.
 

 

🎞 2022.08.26 - basic processing ✅

I’ve made some progress on the film editor this past week. This includes a first pass at the UI for the process page (which includes uploading an image and some levers to manipulate) as well as the main /process websocket that contains much of the business logic of converting an orange image to something 80% of the way there. There’s certainly a good amount of tweaking I have to do to the pixel maffs in order to improve the color science, but I’ve built up a good base. I’ll go into some more details of my “color science” later.
 
The UI is pretty minimalist so far, and I’m definitely going to have to do a full pass of making a nice UI once everything’s in place, but it’ll suffice so far.
 
notion image
 
So what’s this do? Temporarily, there’s the ability to upload a single image on this page - in the future, you’ll have the ability to import a bunch of images (like a whole roll) in the DAM piece, kind of like Lightroom, before picking which negative to process from there. Once an image is selected, a websocket is established with the server, and the image is uploaded as raw bytes, stored in the websocket session, naively inverted, and returned to the UI.
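The shape of that endpoint is roughly this (a stripped-down sketch, not the real /process handler, which stays open and re-renders on every recipe change):

from io import BytesIO

import numpy as np
from fastapi import FastAPI, WebSocket
from PIL import Image

app = FastAPI()

@app.websocket("/process")
async def process(websocket: WebSocket):
    await websocket.accept()

    # first message: the raw negative, kept in memory for the whole session
    raw = await websocket.receive_bytes()
    negative = np.asarray(Image.open(BytesIO(raw)).convert("RGB"))

    # naive inversion as the starting point
    positive = 255 - negative

    buffer = BytesIO()
    Image.fromarray(positive).save(buffer, format="JPEG", quality=95)
    await websocket.send_bytes(buffer.getvalue())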
 
Currently there are 3 sets of settings to process the image
  1. Film base color selection which the server uses to remove the color cast on the image
  2. Cropping out the film border for the server to get the right sample to do its color science on
  3. The colloquially named “sauce”, which controls the cyan, magenta, and yellow presence in the image (I’ve broken these out into global, highlight, midtone, and shadow changes)
 
I’m aiming to make the first two sections smarter by having the server figure out where the frame border is to be able to 1) pick the film base color and 2) crop it out automatically. Then, these would be optional and only have to be tweaked by the user if the server makes a mistake.
 
The actual color science right now is very naive. Once the film base color is “subtracted”, the sample crop is used to determine where all the image content is (i.e. ignoring the frame border). Then, the image channels are normalized to fill the whole 0..255 range of pixel values, which helps correct for the different sensitivities of each of the red-, green-, and blue-sensitive layers in the film. This, however, is not ideal as 1) it doesn’t by itself produce the best looking results and 2) it really struggles when an image is primarily either a single color or a single tonal range (i.e. really bright, or really dim).
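In sketch form (hypothetical code with made-up names), that normalization step stretches each channel to the full range using statistics taken from the sample crop only:

import numpy as np

def normalize_channels(image: np.ndarray, sample: np.ndarray) -> np.ndarray:
    # stretch each channel to span 0..255, using min/max measured on the sample
    # crop so the frame border doesn't skew the statistics
    out = image.astype(np.float64)
    for c in range(3):
        lo = float(sample[..., c].min())
        hi = float(sample[..., c].max())
        out[..., c] = (out[..., c] - lo) / max(hi - lo, 1.0) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)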
 
I’m pretty proud of how I did the luminosity masking for the “sauce”. The cyan, magenta, and yellow channels are separated into three np.ndarrays. In order to figure out how much to increase/decrease the intensity of each color at a certain value, I use a simple gamma function over 0..255, with values clamped between 0..1, acting as a multiplier. Then, that is bitwise-multiplied by one of three logistic functions, corresponding to either highlights, midtones, or shadows.
shadows mask
midtones mask
highlights mask
With these bitwise multiplications, all values of the gamma function that apply outside the luminosity mask (like ~85..255) are scaled to 0. I use a logistic function rather than a step function in order to smooth out the transition between tones. I’m considering lowering the growth rate of the functions even more to further smooth out the transition.
 
I also need to explore a different function for actually scaling the intensity of the colors, as gamma does not play well with luminosity masks. For example, a gamma function that increases the intensity of a color will have little to no effect if targeting only highlights.
 
One last thing I wanted to write about for now is performance. I already mentioned how this runs through a websocket so that data can be pushed from the server and so we can keep the image in memory for quick access at every change of the recipe. The image that is stored for the duration of the session, and that’s edited and shown to the user, is scaled down to 1000px on its long edge and is returned as a 95% quality JPEG. With these compromises (which, for getting a sense of color accuracy, are totally passable), changes to the image take only ~0.08 seconds to process. On average, it takes only a tenth of a second for changes to be reflected on the user’s screen, which makes this whole process feel very snappy. Because of this average time, edits sent over to the server are throttled to run once every 200ms, striking a compromise between not overloading the server and still making the experience feel snappy (it does). The user can, however, choose to rerun the current processing recipe and return the full quality lossless image for closer inspection.
 
I’m thinking about what to name this once it’s launched. I’ve been playing around with names derived from the RA-4 process - Kodak’s chemical process for printing a color negative onto photosensitive paper in the darkroom.
 
I’ll be working on doing more research on negative color inversion and the color science behind similar software like Negative Lab Pro, Silverfast, and Noritsu and Frontier scanners. Additionally, I’ll start working on the DAM aspect of the project so I can get a head start on reorganizing my own film scan library.
 
‘til next time 👋
 
 

 

 🎞 2022.08.17 - film scan editor

I’m really into film photography. I have recently switched to scanning my negatives with a (dedicated) digital camera rather than my film scanner (which I sold off, so that’s no longer an option). Before going into some more context, it might be useful for those who have no idea what the hell I’m talking about to understand, well, what the hell I’m talking about.
 
If you remember back in the ol’ days, before digital cameras, you took pictures on film. And when you got them developed, what you’d get back is a roll of images that looked kinda like the below image on the left. It was Sunny-D flavored and everything was inverted (black was white, green was magenta, blue was yellow, and red was cyan).
scanned film negative
low fidelity color corrected image
So, high level, I wanna build an app where I can toss in these scans and edit them to make them pretty for the ‘gram.
 
I’m using Python to do the image processing, and before you ask, it’s plenty fast enough. I’m using a combination of numpy, ImageMagick, and OpenCV for most (if not all) of the actual image data manipulation, so really this is more of a C/C++ project glued together with some Python.
 
I was torn on how to tackle the UI. Initially, I wanted to bundle a Python GUI framework into a native (macOS, Linux, Windows) desktop app, but I found that there weren’t any suitable solutions. Options like TKinter and PyQT just seem so outdated, Kivy is cool but I really dislike using Kivy Language to model the UI (even with the VSCode plugin), and Toga doesn’t let you display images from memory or even blit textures directly to a canvas.
 
And then I wondered… paint.net, figma, and a bunch of other image-y apps run in the browser - why couldn’t this?
 
Well, folks, it can.
 
I’ll get into the nitty-gritty of the image processing and UI another time, so in the meantime I’ll tell you about the quick test I did the other day. As a proof-of-concept, I set up a janky Python service running FastAPI and a dirty UI which let me slide around a couple sliders and dial a couple of dials to upload and edit an image. The UI and server talked to each other through a websocket - sending binary data back and forth - and lo and behold, it worked like a charm. There was little perceivable delay, totally passable for the unoptimized garbage I put together, and I was actually able to edit an image!
 
“Why websockets?” you may be asking? A websocket is a persistent, stateful TCP connection. So, for any incremental edit you make to the image, the server can keep a bunch of the state in memory (which is way faster than having it stored on disk/cache/db) and you save the time it’d take to establish a new TCP connection with HTTP. Also, because websockets are bidirectional, I can do cool stuff like quickly show a lower-resolution preview to the user, and then progressively load in higher and higher quality previews as the server churns them out.
Let’s talk features.
  1. Digital asset management (DAM). I should be able to upload a bunch of negatives, have them be treated like a cohesive roll of film (tagging things with EXIF/metadata like what camera I used, what film this is, what format, etc.)
  2. Non-destructive editing. I don’t want the changes I make to be baked into the image. Instead, all changes, no matter how many weeks later I come back, should be easily undo-able without quality/data loss of the image. This also means support for raw image formats straight from the camera (thank you rawpy).
  3. Batch processing. If I process a frame from a roll, it stands to reason that I should be able to apply those base settings to all the other images from that roll. This should get me 90% of the way with all the images in a roll with minimal effort.
  4. Negative conversion feature. i.e. the ability to turn an orange inverted image into something that looks passable.
  5. Additional editing features. To start, I’m thinking at least:
    1. blemish (read: dust/scratch) removal
    2. (AI powered) sharpening
    3. HSL/color tweaking
    4. And of course, the basic stuff like cropping, exposure, contrast, etc.
Before I sign off, I want to quickly mention my motivation for this project. You must be wondering, “Surely there are some tools that already do this, the film photography community is growing like crazy. There’s dozens of us!” Well, you’d be right, but I don’t like any of the current tools. I used to use VueScan and Silverfast 8 with my film scanner, which were alright but lacked any DAM and advanced photo editing features. This meant I’d always have to move to my photo editor (Affinity Photo), so I was manually managing all the storage, tagging, and collating of my images, and was losing data along the way with all the destructive editing. There’s also a really cool and popular tool called Negative Lab Pro, which is an Adobe Lightroom plugin. It does really well with color reproduction, has a bunch of neat features, and because it’s a Lightroom plugin, I get the DAM and non-destructive editing features out of the box. But it still doesn’t have all the features I want, the workflow (at least for me) could really be streamlined, and Lightroom has a bunch of features I don’t want. Also, there’s no way I’m shelling out more money on this hobby for Lightroom and NLP.
 
So instead of paying a few bucks for software, I’ll be spending countless hours over the next few weeks building some myself. Because why buy a wheel when you can make yourself a slightly shittier one for twice the cost.
 

 

📓 2022.08.16 - initial commit

I work on a lot of side projects of all shapes and sizes and I’ve been meaning to create a devlog, but I was stuck on the question of “how” to build it. Using something like Medium or handwriting HTML/JSX seemed either too blasé or too tedious. I was intrigued by options like Hugo and Gatsby, but I wanted something that neatly tied into my existing personal site.
 
I’ve been a fan of Notion for a few months now, particularly its rich editing features and display components, so I decided to use Notion as my CMS. Turns out (of course) there’s people already doing this, and there are some great libraries (notably, but not exclusively) coming out of NotionX.
 
They’ve created a set of libraries that can
  • Fetch content from a Notion page using their “private” API and
  • Render (virtually) every available block with a React component
 
Setting this up was really straightforward. I was already running this page with Next.js on Vercel. Is that overkill? Yeah, probably. But I like playing around with frameworks and I was tired of hosting this on a self-managed VPS. Anyway, back to the topic at hand, getting this set up with my current stack was dead simple, here’s the code for my devlog.tsx page
 
import { GetStaticProps, GetStaticPropsContext } from 'next'
import { NotionAPI } from 'notion-client'
import { ExtendedRecordMap } from 'notion-types'
import { NotionRenderer } from 'react-notion-x'
import NextImage from 'next/image'
import NextLink from 'next/link'
import styled from '@emotion/styled'

const devlogPageId = 'redacted even though it doesnt really matter'

interface Props {
  recordMap: ExtendedRecordMap
}

export default function BlogPage (props: Props): React.ReactElement {
  return (
    <Wrapper>
      <NotionRenderer
        recordMap={props.recordMap}
        fullPage={false}
        darkMode={false}
        rootPageId={devlogPageId}
        rootDomain='raf.al'
        previewImages
        components={{ nextLink: NextLink, nextImage: NextImage }}
        mapPageUrl={(pageId) =>
          pageId.replace(/-/g, '') === devlogPageId
            ? '/devlog'
            : `/devlog/${pageId.replace(/-/g, '')}`}
      />
    </Wrapper>
  )
}

export const getStaticProps: GetStaticProps = async (context: GetStaticPropsContext) => {
  const notion = new NotionAPI({
    activeUser: process.env.NOTION_USER,
    authToken: process.env.NOTION_TOKEN
  })

  const recordMap = await notion.getPage(devlogPageId)

  return {
    props: {
      recordMap: recordMap
    },
    revalidate: 10
  }
}

const Wrapper = styled.div`
  min-height: 100vh;
  width: 100%;
`
 
This fetches the contents from Notion’s API server-side using getStaticProps. We use getStaticProps because this content will be the same for everyone, which means we can easily cache the built page. Additionally, this function returns revalidate: 10, which tells Next.js that the cached built page should be invalidated after 10 seconds and updated in the background.
 
That’s it for now. Tomorrow I’m back to work on my film scan editor.