Optimizing a 6502 image decoder, from 70 minutes to 1 minute

by davikr on 9/29/25, 10:11 AM with 29 comments

by 6510 on 9/29/25, 3:15 PM

It is quite surreal to me that this was not the road taken. Optimizing software doesn't have the same potential as optimizing hardware, but I'd say 1/70 is significant. If thousands of people worked on this indefinitely, the time would drop to seconds. That code would also be completely incomprehensible. The argument that people should just buy a faster computer could just as easily have worked out the other way around: just write faster software. Going the hardware way gave us really, really readable code, which is great. The other direction, however, would have given us really, really cheap devices.

Receiving and sending a signal for [say] a chat application requires very little. It would be next to impossible to add images, word suggestions, or spell checkers. We could still bake mature applications onto dedicated chips. But until now those efforts have gone pretty much nowhere(?). I imagine one could quite easily bake a mail client or server, a torrent client, IRC, perhaps even a GUI for windowed applications. Maybe an error console?

by flanked-evergl on 9/29/25, 1:44 PM

If I have the "Without interpolating, we can clearly see we only have half the pixels." image entirely on screen, using Chrome, KDE with X11 on Ubuntu 24.04, then it makes my whole screen change colour. Everything becomes slightly darker or something. Very odd. I will try it on another computer.

by JKCalhoun on 9/29/25, 2:27 PM

I kind of enjoy seeing these posts from time to time on HN. I thought it was my age (I remember this hardware), but I think a lot of engineers are enjoying practicing their craft in a purer environment, with so few (or no) layers of abstraction underneath.

Refreshing at times, isn't it?

by anyfoo on 9/29/25, 9:19 PM

Isn't it crazy how the image where every other pixel is black (labeled "Without interpolating, we can clearly see we only have half the pixels") sort of looks to have higher fidelity than the one after it, where the black pixels have been removed, which now looks pixelated?

And yet both images have the exact same amount of information, because all pixels that have been removed are simply black.

The effect is so pronounced that I wonder whether there wasn't any additional processing between the two images. Compare the window with the sky reflection between the two: in the image without black pixels, it looks distorted and aliased, while in the one with them, it looks pristine.

If only the black pixels were actually removed (and the result nearest-neighbor scaled), I think the black pixel grid is a form of dithering, although neither random nor the error-diffusion kind one usually thinks of when hearing "dithering". There is no error being diffused here, and all the added pixels are constant black.
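To make the comparison concrete, here is a minimal sketch of the two renderings as I understand them (my own code, not from the article; the file names and the choice of NumPy/Pillow are assumptions):

    # Sketch: build (a) the "half the pixels, rest black" image and
    # (b) the "black columns dropped, nearest-neighbor stretched" image.
    # Assumes Pillow and NumPy; "input.png" is a placeholder path.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("input.png").convert("RGB"))
    h, w, _ = img.shape

    # (a) Keep every other column, paint the removed columns black.
    with_black = img.copy()
    with_black[:, 1::2, :] = 0

    # (b) Drop those columns entirely, then stretch back to full width
    #     by duplicating each remaining column (nearest-neighbor).
    half = img[:, 0::2, :]
    stretched = np.repeat(half, 2, axis=1)[:, :w, :]

    Image.fromarray(with_black).save("half_with_black.png")
    Image.fromarray(stretched).save("half_stretched.png")

If the article's second image really is just (b), both outputs carry exactly the same column samples, and any perceived difference comes from how they are displayed and perceived rather than from extra information.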

Maybe the black pixels allow the mind to fill in the gaps (essentially shifting the interpolation that was also removed, prior to the black pixel image, onto our brain). It is known that our brain interpolates and even outright makes up highly non-linear stuff as part of our vision system. A famous example of that is how our blind spot, the optic disc where the optic nerve exits the retina, is "hidden" from us that way.

The aliasing would "disappear" because we sort of have twice the number of samples (pixels), leading to twice the Nyquist frequency, except that half of the samples have been made up by our vision system through interpolation. (This is a way simplified and crude way to look at it. Pun unintended.)
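For reference, the standard one-dimensional relation being leaned on here (my own addition, not from the article): with sample rate $f_s$, i.e. pixel pitch $\Delta x = 1/f_s$, the highest representable spatial frequency is

\[ f_{\text{Nyquist}} = \frac{f_s}{2} = \frac{1}{2\,\Delta x}, \]

so doubling the number of pixels across the same width, even if half of them are effectively filled in by the viewer, doubles $f_{\text{Nyquist}}$.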

But before jumping to such lofty conclusions, I still wonder whether nothing more happened between the two images...

by JSR_FDED on 9/29/25, 11:53 AM

Good reminder to do less, rather than the same thing but optimized.