The wild broccoli 🥦
Some slight reorganization
Mood: 🥱 sleepy
So this was probably imperceptible to anyone who read my site, but I actually had the flex property mis-specified in my CSS file this whole time, which meant that I couldn’t write blog post titles that were too long… or else something bad would happen. Something evil. Because of this I’ve been keeping my titles very short just in case.
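For anyone curious, here’s a rough sketch of the usual shape of this bug (the actual selectors and values on my site differ, so treat the names as made up):

```css
/* Hypothetical sketch -- class names invented for illustration.
   A flex child defaults to min-width: auto, so a long title
   refuses to shrink and instead overflows its container. */
.post-header {
  display: flex;
}
.post-title {
  flex: 1 1 auto;   /* grow and shrink, rather than a fixed basis */
  min-width: 0;     /* allow the title to shrink below its content size */
  overflow-wrap: break-word;
}
```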
I have now fixed this. While I was at it, I also decided to divide up my blog entries by year and include the little mood emoji that I like to put on each post. I’m not sure if I want to put in a fallback mood for when I inevitably forget to put in an emoji, or if I want to have the Blank Spot of Shame instead.
It’s quite late, so I shan’t write very much else. I’ve been looking forward to Eurovision this year, as I have been for the past several years. There were a lot of rumors floating around about Azerbaijan’s entry being written by AI, which the Azerbaijani team fervently denied. But who knows, right? They could be lying. Or it could have been a malicious rumor spread just to hurt their prospects in the competition. In this sort of post-truth world, I was reminded of an essay I read about “AILOM”, short for “AI-induced loss of meaning”. In a way, the pervasiveness of AI-generated content has led us to no longer feel a sense of connection with the art and communications we experience. You might think that it just makes us feel disconnected from AI-generated “content”, which is fairly obvious, but the real trouble is how the pervasiveness and increasing fidelity make us lose faith in the humanity of everything else in this world.
Well, maybe it’s that, or maybe it’s just a new way to insult someone and say that their work is formulaic/generic/“slop”/uncreative/unskilled/ugly/etc.
Anyway, funny to have the two wolves inside me of “I hate what the pervasiveness of AI-pretending-to-be-human content is doing to our relationships with art, truth, and with each other” and “I’m literally taking a machine learning class so I can get better at computer vision classification problems in order to get back into radiology research and also sometimes look at the funny eigenfaces”. Well, I guess it really is all about what you use the technology for and how open you are about disclosing the use.
Like, I think it’s really cool that you can do a pretty basic linear algebra function on a set of 400 stock photos of George W. Bush and the vector that corresponds to the greatest variation in pixels will actually shape itself into a ghostly George W. Bush face… his spirit is trapped in there… But is that art? I guess it’s art the same way that emergent patterns in clouds can be art. I talk to a chatbot to revise my homework, and when I search for something on Google I read the overview. But I don’t want it to be my friend or my lover or any of that stuff. Even though I think it can be useful for all sorts of things and you can actually run it locally on a laptop without evaporating Lake Superior, I still don’t want the internet to become a sea of bots crowding out all the humans. And even though I’m a big fan of piracy online due to latent antisocial tendencies and cheapskateness, maybe the sort of piracy at scale by AI will have a chilling effect on human output. And generally I think people are using AI for the wrong things these days, as a novelty at best or at worst as a replacement for creativity and thinking and human connection.
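The ghostly-face trick can be sketched in a few lines of NumPy (a toy version on random pixels standing in for the 400 stock photos, since I’m not about to paste my homework dataset here):

```python
import numpy as np

# Toy sketch of the eigenfaces idea: stack each flattened photo as a row,
# subtract the mean face, and take the top right singular vector.
rng = np.random.default_rng(0)
faces = rng.random((400, 64 * 64))  # stand-in for 400 flattened 64x64 photos

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The right singular vectors of the centered matrix are the principal
# components; the first one captures the greatest variation in pixels.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenface = vt[0].reshape(64, 64)  # reshape back into image form -- the "ghost"
```

On random noise the result is just more noise, but run the same lines on real face photos and that first component really does arrange itself into a spectral face.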
(Very nobly) The good use case for AI is seeing if we can detect focal-onset epilepsy by using computer vision on heatmaps of the abdomen. Or for automatically tagging videos of the ocean for plastic trash and then tracking the trash to see where it comes from and where it ends up so people can intercept and recycle it. Or for making regression models more complicated. It’s all linear algebra. Everything I do is linear algebra. It’s fine when I do it. I’ll tell myself it’s fine when I do it, haha.