A Side Project!

It is a truth universally acknowledged that programmers are consumed by their side projects. This is certainly true of my husband, who can often be found tapping away into the early hours, working on his automated household reports, his home monitoring system, or his train-spotting app. It is one of several things that have made me feel like an alien in the world of coders. I didn’t seem to have the imagination to concoct a side project. (This probably speaks of a certain blithe satisfaction with life – my husband is always thinking of things he can improve, whereas I bumble on, naively content with existence.)

Nevertheless, I could see the many benefits of a side project. They’re fun, they improve your skills, and they can produce something valuable. So I kept an eye out for an opportunity.

I began to tentatively feel my way towards a computer vision project. In my PhD, I am applying AI to medical imaging, using machine learning techniques in an attempt to enable earlier diagnosis. This is absolutely fascinating, and, of course, very hard work. I thought it might be fun to apply computer vision skills to something a little bit more trivial.

Finally inspiration struck me, somewhat bizarrely, when I saw the following tweet:

If you are unfamiliar with the product, let me enlighten you: these are toilet rolls from Who Gives A Crap (excellent eco-friendly toilet paper by the way – we get the bamboo ones in this very picture). The composition of the photo and the fact that the rolls are individually wrapped in patterns mean that the subject is not necessarily immediately identifiable. I simply had to let AI have a go, to see what it came up with – would it see biscuits too? I could have used an online tool or demo, but I saw an opportunity to write some code. I adapted an online notebook (a nice way of writing code with commentary) from TensorFlow to produce the following AI-generated predictions:

You can see where the predictions are coming from here. The round shapes seem to have informed the predictions of “wall clock”, “spindle” and “necklace”. The box perhaps was the cause of the top “tray” prediction. But not a loo roll prediction in sight (suggesting that this sort of presentation of toilet roll was not in the training data… but more on that another time!).
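If you’re curious what the code roughly looks like, here’s a minimal sketch of that kind of notebook, using a pretrained ImageNet classifier from Keras (I’ve picked MobileNetV2 and a made-up filename purely for illustration; the original notebook may well use a different model):

```python
import numpy as np
import tensorflow as tf

# A pretrained ImageNet classifier - MobileNetV2 is small and quick to run
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Load the image at the size the model expects (224x224 for MobileNetV2)
img = tf.keras.utils.load_img("toilet_rolls.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)
)
x = np.expand_dims(x, axis=0)  # the model expects a batch of images

# Run the model and decode the raw scores into human-readable labels
preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=5)[0]:
    print(f"{label}: {score:.1%}")
```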

This was fun! I could see myself doing more of this! Could this be my coveted side project? I decided to write my own notebook based on a tutorial from LearnOpenCV, so that I could play with the code and run through any image that I wanted. Of course, the test photo had to be my favourite picture of our darling kitten, Pebble:

Excellent, the code was working! The AI model was pretty sure it was seeing a cat. “Quilt” also made sense because she can be seen sitting on a bed. I don’t think I’ll tell her about the “space heater” one…
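The notebook itself is really just that same pipeline wrapped up so that any image path can be thrown at it. Something like this little helper captures the idea (again only a sketch, with a name, defaults and placeholder file path of my own choosing rather than a faithful copy of the LearnOpenCV tutorial):

```python
import numpy as np
import tensorflow as tf

# Load the pretrained classifier once, then reuse it for every image
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(image_path, top=5):
    """Print the classifier's top predictions for a single image file."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        tf.keras.utils.img_to_array(img)
    )
    preds = model.predict(np.expand_dims(x, axis=0))
    for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=top)[0]:
        print(f"{label}: {score:.1%}")

# Any image will do (placeholder path): the kitten, the toilet rolls, whatever turns up next
classify("pebble.jpg")
```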

Soon, another tweet caught my attention:

I immediately wondered what AI would make of this optical illusion, and whether it too might be thrown by the grating, suggestive of a grill. I ran the picture through my notebook:

It wasn’t remotely fooled! This isn’t actually surprising, as humans are only deceived by a cursory glance, and the nature of these models is that they consider every pixel. Regardless, it was an interesting experiment.

I’ll be continuing this fun side project on GitHub, and explaining more of the concepts in future blog posts, so watch this space!

