Pixel Pal (PART 2): Gathering data for Pixel Pal

Posted on Sat 15 February 2020 in Linux, HiDPI, PixelPal

Pixel Pal is a project I have started whose purpose is to bring new life to old icon themes. Many old icon themes are in raster format and are not available for HiDPI screens (their resolution is too low). Currently the only way to use an old icon theme on a HiDPI screen is to resample the icons by the desired factor, but the result looks pixelated. The goal of this project is to use deep learning to upsample small icons into icons suitable for HiDPI screens.
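To make the problem concrete, here is a minimal sketch of that naive resampling using Pillow. The library choice, file names and scaling factor are mine for illustration, not something prescribed by the project:

```python
from PIL import Image

# Naive upscaling: every source pixel simply becomes a 2x2 block,
# which is why the result looks pixelated on a HiDPI screen.
icon = Image.open("icon_32x32.png")   # hypothetical input file
factor = 2                            # e.g. 2x for a typical HiDPI scale
upscaled = icon.resize(
    (icon.width * factor, icon.height * factor),
    resample=Image.NEAREST,           # nearest-neighbour: no new detail is created
)
upscaled.save("icon_64x64_pixelated.png")
```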

To solve this I will use artificial intelligence, more precisely deep learning. The idea is to teach a neural network how to upsample icons in a manner that takes the shape of the underlying icon into account; in other words, to do better than simply increasing the number of pixels. However, to teach a neural network you need lots of data. In our case the data is a collection of icons for which we have both a low-resolution and a HiDPI version.

Thankfully, open source gives us a lot of icon themes for which we have both. There are some requirements:

  1. The icon theme's licence needs to allow using the theme to make a derivative product. To be safe I will stick to open source licences.
  2. There needs to be enough data to teach a network. However, I will intentionally not gather too much data, so that I can explore data augmentation techniques.
  3. The data needs to split well into the various sets: training, validation, test and real world data.

Let me explain the splitting of the data a little more with an analogy. Training a network is a bit like taking a course at university. You get a lot of material throughout the course, and at the end there is a final exam. You are not allowed to see the final exam ahead of time.

The course material is your real training data, and the final exam is the test data. The point of the test data is to evaluate the real performance of the network. But the test data is still part of the course. I have another data set, the real world data, which is there for me to use the network for the first time on a real task; in other words, material that is not part of the course at all. The real world data will correspond to some old icon sets (OS/2, CDE).

Furthermore, the real training data will itself be split. To continue the analogy: to learn the course material and pass the final exam with a good grade, you take practice tests. This is done by splitting our real training data into a smaller training data set and a validation data set.

So to recap we have:

  1. Training data: used for training the network
  2. Validation data: used for evaluating the training
  3. Test data: used for evaluating the performance of the network
  4. Real world data: used to test the software on an actual use case
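As a concrete illustration, here is a minimal sketch of how such a split could be done in Python. The directory layout, file names and 70/15/15 ratios are assumptions for the example, not details fixed by the project; the real world icons (CDE, OS/2) are kept entirely outside this split:

```python
import random
from pathlib import Path

# Hypothetical layout: one directory of low-resolution icons whose names also
# exist as HiDPI renderings elsewhere.
icon_names = sorted(p.stem for p in Path("icons/lowres").glob("*.png"))

random.seed(42)            # make the split reproducible
random.shuffle(icon_names)

n = len(icon_names)
n_test = int(0.15 * n)     # held-out "final exam"
n_val = int(0.15 * n)      # "practice tests" used during training

test_set = icon_names[:n_test]
validation_set = icon_names[n_test:n_test + n_val]
training_set = icon_names[n_test + n_val:]
```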

Let me explain a bit more about the use of the real world data. I have chosen two really old icon themes for which no SVG versions are available. The network cannot learn from those icons (there is no high-resolution ground truth), yet they are exactly the icons I am most interested in upscaling. This data set allows me to judge, in a qualitative way, whether the project has been a success.

Because no SVG files are available, no official metric can be calculated on them, which is why we need a test data set with SVG icons. Using old icons with no SVG version will showcase whether deep learning is actually useful for reviving old icon sets. The two old icon themes are the official CDE icon theme and the icon theme from OS/2.
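To make the difference explicit, here is a small sketch of the kind of pixel-wise metric that requires a ground truth, using PSNR as an example. The post does not fix a specific metric, so the choice of PSNR and the file names are mine:

```python
import numpy as np
from PIL import Image

def psnr(prediction_path: str, ground_truth_path: str) -> float:
    """Peak signal-to-noise ratio between an upscaled icon and its HiDPI ground truth."""
    pred = np.asarray(Image.open(prediction_path).convert("RGBA"), dtype=np.float64)
    truth = np.asarray(Image.open(ground_truth_path).convert("RGBA"), dtype=np.float64)
    mse = np.mean((pred - truth) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Possible on the test set, where the SVG can be rendered at HiDPI size to get
# a ground truth; impossible for the CDE and OS/2 icons, where none exists.
```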