How often do you convert a UIImage into a Data object? It seems like a relatively straightforward task: just use UIImageJPEGRepresentation and you're done.
After doing this I started seeing memory spikes and leaks, which got me thinking about how I could better profile the different options for performing this conversion. If you want to follow along, you can create your own Swift Playground using this gist.
The first step was looking at the different ways you can convert a UIImage into Data. I settled on the following three approaches.
Out of all the options this is the most straightforward and widely used. If you look at the testing blocks later in the post, you can see I simply inlined UIImageJPEGRepresentation with the test suite's compression ratio.
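The inlined call is just the one-argument-plus-quality form of the API (the image name here is a stand-in for whatever test image the playground loads):

```swift
import UIKit

// Direct conversion: the compressionQuality argument matches the
// test suite's ratio (1.0 or 0.9 in the tests later in the post).
let image = UIImage(named: "sample")!              // hypothetical test image
let jpegData: Data? = UIImageJPEGRepresentation(image, 1.0)
```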
UIImageJPEGRepresentation within an Autorelease Pool
Out of the box, UIImageJPEGRepresentation provides everything we need, but in some cases I've found it holds onto memory after execution. To determine whether wrapping UIImageJPEGRepresentation in an autoreleasepool has any benefit, I created the convenience method UIImageToDataJPEG2. This simply wraps UIImageJPEGRepresentation in an autoreleasepool closure as shown below. We later use UIImageToDataJPEG2 within our tests.
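A minimal sketch of what that helper looks like (the gist's version may differ slightly; Swift's `autoreleasepool` returns the value of its closure, which makes the wrapper a one-liner):

```swift
import UIKit

// Wraps UIImageJPEGRepresentation in an autoreleasepool so any
// autoreleased intermediates are drained as soon as the call returns.
func UIImageToDataJPEG2(_ image: UIImage, _ compressionQuality: CGFloat) -> Data? {
    return autoreleasepool { () -> Data? in
        UIImageJPEGRepresentation(image, compressionQuality)
    }
}
```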
Using the ImageIO Framework
The ImageIO framework gives us lower-level APIs for working with images. Typically ImageIO has better CPU performance than UIKit and other approaches; NSHipster has a great article with details here. I was interested to see if there was a memory benefit as well. The helper function below wraps the ImageIO functions in an API similar to UIImageJPEGRepresentation, which makes testing much easier. Keep in mind you'll need to handle image orientation yourself. For this example we just use top-left. If you are implementing this yourself, you'll want to read the API documentation available here.
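One way to sketch such a helper is with CGImageDestination writing into an NSMutableData (the function name is illustrative, and orientation is hard-coded to top-left, i.e. EXIF value 1, as noted above):

```swift
import UIKit
import ImageIO
import MobileCoreServices

// ImageIO-based JPEG encoding with a signature similar to UIImageJPEGRepresentation.
// Note: orientation is fixed to top-left; a real implementation would map
// image.imageOrientation to the correct EXIF value.
func UIImageToDataIO(_ image: UIImage, _ compressionQuality: CGFloat) -> Data? {
    guard let cgImage = image.cgImage else { return nil }

    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(data, kUTTypeJPEG, 1, nil) else {
        return nil
    }

    let options: [CFString: Any] = [
        kCGImageDestinationLossyCompressionQuality: compressionQuality,
        kCGImagePropertyOrientation: 1  // top, left
    ]
    CGImageDestinationAddImage(destination, cgImage, options as CFDictionary)

    guard CGImageDestinationFinalize(destination) else { return nil }
    return data as Data
}
```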
What about UIImagePNGRepresentation?
UIImagePNGRepresentation is great when you need the highest-quality image. The side effect is that it produces the largest Data size and memory footprint, which disqualified UIImagePNGRepresentation as an option for these tests.
For my scenarios it was important to understand how memory is impacted based on the following:
- Number of executions, i.e. the memory impact of calling an approach on one or many images.
- How the compression ratio impacts memory usage.
Image quality is an important aspect of my projects, so the tests were performed using compression ratios of 1.0 and 0.9. These compression ratios were then run using 1, 2, 14, 20, and 50 executions. These frequencies demonstrate when image caching and autorelease pool strategies start to impact results.
Testing Each Approach
I tested each of the above-mentioned approaches using the template outlined below. See the gist for the details of each approach.
- At the top of the method a memory sample is taken
- The helper method for converting a UIImage to a Data object is called in a loop.
- To make sure we are measuring the same resulting data across tests, we record the length of the first Data conversion.
- When the loop has completed the proper number of iterations the memory is again sampled and the delta is recorded.
There is some variability on how each approach is tested.
The implementation for each approach is slightly different, but the same iteration counts and compression ratios are used to keep the outcomes as comparable as possible. Below is an example of the strategy used to test the JPEGRepresentation with Autorelease Pool approach.
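The template steps above can be sketched roughly as follows. The function name is illustrative, and the memory sampler shown uses Mach's `task_info`, a common technique for reading resident size; the gist's sampler may be implemented differently:

```swift
import UIKit

// Resident-memory sample via Mach task_info (one common approach).
func memoryUsage() -> Int {
    var info = mach_task_basic_info()
    var count = mach_msg_type_number_t(
        MemoryLayout<mach_task_basic_info>.size / MemoryLayout<natural_t>.size)
    let result = withUnsafeMutablePointer(to: &info) {
        $0.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(MACH_TASK_BASIC_INFO), $0, &count)
        }
    }
    return result == KERN_SUCCESS ? Int(info.resident_size) : 0
}

// Illustrative test template for the autorelease pool approach.
func testJPEGWithAutoreleasePool(image: UIImage, iterations: Int, quality: CGFloat) {
    let startMemory = memoryUsage()                      // 1. sample memory up front
    var firstLength: Int?

    for _ in 0..<iterations {                            // 2. convert in a loop
        let data = autoreleasepool { UIImageJPEGRepresentation(image, quality) }
        if firstLength == nil {
            firstLength = data?.count                    // 3. record first Data length
        }
    }

    let delta = memoryUsage() - startMemory              // 4. sample again, record delta
    print("length: \(firstLength ?? 0), memory delta: \(delta) bytes")
}
```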
Below is the result broken down by iteration.
Results for 1 Iteration
Results for 2 Iterations
Results for 14 Iterations
Results for 20 Iterations
Results for 50 Iterations
I am sure there are a ton of optimizations that could be made to bring these numbers down. Overall, UIImageJPEGRepresentation wrapped in an autorelease pool looks to be the best approach. There is more work to be done on why the compression ratio has an inconsistent impact; my guess is that this is a result of caching within the test.
Although the ImageIO strategy was better in the single-execution scenario, I question whether proper handling of image orientation would reduce or eliminate its memory savings.
There are more comprehensive approaches out there. This is just an experiment using Playgrounds and basic memory sampling. It doesn’t take into account any memory spikes that happen outside of the two sampling points or any considerations around CPU utilization.
- Gist of the Swift Playground is available here