I like to include a lot of images on my websites. I am a photographer, after all. And I definitely don’t want the photos to look heavily compressed, but I also want the sites to be fast. While there’s valid disagreement about how obsessive you need to be about site speed, the reality is that there’s no downside to a fast website. Google likes it, and your site’s visitors like it.
But many images start out with a lot of redundant data that you can’t see. Some of it is metadata–text information about the file, such as edit dates, GPS coordinates, and the camera model and settings used to take the photo. But there’s also a lot of other data that can be safely removed with no or minimal impact on image quality. Removing it can drastically reduce a file’s size. And on a website, that means faster load times. Multiply that by all the images and graphics on a page, and it can make a huge difference.
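To get a feel for how much of a JPG is metadata, you can strip it yourself. Here’s a minimal sketch using the Python Pillow library (my own illustration–it’s not what JPEGmini uses). One caveat: re-saving through Pillow re-encodes the image, so unlike a tool such as jpegtran it isn’t a lossless strip.

```python
from PIL import Image

def strip_metadata(src, dst, quality=85):
    # Pillow drops EXIF/IPTC blocks on save unless you explicitly
    # pass them back in, so a plain re-save writes pixel data only.
    # Caveat: this re-encodes the JPG, which is itself lossy; a tool
    # like jpegtran can strip metadata without touching the pixels.
    im = Image.open(src)
    im.save(dst, "JPEG", quality=quality)
```

Comparing the filesizes before and after gives you a rough sense of how much weight the metadata alone was adding.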
So I try to squeeze as much redundant data out of the files as possible. One way I do that is with traditional image optimization, which strips out information that simply doesn’t need to be there. I strongly prefer lossless options when possible, so as not to sacrifice image quality any more than I have to, but there are lossy options as well.
The catch with lossy compression is that it discards information. And if it’s too aggressive, it can make your images look terrible–and the tiniest filesize isn’t worth that. Exporting a JPG at a quality setting of 100 will look great, but it’s going to end up with a very large file. Exporting the same file at a quality setting of 20 will result in a very small file, but the image will look really bad.
So it’s always a balancing act between the amount of lossy compression and filesize (and therefore the speed of your website). I tend to be very cautious when it comes to lossy compression.
But after extensive testing, I’ve now enabled a new kind of lossy compression on my sites. It’s not just a matter of cranking up the JPEG compression–this one involves proprietary secret sauce that somehow manages to crunch files down dramatically without any noticeable effect on quality.
JPEGmini has been around for quite a long time, and I’ve been using it for years. I originally posted this using their older version, but they’ve since come out with a new version (JPEGmini Pro 3), so I’ve updated this post accordingly. It’s fundamentally the same: there’s a slightly refreshed user interface, and there’s one new feature (more on that below). But overall, the new JPEGmini is a lot like the old JPEGmini.
JPEGmini is available in a few different forms. They used to host a web version as well, but that seems to have been deprecated. They do still offer a server-based option for developers that lets you host your own optimization server or run it as a browser-based tool.
But the version I’m focusing on here is the JPEGmini Pro desktop app. It comes in two flavors. One is the standalone app, which sells for $59. The other is a suite (for $89) that includes the standalone app as well as plugin versions that integrate into Lightroom, Photoshop, and Capture One. There’s also a free trial, so you can take it for a spin before shelling out for a license.
JPEGmini Pro 3
This is an extremely simple app to use. It’s basically a combination of a drop panel and a progress indicator. So you just drag and drop your images or folders of images onto the panel and let it do its thing. And you can batch process a lot at a time–I’ve put through folders of over 60,000 images at once. But it only works on JPGs–not on PNGs, GIFs, TIFs, or RAW files. (Actually, with version 3, that’s not entirely true anymore. It now works on HEIC images as well, although it converts them to JPG.)
There’s really not much to configure. You have no control over the optimization, and you really just have to choose whether to overwrite the original files or save the processed files somewhere else.
And unlike regular JPG compression, running the same image through again doesn’t result in any further compression or quality loss. Once you run an image through the processor, it’s done, and running it through again won’t do anything—any pre-optimized images will be skipped.
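JPEGmini evidently marks the files it has processed so it can recognize and skip them on subsequent runs. If you’re building your own optimization pipeline and want the same run-it-twice safety, one simple approach (my own sketch, not how JPEGmini does it) is to keep a cache of content hashes of files you’ve already processed:

```python
import hashlib
import json
from pathlib import Path

def file_hash(path):
    # Hash the file's contents, so renames don't cause reprocessing.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def optimize_once(path, optimize, cache_file):
    """Run `optimize(path)` only if this file's content hasn't been seen.

    `optimize` is any callable that compresses the file in place.
    Returns True if the file was processed, False if it was skipped.
    """
    cache_file = Path(cache_file)
    seen = set(json.loads(cache_file.read_text())) if cache_file.exists() else set()
    if file_hash(path) in seen:
        return False
    optimize(path)
    seen.add(file_hash(path))  # record the hash of the optimized output
    cache_file.write_text(json.dumps(sorted(seen)))
    return True
```

Because the hash of the optimized output is what gets cached, running the same folder through again just skips everything.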
How much benefit you get will depend on what settings the original file was saved with as well as the unique visual characteristics of the image (images with simple detail and few tones compress better than images heavy on detail and smooth tones). But it’s rare that I come across a JPG image that doesn’t get significant filesize benefits by running it through JPEGmini Pro.
JPEGmini Pro & Lightroom
With JPEGmini Pro you have the option of using a Lightroom export plugin. You can then add that step into pretty much any of your export operations that are exporting JPGs. And in a nice touch, it also works with Publish Services–you just add it as a step when you configure the service.
I originally ran these side-by-side tests using the previous version of JPEGmini Pro. As far as I can tell, the output results are pretty much identical using version 3.
After trying a bunch of different JPGs with a variety of subjects, from smooth tones to sharp, vivid shots, I found the results to be impressive. Switching back and forth between the crunched versions and the originals, you really can’t see any visual difference. Here are some examples:
But the filesize savings are impressive.
| Image | KB | JPEGmini KB | Reduced by % |
|---|---|---|---|
I tested several examples by seeing what quality setting you’d have to use with regular JPG compression in Photoshop’s Save for Web function to end up with equivalent filesizes. So I went back to the original RAW files and generated new 1024px JPGs through Photoshop to try to hit the corresponding filesize of the JPEGmini versions.
The results ranged from a quality setting of 60 to around 77. At 77 many images are still going to look excellent, but the 60 end of the range can start introducing some really ugly artifacts in many cases. So I found that JPEGmini works much better than simply ramping up the aggressiveness of standard JPG compression.
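If you want to run this kind of equivalence test yourself, the search for a matching quality setting is easy to automate. Here’s a sketch using Pillow (Photoshop’s Save for Web and Pillow won’t produce byte-identical output, so treat the numbers as approximate); it binary-searches for the highest quality setting whose output stays within a target filesize:

```python
import io
from PIL import Image

def quality_for_size(img, target_bytes, lo=20, hi=95):
    """Find the highest JPEG quality whose output fits in target_bytes."""
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=q)
        if buf.tell() <= target_bytes:
            best, lo = q, q + 1   # fits: try a higher quality
        else:
            hi = q - 1            # too big: back off
    return best                   # None if even the lowest quality is too big
```

Point it at the filesize of a JPEGmini-crunched file and it tells you roughly what standard compression setting you’d have needed to match it.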
It’s really very, very difficult–often impossible–to see any difference between the files using the naked eye. In most cases, I simply can’t spot any difference. But out of curiosity, I ran diff tests on the original images vs the JPEGmini-crunched ones. This uses software that’s specifically designed to compare the images down to the pixel and display differences between them.
In these examples, the white represents pixels that are identical between the files, while the black represents pixels that are different. So this shows the pixels that JPEGmini is touching and the ones it’s leaving alone.
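You can produce the same kind of diff mask yourself. Here’s a sketch using Pillow’s ImageChops module (my tooling, not necessarily what was used for the examples above), following the same convention: white for identical pixels, black for changed ones.

```python
from PIL import Image, ImageChops

def diff_mask(path_a, path_b):
    """Return a mask image: white where pixels match, black where they differ."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    diff = ImageChops.difference(a, b)
    # Flag any channel difference, then collapse to a single band...
    any_diff = diff.point(lambda px: 255 if px else 0).convert("L")
    # ...and invert so identical pixels come out white, changed ones black.
    return any_diff.point(lambda px: 0 if px else 255)
```

Save the returned image as a PNG to inspect where the optimizer has actually been working.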
As you can see, the amount of change varies greatly with the subject matter of the image. That’s also evident in the filesize gains. But there are real changes between the files, even if you can’t actually tell with the naked eye. And these comparisons show that the changes are mostly hidden in the detail and edges rather than the smooth tones, which is probably why they’re harder to see and why they don’t produce the kind of ugly artifacts in smooth tones that are so noticeable with overly aggressive regular JPG compression.
JPEGmini Optimization vs Traditional Image Optimization
JPEGmini makes impressive gains on reducing the filesizes of JPGs without any noticeable quality loss, but there’s a way you can build on that and make even further gains.
JPEGmini calls what it does “optimization,” but it’s not the same as what is traditionally known as optimization. JPEGmini doesn’t remove metadata such as EXIF or IPTC data. Rather, it re-encodes the image data. The upshot is that even JPEGmini files can benefit from a round of traditional lossless optimization.
In these examples, I compared the results from running the original files through JPEGmini and ImageOptim and a combination of both. I used the most aggressive lossless settings in ImageOptim, so the image quality isn’t touched in that round.
For these tests, I used an assortment of images with different types of colors and detail. They’re JPGs that are 1024 pixels along the longest edge, exported from Lightroom at a quality setting of 85 and with the “copyright only” metadata option.
The bigger the file, the higher the percentage reduction you’ll get. The 80% figure that JPEGmini uses is for the largest images, 8MP or larger. But most of us aren’t displaying 8MP JPGs on our web pages. So I settled on 1024px as a good compromise–still relatively large for the web, but a practical size for quick display. (If I were to re-run these tests now, I’d use larger dimensions to better account for the widespread use of retina images. But 1024px is still a useful baseline.)
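For reference, generating test JPGs like these is straightforward if you’re not using Lightroom. Here’s a sketch using Pillow that resizes to 1024px on the longest edge and saves at a quality setting of 85 (Lightroom’s and Pillow’s quality scales aren’t identical, so the filesizes won’t match exactly):

```python
from PIL import Image

def export_web_jpg(src, dst, longest_edge=1024, quality=85):
    # thumbnail() resizes in place, preserving aspect ratio,
    # and only ever shrinks--it won't upscale smaller images.
    im = Image.open(src)
    im.thumbnail((longest_edge, longest_edge), Image.LANCZOS)
    im.convert("RGB").save(dst, "JPEG", quality=quality, progressive=True)
```

The `progressive=True` flag mirrors the progressive JPGs that JPEGmini itself outputs.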
| Image | KB | ImageOptim only KB | Reduced by % | JPEGmini only KB | Reduced by % | JPEGmini & ImageOptim KB | Reduced by % |
|---|---|---|---|---|---|---|---|
As you can see, by far the best results were using both JPEGmini and ImageOptim together. It would be ideal if those steps could be built into JPEGmini so it could be done in one step, but maybe that would run into licensing issues with the underlying libraries and code.
But even if JPEGmini isn’t coming to the party, Mac users can run ImageOptim and JPEGmini in one step. But it requires some setup and comfort with using the command line. Here’s how to set it up.
One thing JPEGmini does do, which is helpful for web display, is output progressive JPGs.
JPEGmini vs WebP
Since the objective here is to reduce filesize for displaying on the web, it makes sense to compare the results from JPEGmini’s optimization with those from converting from JPG to WebP.
WebP is a much newer format championed by Google and is something I’ve explored in detail before.
Through its own fancy compression algorithms, the WebP format promises much smaller files than are possible with comparable visual quality in JPG and PNG formats. And in practice it really works remarkably well. The catch is that it hasn’t yet been fully adopted by web browser developers and support for the format remains patchy. But it’s possible to see a future where WebP becomes the standard image format of the web–that’s what its creators envisage for it, at least.
So I decided to test JPEGmini-crunched files side-by-side with WebP versions. Both are lossy formats*, which means that they’re discarding information that can’t be recovered. (* Technically WebP has a lossless mode, but it’s pretty useless for converting from JPGs because you usually end up with a file that’s significantly larger than the original.)
To make it a useful comparison I wanted to standardize it as much as possible. With JPEGmini, that’s not a problem–there is no control over the compression. But with WebP it’s possible to choose the amount of compression along a sliding scale, just like with JPG. So it’s possible to choose very aggressive compression at the expense of image quality all the way up to lossless which preserves the image quality completely but results in much larger files.
So for this test, I decided to use the WebP images created by the Optimus image optimization cloud service, which is one of the most popular and easiest ways to implement WebP on the web if you’re using WordPress. We, as users, don’t have any control over the WebP compression amount with Optimus either, but at least it’s fixed. (So far as I can tell from image quality and filesize results, the Optimus WebP conversion seems to use settings equivalent to about 86 in XnConvert’s options.) And, of course, I made sure that both were converting from the same original so that we weren’t ending up with a cumulative effect.
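If you want to run your own version of this comparison, Pillow can encode both formats (assuming your Pillow build includes WebP support, which the standard wheels do). Keep in mind that a JPG quality of 85 and a WebP quality of 85 are not equivalent scales, so treat any single-number comparison with caution.

```python
import io
from PIL import Image

def encoded_size(img, fmt, quality):
    # Encode in memory and report the resulting filesize in bytes.
    buf = io.BytesIO()
    img.save(buf, fmt, quality=quality)
    return buf.tell()
```

For example, comparing `encoded_size(img, "JPEG", 85)` against `encoded_size(img, "WEBP", 85)` gives a quick side-by-side for a single image.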
Filesize. WebP is the clear winner (at least with the settings Optimus is using). The files are considerably smaller than those generated by JPEGmini. And they’re still smaller–sometimes by quite a lot–than JPEGmini files that have been subsequently run through traditional optimization with ImageOptim. So WebP wins hands down when it comes to filesize alone.
| Image | KB | JPEGmini only KB | Reduced by % | WebP KB | Reduced by % |
|---|---|---|---|---|---|
Image Quality. JPEGmini wins here, but it’s closer than you’d think. With images with sharp lines and distinct tones, it’s almost impossible to tell them apart even when you flick them back and forth quickly. But where JPEGmini has a noticeable advantage is when it comes to subtle tones and gradations. The most obvious example in this group is the remnants of the sunset over Athens. The JPEGmini version retains good detail in the clouds, whereas the WebP version gets noticeably muddy. And if you go looking in areas of other images where there are smooth tones, those are where you can see differences between them (like the out-of-focus edges of the tulips close-up, for example). So when it comes to image quality, JPEGmini wins, but it’s really not by much and in very many cases the quality of the WebP images is entirely satisfactory.
Here’s an example. The version on the left is JPEGmini; the one on the right is WebP. (If you can’t see the one on the right, you’re probably using a browser that doesn’t support WebP.)
And here’s another. Look to the blurred edges of the center flower for some subtle difference.
JPEGmini & WordPress
Running your images through JPEGmini before uploading them to your WordPress site is all well and good, but there’s a catch. When you upload the images, WordPress automatically creates derivative versions that are often the ones that are actually used on your site. By default, these are the “thumbnail,” “medium,” and “large” sizes, but you can add many others of your own specification if you like. And the catch is that none of those derivatives are processed.
JPEGmini doesn’t currently offer a WordPress service, although they say it’s on their development roadmap. So, for now, there are three possible ways to run all the images on your WordPress site through JPEGmini.
The first is manually by copying all your site’s media to your computer, running the files through JPEGmini or JPEGmini Pro (the Pro version is much faster for this) and then replacing the existing files on your server. But I don’t recommend this in most cases because it’s pretty easy to really screw up your site by running into permissions issues or messing up the database’s registration of the media files in some way.
The second is to run your own JPEGmini server. But that’s really not a practical option for most people. It’s expensive and requires a lot of technical chops.
The third option is to use EWWW Image Optimization Cloud. That has JPEGmini baked in as an option along with its other optimization functions. But it’s still not ideal. If you’re processing a lot of images the costs of the EWWW cloud service can add up.
For now, at least, those are the only practical ways to process all of the derivative images in WordPress through JPEGmini.
There are some small areas where things could be improved. These have to do with the UI rather than the core functionality.
- Add traditional optimization. Perhaps the licensing complications might be prohibitive, but it would be nice to be able to run JPEGmini processing and traditional lossless optimization in a single step.
- On Mac, make the dock icon a drop space, turning it into a droplet. The main panel is really just a drop space, but it means you have to open the app first. It would be handy to just be able to drop files directly on the dock icon, as you can already do with ImageOptim.
I’ve found that JPEGmini works really well, dramatically trimming down the images on my website with no visible loss of image quality. Whatever is in their secret sauce really works. And since image data makes up at least 75% of the page load size on most of my pages, trimming the fat from the images makes a big difference to how quickly the pages load.
So after testing it quite exhaustively, I now use it on the images on my websites. So the JPG images you see here on Have Camera Will Travel have all been processed through it (with the very limited exception of the side-by-side examples above so as to make a meaningful visual comparison). In some parts of the site where image quality is less crucial, I’m also using WebP versions. (UPDATE: Now that WebP has become much more widely adopted, I’m using it quite extensively on this site. That’s also in keeping with current Google recommendations relating to site speed and core web vitals.)
Shaving even 20 percent off each image makes a big difference when you’re displaying several images. And the results from JPEGmini, especially when used in conjunction with regular optimization, are often dramatically better than that.
But before you dive in and convert all your images, I’d recommend conducting your own tests. You can download a trial version of the desktop app from the JPEGmini website and test it on your own images. I also don’t recommend using it on your original, master images–this is lossy compression, after all, and there’s no way to restore the information after it’s discarded.
But for the JPGs you put on your website, it’s entirely possible to reduce their filesize by half or more without any loss of quality. And that means a faster website, a step closer to meeting Google’s Core Web Vitals, lower bandwidth costs, and happier visitors to your site.
If you’d like to download the test images used above and all of the converted and optimized versions I refer to so that you can see them at full size on your own display, you can download it here (about 11MB).