Aug 31, 2011
 

Will update this post more, but here is an epic example of why flash photography at an aquarium is a bad idea:

Photo ruined by someone’s popup flash.
As you can see, I was unlucky enough to take my shot at the same moment someone fired off their pop-up flash. Yes, I can really see the value add of using your pop-up flash in an aquarium… against a large two-story window. That glare really adds to the look and feel.
Seriously, why do people do this? I can’t imagine anyone thinks this makes for a “great” image of what they saw.
Contrast the above shot with one I took, which was not ruined by someone else’s flash:
Sea Turtle
I took a different post-processing approach with the sea turtle shots this time around compared to past visits. You can check out those images, as well as the ruined shot, in my sea turtle gallery:
Aug 25, 2011
 

Recently, I came across this post from PetaPixel about JPEGmini.

It details a company that has come up with a way to optimize an image so that it looks virtually identical to the original, but takes up a fraction of the original file size. That’s awesome! I even tested it against some image files:

Black Sea Nettles – JPEG shrunk using custom algorithm.
  • 24 megapixel JPEG / 12MB in size => 1.6MB resulting file after JPEGmini.
  • 12 megapixel JPEG / 4.8MB in size => 1.1MB resulting file after JPEGmini.
I performed a difference test against the original and the JPEGmini version. They are virtually identical. If you take the resulting difference image and adjust the levels, they are in fact different, but the difference can be attributed to minor random noise rather than compression artifacts. Amazing!
DIY JPEG Compression Improvement
Being the kind of guy I am, I was wondering… how did they do this? According to the JPEGmini site:
  • improved algorithms
  • compressing to a level just before artifacting becomes an issue
Well, thinking that I could at least get some improvement, I leveraged ImageMagick and some shell scripting.
I tested with the same images and came up with the following:
  • 24MP / 12MB => 2.8MB (1.7MB with updates below) (jpeg compression quality @73%)
  • 12MP / 4.6MB => 1.1MB (jpeg compression quality @87%)
The images produced were likewise virtually the same, and a difference mask between the two in Photoshop shows nothing unless you auto-level. The amount of difference between my reduced images and JPEGmini’s reduced images was comparable. No new JPEG compression algorithm on my part, just basic JPEG compression guidelines from FileFormat’s JPEG Page:

The JPEG library supplied by the Independent JPEG Group uses a quality setting scale of 1 to 100. To find the optimal compression for an image using the JPEG library, follow these steps:

  • Encode the image using a quality setting of 75 (-Q 75). If you observe unacceptable defects in the image, increase the value, and re-encode the image. If the image quality is acceptable, decrease the setting until the image quality is barely acceptable. This will be the optimal quality setting for this image.  Repeat this process for every image you have (or just encode them all using a quality setting of 75).

The process that my script goes through:

  1. Compress original image at 99% image quality
  2. Do a comparison metric between the original and the new image.
  3. If the compression has resulted in a difference greater than a certain threshold, use the compression quality percentage prior to the current one.
  4. If difference is less than threshold, decrease image quality by 1% and repeat from step #2.
Using this process against the whole image, I could reduce <12MP files down to ranges similar to what JPEGmini was able to achieve, albeit more slowly. For files >12MP, I would still achieve great savings in space, but not as great as JPEGmini for a given threshold of difference.
Edit: Upon reviewing my notes, it looks like the threshold that matches the JPEGmini output is different for <12MP files than for >12MP files. When optimizing with the expanded threshold for larger files, I am able to get results comparable to what JPEGmini gets.
How I Did The Compression Process
  • Use ImageMagick’s convert and compare to generate “image distortion” readings:
  • convert -quality PERCENTAGE% original-image.jpg compressed-image.jpg
  • compare -verbose -metric mae original-image.jpg compressed-image.jpg
  • Using the example file from JPEGmini, establish a baseline:
    • 24MP image = ceiling of 370 points of distortion
    • 12MP image = ceiling of 265 points of distortion
  • Starting at 99% quality for JPEG compression, continue to reduce the quality until the measured distortion exceeds the baseline; when it does, fall back to the previous compression level (a minimal sketch of this loop follows the list).
  • The level of acceptable compression will differ from image to image. If the image has more pixels, it will be able to absorb more points of distortion before the image quality suffers.
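A minimal sketch of that loop, assuming ImageMagick’s convert and compare are on the PATH; the file names, the 50% lower bound, and the temp paths are placeholders rather than the exact script I ran:

#!/usr/bin/env bash
# Sketch: step the JPEG quality down from 99 until the MAE distortion between
# the original and the re-encoded copy exceeds the ceiling, then keep the last
# quality that stayed under it.
# Usage: ./jpeg-shrink.sh original.jpg output.jpg 370
set -eu

orig="$1"        # original JPEG
out="$2"         # optimized output
ceiling="$3"     # max acceptable MAE distortion (e.g. 370 for a 24MP image)

best_q=99
for q in $(seq 99 -1 50); do
  convert -quality "$q" "$orig" /tmp/candidate.jpg
  # compare prints the MAE to stderr as "absolute (normalized)"; keep the first value
  mae=$(compare -metric mae "$orig" /tmp/candidate.jpg null: 2>&1 | awk '{print $1}')
  # stop at the first quality whose distortion goes over the ceiling
  if awk -v m="$mae" -v c="$ceiling" 'BEGIN { exit !(m+0 > c+0) }'; then
    break
  fi
  best_q=$q
done

convert -quality "$best_q" "$orig" "$out"
echo "kept quality ${best_q}"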
    Optimizations (speed)
To optimize for speed, especially with larger images, I scaled down the original image and performed the tests on the scaled-down copy. Once the ideal quality was located, I performed the compression on the original with the quality determined from the scaled-down image. For very large images, I take a crop from the center of the image instead, which results in a more accurate quality prediction.
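A rough sketch of that speed-up; the 25% scale, the 1024x1024 crop, and the find-quality.sh helper are illustrative names and values, not the ones I actually used:

#!/usr/bin/env bash
# Sketch: run the quality search on a small proxy image, then encode the
# full-size original once with the quality the proxy search settled on.
set -eu

orig="$1"
out="$2"

# Scaled-down proxy for normal images...
convert "$orig" -resize 25% /tmp/proxy.jpg
# ...or, for very large files, a center crop predicts the quality more accurately:
# convert "$orig" -gravity center -crop 1024x1024+0+0 +repage /tmp/proxy.jpg

# find-quality.sh is a hypothetical wrapper around the search loop shown earlier
# that prints the chosen quality setting on stdout.
best_q=$(./find-quality.sh /tmp/proxy.jpg 370)

convert -quality "$best_q" "$orig" "$out"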
    Thoughts
I can see why most applications don’t bother with this process… it’s basically exhaustive testing to determine how much you can compress before image quality degrades unacceptably. In addition to scaling down the image, I could also skip quality percentage points and backtrack once quality is degraded.
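One way to do that skipping is a simple bisection over the quality range; below is a sketch with arbitrary bounds, not what I actually implemented:

#!/usr/bin/env bash
# Sketch: bisect the quality range instead of walking it down 1% at a time.
# Assumes distortion falls monotonically as quality rises.
set -eu

orig="$1"; ceiling="$2"
lo=50; hi=99                     # arbitrary search bounds

distortion_of() {                # MAE between the original and a re-encode at quality $1
  convert -quality "$1" "$orig" /tmp/bisect.jpg
  compare -metric mae "$orig" /tmp/bisect.jpg null: 2>&1 | awk '{print $1}'
}

# Find the lowest quality whose distortion still stays under the ceiling.
while [ "$lo" -lt "$hi" ]; do
  mid=$(( (lo + hi) / 2 ))
  if awk -v m="$(distortion_of "$mid")" -v c="$ceiling" 'BEGIN { exit !(m+0 <= c+0) }'; then
    hi=$mid                      # acceptable at mid; try pushing lower
  else
    lo=$((mid + 1))              # too lossy; back off
  fi
done
echo "lowest acceptable quality: $lo"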
    What would be ideal is an API and library set that does an image quality analysis and gives you an optimally compressed JPEG out of the box. That is the value add of what JPEGmini is offering, and I think that it is a technology which any company involved with image storage should look into.
    The Code
Note: the shell script, etc. are not published, as the goal was to explore whether or not it could be done. Based on the description above, one can easily write the appropriate wrapper script or use the appropriate Java/Python/PHP/etc. hooks.
Update/Edit (8/29/2011):
• Improved optimization of the quality-locating logic cut the time to locate the quality setting and compress a 24MP image from 5 minutes down to 13 seconds.
    • Planning on creating a Lightroom 3 export plugin for this scripted method.
    • Planning on trying to reduce the time required from 13 seconds down to under 10 seconds.
Aug 09, 2011
     

     

This is a followup to my previous article, “Managing Resources – Part I”. It’s been half a year since I wrote that article, and it has been a mixed-blessing kind of trip. So, here are some of my thoughts from the attempt at managing my storage through complexity, and my thoughts going forward.

    How Did It Go?

If you read part 1, you’ll note I went with a big box: lots of CPU cores, RAM, and storage. I opted for complexity and multiple layers of abstraction to get all the features I wanted: ZFS by employing Nexenta and other Solaris-derived solutions, virtual machines to get the best of both worlds, and PCI passthrough to get a desktop experience on a virtualized server platform.

    In short, this did not work out very well. :(

    What Went Wrong

    From the start, there were problems:

• PCI passthrough only worked with a small subset of devices, and the devices it did work with were primarily storage controllers; only specific, expensive storage controllers at that. Display controllers were more or less out of the question, save for use as CUDA processors.
• Without PCI passthrough for the storage controllers that I had, the option of using Nexenta in an ESXi VM with direct access to the underlying hardware to present storage was a bust as well. Tests with Nexenta running on top of VMFS as presented through ESXi resulted in horrible performance numbers; think something along the lines of 5MB/second read/write.
    • Juggling 10-12 1TB hard drives proved to be a nightmare in a conventional tower case. Likewise, the heat and noise generated by the drives alone made running the server 24/7 a problematic proposition.
• The issues were made worse by the fact that the OpenSolaris instance I was using had an advanced zpool version, which was not importable elsewhere at the time. This made importing and accessing the original data in my array a non-starter.
    Where Are Things Now
In the end, I opted to get the OWC Elite Quad Pro, load it up with 4 x 1TB drives set up as RAID 5, and hook it up to the Airport Extreme to use as a Time Machine drive for the laptops and as personal dumping space while the big server is being debugged.
The main server is currently a mass of SATA and power cables. I’ve added 2 quad-port PCIe x4 SATA cards to the server to handle more drives. I’m planning on re-imaging the box, loading it up with Linux (Ubuntu or CentOS), and employing ZFSonLinux to implement ZFS storage pools and get RAIDZ2 up and running. This would remove the need for virtual machines, as I would get ZFS in Linux and would be able to run the applications I want to run, natively.
I’ve also acquired 2 x 60GB SSDs to serve as the L2ARC cache and ZIL log for ZFS, to improve the performance of the system. The intent is to mirror the two SSDs and use the mirrored device for the caching and logging.
    What Needs To Get Done
The remaining roadblock now is how to get the data off of the existing zpool (Solaris) and onto the new zpool (Linux). I’m thinking it will take the following form (a rough sketch of the commands follows the list):
    1. Build new Linux environment / ZPOOL environment. (pool1)
    2. Export pool1 from Linux environment.
    3. Boot into old OpenSolaris environment and import all pools (pool0, pool1)
    4. While booted up in OpenSolaris, zfs send from pool0 to pool1.
    5. Once completed, export pool1 from OpenSolaris environment.
    6. Boot into Linux environment and import pool1.
    7. Confirm data is all good, then re-purpose all disks in the original pool0 OpenSolaris environment.
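A rough sketch of the commands behind steps 2 through 6, with a placeholder snapshot name; note that pool1 has to be created at a zpool/zfs version old enough for the OpenSolaris build to import:

# (Linux) step 2: release the new pool so OpenSolaris can take it
zpool export pool1

# (OpenSolaris) step 3: bring both pools in
zpool import pool0
zpool import pool1

# (OpenSolaris) step 4: snapshot everything on the old pool and stream it over;
# -R preserves child datasets and their properties, -F lets the receive
# overwrite the freshly created target
zfs snapshot -r pool0@migrate
zfs send -R pool0@migrate | zfs receive -dF pool1

# (OpenSolaris) step 5: hand the new pool back
zpool export pool1

# (Linux) step 6: import it and sanity-check the contents
zpool import pool1
zfs list -r pool1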
    The current capacity of the server is 14 SATA ports:
    • 6 on the motherboard :: mb0
    • 4 on quad sata 1  :: qs1
    • 4 on quad sata 2  :: qs2
    After taking into account the 8 disks for the existing array, this leaves 6 ports available for the new environment to be built off of. Building small RAIDZ groups, I’m planning on grouping the space as follows:
    • RAIDZ :: mb0:2 + qs1:0 + qs2:0    [ + 2TB ]
    • RAIDZ :: mb0:3 + qs1:1 + qs2:1    [ + 2TB ]
    This way, the six ports are spread across the motherboard and each of the 2 additional controllers. When the data is transferred over and the original 8 disks are freed up, they will be added to pool1 in a similar fashion:
    • RAIDZ :: mb0:4 + qs1:2 + qs2:2   [ + 2TB ]
    • RAIDZ :: mb0:5 + qs1:3 + qs2:3   [ + 2TB ]
The remaining ports on the motherboard, mb0:0 and mb0:1, will be used for the SSDs hosting the L2ARC and the ZIL, as well as for booting the Linux OS.
    The effective space will be 8TB of usable RAIDZ data. However, it is 8TB spread out over 12 x 1TB drives vs the original 7TB spread out over 8 x 1TB drives. The arrangement also ensures that should one controller card fail, the arrays will be recoverable.
    Upgrading storage and/or resilvering a replacement disk would also be significantly sped up by the fact that each RaidZ group is only comprised of 3 disks.
If/when I decide to upgrade to 2TB disks, I can do so 3 at a time. Ideally, they would all be RAIDZ2 groups with perhaps more disks per group, but funds are not infinite. :( What this configuration buys me is more IOPS across the disks, and hopefully better performance.
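Sketched out with placeholder device names (mb0_2 standing in for whatever /dev/disk/by-id name lands on motherboard port 2, and so on), the build-out would look roughly like this. One caveat: ZFS only allows mirroring for log devices, so the sketch mirrors the ZIL and adds the L2ARC partitions as plain cache devices:

# Initial pool: two 3-disk RAIDZ groups, each spread across the three controllers
zpool create pool1 \
  raidz mb0_2 qs1_0 qs2_0 \
  raidz mb0_3 qs1_1 qs2_1

# Once the old array's disks are freed up, grow the pool with two more groups
zpool add pool1 \
  raidz mb0_4 qs1_2 qs2_2 \
  raidz mb0_5 qs1_3 qs2_3

# SSDs on mb0:0 and mb0:1: mirrored ZIL, plus L2ARC partitions as cache
zpool add pool1 log mirror ssd0_log ssd1_log
zpool add pool1 cache ssd0_cache ssd1_cache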
    Future Upgrades & Potential Issues
Once this is up and running, I may bring the OWC Elite unit into the mix via an eSATA controller, with the 4 drives in the unit presented as individual drives. However, because of the way ZFS works, I would not be able to add a drive to each of the RAIDZs: ZFS lacks the ability to reshape the geometry of a zpool, as well as the ability to shrink a zpool by removing a device from it. So, adding storage in the future would require ever more disks/space to enact reconfigurations, which makes using ZFS a bit problematic.
    One Avenue To Upgrade Via Adding More Disks, While Retaining Redundancy
I suppose I _could_ swap disks out of a couple of the RAIDZ groups and replace them with disks from the external quad enclosure, like so:
    From:
    • RAIDZ :: mb0:2 + qs1:0 + qs2:0   [ + 2TB ]
    • RAIDZ :: mb0:3 + qs1:1 + qs2:1   [ + 2TB ]
    • RAIDZ :: mb0:4 + qs1:2 + qs2:2   [ + 2TB ]
    • RAIDZ :: mb0:5 + qs1:3 + qs2:3   [ + 2TB ]
    To:
    • RAIDZ :: mb0:2 + es0:1 + qs2:0   [ + 2TB ]
    • RAIDZ :: mb0:3 + qs1:1 + es0:2   [ + 2TB ]
    • RAIDZ :: mb0:4 + qs1:2 + qs2:2   [ + 2TB ]
    • RAIDZ :: mb0:5 + qs1:3 + qs2:3   [ + 2TB ]
    • RAIDZ :: es0:0 + qs1:0  + qs2:1  [ +2TB ]
    • + 1 x 1TB hot spare : es0:3
In this fashion, another 2TB of RAIDZ storage is added to the zpool, making it 10TB usable across 5 RAIDZ groups. The extra slot in the external SATA enclosure can be used to house a hot spare, allowing for automatic rebuilds.
Basically, if you want to add additional physical storage devices, you need to re-juggle the disks so that the failure of any one controller or enclosure cannot take out more than one member of any RAIDZ group in the zpool.
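As a sketch, again with placeholder device names (es0_* being the eSATA enclosure slots), the re-juggling above comes down to:

# Swap one member of two existing RAIDZ groups onto the enclosure and let
# each resilver finish before moving on
zpool replace pool1 qs1_0 es0_1
zpool replace pool1 qs2_1 es0_2
zpool status pool1                 # wait for the resilvers to complete

# Build the fifth RAIDZ group from the two freed disks plus one enclosure disk
zpool add pool1 raidz es0_0 qs1_0 qs2_1

# Keep the last enclosure slot as a hot spare for automatic rebuilds
zpool add pool1 spare es0_3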
In any case, storage and data management continues to be a pain on the home front. :)
Aug 07, 2011
     

As one might have read, Sony is about to announce its A77 DSLR, the replacement for the A700 DSLR. A good deal of expectation is being heaped on this new camera, and for me it will mean spending some extra money on top of the cost of the camera itself.

    Memory Cards

I’m a consummate user of CompactFlash. I own and use PhotoFast 16GB and 32GB cards, both rated at 533X. They are great performers and rock solid. Needless to say, I spent a fair bit of money getting them when 533X CompactFlash cards were fairly new. They work great in the A700, and I had hoped to continue using them in the A77. However, Sony decided against including support for CompactFlash; instead, it will be Memory Sticks and SD cards.

    Limitation of Memory Sticks

The current top-of-the-line Memory Sticks are the HG HX type cards, which are advertised as “50MB/second” cards. However, that is only the READ speed. If you are writing to the card, the minimum WRITE speeds are 5MB/second and 15MB/second for 4-bit and 8-bit modes, respectively. That is fine for writing out 28Mbps compressed video, but it means the buffer empties slowly, causing wait times between bursts of shots. The RAW images from an A700 are 17MB in size. With the 533X CF cards, I can go 19 shots before the buffer tells me to hold off. With the Memory Stick, it drops to between 9 and 12 frames. Granted, you can shoot again as soon as an image is flushed out, but you’re going to be waiting 1-2 seconds between shots. Not ideal.
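Back-of-the-envelope math on that drain rate, treating the card’s minimum write speed as the only bottleneck (the camera’s own write pipeline adds overhead on top of this):

# Time to flush one 17MB A700 RAW frame at the Memory Stick minimum write speeds
awk 'BEGIN {
  raw = 17                                        # MB per RAW frame
  printf "4-bit mode  (5 MB/s): %.1f s/frame\n", raw / 5
  printf "8-bit mode (15 MB/s): %.1f s/frame\n", raw / 15
}'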

    UHS-I SDXC Cards

Now, looking over at the SD card side of things: SDHC and SDXC cards max out at around 22-25MB/second in real-world performance. In fact, SanDisk’s non-UHS-I SDHC/SDXC cards are generally rated at around 25MB/second max. This isn’t bad, and for the A700 it would be acceptable. However, for a 24MP sensor dumping 36MB RAW files, this too will choke up the buffer if you have several fast-paced sequences you’re trying to capture.

Enter UHS-I, which specifies a bus capable of 104MB/second. The top-of-the-line UHS-I SDXC cards out there claim 95MB/second read and 80MB/second write speeds. If that were fully leveraged, it would be perfect! However, users who have tested UHS-I cards in SDXC-capable cameras like the D7000 have found no performance difference between a SanDisk Extreme Pro (45MB/s read, 45MB/s write) and a Delkin (95MB/s, 80MB/s). The reason, once the calculations are done, is that the D7000’s implementation bottlenecks at 22-25MB/second, so there is no gain.
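The same back-of-the-envelope math for a roughly 36MB RAW file shows why the camera-side bottleneck matters far more than the card’s rating:

# Time to flush one ~36MB 24MP RAW frame at different effective write speeds
awk 'BEGIN {
  raw = 36                                        # MB per RAW frame (approximate)
  printf "camera capped at 25 MB/s: %.1f s/frame\n", raw / 25
  printf "full UHS-I card, 80 MB/s: %.1f s/frame\n", raw / 80
}'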

If the A77 properly implements UHS-I, it should provide amazing shot-to-shot performance, with an image buffer that effectively expands given a fast enough card. This is what people saw on the A700 with the firmware update and fast UDMA CF cards: the usable image buffer effectively went from 8 to 18 shots. If the A77 implements UHS-I properly for SDXC cards, I will be one happy camper.

However, if it implements only the non-UHS-I SDHC/SDXC interface, then we will see transfer speeds capped at around 22-25MB/second, and RAW shooting will suffer.

    Batteries

The other bit of interest, from a gear-upgrade perspective, is the battery. It appears that the A77 might continue to use the same cell employed by the A700, the NP-FM500H. This is likewise a boon to those of us who are already A700 shooters: we have a handful of these batteries, and it means we can re-use them in the A77. That much less of an obstacle to upgrading.

    Closing

I’ve made my peace with Sony’s decision to go with SD over CF, so long as they implement UHS-I properly. At this point, I’m just waiting for the camera to come out so I can get it into my hands and see what it can do. I’m looking forward to the low-light performance as well as the upgraded AF system. I’m also looking forward to the live view autofocus, as it will greatly help with manual focusing on subjects. Oh, 24MP doesn’t hurt either, for some nice blow-ups.