DPI (Dots Per Inch) seems to be the most misunderstood image property out there. Funny thing is, my investigation into this started with a conversation with a coworker about a programming problem that centered on the dpi of the screen. Initially I said PCs are 96 dpi and Macs are 72 dpi, but as I started looking into the validity of that statement, a world of misinformation and confusion was brought to my attention.
Even more confusion surfaced when I started looking into the dpi issue as it relates to photography. Have you ever heard someone say, “save it at 72dpi for web presentation”? Me too. Actually, it would have been me if you had asked me a few days ago. However, I now know that it really doesn’t matter much*. Ah, you noticed that asterisk; well, it does matter in a few instances, but probably not where you’d think.
So here we go. Why doesn’t dpi matter? Well, it has zero, zip, zilch, nada, no effect on how an image is presented to the user when viewed on a computer screen. The only thing that matters is the pixel dimension of the image. That’s right, you heard me, the only thing that matters is pixel dimension. You know, 640×480, 800×600, 3000×2000, etc. That’s what matters. Here’s an example. If the resolution of your monitor is set to 1024×768 and you view a 640×480 image, it will take up about 60% of your screen’s width and height. That’s true whether you’re looking at a small laptop screen or a projector displaying on a wall 60 feet away. As long as the resolution of the output device is set to 1024×768, it’ll all look the same.
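If you want to check that figure yourself, it’s just a ratio of pixel dimensions. Here’s a quick sketch using the numbers from the example above:

```python
# How much of a 1024x768 screen does a 640x480 image cover?
img_w, img_h = 640, 480
scr_w, scr_h = 1024, 768

width_fraction = img_w / scr_w    # fraction of the screen's width
height_fraction = img_h / scr_h   # fraction of the screen's height

print(width_fraction, height_fraction)  # 0.625 in each direction
```

Notice that dpi never appears anywhere in that calculation.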
Still skeptical and need some proof? Here ya go. The three images below all have a pixel dimension of 625×750. What varies is the dpi of each image.
They look pretty much the same to me. What do you think? And remember, this is your monitor, so there’s no way I can be doing any trickery here. If you want to verify the dpi, click on each image, save the full size image, then view it in something like Photoshop to see the dpi for the image.
But what about file size, I hear you say; 2500dpi must be huge! Well, no, it’s not. They’re all about the same at around 180KB. I think there’s a slight difference due to JPG compression, or perhaps I fumbled something when making these sample images (most likely case).
But what about printing? You save web files at 72 dpi so they can’t be printed, right? Well, that’s where part 1 of the asterisk from above comes in. The dpi setting really just tells a printer, “Hey, print this image at this many pixels per inch”. Strictly speaking, that number is the image’s PPI (Pixels Per Inch), the number of image pixels mapped onto each inch of paper, while dpi properly describes the dots of ink the printer lays down. The terms get used interchangeably, though. Depending on how small a printer can make those dots, you can get a higher resolution print. From some empirical studies that I read online, 150ppi seems about the minimum you need to print a good image. This means that you’d really have to be doing some pixel peeping to see any difference from a higher dpi image (like the well known standard of 300dpi). So how does the dpi of an image relate to printing? Math. It’s pretty simple actually. To determine the ideal printing size of an image, and I did say ideal, you divide the pixel dimension by the dpi. For example, with the files above, if you printed the 25dpi image you’d get a print that’s 25″x30″ (625px/25dpi = 25 inches and 750px/25dpi = 30 inches). No one said it’d be a good looking print! If you printed the 250dpi image, you’d get a 2.5 inch x 3 inch print. And printing the 2500dpi image, you’d get a .25 inch x .3 inch print.
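That arithmetic is easy enough to wrap in a few lines. Here’s a little Python sketch of the pixels-divided-by-dpi rule, run against the 625×750 sample images from above:

```python
def print_size(width_px: int, height_px: int, dpi: int) -> tuple:
    """Ideal print dimensions in inches for a given pixel size and dpi."""
    return (width_px / dpi, height_px / dpi)

# The 625x750 sample images, at each of the three dpi settings.
for dpi in (25, 250, 2500):
    w, h = print_size(625, 750, dpi)
    print(f"{dpi} dpi -> {w} x {h} inches")
# 25 dpi   -> 25.0 x 30.0 inches
# 250 dpi  -> 2.5 x 3.0 inches
# 2500 dpi -> 0.25 x 0.3 inches
```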
But why did I say that dpi is less important for printing? Well, if you’re trying to be tricky and save that 2000×3000 image at 72dpi so people can’t swipe it, you just handed them a file that would print pretty well at 13″x20″, and great at 8″x12″. Why? Because changing the dpi of an image does not affect the image data (unlike resampling). All you really did was change a few bits of data in the header of the image. The content of the image stayed the same. So dpi does have some relevance when it comes to printing. To protect your online images from being heisted, you need to combine a small pixel count AND a small dpi. But even if you do all that, if someone is motivated enough, they can take your images and resample them to a larger size with some pretty sophisticated software, yielding acceptable results for printing. This is especially true of canvas, which tolerates really low dpi due to the characteristics of the material.
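To make the “few bits of data in the header” point concrete, here’s a stdlib-only Python sketch that builds the JFIF APP0 header that a baseline JPEG file starts with (a real file would continue with the compressed image data). The only thing separating a “72 dpi” file from a “300 dpi” file is four bytes in the density fields:

```python
import struct

def jfif_header(dpi: int) -> bytes:
    """First 20 bytes of a JPEG: SOI marker plus a JFIF APP0 segment."""
    return (
        b"\xff\xd8"                       # SOI (start of image)
        + b"\xff\xe0"                     # APP0 marker
        + struct.pack(">H", 16)           # segment length
        + b"JFIF\x00"                     # identifier
        + b"\x01\x02"                     # JFIF version 1.02
        + b"\x01"                         # density units: 1 = dots per inch
        + struct.pack(">HH", dpi, dpi)    # X and Y density
        + b"\x00\x00"                     # no thumbnail
    )

h72 = jfif_header(72)
h300 = jfif_header(300)

# Find every byte that differs between the "72 dpi" and "300 dpi" headers.
diff = [i for i in range(len(h72)) if h72[i] != h300[i]]
print(diff)  # four byte offsets, all inside the X/Y density fields
```

Every pixel that follows this header is byte-for-byte identical either way, which is exactly why re-tagging an image to a low dpi doesn’t protect it.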
Oh yeah, where does dpi REALLY matter? Scanning images. This is where dpi truly matters, because scanning at a low dpi results in a small pixel dimension, which results in a small image (whether viewed on a screen or printed).
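At the scanner, the relationship runs the other way: pixels = inches × dpi. A quick sketch with some hypothetical scan settings shows why the dpi knob matters here:

```python
def scan_pixels(width_in: float, height_in: float, dpi: int) -> tuple:
    """Pixel dimensions produced by scanning a print of the given size."""
    return (round(width_in * dpi), round(height_in * dpi))

# Scanning a 4x6 print: the dpi you choose sets the pixel dimensions,
# and with them everything downstream (file size, maximum print size).
print(scan_pixels(4, 6, 100))   # (400, 600)   -- small file, small prints
print(scan_pixels(4, 6, 600))   # (2400, 3600) -- enough to reprint at 8x12
```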
To use HDR, or not to use HDR? I find most HDR imagery unappealing, but that’s my own personal taste. I prefer a more realistic look to my imagery. I’m sure, however, that HDR gets a bad rap from people overdoing the effect. You know what I mean. Those images that look totally unrealistic, have odd looking halos around everything, or saturation that is just off the charts. Done with a mild touch, it can be a very useful technique for extending your dynamic range. So how do you go about it?
I tried three methods: using HDR Efex Pro (3 images), taking a single RAW image and manipulating it in Lightroom, and hand-merging 3 images in Photoshop via layers and masks. I’m not going to get into how I did each technique, but I will say that the single RAW image was the easiest to work with. I will also admit that I am no HDR Efex Pro whiz, so my results are surely skewed. Of the three methods, the hand layering in Photoshop was the most time consuming to perform. It also required a pretty good working knowledge of PS.
Let me talk about the scene. It was a brightly lit mid-day shot up in Zion NP. The dynamic range of the image was right at the edge of the abilities of my camera (Nikon D70). Because of this, I was able to manipulate a single RAW image. If the range was just a tad more, then I wouldn’t have been able to use that technique. So anyway, there was this rock stack with a single pine tree on top. From my vantage point, most of the rock was in shadow. I wanted the rock stack to look like there was more light reflecting off of the ground than what there really was, so that was goal one. I also wanted to get good definition in the clouds, goal two. And lastly, I didn’t want any dark shadows in the pine tree.
Here are the three images with a little blurb of my thoughts on the image.
Photoshop Layers. Like I said, this took the longest to do, but I like the results the most. The longest part was bringing out the ‘reflected light’ on the rock in the foreground which I feel was the best of the three images. It was also pretty easy to preserve cloud and pine tree detail. Most of this image was the middle exposure. Rock highlights and lightening the darker parts of the pine tree were from the +1 exposure. Darkening the sky and bringing out a little more detail was from the -1 exposure.
Single RAW File. This was the quickest to complete. All I did was apply global and some local adjustments in Lightroom. Quick and easy. I don’t think that ‘reflected light’ look is as apparent in this image, but I probably could have spent more time adding selective exposure increase to the rocks to get that look. I will say though, that adding selective adjustments is easier in PS due to the layer mask functionality. Cloud definition and pine tree appearance are very similar to PS image.
HDR Efex Pro. I’ve got issues with this image. First off, for me, the rock in the foreground looks flat. Well, not flat, but just not realistically lit. Things that should be shadowy are not. It does seem to have brought out texture in the rock though. All right, now let’s talk about ghosting. You can’t really tell from these web resolution images, but there are quite a few ghosting artifacts in this image, especially at the top of the foreground pine tree and in some of the trees in the background. I tried several combinations of ghosting removal (global/adaptive, at varying strengths), but none got rid of all the ghosting. There’s also some haloing around the clouds (the sky is too dark). On the plus side, there was more texture in the clouds themselves, and the pine tree had no really dark shadows.
How do you get that glossy, reflective look that is pretty popular nowadays? How do you get a cleanly blown out background? These are some of the questions we’ll go over today.
The reflective foreground is really popular recently, and I think it’s due to Apple’s advertising and the popularity of their iTunes coverflow view. I wondered how hard it was to get this look. I started out by researching what materials would produce a nice reflection. Glass, Plexiglass, and tileboard all came to mind. Glass seems like the natural choice, but here are a few issues I had with it. First off, it’s glass, so it could break easily and be a real mess. Secondly, I’ve seen cases where, at the right angles, you get a slight double reflection (off both sides of the glass). I wanted a really clean reflection, so that option was out. Next was tileboard. These are big 4′x8′ sheets of particle board that have a shiny reflective surface. You can get them at home centers, and they are often used as paneling in wet areas (think utility room). They are cheap, which is good, but they are really big and heavy. Maybe if I was shooting full length portraits and wanted a reflecting surface, but for small product shots, it’s just too much to deal with. I finally decided on Plexiglass. A 3′x4′ sheet was around $25, so it’s reasonable, and it produces a really shiny surface. The only downside to Plexiglass is that it scratches easily, so I’ll have to be careful with it.
Now that I have a nice surface to get a good reflection on, I turned my attention to getting a nice, white background. This is useful if you’re doing shots for catalog or web presentation. I also wanted a result that took very little to no Photoshop post production to get that white background. The key to achieving this is to light your background and subject separately. I tried two different methods, so here are the setups I tried.
In the first setup, I took a white paper backdrop and hung it from a background stand about 6′ from a table. The table had a sheet of white paper with Plexiglass upon it, with the subject positioned about 3/4 back on the table. I then placed two speedlights at 45 degree angles to the backdrop, between the table and the background. Each light was aimed at the far edge of the backdrop. So, the right speedlight was aimed at the left edge of the backdrop, and the left speedlight was aimed at the right edge of the backdrop. I figured this would ensure even illumination across the entire backdrop. The speedlights that I used are zoomable, so they were set at 35mm. That seemed to produce the most even illumination while not needing to have the speedlights set at 1/2+ power (for faster recycle times and to conserve battery life). I think ideally, I should have used umbrellas with a gobo placed in between them and the camera so that they don’t influence exposure. I metered each flash individually until each was 2 stops overexposed. This is important because any more power and the subject might look washed out due to the amount of light reflecting back off of the background. A really strong reflectance, or ‘wrap’ as it’s called, will also blow out fine detail like hair if you use this technique for portrait work. The softboxes were then placed quite close to the subject, and were set at a 1:1 lighting ratio. Here’s a diagram and the sample image.
Very little work was done to this shot. All I did was to remove the visible transition from table top to background, and I faded the reflection. Two very minor and very easy corrections. I think if this was a real ‘production’ shot, I would have used less reflection, and spent more time removing the slight color influence on the table surface from the Plexiglass.
In this second example, the background, table, and softboxes were the same, but I used a single snooted SB700 (see diagram below). I used a short snoot (about 1″) to block any stray light from hitting the subject. I also set the background to be 1.5 stops overexposed so that there was a bit of tonality to it, and the softboxes were set at a 2:1 lighting ratio.
The only adjustment that I did to this shot was to blend the table to background transition.
I’d like to take another moment to talk about the metering of these shots. I found it extremely helpful to have a hand-held light meter for these shots. I’m sure that I could have gotten similar results without one, but I think that it would have taken longer. The only real downside (and an upside for other reasons) is that I had to meter each light separately, and it wasn’t initially obvious how to do that since I was mixing speedlights and monolights. Once I figured it out, it was a cinch, and made getting proper exposure effortless.
So what were my exposure settings? The camera’s shutter speed was set at 1/250. Shutter speed has little effect in studio work unless you’re trying to balance ambient light. I wasn’t, but I did want to make sure that no stray light was polluting my images (I was working in a garage after all). I could have set it to 1/500 as that’s my camera’s sync speed (the maximum shutter speed at which the shutter and strobes can sync). I then chose f8 as my aperture because that allowed sufficient depth of field. So, I set both softboxes to give proper exposure at f8. The background strobes were set to 2 stops overexposed, f16. What do I mean by “set the strobe to f8”? Well, I think that should be a separate post if anyone is interested.
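For the curious, the stop arithmetic behind “f16 is 2 stops over f8” can be sketched in a few lines. One caveat: nominal f-numbers like f11 are rounded, so real pairs come out slightly off whole stops.

```python
import math

def stops_between(f_from: float, f_to: float) -> float:
    # Each full stop halves the light. The light through an aperture
    # scales with its area, i.e. with 1/N^2 for f-number N, so the
    # difference in stops is 2 * log2 of the f-number ratio.
    return 2 * math.log2(f_to / f_from)

print(stops_between(8, 16))   # 2.0 -- the background is lit 2 stops hotter
print(stops_between(8, 11))   # ~0.92, nominally "one stop" (f11 is rounded)
```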
Glass can be a pretty tricky thing to photograph, especially if you’re trying to accentuate the shape of the glass itself. For most product photography you’ll have your lights in front of the subject, maybe at a 45 degree angle, casting even light on the front of your subject. If you did that for something made solely of glass, it would get lost in the background and look really flat and lifeless.
To illustrate my point, I decided to take some shots of a bedside water carafe that I have. The point of this shoot would be to accentuate the shape of the glass in an artistic manner. Maybe not the best product shot if you were trying to sell these, but a good start I think.
The first example uses ‘dark-field lighting’. In this technique, you place the light source directly behind the subject, then obscure some of it with an opaque material (like black foam core or dark cloth). What happens is that the edges of the glass object are defined by highlight, while the majority of the glass object remains dark. You can also control the amount of highlight by moving the object closer (more highlight) or further (less highlight) from the light source. In the example below I was using a 36″x36″ softbox with a piece of black construction paper down the middle. The carafe was about 18″ from the softbox. No other lights were used.
My second example uses ‘bright-field lighting’. This technique is very similar to the above, except you place two pieces of opaque material on each side of your light source, revealing only a narrow slit of light (obviously not so narrow that it’s in the shot though). What happens here is that the edges are defined by shadow, while the majority of the glass object remains bright. You can also control the amount of shadow by moving the object closer (less shadow) or further (more shadow) from the light source. In the example below I was using that same 36″x36″ softbox, but I had two strips of black construction paper along the edges of the softbox. The carafe was about 12″ away from the softbox.
Given these two examples, I think that I prefer the bright-field lighting. An alternative shot that I would have liked to do would be to take the cup off the top of the carafe and place it in front of and to the right of the carafe. I think that would look interesting as well.
Which shot do you prefer, and why?
Here’s the location that I chose for our local firework show. My original location was a bust as it was too far from the display. If Moorpark had an interesting skyline, maybe it would have worked, but Moorpark is… um… fairly suburban.
There was a really nice waxing crescent moon out, but unfortunately with an 18mm lens, it doesn’t really show up that well.
One other tip I can pass along now. Don’t stand downwind of a firework display. All the smoke from the blasts made me smell like sulfur, and it partially obscured the fireworks later on in the show. Maybe it would have turned out better if I was perpendicular to the direction of the wind. Or maybe you just have to get all your shots early in the show.
Here are a few of my faves, straight out of the camera.
Yes, I know there are power lines in the upper right hand corner. Although these were my best from the shoot, I didn’t think that they were worthy enough to warrant the time needed to remove them in Photoshop. Maybe next year….
The 4th is just around the corner, so I’m sure many of you will be looking to capture your local firework show. Fireworks can be tricky as your camera’s automatic features simply won’t work well (unless your camera has a ‘firework’ mode). Well, here are some tips for getting great firework shots.
- Use a tripod.
- Use a cable release.
- Set camera on manual (manual focus and manual exposure).
- Set exposure to f8 @ 8 seconds for initial shots, or use bulb if your camera has that.
- Set focus to just a tad back from infinity.
- Use a wide angle zoom lens.
- If possible, incorporate some interesting foreground elements.
Color management is a very complex subject, which we could talk about for days on end. In this post I’ll be talking about X-Rite’s ColorChecker Passport. This handy little tool helps you get the best, and most accurate, color out of your digital cameras. The two images below show the difference between the standard Adobe Camera Raw conversion (top) and one using a profile made with the ColorChecker Passport (bottom). As you can see, the bottom image’s colors are much richer and more pleasing, and are truer to the actual hue of the orchid.
So how do you use the ColorChecker? You need to take a shot with the ColorChecker in the frame for each lighting scenario. So for example, if you were taking a few shots inside and a few outside, you’d need to take two shots with the ColorChecker. In this case, all my shots were taken with the same strobes under the same lighting conditions, so I only took one ColorChecker shot.
I should point out that you use this when shooting in RAW format, and that you should be viewing the results on a recently calibrated monitor*. I don’t think it works if you shoot JPG or TIF.
So after downloading all the images from the shoot to Adobe Lightroom, you export the ColorChecker image using the X-Rite ColorChecker Passport export preset. It churns a bit creating a profile, and once it’s done you need to restart LR to pick up the new profile. This is one area I’d like to see improved. Restarting LR is a bit of a hassle. After the restart, enter the Develop module of LR and apply the newly created profile to your image under the Camera Calibration tab. Boom, that’s it. Your image is now calibrated.
You can use the grayish looking blocks on the left side of the Passport to adjust the color balance of the image. The left column is for portraits (warms the image), the middle is for landscapes (brings out greens), and the right column is a generic gray scale for a neutral color balance. I find that most of the time I use the neutral scale.
I think that the look of a calibrated image is definitely better than what ACR does by default. I do think, however, that the profiles created with the ColorChecker tend to saturate colors a bit. Not that this is bad, but if you were looking for exact color reproduction, you’d be off the mark a bit. This could be, as is often the case, due to my inability to use the product correctly.
*note: I use the X-Rite i1 colorimeter to calibrate my iMac’s monitor.