20.4.08

Genealogy. Maybe.

Looking back at all the experiments, I can easily trace most of my technical choices, although I can never firmly pin down why it is that I make these images. I have strived to avoid the question. If you look at it and something goes on for you, great. But for me, more than anything, it is just getting it done. These current multi-focal composites are a departure from this way of approaching my work: they follow a series of ideas surrounding photography to a conclusion within their own logic.

For me, photographic images have fundamentally changed. We are all familiar with the transition to digital: instead of a chemical process, photographic images are encoded samples taken from a sensor. While there has been this fundamental shift in the medium's underlying workings, by and large its use and conceptual development have continued in the paradigm of traditional photography, welcoming digital for the technical enhancement it has brought but treating it much the same, with the core focus being on the 'image' as subject matter, a human computational outcome. It is hard to ignore this intuitive, taught way of interacting with images.
It is hard to explain the shift in feeling towards images in this sense. It is like sitting back, watching the world pass by, and realizing that most things are the sum of a rule being abided by; adherence to a rule enforces the rule. Thinking of an image as data takes the image into a purely abstract space, but this is the habitat of the modern image. The real innovations in photography happen outside 'photography': in software development, in the opening of new domains of interaction, in the treatment of an image as the product of a given dataset in an ever-growing dataverse. Projects like Joachim Sauter & Dirk Lüsebrink’s ‘Invisible Shapes of Things Past’ expose these potentials, but there are countless other examples, from Scene Completion to Automated Facial Recognition, all leveraging computation over data to create something that is more than a photographic image. At the moment the use of images in this manner is treated as a bit of a novelty, but these data hybrids are only going to grow and further penetrate people's use of the image.

19.4.08

Computation.

All of the images I'm making rely heavily on post-capture processing to turn them into what I want. These computational overheads add significantly to the time it takes to create an image, but more than anything they represent what I'm trying to achieve. The current workflow runs something like this (a rough sketch of the organization step follows the list):

1. Convert the camera .raws into small .jpgs using the generic settings within Adobe Camera Raw.
2. Organize the images into their row, then layer. The row is all the images from one set of full movements along one axis. The layer is the stack of differently focused exposures at each position; the layers are what combine to render the whole image in focus.
3. Feed the layers into one of a few image-stacking programs. This part can take some time, as each image can need a fair bit of optimization. Save this output.
4. Feed these outputs into either Photoshop Photomerge or Autopano Pro. If the images/data are decent then Photoshop can stitch out a test image in a few hours, but any major gaps or serious misalignments lead to a massive increase in compute time.
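
As a minimal sketch of how step 2 could be scripted: this assumes the frames come off the camera in strict capture order, with a fixed number of focus slices at each position before the camera moves on. The paths, column count and slice count are all invented for illustration, not my actual settings.

```python
import shutil
from pathlib import Path

# Sort flat capture output into rows and focus layers. Assumes strict
# capture order: FOCUS_STEPS consecutive exposures per camera position,
# positions swept COLS at a time along one axis.
SRC = Path("captures/jpg")      # small .jpgs out of Adobe Camera Raw
DST = Path("captures/sorted")
COLS = 10                       # camera positions per row (made up)
FOCUS_STEPS = 3                 # focus slices per position (made up)

frames = sorted(SRC.glob("*.jpg"))
for i, frame in enumerate(frames):
    position = i // FOCUS_STEPS       # which camera position this frame is
    layer = i % FOCUS_STEPS           # which focus slice at that position
    row, col = divmod(position, COLS)
    stack_dir = DST / f"row{row:02d}" / f"col{col:02d}"
    stack_dir.mkdir(parents=True, exist_ok=True)
    # one folder per position: each folder is a ready-made focus stack
    # to hand to the image-stacking program in step 3
    shutil.copy2(frame, stack_dir / f"layer{layer}_{frame.name}")
```

The point of the folder-per-position layout is that step 3 then becomes a dumb loop over directories rather than hand-picking files.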

Autopano offers the fastest, most controllable output but responds really badly to image sets with missing images, warping the entire image. When all the data is there, though, it produces great outputs. Photomerge is stupidly slow but seems to be able to ignore missing images. Mostly the missing images are a result of my lacklustre capture process; this should/will improve greatly with the use of a scanning back. I also tried Realviz Stitcher. I used it extensively for early panoramic experiments, where it performs brilliantly, but when faced with images that cover a large planar field of view it fails. I'm not really sure why, but in the face of Autopano I gave up trying to optimize it.
This is the last test I intend to do with a DSLR as the capture unit. It highlights nearly every problem! The lack of any uniformity reveals the fact that it was created from small images over a long period of time: in this case, when I started it was a rainy day and by the time I had finished it was sunny. This has been less pronounced in previous images as they were shot in environments with mostly continuous lighting. Although most of this could be edited out, I feel that that is outside of this project. Most of the image is 'in focus' but there are chunks and gaps. These relate to the repeatability issue and are exacerbated by small movements in the camera (of all things, the floor bowed under the weight of the camera). The dark square effect is caused by the minimal amount of image available to overlap.

This test also revealed an entirely new problem that only really occurs with stacked images, and it arises from an optical effect. By 'shifting' the back standard of the 4x5 camera I move the sensor over the lens projection. At the extreme end of a movement there is more image being projected but the sensor cannot move any further. In these situations I 'shift' the front standard to move the image over the sensor. In the early composites this never manifested itself as a problem, as the image was at a single focal plane. But with the images being stacked, these slight changes become parallax errors that make it impossible to overlay multiple focused images accurately. The phenomenon was greatly enhanced by the fact that what I was focusing on was very close. It might well be possible to get away with it in an image where the plane of focus is shallower. This will reduce the total number of images used to make a composite, unless I can find a way to extend the rear movement. I should really have known about this before it happened, as it was a major problem in my early panoramics. That said, I really like this failure image. It embodies a lot of what I'm trying to say.
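To put a rough number on the parallax: under a thin-lens approximation, shifting the lens sideways by d moves the image of a point by d(1 + m), where m is the magnification at that point's depth, so two focus slices misregister by d times the difference in their magnifications. The shift and subject distances below are made up purely for illustration.

```python
# Thin-lens, back-of-the-envelope numbers for the front-standard parallax.
# m = f / (u - f) is the magnification for an object at distance u; two
# slices focused at different depths misregister by d * (m_near - m_far).
f = 210.0    # focal length in mm (the 210mm lens used here)
d = 10.0     # hypothetical front-standard shift in mm

def magnification(u_mm, f_mm=f):
    """Thin-lens magnification for an object at distance u_mm."""
    return f_mm / (u_mm - f_mm)

m_near = magnification(600.0)     # subject plane at 0.6 m (made up)
m_far = magnification(1200.0)     # subject plane at 1.2 m (made up)
error = d * (m_near - m_far)
print(f"slice misregistration: {error:.2f} mm at the sensor")   # ~3.3 mm
# ~3.3 mm is hundreds of pixels on a 350D: no aligner will stack that away.
```

Magnification changes fastest at close range, which is why the very near subject made the problem so much worse.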
The software I have settled on to perform the image stacking is Helicon Focus. Although I have tested a few, this program offers a good range of possibilities for manipulating the image, and it supports a batch render function (only under Windows, unfortunately). This use of the software really stretches the algorithms used to blend the layers of different focal planes, mainly as it is all optimized for macro work. Obviously, if all this testing works, I go back to the .raws, give them a full work-over and repeat the process with greater attention to the details.

18.4.08

One small step for me, one giant leap for my credit card

I can say with some degree of certainty that I have a BetterLight 4000E in the pipeline. It's the lowest model scanning back from BetterLight, and the version I'm getting shipped in is SCSI based. I can't wait. I had hoped to find something a bit more local, like a Phase One PowerPhase, but they are nigh on impossible to find. As with all things in high-end imaging, it's all dealers and pros. It's very hard to crack in and have a play, unless you're shooting some skinny model for a catalogue. Despite putting out requests to UK dealers, only one had the decency to respond. The BetterLight back tops out at about 18 megapixels, but the quality of those pixels will be awesome. The colour data is derived from a single scan line with its own R, G and B samples, so no Bayer interpolation is necessary to generate the colour image; combined with the DAC working in 16 bits, I can look forward to some very clean images. Of course my main need is a large effective sensor, and the 4000E's scan area is a whopping 72mm x 96mm. Honestly, I feel a bit uncomfortable with this back, as all the images and all the discussion on forums surrounding them are geared towards either art reproduction or landscape. I really don't see myself in either. As usual I'll just ignore that and carry on.
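
For my own sanity, the arithmetic on those specs, using only the ~18 MP and 72mm x 96mm figures quoted above rather than a manufacturer data sheet: the pixels are coarser than the DSLR's, but the capture area is vastly larger, and each pixel is sampled in R, G and B individually.

```python
# Sanity check on the 4000E numbers as quoted in this post.
pixels = 18e6
scan_w, scan_h = 72.0, 96.0                      # scan area, mm
px_per_mm = (pixels / (scan_w * scan_h)) ** 0.5
print(f"~{px_per_mm:.0f} px/mm, ~{1000/px_per_mm:.0f} um pixel pitch")

# Compare the Canon 350D: 8 MP on a 22.2 x 14.8 mm sensor.
canon_density = (8e6 / (22.2 * 14.8)) ** 0.5
area_ratio = (scan_w * scan_h) / (22.2 * 14.8)
print(f"350D: ~{canon_density:.0f} px/mm, but ~{area_ratio:.0f}x less area")
```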

16.4.08

Image Stacking Composites - Tests


This is the first full-scale test, using rise, fall and shift as before to capture the greatest possible area of the lens's image circle, combined with racking the focus to generate enough images to describe the entire field of view as 'in focus'. This test image is riddled with errors. Most of these errors are surmountable but require a further increase in the tolerances of the current imaging process. Currently the source images for each composite are captured via a Canon 350D attached to a 4x5 studio camera with large-range movements. Because the area of the 350D sensor is so small, a great number of individual exposures are required.
The sensor is effectively in the back of a box, the shutter-box. Because of this, not all of the image circle (approx 301mm at f22 from a 210mm lens) can be captured. Also, due to the angle of the mirror for the viewfinder, only a very small area of the total image circle can be seen and focused. This never affected the early images, as I used an angle finder to magnify the viewfinder and acquire focus; that point of focus would then be the same across the entire set of images made via the movements. By introducing the need to change the focus to acquire enough information to make an all-in-focus composite, many of these early limitations become major problems. Not being able to use the viewfinder outside of a small image area makes it incredibly hard to find all the required focus points for the image stacking. I have found a few ways to circumvent this, but none of them is as satisfactory as being able to use the viewfinder. As the camera is mounted about 80mm from the 4x5 insert, it's not possible to use the ground-glass focus screen. The next major hurdle is repeatability.
With each whole image being constructed from upwards of 300 source images, the scope for error is high. Each image must overlap by approximately 25%, which leaves room to wriggle. But as soon as focus stacking is introduced, the number of images skyrockets to anything upwards of 650. Additionally, the overlap needs to be even greater to offset the change in focal length caused by shifting the focus. This focal shift manifests itself in the focus-stacked images having a soft edge, often reducing the overlap below 15%, which in turn increases alignment errors. All of these errors can be seen in the test.
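
A rough count of where those frame numbers come from, tiling the 350D sensor over the usable image circle with 25% overlap. Taking the capture field as the square inscribed in the 301mm circle is my own simplification, not a measured value.

```python
import math

# Tile the 350D sensor across the usable field with 25% overlap.
sensor_w, sensor_h = 22.2, 14.8          # Canon 350D sensor, mm
field = 301.0 / math.sqrt(2)             # inscribed square, ~213 mm a side
overlap = 0.25
cols = math.ceil(field / (sensor_w * (1 - overlap)))
rows = math.ceil(field / (sensor_h * (1 - overlap)))
print(cols * rows, "frames for a single focal plane")           # 13 x 20 = 260
print(cols * rows * 3, "with three focus slices per position")  # 780
# In the same ballpark as the 300-odd and 650-plus frames above.
```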
A live-view camera would allow for more accurate on-the-fly focusing, but the best solution would be to use a medium-format digital back. These have an adaptor that would allow the use of a magnifiable ground-glass screen, and they also benefit from a larger image sensor that would reduce the total number of shots required.
Upon reflection, I have decided the best way to improve the image and get it nearer to my vision will be the use of a scanning back. This will increase the effective sensor size to approximately 100mm x 80mm. As a scanning back will require being tethered to a laptop, instant review of the images will remove most of the aforementioned errors. And with potentially fewer than 30 images required, high-tolerance repeatability will be easy.
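
Re-running the same overlap arithmetic with the scanning back's dimensions (same inscribed-square simplification as before) shows where a fewer-than-30 figure could come from:

```python
import math

# The tiling arithmetic again, with the ~100 x 80 mm scan area in place
# of the 350D sensor and the 25% overlap kept.
field = 301.0 / math.sqrt(2)                   # ~213 mm square, as before
cols = math.ceil(field / (100 * 0.75))         # 3
rows = math.ceil(field / (80 * 0.75))          # 4
print(cols * rows, "positions per focal plane")   # 12
print(cols * rows * 2, "with two focus slices")   # 24, comfortably under 30
```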

Of course, all of these errors and ideas pertain to a single vision of the images I want to create. There are many other routes and experiments to be explored. I'm particularly keen to explore some form of automation of the process; it's not inconceivable to develop a system that manipulates the camera movements via stepper motors. I also have yet to try using any of the movements that manipulate the perspective and plane of focus of a recorded image. But all in good time.