Is the Shroud of Turin really just 18 years short of its 2000th birthday? SEE THIS BLOG FOR A DAILY ACERBIC OVERVIEW OF CURRENT WRANGLING (currently 2015, Week 33)

This posting reports what this blogger/retired science bod considers to be significant progress in modelling the “Shroud” image, so as to reproduce more of its allegedly ‘iconic’ and/or “unique” properties (negative image, superficiality, 3D properties, fuzzy border, possibly even some of those so-called microscopic properties).

(See previous posting for the scepticism about some of the features described as “microscopic”, which might be more accurately described as mere enlargements of macroscopic properties that reveal no new structure).

I shall be using the new format adopted in that last posting, namely to post as a series of mini-topics, presented in reverse chronological sequence, i.e. most recent at the top where it is easier to find. Sorry if one finds that quirky, but it’s proved the only sure way for getting the Google algorithm to spot this site in the internet jungle, i.e. organizing it in a way that gets repeat visits to the one posting and a respectable ranking in a (shroud of turin) search profile – at least for “past week”, “past month” etc.

 Topic 6: Here comes the crucial test of the new model: how do my linen fibres compare with those of the Turin “Shroud” under a low power light microscope?

This is a difficult topic to report on a blog, given the welter of detail that has to be addressed, and given the need to be scrupulously non-partisan in the choice of field for comparison. On the other hand, there are a mere 8 Mark Evans microphotographs cited as showing the claimed microscopic properties of the “Shroud”, covered in the posting that preceded this one (see last-added Topic 13 at top of posting), whereas there’s an infinite number that can be generated by modelling. There comes a point where one simply has to bite the bullet – and put up a single comparison that offers the reader (to say nothing of this blogger) an easy introduction. One will then start to flesh things out in more and more detail. It could take days; it will more probably take weeks.

So here’s the first plate for your attention: click on image to enlarge.


Left: image fibres from a dry flour imprint of this blogger’s hand onto linen, taken through 2 heating stages, the first with a hot iron for pressing, the second with a hot air oven, followed by washing with soap and water
Right: Mark Evans photomicrograph, code ME16;


There are fairly uniformly coloured fibres in the model system (left). They are not dissimilar from those in the “Shroud” (right). The model fibres constitute a much smaller proportion of the total, but that was probably the result of a deliberate decision to produce as faint an image as possible, consistent with still being able to see it (details later). What we see in the model fibres is consistent with the peculiar so-called “half-tone effect” claimed for the Shroud, though it has to be said that occasional dark fibres (darker than above) can be seen here and there in other fields – though if more damaged and brittle they might tend to break off with time, leaving the paler honey-coloured fibres we see above.

There’s a second feature in common with “Shroud” fibres – a preference for coloration on the highest points in the weave, i.e. the crowns. It’s not exclusive to the crowns, but that is the case with the “Shroud” too. However, one should not set too much store by that similarity, since the first of the two heating steps – pressing with a hot iron – was deliberately deployed with a view to producing the crown fibre coloration. It’s noteworthy that the effect could be produced using solid flour – one might have expected the fine particles to fall into the lowest parts of the weave! Oops. Forget I said that: the flour particles are transferred from dusted skin using WET linen, so there’s an additional mechanism that favours imprinting onto the superficial crowns. Maybe I don’t need that pre-ironing step after all. Maybe the harvesting of flour onto wet linen is sufficient to get the crown-imprinting. Might the wetting of linen cause the threads to swell, temporarily closing up the interstices of the weave, keeping the flour from penetrating the weave – which would be difficult enough anyway the instant the flour becomes wet?

More to come – much more.

The next task is to show the changes in the microscopic appearance at each stage of the modelling.

 Update: Monday Aug 17

Oops, it’s now Week 34, time to start a new posting under this new presentation. By way of signing off from this Week 33 posting, here’s a quick postscript to the possibility flagged up in red. I did a test 3 days ago in which the dry-flour imprinted sample was oven-roasted first. It did NOT get the initial hot iron. Fortunately I still have the samples before and after final washing. Yes, there’s clear evidence under the microscope of preferential coloration of the crown threads. In other words, the technique of imprinting dry flour onto wet linen would seem to be sufficient in itself to account for the “contact scorch look”, the false friend that sent this blogger down the road of direct contact-scorching with heated brass crucifixes and other inanimate and unwieldy templates for the best part of two years or more. Upside: lots of experience with using ImageJ in a manner best suited to contact imprints that may or may not have genuine 3D information, needing to be carefully differentiated from the pseudo-3D that is embedded in ImageJ’s easily-overlooked default z=1.0 setting. (Some might consider that ImageJ should come with a health warning!)

Blogging strategy for the next few days? Design and carry out new experiments that incorporate all the latest thinking, carefully comparing one or both heating stages, testing and comparing linen that is very wet or just damp for its flour pick-up powers, and looking carefully for the methodology that delivers (a) optimal selectivity for the crowns in comparison with the Mark Evans pix and (b) the best demonstration of uncoloured v uniformly coloured fibres, with no intermediate shades between the two, which some describe as the ‘half-tone effect’. Incidentally, the latter is an unhelpful and somewhat misleading term in my opinion, best avoided – I prefer “two-tone effect”, thereby removing the TS image entirely from the context of 20th century dot-matrix ink-printing technology.

Topic 5.


Look carefully at this graphic, rotate screen through 90 degrees. The answer to those “eyes” is here, for those with eyes to see, who may then not need to read my welter of words and explanation.


So how were those eyes obtained in Topic 4? Let’s look first at how ImageJ produces its 3D effect, on simple 2D diagrams with no 3D history. One is then in a better position to see how it can be best utilized for exploring images that may have a 3D history (though whether that enhances the 3D effect still further is a moot point, as we shall see).
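To make the principle concrete before the figures below, here’s a minimal Python/numpy sketch of my own (a toy model, emphatically NOT ImageJ’s actual code): treat pixel intensity as height, then brighten or darken each point according to the slope it presents to a virtual light fixed on the LEFT. The function name and parameters are illustrative only.

```python
import numpy as np

# Toy sketch (not ImageJ's internals): intensity is read as height, and
# each point is shaded by the slope it presents to a light on the LEFT.
# A surface rising toward the right (positive x-gradient) faces away from
# the light and darkens; one rising toward the left brightens.

def render_3d(intensity, z_scale=1.0, lighting=0.5):
    """Return a crudely shaded image from a 2D intensity map (values 0..1)."""
    height = z_scale * intensity
    dzdx = np.gradient(height, axis=1)   # slope along x (light from -x side)
    shading = -lighting * dzdx           # left-facing slopes brighten
    return np.clip(height + shading, 0.0, 1.0)

# A step edge running vertically (long axis at 90 degrees to the light)
# responds strongly; a horizontal edge, constant along x, would not.
img = np.zeros((5, 10))
img[:, 5:] = 1.0
shaded = render_3d(img, lighting=0.5)
```

The point of the sketch: the “3D” comes entirely from the intensity map plus the direction of the virtual light, with no genuine height information anywhere in sight.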

In keeping with the ‘reverse chronology’ reporting on this site (new additions going on top), Topic 5 is in two parts: 5A – processing the “Shroud” image in ImageJ, and 5B – processing schematic diagrams with the same software. 5B will be posted first (approx 11:15 UK time without captions, the latter added at 5 minute intervals) and 5A later today, probably mid-afternoon.

Topic 5A: So how were those eyes “revealed” (or artefactually imaged)? Topic 5B below explains the theory, which is to do entirely with the settings AND orientation of the side illumination in ImageJ that comes from the LEFT.

Let’s look at what happens when one progressively illuminates the brow ridge and eye sockets of the Man on the “Shroud”, either in the conventional upright position OR rotated through 90 degrees (which places the long axis of the brow ridge at right angles to the light source, casting a stronger, more prominent shadow, and with it a greater 3D-enhancing effect – albeit artefactual, NOT real).


Brow ridge, upright v rotated through 90 degrees, from Shroud Scope (added contrast)


As above, uploaded to ImageJ. Lateral lighting (from left)  set at 0.1.


As above, lighting increased to 0.2. Laptop screen has to be rotated to see effect on the rotated image.



Lighting increased to 0.31. Rotate laptop to see increasing difference between the two images.



Lighting now at 0.4

It’s a bit of a nuisance to have to keep turning one’s laptop to see the growing difference in the way the two images respond to increased illumination. So here are the restored upright versions of the right-hand images. Note the gradual appearance of “eyes” as illumination was increased from ABOVE, i.e. at right angles to the long axis of the brow ridge, creating an increasing zone of shadow in the eye socket below.


Now you know how those “eyes” were generated in ImageJ – simply by altering light intensity AND direction!


Topic 5B.  Double-click to enlarge images.


Fig 1: Here’s an entirely home-made schematic diagram (thank you MS Paint) in which the superimposed rectangles or solid circles have been given graduated shades of colour or grayscale. Let’s see how they behave in ImageJ.



Fig 2: Here they are, uploaded to ImageJ. The only change to default settings so far is to “smoothing”, raised from 0 to 5 (to suppress the ‘needle forest’ effect of applying 3D to unsmoothed pixels). Note the position of the pointer at 0.5 on the z scale. That is the software’s default setting; it cannot be reduced – important for what follows. This blogger rarely if ever increases the z scale either, since to do so introduces (additional) height artefacts.



Fig.3: This screen display has exactly the same settings as Fig.2, but with a crucial difference: the pointer has been used to tilt the display slightly (top end into the screen). An important property (and constraint) of ImageJ is revealed. The software applies a default 3D-rendering to ALL inputs, as seen from that non-zero setting on the z scale (right). The “inbuilt” default 3D is not insignificant either, as seen from that conical (comical?) “3D” B/W target, despite it having NO 3D history.


Fig.4: The lighting value has now been increased to 0.6, with NO OTHER CHANGE. Observe first the additional 3D effect, especially obvious on the vertical rectangles. Why is that? It’s because the lighting in ImageJ, crucial to its 3D-rendering over and above the elevation provided by the default z setting, comes from the left. That is a second default setting, one that cannot be altered. Which figures respond best to the unilateral illumination – those with their long axis edge-on (left), or those with their long axis normal (90 degrees) to the incident “light”?



Fig.5: Tilting the display slightly gives a little extra 3D-appearance to the horizontal figures on the left, but not much. It is still the vertical ones that are most conspicuous, being oriented at right angles to the incident light, creating a shadow on the long sides instead of the short ends. There’s clearly a take-away message here to be applied when uploading an image of unknown origin, one that may or may not have real 3D information (whether or not that is credible). Never be content with an upright presentation. Always test the effect of rotating the figure through 90 degrees, and maybe intermediate angles, especially where there are features that have a dominant long axis – psst, like the brow ridge in the TS that spans both eyes.
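That take-away can be demonstrated in a few lines. Below is a hedged toy measure of my own devising (a numpy sketch, not anything from ImageJ itself): score an image by its mean left-right slope, a crude proxy for how much shadow a fixed left-hand light could generate, and compare upright v rotated.

```python
import numpy as np

# Crude proxy for the left-lit shadow response: mean |slope along x|.
# A feature with a horizontal long axis presents only its short ends to
# the light; rotate it 90 degrees and its long edges face the light.

def lateral_shading_strength(intensity, lighting=0.5):
    return lighting * np.abs(np.gradient(intensity, axis=1)).mean()

bar = np.zeros((20, 20))
bar[9:11, 2:18] = 1.0                    # horizontal bar (long axis left-right)

upright = lateral_shading_strength(bar)
rotated = lateral_shading_strength(np.rot90(bar))   # long axis now vertical

# The rotated bar shades far more strongly under the same "light".
```

Same image, same virtual light – yet the score changes dramatically with orientation, which is precisely why an upright-only presentation can mislead.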



Fig.6: Here’s an intermediate value for rotation, approx 45 degrees, combined with some tilt to give a bird’s eye view.  The lighting has been returned to zero. There is just a hint of “3D-ness” due to that default z setting, but for conspicuous 3D rendering one has to use that lighting control, which is the CRUCIAL one in ImageJ.


Fig.7: here’s the same oblique vantage point as above, but with lighting raised to its maximum value. Note the way that the rectangles on the left would be easily overlooked, but for their small ends now in shadow. Note the prominence of the other 2, thanks to maximal shadowing along their long axes.

I’ll be back in a couple of hours, showing what happens when one uploads the TS face from Shroud Scope into ImageJ with its prominent brow ridge, beneath which are the eye sockets, and see how that ‘long axis’ behaves when one (a) alters the intensity of the lighting and (b) the orientation of the image, relative to the incident “light” coming (ALWAYS!) from the left… Look carefully at what appears and/or disappears in the eye sockets. Real or artefactual? Always assume artefactual, unless one has strong grounds for suspecting otherwise. End of Topic 5B.  Topic 5A (TS image) will be added on top later.  First some uv exposure – there being a rare glimpse of the golden orb in the sky.

Topic 4: Here’s an image of the Man on the Turin “Shroud” with features you’ve maybe never seen so clearly before. This one has EYES and LIPS (or patterns of pixels on your laptop screen that could be mistaken for eyes and lips).



Shroud Scope image, minimally processed

As the caption states, the image was obtained using the splendid Shroud Scope, and then minimally processed in ImageJ.

(Techie stuff: the height setting on the z scale was kept at 0.1, i.e. its default setting, one that cannot be reduced, as my embedded B/W reference shows, given it has no 3D history, having been constructed in MS Paint. Minimal values were used for smoothing and lighting (10.0 and 0.2 respectively).)

So what makes this image different from most others – like having those EYES!  Look carefully and you may see the ‘trick’ that was used – which some might regard as perfectly legitimate, exploiting another fixed feature of ImageJ, albeit one that you can work around (CLUE!)  and indeed was worked around.  Answer – will be given in 24 hours.

In passing: the nose and surrounding area has given me an idea as to how the ‘awkward’ face could have been imprinted in a contact-only mechanism without resort to a metal or plaster  bas-relief (Garlaschelli), at least in the latest  flour dust model. Clue: cartilage is more malleable (“bendy”)* than either tin or lead,  and let’s not forget that our imprinting medium flour can be dabbed on and off dry linen with a brush or swab or similar.

* “deformable” might have been a better term – as one Michael V., now FRS mathematician, demonstrated on this blogger’s nose circa 1959, having sneakily crept up behind during one of Mr. Tanner’s riotous chemistry lessons to deliver a (probably well-deserved) karate chop, requiring corrective surgery many years later …

Topic 3: Here’s Dr. Positive (science bod) calling a certain Dr. Persistently Negative, he who dishes out his “science” as if medicine to treat disease. This is an important posting, probably the most important from my years of “Shroud” research, and it’s dedicated to the man with the prescribing tendency. Why? Because his negative nitpicking – countless sniping and indeed hostile comments and, especially, his sniping-from-cover pdfs – was what spurred me to switch from imprinting with flour paste/slurry to imprinting with dry flour. Check out these results for (a) that “Shroud”-like fuzzy image by which he sets so much store (rarely if ever considering the effect of age-related degradation) and (b) 3D properties (which he flatly claimed were lacking, unsupported by data, and which I demonstrated yesterday to be false).

First, the new improved fuzzy-look image, obtained using flour dust as imprinting medium, colour development with a hot flat iron* or in a hot oven, and a new 3rd stage (image attenuation by washing with soap and water).

(*Late addition: it’s probably the hot iron – its pressing action being responsible for the coloration being confined mainly to the crowns of the weave. Microscopy is in progress, but needs careful evaluation).


Imprint of this blogger’s hand using new 3-stage technology, starting with dry flour as imprinting medium.

Here’s the same after applying Autocorrect in MS Office Picture Manager:


With Autocorrect

… and here, skipping several stages, but still deploying the image-processing techniques flagged up yesterday in Topic 2, is a side-by-side comparison of the excised, re-oriented hands on the “Shroud” with the now-fuzzier powder-based model.


Note the two benchmarks added to the model imprint (2D reference markers with no 3D history, but showing some ‘apparent’ 3D properties due to the non-zero default z scale setting that cannot be set to zero).

Here’s a close-up of the model imprint after performing a Secondo-Pia style tone-reversal (“positive” to “negative”) in ImageJ.


Tone-reversed negative of dry-flour imprint, 3D-rendered in ImageJ. Note the relative lack of distortion, compared with the wet-flour imprint in Topic 2. Dr. Negative please note.

Not bad eh?  One is put in mind of that biblical quotation based on the bees around the deceased lion (“from out of the strong came forth the sweet” or words to that effect, even if the biology is suspect) …  from out of the negative came forth the positive…
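For the record, the Secondo Pia style tone reversal is nothing more exotic than photographic inversion of each pixel value (ImageJ offers it as Edit > Invert). A minimal sketch, with an illustrative function name of my own:

```python
import numpy as np

# Tone reversal ("positive" to "negative" and back): each 8-bit pixel
# value v becomes 255 - v, so an as-is negative imprint reads as a positive.

def tone_reverse(img8):
    return 255 - img8

imprint = np.array([[0, 64, 128, 255]], dtype=np.uint8)
positive = tone_reverse(imprint)
```

Applying the operation twice restores the original, which is why nothing is “added” by the reversal – the information was in the imprint all along.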


Postscript: there’s a tiresome individual on Dan Porter’s shroudstory site (not seen for a few days) who pops up regularly whenever this blogger produces a new idea, saying that it’s simply his ideas that have been recycled (that’s a charitable description). One of his favourite ploys to undermine my credentials is to quote an idea floated in my 6th posting in January 2012, related to Model 1 (“thermostencilling”), which involved the use of mummified monks, as on display in Brno, Czech Republic.


The mummified monks of Brno.

(admittedly not my finest hour, as it required them being heated, but maybe not excessively if the linen had been impregnated with a thermo-sensitizing material). If he reads all of that posting (unedited, as is the case for all my postings) he sees ideas there, influenced no doubt by Ray Rogers, that foreshadow what’s here, notably the use of starch or reducing sugars or fruit juice to render the linen better able to capture an image. Indeed, Maillard reactions even get a mention. I returned to that approach much later, in October of last year, sprinkling dry flour onto linen before imprinting with a heated metal template.


From October 24, 2014 – testing white flour as a thermosensitizer (it worked!).

The latter was even coated with flour and tested (poor adhesion!). Now why didn’t I think of coating myself with the flour, thus getting away from the major drawback of the simple contact scorch hypothesis, namely that it required  a metal template (heavy, cumbersome etc) instead of a real person?


 Topic 2: response to new comment from Dr.Relentlessly Negative, and a side-by-side comparison of 3D response of TS hands versus my own flour-imprinted hand (wet slurry technique).

Here’s Dr.Negative’s response to Topic 1 below:

“This “over-dogmatic MD” knows since many months or years that any kind of 2D input gives a 3D response using ImageJ.
This is is the problem, not the solution..

I will explain that in detail after your instalment.”

Once again, I disapprove strongly of the way he uses and abuses Dan Porter’s site. He’s been allowed to use it as a permanent billboard for two pdfs (see margin) specifically attacking my ideas over a long period of time, with no facility there for responding to his haughty criticism, the latter based for the most part on some poorly-designed experiments. Now, and not for the first time, he’s playing cat-and-mouse, now-you-see-me, now-you-don’t, in the comments on that site.

I’ve just done a comparison, as carefully controlled as possible, of the 3D response of the disputed imprint of my hand against the TS hands. See caption for details:


Since the 3D response in ImageJ on the default z scale setting depends almost entirely on the shadowing effect created by virtual illumination from the left, it was considered important to have all the fingers in the vertical plane. That required rotation of each of the two hands of the TS image (taken from Shroud Scope and used ‘as is’, i.e. without any photoediting). Note as before the use of embedded, entirely 2D-generated benchmarks, each showing a small, entirely artefactual 3D response, placing a question mark over the validity of all the images above as having captured any real 3D information from their subject at the instant of image capture.

And here’s the same again, with the optimized settings that were used in ImageJ.


Note the crucial settings: smoothing 10.0, Lighting 0.5, z scale 0.1 (minimum default value).

Well, I don’t know about you, dear reader, but I see no grounds for thinking that my model imprints are appreciably worse (or better) than those of the TS, at least for the hands. One can argue endlessly about whether a particular 3D enhancement is real or apparent, but one thing’s for certain: one’s conclusions are more firmly based in hard reality when knowing the history of the input image, albeit a crude model system with flour paste. How can one hope to arrive at any hard and fast conclusions when the history of the TS body image is UNKNOWN, far less claim that it has a uniquely superior response to 3D-rendering when the above two results prove otherwise? As for those who hound one on this and other issues to do with the TS, my policy is simple – progressive disengagement. One discusses the science as equals, or not at all. There are no grounds for presumptions of superiority merely because one has been researching it longer than the other. It’s the quality of the data, OLD and NEW, that matters, and the unbiased interpretation of that data, where preconceptions have to be put to one side, if only temporarily.

I recently discovered where the “cherry jam” hyping of the TS 3D response started. It was way back in 1983 (possibly earlier by as many as 4-5 years).  Watch this space for a cut-and-paste.

Yup, here’s a prime, dare I say crass example of sloppy, slapdash science being used to sustain  a major claim, namely that the 3D properties of the “Shroud” are exceptional in comparison with all other images. Note the sample size of 1 used for the “all other images” category:



Colour plates from Heller’s book, 1983. Note the grotesquely over-hyped captions. Some might think the lower image compares very favourably with the “Shroud”. The distortion is only to be expected, given a photograph, influenced by light direction, is being compared with an image (almost certainly an imprint) that is generally referred to as having no directionality, light playing no part in the imprinting  process. Shadows are bound to result in distortion when uploaded to 3D-rendering programs, whether analogue (the above VP-8) or digital (e.g. ImageJ).

(Late addition: the captions may be hard to read in places; here they are, with the totally unscientific ‘cherry jam’ highlighted in red.)

Upper of the two photos: A VP-8 image taken from the instrument’s cathode ray tube. The three-dimensional attributes of the VP-8 Shroud images cannot be reproduced by any artistic endeavour (Copyright: 1978, Vernon D. Miller)

Lower of the two photos:  A VP-8 of a photo of William Ercoline. Note the gross distortion of all features and the two-dimensional quality of the VP-8 – both characteristic of a VP-8 taken from a 2D surface. The only exception is the Shroud. (Copyright: 1978, Vernon D. Miller)


Topic 1: there’s unfinished business from use of what I now call the 2-stage ‘wet imprinting’ technology, using a slurry of white flour in cold water as imprinting medium, followed by heat treatment to develop colour. Once that’s been attended to, this posting will then go on to describe a new 3-stage process that is a dry-imprinting procedure, using a dusting of dry flour onto the human 3D template (or part thereof – this blogger’s hand). The third stage (which is bound to create controversy)? Answer: image attenuation. But first things first.

Here’s one of two highly irritatingly SIMPLISTIC and/or INACCURATE comments placed on a recent shroudstory posting by a certain well-known sindonologist (see comments beneath the imported graphics from this site). He’s medically qualified, and sadly, like so many medics who engage with scientists, quickly comes across as over-prescriptive (yes, they don’t so much propose their ideas as PRESCRIBE them, as if medicine; woe betide you if you refuse the medicine – you then become a cancer that has to be attacked with a Powerful Drug – Fiction (pdf for short)). More about the abuse of the prescribing pad later – a sore point with this blogger. Yes, pdfs have a role to play, but not for sniping from cover (no facility for posting a reply!). As for Wikipedia’s quaint idea that an unrefereed pdf is authoritative – a peer-reviewed publication in all but name – words fail me. Wikipedia needs to engage more with real people in the real world, at least where highly controversial topics are concerned – namely, interactive sites where ideas can be challenged in the open.

Anyway, here’s part of one of the two comments. I’ve highlighted one of its three claims in red – the one that is simply untrue.

“Regarding your experiments, I agree with your approach.

But, for now the results are not convincing (see my previous message).
1) the imprint of you fingers shows sharp borders, contrary to the shroud.
2) No 3-D, contrary to the Shroud
3) Distortions, contrary to the shroud. We need much more.”

It was accompanied by a cut-and-paste of this image from my recent wet-imprinting researches:



Flour-slurry imprint of my hand, after colour development with a hot iron, avoiding all but the tip of the forefinger on the left. The blue-white stripes are of course the ironing board.

Now please bear with me while I take our medic through half a dozen or so tutorial steps in using the best known 3D-rendering program (ImageJ) which I happen to know is his choice as well as my own (being freely downloadable and very user-friendly).

Did he even bother to check out my imprint? He doesn’t say (a real scientist – engaging rather than prescribing, one who understands the scientific ethos – would have done so).

If he had, it’s a fair bet his laptop screen would have looked initially like this:


Flour imprint uploaded to ImageJ, initial default settings

However, that is not how it would look on this blogger’s screen. Spot the new addition:


Note the internal reference, a 2D diagram constructed in MS Paint of solid concentric circles, with a steady increase in image intensity towards the centre.
ImageJ on default settings. Note the value on the z (apparent height) scale. It is not zero, but 1.0. It cannot be set below that value.

If one tilts the diagram to get an oblique view of that internal standard, one sees something that I suspect not many people know about.


Note that the internal standard now appears in 3D, despite having no 3D history, and despite ImageJ being in default settings. That is because there is some 3D rendering that cannot be removed, and it has to be regarded as “apparent” 3D, even if one’s input image is suspected, or assumed, to have some “real” (though questionable) 3D properties.

Now let’s try enhancing beyond the default 3D. How should one do that? One could be forgiven for thinking that the appropriate control to alter first would be the z scale setting (“height”). One would be wrong in making that assumption, as the following use of lighting and smoothing controls demonstrates. Again, please bear with me, because this is important. The z control will be the last to be tested.

Which does one alter first – smoothing or lighting? Answer: the smoothing – not because it has much effect on its own, but because it’s needed to de-pixellate the displayed images, 3D-wise, in order to make the result look realistic. If the smoothing setting is not increased first, one is faced with a ‘needle forest’ – the result of 3D enhancement of unsmoothed pixels, as this next result shows:



Lighting (lateral illumination from the left) has been increased, but the smoothing should have been increased first to avoid the needle forest.


This is the same as above, after first increasing the smoothing to a modest value (20), sufficient to avoid the needle forest. One should aim for the lowest setting: excessive smoothing results in loss of crisp and progressive 3D response, whether real or apparent.
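Why smoothing must come first can be shown with a toy example. Here a simple box blur (a hypothetical stand-in for ImageJ’s smoothing slider – not necessarily the same filter it uses) tames a single noisy pixel that would otherwise become a needle once its intensity is read as height:

```python
import numpy as np

# A lone bright pixel, read as height, is a "needle". A small k x k box
# blur spreads and lowers it before any lighting is applied.

def box_smooth(img, k=3):
    """Simple k x k box blur (edges padded by replication)."""
    out = np.zeros_like(img, dtype=float)
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

noisy = np.zeros((9, 9))
noisy[4, 4] = 9.0                        # one "needle"
smoothed = box_smooth(noisy)             # spike spread over a 3x3 patch
```

The total intensity is conserved, but the peak height drops by a factor of k², which is exactly why the needle forest vanishes while the underlying relief survives.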

Note that we now have some unmistakeable 3D imaging of the fingers, and that it was the result of altering the lighting/smoothing only. No twiddling was necessary with the z control. ImageJ can elicit 3D purely by assuming that increasing image intensity represents height, and modelling the effect of shining light from one side to create shadows and apparent 3D. The software is using the image intensity map alone, i.e. there was no addition of extra z value to produce the result.

So how come my critic can state so categorically that there is no 3D effect, when there are two 3D effects – a minor one from default settings and a more pronounced one from virtual IT-generated lateral lighting, needing no further increase in z? Why is he (yet again) setting himself up as the final authority on matters where there are no a priori grounds for thinking he knows more than I do – and indeed he probably knows a great deal less (this blogger having done dozens of postings based on ImageJ)?

Now let’s finally test the effect of increasing the gain control on height, i.e. the z control. Look carefully at what it does to the internal reference:


The z control produces a dramatic effect on the 2D reference, despite its having no 3D history. The z control might be useful to exaggerate 3D character to make it more apparent, but its action has to be regarded as entirely artificial, favouring as it does a 2D image that has a smooth gradient of image intensity in one or other direction. That’s something the real image of my fingers clearly lacks, given its origin as a simple imprint from a fairly flat region of the anatomy (the back of the hand, excluding the curvature between the fingers that escapes imaging).

Conclusion: my wet flour imprint, given 2nd-stage colour development, shows a very respectable 3D response in ImageJ. One can argue as to whether it’s a “real” 3D response, coming as it does from a 3D template while lacking any obvious mechanism for capturing 3D information. It’s for that reason that this blogger now routinely includes the internal reference as a reminder that 3D rendering must never be assumed to have required either a 3D template, or a relief-sensitive mechanism of image capture, or both.

So was the 3D response one sees above entirely due to promotion of image intensity in step with that of the internal benchmark  reference, or might it have been additionally favoured by the imprinting procedure?  I was starting to address this tricky question when an alternative imprinting procedure came to mind, prompted by another criticism from our perennial critic, namely that the image of my fingers shows “sharp borders”.  Might the technique be modified to address that complaint?  Is the model being faulted less for its science, more for its technology, indeed “arts and crafts” aspects?  That will be addressed in tomorrow’s instalment to this posting  (2nd topic). Thanks to those who have stayed the course so far…   Full marks for perseverance.


About Colin Berry

Retired science bod, previous research interests: phototherapy of neonatal jaundice, membrane influences on microsomal UDP-glucuronyltransferase, defective bilirubin and xenobiotic conjugation and hepatic excretion, dietary fibre and resistant starch.
