Not to derail a different thread, I thought I would pose this question here. I was reading about integration increasing bit depth and dynamic range, and that spawned a question I couldn't find an answer to. Let's say you stack enough lights to "increase" your effective bit depth from, say, 14 bits to 24 bits. Then you apply flats/darks that have only 16 or 18 bits of depth. Wouldn't that be detrimental? Or is it insignificant? Or am I misunderstanding everything? Thanks in advance!
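The "stacking increases bit depth" idea in the question can be demonstrated with a minimal pure-Python simulation (all numbers here are made up for illustration): each single frame is quantized to integer ADU, but averaging many noise-dithered frames recovers a value between the integer levels, i.e. sub-LSB precision.

```python
import random

random.seed(0)
TRUE_SIGNAL = 100.3   # hypothetical flux in ADU; the .3 sits below one LSB
N_FRAMES = 4096       # averaging N frames shrinks the error by ~sqrt(N)

# Each simulated frame: signal plus read noise (sigma ~ 2 ADU),
# then quantized to an integer ADU value as the camera ADC would do.
frames = [round(TRUE_SIGNAL + random.gauss(0, 2.0)) for _ in range(N_FRAMES)]

single = frames[0]                 # one frame: integer ADU only
stacked = sum(frames) / N_FRAMES   # stack mean: fractional precision emerges

print(f"single frame reads: {single}")
print(f"stack mean:         {stacked:.3f}  (true value {TRUE_SIGNAL})")
```

The noise acting as natural dither is essential here: without it, every frame would quantize to the same integer and no amount of stacking would recover the fractional part.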
The stacked darks and flats are applied to each light before the lights are stacked. If you are dithering, the darks' contribution to the light stack gets further randomised as the lights are integrated, so you don't need as long an integration for the dark stacks as for the light stack.
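The order of operations described above can be sketched as follows. This is a toy illustration with hypothetical helper names and 4-pixel "frames", not any particular program's implementation: the master dark and master flat calibrate each light individually, and only the calibrated lights are integrated.

```python
def calibrate(light, master_dark, master_flat):
    """Per-pixel calibration: (light - dark) / flat, in floating point."""
    return [(l - d) / f for l, d, f in zip(light, master_dark, master_flat)]

def integrate(frames):
    """Average the calibrated frames pixel by pixel."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Toy 4-pixel frames (values in ADU)
lights = [[110, 210, 160, 120], [112, 208, 158, 122]]
master_dark = [10, 10, 10, 10]
master_flat = [1.0, 1.0, 0.5, 1.0]   # e.g. a vignetted pixel at index 2

# Each light is calibrated first; only then are the lights integrated.
stack = integrate([calibrate(l, master_dark, master_flat) for l in lights])
print(stack)
```

Because calibration happens per light, any residual error in the masters is carried into every light identically, which is exactly why dithering (shifting the lights between exposures) randomises that residual across pixels in the final stack.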
Sean Mc: First and foremost, lights, darks and flats come from the same device and hence have the same effective bit depth (and practically speaking, none has 16 bits of real dynamic range). Besides that, unless you operate with 16-bit integer arithmetic, and I know of no program that does, the data are internally converted to floating-point math, so there is no numerical loss regardless of the operands' bit depths, as long as the result of the operation is stored in an appropriate format, normally 32-bit float (internally, this can mean using 64-bit float).
Roger Nichol: Makes sense, thx.