Data right out of the sensor is at the ADC resolution, say, unsigned 14-bit, which corresponds to a range between 0 = black and 16383 = maximum saturation (some channels may saturate before that; it's camera-specific). Since computers don't have 14-bit integer types, this data is usually delivered in unsigned 16-bit integers.

At some point that 14-bit data has to be scaled so the camera data's maximum saturation roughly aligns with the data container, but in the meantime the 16-bit container provides about 2 stops of headroom for processing to eat into. That is, if your software uses 16-bit integer as its internal image format. Me, I use floating point in rawproc, where the convention is 0.0 - 1.0, but I scale my raw data to that in the same proportion as it occupies the integer container, so my max camera value in floating point is about 0.25.

Whatever the case, the things done in raw processing can drive the data past its original saturation point, maybe up to and beyond the container max (16-bit: 65535; float: 1.0).

I read about this many years ago and confirmed it with a simple Bash script; it's probably still true: if you re-save 100 times using quality=90, and do another test where you re-save 100 times using a quality that hovers between 91 and 100, you will find that despite the former test using a lower quality, the end result looks better than the latter.
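The scaling arithmetic above can be sketched in a few lines. This is illustrative only, not rawproc's actual code; the function and constant names are made up for the example:

```python
# Sketch of the proportional scaling described above (illustrative, not
# rawproc's actual code). A 14-bit ADC value sits in a 16-bit container;
# dividing by the CONTAINER max (65535) instead of the DATA max (16383)
# preserves the proportion, leaving camera saturation near 0.25 in float.
import math

ADC_MAX = 2**14 - 1        # 16383, camera saturation at the ADC
CONTAINER_MAX = 2**16 - 1  # 65535, unsigned 16-bit container max

def to_float_proportional(raw_value: int) -> float:
    """Scale a raw integer sample to 0.0-1.0, keeping its container proportion."""
    return raw_value / CONTAINER_MAX

saturation = to_float_proportional(ADC_MAX)          # ~0.25
headroom_stops = math.log2(CONTAINER_MAX / ADC_MAX)  # ~2 stops of headroom
print(round(saturation, 4), round(headroom_stops, 2))
```

The ~2 stops of headroom mentioned above falls straight out of the ratio: the container is about four times larger than the data's range, and each doubling is one stop.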
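A minimal pure-Python model hints at why the fixed-quality run can come out ahead: JPEG's loss comes mainly from quantizing DCT coefficients, and re-quantizing with the same step size is idempotent (the values already sit on the grid), while a changing step size re-rounds the data on every pass. The `quantize` helper and step sizes here are illustrative stand-ins, not real JPEG quantization tables:

```python
# Toy model of generational loss: quantization only, no DCT or real JPEG.
# Re-quantizing with the SAME step changes nothing after the first pass;
# alternating step sizes re-rounds every pass, which is the mechanism
# behind the fixed-quality vs. varying-quality observation above.
import random

def quantize(values, step):
    """Round each value to the nearest multiple of `step` (the lossy part)."""
    return [round(v / step) * step for v in values]

random.seed(1)
original = [random.uniform(-100, 100) for _ in range(1000)]

# 100 passes at a fixed "quality" (coarse step = 8): converges immediately.
fixed = original[:]
for _ in range(100):
    fixed = quantize(fixed, 8)

# 100 passes at a varying "quality" (finer steps, 3 through 7).
varied = original[:]
for _ in range(100):
    varied = quantize(varied, random.choice([3, 4, 5, 6, 7]))

def mean_abs_error(result):
    return sum(abs(a - b) for a, b in zip(result, original)) / len(original)

print(f"fixed-step error:  {mean_abs_error(fixed):.2f}")
print(f"varied-step error: {mean_abs_error(varied):.2f}")
```

The fixed-step result is identical to a single save: 99 of the 100 passes are no-ops. The varied-step result keeps moving, even though each individual step is finer than 8, which mirrors what the Bash experiment reportedly showed.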