Sensitivity of snow density and specific surface area measured by microtomography to different image processing algorithms

Microtomography can measure the X-ray attenuation coefficient in a 3-D volume of snow with a spatial resolution of a few microns. In order to extract quantitative characteristics of the microstructure, such as the specific surface area (SSA), from these data, the greyscale image first needs to be segmented into a binary image of ice and air. Different numerical algorithms can then be used to compute the surface area of the binary image. In this paper, we report on the effect of commonly used segmentation and surface area computation techniques on the evaluation of density and specific surface area. The evaluation is based on a set of 38 X-ray tomographies of different snow samples without impregnation, scanned with an effective voxel size of 10 and 18 μm. We found that different surface area computation methods can induce relative variations of up to 5 % in the density and SSA values. Regarding segmentation, similar results were obtained by sequential and energy-based approaches, provided the associated parameters were correctly chosen. The voxel size also appears to affect the values of density and SSA, but because the images with the higher resolution also show a higher noise level, it was not possible to draw a definitive conclusion about this effect of resolution.

imum boundary surface and which at the same time respects the gray value data in the best possible way. However, none of the mentioned studies were interested in snow. This material exhibits specific features such as a natural tendency, induced by metamorphism, to minimise its surface energy (e.g. Flin et al., 2003; Vetter et al., 2010). Moreover, these former studies tended to focus on properties linked to volumetric material contents, while less attention was paid to the surface area of the segmented object. Hagenmuller et al. (2013) applied an energy-based segmentation method to images of impregnated snow samples, a three-phase material (impregnation product, ice and residual air bubbles). This method is based on the same principles as Bayesian Markov random field segmentation, but the optimisation process is performed differently. It explicitly takes advantage of the fact that the surface energy of snow tends to be minimal, and was shown to be accurate in comparison to a segmentation method based on global thresholding. However, the set of snow microtomographic images used by Hagenmuller et al. (2013) was limited to impregnated samples and to a few different snow types. Moreover, no independent SSA measurements were available to provide a reference or at least a comparison. Here, the flexible energy-based segmentation method was adapted to two-phase images (air-ice) and applied to 38 images on which SSA measurements were conducted with independent instruments. Note that comparisons with these independent SSA measurements are beyond the scope of this paper and will be reported in a synthesis paper of the working group to be published in the present special issue of The Cryosphere.
First, the sampling and X-ray measurement procedures to obtain grayscale images are described. Attention is paid to the fact that the parameters used for binary segmentation also depend on the scanned sample and not only on the X-ray source setup. Second, two different approaches to binary segmentation are presented. The first one, commonly used in the snow science community, consists of a sequence of filters: Gaussian smoothing, global thresholding and morphological filtering.
The second one is based on the minimisation of a segmentation energy. Third, different methods to compute surface area from binary images are presented. Finally, the different methods of binary segmentation and area computation are applied to the microtomographic images and the results are compared to provide an estimation of the scatter in density and SSA measurements due to numerical processing of the grayscale image and area calculation.

Figure 1. The displayed samples have a side length of 3 mm and correspond to a subset of the images analysed in the present study. The grain shape is indicated in brackets below the images, according to the international snow classification (Fierz et al., 2009).

Sampling
Thirteen blocks of apparently homogeneous snow were collected in the field or prepared in a cold laboratory. These blocks span different snow types (decomposing and fragmented snow, rounded grains, faceted crystals and depth hoar, Fig. 1). Smaller specimens were taken out of these blocks to conduct grain size measurements with different instruments. Two snow cylinders of 35 mm radius and 60 mm height and one snow cylinder of 20 mm radius and 60 mm height were extruded from each block to perform microtomographic measurements.

Figure 2. Grayscale image (400² pixels) representing the X-ray attenuation coefficient, and its corresponding grayscale histogram (number of voxels versus intensity, in arbitrary units) with the air and ice peaks indicated. The 2D slice is extracted from image G2-s1. The image exhibits two materials: air (dark gray) and ice (light gray). The contour of ice resulting from binary segmentation is plotted in red. The zoom panel (top right) was enlarged eight times to emphasise the fuzzy transition between air and ice.

X-ray scanning
The grayscale images were obtained with a commercial microcomputer tomograph (Scanco Medical µCT40) operating in a cold room at −15 °C. The X-ray source was set to an energy of 55 keV.
The two samples with a radius of 37 mm were scanned with a nominal resolution of 18 µm, and the smaller sample with a radius of 20 mm was scanned with a nominal resolution of 10 µm. To avoid edge effects, a sub-image of about 1000³ voxels was extracted from each image, which corresponds to a volume of 10³ mm³ for the 10 µm resolution and 18³ mm³ for the 18 µm resolution.
These volumes are larger than the previously established representative elementary volumes for SSA (around 2.5³ mm³, Flin et al. (2011)) and density (around 2.5³ mm³, Coléou et al. (2001)). In the following, the images corresponding to a resolution of 18 µm are identified by the suffixes "s1" and "s2", and those with a resolution of 10 µm by the suffix "10micron". The output of the tomograph is a 3D grayscale image with values encoded as unsigned short integers (16 bits) (Fig. 2). The grayscale value quantifies the X-ray attenuation coefficient.

Image artefacts
As shown in Fig. 2, air and ice can be distinguished by their respective attenuation coefficients, i.e. by their grayscale value or intensity. However, the grayscale distributions in ice and air are not completely disjoint, as there are always pixels (voxels in 3D) that consist of both materials. In addition to this fuzzy transition between air and ice, the image is also noisy, which makes the binary segmentation not straightforward.

The Cryosphere Discuss., doi:10.5194/tc-2015-217, 2016. Manuscript under review for journal The Cryosphere. Published: 25 January 2016. © Author(s) 2016. CC-BY 3.0 License.

Slight differences are also observed between the grayscale distributions of the two images coming from the same snow block and scanned with the same resolution. This may be due to slight variations in the temperature of the X-ray source during successive scans. Hence, it is doubtful whether binary segmentation parameters "optimised" for one image can be used to segment other images, even of the same sample and with the same resolution. It appears necessary to determine the segmentation parameters on each image independently.

Segmentation methods
In this section, two binary segmentation methods are presented: (1) the common method based on global thresholding combined with denoising and morphological filtering, hereafter referred to as sequential filtering, and (2) a method based on the minimisation of a segmentation energy, referred to as energy-based segmentation.

Sequential filtering
Sequential filtering is commonly used to segment grayscale microtomographic images of snow because it is simple, fast, and implemented in packages for several different programming languages.

It consists of a sequence of denoising, global thresholding and post-processing, the input of each step coming from the output of the previous step.
Denoising with a Gaussian filter. Numerous filters exist to remove noise from images, the most common being the Gaussian filter, the median filter, the anisotropic diffusion filter and the total variation filter (Schlüter and Sheppard, 2014). The objective of denoising is to smooth intensity variations in homogeneous zones (characterised by low-intensity gradients) while preserving sharp variations of intensity in the transition between materials (characterised by high-intensity gradients).
In snow science, the most popular denoising filter is the Gaussian filter (e.g. Kerbrat et al., 2008; Lomonaco et al., 2011; Theile et al., 2009; Schleef and Löwe, 2013), which consists of convolving the intensity field I with a Gaussian kernel of zero mean N(0, σ) defined as:

N(0, σ)(x) = (2πσ²)^(−3/2) exp(−‖x‖² / (2σ²)),

with σ the (positive) standard deviation. The support of the Gaussian kernel can be truncated (here to 4σ) to speed up the calculations. This filter is very efficient at smoothing homogeneous zones. However, it fails to preserve sharp features in the image, since it indifferently smooths low-intensity and high-intensity gradient zones, and therefore reduces the effective resolution of the image.
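As an illustration, the truncated Gaussian smoothing described above can be written as a separable convolution in a few lines of NumPy. This is a minimal sketch, not the implementation used in this study, and the function names are ours:

```python
import numpy as np

def gaussian_kernel_1d(sigma, truncate=4.0):
    """1-D Gaussian kernel, truncated at truncate*sigma and normalised to sum to 1."""
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    return kernel / kernel.sum()

def gaussian_smooth(volume, sigma):
    """Separable Gaussian smoothing of a 3-D grayscale volume.

    The 3-D convolution with N(0, sigma) factorises into three successive
    1-D convolutions, one along each axis.
    """
    out = volume.astype(float)
    kernel = gaussian_kernel_1d(sigma)
    for axis in range(out.ndim):
        out = np.apply_along_axis(np.convolve, axis, out, kernel, mode="same")
    return out
```

In practice, a library routine such as `scipy.ndimage.gaussian_filter` (which also exposes the truncation radius via its `truncate` parameter) would typically be used instead of hand-written convolutions.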

Global thresholding. After the denoising step, a global threshold T is determined for the entire image in order to classify voxels as air or ice depending on whether their grayscale value is smaller or greater, respectively, than the threshold. The choice of this threshold is generally based on the grayscale histogram, without considering the spatial distribution of grayscale values. Different methods exist, an exhaustive review of which can be found in Sezgin and Sankur (2004). Here the focus is limited to methods commonly used for snow, namely: (1) local minimum, (2) Otsu's method, and (3) mixture modelling.
1. Local minimum: A simple way to determine the threshold is to define it as the local minimum in the valley between the attenuation peaks of ice and air (Fig. 3) (e.g. Heggli et al., 2009; Pinzer et al., 2012). However, the histogram may be noisy, resulting in several local maxima and minima, which makes the method inapplicable. In some cases, the attenuation peaks of ice and air can also be too close, which results in a unimodal histogram without any valley (e.g. Kerbrat et al., 2008). Moreover, the position of the local minimum is generally affected by the height of the attenuation peaks in the histogram: the less ice in the image, the closer the local minimum is to the ice attenuation peak. The threshold obtained with this method is denoted T_valley in the following.
2. Otsu's method: Another popular method, first introduced by Otsu (1975), is to find the threshold that minimises the intra-class variance σ_w defined as σ_w² = n_air σ_air² + n_ice σ_ice², with n_air and n_ice the numbers of voxels classified as air and ice, respectively, and σ_air and σ_ice the standard deviations of the grayscale values in each segmented class. This method is generic and does not require any assumption on the grayscale distribution. However, this also represents a drawback of the method, in that knowledge of the origin of the image artefacts can help to find the optimal threshold. The threshold obtained with this method is denoted T_otsu in the following. This method is less used in the snow community but is widely used for other porous materials (e.g. Haussener, 2010; Ebner et al., 2015).

3. Mixture modelling: This approach determines the threshold by fitting the grayscale histogram with a mixture of distributions. Several variants exist:

- The grayscale histogram computed on the image masked on high-intensity gradient zones can usually be perfectly decomposed into two Gaussian distributions centred on µ_air and µ_ice, respectively, and with identical standard deviations σ (Fig. 4a). Masking the high-intensity gradients makes it possible to suppress the fuzzy transition zones between ice and air. Therefore, on the corresponding histogram, the scatter around the attenuation peaks can be attributed to instrument noise only, and appears to be Gaussian distributed. The optimal threshold value derived from this method is T_mask = (µ_ice + µ_air)/2. Note that the ratio Q_noise = σ/(µ_ice − µ_air) provides a quantitative estimate of the quality of the grayscale images with regard to noise artefacts. In practice, however, masking the grayscale image on high-intensity gradient zones is time-consuming and not straightforward with existing segmentation software. Moreover, in the case of very thin ice structures, homogeneous ice zones are almost nonexistent.

- Kerbrat et al. (2008) directly fitted the sum of two Gaussian distributions to the complete grayscale histogram (Fig. 4b). Note that the partial volume effect at the transition between materials changes the position of the attenuation peaks, and the agreement between the fit and the histogram remains partial in comparison to the fit on the histogram of the masked image. The threshold, defined as the mean of the centres of the two Gaussian distributions obtained with this fit, is denoted T_kerbrat.
- Hagenmuller et al. (2013) fitted the grayscale histogram with the sum of three Gaussian distributions to take into account the fuzzy transition between materials. The fitting function, applied to the normalised histogram, is

f(I) = λ_air N(0, σ)(I − µ_air) + λ_ice N(0, σ)(I − µ_ice) + (1 − λ_air − λ_ice) N(0, σ̄)(I − μ̄),

where λ_air, λ_ice, µ_air, µ_ice and σ are five adjustable parameters, μ̄ = (µ_air + µ_ice)/2, and σ̄ = (µ_ice − µ_air)/4. The first two terms of the sum model the grayscale distribution in low-intensity gradient zones, while the last term models the grayscale distribution in high-intensity gradient zones. The choice of the σ̄-value is arbitrary but was found to provide a good fit to the transition zone. The agreement of this model with the grayscale histogram is generally very good, although no additional free parameter is added in comparison to the two-Gaussian model (Fig. 4c). The optimal threshold value derived from this method is T_hagen = (µ_ice + µ_air)/2.

Post-processing. In general, the binary segmented image needs to be further corrected to remove remaining artefacts. This can be done manually for each 2D section, but it is extremely time-consuming (Flin et al., 2003). The continuity of the ice matrix can also be used to correct the binary image by deleting ice zones not connected to the main structure or to the edges of the image (Hagenmuller et al., 2013; Schleef et al., 2014; Calonne et al., 2014). Among generic and automatic post-processing methods, the morphological operators erosion and dilation are the most popular. The combination of these operators makes it possible to delete small holes in the ice matrix (closing: dilation then erosion) or small protuberances on the ice surface (opening: erosion then dilation). In the following, the support size of these morphological filters is denoted d.
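For concreteness, Otsu's criterion listed above (method 2) can be sketched in NumPy by scanning the histogram for the threshold that maximises the between-class variance, which is equivalent to minimising the intra-class variance. This is a generic sketch, not the code used in this study:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Threshold minimising the intra-class variance
    n_air*sigma_air^2 + n_ice*sigma_ice^2 of the grayscale histogram
    (equivalently, maximising the between-class variance)."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    hist = hist.astype(float)
    w0 = np.cumsum(hist)                     # voxels at or below each bin
    w1 = w0[-1] - w0                         # voxels above each bin
    cum_mean = np.cumsum(hist * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)   # mean of the lower class
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[np.argmax(between)]
```

On a bimodal histogram with well-separated air and ice peaks, the returned threshold lies between the two peaks; its exact position shifts with the relative heights of the peaks, which is the behaviour discussed above.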

Energy-based segmentation
Energy-based segmentation methods consist of finding the optimal segmentation by minimising a prescribed energy function. These methods are robust and flexible, since the best segmentation is automatically found by the optimisation process and the energy function can incorporate various segmentation criteria. In general, the optimisation of functions composed of billions of variables can be complex and time-consuming. However, provided that the variables are binary and that the energy function satisfies some additional restrictions on its form, efficient global optimisation methods exist. In particular, functions that involve only pair interactions can be globally optimised in a very efficient way with the graph cut method (Kolmogorov and Zabih, 2004).

The energy function E used in the present work is composed of two components: a data fidelity term E_v and a spatial regularisation term E_s. The definition of E is similar to that proposed by Hagenmuller et al. (2013) for the binary segmentation of impregnated snow samples (air/ice/impregnation product), except that the data fidelity term is, here, adapted to the processing of air/ice images. This term assigns penalties for classifying a voxel into ice or air, according to its local grayscale value. Qualitatively, assigning to air a voxel with a grayscale value close to the attenuation peak of ice "costs more" than assigning it to ice. Quantitatively, we define E_v as follows:

E_v(L) = v Σ_i [(1 − L_i) P_0(I_i) + L_i P_1(I_i)],

where L_i is the segmentation label (0 for air, 1 for ice) for voxel i, I_i is its grayscale value, P_0 is the proximity function to air, and P_1 is the proximity function to ice. This energy is scaled by the volume v of one voxel. The proximity functions quantify how close a grayscale value is to the corresponding material. They are defined from the three-Gaussian fit (Eq. 2) adjusted on the grayscale histogram; their expressions involve the constant e = exp(1) (Fig. 5).
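To make the structure of E concrete, the sketch below evaluates the two terms for a candidate labelling, with the surface area S(L) approximated by counting exposed voxel faces. The proximity functions here are hypothetical quadratic placeholders standing in for the fit-derived P_0 and P_1, and the actual minimisation is not shown:

```python
import numpy as np

def interface_area(labels, vox=1.0):
    """Approximate S(L) by counting ice-air voxel faces inside the volume."""
    lab = labels.astype(np.int8)
    faces = sum(np.count_nonzero(np.diff(lab, axis=a)) for a in range(lab.ndim))
    return faces * vox**2

def total_energy(labels, gray, p_air, p_ice, r, vox=1.0):
    """E(L) = E_v(L) + r * S(L): data fidelity plus surface regularisation."""
    penalties = np.where(labels == 1, p_ice(gray), p_air(gray))
    e_v = vox**3 * penalties.sum()
    return e_v + r * interface_area(labels, vox)
```

In practice, the global minimum of such a pairwise energy is found with the graph cut construction of Kolmogorov and Zabih (2004) rather than by direct evaluation of candidate labellings.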
The spatial regularisation term E_s(L) is defined as r · S(L), with S(L) the surface area of the segmented object L and r (r ≥ 0) a tunable parameter with the dimension of a length. Accounting for this regularisation term in the energy leads to penalising large interface areas. The parameter r assigns a relative weight to the surface area term in the total energy function E, and can be interpreted as the minimum radius of protuberances preserved on the segmented object (Hagenmuller et al., 2013). This regularisation term minimising the ice/air interface is of particular interest for materials such as snow, where metamorphism naturally tends to reduce the surface and grain boundary energy. Such processes are known to be particularly effective on snow types resulting from isothermal metamorphism. For other snow types, such as precipitation particles, faceted crystals or depth hoar, the surface regularisation term is expected to perform well in recovering the facet shapes, but may induce some rounding at facet edges.

Surface area computation

Different approaches can be used to compute the surface area from 3D binary images: the stereological approach (e.g. Torquato, 2002), the marching cubes approach (e.g. Hildebrand et al., 1999) and the voxel projection approach (Flin et al., 2005). These approaches were shown to provide globally similar results, but each possesses its own inherent drawbacks: the stereological approach does not handle anisotropic structures properly, the marching cubes approach tends to overestimate the surface, and the voxel projection method is highly sensitive to image resolution. In the present work, in order to estimate whether variations of SSA due to different surface area computation approaches are significant compared to the effect of binary segmentation, we tested three different methods to quantify the surface area: the stereological approach, the marching cubes approach and the Crofton approach.
We did not evaluate the voxel projection method (Flin et al., 2005) because its implementation is sophisticated and the required computation of high-quality normal vectors is excessively time-consuming if used only for surface area computation.

Model-based stereological approach
Stereological methods derive higher-dimensional geometrical properties, such as density or SSA, from lower-dimensional data. The key idea is to count the intersections of the reference material with points or lines. Prior to the development of X-ray tomography, so-called model-based methods were used. These methods assume certain geometric properties of the object being studied, such as the isotropy of the material (Edens and Brown, 1995). They are now replaced by design-based methods, which do not require any prior information on the studied object but require a denser sampling of it (Baddeley and Vedel Jensen, 2005; Matzl and Schneebeli, 2010). Here, intersections were counted along three orthogonal directions of the image grid, corresponding to physical surface sections of the sample. This corresponds to a model-based stereological method, since isotropy of the sample is assumed; we call it "stereological" in the following.
In addition, we used the mathematical formalism provided by the Cauchy-Crofton formula, which explicitly relates the area of a surface to the number of its intersections with straight lines (Boykov and Kolmogorov, 2003). Instead of using only three orthogonal directions for the straight lines, we used 13 and 49 directions, and the improved approximations based on Voronoi diagrams proposed by Danek and Matula (2011). This method comes close to a design-based stereological method, as the volume (and directions) are almost exhaustively sampled. We refer to this area computation method as the Crofton approach.
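The line-intercept principle behind these estimates can be sketched as follows: under the isotropy assumption, the surface area per unit volume is S_V = 2 P_L, with P_L the number of air-ice boundary crossings per unit length of test line. The sketch below uses only test lines along the three grid axes (our illustration, not the implementation evaluated in the paper):

```python
import numpy as np

def surface_area_stereo(binary, vox=1.0):
    """Model-based stereological surface area estimate, S_V = 2 * P_L.

    P_L is the mean number of phase changes per unit length of test line,
    averaged over test lines along the three orthogonal grid axes
    (isotropy of the structure is assumed).
    """
    b = binary.astype(np.int8)
    crossings = sum(np.count_nonzero(np.diff(b, axis=a)) for a in range(b.ndim))
    line_length = b.ndim * b.size * vox     # total test-line length
    s_v = 2.0 * crossings / line_length     # surface area per unit volume
    return s_v * b.size * vox**3            # total surface area
```

For a digitised sphere, this estimate recovers the analytical area 4πR² to within a few percent, whereas naively summing exposed voxel faces would overestimate it by a factor of 3/2. The SSA then follows as the surface area divided by the ice mass (ice density times segmented ice volume).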

Marching cubes approach
The marching cubes approach consists of extracting a polygonal mesh of an isosurface from a three-dimensional scalar field. Summing the area contributions of all polygons constituting the mesh provides the surface area of the whole image. We used an in-house version of the algorithm developed by Lorensen and Cline (1987). It computes the area of the 0.5-isosurface of the binary image without any further processing of the image. Our version of the algorithm is adapted to compute only the surface area, without saving all the mesh elements that are required for 3D visualisation.
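To give the flavour of the algorithm without reproducing the 256-case 3-D lookup table, the sketch below implements the 2-D analogue (marching squares) for a binary image contoured at level 0.5. The case lengths are ours and the code is illustrative, not the in-house implementation:

```python
import math
import numpy as np

HALF_DIAG = math.sqrt(2.0) / 2.0   # length of one corner-cutting segment

def cell_contour_length(tl, tr, bl, br):
    """Contour length of the 0.5-isoline inside one 2x2 cell of corner values."""
    n = int(tl) + int(tr) + int(bl) + int(br)
    if n in (0, 4):                 # cell entirely inside or outside
        return 0.0
    if n in (1, 3):                 # one corner cut off
        return HALF_DIAG
    if (tl and br) or (tr and bl):  # saddle: two opposite corners cut
        return 2.0 * HALF_DIAG
    return 1.0                      # two adjacent corners: straight edge

def perimeter_marching_squares(img):
    """Total 0.5-isoline length of a binary 2-D image (2-D marching cubes)."""
    a = np.asarray(img, dtype=np.uint8)
    total = 0.0
    for i in range(a.shape[0] - 1):
        for j in range(a.shape[1] - 1):
            total += cell_contour_length(a[i, j], a[i, j + 1],
                                         a[i + 1, j], a[i + 1, j + 1])
    return total
```

For a 3x3 block of ones, the method returns 8 + 2√2 ≈ 10.83 instead of the 12 of the voxelised boundary, illustrating how the polygonal mesh deviates from the voxel surface; in 3-D, triangle areas replace segment lengths.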

Results
In this section, the methods to compute the area of the ice-air interface are evaluated first, since this evaluation can be performed on reference objects whose area is theoretically known, without accounting for the interplay with the binary segmentation method. The Crofton approach, which is shown to perform best, is selected for the rest of the study. The sensitivity of density and SSA to the parameters of the sequential filtering and energy-based segmentations on the entire set of snow images is then investigated. Finally, the variability of SSA due to numerical processing is compared to the variability of SSA due to snow spatial heterogeneity and scanning resolution.

The different surface area computation methods were then evaluated on the entire set of snow images segmented with the energy-based method (r = 1). According to the results obtained on the spheroid, the Crofton approach with 13 directions was chosen as a reference. As shown in Fig. 7, the SSA obtained with the direction-averaged stereological method is in excellent agreement with the value provided by the Crofton method. The results of the marching cubes method are in fair agreement but show a systematic over-estimation of the SSA (+6 % average relative deviation).

Surface area estimation
In summary, all presented area computation methods showed consistent results. The Crofton approach showed the best accuracy on an artificial anisotropic structure whose surface area is theoretically known. The stereological approach is negatively affected by a strong anisotropy of the imaged structure. However, on the tested snow images, the structural anisotropy is low and this method is in excellent agreement with the Crofton approach. The simple marching cubes approach presented here (without additional filtering or pre-smoothing of the binary image) overestimates the specific surface area by about 5 %. For the following analysis of the sensitivity to binary segmentation, the SSA is computed via the Crofton approach with 13 directions.

Sequential filtering
The binary image resulting from the sequential filtering approach depends on: (1) the standard deviation σ of the Gaussian filter, (2) the threshold value T, and (3) the size d of the post-processing morphological filters (opening/closing). As shown in Fig. 8, both SSA and density are sensitive to these segmentation parameters. The relation between SSA and density, on the one hand, and σ and d, on the other hand, depends significantly on the chosen threshold. Thus, in the following, we first examine the choice of the threshold.

Choice of threshold
The threshold T_mask obtained with the two-Gaussian fit of the grayscale histogram computed on the low-intensity gradient zones is chosen as a reference, since this value is not affected by the fuzzy transition artefact. This reference threshold ranges between 38800 and 39500 for the different scanned images (Fig. 9). The mean values of the attenuation peaks of air and ice are µ_air = 35800 and µ_ice = 42600, respectively. Hence, the variations of the reference threshold value remain small compared to the contrast between the two attenuation peaks, µ_ice − µ_air = 6800. However, these variations clearly indicate, once again, that a unique threshold value cannot be used for all images.

These variations could be explained by slight variations in the X-ray source energy level due to slight temperature changes, or to deviations from the Beer-Lambert attenuation law depending on the total ice content of the sample.
As shown in Fig. 9, the computed threshold depends significantly on the determination method.
These variations in turn affect the density extracted from the binary image (Fig. 10a). Note that the scatter in density due to the choice of the threshold remains the same even if a Gaussian filter is applied to the grayscale image before thresholding (Fig. 10a). The SSA values are also affected by the threshold determination method, but to a smaller extent, since the threshold value tends to affect density and total surface area in the same proportion (Fig. 10b). The variation of SSA due to smoothing is much more important than that due to the choice of the threshold (Fig. 10b).

Gaussian filtering
The sensitivity of density and surface area to the standard deviation σ of the Gaussian smoothing kernel is shown in Fig. 11. The segmentation was performed with the threshold T_hagen derived with the method of Hagenmuller et al. (2013).
Depending on the sample, density varies in the range [−8, +2] % (compared to the value obtained without smoothing) when σ is increased from 0 to 20 µm (Fig. 11a). Density appears to be insensitive to σ when σ is much lower than the voxel size. For larger values of σ, an average decrease of density with σ is observed, due to the fact that the snow structure is generally convex and smoothing tends to erode convex zones. A slight increase of density with σ is observed for σ > 5 µm for samples M1-1, M1-3 and M2-2. These samples are the most faceted snow samples, exhibiting a large proportion of flat surfaces (Fig. 1), which explains the different variation of density with σ. Systematic differences can also be noted between the images with a resolution of 10 µm and 18 µm. At a resolution of 10 µm, a fast decrease of density is observed for σ in the range [3, 6] µm. This regime is absent at a resolution of 18 µm. For larger values of σ, the evolution of density is then similar for the two resolutions, and depends on the snow type. This difference is attributable to a stronger noise in the 10 µm images, which results in local grayscale variations that are generally smoothed out when σ > 6 µm.
The computed surface area significantly decreases when σ increases (Fig. 11b).

Morphological filtering. Without prior smoothing of the grayscale image, thresholding yields many small details in the binary image, which enhances the effect of the morphological filters (Fig. 12b). When the image has already been smoothed by Gaussian filtering, the morphological filters have less effect on the overall density and specific surface area. However, note that using a Gaussian filter of standard deviation σ does not guarantee the complete absence of details "smaller" than σ. Certain algorithms based on the binary images, such as grain segmentation (e.g. Theile and Schneebeli, 2011; Hagenmuller et al., 2014), are highly sensitive to the presence of residual artefacts in the ice matrix and require the use of these additional morphological filters.
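The opening and closing filters discussed here can be sketched in NumPy with a separable cube structuring element. This is a minimal sketch with simplistic border handling, not the implementation used in this study:

```python
import numpy as np

def dilate(binary, d=1):
    """Binary dilation with a cube structuring element of half-size d."""
    out = np.asarray(binary, dtype=bool)
    for axis in range(out.ndim):
        pad = [(d, d) if a == axis else (0, 0) for a in range(out.ndim)]
        padded = np.pad(out, pad, constant_values=False)
        acc = np.zeros_like(out)
        for off in range(2 * d + 1):        # 1-D max over the window
            sl = [slice(None)] * out.ndim
            sl[axis] = slice(off, off + out.shape[axis])
            acc |= padded[tuple(sl)]
        out = acc
    return out

def erode(binary, d=1):
    """Binary erosion, the dual of dilation."""
    return ~dilate(~np.asarray(binary, dtype=bool), d)

def opening(binary, d=1):
    """Erosion then dilation: removes protuberances smaller than ~d voxels."""
    return dilate(erode(binary, d), d)

def closing(binary, d=1):
    """Dilation then erosion: fills holes smaller than ~d voxels."""
    return erode(dilate(binary, d), d)
```

Applied to a segmented volume, `opening` deletes isolated noise voxels while leaving a compact ice block unchanged, and `closing` fills an isolated hole inside the ice matrix, which is the behaviour exploited in the post-processing step.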

Energy-based approach
The binary image resulting from the energy-based approach depends on the parameter r, which controls the smoothness of the segmented object. The other parameters involved in the volumetric term E_v of the segmentation energy are directly derived from the three-Gaussian mixture model (see Sect. 2.2.1).

As shown in Fig. 13a, the density of the segmented object varies only slightly with r. On the 10 µm images, the evolution of density with r is not monotonic, but relative variations remain limited to the range [−4, +1] %. On the 18 µm images, density clearly decreases when r increases. This higher sensitivity of density to r on the 18 µm images can be explained by the fact that the fuzzy transition appears to be larger than on the 10 µm images, which leads to a higher indetermination of the exact position of the interface between ice and air in this moderate intensity gradient zone (Fig. 14).

As shown in Fig. 13b, the surface area is more sensitive to r than the density, and decreases significantly when r increases. Two regimes can be distinguished. For low values of r, in the range [0, 10] µm, the surface area decreases rapidly when r increases. For larger values of r, in the range [10, 20] µm, the decrease of the surface area with r is much slower, and displays an almost constant slope. As discussed by Hagenmuller et al. (2013), this second regime is due to real details of the snow structure being progressively smoothed out, and is indicative of a continuum of sizes in the structural details of the snow microstructure. The distinction between the two regimes is more pronounced on the 10 µm images, which are more affected by noise (Fig. 14).

The sensitivity of the surface area to the parameters σ and r is similar, but the energy-based and sequential filtering approaches are conceptually different (Fig. 11 and 13). The Gaussian filter smooths high-frequency intensity variations with a small amplitude, independently of the subsequent binary thresholding. Small details due to noise artefacts remaining in the binary image are then deleted, independently of the initial grayscale values, by applying morphological filters. The energy-based approach smooths the segmented object so that the ice-air interface area is minimised while respecting at best the grayscale intensity model. Hence, grayscale smoothing and morphological filtering are, in a sense, done simultaneously with thresholding in the energy-based approach. In addition, the Gaussian filter is "grid-limited": as shown in Fig. 11b, it has little effect for σ much smaller than the voxel size. In contrast, smoothing of the ice-air interface occurs even for very low values of r (Fig. 13b), because voxels with a grayscale value close to the threshold between ice and air can be segmented as air or ice without much change in the data fidelity term E_v, but with a clear change in the surface term E_s. The parameter r defines the minimum equivalent spherical radius of the details preserved in the segmented image, whereas σ does not directly correspond to the size of the smallest detail.
Figure 15 shows the density and specific surface area computed on the entire set of snow images segmented with the sequential filtering approach (σ = 1.0 voxel, T = T_hagen, d = 1.0 voxel) and the energy-based approach (r = 1.0 voxel). The "smoothing" parameters (σ and r) were chosen equal to 1.0 voxel, since this value roughly corresponds to the transition beyond which the computed surface area starts to vary slowly with σ and r (Fig. 11b, 13b), and therefore provides the segmentations that best preserve the smallest snow details while deleting most noise-induced protuberances. As already pointed out, this transition is clear on the 10 µm images, but less evident on the 18 µm images. To be consistent, however, and to ensure that all noise artefacts are smoothed out, values of r and σ of 1 voxel were used in all cases.
It is observed that the two approaches generally produce similar results in terms of density (root mean square deviation between the two segmentation methods of 6 kg m⁻³) and specific surface area (root mean square deviation of 0.7 m² kg⁻¹). The largest differences are observed for the snow types presenting the highest SSA. In general, the density provided by the sequential filtering is slightly larger than that computed with the energy-based approach. The opposite holds for SSA.
Scatter can be observed even between the density and SSA derived from images coming from the same snow block, probably due to the existence of spatial heterogeneities within the blocks and to differences in image quality. The averages of the standard deviations calculated for each snow block are 10.7 kg m⁻³ and 1.1 m² kg⁻¹ for density and SSA, respectively (calculated with the energy-based approach). This intra-block variability nevertheless appears to be limited compared to the inter-block variability (46.5 kg m⁻³ for density and 4.7 m² kg⁻¹ for SSA, see Fig. 15), and is of the same order as the variability due to the image processing technique (see above).
Lastly, systematically larger density and SSA values are found on the images scanned with a 10 µm resolution, compared to the images with an 18 µm resolution. It could be argued that this difference is due to a better imaging of small details with a lower voxel size. However, as already noticed, the 10 µm images also present stronger noise artefacts (Fig. 14), and it is difficult to assess whether the effective resolution of these images is, in practice, finer than that of the 18 µm images. Note that the root mean square difference between the density (respectively SSA) computed on the images s1 and s2 is 3.8 kg m⁻³ (respectively 0.15 m² kg⁻¹), which is much lower than the intra-block variability (including the 10 µm images). This observation indicates that the hardware setup of the tomograph, and the subsequent image quality or resolution, can significantly affect the measured density and SSA.

Conclusions and discussions
We investigated the effect of numerical processing of microtomographic images on density and specific surface area derived from these data. To this end, a set of 38 X-ray attenuation images of non-impregnated snow were analysed with different numerical methods to segment the grayscale images and to compute the surface area on the resulting binary images.
The segmentation step is not straightforward because the grayscale images present noise and blur.
It is shown that noise artefacts can significantly affect the computed SSA, and that the fuzzy transition between ice and air can have a strong impact on the computed density.

The sequential filtering approach critically depends on the threshold used to separate ice and air.
The grayscale histogram computed on low-intensity-gradient zones presents two disjoint attenuation peaks, whose characteristics are not affected by blur. The threshold derived from this method was used as a reference to evaluate other methods based on the analysis of the grayscale histogram of the entire image. The mixture models, which consist in decomposing the histogram into a sum of Gaussian distributions, are shown to be accurate. In contrast, the local minimum method is shown to be unsuitable in general.
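A minimal sketch of such a mixture-model threshold is given below: two Gaussians are fitted to the grayscale histogram and the threshold is taken where the two fitted components have equal density between the peaks. The function names and the initial-guess heuristics are our assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian components (air peak and ice peak)."""
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

def mixture_threshold(grey, bins=256):
    """Fit a two-Gaussian mixture to the grayscale histogram and return
    the intensity between the peaks where the components intersect."""
    hist, edges = np.histogram(grey.ravel(), bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # crude initial guess: peaks near the 25th/75th percentiles (assumption)
    p0 = [hist.max(), np.percentile(grey, 25), grey.std() / 2,
          hist.max(), np.percentile(grey, 75), grey.std() / 2]
    (a1, m1, s1, a2, m2, s2), _ = curve_fit(two_gaussians, centers, hist,
                                            p0=p0, maxfev=20000)
    if m1 > m2:  # order the components: component 1 = low attenuation (air)
        a1, m1, s1, a2, m2, s2 = a2, m2, s2, a1, m1, s1
    diff = lambda v: (a1 * np.exp(-0.5 * ((v - m1) / s1) ** 2)
                      - a2 * np.exp(-0.5 * ((v - m2) / s2) ** 2))
    return brentq(diff, m1, m2)  # sign change lies between the two peaks
```

On a clearly bimodal histogram this recovers a threshold between the two attenuation peaks; on blurred or noisy histograms the quality of the fit should be checked visually, as the paper does.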
Smoothing induced by the Gaussian and morphological filters in the sequential approach, or by accounting for the surface area term in the energy-based method, efficiently removes noise artefacts from the segmented binary image.

Figure 15. Specific surface area as a function of density for the entire set of images and the two different binary segmentation methods. The sequential filtering ("Sequ." in the legend) was applied with σ = 1.0 voxel, T = T_hagen and d = 1.0 voxel. The energy-based approach ("Ener." in the legend) was applied with r = 1.0 voxel. The surface area was computed with the Crofton approach.

The Cryosphere Discuss., doi:10.5194/tc-2015-217, 2016. Manuscript under review for journal The Cryosphere. Published: 25 January 2016. © Author(s) 2016. CC-BY 3.0 License.

Morphological filters applied on the binary image in the sequential approach do not use the initial gray value information. However, it seems that their effect is negligible if the applied Gaussian filter is strong enough. The smoothing can also induce the disappearance of real structural details contributing to the overall SSA. The transition between smoothing of noise and smoothing of real details can be well estimated on the curve showing the evolution of SSA as a function of σ or r. However, due to the influence of noise, it remains difficult to assess the potential contribution to the SSA of structural details of size smaller than the voxel size. It has previously been shown that the SSA measured with the gas adsorption technique, which has a molecular resolution, is in good agreement with the SSA measured with microtomography for aged natural snow (Kerbrat et al., 2008). This observation corroborates the idea that the surface of aged snow is smooth up to a scale of about tens of microns, and that if smaller structures are present they do not contribute significantly to the overall SSA (Kerbrat et al., 2008). To further investigate this issue on recent snow, and to disregard any additional influence of the measuring technique except the resolution, the use of new tomographic systems with very high resolutions of about 1 µm would be necessary.
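The curve of SSA against σ can be probed numerically. The helper below is our own sketch: it segments a grayscale volume for a range of Gaussian widths and returns the corresponding SSA, using a simple stereological surface estimate with test lines along the three axes (the function name and parameters are assumptions, not the authors' code).

```python
import numpy as np
from scipy import ndimage

def ssa_of_sigma(grey, sigmas, threshold, voxel_size, rho_ice=917.0):
    """SSA (m^2 kg^-1) of the thresholded image for each Gaussian width σ.

    The surface is estimated stereologically: S = (2/3) * I * h^2, where I
    is the number of ice-air crossings along test lines in x, y and z."""
    out = []
    for s in sigmas:
        ice = ndimage.gaussian_filter(np.asarray(grey, float), s) > threshold
        b = ice.astype(np.int8)
        crossings = sum(np.abs(np.diff(b, axis=ax)).sum() for ax in range(3))
        area = (2.0 / 3.0) * crossings * voxel_size ** 2   # m^2
        mass = ice.sum() * voxel_size ** 3 * rho_ice       # kg of ice
        out.append(area / mass)
    return np.array(out)
```

On a noisy synthetic sphere the curve decreases steeply while noise specks are removed, then flattens once only real structure remains, which is the transition used in the paper to choose σ = 1 voxel.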
The formalism of the energy-based segmentation would make it possible to add more advanced criteria to the segmentation process, such as the maximisation of the grayscale gradient at the segmented interface (Boykov and Jolly, 2001), the minimisation of the curvature of the segmented object (El-Zehiry and Grady, 2010), or the spatial continuity in time-series of 3-D images (Wolz et al., 2010). In this study, only criteria on the local grayscale value and on the surface area of the segmented object were used.
The advantage of this method is that the parameter r formally defines an effective resolution of the segmented image. In contrast, the standard deviation σ of the Gaussian smoothing kernel in the sequential approach does not explicitly define the smallest structural detail in the segmented image.
In practice, however, both methods provide very similar results on the tested images in terms of density and SSA, provided appropriate parameters are chosen.
Comparison between the presented area computation methods showed similar results when applied to a synthetic image or to the set of snow images. On the synthetic image (oblate spheroid), the Crofton approach computes the surface area with the highest accuracy (error less than 2% for sufficiently large spheroids), whereas the stereological approach is negatively affected by strong anisotropy of the imaged structure, and the unfiltered marching cubes approach overestimates the specific surface on the order of 5%. Stereological methods using more complex test lines, such as cycloids, can compensate for the effect of anisotropy if the snow sample exhibits isotropy in a certain plane, which is often the case for the stratified snowpack (Matzl and Schneebeli, 2010). However, on the tested snow images, the surface anisotropy is low and the stereological method is in excellent agreement with the Crofton approach. The unfiltered marching cubes approach still overestimates the specific surface on the order of 5%. Note that methods have been developed to overcome this overestimation problem of the marching cubes approach, such as the use of gray levels or smoothing (Flin et al., 2005). However, these methods may create other artefacts depending on the image considered, such as systematic underestimation of the surface (Flin et al., 2005), and were not evaluated here.
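As an illustration of test-line counting, a minimal stereological estimator (our own sketch, with test lines along the three axes only, using the isotropic relation S_V = 2 P_L) could read:

```python
import numpy as np

def surface_area_testlines(ice_mask, voxel_size):
    """Stereological surface estimate with test lines along the x, y, z axes.

    From S_V = 2 * P_L (crossings per unit test-line length), with lines
    along the three axes the estimate reduces to S = (2/3) * I * h^2,
    where I is the total number of ice-air crossings and h the voxel size."""
    b = ice_mask.astype(np.int8)
    crossings = sum(np.abs(np.diff(b, axis=ax)).sum() for ax in range(3))
    return (2.0 / 3.0) * crossings * voxel_size ** 2
```

For a digital sphere (an isotropic surface) this lands within about 1% of the analytic area 4πR²; for strongly anisotropic structures, axis-aligned lines bias the estimate, which is exactly the limitation discussed above.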
The comparison of the sequential filtering and energy-based methods shows that density and SSA can be estimated from X-ray tomography images with a "numerical" variability of the same order as the variability due to spatial heterogeneities within one snow layer and to different hardware setups.

Recommendations
A few recommendations to derive density and SSA from micro-tomographic data are summarized below:

- Surface area computation. The unfiltered marching cubes approach systematically overestimates the surface area and should thus be avoided. Counting intersections with test lines of different orientations (at least along the three axes x, y and z) provides an efficient way to compute the surface area and properly accounts for structural anisotropy.
- Threshold determination. The value of the threshold depends on the tomograph configuration but also potentially on the scanned sample. A constant value for a time-series does not necessarily prevent density deviations due to beam hardening. Visual inspection of the