10:30 Lise Laurin, Bruce Vigon, Bachmann Till, Christof Koffler, Philipp Preiss, Barclay Satterfield, Serge Genest, Ben Amor, Gabriel Blejman and Jane Bare

Life Cycle Assessment Capacity Roadmap Section 1: Decision-Making Support using LCA
SPEAKER: Lise Laurin

ABSTRACT. A group of Life Cycle Assessment (LCA) practitioners has been working on a roadmap for capacity development in LCA. The Roadmap is envisioned to identify common needs for development in LCA, which can then be addressed by the academic, vendor, and broader LCA community. The first section of the roadmap, concerning decision-making support, has been completed and has undergone a public comment period. The resulting consensus document is available for download and outlines the current state as well as areas needing work and milestones to ensure progress continues apace. Other roadmapping groups are forming and are looking for practitioners to support the effort. Often, LCA results do not show a clear and certain environmental preference for one choice over another. When this happens, current methods are limited in their ability to inform decision makers. The roadmap document covers five main areas of development. The first area concerns performance measures of confidence, which identify the acceptable uncertainty for study results, minimizing expenditures. The second area concerns selection of impact categories, an area where there are already methods in use; the roadmap suggests that these should be codified and their applicability to various applications identified. Normalization is addressed next. While several methods of normalization are in use, the method with the greatest acceptance in the LCA community has a number of drawbacks, including data gaps in the emissions references (Heijungs et al. 2007), a lack of consensus on how the data are compiled (Bare et al. 2006), a lack of uncertainty information (Lautier et al. 2010), and spatial and temporal variability (Finnveden et al. 2009; Bare and Gloria 2006).

This area is followed by weighting, which is a form of Multicriteria Decision Analysis (MCDA). The broader MCDA field can enrich LCA by providing well-studied methods of assessing tradeoffs. The last area deals with visualization of results. Many other LCA capacity needs would benefit from documentation. These include, but are not limited to, addressing ill-characterized uncertainty, Life Cycle Inventory data needs, data format needs, and tool capabilities. Groups are being formed to address several of these topics and new members are welcome.
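The normalization and weighting steps discussed above can be sketched as a simple single-score calculation. This is a minimal illustration only: the impact values, reference totals, and weights below are invented placeholders, not figures from the roadmap or any recognized LCIA method.

```python
# Minimal sketch of external normalization followed by MCDA-style weighting.
# All numbers are illustrative placeholders (assumed, not from the roadmap).
impacts = {"GWP": 12.0, "acidification": 0.05}        # study results, arbitrary units
references = {"GWP": 8000.0, "acidification": 90.0}   # e.g. per-capita annual totals (assumed)
weights = {"GWP": 0.6, "acidification": 0.4}          # value-based weights (assumed)

# Normalize each category against its reference, then collapse to one score.
normalized = {k: impacts[k] / references[k] for k in impacts}
single_score = sum(weights[k] * normalized[k] for k in normalized)
print(f"single score: {single_score:.6f}")
```

The weighted sum is the simplest MCDA aggregation; the drawbacks listed above (data gaps, lack of uncertainty information) apply directly to the `references` values used here.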

10:45 Jeremy Gregory, Xin Xu, Maggie Wildnauer and Randolph Kirchain

A comparison of methodologies for quantifying variation in life cycle inventories: the case of US portland cement production

ABSTRACT. Variation in life cycle inventories (LCI) is rarely reported, even though inventories are often created by aggregating data from several locations. Despite the importance of data aggregation, the literature contains little formal discussion of the methodological choices involved or their implications.

In this study, two approaches for aggregating LCI data, inventory-level (horizontal) and plant-level (vertical), were implemented in the context of a specific industrial case: portland cement production in North America. We present a rigorous derivation of the methodologies for aggregating LCI data and environmental impacts in order to highlight their similarities and differences. Then we present the LCI framework and data sources used to form the LCI for portland cement using both approaches. Finally, we show results from a single-attribute global warming potential (GWP) life cycle impact assessment (LCIA) of cradle-to-gate portland cement production using a Monte Carlo simulation approach in order to explore the implications of the inventory aggregation methods.

For the case of portland cement, we observe that the mean values of GWP of producing 1 kg portland cement using both approaches are similar for all process types (less than 2% difference), but the standard deviations can be significantly different between the two approaches (3 to 60% difference). This is expected because of differences in the handling of correlation across exchange magnitudes within a facility and zero-inflated data. In the conventional approach to inventory aggregation—inventory-level—it is assumed that exchanges are uncorrelated and zero-inflated data are not explicitly accommodated. Plant-level aggregation, however, enforces correlation among flow magnitudes within the plant and removes the impact of zero-inflation.
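The effect of within-facility correlation on the two aggregation approaches can be demonstrated with synthetic data. This is a sketch under invented assumptions: the plant count, exchange names, magnitudes, and the negative correlation between them are placeholders, not the study's cement data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical plant-level data: two exchanges per facility whose magnitudes
# are negatively correlated within a plant (all values invented).
n_plants = 50
fuel = rng.normal(0.30, 0.05, n_plants)
elec = 0.9 - fuel + rng.normal(0.0, 0.01, n_plants)  # correlated with fuel

n_draws = 100_000

# Inventory-level (horizontal) aggregation: fit each exchange separately
# and sample the exchanges independently, ignoring within-plant correlation.
h_fuel = rng.normal(fuel.mean(), fuel.std(ddof=1), n_draws)
h_elec = rng.normal(elec.mean(), elec.std(ddof=1), n_draws)
horizontal = h_fuel + h_elec

# Plant-level (vertical) aggregation: resample whole plants, preserving
# the correlation between exchange magnitudes within a facility.
idx = rng.integers(0, n_plants, n_draws)
vertical = fuel[idx] + elec[idx]

print(f"means:  horizontal={horizontal.mean():.3f}  vertical={vertical.mean():.3f}")
print(f"stdevs: horizontal={horizontal.std():.3f}  vertical={vertical.std():.3f}")
```

With negatively correlated exchanges, the two means agree but the independent (horizontal) sampling produces a much wider spread, mirroring the similar-means/different-standard-deviations pattern reported above.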

Inventory-level aggregation is needed for LCA because it is the only way to create transparent, unit-process inventories that are both modular and flexible. Nevertheless, there is a clear deficiency in the implementation of inventory-level aggregation in that individual exchanges are represented using conventional distributions and exchange magnitudes are assumed independent.

New inventory studies should analyze inventory data for the existence of significant correlation and zero-inflated data that reflect operational subpopulations. In some cases, this may lead to the development of separate LCI datasets. Ultimately, these improvements in quantitative uncertainty reporting, aggregation methods, and the resulting LCIs are expected to provide more statistically robust LCA conclusions.

11:00 Matthew Jamieson, James Littlefield, Greg Cooney, Joe Marriott and Tim Skone

Using Monte Carlo Simulation to Reduce or Eliminate Uncertainty
SPEAKER: Joe Marriott

ABSTRACT. Two case studies are presented that show how Monte Carlo simulation can reduce uncertainty in LCA results. The first case study is based on NETL’s upstream natural gas model. Parameterized life cycle models provide flexibility in the specification of uncertainty ranges around parameters. However, as the complexity of a model increases, the combined uncertainty of multiple unit processes can span a range that overstates the plausible total uncertainty. Such overstatements are due to the pairing of all best-case parameters (resulting in improbably low bounds on results) and, conversely, the pairing of all worst-case parameters (resulting in improbably high bounds on results). This amplification of uncertainty is problematic even when the uncertainty ranges around individual parameters are carefully selected, for example by discarding outliers or, when enough data points are available, by using interquartile ranges. This case study shows that Monte Carlo simulation (or another sampling method) prevents the pairing of extreme parameters and yields total uncertainty ranges that represent a likelihood instead of a universe of results. For example, before Monte Carlo simulation, the uncertainty in GHG results (in 100-year, AR5 GWPs) is +200%/-20% around the mean; after the application of Monte Carlo analysis, the uncertainty is +33%/-12% around the mean. The second case study demonstrates the ways in which too many parameters can confound the interpretation of results when a different question is being asked, namely picking the “better” scenario. The uncertainty can be reduced by identifying the parameters common to the scenarios and holding those values constant while Monte Carlo simulation is applied to the remaining parameters. While this negatively affects the absolute values generated by the models, it provides a more direct comparison between the scenarios and allows us to focus on the parameters that differentiate the options and identify true opportunities for improvement.
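The bound-amplification effect described in the first case study can be sketched with invented numbers: summing all best-case or all worst-case parameter values produces a wider range than a Monte Carlo interval on the same inputs. The five unit-process ranges below are placeholders, not NETL data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GHG contributions (kg CO2-eq) of five unit processes, each
# with an independent low/high range; all numbers are illustrative only.
lows  = np.array([1.0, 0.5, 2.0, 0.8, 1.2])
highs = np.array([3.0, 1.5, 4.0, 2.0, 2.8])

# Pairing all best cases or all worst cases overstates the total range:
bound_low, bound_high = lows.sum(), highs.sum()

# Monte Carlo sampling each parameter independently yields a
# likelihood-based range instead of a universe of results:
draws = rng.uniform(lows, highs, size=(100_000, len(lows))).sum(axis=1)
mc_low, mc_high = np.percentile(draws, [2.5, 97.5])

print(f"all-best/all-worst bounds: {bound_low:.2f} to {bound_high:.2f}")
print(f"Monte Carlo 95% interval:  {mc_low:.2f} to {mc_high:.2f}")
```

Because it is improbable that every parameter sits at its extreme simultaneously, the 95% Monte Carlo interval is strictly inside the all-best/all-worst bounds.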

11:15 Stephanie Muller, Pascal Lesage and Réjean Samson

LCI uncertainty modelling through the pedigree approach: uncertainty factors derivation based on a broad data assessment

ABSTRACT. The widely used LCI database ecoinvent models uncertainty on data through a semi-quantitative approach: the pedigree approach. It combines two types of uncertainty: the basic uncertainty (intrinsic variability) and the additional uncertainty (variability due to the use of imperfect data), the latter determined using a so-called “pedigree matrix”. In the first releases of the ecoinvent database, the figures used to model uncertainty were determined by expert judgment. In 2013, Ciroth et al. defined new uncertainty factors (for the additional uncertainty) based on empirical considerations [1]. The work presented here goes further by assessing a larger number of data sources (almost 80 sources containing more than 20,000 LCI data points) and by improving the data assessment framework to derive basic and additional uncertainty factors by type of flow. The data assessment, which is based on classification techniques that allow uncertainty factors to be obtained by type of flow or by industrial sector, will be presented. The uncertainty figures obtained for both the basic and additional uncertainties will also be presented. Preliminary results show that additional uncertainty factors depend on both the assessed industrial sector and the type of flow. For example, for the pedigree criterion “Further technological correlation” and a pedigree score equal to 5, the uncertainty factor obtained when considering the agriculture sector is 1.61, while the figure is 2.02 when the electricity generation sector is assessed. Preliminary results also show that the currently used uncertainty factors tend to underestimate the uncertainty. This work will make it possible to quantify uncertainty, particularly for data used in background processes, in a more reliable way. This better and more detailed foundation for uncertainty figures will support LCA-based decision making by improving trust in the results.
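As a rough illustration of how basic and additional factors combine in the pedigree approach, the sketch below uses the classic ecoinvent aggregation formula, treating each factor as a squared geometric standard deviation of a lognormal distribution. The basic uncertainty factor of 1.05 is an assumed placeholder; 1.61 is the sector-specific additional factor quoted in the abstract.

```python
import math

def combined_gsd2(basic, additional):
    """Combine a basic uncertainty factor with additional (pedigree)
    uncertainty factors into a total squared geometric standard deviation,
    per the classic ecoinvent formula: variances of the underlying normal
    distributions add, so the log-squared factors are summed."""
    factors = [basic, *additional]
    return math.exp(math.sqrt(sum(math.log(u) ** 2 for u in factors)))

# Illustrative combination: an assumed basic factor of 1.05 plus the
# abstract's additional factor of 1.61 ("Further technological
# correlation", score 5, agriculture sector).
total = combined_gsd2(1.05, [1.61])
print(f"total GSD^2 = {total:.3f}")
```

Because the factors are combined in log-squared space, one dominant factor (here 1.61) largely sets the total, which is why underestimated individual factors translate directly into underestimated overall uncertainty.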

1. Ciroth, A., et al., Empirically based uncertainty factors for the pedigree matrix in ecoinvent. The International Journal of Life Cycle Assessment, 2013, doi: 10.1007/s11367-013-0670-5