LOCATION: 2311
08:30 Susan A. Csiszar, David Meyer, Jane Bare, Peter Egeghy, Paul Price, Kelly Scanlon, Cecilia Tan, Kent Thomas and Daniel Vallero

Chemical Exposure and Toxicity Estimation Methods within Human Health Impact Assessment

Affiliations: 1 ORISE Research Participant hosted at U.S. Environmental Protection Agency; 2 U.S. Environmental Protection Agency, National Risk Management Research Laboratory; 3 U.S. Environmental Protection Agency, National Exposure Research Laboratory; 4 AAAS Science & Technology Policy Fellow hosted at U.S. Environmental Protection Agency

ABSTRACT. Within Life Cycle Impact Assessment (LCIA), characterization factors (CFs) for chemically-mediated human health (HH) impacts combine exposure and toxicity. In the widely used USEtox LCIA framework, exposure is characterized via modeled intake fractions, which focus on far-field chemical releases, and toxicity is characterized via effect factors, which have been derived from experimental toxicity studies [1]. This approach is thus limited to chemicals without near-field releases and with empirically derived toxicity factors, and it is not applicable to chemicals with little or no effect data available. This poses a challenge for incorporating HH impacts into LCIA, especially during the product design phase when focusing on newly developed chemicals or chemical alternatives. We explore incorporating advances in near-field exposure modeling and chemical toxicity estimation techniques into HH impact assessment.
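
To make the structure of these CFs concrete, the following minimal Python sketch expresses a CF as the product of an intake fraction and an effect factor; the numerical values are hypothetical placeholders, not USEtox outputs.

    # Minimal sketch of the CF structure described above: a human health
    # characterization factor combines an exposure term (intake fraction)
    # with a toxicity term (effect factor). Values are hypothetical.

    def characterization_factor(intake_fraction, effect_factor):
        """CF [cases/kg emitted] = iF [kg intake/kg emitted] * EF [cases/kg intake]."""
        return intake_fraction * effect_factor

    # Hypothetical chemical: 1e-6 of each emitted kg is eventually taken in,
    # with an effect factor of 0.05 cases per kg taken in.
    cf = characterization_factor(intake_fraction=1e-6, effect_factor=0.05)
    print(f"CF = {cf:.2e} cases per kg emitted")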

In recent years, the EPA developed the Toxicity Forecaster (ToxCast) [2] program, which uses high-throughput in vitro assay methods to estimate chemical bioactivity for thousands of chemicals without the use of animals. Computational models can also be used to estimate chemical toxicity, for example using quantitative structure-activity relationships (QSARs). An example is the Toxicity Estimation Software Tool (TEST) [3], also developed at the EPA, which allows rapid chemical toxicity estimation based on chemical structure properties. The bioactivities derived from ToxCast have been combined with pharmacokinetic and exposure modeling to rank and prioritize chemicals for risk assessment, and this type of analysis may lend itself to LCIA techniques, which are comparative in nature [4]. Computer-modeled toxicities, however, may provide the most practical solution for a priori HH effect factor predictions, which could be used with modeled exposure factors to derive CFs.

As exposure is the other key component of HH CFs, we also discuss the inclusion of use-phase near-field sources in LCIA. This requires incorporating advances in rapid near-field exposure modeling and source characterization. We present a decision-tree framework outlining the proposed steps needed to enhance HH CF modeling by incorporating new techniques to rapidly estimate chemical exposure and effects.

References:

1. www.usetox.org/

2. www.epa.gov/ncct/toxcast/

3. www.epa.gov/nrmrl/std/qsar/qsar.html

4. Wetmore et al. 2012, Toxicological Sciences, 125(1), 157–174

08:45 Lise Laurin, Shelly Martin and Warren Boelhouwer

Allocation, a question of perspective

ABSTRACT. Green Carbon has developed a thermal vacuum recovery (TVR) system to recycle on-road and off-road tires and tracks and process them into high carbon steel, carbon black, two grades of fuel feedstocks and a hydrocarbon-rich gas (process gas), which is used to heat the process. Life cycle assessment (LCA) was used to examine the life cycle impacts of the TVR process relative to conventional tire disposal in an incinerator or landfill, as well as to quantify the production of several useful co-products generated in the TVR process.

Attributional LCA was used to analyze the TVR system from multiple vantage points: as an operating process, as a disposal service for waste tires, and as a production method for fuel feedstocks. This comparative LCA study, which underwent an ISO-compliant panel critical review, included the use of midpoint (TRACI) and damage (ReCiPe Endpoint) categories. In addition, Monte Carlo simulations were used to understand the statistical reliability of the results with respect to the data quality, and results were only reported if they were consistent across 95% or more of the comparative simulations.
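
As an illustration of this reporting rule, the following sketch checks whether a comparative difference holds in at least 95% of paired Monte Carlo iterations; the lognormal samples are hypothetical stand-ins for the actual study results.

    import numpy as np

    # Hypothetical paired Monte Carlo scores (e.g. kg CO2-eq per functional unit)
    # for the TVR process and a conventional alternative.
    rng = np.random.default_rng(42)
    tvr = rng.lognormal(mean=np.log(100.0), sigma=0.2, size=10_000)
    conventional = rng.lognormal(mean=np.log(130.0), sigma=0.2, size=10_000)

    # Report the difference only if it points the same way in >= 95% of iterations.
    share_tvr_lower = np.mean(tvr < conventional)
    if share_tvr_lower >= 0.95 or share_tvr_lower <= 0.05:
        print(f"Reportable difference ({share_tvr_lower:.1%} of runs favor TVR)")
    else:
        print("Not consistent across 95% of the comparative simulations; not reported")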

In the TVR process, several co-products or services are provided: disposal of used tires, production of light fuel feedstock, production of heavy fuel feedstock, production of carbon black and production of steel. Depending upon the functional unit, the other products and/or services are modeled through system boundary expansion. To put this another way, to determine the impact of our functional unit, we subtract the impacts of the co-products as they are currently produced in the market, i.e., we offset their production. When considering TVR as a disposal service, traditional production of heavy fuel feedstock, light fuel feedstock, carbon black and steel is avoided; that is, their impacts are subtracted from the total impact of the TVR process (Figure 1). In this way, we are modeling TVR for its service of disposing of tires.

Figure 1: System expansion for evaluating impacts of disposing of tires and/or tracks

When considering TVR as a production process, the co-products are again avoided using system expansion, and the remaining impacts are then allocated between the heavy and light fuel feedstocks based on energy content (see Figure 2). In this case, the production of the fuel feedstocks also yields carbon black and steel wire and disposes of tires, so the impacts of producing carbon black and steel wire and of landfilling tires are subtracted from the TVR process to obtain the impacts of fuel production.

Figure 2: System expansion for evaluating impacts of producing of fuel feedstock
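
The two perspectives can be summarized in a short sketch; all impact and energy-content numbers below are hypothetical placeholders rather than values from the study.

    # Hypothetical gross impact of the TVR process and avoided burdens of the
    # co-products as conventionally produced (e.g. kg CO2-eq per batch).
    tvr_gross = 1000.0
    avoided = {
        "heavy fuel feedstock": 250.0,
        "light fuel feedstock": 200.0,
        "carbon black": 150.0,
        "steel wire": 100.0,
    }

    # TVR as a disposal service: all four co-products are credited (subtracted).
    disposal_net = tvr_gross - sum(avoided.values())

    # TVR as fuel production: carbon black, steel wire and conventional tire
    # disposal (hypothetical landfill figure) are credited, and the remainder is
    # allocated between the two fuel feedstocks by energy content.
    landfill_disposal = 50.0
    fuel_net = (tvr_gross - avoided["carbon black"]
                - avoided["steel wire"] - landfill_disposal)
    energy_mj = {"heavy fuel feedstock": 600.0, "light fuel feedstock": 400.0}
    total_mj = sum(energy_mj.values())
    fuel_allocated = {k: fuel_net * mj / total_mj for k, mj in energy_mj.items()}

    print(disposal_net)    # net impact of the disposal-service perspective
    print(fuel_allocated)  # impacts allocated to each fuel feedstock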

Another allocation question arises when considering the biogenic source of some of the tire carbon. Between 45% and 80% of tire rubber comes from natural latex. Further, approximately half of a tire’s organic matter is rubber, which would indicate that between 23% and 40% of the carbon going into the oils is bio-based. Considering the bio-based content of the original tire, there is also a bio-based component to these fuels, as confirmed by the carbon testing data.
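
The stated range follows directly from the two fractions, as this short check illustrates:

    # Back-of-envelope check of the bio-based carbon range stated above:
    # 45-80% of the rubber is natural latex, and roughly half of the tire's
    # organic matter is rubber, giving about 23-40% bio-based carbon in the oils.
    latex_share = (0.45, 0.80)
    rubber_share_of_organics = 0.5
    bio_carbon = tuple(s * rubber_share_of_organics for s in latex_share)
    print(bio_carbon)  # (0.225, 0.4)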

We also evaluated the TVR process as a production process for its four co-products and compared it with conventional production. The results for ozone depletion and global warming using TRACI 2.1 are shown in Figure 3 below.

Figure 3: Comparative contribution of processing 15,000 lbs of scrap tires via the TVR process without co-products vs. conventional production of the co-products, using TRACI 2.1

LCA practitioners are sometimes given the impression that there is only one way to allocate. In this study, we show that by allocating differently, we can answer different questions. This study answered the question of what the environmental impacts of the TVR process are for different stakeholders. Whether a stakeholder is considering the disposal of old tires, fuel feedstock production or carbon black production, the allocation in this study was chosen to answer the appropriate question.

Green Carbon set out to develop the TVR process at the request of its customers and others who needed a responsible way to dispose of their tires. As OTR Wheel Engineering (Green Carbon’s parent company) is a major supplier of off-road tires, supporting its customers at end-of-life is considered good business, as well as good product stewardship. Green Carbon is continuing to refine and improve the TVR process and is currently preparing an addendum to this LCA with updated data.

09:00 Guillaume Bourgault

Monte Carlo sampling in the presence of dependent variables

ABSTRACT. Uncertainty in LCA can be evaluated with the help of Monte Carlo simulation. This method becomes less straightforward to apply in the presence of dependent variables. This presentation introduces the application of the Dirichlet distribution to deal with specific instances of dependent variables in LCA. On the LCI side, dependent variables appear within a unit process that distributes demand for a product among different suppliers (in ecoinvent, a “market” dataset). The shares of the different suppliers have to sum to 1, making the contributions dependent on one another. In LCIA, the sum of the regionalized fate factors should not exceed 1. The first challenge in these situations is to construct distributions that reflect the dependence. The Dirichlet distribution is the multivariate generalization of the beta distribution, which has a lower and an upper bound, making it appropriate for this kind of application. The average of each variable is specified, and then the parameter controlling the variance of the distributions is chosen. The second challenge is to integrate the sampling into the rest of the Monte Carlo simulation. Algorithms are already available in statistical software to ensure that the sum in each iteration remains equal to 1. The sampling for this part of the model has to be generated separately and stored, to be used during the sampling of the whole model. Preliminary results on simplified models show that not respecting the dependence relationship leads to an overestimation of the uncertainty of the result. A more careful application of the Monte Carlo method is therefore more likely to lead to statistically significant differences in LCIA scores and to increase the discriminating power of LCA.
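
A minimal sketch of the proposed sampling step follows, assuming three hypothetical suppliers with average market shares of 0.5, 0.3 and 0.2 and using NumPy's Dirichlet sampler; the concentration parameter is an arbitrary choice here.

    import numpy as np

    # Dirichlet sampling for dependent market shares: each sampled vector sums
    # to 1, so the dependence between supplier contributions is preserved.
    mean_shares = np.array([0.5, 0.3, 0.2])   # specified averages (hypothetical)
    concentration = 50.0                      # larger value -> smaller variance
    alpha = mean_shares * concentration

    rng = np.random.default_rng(0)
    samples = rng.dirichlet(alpha, size=10_000)  # generated separately and stored,
                                                 # then reused in the full Monte Carlo

    print(samples.mean(axis=0))      # close to the specified averages
    print(samples.sum(axis=1)[:5])   # every iteration sums to exactly 1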

09:15 Yohan Marfoq, Pascal Lesage, Urs Schenker and Manuele Margni

Uncertainty analysis in LCA using aggregated datasets: making it possible by accounting for correlations
SPEAKER: Pascal Lesage

ABSTRACT. Aggregated LCI datasets represent cradle-to-gate LCIs (or sometimes even LCIA results) of products. They are notably used in simplified LCA tools because they make calculations simpler and faster. However, the uncertainty analysis techniques found in current LCA software yield nonsensical results when applied to aggregated datasets, unless one accounts for different types of correlation. We distinguish between, and show how to deal with, three types of correlations: 1) Correlations within aggregated datasets. Aggregated datasets are product systems made up of unit processes. A unit process can appear many times (e.g. there are multiple instances of “truck transport”). Sometimes, these different instances in a product system also represent different instances in reality (e.g. the “truck transport” unit processes represent different trips, with different trucks).

Normally, in LCA software, the same parameter value is sampled for all instances of a unit process during Monte Carlo analysis. We propose an algorithm that distinguishes between instances of a unit process and decorrelates these instances during uncertainty analysis. Results using ecoinvent 2.2 show that decorrelated product systems generally have slightly lower uncertainty. 2) Correlations across aggregated datasets in a given product system. Aggregated datasets are usually built using the same basic building blocks (unit processes). Different aggregated datasets used to construct a given product system therefore cannot be considered independent. We propose an algorithm that calculates the covariance between all aggregated datasets produced for a given database. These covariances can then be used in the estimation of system-wide uncertainty using analytical error propagation (via Taylor series expansion).
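
A minimal sketch of the analytical error propagation step for this second type of correlation, using a hypothetical three-dataset covariance matrix rather than values computed from ecoinvent:

    import numpy as np

    # First-order (Taylor series) error propagation for a score built from
    # aggregated datasets: S = sum_i a_i * g_i, so Var(S) = a^T Cov(g) a,
    # where Cov(g) captures the correlations between aggregated datasets
    # that share the same underlying unit processes. Numbers are hypothetical.
    a = np.array([1.0, 2.5, 0.4])            # amounts of each aggregated dataset
    mean_g = np.array([10.0, 3.0, 7.0])      # mean score per unit of each dataset
    cov_g = np.array([[4.0, 1.5, 0.8],
                      [1.5, 1.0, 0.3],
                      [0.8, 0.3, 2.0]])      # covariances between datasets

    score = a @ mean_g
    var_with_correlations = a @ cov_g @ a
    var_assuming_independence = a @ np.diag(np.diag(cov_g)) @ a

    print(score, var_with_correlations, var_assuming_independence)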

Results, again based on ecoinvent v2.2 aggregated datasets, show that correlations across aggregated datasets can significantly influence the calculated uncertainty. 3) Correlations across product systems in a comparative LCA. Aggregated datasets representing a given product in two compared scenarios sometimes represent the same instance (same input, same source) and sometimes not (similar input, different source). We propose a method to tackle the scenario comparison problem in analytical error propagation and apply it to simple examples. The methods presented should help software developers who favor aggregated LCI datasets to enable sensible, system-wide uncertainty analyses.