Numerical modelling of solute transport processes using higher order accurate finite difference schemes. Numerical treatment of flooding and drying in tidal flow simulations and higher order accurate finite difference modelling of the advection diffusion equation for solute transport predictions.

Chen.pdf (28.21 MB)
Publication date
2010-06-23
Author
Chen, Yiping
Keyword
Accuracy
Advection
Diffusion
Finite difference schemes
Hydrodynamics
Mathematical modelling
Solute transport
Stability
Water quality modelling
Dispersion-diffusion
Hydrodynamic modelling
Rights

The University of Bradford theses are licensed under a Creative Commons Licence.
Institution
University of Bradford
Department
Department of Civil Engineering
Awarded
1992
Abstract
The modelling of the processes of advection and dispersion-diffusion is the most crucial factor in solute transport simulations. It is generally appreciated that the first order upwind difference scheme gives rise to excessive numerical diffusion, whereas the conventional second order central difference scheme exhibits severe oscillations for advection dominated transport, especially in regions of high solute gradients or discontinuities. Higher order schemes have therefore become increasingly used for improved accuracy and for reducing grid scale oscillations. Two such schemes are the QUICK (Quadratic Upwind Interpolation for Convective Kinematics) and TOASOD (Third Order Advection Second Order Diffusion) schemes, which are similar in formulation but different in accuracy, with the two schemes being second and third order accurate in space respectively for finite difference models. These two schemes can be written in various finite difference forms for transient solute transport models, with the different representations having different numerical properties and computational efficiencies. Although these two schemes are advectively (or convectively) stable, it has been shown that the originally proposed explicit QUICK and TOASOD schemes become numerically unstable for the case of pure advection. The stability constraints have been established for each scheme representation based upon the von Neumann stability analysis. All the derived schemes have been tested for various initial solute distributions and for a number of continuous discharge cases, with both constant and time varying velocity fields. The 1-D QUICKEST (QUICK with Estimated Streaming Term) scheme is third order accurate both in time and space. It has been shown analytically and numerically that a previously derived quasi 2-D explicit QUICKEST scheme, with a reduced accuracy in time, is unstable for the case of pure advection. 
The modified 2-D explicit QUICKEST, ADI-TOASOD and ADI-QUICK schemes have been developed herein and proved to be numerically stable, with the stability region of each derived 2-D scheme having also been established. All these derived 2-D schemes have been tested in a 2-D domain for various initial solute distributions with both uniform and rotational flow fields. They were further tested for a number of 2-D continuous discharge cases, with the corresponding exact solutions having also been derived herein. All the numerical tests in both the 1-D and 2-D cases were compared with the corresponding exact solutions and the results obtained using various other difference schemes, with the higher order schemes generally producing more accurate predictions, except for the characteristic based schemes which failed to conserve mass for the 2-D rotational flow tests. The ADI-TOASOD scheme has also been applied to two water quality studies in the U.K., simulating nitrate and faecal coliform distributions respectively, with the results showing a marked improvement in comparison with the results obtained by the second order central difference scheme. Details are also given of a refined numerical representation of flooding and drying of tidal flood plains for hydrodynamic modelling, with the results showing considerable improvements in comparison with a number of existing models and in good agreement with the field measured data in a natural harbour study.
Type
Thesis
Qualification name
PhD
Collections
Related items
Showing items related by title, author, creator and subject.
-
Interpreting random forest models using a feature contribution method
Palczewska, Anna Maria; Palczewski, J.; Marchese-Robinson, R.M.; Neagu, Daniel (2013)
-
Hybrid Dynamic Modelling of Engine Emissions on Multi-Physics Simulation Platform. A Framework Combining Dynamic and Statistical Modelling to Develop Surrogate Models of System of Internal Combustion Engine for Emission Modelling
Campean, I. Felician; Neagu, Daniel; Pant, Gaurav (University of Bradford, Faculty of Engineering and Informatics, 2018)
The data-driven models used for the design of powertrain controllers are typically based on data obtained from steady-state experiments. However, they are only valid under stable conditions and do not provide any information on the dynamic behaviour of the system. In order to capture this behaviour, dynamic modelling techniques are intensively studied to generate alternative solutions to the engine mapping and calibration problem, aiming to address the need to increase productivity (reduce development time) and to develop better models of the actual behaviour of the engine under real-world conditions. In this thesis, a dynamic modelling approach is presented for the prediction of NOx emissions for a 2.0 litre Diesel engine, based on a coupled pre-validated virtual Diesel engine model (GT-Suite® 1-D air path model) and in-cylinder combustion model (CMCL® Stochastic Reactor Model Engine Suite). In the context of the considered Engine Simulation Framework, GT-Suite + Stochastic Reactor Model (SRM), one fundamental problem is to establish a real-time stochastic simulation capability. This problem can be addressed by replacing the slow combustion chemistry solver (SRM) with an appropriate NOx surrogate model. The approach taken in this research for the development of this surrogate model was based on a combination of design of dynamic experiments run on the virtual Diesel engine model (GT-Suite), with a dynamic model fitted for the parameters required as input to the SRM, and a zonal design of experiments (DoE), using Optimal Latin Hypercubes (OLH), run on the SRM model.
A response surface model was fitted on the predicted NOx from the SRM OLH DoE data. This surrogate NOx model was then used to replace the computationally expensive SRM simulation, enabling real-time simulations of transient drive cycles to be executed. The performance of the approach was validated on a simulated NEDC drive cycle, against experimental data collected for the engine case study. The capability of the methodology to capture the transient trends of the system shows promising results and will be used for the development of global surrogate prediction models for engine-out emissions.
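The response-surface step of the abstract above can be sketched as follows. This is a minimal illustration, not the thesis code: synthetic stand-in data replaces the SRM DoE outputs, two hypothetical input factors stand in for the real engine parameters, and a full quadratic basis is fitted by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the SRM DoE data: 50 design points over two
# normalised input factors, with a response that is exactly quadratic.
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 0.8 * X[:, 1] ** 2

def quad_design(X):
    # Full quadratic basis in two factors: 1, x1, x2, x1*x2, x1^2, x2^2.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Fit the response surface by ordinary least squares.
coef, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)

# The fitted surface is now a cheap surrogate: evaluating it is a single
# matrix product, in place of an expensive chemistry-solver run.
pred = quad_design(X) @ coef
print(np.abs(pred - y).max())  # near zero: the quadratic is recovered exactly
```

Because the synthetic response lies exactly in the quadratic basis, the least-squares fit recovers it to machine precision; real DoE data would of course leave a residual that measures surrogate adequacy.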
-
Interpretation, Identification and Reuse of Models. Theory and algorithms with applications in predictive toxicology.
Neagu, Daniel; Ridley, Mick J.; Travis, Kim; Palczewska, Anna Maria (University of Bradford, School of Electrical Engineering and Computer Science, 2015-07-15)
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. Results of this thesis are presented in the framework of Quantitative Structural-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures with their biological, chemical or environmental activity. There are many applications that offer an environment to build and store predictive models. Unfortunately, they do not provide advanced functionalities that allow for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas. The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed. Once a collection of validated, accepted and well annotated models is available within a model governance framework, they can be applied to new data.
It may happen that there is more than one model available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from the collection of existing models. The main idea is based on partitioning of the search space into groups and assigning a single model to each group. The construction of this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points for the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology. After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. An interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For non-linear models this information can be hidden inside the model structure. This thesis proposes an approach for the interpretation of a random forest classification model. This approach allows for the determination of the influence (called feature contribution) of each variable on the model prediction for an individual data point. In this part, three methods are proposed that allow analysis of feature contributions. Such analysis might lead to the discovery of new patterns that represent a standard behaviour of the model and allow additional assessment of the model's reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology. The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC.
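The feature-contribution decomposition described in this related item can be sketched for a single regression tree (a hand-built toy tree, not the rfFC implementation: each split attributes the change in node mean value to the feature used at that split, so the prediction equals the root bias plus the per-feature contributions).

```python
# Hypothetical hand-built regression tree: every node stores the mean target
# value of the training samples that reach it; internal nodes also store the
# split feature index and threshold.
tree = {
    "value": 10.0, "feature": 0, "threshold": 5.0,
    "left":  {"value": 4.0, "feature": 1, "threshold": 2.0,
              "left":  {"value": 1.0}, "right": {"value": 7.0}},
    "right": {"value": 16.0},
}

def feature_contributions(tree, x):
    """Decompose the tree's prediction for sample x as
    bias (root mean) + sum of per-feature contributions."""
    contrib = {}
    node = tree
    bias = node["value"]
    while "left" in node:  # descend until a leaf (no children) is reached
        f = node["feature"]
        child = node["left"] if x[f] <= node["threshold"] else node["right"]
        # Attribute the change in node mean to the feature split on here.
        contrib[f] = contrib.get(f, 0.0) + child["value"] - node["value"]
        node = child
    return bias, contrib, node["value"]

bias, contrib, pred = feature_contributions(tree, x=[3.0, 1.0])
# Path: root (10.0) -> left on feature 0 (4.0) -> left on feature 1 (1.0),
# so feature 0 contributes -6.0, feature 1 contributes -3.0.
print(bias, contrib, pred)  # 10.0 {0: -6.0, 1: -3.0} 1.0
```

By construction the identity bias + sum(contributions) == prediction holds exactly; for a forest the same quantities would simply be averaged over the trees.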