Current PhD Projects
As industrial operations grow in scale and demand, industrial systems are becoming far more complex than ever before. As a consequence, traditional maintenance strategies not only fail to prevent downtime, they introduce additional costs through over-maintenance. Predictive maintenance (PdM) aims to address this problem by delivering an optimal maintenance policy: maintenance is delayed until just before the risk of failure becomes critical, while the performance of the equipment remains unaffected. Effective PdM requires prognostics with minimal error and uncertainty. Unfortunately, owing to the limited data available for prognostics in industrial settings, effective implementation of PdM remains a vision rather than a reality for industrial organisations.
Off-site construction operations can be subject to downstream construction site disruptions. These disruptions - such as high wind conditions limiting on-site crane movements - delay on-site construction and reduce the effectiveness of the off-site production of construction modules. Brian researches how different disruption management strategies can be used to improve system performance in the face of disruptions.
Managing Industry 4.0 holistically - taking a Business Model Innovation lens to drive the implementation of digitalisation
The accelerated diffusion of digital technologies challenges leadership teams across all manufacturing industries. The full value of these rapid Industry 4.0 developments can only be captured by holistically orchestrating implementation across a firm and its value network. To achieve this, leadership teams must consider the inter-dependencies between individual activities and projects in key areas of their business, which may be as diverse as the value proposition, operational processes or value capture mechanisms – a complex undertaking. However, it is not yet clear how a leadership team should decide which activities of their business model should be enhanced with which Industry 4.0 pillar, and with what priority.
Mudassar investigates the use of Additive Manufacturing (AM) at the three well-known levels of operational strategy: at the strategic level as an enabler of competitiveness; at the tactical level as a hedge against uncertainty; and at the operational level in achieving smooth production flow.
Manufacturers in developed economies are increasingly relying on services to offset the cost-based competition posed by manufacturers in developing economies. Of these, substitution "pay-per-use" services - where the customer substitutes buying the product with paying for asset performance - depend heavily on reducing the unplanned downtime of the assets. Most manufacturers rely on Remote Monitoring Technologies (RMTs) to predict the Remaining Useful Life (RUL) of the assets; maintenance activities are then planned accordingly. "Collaborative prognostics" is one such RMT, introduced in late 2017, in which assets are represented by agents. These agents identify other similar agents and share data among themselves to improve the RUL predictions for their corresponding assets. While there have been several proof-of-concept publications on collaborative prognostics, the explicit sharing of data hinders its implementation in real-world industries. This is especially true where the assets are expensive, geographically distributed, and owned by different organisations. Moreover, in the published example cases, the communication cost can increase immensely with scale. This PhD project focuses on reducing the minimum bandwidth required for such a network of collaborating assets. It aims to answer the following research question: “How can we reduce the minimum bandwidth required for collaborative prognostics of industrial assets?”
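The core loop described above - each asset's agent finds similar peers and pools their estimates - can be illustrated with a minimal sketch. The class, feature representation, similarity measure and threshold below are all assumptions for illustration, not the project's actual method:

```python
import numpy as np

# Illustrative sketch: each asset agent holds a condition feature vector and a
# local RUL estimate, and refines that estimate by pooling with similar peers.
# The similarity kernel and threshold are assumptions, not the published method.

class AssetAgent:
    def __init__(self, name, features, local_rul):
        self.name = name
        self.features = np.asarray(features, dtype=float)
        self.local_rul = float(local_rul)

    def similarity(self, other):
        # Similarity decays with Euclidean distance between condition features.
        return np.exp(-np.linalg.norm(self.features - other.features))

    def collaborative_rul(self, fleet, threshold=0.5):
        # Weighted average of peer estimates, restricted to similar agents.
        weights, estimates = [1.0], [self.local_rul]
        for peer in fleet:
            if peer is self:
                continue
            s = self.similarity(peer)
            if s >= threshold:
                weights.append(s)
                estimates.append(peer.local_rul)
        return float(np.average(estimates, weights=weights))
```

Note that every call to `collaborative_rul` implies peers transmitting their estimates, which is exactly the bandwidth cost the project seeks to reduce.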
The principal aim of this research project is to enable industrial machines with communication, deep-learning and decision-making capabilities in order to improve failure prediction in a cost-effective way. Current prognostics frameworks usually rely on models that operate in a centralised architecture. However, industrial fleets of machines are highly heterogeneous and dynamic – conditions that favour distributed approaches.
Distributed workload planning and predictive maintenance
As condition-based maintenance, system-wide optimisation, and value-oriented production become the three major directions in the evolution of asset management, maintenance can no longer be considered in isolation from other production activities. Studies have shown that the degradation process of a machine depends on the operation being performed (e.g., higher workload results in faster degradation). However, this intrinsic connection between operation and condition-based maintenance has not been sufficiently addressed in system-wide asset management optimisation problems. Hao’s research is set in a scenario where parallel assets cooperate to achieve production goals. The aim of her research is to develop a model for the integrated optimisation of both load allocation and condition-based maintenance, in order to maximise system-level profit in the long run.
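The coupling between load allocation and degradation can be sketched with a deliberately simple assumed model: two parallel machines whose per-period wear is proportional to the load they carry, where we search for the load split that postpones the first maintenance intervention longest. The linear wear model and function names are illustrative assumptions:

```python
# Minimal sketch (assumed linear degradation model): parallel machines wear
# faster under higher load; we search load splits that meet a demand while
# keeping both machines below a maintenance threshold for as long as possible.

def horizon_before_maintenance(loads, wear_rates, threshold=1.0):
    """Periods until the first machine crosses the wear threshold,
    with per-period wear = base rate * allocated load."""
    rates = [r * l for r, l in zip(wear_rates, loads)]
    return min(threshold / r if r > 0 else float("inf") for r in rates)

def best_split(demand, wear_rates, steps=100):
    # Enumerate two-machine load splits summing to the demand and keep the
    # split with the longest maintenance-free horizon.
    best = None
    for i in range(steps + 1):
        loads = (demand * i / steps, demand * (1 - i / steps))
        h = horizon_before_maintenance(loads, wear_rates)
        if best is None or h > best[1]:
            best = (loads, h)
    return best
```

Even this toy version shows the integrated effect: the optimum shifts load towards the slower-wearing machine rather than splitting demand evenly.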
Supply chains are increasingly global, complex and multi-tiered. Consequently, companies often struggle to maintain complete visibility of their upstream supply network. This poses a problem as visibility of the network is required in order to effectively manage supply chain risk. Pascal is working on automated methods to generate supply chain maps from openly available text sources for the purpose of improving resilience-related decision-making.
Value-driven approaches to portfolio asset management
The problem of portfolio management arises because an asset portfolio – a system of heterogeneous assets – cannot be addressed by existing frameworks for multi-component systems. Owing to their scale and diversity, asset portfolios affect a wider range of stakeholders and involve more criteria than multi-component systems do. In other words, portfolio asset management entails not only direct technical measures (e.g. system reliability and cost) but also indirect functional measures (e.g. production output and service level). Decision-making at the cross-asset level is associated with investment allocation and therefore exerts substantial effects on an organisation's performance and profitability. Hence, drawing on multi-criteria decision analysis and multi-objective optimisation approaches, this PhD thesis aims to establish a value-driven framework that integrates information across diverse assets to offer evidence-based solutions to the problem.
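One elementary building block of multi-criteria decision analysis is a normalised weighted-sum score for ranking options against heterogeneous criteria. The sketch below illustrates that mechanism only; the criteria names, weights and normalisation choice are assumptions, not the thesis framework:

```python
# Hedged sketch of a weighted-sum multi-criteria ranking: each criterion is
# min-max normalised to [0, 1] before weighting, so technical and functional
# measures on different scales can be combined. All inputs are illustrative.

def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def rank_options(options, weights):
    """options: {name: {criterion: raw score}}; weights: {criterion: weight}.
    Returns option names ordered best-first by weighted normalised score."""
    names = list(options)
    scores = {n: 0.0 for n in names}
    for crit, w in weights.items():
        col = normalise([options[n][crit] for n in names])
        for n, v in zip(names, col):
            scores[n] += w * v
    return sorted(names, key=scores.get, reverse=True)
```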
Information management processes within asset management
The importance of information management is gaining momentum within the Engineering Asset Management domain, both in academic literature and in industry applications, guided by an array of industry standards that focus on information management processes across the life-cycle of engineering assets. Most notable is PAS 1192-3, which addresses Building Information Modelling (BIM) and the associated information management processes within the operational phase of an asset. James’ research aims to address the challenge set out in PAS 1192-3 - that organisations should develop asset information requirements - by providing a top-down practical methodology for defining those requirements.
Raising the accessibility of digitalisation for manufacturers
Small and medium-sized manufacturers, in the UK and globally, are often less advanced in their adoption of digital technologies than larger companies. Commonly cited reasons include a lack of digital skills and high investment and operating costs. The aim of this PhD research is to develop a taxonomy of digitalisation solutions for manufacturing environments and a method for identifying suitable projects for a given target company.
Completed PhD Projects
Complex industrial assets such as power transformers are subject to accelerated deterioration when one of their constituent components malfunctions, affecting the condition of other components – a phenomenon called ‘fault propagation’. Zhenglin’s research designs a novel approach to optimising condition-based maintenance policies for such assets by modelling their deterioration as a process with multiple dependent deterioration paths. The aim of the policy is to replace the malfunctioning component and mitigate accelerated deterioration at minimal impact to the business. The maintenance model provides guidance on inspection and maintenance strategies that optimise asset availability and operational cost.
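The fault-propagation effect can be illustrated with a toy simulation, under the simplifying assumption that once any component crosses a fault threshold, every component's degradation rate is multiplied by a fixed acceleration factor. The thresholds, rates and factor below are illustrative, not Zhenglin's model:

```python
# Illustrative fault-propagation simulation (assumed model): linear wear per
# component; once any component exceeds its fault threshold, all components
# degrade faster until the worst one reaches the replacement limit.

def simulate(rates, fault_threshold=1.0, accel=2.0, dt=0.01, t_max=10.0):
    wear = [0.0] * len(rates)
    t, propagated = 0.0, False
    history = []
    while t < t_max and max(wear) < 2 * fault_threshold:
        factor = accel if propagated else 1.0
        wear = [w + r * factor * dt for w, r in zip(wear, rates)]
        # A first fault switches the fleet into the accelerated regime.
        propagated = propagated or any(w >= fault_threshold for w in wear)
        t += dt
        history.append((t, tuple(wear)))
    return history
```

Running it shows why dependence matters: a healthy component ends up far more worn than an independent-deterioration model would predict, which is precisely what a dependent-path maintenance policy must account for.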
Today’s warehouses face new challenges that demand more flexibility than conventional strategies can offer. In this context, Wenrong’s PhD project aims to address the flexibility challenge by investigating how order-picking, a key factor affecting warehouse performance, can be dynamically managed. The order-picking operation is typically constrained by inventory management in the warehouse and transportation management of placed orders. More specifically, the order-picking operation is managed on the basis of three key decisions:
a) When should the orders be picked from the warehouse?
b) Which storage location should the order-picker visit?
c) How should the orders be batched together to form a pick-list?
Having proposed an interventionist routing algorithm to enable the dynamic re-routing of an order-picker during the picking operation, Wenrong’s project now investigates the dynamism that arises from making the three decisions in different sequences. By formulating the problem as a Markov Decision Process (MDP), Wenrong aims to develop a method for making the decisions in an appropriate sequence based on the status of the operation, so as to improve both the flexibility and the efficiency of the order-picking operation.
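The MDP machinery underlying this formulation can be sketched generically: given states, actions, a transition model and rewards, value iteration computes the value of acting optimally from each state. The solver below is standard; the toy state and action names in the usage are assumptions, not Wenrong's actual model of the picking operation:

```python
# Generic value iteration over a finite MDP: the caller supplies the state
# space, per-state actions, a transition distribution and a reward function.
# In the order-picking setting, actions would correspond to which decision
# (timing, location, batching) to commit to next.

def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """transition(s, a) -> {next_state: probability}; reward(s, a, s2) -> float."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

A two-state toy instance (one productive transition, then an absorbing state) is enough to exercise the solver; real picking states would encode picker position, open orders and vehicle arrivals.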
Torben's PhD project addresses data overload and the Value of Information. Currently, most information systems allocate data to a specific user. However, both the amount of data and the number of tasks each user must perform are increasing rapidly. As a result, companies face a massive data-overload problem, and ensuring the right data reaches the right users is difficult. Torben addresses this problem with market-based techniques, which apply the principles of markets in economics. A user has a certain utility (or value) for specific datasets or dataset combinations, while datasets have costs. Markets such as supermarkets combine the two efficiently: consumers have a specific value for certain products, and the supermarket incurs costs in offering them. The market approach works efficiently in various applications and has been shown to perform well on similar resource allocation problems. By applying this approach to data management, Torben hopes to improve users' decision-making by providing them with the right information, and to identify the value of companies' large data holdings.
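The supermarket analogy reduces to a simple rule: deliver a dataset to a user only when that user's utility for it exceeds its cost. The sketch below shows that allocation rule in its most basic per-user form; the data structures and numbers are illustrative assumptions (a real market mechanism would also handle budgets, bundles and competition between users):

```python
# Sketch of the market analogy: each dataset has a provisioning cost and each
# user a private utility per dataset; a dataset is delivered only when the
# utility exceeds the cost, so low-value data never clutters a user's view.

def allocate(utilities, costs):
    """utilities: {user: {dataset: value}}; costs: {dataset: cost}.
    Returns {user: [datasets worth delivering, highest surplus first]}."""
    allocation = {}
    for user, prefs in utilities.items():
        chosen = [d for d, u in prefs.items() if u > costs.get(d, 0.0)]
        # Present the highest-surplus (value minus cost) datasets first.
        chosen.sort(key=lambda d: prefs[d] - costs.get(d, 0.0), reverse=True)
        allocation[user] = chosen
    return allocation
```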
Engineering asset performance, in terms such as reliability and maintainability, directly impacts overall business performance. Asset-intensive manufacturing companies therefore rely heavily on their engineering asset management (EAM) systems to gain core competitive advantages. However, developing effective performance measures for valuable and complicated EAM has always been a challenge for such organisations. Effective performance measures are also required by a number of international standards on engineering asset management, such as ISO 55000 and PAS 55. There are very few existing studies on this topic, and the mainstream approach in the generic performance measurement literature is not the most suitable in the context of organisations' EAM. A crucial reason is that risk control is not included as an essential perspective in its own right, even though EAM relies heavily on the successful management of risks such as asset safety, reliability and many other potential hazards. Furthermore, the complexity and scope of EAM are difficult to model in performance measurement, so leading EAM literature and practical experience should inform the design process in order to capture the full picture of EAM. JQ Wang has proposed frameworks that refine the existing approach to designing performance measures for asset-intensive organisations' EAM. Risk control elements and leading EAM knowledge are factored into the design process to help organisations select their performance measures holistically. JQ will apply three phases of case studies, including a facilitator case study, to validate the research. His pilot case study has shown that the frameworks are usable and feasible for partner organisations to review and improve existing performance measures for their EAM.
Joel’s research addresses the dynamic nature of asset criticality. So far, criticality analysis - a tool for deciding which assets should have priority within a maintenance management programme - has been treated as a static concept both in the literature and in practice. The myth is: “…we have just concluded our criticality analysis; we can now check that box...” But insufficient understanding of the changing nature of criticality has led to misalignment between asset maintenance strategies and the business goals of the organisation over time. Joel is developing a model that combines several multi-criteria decision-making techniques to identify factors that influence changes in criticality. These factors/criteria depend on asset operating condition, business environment, maintenance objectives and the organisation's key performance indicators. The algorithm should detect changes in criticality, connect to the company’s enterprise asset management system to reproduce the analysis automatically, and update criticality accordingly. Through this research, Joel hopes to adjust maintenance programmes to business needs automatically, exploiting the dynamic nature of criticality to generate dynamic CBM strategies.
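The essence of dynamic criticality - re-ranking assets whenever fresh condition or business data arrives, instead of ranking them once - can be shown in miniature. The criteria, weights and asset names below are illustrative assumptions; the actual model combines several multi-criteria techniques:

```python
# Illustrative sketch of dynamic criticality: a weighted score over criteria
# (all assumed normalised to [0, 1]) that is recomputed as data changes, so
# the maintenance priority order is never frozen after a one-off analysis.

def criticality(asset_metrics, weights):
    """Weighted average of criterion scores; both dicts keyed by criterion."""
    total = sum(weights.values())
    return sum(weights[c] * asset_metrics.get(c, 0.0) for c in weights) / total

def reprioritise(fleet, weights):
    """fleet: {asset: {criterion: score}}. Returns assets, most critical first.
    Intended to be re-run whenever the asset management system pushes updates."""
    return sorted(fleet, key=lambda a: criticality(fleet[a], weights),
                  reverse=True)
```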
Probabilistic dependencies in supply networks
My research examines the use of machine learning and statistical techniques on large industrial datasets to deduce complex, probabilistic dependencies between parameters in dynamic, non-deterministic and partially-known environments. There are many possible applications of this research, but my initial aim is to apply this knowledge to the multi-objective optimisation of large and deeply connected supply networks.
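A first, deliberately simple ingredient of dependency deduction is a pairwise screen over the dataset's parameters; richer probabilistic models would then be fitted to the pairs that survive. The linear-correlation screen below is an assumption chosen for illustration, not the research's actual method:

```python
import math

# Minimal sketch: screen parameter pairs for strong linear dependence via
# Pearson correlation before fitting richer probabilistic models. The
# threshold and function names are illustrative assumptions.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0  # a constant column carries no dependence information
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def dependency_screen(columns, threshold=0.8):
    """columns: {parameter: list of samples}. Returns strongly
    correlated (a, b, r) pairs for downstream modelling."""
    names = list(columns)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(columns[a], columns[b])
            if abs(r) >= threshold:
                pairs.append((a, b, r))
    return pairs
```

Note this only detects linear dependence; the non-deterministic, partially-known settings described above would need measures such as mutual information or learned graphical structure.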