Intelligent infrastructure strategy

Chapter 1

Introduction

1.1 Background:

As part of the Intelligent Infrastructure strategy, Network Rail are developing a pilot system on the Edinburgh to Glasgow route, where data from a number of condition monitoring systems (points machines, track circuits, etc.) are being collected and presented using a standard SCADA package. However, the level of analysis undertaken to detect faults and diagnose failures is limited. Additionally, little effort has been focused on how data from different sources can be integrated in order to improve performance.

This project is carried out with the Railway Research Group in the School of Electronic, Electrical and Computer Engineering, which is currently undertaking research with a number of railway companies in order to develop a means of integrating data from various condition monitoring systems to support both short-term (operational) and long-term (strategic) decisions.

1.2 Structure of the report:

The remainder of this chapter discusses the aim and objectives of the project, which define the scope of the work to be achieved.

Chapter two presents the outcome of the broad research carried out in the first term. This includes a description of intelligent infrastructure, condition monitoring as part of intelligent infrastructure, its technologies and applications, as well as the fault detection and diagnosis methods used in the railway industry.

Chapter three describes the design and construction of a simple condition monitoring system, including an introduction to the software used for this purpose, such as LabVIEW and Wonderware. The chapter ends by setting out ideas and plans for the practical work of the next term.

Project management is vital in any engineering project. Careful time planning, cost estimation and risk assessment, including health and safety issues, have therefore been carried out; details of these activities are brought together in chapter four.

Chapter five discusses plans for the next term, and is followed by the conclusions and references. Technical appendices are attached at the end of this report.

1.3 Aims and Objectives:

The aim of this project is first to review the state of the art in railway condition monitoring, looking at general condition monitoring theory in engineering and then specifically at its use in the railway industry.

Thereafter, the expected outcome for the first semester is to develop a means of linking LabVIEW to a standard, widely used industrial SCADA package (Wonderware and its InTouch application) via an OPC server. This will allow sensor information to be logged rapidly into the HMI software so that data can be displayed, monitored and stored.

In the second semester, data collected from different sources (e.g. sensors on assets such as a point machine), or previously collected data (e.g. from a Historian server), will be used to develop and validate appropriate algorithms for fault detection and diagnosis of railway assets at an achievable scale.

This work can then be extended further by including more complex data sets in the SCADA package (e.g. wheel impact load detection systems such as WheelChex), by incorporating 'live' data from the railway network, and by exploring methods for integrating data sets using a railway data model or ontology.

Chapter 2

Literature Review

2.1 Background:

A simple system consists of inputs, outputs and a feedback loop controlling the system's performance, where faults act as unwanted inputs to the system (Figure 1):

Figure 1 - A simple System overview

According to Gertler (1998), faults are defined as a deviation from the normal behavior of the plant or its instrumentation.

Faults are divided into four main categories (Gertler, 1998):

  1. Additive process faults: unknown inputs to the system which are normally zero and, when present, cause a change in the plant outputs independently of the known inputs, e.g. plant leaks.
  2. Multiplicative process faults: changes in some plant parameters. They affect outputs that also depend on the magnitude of the known inputs, e.g. plant deterioration such as power loss.
  3. Sensor faults: differences or inconsistencies between the measured and actual values of individual plant variables.
  4. Actuator faults: differences or inconsistencies between the input command of an actuator and its actual output.
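
To make these categories concrete, the short Python sketch below simulates a trivial first-order plant and shows where each fault class enters the signal chain. It is illustrative only; the plant model, its coefficients and the fault sizes are assumptions, not values taken from any system in this project.

# Illustrative sketch only: a trivial discrete first-order plant used to show
# where each of the four fault classes enters the signal chain.
# All numbers (a, b, fault sizes) are arbitrary assumptions.

def simulate(steps=50, additive=0.0, param_drift=1.0, sensor_bias=0.0, actuator_loss=1.0):
    a, b = 0.9, 0.5          # nominal plant: x(k+1) = a*x(k) + b*u(k)
    x, outputs = 0.0, []
    for _ in range(steps):
        u_cmd = 1.0                      # commanded input
        u_act = actuator_loss * u_cmd    # actuator fault: actual input differs from command
        x = (a * param_drift) * x + b * u_act + additive   # multiplicative + additive process faults
        y_meas = x + sensor_bias         # sensor fault: measurement differs from true value
        outputs.append(y_meas)
    return outputs

healthy = simulate()
leaky   = simulate(additive=0.2)        # e.g. a plant leak (additive process fault)
worn    = simulate(param_drift=0.95)    # e.g. deterioration/power loss (multiplicative fault)
biased  = simulate(sensor_bias=0.3)     # sensor fault
sticky  = simulate(actuator_loss=0.7)   # actuator fault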

In the railway actuator applications, faults have been categorized into three main types (Silmon, 2009):

  1. Abrupt faults: faults which occur suddenly within a system. In a railway actuator, such a fault can occur after a long period of healthy performance.
  2. Intermittent faults: faults which are usually hidden within the system; they typically appear for a few operations and then disappear. These faults are harder to detect and isolate.
  3. Incipient faults: small or slowly developing faults which can be predicted with some accuracy provided the correct parameters of the actuator are observed for every operation. Detecting these faults is desirable for the enhancement of maintenance operations (Patton et al., 1989).

2.2 Condition Monitoring:

2.2.1 Background:

Nowadays, condition monitoring plays an important role in many machine-based technologies and systems. Classic condition monitoring began with observing the condition of an apparatus through the human senses: operators would learn about machine malfunctions by looking for changes in shape, color or size, listening for sounds unusual in strength or pitch, touching to feel heat or vibration, and smelling for fumes from leaks or overheating (Gertler, 1998). The main aim of condition monitoring is to find and monitor faults in a system.

Condition monitoring has developed in parallel with other technologies, such as electronics, and has become almost completely automated.

This section analyzes the methods used in condition monitoring, with specific examples and techniques used in the railway industry.

2.2.2 Condition monitoring system review:

A condition monitoring system consists of several processes. Figure 2 demonstrates the order of the processes in a system (Barron, 1996):

Figure 2 - Condition Monitoring Process

Barron (1996) has divided this practice into two main areas:

  • System set-up and review
  • Routine monitoring, assessment and diagnosis:

In system set-up and review, it is crucial to identify systems which would benefit from CM application. Appropriate systems or equipment will typically have a poor performance record, such as having failed before or raising spurious alarms, or will be systems that require additional reliability and dependability.

Consequently, choosing the method best suited to the specific purpose is essential. FMECA (Failure Mode, Effects and Criticality Analysis), which is currently widely used in industry, helps to understand how a system fails (the failure mode) and to find the causes and the effects these failures have on the system. There are many ways of carrying out FMECA, but what matters here is its outcome, which is simply the answer to how the equipment fails; this can then be fed into the processes shown in Figure 2. This is followed by setting up tolerances (minimum/maximum) for parameters in the system, deciding where and how often to take measurements, collecting baseline readings and setting the alarm levels.

The second area, which is also the more important part, is where the actual data are collected, stored and interpreted. Data can be obtained from local inspections by the operators, local instrumentation, process control systems or portable/permanent monitoring equipment. The gathered data are then assessed by methods such as level checking, trending against time and comparison with other data (Barron, 1996).

Condition monitoring depends heavily on the system itself and/or the organization running it. CM can be a large or complex system, for example at a very sensitive site such as a military or nuclear installation, or it can be small and simple. In either case, the important issues are to assess the data accurately each time a new set of measurements is taken, and to identify and monitor faults within the system.

2.2.3 Remote Condition Monitoring:

Human observation and on-site data collection and analysis have clear limitations, as well as safety and other issues. For this reason, remote condition monitoring and automatic systems are certainly preferable. Remote condition monitoring consists of sensors and a data acquisition device on site, connected via a communication link to a network that sends the data to base stations where they are analyzed. This allows the actuator to be monitored continuously.

According to Silmon (2009):

"...this would improve the reliability of the railways by reducing the number of failures which occur during normal traffic time. There is also potential to improve safety, because wrong-side incipient faults can be detected using the same system..."

2.2.4 CM in Maintenance:

In any engineering industry, including railways, maintenance plays a critical role in operating systems & plants and also accounts for a large proportion of plant operating costs (Rao, 1996). In the railway industry, maintenance is the general day-to-day upkeep of the railway which keeps the trains running, such as looking after tracks, signals & power supply. Also in recent years, expectations of maintenance have evolved considerably (Moubray, 1997) and hence the importance of developing and modernizing the Condition Monitoring techniques. As can be seen in Figure 3, plant availability, reliability, safety and other related factors are essential features of maintenance in industries.

Condition monitoring is a major component of predictive maintenance, and different techniques exist within it for this purpose. A basic technique widely used here is Reliability-Centered Maintenance, often known as RCM, a process to ensure that assets continue to do what their users require in their present operating context (Moubray, 1997). Reliability-centered maintenance is an engineering framework that enables the definition of a complete maintenance regime. Moubray (1997) characterized RCM as a process to establish the safe minimum levels of maintenance, and believes that successful implementation of RCM will lead to increased cost-effectiveness and machine uptime, and to a greater understanding of the level of risk that the organization is presently managing.

The classic maintenance scheme is reactive: maintenance work is carried out only when faults or alarms are flagged within the system, which is clearly not the most efficient practice. This demonstrates the need for a proactive approach based on preventing, predicting and mitigating failures.

RCM2 is a well-known practice, described as the world's leading method for identifying the maintenance and other activities required to sustain reliable performance of physical assets (CAM RCM, n.d.). It has been developed as a means of integrating a proactive maintenance regime with modern technology, and focuses on merging Reliability-Centered Maintenance with Remote Condition Monitoring (Roberts, 2007).

Remote condition monitoring has led to proposed techniques such as intelligent machining and multi-sensor condition monitoring and diagnosis systems capable of identifying machine system defects and their locations, which is essential for unmanned machining. Unattended (or minimally manned) machining would increase capital equipment utilization and thus substantially reduce manufacturing costs (Moubray, 1997).

Roberts (2007) has categorized Railway Condition Monitoring Systems into three main classes:

  • Event Loggers: basic systems which record two-state relay switchings and do not provide any online analysis of data.
  • Data Recorders: systems which gather both analogue and digital data from sensors. They usually have alarms and thresholds built in and, although they do not provide diagnostic capabilities, they do connect to either point-to-point or digital bus communication links using distributed transducers.
  • Condition monitoring systems (fault detection and diagnosis): these systems benefit from "intelligent algorithms", which make them a more advanced version of data recorders. They can detect and diagnose faults with an indication of their level of criticality.

The third category will be discussed further in section 2.4.

2.3 Intelligent Infrastructure:

As mentioned before, making the assets and the overall infrastructure independent of human intervention, at least at stations and to a certain extent, considerably increases site reliability, safety and efficiency. The UK invests £8bn a year in transport infrastructure (Blythe, 2007), and Network Rail has launched an initiative to progressively introduce intelligent infrastructure across the national network. According to the Network Rail website[1], intelligent infrastructure self-monitoring assets (such as Supervisory Control and Data Acquisition systems) will make a major contribution to improving overall business performance and delivering the future world-class railway.

Intelligent infrastructure has played a critical role in mitigating the complexities of transport infrastructures of the past and is needed today to mitigate the complexities of our Digital Age. To do so, intelligent infrastructure for the 21st century must meet a set of six requirements shown in Figure 4 (VeriSign, 2004).

The model in Figure 4 has been adapted from the VeriSign report (2004), which explains the intelligent infrastructure principles further:

  1. Scalability: the infrastructure must be capable of rising to the challenge of accommodating dramatically increasing usage.
  2. Interoperability: it must also be able to mediate between many technologies and protocols. Any useful intelligent infrastructure should enable mediation between different protocols, as well as between multiple providers and devices (chapter 3 demonstrates an example of connecting different devices using different software, which illustrates this principle).
  3. Adaptability: it should be designed to adapt to new developments easily. The assets must be flexible enough to connect and communicate with the latest technologies while keeping their form and functionality.
  4. Availability: an essential principle of any intelligent infrastructure is availability; the availability of a system's essential parts controls its overall availability.
  5. Security: intelligent infrastructure plays a central role in coordination, control and safety, and should be remarkably resistant to physical, logical, network and social engineering attacks.
  6. Visibility: "The intelligent collection, correlation, and interpretation of data in myriad formats from multiple sources are at the heart of any intelligent infrastructure. Intelligent infrastructures must be able to provide visibility into usage, trends, and anomalies throughout an entire network as well as the larger Internet network of which they are a part" (VeriSign, 2004).

2.4 Fault Detection and Diagnosis (FDD)

2.4.1 Background:

As described in section 2.1, a fault is defined as any source of misbehavior in the plant and/or its instrumentation. It was also discussed how condition monitoring provides extremely useful information on the health of a system and adds security and reliability when developed into an intelligent infrastructure.

In this section, FDD and the different methods employed for it as part of a condition monitoring system are analyzed. In addition, algorithm development and other examples within FDD are also reviewed.

2.4.2 FDD system:

The two primary tasks of FDD are, first, to detect whether there has been a malfunction within the system and, second, to isolate the source of the fault, i.e. to locate the affected component and to provide feedback by raising alarms. Together these two tasks are referred to as fault diagnosis. Fault identification is the supplementary stage in which faults are recognized and classified so that their sizes and types can be estimated (Blanke et al., 2003). An FDD system continuously monitors the operation of a system in order to fulfill its main objective: to detect faults as early as possible and to diagnose their causes, so that they can be corrected before additional damage to the system or loss of service occurs. A minimal skeleton of these stages is sketched below.
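
The sketch below is only a structural illustration of the detect/isolate/identify pipeline described above; the thresholds, fault signatures and residual values are invented for the example and are not the algorithm to be developed in the second semester.

# Structural sketch of the detect -> isolate -> identify pipeline.
# Thresholds and fault signatures are invented for illustration.

DETECTION_THRESHOLD = 0.5

# Hypothetical fault signatures: which residuals each fault is expected to excite.
FAULT_SIGNATURES = {
    "sensor_drift":  [1, 0],
    "actuator_loss": [0, 1],
}

def detect(residuals):
    """Fault detection: is any residual outside its tolerance?"""
    return any(abs(r) > DETECTION_THRESHOLD for r in residuals)

def isolate(residuals):
    """Fault isolation: match the pattern of fired residuals to a known signature."""
    fired = [1 if abs(r) > DETECTION_THRESHOLD else 0 for r in residuals]
    matches = [name for name, sig in FAULT_SIGNATURES.items() if sig == fired]
    return matches or ["unknown"]

def identify(residuals):
    """Fault identification: crude size estimate = largest residual magnitude."""
    return max(abs(r) for r in residuals)

residuals = [0.8, 0.1]                  # e.g. produced by a residual generator
if detect(residuals):
    print("fault detected:", isolate(residuals), "estimated size:", identify(residuals))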

2.4.3 Fault Detection & Diagnosis methods:

In general, FDD methods can be divided into two main groups:

  1. Methods that do not utilize a mathematical model of the plant
  2. Methods that use a mathematical model of the plant

This section gives a quick review of the first category. Because the project in the next semester will be heavily involved with the second category (i.e. algorithm and model development), model-based methods are analyzed and discussed in detail in the following sections.

2.4.4 Model-free methods:

This type of FDD system can be listed as follows (adapted from Gertler, 1998):

  • Physical redundancy: this practice has more or less become routine in large industries. "Redundant" systems are installed alongside the running systems so that, in the case of a failure, the idle systems carry out the task.

  • Special sensors: these may be installed explicitly for detection and diagnosis, either for tolerance or limit checking (such as temperature or pressure sensors in hardware) or to measure some fault-indicating physical quantity, such as sound or vibration.

  • Limit-checking: plant measurements are compared by computer to preset limits. There are two levels of limit, the first acting as a pre-warning while the second triggers an emergency reaction; exceeding a threshold indicates the existence of a fault. The method may be extended so that the time-trends of a number of variables can be monitored. (A small sketch combining limit checking with logical reasoning is given after this list.)

  • Spectrum analysis: most system variables exhibit a typical frequency spectrum under normal operating conditions. This can be taken as a reference, and any deviation from it can be an indication of an unhealthy condition.

  • Logical reasoning: this technique is complementary to the other model-free techniques and is used to evaluate the symptoms obtained by the detection hardware/software. It can simply consist of trees of logical rules of the "IF - symptom - AND - symptom - THEN - conclusion" type. The tree continues, each conclusion becoming a symptom in the next rule, until the final conclusion is reached.
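
As a simple illustration of the limit-checking and logical-reasoning ideas above, the Python sketch below checks measurements against a pre-warning and an emergency threshold and then chains two IF-THEN rules. The variable names, limits and rules are assumptions chosen only for the example.

# Minimal sketch of model-free limit checking (two alarm levels) followed by a
# simple IF-symptom-AND-symptom-THEN-conclusion rule chain. Limits are assumed values.

PRE_WARNING_LIMIT = 70.0    # first level: pre-warning
EMERGENCY_LIMIT   = 90.0    # second level: emergency reaction

def limit_check(value):
    if value > EMERGENCY_LIMIT:
        return "emergency"
    if value > PRE_WARNING_LIMIT:
        return "pre-warning"
    return "normal"

def reason(symptoms):
    """Tiny logical-reasoning tree: conclusions become symptoms for later rules."""
    if "temperature pre-warning" in symptoms and "vibration pre-warning" in symptoms:
        symptoms.add("bearing degradation suspected")
    if "bearing degradation suspected" in symptoms and "temperature emergency" in symptoms:
        symptoms.add("shut down and inspect bearing")
    return symptoms

symptoms = set()
for name, value in [("temperature", 75.2), ("vibration", 72.9)]:
    level = limit_check(value)
    if level != "normal":
        symptoms.add(f"{name} {level}")

print(reason(symptoms))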

2.4.5 Model-based FDD algorithms:

Before commencing model-based FDD, it is necessary to examine the literature and theory behind FDD algorithms. This section therefore gives an overall explanation of how the algorithms are developed.

According to Roberts (2007), the FDD problem can be seen in terms of a continuous-variable, continuous-time dynamic system (or asset) which, for an input 'u', produces an output 'y' and is subject to an unknown fault 'f'. Figure 6 shows a railway asset within such a system.

Figure 6 - A Railway Asset with FDD algorithms and fault function (Roberts, 2007)

The input and output pairs (I/O pairs) of the system can be written as (u, y), and the set of all possible pairs of trajectories u and y that may occur for a given plant defines the behaviour B (Blanke et al., 2003).

An example of the basic behavior of a system is shown in Figure 7, where point A represents a single I/O pair that may occur for a given system. When faults are considered, however, a point such as C, where C = (uc, yc), symbolizes a pair that is inconsistent with the system dynamics, hence y ≠ yc (Blanke et al., 2003).

It is important to note that there are sometimes conditions in which the system performance cannot be associated with either the fault-free or the faulty category (such as points A and C); in this unknown condition it becomes impossible to decide whether the system is healthy or faulty.

The system considered in Figure 6 can be connected to a computer-based data acquisition system, so that at discrete time points (Roberts, 2007) the system can be described by equations of the form:

x(k+1) = g(x(k), u(k), f)
y(k) = h(x(k), u(k), f)

Here u and y are the I/Os of the system and x represents its internal state. As can be seen in the formulae above, x(k+1) is a function of the previous state x(k), the previous input u(k), and the faults present in the system at that time. In the same way, the output y(k) is a function of the present state x(k), the present input u(k), and the faults within the system. As a result, the system behavior B becomes a function of the fault f, where f0 denotes the fault-free condition.

As a consequence, the FDD system receives the input and output sequences U and Y over the interval [0, k]:

U(0,k) = (u(0), u(1), ..., u(k)),   Y(0,k) = (y(0), y(1), ..., y(k))

As mentioned earlier in this chapter, fault detection establishes whether there is a fault within a system by examining its process I/O, i.e. U and Y, whereas fault diagnosis has the task of determining and estimating the type and size (severity) of the fault present. Furthermore, fault diagnosis requires a thorough understanding of the set of possible faults, obtained by undertaking a Failure Mode and Effects Analysis (FMEA) (Roberts, 2007).

2.4.6 Model Based FDD:

Model-based methods rely on analytical redundancy by using explicit mathematical models of the monitored process, plant, or system to detect faults and subsequently diagnose their causes. In contrast to physical redundancy, where measurements from multiple sensors are compared to each other (refer to model-free section), with analytical redundancy, sensor measurements are compared to values computed analytically, with other measured variables serving as model inputs (Gertler 1998).

The key distinguishing factor between these techniques is the knowledge used to formulate the diagnostics, which can be based either on a priori knowledge or entirely on the practical, operational behavior of the system (e.g. black-box models).

All model-based FDD approaches therefore use both models and data. However, one main class uses a priori knowledge to specify the model used for identification and evaluation of the residual (the residual, or difference, is explained in detail in the next section); this approach is also called a first-principles model-based system. The other main class develops behavioral models solely from actual measurement data taken from the process itself.

Based on all of this, model-based FDD systems are divided into three main groups:

  • Quantitative model-based methods
  • Qualitative model-based methods
  • Process history-based methods

A detailed, comprehensive representation of model-based systems and their sub-categories can be seen in Figure 8 (Silmon, 2009).

2.4.6.1 Quantitative model-based:

In this method, the parameters needed for the model are first identified and the system inputs are measured. From the identified parameters a quantitative model is produced which, together with the measured inputs, predicts the system's performance and behavior through mathematical analysis. The system is then run and its actual performance is evaluated against the expected or predicted performance. If the results match, the system is running as expected; otherwise the remainder, or residual, may represent faults in the system, and these residuals should be diagnosed and acted upon. According to Silmon (2009):

"The most common functions for residual generation are subtraction of the model signal from the measured signal (to detect additive faults) and division of the measured signal by the model signal (to detect multiplicative faults)."

The consistency of a model-based system can be checked at every time t by determining the difference:

r(t) = y(t) - ŷ(t)

where r(t) is the residual at time t, y(t) is the measured output and ŷ(t) is the output predicted by the fault-free model. In the ideal fault-free condition, r(t) is zero; a non-zero residual indicates a deviation between the measurements and the values predicted by the system model, and hence a possible fault in the system.
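
Following this description, a residual generator can be sketched in a few lines of Python. This is a generic illustration only: the first-order model, its coefficients and the detection threshold are assumptions, not the algorithm to be developed for the point machine.

# Generic residual generation sketch: subtract the model prediction from the
# measurement (additive faults) or divide by it (multiplicative faults).
# The first-order model and the threshold below are assumed for illustration.

def model_output(u_history, a=0.9, b=0.5):
    """Fault-free model prediction y_hat(t) for input history u(0..t)."""
    x = 0.0
    for u in u_history:
        x = a * x + b * u
    return x

def residuals(y_measured, u_history):
    y_hat = model_output(u_history)
    r_add = y_measured - y_hat                              # additive residual
    r_mul = y_measured / y_hat if y_hat else float("inf")   # multiplicative residual
    return r_add, r_mul

r_add, r_mul = residuals(y_measured=5.4, u_history=[1.0] * 20)
FAULT_THRESHOLD = 0.3
print("possible additive fault" if abs(r_add) > FAULT_THRESHOLD else "no additive fault indicated")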

Three methods within the quantitative approach are widely used. The first is the Kalman filter, a recursive estimator: only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. The Kalman filter's innovation has zero mean in the fault-free case and drifts to a non-zero value when there is an inconsistency, indicating a fault in the system. Statistical tests are straightforward to construct because the innovation sequence is white. However, fault isolation is more difficult with these filters, since many "matched filters" are required, one for each fault, and the filter outputs must be compared against the actual observations (Gertler, 1998).
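
A minimal scalar example of innovation-based monitoring with a Kalman filter is sketched below. The state model, noise variances and the 3-sigma test are generic textbook choices, not parameters from any railway asset.

import random

# Minimal scalar Kalman filter used as a fault detector: in the healthy case the
# innovation is zero-mean and white, so a normalized innovation far outside
# +/-3 sigma suggests a fault. Model and noise values are assumed for illustration.

A, H = 1.0, 1.0        # state transition and measurement coefficients (scalar)
Q, R = 1e-4, 1e-2      # process and measurement noise variances

def kalman_monitor(measurements, x0=0.0, p0=1.0, n_sigma=3.0):
    x, p = x0, p0
    alarms = []
    for k, z in enumerate(measurements):
        x_pred = A * x
        p_pred = A * p * A + Q
        innovation = z - H * x_pred
        s = H * p_pred * H + R                 # innovation variance
        if abs(innovation) > n_sigma * s ** 0.5:
            alarms.append(k)                   # statistically inconsistent measurement
        k_gain = p_pred * H / s
        x = x_pred + k_gain * innovation
        p = (1.0 - k_gain * H) * p_pred
    return alarms

healthy = [random.gauss(0.0, R ** 0.5) for _ in range(200)]
faulty = healthy[:100] + [z + 1.0 for z in healthy[100:]]   # additive sensor fault at k=100
print(kalman_monitor(faulty))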

The second method, parameter estimation, is important for several reasons and for various simulation purposes. Certain machine parameters, such as resistance, are monitored continuously, and any change in them can signify the existence of a malfunction. In modern systems, acquiring the required system parameters has been made easier by automating the measurement process using online signal processing (Vas, 1993).
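
As a toy example of the parameter-estimation idea, the sketch below estimates a winding resistance from voltage/current samples by least squares and flags a drift from the nominal value; the data, the nominal resistance and the tolerance are invented for the example.

# Toy parameter-estimation sketch: estimate R from V = R * I samples by least
# squares and flag a drift from the nominal value. Data and limits are invented.

def estimate_resistance(voltages, currents):
    num = sum(v * i for v, i in zip(voltages, currents))
    den = sum(i * i for i in currents)
    return num / den

NOMINAL_R = 100.0        # ohms (assumed nominal value)
TOLERANCE = 0.05         # 5 % drift allowed before flagging

currents = [0.01, 0.02, 0.03, 0.04, 0.05]
voltages = [1.08, 2.13, 3.22, 4.30, 5.39]      # slightly high: resistance has drifted

r_hat = estimate_resistance(voltages, currents)
drift = abs(r_hat - NOMINAL_R) / NOMINAL_R
print(f"estimated R = {r_hat:.1f} ohm, drift = {drift:.1%}",
      "-> possible fault" if drift > TOLERANCE else "-> healthy")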

Many other techniques are used in this area, such as parity relations and diagnostic observers.

2.4.6.2 Qualitative model-based:

As explained above, in model-based approaches the system's behavior and output values are modeled from the measured inputs (e.g. pressure, temperature) and model parameters (e.g. heat transfer coefficients), and compared with the measured performance or output of the system. The key difference between qualitative and quantitative models is that the quantitative approach produces exact numerical results, whereas the qualitative method produces state-related conclusions: for instance, instead of outputting a number between 0 and 100, it outputs states such as Excellent, Very Healthy, Normal, Critical or Emergency. As can be seen in Figure 10, all the measurements (system inputs and performance) have to be quantized (block 4) before being used in other parts of the system.

In qualitative models, inaccuracy is not a significant issue; they can be used where precise quantitative observations are impossible to obtain. As a consequence, however, incipient faults will not be detected as quickly or as accurately as with the quantitative method (Silmon, 2009).

In qualitative models, the main principle is the set of qualitative relationships derived from information and knowledge of the underlying physics. This method can be divided into two main sub-categories, both of which employ causal knowledge of the process or system to diagnose faults (Katipamula & Brambley, 2005):

  • Rule-based (expert systems)
  • Models based on qualitative physics

In rule-based (expert) systems, the model takes advantage of knowledge of the process, which can be human experience with a process in the relevant field, expressed as rules of the form "IF condition1 THEN conclusion1", and so on. This method is very similar to the logical reasoning method introduced in the model-free section.

Katipamula and Brambley (2005) explain that qualitative models can also be based on abstraction hierarchies founded on decomposition. In abstraction hierarchies, the conclusion about the behavior of the overall system is worked out from the bottom (basic components) upwards, in other words exclusively from the laws governing the behavior of its subsystems. Qualitative inputs are generally required; some methods accept quantitative inputs (e.g. temperature, pressure) directly, but others call for qualitative inputs such as linguistic statements. Hence, if the inputs are in numerical (quantitative) format, pre-processing is necessary to convert the data into qualitative information. Fuzzy logic is a conventional method that provides a mechanism for this task, as sketched below.
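
A very small sketch of this kind of pre-processing is shown below: a numeric measurement is mapped to linguistic states through triangular fuzzy membership functions. The state names and breakpoints are assumptions chosen only to mirror the example states mentioned above.

# Minimal fuzzy quantization sketch: map a numeric health score (0..100) to
# linguistic states via triangular membership functions. Breakpoints are assumed.

def triangular(x, left, peak, right):
    """Triangular membership function, returns a degree of membership in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

STATES = {
    "Emergency":    (-1, 0, 20),
    "Critical":     (10, 25, 45),
    "Normal":       (35, 55, 75),
    "Very Healthy": (65, 80, 95),
    "Excellent":    (85, 100, 101),
}

def quantize(score):
    memberships = {name: triangular(score, *abc) for name, abc in STATES.items()}
    return max(memberships, key=memberships.get), memberships

state, degrees = quantize(58.0)
print(state, {name: round(mu, 2) for name, mu in degrees.items()})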

In qualitative models, complete knowledge of the physical process is not necessary, since the model enables conclusions about the state of a system to be drawn from partial information. This is done either by deriving qualitative confluence equations from the ordinary differential equations that describe the system behavior (data such as order-of-magnitude inputs can then be used to solve these equations, using a qualitative algebra, to derive the qualitative behavior of the system), or by deriving qualitative behaviors directly from the differential equations. These derived behaviors can then be used as a source of information for FDD.

2.4.6.3 Process history-based:

The last technique to consider in this chapter involves large amounts of historical or process data, logged by data loggers/recorders and condition monitoring systems. It can also make use of input/output methods, and the data can be transformed into a priori knowledge through feature extraction, either qualitative or quantitative (Roberts, 2007).

Methods for the history-based approach include:

  1. Qualitative shape(trend) analysis
  2. Neural networks and other artificial intelligence techniques
  3. Statistical feature extraction

1. Qualitative shape (trend) analysis: in qualitative shape (trend) analysis, waveforms are replaced or represented by a series of shapes or trends with associated numerical quantities. The replacement shapes have the advantage of being very compact and suitable for processing by a computer, but they are an approximation of the original waveforms. These conversions have also been alphabetized, giving similar waveforms a specific shape or letter (Silmon & Roberts, 2008). An example of this method can be seen in Figure 11, and a simple symbolic encoding is sketched below.
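
A minimal illustration of this alphabetization idea is given below: successive samples are encoded as letters for rising, falling or steady segments. The letters, the tolerance and the example data are assumptions; the actual scheme of Silmon & Roberts (2008) is more sophisticated.

# Minimal qualitative trend encoding: replace a sampled waveform by a string of
# letters, U (up), D (down), S (steady). Tolerance and example data are assumed.

def encode_trend(samples, tolerance=0.05):
    letters = []
    for previous, current in zip(samples, samples[1:]):
        delta = current - previous
        if delta > tolerance:
            letters.append("U")
        elif delta < -tolerance:
            letters.append("D")
        else:
            letters.append("S")
    return "".join(letters)

healthy_stroke = [0.0, 0.4, 0.9, 1.0, 1.0, 0.9, 0.4, 0.0]
print(encode_trend(healthy_stroke))        # -> "UUUSDDD"

Comparing the encoded string of a new operation against the string recorded for a healthy operation then reduces waveform comparison to a compact string comparison.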

2. Neural networks: an artificial neural network is a very effective method of mapping input to output data in non-linear systems (Roberts, 2007). It is used for solving artificial intelligence problems and works with assemblies of neurons (computing elements), in which the structure and strengths of the connections between computing elements determine the behavior of the system.

3. Statistical feature extraction: history-based methods differ from model-based methods in that a priori knowledge is not needed. As mentioned, history-based methods deal with large amounts of historical data; feature extraction analyzes these historic data to produce the information and knowledge required for fault diagnosis. The application of this knowledge is then very similar to that used in model-based techniques (Silmon, 2009).

Chapter 3

Design & Development (Practical Works)

3.1 Introduction:

Alongside the theory of intelligent infrastructure, condition monitoring and fault detection and diagnosis, it was also planned this term to build a simple condition monitoring system using potentiometers as sensors and data sources. The objective was to log the data from these devices into LabVIEW, a commercial tool used for this purpose. The next goal was to stream the data through to Wonderware, a standard SCADA/HMI package, and to monitor the received data over the network in its graphical interface application, InTouch.

3.2 The Process and implementation:

At the start of the term, 1 W, 100 ohm wire-wound potentiometers (details of the device can be found in Appendix 1) were purchased from the EECE school's stores and wires were soldered to their terminal pins. These potentiometers were then connected to a USB-6008 data logger from National Instruments. The NI USB-6008 provides basic data acquisition functionality for applications such as simple data logging, portable measurements and academic lab experiments. Further technical details and its pin architecture can be found in Appendix 1. Figure 12 shows the data logger and the sensors connected together.

Each potentiometer has three pins: one connected to the data logger's 5 V supply, one to ground, and the middle (wiper) pin to an analogue input of the data logger. Turning the potentiometer knob changes the output voltage (by increasing or decreasing the resistance), and a plot of this could be observed in NI Measurement and Automation Explorer (MAX). After validating the data logger in MAX, LabVIEW was used to analyze the data further. LabVIEW is a graphical programming environment for developing sophisticated measurement, test and control systems using intuitive graphical icons and wires that resemble a flowchart (National Instruments, 2009).
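
Although the data acquisition in this project was done graphically in LabVIEW, an equivalent text-based sketch using NI's nidaqmx Python package is shown below purely for illustration; the device name "Dev1" and channel "ai0" are assumptions and would need to match the names MAX assigns to the USB-6008.

# Illustrative only: reading the potentiometer voltage from the USB-6008 with
# NI's nidaqmx Python package instead of LabVIEW. "Dev1/ai0" is an assumed
# device/channel name; check NI MAX for the actual one.

import time
import nidaqmx

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=0.0, max_val=5.0)
    for _ in range(10):
        voltage = task.read()            # single on-demand sample
        print(f"potentiometer voltage: {voltage:.3f} V")
        time.sleep(0.5)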

LabVIEW was installed on the local machine (laptop). A number of problems were encountered with LabVIEW recognizing the data logger, due to driver issues on the local machine. After overcoming this problem with assistance from NI technical staff and by installing the appropriate drivers, plots of the potentiometer voltages could be seen in LabVIEW. A closed-loop program was designed in LabVIEW to control the data acquisition unit and to plot the voltage signal coming from the potentiometers. The circuit itself and the output voltage signal can be seen in Figure 13.

This was followed by linking LabVIEW to Wonderware. The Wonderware software is installed on a virtual machine running on a computer in the EECE Railway research lab, while LabVIEW is installed on the local machine (laptop). Because both are more or less stand-alone applications that act as clients, an interface and a server were needed between them. An OPC server, a popular piece of industrial software, was used for this purpose: it is an application that acts as an API (application programming interface) or protocol converter. OPC servers are usually expensive, but following research on the Internet the Matrikon OPC server was found to be a good product to use, since it is free as well as being robust commercial software. Figure 14 illustrates the OPC server and the other two applications linked together.

A Matrikon OPC server for Wonderware was installed on the remote computer, and an attempt was made to connect the local computer to the server via the Birmingham University wireless LAN (local area network). For network security reasons, however, the university wireless firewall did not allow the local computer to connect to the server. To prove that the connection failure was caused by the wireless firewall, a computer already on the same network as the remote virtual machine connected to the server directly via a wired connection; it linked up successfully and the data could be read. A screenshot of this is shown in Figure 15, where it can also be observed that the OPC server is found by the software over the wired connection and that a set of random variables could be read from and written to the OPC server:

To test the connection between LabVIEW and the server, another application, the Matrikon OPC Simulation Server, was installed locally. A program was created in LabVIEW to write data to the server using the Shared Variable Engine, a feature of LabVIEW 2009.
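
For completeness, a text-based equivalent of this test is sketched below using the OpenOPC Python library against the Matrikon OPC Simulation Server. This is only an illustration of the client side (it assumes a Windows machine with pywin32, the simulation server installed, and its standard demo tags available); it is not part of the LabVIEW/Wonderware chain actually built.

# Illustrative OPC DA client test against the locally installed Matrikon OPC
# Simulation Server, using the OpenOPC library (Windows + pywin32 assumed).
# Tag names below are the simulation server's standard demo items.

import OpenOPC

opc = OpenOPC.client()
opc.connect("Matrikon.OPC.Simulation")

# Read a server-generated value, then write and read back a holding tag.
print(opc.read("Random.Int4"))
opc.write(("Bucket Brigade.Int4", 42))
print(opc.read("Bucket Brigade.Int4"))

opc.close()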

A basic application was also created in Wonderware InTouch to display the data from the OPC server. However, because the main effort was dedicated to completing the integration and resolving the network issues, insufficient time was spent on developing a polished application; this will be addressed in the second semester.

To conclude the practical part, the entire resulting system is shown in a single diagram in Figure 17. The only link that still needs to be completed is from LabVIEW to the Wonderware InTouch application.

3.3 Point machine:

In this section the point machine, an important system and asset on the railway, is briefly introduced, since it will be used as the main data source and sensor for the practical developments next semester.

At a railway junction, trains are guided from one track onto another by a mechanical installation known as points or a turnout (in the US), operated by a point machine. The turnout consists of a switch, a crossing and guard rails.

An electric point machine is driven electrically, is used for the operation of points in railway tracks, and comprises an electric motor, a point mechanism and a circuit control device. Point machines are classified according to their field arrangement (single or split field), speed (high or low), operating voltage (high or low) and locking type (in-and-out, straight-through or rotary). The point mechanism is the important part, consisting of a friction clutch and reduction gears; cams and bars are used to convert the rotary movement into linear movement (No Author, 2008).

Chapter 4

Project Management

4.1 Time management

Due to the nature of the project, the time plan was quite intense. Having only nine weeks in each semester to complete all the objectives set out in chapter 1 necessitated accurate time management. A Gantt chart has been created from the time plan to illustrate the sequence, duration and completion of the tasks required for this project. In this section, a table of its most important parts is included and discussed further; the detailed Gantt chart can be found in Appendix 2.

Green indicates that the objective was achieved completely and on time, while yellow indicates that it was not finished on time. As mentioned before, connecting the data logger to LabVIEW took longer than expected, which is why it is marked yellow, and further learning in LabVIEW is needed to perform more advanced tasks. Finally, the integration work is marked red, as it was not completed in time (by the end of week 11). All the components and their connections worked correctly, but the university wireless LAN did not allow the local machine to be linked to the remote computer; this will be resolved by using a wired connection. Apart from these issues, the other tasks met their deadlines successfully, and it is anticipated that the table will be all green next semester.

4.2 Costing

A budget of one hundred pounds has been allocated to each project, and it is important to keep spending within the budget except in special cases. This semester, since the project was mainly software based, the only money spent was on the potentiometers, at a cost of five pounds each.

4.3 Risk Assessment

All project activities must comply with the EECE health and safety regulations. Therefore, before work in the Railway research lab could begin, the area was shown by a member of staff, the associated risks were pointed out, and a risk assessment form was filled in and signed.

Soldering the potentiometers was the only other activity which was also carried out safely in the EECE lab.

4.4 Meetings with the project supervisor

Regular weekly meetings with the project supervisor took place in order to discuss and review the project work. The project supervisor also gave advice and fully supported every aspect of the project. A formal record of these meetings, using the standard meeting form, has been maintained, including outcomes, feedback and other important details. A detailed log book covering all aspects of the project work was kept throughout the term, and all activities on the project were recorded.

Chapter 5

Future Plans

This chapter summarizes the plans for the next semester. Before that, it is important to mention that the part left incomplete in the first semester will be resolved during the Christmas vacation, and the tasks outlined here will be carried out next semester. As mentioned before, one of the main targets of this project is to develop appropriate algorithms for fault detection and diagnosis of railway assets, so the main effort will be concentrated on this task. Its concept design is planned to be finished in three weeks, alongside the main research on the topic; this will be done by first creating a flowchart (PERT chart), then setting up milestones and completing them step by step. A further three weeks has been allowed for the actual code writing, followed by taking readings and data from the point machine in the EECE Railway lab. The developed algorithm and all the data and parameters from the point machine will then be modeled together and subsequently tested. This model can be backed up and/or validated by including 'live' data from the railway network.

Problems and faults usually emerge during testing, so four to five weeks have been dedicated to testing and validating the algorithm, as well as to code assessment and evaluation.

If spare time remains, other railway assets, such as WheelChex and train doors, can be tested and their data also input into the system and analyzed. Since it is important that the final report is written appropriately and accurately, work on it will start early to ensure all the requirements are met.

Table 2 displays the tasks and the approximate week numbers associated with them for semester two, commencing on 11 January 2010. Both the week numbers and the tasks are flexible and may be changed if needed.

Conclusion

In this report, intelligent infrastructure in the railway industry, condition monitoring and its different methods, and fault detection and diagnosis were reviewed.

In general, this project has been a major learning activity in which research on the topic was the key part. The literature review (chapter 2) reflects this effort by detailing the findings and the understanding gained in an appropriate format.

As well as the theoretical exploration of the subject, practical work was carried out in parallel, as detailed in chapter 3. The aim was to build and test a simple condition monitoring system as an initiation into the design and implementation of the larger and more complex system that will be carried out in the second term.

Although many difficulties were experienced in building this system, it taught a valuable lesson about how different theory is from practical, real-world experience.

This lesson will certainly be taken into account next term, and more accurate planning and estimation will be carried out before commencing the practical work.

The main achievement of this semester was linking a sensor to a SCADA package, streaming data into it and presenting the data in a graphical user interface, which will be used next term to allow algorithms to be developed. The project was also kept within budget, the deadlines for test inspection were met, and no unsafe or illegal activity was carried out.

References

  • Katipamula, S. & Brambley, M.R. (2005) Methods for Fault Detection, Diagnostics, and Prognostics for Building Systems - A Review. HVAC&R Research, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Atlanta, January 2005.
  • Barron, R. (1996) Engineering Condition Monitoring: Practice, Methods and Applications. New York: Longman.
  • Blanke, M., Kinnaert, M., Lunze, J. & Staroswiecki, M. (2003) Diagnosis and Fault-Tolerant Control. Berlin: Springer.
  • Blythe, P. (2007) Intelligent Infrastructure Futures, Delivering a Safe, Sustainable and Robust Transport Infrastructure for the future. Newcastle University.
  • Gertler, J. (1998) Fault Detection and Diagnosis in Engineering Systems. New York: Marcel Dekker.
  • VeriSign, Inc. (2004) Intelligent Infrastructure for the 21st Century: Where It All Comes Together. USA: VeriSign, Inc.
  • Moubray, J. (1997) Reliability-centered Maintenance. Oxford: Butterworth Heinemann.
  • National Instruments. (2009) What Is LabVIEW? [online]. http://www.ni.com/labview/whatis/ [Accessed December 1st 2009]
  • Network Rail (2009) Intelligent Infrastructure Good Practice Guide [online]. http://www.networkrail.co.uk/aspx/4349.aspx [Accessed December 4th 2009]
  • No Author. (2008). Point Machine for Railway Signaling. Mumbai: Signal & Telecommunication Training Centre. [also available online] http://www.scribd.com/doc/5988661/-Point-Machine-FOR-RAILWAY-SIGNALING [Accessed December 3rd 2009]
  • Patton, R., Frank, P. & Clark, R. (1989) Fault Diagnosis in Dynamic Systems: Theory and Applications. Hertfordshire: Prentice Hall.
  • Rao, B. (1996) Handbook of condition monitoring. Oxford: Elsevier Science.
  • Reliability Centered Maintenance (RCM2). (2008) Conscious Asset Management. A member of the Conscious Group Inc, Canada.
  • Roberts, C. (2007) Methods for fault detection and diagnosis of railway actuators. PhD thesis, University of Birmingham.
  • Silmon, J. (2009) Operational Industrial fault detection and diagnosis: Railway Actuator case studies. PhD thesis, University of Birmingham.
  • Tokyo Car Corporation. (2009) Railway Track Turnout [online]. http://www.tokyu-car.co.jp/eng/rs/turnout.html [Accessed December 3rd 2009]
  • Vas, P. (1993) Parameter Estimation, condition monitoring & diagnosis of electrical machines. Oxford: Oxford Science Publication.
  1. http://www.networkrail.co.uk/aspx/4349.aspx
  2. (http://uk.rs-online.com/web)
  3. NI USB-6008 Datasheet
