The History of OPC


OPC, originally an acronym for OLE (Object Linking and Embedding) for Process Control, is the name of a standard specification developed in 1996 by an industrial automation task force. The standard specifies the communication of real-time plant data between control devices and monitoring devices from different manufacturers.

Figure 2

OPC standardizes a technology rather than a single product, and thereby provides universal connectivity. The vision of OPC is that once data is exposed through OPC, every application can communicate with every other application.

3.1 History of OPC

Since reusable software components made their entry into automation technology and replaced monolithic, customized software applications, the demand for standardized interfaces between components has increased significantly. If such interfaces are missing, every integration requires cost-intensive and time-consuming programming to support the respective interface. If a system consists of several software components, these adaptations have to be carried out several times.

Following the widespread adoption of Windows operating systems and their common Win32 API in the PC sector, different technologies were created to enable communication between software modules by means of standardized interfaces. A first milestone was DDE (Dynamic Data Exchange), which was later complemented by the more powerful technology OLE (Object Linking and Embedding). With the introduction of the first HMI and SCADA programs based on PC technology between 1989 and 1991, DDE was used for the first time as an interface for software drivers to access the process periphery.

Besides the product-specific drivers for visualization and configuration, a series of product-independent DDE servers was soon created. PC plug-in cards were equipped with DDE software interfaces and thus could be operated from programs with a DDE client interface. But DDE had some shortcomings, in particular where performance is concerned. This induced manufacturers of HMI programs to define DDE extensions for increased performance. The result was a set of different, manufacturer-specific DDE flavors: yet another kind of proprietary interface. Interoperability on the application level was still far away.

Due to the great variety of hardware platforms and operating systems on which applications run, and due to the different programming languages in which applications are developed, no general assumption can be made about how applications are activated and deactivated or about the object models they implement.

To allow applications to be developed so that they are interoperable yet independent of each other, OPC relies on platform independence and Internet capability, broadening the implementation and application basis for OPC products.

With the increased distribution of their products and the growing number of communication protocols and bus systems, software manufacturers faced more and more pressure to develop and maintain hundreds of drivers. A large part of the resources of these enterprises had to be set aside for development and maintenance of communication drivers.

In 1995, the companies Fisher-Rosemount, Intuitive Technology, Opto22, Rockwell, and Siemens AG decided to work out a solution for this growing problem, and they formed the OPC Task Force. Members of Microsoft staff were also involved and supplied technical assistance.

Their vision was to standardize on a particular technology, and their strategy was to provide interoperable, reliable and secure connectivity. They wanted to create and maintain standards for the open connectivity of industrial automation devices and systems, such as industrial control systems and process control systems, mainly specifying the communication of industrial process data between sensors, software systems and notification devices.

The OPC Task Force assigned itself the task of working out a standard for accessing real-time data under Windows operating systems, based on Microsoft's OLE/DCOM technology: OLE for Process Control, or OPC. The members of the OPC Task Force worked intensively, and by August 1996 the OPC specification version 1.0 was available. In September 1996, during the ISA show in Chicago, the OPC Foundation was established; it has been coordinating all specification and marketing work since then.

An important task of the OPC Foundation is to respond to the requirements of the industry and to consider adding them as functional extensions of existing or newly created OPC specifications. The strategy is to extend existing specifications, to define basic additions in new specifications, and to carry out modifications with the aim of maximum possible compatibility with existing versions. In September 1997, a first update of the OPC specification was published in the form of version 1.0A. This specification was no longer named "OPC Specification" but Data Access Specification. It defined the fundamental mechanisms and functionality for reading and writing process data. This version also served as the basis for the first OPC products, which were displayed at the 1997 ISA show. Consideration of further developments in Microsoft DCOM and industry requirements led to the creation of Data Access Specification version 2.0 in October 1998.

Soon after the release of version 1.0A, it became apparent that there was a need for the specification of an interface for monitoring and processing events and alarms. A working group formed to solve this problem worked out the Alarms and Events Specification, which was published in January 1999 as version 1.01. The Alarms and Events Specification defines how Alarms and Events clients are informed by Alarms and Events servers of the spontaneous occurrence of events within a process.

In addition to the acquisition of real-time data and monitoring of events, the use of historical data offers another large field of application in automation. The work on the Historical Data Access Specification already began in 1997 and was completed in September 2000.

Defining and implementing security policies for use with OPC components is also of great importance. A corresponding specification has been available since September 2000 and is titled OPC Security Specification.

In particular from the field of industrial batch processing, additional requirements have been forwarded to the OPC Foundation, which led to the OPC Batch Specification. During work on version 2.0 of the Data Access Specification and the other specifications, it emerged that there are elements common to all specifications. These elements have been combined in two specifications.

While the easy commissioning and simple communication setup between OPC components on a local computer boosted the popularity and widespread use of OPC technology, the implementation of OPC communication between remote computers was often complicated and inadequate. Reasons were the DCOM properties and DCOM security settings. On the one hand, the 'near-omnipresence' of DCOM on Microsoft platforms was a major success factor for OPC. On the other hand, DCOM also proved an OPC showstopper in many cases. Calling functions on remote computers or accessing remote components, such as Data Access or Alarms and Events Servers, is allowed or blocked by DCOM Security. Setting up DCOM security so that it really works the way it should is very complicated and takes a lot of expertise. The access rights granted to a user during Windows login have to be adjusted to match the DCOM security settings. As a result, setup engineers and system integrators routinely choose to speed up the process by granting very broad access rights and thus largely disabling the protection from unauthorized remote access. This shortcut collides with IT security policies and risks damage caused by negligence or sabotage. Another drawback is that a DCOM based communication of Data Access or Alarms and Events components is not possible across firewalls using dynamic Network Address Translation (NAT).

Since the standardization of XML in 1998, new Web Service technologies have been developed, including the XML protocol SOAP (Simple Object Access Protocol), UDDI (Universal Description, Discovery and Integration) and WSDL (Web Service Description Language). In 2002, Microsoft launched its new .NET Framework, which is based on XML, SOAP and Web Services. The OPC Foundation also recognized the significance of XML and Web Services early on and formed additional working groups: the OPC XML-DA specification defines the mapping of the Data Access Specification to Web Services using SOAP and XML. This allows OPC components to be used over the Internet and on operating system platforms without DCOM support. The Data Access 3.0 working group added new functionality to the existing Data Access Specification.

The OPC DX Specification defines how to implement horizontal communication directly between servers, without involving a client. While the OPC DX Specification was implemented in only a few OPC products, the additional implementation of an XML-DA interface as a Web Service in existing Data Access servers offered an interesting option in that it allowed communication across computer boundaries. XML-DA components communicate through the XML protocol, i.e. by exchanging XML frames via HTTP (Hypertext Transfer Protocol). This makes them very easy to configure, as opposed to DCOM. In addition, they can communicate across firewalls without problems.

Another major achievement of OPC technology using Web Services is its portability to non-Windows operating systems. For the first time, OPC components for UNIX, Linux and other operating systems were available on the market. The only drawback of this Web Services based type of OPC communication is a significantly lower transmission speed compared to DCOM. The transfer rates achieved are often insufficient for high-speed automation tasks.

Considerations to convert the Alarms and Events Specification and the Historical Data Access Specification to Web Services led to the formation of the OPC Unified Architecture working group in late 2003. The objective of the working group was to take the specifications that define access to process data, historical data, and alarms and events, and to change them over to Web Services in such a way that this data could be accessed in a standardized manner. The Unified Architecture, in short OPC UA, was born. The OPC UA Specification comprises 13 parts and defines a platform-independent interoperability framework that allows process data, alarms and historical data to be managed in a single unified address space.

In many applications, the execution of commands is just as important as the reading and writing of values. To meet this requirement, the OPC Commands Specification was defined and a draft version was released in 2004. However, the specification was not developed any further, as work on OPC UA was already in progress at that point. The support of program calls and the monitoring of long-running processes were defined as requirements for OPC UA. When OPC Unified Architecture introduced the new technology generation, the term "Classic OPC" was used to distinguish the old DCOM-based OPC specifications from the new OPC UA Specification. Simply using the terms old and new to make this distinction would lead to the misconception that the new OPC technology was replacing the previous OPC technology. In fact, it is a key goal of the OPC Foundation to protect investments in thousands of DCOM-based OPC products. A carefully defined migration strategy ensures that Classic OPC products and OPC UA products can coexist without problems and that they can be combined and used in the same projects.

The rapid growth in the number of OPC products, to several thousand within only a few years of the first specification, shows the enormous acceptance and success of this technology. OPC has succeeded not only in developing from a concept into an industrial standard within only three years, but also in becoming established in practically all areas and segments of industrial automation over the more than twelve years of its existence.

With the introduction of the new platform-independent OPC Unified Architecture, OPC technology has started to conquer completely new areas, such as embedded systems and IT, and become established in applications like device parameterization, where OPC had hardly been used before.

OPC is the technological basis for the convenient and efficient linking of automation components with control hardware and field devices. Furthermore, it provides the basis for integrating office products and company-level information systems such as Enterprise Resource Planning (ERP) and Manufacturing Execution Systems (MES).

Process data on the field level can be presented in an Excel sheet. Status data and production data on the control level can be archived in a database without any problem via OPC, or can be further processed in a production planning system. OPC UA components can be embedded in programmable logic controllers (PLC), distributed control systems (DCS), intelligent gateways, remote I/Os and other devices, and be accessed over the Internet.

OPC is based on client-server technology: the client allows the user to make read and write requests to an OPC server, which then translates the requested items into device-protocol-specific requests that the underlying machinery can understand.
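
This translation role can be sketched in plain Python. The class names, item IDs and register addresses below are hypothetical, for illustration only; a real OPC server exposes COM/DCOM or UA interfaces rather than these methods:

```python
# Conceptual sketch (not a real OPC API): an OPC-style server translating
# vendor-neutral item reads into device-protocol-specific requests.

class ModbusDevice:
    """Stand-in for underlying machinery speaking a proprietary protocol."""
    def __init__(self):
        # Hypothetical holding registers with current process values.
        self._registers = {40001: 72.5, 40002: 1.0}

    def read_register(self, address):
        return self._registers[address]

class OpcStyleServer:
    """Maps generic item IDs onto device-specific register addresses."""
    def __init__(self, device):
        self._device = device
        # Item ID -> device register (an assumption for illustration).
        self._item_map = {"Boiler.Temperature": 40001, "Boiler.PumpOn": 40002}

    def read(self, item_id):
        # Translate the generic request into a device-protocol request.
        return self._device.read_register(self._item_map[item_id])

class OpcStyleClient:
    """The client only ever speaks the standardized interface."""
    def __init__(self, server):
        self._server = server

    def read(self, item_id):
        return self._server.read(item_id)

client = OpcStyleClient(OpcStyleServer(ModbusDevice()))
print(client.read("Boiler.Temperature"))  # 72.5
```

The point of the sketch is that the client never sees register 40001: swapping in a different device driver changes only the server's mapping, not the client.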

Thanks to the OPC Foundation, a generic client such as an HMI/SCADA package from one vendor can easily consume data from any OPC Foundation member's server. OPC can also exist in architectures with multiple data acquisition levels.


3.2 OPC Specifications

OPC provides standard specifications for Data Access (DA), Historical Data Access (HDA), and the Unified Architecture (UA). These OPC specifications are widely accepted by the automation industry. Classic OPC is based on Microsoft COM/DCOM technology:

OPC Data Access

OPC Historic Data Access

OPC Unified Architecture


3.2.1 OPC Data Access

OPC Data Access provides the baseline functionality for accessing data from various devices connected over different networks via a standard set of interfaces. These interfaces facilitate interoperability between clients and servers. A client discovers the list of available servers and selects the desired one; it can then browse the server's namespace and obtain information about the items in it. The server defines the set of interfaces that provide various mechanisms for accessing (reading and writing) the data items according to the needs of the client application. The primary intent of OPC Data Access is to provide the interfaces for data acquisition in support of the vertical architecture (serving data from a device to a client application on a higher-level computer).
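
The discovery, browse, read and write steps can be illustrated with a minimal sketch. These are not the actual OPC DA COM interfaces; the server names and item IDs are invented, and real browsing is done through the specification's dedicated browse interfaces:

```python
# Conceptual sketch of the OPC Data Access interaction pattern
# (hypothetical names, plain Python in place of COM interfaces).

class DataAccessServer:
    def __init__(self, name, items):
        self.name = name
        self._items = dict(items)  # item ID -> current value

    def browse(self):
        """Expose the namespace: the item IDs this server serves."""
        return sorted(self._items)

    def read(self, item_id):
        return self._items[item_id]

    def write(self, item_id, value):
        self._items[item_id] = value

# Step 1: the client discovers available servers and picks one.
available = [DataAccessServer("VendorA.DA.1", {"Tank1.Level": 3.2}),
             DataAccessServer("VendorB.DA.1", {"Line2.Speed": 120})]
server = next(s for s in available if s.name == "VendorA.DA.1")

# Step 2: browse the namespace, then read and write items.
print(server.browse())             # ['Tank1.Level']
server.write("Tank1.Level", 3.5)
print(server.read("Tank1.Level"))  # 3.5
```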

3.2.2 OPC Historic Data Access

This specification describes the OPC COM objects and their interfaces implemented by an OPC Historical Data Access server. An OPC client can connect to an OPC Historical Data Server and access its data. The OPC Historical Data Server provides a way to access or communicate with a set of historical data sources; the types of sources available are a function of the server implementation. The server may be implemented as a stand-alone OPC Historical Data Server that collects data from an OPC Data Access server or another data source and stores that data. It may also be a set of interfaces layered on top of an existing proprietary historical data server. The clients that reference the OPC Historical Data Server may be simple trending packages that just want values over a given time frame, or they may be complex reports that require data in multiple formats.

An OPC client application communicates with an OPC Historical Data Server through the specified OPC custom interfaces. The OPC specification specifies COM interfaces, not the implementation of those interfaces; it specifies that a particular set of interfaces should be provided to the client application. As in all COM implementations, the architecture of OPC is a client-server model in which the OPC server provides an interface to the OPC objects and manages them.

The OPC Historical Data Server objects provide the ability to read data from and write data to a historical server. The types of historical data are server-dependent. All COM objects are accessed via interfaces, and the client sees only interfaces. Thus, the objects described here are 'logical' representations that are independent of the actual internal implementation of the HDA server.
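
A trending client's typical use of an HDA server, reading raw values over a time frame, can be sketched as follows. The method and item names are illustrative only, not the specification's actual interfaces:

```python
# Conceptual sketch of a Historical Data Access query: a trending client
# asks for the raw values of one item within a time window.
from datetime import datetime, timedelta

class HistoricalDataServer:
    def __init__(self):
        self._history = {}  # item ID -> list of (timestamp, value)

    def append(self, item_id, timestamp, value):
        """Collect a sample, e.g. forwarded from a Data Access server."""
        self._history.setdefault(item_id, []).append((timestamp, value))

    def read_raw(self, item_id, start, end):
        """Return the (timestamp, value) pairs within [start, end]."""
        return [(t, v) for t, v in self._history.get(item_id, [])
                if start <= t <= end]

hda = HistoricalDataServer()
t0 = datetime(2024, 1, 1, 12, 0)
for i in range(5):
    hda.append("Boiler.Temperature", t0 + timedelta(minutes=i), 70.0 + i)

# The trending client requests only the first three minutes.
window = hda.read_raw("Boiler.Temperature", t0, t0 + timedelta(minutes=2))
print(window)  # three samples: 70.0, 71.0, 72.0
```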

3.2.3 OPC Unified Architecture

OPC Unified Architecture is the next generation of OPC technology. When OPC was created, security, platform independence and information modeling were not on the scene; today, OPC UA addresses almost everything, from cyber threats and the secure movement of complex data between different platforms to independence from existing Microsoft-based technologies. This means that OPC UA can now be implemented system-wide, from embedded field devices to enterprise-level applications.

OPC Unified Architecture (UA) is a platform-independent standard by which different kinds of systems and devices can communicate with each other over various types of networks. The standard supports powerful, secure communication that assures the identity of Clients and Servers. OPC UA defines standard sets of Services that Servers may provide to Clients, and Servers specify to Clients which Service sets they support. Information is conveyed using data types that are either standard or vendor-defined, and Servers define object models that Clients can discover dynamically. Servers can provide access to both current and historical data, as well as Alarms and Events to notify Clients of important changes. OPC UA can be mapped onto a variety of communication protocols, and data can be encoded in various ways to trade off security against efficiency.

OPC UA provides a consistent, integrated Address Space and service model. This allows a single OPC UA Server to integrate data, Alarms and Events, and history into its Address Space, and to provide access to them using an integrated set of Services. These Services also include an integrated security model.

OPC UA also allows Servers to provide Clients with type definitions for the Objects accessed from the Address Space. This allows standard information models to be used to describe the contents of the Address Space. OPC UA allows data to be exposed in many different formats, including binary structures and XML documents. The format of the data may be defined by OPC, other standard organizations or vendors. Through the Address Space, Clients can query the Server for the metadata that describes the format for the data. In many cases, Clients with no pre-programmed knowledge of the data formats will be able to determine the formats at runtime and properly utilize the data.
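
The idea of runtime metadata discovery can be sketched as follows. The node IDs, class names and attributes below are made up for illustration and greatly simplify the actual OPC UA information model:

```python
# Conceptual sketch: a UA-style address space where each variable node
# references a type-definition node, so a client with no pre-programmed
# knowledge can query the data's format (metadata) at runtime.

class Node:
    def __init__(self, node_id, value=None, type_definition=None):
        self.node_id = node_id
        self.value = value
        self.type_definition = type_definition  # node ID of the type node

class AddressSpace:
    def __init__(self):
        self._nodes = {}

    def add(self, node):
        self._nodes[node.node_id] = node

    def get(self, node_id):
        return self._nodes[node_id]

space = AddressSpace()
# A type node describing the format of temperature values.
space.add(Node("ns=1;s=TemperatureType", value={"unit": "degC"}))
# A variable node pointing at its type definition.
space.add(Node("ns=1;s=Boiler.Temperature", value=72.5,
               type_definition="ns=1;s=TemperatureType"))

# The client follows the type reference to discover the metadata.
var = space.get("ns=1;s=Boiler.Temperature")
meta = space.get(var.type_definition)
print(var.value, meta.value["unit"])  # 72.5 degC
```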

OPC UA adds support for many relationships between Nodes instead of being limited to just a single hierarchy. In this way, an OPC UA Server may present data in a variety of hierarchies tailored to the way a set of Clients would typically like to view the data. This flexibility, combined with support for type definitions, makes OPC UA applicable to a wide array of problem domains. As illustrated below, OPC UA is not targeted at just the telemetry server interface, but also as a way to provide greater interoperability between higher level functions.
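
The multi-hierarchy idea can be illustrated with a small, hypothetical reference graph; the reference type names below are illustrative stand-ins, not the standard's actual ReferenceType definitions:

```python
# Conceptual sketch: the same nodes appear in several hierarchies at
# once, because nodes can be related by more than one reference type.

references = [
    # (source node, reference type, target node)
    ("Plant", "Organizes", "Boiler1"),
    ("Plant", "Organizes", "Boiler2"),
    ("BoilerType", "HasTypeInstance", "Boiler1"),
    ("BoilerType", "HasTypeInstance", "Boiler2"),
    ("MaintenanceArea7", "Organizes", "Boiler1"),
]

def children(node, ref_type):
    """Follow one reference type to obtain one view of the hierarchy."""
    return sorted(t for s, r, t in references if s == node and r == ref_type)

print(children("Plant", "Organizes"))             # physical plant view
print(children("BoilerType", "HasTypeInstance"))  # type-based view
print(children("MaintenanceArea7", "Organizes"))  # maintenance view
```

Each call traverses the same underlying graph, but a client interested in maintenance sees a different hierarchy than one interested in the physical plant layout.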

OPC UA is designed to provide robustness of published data. A major feature of all OPC servers is the ability to publish data and Event Notifications. OPC UA provides mechanisms for Clients to quickly detect and recover from communication failures associated with these transfers without having to wait for long timeouts provided by the underlying protocols. OPC UA is designed to support a wide range of Servers, from plant floor PLCs to enterprise Servers. These Servers are characterized by a broad scope of size, performance, execution platforms and functional capabilities. The OPC UA specifications are layered to isolate the core design from the underlying computing technology and network transport. This allows OPC UA to be mapped to future technologies as necessary, without negating the basic design.

OPC UA is designed as the migration path for OPC clients and servers that are based on Microsoft COM technology. Care has been taken in the design of OPC-UA so that existing data exposed by OPC COM servers (DA, HDA and A&E) can easily be mapped and exposed via OPC UA. Vendors may choose to migrate their products natively to OPC UA or use external wrappers to convert from OPC COM to OPC UA and vice-versa. Each of the previous OPC specifications defined its own address space model and its own set of Services. OPC UA unifies the previous models into a single integrated address space with a single set of Services.

3.3 OPC UA Security Model

OPC UA security is concerned with the authentication of Clients and Servers, the authentication of users, the integrity and confidentiality of their communications, and the verifiability of claims of functionality. It does not specify the circumstances under which various security mechanisms are required. That specification is crucial, but it is made by the designers of the system at a given site and may be specified by other standards.

Rather, OPC UA provides a security model, in which security measures can be selected and configured to meet the security needs of a given installation. This model includes standard security mechanisms and parameters.

Application level security relies on a secure communication channel that is active for the duration of the application Session and ensures the integrity of all Messages that are exchanged. This means users need to be authenticated only once, when the application Session is established. When a Session is established, the Client and Server applications negotiate a secure communications channel and exchange software Certificates that identify the Client and Server and the capabilities that they provide. OPC Foundation-generated software Certificates indicate the OPC UA Profiles that the applications implement and the OPC UA certification level reached for each Profile. Certificates issued by other organizations may also be exchanged during Session establishment.

The Server further authenticates the user and authorizes subsequent requests to access Objects in the Server. Authorization mechanisms, such as access control lists, are not specified by the OPC UA specification; they are application- or system-specific.

User level security includes support for security audit trails, with traceability between Client and Server audit logs. If a security-related problem is detected at the Server, the associated Client audit log entry can be located and examined. OPC UA also provides the capability for Servers to generate Event Notifications that report auditable Events to Clients capable of processing and logging them. OPC UA defines standard security audit parameters that can be included in audit log entries and in audit Event Notifications.

OPC UA security complements the security infrastructure provided by most web service capable platforms. Transport level security can be used to encrypt and sign Messages. Encryption and signatures protect against disclosure of information and protect the integrity of Messages. Encryption capabilities are provided by the underlying communications technology used to exchange Messages between OPC UA applications.
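
As a generic illustration of message signing and integrity checking (not the actual OPC UA security protocol or wire format), a message authentication code lets a receiver detect tampering; the shared key here stands in for key material negotiated during channel establishment:

```python
# Generic integrity-check sketch using Python's standard library:
# sign a message with a shared secret, then verify it on receipt.
import hashlib
import hmac

# Hypothetical key material; in OPC UA this would come from the
# secure channel negotiation, not a hard-coded constant.
secret = b"session-key-established-during-handshake"

def sign(message: bytes) -> bytes:
    return hmac.new(secret, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign(message), signature)

msg = b"WriteRequest: Boiler.Setpoint = 80.0"
sig = sign(msg)
print(verify(msg, sig))                 # True: message is untampered
print(verify(msg + b" tampered", sig))  # False: integrity check fails
```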

3.4 Conclusion

OPC is a published and secure API that defines how process data passes between applications.

It enables standardization on a technology, not on a product, which means there is no longer any need to rely on a specific vendor.

It enables best-of-breed solutions: any HMI can be combined with any PLC, and these can be combined with any advanced process control. All of these can be put together using OPC.

OPC reduces short-term and long-term project costs.