Characteristics Of Trusted Real Time Operating Systems Computer Science Essay


Abstract-Real Time Operating Systems (RTOS) have emerged over the past few decades to provide solutions across platforms ranging from embedded devices to sophisticated electronic systems such as nuclear plants and spacecraft. These operating systems continue to evolve to meet the needs of the diverse applications running on such platforms. Recently, a new element, Trusted Computing, was introduced to enhance platform security. The general idea of Trusted Computing is to provide a trusted environment in which the hardware and the software or applications running on the platform behave as expected without the need for further verification. The term "behave as expected" needs to be characterized in the form of operating system behaviours so that we can identify new properties to use as references when developing or upgrading operating systems. Therefore, in this paper we present the characteristics of existing Real Time Operating Systems and the extended properties or criteria needed to introduce a Trusted Real Time Operating System (TRTOS). The purpose of this paper is to help researchers further investigate the need for TRTOSs and their variants across the diversity of user applications.

Keywords-trusted application; RTOS; TPM; trusted RTOS; real time operating system; trusted platform module; trusted computing


Operating systems (OS) are often used to manage the complexity of computing resources and to maximize computation input and output. This includes serving user applications with minimal requirements for the application to know details about the platform or its resources. Most OSs are designed for a specific purpose based on the target platform or machine and the applications executing on that platform. Many OSs are in use across the globe, including Microsoft Windows, Linux and Mac OS. However, each OS has a unique design and architecture intended for specific or general purposes. We can classify OSs into four categories: i) monolithic [1], ii) microkernel [2] or nanokernel [3, 12], iii) exokernel [4, 5], and iv) hybrid [5].

Trusted Computing

We begin our discussion with some key definitions related to trusted computing (TC). The Trusted Computing Group (TCG) addresses "trust" based on an expected-response schema, which means "an entity can be trusted if it always behaves in the expected manner for the intended purpose" [20]. Nevertheless, nowadays most platforms contaminated by malicious activity will try to produce the expected responses in real time while carrying out their malicious activities [21]. Therefore, "trust" is a declaration by a well-known authority that the platform can be trusted to behave properly for the intended purpose. This authority is prepared to issue an endorsement (e.g., a credential or certificate) for the platform because it has already assessed the integrity of the platform. "Trust" is a complicated notion that has been studied and debated in different areas such as the social sciences and humanities as well as computer science [22].

Intel Corporation defined "Trusted" as follows: "trusted denotes a successful measurement of the provided software module to a reference measurement that is protected by a hardware trusted platform module (e.g., TPM) and is pre-provisioned on the platform" [23]. In fact, trust is a fusion of several elements of the platform that span from the enterprise to the consumer, including reliability, safety, confidentiality, integrity, availability and privacy. However, a possible way to quantify "trust" is through the notion of "trustworthiness": the degree to which the (security) behaviour of a component is demonstrably compliant with its stated functionality [24].

Defining trust so that it can be quantified is only the first order of business; it is necessary but not sufficient. Next is to clearly define a "Trusted Process". It refers to "either a hardware-based or software-based process with the host platform that is trusted without the need for further inspection to perform as expressed by the host platform certificate" (TCG PC Client, 2005). The trusted process is an important element of the trusted platform because it defines the basis of how processes in a system become trusted in a computing environment. The definition clearly spells out that a process can only be trusted if the platform is trusted, and a platform gains its trusted credential through the platform testimonials issued by the platform makers.

Measuring Methods

There are at least two techniques that can be used to measure the security of an OS: i) binary measurement and ii) property-based measurement. There are three major issues with binary measurement when used for attestation: 1) revealing the platform configuration may lead to privacy abuse and unfairness against the underlying platform; 2) it requires the verifier to know all possible "trusted" configurations of all platforms; and 3) it does not necessarily imply that the platform complies with the desired (security) properties during the attestation process, because a binary measurement is a static value [26]. Normally, a binary integrity measurement is acquired by applying a hashing algorithm (e.g., SHA-1) to files such as the kernel, libraries and executables. It is easy to implement but lacks flexibility when serving hot-updating platforms (e.g., Windows or Linux updates). Property-based measurement is a technique that uses properties of the platform without exposing its identity during and after the attestation process. So, what are these properties? By characterization, properties are dynamic measurements offered by the platform (e.g., a client) to satisfy the verifier (e.g., a server) based on certain security requirements of the party requesting attestation. In favour of attestation, properties do not reveal the platform's software and hardware configurations [27]. Until now, researchers have been unable to identify these properties and the mechanism to derive them for the attestation process.
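The binary measurement described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular implementation; the function name and chunk size are our own, and SHA-1 is used because it is the digest algorithm of the TPM 1.2 era discussed in this paper:

```python
import hashlib

def measure_file(path, chunk_size=8192):
    """Compute a SHA-1 binary measurement over a file (e.g., a kernel
    image, library or executable), as in binary attestation schemes.
    The file is read in chunks so large binaries need not fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

A verifier compares this static value against a known-good reference digest. Any update to the file, legitimate or not, changes the digest, which is exactly the inflexibility noted above for hot-updating platforms.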



Monolithic Kernel

The monolithic kernel usually packs multiple modules, including the hardware abstraction layer (HAL), hardware drivers, kernel modules (e.g., the scheduler, interrupt request (IRQ) handler and resource management) and high-precision applications, into a single large executable binary. This kind of design is suitable for less complex platforms such as embedded devices. However, there are many existing Linux-based OSs using a monolithic kernel, such as Debian, Fedora, Ubuntu and Red Hat. They integrate critical or core modules for the application by recompiling the kernel source code [6] for customization.

Fig. 1: Monolithic Architecture


Microkernel

The microkernel naturally consists of multiple modules with high cohesion and loose coupling between them (e.g., a crashed module will not trouble other modules) [7]. Liedtke's principle for designing a microkernel is: "a concept is tolerated inside the microkernel only if moving it outside the kernel, i.e. permitting competing implementations, would prevent implementation of the system's required functionality" [17]. In other words, the core kernel is kept very small by implementing modules outside the kernel space wherever possible. Normally, kernel-space elements such as the HAL, drivers, IRQ handling and the scheduler are separated into different modules or executable files. The modules recognize each other through a standard interface (e.g., POSIX) using very fast inter-process communication (IPC). Early-generation microkernels such as Mach [8] and IBM's Workplace OS [9] encountered many problems during implementation because of poor design and performance degradation rooted in the IPC [2]. This happened because hardware at that time was designed more for monolithic OSs and lacked hardware acceleration (e.g., for context switches, registers, buses and DMA). Modern microkernels such as Vamos [2] and L4 [10] have successfully addressed the problems of the earliest generation and have also added new security elements to the kernel design, such as a Trusted Computing Base (TCB).

Fig. 2: Microkernel Architecture


Nanokernel

The nanokernel, for a while also called the picokernel, is a term that was used in the initial stage of microkernels. The nano or pico prefix refers to control of the computer clock and the response time (precision timing) for certain processes or threads [3]. An example of a nanokernel is KeyKOS, which has been used on the System/370, 680x0 and 88x0 processor families since 1983 [12]. The motivations for developing KeyKOS were accounting accuracy, 24-hour uninterrupted service, and the ability to support simultaneous, mutually suspicious time-sharing customers with an excellent level of security [12]. The main characteristics of the nanokernel are a stateless kernel, a single-level store, and capabilities (every object in the system is referred to by one or more linked keys) [12]. However, in the past 15 years, people have come to refer to nanokernels and picokernels simply as microkernels, because the evolution of software and hardware has diminished the differences between these terms.


Exokernel

The exokernel was developed at MIT [4] to accelerate hardware throughput by allowing untrusted applications to utilize the hardware through low-level access to the physical hardware [5]. It separates protection from management by dividing responsibilities differently from the way conventional OSs do [4]. The exokernel provides a mechanism for accessing low-level resources through library operating systems (libOSes). A libOS is an unprivileged library that can be modified or replaced at will. These libraries offer virtual memory, file systems, networking, IPC and processes (threading) to the applications running on top of them [4, 5]. D. R. Engler [5] found that this OS architecture can securely and efficiently multiplex hardware resources at a low level, and that traditional operating system abstractions can be implemented efficiently at the application level.

Fig. 3: ExoKernel Architecture

Hybrid Kernel

The hybrid kernel is a mixture of kernel architectures that serves the numerous applications running on the same platform.

Several research communities call a hypervisor a hybrid kernel because the hypervisor provides multiple interfaces to support other kernels running on top of it. The hypervisor provides hardware abstraction and resource sharing between kernels. Examples of this hybrid architecture are OKL4 [16], L4/Fiasco.OC [18, 19], L4 NOVA [19] and Citrix XenServer. These kernels or hypervisors are Type 1 (native or bare-metal) hypervisors that run directly on the host's hardware to control the hardware and monitor guest kernels. There is also another hybrid kernel design, found in Windows Vista, BeOS [13], ReactOS [14] and Plan 9 [15], that combines microkernel and monolithic elements in one architecture. There are a few reasons why Windows is a hybrid kernel rather than just a microkernel: i) the size of NTOSKRNL for Windows Vista SP3 is around 3.38 MB (monolithic); ii) core functions (or modules) such as first-level interrupt handling reside in NTOSKRNL (microkernel); iii) subsystems providing system services run in kernel mode rather than user mode (monolithic); and iv) Windows Server with Hyper-V supports virtualization of OSs as guest OSs (hybrid).

Fig. 4: Hybrid Kernel (Hypervisor) Architecture

Fig. 5: Hybrid Kernel (Mono and Micro) Architecture

Related Work

R. Sailer et al. [28] presented a design and prototype for verifying a client platform using the Integrity Measurement Architecture (IMA) for the (monolithic) Linux OS. The main security design is to measure all executable content loaded into the kernel and user space before execution, with these measurements protected by the TPM hardware. The second characteristic of their scheme is the use of a root of trust and a chain of trust based on the TCG requirements. The measurement model measures content from the BIOS all the way up to the application layer. They used a binary measurement schema to determine whether the OS and its environment had been modified in an unauthorized manner. The IMA security design aims to achieve the Clark-Wilson [29] level of integrity verification in terms of verification scope, executable content, structured data and unstructured data, and a challenger is able to ensure that measurements are fresh, complete and unchanged. They modified the kernel with security hooks to take a measurement when code is first loaded into a process, and changes are detected by verifying against the original integrity value.
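The extend-based chain of trust that protects such a measurement list can be modelled as a simple hash chain. The sketch below is our own minimal model of the idea, not Sailer et al.'s actual implementation; it assumes SHA-1 PCRs (20 bytes, reset to zero) as in TPM 1.2, and the function names are illustrative:

```python
import hashlib

ZERO_PCR = b"\x00" * 20  # a SHA-1 PCR holds 20 zero bytes after reset

def aggregate(measurement_list):
    """Recompute the expected PCR value from an ordered measurement
    list by folding each entry in with the TPM extend operation:
    PCR_new = SHA1(PCR_old || measurement)."""
    pcr = ZERO_PCR
    for m in measurement_list:
        pcr = hashlib.sha1(pcr + m).digest()
    return pcr

def verify(measurement_list, quoted_pcr):
    """A challenger accepts only if the recomputed aggregate matches
    the PCR value signed (quoted) by the TPM."""
    return aggregate(measurement_list) == quoted_pcr
```

Because each step hashes the previous PCR value together with the new measurement, the aggregate depends on both the content and the order of every loaded component, so tampering with any entry in the reported list makes verification fail.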

Secondly, Z. Yanqin et al. [9] presented work on optimizing VPN security gateways using Security Policy Database (SPD) configuration and key exchange based on machine learning and Elliptic Curve Cryptography (ECC) [9, 10, 11]. The paper points out the key security areas: i) authentication, to assure that packets are transmitted by a valid source and without alteration; ii) confidentiality, to prevent eavesdropping by encrypting the messages; and iii) key management, to secure the key exchange process [9].

R. Sailer et al. [15] presented a design to verify client integrity properties and establish trust in the client's policy enforcement before permitting the client to access the corporate intranet. In their proposal, they used an integrity heartbeat that enables the VPN server to track changes in the remote client's security properties and to implement policy changes during the attestation process.

J. Liu et al. [16] discussed a remote anonymous attestation protocol based on the ring signature. In their design, any public-key system, such as RSA, can be used to construct the ring-signature attestation protocol; an RSA-based scheme is encouraged, without the involvement of a trusted third party or the issuer of the TPM in the Join sub-protocol.

Finally, C. Nie [17] briefly illustrated the problem with attestation that uses a static root of trust to check the integrity of a platform. They proposed an alternative solution using a dynamic root of trust provided by new CPU architectures such as Intel Trusted Execution Technology (Intel TXT).

Problem Statement

Since 1999, Trusted Computing solutions have emerged to protect computing activities. However, the interface between the end user and applications (software) using the TPM is still inflexible and difficult to implement. Application developers require deep trusted computing knowledge to design, code and test such software. For instance, an existing application needs to be changed or upgraded before it can utilize TPM functionality.

Methods & Solutions


This paper discusses a possible solution that allows existing networking applications without TPM support to use TPM functionalities. This can be done using the concepts of a "trusted compartment" and "trusted services". Furthermore, this paper also discusses a possible way to improve networking software using the TNC protocol, which performs secure and trusted remote attestation.


This paper discusses a method that enables existing networking software to use a Trusted Computing solution via attestation between applications, without interfering with, tampering with or modifying the existing software (source code). Examples of applications that may utilise this method include VPN clients, Internet Explorer, Firefox and FTP, which use TCG's attestation mechanism between applications within a trusted compartment. To allow attestation, we recommend using machines with TPM hardware, or a TPM emulator to simulate the attestation process. We recommend that developers use the Infineon TPM 1.2 chip, because it is the only one on the market that stores the EK credential in the TPM. Besides that, the Infineon TPM has also passed the TCG TPM 1.2 specification compliance testing. We could also use Mario Strasser's TPM Emulator to simulate TPM functionality on a Linux operating system, but it has the shortcoming that every emulator instance uses the same EK values and all PCRs are preset to blank, which means that we need to use TPM_Extend to provide some value for the attestation process. We also need to make minor changes to the source code.

Conclusion & Future Work

This paper presented a method to establish trusted applications wherein we use a trusted agent as a wrapper to allow non-TCG applications to use the TPM. Furthermore, we have also discussed a case study on the implementation of our proposal using a client-server application. In summary, we may use virtualization (a compartment) to provide an isolated and secure environment before deploying trusted applications. For future work, we look forward to implementing our proposal in cloud computing.