State of the Art
The initial work on remote entrusting was carried out by some consortium members within the TrustedFlow research activities [1, 2]. That work introduced the idea of generating a continuous stream of signatures using software only. The remote entrusting methodology proposed in this project is novel and challenging, and represents a major advance beyond the state of the art. However, some specific aspects of the proposed research activities have been addressed in other contexts. The following discussion of the state of the art is therefore divided into several subsections, corresponding to the various research aspects related to RE-TRUST.
Software dependability state of the art

Software dependability is a mature and well-established research area that seeks solutions to the problem of software errors that can corrupt the integrity of an application. Several techniques have been developed to this aim; the most prominent are control-flow checking and data duplication. Control-flow checking techniques supplement the original program code with additional controls verifying that the application is transitioning through expected valid "traces" [3, 4, 5]. In data duplication techniques, program variables are paired with a backup copy [6, 7, 8]: write operations in the program are instrumented to update both copies, and on each read access the two copies are compared for consistency. The main difference between software dependability and the current project concerns the attack model: software dependability assumes that modifications are accidental (random) errors, such as bit flips, whereas remote entrusting deals with intentional and malicious software modifications.

Software tamper resistance state of the art

Among the many possible attacks, the focus here is on the problem of authenticity, i.e., attacks that tamper with application code or data for malicious purposes, such as bypassing licensing or forcing a modified (and thus unauthorized) execution. Different solutions have been proposed in the literature to protect software from the above-mentioned rogue behaviors. Such solutions are surveyed in detail in [9, 10] and briefly described in the following. Obfuscation is used to make application code obscure, so that it is difficult to understand for a potential attacker who wants to reverse engineer the application. Obfuscation techniques change the source code structure without changing its functional behavior, through different kinds of code transformations [11, 12]. Theoretical studies on the complexity of reverse engineering and de-obfuscation are at an early stage.
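As a toy illustration of such a semantics-preserving transformation (not drawn from the cited works; all names are hypothetical), the following sketch hides a simple comparison behind an opaque predicate and an encoded constant while leaving the observable behavior unchanged:

```python
def check_license(key: int) -> bool:
    """Original, readable check."""
    return key == 0xCAFE

def check_license_obfuscated(key: int) -> bool:
    """Same behavior after a (toy) obfuscating transformation."""
    # Opaque predicate: x*(x+1) is always even, so this branch never fires,
    # but a static analyzer must reason about it anyway.
    x = key & 0xFF
    if (x * (x + 1)) % 2 == 1:
        return True  # dead code, present only to confuse analysis
    # Encoded constant: the literal 0xCAFE no longer appears as a plain
    # comparison; both sides are masked with the same value at run time.
    return (key ^ 0x1234) == (0xCAFE ^ 0x1234)
```

Both functions compute the same predicate; the transformed version merely makes the control flow and the embedded constant harder to recover by inspection, which is the essence of the transformations surveyed in [11, 12].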
It is well known that, for binaries that mix code and data, disassembly and de-compilation are undecidable in the worst case [13]. On the other hand, some work has reported that de-obfuscation (under specific and restrictive conditions) is an NP-easy problem [14]. Further, it has been proven that a large number of functions cannot be obfuscated [15].

Replacement background state of the art

The dynamic replacement strategy relies on the assumption that tampering attempts can be made more complex if attackers continuously have to face newer versions. This approach has some similarities with software aging [16], where new updates of a program are distributed frequently; this limits the spread of software "cracks" and allows the software protection techniques embedded in the application to be renewed. Another relevant area of related work comprises techniques for the protection of mobile agents [17, 18]. For instance, previous work proposed a scheme to protect mobile code using a ring-homomorphic encryption scheme based on CEF (computation with encrypted functions) with a non-interactive protocol [19, 20]. However, the existence of such a homomorphic encryption function (also known as a privacy homomorphism) is still an open problem. Furthermore, some approaches combine obfuscation and mobility: for instance, in [21] agents are periodically re-obfuscated to ensure that the receiving host cannot access the agent state.

HW-based entrusting state of the art

Solutions proposed by Trusted Computing initiatives [22, 23, 24, 25] rely both on a trusted hardware component on the motherboard (a co-processor) and on a common architecture that enables a trusted server-side management application to attest the integrity of a machine and to establish its "level of trust". This non-run-time approach has been applied to assess the integrity of a remote machine enhanced with a trusted coprocessor and a modified Linux kernel [26]. In that work a chain of trust is created.
First, the BIOS and the coprocessor measure the integrity of the operating system at start-up; then the operating system measures the integrity of applications, and so on. Other non-run-time approaches rely on additional hardware to allow a remote authority to verify the software and hardware originality of a system [27]. Besides Trusted Computing, another interesting approach is presented in [28]. It has some similarities to the HW/SW method proposed in this project, as it is based on commodity hardware tokens (e.g., smart cards) and on remote execution of selected software components.
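The chain of trust described above, in which each layer measures the next before handing over control, can be sketched as a hash chain in the spirit of TPM-style register extension. The sketch below is a minimal illustration under that assumption; the stage names and the `extend` function are hypothetical, not the mechanism of [26]:

```python
import hashlib

def extend(chain: bytes, component: bytes) -> bytes:
    """Extend the measurement chain with a new component:
    new_value = H(old_value || H(component))."""
    return hashlib.sha256(chain + hashlib.sha256(component).digest()).digest()

# Boot sequence: each stage is measured before control is transferred to it.
chain = b"\x00" * 32  # initial register value
for stage in (b"BIOS", b"bootloader", b"kernel", b"application"):
    chain = extend(chain, stage)

# A remote verifier that knows the expected binaries can recompute the
# chain independently; any tampered stage yields a different final value.
expected = b"\x00" * 32
for stage in (b"BIOS", b"bootloader", b"kernel", b"application"):
    expected = extend(expected, stage)
assert chain == expected
```

Because each extension hashes in the previous value, the final measurement depends on every stage and on their order, so a single modified component anywhere in the boot sequence changes the value reported to the verifier.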
Bibliography