Publications

2015
Vale, T., R. J. Dias, J. A. Silva, and J. M. Lourenço, "Execução concorrente e determinista de transações", Proceedings of INForum Simpósio de Informática, Covilhã, Portugal, 2015.

In this paper we present a concurrency control protocol that guarantees that the concurrent execution of transactions is equivalent to their sequential execution in a predefined order. This makes it possible to execute programs that use transactions deterministically. The protocol (1) enables, for the first time, the deterministic execution of programs that use hardware transactional memory; and (2) guarantees the deterministic execution of programs that use software transactional memory with performance clearly superior to the state of the art.

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Framework Support for the Efficient Implementation of Multi-version Algorithms", Transactional Memory. Foundations, Algorithms, Tools, and Applications, vol. 8913: Springer International Publishing, pp. 166–191, 2015.

Software Transactional Memory algorithms associate metadata with the memory locations accessed during a transaction's lifetime. This metadata may be stored in an external table and accessed by way of a function that maps the address of each memory location to the table entry that keeps its metadata (the out-place or external scheme); or, alternatively, it may be stored adjacent to the associated memory cell by wrapping them together (the in-place scheme). In transactional memory multi-version algorithms, several versions of the same memory location may exist. The efficient implementation of these algorithms requires a one-to-one correspondence between each memory location and its list of past versions, which is stored as metadata. In this chapter we address the efficient implementation of multi-version algorithms in Java by proposing and evaluating a novel in-place metadata scheme for the Deuce framework. This new scheme is based on Java bytecode transformation techniques and its use requires no changes to the application code. Experimentation indicates that multi-versioning STM algorithms implemented with our new in-place scheme are on average 6× faster than when implemented with the out-place scheme.
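
A minimal Java sketch of the two metadata schemes discussed above, assuming a simple versioned-metadata record; the class and field names (TxMetadata, OutPlaceCounter, InPlaceCounter) are illustrative and not part of the Deuce API.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the two metadata schemes; all names are hypothetical.
class TxMetadata {
    volatile long version;        // version of the last committed write
    volatile TxMetadata older;    // link to past versions (multi-versioning)
}

// Out-place (external) scheme: a shared table maps each field's identity to its
// metadata, so every transactional access pays a lookup.
class OutPlaceCounter {
    static final ConcurrentHashMap<String, TxMetadata> TABLE = new ConcurrentHashMap<>();
    int value;

    TxMetadata metadata() {
        return TABLE.computeIfAbsent("OutPlaceCounter.value", k -> new TxMetadata());
    }
}

// In-place scheme: the metadata lives next to the data it guards, so the lookup
// disappears and past versions hang directly off the field.
class InPlaceCounter {
    int value;
    final TxMetadata valueMetadata = new TxMetadata();  // adjacent metadata
}
```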

Sousa, D. G., R. J. Dias, C. Ferreira, and J. M. Lourenço, "Preventing Atomicity Violations with Contracts", ArXiv e-prints, 2015.

Software developers are expected to protect concurrent accesses to shared regions of memory with some mutual exclusion primitive that ensures atomicity properties for a sequence of program statements. This approach prevents data races but may fail to provide all necessary correctness properties. The composition of correlated atomic operations without further synchronization may cause atomicity violations. Atomicity violations may be avoided by grouping the correlated atomic regions in a single, larger atomic scope. Concurrent programs are particularly prone to atomicity violations when they use services provided by third-party packages or modules, since the programmer may fail to identify which services are correlated. In this paper we propose to use contracts for concurrency, where the developer of a module writes a set of contract terms that specify which methods are correlated and must be executed in the same atomic scope. These contracts are then used to verify the correctness of the main program with respect to the usage of the module(s). If a contract is well defined and complete, and the main program respects it, then the program is safe from atomicity violations with respect to that module. We also propose a static analysis based methodology to verify contracts for concurrency, which we applied to some real-world software packages. The bug we found in Tomcat 6.0 was immediately acknowledged and corrected by its development team.
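
To make the idea of a contract for concurrency concrete, here is a hedged Java sketch; the contract notation in the comment and the ContractAbidingClient class are hypothetical illustrations, not the notation or benchmarks used in the paper.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical contract for a Map-like module (sketch only, not the paper's syntax):
//
//   contract:
//     containsKey(k); get(k)   // must run in one atomic scope
//     get(k); put(k, _)        // must run in one atomic scope
//
class ContractAbidingClient {
    private final Map<String, Integer> hits = new HashMap<>();

    // Respects the second contract term: get and put on the same key run in one
    // atomic scope.
    void increment(String key) {
        synchronized (hits) {
            Integer current = hits.get(key);
            hits.put(key, current == null ? 1 : current + 1);
        }
    }

    // Atomicity violation the contract would flag: each call is atomic on its own,
    // but another thread may interleave between the two blocks.
    void incrementBroken(String key) {
        Integer current;
        synchronized (hits) { current = hits.get(key); }
        synchronized (hits) { hits.put(key, current == null ? 1 : current + 1); }
    }
}
```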

Silva, J. A., T. M. Vale, R. J. Dias, H. Paulino, and J. M. Lourenço, "Supporting Multiple Data Replication Models in Distributed Transactional Memory", Proceedings of the 2015 International Conference on Distributed Computing and Networking, Goa, India, ACM, pp. 11:1–11:10, 2015.

Distributed transactional memory (DTM) presents itself as a highly expressive and programmer-friendly model for concurrency control in distributed programming. Current DTM systems make use of both data distribution and replication as a way of providing scalability and fault tolerance, but both techniques have advantages and drawbacks. As such, each one is suitable for different target applications and deployment environments. In this paper we address the support of different data replication models in DTM. To that end we propose ReDstm, a modular and non-intrusive framework for DTM that supports multiple data replication models in a general-purpose programming language (Java). We show its application in the implementation of distributed software transactional memories with different replication models, and evaluate the framework via a set of well-known benchmarks, analysing the impact of the different replication models on memory usage and transaction throughput.
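
As a loose illustration of the design space discussed above, the following hypothetical Java sketch shows a runtime being configured with one of two replication models; the ReplicationModel and DtmRuntime names are assumptions for illustration and do not reflect ReDstm's actual API.

```java
// Hypothetical sketch of choosing a data replication model when setting up a
// DTM runtime, in the spirit of the modular design described above.
class DtmSetup {
    enum ReplicationModel { FULL_REPLICATION, PARTIAL_REPLICATION }

    static final class DtmRuntime {
        final ReplicationModel model;
        final int copiesPerObject;   // replication degree under partial replication

        DtmRuntime(ReplicationModel model, int copiesPerObject) {
            this.model = model;
            this.copiesPerObject = copiesPerObject;
        }
    }

    // Full replication keeps every object on every node (higher memory usage,
    // local reads); partial replication keeps only a few copies of each object,
    // trading memory for coordination on remote reads.
    static DtmRuntime configure(boolean memoryConstrained, int clusterSize) {
        return memoryConstrained
                ? new DtmRuntime(ReplicationModel.PARTIAL_REPLICATION, Math.min(3, clusterSize))
                : new DtmRuntime(ReplicationModel.FULL_REPLICATION, clusterSize);
    }
}
```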

2014
Silva, J. A., T. M. Vale, R. J. Dias, H. Paulino, and J. M. Lourenço, "Supporting Partial Data Replication in Distributed Transactional Memory", Proceedings of Joint Euro-TM/MEDIAN Workshop on Dependable Multicore and Transactional Memory Systems, Vienna, Austria, Jan. 2014.


Fiedor, J., Z. Letko, J. Lourenço, and T. Vojnar, "On Monitoring C/C++ Transactional Memory Programs", Mathematical and Engineering Methods in Computer Science, vol. 8934: Springer International Publishing, pp. 73–87, 2014.

Transactional memory (TM) is an increasingly popular technique for synchronising threads in multi-threaded programs. To address both correctness and performance-related issues of TM programs, one needs to monitor and analyse their execution. However, monitoring concurrent programs (including TM programs) may have a non-negligible impact on their behaviour, which may hamper the objectives of the intended analysis. In this paper, we propose several approaches for monitoring TM programs and study their impact on the behaviour of the monitored programs. The considered approaches range from specialised lightweight monitoring to generic heavyweight monitoring. The implemented monitoring tools are publicly available to the scientific community, and the implementation techniques used for lightweight monitoring of TM programs may be used as an inspiration for developing other specialised lightweight monitors.

2013
Dias, R. J., T. M. Vale, and J. M. Lourenço, "Efficient support for in-place metadata in Java software transactional memory", Concurrency and Computation: Practice and Experience, vol. 25, no. 17, pp. 2394–2411, 2013.

Software transactional memory (STM) algorithms associate metadata with the memory locations accessed during a transaction's lifetime. This metadata may be stored in an external table by resorting to a mapping function that associates the address of a memory cell with the table entry containing the corresponding metadata (out-place or external strategy). Alternatively, the metadata may be stored adjacent to the associated memory cell by wrapping the cell and metadata together (in-place strategy). The implementation techniques to support these two approaches are very different, and each STM framework is usually biased towards one of them, only allowing the efficient implementation of STM algorithms that suit one of the approaches and inhibiting a fair comparison with STM algorithms suiting the other. In this paper, we introduce a technique to implement in-place metadata that does not wrap memory cells, thus overcoming the bias and allowing STM algorithms to directly access the transactional metadata. The proposed technique is available as an extension to Deuce and enables the efficient implementation of a wide range of STM algorithms and their fair (unbiased) comparison in a common STM framework. We illustrate the benefits of our approach by analyzing its impact on two popular transactional memory algorithms, TL2 and multiversioning (which befit the out-place and in-place strategies, respectively), under several transactional workloads.

Vale, T. M., R. J. Dias, and J. M. Lourenço, "On the Relevance of Total-Order Broadcast Implementations in Replicated Software Transactional Memories", Multicore Software Engineering, Performance, and Tools, vol. 8063: Springer Berlin Heidelberg, pp. 49–60, 2013.


2012
Dias, R. J., V. Pessanha, and J. M. Lourenço, "Precise Detection of Atomicity Violations", Haifa Verification Conference, Haifa, Israel, Springer Berlin / Heidelberg, Nov 2012.

Concurrent programs that are free of unsynchronized accesses to shared data may still exhibit unpredictable concurrency errors, called atomicity violations, which include both high-level dataraces and stale-value errors. Atomicity violations occur when programmers make wrong assumptions about the atomicity scope of a code block, incorrectly splitting it into two or more atomic blocks and allowing them to be interleaved with other atomic blocks. In this paper we propose a novel static analysis algorithm that works on a dependency graph of program variables and detects both high-level dataraces and stale-value errors. The algorithm was implemented for a Java Bytecode analyzer and its effectiveness was evaluated with some well-known faulty programs. The results obtained show that our algorithm performs better than previous approaches, achieving higher precision for small and medium-sized programs, making it a good basis for a practical tool.
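
The following small Java example, assumed here purely for illustration, shows the kind of high-level datarace the dependency-graph analysis targets: every accessor is atomic on its own, yet the composed read is not.

```java
// Illustrative sketch; the Point class is not taken from the paper's benchmarks.
class Point {
    private int x, y;

    synchronized void setX(int v) { x = v; }
    synchronized void setY(int v) { y = v; }
    synchronized int getX() { return x; }
    synchronized int getY() { return y; }

    // High-level datarace: each accessor is synchronized, but reading x and y in
    // two separate atomic blocks can observe a point no thread ever wrote,
    // e.g. (oldX, newY) while another thread runs setX/setY.
    int[] snapshotBroken() {
        return new int[] { getX(), getY() };
    }

    // The intended atomicity scope: both reads inside one atomic block.
    synchronized int[] snapshot() {
        return new int[] { x, y };
    }
}
```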

Vale, T. M., R. J. Dias, and J. M. Lourenço, "Uma Infraestrutura para Suporte de Memória Transacional Distribuída", INForum 2012: Proceedings of INForum Simpósio de Informática, Monte de Caparica, Portugal, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, 7 Sep., 2012.

Techniques and algorithms developed on top of different, specific infrastructures can hardly be compared with each other. This principle also applies to infrastructures for executing Distributed Transactional Memory (DTM): not only are those that allow the development, testing and comparison of several algorithms and implementation techniques very scarce, they also expose an intrusive interface to the programmer. Without a fair comparison, it is not possible to assess which techniques and algorithms are most appropriate for each usage context (workload). In this paper we propose a general-purpose, very flexible infrastructure that enables the experimentation of several DTM strategies, allowing the development of a wide variety of algorithms and of efficient, optimized implementation techniques. Using it, it is now possible to compare techniques and algorithms under different workloads, resorting to a single infrastructure and with minimal implications for the application code.

Dias, R. J., D. Distefano, J. C. Seco, and J. M. Lourenço, "Verification of Snapshot Isolation in Transactional Memory Java Programs", Proceedings of the 26th European Conference on Object-Oriented Programming, Beijing, China, 11-16 June, 2012.

This paper presents an automatic verification technique for transactional memory Java programs executing under snapshot isolation level. We certify which transactions in a program are safe to execute under snapshot isolation without triggering the write-skew anomaly, opening the way to run-time optimizations that may lead to considerable performance enhancements. Our work builds on a novel deep-heap analysis technique based on separation logic to statically approximate the read- and write-sets of a transactional memory Java program. We implement our technique and apply our tool to a set of micro benchmarks and also to one benchmark of the STAMP package. We corroborate known results, certifying some of the examples for safe execution under snapshot isolation by proving the absence of write-skew anomalies. In other cases our analysis has identified transactions that potentially trigger previously unknown write-skew anomalies.
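
A brief, hypothetical Java sketch of the write-skew anomaly mentioned above; the Accounts class and its invariant are illustrative and not taken from the paper's benchmarks.

```java
// Illustrative write-skew scenario of the kind the analysis certifies against.
class Accounts {
    int checking = 50;
    int savings  = 50;
    // Invariant both transactions try to preserve: checking + savings >= 0.

    // Transaction T1 (assume it runs atomically under snapshot isolation).
    void withdrawFromChecking(int amount) {
        int c = checking, s = savings;      // reads from T1's snapshot
        if (c + s - amount >= 0) {
            checking = c - amount;          // writes only 'checking'
        }
    }

    // Transaction T2 (also atomic under snapshot isolation).
    void withdrawFromSavings(int amount) {
        int c = checking, s = savings;      // reads from T2's snapshot
        if (c + s - amount >= 0) {
            savings = s - amount;           // writes only 'savings'
        }
    }
    // T1 and T2 have disjoint write-sets, so under snapshot isolation both may
    // commit: withdrawing 60 in each leaves checking + savings == -20, breaking
    // the invariant. This is the write-skew anomaly.
}
```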

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Efficient Support for In-Place Metadata in Transactional Memory", Proceedings of the 18th International Euro-Par Conference on Parallel Processing, Berlin, Heidelberg, Springer-Verlag, 2012.

The correctness of Software Transactional Memory (STM) algorithms relies on metadata associated with the memory locations accessed during the transaction's lifetime. STM implementations may store this metadata either in-place, by wrapping the memory cells in a container that includes the memory cell itself and the corresponding metadata, or out-place, by resorting to a mapping function that associates the memory cell address with an external table holding the corresponding metadata. The implementation techniques for these two approaches are very different, and each STM framework is usually biased towards one of them, only allowing the efficient implementation of algorithms that fall into the appropriate category and inhibiting a fair comparison with STM algorithms falling into the other. In this paper we introduce a technique that supports the use of in-place metadata without requiring memory cells to be wrapped, thus providing STM algorithms with direct access to the transactional metadata and overcoming the bias. The proposed technique is available as an extension to the DeuceSTM framework and allows the efficient implementation of a wide range of STM algorithms, thus enabling their fair (unbiased) comparison in a common STM infrastructure. We illustrate the benefits of our approach by analyzing its impact on two popular TM algorithms, TL2 and multi-versioning (which are biased towards out-place and in-place, respectively), with two different transactional workloads.

2011
Dias, R. J., J. M. Lourenço, and N. Preguiça, "Efficient and Correct Transactional Memory Programs Combining Snapshot Isolation and Static Analysis", Proceedings of the 3rd USENIX Conference on Hot Topics in Parallelism (HotPar'11), Berkeley, USA, Usenix Association, May, 2011.

Concurrent programs may suffer from concurrency anomalies that may lead to erroneous and unpredictable program behaviors. To ensure program correctness, these anomalies must be diagnosed and corrected. This paper addresses the detection of both low- and high-level anomalies in the Transactional Memory setting. We propose a static analysis procedure and a framework to address Transactional Memory anomalies. We start by dealing with the classic case of low-level dataraces, identifying concurrent accesses to shared memory cells that are not protected within the scope of a memory transaction. Then, we address the case of high-level dataraces, bringing the programmer's attention to pairs of memory transactions that were misspecified and should have been combined into a single transaction. Our framework was applied to a set of programs, collected from different sources, containing well-known low- and high-level anomalies. The framework proved to be accurate, confirming the effectiveness of using static analysis techniques to precisely identify concurrency anomalies in Transactional Memory programs.
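
As an illustration of the first (low-level) anomaly class, consider the hedged Java sketch below; the synchronized block stands in for a memory transaction, and the class is an assumption for exposition only.

```java
// Small Java illustration of a low-level datarace: a shared field accessed both
// inside and outside an atomic scope.
class SharedCounter {
    private final Object txScope = new Object();
    private int count;

    // Intended usage: the access runs inside an atomic scope (memory transaction).
    void protectedIncrement() {
        synchronized (txScope) {   // stand-in for an atomic { count++; } block
            count++;
        }
    }

    // Low-level datarace: this access bypasses the atomic scope entirely, so it
    // can interleave with protectedIncrement and lose updates.
    void unprotectedIncrement() {
        count++;
    }
}
```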

Pessanha, V., R. J. Dias, J. M. Lourenço, E. Farchi, and D. Sousa, "Practical verification of high-level dataraces in transactional memory programs", Proceedings of the 9th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, New York, NY, USA, ACM, pp. 26–34, July, 2011.

In this paper we present MoTh, a tool that uses static analysis to enable the automatic verification of concurrency anomalies in Transactional Memory Java programs. Currently MoTh detects high-level dataraces and stale-value errors, but it is extensible by plugging in sensors, each sensor implementing an anomaly-detecting algorithm. We validate and benchmark MoTh by applying it to a set of well-known concurrent buggy programs and by closely comparing the results with those of other similar tools. The results achieved so far are very promising, yielding good accuracy while triggering only a very limited number of false warnings.

Lourenço, J., D. Sousa, B. C. Teixeira, and R. J. Dias, "Detecting concurrency anomalies in transactional memory programs", Comput. Sci. Inf. Syst., vol. 8, no. 2, pp. 533–548, 2011.

Software transactional memory is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to main memory (RAM). This paper discusses how to support revertible operations, such as memory allocation and release, within software libraries that will be used in software memory transactional contexts. The proposal is based on extending the transaction life cycle state diagram with new states associated with the execution of user-defined handlers. The proposed approach is evaluated in terms of functionality and performance by way of a use case study and performance tests. Results demonstrate that the proposal and its current implementation are flexible, generic and efficient.
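
A minimal sketch, assuming a hypothetical Transaction/Handler API, of how user-defined handlers attached to new life cycle states could make an operation with external side effects revertible; none of these names come from the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical transaction object exposing commit/abort handler registration.
class Transaction {
    interface Handler { void run(); }

    private final List<Handler> onCommit = new ArrayList<>();
    private final List<Handler> onAbort  = new ArrayList<>();

    void afterCommit(Handler h) { onCommit.add(h); }
    void afterAbort(Handler h)  { onAbort.add(h); }

    void commit() { onCommit.forEach(Handler::run); }   // "committing handlers" state
    void abort()  { onAbort.forEach(Handler::run); }    // "aborting handlers" state
}

class RevertibleAllocation {
    // A library operation with side effects (e.g. allocating an external buffer)
    // registers a compensating handler so an abort reverts it.
    static byte[] allocate(Transaction tx, int size) {
        byte[] buffer = new byte[size];
        tx.afterAbort(() -> { /* release/return the buffer to its pool */ });
        return buffer;
    }
}
```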

Dias, R. J., J. M. Lourenço, and J. C. Seco, Detection of Snapshot Isolation Anomalies in Software Transactional Memory: A Statical Analysis Approach, no. UNL-DI-5-2011: Departamento de Informática FCT/UNL, 2011.


Dias, R. J., D. Distefano, J. M. Lourenço, and J. C. Seco, StarTM: Automatic Verification of Snapshot Isolation in Transactional Memory Java Programs, no. UNL-DI-6-2011: Departamento de Informática FCT/UNL, 2011.

This paper presents StarTM, an automatic verification tool for transactional memory Java programs executing under relaxed isolation levels. We certify which transactions in a program are safe to execute under Snapshot Isolation without triggering the write-skew anomaly, opening the way to run-time optimizations that may lead to considerable performance enhancements.
Our tool builds on a novel shape analysis technique based on Separation Logic to statically approximate the read- and write-sets of a transactional memory Java program. This technique is particularly challenging due to the presence of dynamically allocated memory.
We implement our technique and apply our tool to a set of intricate examples. We corroborate known results, certifying some of the examples for safe execution under Snapshot Isolation by proving the absence of write-skew anomalies. In other cases we identify transactions that potentially trigger the write-skew anomaly.

2010
Teixeira, B., J. M. Lourenço, E. Farchi, R. J. Dias, and D. Sousa, "Detection of Transactional Memory Anomalies using Static Analysis", Proceedings of the 8th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD'10), New York, NY, USA, ACM, pp. 26–36, 2010.

Transactional Memory allows programmers to reduce the number of synchronization errors introduced in concurrent programs, but does not ensure their complete elimination. This paper proposes a pattern-matching based approach to the static detection of atomicity violations, based on a path-sensitive symbolic execution method to model four anomalies that may affect Transactional Memory programs. The proposed technique may be used to bring to the programmer's attention pairs of transactions that have been mis-specified and should have been combined into a single transaction. The algorithm first traverses the AST, removing all the non-transactional blocks and generating a trace tree in a path-sensitive manner for each thread. The trace tree is a Trie-like data structure, where each path from root to a leaf is a list of transactions. For each pair of threads, erroneous patterns involving two consecutive transactions are then checked in the trace tree. The results allow us to conclude that the proposed technique, although triggering a moderate number of false positives, can be successfully applied to Java programs, correctly identifying the vast majority of the relevant erroneous patterns.
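
The trace tree can be pictured with the following rough Java sketch; the TraceTree and Node classes are illustrative assumptions, not the paper's implementation.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Trie-like trace tree: each path from the root to a leaf is one possible
// sequence of transactions executed by a thread.
class TraceTree {
    static class Node {
        final String transactionId;                        // e.g. "T_readBalance"
        final Map<String, Node> children = new LinkedHashMap<>();
        Node(String transactionId) { this.transactionId = transactionId; }
    }

    private final Node root = new Node("<root>");

    // Insert one path-sensitive trace (a list of transaction identifiers).
    void addTrace(List<String> transactions) {
        Node current = root;
        for (String tx : transactions) {
            current = current.children.computeIfAbsent(tx, Node::new);
        }
    }

    // Erroneous patterns are then checked over consecutive transactions along
    // each path, for every pair of threads' trace trees.
    Node root() { return root; }
}
```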

Duro, N., R. Santos, J. M. Lourenço, H. Paulino, and J. Martins, "Open Virtualization Framework for Testing Ground Systems", Proceedings of the 8th Workshop on Parallel and Distributed Systems (PADTAD'10), New York, NY, USA, ACM, pp. 67–73, 2010.

The recent developments in virtualization completely change the panorama of Hardware/OS deployment. New bottlenecks arise in the deployment of application stacks, where the IT industry will spend most of its time assuring automation. The VIRTU tool aims at managing, configuring and testing distributed ground applications of space systems in a virtualized environment, based on open tools and cross-virtualization support. This tool is a spin-off of previous activities performed by the European Space Operations Center (ESOC) and thus it covers the original needs of the ground data systems infrastructure division of the European Space Agency. VIRTU is a testing-oriented solution. Its ability to group several virtual machines in an assembly provides the means to easily deploy a full testing infrastructure, including the client/server relationships. The possibility of making on-demand requests of the testing infrastructure will provide some infrastructure optimizations, especially considering that ESA maintains Ground Control software for various missions, and each mission can potentially have a different set of system baselines and last up to 15 years. The matrix of supported system combinations is therefore enormous, and any improvement to the process provides substantial benefits to ESA by reducing the effort and schedule of each maintenance activity. ESOC's case study focuses on the development and validation activities of infrastructure or mission Ground Systems solutions. The Ground Systems solutions are typically composed of distributed systems that could take advantage of virtualized environments for testing purposes. Virtualization is used as a way to optimize maintenance for tasks such as testing new releases and patches, testing different system configurations and replicating tests. The main benefits identified are related to the deployment of the test environment and the possibility of having on-demand infrastructure.

Paulino, H., J. A. Martins, J. M. Lourenço, and N. Duro, "SmART: An Application Reconfiguration Framework", Complex Systems Design & Management: Springer Berlin Heidelberg, pp. 73–84, 2010.

SmART (Smart Application Reconfiguration Tool) is a framework for the automatic configuration of systems and applications. The tool implements an application configuration workflow that resorts to the similarities between configuration files (i.e., patterns such as parameters, comments and blocks) to allow a syntax-independent manipulation and transformation of system and application configuration files. Without compromising its generality, SmART targets virtualized IT infrastructures, configuring virtual appliances and their applications. SmART reduces the time required to (re)configure a set of applications by automating time-consuming steps of the process, independently of the nature of the application to be configured. Industrial experimentation and utilization of SmART show that the framework is able to correctly transform a large amount of configuration files into a generic syntax and back to their original syntax. They also show that the time elapsed in that process is adequate to what would be expected of an interactive tool. SmART is currently being integrated into the VIRTU bundle, whose trial version is available for download from the project's web page.

Dias, R. J., J. Seco, and J. M. Lourenço, "Snapshot Isolation Anomalies Detection in Software Transactional Memory", Proceedings of INForum Simpósio de Informática (InForum 2010), Braga, Portugal, Universidade do Minho, 2010.

Some performance issues of transactional memory are caused by unnecessary abort situations, where non-serializable and yet non-conflicting transactions are scheduled to execute concurrently. Smartly relaxing the isolation properties of transactions may overcome these issues and attain considerable performance improvements. However, it is known that relaxing isolation restrictions may lead to runtime anomalies. In some situations, like database management systems, developers may choose that compromise, hence avoiding anomalies explicitly. Memory transactions protect the state of the program, therefore execution anomalies may have more severe consequences in the semantics of programs. So, the compromise between a relaxed isolation strategy and enforcing the necessary program correctness is harder to set up. The solution we devise is to statically analyse programs to detect the kind of anomalies that emerge under snapshot isolation. Our approach allows a compiler to either warn the developer about the possible snapshot isolation anomalies in a given program, or possibly inform automatic correctness strategies to ensure serializability.

2009
Lourenço, J. M., N. Preguiça, R. J. Dias, J. N. Silva, J. Garcia, and L. Veiga, "NGenVM: New Generation Execution Environments", EuroSys, Nuremberg, Germany, 2009.

This document describes the work-in-progress development of NGen-VM, a distributed infrastructure that manages execution environments with run-time and programming language support targeting applications developed in the Java programming language and deployed over clusters of many-core computers. For each running application or suite of related applications, a dedicated single-system image will be provided, regardless of the concurrent threads running on a single machine (on several cores) or scattered over different computers. Such system images rely on a single model for concurrency management (the Transactional Shared Memory Model), in order to fill the gap between the hardware infrastructure of clusters of many-core nodes and the application runtime that is independent from that hardware infrastructure. Interactions between threads in the same task will be supported by a Transactional Memory framework that provides the programming language with Atomic and Isolated code regions. Interactions between threads on different machines will also use the Transactional Memory model, but now resorting to a Distributed Shared Memory abstraction.

Lourenço, J. M., R. J. Dias, J. Luís, M. Rebelo, and V. Pessanha, "Understanding the Behavior of Transactional Memory Applications", Proceedings of the 7th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD'09), New York, NY, USA, ACM, pp. 31–39, 2009.

Transactional memory is a new trend in concurrency control that was boosted by the advent of multi-core processors and the soon-to-come many-core processors. It promises the performance of fine-grained synchronization with the simplicity of coarse-grained threading. However, there is a clear absence of software development tools oriented to the transactional memory programming model, which is confirmed by the very small number of related scientific works published until now. This paper describes ongoing work. We propose a very low-overhead monitoring framework, developed specifically for monitoring TM computations, that collects the transactional events into a single log file, sorted in a global order. This framework is then used by a visualization tool to display different types of charts from two categories: statistical charts and thread-time space diagrams. The latter diagrams are interactive, allowing the identification of conflicting transactions. We use the visualization tool to analyse the behavior of two different, but similar, testing applications, illustrating how it can be used to better understand the behavior of these transactional memory applications.
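
A simplified sketch, under assumed names (TxEventLog, Event), of how transactional events might be collected into a single globally ordered log as described above; it is not the actual format or API of the monitoring framework.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Low-overhead event collection: every transactional event receives a global
// sequence number and is appended to a single shared log.
class TxEventLog {
    enum Kind { BEGIN, READ, WRITE, COMMIT, ABORT }

    static final class Event {
        final long globalOrder;   // position in the single, globally ordered log
        final long threadId;
        final Kind kind;
        final String location;    // e.g. "Account.balance"

        Event(long globalOrder, long threadId, Kind kind, String location) {
            this.globalOrder = globalOrder;
            this.threadId = threadId;
            this.kind = kind;
            this.location = location;
        }
    }

    private static final AtomicLong CLOCK = new AtomicLong();
    private static final ConcurrentLinkedQueue<Event> LOG = new ConcurrentLinkedQueue<>();

    // Called by the instrumented TM runtime at each transactional event.
    static void record(Kind kind, String location) {
        LOG.add(new Event(CLOCK.incrementAndGet(),
                          Thread.currentThread().getId(), kind, location));
    }
}
```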

Dias, R. J., and J. M. Lourenço, "Unifying Memory and Database Transactions", Proceedings of the 15th International Euro-Par Conference on Parallel Processing, Berlin, Heidelberg, Springer-Verlag, pp. 349–360, 2009.

Software Transactional Memory is a concurrency control technique gaining increasing popularity, as it provides high-level concurrency control constructs and eases the development of highly multi-threaded applications. But this easiness comes at the expense of restricting the operations that can be executed within a memory transaction, and operations such as terminal and file I/O are either not allowed or incur serious performance penalties. Database I/O is another example of an operation that is usually not allowed within a memory transaction. This paper proposes to combine memory and database transactions in a single unified model, benefiting from the ACID properties of the database transactions and from the speed of main memory data processing. The new unified model covers, without differentiating, both memory and database operations. Thus, users are allowed to freely intertwine memory and database accesses within the same transaction, knowing that the memory and database contents will always remain consistent and that the transaction will atomically abort or commit the operations in both memory and database. This approach allows increasing the granularity of the in-memory atomic actions and hence simplifies reasoning about them.
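
A hypothetical usage sketch of the unified model described above, intertwining a memory update with a JDBC database update in one logical transaction; the OrderProcessor class and the manual compensation are illustrative only, since the paper's runtime would coordinate both commits itself.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative sketch: memory and database accesses intertwined in one
// transaction that commits or aborts both.
class OrderProcessor {
    int ordersInMemory;                   // transactional in-memory state

    void placeOrder(Connection db, int orderId) throws SQLException {
        // Conceptually one atomic block over memory and database; a unified
        // runtime would coordinate the STM commit with the JDBC commit.
        db.setAutoCommit(false);
        try (PreparedStatement stmt =
                 db.prepareStatement("INSERT INTO orders(id) VALUES (?)")) {
            ordersInMemory++;             // memory access inside the transaction
            stmt.setInt(1, orderId);      // database access in the same transaction
            stmt.executeUpdate();
            db.commit();                  // in the unified model, both commit atomically
        } catch (SQLException e) {
            db.rollback();                // and both roll back together on abort
            ordersInMemory--;             // manual compensation here; the unified model
                                          // would undo the memory write automatically
            throw e;
        }
    }
}
```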

2008
Cunha, G., J. Lourenço, and R. J. Dias, "Consistent State Software Transactional Memory", IV Jornadas de Engenharia de Electrónica e Telecomunicações e de Computadores (JETC'08), Lisboa, Portugal, ISEL - Instituto Superior de Engenharia de Lisboa, pp. 251–256, 2008.

Software transactional memory (STM) is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to memory (RAM) locations. In this paper we propose a new classification for the active states of a transaction and a new memory quiescing algorithm that allows the safe transition of a memory block from transactional to non-transactional space; we also compare word and object transactional grain units, and evaluate the cost of consistent state validation, arguing that this cost can be minimized by performing partial validation on problematic code regions.