Publications

Book Chapter
Soares, J., J. M. Lourenço, and N. Preguiça, "MacroDB: Scaling Database Engines on Multicores", Euro-Par 2013 Parallel Processing, vol. 8097: Springer Berlin Heidelberg, pp. 607-619, 2013. [europar2013-soares.pdf]

Conference Paper
Sousa, D. G., J. M. Lourenço, E. Farchi, and I. Segall, "Aplicação do Fecho de Programas na Deteção de Anomalias de Concorrência", INForum 2012: Proceedings of INForum Simpósio de Informática, Monte de Caparica, PT, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, 6 Sep., 2012. [inforum-closure.pdf]

One of the strategies for taking advantage of the multiple processors available in today's computers is to adapt legacy code, originally designed to run in a purely sequential context, so that it can now run in a multithreaded context. In this adaptation process it is necessary to properly protect the data that is now shared and accessed by different concurrent threads. Protecting the data with coarse-grained locks inhibits concurrency and defeats the original goal of exploiting the parallelism supported by multiple processors. On the other hand, using fine-grained locking may lead to concurrency-specific anomalies, such as deadlocks and atomicity violations (high-level data races). This paper discusses the concept of the closure of a program and a methodology that, when applied together, make it possible to adapt legacy code to make it thread-safe, guaranteeing the absence of atomicity violations in the current version of the software and anticipating some atomicity violations that may occur in future versions of the same software.

Silva, J., J. M. Lourenço, and H. Paulino, "Boosting Locality in Multi-version Partial Data Replication", Proceedings of the 30th ACM/SIGAPP Symposium On Applied Computing (SAC'15), 2015. [sac15_cache.pdf]

Silva, J. A., H. Paulino, and J. M. Lourenço, "Crowd-Sourcing Mobile Devices to Provide Storage in Edge-Clouds", Proceedings of the Doctoral Symposium of the 16th International Conference on Distributed Computing and Networking, Jan, 2015. [icdcn15srf.pdf]

Given the proliferation and enhanced capabilities of mobile devices, their computational and storage resources can now be combined in a wireless cloud of nearby mobile devices, a mobile edge-cloud. These clouds are of particular interest in low-connectivity scenarios, e.g., sporting events and disaster scenarios. In these dynamic clouds it is necessary to reliably disseminate and share data, and also to offload data processing computations to other devices in the edge-cloud. We are particularly interested in supporting storage services in this new type of edge-cloud, as a means to enable data sharing, dissemination and querying, as well as to serve as a distributed file system for offloaded computations. In this Ph.D. thesis, we propose to address these questions by researching the usage of ad-hoc clouds of mobile devices to develop an efficient storage service capable of providing high availability and reliability.

Teixeira, B., J. M. Lourenço, E. Farchi, R. J. Dias, and D. Sousa, "Detection of Transactional Memory Anomalies using Static Analysis", Proceedings of the 8th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD'10), New York, NY, USA, ACM, pp. 26–36, 2010. [padatad-teixeira-2010.pdf]

Transactional Memory allows programmers to reduce the number of synchronization errors introduced in concurrent programs, but does not ensure their complete elimination. This paper proposes a pattern-matching-based approach to the static detection of atomicity violations, based on a path-sensitive symbolic execution method to model four anomalies that may affect Transactional Memory programs. The proposed technique may be used to bring to the programmer's attention pairs of transactions that the programmer has mis-specified and should have been combined into a single transaction. The algorithm first traverses the AST, removing all the non-transactional blocks and generating a trace tree in a path-sensitive manner for each thread. The trace tree is a trie-like data structure, where each path from the root to a leaf is a list of transactions. For each pair of threads, erroneous patterns involving two consecutive transactions are then checked in the trace tree. The results allow us to conclude that the proposed technique, although triggering a moderate number of false positives, can be successfully applied to Java programs, correctly identifying the vast majority of the relevant erroneous patterns.
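
A minimal sketch of the kind of mis-specified transaction pair such an analysis reports (my own illustration, not code from the paper; synchronized blocks stand in for memory transactions): the value read in the first transaction may already be stale when the second transaction commits the update.

    class Counter {
        private int value = 0;

        synchronized int get()       { return value; }   // "transaction" 1: read
        synchronized void set(int v) { value = v; }      // "transaction" 2: write

        // Each method is atomic on its own, but the read-modify-write sequence
        // is not: get() and set() should have been a single transaction, so a
        // concurrent increment between the two calls is silently overwritten.
        void increment() {
            int v = get();
            set(v + 1);
        }
    }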

Vale, T., R. J. Dias, J. A. Silva, and J. M. Lourenço, "Execução concorrente e determinista de transações", Proceedings of INForum Simpósio de Informática, Covilhã, Portugal, 2015. [inforum15-pot.pdf]

In this paper we present a concurrency control protocol that guarantees that the concurrent execution of transactions is equivalent to their sequential execution in a predefined order. This makes it possible to execute programs that use transactions deterministically. The protocol (1) enables, for the first time, the deterministic execution of programs that use hardware transactional memory; and (2) guarantees the deterministic execution of programs that use software transactional memory with performance clearly superior to the state of the art.

Duro, N., R. Santos, J. M. Lourenço, H. Paulino, and J. Martins, "Open Virtualization Framework for Testing Ground Systems", Proceedings of the 8th Workshop on Parallel and Distributed Systems (PADTAD'10), New York, NY, USA, ACM, pp. 67–73, 2010. [padtad-duro-2010.pdf]

Recent developments in virtualization are completely changing the panorama of hardware/OS deployment. New bottlenecks arise in the deployment of application stacks, where the IT industry will spend most of its time ensuring automation. The VIRTU tool aims at managing, configuring and testing distributed ground applications of space systems in a virtualized environment, based on open tools and cross-virtualization support. This tool is a spin-off of previous activities performed by the European Space Operations Center (ESOC) and thus covers the original needs of the ground data systems infrastructure division of the European Space Agency. VIRTU is a testing-oriented solution. Its ability to group several virtual machines in an assembly provides the means to easily deploy a full testing infrastructure, including the client/server relationships. The possibility of making on-demand requests for the testing infrastructure will provide some infrastructure optimizations, especially bearing in mind that ESA maintains ground control software for various missions, and each mission can potentially have a different set of system baselines and last up to 15 years. The matrix of supported system combinations is therefore enormous, and any improvement to the process provides substantial benefits to ESA by reducing the effort and schedule of each maintenance activity. ESOC's case study focuses on the development and validation activities of infrastructure or mission Ground Systems solutions. The Ground Systems solutions are typically composed of distributed systems that could take advantage of virtualized environments for testing purposes. Virtualization is used as a way to optimize maintenance tasks such as testing new releases and patches, testing different system configurations, and replicating tests. The main benefits identified are related to the deployment of the test environment and the possibility of having on-demand infrastructure.

Pessanha, V., R. J. Dias, J. M. Lourenço, E. Farchi, and D. Sousa, "Practical verification of high-level dataraces in transactional memory programs", Proceedings of the 9th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, New York, NY, USA, ACM, pp. 26–34, July, 2011. [isstaws11padtad-4-pessanha.pdf]

In this paper we present MoTh, a tool that uses static analysis to enable the automatic verification of concurrency anomalies in Transactional Memory Java programs. Currently MoTh detects high-level dataraces and stale-value errors, but it is extensible by plugging in sensors, each sensor implementing an anomaly-detecting algorithm. We validate and benchmark MoTh by applying it to a set of well-known concurrent buggy programs and by closely comparing the results with other similar tools. The results achieved so far are very promising, yielding good accuracy while triggering only a very limited number of false warnings.

Sousa, D. G., C. Ferreira, and J. M. Lourenço, "Prevenção de Violações de Atomicidade usando Contractos", Proceedings of INForum Simpósio de Informática, Lisbon, Portugal, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, pp. 190–201, Sep, 2013. [inforum2013-sousa.pdf]

Concurrent programming forces the programmer to synchronize concurrent accesses to shared memory regions; however, this approach is not sufficient to avoid all the anomalies that can arise in a concurrent setting. Executing a sequence of atomic operations may cause atomicity violations if those operations are correlated, in which case the programmer must ensure that the whole sequence of operations is executed atomically. This problem is especially common when using operations from third-party packages or modules, since the programmer may incorrectly identify the scope of the code regions that need to be atomic to guarantee the correct behavior of the program. To avoid this problem the module's author can write a contract that specifies which sequences of the module's operations must always be executed atomically. This work presents a static analysis for verifying such contracts.

Martins, H. R. L., J. Soares, J. M. Lourenço, and N. Preguiça, "Replicação Multi-nível de Bases de Dados em Memória", Proceedings of INForum Simpósio de Informática, Lisbon, Portugal, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, pp. 190–201, Sep, 2013. [inforum2013-martins.pdf]

Web services are often supported by systems with a layered architecture, using relational databases for data storage. Replicating the various components has been one of the approaches used to improve the scalability of these services. Additionally, using in-memory databases allows higher performance to be achieved. However, databases are known to scale poorly with the number of cores on multicore machines. In this paper we propose a new approach to deal with this problem, called MacroDDB. Using a hierarchical replication solution, our proposal replicates the database across several nodes, with each node in turn running a set of database replicas. This approach thus deals with the lack of scalability of relational databases on multicore machines, which in turn improves the overall scalability of the services.

Silva, J. A., T. M. Vale, J. M. Lourenço, and H. Paulino, "Replicação Parcial com Memória Transacional Distribuída", Proceedings of INForum Simpósio de Informática, Lisbon, Portugal, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, pp. 310–321, 2013. [inforum13-silva.pdf]

Current distributed transactional memory systems rely essentially on data distribution or on full replication to spread their data across the multiple nodes of the system. However, these data replication strategies have limitations. Distribution offers no fault tolerance, and full replication limits the storage capacity of the system. In this context, partial data replication emerges as an intermediate solution that combines the best of the two previous ones in order to mitigate their disadvantages. This strategy has been explored in the context of distributed databases, but it has received little attention in the context of transactional memory and, to the best of our knowledge, it had never been incorporated into a distributed transactional memory system for a general-purpose language. Thus, in this paper we propose and evaluate an infrastructure for partial data replication of Java bytecode programs, built on top of an existing distributed transactional memory system. The modularity of the infrastructure we present allows the implementation of multiple algorithms and, therefore, makes it possible to evaluate in which usage contexts (workloads, number of nodes, etc.) partial replication is a viable alternative to other data replication strategies.

Dias, R. J., J. Seco, and J. M. Lourenço, "Snapshot Isolation Anomalies Detection in Software Transactional Memory", Proceedings of INForum Simpósio de Informática (InForum 2010), Braga, Portugal, Universidade do Minho, 2010. [INForum-dias-2010.pdf]

Some performance issues of transactional memory are caused by unnecessary abort situations where non-serializable and yet non-conflicting transactions are scheduled to execute concurrently. Smartly relaxing the isolation properties of transactions may overcome these issues and attain considerable performance improvements. However, it is known that relaxing isolation restrictions may lead to runtime anomalies. In some settings, like database management systems, developers may choose that compromise and avoid anomalies explicitly. Memory transactions protect the state of the program, and therefore execution anomalies may have more severe consequences on the semantics of programs. So, the compromise between a relaxed isolation strategy and enforcing the necessary program correctness is harder to set up. The solution we devise is to statically analyse programs to detect the kind of anomalies that emerge under snapshot isolation. Our approach allows a compiler to either warn the developer about the possible snapshot isolation anomalies in a given program, or possibly inform automatic correctness strategies to ensure serializability.

Soares, J., J. M. Lourenço, and N. Preguiça, "Software Component Replication for Improved Fault-Tolerance: Can Multicore Processors Make It Work?", Dependable Computing, vol. 7869: Springer Berlin Heidelberg, pp. 173-180, 2013. [ewdc2013.pdf]

Teixeira, B., J. M. Lourenço, and D. Sousa, "A Static Approach for Detecting Concurrency Anomalies in Transactional Memory", Proceedings of INForum Simpósio de Informática (InForum 2010), Braga, Portugal, Universidade do Minho, 2010. [INForum-teixeira-2010.pdf]

Programs containing concurrency anomalies will most probably exhibit harmful erroneous and unpredictable behaviors. To ensure program correctness, the sources of those anomalies must be located and corrected. Concurrency anomalies in Transactional Memory (TM) programs should also be diagnosed and fixed. In this paper we propose a framework to deal with two different categories of concurrency anomalies in TM. First, we address low-level TM anomalies, also called dataraces, which arise from executing programs in weak isolation. Secondly, we address high-level TM anomalies, also called high-level dataraces, bringing to the programmer's attention pairs of transactions that have been mis-specified and should have been combined into a single transaction. Our framework was validated against a set of programs with well-known anomalies and demonstrated high accuracy and effectiveness, thus contributing to improving the correctness of TM programs.

Silva, J. A., T. M. Vale, R. J. Dias, H. Paulino, and J. M. Lourenço, "Supporting Multiple Data Replication Models in Distributed Transactional Memory", Proceedings of the 2015 International Conference on Distributed Computing and Networking, Goa, India, ACM, pp. 11:1–11:10, 2015. [icdcn15-jsilva.pdf]

Distributed transactional memory (DTM) presents itself as a highly expressive and programmer-friendly model for concurrency control in distributed programming. Current DTM systems make use of both data distribution and replication as a way of providing scalability and fault tolerance, but both techniques have advantages and drawbacks. As such, each one is suitable for different target applications and deployment environments. In this paper we address the support of different data replication models in DTM. To that end we propose ReDstm, a modular and non-intrusive framework for DTM that supports multiple data replication models in a general-purpose programming language (Java). We show its application in the implementation of distributed software transactional memories with different replication models, and evaluate the framework via a set of well-known benchmarks, analysing the impact of the different replication models on memory usage and transaction throughput.

Silva, J. A., T. M. Vale, R. J. Dias, H. Paulino, and J. M. Lourenço, "Supporting Partial Data Replication in Distributed Transactional Memory", Proceedings of the Joint Euro-TM/MEDIAN Workshop on Dependable Multicore and Transactional Memory Systems, Vienna, Austria, Jan, 2014. [dmtm14-jsilva.pdf]

Silva, J. A., J. M. Lourenço, and H. Paulino, "Um Mecanismo de Caching para o Protocolo SCORe", Proceedings of INForum Simpósio de Informática, Porto, Portugal, FEUP Edições, pp. 260–275, Sep, 2014. [inforum14-jsilva.pdf]

Partial data replication protocols have great scalability potential. SCORe is a recently proposed partial replication protocol that makes use of multi-version concurrency control. In this paper we address one of the main problems affecting the performance of this kind of protocol: data locality, i.e., the local node may not hold a copy of the data it wants to access, in which case one or more remote read operations are required. Thus, unless techniques are employed to improve data access locality, the number of remote read operations grows with the size of the system, eventually hurting its performance. To that end, we introduce a caching mechanism that replicates copies of remote data so that remote data can be served locally while preserving data consistency and the scalability offered by the protocol. We evaluate the caching mechanism with a well-known benchmark from the literature, and the experimental results are encouraging, showing some increase in system performance and a considerable reduction in the number of remote read operations.

Farchi, E., I. Segall, J. M. Lourenço, and D. Sousa, "Using Program Closures to Make an Application Programming Interface (API) Implementation Thread Safe", PADTAD'12: Proceedings of the 10th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging, Minneapolis, MN, USA, ACM, 2012. [programclosure.pdf]

A set of methods defining an API (Application Programming Interface) is to be made thread safe; thus running any subset of these methods in parallel should not create races or deadlocks. Originally, the set of methods was not designed to be thread safe, so races and deadlocks are expected when running them in parallel. The number of possible interleavings when running methods from this API in parallel is huge, and this work focuses on the identification of the high-level data races introduced by such interleavings. We propose an analysis that avoids the exhaustive exploration of all possible interleavings. For a concurrent program P, the closure of P, clos(P), is defined. Roughly speaking, clos(P) is obtained by adding threads to P in such a way that high-level data races resulting from running P in parallel with other programs are exposed statically. A set of methods representing the API is then modeled as a set of concurrent programs, and their closure is analysed to identify high-level data races. These high-level data races are then inspected and removed to make the API thread safe. We illustrate the application of this methodology with a simple use case.
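
As a rough illustration (an assumed example; the closure below is hand-built rather than produced by the construction described in the paper), the closure of a program P that composes two API calls adds a thread exercising the same API, so the high-level data race between the two calls becomes visible to a static analysis:

    class ClosureSketch {
        interface Registry {                       // the API being made thread safe
            boolean isRegistered(String id);
            void register(String id);
        }

        // Program P: composes two API methods that are individually thread safe.
        static void p(Registry r, String id) {
            if (!r.isRegistered(id)) {             // high-level race window opens here
                r.register(id);
            }
        }

        // clos(P): P plus an added thread running interfering API calls, which
        // exposes the interleaving where "42" is registered between P's check
        // and P's register call.
        static void closP(Registry r) throws InterruptedException {
            Thread added = new Thread(() -> r.register("42"));
            added.start();
            p(r, "42");
            added.join();
        }
    }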

Dias, R. J., D. Distefano, J. C. Seco, and J. M. Lourenço, "Verification of Snapshot Isolation in Transactional Memory Java Programs", Proceedings of the 26th European Conference on Object-Oriented Programming, Beijing, China, 11-16 June, 2012. [ecoop12.pdf]

This paper presents an automatic verification technique for transactional memory Java programs executing under snapshot isolation level. We certify which transactions in a program are safe to execute under snapshot isolation without triggering the write-skew anomaly, opening the way to run-time optimizations that may lead to considerable performance enhancements. Our work builds on a novel deep-heap analysis technique based on separation logic to statically approximate the read- and write-sets of a transactional memory Java program. We implement our technique and apply our tool to a set of micro benchmarks and also to one benchmark of the STAMP package. We corroborate known results, certifying some of the examples for safe execution under snapshot isolation by proving the absence of write-skew anomalies. In other cases our analysis has identified transactions that potentially trigger previously unknown write-skew anomalies.
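
For illustration, a minimal write-skew instance of the kind this verification targets (my own example, not taken from the paper): each transaction preserves the invariant x + y >= 0 when run alone, yet under snapshot isolation both read the same snapshot, write disjoint locations, and both commit.

    class WriteSkew {
        static int x = 1, y = 1;          // intended invariant: x + y >= 0

        static void t1() {                // assume the TM runs this atomically
            if (x + y >= 2) {             // read set: {x, y}
                x -= 2;                   // write set: {x}
            }
        }

        static void t2() {                // assume the TM runs this atomically
            if (x + y >= 2) {             // read set: {x, y}
                y -= 2;                   // write set: {y}
            }
        }
        // Serially, only one guard succeeds and x + y stays at 0. Under snapshot
        // isolation both transactions see x + y == 2, both commit, and the final
        // state x == -1, y == -1 violates the invariant.
    }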

Conference Proceedings
Eder, K., J. M. Lourenço, and O. Shehory, "Hardware and Software: Verification and Testing", Haifa Verification Conference, Haifa, Israel, Springer Berlin / Heidelberg, 2011.

Lourenço, J. M., N. Preguiça, R. J. Dias, J. N. Silva, J. Garcia, and L. Veiga, "NGenVM: New Generation Execution Environments", EuroSys, Nuremberg, Germany, 2009. [ngenvm-poster.pdf]

This document describes the work-in-progress development of NGen-VM, a distributed infrastructure that manages execution environments with run-time and programming-language support targeting applications developed in the Java programming language and deployed over clusters of many-core computers. For each running application or suite of related applications, a dedicated single-system image will be provided, regardless of the concurrent threads running on a single machine (on several cores) or scattered over different computers. Such system images rely on a single model for concurrency management (the Transactional Shared Memory Model), in order to fill the gap between the hardware infrastructure of clusters of many-core nodes and the application runtime that is independent from that hardware infrastructure. Interactions between threads in the same task will be supported by a Transactional Memory framework that provides the programming language with atomic and isolated code regions. Interactions between threads on different machines will also use the Transactional Memory model, but now resorting to a Distributed Shared Memory abstraction.

Journal Article
Lourenço, J., D. Sousa, B. C. Teixeira, and R. J. Dias, "Detecting concurrency anomalies in transactional memory programs", Comput. Sci. Inf. Syst., vol. 8, no. 2, pp. 533–548, 2011. [comsis-2011.pdf]

Software transactional memory is a promising programming model that adapts many concepts borrowed from the database world to control concurrent accesses to main memory (RAM). This paper discusses how to support revertible operations, such as memory allocation and release, within software libraries that will be used in software transactional memory contexts. The proposal is based on the extension of the transaction life-cycle state diagram with new states associated with the execution of user-defined handlers. The proposed approach is evaluated in terms of functionality and performance by way of a use-case study and performance tests. The results demonstrate that the proposal and its current implementation are flexible, generic and efficient.
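
A minimal sketch of the idea under assumed names (Transaction, onAbort and onCommit are hypothetical, not the API defined in the paper): a library operation with side effects registers compensating handlers that run in the new life-cycle states, so an aborted transaction reverts the allocation.

    class TransactionalBuffers {
        interface Transaction {
            void onAbort(Runnable handler);    // runs if the transaction aborts
            void onCommit(Runnable handler);   // runs after a successful commit
        }

        // A revertible library operation used inside a memory transaction:
        // the buffer is taken from a pool and must be returned if the
        // enclosing transaction aborts.
        static byte[] allocateInTransaction(Transaction tx, int size) {
            byte[] buffer = acquireFromPool(size);
            tx.onAbort(() -> returnToPool(buffer));
            return buffer;
        }

        static byte[] acquireFromPool(int size) { return new byte[size]; }
        static void returnToPool(byte[] buffer) { /* make the buffer reusable */ }
    }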

Sousa, D. G., R. J. Dias, C. Ferreira, and J. M. Lourenço, "Preventing Atomicity Violations with Contracts", ArXiv e-prints, 2015. [1505.02951v1-dsousa.pdf]

Software developers are expected to protect concurrent accesses to shared regions of memory with some mutual exclusion primitive that ensures atomicity properties for a sequence of program statements. This approach prevents data races but may fail to provide all the necessary correctness properties. The composition of correlated atomic operations without further synchronization may cause atomicity violations. Atomicity violations may be avoided by grouping the correlated atomic regions in a single, larger atomic scope. Concurrent programs are particularly prone to atomicity violations when they use services provided by third-party packages or modules, since the programmer may fail to identify which services are correlated. In this paper we propose to use contracts for concurrency, where the developer of a module writes a set of contract terms that specify which methods are correlated and must be executed in the same atomic scope. These contracts are then used to verify the correctness of the main program with respect to the usage of the module(s). If a contract is well defined and complete, and the main program respects it, then the program is safe from atomicity violations with respect to that module. We also propose a static-analysis-based methodology to verify contracts for concurrency, which we applied to some real-world software packages. The bug we found in Tomcat 6.0 was immediately acknowledged and corrected by its development team.
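
As an illustration (the contract notation below is informal and the module is invented for the example; neither is taken from the paper), a contract term states that a membership test and the subsequent insertion are correlated and must run in one atomic scope, and a client that calls them in separate atomic regions violates it:

    import java.util.HashSet;
    import java.util.Set;

    // Contract for StringSet (informal notation):
    //   contains add      -- test-then-insert must be executed atomically
    //   contains remove   -- test-then-remove must be executed atomically
    class StringSet {
        private final Set<String> items = new HashSet<>();

        public synchronized boolean contains(String s) { return items.contains(s); }
        public synchronized void add(String s)         { items.add(s); }
        public synchronized void remove(String s)      { items.remove(s); }
    }

    class Client {
        // Violates the "contains add" contract term: each call is atomic, but
        // another thread may add s between them, so the static analysis flags
        // this pair for enclosure in a single atomic region.
        static void addIfAbsent(StringSet set, String s) {
            if (!set.contains(s)) {
                set.add(s);
            }
        }
    }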

Kwiatkowski, J., M. Andruszkiewicz, E. Luque, T. Margalef, J. C. Cunha, J. M. Lourenço, H. Krawczyk, and S. Szejko, "Teaching parallel processing: development of curriculum and software tools", SIGCUE Outlook, vol. 24, New York, NY, USA, ACM, pp. 159–161, 1996. [sigcse96.pdf]

This paper presents an approach to education in Parallel and Distributed Processing undertaken at the Technical University of Gdansk and the Technical University of Wroclaw. The paper gives the detailed structure of the project entitled "Teaching Parallel Processing: Development of Curriculum and Software Tools", which started in 1994 and will finish in 1997. Two universities from Poland (the Technical University of Gdansk and the Technical University of Wroclaw) and two universities from EC countries (the University Autònoma of Barcelona from Spain and the University Nova of Lisbon from Portugal) participate in the project. The main aim of the project is to develop the existing Computer Science curricula and to establish a specialisation in parallel and distributed processing at Polish universities.

Miscellaneous
Dias, R. J., J. M. Lourenço, and J. C. Seco, Detection of Snapshot Isolation Anomalies in Software Transactional Memory: A Statical Analysis Approach, no. UNL-DI-5-2011: Departamento de Informática FCT/UNL, 2011.
