Publications

A
Oliveira, L. P., and J. M. Lourenço, "Aceleração de Computações Científicas com Processadores Heterogéneos", InForum 2011: Proceedings of InForum Simpósio de Informática, Coimbra, Universidade de Coimbra, 2011. (inforum-pitxyoki.pdf)

Today's consumer computer market includes not only multi-core processors (CPUs) but also graphics cards (GPUs) whose processing power has been growing at an exponential rate. This computational power can be used for purposes other than graphics processing, such as running algorithms that are common in scientific computing. This paper presents, discusses and evaluates Cheetah, a framework that distributes computationally demanding programs over a network of CPUs and GPUs. A programmer using Cheetah only needs to specify the program as a set of OpenCL kernels, leaving their distribution over the available processing units to the framework. The program can thus scale as new computational resources are added, without any additional adaptation or recompilation effort. The tests performed show that the framework can deliver speedups of up to two orders of magnitude with a small development effort, even in the presence of limited computational resources.
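
A hypothetical sketch of the programming model described above: the programmer writes a plain OpenCL kernel and hands it to the framework, which maps it onto whichever CPUs or GPUs are available. The Job/Scheduler API shown here is invented for illustration and is not Cheetah's real interface.

    // Illustrative only: the kernel is ordinary OpenCL C; the Job/Scheduler
    // classes are hypothetical stand-ins for the framework's submission API.
    public class VectorAddExample {
        // Plain OpenCL C kernel; the framework decides where it runs.
        static final String KERNEL =
            "__kernel void vadd(__global const float *a," +
            "                   __global const float *b," +
            "                   __global float *c) {" +
            "    int i = get_global_id(0);" +
            "    c[i] = a[i] + b[i];" +
            "}";

        public static void main(String[] args) {
            float[] a = new float[1024], b = new float[1024], c = new float[1024];
            // Hypothetical calls: submit the kernel and let the scheduler map it
            // onto the CPUs/GPUs currently available on the network.
            // Job job = new Job("vadd", KERNEL, a, b, c);
            // Scheduler.submit(job);   // scales with added devices, no recompilation
        }
    }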

Sousa, D. G., J. M. Lourenço, E. Farchi, and I. Segall, "Aplicação do Fecho de Programas na Deteção de Anomalias de Concorrência", INForum 2012: Proceedings of INForum Simpósio de Informática, Monte de Caparica, PT, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, 6 Sep. 2012. (inforum-closure.pdf)

One strategy to take advantage of the multiple processors available in today's computers is to adapt legacy code, originally designed to run in a purely sequential context, to execute in a multithreaded context. In this adaptation process, the data that is now shared and accessed by different concurrent threads must be appropriately protected. Protecting the data with coarse-grained locks inhibits concurrency and defeats the original goal of exploiting the parallelism supported by multiple processors. On the other hand, fine-grained locking may lead to concurrency anomalies such as deadlocks and atomicity violations (high-level data races). This paper discusses the concept of program closure and a methodology that, when applied together, allow legacy code to be adapted and made thread-safe, guaranteeing the absence of atomicity violations in the current version of the software and anticipating some atomicity violations that may occur in future versions of the same software.
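
A minimal Java sketch of the kind of anomaly discussed above (illustrative only, not taken from the paper): every individual access is protected by a fine-grained lock, yet the check-then-act sequence as a whole is not atomic, producing a high-level data race.

    import java.util.concurrent.locks.ReentrantLock;

    // Each method is individually synchronized (thread-safe), but callers
    // that compose them can still suffer an atomicity violation.
    class Account {
        private final ReentrantLock lock = new ReentrantLock();
        private int balance = 100;

        int getBalance() {
            lock.lock();
            try { return balance; } finally { lock.unlock(); }
        }

        void withdraw(int amount) {
            lock.lock();
            try { balance -= amount; } finally { lock.unlock(); }
        }
    }

    class Client implements Runnable {
        private final Account account;
        Client(Account account) { this.account = account; }

        @Override public void run() {
            // High-level data race: another thread may withdraw between the
            // balance check and the withdrawal, driving the balance negative.
            if (account.getBalance() >= 100) {
                account.withdraw(100);
            }
            // The check and the withdrawal should be enclosed in a single
            // critical section (or transaction) to be thread-safe.
        }
    }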

B
Silva, J., J. M. Lourenço, and H. Paulino, "Boosting Locality in Multi-version Partial Data Replication", Proceedings of the 30th ACM/SIGAPP Symposium On Applied Computing (SAC'15), 2015. (sac15_cache.pdf)

n/a

Preguiça, N., R. Rodrigues, C. Honorato, and J. M. Lourenço, "Byzantium: Byzantine-fault-tolerant database replication providing snapshot isolation", Proceedings of the Fourth conference on Hot topics in system dependability, Berkeley, CA, USA, USENIX Association, pp. 9–9, 2008. (byzantium-hotdep.pdf)

Database systems are a key component behind many of today's computer systems. As a consequence, it is crucial that database systems provide correct and continuous service despite unpredictable circumstances, such as software bugs or attacks. This paper presents the design of Byzantium, a Byzantine fault-tolerant database replication middleware that provides snapshot isolation (SI) semantics. SI is very popular because it allows increased concurrency when compared to serializability, while providing similar behavior for typical workloads. Thus, Byzantium improves on existing proposals by allowing increased concurrency and not relying on any centralized component. Our middleware can be used with off-the-shelf database systems and it is built on top of an existing BFT library.
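
For illustration (this example is not from the paper), the snippet below shows write skew, the textbook anomaly that snapshot isolation admits but serializability forbids; it is this relaxation that lets SI offer more concurrency while behaving like serializability for typical workloads.

    // Two "transactions" read from the same snapshot and write disjoint
    // variables, so an SI validator sees no write-write conflict, yet the
    // invariant x + y >= 0 is broken.
    public class WriteSkewDemo {
        public static void main(String[] args) {
            int x = 50, y = 50;                  // initial state, invariant x + y >= 0

            // T1 and T2 each take a snapshot of (x, y) before either commits.
            int t1SnapX = x, t1SnapY = y;
            int t2SnapX = x, t2SnapY = y;

            // T1: withdraws 100 from x, checking the invariant on its snapshot.
            if (t1SnapX + t1SnapY >= 100) x = t1SnapX - 100;

            // T2: withdraws 100 from y, also checking against its own snapshot.
            if (t2SnapX + t2SnapY >= 100) y = t2SnapY - 100;

            // Under SI both commits succeed (disjoint write sets), but:
            System.out.println("x + y = " + (x + y));   // prints -100, invariant broken
        }
    }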

C
Alves, A., F. P. Birra, and J. M. Lourenço, "Como separar o trigo do joio? Ou: Como selecionar a melhor fotografia de um conjunto de fotografias semelhantes", Proceedings of INForum Simpósio de Informática, Covilhã, Portugal, Sep. 2015. (inforum15-photo.pdf)

The advent of digital photography is behind a clear paradigm shift in how amateurs manage their photographs. Because taking one more picture now carries no additional cost, it is common to take several photographs of the same subject, in the hope that one of them meets the desired quality standards in terms of lighting, focus and framing. Assuming that framing is easily solved by cropping the photograph, one still has to choose which of the several well-framed photographs, technically similar in lighting and focus, to keep (and, conversely, which to discard). Choosing the best photograph by visual inspection on a computer screen is a rather imprecise process, generating a feeling of insecurity that often results in none of the similar photographs being discarded. In this paper we address the question of how to help an amateur photographer select the best photograph from a set of photographs that are similar in technical terms (focus and lighting) and framing. The process is based on a workflow supported by a software package that, with some help from the user, ranks a set of similar photographs, making it possible to choose the one that best matches expectations and providing confidence and comfort when eventually deleting the remaining ones.

Cunha, G., J. Lourenço, and R. J. Dias, "Consistent State Software Transactional Memory", IV Jornadas de Engenharia de Electrónica e Telecomunicações e de Computadores (JETC'08), Lisboa, Portugal, ISEL - Instituto Superior de Engenharia de Lisboa, pp. 251–256, 2008. (jetc_2008.pdf)

Software transactional memory (STM) is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to memory (RAM) locations. In this paper we propose a new classification for the active states of a transaction and a new memory quiescing algorithm that allows the safe transition of a memory block from transactional to non-transactional space; we also compare word- and object-grained transactional units and evaluate the cost of consistent state validation, arguing that this cost can be minimized by performing partial validation on problematic code regions.

Lourenço, J. M., J. C. Cunha, and V. Moreira, "Control and Debugging of Distributed Programs Using Fiddle", CoRR, vol. cs.DC/0309049, pp. 143–157, 2003. (aadebug.pdf)

n/a

Silva, J. A., H. Paulino, and J. M. Lourenço, "Crowd-Sourcing Mobile Devices to Provide Storage in Edge-Clouds", Proceedings of the Doctoral Symposium of the 16th International Conference on Distributed Computing and Networking, Jan. 2015. (icdcn15srf.pdf)

Given the proliferation and enhanced capabilities of mobile devices, their computational and storage resources can now be combined in a wireless cloud of nearby mobile devices, a mobile edge-cloud. These clouds are of particular interest in low-connectivity scenarios, e.g., sporting events and disaster scenarios. In these dynamic clouds it is necessary to reliably disseminate and share data, and also to offload data processing computations to other devices in the edge-cloud. We are particularly interested in supporting storage services in this new type of edge-cloud, as a means to enable data sharing, dissemination and querying, as well as to serve as a distributed file system for offloaded computations. In this Ph.D. thesis, we propose to address these questions by researching the use of ad-hoc clouds of mobile devices to develop an efficient storage service capable of providing high availability and reliability.

D
Cunha, J. C., J. M. Lourenço, and V. Duarte, "The DDBG distributed debugger", Parallel Program Development for Cluster Computing, Commack, NY, USA, Nova Science Publishers, Inc., pp. 279–290, 2001. (cap13.pdf)

n/a

Cunha, J. C., J. M. Lourenço, and T. Antão, "A Debugging Engine for a Parallel and Distributed Environment", Proceedings of the 1st Austrian-Hungarian Workshop on Distributed and Parallel Systems (DAPSYS'96), Wien, Austria, Hungarian Academy of Sciences, KFKI, pp. 111–118, 1996. (dapsys96.pdf)

This paper describes a debugging interface developed for a parallel software engineering environment, built on top of the PVM environment in the scope of the SEPP and HPCTI projects of the COPERNICUS Program. The main goal of this interface is to provide the basic debugging functionalities required by some components of that environment. We give special attention to the requirements posed by the high-level tools of the environment, and to the need of providing a flexible debugging support layer that can be suitably adapted and extended. We present the system's logical architecture and the interface specification of the debugging engine. We discuss its interfacing with other components of the environment, namely a graphical editor for the GRAPNEL visual parallel programming language, and a testing tool. We finally describe current work on the improvement of the debugging engine. Keywords: debugging, monitoring, parallel processing, software tools.

Lourenço, J. M., "A Debugging Engine for Parallel and Distributed Programs", PhD thesis, Universidade Nova de Lisboa: Faculdade de Ciências e Tecnologia, 2004. (fiddle-thesis.pdf)

In the last decade a considerable amount of research work has focused on distributed debugging, one of the crucial fields in the parallel software development cycle. The productivity of the software development process strongly depends on the adequate definition of what debugging tools should be provided, and what debugging methodologies and functionalities these tools should support. The work described in this dissertation was initiated in 1995, in the context of two research projects, SEPP (Software Engineering for Parallel Processing) and HPCTI (High-Performance Computing Tools for Industry), both sponsored by the European Union in the Copernicus programme, which aimed at the design and implementation of an integrated parallel software development environment. In the context of these projects, two independent toolsets were developed, the GRADE and EDPEPPS parallel software development environments. Our contribution to these projects was the debugging support. We designed a debugging engine and developed a prototype, which was integrated into both toolsets (it was the only tool developed in the context of the SEPP and HPCTI projects that achieved such a result). Even after the closing of those research projects, further research work on distributed debugging was carried out, which led to the re-design and re-implementation of the debugging engine. This dissertation describes the debugging engine according to its most up-to-date design and implementation stages. It also reports some of the experimental work made with both the initial and the current implementations, and how it contributed to validate the design and implementations of the debugging engine.

Cunha, J. C., J. M. Lourenço, and V. Duarte, "Debugging of parallel and distributed programs", Parallel Program Development for Cluster Computing, Commack, NY, USA, Nova Science Publishers, Inc., pp. 97–129, 2001. (cap03.pdf)

n/a

Lourenço, J., D. Sousa, B. C. Teixeira, and R. J. Dias, "Detecting concurrency anomalies in transactional memory programs", Comput. Sci. Inf. Syst., vol. 8, no. 2, pp. 533–548, 2011. (comsis-2011.pdf)

Software transactional memory is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to main memory (RAM). This paper discusses how to support revertible operations, such as memory allocation and release, within software libraries that will be used in software transactional memory contexts. The proposal is based on the extension of the transaction life cycle state diagram with new states associated with the execution of user-defined handlers. The proposed approach is evaluated in terms of functionality and performance by way of a use case study and performance tests. Results demonstrate that the proposal and its current implementation are flexible, generic and efficient.

Dias, R. J., J. M. Lourenço, and J. C. Seco, "Detection of Snapshot Isolation Anomalies in Software Transactional Memory: A Static Analysis Approach", Technical Report UNL-DI-5-2011, Departamento de Informática FCT/UNL, 2011.

n/a

Teixeira, B., J. M. Lourenço, E. Farchi, R. J. Dias, and D. Sousa, "Detection of Transactional Memory Anomalies using Static Analysis", Proceedings of the 8th Workshop on Parallel and Distributed Systems: Testing, Analysis, and Debugging (PADTAD'10), New York, NY, USA, ACM, pp. 26–36, 2010. (padatad-teixeira-2010.pdf)

Transactional Memory allows programmers to reduce the number of synchronization errors introduced in concurrent programs, but does not ensure their complete elimination. This paper proposes a pattern-matching approach to the static detection of atomicity violations, based on a path-sensitive symbolic execution method that models four anomalies that may affect Transactional Memory programs. The proposed technique may be used to bring to the programmer's attention pairs of transactions that were mis-specified and should have been combined into a single transaction. The algorithm first traverses the AST, removing all the non-transactional blocks and generating a trace tree in a path-sensitive manner for each thread. The trace tree is a trie-like data structure, where each path from the root to a leaf is a list of transactions. For each pair of threads, erroneous patterns involving two consecutive transactions are then checked in the trace tree. The results allow us to conclude that the proposed technique, although triggering a moderate number of false positives, can be successfully applied to Java programs, correctly identifying the vast majority of the relevant erroneous patterns.
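
An illustrative Java sketch of the trace-tree structure described above (class and method names are assumptions, not the paper's code): each root-to-leaf path is one path-sensitive sequence of a thread's transactions, and the erroneous patterns are then matched against consecutive transaction pairs.

    import java.util.*;

    // Trace tree: a trie in which each path from the root to a leaf is one
    // possible sequence of transactions executed by a thread.
    // The root is created with a null label: new TraceNode(null).
    class TraceNode {
        final String transaction;                       // label of the atomic block
        final Map<String, TraceNode> children = new HashMap<>();

        TraceNode(String transaction) { this.transaction = transaction; }

        // Insert one path-sensitive sequence of transactions into the trie.
        void addPath(List<String> path) {
            if (path.isEmpty()) return;
            TraceNode child = children.computeIfAbsent(path.get(0), TraceNode::new);
            child.addPath(path.subList(1, path.size()));
        }

        // Collect every pair of consecutive transactions; the anomaly patterns
        // are then checked per pair, for each pair of threads.
        void consecutivePairs(List<String[]> out) {
            for (TraceNode child : children.values()) {
                if (transaction != null) out.add(new String[]{transaction, child.transaction});
                child.consecutivePairs(out);
            }
        }
    }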

Dias, R. J., J. M. Lourenço, and G. Cunha, "Developing libraries using software transactional memory", Comput. Sci. Inf. Syst., vol. 5, no. 2, pp. 103–117, 2008. (comsis_final.pdf)

Software transactional memory is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to main memory (RAM). This paper discusses how to support revertible operations, such as memory allocation and release, within software libraries that will be used in software transactional memory contexts. The proposal is based on the extension of the transaction life cycle state diagram with new states associated with the execution of user-defined handlers. The proposed approach is evaluated in terms of functionality and performance by way of a use case study and performance tests. Results demonstrate that the proposal and its current implementation are flexible, generic and efficient.
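
A hypothetical Java sketch of the idea described above (the API names are invented for illustration and are not the paper's interface): user-defined handlers registered during the transaction run only when the extended life cycle reaches its commit or abort states, which makes operations such as memory allocation and release revertible.

    import java.util.ArrayList;
    import java.util.List;

    class Transaction {
        private final List<Runnable> commitHandlers = new ArrayList<>();
        private final List<Runnable> abortHandlers = new ArrayList<>();

        void onCommit(Runnable handler) { commitHandlers.add(handler); }
        void onAbort(Runnable handler)  { abortHandlers.add(handler); }

        void commit() { commitHandlers.forEach(Runnable::run); }  // executed in the new commit state
        void abort()  { abortHandlers.forEach(Runnable::run); }   // executed in the new abort state
    }

    class TransactionalAllocator {
        // Allocation becomes revertible: the buffer is only handed back
        // (e.g. to a free list) if the enclosing transaction aborts.
        static byte[] allocate(Transaction tx, int size) {
            byte[] buffer = new byte[size];
            tx.onAbort(() -> { /* return 'buffer' to the allocator's free list */ });
            return buffer;
        }
    }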

Dias, R. J., J. Lourenço, and G. Cunha, "Developing Libraries Using Software Transactional Memory", CoRTA 2008: Proceedings of the Conference on Compilers, Related Technologies and Applications, Bragança, Portugal, Instituto Politécnico de Bragança - ESTG, 2008. (corta_2008.pdf)

Software transactional memory (STM) is a promising programming model that adapts many concepts borrowed from the databases world to control concurrent accesses to main memory (RAM) locations. This paper aims at discussing how to support apparently irreversible operations within a memory transaction.

Cunha, J. C., J. M. Lourenço, and T. Antão, "A Distributed Debugging Tool for a Parallel Software Engineering Environment", Proceedings of the 1st European Parallel Tools Meeting (EPTM'96), Paris, France, ONERA (French National Establishment for Aerospace Research), October 1996. (eptm96.pdf)

We discuss issues in the design and implementation of a flexible debugging tool and its integration into a parallel software engineering environment.

Cunha, J. C., P. D. Medeiros, J. M. Lourenço, V. Duarte, J. Vieira, B. Moscão, D. Pereira, and R. Vaz, "The DOTPAR Project: Towards a Framework Supporting Domain Oriented Tools for Parallel and Distributed Processing", Proceedings of the International Conference and Exhibition on High-Performance Computing and Networking (HPCN'98), London, UK, Springer-Verlag, pp. 952–954, 1998. (dotpar98.pdf)

We discuss the problem of building domain oriented environments by a composition of heterogeneous application components and tools. We describe several individual tools that support such environments, namely a distributed monitoring and control tool (DAMS), a process-based distributed debugger (PDBG) and a heterogeneous interconnection model (PHIS). We discuss our experience with the development of a Problem Oriented Environment in the domain of genetic algorithms, obtained by a composition of heterogeneous tools and application components.

Fiedor, J., Z. Letko, J. M. Lourenço, and T. Vojnar, "Dynamic Validation of Contracts in Concurrent Code", Proceedings of the Fifteenth International Conference on Computer Aided Systems Theory (EUROCAST'15), Las Palmas de Gran Canaria, Spain, Universidad de Las Palmas de Gran Canaria, 2015. (eurocast15.pdf)

Multi-threaded programs allow one to achieve better performance by doing a lot of work in parallel using multiple threads. Such parallel programs often contain code blocks that a thread must execute atomically, i.e., with no interference from the other threads of the program. Failing to execute these code blocks atomically leads to errors known as atomicity violations. However, it is frequently not obvious when a piece of code should be executed atomically, especially when that piece of code contains calls to third-party library functions, about which the programmer has little or no knowledge at all. One solution to this problem is to associate a contract with such a library, telling the programmer how the library functions should be used, and then check whether the contract is indeed respected. For contract validation, static approaches have been proposed, with known limitations on precision and scalability. In this paper, we propose a dynamic method for contract validation, which is more precise and scalable than static approaches.
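
A rough Java sketch of the dynamic approach, under the assumption that a contract names sequences of library calls that must run without interference from other threads (the classes below are illustrative, not the tool described in the paper).

    import java.util.List;

    // A dynamic validator watches the stream of monitored calls (e.g. emitted
    // by instrumentation) and flags an interleaving that breaks a contract
    // clause while it is in progress.
    class ContractValidator {
        private final List<String> clause;   // e.g. ["contains", "add"] must run atomically
        private long owner = -1;             // thread currently executing the clause
        private int next = 0;                // index of the next expected call in the clause

        ContractValidator(List<String> clause) { this.clause = clause; }

        synchronized void onCall(long threadId, String method) {
            if (owner != -1 && threadId != owner) {
                System.err.println("Contract violation: thread " + threadId +
                        " interleaved inside clause started by thread " + owner);
                owner = -1; next = 0;
                return;
            }
            if (owner == -1 && method.equals(clause.get(0))) {   // clause starts
                owner = threadId; next = 1;
            } else if (owner == threadId && next < clause.size()
                       && method.equals(clause.get(next))) {     // clause progresses
                next++;
            }
            if (owner != -1 && next == clause.size()) {          // clause completed atomically
                owner = -1; next = 0;
            }
        }
    }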

E
Dias, R. J., J. M. Lourenço, and N. Preguiça, "Efficient and Correct Transactional Memory Programs Combining Snapshot Isolation and Static Analysis", Proceedings of the 3rd USENIX Conference on Hot Topics in Parallelism (HotPar'11), Berkeley, USA, USENIX Association, May 2011. (hotpar2011.pdf)

Concurrent programs may suffer from concurrency anomalies that may lead to erroneous and unpredictable program behaviors. To ensure program correctness, these anomalies must be diagnosed and corrected. This paper addresses the detection of both low- and high-level anomalies in the Transactional Memory setting. We propose a static analysis procedure and a framework to address Transactional Memory anomalies. We start by dealing with the classic case of low-level data races, identifying concurrent accesses to shared memory cells that are not protected within the scope of a memory transaction. Then, we address the case of high-level data races, bringing the programmer's attention to pairs of memory transactions that were misspecified and should have been combined into a single transaction. Our framework was applied to a set of programs, collected from different sources, containing well-known low- and high-level anomalies. The framework proved to be accurate, confirming the effectiveness of using static analysis techniques to precisely identify concurrency anomalies in Transactional Memory programs.
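
A minimal Java example of the two anomaly classes targeted above (illustrative; the @Atomic marker stands for a memory transaction and the code is not taken from the paper).

    class SharedCounter {
        private int value = 0;

        // Low-level data race: 'value' is also updated outside any transaction,
        // so this unprotected access conflicts with the transactional ones.
        void unsafeIncrement() { value++; }

        // @Atomic
        int read() { return value; }

        // @Atomic
        void add(int delta) { value += delta; }

        // High-level data race: each call is a transaction, but the
        // read-then-add sequence is not atomic as a whole; the two
        // transactions should be combined into a single one.
        void doubleIfSmall() {
            int v = read();
            if (v < 10) add(v);
        }
    }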

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Efficient support for in-place metadata in Java software transactional memory", Concurrency and Computation: Practice and Experience, vol. 25, no. 17, pp. 2394–2411, 2013. (ccpe2013-dias.pdf)

Software transactional memory (STM) algorithms associate metadata with the memory locations accessed during a transaction's lifetime. This metadata may be stored in an external table by resorting to a mapping function that associates the address of a memory cell with the table entry containing the corresponding metadata (out-place or external strategy). Alternatively, the metadata may be stored adjacent to the associated memory cell by wrapping the cell and metadata together (in-place strategy). The implementation techniques to support these two approaches are very different and each STM framework is usually biased towards one of them, only allowing the efficient implementation of STM algorithms which suit one of the approaches and inhibiting a fair comparison with STM algorithms suiting the other. In this paper, we introduce a technique to implement in-place metadata that does not wrap memory cells, thus overcoming the bias and allowing STM algorithms to directly access the transactional metadata. The proposed technique is available as an extension to Deuce and enables the efficient implementation of a wide range of STM algorithms and their fair (unbiased) comparison in a common STM framework. We illustrate the benefits of our approach by analyzing its impact on two popular transactional memory algorithms, TL2 and multiversioning, which befit out-place and in-place respectively, under several transactional workloads.
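
An illustrative contrast between the two metadata placement strategies discussed above (field and class names are assumptions, not Deuce's implementation).

    import java.util.concurrent.ConcurrentHashMap;

    // Out-place (external): a shared table maps a cell's identity to its metadata;
    // every transactional access pays for the lookup (and possible collisions).
    class OutPlaceMetadata {
        static final ConcurrentHashMap<Object, Long> versionTable = new ConcurrentHashMap<>();

        static long versionOf(Object cell) {
            return versionTable.getOrDefault(cell, 0L);
        }
    }

    // In-place: the metadata lives next to the data. The classic approach wraps
    // the cell in a container object; the technique described above instead keeps
    // the metadata field alongside the data, avoiding the extra indirection.
    class Node {
        int value;            // the application data
        long txMetadata;      // version/ownership record stored alongside it
    }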

Dias, R. J., T. M. Vale, and J. M. Lourenço, "Efficient Support for In-Place Metadata in Transactional Memory", Proceedings of the 18th International Euro-Par Conference on Parallel Processing, Berlin, Heidelberg, Springer-Verlag, 2012. (europar12.pdf)

The correctness of Software Transactional Memory (STM) algorithms relies on metadata associated with the memory locations accessed during the transaction's lifetime. STM implementations may store this metadata either in-place, by wrapping the memory cells in a container that includes the memory cell itself and the corresponding metadata, or out-place, by resorting to a mapping function that associates the memory cell address with an external table containing the corresponding metadata. The implementation techniques for these two approaches are very different and each STM framework is usually biased towards one of them, only allowing the efficient implementation of algorithms that fall into the appropriate category and inhibiting the fair comparison with STM algorithms falling into the other. In this paper we introduce a technique that supports the use of in-place metadata without requiring memory cells to be wrapped, thus providing STM algorithms with direct access to the transactional metadata and overcoming the bias. The proposed technique is available as an extension to the DeuceSTM framework and allows the efficient implementation of a wide range of STM algorithms, thus enabling their fair (unbiased) comparison in a common STM infrastructure. We illustrate the benefits of our approach by analyzing its impact on two popular TM algorithms, TL2 and multi-versioning, which are biased towards out-place and in-place respectively, with two different transactional workloads.

Vale, T., R. J. Dias, J. A. Silva, and J. M. Lourenço, "Execução concorrente e determinista de transações", Proceedings of INForum Simpósio de Informática, Covilhã, Portugal, 2015. (inforum15-pot.pdf)

In this paper we present a concurrency control protocol that guarantees that the concurrent execution of transactions is equivalent to their sequential execution in a predefined order. This makes it possible to execute transactional programs deterministically. The protocol (1) enables, for the first time, the deterministic execution of programs that use hardware transactional memory; and (2) guarantees the deterministic execution of programs that use software transactional memory with performance clearly superior to the state of the art.
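
A minimal sketch of the core idea (illustrative only, not the protocol's implementation): transactions execute speculatively in parallel but are only allowed to commit in the predefined order, so every run is equivalent to the same sequential execution.

    import java.util.concurrent.atomic.AtomicInteger;

    class OrderedCommit {
        private final AtomicInteger nextToCommit = new AtomicInteger(0);

        void runTransaction(int myOrder, Runnable body) {
            body.run();                                   // speculative, concurrent execution
            while (nextToCommit.get() != myOrder) {       // wait for my turn in the
                Thread.onSpinWait();                      // predefined commit order
            }
            // validation and write-back would happen here (omitted in this sketch)
            nextToCommit.incrementAndGet();               // hand the token to the next transaction
        }
    }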

Cunha, J. C., P. D. Medeiros, V. Duarte, J. Lourenço, and C. Gomes, "An Experience in Building a Parallel and Distributed Problem-Solving Environment", Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'99): CSREA Press, pp. 1804–1809, 1999. (pdpta99.pdf)

We describe our experimentation with the design and implementation of specific environments, consisting of heterogeneous computational, visualization, and control components. We illustrate the approach with the design of a problem–solving environment supporting the execution of genetic algorithms. We describe a prototype supporting parallel execution, visualization, and steering. A life cycle for the development of applications based on genetic algorithms is proposed.