Show simple item record

dc.contributor.author: Evripidou, P.
dc.contributor.author: Kyriacou, Costas
dc.date.accessioned: 2016-10-07T14:33:35Z
dc.date.available: 2016-10-07T14:33:35Z
dc.date.issued: 2013
dc.identifier.citation: Evripidou P and Kyriacou C (2013) Data-flow vs control-flow for extreme level computing. In: Data-Flow Execution Models for Extreme Scale Computing, 8 Sep 2013, Edinburgh, UK. IEEE: 9-13.
dc.identifier.uri: http://hdl.handle.net/10454/9653
dc.description: No
dc.description.abstract: This paper challenges the current thinking for building High Performance Computing (HPC) systems, which is based on sequential computing, also known as the von Neumann model, by proposing the use of novel systems based on the Dynamic Data-Flow model of computation. The switch to multi-core chips has brought parallel processing into the mainstream. The computing industry and research community were forced to make this switch because they hit the power and memory walls. Will the same happen with HPC? In 2007 the United States, through its DARPA agency, commissioned a study to determine what kinds of technologies would be needed to build an Exaflop computer. The head of the study was very pessimistic about the possibility of having an Exaflop computer in the foreseeable future. We believe that many of the findings that caused this pessimistic outlook were due to the limitations of the sequential model. A paradigm shift may be needed in order to achieve affordable Exascale-class supercomputers.
dc.relation.isreferencedby: http://dx.doi.org/10.1109/DFM.2013.17
dc.subject: Supercomputing; Data-flow; HPC; Exascale
dc.title: Data-flow vs control-flow for extreme level computing
dc.status.refereed: Yes
dc.type: Conference Paper
dc.type.version: No full-text available in the repository
