An Open Framework for Rapid Prototyping of Signal Processing Applications
Embedded real-time applications in communication systems have significant timing constraints, thus requiring multiple computation units. Manually exploring the potential parallelism of an application deployed on multicore architectures is greatly time-consuming. This paper presents an open-source Eclipse-based framework which aims to facilitate the exploration and development processes in this context. The framework includes a generic graph editor (Graphiti), a graph transformation library (SDF4J) and an automatic mapper/scheduler tool with simulation and code generation capabilities (PREESM). The input of the framework is composed of a scenario description and two graphs, one graph describes an algorithm and the second graph describes an architecture. The rapid prototyping results of a 3GPP Long-Term Evolution (LTE) algorithm on a multicore digital signal processor illustrate both the features and the capabilities of this framework.
Keywords: Rapid Prototyping · User Equipment · Graph Transformation · Direct Memory Access · Multicore Architecture
1. Introduction

The recent evolution of digital communication systems (voice, data, and video) has been dramatic. Over the last two decades, low data-rate systems (such as dial-up modems, first and second generation cellular systems, 802.11 wireless local area networks) have been replaced or augmented by systems capable of data rates of several Mbps, supporting multimedia applications (such as DSL, cable modems, 802.11b/a/g/n wireless local area networks, 3G, WiMax and ultra-wideband personal area networks).
As communication systems have evolved, the resulting increase in data rates has necessitated a higher system algorithmic complexity. A more complex system requires greater flexibility in order to function with different protocols in different environments. Additionally, there is an increased need for the system to support multiple interfaces and multicomponent devices. Consequently, this requires the optimization of device parameters over varying constraints such as performance, area, and power. Achieving this device optimization requires a good understanding of the application complexity and the choice of an appropriate architecture to support this application.
An embedded system commonly contains several processor cores in addition to hardware coprocessors. The embedded system designer needs to distribute a set of signal processing functions onto given hardware with predefined features. The functions are then executed as software code on the target architecture; this action is called a deployment in this paper. A common approach to implementing a parallel algorithm is the creation of a program containing several synchronized threads whose execution is driven by the scheduler of an operating system. Such an implementation does not meet the hard timing constraints required by real-time applications or the memory consumption constraints required by embedded systems. One-time manual scheduling developed for single-processor applications is also not suitable for multiprocessor architectures: manual data transfers and synchronizations quickly become very complex, leading to wasted time and potential deadlocks. Furthermore, the task of finding an optimal deployment of an algorithm mapped onto a multicomponent architecture is not straightforward. When performed manually, the result is inevitably a suboptimal solution. These issues raise the need for new methodologies which allow the exploration of several solutions in order to achieve a near-optimal result.
Several features must be provided by a fast prototyping process: description of the system (hardware and software), automatic mapping/scheduling, simulation of the execution, and automatic code generation. This paper draws on previously presented works [2, 3, 4] in order to generate a more complete rapid prototyping framework. This complete framework is composed of three complementary tools based on Eclipse that provide a full environment for the rapid prototyping of real-time embedded systems: Parallel and Real-time Embedded Executives Scheduling Method (PREESM), Graphiti, and Synchronous Data Flow for Java (SDF4J). This framework implements the Algorithm-Architecture Matching (AAM) methodology, which was previously called Algorithm-Architecture Adequation (AAA). The focus of this rapid prototyping activity is currently static code mapping/scheduling, but dynamic extensions are planned for future generations of the tool.
From the graph descriptions of an algorithm and of an architecture, PREESM can find the right deployment, provide simulation information, and generate a framework code for the processor cores. These rapid prototyping tasks can be combined and parameterized in a workflow. In PREESM, a workflow is defined as an oriented graph representing the list of rapid prototyping tasks to execute on the input algorithm and architecture graphs in order to determine and simulate a given deployment. A rapid prototyping process in PREESM consists of a succession of transformations. These transformations are associated in a data flow graph representing a workflow that can be edited in the Graphiti generic graph editor. The PREESM input graphs may also be edited using Graphiti. The PREESM algorithm models are handled by the SDF4J library. The framework can be extended by modifying the workflows or by connecting new plug-ins (for compilation, graph analyses, and so on).
In this paper, the differences between the proposed framework and related works are explained in Section 2. The framework structure is described in Section 3. Section 4 details the features of PREESM that can be combined by users in workflows. The use of the framework is illustrated by the deployment of a wireless communication algorithm from the 3rd Generation Partnership Project (3GPP) Long-Term Evolution (LTE) standard in Section 5. Finally, conclusions are given in Section 6.
2. State of the Art of Rapid Prototyping and Multicore Programming
There exist numerous solutions to partition algorithms onto multicore architectures. If the target architecture is homogeneous, several solutions exist which generate multicore code from C with additional information (OpenMP, CILK). In the case of heterogeneous architectures, languages such as OpenCL and the Multicore Association Application Programming Interface (MCAPI) define ways to express the parallel properties of a code. However, they are not currently linked to efficient compilers and runtime environments. Moreover, compilers for such languages would have difficulty extracting and solving the implementation bottlenecks that appear inherently in graph descriptions of the architecture and the algorithm.
The Poly-Mapper tool from PolyCore Software offers functionalities similar to PREESM but, in contrast to PREESM, its mapping/scheduling is manual. Ptolemy II is a simulation tool that supports many models of computation. However, it has no automatic mapping either, and its code generation for embedded systems currently focuses on single-core targets. Another family of frameworks for data flow based programming is built on the CAL language and includes OpenDF. OpenDF employs a more dynamic model than PREESM but its related code generation does not currently support multicore embedded systems.
Closer to PREESM are Model Integrated Computing (MIC), the Open Tool Integration Environment (OTIE), the Synchronous Distributed Executives (SynDEx), the Dataflow Interchange Format (DIF), and SDF for Free (SDF3). Neither MIC nor OTIE can be accessed online. According to the literature, MIC focuses on the transformation between algorithm domain-specific models and metamodels, while OTIE defines a single system description that can be used during the whole signal processing design cycle.
DIF is designed as an extensible repository for the representation, analysis, transformation, and scheduling of data flow languages. DIF is a Java library which allows the user to go from a graph specification in the DIF language to C code generation. However, the hierarchical Synchronous Data Flow (SDF) model used in the SDF4J library and PREESM is not available in DIF.
SDF3 is an open-source tool implementing several data flow models and providing analysis, transformation, visualization, and manual scheduling as a C++ library. SDF3 implements the Scenario Aware Data Flow (SADF) model and provides a Multiprocessor System-on-Chip (MP-SoC) binding/scheduling algorithm that outputs MP-SoC configuration files.
SynDEx and PREESM are both based on the AAM methodology but the tools do not provide the same features. SynDEx is not open source; it has its own model of computation that does not support schedulability analysis, and code generation is possible but not provided with the tool. Moreover, the architecture model of SynDEx is at too high a level to account for the bus contentions and DMAs used in modern chips (multicore processors or MP-SoCs) during mapping/scheduling.
The features that differentiate PREESM from these related works are:
(i)the tool is open source and accessible online;
(ii)the algorithm description is based on a single well-known and predictable model of computation;
(iii)the mapping and the scheduling are totally automatic;
(iv)the functional code for heterogeneous multicore embedded systems can be generated automatically;
(v)the algorithm model provides a helpful hierarchical encapsulation, thus simplifying the mapping/scheduling.
The PREESM framework structure is detailed in the next section.
3. An Open-Source Eclipse-Based Rapid Prototyping Framework
3.1. The Framework Structure
The first step of the process is to describe both the target algorithm and the target architecture as graphs. A graphical editor reduces the development time required to create, modify, and edit those graphs. The role of Graphiti is to support the creation of the algorithm and architecture graphs for the proposed framework. Graphiti can also be quickly configured to support any file format used for generic graph descriptions.
The algorithm is currently described as a Synchronous Data Flow (SDF) graph. The SDF model is a good solution for describing algorithms with static behavior. SDF4J is an open-source Java library providing the usual transformations of SDF graphs. The extensive use of SDF and its derivatives in the programming model community led to the development of SDF4J as an external tool. Due to the greater specificity of the architecture description compared to the algorithm description, it was decided to perform the architecture transformations inside the PREESM plug-ins.
The PREESM project involves the development of a tool that performs these rapid prototyping tasks. The PREESM tool uses the Graphiti tool and the SDF4J library to design the algorithm and architecture graphs and to generate their transformations. The PREESM core is an Eclipse plug-in that executes sequences of rapid prototyping tasks, or workflows. The tasks of a workflow are delegated to PREESM plug-ins. There are currently three PREESM plug-ins: the graph transformation plug-in, the scheduler plug-in, and the code-generation plug-in.
The three tools of the framework are detailed in the next sections.
3.2. Graphiti: A Generic Graph Editor for Editing Architectures, Algorithms and Workflows
Graphiti is an open-source plug-in for the Eclipse environment that provides a generic graph editor. It is written using the Graphical Editor Framework (GEF). The editor is generic in the sense that any type of graph may be represented and edited. Graphiti is used routinely with the following graph types and associated file formats: CAL networks [13, 25], a subset of IP-XACT, GraphML, and PREESM workflows.
3.2.1. Overview of Graphiti
A type of graph is registered within the editor by a configuration. A configuration is an XML (Extensible Markup Language) file that describes
(1)the abstract syntax of the graph (types of vertices and edges, and attributes allowed for objects of each type);
(2)the visual syntax of the graph (colors, shapes, etc.).
Two kinds of input transformations are supported, from XML to XML and from text to XML (Figure 2). XML is transformed to XML with an Extensible Stylesheet Language Transformation (XSLT), and text is parsed to its Concrete Syntax Tree (CST), represented in XML, according to an LL(k) grammar by the Grammatica parser. Similarly, two kinds of output transformations are supported, from XML to XML and from XML to text.
Graphiti handles attributed graphs. An attributed graph is defined as a directed multigraph G = (V, E, A) with V the set of vertices and E the multiset of edges (there can be more than one edge between any two vertices). A is a function A: ({G} ∪ V ∪ E) × N → U that associates instances with attributes from the attribute name set N and values from U, the set of possible attribute values. A built-in type attribute is defined so that each instance i has a type t = A(i, type), and only admits attributes from a set A_t given by its type. Additionally, a type t has a visual syntax σ(t) that defines its color, shape, and size.
To edit a graph, the user selects a file and the matching configuration is computed based on the file extension. The transformations defined in the configuration file are then applied to the input file and result in a graph defined in Graphiti's XML format G_XML, as shown in Figure 2. The editor uses the visual syntax σ defined in the configuration to draw the graph, vertices, and edges. For each instance of type t the user can edit the relevant attributes allowed by A_t as defined in the configuration. Saving a graph consists of writing the graph in G_XML and transforming it back to the input file's native format.
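The type-restricted attribute model above can be illustrated with a small sketch (Graphiti itself is a Java/Eclipse plug-in; all class and attribute names here are hypothetical): the built-in type attribute determines which attributes an instance may carry, and each type carries its own visual syntax.

```python
# Illustrative sketch, not Graphiti's actual API: an attributed-graph
# instance only admits the attributes A_t allowed by its type t, which
# also defines the visual syntax sigma(t).

class GraphType:
    def __init__(self, name, allowed_attributes, visual_syntax):
        self.name = name
        self.allowed = set(allowed_attributes)  # A_t: attributes admitted by type t
        self.visual = visual_syntax             # sigma(t): color, shape, size

class Instance:
    """A vertex, an edge, or the graph itself."""
    def __init__(self, gtype):
        self.gtype = gtype
        self.attributes = {"type": gtype.name}  # built-in type attribute

    def set_attribute(self, name, value):
        if name not in self.gtype.allowed:
            raise ValueError(f"type {self.gtype.name} does not admit '{name}'")
        self.attributes[name] = value

port_type = GraphType("port", {"id", "refinement"},
                      {"color": "blue", "shape": "circle", "size": 10})
v = Instance(port_type)
v.set_attribute("id", "in0")
print(v.attributes)
```

Here an editor built on such a model can reject ill-formed graphs at edit time, which is the point of the configuration-driven approach.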
3.2.2. Editing a Configuration for a Graph Type
Graphiti is a stand-alone tool, totally independent of PREESM. However, Graphiti generates workflow graphs, IP-XACT and GraphML files that are the main inputs of PREESM. The GraphML files contain the algorithm model. These inputs are loaded and stored in PREESM by the SDF4J library. This library, discussed in the next section, executes the graph transformations.
3.3. SDF4J: A Java Library for Algorithm Data Flow Graph Transformations
SDF4J is a library defining several data flow oriented graph models such as SDF and the Directed Acyclic Graph (DAG). It provides the user with several classic SDF transformations, such as hierarchy flattening and SDF to Homogeneous SDF (HSDF) transformations, as well as some clustering algorithms. The library can also be extended with user-defined optimization templates. It defines its own graph representation based on the GraphML standard and provides the associated parser and exporter classes. SDF4J is freely available (GPL license) for download.
3.3.1. SDF4J SDF Graph Model
An SDF graph is used to simplify the application specifications. It allows the representation of the application behavior at a coarse grain level. This data flow representation models the application operations and specifies the data dependencies between these operations.
An SDF graph is a finite directed, weighted graph G = ⟨V, E, d, p, c⟩ where:
(i) V is the set of nodes; a node computes an input data stream and outputs the result;
(ii) E ⊆ V × V is the edge set, representing channels which carry data streams;
(iii) d: E → ℕ ∪ {0} is a function with d(e) the number of initial tokens (delays) on an edge e;
(iv) p: E → ℕ is a function with p(e) representing the number of data tokens produced at e's source to be carried by e;
(v) c: E → ℕ is a function with c(e) representing the number of data tokens consumed from e by e's sink.
This model offers strong compile-time predictability properties but has limited expressive capability. The SDF implementation enabled by SDF4J supports the hierarchy defined in , which increases the model's expressiveness. This specific implementation is straightforward for the programmer and allows user-defined structural optimizations. The model is also intended to lead to better code generation using common C patterns such as loops and function calls. It is highly extensible, as the user can associate any property with the graph components (edges, vertices) to produce a customized model.
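The compile-time predictability comes from the SDF balance equations: for every edge e, the basic repetition vector q must satisfy q[src(e)]·p(e) = q[snk(e)]·c(e). A minimal sketch of that computation (illustrative Python, not SDF4J code; assumes a connected, delay-free graph):

```python
# Compute the basic repetition vector of an SDF graph from the balance
# equations q[src] * p = q[snk] * c, using exact rational arithmetic.
from fractions import Fraction
from math import lcm

def repetition_vector(vertices, edges):
    """edges: list of (src, snk, prod, cons) tuples; assumes a connected graph."""
    q = {vertices[0]: Fraction(1)}
    changed = True
    while changed:                      # propagate firing rates along edges
        changed = False
        for src, snk, prod, cons in edges:
            if src in q and snk not in q:
                q[snk] = q[src] * prod / cons
                changed = True
            elif snk in q and src not in q:
                q[src] = q[snk] * cons / prod
                changed = True
    for src, snk, prod, cons in edges:  # consistency (sample-rate) check
        if q[src] * prod != q[snk] * cons:
            raise ValueError("inconsistent SDF graph")
    scale = lcm(*(f.denominator for f in q.values()))
    return {v: int(f * scale) for v, f in q.items()}

# A produces 2 tokens per firing, B consumes 3 per firing:
print(repetition_vector(["A", "B"], [("A", "B", 2, 3)]))  # → {'A': 3, 'B': 2}
```

An inconsistent graph (one with no valid repetition vector) is detected at compile time, which is exactly the kind of static guarantee the paragraph above refers to.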
3.3.2. SDF4J SDF Graph Transformations
SDF4J implements several algorithms intended to transform the base model or to optimize the application behavior at different levels.
(i)The hierarchy flattening transformation aims to flatten the hierarchy (remove hierarchy levels) at the chosen depth in order to later extract as much parallelism as possible from the designer's hierarchical description.
(ii)The HSDF transformation (Figure 7) transforms the SDF model into an HSDF model in which the number of tokens exchanged on each edge is homogeneous (production = consumption). This model reveals all the potential parallelism in the application but dramatically increases the number of vertices in the graph.
(iii)The internalization transformation is an efficient clustering method minimizing the number of vertices in the graph without decreasing the potential parallelism in the application.
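The SDF-to-HSDF expansion in item (ii) can be sketched for delay-free edges: each vertex v is duplicated q(v) times and every produced token gets a single-rate channel to the firing that consumes it. This is a hypothetical illustration, not SDF4J's implementation:

```python
# Toy SDF-to-HSDF expansion for delay-free edges: duplicate each vertex
# according to the repetition vector, then route token i (produced by
# source firing i // prod) to sink firing i // cons.

def to_hsdf(repetitions, edges):
    """repetitions: {vertex: q}; edges: (src, snk, prod, cons) tuples."""
    vertices = [(v, k) for v, q in repetitions.items() for k in range(q)]
    hsdf_edges = []
    for src, snk, prod, cons in edges:
        total = repetitions[src] * prod          # = repetitions[snk] * cons
        for token in range(total):
            hsdf_edges.append(((src, token // prod), (snk, token // cons)))
    # keep one single-rate edge per (producer firing, consumer firing) pair
    return vertices, sorted(set(hsdf_edges))

verts, es = to_hsdf({"A": 3, "B": 2}, [("A", "B", 2, 3)])
print(len(verts), len(es))
```

For the 2-vertex example, the 2 original vertices become 5 firings connected by 4 single-rate edges, which illustrates how the transformation exposes parallelism while growing the graph.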
3.4. PREESM: A Complete Framework for Hardware and Software Codesign
As can be seen in Figure 8, prior to entering the scheduling phase, the algorithm goes through three transformation steps: the hierarchy flattening transformation, the HSDF transformation, and the DAG transformation (see Section 3.3.2). These transformations prepare the graph for the static scheduling and are provided by the Graph Transformation Module (see Section 4.1). Subsequently, the DAG (the converted SDF graph) is processed by the scheduler. As a result of the deployment by the scheduler, code is generated and a Gantt chart of the execution is displayed. The generated code consists of scheduled function calls, synchronizations, and data transfers between cores. The functions themselves are handwritten.
The plug-ins of the PREESM tool implement the rapid prototyping tasks that a user can add to the workflows. These plug-ins are detailed in the next section.
4. The Current Features of PREESM
4.1. The Graph Transformation Module
In order to generate an efficient schedule for a given algorithm description, the application defined by the designer must be transformed. The purpose of this transformation is to reveal the potential parallelism of the algorithm and to simplify the work of the task scheduler. To provide the user with flexibility while optimizing the design, all the graph transformations provided by the SDF4J library can be instantiated in a workflow, with parameters allowing the user to control each of the three transformations. For example, the hierarchy flattening transformation can be configured to flatten a given number of hierarchy levels (depth) in order to keep some of the user's hierarchical constructions and to maintain the number of vertices to schedule at a reasonable level. The HSDF transformation provides the scheduler with a graph of high potential parallelism, as all the vertices of the SDF graph are repeated according to the SDF graph's basic repetition vector. Consequently, the number of vertices to schedule is larger than in the original graph. The clustering transformation prepares the algorithm for the scheduling process by grouping vertices according to criteria such as strong connectivity or strong data dependency between vertices. The grouped vertices are then transformed into a hierarchical vertex, which is treated as a single vertex in the scheduling process. This vertex grouping reduces the number of vertices to schedule, speeding up the scheduling process. The user can freely combine the available transformations in a workflow in order to control the optimization criteria for the targeted application and architecture.
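The depth parameter of the flattening transformation can be illustrated with a toy sketch over a nested structure (purely hypothetical; SDF4J's actual classes and edge handling differ, and edges are omitted here for brevity):

```python
# Depth-limited hierarchy flattening: a hierarchical vertex holds a
# subgraph, and flattening replaces it by its children down to the
# requested depth. Atomic actors (subgraph None) are kept as-is.

def flatten(graph, depth):
    """graph: {vertex: subgraph-or-None}; returns the flattened vertex set."""
    if depth == 0:
        return set(graph)
    result = set()
    for vertex, subgraph in graph.items():
        if subgraph is None:                    # atomic actor
            result.add(vertex)
        else:                                   # expand one hierarchy level
            result |= {f"{vertex}/{child}"
                       for child in flatten(subgraph, depth - 1)}
    return result

top = {"Filter": {"FIR": None, "Decimate": None}, "FFT": None}
print(sorted(flatten(top, 1)))  # → ['FFT', 'Filter/Decimate', 'Filter/FIR']
```

With depth 0 the designer's hierarchy is preserved; larger depths expose more vertices (and thus more potential parallelism) to the scheduler, which is the trade-off described above.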
As can be seen in the workflow displayed in Figure 8, the graph transformation steps are followed by the static scheduling step.
4.2. The PREESM Static Scheduler
Scheduling consists of statically distributing the tasks that constitute an application between the available cores of a multicore architecture while minimizing parameters such as final latency. This problem has been proven to be NP-complete. A static scheduling algorithm is usually described as a monolithic process that carries out two distinct functionalities: choosing the core to execute a specific function and evaluating the cost of the generated solutions.
The PREESM scheduler splits these functionalities into three submodules, which share minimal interfaces: the task scheduling, the edge scheduling, and the Architecture Benchmark Computer (ABC) submodules. The task scheduling submodule produces a scheduling solution for the application tasks mapped onto the architecture cores and then queries the ABC submodule to evaluate the cost of the proposed solution. The advantage of this approach is that any task scheduling heuristic may be combined with any ABC model, leading to many different scheduling possibilities. For instance, an ABC minimizing the deployment memory or energy consumption can be implemented without modifying the task scheduling heuristics.
The interface offered by the ABC to the task scheduling submodule is minimal. The ABC gives the number of available cores, receives a deployment description and returns costs to the task scheduling (infinite if the deployment is impossible). The time keeper calculates and stores timings for the tasks and the transfers when necessary for the ABC.
The ABC needs to schedule the edges in order to calculate the deployment cost. However, it is not designed to make any deployment choices; this task is delegated to the edge scheduling submodule. The router in the edge scheduling submodule finds potential routes between the available cores.
4.2.1. Scheduling Heuristics
Three algorithms are currently implemented; all are modified versions of previously published algorithms.
(i)A list scheduling algorithm schedules tasks in the order dictated by a list constructed from estimating a critical path. Once a mapping choice has been made, it will never be modified. This algorithm is fast but has limitations due to this last property. List scheduling is used as a starting point for other refinement algorithms.
(ii)The FAST algorithm is a refinement of the list scheduling solution which uses probabilistic hops. It changes the mapping choices of randomly chosen tasks; that is, it assigns these tasks to other processing units. It runs until stopped by the user and keeps the best latency found. The algorithm is multithreaded to exploit the multicore parallelism of the host computer.
(iii)A genetic algorithm is coded as a refinement of the FAST algorithm. The n best solutions of FAST are used as the base population for the genetic algorithm. The user can stop the processing at any time while retaining the last best solution. This algorithm is also multithreaded.
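The list scheduling principle in item (i) can be sketched as follows. This is a deliberately simplified illustration, not the PREESM heuristic: transfer costs are ignored, whereas in PREESM they are evaluated by the ABC submodule.

```python
# Greedy list scheduling: visit tasks in a precomputed priority order
# (e.g. by critical path) and map each one to the core on which it
# would finish earliest. Mapping choices are never revisited.

def list_schedule(ordered_tasks, durations, deps, n_cores):
    core_free = [0] * n_cores       # time at which each core becomes idle
    finish = {}                     # task -> finish time
    mapping = {}                    # task -> chosen core
    for task in ordered_tasks:
        ready = max((finish[d] for d in deps.get(task, [])), default=0)
        core = min(range(n_cores),
                   key=lambda c: max(core_free[c], ready) + durations[task])
        start = max(core_free[core], ready)
        finish[task] = start + durations[task]
        core_free[core] = finish[task]
        mapping[task] = core
    return mapping, max(finish.values())

tasks = ["A", "B", "C", "D"]
durs = {"A": 2, "B": 3, "C": 3, "D": 1}
deps = {"C": ["A"], "D": ["B", "C"]}
print(list_schedule(tasks, durs, deps, 2))
```

Because each mapping choice is final, the result can be suboptimal, which is why FAST and the genetic algorithm above are used to refine it.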
4.2.2. Scheduling Architecture Model
The current architecture representation was driven by the need to accurately model multicore architectures and hardware coprocessors with intercore message-passing communication. This communication is handled in parallel with the computation by Direct Memory Access (DMA) modules. The model is currently used to closely simulate the Texas Instruments TMS320TCI6487 processor (see Section 5.3.2). It will soon be extended to shared-memory communications and more complex interconnections. The term operator represents either a processor core or a hardware coprocessor. Operators are linked by media, each medium representing a bus and its associated DMA. The architectures can be either homogeneous (with all operators and media identical) or heterogeneous. For each medium, the user defines a DMA set up time and a bus data rate. As shown in Figure 9, the architecture model is only processed in the scheduler by the ABC, and not by the heuristic and edge scheduling submodules.
4.2.3. Architecture Benchmark Computer
The ABC currently provides three latency models with increasing levels of precision:
(i)The loosely-timed model takes into account task and transfer times but no transfer contention.
(ii)The approximately-timed model associates each intercore communication medium with its constant rate and simulates contentions.
(iii)The accurately-timed model adds set up times which simulate the duration necessary to initialize a parallel transfer controller like the Texas Instruments Enhanced Direct Memory Access (EDMA). This set up time is scheduled on the core which sends the transfer.
The task and architecture properties feeding the ABC submodule are evaluated experimentally, and include media data rates, set up times, and task timings. ABC models evaluating parameters other than latency are planned in order to minimize memory size, memory accesses, cadence (i.e., average runtime), and so on. Currently, only latency is minimized, due to a limitation of the list scheduling algorithms: these costs cannot be evaluated on partial deployments.
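The difference between the three fidelity levels can be illustrated for intercore transfers. The figures and structure below are made up for illustration; the real ABC also routes transfers via the edge scheduling submodule.

```python
# Hedged sketch of ABC transfer costing: loosely-timed ignores contention
# (concurrent transfers overlap freely); approximately-timed serializes
# transfers on the shared medium; accurately-timed additionally charges
# a DMA set up time per transfer.

def transfer_finish_times(transfers, rate, setup, model):
    """transfers: list of (release_cycle, n_bytes); returns finish cycles."""
    medium_free = 0
    finishes = []
    for release, n_bytes in transfers:
        duration = n_bytes / rate
        if model == "accurately-timed":
            duration += setup                  # DMA initialization cost
        if model == "loosely-timed":
            start = release                    # no contention modelled
        else:
            start = max(release, medium_free)  # wait for the shared medium
            medium_free = start + duration
        finishes.append(start + duration)
    return finishes

# two simultaneous 1 KiB transfers on a 4-bytes/cycle medium, 100-cycle set up
print(transfer_finish_times([(0, 1024), (0, 1024)], 4.0, 100, "loosely-timed"))
print(transfer_finish_times([(0, 1024), (0, 1024)], 4.0, 100, "accurately-timed"))
```

The loosely-timed model reports both transfers finishing together, while the contention-aware models push the second transfer back, which is why the model choice changes the architecture verdict in Section 5.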
4.2.4. Edge Scheduling Submodule
When a data block is transferred from one operator to another, transfer tasks are added and then mapped to the corresponding medium. A route is associated with each edge carrying data from one operator to another, which possibly may go through several other operators. The edge scheduling submodule routes the edges and schedules their route steps. The existing routing process is basic and will be developed further once the architecture model has been extended. Edge scheduling can be executed with different algorithms of varying complexity, which results in another level of scalability. Currently, two algorithms are implemented:
(i)the simple edge scheduler follows the scheduling order given by the task list provided by the list scheduling algorithm;
(ii)the switching edge scheduler reuses the task switcher algorithm discussed in Section 4.2.1 for edge scheduling. When a new communication edge needs to be scheduled, the algorithm looks for the earliest hole of appropriate size in the medium schedule.
The scheduler framework enables the comparison of different edge scheduling algorithms using the same task scheduling submodule and architecture model description. The main advantage of the scheduler structure is the independence of scheduling algorithms from cost type and benchmark complexity.
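The "earliest hole" search used by the switching edge scheduler can be sketched as follows (hypothetical structure; the real scheduler operates on medium schedules produced by the router):

```python
# Find the earliest start time at which a transfer of the given duration
# fits into a medium schedule, given the already-scheduled busy intervals.

def earliest_hole(busy_intervals, duration, not_before=0):
    """busy_intervals: sorted, non-overlapping (start, end) pairs."""
    candidate = not_before
    for start, end in busy_intervals:
        if candidate + duration <= start:   # fits in the gap before this interval
            return candidate
        candidate = max(candidate, end)
    return candidate                        # after the last scheduled transfer

busy = [(0, 10), (25, 30), (32, 50)]
print(earliest_hole(busy, 5))   # → 10 (fits in the 10-25 gap)
```

Reusing holes left earlier in the medium schedule can shorten routes compared with the simple scheduler, at the cost of a slightly more expensive search.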
4.3. Generating a Code from a Static Schedule
PREESM currently supports the C64x and C64x+ based processors from Texas Instruments with the DSP-BIOS Operating System and the x86 processors with the Windows operating system. The supported intercore communication schemes include TCP/IP with sockets, the Texas Instruments EDMA3, and the RapidIO link.
Depending on the type of medium between the operators in the PREESM architecture model, the XSLT transformation generates calls to the appropriate predefined communication library. Specific code libraries have been developed to manage the communications and synchronizations between the target cores.
5. Rapid Prototyping of a Signal Processing Algorithm from the 3GPP LTE Standard
The framework functionalities detailed in the previous sections are now applied to the rapid prototyping of a signal processing application from the 3GPP LTE radio access network physical layer.
5.1. The 3GPP LTE Standard
5.2. The RACH Preamble Detection
The RACH is a contention-based uplink channel used mainly for initial transmission requests from the UE to the eNodeB when connecting to the network. A UE seeking connection with a base station sends its signature in a dedicated RACH preamble time and frequency window, in accordance with a predefined preamble format. Signatures have special autocorrelation and intercorrelation properties that maximize the ability of the eNodeB to distinguish between different UEs. The RACH preamble detection procedure implemented in the LTE eNodeB can detect and identify each user's signature and is dependent on the cell size and the system bandwidth. It is assumed here that the eNodeB must handle the processing of this RACH preamble detection every millisecond, in a worst case scenario.
(1)After the cyclic prefix removal, the preprocessing (Preproc) function isolates the RACH bandwidth, by shifting the data in frequency and filtering it with downsampling. It then transforms the data into the frequency domain.
(2)Next, the circular correlation (CirCorr) function correlates data with several prestored preamble root sequences (or signatures) in order to discriminate between simultaneous messages from several users. It also applies an IFFT to return to the temporal domain and calculates the energy of each root sequence correlation.
(3)Then, the noisefloor threshold (NoiseFloorThr) function collects these energies and estimates the noise level for each root sequence.
(4)Finally, the peak search (PeakSearch) function detects all signatures sent by the users in the current time window. It additionally evaluates the transmission timing advance corresponding to the approximate user distance.
In general, depending on the cell size, three parameters of RACH may be varied: the number of receive antennas, the number of root sequences, and the number of times the same preamble is repeated. The 115 km cell case implies 4 antennas, 64 root sequences, and 2 repetitions.
5.3. Architecture Exploration
5.3.1. Algorithm Model
The goal of this exploration is to determine, through simulation, the architecture best suited to the 115 km cell RACH-PD algorithm. The RACH-PD algorithm behavior is described as an SDF graph in PREESM. A static deployment enables static memory allocation, thus removing the need for runtime memory administration. The algorithm can easily be adapted to different configurations by tuning the HSDF parameters. Using the same approach as in , a valid schedule derived from the representation in Figure 16 can be described by the compact expression: (8Preproc)(4(64(InitPower(2((SingleZCProc)(PowAcc))))PowAcc))(64NoiseFloorThreshold)(PeakSearch).
We can separate the preamble detection algorithm into 4 steps:
(1)preprocessing step: (8Preproc),
(2)circular correlation step: (4(64(InitPower (2((SingleZCProc)(PowAcc))))PowAcc)),
(3)noise floor threshold step: (64NoiseFloorThreshold),
(4)peak search step: PeakSearch.
Each of these steps is mapped onto the available cores and will appear in the exploration results detailed in Section 5.3.4. The given description generates 1,357 operations; this does not include the communication operations necessary in the case of multicore architectures. Placing these operations by hand onto the different cores would be extremely time-consuming. As seen in Section 4.2, the PREESM rapid prototyping tool offers automatic scheduling, which avoids the problem of manual placement.
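The 1,357-operation figure can be recovered from the four steps listed above and the 115 km cell parameters (4 antennas, 64 root sequences, 2 repetitions), assuming each named function instance counts as one operation. This short check is illustrative, not part of PREESM:

```python
# Operation count of the RACH-PD description for the 115 km cell case.
antennas, sequences, repetitions = 4, 64, 2

preproc = 8                                   # step 1: (8 Preproc)
# step 2: 4 * (64 * (InitPower + 2 * (SingleZCProc + PowAcc)) + PowAcc)
circ_corr = antennas * (sequences * (1 + repetitions * (1 + 1)) + 1)
noise_floor = sequences                       # step 3: (64 NoiseFloorThreshold)
peak_search = 1                               # step 4: PeakSearch

total = preproc + circ_corr + noise_floor + peak_search
print(total)  # → 1357
```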
5.3.2. Architecture Exploration
5.3.3. Architecture Model
To solve the deployment problem, each operation is assigned an experimental timing (in terms of CPU cycles). These timings are measured with deployments of the actors on a single C64x+. Since the C64x+ is a 32-bit fixed-point DSP core, the algorithms must be converted from floating-point to fixed-point prior to these deployments. The EDMA is modelled as a nonblocking medium (see Section 4.2.2) transferring data at a constant rate and with a given set up time. Assuming the EDMA has the same performance from L2 internal memory to L2 internal memory as the EDMA3 of the TMS320TCI6482, the transfer of N bytes via the EDMA should take approximately transfer(N) = 135 + N/3.375 cycles (at 3.375 bytes per cycle for a 1 GHz core clock). Consequently, in the PREESM model, the average data rate used for simulation is 3.375 GBytes/s and the EDMA set up time is 135 cycles.
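Using the 135-cycle set up time and the 3.375 GBytes/s rate stated above (and assuming a 1 GHz core clock, so the rate is 3.375 bytes per cycle), the modelled EDMA transfer time can be sketched as:

```python
# Modelled EDMA transfer duration in CPU cycles: a fixed DMA set up cost
# plus a constant-rate payload term. Parameter values are those quoted
# for the TCI6482-like EDMA3; the 1 GHz clock is an assumption.

def edma_transfer_cycles(n_bytes, setup_cycles=135, bytes_per_cycle=3.375):
    return setup_cycles + n_bytes / bytes_per_cycle

# e.g. a 6750-byte block: 135 + 2000 = 2135 cycles
print(edma_transfer_cycles(6750))  # → 2135.0
```

Note the fixed 135-cycle term dominates for small blocks, which is why the accurately-timed ABC penalizes deployments with many small transfers.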
5.3.4. Architecture Choice
The experimental timings were measured on code executions using a TMS320TCI6487. The timings feeding the simulation are measured in loops, each calling a single function with the L1 cache activated. For more details about the C64x+ cache, see . This represents the application behavior when local data accesses are ideal and thus leads to an optimistic simulation. The RACH application is well suited to a parallel architecture, as the addition of one core reduces the latency dramatically. Two cores can process the algorithm within a time frame close to the real-time deadline with the loosely-timed and approximately-timed models, but high data transfer contention and a high number of transfers disqualify this solution when the accurately-timed model is used.
The 3-core solution is clearly the best one: its CPU loads (less than 86% with the accurately timed ABC) are satisfactory and do not justify the use of a fourth core, as can be seen in Figure 18. The high data contention in this case study justifies the use of several ABC models: simple models for fast results, and more complex models to dimension the system correctly.
5.4. Code Generation
In the PREESM generated code, the sizes of the statically allocated buffers are 1.65 MBytes for the first core, 1.25 MBytes for the second core, and 200 kBytes for the third core. The asymmetric mode is chosen to fit this memory distribution. As the necessary memory exceeds the internal L2 memory, some buffers are manually placed in external memory and the L2 cache is activated. A memory-minimization ABC in PREESM would help this process, targeting memory objectives while mapping the actors onto the cores.
This paper detailed the functionalities of a rapid prototyping framework comprising the Graphiti, SDF4J, and PREESM tools. The main features of the framework are the generic graph editor, the graph transformation module, the automatic static scheduler, and the code generator. With this framework, a user can describe and simulate a deployment, choose the most suitable architecture for the algorithm, and generate efficient framework code. The framework has been successfully tested on the RACH-PD algorithm from the 3GPP LTE standard: the algorithm, with its 1,357 operations, was deployed on a tri-core DSP, and the simulation was validated by executing the generated code. In the near future, an increasing number of CPUs will be available in complex Systems on Chip. Developing methodologies and tools to efficiently partition code on these architectures is thus an increasingly important objective.
- 2. Pelcat M, Aridhi S, Nezan JF: Optimization of automatically generated multi-core code for the LTE RACH-PD algorithm. Proceedings of the Conference on Design and Architectures for Signal and Image Processing (DASIP '08), November 2008, Brussels, Belgium.
- 3. Piat J, Bhattacharyya SS, Pelcat M, Raulet M: Multi-core code generation from interface based hierarchy. Proceedings of the Conference on Design and Architectures for Signal and Image Processing (DASIP '09), September 2009, Sophia Antipolis, France.
- 4. Pelcat M, Menuet P, Aridhi S, Nezan J-F: Scalable compile-time scheduler for multi-core architectures. Proceedings of the Conference on Design and Architectures for Signal and Image Processing (DASIP '09), September 2009, Sophia Antipolis, France.
- 5. Eclipse Open Source IDE, http://www.eclipse.org/downloads
- 6. Grandpierre T, Sorel Y: From algorithm and architecture specifications to automatic generation of distributed real-time executives: a seamless flow of graphs transformations. Proceedings of the 1st ACM and IEEE International Conference on Formal Methods and Models for Co-Design (MEMOCODE '03), 2003, 123-132.
- 7. OpenMP, http://openmp.org/wp
- 10. The Multicore Association, http://www.multicore-association.org/home.php
- 11. PolyCore Software Poly-Mapper tool, http://www.polycoresoftware.com/products3.php
- 12. Lee EA: Overview of the Ptolemy project. Technical Memorandum, University of California, Berkeley, Calif, USA; 2001.
- 13. Eker J, Janneck JW: CAL language report. University of California, Berkeley, Calif, USA; December 2003.
- 16. Belanovic P: An open tool integration environment for efficient design of embedded systems in wireless communications, Ph.D. thesis. Technische Universität Wien, Wien, Austria; 2006.
- 17. Grandpierre T, Lavarenne C, Sorel Y: Optimized rapid prototyping for real-time embedded heterogeneous multiprocessors. Proceedings of the 7th International Workshop on Hardware/Software Codesign (CODES '99), 1999, 74-78.
- 18. Hsu C-J, Keceli F, Ko M-Y, Shahparnia S, Bhattacharyya SS: DIF: an interchange format for dataflow-based design tools. Proceedings of the 3rd and 4th International Workshops on Computer Systems: Architectures, Modeling, and Simulation (SAMOS '04), 2004, Lecture Notes in Computer Science 3133: 423-432.
- 19. Stuijk S: Predictable mapping of streaming applications on multiprocessors, Ph.D. thesis. Technische Universiteit Eindhoven, Eindhoven, The Netherlands; 2007.
- 20. Theelen BD: A performance analysis tool for scenario-aware streaming applications. Proceedings of the 4th International Conference on the Quantitative Evaluation of Systems (QEST '07), 2007, 269-270.
- 21. Graphiti Editor, http://sourceforge.net/projects/graphiti-editor
- 25. Janneck JW: NL—a network language. ASTG Technical Memo, Programmable Solutions Group, Xilinx; July 2007.
- 26. SPIRIT Schema Working Group: IP-XACT v1.4: a specification for XML meta-data and tool interfaces. The SPIRIT Consortium; March 2008.
- 27. Brandes U, Eiglsperger M, Herman I, Himsolt M, Marshall MS: GraphML progress report, structural layer proposal. Proceedings of the 9th International Symposium on Graph Drawing (GD '01), 2001, Vienna, Austria. Edited by: Mutzel P, Junger M, Leipert S. Springer; 501-512.
- 28. Piat J, Raulet M, Pelcat M, Mu P, Déforges O: An extensible framework for fast prototyping of multiprocessor dataflow applications. Proceedings of the 3rd International Design and Test Workshop (IDT '08), December 2008, Monastir, Tunisia, 215-220.
- 29. W3C XML standard, http://www.w3.org/XML
- 30. W3C XSLT standard, http://www.w3.org/Style/XSL
- 31. Grammatica parser generator, http://grammatica.percederberg.net
- 32. Janneck JW, Esser R: A predicate-based approach to defining visual language syntax. Proceedings of IEEE Symposium on Human-Centric Computing (HCC '01), 2001, Stresa, Italy, 40-47.
- 33. Pino JL, Bhattacharyya SS, Lee EA: A hierarchical multiprocessor scheduling framework for synchronous dataflow graphs. University of California, Berkeley, Calif, USA; 1995.
- 34. Sriram S, Bhattacharyya SS: Embedded Multiprocessors: Scheduling and Synchronization. 1st edition. CRC Press, Boca Raton, Fla, USA; 2000.
- 35. Sarkar V: Partitioning and scheduling parallel programs for execution on multiprocessors, Ph.D. thesis. Stanford University, Palo Alto, Calif, USA; 1987.
- 37. Garey MR, Johnson DS: Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, Calif, USA; 1990.
- 38. Kwok Y-K: High-performance algorithms of compile-time scheduling of parallel processors, Ph.D. thesis. Hong Kong University of Science and Technology, Hong Kong; 1997.
- 39. Ghenassia F: Transaction-Level Modeling with SystemC: TLM Concepts and Applications for Embedded Systems. Springer, New York, NY, USA; 2006.
- 40. TMS320TCI6487 DSP platform, Texas Instruments product bulletin (SPRT405).
- 41. TMS320 DSP/BIOS User's Guide (SPRU423F).
- 42. Feng B, Salman R: TMS320TCI6482 EDMA3 performance. Technical Document, Texas Instruments; November 2006.
- 43. RapidIO, http://www.rapidio.org/home
- 44. The 3rd Generation Partnership Project, http://www.3gpp.org
- 45. Jiang J, Muharemovic T, Bertrand P: Random access preamble detection for long term evolution wireless networks. US patent no. 20090040918.
- 46. 3GPP technical specification group radio access network; evolved universal terrestrial radio access (EUTRA) (Release 8), 3GPP, TS36.211 (V 8.1.0).
- 48. TMS320C64x/C64x+ DSP CPU and Instruction Set Reference Guide. Texas Instruments; February 2008.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.