# Design Space Exploration in HW/SW Codesign

Modern electronic systems design often confronts sophisticated requirements with a wide range of choices of operating modes, components, interfaces, algorithms, etc. To deal with this problem, the Codesign technique tries to bring the design process to a higher level of abstraction, with the goal of improving design productivity through high-level system modeling and simulation [4]. This approach lets designers balance modeling effort against execution speed. Design space exploration (DSE) has therefore become an essential part of Codesign technology [5], and it is the focus of the following sections. The DSE phase is conducted at the beginning of the development process, where a large variety of different design alternatives is considered. The decisions made after DSE have significant influence on the later phases, and thus heavily affect the success or failure of the final design. However, this process is very time consuming and challenging, since the design space that needs to be explored is typically enormous. The need for efficient and effective DSE approaches has therefore been increasing in recent years [3].

During the DSE phase, the target is multi-objective optimization, where the ambition is to improve performance, power/energy consumption, cost, etc. simultaneously. These objectives are called the parameters of the DSE and differ across systems. Generally, there is a solution that best fits each parameter, as evaluated by a so-called fitness function. Since the parameters are often in conflict, i.e., the enhancement of one quality metric often comes at the deterioration of another, there cannot be a single optimal solution that simultaneously optimizes all objectives [3]. Optimal decisions therefore need to be taken in the presence of trade-offs between different design criteria.
Hence, in the case of multiple objectives, different solutions are typically compared using the Pareto dominance relation, which indicates that some solutions dominate others. An example of a Pareto front is shown in Fig. 4. Here, f1 and f2 are fitness functions evaluated for each solution, with lower values preferred. The set of solutions from A to F belongs to the Pareto front because no other solution is better with respect to both f1 and f2. Solution H is dominated by solutions B, C, and D because their values are lower than H's for both f1 and f2. Similarly, H is a better solution than M, N, and O, which it dominates. Finally, some solutions are not comparable to H, since they score better on one objective but worse on the other. It is therefore up to the designers to decide which solution in the set is the most adequate.

The result of the DSE step is the Pareto-optimal front of solutions, leaving designers with only a few of the best choices for further investigation. An efficient DSE requires both evaluation and navigation approaches [6]. Because design spaces grow exponentially with the number of parameters, an exhaustive search (evaluating every possible design point) is infeasible: the number of design points easily outgrows any practical limit on execution time imposed by the evaluation algorithm or simulator. Therefore, many advanced methods have been proposed for the search problem, which can be classified into four classes [3]. The first class is heuristics and pseudo-random optimization, which attempt to reduce the design space under scrutiny and focus the exploration on regions of interest [7], [8]. The goal of algorithms in this class is to drastically reduce the number of solutions to evaluate (e.g., using the sum instead of the product of the number of tunable parameters [7]).
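The Pareto dominance relation described above can be sketched in a few lines. This is an illustrative example, not part of any cited tool; the point coordinates are toy (f1, f2) values with lower values preferred, mirroring the figure's convention:

```python
# Hypothetical sketch: extracting the Pareto front of candidate design
# points under two minimized fitness functions f1 and f2.

def dominates(a, b):
    """True if point a dominates b: a is no worse in every objective
    and strictly better in at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of (f1, f2, ...) tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy design points as (f1, f2) pairs; (4, 4) and (5, 5) are dominated.
solutions = [(1, 6), (2, 4), (3, 3), (4, 2), (6, 1), (4, 4), (5, 5)]
print(pareto_front(solutions))  # [(1, 6), (2, 4), (3, 3), (4, 2), (6, 1)]
```

The quadratic filter is fine for the small candidate sets a navigator produces per iteration; real frameworks use incremental archives instead.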
For example, they try to identify sub-spaces (called clusters) in the design space that can be explored efficiently in an exhaustive fashion [8]. Subsequently, the global Pareto front can be built from the Pareto-optimal solutions of each partition. Due to their simplicity, the exploration results of the algorithms in this class remain sub-optimal.

The second class is evolutionary algorithms. These are the most common and widely used for the DSE search problem; they iteratively apply random changes to a starting set of solutions in order to improve its Pareto set. A prominent family in this category is Genetic Algorithms (GAs) [9]. The major advantages of these algorithms are the very limited setup effort and the fact that they do not require any specific knowledge about the search space or the fitness function used for the optimization. However, these algorithms are not guaranteed to find the optimal solutions, nor even to converge to results within predefined quality bounds. In practice, however, they usually perform well when run for a sufficiently large number of evaluations.

The third and fourth classes are statistical approaches without and with domain knowledge, respectively. Both try to predict or estimate the most promising solutions for the next step: the former uses metamodels extracted from the design space, while the latter uses probabilistic frameworks built from domain knowledge. Typical algorithms for these two approaches are Design of Experiments [10] and Markov Decision Processes [11]. The major advantage of these approaches is that they greatly reduce the simulation time [3]. Once candidate solutions have been determined, the next step is measuring their fitness with respect to the specification.
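The cluster-based heuristic of the first class can be sketched as follows, under the assumption of two minimized objectives and a hypothetical pre-computed partitioning of the design space; each partition is explored exhaustively and the global front is merged from the per-partition fronts:

```python
# Hypothetical sketch of cluster-based exploration: explore each partition
# exhaustively, then merge the local Pareto fronts into the global front
# (lower objective values are better).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def local_front(points):
    # Exhaustive evaluation of one partition: keep only non-dominated points.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def global_front(partitions):
    # Merging the local fronts and filtering again yields the global front.
    merged = [p for part in partitions for p in local_front(part)]
    return local_front(merged)

clusters = [
    [(1, 9), (2, 7), (3, 8)],   # partition A
    [(2, 6), (4, 4), (5, 5)],   # partition B
    [(6, 2), (7, 1), (8, 3)],   # partition C
]
print(global_front(clusters))  # [(1, 9), (2, 6), (4, 4), (6, 2), (7, 1)]
```

Note that a point can survive its own partition's front yet be dominated globally, as (2, 7) is here by (2, 6); hence the second filtering pass.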
Methods for evaluating the fitness of a single design point can be classified into three categories: 1) measurements on a prototype implementation; 2) simulation-based evaluations; and 3) estimations based on an analytical model. Each of these methods has different advantages. The evaluation of prototype implementations provides the highest accuracy, but its long development time makes it impractical to evaluate many design options. Analytical estimations, in contrast, are the fastest, but their accuracy is limited, since they are typically unable to capture intricate system behavior. Simulation-based evaluation can sit at either extreme: highly accurate (but slower) or fast (but less accurate), depending on the simulation technique [12]. Because of this trade-off, simulation-based fitness evaluation offers the most room for exploitation in the DSE process. In fact, many DSE frameworks use this evaluation method with the support of transaction-level modeling (TLM), e.g., in SystemC.

The design tasks cannot be carried out efficiently without tools, so there is also a need for DSE frameworks that exploit design automation. Most of these frameworks are essential parts of Electronic System-Level (ESL) tools. In the design process, DSE performs the design optimization for a set of fitness functions corresponding to the constraints. At the end of this process, models of the system at various levels of abstraction are generated for the subsequent implementation steps. Commonly, such system models are TLMs described in SystemC. These models can then be simulated or analyzed to provide feedback about the fitness of the generated design. Several advanced and well-designed tools have been developed, namely Metropolis, SystemCoDesigner, Daedalus, PeaCE, and SCE in academia, and CoFluent, Space Codesign, CoWare, and SoC Designer in the commercial field [13].
However, these tools are often limited to specific applications rather than general ones.

For the case study, one tool was investigated to exemplify and confirm the usefulness of HW/SW Codesign. SystemCoDesigner is a system-level design tool developed at the University of Erlangen-Nuremberg, Germany. The environment supports high-level system modeling and simulation, automatic DSE with multiple objectives, and SW and HW synthesis from an abstract model down to the final implementation. SystemCoDesigner receives the target application requirements, a component library, and an architecture template as inputs. The system then performs the multi-objective exploration of the design space automatically. The designers can define the constraints of the search and restrict the possible target architectures through the architecture template. The tool also supports choosing the number and type of available processors, as well as the allowed mappings of actors to processor types. After candidate solutions have been selected by the search, a TLM process is performed for code generation and simulation [13].

SystemCoDesigner uses a GA for exploration navigation. As mentioned in section III.B, a genetic-algorithm-based approach to searching for solutions is reasonable for large, complex designs because of its small setup effort. First, each point in the design space is encoded in a binary representation, often referred to as a chromosome [14]; every solution is assigned a representation of fixed size and layout. The search starts from a small, randomly chosen set of solutions called the population. This population passes through evaluation and simulation, so that every solution is assessed against the fitness parameters. Solutions that score well in this fitness-based procedure are more likely to be selected for the next iterations.
This refined population is then used to generate another set, the so-called child population, through genetic operators such as crossover and mutation. The child population shares many of the characteristics of its parent population. This process is repeated until the algorithm reaches a termination condition (for example, a fixed number of iterations or a fixed run-time). The system has by then assessed several design points and can return the optimal points on the Pareto front [6].

The Motion-JPEG decoder has been implemented as a case study for SystemCoDesigner [15], [16]. The researchers applied the tool to exercise its DSE framework, which resulted in an approximation of a set of Pareto-optimal design points with latency, throughput, and area as constraint parameters. As reported in [15], the DSE process ran for about 2.75 days, and the average simulation time for executing Motion-JPEG streams with 4 QCIF (176×144 pixels) frames was around 30 seconds. The search yielded 366 optimal design points. The simulation and implementation results of several designs are shown in [15]. The differences in latency and throughput are caused by schedule overhead and zero-time simulation, whereas the differences in used area come from post-synthesis optimization and the configuration of the MicroBlaze processors by the Xilinx tool. Nevertheless, the results confirmed the agreement between the tool implementation and the actual one [17].
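The GA loop described above (binary chromosomes, selection, crossover, mutation, fixed iteration budget) can be sketched as follows. This is a minimal single-objective illustration, not SystemCoDesigner's implementation; the fitness function and all parameter values are toy stand-ins for the real simulation-based evaluation:

```python
# Minimal GA sketch: a random population of fixed-length binary chromosomes
# is refined over a fixed number of generations via tournament selection,
# one-point crossover, and bit-flip mutation.
import random

random.seed(0)
CHROMO_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 16, 20, 40, 0.05

def fitness(chromo):
    # Toy objective standing in for simulation: fewer set bits is better.
    return sum(chromo)

def tournament(pop):
    # Fitness-based selection: the better of two random individuals survives.
    a, b = random.sample(pop, 2)
    return a if fitness(a) <= fitness(b) else b

def crossover(p1, p2):
    cut = random.randint(1, CHROMO_LEN - 1)   # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chromo):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in chromo]

population = [[random.randint(0, 1) for _ in range(CHROMO_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(POP_SIZE)]

best = min(population, key=fitness)
print(fitness(best))  # typically close to 0 after enough generations
```

A multi-objective variant such as the one SystemCoDesigner needs would replace the scalar comparison in `tournament` with Pareto dominance and keep an archive of non-dominated individuals.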