In 1971, the first mass-produced microprocessor contained 2,300 transistors and could compute 60,000 operations per second at 740 kHz. Nowadays, even a common gaming console uses a central unit with 243 million transistors running at 4 GHz, organized in a 9-core architecture. This exponential evolution, following the famous "Moore's Law", is made possible by innovations in every domain involved in chip design, and it enables the creation of chipsets targeting an ever-wider range of applications. This variety of processors implies a great number of variations in the hardware choices, in order to match the application (general-purpose processor, digital signal processor, application-specific processor…), the performance requirements, and/or other operating constraints such as power consumption.

The applications running on these powerful platforms are also growing exponentially in complexity, relying on more and more sophisticated software. Consequently, design teams face a real challenge when devising and implementing an optimal architectural solution. To ease their task, the trend in computer-aided design is to develop integrated frameworks for simulation and design that provide all the tools needed during the process, following a top-down approach. These tools support system specification by means of abstract concurrent entities down to the final implementation, through successive refinements of the different entities and of the communication among them. In parallel, these hardware solutions run applications of constantly increasing complexity, which calls for more and more intensive specification and validation tasks based on abstract software descriptions, typically referred to as verification models. These models implement all the functionalities of the application; thus, from the end of the specification phase onward, the abstract description of the application in the verification model becomes the most reliable reference for the corresponding application.

All these reasons make an intuitive understanding of the processing complexity difficult and tend to create redesign loops in the design process, when it becomes apparent that the initial architectural choices were sub-optimal (or even wrong) and prevent the system from meeting the desired performance constraints. In this case, a costly re-design iteration targeting a new platform choice is mandatory, introducing delay and cost into the design and causing the product to reach the market too late. Obviously, to eliminate these re-design iterations and to provide guidance as early as possible, the preliminary complexity analysis should be performed before starting the design of the target architecture. Since the first step of the design cycle consists of an algorithm description through a verification model, it is desirable to perform the complexity study on this model. Indeed, though considerable efforts are being devoted to developing powerful co-design tools, by means of high-level abstract descr