
Saturday, October 18, 2008

Verilog programming-language-interface primer

by Swapnajit Mittra, SGI -- EDN
Wednesday 21 August 2002
Swapnajit Mittra, SGI -- from EDN, 2/9/1999
Designers have employed HDLs for more than a decade, using them to replace a schematic-based design methodology and to convey design ideas. Verilog and VHDL are the two most widely used HDLs for electronics design. Verilog has approximately 35,000 active designers who have completed more than 50,000 designs using the Cadence Verilog software suite. Even with Verilog's success, many seasoned Verilog users still perceive its programming-language interface (PLI) as a "software task" (Reference 1). A step-by-step approach helps you "break the ice" when writing PLI functions. By learning the essentials of PLI design without getting bogged down by too many details, you will acquire a basic knowledge of PLI that you can immediately use.

Why should you use a PLI?

A PLI gives you an application-program interface (API) to Verilog. Essentially, a PLI is a mechanism that invokes a C function from Verilog code. The construct that invokes a PLI routine in Verilog is usually called a "system task" or "system function" if it is part of a simulator and a "user-defined task" or "user-defined function" if the user writes it. Because the essential mechanism for a PLI remains the same in both cases, this article uses the term "system call" for both constructs. Examples of common system calls that most Verilog simulators include are $display, $monitor, and $finish.

You use a PLI primarily for tasks that would otherwise be impossible to do using Verilog syntax. For example, IEEE Standard 1364-1995 Verilog has a predefined construct for writing to a file ($fwrite, which is itself a built-in system call written using the PLI), but it does not have one for reading a register value directly from a file (Reference 2). More common tasks for which the PLI is the only way to achieve the desired results include writing functional models, calculating delays, and getting design information. (For example, no Verilog construct gives the instance name of the parent of the current module in the design hierarchy.)

To illustrate the basic steps for creating a PLI routine, consider the problem in Listing 1. This problem is much simpler than the real-life problems you solve using the PLI, but it shows many of the basic steps you use to build a PLI routine. When you run the Verilog in the listing, it should print the value of the register as 10 at time 100 and 3 at time 300. You can think of creating a PLI routine as a two-step process: First, you write the PLI routine in C; then, you compile and link this routine to the simulator's binary code.

Writing a PLI routine

The way a PLI routine interfaces with the simulator varies from simulator to simulator, although the main functions remain the same. This article discusses the interfacing mechanisms of the two most popular commercial simulators, Cadence's Verilog-XL (Reference 3) and Synopsys' VCS (Reference 4). Although other commercial simulators support the PLI, their interfacing mechanisms do not differ significantly from these two. Over the years, the Verilog PLI has evolved into PLI 1.0 and the Verilog Procedural Interface (VPI). This article covers only PLI 1.0. Despite the differences in interfacing parts and versions, you can break down the creation of a PLI routine into four main steps.

Step 1: Include the header files

By convention, a C program implements a PLI routine in a file veriuser.c. Although you can change this name, the vconfig tool assumes this default name while generating the compilation script in a Verilog-XL environment.
For now, assume that you keep the PLI routine in the file veriuser.c. In a Verilog-XL environment, the file veriuser.c must start with the following lines:

#include <veriuser.h>
#include <acc_user.h>

In a VCS environment, the file must start with:

#include <vcsuser.h>

These header files contain the most basic data structures of the Verilog PLI that the program will use.

Step 2: Declare the function prototypes and variables

A PLI routine consists of several functions. Just as you would for a normal C program, you should place the prototype declarations for the functions before the function definitions. For this case, the declaration appears as:

int my_calltf(), my_checktf();

The int in the declaration implies that these functions return an integer at the end of their execution. If there is no error, the normal return value is 0. If the functions are in separate files, however, you should declare them as external functions with:

extern int my_calltf(), my_checktf();

A typical PLI routine, like any other C program, may also need a few other housekeeping variables.

Step 3: Set up the essential data structures

You must define a number of data structures in a PLI program. A Verilog simulator communicates with the C code through these variables. Open Verilog International (OVI), an organization for standardizing Verilog, specifies only one mandatory data structure, veriusertfs. However, the exact number and syntax of these data structures vary from simulator to simulator. For example, Verilog-XL requires four such data structures and their functions for any PLI routine to work; VCS needs none of them and instead uses a separate input file.

The main interfacing data structure for Verilog-XL is an array of structures, or table, called veriusertfs. Verilog-XL uses this table to determine the properties associated with the system calls that correspond to this PLI routine. The simulator does not recognize any name other than veriusertfs. Each element of veriusertfs has a distinct function, and you need all these functions to achieve the overall objective of correctly writing the PLI routine. The number of rows in veriusertfs is the same as the number of user-defined system calls plus one for the last entry, which is a mandatory 0. In this case, the veriusertfs array should look like:

s_tfcell veriusertfs[] = {
    {usertask, 0, my_checktf, 0, my_calltf, 0, "$print_reg"},
    {0} /* Final entry must be zero */
};

The first entry, usertask, indicates that the system call does not return anything. It is equivalent to a procedure in Pascal or a function returning void in C. In the previous data structure, my_checktf and my_calltf are the names of the two functions that you use to implement the system call $print_reg. These names are arbitrary, and you can replace them with other names. The function my_checktf is generally known as a checktf routine; it checks the validity of the passed parameters. Similarly, my_calltf, usually called the calltf routine, performs the main task of the system call. The positions of these names in veriusertfs are very important. For example, if you want to use my_checktf or any other name as a checking function, it must be the third element. The veriusertfs table provides slots for a few other user-defined functions that you will not use in this routine. A zero replaces any function that you do not use. Therefore, the second, fourth, and sixth elements in the entry are zeroes. Table 1 summarises the objectives of each entry in a row of veriusertfs.
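Putting Steps 1 through 3 together, the top of a Verilog-XL veriuser.c might look something like the following sketch (the header names, the field labels in the comment, and the exact set of housekeeping declarations are conventions of PLI 1.0 and may vary with the simulator installation; only $print_reg, my_checktf, and my_calltf come from the example itself):

/* veriuser.c -- sketch of the declarations from Steps 1 through 3 (Verilog-XL flavor) */
#include <veriuser.h>        /* TF utility routines: tf_nump(), tf_getp(), io_printf(), ... */
#include <acc_user.h>        /* ACC routines (not needed by $print_reg, but customarily included) */

int my_checktf();            /* validates the parameters passed to $print_reg */
int my_calltf();             /* does the real work when $print_reg executes   */

/* One row per user-defined system call, terminated by a zero entry.          */
/* Field order: type, data, checktf, sizetf, calltf, misctf, name.            */
s_tfcell veriusertfs[] = {
    {usertask, 0, my_checktf, 0, my_calltf, 0, "$print_reg"},
    {0}                      /* final entry must be zero */
};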
If there are additional system calls, you need to define a separate entry in veriusertfs for each one. Verilog-XL also needs the following variables or functions:

char *veriuser_version_str;
int (*endofcompile_routines[])();
bool err_intercept();

The first variable, veriuser_version_str, is a string indicating the application's user-defined version information. A bool (Boolean) variable is an integer subtype with permitted values 0 and 1. In most cases, you can use the Cadence-supplied default values for these variables or functions.

In VCS, instead of a table, you supply the equivalent information in a separate file, usually called pli.tab. This name, too, is user-defined. In the current example using $print_reg, the contents of this file are:

$print_reg check=my_checktf call=my_calltf

Step 4: The constituent functions

With the prototype declarations in place, you are ready to write the two functions, my_checktf() and my_calltf(), which constitute the main body of the PLI application.

As previously discussed, a checktf routine checks the validity of the passed parameters. It is good practice to check whether the total number of parameters is the same as you expect and whether each parameter is of the required type. For example, in this case, you expect the program to pass only one parameter, of type register. You do this task in the function my_checktf() (Listing 2).

The functions starting with tf_ are library functions, commonly known as utility routines. The library functions used in my_checktf() are tf_nump(), which determines how many parameters you are passing, and tf_typep(), which determines a parameter's type by its position in the system call in the Verilog code. In this case, the system call is $print_reg. Thus, tf_typep(1) gives the type of the first parameter, tf_typep(2) gives the type of the second parameter, and so on. If the parameter does not exist, tf_typep() returns an error. (In this case, tf_typep(2) does not exist.) In the current example, you expect the program to pass a register value as a parameter; therefore, the type should be tf_readwrite. If a wire is the expected parameter, the type should be tf_readonly. To facilitate error-condition checking, it is a good idea to first check the number of parameters and then to check their types. The tf_error() function prints an error message and signals the simulator to increment its error count. These library functions and constants are part of the header files that you included at the top of the file in Step 1.
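Listing 2 itself is not reproduced in this post; a minimal sketch of my_checktf(), consistent with the description above, might look like this (the error-message wording is an assumption of the sketch):

int my_checktf()
{
    /* $print_reg expects exactly one parameter, and it must be a register */
    /* (type tf_readwrite) so that its value can be read and later changed. */
    if (tf_nump() != 1)
        tf_error("$print_reg expects exactly one argument\n");
    else if (tf_typep(1) != tf_readwrite)
        tf_error("$print_reg: the argument must be a register\n");
    return 0;
}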
A calltf function is the heart of the PLI routine. It usually contains the main body of the PLI routine. In this case, it should read the value of the register and then print this value. The following code shows how you can accomplish this job:

int my_calltf()
{
    io_printf("$print_reg: Value of the reg=%x at time=%d\n", tf_getp(1), tf_gettime());
}

In the above code, io_printf() does the same job as printf() in C, printing the value to standard output. Additionally, the function prints the same information to the Verilog log file. The function tf_getp() gets the register's integer value. The function tf_gettime() returns the current simulation time and needs no input parameter. The calltf is typically the most complicated function in a PLI routine; it often runs to several hundred lines.

Now that you have created the two main functions, you need to put them together in one place. Listing 3 shows how to implement $print_reg in a Verilog-XL environment.

The next task is making the Verilog simulator aware of your new system call. To accomplish this task, you must compile the PLI routine and then link it to the simulator's binary. Although you can manually integrate the PLI code with the Verilog binary by running a C compiler and merging the object files, it is more convenient to use a script. In Verilog-XL, a program called vconfig generates this script; the default name for the script is cr_vlog. While generating the script, vconfig asks for the name you prefer for the compiled Verilog executable. It also asks whether to include model libraries, which are PLI code, from standard vendors. For most of these questions, the default answer, which you get by pressing Return without entering anything, is good enough unless you have a customized environment. At the end, vconfig asks for the path to your veriuser.c file. Once you generate the script cr_vlog, you need only run it to produce a customized Verilog simulator that can execute the new system call. A sample run of the compiled Verilog using this PLI routine produces the output shown in Listing 4 with a Verilog-XL simulator.

Modifying the value of a register

Another use of a PLI is to read design information and to modify that information in the design database. The following example shows how you can make such a modification. Create a system call, $invert, that prints the value of a register (as in the previous example), bitwise-inverts the content, and prints the updated value. Many ways exist to invert the value of a binary number. Using a straightforward algorithm to do the inversion, as the following list shows, helps you gain more insight into the PLI mechanism and library functions:

1. read the content of the register as a string;
2. convert all ones in the string to twos, convert all zeros to ones, convert all twos to zeros, convert all Z's to X's, and leave X's intact; and
3. put the modified string back into the register.

The second step, in effect, converts all ones to zeros and all zeros to ones. Note that the checktf function in this case does not differ from the earlier one, because in both cases the number and type of the input parameter are the same.

Creating the PLI routine

Listing 5 shows the implementation of the $invert routine in a VCS environment. This program uses the following library functions:

tf_strgetp() returns the content of the register as a string. (tf_getp(), by contrast, reads the register's content as a decimal number.) In Listing 5, tf_strgetp(1, 'b') reads the content of the first parameter in binary format. (An 'h' or 'o', in place of the 'b', reads it in hexadecimal or octal format, respectively.) The routine then copies the content to a string, val_string.

tf_strdelputp() writes a value to a parameter from a string-value specification. This function takes a number of arguments, including the parameter index number (the relative position of the parameter in the system call, starting with 1 for the parameter on the far left); the size of the string whose value the parameter contains (in this case, it should be the same size as the register); the encoding radix; the actual string; and two other delay-related arguments, which you ignore by passing zeros for them. Although not shown in the current example, a simpler decimal counterpart of tf_strdelputp() is tf_putp().
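Listing 5 is likewise not reproduced here. A rough sketch of the calltf routine, together with the move() helper described in the next paragraph, might look like the following; the buffer size, the string handling, and the treatment of Z and X are assumptions of this sketch, while invert_calltf, tf_strgetp(), tf_strdelputp(), io_printf(), and tf_gettime() come from the article:

#include <string.h>          /* the PLI header from Step 1 is assumed to be included as well */

/* In the string "str", replace every occurrence of character "from" with "to". */
static void move(char from, char to, char *str)
{
    for (; *str; str++)
        if (*str == from)
            *str = to;
}

int invert_calltf()
{
    char val_string[1024];                         /* assumed large enough for this example */

    strcpy(val_string, tf_strgetp(1, 'b'));        /* 1. read the register as a binary string */
    io_printf("$invert: Modifying the content from %s ", val_string);

    move('1', '2', val_string);                    /* 2. mark the original ones ...           */
    move('0', '1', val_string);                    /*    ... zeros become ones ...            */
    move('2', '0', val_string);                    /*    ... marked ones become zeros         */
    move('z', 'x', val_string);                    /*    ~Z is X; X stays X                   */
    move('Z', 'X', val_string);

    io_printf("to %s at time %d\n", val_string, tf_gettime());

    /* 3. put the modified string back into the register (delay arguments are zero) */
    tf_strdelputp(1, (int)strlen(val_string), 'b', val_string, 0, 0);
    return 0;
}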
It is important to remember that a program can change or overwrite only the values of certain objects from a PLI routine. Any attempt to change a variable that you cannot put on the left-hand side of a procedural assignment in Verilog, such as a wire, results in an error. In general, you can modify the contents of a variable of type tf_readwrite. The checktf function my_checktf() checks for this.

One additional user-defined function that Listing 5 uses is move(), which has three parameters. In the third parameter, a string, it replaces every occurrence of the first parameter with the second parameter. In the current case, a series of calls to move() changes the zeros to ones and the ones to zeros. However, once the program converts zeros to ones, it must distinguish the converted ones in the string from the original ones. To make this distinction, the program first changes all the original ones to twos. A two is an invalid value for binary logic, but you can use it in an intermediate step for this example. At the end, the program converts all twos back to zeros. The program completes its task by putting the inverted value back into the register using the tf_strdelputp() function.

Listing 6 shows a sample Verilog program containing the call $invert. In a VCS environment, a file lists the functions associated with a PLI routine. Although this file can have any name, it is usually called pli.tab. This file is equivalent to the veriusertfs[] structure in Verilog-XL. Its contents in the current example are:

$invert call=invert_calltf check=my_checktf

After saving the Verilog program in the file test.v, you generate an executable binary, pli_invert, with the command:

$ vcs -o pli_invert -P pli.tab test.v veriuser.c

Executing pli_invert, you get:

$invert: Modifying the content from 00001111 to 11110000 at time 100
$finish at simulation time 200

You now know the basic structure of a PLI routine and the mechanism for linking it to build a custom version of Verilog. You can also read, convert, and modify the values of the elementary components of a design database in Verilog through a PLI routine, and you can read the simulation time from within the routine. These are the basic and often most necessary tasks for a PLI routine to carry out. Considering all that the PLI offers, this information is just the tip of the iceberg. You can also use a PLI to access design information other than register contents. (For more information about Verilog and C modeling, see References 5 and 6.)

A short history of Verilog PLI

Verilog started as a proprietary product from Gateway Design Automation, a company that Cadence subsequently bought. According to people close to the project at the time, the requirements for a programming-language interface (PLI) in Verilog came up early during designs. One of the problems facing them was that a single workstation could not cope with the simulation load of an entire design, so load balancing among multiple workstations soon became a necessity. Designers used the PLI as a tool to solve this problem.

The first generation of PLI routines, called TF or tf_ routines, was designed to work only with the parameters passed to them. It soon became apparent that you could not pass all design parameters as software parameters. For example, you could not pass a delay path between two modules as a software parameter. As a result, more sophisticated versions of the PLI library functions emerged. These functions, known as access or acc_ routines, extended the PLI to cover a variety of design objects while keeping the user interface as simple as possible. Access routines did a good job, but in keeping the user interface simple, the interface became inconsistent. In 1995, Cadence came up with the third generation of the PLI for Verilog, the Verilog Procedural Interface (VPI). All three generations of the PLI are part of IEEE Standard 1364-1995.
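For a small taste of the access routines mentioned above, the following sketch shows how a calltf routine might retrieve design information that plain Verilog cannot provide, such as the parent of the module containing the call. This example is not from the article: the $show_parent task and the particular chain of acc_ calls are assumptions based on the standard PLI 1.0 ACC routines.

#include <veriuser.h>
#include <acc_user.h>

/* Sketch: calltf for a hypothetical $show_parent(object) system call. */
int show_parent_calltf()
{
    handle obj, mod, parent;

    acc_initialize();                        /* set up the ACC environment           */
    obj    = acc_handle_tfarg(1);            /* the object passed to $show_parent    */
    mod    = acc_handle_parent(obj);         /* the module instance containing it    */
    parent = acc_handle_parent(mod);         /* the parent of that module instance   */

    if (parent)
        io_printf("parent instance: %s\n", acc_fetch_fullname(parent));
    else
        io_printf("%s is a top-level module\n", acc_fetch_fullname(mod));

    acc_close();                             /* release ACC resources                */
    return 0;
}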

Thursday, October 16, 2008

Verilog coding examples chapters

http://iroi.seu.edu.cn/books/asics/Book/CH11/CH11.07.htm

Wednesday, October 15, 2008

8 Cs of Knowledge Management

http://www.techsparks.com/The-8-Cs-of-KM-success-Learning-from-the-IT-sector.html

writing PLI

http://www.asic-world.com/verilog/pli2.html#Writing_PLI_Application

Re-Use of Verification Environment for Verification of Memory Controller

Aniruddha Baljekar, NXP Semiconductors India Pvt. Ltd.
Bangalore, INDIA

Abstract:

With the complexity of designs on the rise, functional-verification coverage is one of the growing challenges faced by design and verification teams. Reducing verification time without compromising the quality of verification is the greatest challenge for verification engineers, and improving the verification process is critical to improving time to market. To achieve this, re-use of the verification environment across different levels is the way forward. Re-use of the verification environment can be achieved at the following levels:


Reuse with different IPs
Reuse at different levels of integration
Reuse at different levels of abstraction (SystemC (PV/PVT), RTL)
The paper covers the following:

Methodology adopted to address the re-use of the test environment/testbench at unit level across testing of highly abstract level models (modeled in SystemC) and RTL models (modeled in VHDL/Verilog)
Methodology and challenges faced in unit-level verification of complex TLM (SystemC) models (memory controllers: external static memory controller and external NAND flash controller) using this re-use methodology
Methodology for creating verification IPs that provide interfaces enabling their re-use at these different levels of abstraction
The following tools are used in this methodology:

Specman Elite as the functional verification tool which supports coverage driven verification methodology
vManager for plan driven verification
Scenario builder to create specific scenarios.
eVCs as the verification IP’s
1. Introduction

The paper describes a methodology for re-use of a unit-level verification framework across different levels of abstraction. It also describes how to create a new eVC, or extend an existing eVC, to provide a TLM interface. The inputs to this paper are based on experience from verification of the memory controller (re-using the RTL verification components in verification of the TLM model).

2. Motivation

Re-use of the coverage-driven verification framework and environment is very important for reducing verification time. With the challenge of building a TLM model in line with time-to-market pressure, it is essential to reduce the time taken to create and validate the model without compromising quality. At present, NXP has portfolio IPs available at RTL for which SystemC TLM models are being developed. A lot of effort has gone into the verification of these RTL IPs. If the testbenches developed for RTL verification can be reused in the verification of the TLM models, the time required to verify the TLM models can be reduced. In the near future, the SystemC TLM models will be developed well before the RTL models; in that case, the testbenches developed to verify the SystemC TLM models can be re-used and extended for verification of the RTL models. Re-use of the verification framework across the various abstraction levels is extremely important for reducing the total development time.

3. Technical Details

The main expectations from such a re-use approach can be categorized into:

High Level:

Reuse of the existing RTL verification environment to verify SystemC model
Minimum bring up time of verification framework
Common methodology towards creation of re-usable verification framework for all groups and environments
Technical view:

Reuse existing Specman sequences, monitors, scoreboard and other eRM compliant verification components from the RTL verification environment
Clear methodology to support RTL and SystemC models
3.1 Verification Environment Overview



3.1.1 Recommendations in creating re-usable Module eVC

The following requirements are essential for Module eVC re-use:

No interference between eVCs
Common look and feel, similar activation and similar documentation
Support for combining eVCs (control, checking, layering etc.)
Support for modular debugging
Commonality in implementation
Avoid using too many output messages
Avoid using too many parallel threads
Avoid sampling events at @sys.any
Use list resize() for large lists
Use Keyed list for better memory performance
Use str_join() instead of append() for large strings
3.1.2 Integrating Module eVCs into an SVE

All eVCs and the connections between them are created during generation. The best-known practices for generating and connecting eVCs are discussed here to address the following issues:

Decide when to use pointers and when to use ports
Resolve generation issues such as :
Generation order
Connecting cross pointers
To connect eVCs with pointers:

In the system eVC, define the pointers from the system eVC to the interface eVCs
Avoid pointers in the opposite direction. The interface eVC should be independent of the system eVCs associated with it
In the SVE instantiate the eVCs
In both the eVCs and the SVE, connect the pointers procedurally
The pointers between eVCs are connected procedurally after generation of the architecture. The connect_pointers() method is activated in a top-down fashion. The method is called automatically after all units have been generated and the pointers of the parent unit have been connected by the parent unit’s connect_pointers() method
3.1.3 Configuration Re-usability

The eVC configuration is a reusable aspect that must be encapsulated with the eVC. When an eVC becomes part of a system, some of its configuration is fixed according to the specific use in the system.

Each level in the unit hierarchy can have a configuration struct. Configuration is always projected top down, typically using constraints from the parent.

The configuration of a component depends only on the configuration of the parent environment.

If the configuration of siblings is mutually dependent, the dependency must be generated in the parent environment and propagated down to the siblings.

The key to configuration reusability is determining which elements of the configuration remain reusable when the system becomes part of a bigger system. The reusable elements should be part of the eVC itself; the non-reusable configuration elements go into the SVE configuration file.

3.1.4 Reusing sequences

Reusing sequences from the various components of the environment saves effort when building the verification environment. SoC environments typically require synchronization of the inputs of several agents. A virtual sequence is the solution for providing this synchronization in SoC-level verification.

Multi channel sequence generation can be defined as follows:

Ensure that each of the agents has its own sequence (and sequence driver)
Define a new (virtual) sequence (and sequence driver) using the sequence statement and omitting the item parameter
Add the existing sequence drivers as fields of the new sequence driver
Pass the existing sequence drivers, using constraints, to the BFM sequences executed by the virtual sequence.
A virtual sequence driver is always instantiated at the SVE level. Its primary role is to drive coordinated scenarios involving multiple inputs to the system. These scenarios are built from reusable sequences from the lower-level eVCs. Virtual sequence drivers from the module level can be reused (instantiated) on the system level. These sequence drivers are:

Defined in module eVCs
Instantiated in the SVE of the module eVC (as a usage example)
Reused by instantiating in a system SVE
Module-level sequences are not always reusable on the system level. In general, mixing random sequences can break a design. For example, inputs that are accessible at module level might not be accessible at system level.

3.1.5 Additional recommendations

Create a uRM compliant top-level verification framework for it to be re-usable at different levels of abstractions (RTL and TLM)
Create test sequences using virtual sequences. Virtual sequences, unlike BFM sequences, are not tightly connected to a specific sequence type or item. Virtual sequences can do sequences of other types (but not items). As a result, one can use virtual sequences to drive more than one agent and to model a generic driver. These virtual sequences can be easily connected to the driver of the Module eVC
Use ports for getting the internal state of the DUT. Do not use ‘tick defines to probe into the design. If a ‘tick define is absolutely required, provide an option to switch it off when the DUT is SystemC, because ‘tick defines in the code result in errors during creation of the stub, e.g. sync true ('~/ip_xxx_tb_testbench_full/pwr_c_sys_ack' == 0)
Use backpointers to units to access any object instantiated in the verification environment. Avoid using “sys.” to access the members of that object
If test cases are specific to only RTL or only TLM, there should be a mechanism to turn off the sequences that are not relevant to the DUT; e.g., reset sequences are relevant to RTL designs but not to TLM designs.
A typical re-usable verification framework across different levels of abstractions is as follows:



Following verification components can be easily re-used at TLM and RTL:

uRM compliant SVE (top level verification framework)
Virtual sequences/Sequences
Scoreboards
Coverage implementation
Monitors
Limited re-use on checkers (e.g. Protocol checkers can be used only with RTL)
3.2 Challenges on “e” side

Creating eVCs with TLM interface
Typically, an eRM-compliant eVC implements a port-level (signal-level) interface and communicates with the RTL signals. This section describes the methodology for writing new eVCs that can be used for verification of models at different levels of abstraction and that support the following modes of operation:

RTL interfacing mode (wherein the DUT is RTL model)
TLM interfacing mode (wherein the DUT is SystemC model)
The methodology describes how to develop an eVC that supports different levels of abstraction, and also how to add a TLM interface to an existing eVC that does not have one.

Structure of eVC

The eVC is built using the eRM methodology so that it is re-usable across multiple projects. The extension of the eVC to support a SystemC TLM interface is done in the BFM, monitor and signal-map units. The sequence driver is abstract and does not depend on the details of the DUT interface (RTL signal-level interface, SystemC TLM interface or any other interface). The sequence driver generates transactions, which are then passed to the BFM. The BFM converts these transactions to the implemented interface (signal level or TLM).

The monitor decodes the transaction activity in the DUT into the same data items as defined in the eVC for the sequence driver. Data checking and functional coverage are performed by extending the monitor and covering the fields of the data items as decoded by the monitor. The coverage items and data checks should be implemented in such a way that there is no dependency on the interface implemented by the design. Only the specific protocol checkers implemented in the monitor depend on the interface of the design, so depending on the interface selected, the protocol checkers need to be switched on or off.

A typical eVC has a config unit that handles the configuration of the eVC used in the verification environment. This unit is extended to provide a field “mode” or “abstraction_level”, which takes the values “RTL” and “TLM”. The field is also added to the eVC agent, signal map, monitor and BFM units. With this, it is possible to create an instance of the eVC in TLM mode or RTL mode; the mode is then passed to all the instantiated units in the eVC. The eVC still follows the structure indicated in the eRM methodology, but it encapsulates the TLM- or RTL-specific implementation.

The figure below shows such structure of eVC supporting different levels of abstractions:



RTL interface
The signal-level interface is defined in the signal map unit of the RTL subtype. The signal map contains the input and output ports which map to HDL signals for the monitor to read and for the BFM to read and drive. The BFM unit of the RTL subtype receives the transfers which need to be driven to the HDL. The BFM then converts the received transfers to the pin-level interface as per the protocol.

TLM interface
The TLM level interface is defined in the signal map unit of the TLM subtype. This contains the method ports that link directly to the SystemC master/slave methods for driving the transactions received from the BFM unit for read and write. The BFM unit of the TLM subtype receives the transfers which need to be driven to the TLM model. The TLM BFM calls the driving out-method ports implemented in the SystemC interface master/slave which is interfaced to the DUT. In case of active slaves this information is received via the in-method ports to generate the response to the transaction.

The monitor unit for the TLM subtype defines the in-method ports. The TLM monitor implements the functionality of these in-method ports. These in-method ports are called from the SystemC master/slave or DUT. This unit then rebuilds the data items and implements the necessary events which can be used for coverage information.

Advantages of having eVCs with implementation of TLM and RTL interfaces:

Enables reuse of verification framework across different levels of abstraction
Enables reuse of the various components of the verification environment like:
Test scenarios
Coverage details
Monitor details
Scoreboard functionality
3.3 Challenges on “SystemC” side

Monitoring support for all *_tlm_evc
At TLM, the monitor of a passive SLAVE is updated by calling the in-method ports from the DUT. There are two ways of doing this:

The monitor in-method ports are called explicitly from the DUT. i.e. the DUT code has to be modified to call these methods
Implement a channel between the master and the DUT. The channel then updates the monitor via these method ports. This avoids having to modify the DUT code to add these methods.
The figure below shows the “generic channel” which updates the monitor of the passive SLAVE



Handling conversions between “e” and “SystemC” data types.
Default SystemC/C++ predefined types have built-in type casting for conversion between SystemC and “e”. For all these types, conversion between SystemC and “e” is automatic and is handled by Specman; no separate or additional instrumentation is required. However, for user-defined types, a user-defined convertor has to be created manually to handle the data conversion between SystemC and “e”. This convertor is written in C++. It is a struct containing the conversion functions; the conversion functions are static, and the convertor is used only to group these functions. A convertor struct is never instantiated.

Also, for a newly developed TLM module, the corresponding “e” data types need to be provided, and a conversion between the “e” and SystemC types is required for verification of the model using Specman.

The SC2e utility from Cadence minimizes the effort of creating the data types in “e” and of creating the conversion mechanism between these data types. The SC2e utility performs two main operations:

Creates “e” types, matching the given SystemC/C++ types
Creates a conversion function, able to transform a data item from a SystemC/C++ type into an e type and vice versa
The figure below shows the typical steps involved in generation of the convert file:



Typical usage of the SC2e utility:

Creating “e” types based on the SystemC types
The “e” struct is created based on the SystemC datatypes.


The SystemC classes are created first, during model creation. These SystemC classes are then used to generate the corresponding “e” types.

The “e” code related to the verification tasks is written re-using these generated “e” types. The data conversion between the “e” types and the SystemC types is handled by the generated structure via hdl_convertor(), which points to the generated convert file.

Using the existing “e” structures
If one wants to use the existing “e” structs rather than the “e” structs created automatically by SC2e, then one needs to use the following steps:

Use the convertor generator to create a struct in “e” that directly corresponds to the SystemC type.
Extend the existing “e” struct with a method that converts it to the “e” struct created by the convertor
Extend the SC2e-created “e” struct with a method that converts it to the existing “e” struct
To pass the existing “e” struct to SystemC:
Call the method that converts the “e” struct to the SC2e-created “e” struct.

To get the existing “e” struct from SystemC:

Call the method that converts the SC2e-created “e” type to the existing “e” struct.


User defined conversion rules
Users can configure conversion rules with a set of predefined e methods. The vr_sc2e package defines a global object in e called vr_sc2e_manager. This object has a set of initially empty methods that are all called during convertor generation. The empty methods yield the default values of the configurable parameters; users can extend these configuration methods to provide custom, non-default values.

4 Summary

Re-use of the verification environment is essential for reducing project execution time and the cost of verification without compromising quality
A uRM-compliant top-level verification framework is essential to enable re-use at different levels of abstraction
Following verification components from RTL testbench were re-used to verify the TLM SystemC DUT:
eVCs (extended to support the functionality at both TLM and RTL interfaces)
virtual sequences
scoreboard implementation
coverage implementation
verification plan
5 Acknowledgements

India
Narendra Solanki ( AE Cadence)
Israel
Omer Schori ( R&D Cadence )
Germany
John MacBeth ( VCAD Cadence )
Michael Jacob ( AE Cadence )
Joerg Simon ( AE Cadence )
+ world wide NXP/Cadence management support

6 Appendix

Acronyms/Glossary
BFM Bus Functional Model
DUT Device Under Test
eRM “e” Reuse Methodology
eVC “e” Verification Component
RTL Register Transfer Level
SC2e “SystemC to e” conversion utility from Cadence
SVE System Verification Environment
Specman “Enterprise Specman Elite” testbench (verification tool from Cadence)
TLM Transaction Level Model
IES “Incisive Enterprise Simulator” from Cadence
IPCM “Incisive Plan to Closure Methodology” from Cadence

Tools used:

Sr. No. | Name of the tool | Version
1 | Cadence IES | 6.1
2 | vManager | 2.0.2
3 | Scenario Builder | 1.0.1
4 | IPCM package | 6.1

eVC’s used:

Sr. No. | eVC | Source | Version
1 | AHB eVC | Cadence | 2.2
2 | AHB TLM eVC | Cadence | 1.0ea4
3 | APB eVC (signal + TLM interface) | Internal | 3.0
4 | Interrupt interface eVC | Internal | -
5 | DMAC Interface eVC | Internal | -
6 | External Bus Interface eVC | Internal | -

References


[1] eRM Developers manual from Cadence

[2] Specman Elite, Third party Simulators/Specman Elite Integrator’s guide / SystemC Specman Elite Integration from Cadence

[3] e Language Reference from Cadence

[4] SystemC Users Guide

[5] IPCM documentation from Cadence


Some Examples of Verilog testbench techniques.

http://www.mindspring.com/~tcoonan/testing.v




1.0 Introduction

2.0 Generating Periodic Signals

3.0 Generating and Receiving Serial Characters

4.0 Memories

5.0 Bus Models





1.0 Introduction



A testbench is a Verilog program that wraps around an actual design. The testbench is typically not part of the design and does not result in actual circuitry or gates. Verilog code for testbenches may be much more "free style" than Verilog code that must be synthesized - anything goes. Here are some tidbits from various projects I've worked on. The code is not completely general nor perfect, but hopefully it may provide ideas for designers just starting out with testbench design. Oh, and the names have been changed to protect the innocent. I hope I haven't introduced errors in doctoring up these examples. Again, none of the following code is intended to be synthesizable!



2.0 Generating Periodic Signals.



Say you have a periodic signal. Try tossing in a little random fluctuation on where the edges occur - you may catch an unexpected bug! But be careful about using random, because if you move on to manufacturing test, your testbench may not be deterministic. Often, for the sake of the tester, you must enforce transitions to occur in specific periods. In this case, you may need to add statements that delay changes to fall in these periods. Anyway, let's drive the foo1in signal. We'll add in some randomness, count the transitions and print out a message.



initial begin
  #1 foo1in = 0;
  forever begin
    #(`PERIOD/2 + ($random % 10)*(`PERIOD/20)) foo1in = 1;
    foo1_input_count = foo1_input_count + 1;
    $display ("#Foo1 rising edges = %d", foo1_input_count);
    #(`PERIOD/2 + ($random % 10)*(`PERIOD/20)) foo1in = 0;
  end
end


Here's another code snippet - a task that generates a periodic message.

task generate_syncs;
event send_sync;
begin
syncdata = SYNC_START;
syncstb = 0;

fork
// Generate periodic event for sending the sync
forever #(1000000000.0 * RATE) ->send_sync; // convert RATE to nanoseconds

// Wait on send_sync event, and then send SYNC synchronized with clk
forever begin
@(send_sync);
syncdata = syncdata + CMTS_FREQ * CMTS_RATE;
$display ("... SYNC = %h at time %0t, Local Time = %h", syncdata, $time, local_time);
@(posedge clk) #1;
syncstb = 1;
@(posedge clk) #1;
syncstb = 0;
end
join
end
endtask


3.0 Generating and Receiving Serial Characters



Say your design inputs or outputs serial characters. Here is some code for both. First, some defines:



/* Serial Parameters used for send_serial task and its callers. */

`define PARITY_OFF 1'b0

`define PARITY_ON 1'b1

`define PARITY_ODD 1'b0

`define PARITY_EVEN 1'b1

`define NSTOPS_1 1'b0

`define NSTOPS_2 1'b1

`define BAUD_9600 2'b00

`define BAUD_4800 2'b01

`define BAUD_2400 2'b10

`define BAUD_1200 2'b11

`define NBITS_7 1'b0

`define NBITS_8 1'b1



Here's how you call it:



send_serial (8'hAA, `BAUD_9600, `PARITY_EVEN, `PARITY_ON, `NSTOPS_1, `NBITS_7, 0);



Here's a task that sends a character.



task send_serial;

input [7:0] inputchar;

input baud;

input paritytype;

input parityenable;

input nstops;

input nbits;

input baud_error_factor;



reg nbits;

reg parityenable;

reg paritytype;

reg [1:0] baud;

reg nstops;

integer baud_error_factor; // e.g. +5 means 5% too fast and -5 means 5% too slow



reg [7:0] char;

reg parity_bit;

integer bit_time;



begin

char = inputchar;

parity_bit = 1'b0;

case (baud)

`BAUD_9600: bit_time = 1000000000/(9600 + 96*baud_error_factor);

`BAUD_4800: bit_time = 1000000000/(4800 + 48*baud_error_factor);

`BAUD_2400: bit_time = 1000000000/(2400 + 24*baud_error_factor);

`BAUD_1200: bit_time = 1000000000/(1200 + 12*baud_error_factor);

endcase



$display ("Sending character %h, at %0d baud (err=%0d), %0d bits, %0s parity, %0d stops",

(nbits == `NBITS_7) ? (char & 8'h7f) : char,

1000000000/bit_time,

baud_error_factor,

(nbits == `NBITS_7) ? 7 : 8,

(parityenable == `PARITY_OFF) ? "NONE" : (paritytype == `PARITY_EVEN) ? "EVEN" : "ODD",

(nstops == `NSTOPS_1) ? 1 : 2

);



// Start bit

serial_character = 1'b0; // Start bit.

#(bit_time);



// Output data bits

repeat ( (nbits == `NBITS_7) ? 7 : 8) begin

serial_character = char[0];

#(bit_time);

char = {1'b0, char[7:1]};

end



if (parityenable == `PARITY_ON) begin

parity_bit = (nbits == `NBITS_7) ? ^inputchar[6:0] : ^inputchar[7:0];

if (paritytype == `PARITY_ODD)

parity_bit = ~parity_bit; // invert for odd parity

serial_character = parity_bit;

#(bit_time);

end

serial_character = 1'b1; // Stop bit.

#(bit_time);

if (nstops) // Second stop bit

#(bit_time);

end

endtask



Here's a task that receives serial characters. This particular task was a bit messy in that it set some global variables in order to return a status, etc. By all means - fix this up the way you like it!



reg [7:0] receive_serial_character_uart1; // Global that receives tasks result



// **** SERIAL CHARACTER LISTENER Task for UART1

//

//

task receive_serial_uart1;



input baud;

input paritytype;

input parityenable;

input nstops;

input nbits;



reg nbits;

reg parityenable;

reg paritytype;

reg [1:0] baud;

reg nstops;



integer bit_time;



reg expected_parity;



begin

receive_serial_result_uart1 = 0;

receive_serial_character_uart1 = 0;



case (baud)

`BAUD_9600: bit_time = 1000000000/(9600);

`BAUD_4800: bit_time = 1000000000/(4800);

`BAUD_2400: bit_time = 1000000000/(2400);

`BAUD_1200: bit_time = 1000000000/(1200);

endcase



receive_serial_result_uart1 = `RECEIVE_RESULT_OK; // Assume OK until bad things happen.



@(negedge uart1out); // wait for start bit edge

#(bit_time/2); // wait till center of start bit

if (uart1out != 0) // make sure its really a start bit

receive_serial_result_uart1 = receive_serial_result_uart1 | `RECEIVE_RESULT_FALSESTART;

else begin

repeat ( (nbits == `NBITS_7) ? 7 : 8) begin // get all the data bits (7 or 8)

#(bit_time); // wait till center

// sample a data bit

receive_serial_character_uart1 = {uart1out, receive_serial_character_uart1[7:1]};

end



// If we are only expecting 7 bits, go ahead and right-justify what we have

if (nbits == `NBITS_7)

receive_serial_character_uart1 = {1'b0, receive_serial_character_uart1[7:1]};



#(bit_time);

// now, we have either a parity bit, or a stop bit

if (parityenable == `PARITY_ON) begin

if (paritytype == `PARITY_EVEN)

expected_parity = (nbits == `NBITS_7) ? (^receive_serial_character_uart1[6:0]) :

(^receive_serial_character_uart1[7:0]);

else

expected_parity = (nbits == `NBITS_7) ? (~(^receive_serial_character_uart1[6:0])) :

(~(^receive_serial_character_uart1[7:0]));

if (expected_parity != uart1out)

receive_serial_result_uart1 = receive_serial_result_uart1 | `RECEIVE_RESULT_BADPARITY;

// wait for either 1 or 2 stop bits

end

else begin

// this is a stop bit.

if (uart1out != 1)

receive_serial_result_uart1 = receive_serial_result_uart1 | `RECEIVE_RESULT_BADSTOP;

else

// that was cool. if 2 stops, then do this again

if (nstops) begin

#(bit_time);

if (uart1out != 1)

receive_serial_result_uart1 = receive_serial_result_uart1 | `RECEIVE_RESULT_BADSTOP;

end

#(bit_time/2);

end

end

end

endtask





4.0 Memories



Memories, whether they are RAMs, ROMs or special memories like FIFOs, are easily modeled in Verilog. Note that you can define your own special testbench locations for debugging! Say you have a processor core hooked up to these memories. Define some special locations that, when read or written to, display diagnostic messages. Or, you can specify that a write to a particular location will halt the simulation or signify PASS or FAIL. Memories are an easy way for the embedded Verilog core processor to communicate with the testbench. There are many possibilities.



reg [15:0] FLASH_memory [0:(1024*32 - 1)]; // 32K of FLASH

reg [15:0] SRAM_memory [0:(1024*32 - 1)]; // 32K of SRAM



//*****

//

// The ASIC's ca[20] is the active LO chip select for the FLASH.

// The ASIC's ca[18] is the active LO chip select for the SRAM.



// Write process for FLASH and SRAM

//

always @(posedge cwn) begin

if (ca[20] == 1'b0) begin

// Write to FLASH

if (ca[16:15] != 2'b00) begin

$display ("Illegal write to FLASH!");

end

else begin

$display ("Write to FLASH Address = %h, Data = %h", ca, cb);

// Our FLASH is only declared up to 32KW, so use ca[14:0]

FLASH_memory[ca[14:0]] = cb;



// Check for magic write from the embedded processor core! This is done in the

// C firmware simply by writing to the location.

//

if (ca == `MAGIC_ADDRESS) begin

$display ("Embedded code has signalled DONE!");

sa_test_status = `SA_TEST_DONE;

sa_test_result = cb;

end

end

end

else if (ca[18] == 1'b0) begin

// Write to SRAM

if (ca[16:15] != 2'b00) begin

$display ("Illegal write to SRAM!");

end

else begin

$display ("Write to SRAM Address = %h, Data = %h", ca, cb);

// Our SRAM is only declared up to 32KW, so use ca[14:0]

SRAM_memory[ca[14:0]] = cb;

end

end

end



// Read process for FLASH and SRAM

//

always @(crn) begin

if (crn == 1'b0) begin

case ({ca[20], ca[18]})

2'b11: cb_i <= 16'hzzzz;

2'b10: begin

$display ("Read from SRAM Address = %h, Data = %h", ca, SRAM_memory[ca[14:0]]);

cb_i <= SRAM_memory[ca[14:0]];

end

2'b01: begin

$display ("Read from FLASH Address = %h, Data = %h", ca, FLASH_memory[ca[14:0]]);

cb_i <= FLASH_memory[ca[14:0]];

end

2'b00: begin

$display ("Simultaneously selecting FLASH and SRAM!!");

end

endcase

end

else begin

cb_i <= 16'hzzzz;

end

end



Clearing the memories is easy:



task clear_SRAM;

reg [15:0] SRAM_address;

begin

$display ("Clearing SRAM..");

for (SRAM_address = 16'h0000; SRAM_address < 16'h8000; SRAM_address = SRAM_address + 1) begin

SRAM_memory[SRAM_address] = 0;

end

end

endtask



Performing other operations is straightforward. How about a task that copies a firmware hex image to a FLASH memory's boot area, relocating along the way and maybe setting a few header bytes too. Now, this task is specific to a particular processor, etc., but it shows what is fairly easily done in Verilog:



task copy_to_FLASH_boot;

reg [15:0] temp_memory[0:1023];

reg [15:0] original_address;

reg [15:0] FLASH_address;

integer n;

begin

$display ("Copying ROM image to FLASH boot block..");



// Read in the normal ROM file into our temporary memory.

for (original_address = 0; original_address < 1024; original_address = original_address + 1) begin

temp_memory[original_address] = 0;

end

$readmemh (`ROM_FILE, temp_memory);



// Fill in Boot header

FLASH_memory[15'h0800] = `BOOT_COPY_LENGTH; // Let's copy 1KW maximum

FLASH_memory[15'h0801] = 0; // Copy program to code space starting at zero

FLASH_memory[15'h0802] = temp_memory[3]; // Entry point is same as the address in the reset vector



// Now, copy from original image into the boot area.

n = 0;

FLASH_address = 15'h0803;

original_address = 0;

while (n < 1024) begin

FLASH_memory[FLASH_address] = temp_memory[original_address];

FLASH_address = FLASH_address + 1;

original_address = original_address + 1;

n = n + 1;

end

end

endtask



Also, test vectors are easily loaded into Verilog memories using the $readmem statements. You may easily read your stimulus vectors from a file into a memory, clock out the vectors to your circuit, and optionally capture your circuit's response to another memory (or simply write the vectors out using $fdisplay). Once you have captured one output vector set that you know is good (e.g. your "Golden" vectors), your testbench can compare subsequent simulation vectors against these "Golden" vectors and detect any problems in your changing circuit (e.g. after back-annotation, scan insertion, or alpha-particle circuit corruption).



5.0 Bus Models



Many times a processor is interfaced to the logic being tested. If the complete processor model/core is not present, then a "bus model" is a simple function that emulates the bus transaction. More simply, the bus model allows the testbench to read and write values. The following task utilizes very specific timing delays. You should probably include 'defines' for these and update them as you get better timing information. Typically, you will test your UART or whatever peripheral in isolation with the bus model, and later test your peripheral with the real processor core.



write_to_foobar (COMPAREH_REGISTER, next_word[31:16]);

#10;

write_to_foobar (COMPAREL_REGISTER, next_word[15:0]);

#10;



task write_to_foobar;

input [15:0] address_arg;

input [15:0] data_arg;

// Several global bus signals are assumed: address, we, clk.

begin

/* Wait until next rising clock edge */

@(posedge clk);



/* t_valid for address is 5ns, wait and then drive address */

#5; // <---- Manually back-annotate this, or use a define, whatever...

address = address_arg;



/* t_valid for wrxxx is 8ns, we are already 5ns into cycle, so wait 3ns */

#3;

we <= 1'b1;



/* t_valid for wrdata is 20ns, We are 8ns into cycle, wait 12ns */

#12

data <= data_arg;



/* Wait till the next rising edge, wait for a little bit of hold time. */

@(posedge clk);

#1;

address <= 16'hzzzz;

#1;

we <= 1'b0;

#4;

data <= 16'hzzzz;



//$display ("Writing data %h to address: %h", data, address);

end

endtask



Here's a task that reads from the memory-mapped peripheral.



task read_from_foobar;

input [3:0] address_arg;

// Let's just write to a global with the resulting data retrieved (! bad practice, I know....)

// Global variable is 'last_data_read'.

begin

/* Wait until next rising edge to do anything.. */

@(posedge clk)



/* t_valid for rwadrs is 5ns, wait and then drive address */

#5;

address = address_arg;



/* t_valid for rbxxx is 8ns, we are already 5ns into cycle, so wait 3ns */

#3;

rw <= 1'b1;



/* Wait till the next rising edge, wait for a little bit of hold time. */

@(posedge clk);

last_data_read = data; // <-- keep in the global, caller can use if they wish.

$display ("Reading data %h from address: %h", data, address);



/* Wrap it up. Deassert rw. Let's float the address bus. */

rw <= 1'b0;

#1;

address <= 16'hzzzz;

end

endtask

Reading and writing files from Verilog models

http://larc.ee.nthu.edu.tw/~lmdenq/doc/fileio.htm

Sunday, October 5, 2008

VGA to Svideo adapter

http://www.svideo.com/vga2video2.html
http://www.svideo.com/

(A personal note by Dr. P. Subbanna Bhat, Professor, Dept of E&C Engg, on leaving NITK)

Ref: link http://sharathrao.wordpress.com/2007/08/27/an-ex-teachers-farewell-note/


Dear friends,
Today (May 01, 2007), I have submitted my VRS papers to the Director, NITK with a request to be relieved from the service of NITK three months from now, on Aug 01, 2007.
In fact, for quite some time I was thinking of quitting NITK for good. It is a hard decision, as I have lived 31 years of my life in this campus. It is in this Institute that I studied and it is here that I have spent more than two decades of my professional life (1+23 years). I have worked at all levels of faculty position (Asst. Lecturer, Lecturer, Asst. Professor, Professor, HOD, Senator etc), and I believe that God would be pleased with my devotion to duty and sincerity of purpose. I feel that I have made my contribution – along with others – to the quality of education in the Institute. I am leaving the Dept of E&C with a name and stature higher than what it was two decades ago. I have decided to terminate this association now, as I feel that one should live only as long as necessary and that my time is over. Though there are things to be improved on every front – that is always the case, in any Dept or Institute – now I should leave it to others to carry the torch.
This Institute has been some kind of a Mother to me. I came here as a boy of 16 from my village (Aug 04, 1969); and grew up to be some kind of a professional, and spent 24 years as a faculty member. During this period I served her like a son – with all my heart – no matter who sat on the Chair. The ride was by no means smooth – primarily because I was rather naïve at dealing with the ‘authorities’ – and at least three times during this interval I was emotionally shattered (1990, 1998 and 2005). The first two instances were related to my professional aspirations, and the last of them was due to the happenings in the Institute – following the Govt. order sacking directors of several NITs (March 23, 2005) – over which nobody seemed to have had any control.
I confess that I am naïve even now, and am unable to cope with many developments. I visualize two (extreme) models of professionals: one that works for the Institute; and the other works for oneself – but often projects it as working for the ‘boss’. Each of us is a mix of both, in varying degrees. [Personally, I have a difficulty in projecting the first component – which is sacred activity – as the second!]. For a number of years, I remained rooted in the belief that recognition and reward would follow the first model. The consequence of this naiveté was a devastating emotional experience, which I could barely handle (Jan 1990). I interpreted it as a consequence of my ignorance of the etiquettes of dealing (supplication, genuflection etc.) with the ‘higher authorities’! Though the experience was intense, it did not change my character; and as a consequence, I had to undergo a second lesson – eight years later (Oct 1998) – planned and executed with great skill and aplomb! It caused me considerable distress; even so, I was able to retain my personal dignity and poise. However, it made me very sensitive to the ‘messages’ emanating from the Chair! The last of my major ordeals started about two years ago – the intensity of which was in direct proportion to my attachment to the Institute. My current decision to quit NITK is partly an attempt to bring it to a close.
As an alumnus of this Institute, I wish that my Mother’s face shines brighter and becomes visible across the Globe. The NITK vision is to become a ‘world-class Institution’. Over the years, we have been hearing it (from the podium) - that NITK has the potential to achieve just that - which may be true - but I feel sad that I may not live long enough to see it happening. I feel that the achievement of NITK - or that of any other NIT in the country - during the first 46 years of their existence is far less than what other Institutes of repute - Harvard, Stanford, MIT, etc. - have achieved in comparable time frames. When I seek the reasons for this impasse, I find two of them quite prominent:
The identity of an Institute is seen in the set of norms – declared, understood and observed – that serves as the Frame of reference for all those who work for the Institution. These norms may (or may not) be enshrined in the Vision & Mission statements – if they are, it is certainly helpful – but what is more important is that it should be enshrined in the traditions adhered and upheld by the Institute over a period.
Traditions are more forceful than the engraved (Vision & Mission) statements in the Book; for live traditions are intuitively understood and internalized by the people in the system. A healthy tradition of clearly defined norms applied uniformly without discrimination is the hard ground upon which Institutions are built; it is only on such ground that individuals feel comfortable that their contributions will be evaluated on merit, and they can hope for recognition and advancement on the basis of their contributions. It is the tradition of norms and values that provides a Frame of Reference upon which the delicate creeper of initiative leans and spirals upwards to finally bear fruits of achievement.
The soul of an Institute is its faculty – and its worth can be measured by the qualification, competence and commitment of its faculty members. The first of these parameters – the qualification – is the easiest to see. The second is more elusive – for judgment based on interviews and recommendations can be erroneous. The last parameter – commitment – may be person-specific to some extent, but to a large extent depends on the environment we create within the Institute. From a broader perspective, the commitment of the faculty is the most important parameter for an Institute, as a strong commitment can compensate for many other lacunae at various levels. For an Institution to grow and develop, it should create an environment where its own human resource feels comfortable, develops a sense of belonging, and feels motivated to take initiative to improve oneself and the Institute on a continuous basis. Such a policy has to have several components – decentralization (of responsibility as well as authority), a meaningful recognition-reward system, etc. – but it can flourish only in a settled environment where norms – declared and understood – are applied uniformly without double standards.
If NITK is to evolve upwards into a ‘world-class’ Institute, it has to have a paradigm (Frame of Reference) worthy of such an Institute. Qualitatively, KREC has achieved something noteworthy under its present model; but to achieve something more, it requires a paradigm which can enthuse and motivate the faculty at a deeper level. I am deeply disenchanted with the present model; I do not wish to continue ploughing the same furrow as earlier, and keep reaping the same harvest as earlier! I am sure of my ground on this; I have gone through the fire three times. Hence the decision to part.
The Institute is propelled by its own momentum. The joy or distress – even the presence or absence – of an individual like me may not make much difference to the Institute; but it certainly makes a difference to me. I have spent 24 years of my life holding the Institute as the focus of my activities; now I wish to spend the remaining years on something more meaningful to my life. I am leaving the Institute with a strange mix of feelings – a quiet satisfaction and a stirring frustration – satisfaction on making the best effort at my station, and frustration because my achievement is neither significant nor concrete.
I wish to thank all my friends who made my life easier on the campus: especially those who shared my feelings at times of distress; those who lent clarity to my vision and support to my actions; and those who joined me in my prayers and worship.
#Note: The defining moment for the current decision came on March 17, 2007, when the Senate resolved to ‘close’ the ‘bonafide certificate’ issue – without really resolving the basic questions that arise out of it. More than 20 months ago – on June 28, 2005 – I had tabled a copy of a ‘bonafide certificate’ issued (to a foreign student, for the purpose of visa extension) under the name and seal of the Director, NITK – requesting the Senate to ascertain whether the document was genuine or not. Under normal circumstances it would have taken less than 20 minutes to settle the issue. In this case, however, the procession went on for 20 months: enquiry by a Senate Committee, referral (deflection) to the BOG (Oct 07, 2006), withdrawal (of the agenda) from the BOG (March 25, 2006), re-entry of the agenda to the Senate (Nov 18, 2006), and finally the resolution ‘to close the matter’ (March 17, 2007) – without addressing the original question as to whether the document is genuine or not!
The 20-month-long procession was useful: it enabled me to get a full and clear view of the NITK paradigm – the emperor was on a high chariot, with very few clothes on – from all angles, at all levels! What is the message conveyed when two top bodies of the Institute – the Board and the Senate – refuse to term a genuine document as ‘genuine’, and a violation as a ‘violation’? A deliberate and calculated ‘violation’ – prompted by motives that could not be defended in public – further compounded by evasion and defiance (of the Senate (Enquiry) Committee) – was condoned without a word of disapproval; whereas my attempt at exposing such shenanigans was termed an ‘impropriety’ (BOG)! [Great administrative skill was at play here: the health of the Administration is primarily the responsibility of the BOG – not of the Senate or the faculty. Even so, for all my efforts to expose the rot, the ‘boot’ was deftly placed on my back! That contains a ‘message’ – my third lesson of the series!!]
If some friends are still hoping to build a ‘world-class’ Institution around this Model, I wish them well – but I do not share their hope!
I was in the Electrical Department and as such was not Dr. Bhat’s student, barring my participation in a 3-day course on Digital Signal Processing that he conducted at college. Yet, I feel for Dr. Bhat – in my 4 years in college and after, I am yet to hear a student make an uncharitable statement about him (except that he was a bit too soft-spoken, and his Vajpayee-esque pauses in the middle of sentences (to teasingly stimulate thought, perhaps) put some uninterested students to sleep during class hours).

Monday, September 22, 2008

VLSI Technology

http://www.vlsitechnology.org/index.html

Saturday, September 20, 2008

Online Virus scanners


http://www.anti-trojan.net/at.asp?l=en&t=onlinecheck

http://www.pcpitstop.com/antivirus/default.asp

Digital Logic Gates

http://www.asic-world.com/digital/gates4.html

Temperature compensated integrated circuits

http://docs.google.com/fileview?id=F.dfe86f95-ca93-46e0-9aae-00d6cd41d42d

Performance of submicron CMOS devices and gates with substrate biasing

Xiaomei Liu and S. Mourad, "Performance of submicron CMOS devices and gates with substrate biasing," Proceedings of the 2000 IEEE International Symposium on Circuits and Systems (ISCAS 2000), Geneva, vol. 4, pp. 9-12, 2000. Digital Object Identifier: 10.1109/ISCAS.2000.858675. Summary:

This paper reports the results of an extensive simulation study of the effect of body-bias engineering on the performance of deep-submicron technology circuits. Reverse body bias (RBB) is very useful in reducing a device's off-state leakage current and hence standby power. This reduction is more effective as the temperature increases. Forward body bias (FBB) suppresses short-channel effects and hence improves Vt roll-off and reduces gate delays. This improvement is enhanced as the power supply voltage decreases. However, the power dissipation and power-delay product increase under this biasing condition. A good strategy is to apply a forward body bias on the critical path only, improving speed without a significant increase in power dissipation.

gate and wire delay simulation

http://tams-www.informatik.uni-hamburg.de/applets/hades/webdemos/12-gatedelay/10-delaydemo/gate-vs-wire-delay.html

Tuesday, September 16, 2008

Clock cross domains

http://www.wipo.int/pctdb/en/wo.jsp?wo=2003039061

edatechforum - Journals

http://www.edatechforum.com/journal/archives.cfm

Introducing new verification methods into a design flow: an industrial user's view


Verification has become one of the main bottlenecks in hardware and system design. Several verification languages, methods and tools addressing different issues in the process have been developed by EDA vendors in recent years. This paper takes the industrial user’s point of view to explore the difficulties posed when introducing new verification methods into ‘naturally grown’ and well established design flows – taking into account application domain-specific requirements, constraints present in the existing design environment and economics. The approach extends the capabilities of an existing verification strategy with powerful new features while keeping in mind integration, reuse and applicability. Based on an industrial design example, the effectiveness and potential of the developed approach is shown.

Robert Lissel is a senior expert focusing on digital design in the Design Methodology Group of the Bosch Automotive Electronics Semiconductor and ICs Division. He holds a Master’s degree in Electrical Engineering from Dresden University of Technology, Germany.

Joachim Gerlach is a member of the Design Methodology Group in the Bosch Automotive Electronics Semiconductor and ICs Division, responsible for research projects in system specification, verification and design. He holds a Ph.D. degree in Computer Science from the University of Tübingen, Germany.

Today, it is estimated that verification accounts for about 70% of the overall hardware and system design effort. Therefore, increasing verification efficiency can contribute significantly to reducing time-to-market. Against that background, a broad range of languages, methods and tools addressing several aspects of verification using different techniques has been developed by EDA vendors. It includes hardware verification languages such as SystemC [1, 2], SystemVerilog [3] and e [4] that address verification challenges more effectively than description languages such as VHDL and Verilog.

Strategies that use object-oriented mechanisms as well as assertion-based techniques built on top of simulation-based and formal verification enable the implementation of a more compact and reusable verification environment. However, introducing advanced verification methods into existing and well established industrial development processes presents several challenges. Those that require particularly careful attention from an industrial user’s point of view include:

  • The specific requirements of individual target applications;
  • The reusability of available verification components;
  • Cost factors such as tool licenses and appropriate designer training.

This paper discusses how to address the challenges outlined above. With regard to the specific requirements of automotive electronics design, it identifies verification tasks that have high priority. For the example of the verification strategy built up at Bosch, the paper works through company-specific requirements and environmental constraints that required greatest consideration. Finally, the integration of appropriate new elements into our industrial design flow, with particular focus on their practical application, is described.


Figure 1. Verification landscape

Verification challenges

Recently, many tools and methods have been developed that address several aspects of verification using different techniques. In the area of digital hardware verification, metrics for the assessment of the status of verification as well as simulation-based and formal verification approaches have received most attention. Figure 1 is an overview of various approaches and their derived methods. Different design and verification languages and EDA solutions from different vendors occupy this verification landscape to differing degrees and in different parts.

Introducing new verification languages and methods into a well established design and verification flow requires more than pure technical discussion. Their acceptance by developers and the risks that arise from changing already efficient design processes must be considered – a smooth transition and an ability to reuse legacy verification code are essential.

Existing test cases contain much information on former design issues. Since most automotive designs are classified as safety critical, even a marginal probability of missing a bug because of the introduction of a new verification method is unacceptable. On the other hand, the reuse of legacy code should not result in one project requiring multiple testbench approaches. Legacy testcases should ideally be part of the new approach, and it should be possible to reuse and enhance these instead of requiring the writing of new ones.

A second important challenge lies in convincing designers to adopt new methods and languages. Designers are experienced and work efficiently with their established strategies. Losing this efficiency is a serious risk. Also, there is often no strict separation between design and verification engineers, so many developers can be affected when the verification method changes. Furthermore, new methods require training activities and this can represent a considerable overhead. Meanwhile, most projects must meet tight deadlines that will not allow for the trial and possible rejection of a new method.

To overcome those difficulties, it is important to carefully assess all requirements and to evaluate new approaches outside higher priority projects. One possibility is to introduce new methods as add-ons to an existing approach so that a new method or tool may improve the quality but never make it worse. In this light, the evolution of verification methodologies might be preferable to the introduction of completely new solutions.

Considering verification’s technical aspects, automotive designs pose some interesting challenges. The variety of digital designs ranges from a few thousand gates to multimillion-gate systems-on-chip. Typical automotive ICs implement analog, digital and power on the same chip. The main focus for such mixed-signal designs is the overall verification of analog and digital behavior rather than a completely separate digital verification. However, the methodology's suitability for purely digital ICs (e.g., for car multimedia) must also be covered.

In practice, the functional characteristics of the design determine the most appropriate verification method. If the calculation of the expected behavior is ‘expensive’, directed tests may be the best solution. If an executable reference model is available or if the expected test responses are easy to calculate, a random simulation may be preferable. Instead of defining hundreds of directed testcases, a better approach can be to randomize the input parameters with a set of constraints allowing only legal behavior to be generated. In addition, special directed testcases can be implemented by appropriately constraining the randomization. The design behavior is observed by a set of checkers. Functional coverage points are necessary to achieve visibility into what functionality has been checked. Observing functional coverage and manually adapting constraints to meet all coverage goals leads to coverage-driven verification (CDV) techniques. Automated approaches built on top of different verification languages [1-5] result in testbench automation (TBA) strategies.
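To make the constraint-random idea above concrete, here is a minimal sketch using the OSCI SystemC Verification Library (SCV) listed in the references; the class, members and bounds (filter_cfg_constraint, decimation, num_samples) are purely illustrative assumptions, not taken from the Bosch environment.

#include <scv.h>
#include <iostream>

// Illustrative constraint class: only "legal" filter configurations are generated.
// Member names and value ranges are assumptions made for the sake of the example.
class filter_cfg_constraint : public scv_constraint_base {
public:
    scv_smart_ptr<unsigned> decimation;    // decimation factor selector
    scv_smart_ptr<unsigned> num_samples;   // length of the audio burst

    SCV_CONSTRAINT_CTOR(filter_cfg_constraint) {
        SCV_CONSTRAINT( decimation() >= 2 && decimation() <= 4 );
        SCV_CONSTRAINT( num_samples() > 0 && num_samples() < 1024 );
    }
};

int sc_main(int, char*[]) {
    filter_cfg_constraint cfg("cfg");
    for (int i = 0; i < 5; ++i) {
        cfg.next();                         // draw a new constraint-satisfying configuration
        std::cout << "decimation=" << cfg.decimation->read()
                  << " samples="   << cfg.num_samples->read() << std::endl;
    }
    return 0;
}

Each call to next() corresponds to one randomized reconfiguration of the design under test; checkers and functional coverage points, as described above, then record what the randomization has actually exercised.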

A directed testbench approach may be most suitable for low complexity digital designs, particularly in cases where reference data is not available for the randomization of all parameters, or the given schedule does not allow for the implementation of a complex constraint random testbench. Furthermore, mixed-signal designs may require directed stimulation. Often a function is distributed over both analog and digital parts (e.g., an analog feedback loop to the digital part). Verifying the digital part separately makes no sense in this case. In fact, the interaction between analog and digital parts is error-prone. Thus, the integration of analog behavioral models is necessary in order to verify the whole function.

One technique that deals with this requirement maps an analog function to a VHDL behavioral description and simulates the whole design in a directed fashion. In other cases, the customer delivers reference data originating from a system simulation (e.g., one in Matlab [6]). Integrating that reference data within a directed testcase is mandatory. Since each directed testcase may be assigned to a set of features within the verification plan, the verification progress is visible even without functional coverage points. Hence, the implementation effort is less than for a constraint random and CDV approach up to a certain design complexity. Even so, for some parameters not affecting the expected behavior (e.g., protocol latencies), it makes sense to introduce randomization.

Formal verification techniques like property checking allow engineers to prove the validity of a design characteristic in a mathematically correct manner. In contrast to simulation-based techniques – which consider only specific paths of execution – formal techniques perform exhaustive exploration of the state space. On the other hand, formal techniques are usually very limited in circuit size and temporal depth. Therefore, formal and simulation-based techniques need to be combined carefully to optimize the overall verification result while keeping the verification effort to a minimum.

The solution is to apply different verification techniques where they best fit. Powerful metrics are needed to ensure sufficient visibility into the verification’s progress and the contribution of each technique. The question is how to find the best practical solution within available time, money and manpower budgets rather than that which is simply the best in theory. The demands placed on verification methods range from mixed-signal simulation and simple directed testing to complex constraint random and formal verification, as well as hardware/software integration tests. Nevertheless, a uniform verification method is desirable, to provide sufficient flexibility and satisfy all the needs of the automotive environment.

Verification strategies

To illustrate one response to the challenges defined above, this section shows how SystemC has been applied to enhance a company-internal VHDL-based directed testbench strategy. This approach allowed for the introduction of constraint random verification techniques as well as the reuse of existing testbench modules and testcases, providing the kind of smooth transition cited earlier.


Figure 2. VHDL testbench approach

VHDL-based testbench approach

As Figure 2 shows, the main element in our testbench strategy is to associate one testbench module (TM) or bus functional model with each design-under-test (DUT) interface. All those TMs are controlled by a single command file. Each TM provides commands specific to its individual DUT interface. Furthermore, there is a command loop process requesting the next command from the command file using a global testbench package. Thus, a ‘virtual interconnect layer’ is established. Structural interconnect is required only between the TMs and the DUT.

The command file is an ASCII file containing command lines for each TM as well as control flow and synchronization statements. With its unified structure, this testbench approach enables the easy reuse of existing TMs.

Figure 3 is an example of the command file syntax. Each line starts with a TM identifier (e.g., CLK, CFG), the ALL identifier for addressing global commands (e.g., SYNC), or a control flow statement. Command lines addressing TMs are followed by module-specific commands and optional parameters. Thus, line 1 addresses the clock generation module CLK. The command PERIOD is implemented within this clock generation module for setting the clock period and requires two parameters: value and time unit. Line 3 contains a synchronization command to the testbench package. The parameter list for this command specifies the modules to be synchronized (ALL for line 3; A2M and CFG for line 7). Since, in general, all TMs operate in parallel – and thus request and execute commands independently – it is important to synchronize them at dedicated points within the command file. When receiving a synchronization command, the specified TMs will stop requesting new commands until all of them have reached the synchronization point.


Figure 3. Command file example
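Because the command file is plain ASCII, its structure is easy to illustrate in code. The sketch below shows, in C++, how such lines (TM identifier, command, parameters) might be tokenized; the example lines and keywords (PERIOD, SYNC, WRITE, the CFG/A2M identifiers) merely mimic the style described above and are assumptions, not the actual Bosch syntax.

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// One parsed command-file line: target TM (or ALL), command, parameters.
struct cmd_line {
    std::string target;
    std::string command;
    std::vector<std::string> params;
};

// Split a whitespace-separated command line into its fields.
cmd_line parse(const std::string& line) {
    std::istringstream in(line);
    cmd_line c;
    in >> c.target >> c.command;
    for (std::string tok; in >> tok; )
        c.params.push_back(tok);
    return c;
}

int main() {
    // Illustrative excerpt in the spirit of Figure 3 (keywords are assumed).
    const char* file[] = {
        "CLK PERIOD 10 NS",     // clock TM: set the clock period to 10 ns
        "CFG WRITE 0x04 0x3",   // hypothetical configuration-TM command
        "ALL SYNC A2M CFG"      // global command: synchronize A2M and CFG
    };
    for (const char* l : file) {
        cmd_line c = parse(l);
        std::cout << c.target << ": " << c.command
                  << " (" << c.params.size() << " parameters)\n";
    }
    return 0;
}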

Introducing a SystemC-based approach

The motivation for applying SystemC is to enhance the existing VHDL-based testbench approach. The original VHDL approach defined a sequence of commands which were executed by several TMs, describing a testcase within a simple text file. This worked well enough, but usage showed that more flexibility within the command file was desirable. Besides, VHDL itself lacks the advanced verification features found in hardware verification languages (HVLs) such as e, SystemVerilog HVL and SystemC, as well as the SystemC Verification Library (SCV).

Since the original concept had proved efficient, it was decided to extend the existing approach. In making this choice, it was concluded that a hardware description language like VHDL is not really suitable for the implementation of a testbench controller which has to parse and execute an external command file. So, SystemC was used instead because it provides the maximum flexibility, thanks to its C++ nature and the large variety of available libraries, especially the SCV. Using SystemC does require a mixed-language simulation – the DUT may still be implemented in VHDL, while the testbench moves towards SystemC – but commercial simulators are available to support this.

The implemented SystemC testbench controller covers the full functionality of the VHDL testbench package and additionally supports several extensions of the command file syntax. This makes existing command files fully compatible with the new approach. The new SystemC controller allows us to apply variables, arithmetic expressions, nested loops, control statements and random expressions directly within the command file. The intended impact of these features is that testcases run more efficiently and flexibly.

In general, the major part of testbench behavior should be implemented in VHDL or SystemC within the TMs. Thus, the strategy implements more complex module commands rather than very complicated command files. However, the SystemC approach not only extends the command syntax but also provides static script checks, more meaningful error messages and debugging features.

Implementing the testbench controller in C++, following an object-oriented structure, makes the concept easier to use. A SystemC TM is inherited from a TM base class, so only module-specific features have to be implemented. For example, the VHDL-based approach requires the implementation of a command loop process for each TM in order to fetch the next command. This is not the case with SystemC, because the command thread is inherited from the base class – only the command functions have to be implemented. The implementation of features such as expression evaluation particularly shows the advantage of using C++ with its many libraries (e.g., the Spirit Library [7] is used to resolve arithmetic expressions within the command file).
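The inheritance idea can be sketched as follows, with illustrative names (tm_command, tm_base, clk_tm) standing in for the actual Bosch classes: the base class owns the command thread, and a derived TM only supplies its module-specific command functions.

#include <systemc.h>
#include <string>
#include <vector>

// Hypothetical parsed command handed to a testbench module (TM).
struct tm_command {
    std::string name;                  // e.g. "PERIOD"
    std::vector<std::string> args;     // e.g. {"10", "NS"}
};

// TM base class: owns the command-fetch thread; derived TMs implement execute().
class tm_base : public sc_module {
public:
    sc_fifo<tm_command*> cmd_in;       // assumed to be filled by the command-file reader

    SC_HAS_PROCESS(tm_base);
    explicit tm_base(sc_module_name name) : sc_module(name), cmd_in(16) {
        SC_THREAD(command_loop);       // inherited by every derived TM
    }

protected:
    // Only the module-specific commands have to be implemented per TM.
    virtual void execute(const tm_command& cmd) = 0;

private:
    void command_loop() {
        for (;;) {
            tm_command* c = cmd_in.read();   // blocks until the next command arrives
            execute(*c);
            delete c;
        }
    }
};

// Example derived TM: a clock generator that understands a PERIOD command.
class clk_tm : public tm_base {
public:
    explicit clk_tm(sc_module_name n) : tm_base(n) {}

protected:
    void execute(const tm_command& cmd) override {
        if (cmd.name == "PERIOD" && cmd.args.size() == 2) {
            // e.g. "CLK PERIOD 10 NS": reconfigure the generated clock here
        }
    }
};

In the real flow, the command source is the central testbench controller described above; the FIFO here merely stands in for that mechanism so the sketch stays self-contained.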

Another important and practical requirement is that existing VHDL-based TMs can be used unchanged. SystemC co-simulation wrappers need to be implemented, and they are generated using the fully automated transformation approach described by Oetjens, Gerlach and Rosenstiel [8]. All VHDL TMs are wrapped in SystemC and a new SystemC testbench top-level is built automatically. This allows the user to take advantage of the new command file syntax without re-implementing any TM, and the introduction of randomization within the command file means existing testcases can be enhanced with minimal effort.


Figure 4. SystemC testbench approach


Figure 5. Decimation filter

Figure 4 shows a testbench environment that includes both VHDL and SystemC TMs. As a first step, legacy TMs are retained, as is shown for TM1, TM2 and TM4. Some TMs, like TM3, may be replaced by more powerful SystemC modules later. SystemC modules allow the easy integration of C/C++ functions. Moreover, the TMs provide the interface handling and correct timing for connecting a piece of software.

Design example

Some extended and new verification features were applied to our SystemC-based testbench approach for a specific industrial design, a configurable decimation filter from a Bosch car infotainment application. The filter is used to reduce the sampling frequency of audio data streams, and consists of two independent filter cores. The first can reduce the input sample frequency of one stereo channel by a factor of three, while the second can either work on two stereo channels with a decimation factor of two or one stereo channel with a decimation factor of four. The filter module possesses two interfaces with handshake protocols: one for audio data transmission and the other for accessing the configuration registers.

The original verification environment was implemented in VHDL, based on the legacy testbench concept described in “Verification strategies.” Besides a clock generation module, two testbench modules for accessing both the data transmission and the configuration interface were required. To fulfill the verification plan, a set of directed testcases (command files) was created.

Figure 5 shows the top-level architecture embedded within a SystemC-based testbench. The example demonstrates the smooth transition towards our SystemC-based testbench approach as well as the application of constraint random and coverage-driven verification techniques. This approach also proved flexible enough to offer efficient hardware-software co-verification.

Constraint random verification

The randomization mechanisms of the SystemC-based testbench were extensively used, and the associated regression tests uncovered some interesting corner cases. As a first step, the existing VHDL TMs were re-implemented in SystemC. No significant difficulties were encountered, nor was any extra implementation time required. To check compliance with the legacy VHDL approach, all existing testcases were re-simulated. Since reference audio data was available for all the filter configurations, a random simulation could be implemented quickly, with randomization techniques applied to both the TMs and the command file. The command file was split into a main file containing the general function and an include file holding randomized variable assignments. The main command file consisted of a loop which applied all randomized variables from the include file to reconfigure and run the filter for a dedicated time.


Figure 6. Constraint include file

Figure 6 illustrates an excerpt from the include file. Line 24 describes the load scenario at the audio data interface. The variable #rand_load was applied as a parameter to a command of module A2M later within the main command file. A directed test was enforced by assigning constant values instead of randomized items. Hence, the required tests in the verification plan could be implemented more efficiently as constraint include files. After the verification plan had been fulfilled, all parameters were randomized for the running of overnight regressions and the identification of corner cases.

Coverage-driven verification

Coverage metrics are required to monitor the verification’s progress, especially for random verification. Analyzing the code coverage is necessary but not in itself sufficient.

For this example, a set of functional coverage points was implemented using PSL [5]. Since PSL does not support cover groups and cross coverage, a Perl [9] script was written to generate those cross coverage points. Implementing coverage points required considerable effort, but as a result of that work some verification ‘holes’ in our VHDL-directed testbench were identified. Considering the fully randomized testcase, all coverage points will eventually be covered. In order to meet the coverage goals faster and thus reduce simulation time, a more efficient approach defines multiple randomized testcases using stronger constraints.

Replacing the manual constraints extended our knowledge of TBA techniques with regard to the automatic adaptation of constraints based on the measured coverage results. Therefore, it was necessary to manually define dependencies between constraints and coverage items. Such a testbench hits all desired coverage points automatically. The disadvantage is the considerable effort needed to define the constraints, coverage items and their dependencies.

Nevertheless, a methodology based on our SystemC testbench and PSL was created. First, access to our coverage points was required. Therefore, coverage points were assigned to VHDL signals that could be observed from SystemC. Then, dependencies were identified between the coverage results and constraints within either the command file or a SystemC testbench module. To automate this step, improvements were made to the Perl script. Thus, a CDV testbench module was generated that either passed coverage information to the command file or could be extended for the adoption of constraints in SystemC.

HW/SW co-simulation

In the target application, the decimation filter is embedded within an SoC and controlled by a processor. To set up a system-level simulation, a vendor-specific processor model was given in C and Verilog. Hence, the compiled and assembled target application software, implemented in C, could be executed as binary code on the given processor model. However, for this co-simulation, simulation performance decreased notably, although the actual behavior of the processor model was not relevant in this case.

The application C code consisted of a main function and several interrupt service routines. Control of the audio processing module (the decimation filter) was achieved by accessing memory-mapped registers. Thus, the processor performed read and write accesses via its hardware interface. To overcome these performance limitations, the processor model was omitted and the C code connected directly to a testbench module, as illustrated in Figure 5.

Due to its C++ nature, the SystemC-based testbench approach offered a smart solution. The intention was to map the TMs' read and write functions to register accesses within the application C code. Therefore, the existing register definitions were re-implemented using an object-oriented technique. This allowed the overloading of the assignment and implicit cast operators for those registers. Hence, reading a register, and thus applying the implicit cast, resulted in a read command being executed by the TM. Similarly, assigning a value to a register resulted in a write command being executed by the testbench module.
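A minimal sketch of that operator-overloading idea follows; the class and object names (bus_tm, mapped_reg, FILTER_CTRL) and the register address are illustrative assumptions, not the actual Bosch register definitions.

#include <cstdint>
#include <iostream>

// Assumed interface of the bus/configuration TM: in the real flow these calls
// would issue read/write commands on the DUT's register interface.
struct bus_tm {
    uint32_t read(uint32_t addr) {
        std::cout << "TM read  @0x" << std::hex << addr << std::dec << "\n";
        return 0;                                   // placeholder read data
    }
    void write(uint32_t addr, uint32_t data) {
        std::cout << "TM write @0x" << std::hex << addr
                  << " = 0x" << data << std::dec << "\n";
    }
};

// A memory-mapped register re-implemented as a class: the implicit cast turns
// reads into TM read commands, and operator= turns assignments into TM writes.
class mapped_reg {
public:
    mapped_reg(bus_tm& tm, uint32_t addr) : tm_(tm), addr_(addr) {}

    operator uint32_t() const { return tm_.read(addr_); }                     // register read
    mapped_reg& operator=(uint32_t v) { tm_.write(addr_, v); return *this; }  // register write

private:
    bus_tm&  tm_;
    uint32_t addr_;
};

int main() {
    bus_tm tm;
    mapped_reg FILTER_CTRL(tm, 0x04);   // hypothetical control register

    FILTER_CTRL = 0x3;                  // unchanged application code: write access
    uint32_t status = FILTER_CTRL;      // read access via the implicit cast
    (void)status;
    return 0;
}

With this mapping, the unmodified application C code drives the DUT through the testbench module instead of the omitted processor model.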

Finally, a mechanism was required to initiate the execution of the main and interrupt functions from the application C-code. Therefore, module commands to initiate those C-functions were implemented.

Hence, control and synchronization over the execution of those functions was available within our command file. This was essential to control the audio TM, which is required to transmit and receive audio data with respect to the current configuration. To execute the interrupt functions, the interrupt mechanism in our testbench concept was used.

Conclusions

Taking a company-internal VHDL-based testbench approach as an example, a smooth transition path towards advanced verification techniques based on SystemC can be demonstrated. The approach allows the reuse of existing verification components and testcases. Therefore, there is some guarantee that ongoing projects will benefit from new techniques without risking the loss of design efficiency or quality. This maximizes acceptance of these new techniques among developers, which is essential for their successful introduction.

Acknowledgements

This work was partially funded by the German BMBF (Bundesministerium für Bildung und Forschung) under grant 01M3078. This paper is abridged from the version originally presented at the 2007 Design Automation and Test in Europe conference in Nice.

References

  1. Open SystemC Initiative (OSCI), SystemC 2.1 Library, www.systemc.org
  2. Open SystemC Initiative (OSCI), SystemC Verification Library 1.0, www.systemc.org
  3. IEEE Std 1800-2005, IEEE Standard for SystemVerilog – Unified Hardware Design, Specification, and Verification Language
  4. IEEE Std 1647-2006, IEEE Standard for the Functional Verification Language ‘e’
  5. IEEE Std 1850-2005, IEEE Standard for Property Specification Language (PSL)
  6. The MathWorks homepage, www.mathworks.com
  7. Spirit Library, spirit.sourceforge.net (NB: no ‘www’)
  8. J.H. Oetjens, J. Gerlach, W. Rosenstiel, "An XML Based Approach for Flexible Representation and Transformation of System Descriptions", Forum on Specification & Design Languages (FDL) 2004, Lille, France.
  9. Wall, Larry, et al., Programming Perl (Second Edition), O’Reilly & Associates, Sebastopol, CA, 1996.
  10. IEEE Std 1076.3-1997, IEEE standard for VHDL synthesis packages