Navigating VUnit: A Practical Guide to Modifying Testing Approaches

Michał Barczak, Application Engineer at Aldec

In the two previous blogs, we introduced you to the world of VUnit, guided you through creating a project from scratch, and demonstrated how to run multi-threaded unit testing of multiple independent tests. In this third blog we explore the details of the three basic testing approaches available with VUnit: hardcoded, semi-automated and fully automated. We will discuss how to modify each approach, covering both the testbenches and the Python run scripts, to make your unit testing with VUnit even more effective and intuitive to use.

What Are the Differences Between Testing Approaches?

Generally, there are three distinct testing approaches:

Hardcoded Approach: This method is entirely manual. For instance, in the AES example used in the second blog, you must provide the input data and the expected output by hand, which requires prior knowledge of all values. This approach is suitable only for small projects and offers no automation.

Semi-Automated Approach (VHDL generics-based): This approach uses VHDL generics to declare the input and output data, although the user still has to initialize that data. Unlike the hardcoded method, test suites are created automatically from the generics, and the checking procedure differs: hardcoded tests rely on manual data comparison, while the semi-automated method uses the dedicated VUnit check() function.

Fully Automated Approach (Randomized, VHDL generics-based): Here, the number of tests is defined by the user, with a default of 20. The creation of test cases and the comparison process are handled entirely by VUnit. In this project, a dedicated script prepares the AES encryption data: it generates and randomizes the input data, computes the expected output, and formats everything for the generic testbench. This approach is the most suitable for larger projects.

Modifying the Three Approaches

Hardcoded Approach

Preparing the run.py Script

The first step is to create a Python script that will interact with your testbench. Guidance on how to prepare this script can be found in our earlier blogs or on the official VUnit site. The modified script should include the names of the testbench and the test it will run. If only one testbench is associated with the project, there is no need to define the testbench name or the test case name, even if the testbench contains multiple test cases, as they will simply be executed sequentially. The modified script is shown in Figure 1.

Figure 1: run.py Script Prepared for Hardcoded Testbench

The base Python script differs from this version only by the addition of lines 23-24, which specify the selected testbench and test (a sketch of such a script is shown at the end of this section).

Preparing the Testbench

To create a base testbench compatible with all VUnit functionalities, refer to the guidelines provided in the first blog or on the main VUnit site. The testbench we will modify is named tb_enc_hardcoded.vhd. Besides the basic VUnit syntax, this testbench includes the practical example used in the tests. The main simulation process begins on line 44 and concludes on line 74, as illustrated below.

Figure 2: Simulation Process in the Hardcoded Test Approach

This testbench currently contains only one hardcoded test case. Detailed instructions on executing the example can be found in the previous blog or in the Detailed_Example_Description.md file located on Aldec's GitHub.
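For orientation, a minimal run.py along the lines described above might look like the sketch below. This is not the exact script from the repository: the file paths, the add_vhdl_builtins() call (only required on recent VUnit versions) and the test name "hardcoded_test" are illustrative assumptions; the testbench name follows the example project.

# run.py - minimal sketch of the script described above; paths and the
# test name are illustrative, not copied from the repository
from vunit import VUnit

vu = VUnit.from_argv()
vu.add_vhdl_builtins()  # needed on recent VUnit releases; older ones add the builtins automatically

lib = vu.add_library("lib")
lib.add_source_files("src/*.vhd")                  # AES design files
lib.add_source_files("test/tb_enc_hardcoded.vhd")  # hardcoded testbench

# Select the testbench and the test it contains (cf. lines 23-24 in Figure 1)
tb = lib.test_bench("tb_enc_hardcoded")
test = tb.test("hardcoded_test")  # name as declared with run("...") in the testbench

vu.main()

Obtaining the testbench and test handles in this way also gives you a place to attach configurations later, which is exactly what the generic-based approaches below rely on.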
Upon running the test, you should see a summary report from VUnit indicating that the single test has passed, as shown in Figure 3.

Figure 3: Console Output After Successful Execution of Hardcoded Test with One Test Case

Modifying the Hardcoded Test Approach

To extend the current test suite, you can add further tests. To do this, modify the run.py script by adding a line that points to the next test suite. The keyword test identifies each test suite within the testbench. For example, let's add a test named "another_hardcoded_test". The updated run.py script will appear as shown in Figure 4.

Figure 4: Extended run.py Script with Additional Test

It is crucial to add a test case with exactly the same name to the if statement inside the testbench's while loop for the test suite. This is mostly a copy-paste task, only with different input data. Figure 5 illustrates the necessary modifications.

Figure 5: Hardcoded Testbench with Two Test Suites

Please note that some special rules govern the initialization of the inputs. Consider the following example:

Plaintext: 49be27d3 1acc7f8f b2639010 aa0be7d2
Key: 27bace90 dd7fee4e 5b19ac7c 03ce4322
Ciphertext: 36d01889 cc67a8a1 5b7015e9 507611e2

In this notation, each byte is written as two hex digits and the bytes are arranged column-wise. To convert these values into 128-bit hex literals, the byte order must be reversed, because in this implementation the last cell (bottom right) of the AES state array is treated as the most significant byte. The hex vectors corresponding to the test vectors above are therefore (a small Python helper that performs this conversion is sketched later in this section):

Plaintext: x"d2e70baa109063b28f7fcc1ad327be49"
Key: x"2243ce037cac195b4eee7fdd90ceba27"
Ciphertext: x"e2117650e915705ba1a867cc8918d036"

You could also add the data for the second test after line 71 shown in Figure 2, but doing so would prevent the tests from running simultaneously and compromise their independence.

After executing the example with two hardcoded test suites, you can expect one of two outcomes in the summary: either some tests failed, or all of them passed. See Figure 6.

Figure 6: Two Variants of Hardcoded Test Results

Tests will fail if the provided data are incorrect or if the reset signal is missing. If all tests pass, you have the option to run them simultaneously to speed up the verification process.

Running Hardcoded Tests Simultaneously

You can run these tests in parallel by executing one of the following commands:

python3 run.py -p 2   # on Linux
python run.py -p 2    # on Windows

This will start both tests simultaneously. Example results are shown in Figure 7.

Figure 7: Two Hardcoded Tests Running in Parallel

For further assistance or guidance on executing the example, please visit this VUnit blog post.

VHDL Generic-based Testbench

This approach differs slightly from the previous one and offers a more automated solution. There is no longer any need to declare each test suite inside the while loop; each test suite corresponds to a set of generics.

Preparing the run.py Script

The run.py script remains the same as the one shown in Figure 4 up to line 22. This approach uses a different, generics-based testbench. Unlike the hardcoded method, all input data must be listed in the run.py script, and you can define as many test suites as you like. In this example, we have six different tests.
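Test vectors for this approach must follow the same byte-ordering convention described for the hardcoded tests above. The helper below is a hypothetical sketch (it is not part of the project repository) showing how the column-wise notation can be converted into the reversed 128-bit literal expected by the testbench:

# Hypothetical helper: convert the column-wise AES notation into the
# reversed 128-bit hex literal used by the testbench.
def to_vhdl_hex(columns: str) -> str:
    data = bytes.fromhex(columns.replace(" ", ""))  # concatenate the four 32-bit columns
    return 'x"{}"'.format(data[::-1].hex())         # reverse the byte order

# The plaintext vector from the hardcoded example:
print(to_vhdl_hex("49be27d3 1acc7f8f b2639010 aa0be7d2"))
# prints x"d2e70baa109063b28f7fcc1ad327be49"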
Figure 8 illustrates the part of the run.py script that manages the data initialization for the generics.

Figure 8: run.py File with Test Suites Data Initialization

The list includes:

Source: the test suite name
Key: the ciphering key
Plaintext: the message to be encrypted
Ciphertext: the expected result of the encryption process

After populating the list, you must connect the list entries to the generics of the testbench.

Preparing the Testbench

The testbench file contains slightly different entity generics than the hardcoded example, as shown in Figure 9.

Figure 9: Entity Generics for Generic-Based Test Approach

These generics are linked to the run.py script. The key, the plaintext, and the expected ciphertext are taken from the test configuration defined in run.py and used throughout the encryption process. In the main simulation process, the comparison of the expected and received ciphertexts is handled by the VUnit check function. The main simulation process is depicted in Figure 10.

Figure 10: Main Simulation Process in Generics-Based Test

This if statement executes every test defined in the run.py script: the test is declared as "generic_encryption_test", and each entry in the list has its own unique name but belongs to the "generic_encryption_test" group.

Upon executing the example, we expect six passed tests in the VUnit summary. To run the test, invoke:

python3 run.py   # or python run.py on Windows

An example console output after initializing the test is shown in Figure 11.

Figure 11: Running the Generic-Based Test

Modifying Generic-Based Test Suite

To add more test suites, simply expand the list in the run.py file; the testbench remains unchanged. Test vectors should be entered in the same format as in the hardcoded test. The result of adding one more test case is illustrated in Figure 12.

Figure 12: run.py Script Extension for Additional Test Case in Generic-Based Approach

After running the run.py script again, the additional cases are tested as well. Example results are shown in Figure 13.

Figure 13: Extended Generic-Based Test Results

Random Generic-based Test Approach

This testing approach is fully automated. By specifying the desired number of tests, users can extend the testing duration and cover a broader test suite.

Preparing the run.py Script

In this approach, the run.py script imports an additional file responsible for data generation, randomization and the byte-shifting procedure. The structure of the run.py script is shown in Figure 14.

Figure 14: run.py Script for Random Generic-Based Test Approach

Notice that in line 32 of Figure 14, data_list is defined as a list filled by the prepare_data function, which is implemented in the rand_enc.py script. By adjusting the argument passed to prepare_data, you can generate exactly the number of test cases required. The data for each test suite is transferred to the generics in the testbench file and processed in VHDL.

The external script, rand_enc.py, uses an additional Python package, pycryptodome, to implement the AES encryption algorithm. This script serves as a randomizer and data shifter, increasing the variety of test cases and ensuring the correct input format. The overall structure of the script is shown in Figure 15.

Figure 15: Structure of rand_enc.py Script

In this script, the prepare_data function is declared first. It initializes a list called data_list, which stores all the data needed for the encryption process. Based on the number of tests specified in the function argument, a loop generates randomized plaintexts and cipher keys and formats them correctly for AES encryption. The result of the encryption, the ciphertext, is formatted as well. The data_list is populated in the following order: test name (built from 'random_test_number' plus the current iteration number), cipher key, plaintext, and encrypted text. After preparation, the randomized and formatted input data are automatically assigned to the simulation.
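A simplified sketch of what such a data-preparation script could look like is shown below, assuming pycryptodome is installed (pip install pycryptodome). It is not the repository's rand_enc.py: the exact test-name format and the helper name are assumptions made for illustration.

# rand_enc.py - simplified sketch of the data-preparation script described above
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes


def _reverse_hex(data: bytes) -> str:
    # Return the hex string with reversed byte order, so the last AES state
    # byte becomes the most significant byte (same convention as the
    # hardcoded test vectors).
    return data[::-1].hex()


def prepare_data(num_tests=20):
    # Build a list of [name, key, plaintext, ciphertext] entries.
    data_list = []
    for i in range(num_tests):
        key = get_random_bytes(16)
        plaintext = get_random_bytes(16)
        ciphertext = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
        data_list.append([
            "random_test_number_" + str(i),
            _reverse_hex(key),
            _reverse_hex(plaintext),
            _reverse_hex(ciphertext),
        ])
    return data_list

In run.py, each returned entry can then be turned into a VUnit configuration, roughly as follows; the testbench, test and generic names in this excerpt are assumptions rather than the names used in the repository.

# run.py excerpt - 'lib' is the library handle created earlier in the script
from rand_enc import prepare_data

test = lib.test_bench("tb_enc_generic").test("random_encryption_test")
for name, key, plaintext, ciphertext in prepare_data(20):
    test.add_config(
        name=name,
        generics=dict(encryption_key=key, plaintext=plaintext, ciphertext=ciphertext),
    )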
Modifying the Testbench File

Both approaches, generic-based and random generic-based, rely on data supplied through generics, so the testbenches for the two methods are nearly identical; the only difference is the name of the test used. The main simulation process in the testbench for the random generic-based approach is shown in Figure 16.

Figure 16: Main Simulation Process in Random Generic-Based Test Approach

Integrating Different Approaches in One Project

If you wish to use multiple test cases, you can do so and even combine different approaches and testbenches. In this scenario we integrate all three approaches, so our run.py file incorporates elements of the hardcoded, generic-based, and random generic-based methods. We also employ two different testbenches: one for the hardcoded tests and another for the generic-based tests. The basic run.py script, shown in Figure 17, is then populated with the essential data, such as the testbench names and test suite cases, so that VUnit knows which test case to execute with which testbench file.

Figure 17: Basic run.py Script Ready to be Filled with Data for Test Cases

After adding the source files, and before launching, make sure the code responsible for setting the 'tb' and 'test' values is included. If necessary, also add the code that transfers data from the Python files to the appropriate testbench. The run.py file covering all three approaches is available in Aldec's GitHub repository here. Once prepared, the run.py file can be executed; the results of the test run are shown in Figure 18.

Figure 18: Running Example with Combined Test Approaches

Summary

In summary, the three basic testing approaches can be combined within one design, providing scalability that matches the complexity of the project. A significant advantage of this integration is the level of detail in the results, as errors are identified and reported immediately. After a successful testing procedure, detailed metrics are provided as well, including the time taken by each test case.

We encourage you to explore these tools and experiment with the approaches discussed in your own projects. Stay tuned for future installments, where we'll dive deeper into advanced features and best practices for design verification. Also, if you are not currently using Active-HDL or Riviera-PRO, you can request free evaluation licenses for fully functional versions of the tools here.

References

Previous Blog – Speeding Up Simulation with VUnit for Parallel Testing
Project Repository on Aldec's GitHub
VUnit Official Website