Speeding Up Simulation with VUnit for Parallel Testing

Michał Barczak, Application Engineer at Aldec

Effective simulation is essential in hardware development, as time and accuracy are critical factors that can determine the success or failure of a project.

 

In our previous blog post, we introduced VUnit, an open-source framework designed to streamline unit testing for VHDL and SystemVerilog development. We also discussed key concepts such as the purpose of VUnit, its advantages, its compatibility with Aldec's simulators, and the steps involved in setting up a project with VUnit.

 

If you have not read that earlier blog post yet, I highly recommend doing so before reading this one, as it provides a solid foundation for the more advanced techniques we will cover here. In this post we will explore a practical example that demonstrates the use of VUnit with Riviera-PRO, focusing on AES encryption.

 

The example is available on Aldec's GitHub and includes multiple source files, testbenches, and a top-level Python script. In this blog we will use that example to highlight the benefits of combining VUnit with Riviera-PRO.

 

The Power of Multithreaded Processing for Independent Tests

Imagine dealing with a massive testbench file that takes an eternity to simulate. By breaking down the main testbench process into multiple independent tests that can run concurrently, you can significantly reduce the simulation and verification time for units in your project.

 

Each test can operate on a separate CPU thread, allowing different calculations to be performed simultaneously. The progress of VUnit testing can be monitored via the simulator GUI, and once all tests are completed, the cumulative results are shown in the console. The parallel processing is illustrated in Figure 1.

 

 

Figure 1: VUnit Multithreaded Simulation Architecture with Independent Threads using an Aldec Simulator (Active-HDL or Riviera-PRO).

 

In this architecture, the VUnit run-script organizes the test cases, which are then executed independently on separate CPU cores within the Aldec simulator.

 

While this may appear complex, I will simplify things and show you how to seamlessly integrate VUnit into your current project. So, let us navigate through the steps together to enhance your development workflow.

 

Testing Approaches

There are three basic approaches to using VUnit:

  1. Hardcoded test: Suitable for small projects with minimal automation requirements. The input data is hard-coded, which makes this approach ideal for basic testing scenarios.
  2. Generics-based tests: These are more automated and can handle larger examples. The input data needed for testing is passed to the testbench through VHDL generics (see the sketch after this list for how such test cases can be registered from the run-script). In the example, there are six test cases based on a set of references, each requiring unique data provided manually.
  3. Randomized, generics-based tests: This approach combines randomized inputs with a Python reference package. It allows for thorough testing, including corner cases. The Python reference package randomizes the input data and calculates the expected ciphertext, so the correctness of the encryption can be verified. The number of tests to run can be specified, with 20 tests set as the default. This approach is highly automated and suitable for projects of all sizes.
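
To make the generics-based approaches more concrete, here is a minimal sketch of how each reference data set can be turned into an independent VUnit test case from the Python run-script. The testbench name (aes_generic_tb), the file path, and the generic names (plaintext_g, key_g) are assumptions made for illustration; they are not taken from the project sources.

```python
# Minimal sketch: registering generics-based test configurations in VUnit.
# Testbench, file, and generic names (aes_generic_tb, plaintext_g, key_g) are assumed.
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("tb/aes_generic_tb.vhd")  # assumed path to the generics-based testbench

tb = lib.test_bench("aes_generic_tb")

# One configuration per reference vector; each becomes an independent test case
# that VUnit can schedule on its own simulator thread.
reference_vectors = [  # standard AES-128 examples (FIPS-197 / SP 800-38A), plaintext and key as hex
    ("ref_0", "00112233445566778899aabbccddeeff", "000102030405060708090a0b0c0d0e0f"),
    ("ref_1", "6bc1bee22e409f96e93d7e117393172a", "2b7e151628aed2a6abf7158809cf4f3c"),
]
for name, plaintext, key in reference_vectors:
    tb.add_config(name=name, generics=dict(plaintext_g=plaintext, key_g=key))

vu.main()
```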

 

Workflow and Testbench Integration

The above approaches increase in complexity in terms of implementation, but the return is better performance and greater test coverage.

 

All three testing approaches are combined in this project.

 

Approaches 2 and 3 share the same testbench structure, while the first approach uses a different testbench file. The connection between the Python run-script, the testbenches, and Riviera-PRO is illustrated in Figure 2.

 

 

Figure 2: Example Workflow with the Python Script, Testbenches, and Aldec's Simulator

 

The workflow involves several key steps:

  • Source Inputs: The process begins with two source directories: source files and testbench files.
  • Python Run-Script: These inputs are processed by a top-level Python script, which manages and orchestrates various testing tasks:
    • Work Library: The source files and testbench files are compiled into a work library.
    • Hardcoded Test: This involves running predefined tests.
    • Generics-Based Tests: These are tests that use generic parameters, allowing for more flexible and reusable test scenarios.
    • Generics-Based Randomized Tests: These are similar to generics-based tests, but with added randomization to cover a wider range of scenarios and increase test coverage.
    • Simulation Options Setting: This step configures various simulation flags to control the behavior and output of the test runs.
  • Data Profiler: The Python run-script interfaces with a data profiler tool, which collects performance and profiling data during the test runs. The data profiler uses one of Aldec’s simulators at a time.
  • Testbenches: The organized tests from the Python run-script are executed within specific testbenches:
    • Hardcoded_tb: Executes the hardcoded tests.
    • Generic_tb: Executes the generics-based tests.
  • Results: The outputs from the testbenches are compiled into a results container, which includes:
    • VUnit Summary: A summary of the test results, generated by VUnit, and displayed in the console window.
    • Profiled Data: Detailed profiling data collected during the test runs.
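
The workflow above maps fairly directly onto the structure of a top-level VUnit run-script. Below is a minimal sketch of such a script: it compiles the source and testbench files into a work library, sets a simulation option, and hands control to VUnit, which schedules the tests and prints the summary. The directory names (src, tb) and the chosen option are only examples for illustration, not the exact contents of the project's run.py.

```python
# Minimal sketch of a top-level VUnit run-script; directory names and options are assumed.
from vunit import VUnit

vu = VUnit.from_argv()        # parses command-line options such as -p <threads>
lib = vu.add_library("lib")   # work library for design and testbench files
lib.add_source_files("src/*.vhd")
lib.add_source_files("tb/*.vhd")

# Example of setting a simulation option; VUnit passes it on to the active simulator.
vu.set_sim_option("disable_ieee_warnings", True)

vu.main()                     # compiles everything, runs all test cases, prints the VUnit summary
```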

 

Running the AES Encryption Example

AES stands for Advanced Encryption Standard. It is a symmetric block cipher, i.e. a method of encrypting data in fixed-size blocks to produce ciphertext using a cryptographic key and algorithm. AES splits a message (the plaintext) into 128-bit blocks and puts each block through multiple encryption rounds to produce the ciphertext.

 

Since AES is a symmetric encryption algorithm, the same key is used to encrypt and decrypt the data.
To run the example successfully, ensure you have:

  • Python3
  • The vunit_hdl and pycryptodome Python packages
  • A valid Riviera-PRO license
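
Both Python packages are available from PyPI, so a typical installation (assuming pip is available for your Python 3 interpreter) looks like this:

  • python3 -m pip install vunit_hdl pycryptodome # use python instead of python3 on Windows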

 

It is crucial to have the appropriate number of simulator licenses for multithreaded testing. If you encounter any issues during setup, refer to the example's README file, in which each step is described for both Riviera-PRO and Active-HDL users.

 

Once the setup is complete, you need to tell the Python runner where the simulator is located. Navigate to the main directory of the example and open a terminal or CMD.

 

Add the Riviera-PRO installation directory to the PATH value.

 

The command may vary slightly based on the operating system:

  • export PATH=<Riviera-PRO installation directory>/bin:$PATH # for Linux
  • set PATH=<Riviera-PRO installation directory>\bin;%PATH% # for Windows

There are also other ways to provide access to the simulator; all of them are described in the 'Detailed_Example_Description.md' file here. If you are using more than one simulator, it is crucial to set the VUNIT_SIMULATOR variable.

 

Again, the command may vary slightly based on the operating system:

  • export VUNIT_SIMULATOR=rivierapro # for Linux
  • set VUNIT_SIMULATOR=rivierapro # for Windows

The last step is to run the Python run-script file in the system console using the command:

  • python3 run.py

To run the script with multiple threads, add the `-p` flag followed by the desired number of threads:

  • python3 run.py -p <number_of_threads>

 

Note for Windows users: by default, the command begins with python rather than python3.

 

VUnit does not restrict the number of threads; it will try to create the requested number as long as there are enough test cases to fill them. It will never create more threads than there are test cases, nor more than the limit set by the `-p` option.

 

Upon running the script, the compilation process for all project files will commence, followed by the execution of the three test scenarios. The results will be displayed in the console and should look similar to those shown in Figure 3.

 

 

Figure 3: Results View on the Console

 

To change the testing time, adjust the integer argument of the prepare_data() function in the run.py script; it controls the number of test cases with randomized input data.
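
For reference, the snippet below is a minimal sketch of how randomized input data and the expected ciphertext can be produced with pycryptodome and turned into VUnit configurations. The function name prepare_data comes from the project's run.py, but the body shown here and the generic names (plaintext_g, key_g, expected_g) are assumptions for illustration only.

```python
# Sketch only: randomized AES-128 reference data generated with pycryptodome.
# The real prepare_data() in run.py may differ; generic names are assumed.
import os
from Crypto.Cipher import AES  # provided by the pycryptodome package


def prepare_data(num_tests):
    """Generate num_tests random (name, plaintext, key, expected ciphertext) tuples as hex strings."""
    vectors = []
    for i in range(num_tests):
        key = os.urandom(16)        # random 128-bit key
        plaintext = os.urandom(16)  # one random 128-bit block
        expected = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
        vectors.append((f"rand_{i}", plaintext.hex(), key.hex(), expected.hex()))
    return vectors


# In the run-script, each tuple could then become one VUnit configuration, e.g.:
# for name, pt, key, exp in prepare_data(20):
#     tb.add_config(name=name, generics=dict(plaintext_g=pt, key_g=key, expected_g=exp))
```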

 

Results and Performance Analysis

Let us analyze the performance results of our AES encryption example. The testing time depends on the hardware and may differ between machines; in this case, all tests were conducted on the same machine. A total of 27 test cases were executed: one hardcoded test, six tests created using generics and input data from references, and 20 randomized tests.

 

Figures 4 to 9 show the results for all 27 test cases as the number of threads is increased:

 

 

Figure 4: Using a single thread (single Riviera-PRO license), the process takes about 33 seconds.

 

 

Figure 5: Using the same script with two threads (licenses) reduces the testing time to around 16 seconds, giving us a twofold performance boost.

 

 

 

Figure 6: Further enhancing the testing environment with a third thread reduces the verification time to 10 seconds.

 

 

Figure 7: Using five Riviera-PRO licenses and employing five threads produces even better results, with testing completing in about 7 seconds (not surprisingly, it is almost five times faster than single-threaded testing).

 

 

Figure 8: Running the script with 15 threads/licenses reduces the testing time to 2.6 seconds.

 

 

Figure 9: Utilizing 30 threads leads to testing completion in less than 2 seconds.

 

All test results were compiled into a graph for easier analysis. The graph clearly shows that increasing the number of threads (and with it the number of Aldec licenses used) results in shorter simulation times. See Figure 10.

 

 

Figure 10: Elapsed Simulation Time vs. Number of Threads

 

These results highlight VUnit's effectiveness in accelerating unit verification processes. While the above only reduced the test time from circa 33 seconds (single thread/license) to less than 2 seconds (30 threads/licenses), it is easy to see how, on larger projects, overall verification timelines could be reduced from months to days.

 

The overall performance of testing is influenced by the host machine, its processor, thread count, and testbench complexity.

 

Profiling Data with Riviera-PRO Simulator

After running the example, you can profile the data during simulation.

 

The logic simulation profiler is an advanced debugging feature of Riviera-PRO. It provides valuable insights into the performance of each process in the design, making it easier to identify project bottlenecks, optimize the design, and further enhance verification performance.

 

Note: to ensure the profiler works correctly, a specific patch needs to be applied. Refer to the example on Aldec’s GitHub for detailed instructions on how to apply the necessary patch.

 

Navigate to the directory where the run.py script was executed. Next, launch the Riviera-PRO simulator. To generate a profiler report in HTML format (or other formats as detailed in Aldec's Riviera-PRO online documentation here), use the following command:

 

profiler report -tbp $curdir/vunit_out/test_output//rivierapro/Profiler/profiler.tbp -html profiler_report.html

 

Once the report is generated, you can view it (see Figure 11). You have the option to open it using the Riviera-PRO GUI or a web browser.

 

 

Figure 11: Data Profiler Report Generated in Riviera-PRO

 

Summary and Conclusion

The multi-threaded simulation of independent tests leads to significant simulation speed-ups: the more threads, the faster the simulation. However, you will need one license per thread.

 

VUnit has been successfully utilized in both large-scale production environments, where thousands of tests run on powerful multi-core machines over several hours, and in small open-source projects, where testing a small package takes only a few seconds.

 

By integrating with the Aldec simulator, users can access detailed profiled data for enhanced verification, or they can try out other advanced debugging features. This versatility makes VUnit and the Aldec simulator valuable assets for development teams of all sizes.

 

So why wait? Dive into the world of testing and verification with VUnit and leverage the power of our simulators to unlock the full potential of your FPGA projects.

 

Also, if you are not currently using Active-HDL or Riviera-PRO, you can request free evaluation licenses for fully functional versions of the tools here.

 

If you are ready to explore VUnit further and try modifying the project presented in this blog post, for example by editing test suite cases or experimenting with different approaches, we invite you to visit the next blog in this series by clicking here.

 

References

Previous Blog - Introduction to VUnit

Next Blog – Navigating VUnit: a Practical Guide to Modifying Testing Approaches

Project Repository on Aldec’s GitHub

Riviera-PRO Online Documentation

Michał Barczak is an Application Engineer at Aldec, specializing in RTAX/RTSX adaptors. He joined the team in 2022 and possesses a strong background in verification methodologies such as UVM, OSVVM, and UVVM. With expertise in PCIe technology, BFM simulations, and automotive applications, Michał brings a wealth of experience to his role. He holds a Master of Science degree in Microelectronic Engineering from Gdańsk University of Technology in Poland. Outside of work, Michał harbors a dream of one day traveling to the moon, despite his fear of heights.
