Increase to Queue Size not acted on

Hi,

I’m getting the following error:

UVVM:
UVVM: *** TB_WARNING #1 ***
UVVM: 12875098160 ns UART_VVC,1,RX
UVVM: Queue is now at 950 of 1000 elements.
UVVM:
UVVM:
UVVM: =========================================================================================================================================================================
UVVM: *** TB_ERROR #1 ***
UVVM: 12875098160 ns UART_VVC,1,RX
UVVM: check_value() => Failed. Boolean was false. 'add() into generic queue (of size 1000) when full'
UVVM:

I have increased the queue sizes using the generic method:

i1_uart_vvc : entity bitvis_vip_uart.uart_vvc
  generic map(
    GC_INSTANCE_IDX                 => 1,
    GC_UART_CONFIG                  => C_UART_BFM_CONFIG,
    GC_CMD_QUEUE_COUNT_MAX          => 5000,
    GC_CMD_QUEUE_COUNT_THRESHOLD    => 4900,
    GC_RESULT_QUEUE_COUNT_MAX       => 5000,
    GC_RESULT_QUEUE_COUNT_THRESHOLD => 4900
  )

and, when that didn’t work, for good measure, also using the shared variable:

shared_uart_vvc_config(RX, 1).cmd_queue_count_max := 5000;
shared_uart_vvc_config(TX, 1).cmd_queue_count_max := 5000;
shared_uart_vvc_config(RX, 1).cmd_queue_count_threshold := 4900;
shared_uart_vvc_config(TX, 1).cmd_queue_count_threshold := 4900;

shared_uart_vvc_config(RX, 1).result_queue_count_max := 5000;
shared_uart_vvc_config(TX, 1).result_queue_count_max := 5000;
shared_uart_vvc_config(RX, 1).result_queue_count_threshold := 4900;
shared_uart_vvc_config(TX, 1).result_queue_count_threshold := 4900;
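
For context, these assignments are made from the central sequencer once UVVM has initialised, before any UART traffic is started - roughly like this (process name and exact placement are illustrative):

p_main : process
begin
  await_uvvm_initialization(VOID); -- make sure the VVCs have been constructed first
  shared_uart_vvc_config(RX, 1).cmd_queue_count_max    := 5000;
  shared_uart_vvc_config(RX, 1).result_queue_count_max := 5000;
  -- (same for TX and for the thresholds, as above)
  -- ... rest of the test sequence ...
end process p_main;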

The log shows that the queue size has increased, but the error still triggers:

UVVM: ID_CONSTRUCTOR 0.0 ns UART_VVC,1,TX VVC instantiated for channel TX
UVVM: ID_CONSTRUCTOR_SUB 0.0 ns UART_VVC,1,TX Command queue instantiated and will give a warning when reaching 5000 elements in queue.
UVVM: ID_CONSTRUCTOR_SUB 0.0 ns UART_VVC,1,TX Result queue instantiated and will give a warning when reaching 5000 elements in queue.
UVVM: ID_CONSTRUCTOR 0.0 ns UART_VVC,1,RX VVC instantiated for channel RX
UVVM: ID_CONSTRUCTOR_SUB 0.0 ns UART_VVC,1,RX Command queue instantiated and will give a warning when reaching 5000 elements in queue.
UVVM: ID_CONSTRUCTOR_SUB 0.0 ns UART_VVC,1,RX Result queue instantiated and will give a warning when reaching 5000 elements in queue.

There aren’t any later ID_CONSTRUCTOR_SUB log entries.

The problem seems to occur when I am using fetch_result: it looks like the command or result is not being cleared off the queue. Is there something else I need to do to ensure the queue gets emptied? Note that flush_command_queue() doesn't seem to make any difference, which perhaps points to an issue with the result queue rather than the command queue.
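
For reference, the receive-and-inspect flow follows the usual UVVM pattern - roughly this (timeout and messages are illustrative; v_cmd_idx is a natural and v_result is the UART VVC's t_vvc_result):

uart_receive(UART_VVCT, 1, RX, "Receive one byte");
v_cmd_idx := get_last_received_cmd_idx(UART_VVCT, 1, RX);
await_completion(UART_VVCT, 1, RX, v_cmd_idx, 1 ms, "Wait for the receive to complete");
fetch_result(UART_VVCT, 1, RX, v_cmd_idx, v_result, "Fetch the received byte");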

I ran into this problem a couple of days ago and worked around it by removing unneeded calls to fetch_result, but I have now added more tests where I do need to receive and inspect the contents of the messages.

The test bench is reasonably large; it runs for 12778094550.0 ns before failing. The TB sends and receives a 32-byte message in and out of the DUT at 10 Hz.

UVVM tag: 2025.03.25
ModelSim: SE-64-2021.3

Any help or workarounds would be appreciated.

Many thanks,
Pete

Hi again,

I've worked around the problem by modifying the queue declaration in UVVM-2025.03.25/uvvm_util/src/generic_queue_pkg.vhd to be:

package generic_queue_pkg is

  generic(type t_generic_element;
          scope                    : string  := C_SCOPE;
          GC_QUEUE_COUNT_MAX       : natural := 5000;  -- 1000;
          GC_QUEUE_COUNT_THRESHOLD : natural := 4900); -- 950);

However, this doesn't make much sense, as the code used to change the queue size looks correct: it updates the right variable, and that variable is used throughout as expected.
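
To illustrate, as far as I can tell the construction path boils down to something like this (paraphrased, not the exact UVVM source):

-- in the VVC constructor: the generics are applied to the protected
-- queue objects through the generic queue's own setters
command_queue.set_queue_count_max(GC_CMD_QUEUE_COUNT_MAX);
command_queue.set_queue_count_threshold(GC_CMD_QUEUE_COUNT_THRESHOLD);
result_queue.set_queue_count_max(GC_RESULT_QUEUE_COUNT_MAX);
result_queue.set_queue_count_threshold(GC_RESULT_QUEUE_COUNT_THRESHOLD);

so overriding the generics on the VVC should be all that is needed, without touching the package defaults.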

So something weird is going on - maybe I have a compiler problem??

Also, there is still the original problem that entries do not appear to be removed from the queue, which is what causes it to fill up in the first place.

Confused, but able to move on…

Cheers,
Pete

Hi, thank you for the report. I agree this seems a bit strange - we will have to investigate what is happening. Having a quick look at the code, I do not see why there should be an issue with setting the queue size.
The issue with the queue filling up seems to be a bug. I will create a ticket so that we can follow up on it.