A general solution is to use a timer to generate ticks of some suitable length (perhaps a convenient or standardized period like 0.1 ms). At 80 MHz a 0.1 ms tick means dividing the clock by 8000, a count that fits easily in a 16-bit timer. Timers generally have prescalers that divide the 80 MHz clock to give a variety of ratios. At the end of each timer period an interrupt is generated.
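As a rough sketch of that arithmetic in C (F_CPU, PRESCALE and TICK_HZ are placeholder names, not registers of any particular part):

```c
#define F_CPU      80000000UL                     /* 80 MHz timer clock        */
#define PRESCALE   1UL                            /* no prescaling needed here */
#define TICK_HZ    10000UL                        /* 10 kHz -> 0.1 ms tick     */
#define TICK_COUNT (F_CPU / PRESCALE / TICK_HZ)   /* = 8000, fits in 16 bits   */
```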
In the interrupt service routine (ISR), use global variables that count ticks down to zero. Other routines can set these variables to the period to be timed and check for them reaching zero. There can be several variables so that different routines can time different intervals simultaneously. With an 80 MHz clock there is room for quite a few instructions in the ISR, so it can even run a calendar clock as well as a number of interval timers. This allows other processes to continue with only the small overhead of the ISR.
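A minimal C sketch of the idea; the handler name timer0_isr and how it gets attached to the timer are platform-specific assumptions:

```c
#include <stdint.h>

volatile uint16_t delay_a, delay_b;   /* one countdown per client routine */
volatile uint16_t subsec_ticks;       /* ticks within the current second  */
volatile uint32_t seconds;            /* simple calendar/uptime clock     */

void timer0_isr(void)                 /* called every 0.1 ms tick */
{
    if (delay_a) delay_a--;           /* count down toward zero   */
    if (delay_b) delay_b--;

    if (++subsec_ticks >= 10000u) {   /* 10000 ticks = 1 second   */
        subsec_ticks = 0;
        seconds++;
    }
}
```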
If you need an interrupt when the time is done, there may be a software interrupt instruction, or you can use a second timer for the countdown that generates an interrupt.
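A hedged sketch of that, with trigger_sw_interrupt() standing in for whatever mechanism the part actually provides (a software interrupt instruction, an interrupt-force register, or starting a spare timer):

```c
#include <stdint.h>

extern void trigger_sw_interrupt(void);  /* hypothetical stand-in */

volatile uint16_t delay_a;

void timer0_isr(void)                    /* same 0.1 ms tick as above */
{
    if (delay_a && --delay_a == 0)
        trigger_sw_interrupt();          /* notify exactly when time is up */
}
```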
If the time needs to be precise, you could adjust the master clock to a slightly different frequency, as well as set the counter appropriately, so that an integer division ratio gives the exact time required.
e.g. with a 15 MHz timer clock, dividing by 32767 (for a 16-bit timer) gives 2.1845 ms. If you want exactly 2 ms, 15 MHz divided by 30000 gives 2 ms.
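The same divisor arithmetic as compile-time constants (all names are placeholders):

```c
#define F_TIMER  15000000UL         /* 15 MHz timer clock              */
#define DIV_MAX  32767UL            /* 32767 / 15 MHz = 2.1845 ms      */
#define DIV_2MS  (F_TIMER / 500UL)  /* 500 Hz period -> 30000 -> 2 ms  */
```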
With this approach you can have subroutines/functions/macros for wait or delay, or to set an interval, that calculate the correct settings. Also, if a particular I/O action is needed regularly, like flashing an LED, it can live in the ISR on this fast machine.
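A sketch of such a delay function built on the tick counters; delay_a and the 0.1 ms tick are the assumptions carried over from the ISR sketch above, and the single 16-bit store is assumed to be atomic with respect to the ISR:

```c
#include <stdint.h>

extern volatile uint16_t delay_a;      /* decremented in the tick ISR */

void delay_ticks(uint16_t ticks)       /* 1 tick = 0.1 ms */
{
    delay_a = ticks;                   /* assumed atomic 16-bit store   */
    while (delay_a != 0)
        ;                              /* spin; interrupts keep running */
}

void delay_ms(uint16_t ms)
{
    delay_ticks((uint16_t)(ms * 10u)); /* 10 ticks per millisecond */
}
```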
For very high accuracy you would need an accurate external clock for the timer. For very low overhead you could use an external timer (even a 555 monostable), reset by software through an I/O pin and generating an interrupt on a digital input when done.
Sorry, I am not familiar with this assembler syntax, so I only have a vague idea of what is happening. Maybe $t4 is a timer. It would need 19 bits or more to hold 500,000.