delayMicrosecond sometimes makes much shorter delay than requested. #176


Closed
fpistm opened this issue Dec 12, 2017 · 7 comments
Assignees
Labels
bug 🐛 Something isn't working

Comments

@fpistm
Member

fpistm commented Dec 12, 2017

Issue reported here by ut2uz:
http://stm32duino.com/viewtopic.php?f=48&t=2960

delayMicrosecond sometimes makes much shorter delay than requested. Say, 1 us instead of 4500.

Original implementation from wiring_time.h:

static inline void delayMicroseconds(uint32_t usec){
  uint32_t start = GetCurrentMicro();

  while((start+usec) > GetCurrentMicro());
}

Proposed solution by ut2uz:

static inline void delayMicroseconds(uint32_t usec){
  uint32_t end = GetCurrentMicro() + usec;

  while(((int32_t)(GetCurrentMicro()-end))<0);
}

Note that delays greater than 2147483647 µs (2147 s ≈ 35 min) are handled incorrectly. I'm not familiar with compiler warnings in this IDE, so I left the solution as is.

@fpistm fpistm self-assigned this Dec 13, 2017
@fpistm fpistm added the bug 🐛 Something isn't working label Dec 13, 2017
@lacklustrlabs
Contributor

lacklustrlabs commented Jan 8, 2018

This is how esp32duino handles the overflow problem:

void IRAM_ATTR delayMicroseconds(uint32_t us)
{
    uint32_t m = micros();
    if(us){
        uint32_t e = (m + us);
        if(m > e){ //overflow
            while(micros() > e){
                NOP();
            }
        }
        while(micros() < e){
            NOP();
        }
    }
}

I think this solution retains the full uint32_t resolution.

Edit: better solution (just like delay() minus the unpredictable yield):

static inline void delayMicroseconds(uint32_t us) {
  if(us==0) 
    return;
  uint32_t start = micros();
  do {
    _NOP();
  } while (micros() - start < us);
}

@fpistm
Member Author

fpistm commented Jan 8, 2018

Thanks,
In fact, the issue is deeper: it's linked to how micros() is handled in this core:

uint32_t GetCurrentMicro(void)
{
  uint32_t m0 = HAL_GetTick();
  uint32_t u0 = SysTick->LOAD - SysTick->VAL;
  uint32_t m1 = HAL_GetTick();
  uint32_t u1 = SysTick->LOAD - SysTick->VAL;

  if (m1 > m0) {
    return ( m1 * 1000 + (u1 * 1000) / SysTick->LOAD);
  } else {
    return ( m0 * 1000 + (u0 * 1000) / SysTick->LOAD);
  }
}

Using the interrupt-based software counter (i.e. SysTick) is not the right approach, I think.
Using DWT->CYCCNT or another cycle-count-based approach should be better.
Unfortunately, DWT->CYCCNT is not available on Cortex-M0/M0+.

@fpistm fpistm added the on going Currently work on this label Jan 27, 2018
@lacklustrlabs
Contributor

I was just reading up on DWT->CYCCNT.
Any chance of making it accessible for those boards that do support it?

@fpistm
Member Author

fpistm commented Jan 27, 2018

That's precisely my question: use DWT for those that support it, or use the same implementation for all (using a timer or asm code).
I've already rewritten micros(), which now has better accuracy, but this does not solve the drawback of delayMicroseconds. One certainty: it must not rely on micros().

@fpistm fpistm added this to the 1.2.1 milestone Mar 23, 2018
@fpistm fpistm removed this from the 1.2.1/1.3.0 milestone May 28, 2018
@fpistm fpistm removed the on going Currently work on this label Oct 1, 2018
@ppescher
Contributor

ppescher commented Nov 7, 2018

If you address timing issues, please also have a look at how delay() is currently implemented.

Unfortunately, it seems that many Arduino cores (if not all) are getting this wrong: they all use unsigned subtraction to compare time values. While that may seem mathematically correct, it's not how it works in the C language (and many others), where unsigned arithmetic is modular.

This is the STM32 core implementation:

uint32_t start = GetCurrentMilli();
do {
  yield();
} while (GetCurrentMilli() - start < ms);

The correct way to handle this (and timeout conditions in general) is to use a signed subtraction (or cast to a signed integer after subtraction, that yields the same result).

... while ((int32_t)(GetCurrentMilli() - start) < (int32_t)ms);

Actually, it is correct as long as the delay or timeout is less than or equal to INT_MAX, which means the time counter has wrapped around at most once.
Please note that you need to cast both sides of the relational operator, otherwise the difference is implicitly converted back to unsigned! Watch your proposed delayMicroseconds().

It would have caused much less trouble if millis() and delay() had been defined to use signed integers. There is no real advantage in using unsigned numbers, as they wrap around sooner or later anyway. Going back to zero or starting from negative values makes no difference if all you do is measure relative time (delays/timeouts), and the application must know it cannot measure absolute time indefinitely with a timer counter: it will overflow.

For those who are not sure and want to try some code, I shared an example:
https://onlinegdb.com/rk2pqEepm

@fpistm
Member Author

fpistm commented Nov 7, 2018

I know. Yet even in your example there is still one issue, when the subtraction overflows the int range:

a = 0, b = 20, c = 4294967276, d = 10
using signed subtraction                                                                                                                    
a - b <= d                                                                                                                                  
a - c > d                                                                                                                                   
using unsigned subtraction with cast to signed                                                                                              
a - b <= d                                                                                                                                  
a - c > d      

0 - 4294967276 is evaluated as 20 in any case.

Anyway, I'm not working on this at the moment, but all the time-related code requires a deep check.

@ppescher
Contributor

ppescher commented Nov 7, 2018

I see what you mean. In the delay() function it never happens that you compute a - b with b > a, because the timer counter is always incrementing.

Generally I don't feel safe doing subtractions on unsigned integers (been bitten before), however in this case it should be fine.

3 participants