44 Commits

Author SHA1 Message Date
ouyangxiangzhen
df183b6682 sched/wqueue: Remove the work_queue_period.
For simplicity, better performance, and lower memory overhead, this commit replaced the periodic workqueue APIs with the more expressive work_queue_next. work_queue_next restarts the work based on the last expiration time.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-06-10 11:02:45 -03:00
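
A minimal sketch of how a formerly periodic work item might look with this API, assuming work_queue_next takes the same arguments as work_queue (the signature, the 1-second period, and the sampler names are illustrative assumptions):

#include <nuttx/clock.h>
#include <nuttx/wqueue.h>

static struct work_s g_sampler_work;

static void sampler_worker(FAR void *arg)
{
  /* ... do the periodic job here ... */

  /* Re-arm relative to the last expiration time so the period does not
   * drift by the worker's own execution time.
   */

  work_queue_next(LPWORK, &g_sampler_work, sampler_worker, arg,
                  SEC2TICK(1));
}

/* Start the first cycle, e.g. from initialization code: */

static int sampler_start(void)
{
  return work_queue(LPWORK, &g_sampler_work, sampler_worker, NULL,
                    SEC2TICK(1));
}
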
ouyangxiangzhen
fce40edb6b sched/wqueue: Add max delay tick limitation.
This commit added a maximum delay tick limitation for the workqueue.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-05-27 12:10:39 +02:00
ouyangxiangzhen
64a7049dec clock: Add clock_delay2abstick.
This commit added a macro function clock_delay2abstick to calculate the
absolute tick after the delay.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-05-12 19:56:29 +08:00
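
A plausible sketch of what such a macro looks like (an assumption based on the name, not necessarily the committed definition): the current system tick plus the relative delay.

#include <nuttx/clock.h>

#define clock_delay2abstick(delay) (clock_systime_ticks() + (clock_t)(delay))

/* Usage, e.g. computing when a delayed work item is due:
 *
 *   clock_t qtime = clock_delay2abstick(MSEC2TICK(100));
 */
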
ouyangxiangzhen
36a4d5feaf sched/wqueue: Improve performance of the work_queue.
This commit improves the performance of the work_queue by reducing
unnecessary wdog timer setup.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-05-11 10:47:13 +08:00
ouyangxiangzhen
6f72f5481d sched/wqueue: Refactor delayed and periodic workqueue.
This commit refactors how the workqueue processes delayed and periodic work, and moves the timer setup into `work_thread`. The improvements of this change are as follows:
- Fixed the memory reuse problem of the original periodic workqueue implementation.
- By removing the `wdog_s` structure from the `work_s` structure, the memory overhead of each `work_s` structure is reduced by about 30 bytes (see the sketch after this entry).
- The timer is now set per workqueue instead of per work, which improves system performance.
- Simplified the workqueue cancel logic.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-05-07 02:02:10 +08:00
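
An illustration of the memory saving and per-queue timer described above; the field names and layouts are simplified assumptions, not the actual NuttX definitions:

#include <nuttx/clock.h>
#include <nuttx/list.h>
#include <nuttx/queue.h>
#include <nuttx/wdog.h>
#include <nuttx/wqueue.h>

struct work_before_s                /* one wdog_s embedded in every work */
{
  struct dq_entry_s entry;          /* queue entry */
  worker_t worker;                  /* work callback */
  FAR void *arg;                    /* callback argument */
  struct wdog_s timer;              /* per-work watchdog, roughly 30 bytes */
};

struct work_after_s                 /* the watchdog moves into the queue */
{
  struct list_node entry;           /* queue entry */
  worker_t worker;                  /* work callback */
  FAR void *arg;                    /* callback argument */
  clock_t qtime;                    /* absolute tick when the work is due */
};

/* A single wdog_s now lives in each workqueue and is re-armed by
 * work_thread for the earliest pending qtime.
 */
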
ouyangxiangzhen
9dbb9b49c6 sched/wqueue: Change dq to list.
In NuttX, dq and list are two different implementations of a doubly linked list. Compared to dq, the list implementation has fewer branch conditions, such as checking whether the head or tail is NULL. In theory and in practice, the list is friendlier to the CPU pipeline. This commit changed the wqueue implementation from dq to list.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-05-07 02:02:10 +08:00
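
A self-contained illustration of the branch-count argument (generic code, not the actual nuttx/queue.h or nuttx/list.h implementations): a NULL-terminated dq must special-case the empty queue, while a circular list built around a sentinel node inserts unconditionally.

struct node
{
  struct node *next;
  struct node *prev;
};

struct dq
{
  struct node *head;
  struct node *tail;
};

static void dq_addlast_sketch(struct node *n, struct dq *q)
{
  n->next = NULL;
  n->prev = q->tail;
  if (q->tail == NULL)              /* extra branch: the queue may be empty */
    {
      q->head = n;
    }
  else
    {
      q->tail->next = n;
    }

  q->tail = n;
}

static void list_add_tail_sketch(struct node *sentinel, struct node *n)
{
  /* The sentinel is always a valid node, so no emptiness check is needed. */

  n->prev = sentinel->prev;
  n->next = sentinel;
  sentinel->prev->next = n;
  sentinel->prev = n;
}
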
ouyangxiangzhen
bcad1d038a sched/wqueue: Rename periodic workqueue API.
This commit renamed the periodic workqueue API.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-03-08 13:52:37 -03:00
ouyangxiangzhen
ed04000626 sched/wqueue: support for periodic workqueue.
This commit added support for a periodic workqueue.

Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
2025-03-08 13:52:37 -03:00
hujun5
ac26f9c690 Revert "sched/wqueue: some minor improve to reduce sched_lock range"
This reverts commit 2779989add.
2025-02-13 20:48:15 +08:00
Xiang Xiao
45422e2ad9 Revert "No need to call sched_lock explicitly after call spin_lock_irqsave, since it will be called in func spin_lock_irqsave."
This reverts commit 8f3a2a6f76.
2025-02-13 14:15:43 +08:00
wangzhi16
8f3a2a6f76 No need to call sched_lock explicitly after call spin_lock_irqsave, since it will be called in func spin_lock_irqsave.
Signed-off-by: wangzhi16 <wangzhi16@xiaomi.com>
2025-01-24 17:27:50 +08:00
chao an
2779989add sched/wqueue: some minor improve to reduce sched_lock range
Signed-off-by: chao an <anchao.archer@bytedance.com>
2025-01-17 23:33:23 +08:00
hujun5
b8660fc39c sim: fix regression from https://github.com/apache/nuttx/pull/14623
reason:
work_timer_expiry may be called in thread context

Signed-off-by: hujun5 <hujun5@xiaomi.com>
2025-01-17 18:59:29 +08:00
hujun5
61f0c97193 wqueue: wqueue remove csection
reason:
We decouple semcount from business logic
by using an independent counting variable,
which allows us to remove critical sections in many cases.

Signed-off-by: hujun5 <hujun5@xiaomi.com>
2025-01-15 17:26:07 +08:00
Alin Jerpelea
eb9030c891 sched: migrate to SPDX identifier
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.

Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
2024-09-12 01:10:14 +08:00
ligd
ce2ad51b3a wqueue: expose wqueue API for customization
Signed-off-by: ligd <liguiding1@xiaomi.com>
2024-08-30 01:52:22 +08:00
dulibo1
5e8ce0b9f0 work queue: fix work_cancel_sync cannot cancel a work that calls work_queue in its handler
1. work_queue a work whose handler calls work_queue again
2. work_cancel_sync the work
3. the work is just being called in the work thread
4. work_queue the work again, and post the work_cancel_sync wakeup
5. the work is not canceled

Signed-off-by: dulibo1 <dulibo1@xiaomi.com>
2024-08-30 01:52:22 +08:00
chao an
e456c88c09 Revert "sched: replace some global variables to macro"
The sched implementation does not depend on the macro abstraction, so revert the commits below:

This reverts commit 4e62d0005a
This reverts commit 0f0c370520
This reverts commit ad0efd04ee

Signed-off-by: chao an <anchao@lixiang.com>
2024-06-06 22:00:25 +08:00
chao an
ad0efd04ee sched/wqueue: replace some global variables to macro
Replacing them with macros will help extend the scheduling implementation.

Signed-off-by: chao an <anchao@lixiang.com>
2024-03-21 11:22:41 +09:00
Zhe Weng
c9a38f42f7 sched/wqueue: Do as much work as possible in work_thread
Decouple the semcount and the work queue length.

Previous Problem:

If a work is queued and cancelled by a high-priority thread (or queued by
a timer and cancelled by another high-priority thread) before work_thread
runs, the queue operation marks work_thread as ready to run, but the
cancel operation decrements the semcount back to -1 and leaves wqueue->q
empty. work_thread then still runs, finds an empty queue, and waits on the
semaphore again, so the semcount becomes -2 (decremented by 1 once more).

This can happen many times, so the semcount can reach a very small value.
Test case to produce the incorrect semcount:

high_priority_task()
{
  for (int i = 0; i < 10000; i++)
    {
      work_queue(LPWORK, &work, worker, NULL, 0);
      work_cancel(LPWORK, &work);
      usleep(1);
    }

  /* Now the g_lpwork.sem.semcount is a value near -10000 */
}

With an incorrect semcount, any queue operation performed while work_thread
is busy only increases the semcount and pushes work into the queue, but
cannot trigger work_thread (the semcount is negative and work_thread is not
waiting at that moment). More and more works are then left in the queue
while work_thread waits on the semaphore and never calls them.

Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
2023-03-21 17:50:40 +02:00
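
A simplified sketch of the worker loop that results from this change (struct and field names are assumptions, not the actual kwork_thread.c): drain everything that is currently queued before waiting again, so a stale semcount can no longer strand work in the queue.

#include <nuttx/irq.h>
#include <nuttx/queue.h>
#include <nuttx/semaphore.h>
#include <nuttx/wqueue.h>

struct my_wqueue_s                  /* hypothetical stand-in for the real queue */
{
  struct dq_queue_s q;              /* pending work items */
  sem_t sem;                        /* signalled by queue operations */
};

static void work_thread_loop(FAR struct my_wqueue_s *wq)
{
  for (; ; )
    {
      irqstate_t flags = enter_critical_section();

      /* Run every work item that is currently queued. */

      while (!dq_empty(&wq->q))
        {
          FAR struct work_s *work =
            (FAR struct work_s *)dq_remfirst(&wq->q);
          worker_t worker = work->worker;
          FAR void *arg   = work->arg;

          work->worker = NULL;      /* mark the entry as dequeued */

          leave_critical_section(flags);
          worker(arg);              /* run the handler unlocked */
          flags = enter_critical_section();
        }

      leave_critical_section(flags);

      /* Only wait after the queue has been fully drained; extra semaphore
       * counts simply cause one more (harmless) pass over an empty queue.
       */

      nxsem_wait_uninterruptible(&wq->sem);
    }
}
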
zhangyuan21
63039b80e1 sched/wqueue: do work_cancel when worker is not null
Signed-off-by: zhangyuan21 <zhangyuan21@xiaomi.com>
2023-01-16 13:37:00 +08:00
Xiang Xiao
40ef5bc6db libc: Move queue.h from include to include/nuttx
to avoid the conflict with libuv's queue.h

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2022-09-26 08:04:58 +02:00
ligd
bc17563a8f wqueue: fix race-condition on work_queue
CPU0                     CPU1

work_queue(a)            work_queue(a)
                         -> work_cancel(a)
-> work_cancel(a)
-> enter_critical()
-> sq_addlast(a)
-> leave_critical()
                         -> enter_critical()
                         -> sq_addlast(a) // double add, wrong
                         -> leave_critical()

This can also happen between multiple threads on one CPU.

Fix:

work_cancel() should be performed inside the critical section.

Signed-off-by: ligd <liguiding1@xiaomi.com>
2022-09-09 21:52:21 +02:00
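
A simplified sketch of the idea behind the fix (not the actual patch): the "already queued?" check, the removal, and the re-queue must sit inside one critical section, so two concurrent work_queue(a) calls cannot both add the same entry. The g_wq queue and the use of the worker field as a "queued" marker are assumptions.

#include <nuttx/irq.h>
#include <nuttx/queue.h>
#include <nuttx/wqueue.h>

static dq_queue_t g_wq;             /* hypothetical work queue */

static int work_queue_sketch(FAR struct work_s *work, worker_t worker,
                             FAR void *arg)
{
  irqstate_t flags = enter_critical_section();

  if (work->worker != NULL)         /* already queued: remove it first */
    {
      dq_rem((FAR dq_entry_t *)work, &g_wq);
    }

  work->worker = worker;            /* then add it exactly once */
  work->arg    = arg;
  dq_addlast((FAR dq_entry_t *)work, &g_wq);

  leave_critical_section(flags);
  return 0;
}
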
ligd
4a87578bdb wqueue: change single queue to double queue to improve speed
Signed-off-by: ligd <liguiding1@xiaomi.com>
2022-09-08 15:03:54 +02:00
ligd
e5420c8eee wqueue: fix NO leave_critical_section() when only CONFIG_SCHED_HPWORK
Signed-off-by: ligd <liguiding1@xiaomi.com>
2021-12-23 02:27:34 -06:00
Jiuzhu Dong
855c78bb9d work_queue: schedule the work queue using the timer mechanism
Signed-off-by: Jiuzhu Dong <dongjiuzhu1@xiaomi.com>
2021-07-27 21:01:38 -07:00
Nathan Hartman
af1fd49fb0 wqueue: Fix typos
include/nuttx/wqueue.h:
libs/libc/wqueue/work_cancel.c:
libs/libc/wqueue/work_queue.c:
sched/wqueue/kwork_cancel.c:
sched/wqueue/kwork_queue.c:

    * Fix spelling, grammar, and typos.
    * Improve wording in a few areas.
    * These changes affect comments only. No functional changes.
2021-07-16 19:54:08 -03:00
Xiang Xiao
517974787f Rename clock_systime[r|spec] to clock_systime_[ticks|timespec]
Follow up on the new naming convention:
https://cwiki.apache.org/confluence/display/NUTTX/Naming+of+OS+Internal+Functions
2020-05-10 14:35:50 -06:00
Nathan Hartman
a5e643b0cd Fix typos in comments and documentation. 2020-03-16 20:01:11 -06:00
Gregory Nutt
45b8e3ce7f sched/wqueue: Update to Apache 2.0 headers.
Also ran the files through nxstyle to assure coding standard compliance.
2020-03-11 18:39:53 -03:00
Gregory Nutt
8fdbb1e0a4 Eliminate use of the non-standard type systime_t and replace it with the equivalent, standard type clock_t
Squashed commit of the following:

    sched:  Rename all use of systime_t to clock_t.
    syscall:  Rename all use of systime_t to clock_t.
    net:  Rename all use of systime_t to clock_t.
    libs:  Rename all use of systime_t to clock_t.
    fs:  Rename all use of systime_t to clock_t.
    drivers:  Rename all use of systime_t to clock_t.
    arch:  Rename all use of systime_t to clock_t.
    include:  Remove definition of systime_t; rename all use of systime_t to clock_t.
2018-06-16 12:16:13 -06:00
Gregory Nutt
7cf88d7dbd Make sure that labeling is used consistently in all function headers. 2018-02-01 10:00:02 -06:00
Gregory Nutt
4993b0cb66 Work queue: In a recent change for a problem noted by Pascal Speck, it was noted (again by Pascal Speck) that the cancellation of existing work and replacement with new work must be atomic. Thanks, Pascal. 2017-08-31 07:43:47 -06:00
Gregory Nutt
92f44c5607 Networking: Move net/inet/net_monitor.c to net/tcp/tcp_monitor.c in preparation for design change to fix monitoring of duplicated sockets. 2017-08-29 08:40:13 -06:00
Gregory Nutt
bbf4d5048a work_queue() must cancel existing work prior to queuing new work, otherwise the work queue can become corrupted. Problem noted by Pascal Speck. 2017-08-28 07:46:48 -06:00
Heesub Shin
8e94d8e7cc A signal sent from work_signal() may interrupt the low-priority worker thread that is already running. For example, a worker thread that is waiting for a semaphore could be woken up by the signal and, as a result, break any synchronization assumption. It also does not make sense to send a signal if the thread is already running and busy. This commit fixes it. 2016-11-06 08:00:12 -06:00
Gregory Nutt
6e3107650d nuttx/sched: Replace irqsave() with enter_critical_section(); replace irqrestore() with leave_critical_section() 2016-02-14 08:17:46 -06:00
Manuel Stühn
2bacc40350 Fix mismatched prototype error in work_queue() 2016-01-24 12:48:24 -06:00
Gregory Nutt
cb9e27c3b0 Standardize naming used for public data and function groupings 2015-10-02 16:30:35 -06:00
Gregory Nutt
4cd57e1e4e Work queues: Logic that sets the queued indication and the logic that does the actual queuing must be atomic 2015-09-30 11:04:29 -06:00
Gregory Nutt
4a4b3ac537 Add support for multiple low-priority worker threads 2014-10-10 16:24:50 -06:00
Gregory Nutt
cf59a195ba User-mode work queue logic should not disable interrupts 2014-10-10 14:52:04 -06:00
Gregory Nutt
2015fd76e2 Fix some conditional logic in last work queue repartitioning change 2014-10-10 08:47:41 -06:00
Gregory Nutt
1afc9773ac Decoupling work queue data structures. This is part of the preparation to support multiple low-priority worker threads 2014-10-10 08:35:58 -06:00