Merge pull request #15587 from esphome/bump-2026.4.0b1

2026.4.0b1
This commit is contained in:
Jesse Hills
2026-04-09 13:40:37 +12:00
committed by GitHub
1338 changed files with 42124 additions and 17160 deletions
+194 -2
@@ -124,6 +124,28 @@ This document provides essential context for AI models interacting with this pro
* **Indentation:** Use spaces (two per indentation level), not tabs
* **Type aliases:** Prefer `using type_t = int;` over `typedef int type_t;`
* **Line length:** Wrap lines at no more than 120 characters
* **Constructor parameters vs setters:** Component properties that are both **required** and **invariant**
(never change after construction) should be constructor parameters rather than set via setter methods.
This makes the dependency explicit and prevents use of the object in an incompletely-initialized state.
In code generation, when calling `cg.new_Pvariable()` or the relevant helper function to create the component, pass these as arguments.
```cpp
// Good - required invariant dependency as constructor parameter
class SourceTextSensor : public text_sensor::TextSensor, public Component {
public:
explicit SourceTextSensor(text::Text *source) : source_(source) {}
protected:
text::Text *source_;
};
```
```cpp
// Bad - required invariant dependency as setter
class SourceTextSensor : public text_sensor::TextSensor, public Component {
public:
void set_source(text::Text *source) { this->source_ = source; }
protected:
text::Text *source_{nullptr};
};
```
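On the code-generation side, the constructor-parameter pattern means resolving the dependency first and passing it into `cg.new_Pvariable()`. A minimal sketch (the `CONF_SOURCE_ID` key and schema lookup are illustrative, not from a specific component):

```python
import esphome.codegen as cg
from esphome.const import CONF_ID

CONF_SOURCE_ID = "source_id"  # illustrative config key

async def to_code(config):
    # Resolve the required dependency before constructing the component
    source = await cg.get_variable(config[CONF_SOURCE_ID])
    # Pass it as a constructor argument instead of calling a setter afterwards
    var = cg.new_Pvariable(config[CONF_ID], source)
    await cg.register_component(var, config)
```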
* **Component Structure:**
* **Standard Files:**
@@ -217,6 +239,123 @@ This document provides essential context for AI models interacting with this pro
var = await switch.new_switch(config)
```
* **Automations (Triggers, Actions, Conditions):**
Automations have three building blocks: **Triggers** (fire when something happens), **Actions** (do something), and **Conditions** (check if something is true).
* **Triggers -- Callback method (preferred):**
Use `build_callback_automation()` for simple triggers. This eliminates the need for a C++ Trigger class by using a lightweight pointer-sized forwarder struct registered directly as a callback. No `CONF_TRIGGER_ID` in the schema.
**Python:**
```python
from esphome import automation
CONFIG_SCHEMA = cv.Schema({
cv.GenerateID(): cv.declare_id(MyComponent),
cv.Optional(CONF_ON_STATE): automation.validate_automation({}),
}).extend(cv.COMPONENT_SCHEMA)
async def to_code(config):
var = cg.new_Pvariable(config[CONF_ID])
await cg.register_component(var, config)
for conf in config.get(CONF_ON_STATE, []):
await automation.build_callback_automation(
var, "add_on_state_callback", [(bool, "x")], conf
)
```
`build_callback_automation` arguments: `parent`, `callback_method` (C++ method name), `args` (template args as `[(type, name)]` tuples), `config`, and optional `forwarder` (defaults to `TriggerForwarder<Ts...>`).
For boolean filtering (e.g. `on_press`/`on_release`), use built-in forwarders with `args=[]`:
```python
for conf_key, forwarder in (
(CONF_ON_PRESS, automation.TriggerOnTrueForwarder),
(CONF_ON_RELEASE, automation.TriggerOnFalseForwarder),
):
for conf in config.get(conf_key, []):
await automation.build_callback_automation(
var, "add_on_state_callback", [], conf, forwarder=forwarder
)
```
**C++ -- no trigger class needed.** The callback registration method must be templatized to accept both `std::function` and lightweight forwarder structs (which avoid heap allocation):
```cpp
class MyComponent : public Component {
public:
// Must be a template -- accepts both std::function and pointer-sized forwarder structs
template<typename F> void add_on_state_callback(F &&callback) {
this->state_callback_.add(std::forward<F>(callback));
}
protected:
// Use CallbackManager when callbacks are always registered (e.g. core components)
CallbackManager<void(bool)> state_callback_;
// Use LazyCallbackManager when callbacks are often not registered -- saves 8 bytes
// (nullptr vs empty std::vector) per instance when no callbacks are added
// LazyCallbackManager<void(bool)> state_callback_;
};
```
* **Triggers -- Trigger class method:**
Use `build_automation()` with a `Trigger<Ts...>` subclass only when the forwarder needs **mutable state beyond a single `Automation*` pointer** (e.g. edge detection tracking previous state, timing logic).
**Python:**
```python
TurnOnTrigger = my_ns.class_("TurnOnTrigger", automation.Trigger.template())
CONFIG_SCHEMA = cv.Schema({
cv.Optional(CONF_ON_TURN_ON): automation.validate_automation(
{cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(TurnOnTrigger)}
),
})
async def to_code(config):
for conf in config.get(CONF_ON_TURN_ON, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
```
**C++:**
```cpp
class TurnOnTrigger : public Trigger<> {
public:
explicit TurnOnTrigger(MyComponent *parent) : last_on_{false} {
parent->add_on_state_callback([this](bool state) {
if (state && !this->last_on_)
this->trigger();
this->last_on_ = state;
});
}
protected:
bool last_on_;
};
```
* **Actions:**
```cpp
template<typename... Ts> class MyAction : public Action<Ts...> {
public:
explicit MyAction(MyComponent *parent) : parent_(parent) {}
void play(const Ts &...) override { this->parent_->do_something(); }
protected:
MyComponent *parent_;
};
```
Register with `@automation.register_action("my_component.do_something", MyAction, schema, synchronous=True)`. Use `synchronous=True` for actions that run to completion inside `play()` without deferring. Use `synchronous=False` if the action may suspend/defer execution (e.g. `delay`, `wait_until`, `script.wait`) or store trigger arguments for later use.
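The registration side can be sketched as follows (the schema and the `MyComponent`/`MyAction` names carry over from the C++ example above; the exact schema is illustrative):

```python
from esphome import automation
import esphome.codegen as cg
import esphome.config_validation as cv
from esphome.const import CONF_ID

# Illustrative schema: resolve the parent component by ID
DO_SOMETHING_SCHEMA = automation.maybe_simple_id(
    {cv.GenerateID(): cv.use_id(MyComponent)}
)

@automation.register_action(
    "my_component.do_something", MyAction, DO_SOMETHING_SCHEMA, synchronous=True
)
async def do_something_to_code(config, action_id, template_arg, args):
    # Look up the parent and pass it to the Action's constructor
    parent = await cg.get_variable(config[CONF_ID])
    return cg.new_Pvariable(action_id, template_arg, parent)
```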
* **Conditions:**
```cpp
template<typename... Ts> class MyCondition : public Condition<Ts...> {
public:
explicit MyCondition(MyComponent *parent) : parent_(parent) {}
bool check(const Ts &...) override { return this->parent_->is_active(); }
protected:
MyComponent *parent_;
};
```
Register with `@automation.register_condition("my_component.is_active", MyCondition, schema)`.
* **Configuration Validation:**
* **Common Validators:** `cv.int_`, `cv.float_`, `cv.string`, `cv.boolean`, `cv.int_range(min=0, max=100)`, `cv.positive_int`, `cv.percentage`.
* **Complex Validation:** `cv.All(cv.string, cv.Length(min=1, max=50))`, `cv.Any(cv.int_, cv.string)`.
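Composed into a schema, these validators look like the following sketch (option names are illustrative):

```python
import esphome.config_validation as cv

CONFIG_SCHEMA = cv.Schema({
    # cv.All chains validators left to right; every validator must pass
    cv.Required("name"): cv.All(cv.string, cv.Length(min=1, max=50)),
    # cv.Any accepts the first validator that succeeds
    cv.Optional("threshold"): cv.Any(cv.int_, cv.string),
    cv.Optional("level", default=50): cv.int_range(min=0, max=100),
    cv.Optional("brightness"): cv.percentage,
})
```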
@@ -252,10 +391,39 @@ This document provides essential context for AI models interacting with this pro
* **Component Tests:** YAML-based compilation tests are located in `tests/`. The structure is as follows:
```
tests/
├── test_build_components/       # Base test configurations
│   └── common/                  # Shared bus packages (uart, i2c, spi, etc.)
│       ├── uart/                # UART at default baud rate
│       ├── uart_115200/         # UART at 115200 baud
│       ├── i2c/                 # I2C bus
│       └── spi/                 # SPI bus
└── components/[component]/      # Component-specific tests
    ├── common.yaml              # Component-only config (no bus definitions)
    ├── test.esp32-idf.yaml
    ├── test.esp8266-ard.yaml
    └── test.rp2040-ard.yaml
```
Run them using `script/test_build_components`. Use `-c <component>` to test specific components and `-t <target>` for specific platforms.
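For example, using the flags described above (the component and target names are placeholders):

```shell
# Build the tests for one component against one platform
script/test_build_components -c my_component -t esp32-idf
```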
* **Test Grouping with Packages:** Components that use shared bus packages can be grouped together in CI to reduce build count. **Never define buses (uart, i2c, spi, modbus) directly in test YAML files** — always use packages from `test_build_components/common/`:
```yaml
# test.esp32-idf.yaml — use packages for buses
packages:
uart: !include ../../test_build_components/common/uart_115200/esp32-idf.yaml
<<: !include common.yaml
```
```yaml
# common.yaml — component config only, NO bus definitions
my_component:
id: my_instance
sensor:
- platform: my_component
name: My Sensor
```
Components that define buses directly are flagged as "NEEDS MIGRATION" and cannot be grouped, increasing CI build time.
* **Testing All Components Together:** To verify that all components can be tested together without ID conflicts or configuration issues, use:
```bash
./script/test_component_grouping.py -e config --all
@@ -395,6 +563,30 @@ This document provides essential context for AI models interacting with this pro
Note: Avoiding heap allocation after `setup()` is always required regardless of component type. The prioritization above is about the effort spent on container optimization (e.g., migrating from `std::vector` to `StaticVector`).
**Callback Managers:**
ESPHome provides two callback manager types in `esphome/core/helpers.h` for the observer pattern. Both support `std::function`, lambdas, and lightweight forwarder structs via their templatized `add()` method.
| Type | Idle overhead (32-bit) | When to use |
|------|----------------------|-------------|
| `CallbackManager<void(Ts...)>` | 12 bytes (empty `std::vector`) | Callbacks are always or almost always registered |
| `LazyCallbackManager<void(Ts...)>` | 4 bytes (`nullptr`) | Callbacks are often not registered (common case) |
`LazyCallbackManager` is a drop-in replacement for `CallbackManager` that defers allocation until the first callback is added. Prefer it for entity-level callbacks where most instances have no subscribers.
**Important:** Registration methods that add to a callback manager **must always be templatized** to accept both `std::function` and pointer-sized forwarder structs (used by `build_callback_automation`). Never use `std::function` in the method signature:
```cpp
// Bad -- forces heap allocation for forwarder structs
void add_on_state_callback(std::function<void(bool)> &&callback) {
this->state_callback_.add(std::move(callback));
}
// Good -- accepts any callable without forcing std::function wrapping
template<typename F> void add_on_state_callback(F &&callback) {
this->state_callback_.add(std::forward<F>(callback));
}
```
* **State Management:** Use `CORE.data` for component state that needs to persist during configuration generation. Avoid module-level mutable globals.
**Bad Pattern (Module-Level Globals):**
+1 -1
@@ -1 +1 @@
8e48e836c6fc196d3da000d46eb09db243b87fe33518a74e49c8e009d756074a
f31f13994768b5b07e29624406c9b053bf4bb26e1623ac2bc1e9d4a9477502d6
+1 -1
@@ -22,7 +22,7 @@ runs:
python-version: ${{ inputs.python-version }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
# yamllint disable-line rule:line-length
+5 -4
@@ -235,19 +235,20 @@ async function detectDeprecatedComponents(github, context, changedFiles) {
}
}
// Get PR head to fetch files from the PR branch
const prNumber = context.payload.pull_request.number;
// Get base branch ref to check if deprecation already exists for the component
// This prevents flagging a PR that simply adds deprecation
const baseRef = context.payload.pull_request.base.ref;
// Check each component's __init__.py for DEPRECATED_COMPONENT constant
for (const component of components) {
const initFile = `esphome/components/${component}/__init__.py`;
try {
// Fetch file content from PR head using GitHub API
// Fetch file content from base branch using GitHub API
const { data: fileData } = await github.rest.repos.getContent({
owner,
repo,
path: initFile,
ref: `refs/pull/${prNumber}/head`
ref: baseRef
});
// Decode base64 content
+1 -1
@@ -27,7 +27,7 @@ jobs:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2
uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v2
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
+2 -2
@@ -40,7 +40,7 @@ jobs:
echo "You have modified clang-tidy configuration but have not updated the hash." | tee -a $GITHUB_STEP_SUMMARY
echo "Please run 'script/clang_tidy_hash.py --update' and commit the changes." | tee -a $GITHUB_STEP_SUMMARY
- if: failure()
- if: failure() && github.event.pull_request.head.repo.full_name == github.repository
name: Request changes
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
@@ -53,7 +53,7 @@ jobs:
body: 'You have modified clang-tidy configuration but have not updated the hash.\nPlease run `script/clang_tidy_hash.py --update` and commit the changes.'
})
- if: success()
- if: success() && github.event.pull_request.head.repo.full_name == github.repository
name: Dismiss review
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
+72 -20
@@ -47,7 +47,7 @@ jobs:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
# yamllint disable-line rule:line-length
@@ -106,6 +106,7 @@ jobs:
script/build_codeowners.py --check
script/build_language_schema.py --check
script/generate-esp32-boards.py --check
script/generate-rp2040-boards.py --check
pytest:
name: Run pytest
@@ -153,12 +154,12 @@ jobs:
. venv/bin/activate
pytest -vv --cov-report=xml --tb=native -n auto tests --ignore=tests/integration/
- name: Upload coverage to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Save Python virtual environment cache
if: github.ref == 'refs/heads/dev'
uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: ${{ runner.os }}-${{ steps.restore-python.outputs.python-version }}-venv-${{ needs.common.outputs.cache-key }}
@@ -170,6 +171,8 @@ jobs:
- common
outputs:
integration-tests: ${{ steps.determine.outputs.integration-tests }}
integration-tests-run-all: ${{ steps.determine.outputs.integration-tests-run-all }}
integration-test-files: ${{ steps.determine.outputs.integration-test-files }}
clang-tidy: ${{ steps.determine.outputs.clang-tidy }}
clang-tidy-mode: ${{ steps.determine.outputs.clang-tidy-mode }}
python-linters: ${{ steps.determine.outputs.python-linters }}
@@ -182,6 +185,7 @@ jobs:
cpp-unit-tests-run-all: ${{ steps.determine.outputs.cpp-unit-tests-run-all }}
cpp-unit-tests-components: ${{ steps.determine.outputs.cpp-unit-tests-components }}
component-test-batches: ${{ steps.determine.outputs.component-test-batches }}
benchmarks: ${{ steps.determine.outputs.benchmarks }}
steps:
- name: Check out code from GitHub
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -194,7 +198,7 @@ jobs:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Restore components graph cache
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: .temp/components_graph.json
key: components-graph-${{ hashFiles('esphome/components/**/*.py') }}
@@ -210,6 +214,8 @@ jobs:
# Extract individual fields
echo "integration-tests=$(echo "$output" | jq -r '.integration_tests')" >> $GITHUB_OUTPUT
echo "integration-tests-run-all=$(echo "$output" | jq -r '.integration_tests_run_all')" >> $GITHUB_OUTPUT
echo "integration-test-files=$(echo "$output" | jq -c '.integration_test_files')" >> $GITHUB_OUTPUT
echo "clang-tidy=$(echo "$output" | jq -r '.clang_tidy')" >> $GITHUB_OUTPUT
echo "clang-tidy-mode=$(echo "$output" | jq -r '.clang_tidy_mode')" >> $GITHUB_OUTPUT
echo "python-linters=$(echo "$output" | jq -r '.python_linters')" >> $GITHUB_OUTPUT
@@ -222,9 +228,10 @@ jobs:
echo "cpp-unit-tests-run-all=$(echo "$output" | jq -r '.cpp_unit_tests_run_all')" >> $GITHUB_OUTPUT
echo "cpp-unit-tests-components=$(echo "$output" | jq -c '.cpp_unit_tests_components')" >> $GITHUB_OUTPUT
echo "component-test-batches=$(echo "$output" | jq -c '.component_test_batches')" >> $GITHUB_OUTPUT
echo "benchmarks=$(echo "$output" | jq -r '.benchmarks')" >> $GITHUB_OUTPUT
- name: Save components graph cache
if: github.ref == 'refs/heads/dev'
uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: .temp/components_graph.json
key: components-graph-${{ hashFiles('esphome/components/**/*.py') }}
@@ -246,7 +253,7 @@ jobs:
python-version: "3.13"
- name: Restore Python virtual environment
id: cache-venv
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: venv
key: ${{ runner.os }}-${{ steps.python.outputs.python-version }}-venv-${{ needs.common.outputs.cache-key }}
@@ -261,9 +268,20 @@ jobs:
- name: Register matcher
run: echo "::add-matcher::.github/workflows/matchers/pytest.json"
- name: Run integration tests
env:
INTEGRATION_TEST_FILES: ${{ needs.determine-jobs.outputs.integration-test-files }}
INTEGRATION_TESTS_RUN_ALL: ${{ needs.determine-jobs.outputs.integration-tests-run-all }}
run: |
. venv/bin/activate
pytest -vv --no-cov --tb=native -n auto tests/integration/
if [[ "$INTEGRATION_TESTS_RUN_ALL" == "true" ]]; then
echo "Running all integration tests"
pytest -vv --no-cov --tb=native -n auto tests/integration/
else
# Parse JSON array into bash array to avoid shell expansion issues
mapfile -t test_files < <(echo "$INTEGRATION_TEST_FILES" | jq -r '.[]')
echo "Running ${#test_files[@]} specific integration tests"
pytest -vv --no-cov --tb=native -n auto "${test_files[@]}"
fi
cpp-unit-tests:
name: Run C++ unit tests
@@ -292,6 +310,40 @@ jobs:
script/cpp_unit_test.py $ARGS
fi
benchmarks:
name: Run CodSpeed benchmarks
runs-on: ubuntu-24.04
needs:
- common
- determine-jobs
if: >-
(github.event_name == 'push' && github.ref_name == 'dev') ||
(github.event_name == 'pull_request' && needs.determine-jobs.outputs.benchmarks == 'true')
steps:
- name: Check out code from GitHub
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Restore Python
uses: ./.github/actions/restore-python
with:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Build benchmarks
id: build
run: |
. venv/bin/activate
export BENCHMARK_LIB_CONFIG=$(python script/setup_codspeed_lib.py)
# --build-only prints BUILD_BINARY=<path> to stdout
BINARY=$(script/cpp_benchmark.py --all --build-only | grep '^BUILD_BINARY=' | tail -1 | cut -d= -f2-)
echo "binary=$BINARY" >> $GITHUB_OUTPUT
- name: Run CodSpeed benchmarks
uses: CodSpeedHQ/action@db35df748deb45fdef0960669f57d627c1956c30 # v4
with:
run: ${{ steps.build.outputs.binary }}
mode: simulation
clang-tidy-single:
name: ${{ matrix.name }}
runs-on: ubuntu-24.04
@@ -335,14 +387,14 @@ jobs:
- name: Cache platformio
if: github.ref == 'refs/heads/dev'
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-${{ matrix.pio_cache_key }}-${{ hashFiles('platformio.ini') }}
- name: Cache platformio
if: github.ref != 'refs/heads/dev'
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-${{ matrix.pio_cache_key }}-${{ hashFiles('platformio.ini') }}
@@ -414,14 +466,14 @@ jobs:
- name: Cache platformio
if: github.ref == 'refs/heads/dev'
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
- name: Cache platformio
if: github.ref != 'refs/heads/dev'
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
@@ -503,14 +555,14 @@ jobs:
- name: Cache platformio
if: github.ref == 'refs/heads/dev'
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
- name: Cache platformio
if: github.ref != 'refs/heads/dev'
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-tidyesp32-${{ hashFiles('platformio.ini') }}
@@ -671,7 +723,7 @@ jobs:
cache-key: ${{ needs.common.outputs.cache-key }}
- uses: esphome/pre-commit-action@43cd1109c09c544d97196f7730ee5b2e0cc6d81e # v3.0.1 fork with pinned actions/cache
env:
SKIP: pylint,clang-tidy-hash
SKIP: pylint,clang-tidy-hash,ci-custom
- uses: pre-commit-ci/lite-action@5d6cc0eb514c891a40562a58a8e71576c5c7fb43 # v1.1.0
if: always()
@@ -765,7 +817,7 @@ jobs:
- name: Restore cached memory analysis
id: cache-memory-analysis
if: steps.check-script.outputs.skip != 'true' && steps.check-tests.outputs.skip != 'true'
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: memory-analysis-target.json
key: ${{ steps.cache-key.outputs.cache-key }}
@@ -789,7 +841,7 @@ jobs:
- name: Cache platformio
if: steps.check-script.outputs.skip != 'true' && steps.check-tests.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true'
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-memory-${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}-${{ hashFiles('platformio.ini') }}
@@ -830,7 +882,7 @@ jobs:
- name: Save memory analysis to cache
if: steps.check-script.outputs.skip != 'true' && steps.check-tests.outputs.skip != 'true' && steps.cache-memory-analysis.outputs.cache-hit != 'true' && steps.build.outcome == 'success'
uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: memory-analysis-target.json
key: ${{ steps.cache-key.outputs.cache-key }}
@@ -877,7 +929,7 @@ jobs:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Cache platformio
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: ~/.platformio
key: platformio-memory-${{ fromJSON(needs.determine-jobs.outputs.memory_impact).platform }}-${{ hashFiles('platformio.ini') }}
@@ -945,13 +997,13 @@ jobs:
python-version: ${{ env.DEFAULT_PYTHON }}
cache-key: ${{ needs.common.outputs.cache-key }}
- name: Download target analysis JSON
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: memory-analysis-target
path: ./memory-analysis
continue-on-error: true
- name: Download PR analysis JSON
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
name: memory-analysis-pr
path: ./memory-analysis
@@ -10,6 +10,9 @@ name: Codeowner Approved Label
on:
pull_request_target:
types: [opened, synchronize, reopened, ready_for_review]
branches-ignore:
- release
- beta
permissions:
issues: write
@@ -13,6 +13,9 @@ on:
# Needs to be pull_request_target to get write permissions
pull_request_target:
types: [opened, reopened, synchronize, ready_for_review]
branches-ignore:
- release
- beta
permissions:
pull-requests: write
+2 -2
@@ -58,7 +58,7 @@ jobs:
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/init@c10b8064de6f491fea524254123dbe5e09572f13 # v4.35.1
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
@@ -86,6 +86,6 @@ jobs:
exit 1
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/analyze@c10b8064de6f491fea524254123dbe5e09572f13 # v4.35.1
with:
category: "/language:${{matrix.language}}"
+15
@@ -3,6 +3,9 @@ name: PR Title Check
on:
pull_request:
types: [opened, edited, synchronize, reopened]
branches-ignore:
- release
- beta
permissions:
contents: read
@@ -65,6 +68,18 @@ jobs:
return;
}
// Check for angle brackets not wrapped in backticks.
// Astro docs MDX treats bare < as JSX component opening tags.
const stripped = title.replace(/`[^`]*`/g, '');
if (/[<>]/.test(stripped)) {
core.setFailed(
'PR title contains `<` or `>` not wrapped in backticks.\n' +
'Astro docs MDX interprets bare `<` as JSX components.\n' +
'Please wrap angle brackets with backticks, e.g.: [component] Add `<feature>` support'
);
return;
}
// Check title starts with [tag] prefix
const bracketPattern = /^\[\w+\]/;
if (!bracketPattern.test(title)) {
+9 -9
@@ -70,7 +70,7 @@ jobs:
pip3 install build
python3 -m build
- name: Publish
uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
uses: pypa/gh-action-pypi-publish@cef221092ed1bacb1cc03d23a2d87d1d172e277b # v1.14.0
with:
skip-existing: true
@@ -102,12 +102,12 @@ jobs:
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Log in to docker hub
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Log in to the GitHub container registry
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -171,7 +171,7 @@ jobs:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Download digests
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
pattern: digests-*
path: /tmp/digests
@@ -182,13 +182,13 @@ jobs:
- name: Log in to docker hub
if: matrix.registry == 'dockerhub'
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Log in to the GitHub container registry
if: matrix.registry == 'ghcr'
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -221,7 +221,7 @@ jobs:
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3.0.0
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
@@ -256,7 +256,7 @@ jobs:
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3.0.0
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
@@ -287,7 +287,7 @@ jobs:
steps:
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
uses: actions/create-github-app-token@f8d387b68d61c58ab83c6c016672934102569859 # v3.0.0
with:
app-id: ${{ secrets.ESPHOME_GITHUB_APP_ID }}
private-key: ${{ secrets.ESPHOME_GITHUB_APP_PRIVATE_KEY }}
+1 -1
@@ -24,7 +24,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
with:
python-version: 3.13
python-version: "3.14"
- name: Install Home Assistant
run: |
+5 -1
@@ -11,7 +11,7 @@ ci:
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.15.5
rev: v0.15.9
hooks:
# Run the linter.
- id: ruff
@@ -65,3 +65,7 @@ repos:
files: ^(\.clang-tidy|platformio\.ini|requirements_dev\.txt)$
pass_filenames: false
additional_dependencies: []
- id: ci-custom
name: ci-custom
entry: python3 script/run-in-env.py script/ci-custom.py
language: system
+8 -2
@@ -92,6 +92,7 @@ esphome/components/bmp3xx_i2c/* @latonita
esphome/components/bmp3xx_spi/* @latonita
esphome/components/bmp581_base/* @danielkent-net @kahrendt
esphome/components/bmp581_i2c/* @danielkent-net @kahrendt
esphome/components/bmp581_spi/* @danielkent-net @kahrendt
esphome/components/bp1658cj/* @Cossid
esphome/components/bp5758d/* @Cossid
esphome/components/bthome_mithermometer/* @nagyrobi
@@ -141,12 +142,13 @@ esphome/components/dlms_meter/* @SimonFischer04
esphome/components/dps310/* @kbx81
esphome/components/ds1307/* @badbadc0ffee
esphome/components/ds2484/* @mrk-its
esphome/components/dsmr/* @glmnet @PolarGoose @zuidwijk
esphome/components/dsmr/* @glmnet @PolarGoose
esphome/components/duty_time/* @dudanov
esphome/components/ee895/* @Stock-M
esphome/components/ektf2232/touchscreen/* @jesserockz
esphome/components/emc2101/* @ellull
esphome/components/emmeti/* @E440QF
esphome/components/emontx/* @FredM67 @glynhudson @TrystanLea
esphome/components/ens160/* @latonita
esphome/components/ens160_base/* @latonita @vincentscode
esphome/components/ens160_i2c/* @latonita
@@ -216,6 +218,7 @@ esphome/components/hbridge/light/* @DotNetDann
esphome/components/hbridge/switch/* @dwmw2
esphome/components/hc8/* @omartijn
esphome/components/hdc2010/* @optimusprimespace @ssieb
esphome/components/hdc2080/* @G-Pereira @jesserockz
esphome/components/hdc302x/* @joshuasing
esphome/components/he60r/* @clydebarrow
esphome/components/heatpumpir/* @rob-deutsch
@@ -244,7 +247,6 @@ esphome/components/hyt271/* @Philippe12
esphome/components/i2c/* @esphome/core
esphome/components/i2c_device/* @gabest11
esphome/components/i2s_audio/* @jesserockz
esphome/components/i2s_audio/media_player/* @jesserockz
esphome/components/i2s_audio/microphone/* @jesserockz
esphome/components/i2s_audio/speaker/* @jesserockz @kahrendt
esphome/components/iaqcore/* @yozik04
@@ -330,6 +332,7 @@ esphome/components/mipi_dsi/* @clydebarrow
esphome/components/mipi_rgb/* @clydebarrow
esphome/components/mipi_spi/* @clydebarrow
esphome/components/mitsubishi/* @RubyBailey
esphome/components/mitsubishi_cn105/* @crnjan
esphome/components/mixer/speaker/* @kahrendt
esphome/components/mlx90393/* @functionpointer
esphome/components/mlx90614/* @jesserockz
@@ -458,6 +461,9 @@ esphome/components/sn74hc165/* @jesserockz
esphome/components/socket/* @esphome/core
esphome/components/sonoff_d1/* @anatoly-savchenkov
esphome/components/sound_level/* @kahrendt
esphome/components/spa06_base/* @danielkent-net
esphome/components/spa06_i2c/* @danielkent-net
esphome/components/spa06_spi/* @danielkent-net
esphome/components/speaker/* @jesserockz @kahrendt
esphome/components/speaker/media_player/* @kahrendt @synesthesiam
esphome/components/speaker_source/* @kahrendt
+1 -1
@@ -48,7 +48,7 @@ PROJECT_NAME = ESPHome
# could be handy for archiving the generated documentation or if some version
# control system is used.
PROJECT_NUMBER = 2026.3.3
PROJECT_NUMBER = 2026.4.0b1
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
+1 -1
@@ -1,4 +1,4 @@
# ESPHome [![Discord Chat](https://img.shields.io/discord/429907082951524364.svg)](https://discord.gg/KhAMKrd) [![GitHub release](https://img.shields.io/github/release/esphome/esphome.svg)](https://GitHub.com/esphome/esphome/releases/)
# ESPHome [![Discord Chat](https://img.shields.io/discord/429907082951524364.svg)](https://discord.gg/KhAMKrd) [![GitHub release](https://img.shields.io/github/release/esphome/esphome.svg)](https://GitHub.com/esphome/esphome/releases/) [![CodSpeed](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/esphome/esphome)
<a href="https://esphome.io/">
<picture>
+72 -2
@@ -1046,7 +1046,11 @@ def show_logs(config: ConfigType, args: ArgsProtocol, devices: list[str]) -> int
):
from esphome.components.api.client import run_logs
return run_logs(config, network_devices)
return run_logs(
config,
network_devices,
subscribe_states=not getattr(args, "no_states", False),
)
if port_type in (PortType.NETWORK, PortType.MQTT) and has_mqtt_logging():
from esphome import mqtt
@@ -1079,7 +1083,7 @@ def command_config(args: ArgsProtocol, config: ConfigType) -> int | None:
# add the console decoration so the front-end can hide the secrets
if not args.show_secrets:
output = re.sub(
r"(password|key|psk|ssid)\: (.+)", r"\1: \\033[5m\2\\033[6m", output
r"(password|key|psk|ssid)\: (.+)", r"\1: \\033[8m\2\\033[28m", output
)
if not CORE.quiet:
safe_print(output)
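The substitution above switches the wrapping escape codes from ANSI blink (SGR 5/6) to conceal/reveal (SGR 8/28) so front-ends hide the matched secret values. A standalone sketch — note the CLI writes the escapes as literal `\033` text (double backslash) for the front-end to interpret, while this sketch emits real escape characters with made-up config text:

```python
import re

# Conceal secret values with ANSI SGR 8 (conceal) and restore normal
# rendering with SGR 28 (reveal), mirroring the substitution in the diff.
output = "wifi:\n  ssid: MyNetwork\n  password: hunter2"
redacted = re.sub(
    r"(password|key|psk|ssid)\: (.+)",
    "\\1: \x1b[8m\\2\x1b[28m",
    output,
)
print(redacted)
```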
@@ -1238,6 +1242,38 @@ def command_clean(args: ArgsProtocol, config: ConfigType) -> int | None:
return 0
def command_bundle(args: ArgsProtocol, config: ConfigType) -> int | None:
from esphome.bundle import BUNDLE_EXTENSION, ConfigBundleCreator
creator = ConfigBundleCreator(config)
if args.list_only:
files = creator.discover_files()
for bf in sorted(files, key=lambda f: f.path):
safe_print(f" {bf.path}")
_LOGGER.info("Found %d files", len(files))
return 0
result = creator.create_bundle()
if args.output:
output_path = Path(args.output)
else:
stem = CORE.config_path.stem
output_path = CORE.config_dir / f"{stem}{BUNDLE_EXTENSION}"
output_path.parent.mkdir(parents=True, exist_ok=True)
output_path.write_bytes(result.data)
_LOGGER.info(
"Bundle created: %s (%d files, %.1f KB)",
output_path,
len(result.files),
len(result.data) / 1024,
)
return 0
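For reference, the fallback branch of `command_bundle` derives the archive name from the config file stem. A minimal sketch, assuming `BUNDLE_EXTENSION` is `.esphomebundle` (the extension referenced later in this diff) and a hypothetical config path:

```python
from pathlib import Path

# Hypothetical values: the real CORE.config_path/config_dir come from the CLI.
BUNDLE_EXTENSION = ".esphomebundle"
config_path = Path("/config/kitchen.yaml")

# Mirrors the default output path used above when no --output is given.
output_path = config_path.parent / f"{config_path.stem}{BUNDLE_EXTENSION}"
print(output_path)
```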
def command_dashboard(args: ArgsProtocol) -> int | None:
from esphome.dashboard import dashboard
@@ -1513,6 +1549,7 @@ POST_CONFIG_ACTIONS = {
"rename": command_rename,
"discover": command_discover,
"analyze-memory": command_analyze_memory,
"bundle": command_bundle,
}
SIMPLE_CONFIG_ACTIONS = [
@@ -1664,6 +1701,11 @@ def parse_args(argv):
help="Reset the device before starting serial logs.",
default=os.getenv("ESPHOME_SERIAL_LOGGING_RESET"),
)
parser_logs.add_argument(
"--no-states",
action="store_true",
help="Do not show entity state changes in log output.",
)
parser_discover = subparsers.add_parser(
"discover",
@@ -1809,6 +1851,24 @@ def parse_args(argv):
"configuration", help="Your YAML configuration file(s).", nargs="+"
)
parser_bundle = subparsers.add_parser(
"bundle",
help="Create a self-contained config bundle for remote compilation.",
)
parser_bundle.add_argument(
"configuration", help="Your YAML configuration file(s).", nargs="+"
)
parser_bundle.add_argument(
"-o",
"--output",
help="Output path for the bundle archive.",
)
parser_bundle.add_argument(
"--list-only",
help="List discovered files without creating the archive.",
action="store_true",
)
# Keep backward compatibility with the old command line format of
# esphome <config> <command>.
#
@@ -1887,6 +1947,16 @@ def run_esphome(argv):
_LOGGER.warning("Skipping secrets file %s", conf_path)
return 0
# Bundle support: if the configuration is a .esphomebundle, extract it
# and rewrite conf_path to the extracted YAML config.
from esphome.bundle import is_bundle_path, prepare_bundle_for_compile
if is_bundle_path(conf_path):
_LOGGER.info("Extracting config bundle %s...", conf_path)
conf_path = prepare_bundle_for_compile(conf_path)
# Update the argument so downstream code sees the extracted path
args.configuration[0] = str(conf_path)
CORE.config_path = conf_path
CORE.dashboard = args.dashboard
+207 -3
@@ -1,6 +1,6 @@
"""Memory usage analyzer for ESPHome compiled binaries."""
from collections import defaultdict
from collections import Counter, defaultdict
from dataclasses import dataclass, field
import logging
from pathlib import Path
@@ -40,6 +40,15 @@ _READELF_SECTION_PATTERN = re.compile(
r"\s*\[\s*\d+\]\s+([\.\w]+)\s+\w+\s+[\da-fA-F]+\s+[\da-fA-F]+\s+([\da-fA-F]+)"
)
# Regex for extracting call targets from objdump disassembly
# Matches direct call instructions across architectures:
# Xtensa: call0/call4/call8/call12/callx0/callx4/callx8/callx12 <addr> <symbol>
# ARM: bl/blx <addr> <symbol>
# Captures the mangled symbol name inside angle brackets.
_CALL_TARGET_PATTERN = re.compile(
r"\t(?:call(?:0|4|8|12)|callx(?:0|4|8|12)|blx?)\s+[\da-fA-F]+ <([^>]+)>"
)
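The comment block above describes `_CALL_TARGET_PATTERN`; a quick standalone check of what it captures, using the pattern copied verbatim and hypothetical objdump output (addresses and encodings are made up):

```python
import re

# Direct-call mnemonics for Xtensa (call0/4/8/12, callx0/4/8/12) and
# ARM (bl/blx); captures the mangled target symbol in angle brackets.
_CALL_TARGET_PATTERN = re.compile(
    r"\t(?:call(?:0|4|8|12)|callx(?:0|4|8|12)|blx?)\s+[\da-fA-F]+ <([^>]+)>"
)

sample = [
    "400d1234:\t25 a1 00 \tcall8\t400d5678 <_ZN7esphome3App4loopEv>",  # Xtensa
    "    8a2c:\tf000 f834\tbl\t8a98 <millis>",                          # ARM
    "400d1240:\t0d f0\tmov.n\ta0, a13",                                 # not a call
]
targets = [m.group(1) for line in sample if (m := _CALL_TARGET_PATTERN.search(line))]
print(targets)  # ['_ZN7esphome3App4loopEv', 'millis']
```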
# Component category prefixes
_COMPONENT_PREFIX_ESPHOME = "[esphome]"
_COMPONENT_PREFIX_EXTERNAL = "[external]"
@@ -47,6 +56,10 @@ _COMPONENT_PREFIX_LIB = "[lib]"
_COMPONENT_CORE = f"{_COMPONENT_PREFIX_ESPHOME}core"
_COMPONENT_API = f"{_COMPONENT_PREFIX_ESPHOME}api"
# Placement new storage suffix (generated by codegen Pvariable)
_PSTORAGE_SUFFIX = "__pstorage"
# C++ namespace prefixes
_NAMESPACE_ESPHOME = "esphome::"
_NAMESPACE_STD = "std::"
@@ -192,20 +205,27 @@ class MemoryAnalyzer:
self._cswtch_symbols: list[tuple[str, int, str, str]] = []
# Library symbol mapping: symbol_name -> library_name
self._lib_symbol_map: dict[str, str] = {}
# Source file symbol mapping: symbol_name -> component_name
# Used for extern "C" and other symbols without C++ namespace
self._source_symbol_map: dict[str, str] = {}
# Library dir to name mapping: "lib641" -> "espsoftwareserial",
# "espressif__mdns" -> "mdns"
self._lib_hash_to_name: dict[str, str] = {}
# Heuristic category to library redirect: "mdns_lib" -> "[lib]mdns"
self._heuristic_to_lib: dict[str, str] = {}
# Function call counts: mangled_name -> call_count
self._function_call_counts: Counter[str] = Counter()
def analyze(self) -> dict[str, ComponentMemory]:
"""Analyze the ELF file and return component memory usage."""
self._parse_sections()
self._parse_symbols()
self._scan_libraries()
self._scan_source_symbols()
self._categorize_symbols()
self._analyze_cswtch_symbols()
self._analyze_sdk_libraries()
self._analyze_function_calls()
return dict(self.components)
def _parse_sections(self) -> None:
@@ -316,6 +336,13 @@ class MemoryAnalyzer:
# Demangle C++ names if needed
demangled = self._demangle_symbol(symbol_name)
# Check for placement new storage symbols (generated by codegen)
# Format: {component}__{id}__pstorage
if demangled.endswith(_PSTORAGE_SUFFIX) and (
component := self._match_pstorage_component(demangled)
):
return component
# Check for special component classes first (before namespace pattern)
# This handles cases like esphome::ESPHomeOTAComponent which should map to ota
if _NAMESPACE_ESPHOME in demangled:
@@ -351,6 +378,11 @@ class MemoryAnalyzer:
if lib_name := self._lib_symbol_map.get(symbol_name):
return f"{_COMPONENT_PREFIX_LIB}{lib_name}"
# Check source file mapping (catches extern "C" functions in ESPHome sources)
# Must be before heuristic patterns since source attribution is authoritative
if component := self._source_symbol_map.get(symbol_name):
return component
# Check against symbol patterns
for component, patterns in SYMBOL_PATTERNS.items():
if any(pattern in symbol_name for pattern in patterns):
@@ -378,14 +410,33 @@ class MemoryAnalyzer:
# Track uncategorized symbols for analysis
return "other"
def _match_pstorage_component(self, symbol_name: str) -> str | None:
"""Match a __pstorage symbol to its ESPHome component.
Symbol format: {component}__{id}__pstorage
The component namespace is embedded by codegen before the double underscore.
"""
prefix = symbol_name[: -len(_PSTORAGE_SUFFIX)]
# Extract component namespace before the first double underscore
dunder_pos = prefix.find("__")
if dunder_pos == -1:
return None
component_name = prefix[:dunder_pos]
if component_name in get_esphome_components():
return f"{_COMPONENT_PREFIX_ESPHOME}{component_name}"
if component_name in self.external_components:
return f"{_COMPONENT_PREFIX_EXTERNAL}{component_name}"
return None
def _batch_demangle_symbols(self, symbols: list[str]) -> None:
"""Batch demangle C++ symbol names for efficiency."""
if not symbols:
return
_LOGGER.info("Demangling %d symbols", len(symbols))
self._demangle_cache = batch_demangle(symbols, objdump_path=self.objdump_path)
_LOGGER.info("Successfully demangled %d symbols", len(self._demangle_cache))
demangled = batch_demangle(symbols, objdump_path=self.objdump_path)
self._demangle_cache.update(demangled)
_LOGGER.info("Successfully demangled %d symbols", len(demangled))
def _demangle_symbol(self, symbol: str) -> str:
"""Get demangled C++ symbol name from cache."""
@@ -640,6 +691,7 @@ class MemoryAnalyzer:
return None
symbol_map: dict[str, str] = {}
source_symbol_map: dict[str, str] = {}
current_symbol: str | None = None
section_prefixes = (".text.", ".rodata.", ".data.", ".bss.", ".literal.")
@@ -675,9 +727,18 @@ class MemoryAnalyzer:
if dir_key in source_path:
symbol_map[current_symbol] = lib_name
break
else:
# Map ESPHome source files to components for extern "C"
# and other symbols without C++ namespace
component = self._source_file_to_component(source_path)
if component.startswith(
(_COMPONENT_PREFIX_ESPHOME, _COMPONENT_PREFIX_EXTERNAL)
):
source_symbol_map[current_symbol] = component
current_symbol = None
self._source_symbol_map = source_symbol_map
return symbol_map or None
def _scan_libraries(self) -> None:
@@ -728,6 +789,112 @@ class MemoryAnalyzer:
len(libraries),
)
def _scan_source_symbols(self) -> None:
"""Scan ESPHome source object files to map extern "C" symbols to components.
When no linker map file is available, this uses ``nm`` to scan ``.o`` files
under ``src/esphome/`` and build a symbol-to-component mapping. This catches
``extern "C"`` functions and other symbols that lack C++ namespace prefixes.
Skips scanning if ``_source_symbol_map`` was already populated by
``_parse_map_file()``.
"""
if self._source_symbol_map or not self.nm_path:
return
obj_dir = self._find_object_files_dir()
if obj_dir is None:
return
# Find ESPHome source object files
esphome_src_dir = obj_dir / "src" / "esphome"
if not esphome_src_dir.is_dir():
return
obj_files = sorted(esphome_src_dir.rglob("*.o"))
if not obj_files:
return
# Run nm with --print-file-name to get file:symbol mapping
result = run_tool(
[self.nm_path, "--print-file-name", "-g", "--defined-only"]
+ [str(f) for f in obj_files],
)
if result is None or result.returncode != 0:
_LOGGER.debug("nm scan of source objects failed")
return
self._source_symbol_map = self._parse_nm_source_output(result.stdout, obj_dir)
if self._source_symbol_map:
_LOGGER.info(
"Built source symbol map from nm: %d symbols",
len(self._source_symbol_map),
)
def _parse_nm_source_output(self, output: str, base_dir: Path) -> dict[str, str]:
"""Parse nm output to map non-namespaced symbols to ESPHome components.
Extracts global defined symbols from ESPHome source object files that
don't use C++ namespacing (e.g. ``extern "C"`` functions).
Args:
output: Raw stdout from ``nm --print-file-name -g --defined-only``
or ``nm --print-file-name -S``.
base_dir: Build directory for computing relative paths.
Returns:
Dict mapping symbol names to component names.
"""
source_map: dict[str, str] = {}
for line in output.splitlines():
# Format: /path/to/file.o: addr type name
# or: /path/to/file.o: addr size type name (with -S)
colon_idx = line.rfind(".o:")
if colon_idx == -1:
continue
file_path = line[: colon_idx + 2]
fields = line[colon_idx + 3 :].split()
if len(fields) < 3:
continue
# With -S flag, format is: addr size type name
# Without -S flag: addr type name
# type is a single char; size is hex digits
# Detect by checking if fields[1] is a single letter (the nm type field)
if len(fields[1]) == 1 and fields[1].isalpha():
# addr type name
sym_type = fields[1]
symbol_name = fields[2]
elif len(fields) >= 4:
# addr size type name
sym_type = fields[2]
symbol_name = fields[3]
else:
continue
# Only global defined symbols (uppercase type)
if not sym_type.isupper() or sym_type == "U":
continue
# Skip symbols already in esphome:: namespace
if symbol_name.startswith("_ZN7esphome"):
continue
# Make path relative to base_dir for _source_file_to_component
try:
rel_path = str(Path(file_path).relative_to(base_dir))
except ValueError:
continue
component = self._source_file_to_component(rel_path)
if component.startswith(
(_COMPONENT_PREFIX_ESPHOME, _COMPONENT_PREFIX_EXTERNAL)
):
source_map[symbol_name] = component
return source_map
def _find_object_files_dir(self) -> Path | None:
"""Find the directory containing object files for this build.
@@ -1011,6 +1178,43 @@ class MemoryAnalyzer:
total_size,
)
def _analyze_function_calls(self) -> None:
"""Count function call sites by parsing disassembly output.
Parses direct call instructions (call0/4/8/12, callx variants, bl/blx) from objdump -d
to count how many times each function is called. This helps identify
inlining candidates — frequently called small functions benefit most
from inlining.
"""
result = run_tool(
[self.objdump_path, "-d", str(self.elf_path)],
timeout=60,
)
if result is None or result.returncode != 0:
_LOGGER.debug("Failed to disassemble ELF for function call analysis")
return
self._function_call_counts = Counter(
match.group(1)
for line in result.stdout.splitlines()
if (match := _CALL_TARGET_PATTERN.search(line))
)
# Demangle any call targets not already in the cache
missing = [
name
for name in self._function_call_counts
if name not in self._demangle_cache
]
if missing:
self._batch_demangle_symbols(missing)
_LOGGER.debug(
"Function call analysis: %d unique targets, %d total calls",
len(self._function_call_counts),
sum(self._function_call_counts.values()),
)
def get_unattributed_ram(self) -> tuple[int, int, int]:
"""Get unattributed RAM sizes (SDK/framework overhead).
+144 -13
@@ -15,6 +15,7 @@ from . import (
_COMPONENT_PREFIX_ESPHOME,
_COMPONENT_PREFIX_EXTERNAL,
_COMPONENT_PREFIX_LIB,
_PSTORAGE_SUFFIX,
RAM_SECTIONS,
MemoryAnalyzer,
)
@@ -23,6 +24,17 @@ if TYPE_CHECKING:
from . import ComponentMemory
def _format_pstorage_name(name: str) -> str:
"""Format a __pstorage symbol as 'storage for {id}'."""
if not name.endswith(_PSTORAGE_SUFFIX):
return name
prefix = name[: -len(_PSTORAGE_SUFFIX)]
# Strip component namespace prefix: {component}__{id} -> {id}
dunder_pos = prefix.find("__")
var_id = prefix[dunder_pos + 2 :] if dunder_pos != -1 else prefix
return f"storage for {var_id}"
class MemoryAnalyzerCLI(MemoryAnalyzer):
"""Memory analyzer with CLI-specific report generation."""
@@ -148,11 +160,14 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
If section is one of the RAM sections (.data or .bss), a label like
" [data]" or " [bss]" is appended. For non-RAM sections or when
section is None, no section label is added.
Placement new storage symbols are formatted as "storage for {id}".
"""
display_name = _format_pstorage_name(demangled)
section_label = ""
if section in RAM_SECTIONS:
section_label = f" [{section[1:]}]" # .data -> [data], .bss -> [bss]
return f"{demangled} ({size:,} B){section_label}"
return f"{display_name} ({size:,} B){section_label}"
def _add_top_symbols(self, lines: list[str]) -> None:
"""Add a section showing the top largest symbols in the binary."""
@@ -175,11 +190,13 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
for i, (_, demangled, size, section, component) in enumerate(top_symbols):
# Format section label
section_label = f"[{section[1:]}]" if section else ""
# Truncate demangled name if too long
# Format storage symbols readably
display_name = _format_pstorage_name(demangled)
# Truncate if too long
demangled_display = (
f"{demangled[:truncate_limit]}..."
if len(demangled) > self.COL_TOP_SYMBOL_NAME
else demangled
f"{display_name[:truncate_limit]}..."
if len(display_name) > self.COL_TOP_SYMBOL_NAME
else display_name
)
lines.append(
f"{i + 1:>2}. {size:>7,} B {section_label:<8} {demangled_display:<{self.COL_TOP_SYMBOL_NAME}} {component}"
@@ -231,6 +248,110 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
lines.append(f" {size:>6,} B {sym_name}")
lines.append("")
# Number of top called functions to show
TOP_CALLS_LIMIT: int = 50
# Number of inlining candidates to show
INLINE_CANDIDATES_LIMIT: int = 25
# Maximum function size in bytes to consider for inlining
INLINE_SIZE_THRESHOLD: int = 16
def _build_symbol_sizes(self) -> dict[str, int]:
"""Build a size lookup from all component symbols: mangled_name -> size."""
return {
symbol: size
for symbols in self._component_symbols.values()
for symbol, _, size, _ in symbols
}
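The nested comprehension in `_build_symbol_sizes` flattens per-component symbol tuples into one mangled-name-to-size map; sketched with hypothetical data shaped like `_component_symbols`:

```python
# Each component maps to (mangled, demangled, size, section) tuples.
component_symbols = {
    "[esphome]wifi": [
        ("_Zsym1", "sym1()", 120, ".text"),
        ("_Zsym2", "sym2()", 8, ".text"),
    ],
    "[esphome]api": [("_Zsym3", "sym3()", 64, ".rodata")],
}

# Flatten to mangled_name -> size, as the method above does.
symbol_sizes = {
    symbol: size
    for symbols in component_symbols.values()
    for symbol, _, size, _ in symbols
}
print(symbol_sizes)
```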
def _format_call_row(
self, index: int, mangled: str, count: int, symbol_sizes: dict[str, int]
) -> str:
"""Format a single row for call frequency tables."""
demangled = self._demangle_cache.get(mangled, mangled)
if len(demangled) > 80:
demangled = f"{demangled[:77]}..."
size = symbol_sizes.get(mangled)
size_str = f"{size:>5,} B" if size is not None else " ?"
return f"{index:>3} {count:>5} {size_str} {demangled}"
def _add_call_table_header(self, lines: list[str]) -> None:
"""Add the header row for call frequency tables."""
lines.append(f"{'#':>3} {'Calls':>5} {'Size':>7} Function")
lines.append(f"{'---':>3} {'-----':>5} {'-------':>7} {'-' * 60}")
def _add_function_call_analysis(self, lines: list[str]) -> None:
"""Add function call frequency analysis section.
Shows the most frequently called functions by call site count.
"""
self._add_section_header(lines, "Top Called Functions")
symbol_sizes = self._build_symbol_sizes()
# Sort by call count descending
sorted_calls = sorted(
self._function_call_counts.items(), key=lambda x: x[1], reverse=True
)
self._add_call_table_header(lines)
for i, (mangled, count) in enumerate(sorted_calls[: self.TOP_CALLS_LIMIT]):
lines.append(self._format_call_row(i + 1, mangled, count, symbol_sizes))
total_calls = sum(self._function_call_counts.values())
lines.append("")
lines.append(
f"Total: {len(self._function_call_counts)} unique targets, "
f"{total_calls:,} call sites"
)
lines.append("")
def _add_inline_candidates(self, lines: list[str]) -> None:
"""Add inlining candidates section.
Shows frequently called functions that are small enough to benefit
from inlining (< 16 bytes). These are the best candidates for
reducing call overhead.
"""
self._add_section_header(
lines,
f"Inlining Candidates (<{self.INLINE_SIZE_THRESHOLD} B, by call count)",
)
symbol_sizes = self._build_symbol_sizes()
# Filter to small functions with known size, sort by call count
candidates = sorted(
(
(mangled, count)
for mangled, count in self._function_call_counts.items()
if mangled in symbol_sizes
and symbol_sizes[mangled] < self.INLINE_SIZE_THRESHOLD
),
key=lambda x: x[1],
reverse=True,
)
if not candidates:
lines.append("No candidates found.")
lines.append("")
return
self._add_call_table_header(lines)
for i, (mangled, count) in enumerate(
candidates[: self.INLINE_CANDIDATES_LIMIT]
):
lines.append(self._format_call_row(i + 1, mangled, count, symbol_sizes))
lines.append("")
lines.append(
f"Showing top {min(len(candidates), self.INLINE_CANDIDATES_LIMIT)} "
f"of {len(candidates)} functions under "
f"{self.INLINE_SIZE_THRESHOLD} B"
)
lines.append("")
def generate_report(self, detailed: bool = False) -> str:
"""Generate a formatted memory report."""
components = sorted(
@@ -469,15 +590,16 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
lines.append(f"Total size: {comp_mem.flash_total:,} B")
lines.append("")
# Show all symbols above threshold for better visibility
# Show symbols above threshold, always include storage symbols
large_symbols = [
(sym, dem, size, sec)
for sym, dem, size, sec in sorted_symbols
if size > self.SYMBOL_SIZE_THRESHOLD
or dem.endswith(_PSTORAGE_SUFFIX)
]
lines.append(
f"{comp_name} Symbols > {self.SYMBOL_SIZE_THRESHOLD} B ({len(large_symbols)} symbols):"
f"{comp_name} Symbols > {self.SYMBOL_SIZE_THRESHOLD} B & storage ({len(large_symbols)} symbols):"
)
for i, (symbol, demangled, size, section) in enumerate(large_symbols):
lines.append(
@@ -500,7 +622,10 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
# Sort by size descending
sorted_ram_syms = sorted(ram_syms, key=lambda x: x[2], reverse=True)
large_ram_syms = [
s for s in sorted_ram_syms if s[2] > self.RAM_SYMBOL_SIZE_THRESHOLD
s
for s in sorted_ram_syms
if s[2] > self.RAM_SYMBOL_SIZE_THRESHOLD
or s[1].endswith(_PSTORAGE_SUFFIX)
]
lines.append(f"{name} ({mem.ram_total:,} B total RAM):")
@@ -518,13 +643,14 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
for symbol, demangled, size, section in large_ram_syms[:10]:
# Format section label consistently by stripping leading dot
section_label = section.lstrip(".") if section else ""
display_name = _format_pstorage_name(demangled)
# Add ellipsis if name is truncated
demangled_display = (
f"{demangled[:70]}..." if len(demangled) > 70 else demangled
)
lines.append(
f" {size:>6,} B [{section_label}] {demangled_display}"
display_name = (
f"{display_name[:70]}..."
if len(display_name) > 70
else display_name
)
lines.append(f" {size:>6,} B [{section_label}] {display_name}")
if len(large_ram_syms) > 10:
lines.append(f" ... and {len(large_ram_syms) - 10} more")
lines.append("")
@@ -533,6 +659,11 @@ class MemoryAnalyzerCLI(MemoryAnalyzer):
if self._cswtch_symbols:
self._add_cswtch_analysis(lines)
# Function call frequency analysis
if self._function_call_counts:
self._add_function_call_analysis(lines)
self._add_inline_candidates(lines)
lines.append(
"Note: This analysis covers symbols in the ELF file. Some runtime allocations may not be included."
)
-1
@@ -408,7 +408,6 @@ SYMBOL_PATTERNS = {
],
"arduino_core": [
"pinMode",
"resetPins",
"millis",
"micros",
"delay(", # More specific - Arduino delay function with parenthesis
+97 -7
@@ -1,3 +1,4 @@
from dataclasses import dataclass, field
import logging
import esphome.codegen as cg
@@ -137,6 +138,9 @@ UpdateComponentAction = cg.esphome_ns.class_("UpdateComponentAction", Action)
SuspendComponentAction = cg.esphome_ns.class_("SuspendComponentAction", Action)
ResumeComponentAction = cg.esphome_ns.class_("ResumeComponentAction", Action)
Automation = cg.esphome_ns.class_("Automation")
TriggerForwarder = cg.esphome_ns.class_("TriggerForwarder")
TriggerOnTrueForwarder = cg.esphome_ns.class_("TriggerOnTrueForwarder")
TriggerOnFalseForwarder = cg.esphome_ns.class_("TriggerOnFalseForwarder")
LambdaCondition = cg.esphome_ns.class_("LambdaCondition", Condition)
StatelessLambdaCondition = cg.esphome_ns.class_("StatelessLambdaCondition", Condition)
@@ -247,7 +251,9 @@ async def and_condition_to_code(
args: TemplateArgsType,
) -> MockObj:
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
return cg.new_Pvariable(
condition_id, cg.TemplateArguments(len(conditions), *template_arg), conditions
)
@register_condition("or", OrCondition, validate_condition_list)
@@ -258,7 +264,9 @@ async def or_condition_to_code(
args: TemplateArgsType,
) -> MockObj:
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
return cg.new_Pvariable(
condition_id, cg.TemplateArguments(len(conditions), *template_arg), conditions
)
@register_condition("all", AndCondition, validate_condition_list)
@@ -269,7 +277,9 @@ async def all_condition_to_code(
args: TemplateArgsType,
) -> MockObj:
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
return cg.new_Pvariable(
condition_id, cg.TemplateArguments(len(conditions), *template_arg), conditions
)
@register_condition("any", OrCondition, validate_condition_list)
@@ -280,7 +290,9 @@ async def any_condition_to_code(
args: TemplateArgsType,
) -> MockObj:
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
return cg.new_Pvariable(
condition_id, cg.TemplateArguments(len(conditions), *template_arg), conditions
)
@register_condition("not", NotCondition, validate_potentially_and_condition)
@@ -302,7 +314,9 @@ async def xor_condition_to_code(
args: TemplateArgsType,
) -> MockObj:
conditions = await build_condition_list(config, template_arg, args)
return cg.new_Pvariable(condition_id, template_arg, conditions)
return cg.new_Pvariable(
condition_id, cg.TemplateArguments(len(conditions), *template_arg), conditions
)
@register_condition("lambda", LambdaCondition, cv.returning_lambda)
@@ -413,13 +427,16 @@ async def if_action_to_code(
template_arg: cg.TemplateArguments,
args: TemplateArgsType,
) -> MockObj:
has_else = CONF_ELSE in config
# Prepend HasElse bool to template arguments: IfAction<HasElse, Ts...>
if_template_arg = cg.TemplateArguments(has_else, *template_arg)
cond_conf = next(el for el in config if el in (CONF_ANY, CONF_ALL, CONF_CONDITION))
condition = await build_condition(config[cond_conf], template_arg, args)
var = cg.new_Pvariable(action_id, template_arg, condition)
var = cg.new_Pvariable(action_id, if_template_arg, condition)
if CONF_THEN in config:
actions = await build_action_list(config[CONF_THEN], template_arg, args)
cg.add(var.add_then(actions))
if CONF_ELSE in config:
if has_else:
actions = await build_action_list(config[CONF_ELSE], template_arg, args)
cg.add(var.add_else(actions))
return var
@@ -658,3 +675,76 @@ async def build_automation(
actions = await build_action_list(config[CONF_THEN], templ, args)
cg.add(obj.add_actions(actions))
return obj
async def build_callback_automation(
parent: MockObj,
callback_method: str,
args: TemplateArgsType,
config: ConfigType,
forwarder: MockObj | MockObjClass | None = None,
) -> None:
"""Build an Automation and register it as a callback on the parent.
Eliminates the need for a Trigger wrapper object by registering the
automation's trigger() directly as a callback on the parent component.
Uses template forwarder structs so the compiler deduplicates the operator()
body across all call sites with the same signature. The forwarder must be
pointer-sized (single Automation* field) to fit inline in Callback::ctx_
and avoid heap allocation.
:param parent: The component object (e.g., button, sensor).
:param callback_method: Name of the callback method (e.g., "add_on_press_callback").
:param args: Automation template args as list of (type, name) tuples.
:param config: The automation config dict.
:param forwarder: Optional forwarder type to use instead of the default
TriggerForwarder<Ts...>. Pass any struct type whose aggregate init takes
a single Automation pointer (e.g., TriggerOnTrueForwarder).
"""
arg_types = [arg[0] for arg in args]
templ = cg.TemplateArguments(*arg_types)
obj = cg.new_Pvariable(config[CONF_AUTOMATION_ID], templ)
actions = await build_action_list(config[CONF_THEN], templ, args)
cg.add(obj.add_actions(actions))
# Use template forwarder structs for deduplication. The compiler generates
# one operator() per forwarder type; different automation pointers are just
# data in the struct.
if forwarder is None:
forwarder = TriggerForwarder.template(templ)
# RawExpression for aggregate init — both forwarder and obj are codegen
# MockObjs (not user input), and there's no Expression type for positional
# aggregate initialization (StructInitializer uses named fields).
cg.add(getattr(parent, callback_method)(cg.RawExpression(f"{forwarder}{{{obj}}}")))
@dataclass(frozen=True, slots=True)
class CallbackAutomation:
"""A single callback automation entry for build_callback_automations."""
conf_key: str
callback_method: str
args: TemplateArgsType = field(default_factory=list)
forwarder: MockObj | MockObjClass | None = None
async def build_callback_automations(
parent: MockObj,
config: ConfigType,
entries: tuple[CallbackAutomation, ...],
) -> None:
"""Build multiple callback automations from a tuple of entries.
:param parent: The component object (e.g., button, sensor).
:param config: The full component config dict.
:param entries: Tuple of CallbackAutomation entries to process.
"""
for entry in entries:
for conf in config.get(entry.conf_key, []):
await build_callback_automation(
parent,
entry.callback_method,
entry.args,
conf,
forwarder=entry.forwarder,
)
+10 -9
@@ -53,6 +53,13 @@ def get_project_cmakelists() -> str:
variant = get_esp32_variant()
idf_target = variant.lower().replace("-", "")
# Collect -D define flags from build flags to pass through as compile options
compile_defs = [flag for flag in CORE.build_flags if flag.startswith("-D")]
extra_compile_options = "\n".join(
f'idf_build_set_property(COMPILE_OPTIONS "{compile_def}" APPEND)'
for compile_def in compile_defs
)
return f"""\
# Auto-generated by ESPHome
cmake_minimum_required(VERSION 3.16)
@@ -61,6 +68,9 @@ set(IDF_TARGET {idf_target})
set(EXTRA_COMPONENT_DIRS ${{CMAKE_SOURCE_DIR}}/src)
include($ENV{{IDF_PATH}}/tools/cmake/project.cmake)
{extra_compile_options}
project({CORE.name})
"""
@@ -70,10 +80,6 @@ def get_component_cmakelists(minimal: bool = False) -> str:
idf_requires = [] if minimal else (get_available_components() or [])
requires_str = " ".join(idf_requires)
# Extract compile definitions from build flags (-DXXX -> XXX)
compile_defs = [flag[2:] for flag in CORE.build_flags if flag.startswith("-D")]
compile_defs_str = "\n ".join(sorted(compile_defs)) if compile_defs else ""
# Extract compile options (-W flags, excluding linker flags)
compile_opts = [
flag
@@ -104,11 +110,6 @@ idf_component_register(
# Apply C++ standard
target_compile_features(${{COMPONENT_LIB}} PUBLIC cxx_std_20)
# ESPHome compile definitions
target_compile_definitions(${{COMPONENT_LIB}} PUBLIC
{compile_defs_str}
)
# ESPHome compile options
target_compile_options(${{COMPONENT_LIB}} PUBLIC
{compile_opts_str}
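Note the two hunks extract `-D` flags differently: the project-level CMakeLists keeps the whole `-DXXX` flag for `COMPILE_OPTIONS`, while the component-level one strips the `-D` prefix because `target_compile_definitions()` expects bare names. A small sketch of both extractions (the `build_flags` list is an assumed input, not the actual `CORE.build_flags` contents):

```python
build_flags = ["-DUSE_ESP32", "-DESPHOME_LOG_LEVEL=4", "-Wno-unused", "-Os"]

# Project CMakeLists: keep the whole "-DXXX" flag for idf_build_set_property(COMPILE_OPTIONS ...).
compile_defs_full = [f for f in build_flags if f.startswith("-D")]

# Component CMakeLists: strip "-D" so the bare name suits target_compile_definitions().
compile_defs_names = sorted(f[2:] for f in build_flags if f.startswith("-D"))

print(compile_defs_full)
print(compile_defs_names)
```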
File diff suppressed because it is too large
@@ -79,6 +79,7 @@ from esphome.cpp_types import ( # noqa: F401
float_,
global_ns,
gpio_Flags,
int8,
int16,
int32,
int64,
@@ -1,22 +1,29 @@
#include "esphome/core/log.h"
#include "absolute_humidity.h"
namespace esphome {
namespace absolute_humidity {
namespace esphome::absolute_humidity {
static const char *const TAG = "absolute_humidity.sensor";
static const char *const TAG{"absolute_humidity.sensor"};
void AbsoluteHumidityComponent::setup() {
this->temperature_sensor_->add_on_state_callback([this](float state) {
this->temperature_ = state;
this->enable_loop();
});
ESP_LOGD(TAG, " Added callback for temperature '%s'", this->temperature_sensor_->get_name().c_str());
this->temperature_sensor_->add_on_state_callback([this](float state) { this->temperature_callback_(state); });
// Get initial value
if (this->temperature_sensor_->has_state()) {
this->temperature_callback_(this->temperature_sensor_->get_state());
this->temperature_ = this->temperature_sensor_->get_state();
}
this->humidity_sensor_->add_on_state_callback([this](float state) {
this->humidity_ = state;
this->enable_loop();
});
ESP_LOGD(TAG, " Added callback for relative humidity '%s'", this->humidity_sensor_->get_name().c_str());
this->humidity_sensor_->add_on_state_callback([this](float state) { this->humidity_callback_(state); });
// Get initial value
if (this->humidity_sensor_->has_state()) {
this->humidity_callback_(this->humidity_sensor_->get_state());
this->humidity_ = this->humidity_sensor_->get_state();
}
}
@@ -46,14 +53,12 @@ void AbsoluteHumidityComponent::dump_config() {
}
void AbsoluteHumidityComponent::loop() {
if (!this->next_update_) {
return;
}
this->next_update_ = false;
// Only run once
this->disable_loop();
// Ensure we have source data
const bool no_temperature = std::isnan(this->temperature_);
const bool no_humidity = std::isnan(this->humidity_);
const bool no_temperature{std::isnan(this->temperature_)};
const bool no_humidity{std::isnan(this->humidity_)};
if (no_temperature || no_humidity) {
if (no_temperature) {
ESP_LOGW(TAG, "No valid state from temperature sensor!");
@@ -67,9 +72,9 @@ void AbsoluteHumidityComponent::loop() {
}
// Convert to desired units
const float temperature_c = this->temperature_;
const float temperature_k = temperature_c + 273.15;
const float hr = this->humidity_ / 100;
const float temperature_c{this->temperature_};
const float temperature_k{temperature_c + 273.15f};
const float hr{this->humidity_ / 100.0f};
// Calculate saturation vapor pressure
float es;
@@ -90,7 +95,7 @@ void AbsoluteHumidityComponent::loop() {
}
// Calculate absolute humidity
const float absolute_humidity = vapor_density(es, hr, temperature_k);
const float absolute_humidity{vapor_density(es, hr, temperature_k)};
ESP_LOGD(TAG, "Saturation vapor pressure %f kPa, absolute humidity %f g/m³", es, absolute_humidity);
@@ -103,16 +108,16 @@ void AbsoluteHumidityComponent::loop() {
// More accurate than Tetens in normal meteorologic conditions
float AbsoluteHumidityComponent::es_buck(float temperature_c) {
float a, b, c, d;
if (temperature_c >= 0) {
a = 0.61121;
b = 18.678;
c = 234.5;
d = 257.14;
if (temperature_c >= 0.0f) {
a = 0.61121f;
b = 18.678f;
c = 234.5f;
d = 257.14f;
} else {
a = 0.61115;
b = 18.678;
c = 233.7;
d = 279.82;
a = 0.61115f;
b = 18.678f;
c = 233.7f;
d = 279.82f;
}
return a * expf((b - (temperature_c / c)) * (temperature_c / (d + temperature_c)));
}
@@ -120,14 +125,14 @@ float AbsoluteHumidityComponent::es_buck(float temperature_c) {
// Tetens equation (https://en.wikipedia.org/wiki/Tetens_equation)
float AbsoluteHumidityComponent::es_tetens(float temperature_c) {
float a, b;
if (temperature_c >= 0) {
a = 17.27;
b = 237.3;
if (temperature_c >= 0.0f) {
a = 17.27f;
b = 237.3f;
} else {
a = 21.875;
b = 265.5;
a = 21.875f;
b = 265.5f;
}
return 0.61078 * expf((a * temperature_c) / (temperature_c + b));
return 0.61078f * expf((a * temperature_c) / (temperature_c + b));
}
// Wobus equation
@@ -146,18 +151,18 @@ float AbsoluteHumidityComponent::es_wobus(float t) {
//
// Baker, Schlatter 17-MAY-1982 Original version.
const float c0 = +0.99999683e00;
const float c1 = -0.90826951e-02;
const float c2 = +0.78736169e-04;
const float c3 = -0.61117958e-06;
const float c4 = +0.43884187e-08;
const float c5 = -0.29883885e-10;
const float c6 = +0.21874425e-12;
const float c7 = -0.17892321e-14;
const float c8 = +0.11112018e-16;
const float c9 = -0.30994571e-19;
const float p = c0 + t * (c1 + t * (c2 + t * (c3 + t * (c4 + t * (c5 + t * (c6 + t * (c7 + t * (c8 + t * (c9)))))))));
return 0.61078 / pow(p, 8);
constexpr float c0{+0.99999683e+00f};
constexpr float c1{-0.90826951e-02f};
constexpr float c2{+0.78736169e-04f};
constexpr float c3{-0.61117958e-06f};
constexpr float c4{+0.43884187e-08f};
constexpr float c5{-0.29883885e-10f};
constexpr float c6{+0.21874425e-12f};
constexpr float c7{-0.17892321e-14f};
constexpr float c8{+0.11112018e-16f};
constexpr float c9{-0.30994571e-19f};
const float p{c0 + t * (c1 + t * (c2 + t * (c3 + t * (c4 + t * (c5 + t * (c6 + t * (c7 + t * (c8 + t * (c9)))))))))};
return 0.61078f / powf(p, 8.0f);
}
// From https://www.environmentalbiophysics.org/chalk-talk-how-to-calculate-absolute-humidity/
@@ -168,11 +173,10 @@ float AbsoluteHumidityComponent::vapor_density(float es, float hr, float ta) {
// hr = relative humidity [0-1]
// ta = absolute temperature (K)
const float ea = hr * es * 1000; // vapor pressure of the air (Pa)
const float mw = 18.01528; // molar mass of water (g⋅mol⁻¹)
const float r = 8.31446261815324; // molar gas constant (J⋅K⁻¹)
const float ea{hr * es * 1000.0f}; // vapor pressure of the air (Pa)
const float mw{18.01528f}; // molar mass of water (g⋅mol⁻¹)
const float r{8.31446261815324f}; // molar gas constant (J⋅K⁻¹)
return (ea * mw) / (r * ta);
}
} // namespace absolute_humidity
} // namespace esphome
} // namespace esphome::absolute_humidity
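The math in this file is independent of the C++ refactor: absolute humidity comes from a saturation-vapor-pressure equation (Buck here) fed into the ideal-gas vapor-density formula. A pure-Python transcription using the constants from the diff above (the 25 °C / 60 % RH figures are an illustrative input, not from the source):

```python
from math import exp


def es_buck(temperature_c: float) -> float:
    """Saturation vapor pressure in kPa (Buck equation, two temperature branches)."""
    if temperature_c >= 0.0:
        a, b, c, d = 0.61121, 18.678, 234.5, 257.14
    else:
        a, b, c, d = 0.61115, 18.678, 233.7, 279.82
    return a * exp((b - temperature_c / c) * (temperature_c / (d + temperature_c)))


def vapor_density(es_kpa: float, hr: float, ta_k: float) -> float:
    """Absolute humidity in g/m^3: ea * mw / (r * ta), with ea converted to Pa."""
    ea = hr * es_kpa * 1000.0  # vapor pressure of the air (Pa)
    mw = 18.01528              # molar mass of water (g/mol)
    r = 8.31446261815324       # molar gas constant (J/(mol*K))
    return (ea * mw) / (r * ta_k)


t_c, rh = 25.0, 0.60
ah = vapor_density(es_buck(t_c), rh, t_c + 273.15)
print(round(ah, 2))  # roughly 13.8 g/m^3 at 25 C and 60 % RH
```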
@@ -3,8 +3,7 @@
#include "esphome/core/component.h"
#include "esphome/components/sensor/sensor.h"
namespace esphome {
namespace absolute_humidity {
namespace esphome::absolute_humidity {
/// Enum listing all implemented saturation vapor pressure equations.
enum SaturationVaporPressureEquation {
@@ -16,8 +15,6 @@ enum SaturationVaporPressureEquation {
/// This class implements calculation of absolute humidity from temperature and relative humidity.
class AbsoluteHumidityComponent : public sensor::Sensor, public Component {
public:
AbsoluteHumidityComponent() = default;
void set_temperature_sensor(sensor::Sensor *temperature_sensor) { this->temperature_sensor_ = temperature_sensor; }
void set_humidity_sensor(sensor::Sensor *humidity_sensor) { this->humidity_sensor_ = humidity_sensor; }
void set_equation(SaturationVaporPressureEquation equation) { this->equation_ = equation; }
@@ -27,15 +24,6 @@ class AbsoluteHumidityComponent : public sensor::Sensor, public Component {
void loop() override;
protected:
void temperature_callback_(float state) {
this->next_update_ = true;
this->temperature_ = state;
}
void humidity_callback_(float state) {
this->next_update_ = true;
this->humidity_ = state;
}
/** Buck equation for saturation vapor pressure in kPa.
*
* @param temperature_c Air temperature in °C.
@@ -57,19 +45,15 @@ class AbsoluteHumidityComponent : public sensor::Sensor, public Component {
* @param es Saturation vapor pressure in kPa.
* @param hr Relative humidity 0 to 1.
* @param ta Absolute temperature in K.
* @param heater_duration The duration in ms that the heater should turn on for when measuring.
*/
static float vapor_density(float es, float hr, float ta);
sensor::Sensor *temperature_sensor_{nullptr};
sensor::Sensor *humidity_sensor_{nullptr};
bool next_update_{false};
float temperature_{NAN};
float humidity_{NAN};
SaturationVaporPressureEquation equation_;
};
} // namespace absolute_humidity
} // namespace esphome
} // namespace esphome::absolute_humidity
@@ -22,7 +22,8 @@ namespace adc {
#ifdef USE_ESP32
// clang-format off
#if (ESP_IDF_VERSION_MAJOR == 5 && \
#if ESP_IDF_VERSION_MAJOR >= 6 || \
(ESP_IDF_VERSION_MAJOR == 5 && \
((ESP_IDF_VERSION_MINOR == 0 && ESP_IDF_VERSION_PATCH >= 5) || \
(ESP_IDF_VERSION_MINOR == 1 && ESP_IDF_VERSION_PATCH >= 3) || \
(ESP_IDF_VERSION_MINOR >= 2)) \
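The change above extends the preprocessor predicate with an `ESP_IDF_VERSION_MAJOR >= 6` case the old check lacked, keeping the existing 5.0.5+/5.1.3+/5.2+ branches. A pure-Python model of the revised condition (semantics inferred from the macro):

```python
def adc_api_available(major: int, minor: int, patch: int) -> bool:
    """Mirror of the revised #if: true for IDF >= 6, or 5.0.5+, 5.1.3+, any 5.2+."""
    if major >= 6:
        return True
    if major == 5:
        return (minor == 0 and patch >= 5) or (minor == 1 and patch >= 3) or minor >= 2
    return False


print(adc_api_available(6, 0, 0), adc_api_available(5, 0, 4), adc_api_available(5, 2, 0))
```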
@@ -2,6 +2,7 @@
#include "adc_sensor.h"
#include "esphome/core/log.h"
#include <cinttypes>
namespace esphome {
namespace adc {
@@ -346,7 +347,8 @@ float ADCSensor::sample_autorange_() {
ESP_LOGVV(TAG, "Autorange summary:");
ESP_LOGVV(TAG, " Raw readings: 12db=%d, 6db=%d, 2.5db=%d, 0db=%d", raw12, raw6, raw2, raw0);
ESP_LOGVV(TAG, " Voltages: 12db=%.6f, 6db=%.6f, 2.5db=%.6f, 0db=%.6f", mv12, mv6, mv2, mv0);
ESP_LOGVV(TAG, " Coefficients: c12=%u, c6=%u, c2=%u, c0=%u, sum=%u", c12, c6, c2, c0, csum);
ESP_LOGVV(TAG, " Coefficients: c12=%" PRIu32 ", c6=%" PRIu32 ", c2=%" PRIu32 ", c0=%" PRIu32 ", sum=%" PRIu32, c12,
c6, c2, c0, csum);
if (csum == 0) {
ESP_LOGE(TAG, "Invalid weight sum in autorange calculation");
@@ -354,8 +356,10 @@ float ADCSensor::sample_autorange_() {
}
const float final_result = (mv12 * c12 + mv6 * c6 + mv2 * c2 + mv0 * c0) / csum;
ESP_LOGV(TAG, "Autorange final: (%.6f*%u + %.6f*%u + %.6f*%u + %.6f*%u)/%u = %.6fV", mv12, c12, mv6, c6, mv2, c2, mv0,
c0, csum, final_result);
ESP_LOGV(TAG,
"Autorange final: (%.6f*%" PRIu32 " + %.6f*%" PRIu32 " + %.6f*%" PRIu32 " + %.6f*%" PRIu32 ")/%" PRIu32
" = %.6fV",
mv12, c12, mv6, c6, mv2, c2, mv0, c0, csum, final_result);
return final_result;
}
@@ -12,11 +12,15 @@ CONF_ADS1118_ID = "ads1118_id"
ads1118_ns = cg.esphome_ns.namespace("ads1118")
ADS1118 = ads1118_ns.class_("ADS1118", cg.Component, spi.SPIDevice)
CONFIG_SCHEMA = cv.Schema(
{
cv.GenerateID(): cv.declare_id(ADS1118),
}
).extend(spi.spi_device_schema(cs_pin_required=True))
CONFIG_SCHEMA = (
cv.Schema(
{
cv.GenerateID(): cv.declare_id(ADS1118),
}
)
.extend(spi.spi_device_schema(cs_pin_required=True))
.extend(cv.COMPONENT_SCHEMA)
)
async def to_code(config):
@@ -35,7 +35,7 @@ CONFIG_SCHEMA = (
cv.Schema(
{
cv.GenerateID(): cv.declare_id(AGS10Component),
cv.Optional(CONF_TVOC): sensor.sensor_schema(
cv.Required(CONF_TVOC): sensor.sensor_schema(
unit_of_measurement=UNIT_PARTS_PER_BILLION,
icon=ICON_RADIATOR,
accuracy_decimals=0,
@@ -97,7 +97,7 @@ AGS10_NEW_I2C_ADDRESS_SCHEMA = cv.maybe_simple_value(
async def ags10newi2caddress_to_code(config, action_id, template_arg, args):
var = cg.new_Pvariable(action_id, template_arg)
await cg.register_parented(var, config[CONF_ID])
address = await cg.templatable(config[CONF_ADDRESS], args, int)
address = await cg.templatable(config[CONF_ADDRESS], args, cg.uint8)
cg.add(var.set_new_address(address))
return var
@@ -112,7 +112,9 @@ AGS10_SET_ZERO_POINT_ACTION_MODE = {
AGS10_SET_ZERO_POINT_SCHEMA = cv.Schema(
{
cv.GenerateID(): cv.use_id(AGS10Component),
cv.Required(CONF_MODE): cv.enum(AGS10_SET_ZERO_POINT_ACTION_MODE, upper=True),
cv.Required(CONF_MODE): cv.templatable(
cv.enum(AGS10_SET_ZERO_POINT_ACTION_MODE, upper=True)
),
cv.Optional(CONF_VALUE, default=0xFFFF): cv.templatable(cv.uint16_t),
},
)
@@ -127,8 +129,10 @@ AGS10_SET_ZERO_POINT_SCHEMA = cv.Schema(
async def ags10setzeropoint_to_code(config, action_id, template_arg, args):
var = cg.new_Pvariable(action_id, template_arg)
await cg.register_parented(var, config[CONF_ID])
mode = await cg.templatable(config.get(CONF_MODE), args, enumerate)
mode = await cg.templatable(
config.get(CONF_MODE), args, AGS10SetZeroPointActionMode
)
cg.add(var.set_mode(mode))
value = await cg.templatable(config[CONF_VALUE], args, int)
value = await cg.templatable(config[CONF_VALUE], args, cg.uint16)
cg.add(var.set_value(value))
return var
@@ -43,7 +43,7 @@ async def aic3204_set_volume_to_code(config, action_id, template_arg, args):
paren = await cg.get_variable(config[CONF_ID])
var = cg.new_Pvariable(action_id, template_arg, paren)
template_ = await cg.templatable(config.get(CONF_MODE), args, int)
template_ = await cg.templatable(config.get(CONF_MODE), args, cg.uint8)
cg.add(var.set_auto_mute_mode(template_))
return var
@@ -10,7 +10,6 @@ from esphome.const import (
CONF_ID,
CONF_MQTT_ID,
CONF_ON_STATE,
CONF_TRIGGER_ID,
CONF_WEB_SERVER,
)
from esphome.core import CORE, CoroPriority, coroutine_with_priority
@@ -34,39 +33,9 @@ CONF_ON_READY = "on_ready"
alarm_control_panel_ns = cg.esphome_ns.namespace("alarm_control_panel")
AlarmControlPanel = alarm_control_panel_ns.class_("AlarmControlPanel", cg.EntityBase)
StateTrigger = alarm_control_panel_ns.class_(
"StateTrigger", automation.Trigger.template()
)
TriggeredTrigger = alarm_control_panel_ns.class_(
"TriggeredTrigger", automation.Trigger.template()
)
ClearedTrigger = alarm_control_panel_ns.class_(
"ClearedTrigger", automation.Trigger.template()
)
ArmingTrigger = alarm_control_panel_ns.class_(
"ArmingTrigger", automation.Trigger.template()
)
PendingTrigger = alarm_control_panel_ns.class_(
"PendingTrigger", automation.Trigger.template()
)
ArmedHomeTrigger = alarm_control_panel_ns.class_(
"ArmedHomeTrigger", automation.Trigger.template()
)
ArmedNightTrigger = alarm_control_panel_ns.class_(
"ArmedNightTrigger", automation.Trigger.template()
)
ArmedAwayTrigger = alarm_control_panel_ns.class_(
"ArmedAwayTrigger", automation.Trigger.template()
)
DisarmedTrigger = alarm_control_panel_ns.class_(
"DisarmedTrigger", automation.Trigger.template()
)
ChimeTrigger = alarm_control_panel_ns.class_(
"ChimeTrigger", automation.Trigger.template()
)
ReadyTrigger = alarm_control_panel_ns.class_(
"ReadyTrigger", automation.Trigger.template()
)
StateAnyForwarder = alarm_control_panel_ns.class_("StateAnyForwarder")
StateEnterForwarder = alarm_control_panel_ns.class_("StateEnterForwarder")
AlarmControlPanelState = alarm_control_panel_ns.enum("AlarmControlPanelState")
ArmAwayAction = alarm_control_panel_ns.class_("ArmAwayAction", automation.Action)
ArmHomeAction = alarm_control_panel_ns.class_("ArmHomeAction", automation.Action)
@@ -89,61 +58,17 @@ _ALARM_CONTROL_PANEL_SCHEMA = (
cv.OnlyWith(CONF_MQTT_ID, "mqtt"): cv.declare_id(
mqtt.MQTTAlarmControlPanelComponent
),
cv.Optional(CONF_ON_STATE): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(StateTrigger),
}
),
cv.Optional(CONF_ON_TRIGGERED): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(TriggeredTrigger),
}
),
cv.Optional(CONF_ON_ARMING): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ArmingTrigger),
}
),
cv.Optional(CONF_ON_PENDING): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(PendingTrigger),
}
),
cv.Optional(CONF_ON_ARMED_HOME): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ArmedHomeTrigger),
}
),
cv.Optional(CONF_ON_ARMED_NIGHT): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ArmedNightTrigger),
}
),
cv.Optional(CONF_ON_ARMED_AWAY): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ArmedAwayTrigger),
}
),
cv.Optional(CONF_ON_DISARMED): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(DisarmedTrigger),
}
),
cv.Optional(CONF_ON_CLEARED): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ClearedTrigger),
}
),
cv.Optional(CONF_ON_CHIME): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ChimeTrigger),
}
),
cv.Optional(CONF_ON_READY): automation.validate_automation(
{
cv.GenerateID(CONF_TRIGGER_ID): cv.declare_id(ReadyTrigger),
}
),
cv.Optional(CONF_ON_STATE): automation.validate_automation({}),
cv.Optional(CONF_ON_TRIGGERED): automation.validate_automation({}),
cv.Optional(CONF_ON_ARMING): automation.validate_automation({}),
cv.Optional(CONF_ON_PENDING): automation.validate_automation({}),
cv.Optional(CONF_ON_ARMED_HOME): automation.validate_automation({}),
cv.Optional(CONF_ON_ARMED_NIGHT): automation.validate_automation({}),
cv.Optional(CONF_ON_ARMED_AWAY): automation.validate_automation({}),
cv.Optional(CONF_ON_DISARMED): automation.validate_automation({}),
cv.Optional(CONF_ON_CLEARED): automation.validate_automation({}),
cv.Optional(CONF_ON_CHIME): automation.validate_automation({}),
cv.Optional(CONF_ON_READY): automation.validate_automation({}),
}
)
)
@@ -186,41 +111,66 @@ ALARM_CONTROL_PANEL_CONDITION_SCHEMA = maybe_simple_id(
)
_CALLBACK_AUTOMATIONS = (
automation.CallbackAutomation(
CONF_ON_STATE, "add_on_state_callback", forwarder=StateAnyForwarder
),
automation.CallbackAutomation(
CONF_ON_TRIGGERED,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(
AlarmControlPanelState.ACP_STATE_TRIGGERED
),
),
automation.CallbackAutomation(
CONF_ON_ARMING,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(AlarmControlPanelState.ACP_STATE_ARMING),
),
automation.CallbackAutomation(
CONF_ON_PENDING,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(
AlarmControlPanelState.ACP_STATE_PENDING
),
),
automation.CallbackAutomation(
CONF_ON_ARMED_HOME,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(
AlarmControlPanelState.ACP_STATE_ARMED_HOME
),
),
automation.CallbackAutomation(
CONF_ON_ARMED_NIGHT,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(
AlarmControlPanelState.ACP_STATE_ARMED_NIGHT
),
),
automation.CallbackAutomation(
CONF_ON_ARMED_AWAY,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(
AlarmControlPanelState.ACP_STATE_ARMED_AWAY
),
),
automation.CallbackAutomation(
CONF_ON_DISARMED,
"add_on_state_callback",
forwarder=StateEnterForwarder.template(
AlarmControlPanelState.ACP_STATE_DISARMED
),
),
automation.CallbackAutomation(CONF_ON_CLEARED, "add_on_cleared_callback"),
automation.CallbackAutomation(CONF_ON_CHIME, "add_on_chime_callback"),
automation.CallbackAutomation(CONF_ON_READY, "add_on_ready_callback"),
)
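The forwarder entries above replace eleven per-state trigger classes with one state callback plus small filters. A Python analogue of the `StateEnterForwarder` idea, as a closure that forwards the state callback to a trigger only for one target state (illustrative names; the real forwarders are pointer-sized C++ structs):

```python
ACP_STATE_DISARMED, ACP_STATE_TRIGGERED = 0, 1

fired = []


def make_enter_forwarder(target_state, trigger):
    """Return a callable that invokes `trigger` only when called with target_state."""
    def forwarder(state):
        if state == target_state:
            trigger()
    return forwarder


fwd = make_enter_forwarder(ACP_STATE_TRIGGERED, lambda: fired.append("triggered"))
fwd(ACP_STATE_DISARMED)   # filtered out, no trigger
fwd(ACP_STATE_TRIGGERED)  # matches, trigger fires
print(fired)
```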
@setup_entity("alarm_control_panel")
async def setup_alarm_control_panel_core_(var, config):
for conf in config.get(CONF_ON_STATE, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_TRIGGERED, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_ARMING, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_PENDING, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_ARMED_HOME, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_ARMED_NIGHT, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_ARMED_AWAY, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_DISARMED, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_CLEARED, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_CHIME, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
for conf in config.get(CONF_ON_READY, []):
trigger = cg.new_Pvariable(conf[CONF_TRIGGER_ID], var)
await automation.build_automation(trigger, [], conf)
await automation.build_callback_automations(var, config, _CALLBACK_AUTOMATIONS)
if web_server_config := config.get(CONF_WEB_SERVER):
await web_server.add_entity_config(var, web_server_config)
if mqtt_id := config.get(CONF_MQTT_ID):
@@ -31,12 +31,12 @@ void AlarmControlPanel::publish_state(AlarmControlPanelState state) {
this->last_update_ = millis();
if (state != this->current_state_) {
auto prev_state = this->current_state_;
ESP_LOGD(TAG, "'%s' >> %s (was %s)", this->get_name().c_str(),
ESP_LOGV(TAG, "'%s' >> %s (was %s)", this->get_name().c_str(),
LOG_STR_ARG(alarm_control_panel_state_to_string(state)),
LOG_STR_ARG(alarm_control_panel_state_to_string(prev_state)));
this->current_state_ = state;
// Single state callback - triggers check get_state() for specific states
this->state_callback_.call();
// Single state callback - listeners receive the new state as an argument
this->state_callback_.call(state);
#if defined(USE_ALARM_CONTROL_PANEL) && defined(USE_CONTROLLER_REGISTRY)
ControllerRegistry::notify_alarm_control_panel_update(this);
#endif
@@ -51,22 +51,6 @@ void AlarmControlPanel::publish_state(AlarmControlPanelState state) {
}
}
void AlarmControlPanel::add_on_state_callback(std::function<void()> &&callback) {
this->state_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_cleared_callback(std::function<void()> &&callback) {
this->cleared_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_chime_callback(std::function<void()> &&callback) {
this->chime_callback_.add(std::move(callback));
}
void AlarmControlPanel::add_on_ready_callback(std::function<void()> &&callback) {
this->ready_callback_.add(std::move(callback));
}
void AlarmControlPanel::arm_with_code_(AlarmControlPanelCall &(AlarmControlPanelCall::*arm_method)(),
const char *code) {
auto call = this->make_call();
@@ -37,25 +37,24 @@ class AlarmControlPanel : public EntityBase {
*
* @param callback The callback function
*/
void add_on_state_callback(std::function<void()> &&callback);
template<typename F> void add_on_state_callback(F &&callback) {
this->state_callback_.add(std::forward<F>(callback));
}
/** Add a callback for when the state of the alarm_control_panel clears from triggered
*
* @param callback The callback function
*/
void add_on_cleared_callback(std::function<void()> &&callback);
/** Add a callback for when the state of the alarm_control_panel clears from triggered. */
template<typename F> void add_on_cleared_callback(F &&callback) {
this->cleared_callback_.add(std::forward<F>(callback));
}
/** Add a callback for when a chime zone goes from closed to open
*
* @param callback The callback function
*/
void add_on_chime_callback(std::function<void()> &&callback);
/** Add a callback for when a chime zone goes from closed to open. */
template<typename F> void add_on_chime_callback(F &&callback) {
this->chime_callback_.add(std::forward<F>(callback));
}
/** Add a callback for when a ready state changes
*
* @param callback The callback function
*/
void add_on_ready_callback(std::function<void()> &&callback);
/** Add a callback for when a ready state changes. */
template<typename F> void add_on_ready_callback(F &&callback) {
this->ready_callback_.add(std::forward<F>(callback));
}
/** A numeric representation of the supported features as per HomeAssistant
*
@@ -146,8 +145,8 @@ class AlarmControlPanel : public EntityBase {
uint32_t last_update_;
// the call control function
virtual void control(const AlarmControlPanelCall &call) = 0;
// state callback - triggers check get_state() for specific state
LazyCallbackManager<void()> state_callback_{};
// state callback - passes the new state to listeners
LazyCallbackManager<void(AlarmControlPanelState)> state_callback_{};
// clear callback - fires when leaving TRIGGERED state
LazyCallbackManager<void()> cleared_callback_{};
// chime callback
@@ -5,60 +5,27 @@
namespace esphome::alarm_control_panel {
/// Trigger on any state change
class StateTrigger : public Trigger<> {
public:
explicit StateTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_state_callback([this]() { this->trigger(); });
/// Callback forwarder that triggers an Automation<> on any state change.
/// Pointer-sized (single Automation* field) to fit inline in Callback::ctx_.
struct StateAnyForwarder {
Automation<> *automation;
void operator()(AlarmControlPanelState /*state*/) const { this->automation->trigger(); }
};
/// Callback forwarder that triggers an Automation<> only when the alarm enters a specific state.
/// Pointer-sized (single Automation* field) to fit inline in Callback::ctx_.
template<AlarmControlPanelState State> struct StateEnterForwarder {
Automation<> *automation;
void operator()(AlarmControlPanelState state) const {
if (state == State)
this->automation->trigger();
}
};
/// Template trigger that fires when entering a specific state
template<AlarmControlPanelState State> class StateEnterTrigger : public Trigger<> {
public:
explicit StateEnterTrigger(AlarmControlPanel *alarm_control_panel) : alarm_control_panel_(alarm_control_panel) {
alarm_control_panel->add_on_state_callback([this]() {
if (this->alarm_control_panel_->get_state() == State)
this->trigger();
});
}
protected:
AlarmControlPanel *alarm_control_panel_;
};
// Type aliases for state-specific triggers
using TriggeredTrigger = StateEnterTrigger<ACP_STATE_TRIGGERED>;
using ArmingTrigger = StateEnterTrigger<ACP_STATE_ARMING>;
using PendingTrigger = StateEnterTrigger<ACP_STATE_PENDING>;
using ArmedHomeTrigger = StateEnterTrigger<ACP_STATE_ARMED_HOME>;
using ArmedNightTrigger = StateEnterTrigger<ACP_STATE_ARMED_NIGHT>;
using ArmedAwayTrigger = StateEnterTrigger<ACP_STATE_ARMED_AWAY>;
using DisarmedTrigger = StateEnterTrigger<ACP_STATE_DISARMED>;
/// Trigger when leaving TRIGGERED state (alarm cleared)
class ClearedTrigger : public Trigger<> {
public:
explicit ClearedTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_cleared_callback([this]() { this->trigger(); });
}
};
/// Trigger on chime event (zone opened while disarmed)
class ChimeTrigger : public Trigger<> {
public:
explicit ChimeTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_chime_callback([this]() { this->trigger(); });
}
};
/// Trigger on ready state change
class ReadyTrigger : public Trigger<> {
public:
explicit ReadyTrigger(AlarmControlPanel *alarm_control_panel) {
alarm_control_panel->add_on_ready_callback([this]() { this->trigger(); });
}
};
static_assert(sizeof(StateAnyForwarder) <= sizeof(void *));
static_assert(std::is_trivially_copyable_v<StateAnyForwarder>);
static_assert(sizeof(StateEnterForwarder<ACP_STATE_TRIGGERED>) <= sizeof(void *));
static_assert(std::is_trivially_copyable_v<StateEnterForwarder<ACP_STATE_TRIGGERED>>);
template<typename... Ts> class ArmAwayAction : public Action<Ts...> {
public:
@@ -9,6 +9,10 @@ from esphome.const import (
CONF_POWER,
CONF_SPEED,
CONF_VOLTAGE,
DEVICE_CLASS_CURRENT,
DEVICE_CLASS_POWER,
DEVICE_CLASS_VOLTAGE,
STATE_CLASS_MEASUREMENT,
UNIT_AMPERE,
UNIT_CUBIC_METER_PER_HOUR,
UNIT_METER,
@@ -27,26 +31,35 @@ CONFIG_SCHEMA = (
cv.Optional(CONF_FLOW): sensor.sensor_schema(
unit_of_measurement=UNIT_CUBIC_METER_PER_HOUR,
accuracy_decimals=2,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional(CONF_HEAD): sensor.sensor_schema(
unit_of_measurement=UNIT_METER,
accuracy_decimals=2,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional(CONF_POWER): sensor.sensor_schema(
unit_of_measurement=UNIT_WATT,
accuracy_decimals=2,
device_class=DEVICE_CLASS_POWER,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional(CONF_CURRENT): sensor.sensor_schema(
unit_of_measurement=UNIT_AMPERE,
accuracy_decimals=2,
device_class=DEVICE_CLASS_CURRENT,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional(CONF_SPEED): sensor.sensor_schema(
unit_of_measurement=UNIT_REVOLUTIONS_PER_MINUTE,
accuracy_decimals=2,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional(CONF_VOLTAGE): sensor.sensor_schema(
unit_of_measurement=UNIT_VOLT,
accuracy_decimals=2,
device_class=DEVICE_CLASS_VOLTAGE,
state_class=STATE_CLASS_MEASUREMENT,
),
}
)
@@ -8,6 +8,7 @@ from esphome.const import (
DEVICE_CLASS_BATTERY,
ENTITY_CATEGORY_DIAGNOSTIC,
ICON_BRIGHTNESS_5,
STATE_CLASS_MEASUREMENT,
UNIT_PERCENT,
)
@@ -26,11 +27,13 @@ CONFIG_SCHEMA = (
device_class=DEVICE_CLASS_BATTERY,
accuracy_decimals=0,
entity_category=ENTITY_CATEGORY_DIAGNOSTIC,
state_class=STATE_CLASS_MEASUREMENT,
),
cv.Optional(CONF_ILLUMINANCE): sensor.sensor_schema(
unit_of_measurement=UNIT_PERCENT,
icon=ICON_BRIGHTNESS_5,
accuracy_decimals=0,
state_class=STATE_CLASS_MEASUREMENT,
),
}
)
@@ -19,8 +19,8 @@ class AnalogThresholdBinarySensor : public Component, public binary_sensor::Bina
protected:
sensor::Sensor *sensor_{nullptr};
TemplatableValue<float> upper_threshold_{};
TemplatableValue<float> lower_threshold_{};
TemplatableFn<float> upper_threshold_{};
TemplatableFn<float> lower_threshold_{};
bool raw_state_{false}; // Pre-filter state for hysteresis logic
};
@@ -40,10 +40,10 @@ async def to_code(config):
cg.add(var.set_sensor(sens))
if isinstance(config[CONF_THRESHOLD], dict):
lower = await cg.templatable(config[CONF_THRESHOLD][CONF_LOWER], [], float)
upper = await cg.templatable(config[CONF_THRESHOLD][CONF_UPPER], [], float)
lower = await cg.templatable(config[CONF_THRESHOLD][CONF_LOWER], [], cg.float_)
upper = await cg.templatable(config[CONF_THRESHOLD][CONF_UPPER], [], cg.float_)
else:
lower = await cg.templatable(config[CONF_THRESHOLD], [], float)
lower = await cg.templatable(config[CONF_THRESHOLD], [], cg.float_)
upper = lower
cg.add(var.set_upper_threshold(upper))
cg.add(var.set_lower_threshold(lower))
@@ -301,11 +301,12 @@ CONFIG_SCHEMA = cv.All(
# Maximum queued send buffers per connection before dropping connection
# Each buffer uses ~8-12 bytes overhead plus actual message size
# Platform defaults based on available RAM and typical message rates:
# CONF_MAX_SEND_QUEUE defaults are power of 2 for efficient modulo
cv.SplitDefault(
CONF_MAX_SEND_QUEUE,
esp8266=5, # Limited RAM, need to fail fast
esp8266=4, # Limited RAM, need to fail fast
esp32=8, # More RAM, can buffer more
rp2040=5, # Limited RAM
rp2040=8, # Moderate RAM
bk72xx=8, # Moderate RAM
nrf52=8, # Moderate RAM
rtl87xx=8, # Moderate RAM
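The new comment says the defaults were moved to powers of two "for efficient modulo" — the assumed rationale being that for a power-of-two queue size N, `i % N` reduces to the cheaper bitmask `i & (N - 1)`:

```python
N = 8  # matches the esp32 default above; must be a power of two
for i in (0, 7, 8, 9, 255):
    # For power-of-two N, the low bits of i are exactly i mod N.
    assert i % N == i & (N - 1)
print("bitmask modulo matches % for power-of-two N")
```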
@@ -454,6 +455,9 @@ async def to_code(config: ConfigType) -> None:
cg.add_define("USE_API_PLAINTEXT")
cg.add_define("USE_API_NOISE")
cg.add_library("esphome/noise-c", "0.1.11")
# Enable optimized memzero/memcmp in libsodium instead of volatile byte loops
cg.add_build_flag("-DHAVE_WEAK_SYMBOLS=1")
cg.add_build_flag("-DHAVE_INLINE_ASM=1")
else:
cg.add_define("USE_API_PLAINTEXT")
File diff suppressed because it is too large
@@ -44,6 +44,12 @@ class APIBuffer {
this->reserve(n);
this->size_ = n; // no zero-fill
}
/// Reserve capacity for max(reserve_size, new_size) bytes, then set size to new_size.
/// Single grow_ check regardless of argument order.
inline void reserve_and_resize(size_t reserve_size, size_t new_size) ESPHOME_ALWAYS_INLINE {
this->reserve(std::max(reserve_size, new_size));
this->size_ = new_size;
}
uint8_t *data() { return this->data_.get(); }
const uint8_t *data() const { return this->data_.get(); }
size_t size() const { return this->size_; }
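The contract of the new `reserve_and_resize()` is that capacity grows once to `max(reserve_size, new_size)` while the logical size becomes `new_size`, so the caller needn't order the arguments. A sketch of that contract (plain tuples stand in for the buffer state; not the APIBuffer implementation):

```python
def reserve_and_resize(capacity: int, reserve_size: int, new_size: int):
    """Return (new_capacity, new_size): one growth check covers both arguments."""
    capacity = max(capacity, reserve_size, new_size)
    return capacity, new_size


print(reserve_and_resize(0, 64, 16))  # capacity grows to 64, size is 16
print(reserve_and_resize(0, 16, 64))  # argument order doesn't change the capacity
```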
@@ -72,6 +72,14 @@ static constexpr uint32_t HANDSHAKE_TIMEOUT_MS = 60000;
static constexpr auto ESPHOME_VERSION_REF = StringRef::from_lit(ESPHOME_VERSION);
// Cross-validate C++ constants against proto max_data_length annotations in api.proto
static_assert(MAC_ADDRESS_PRETTY_BUFFER_SIZE - 1 == 17,
"Update max_data_length for mac_address/bluetooth_mac_address in api.proto");
static_assert(Application::BUILD_TIME_STR_SIZE - 1 == 25, "Update max_data_length for compilation_time in api.proto");
static_assert(sizeof(ESPHOME_VERSION) - 1 <= 32, "Update max_data_length for esphome_version in api.proto");
static_assert(ESPHOME_DEVICE_NAME_MAX_LEN <= 31, "Update max_data_length for name in api.proto");
static_assert(ESPHOME_FRIENDLY_NAME_MAX_LEN <= 120, "Update max_data_length for friendly_name in api.proto");
static const char *const TAG = "api.connection";
#ifdef USE_CAMERA
static const int CAMERA_STOP_STREAM = 5000;
@@ -132,8 +140,6 @@ APIConnection::APIConnection(std::unique_ptr<socket::Socket> sock, APIServer *pa
#endif
}
uint32_t APIConnection::get_batch_delay_ms_() const { return this->parent_->get_batch_delay(); }
void APIConnection::start() {
this->last_traffic_ = App.get_loop_component_start_time();
@@ -234,7 +240,7 @@ void APIConnection::loop() {
this->last_traffic_ = now;
}
// read a packet
this->read_message(buffer.data_len, buffer.type, buffer.data);
this->read_message_(buffer.data_len, buffer.type, buffer.data);
if (this->flags_.remove)
return;
}
@@ -309,6 +315,8 @@ void APIConnection::process_active_iterator_() {
this->destroy_active_iterator_();
if (this->flags_.state_subscription) {
this->begin_iterator_(ActiveIterator::INITIAL_STATE);
} else {
this->finalize_iterator_sync_();
}
} else {
this->process_iterator_batch_(this->iterator_storage_.list_entities);
@@ -316,21 +324,27 @@ void APIConnection::process_active_iterator_() {
} else { // INITIAL_STATE
if (this->iterator_storage_.initial_state.completed()) {
this->destroy_active_iterator_();
// Process any remaining batched messages immediately
if (!this->deferred_batch_.empty()) {
this->process_batch_();
}
// Now that everything is sent, enable immediate sending for future state changes
this->flags_.should_try_send_immediately = true;
// Release excess memory from buffers that grew during initial sync
this->deferred_batch_.release_buffer();
this->helper_->release_buffers();
this->finalize_iterator_sync_();
} else {
this->process_iterator_batch_(this->iterator_storage_.initial_state);
}
}
}
void APIConnection::finalize_iterator_sync_() {
// Flush any remaining batched messages immediately so clients
// receive completion responses (e.g. ListEntitiesDoneResponse)
// without waiting for the batch timer.
if (!this->deferred_batch_.empty()) {
this->process_batch_();
}
// Enable immediate sending for future state changes
this->flags_.should_try_send_immediately = true;
// Release excess memory from buffers that grew during initial sync
this->deferred_batch_.release_buffer();
this->helper_->release_buffers();
}
void APIConnection::process_iterator_batch_(ComponentIterator &iterator) {
size_t initial_size = this->deferred_batch_.size();
size_t max_batch = this->get_max_batch_size_();
@@ -400,7 +414,7 @@ uint16_t APIConnection::fill_and_encode_entity_info(EntityBase *entity, InfoResp
#ifdef USE_DEVICES
msg.device_id = entity->get_device_id();
#endif
return encode_to_buffer(size_fn(&msg), encode_fn, &msg, conn, remaining_size);
return encode_to_buffer_slow(size_fn(&msg), encode_fn, &msg, conn, remaining_size);
}
uint16_t APIConnection::fill_and_encode_entity_info_with_device_class(EntityBase *entity, InfoResponseProtoMessage &msg,
@@ -1465,7 +1479,7 @@ void APIConnection::send_infrared_rf_receive_event(const InfraredRFReceiveEvent
void APIConnection::on_serial_proxy_configure_request(const SerialProxyConfigureRequest &msg) {
auto &proxies = App.get_serial_proxies();
if (msg.instance >= proxies.size()) {
ESP_LOGW(TAG, "Serial proxy instance %u out of range (max %u)", msg.instance,
ESP_LOGW(TAG, "Serial proxy instance %" PRIu32 " out of range (max %" PRIu32 ")", msg.instance,
static_cast<uint32_t>(proxies.size()));
return;
}
@@ -1476,7 +1490,7 @@ void APIConnection::on_serial_proxy_configure_request(const SerialProxyConfigure
void APIConnection::on_serial_proxy_write_request(const SerialProxyWriteRequest &msg) {
auto &proxies = App.get_serial_proxies();
if (msg.instance >= proxies.size()) {
ESP_LOGW(TAG, "Serial proxy instance %u out of range", msg.instance);
ESP_LOGW(TAG, "Serial proxy instance %" PRIu32 " out of range", msg.instance);
return;
}
proxies[msg.instance]->write_from_client(msg.data, msg.data_len);
@@ -1485,7 +1499,7 @@ void APIConnection::on_serial_proxy_write_request(const SerialProxyWriteRequest
void APIConnection::on_serial_proxy_set_modem_pins_request(const SerialProxySetModemPinsRequest &msg) {
auto &proxies = App.get_serial_proxies();
if (msg.instance >= proxies.size()) {
ESP_LOGW(TAG, "Serial proxy instance %u out of range", msg.instance);
ESP_LOGW(TAG, "Serial proxy instance %" PRIu32 " out of range", msg.instance);
return;
}
proxies[msg.instance]->set_modem_pins(msg.line_states);
@@ -1494,7 +1508,7 @@ void APIConnection::on_serial_proxy_set_modem_pins_request(const SerialProxySetM
void APIConnection::on_serial_proxy_get_modem_pins_request(const SerialProxyGetModemPinsRequest &msg) {
auto &proxies = App.get_serial_proxies();
if (msg.instance >= proxies.size()) {
ESP_LOGW(TAG, "Serial proxy instance %u out of range", msg.instance);
ESP_LOGW(TAG, "Serial proxy instance %" PRIu32 " out of range", msg.instance);
return;
}
SerialProxyGetModemPinsResponse resp{};
@@ -1506,7 +1520,7 @@ void APIConnection::on_serial_proxy_get_modem_pins_request(const SerialProxyGetM
void APIConnection::on_serial_proxy_request(const SerialProxyRequest &msg) {
auto &proxies = App.get_serial_proxies();
if (msg.instance >= proxies.size()) {
ESP_LOGW(TAG, "Serial proxy instance %u out of range", msg.instance);
ESP_LOGW(TAG, "Serial proxy instance %" PRIu32 " out of range", msg.instance);
return;
}
switch (msg.type) {
@@ -1519,16 +1533,16 @@ void APIConnection::on_serial_proxy_request(const SerialProxyRequest &msg) {
resp.instance = msg.instance;
resp.type = enums::SERIAL_PROXY_REQUEST_TYPE_FLUSH;
switch (proxies[msg.instance]->flush_port()) {
case uart::FlushResult::SUCCESS:
case uart::UARTFlushResult::UART_FLUSH_RESULT_SUCCESS:
resp.status = enums::SERIAL_PROXY_STATUS_OK;
break;
case uart::FlushResult::ASSUMED_SUCCESS:
case uart::UARTFlushResult::UART_FLUSH_RESULT_ASSUMED_SUCCESS:
resp.status = enums::SERIAL_PROXY_STATUS_ASSUMED_SUCCESS;
break;
case uart::FlushResult::TIMEOUT:
case uart::UARTFlushResult::UART_FLUSH_RESULT_TIMEOUT:
resp.status = enums::SERIAL_PROXY_STATUS_TIMEOUT;
break;
case uart::FlushResult::FAILED:
case uart::UARTFlushResult::UART_FLUSH_RESULT_FAILED:
resp.status = enums::SERIAL_PROXY_STATUS_ERROR;
break;
}
@@ -1536,7 +1550,7 @@ void APIConnection::on_serial_proxy_request(const SerialProxyRequest &msg) {
break;
}
default:
ESP_LOGW(TAG, "Unknown serial proxy request type: %u", static_cast<uint32_t>(msg.type));
ESP_LOGW(TAG, "Unknown serial proxy request type: %" PRIu32, static_cast<uint32_t>(msg.type));
break;
}
}
@@ -1549,6 +1563,7 @@ uint16_t APIConnection::try_send_infrared_info(EntityBase *entity, APIConnection
auto *infrared = static_cast<infrared::Infrared *>(entity);
ListEntitiesInfraredResponse msg;
msg.capabilities = infrared->get_capability_flags();
msg.receiver_frequency = infrared->get_traits().get_receiver_frequency_hz();
return fill_and_encode_entity_info(infrared, msg, conn, remaining_size);
}
#endif
@@ -1717,6 +1732,7 @@ bool APIConnection::send_device_info_response_() {
static constexpr auto MANUFACTURER = StringRef::from_lit(ESPHOME_MANUFACTURER);
resp.manufacturer = MANUFACTURER;
#endif
static_assert(sizeof(ESPHOME_MANUFACTURER) - 1 <= 20, "Update max_data_length for manufacturer in api.proto");
#undef ESPHOME_MANUFACTURER
#ifdef USE_ESP8266
@@ -1994,53 +2010,15 @@ bool APIConnection::send_message_(uint32_t payload_size, uint8_t message_type, M
size_t write_start = shared_buf.size();
shared_buf.resize(write_start + payload_size);
ProtoWriteBuffer buffer{&shared_buf, write_start};
encode_fn(msg, buffer);
encode_fn(msg, buffer PROTO_ENCODE_DEBUG_INIT(&shared_buf));
return this->send_buffer(ProtoWriteBuffer{&shared_buf}, message_type);
}
// Encodes a message to the buffer and returns the total number of bytes used,
// including header and footer overhead. Returns 0 if the message doesn't fit.
uint16_t APIConnection::encode_to_buffer(uint32_t calculated_size, MessageEncodeFn encode_fn, const void *msg,
APIConnection *conn, uint32_t remaining_size) {
#ifdef HAS_PROTO_MESSAGE_DUMP
if (conn->flags_.log_only_mode) {
auto *proto_msg = static_cast<const ProtoMessage *>(msg);
DumpBuffer dump_buf;
conn->log_send_message_(proto_msg->message_name(), proto_msg->dump_to(dump_buf));
return 1;
}
#endif
// Cache frame sizes to avoid repeated virtual calls
const uint8_t header_padding = conn->helper_->frame_header_padding();
const uint8_t footer_size = conn->helper_->frame_footer_size();
// encode_to_buffer is defined inline in api_connection.h (ESPHOME_ALWAYS_INLINE)
// Calculate total size with padding for buffer allocation
size_t total_calculated_size = calculated_size + header_padding + footer_size;
// Check if it fits
if (total_calculated_size > remaining_size)
return 0; // Doesn't fit
auto &shared_buf = conn->parent_->get_shared_buffer_ref();
if (conn->flags_.batch_first_message) {
// First message - buffer already prepared by caller, just clear flag
conn->flags_.batch_first_message = false;
} else {
// Batch message second or later
// Add padding for previous message footer + this message header
size_t current_size = shared_buf.size();
shared_buf.reserve(current_size + total_calculated_size);
shared_buf.resize(current_size + footer_size + header_padding);
}
// Pre-resize buffer to include payload, then encode through raw pointer
size_t write_start = shared_buf.size();
shared_buf.resize(write_start + calculated_size);
ProtoWriteBuffer buffer{&shared_buf, write_start};
encode_fn(msg, buffer);
// Return total size (header + payload + footer)
return static_cast<uint16_t>(header_padding + calculated_size + footer_size);
// Noinline version for cold paths — single shared copy
uint16_t APIConnection::encode_to_buffer_slow(uint32_t calculated_size, MessageEncodeFn encode_fn, const void *msg,
APIConnection *conn, uint32_t remaining_size) {
return encode_to_buffer(calculated_size, encode_fn, msg, conn, remaining_size);
}
bool APIConnection::send_buffer(ProtoWriteBuffer buffer, uint8_t message_type) {
const bool is_log_message = (message_type == SubscribeLogsResponse::MESSAGE_TYPE);
@@ -2072,37 +2050,9 @@ void APIConnection::on_fatal_error() {
this->flags_.remove = true;
}
void __attribute__((flatten)) APIConnection::DeferredBatch::push_item(const BatchItem &item) { items.push_back(item); }
void APIConnection::DeferredBatch::add_item(EntityBase *entity, uint8_t message_type, uint8_t estimated_size,
uint8_t aux_data_index) {
// Check if we already have a message of this type for this entity
// This provides deduplication per entity/message_type combination
// O(n) but optimized for RAM and not performance.
// Skip deduplication for events - they are edge-triggered, every occurrence matters
#ifdef USE_EVENT
if (message_type != EventResponse::MESSAGE_TYPE)
#endif
{
for (const auto &item : items) {
if (item.entity == entity && item.message_type == message_type)
return; // Already queued
}
}
// No existing item found (or event), add new one
this->push_item({entity, message_type, estimated_size, aux_data_index});
}
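As the comments above note, `add_item` deduplicates by linearly scanning the queued items for a matching (entity, message_type) pair: O(n), but batches are short and this avoids the RAM cost of a hash map. A self-contained sketch of that scan (the `Item` type and `const void *` entity handle here are simplifications, not the real `BatchItem`):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Item {
  const void *entity;   // stand-in for EntityBase*
  uint8_t message_type;
};

// Append unless an item with the same (entity, message_type) pair is already queued.
// Linear scan: O(n), chosen for RAM efficiency over a hash-based lookup.
bool add_item_dedup(std::vector<Item> &items, const void *entity, uint8_t type) {
  for (const auto &it : items) {
    if (it.entity == entity && it.message_type == type)
      return false;  // already queued, skip duplicate
  }
  items.push_back({entity, type});
  return true;
}
```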
void APIConnection::DeferredBatch::add_item_front(EntityBase *entity, uint8_t message_type, uint8_t estimated_size) {
// Add high priority message and swap to front
// This avoids expensive vector::insert which shifts all elements
// Note: We only ever have one high-priority message at a time (ping OR disconnect)
// If we're disconnecting, pings are blocked, so this simple swap is sufficient
this->push_item({entity, message_type, estimated_size, AUX_DATA_UNUSED});
if (items.size() > 1) {
// Swap the new high-priority item to the front
std::swap(items.front(), items.back());
}
bool APIConnection::schedule_message_front_(EntityBase *entity, uint8_t message_type, uint8_t estimated_size) {
this->deferred_batch_.add_item_front(entity, message_type, estimated_size);
return this->schedule_batch_();
}
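The `add_item_front` path above sidesteps `vector::insert` (which shifts every element) by pushing to the back and swapping with the front; this is only safe because at most one high-priority message (ping or disconnect) is ever pending. A generic sketch of the trick:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Put `v` at the front without shifting: push_back, then swap front/back.
// Note: the old front moves to the back, so relative order among existing
// elements is NOT preserved. Acceptable when only one high-priority item
// can ever exist at a time, as in the deferred batch above.
template <typename T>
void push_front_by_swap(std::vector<T> &items, T v) {
  items.push_back(std::move(v));
  if (items.size() > 1)
    std::swap(items.front(), items.back());
}
```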
bool APIConnection::send_message_smart_(EntityBase *entity, uint8_t message_type, uint8_t estimated_size,
@@ -2195,17 +2145,15 @@ void APIConnection::process_batch_multi_(APIBuffer &shared_buf, size_t num_items
"MessageInfo must remain trivially destructible with this placement-new approach");
const size_t messages_to_process = std::min(num_items, MAX_MESSAGES_PER_BATCH);
const uint8_t frame_overhead = header_padding + footer_size;
// Stack-allocated array for message info
alignas(MessageInfo) char message_info_storage[MAX_MESSAGES_PER_BATCH * sizeof(MessageInfo)];
MessageInfo *message_info = reinterpret_cast<MessageInfo *>(message_info_storage);
size_t items_processed = 0;
uint16_t remaining_size = std::numeric_limits<uint16_t>::max();
// Track where each message's header padding begins in the buffer
// For plaintext: this is where the 6-byte header padding starts
// For noise: this is where the 7-byte header padding starts
// The actual message data follows after the header padding
// Track where each message's header begins in the buffer
// First message: offset 0 (max padding, may have unused leading bytes)
// Subsequent messages: offset points to exact header start (no gaps)
uint32_t current_offset = 0;
// Process items and encode directly to buffer (up to our limit)
@@ -2221,13 +2169,14 @@ void APIConnection::process_batch_multi_(APIBuffer &shared_buf, size_t num_items
}
// Message was encoded successfully
// payload_size is header_padding + actual payload size + footer_size
uint16_t proto_payload_size = payload_size - frame_overhead;
// payload_size = header_size + proto_payload_size + footer_size
uint16_t proto_payload_size = payload_size - this->batch_header_size_ - footer_size;
// Use placement new to construct MessageInfo in pre-allocated stack array
// This avoids default-constructing all MAX_MESSAGES_PER_BATCH elements
// Explicit destruction is not needed because MessageInfo is trivially destructible,
// as ensured by the static_assert in its definition.
new (&message_info[items_processed++]) MessageInfo(item.message_type, current_offset, proto_payload_size);
new (&message_info[items_processed++])
MessageInfo(item.message_type, current_offset, proto_payload_size, this->batch_header_size_);
// After first message, set remaining size to MAX_BATCH_PACKET_SIZE to avoid fragmentation
if (items_processed == 1) {
remaining_size = MAX_BATCH_PACKET_SIZE;
@@ -2244,6 +2193,13 @@ void APIConnection::process_batch_multi_(APIBuffer &shared_buf, size_t num_items
shared_buf.resize(shared_buf.size() + footer_size);
}
// Ensure TCP_NODELAY is on before writing batch data.
// Log messages enable Nagle (NODELAY off) to coalesce small packets.
// Without this, batch data written to the socket sits in LWIP's Nagle
// buffer — the remote won't ACK until it sends its own data (e.g. a
// ping), which can take 20+ seconds.
this->helper_->set_nodelay_for_message(false);
// Send all collected messages
APIError err = this->helper_->write_protobuf_messages(ProtoWriteBuffer{&shared_buf},
std::span<const MessageInfo>(message_info, items_processed));
@@ -2277,6 +2233,7 @@ void APIConnection::process_batch_multi_(APIBuffer &shared_buf, size_t num_items
uint16_t APIConnection::dispatch_message_(const DeferredBatch::BatchItem &item, uint32_t remaining_size,
bool batch_first) {
this->flags_.batch_first_message = batch_first;
this->batch_message_type_ = item.message_type;
#ifdef USE_EVENT
// Events need aux_data_index to look up event type from entity
if (item.message_type == EventResponse::MESSAGE_TYPE) {
File diff suppressed because it is too large
@@ -100,149 +100,81 @@ const LogString *api_error_to_logstr(APIError err) {
return LOG_STR("UNKNOWN");
}
// Default implementation for loop - handles sending buffered data
APIError APIFrameHelper::loop() {
if (this->tx_buf_count_ > 0) {
APIError err = try_send_tx_buf_();
if (err != APIError::OK && err != APIError::WOULD_BLOCK) {
return err;
}
}
return APIError::OK; // Convert WOULD_BLOCK to OK to avoid connection termination
}
// Common socket write error handling
APIError APIFrameHelper::handle_socket_write_error_() {
if (errno == EWOULDBLOCK || errno == EAGAIN) {
return APIError::WOULD_BLOCK;
}
HELPER_LOG("Socket write failed with errno %d", errno);
this->state_ = State::FAILED;
return APIError::SOCKET_WRITE_FAILED;
}
// Helper method to buffer data from IOVs
void APIFrameHelper::buffer_data_from_iov_(const struct iovec *iov, int iovcnt, uint16_t total_write_len,
uint16_t offset) {
// Check if queue is full
if (this->tx_buf_count_ >= API_MAX_SEND_QUEUE) {
HELPER_LOG("Send queue full (%u buffers), dropping connection", this->tx_buf_count_);
this->state_ = State::FAILED;
return;
}
uint16_t buffer_size = total_write_len - offset;
auto &buffer = this->tx_buf_[this->tx_buf_tail_];
buffer = std::make_unique<SendBuffer>(SendBuffer{
.data = std::make_unique<uint8_t[]>(buffer_size),
.size = buffer_size,
.offset = 0,
});
uint16_t to_skip = offset;
uint16_t write_pos = 0;
for (int i = 0; i < iovcnt; i++) {
if (to_skip >= iov[i].iov_len) {
// Skip this entire segment
to_skip -= static_cast<uint16_t>(iov[i].iov_len);
} else {
// Include this segment (partially or fully)
const uint8_t *src = reinterpret_cast<uint8_t *>(iov[i].iov_base) + to_skip;
uint16_t len = static_cast<uint16_t>(iov[i].iov_len) - to_skip;
std::memcpy(buffer->data.get() + write_pos, src, len);
write_pos += len;
to_skip = 0;
}
}
// Update circular buffer tracking
this->tx_buf_tail_ = (this->tx_buf_tail_ + 1) % API_MAX_SEND_QUEUE;
this->tx_buf_count_++;
}
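The core of `buffer_data_from_iov_` is the skip/copy loop: flatten an iovec array into one contiguous buffer while skipping the first `offset` bytes, which were already written to the socket. A stand-alone sketch of just that loop (returning a `std::vector` rather than filling a preallocated `SendBuffer`):

```cpp
#include <cassert>
#include <cstdint>
#include <sys/uio.h>  // struct iovec
#include <vector>

// Copy iov[0..iovcnt) into a contiguous buffer, skipping the first `offset`
// bytes overall (e.g. the part of a frame a partial write already sent).
std::vector<uint8_t> flatten_iov_from(const struct iovec *iov, int iovcnt, uint16_t offset) {
  std::vector<uint8_t> out;
  uint16_t to_skip = offset;
  for (int i = 0; i < iovcnt; i++) {
    if (to_skip >= iov[i].iov_len) {
      // Skip this entire segment
      to_skip -= static_cast<uint16_t>(iov[i].iov_len);
    } else {
      // Include this segment (partially or fully)
      const auto *src = static_cast<const uint8_t *>(iov[i].iov_base) + to_skip;
      size_t len = iov[i].iov_len - to_skip;
      out.insert(out.end(), src, src + len);
      to_skip = 0;
    }
  }
  return out;
}
```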
// This method writes data to socket or buffers it
APIError APIFrameHelper::write_raw_(const struct iovec *iov, int iovcnt, uint16_t total_write_len) {
// Returns APIError::OK if successful (or would block, but data has been buffered)
// Returns APIError::SOCKET_WRITE_FAILED if socket write failed, and sets state to FAILED
if (iovcnt == 0)
return APIError::OK; // Nothing to do, success
#ifdef HELPER_LOG_PACKETS
for (int i = 0; i < iovcnt; i++) {
LOG_PACKET_SENDING(reinterpret_cast<uint8_t *>(iov[i].iov_base), iov[i].iov_len);
}
void APIFrameHelper::log_packet_sending_(const void *data, uint16_t len) {
LOG_PACKET_SENDING(reinterpret_cast<const uint8_t *>(data), len);
}
#endif
// Try to send any existing buffered data first if there is any
if (this->tx_buf_count_ > 0) {
APIError send_result = try_send_tx_buf_();
// If real error occurred (not just WOULD_BLOCK), return it
if (send_result != APIError::OK && send_result != APIError::WOULD_BLOCK) {
return send_result;
}
// If there is still data in the buffer, we can't send, buffer
// the new data and return
if (this->tx_buf_count_ > 0) {
this->buffer_data_from_iov_(iov, iovcnt, total_write_len, 0);
return APIError::OK; // Success, data buffered
APIError APIFrameHelper::drain_overflow_and_handle_errors_() {
if (this->overflow_buf_.try_drain(this->socket_.get()) == -1) {
int err = errno;
if (err != EWOULDBLOCK && err != EAGAIN) {
this->state_ = State::FAILED;
HELPER_LOG("Socket write failed with errno %d", err);
return APIError::SOCKET_WRITE_FAILED;
}
}
// Try to send directly if no buffered data
// Optimize for single iovec case (common for plaintext API)
ssize_t sent =
(iovcnt == 1) ? this->socket_->write(iov[0].iov_base, iov[0].iov_len) : this->socket_->writev(iov, iovcnt);
if (sent == -1) {
APIError err = this->handle_socket_write_error_();
if (err == APIError::WOULD_BLOCK) {
// Socket would block, buffer the data
this->buffer_data_from_iov_(iov, iovcnt, total_write_len, 0);
return APIError::OK; // Success, data buffered
}
return err; // Socket write failed
} else if (static_cast<uint16_t>(sent) < total_write_len) {
// Partially sent, buffer the remaining data
this->buffer_data_from_iov_(iov, iovcnt, total_write_len, static_cast<uint16_t>(sent));
}
return APIError::OK; // Success, all data sent or buffered
return APIError::OK;
}
// Common implementation for trying to send buffered data
// IMPORTANT: Caller MUST ensure tx_buf_count_ > 0 before calling this method
APIError APIFrameHelper::try_send_tx_buf_() {
// Try to send from tx_buf - we assume it's not empty as it's the caller's responsibility to check
while (this->tx_buf_count_ > 0) {
// Get the first buffer in the queue
SendBuffer *front_buffer = this->tx_buf_[this->tx_buf_head_].get();
// Single-buffer write path: wraps in iovec and delegates.
APIError APIFrameHelper::write_raw_buf_(const void *data, uint16_t len, ssize_t sent) {
struct iovec iov = {const_cast<void *>(data), len};
APIError err = this->write_raw_iov_(&iov, 1, len, sent);
#ifdef HELPER_LOG_PACKETS
// Log after write/enqueue so re-entrant log sends can't corrupt data before it's sent
if (err == APIError::OK)
LOG_PACKET_SENDING(reinterpret_cast<const uint8_t *>(data), len);
#endif
return err;
}
// Try to send the remaining data in this buffer
ssize_t sent = this->socket_->write(front_buffer->current_data(), front_buffer->remaining());
if (sent == -1) {
return this->handle_socket_write_error_();
} else if (sent == 0) {
// Nothing sent but not an error
return APIError::WOULD_BLOCK;
} else if (static_cast<uint16_t>(sent) < front_buffer->remaining()) {
// Partially sent, update offset
// Cast to ensure no overflow issues with uint16_t
front_buffer->offset += static_cast<uint16_t>(sent);
return APIError::WOULD_BLOCK; // Stop processing more buffers if we couldn't send a complete buffer
} else {
// Buffer completely sent, remove it from the queue
this->tx_buf_[this->tx_buf_head_].reset();
this->tx_buf_head_ = (this->tx_buf_head_ + 1) % API_MAX_SEND_QUEUE;
this->tx_buf_count_--;
// Continue loop to try sending the next buffer
// Handles partial writes, errors, and overflow buffering.
// Called when the inline fast path couldn't complete the write,
// or directly from cold paths (handshake, error handling).
APIError APIFrameHelper::write_raw_iov_(const struct iovec *iov, int iovcnt, uint16_t total_write_len, ssize_t sent) {
if (sent <= 0) {
if (sent == WRITE_NOT_ATTEMPTED) {
// Cold path: no write attempted yet, drain overflow and try
if (!this->overflow_buf_.empty()) {
APIError err = this->drain_overflow_and_handle_errors_();
if (err != APIError::OK)
return err;
}
if (this->overflow_buf_.empty()) {
sent = this->write_iov_to_socket_(iov, iovcnt);
if (sent == static_cast<ssize_t>(total_write_len))
return APIError::OK;
// Partial write or -1: fall through to error check / enqueue below
} else {
// Overflow backlog remains after drain; skip socket write, enqueue everything
sent = 0;
}
}
// WRITE_FAILED (-1): fast path or retry write returned -1, check errno
if (sent == WRITE_FAILED) {
int err = errno;
if (err != EWOULDBLOCK && err != EAGAIN) {
this->state_ = State::FAILED;
HELPER_LOG("Socket write failed with errno %d", err);
return APIError::SOCKET_WRITE_FAILED;
}
sent = 0; // Treat WOULD_BLOCK as zero bytes sent
}
}
return APIError::OK; // All buffers sent successfully
// Full write completed (possible when called directly, not via write_raw_fast_buf_)
if (sent == static_cast<ssize_t>(total_write_len))
return APIError::OK;
// Queue unsent data into overflow buffer
if (!this->overflow_buf_.enqueue_iov(iov, iovcnt, total_write_len, static_cast<uint16_t>(sent))) {
HELPER_LOG("Overflow buffer full, dropping connection");
this->state_ = State::FAILED;
return APIError::SOCKET_WRITE_FAILED;
}
return APIError::OK;
}
const char *APIFrameHelper::get_peername_to(std::span<char, socket::SOCKADDR_STR_LEN> buf) const {
@@ -278,11 +210,12 @@ APIError APIFrameHelper::init_common_() {
APIError APIFrameHelper::handle_socket_read_result_(ssize_t received) {
if (received == -1) {
if (errno == EWOULDBLOCK || errno == EAGAIN) {
const int err = errno;
if (err == EWOULDBLOCK || err == EAGAIN) {
return APIError::WOULD_BLOCK;
}
state_ = State::FAILED;
HELPER_LOG("Socket read failed with errno %d", errno);
HELPER_LOG("Socket read failed with errno %d", err);
return APIError::SOCKET_READ_FAILED;
} else if (received == 0) {
state_ = State::FAILED;
@@ -9,9 +9,11 @@
#include "esphome/core/defines.h"
#ifdef USE_API
#include "esphome/components/api/api_buffer.h"
#include "esphome/components/api/api_overflow_buffer.h"
#include "esphome/components/socket/socket.h"
#include "esphome/core/application.h"
#include "esphome/core/log.h"
#include "proto.h"
namespace esphome::api {
@@ -37,8 +39,6 @@ static constexpr uint16_t RX_BUF_NULL_TERMINATOR = 1;
// Must be >= MAX_INITIAL_PER_BATCH in api_connection.h (enforced by static_assert there)
static constexpr size_t MAX_MESSAGES_PER_BATCH = 34;
class ProtoWriteBuffer;
// Max client name length (e.g., "Home Assistant 2026.1.0.dev0" = 28 chars)
static constexpr size_t CLIENT_INFO_NAME_MAX_LEN = 32;
@@ -49,12 +49,17 @@ struct ReadPacketBuffer {
};
// Packed message info structure to minimize memory usage
// Note: message_type is uint8_t — all current protobuf message types fit in 8 bits.
// The noise wire format encodes types as 16-bit, but the high byte is always 0.
// If message types ever exceed 255, this and encrypt_noise_message_ must be updated.
struct MessageInfo {
uint16_t offset; // Offset in buffer where message starts
uint16_t payload_size; // Size of the message payload
uint8_t message_type; // Message type (0-255)
uint8_t header_size; // Actual header size used (avoids recomputation in write path)
MessageInfo(uint8_t type, uint16_t off, uint16_t size) : offset(off), payload_size(size), message_type(type) {}
MessageInfo(uint8_t type, uint16_t off, uint16_t size, uint8_t hdr)
: offset(off), payload_size(size), message_type(type), header_size(hdr) {}
};
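With two `uint16_t` fields followed by two `uint8_t` fields, the struct above should pack into 6 bytes with 2-byte alignment on common ABIs, and it must stay trivially destructible for the placement-new batching path. A quick check of those layout assumptions (field layout copied from the diff; the results assume a typical ABI with no struct padding needed here):

```cpp
#include <cassert>
#include <cstdint>
#include <type_traits>

// Same field layout as MessageInfo in the diff above.
struct MessageInfo {
  uint16_t offset;        // Offset in buffer where message starts
  uint16_t payload_size;  // Size of the message payload
  uint8_t message_type;   // Message type (0-255)
  uint8_t header_size;    // Actual header size used
};

// 2 + 2 + 1 + 1 = 6 bytes, alignment driven by the uint16_t members.
static_assert(sizeof(MessageInfo) == 6, "expected tight packing on 2-byte-aligned ABIs");
static_assert(alignof(MessageInfo) == 2, "alignment from the uint16_t members");
// Required for the placement-new stack array without explicit destruction.
static_assert(std::is_trivially_destructible<MessageInfo>::value,
              "must stay trivially destructible");
```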
enum class APIError : uint16_t {
@@ -105,9 +110,9 @@ class APIFrameHelper {
}
virtual ~APIFrameHelper() = default;
virtual APIError init() = 0;
virtual APIError loop();
virtual APIError loop() = 0;
virtual APIError read_packet(ReadPacketBuffer *buffer) = 0;
bool can_write_without_blocking() { return this->state_ == State::DATA && this->tx_buf_count_ == 0; }
bool can_write_without_blocking() { return this->state_ == State::DATA && this->overflow_buf_.empty(); }
int getpeername(struct sockaddr *addr, socklen_t *addrlen) { return socket_->getpeername(addr, addrlen); }
APIError close() {
if (state_ == State::CLOSED)
@@ -147,31 +152,47 @@ class APIFrameHelper {
//
void set_nodelay_for_message(bool is_log_message) {
if (!is_log_message) {
if (this->nodelay_state_ != NODELAY_ON) {
if (this->nodelay_counter_) {
this->set_nodelay_raw_(true);
this->nodelay_state_ = NODELAY_ON;
this->nodelay_counter_ = 0;
}
return;
}
// Log messages: state transitions -1 -> 1 -> ... -> LOG_NAGLE_COUNT -> -1 (flush)
if (this->nodelay_state_ == NODELAY_ON) {
// Log message: enable Nagle on first, flush after LOG_NAGLE_COUNT
if (!this->nodelay_counter_)
this->set_nodelay_raw_(false);
this->nodelay_state_ = 1;
} else if (this->nodelay_state_ >= LOG_NAGLE_COUNT) {
if (++this->nodelay_counter_ > LOG_NAGLE_COUNT) {
this->set_nodelay_raw_(true);
this->nodelay_state_ = NODELAY_ON;
} else {
this->nodelay_state_++;
this->nodelay_counter_ = 0;
}
}
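The rewritten counter logic above replaces the signed `nodelay_state_` state machine: `counter == 0` means `TCP_NODELAY` is on (immediate send); the first log message in a batch disables it so Nagle can coalesce, and after `LOG_NAGLE_COUNT` logs it is re-enabled to flush. A sketch that records each toggle instead of touching a real socket (`NodelayTracker` is hypothetical; the branch logic mirrors the diff):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Counter-based Nagle batching for log messages, mirroring the diff's logic.
// counter == 0: TCP_NODELAY on (immediate send).
// counter 1..LOG_NAGLE_COUNT: log messages coalesced in the current Nagle batch.
struct NodelayTracker {
  static constexpr uint8_t LOG_NAGLE_COUNT = 3;  // non-ESP8266 value in the diff
  uint8_t counter{0};
  std::vector<bool> toggles;  // records each TCP_NODELAY change (true = on)

  void set_nodelay(bool on) { toggles.push_back(on); }

  void on_message(bool is_log) {
    if (!is_log) {
      if (counter) {  // leaving a log batch: flush by re-enabling NODELAY
        set_nodelay(true);
        counter = 0;
      }
      return;
    }
    if (!counter)  // first log in a batch: let Nagle coalesce
      set_nodelay(false);
    if (++counter > LOG_NAGLE_COUNT) {  // flush after LOG_NAGLE_COUNT logs
      set_nodelay(true);
      counter = 0;
    }
  }
};
```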
// Write a single protobuf message - the hot path (87-100% of all writes).
// Caller must ensure state is DATA before calling.
virtual APIError write_protobuf_packet(uint8_t type, ProtoWriteBuffer buffer) = 0;
// Write multiple protobuf messages in a single operation
// messages contains (message_type, offset, length) for each message in the buffer
// The buffer contains all messages with appropriate padding before each
// Write multiple protobuf messages in a single batched operation.
// Caller must ensure state is DATA and messages is not empty.
// messages contains (message_type, offset, length) for each message in the buffer.
// The buffer contains all messages with appropriate padding before each.
virtual APIError write_protobuf_messages(ProtoWriteBuffer buffer, std::span<const MessageInfo> messages) = 0;
// Get the frame header padding required by this protocol
// Get the maximum frame header padding required by this protocol (worst case)
uint8_t frame_header_padding() const { return frame_header_padding_; }
// Get the actual frame header size for a specific message.
// For noise: always returns frame_header_padding_ (fixed 7-byte header).
// For plaintext: computes actual size from varint lengths (3-6 bytes).
// Distinguishes protocols via frame_footer_size_ (noise always has a non-zero MAC
// footer, plaintext has footer=0). If a protocol with a plaintext footer is ever
// added, this should become a virtual method.
uint8_t frame_header_size(uint16_t payload_size, uint8_t message_type) const {
#if defined(USE_API_NOISE) && defined(USE_API_PLAINTEXT)
return this->frame_footer_size_
? this->frame_header_padding_
: static_cast<uint8_t>(1 + ProtoSize::varint16(payload_size) + ProtoSize::varint8(message_type));
#elif defined(USE_API_NOISE)
return this->frame_header_padding_;
#else // USE_API_PLAINTEXT only
return static_cast<uint8_t>(1 + ProtoSize::varint16(payload_size) + ProtoSize::varint8(message_type));
#endif
}
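Assuming `ProtoSize::varint16`/`varint8` return the encoded byte count of a protobuf-style varint (my stand-ins below; not the real ProtoSize implementation), the plaintext header formula above works out to 3-6 bytes: a 1-byte indicator plus 1-3 bytes for the payload length plus 1-2 bytes for the message type.

```cpp
#include <cassert>
#include <cstdint>

// Byte count of an unsigned protobuf-style varint (7 data bits per byte).
constexpr uint8_t varint16(uint16_t v) { return v < 0x80 ? 1 : (v < 0x4000 ? 2 : 3); }
constexpr uint8_t varint8(uint8_t v) { return v < 0x80 ? 1 : 2; }

// Plaintext frame header: 1 indicator byte + varint(payload size) + varint(type).
constexpr uint8_t plaintext_header_size(uint16_t payload_size, uint8_t message_type) {
  return 1 + varint16(payload_size) + varint8(message_type);
}

static_assert(plaintext_header_size(0, 1) == 3, "minimum header");
static_assert(plaintext_header_size(20000, 200) == 6, "maximum header");
```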
// Get the frame footer size required by this protocol
uint8_t frame_footer_size() const { return frame_footer_size_; }
// Check if socket has data ready to read
@@ -187,28 +208,46 @@ class APIFrameHelper {
}
protected:
// Buffer containing data to be sent
struct SendBuffer {
std::unique_ptr<uint8_t[]> data;
uint16_t size{0}; // Total size of the buffer
uint16_t offset{0}; // Current offset within the buffer
// Drain backlogged overflow data to the socket and handle errors.
// Called when overflow_buf_.empty() is false. Out-of-line to keep the
// fast path (empty check) inline at call sites.
// Returns OK for transient errors (WOULD_BLOCK), SOCKET_WRITE_FAILED for hard errors.
APIError drain_overflow_and_handle_errors_();
// Using uint16_t reduces memory usage since ESPHome API messages are limited to UINT16_MAX (65535) bytes
uint16_t remaining() const { return size - offset; }
const uint8_t *current_data() const { return data.get() + offset; }
};
// Sentinel values for the sent parameter in write_raw_ methods
static constexpr ssize_t WRITE_FAILED = -1; // Fast path: write()/writev() returned -1
static constexpr ssize_t WRITE_NOT_ATTEMPTED = -2; // Cold path: no write attempted yet
// Common implementation for writing raw data to socket
APIError write_raw_(const struct iovec *iov, int iovcnt, uint16_t total_write_len);
// Dispatch to write() or writev() based on iovec count
inline ssize_t ESPHOME_ALWAYS_INLINE write_iov_to_socket_(const struct iovec *iov, int iovcnt) {
return (iovcnt == 1) ? this->socket_->write(iov[0].iov_base, iov[0].iov_len) : this->socket_->writev(iov, iovcnt);
}
// Try to send data from the tx buffer
APIError try_send_tx_buf_();
// Helper method to buffer data from IOVs
void buffer_data_from_iov_(const struct iovec *iov, int iovcnt, uint16_t total_write_len, uint16_t offset);
// Common socket write error handling
APIError handle_socket_write_error_();
// Inlined write methods — used by hot paths (write_protobuf_packet, write_protobuf_messages)
// These inline the fast path (overflow empty + full write) and tail-call the out-of-line
// slow path only on failure/partial write.
inline APIError ESPHOME_ALWAYS_INLINE write_raw_fast_buf_(const void *data, uint16_t len) {
if (this->overflow_buf_.empty()) [[likely]] {
ssize_t sent = this->socket_->write(data, len);
if (sent == static_cast<ssize_t>(len)) [[likely]] {
#ifdef HELPER_LOG_PACKETS
this->log_packet_sending_(data, len);
#endif
return APIError::OK;
}
// sent is -1 (WRITE_FAILED) or partial write count
return this->write_raw_buf_(data, len, sent);
}
return this->write_raw_buf_(data, len, WRITE_NOT_ATTEMPTED);
}
// Out-of-line write paths: handle partial writes, errors, overflow buffering
// sent: WRITE_NOT_ATTEMPTED (cold path), WRITE_FAILED (fast path write returned -1), or bytes sent (partial write)
APIError write_raw_buf_(const void *data, uint16_t len, ssize_t sent = WRITE_NOT_ATTEMPTED);
APIError write_raw_iov_(const struct iovec *iov, int iovcnt, uint16_t total_write_len,
ssize_t sent = WRITE_NOT_ATTEMPTED);
#ifdef HELPER_LOG_PACKETS
void log_packet_sending_(const void *data, uint16_t len);
#endif
// Socket ownership (4 bytes on 32-bit, 8 bytes on 64-bit)
std::unique_ptr<socket::Socket> socket_;
@@ -243,8 +282,8 @@ class APIFrameHelper {
return APIError::WOULD_BLOCK;
}
// Containers (size varies, but typically 12+ bytes on 32-bit)
std::array<std::unique_ptr<SendBuffer>, API_MAX_SEND_QUEUE> tx_buf_;
// Backlog for unsent data when TCP send buffer is full (rarely used in production)
APIOverflowBuffer overflow_buf_;
APIBuffer rx_buf_;
// Client name buffer - stores name from Hello message or initial peername
@@ -255,21 +294,17 @@ class APIFrameHelper {
State state_{State::INITIALIZE};
uint8_t frame_header_padding_{0};
uint8_t frame_footer_size_{0};
uint8_t tx_buf_head_{0};
uint8_t tx_buf_tail_{0};
uint8_t tx_buf_count_{0};
// Nagle batching counter for log messages. 0 means NODELAY is enabled (immediate send).
// Values 1..LOG_NAGLE_COUNT count log messages in the current Nagle batch.
// After LOG_NAGLE_COUNT logs, we flush by re-enabling NODELAY and resetting to 0.
// ESP8266 has the tightest TCP send buffer (2×MSS) and needs conservative batching.
// ESP32 (4×MSS+), RP2040 (8×MSS), and LibreTiny (4×MSS) can coalesce more.
#ifdef USE_ESP8266
  static constexpr uint8_t LOG_NAGLE_COUNT = 2;
#else
  static constexpr uint8_t LOG_NAGLE_COUNT = 3;
#endif
  uint8_t nodelay_counter_{0};
// Internal helper to set TCP_NODELAY socket option
void set_nodelay_raw_(bool enable) {
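The batching scheme described in the comments above can be modeled as a small standalone sketch (the names `NagleModel` and `on_log_message` are illustrative, not part of the codebase): counter 0 means NODELAY is on, the first log of a batch disables it, and the batch is flushed by re-enabling NODELAY after LOG_NAGLE_COUNT logs.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical model of the log-message Nagle batching counter.
struct NagleModel {
  static constexpr uint8_t LOG_NAGLE_COUNT = 3;
  uint8_t counter = 0;  // 0 = NODELAY enabled (immediate send)
  bool nodelay = true;

  // Returns true when this log message triggers a flush.
  bool on_log_message() {
    if (counter == 0) {
      // First log of a batch: disable NODELAY so the TCP stack coalesces.
      nodelay = false;
    }
    if (++counter >= LOG_NAGLE_COUNT) {
      // Batch full: flush by re-enabling NODELAY and reset.
      nodelay = true;
      counter = 0;
      return true;
    }
    return false;
  }
};
```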
@@ -47,15 +47,8 @@ static constexpr size_t API_MAX_LOG_BYTES = 168;
format_hex_pretty_to(hex_buf_, (buffer).data(), \
(buffer).size() < API_MAX_LOG_BYTES ? (buffer).size() : API_MAX_LOG_BYTES)); \
} while (0)
#else
#define LOG_PACKET_RECEIVED(buffer) ((void) 0)
#endif
/// Convert a noise error code to a readable error
@@ -153,8 +146,10 @@ APIError APINoiseFrameHelper::loop() {
}
}
  if (!this->overflow_buf_.empty()) [[unlikely]] {
    return this->drain_overflow_and_handle_errors_();
  }
  return APIError::OK;
}
/** Read a packet into the rx_buf_.
@@ -242,132 +237,144 @@ APIError APINoiseFrameHelper::try_read_frame_() {
* If an error occurred, returns that error. Only returns OK if the transport is ready for data
* traffic.
*/
// Split into per-state methods so the compiler doesn't allocate stack space
// for all branches simultaneously. On RP2040 the core0 stack lives in a 4KB
// scratch RAM bank; the Noise crypto path (curve25519) needs ~2KB+ of stack,
// so every byte saved in the caller matters.
APIError APINoiseFrameHelper::state_action_() {
  switch (this->state_) {
    case State::INITIALIZE:
      HELPER_LOG("Bad state for method: %d", (int) this->state_);
      return APIError::BAD_STATE;
    case State::CLIENT_HELLO:
      return this->state_action_client_hello_();
    case State::SERVER_HELLO:
      return this->state_action_server_hello_();
    case State::HANDSHAKE:
      return this->state_action_handshake_();
    case State::CLOSED:
    case State::FAILED:
      return APIError::BAD_STATE;
    default:
      return APIError::OK;
  }
}
APIError APINoiseFrameHelper::state_action_client_hello_() {
  // waiting for client hello
  APIError aerr = this->try_read_frame_();
  if (aerr != APIError::OK) {
    return handle_handshake_frame_error_(aerr);
  }
  // ignore contents, may be used in future for flags
  // Resize for: existing prologue + 2 size bytes + frame data
  size_t old_size = this->prologue_.size();
  size_t rx_size = this->rx_buf_.size();
  this->prologue_.resize(old_size + 2 + rx_size);
  this->prologue_[old_size] = (uint8_t) (rx_size >> 8);
  this->prologue_[old_size + 1] = (uint8_t) rx_size;
  if (rx_size > 0) {
    std::memcpy(this->prologue_.data() + old_size + 2, this->rx_buf_.data(), rx_size);
  }
  state_ = State::SERVER_HELLO;
  return APIError::OK;
}
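The prologue accumulation above appends each handshake frame as a 16-bit big-endian length prefix followed by the frame bytes. A self-contained sketch of just that framing step (`append_framed` is a hypothetical helper name, not from the codebase):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Append one frame to the prologue: 2 big-endian length bytes, then the data.
static void append_framed(std::vector<uint8_t> &prologue, const uint8_t *frame, uint16_t len) {
  size_t old_size = prologue.size();
  prologue.resize(old_size + 2 + len);
  prologue[old_size] = static_cast<uint8_t>(len >> 8);  // length high byte
  prologue[old_size + 1] = static_cast<uint8_t>(len);   // length low byte
  if (len > 0) {
    std::memcpy(prologue.data() + old_size + 2, frame, len);
  }
}
```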
APIError APINoiseFrameHelper::state_action_server_hello_() {
// send server hello
const auto &name = App.get_name();
char mac[MAC_ADDRESS_BUFFER_SIZE];
get_mac_address_into_buffer(mac);
// Calculate positions and sizes
size_t name_len = name.size() + 1; // including null terminator
size_t name_offset = 1;
size_t mac_offset = name_offset + name_len;
size_t total_size = 1 + name_len + MAC_ADDRESS_BUFFER_SIZE;
// 1 (proto) + name (max ESPHOME_DEVICE_NAME_MAX_LEN) + 1 (name null)
// + mac (MAC_ADDRESS_BUFFER_SIZE - 1) + 1 (mac null)
constexpr size_t max_msg_size = 1 + ESPHOME_DEVICE_NAME_MAX_LEN + 1 + MAC_ADDRESS_BUFFER_SIZE;
uint8_t msg[max_msg_size];
// chosen proto
msg[0] = 0x01;
// node name, terminated by null byte
std::memcpy(msg + name_offset, name.c_str(), name_len);
// node mac, terminated by null byte
std::memcpy(msg + mac_offset, mac, MAC_ADDRESS_BUFFER_SIZE);
APIError aerr = write_frame_(msg, total_size);
if (aerr != APIError::OK)
return aerr;
// start handshake
aerr = init_handshake_();
if (aerr != APIError::OK)
return aerr;
state_ = State::HANDSHAKE;
return APIError::OK;
}
APIError APINoiseFrameHelper::state_action_handshake_() {
int action = noise_handshakestate_get_action(this->handshake_);
if (action == NOISE_ACTION_READ_MESSAGE) {
return this->state_action_handshake_read_();
} else if (action == NOISE_ACTION_WRITE_MESSAGE) {
return this->state_action_handshake_write_();
}
// bad state for action
this->state_ = State::FAILED;
HELPER_LOG("Bad action for handshake: %d", action);
return APIError::HANDSHAKESTATE_BAD_STATE;
}
APIError APINoiseFrameHelper::state_action_handshake_read_() {
APIError aerr = this->try_read_frame_();
if (aerr != APIError::OK) {
return this->handle_handshake_frame_error_(aerr);
}
if (this->rx_buf_.empty()) {
this->send_explicit_handshake_reject_(LOG_STR("Empty handshake message"));
return APIError::BAD_HANDSHAKE_ERROR_BYTE;
} else if (this->rx_buf_[0] != 0x00) {
HELPER_LOG("Bad handshake error byte: %u", this->rx_buf_[0]);
this->send_explicit_handshake_reject_(LOG_STR("Bad handshake error byte"));
return APIError::BAD_HANDSHAKE_ERROR_BYTE;
}
NoiseBuffer mbuf;
noise_buffer_init(mbuf);
noise_buffer_set_input(mbuf, this->rx_buf_.data() + 1, this->rx_buf_.size() - 1);
int err = noise_handshakestate_read_message(this->handshake_, &mbuf, nullptr);
if (err != 0) {
// Special handling for MAC failure
this->send_explicit_handshake_reject_(err == NOISE_ERROR_MAC_FAILURE ? LOG_STR("Handshake MAC failure")
: LOG_STR("Handshake error"));
return this->handle_noise_error_(err, LOG_STR("noise_handshakestate_read_message"),
APIError::HANDSHAKESTATE_READ_FAILED);
}
return this->check_handshake_finished_();
}
APIError APINoiseFrameHelper::state_action_handshake_write_() {
uint8_t buffer[65];
NoiseBuffer mbuf;
noise_buffer_init(mbuf);
noise_buffer_set_output(mbuf, buffer + 1, sizeof(buffer) - 1);
int err = noise_handshakestate_write_message(this->handshake_, &mbuf, nullptr);
APIError aerr = this->handle_noise_error_(err, LOG_STR("noise_handshakestate_write_message"),
APIError::HANDSHAKESTATE_WRITE_FAILED);
if (aerr != APIError::OK)
return aerr;
buffer[0] = 0x00; // success
aerr = this->write_frame_(buffer, mbuf.size + 1);
if (aerr != APIError::OK)
return aerr;
return this->check_handshake_finished_();
}
void APINoiseFrameHelper::send_explicit_handshake_reject_(const LogString *reason) {
// Max reject message: "Bad handshake packet len" (24) + 1 (failure byte) = 25 bytes
uint8_t data[32];
@@ -450,73 +457,83 @@ APIError APINoiseFrameHelper::read_packet(ReadPacketBuffer *buffer) {
buffer->type = type;
return APIError::OK;
}
// Encrypt a single noise message in place and return the encrypted frame length.
// Returns APIError::OK on success.
APIError APINoiseFrameHelper::encrypt_noise_message_(uint8_t *buf_start, uint16_t payload_size, uint8_t message_type,
                                                     uint16_t &encrypted_len_out) {
  // Write noise header
  buf_start[0] = 0x01;  // indicator
  // buf_start[1], buf_start[2] to be set after encryption
  // Write message header (to be encrypted)
  constexpr uint8_t msg_offset = 3;
  buf_start[msg_offset] = static_cast<uint8_t>(message_type >> 8);      // type high byte
  buf_start[msg_offset + 1] = static_cast<uint8_t>(message_type);       // type low byte
  buf_start[msg_offset + 2] = static_cast<uint8_t>(payload_size >> 8);  // data_len high byte
  buf_start[msg_offset + 3] = static_cast<uint8_t>(payload_size);       // data_len low byte
  // payload data is already in the buffer starting at offset + 7
  // Encrypt the message in place
  NoiseBuffer mbuf;
  noise_buffer_init(mbuf);
  noise_buffer_set_inout(mbuf, buf_start + msg_offset, 4 + payload_size, 4 + payload_size + this->frame_footer_size_);
  int err = noise_cipherstate_encrypt(this->send_cipher_, &mbuf);
  APIError aerr =
      this->handle_noise_error_(err, LOG_STR("noise_cipherstate_encrypt"), APIError::CIPHERSTATE_ENCRYPT_FAILED);
  if (aerr != APIError::OK)
    return aerr;
  // Fill in the encrypted size
  buf_start[1] = static_cast<uint8_t>(mbuf.size >> 8);
  buf_start[2] = static_cast<uint8_t>(mbuf.size);
  encrypted_len_out = static_cast<uint16_t>(3 + mbuf.size);  // indicator + size + encrypted data
  return APIError::OK;
}
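As a sanity check of the framing math above: an encrypted frame is 1 indicator byte + 2 size bytes + the encrypted blob, and the blob is the 4-byte type/data_len header plus payload plus MAC. The sketch below assumes the usual 16-byte Noise MAC (what `frame_footer_size_` typically holds); that constant is an assumption, not read from the source.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: total on-wire length of one encrypted noise frame.
// 3 = indicator + 2 size bytes; 4 = encrypted type (2) + data_len (2).
static constexpr uint16_t noise_frame_len(uint16_t payload_size, uint8_t mac_len = 16) {
  return static_cast<uint16_t>(3 + 4 + payload_size + mac_len);
}
```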
APIError APINoiseFrameHelper::write_protobuf_packet(uint8_t type, ProtoWriteBuffer buffer) {
#ifdef ESPHOME_DEBUG_API
assert(this->state_ == State::DATA);
#endif
// Resize buffer to include footer space for Noise MAC
if (this->frame_footer_size_)
buffer.get_buffer()->resize(buffer.get_buffer()->size() + this->frame_footer_size_);
uint16_t payload_size =
static_cast<uint16_t>(buffer.get_buffer()->size() - HEADER_PADDING - this->frame_footer_size_);
uint8_t *buf_start = buffer.get_buffer()->data();
uint16_t encrypted_len;
APIError aerr = this->encrypt_noise_message_(buf_start, payload_size, type, encrypted_len);
if (aerr != APIError::OK)
return aerr;
return this->write_raw_fast_buf_(buf_start, encrypted_len);
}
APIError APINoiseFrameHelper::write_protobuf_messages(ProtoWriteBuffer buffer, std::span<const MessageInfo> messages) {
#ifdef ESPHOME_DEBUG_API
  assert(this->state_ == State::DATA);
  assert(!messages.empty());
#endif
  // Noise messages are already contiguous in the buffer:
  // HEADER_PADDING (7) exactly matches the fixed header size, and
  // footer space (16) is consumed by the encryption MAC.
  uint8_t *buffer_data = buffer.get_buffer()->data();
  uint8_t *write_start = buffer_data + messages[0].offset;
  uint16_t total_write_len = 0;
  // We need to encrypt each message in place
  for (const auto &msg : messages) {
    // The buffer already has padding at offset
    uint8_t *buf_start = buffer_data + msg.offset;
    uint16_t encrypted_len;
    APIError aerr = this->encrypt_noise_message_(buf_start, msg.payload_size, msg.message_type, encrypted_len);
    if (aerr != APIError::OK)
      return aerr;
    total_write_len += encrypted_len;
  }
  return this->write_raw_fast_buf_(write_start, total_write_len);
}
APIError APINoiseFrameHelper::write_frame_(const uint8_t *data, uint16_t len) {
@@ -525,16 +542,16 @@ APIError APINoiseFrameHelper::write_frame_(const uint8_t *data, uint16_t len) {
header[1] = (uint8_t) (len >> 8);
header[2] = (uint8_t) len;
  if (len == 0) {
    return this->write_raw_buf_(header, 3);
  }
  struct iovec iov[2];
  iov[0].iov_base = header;
  iov[0].iov_len = 3;
  iov[1].iov_base = const_cast<uint8_t *>(data);
  iov[1].iov_len = len;
  return this->write_raw_iov_(iov, 2, 3 + len);
}
/** Initiate the data structures for the handshake.
@@ -600,7 +617,7 @@ APIError APINoiseFrameHelper::check_handshake_finished_() {
if (aerr != APIError::OK)
return aerr;
this->frame_footer_size_ = noise_cipherstate_get_mac_length(send_cipher_);
HELPER_LOG("Handshake complete!");
noise_handshakestate_free(handshake_);
@@ -9,14 +9,16 @@ namespace esphome::api {
class APINoiseFrameHelper final : public APIFrameHelper {
public:
// Noise header structure:
// Pos 0: indicator (0x01)
// Pos 1-2: encrypted payload size (16-bit big-endian)
// Pos 3-6: encrypted type (16-bit) + data_len (16-bit)
// Pos 7+: actual payload data
static constexpr uint8_t HEADER_PADDING = 1 + 2 + 2 + 2; // indicator + size + type + data_len
APINoiseFrameHelper(std::unique_ptr<socket::Socket> socket, APINoiseContext &ctx)
: APIFrameHelper(std::move(socket)), ctx_(ctx) {
    frame_header_padding_ = HEADER_PADDING;
}
~APINoiseFrameHelper() override;
APIError init() override;
@@ -27,8 +29,15 @@ class APINoiseFrameHelper final : public APIFrameHelper {
protected:
APIError state_action_();
APIError state_action_client_hello_();
APIError state_action_server_hello_();
APIError state_action_handshake_();
APIError state_action_handshake_read_();
APIError state_action_handshake_write_();
APIError try_read_frame_();
APIError write_frame_(const uint8_t *data, uint16_t len);
APIError encrypt_noise_message_(uint8_t *buf_start, uint16_t payload_size, uint8_t message_type,
uint16_t &encrypted_len_out);
APIError init_handshake_();
APIError check_handshake_finished_();
void send_explicit_handshake_reject_(const LogString *reason);
@@ -39,15 +39,8 @@ static constexpr size_t API_MAX_LOG_BYTES = 168;
format_hex_pretty_to(hex_buf_, (buffer).data(), \
(buffer).size() < API_MAX_LOG_BYTES ? (buffer).size() : API_MAX_LOG_BYTES)); \
} while (0)
#else
#define LOG_PACKET_RECEIVED(buffer) ((void) 0)
#endif
/// Initialize the frame helper, returns OK if successful.
@@ -64,8 +57,10 @@ APIError APIPlaintextFrameHelper::loop() {
if (state_ != State::DATA) {
return APIError::BAD_STATE;
}
  if (!this->overflow_buf_.empty()) [[unlikely]] {
    return this->drain_overflow_and_handle_errors_();
  }
  return APIError::OK;
}
/** Read a packet into the rx_buf_.
@@ -203,7 +198,6 @@ APIError APIPlaintextFrameHelper::read_packet(ReadPacketBuffer *buffer) {
// Make sure to tell the remote that we don't
// understand the indicator byte so it knows
// we do not support it.
// The \x00 first byte is the marker for plaintext.
//
// The remote will know how to handle the indicator byte,
@@ -218,14 +212,12 @@ APIError APIPlaintextFrameHelper::read_packet(ReadPacketBuffer *buffer) {
"Bad indicator byte";
char msg[INDICATOR_MSG_SIZE];
memcpy_P(msg, MSG_PROGMEM, INDICATOR_MSG_SIZE);
this->write_raw_buf_(msg, INDICATOR_MSG_SIZE);
#else
static const char MSG[] = "\x00"
"Bad indicator byte";
this->write_raw_buf_(MSG, INDICATOR_MSG_SIZE);
#endif
}
return aerr;
}
@@ -235,76 +227,101 @@ APIError APIPlaintextFrameHelper::read_packet(ReadPacketBuffer *buffer) {
buffer->type = this->rx_header_parsed_type_;
return APIError::OK;
}
// Encode a 16-bit varint (1-3 bytes) using pre-computed length.
ESPHOME_ALWAYS_INLINE static inline void encode_varint_16(uint16_t value, uint8_t varint_len, uint8_t *p) {
if (varint_len >= 2) {
*p++ = static_cast<uint8_t>(value | 0x80);
value >>= 7;
if (varint_len == 3) {
*p++ = static_cast<uint8_t>(value | 0x80);
value >>= 7;
}
}
*p = static_cast<uint8_t>(value);
}
// Encode an 8-bit varint (1-2 bytes) using pre-computed length.
ESPHOME_ALWAYS_INLINE static inline void encode_varint_8(uint8_t value, uint8_t varint_len, uint8_t *p) {
if (varint_len == 2) {
*p++ = static_cast<uint8_t>(value | 0x80);
*p = static_cast<uint8_t>(value >> 7);
} else {
*p = value;
}
}
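The fixed-length encoders above rely on `ProtoSize::varint16`/`ProtoSize::varint8` having pre-computed the byte count. A self-contained copy of the 16-bit encoder, with `varint16_len` standing in for that length helper (the standard 7-bits-per-byte varint thresholds), shows the wire format:

```cpp
#include <cassert>
#include <cstdint>

// Assumed stand-in for ProtoSize::varint16: 1 byte below 0x80,
// 2 bytes below 0x4000, else 3.
static uint8_t varint16_len(uint16_t value) {
  return value < 0x80 ? 1 : (value < 0x4000 ? 2 : 3);
}

// Same logic as encode_varint_16 above: emit continuation bytes
// (low 7 bits | 0x80) for all but the final byte.
static void encode_varint_16(uint16_t value, uint8_t varint_len, uint8_t *p) {
  if (varint_len >= 2) {
    *p++ = static_cast<uint8_t>(value | 0x80);
    value >>= 7;
    if (varint_len == 3) {
      *p++ = static_cast<uint8_t>(value | 0x80);
      value >>= 7;
    }
  }
  *p = static_cast<uint8_t>(value);  // final byte, continuation bit clear
}
```

For example, 300 encodes as the two bytes `0xAC 0x02`, the standard protobuf varint encoding.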
// Write plaintext header into pre-allocated padding before payload.
// padding_size: bytes reserved before payload (HEADER_PADDING for first/single msg,
// actual header size for contiguous batch messages).
// Returns the total header length (indicator + varints).
ESPHOME_ALWAYS_INLINE static inline uint8_t write_plaintext_header(uint8_t *buf_start, uint16_t payload_size,
uint8_t message_type, uint8_t padding_size) {
uint8_t size_varint_len = ProtoSize::varint16(payload_size);
uint8_t type_varint_len = ProtoSize::varint8(message_type);
uint8_t total_header_len = 1 + size_varint_len + type_varint_len;
// The header is right-justified within the padding so it sits immediately before payload.
//
// Single/first message (padding_size = HEADER_PADDING = 6):
// Example (small, header=3): [0-2] unused | [3] 0x00 | [4] size | [5] type | [6...] payload
// Example (medium, header=4): [0-1] unused | [2] 0x00 | [3-4] size | [5] type | [6...] payload
// Example (large, header=6): [0] 0x00 | [1-3] size | [4-5] type | [6...] payload
//
// Batch messages 2+ (padding_size = actual header size, no unused bytes):
// Example (small, header=3): [0] 0x00 | [1] size | [2] type | [3...] payload
// Example (medium, header=4): [0] 0x00 | [1-2] size | [3] type | [4...] payload
#ifdef ESPHOME_DEBUG_API
assert(padding_size >= total_header_len);
#endif
uint32_t header_offset = padding_size - total_header_len;
// Write the plaintext header
buf_start[header_offset] = 0x00; // indicator
// Encode varints directly into buffer using pre-computed lengths
encode_varint_16(payload_size, size_varint_len, buf_start + header_offset + 1);
encode_varint_8(message_type, type_varint_len, buf_start + header_offset + 1 + size_varint_len);
return total_header_len;
}
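The right-justification rule can be checked in isolation; `header_offset` below is a hypothetical helper reproducing only the offset arithmetic from `write_plaintext_header` (header length = 1 indicator byte + size varint + type varint, placed at the end of the padding):

```cpp
#include <cassert>
#include <cstdint>

// Bytes of unused leading padding before the right-justified header.
static uint8_t header_offset(uint8_t padding_size, uint8_t size_varint_len, uint8_t type_varint_len) {
  uint8_t total_header_len = 1 + size_varint_len + type_varint_len;
  return static_cast<uint8_t>(padding_size - total_header_len);
}
```

This reproduces the three layout examples in the comment: small header (3 bytes) leaves 3 unused bytes, medium (4) leaves 2, large (6) leaves 0.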
APIError APIPlaintextFrameHelper::write_protobuf_packet(uint8_t type, ProtoWriteBuffer buffer) {
#ifdef ESPHOME_DEBUG_API
assert(this->state_ == State::DATA);
#endif
uint16_t payload_size = static_cast<uint16_t>(buffer.get_buffer()->size() - HEADER_PADDING);
uint8_t *buffer_data = buffer.get_buffer()->data();
uint8_t header_len = write_plaintext_header(buffer_data, payload_size, type, HEADER_PADDING);
return this->write_raw_fast_buf_(buffer_data + HEADER_PADDING - header_len,
static_cast<uint16_t>(header_len + payload_size));
}
APIError APIPlaintextFrameHelper::write_protobuf_messages(ProtoWriteBuffer buffer,
std::span<const MessageInfo> messages) {
#ifdef ESPHOME_DEBUG_API
  assert(this->state_ == State::DATA);
  assert(!messages.empty());
#endif
  uint8_t *buffer_data = buffer.get_buffer()->data();
  // First message has max padding (header_size = HEADER_PADDING), may have unused leading bytes.
  // Subsequent messages were encoded with exact header sizes (header_size = actual header len).
  // write_plaintext_header right-justifies the header within header_size bytes of padding.
  const auto &first = messages[0];
  uint8_t *first_start = buffer_data + first.offset;
  uint8_t header_len = write_plaintext_header(first_start, first.payload_size, first.message_type, HEADER_PADDING);
  uint8_t *write_start = first_start + HEADER_PADDING - header_len;
  uint16_t total_len = header_len + first.payload_size;
  for (size_t i = 1; i < messages.size(); i++) {
    const auto &msg = messages[i];
    header_len = write_plaintext_header(buffer_data + msg.offset, msg.payload_size, msg.message_type, msg.header_size);
    total_len += header_len + msg.payload_size;
  }
  return this->write_raw_fast_buf_(write_start, total_len);
}
} // namespace esphome::api
