Engine for modeling, simulation and measurement of multiscale structural coherence through structural operators (emission, reception, coherence, dissonance, coupling, resonance, silence, expansion, contraction, self‑organization, mutation, transition, recursivity).
tnfr is a Python library to operate with form: build nodes, couple them into networks, and modulate their coherence over time using structural operators. It does not describe “things”; it activates processes. Its theoretical basis is the Teoría de la Naturaleza Fractal Resonante (TNFR), which understands reality as networks of coherence that persist because they resonate.
In practical terms, tnfr lets you:
- Model Resonant Fractal Nodes (NFR) with parameters for frequency (νf), phase (θ), and form (EPI). Use the ASCII constants VF_KEY and THETA_KEY to reference these attributes programmatically; the Unicode names remain available as aliases.
- Apply structural operators to start, stabilize, propagate, or reconfigure coherence.
- Simulate nodal dynamics with discrete/continuous integrators.
- Measure global coherence C(t), nodal gradient ΔNFR, and the Sense Index (Si).
- Visualize states and trajectories (coupling matrices, C(t) curves, graphs).
A form emerges and persists when internal reorganization (ΔNFR) resonates with the node’s frequency (νf).
from tnfr import create_nfr, run_sequence
from tnfr.structural import (
Emision,
Recepcion,
Coherencia,
Resonancia,
Silencio,
)
from tnfr.metrics.common import compute_coherence
from tnfr.metrics.sense_index import compute_Si
G, nodo = create_nfr("A", epi=0.2, vf=1.0, theta=0.0)
ops = [Emision(), Recepcion(), Coherencia(), Resonancia(), Silencio()]
run_sequence(G, nodo, ops)
C, delta_nfr_medio, depi_medio = compute_coherence(G, return_means=True)
si_por_nodo = compute_Si(G)
print(f"C(t)={C:.3f}, ΔNFR̄={delta_nfr_medio:.3f}, dEPI/dt̄={depi_medio:.3f}, Si={si_por_nodo[nodo]:.3f}")
The sequence respects the nodal equation because create_nfr initializes the node with its νf and phase, and run_sequence validates the TNFR grammar before applying the operators in the order provided. After each operator it invokes the graph's compute_delta_nfr hook to recompute only ΔNFR (by default it uses dnfr_epi_vf_mixed, which mixes EPI and νf without altering the phase). The phase only changes if the operators themselves modify it or if subsequent dynamics steps are executed (for example tnfr.dynamics.step or coordinate_global_local_phase). When you need automatic phase synchronization, use the full dynamics cycle (tnfr.dynamics.step / tnfr.dynamics.run) or explicitly invoke the phase coordinators after run_sequence. This telemetry makes it possible to measure C(t) and Si, anticipating what is developed in Key concepts (operational summary) and Main metrics.
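A minimal sketch of continuing the quickstart with the dynamics cycle; the exact signature of tnfr.dynamics.run (the steps keyword) is an assumption:
# Hedged sketch: run the full dynamics cycle after run_sequence so phase
# coordination also happens. The steps keyword is an assumed signature.
from tnfr.dynamics import run
from tnfr.metrics.common import compute_coherence

run(G, steps=50)  # advance the network; G and nodo come from the quickstart above
C_after, _, _ = compute_coherence(G, return_means=True)
print(f"C(t) after dynamics: {C_after:.3f}")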
File secuencia.json:
[
"emision",
"recepcion",
"coherencia",
"resonancia",
"silencio"
]
tnfr sequence --nodes 1 --sequence-file secuencia.json --save-history historia.json
The sequence subcommand loads the canonical trajectory from the JSON file, executes the operators with the official grammar, and updates νf, ΔNFR, and phase using the same hooks as the Python API. On completion, the C(t), mean ΔNFR, and Si series are dumped to historia.json, expanding on the sections about structural operators and metrics.
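A minimal sketch of inspecting the exported history, assuming historia.json holds a JSON mapping of series names to lists:
# Hedged sketch: list which series the CLI exported. The internal key names
# are not assumed here; the file is only treated as a JSON mapping.
import json

with open("historia.json", encoding="utf-8") as fh:
    history = json.load(fh)

print(sorted(history))  # names of the recorded series (C(t), mean ΔNFR, Si, ...)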
pip install tnfr
- https://pypi.org/project/tnfr/
- Requires Python ≥ 3.9.
- Install extras:
- NumPy: pip install tnfr[numpy]
- YAML: pip install tnfr[yaml]
- orjson (faster JSON serialization): pip install tnfr[orjson]
- All: pip install tnfr[numpy,yaml,orjson]
- When orjson is unavailable the engine falls back to Python's built-in json module.
Use tnfr.cached_import to load optional dependencies and cache the result via a process-wide LRU cache. Missing modules (or attributes) yield None without triggering repeated imports. The helper records failures and emits a single warning per module to keep logs tidy. When optional packages are installed at runtime, call prune_failed_imports to clear the consolidated failure/warning registry before retrying:
from tnfr import cached_import, prune_failed_imports
np = cached_import("numpy")
safe_load = cached_import("yaml", "safe_load")
# provide a shared cache with an explicit lock
from cachetools import TTLCache
import threading
cache = TTLCache(32, 60)
lock = threading.Lock()
cached_import("numpy", cache=cache, lock=lock)
# clear caches after installing a dependency at runtime
cached_import.cache_clear()
prune_failed_imports()
Run the test suite from the project root using the helper script, which sets the necessary PYTHONPATH and mirrors the checks described in CONTRIBUTING.md:
./scripts/run_tests.sh
The script sequentially executes pydocstyle, pytest under coverage, the coverage summary, and vulture --min-confidence 80 src tests. Avoid running pytest directly or executing the script from other directories, as the environment may be misconfigured and imports will fail. To pass additional flags to pytest, append them after --, for example:
./scripts/run_tests.sh -- -k coherence
The engine centralises reusable process-wide locks in tnfr.locking. Modules obtain named locks via locking.get_lock() and use the returned threading.Lock in their own critical sections. This avoids scattering threading.Lock instances across the codebase and ensures that shared resources are synchronised consistently. Module-level caches or global state should always use these named locks; only short-lived objects may instantiate ad-hoc locks directly when they are not shared.
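A minimal sketch of guarding a module-level cache with a named lock; the lock name and the assumption that locking.get_lock takes that name as its argument are illustrative:
# Hedged sketch: one shared named lock guards a module-level cache.
# The lock name and the get_lock(name) signature are assumptions.
from tnfr import locking

_CACHE = {}
_CACHE_LOCK = locking.get_lock("my_module_cache")

def cached_value(key, factory):
    with _CACHE_LOCK:  # single critical section for reads and writes
        if key not in _CACHE:
            _CACHE[key] = factory()
        return _CACHE[key]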
Callback errors are stored in a ring buffer attached to the graph. The buffer retains at most the last 100 errors by default, but the limit can be adjusted at runtime via tnfr.callback_utils.callback_manager.set_callback_error_limit and inspected with tnfr.callback_utils.callback_manager.get_callback_error_limit.
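A minimal sketch of widening the buffer for a long experiment; passing only the new limit is an assumption about the setter's signature:
# Hedged sketch: raise the callback error limit, then read it back.
# Passing only the integer limit is an assumed signature.
from tnfr.callback_utils import callback_manager

callback_manager.set_callback_error_limit(500)
print(callback_manager.get_callback_error_limit())  # expected: 500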
tnfr.helpers bundles a compact set of public helpers that stay stable across releases. They provide ergonomic access to the most common preparation steps when orchestrating TNFR experiments.
- ensure_collection(it, *, max_materialize=...) — materialize potentially lazy iterables once, enforcing a configurable limit to keep simulations bounded.
- clamp(x, a, b) and clamp01(x) — restrict scalars to safe ranges for operator parameters.
- kahan_sum_nd(values, dims) — numerically stable accumulators used to track coherence magnitudes across long trajectories (use dims=1 for scalars, dims=2 for paired values, etc.).
- angle_diff(a, b) — compute minimal angular differences (radians) to compare structural phases.
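A minimal sketch exercising two of the numeric helpers above, assuming they are importable directly from tnfr.helpers:
# Hedged sketch: clamp an operator parameter and compare two phases.
import math
from tnfr.helpers import clamp01, angle_diff

gain = clamp01(1.3)  # kept inside [0, 1] -> 1.0
dphi = angle_diff(0.1, 2 * math.pi)  # minimal angular difference, about 0.1 rad
print(gain, dphi)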
- push_glyph(nd, glyph, window) — record the application of an operator in the node's history, respecting the configured window.
- recent_glyph(nd, glyph, window) — check whether a specific operator appears in a node's recent history.
- ensure_history(G) — prepare the graph-level history container with the appropriate bounds.
- last_glyph(nd) — inspect the last operator emitted by a node.
- count_glyphs(G, window=None, *, last_only=False) — aggregate structural operator usage across the network, either from the full history or a limited window.
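A minimal sketch of querying operator history after the quickstart run; importing these names from tnfr.helpers and treating G.nodes[nodo] as the node data mapping are assumptions:
# Hedged sketch: summarise operator usage on the quickstart graph G.
from tnfr.helpers import ensure_history, count_glyphs, last_glyph

ensure_history(G)  # make sure the graph-level history container exists
print(count_glyphs(G))  # aggregated operator usage across the network
print(last_glyph(G.nodes[nodo]))  # last operator recorded on the quickstart node (assumed nd mapping)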
- cached_node_list(G) — lazily cache a stable tuple of node identifiers, respecting opt-in sorted ordering.
- ensure_node_index_map(G) / ensure_node_offset_map(G) — expose cached index and offset mappings for graphs that need to project nodes to arrays.
- node_set_checksum(G, nodes=None, *, presorted=False, store=True) — produce deterministic BLAKE2b hashes to detect topology changes.
- stable_json(obj) — render deterministic JSON strings suited for hashing and reproducible logs.
- get_graph(obj) / get_graph_mapping(G, key, warn_msg) — normalise access to graph-level metadata regardless of wrappers.
- EdgeCacheManager together with edge_version_cache, cached_nodes_and_A and edge_version_update encapsulates the edge version cache. increment_edge_version bumps the version manually for imperative workflows.
- mark_dnfr_prep_dirty(G) — invalidate precomputed ΔNFR preparation when mutating edges outside the cache helpers.
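A minimal sketch combining two of these cache helpers; direct imports from tnfr.helpers and the use of a NetworkX-style G.add_node are assumptions:
# Hedged sketch: fingerprint the node set before and after a topology change.
from tnfr.helpers import cached_node_list, node_set_checksum

nodes = cached_node_list(G)  # stable tuple of node identifiers
before = node_set_checksum(G)  # deterministic BLAKE2b fingerprint
G.add_node("B")  # assumes G is a NetworkX-style graph
after = node_set_checksum(G)
print(before != after)  # True: the topology changed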
- From objects to coherences: you model processes that hold, not fixed entities.
- Operators instead of rules: you compose structural operators (e.g., emission, coherence, dissonance) to build trajectories.
- Operational fractality: the same pattern works for ideas, teams, tissues, narratives; the scales change, the logic doesn’t.
- Node (NFR): a unit that persists because it resonates. Parameterized by νf (frequency), θ (phase), and EPI (coherent form).
- Structural operators: functions that reorganize the network.
  - Emission (start), Reception (open), Coherence (stabilize), Dissonance (creative tension), Coupling (synchrony), Resonance (propagate), Silence (latency), Expansion, Contraction, Self‑organization, Mutation, Transition, Recursivity.
- Magnitudes:
  - C(t): global coherence.
  - ΔNFR: nodal gradient (need for reorganization).
  - νf: structural frequency (Hz_str).
  - Si: sense index (ability to generate stable shared coherence).
- Model your system as a network: nodes (agents, ideas, tissues, modules) and couplings.
- Select a trajectory of operators aligned with your goal (e.g., start → couple → stabilize).
- Simulate the dynamics: number of steps, step size, tolerances.
- Measure: C(t), ΔNFR, Si; identify bifurcations and collapses.
- Iterate with controlled dissonance to open mutations without losing form.
- coherence(traj) → C(t): global stability; higher values indicate sustained form.
- gradient(state) → ΔNFR: local demand for reorganization (high = risk of collapse/bifurcation).
- sense_index(traj) → Si: proxy for structural sense (capacity to generate shared coherence) combining νf, phase, and topology.
Use tnfr.operators.apply_topological_remesh (from tnfr.operators import apply_topological_remesh) to reorganize connectivity based on nodal EPI similarity while preserving graph connectivity. Modes:
- "knn" – connect each node to its k nearest neighbours (with optional rewiring).
- "mst" – retain only a minimum spanning tree.
- "community" – collapse modular communities and reconnect them by similarity.
All modes ensure connectivity by adding a base MST.
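A minimal sketch of a remesh call; the keyword names (mode, k) are assumptions beyond the documented import path and mode strings:
# Hedged sketch: rewire by EPI similarity with k nearest neighbours.
# Keyword names are assumptions; only the import and the "knn" mode are documented.
from tnfr.operators import apply_topological_remesh

apply_topological_remesh(G, mode="knn", k=4)  # base MST keeps the graph connected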
Recorded series are stored under G.graph['history']. Set HISTORY_MAXLEN in the graph (or override the default) to keep only the most recent entries. The value must be non‑negative; negative values raise ValueError. When the limit is positive the library uses bounded deque objects and removes the least populated series when the number of history keys grows beyond the limit.
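A minimal sketch of bounding the history; storing the limit under G.graph["HISTORY_MAXLEN"] is an assumption about where the setting lives:
# Hedged sketch: keep only the most recent 1000 entries per series.
G.graph["HISTORY_MAXLEN"] = 1000  # assumed storage location of the limit
history = G.graph.get("history", {})
print({key: len(series) for key, series in history.items()})  # current series lengths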
To reduce costly comparisons the engine stores a per‑step random subset of node ids under G.graph['_node_sample']. Operators may use this to avoid scanning the whole network. Sampling is skipped automatically when the graph has fewer than 50 nodes, in which case all nodes are included.
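A minimal sketch of consuming the sample with a fallback for small graphs, where sampling is skipped:
# Hedged sketch: use the per-step sample when present, otherwise all nodes.
sample = G.graph.get("_node_sample") or list(G.nodes)
print(len(sample))  # nodes an operator would scan this step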
random_jitter uses an LRU cache of random.Random instances keyed by (seed, node). JITTER_CACHE_SIZE controls the maximum number of cached generators (default: 256); when the limit is exceeded the least‑recently used entry is discarded. Increase it for large graphs or heavy jitter usage, or lower it to save memory.
To adjust the number of cached jitter sequences used for deterministic noise, obtain the manager with get_jitter_manager before calling setup:
from tnfr.operators import get_jitter_manager
manager = get_jitter_manager()
# Resize cache to keep only 512 entries
manager.max_entries = 512
# or in a single call that also clears previous counters
manager.setup(max_entries=512)
setup preserves the current size unless a new max_entries value is supplied. Custom sizes persist across subsequent setup calls, and max_entries assignments take effect immediately.
Wrap sequences of edge mutations with edge_version_update(G) so the edge version increments on entry and exit. This keeps caches and structural logs aligned with the network's evolution.
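A minimal sketch, assuming edge_version_update is importable from tnfr.helpers and acts as a context manager:
# Hedged sketch: batch edge mutations under a single version bump.
from tnfr.helpers import edge_version_update

with edge_version_update(G):  # version increments on entry and exit
    G.add_edge("A", "B", weight=1.0)  # assumes a NetworkX-style graph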
inject_defaults avoids deep copies when values are immutable (numbers, strings, tuples). copy.deepcopy is only used for mutable structures, reducing the cost of initializing graphs with default parameters.
- Removed deprecated alias sigma_vector_global; use sigma_vector_from_graph instead.
- Removed legacy tnfr.program alias; import programming helpers from tnfr.execution.
- Stopped re-exporting CallbackSpec and apply_topological_remesh at the package root; import them via tnfr.trace and tnfr.operators.
Copyright (c) 2025 TNFR - Teoría de la naturaleza fractal resonante
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
If you use tnfr in research or projects, please cite the TNFR conceptual framework and link to the PyPI package.