What’s up Python? Frame pointers, sentinel values, venv discovery, more rust in Python...
April, 2026
Summary
Busy month!
- PEP 831 – Frame Pointers Everywhere has been accepted and will make observability in Python more precise, robust, and fast, at the cost of a small perf drop for regular workloads.
- PEP 661: Sentinel Values will land in 3.15 and finally let you distinguish between None and “not passed”.
- PEP 832: virtual environment discovery, a way to normalize how tooling figures out where the hell your project's venv is, is heavily debated, and no consensus has been found yet.
- Rust for CPython is making serious progress.
And moar than usual.
PEP 831 – Frame Pointers Everywhere
The history of Python calls and variables is stored in the Python stack; a representation of it is what you see when the Python interpreter crashes: the stack trace.
You can read it directly from Python, and if you want to do it faster, or while the Python process is running live, you can do it from the outside by observing the C compiled code running. The latter is harder and more error-prone right now.
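Reading it from the inside is a stdlib one-liner; here's a tiny illustration (nothing PEP-specific):

```python
import traceback

def child():
    # Print the call stack of the live process: module -> parent -> child
    traceback.print_stack()

def parent():
    child()

parent()
```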
But that external observation is also very valuable in production: it allows you to see what your Python process is doing in real time.
Python has an option to make that faster and more reliable, called “frame pointers”. I’ll let you read the excellent explanation of how they work here (the whole PEP in general is an outstanding pedagogical effort and deserves much praise), but the fact is, it’s not activated by default.
Unfortunately, to make it work, you currently have to recompile Python with the options -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer, something that is not done in the installers on Python.org nor in the main Linux distributions. Therefore, most people will not benefit from it. The proposal is to simply make those options the default and take the official stance that this is the recommended way for the whole ecosystem from now on.
Now you might wonder, how is it any good for me?
Well, for the average developer, it’s not that critical. But for people deploying big Python services in production, it opens the door to seeing what your Python program is doing live with more accuracy, and up to 200x faster than with the current methods. Not 200%, 200 times faster!
This means running it always, all the time in production becomes dirt cheap, and it’s why big players like Netflix, OpenAI, and Meta always activate this option so they can have the info. Of course, they can afford to recompile Python every time they need to, but who else has the time for that?
It will make profiling and debugging your Python programs faster, though, and probably help the JIT gain more perf in the future. Besides, it means hosting providers will be able to show you killer dashboards, and maybe some open source ones will pop up. But more importantly, it potentially leads to seamless transitions from Python code to compiled extension code during debugging sessions. If your bottleneck or bug is somewhere inside numpy, that's something you want.
But if it’s that good, why wasn’t it the default so far? Because while it does speed up watching a Python program, it also slows down Python execution. In the 32-bit era, the cost was around a 20% slowdown, and nobody wanted to pay for that. Now with 64-bit architectures, we are down to between 0.5 and 3%, which is much more acceptable and is why the move has been accepted.
We are kinda late to the party: Node.js, Rust, and Go have all had this as the default for years.
Now for the kicker: python-build-standalone already has this option activated, meaning every Python installed with uv has been using it for a while already.
Which, I assume, most people haven’t noticed. But it’s nice to know.
PEP 832: virtual environment discovery
There is currently a heavy debate around adding a mechanism to detect where the virtualenv of a given project is located. It’s easy to forget, but unlike, say, node_modules, there is no standard for the location or naming of Python virtual environments. CLI tools and IDEs rely on blurry heuristics to figure out where to run the executable and load the libs from.
An official way to say “hey, there it is” would make things easier and more reliable for tool makers.
The proposal initially suggested first detecting whether there is a .venv directory with a pyvenv.cfg file in it (the typical venv layout), and, failing that, having the user provide a venv file containing a single line with the path to the venv directory.
This has generated a lot of pushback, mostly because venvs can have different layouts, projects can be scattered around workspaces, it is ambiguous in the case of multiple venvs, and a file could disturb tooling that always expects a dir.
Right now, the discussion is steering toward declaring, for example in pyproject.toml, an executable to call that returns where the venv is. E.g.:
```toml
[workflow]
virtual-environment = {tool = "pdm", cli = ["pdm", "venv", "--path"], shell = false}
```

Which would return:
```json
{
    "version": "1.0",
    "environments": [
        {
            "path": ".venv",
            ...
        }
    ]
}
```

This makes the proposal significantly more complicated but also much more flexible. The debate is not settled, and it is once again a good illustration of why things are so slow to evolve in packaging.
It’s because it’s complicated.
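To give you an idea, here is a hypothetical sketch of what a tool might do with such a declaration. None of this is standardized yet; the [workflow] table and the JSON shape are just taken from the example above:

```python
import json
import subprocess
import tomllib  # stdlib since Python 3.11

# Read the (hypothetical) declaration from pyproject.toml
with open("pyproject.toml", "rb") as f:
    declaration = tomllib.load(f)["workflow"]["virtual-environment"]

# Ask the declared tool where the venv lives and parse its JSON answer
result = subprocess.run(
    declaration["cli"], capture_output=True, text=True, check=True
)
environments = json.loads(result.stdout)["environments"]
print("venv found at:", environments[0]["path"])
```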
PEP 661: Sentinel Values
This one was just approved, it will land in 3.15, and there is already a doc.
If you initialize a parameter in a function like this:
```python
def reverse_entropy_in(place: str | None = None):
    # Implementation left to the student for homework
    ...
```

There is no way to distinguish between the caller not passing a parameter:

```python
reverse_entropy_in()
```

And the caller explicitly passing None as a parameter:

```python
reverse_entropy_in(place=None)
```

In most cases, it's not important, but sometimes it matters. For example, the dataclasses module declares a MISSING value this way:
```python
class _MISSING_TYPE:
    pass

MISSING = _MISSING_TYPE()
```

So it knows whether attributes were set to None or not set at all.
Others prefer to use object() for this.
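That idiom looks something like this (a common pattern, not an official API):

```python
# A module-private sentinel: only `is` comparisons make sense with it
_MISSING = object()

def reverse_entropy_in(place=_MISSING):
    if place is _MISSING:
        print("No place passed at all")
    elif place is None:
        print("Explicitly asked for the void")
```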
PEP 661 is finally an official and standardized solution to this problem, and it will add the following built-in to Python (so no import needed):
```python
>>> MISSING = sentinel('MISSING')
>>> MISSING
MISSING
```

So you can do something like this:
```python
def reverse_entropy_in(place: str | MISSING | None = MISSING):
    if place is MISSING:
        raise ValueError('You need to pass a place in which to reverse entropy')
    if place is None:
        print('Reversing entropy from the void')
    ...
```

It's been designed to be easy to use:
- Sentinels can be used as their own type declaration, like None. So here MISSING is both the value and the type.
- Sentinels are singletons and can be compared using is, even after pickling.
- They can't be subclassed.
- They have a very short repr.
All stuff that object() fails at.
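Here is a quick look at where a plain object() falls short (illustrative, stdlib behavior only):

```python
import pickle

OLD_SCHOOL = object()
clone = pickle.loads(pickle.dumps(OLD_SCHOOL))
print(clone is OLD_SCHOOL)  # False: identity is lost across pickling
print(repr(OLD_SCHOOL))     # <object object at 0x...>: not a helpful repr
```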
Basically, it’s more or less the equivalent of this:
```python
import sys
import typing

class sentinel:
    """Unique sentinel values."""

    __slots__ = ("__name__", "_module_name")

    def __init_subclass__(cls):
        # Forbid subclassing
        raise TypeError("type 'sentinel' is not an acceptable base type")

    def __init__(self, name, /):
        if not isinstance(name, str):
            raise TypeError("sentinel name must be a string")
        self.__name__ = name
        self._module_name = sys._getframemodulename(1)

    @property
    def __module__(self):
        return self._module_name

    def __repr__(self):
        # The very short repr: just the name
        return self.__name__

    def __reduce__(self):
        # Returning a string tells pickle to look the name up as a module
        # global, so unpickling hands back the same singleton
        return self.__name__

    def __copy__(self):
        return self

    def __deepcopy__(self, memo):
        return self

    def __or__(self, other):
        # Allow `MISSING | str` style type expressions
        return typing.Union[self, other]

    def __ror__(self, other):
        return typing.Union[other, self]
```
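And since __reduce__ returns the name, the singleton survives pickling, as long as the sentinel lives at module level where pickle can find it back (a quick sanity check, assuming the class above):

```python
import pickle

MISSING = sentinel("MISSING")
# Unpickling resolves the name back to this module-level global,
# so identity is preserved
assert pickle.loads(pickle.dumps(MISSING)) is MISSING
```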
Now, why do you need to create your own sentinel instead of Python providing a single global MISSING constant for everyone? It’s because sentinels are not just for unpassed values; they can signal other things, like:
- a default should apply,
- a field is missing in the JSON,
- a value was intentionally cleared,
- lazy initialization is pending,
- tombstones,
- a stop point was reached (e.g. iter()’s second param).
And so on.
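The iter() case from that list is already in the stdlib today: its two-argument form calls a function until it returns the sentinel you passed:

```python
import io

stream = io.BytesIO(b"spam and eggs")
# Read 4 bytes at a time until read() returns the sentinel b""
for chunk in iter(lambda: stream.read(4), b""):
    print(chunk)
# b'spam', b' and', b' egg', b's'
```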
So this is a more general mechanism. Not to mention, we already have NameError and AttributeError on undefined names, so a JS-like undefined would not work.
Rust for CPython is progressing
Remember the pre-PEP discussion about adding Rust to CPython, in the same spirit as the Linux kernel did?
Such a big change requires a LOT of proof of work from the people suggesting it before being seriously considered.
And it’s being done.
CPython now successfully builds with Rust enabled across all supported CI platforms, resolving a major blocker for the project.
The team is collaborating with Rust core devs for design decisions and has transitioned from infrastructure work to API and language design. They are beginning to define a (for now unstable and private) internal Rust API for CPython.
The roadmap looks roughly like this:
- April–May 2026: design and start implementation of the internal Rust API; select one CPython extension module to rewrite in Rust as an initial experiment.
- June–July 2026: draft and submission of a PEP proposing Rust integration for CPython.
- Post-PEP: extended discussion period before the Python 3.16 beta (the initial 3.15 target was judged too early).
This is, of course, to be considered a gradual and experimental introduction rather than an immediate rewrite of CPython. The goal is to carefully evaluate Rust as a safer systems-language option for parts of CPython, starting small and expanding only after community agreement. And it’s clearly taken in that sense by every person involved.
But it’s not tongue in cheek either, they really went at it.
And, you know, moar...
The change that added an incremental GC in 3.14 and 3.15 will be reverted after memory leaks were discovered.
PyPI is hiring an engineer to make PyPI financially sustainable so it doesn’t depend entirely on donations and sponsorships. I assume they mean adding new optional paid features to the service. After they had to give up $1.5 million in grants, it makes sense.
Speaking of PyPI, it has completed its second security audit, which found 14 problems, 12 of which have already been fixed.
uv is now supported natively by Read the Docs, which means you can now have a simple dedicated block in your .readthedocs.yaml.
JetBrains and the Django foundation are running a joint promo right now. Until May 1, you can purchase a PyCharm Pro licence at a 30% discount. The proceeds will go to the Django project.
Mypy 1.20 is released, with t-string support. It drops support for Python 3.9, and it’s probably the last release before the breaking jump to 2.0, so brace yourself.
Allison Kaptur, the author of the excellent Software Design by Example, published a 500-line Python interpreter written in Python.
The latest pip version adds experimental support for pylock.toml files, the new official Python lock file format.
3.15.0a8 is supported by the latest uv release, meaning we can finally play with lazy imports for real!
```
❯ uv self update
info: Checking for updates...
success: Upgraded uv from v0.9.18 to v0.11.7! https://github.com/astral-sh/uv/releases/tag/0.11.7
❯ uvx -p 3.15 --with ipython ipython
...
>>> lazy import foo
>>> foo.bar()
imported !
bar executed
```

ty now has a --fix option like ruff. For now, it only removes “ignore” comments you don’t need anymore. It also has a nice improvement regarding untyped attributes. This now works:
```python
class Foo:
    def __init__(self) -> None:
        self.value = 1

reveal_type(Foo().value)  # revealed: int
Foo().value = "x"  # error: [invalid-assignment]
```

ty used to consider .value to be of the type int | Unknown, but not anymore. It will now correctly give you an error on this. Now the only time it will do a union with Unknown is in the None | Unknown case:
```python
class Foo:
    def __init__(self) -> None:
        self.value = None
```

Which makes sense, since you might just use None as a placeholder until the real value comes in.
