I ended up adding a note to the plugin author docs suggesting lazy loading inside of functions - https://llm.datasette.io/en/stable/plugins/advanced-model-pl... - but having a core Python language feature for this would be really nice.
I would have preferred a system where modules opt in to being lazy-loaded, with no extra syntax on the import side. That would simplify things since only large libraries would have to care about laziness. To be fair, in such a design, the interpreter would have to eagerly look up imports on the filesystem to decide whether they should be lazy-loaded. And there are probably other downsides I'm not thinking of.
Previously, if you had some thread-hazardous code at module import time, it was highly likely to run only during the single-threaded process startup phase, so it was likely harmless. Lazy loading is going to unearth these errors in the most inconvenient way (as Heisenbugs).
(Function-level imports can trigger this as well, but the top of a function is at least a slightly more deterministic place for imports to happen, and there is an explicit line of syntax triggering the import, and therefore the bug.)
However, there is a pattern in Python of raising an error only when, say, pandas is asked to read Excel without an Excel library installed, which is fine. In the future, will maintainers opt to include a bunch of rarely used libraries, since they won't negatively impact startup time? (Think pandas including 3-4 Excel parsers by default, since they will only be loaded when called.) It's a much better UX, but now, if you opt out of lazy loading, your code will take longer to load than before.
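To make the pattern concrete, here is a hedged sketch of that deferred-failure style. The backend name is deliberately fake so the failure path runs; in real code it would be something like openpyxl, and the ImportError would surface only when the Excel feature is actually used:

```python
import importlib

# Hypothetical optional dependency; deliberately not installed so the
# deferred failure path is demonstrated below.
OPTIONAL_BACKEND = "definitely_not_installed_backend"

def load_excel_backend():
    # Deferred lookup: the ImportError surfaces only when the feature
    # is actually used, not at program startup.
    try:
        return importlib.import_module(OPTIONAL_BACKEND)
    except ImportError:
        raise ImportError(
            f"Excel support requires the optional '{OPTIONAL_BACKEND}' package"
        ) from None

try:
    load_excel_backend()
except ImportError as exc:
    print(f"deferred failure: {exc}")
```

Startup pays nothing for the optional feature; only code paths that touch it can fail.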
On the other hand, it would create confusion for users of a library when the performance hit of importing a library was delayed to the site of usage. They might not expect, for example, a lag to occur there. I don't think it would cause outright breakage, but people might not like the way it behaved.
If one could dream, modules should have to explicitly declare whether they have side-effects or not, with a marker at the top of the module. Not declaring this and trying anything except declaring a function or class should lead to type-checker errors. Modules declaring this pure-marker then automatically become lazy. Others should require explicit "import_with_side_effects" keyword.
__pure__ = True

import logging
import threading

app = Flask()  # << ImpureError
sys.path.append()  # << ImpureError
with open(...):  # << ImpureError
logging.basicSetup()  # << ImpureError
if foo:  # << ImpureError (arguable)

@app.route("/foo")  # Questionable
def serve():  # OK
    ...

serve()  # << ImpureError
t = threading.Thread(target=serve)  # << ImpureError
All of this would be impossible today, given how much Python relies on metaprogramming. Even the standard library exposes functions that create classes on the fly, like Enum and dataclasses, which are difficult to assert as either pure or impure. With more and more of the ecosystem embracing typed Python, this metaprogramming is shrinking into a less dynamic subset. Type checkers and LSPs must already have at least some awareness of these modules without executing them as plain Python code.

Soooo instead now we're going to be in a situation where you're writing "lazy import ..." 99% of the time: unless you're a barbarian, you basically never have side effects at the top level.
I could, and sometimes do, go through all the imports to figure out which ones are taking a long time to load, but it's a chore.
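For what it's worth, CPython already ships a profiler for exactly this chore: the real `-X importtime` flag prints a self/cumulative timing line (in microseconds) for every module imported at startup. The module chosen here is just an example:

```shell
# Show per-module import costs for everything `import asyncio` pulls in;
# the slowest offenders stand out in the cumulative column.
python -X importtime -c 'import asyncio' 2>&1 | head -5
```

Sorting the output by the cumulative column turns the chore into a one-liner.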
> A world in which Python only supported imports behaving in a lazy manner would likely be great...we do not envision the Python language transitioning to a world where lazy imports are the default...this concept would add complexity to our ecosystem.
Why can't lazy be the default and instead propose an `eager` syntax? The only argument I can imagine is that there's some API that runs a side effect based on importing, but perhaps making it eager for modules with side effects would be a sufficient temporary fix?
I mean I get why that makes the most sense in most scenarios, but it is not as if the problem of having to choose between declaring dependencies up front or deferring expensive imports until needed does not happen in functions.
Take for instance a function that fails quickly because its arguments are incorrect: it might trigger a whole bunch of imports that only make sense for that function but which are immediately rendered useless.
It feels like it is forbidden just because someone thought it wasn't a good coding style but to me there is no obvious reason it couldn't work.
I was skeptical and cautious with it at first but I've since moved large chunks of my codebase to it - it's caused surprisingly few problems (honestly none besides forgetting to handle some import-time registration in some modules) and the speed boost is addictive.
I hope this proposal succeeds. I would love to use this feature.
They make what should be the default a special case. Soon, all new code will use "lazy". The long-term effect of such changes is a verbose language syntax.
They should have had a period where anyone who wants lazy imports has to write "from __future__ import lazy_import". After that period, lazy imports become the default. For old-style immediate imports, introduce a syntax: "import now foo" and "from foo import now bar".
All which authors of old code would have to do is run a provided fix script in the root directory of their code.
And that is actually a problem for more than just performance. In some cases, importing at the top might actually just fail. For example if you need a platform specific library, but only if it is running on that platform.
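A hedged sketch of the guard pattern this comment describes: try the platform-specific or optional module, and fall back if it's absent. "fast_json" is a hypothetical accelerator library; the stdlib json always exists:

```python
# Guarded import: the optional/platform-specific module may not exist,
# so the fallback keeps the top-level import from failing outright.
try:
    import fast_json as json_impl  # hypothetical optional dependency
except ImportError:
    import json as json_impl       # portable stdlib fallback

def dumps(obj):
    return json_impl.dumps(obj)

print(dumps({"ok": True}))  # {"ok": true}
```

Lazy imports would let such modules simply never load on the platforms where they'd fail.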
import expensive_module
could be:
@lazy
import expensive_module
or you could do:
@retry(3)
x = failure_prone_call(y)
lazy is needed, but maybe there is a more basic change that could give more power with more organic syntax, and not create a new keyword that is special purpose (and extending an already special purpose keyword)
- lazy imports are a hugely impactful feature
- lazy imports are already possible without syntax
This means any libraries that get a large benefit from lazy imports already use import statements within functions. They can't really use the new feature, since 3.14's EoL is _2030_, forever from now. The __lazy_modules__ syntax preserves only compatibility, not performance; it stays eager on older Pythons, so libraries that need lazy imports can't rely on it until 2030.
This means that the primary target for a long time is CLI authors, who can target a stricter Python version and are mentioned many times in the PEP, plus libraries without broad Python version support (meaning not just "works" but "works well"), which indicates they are probably not major libraries.
Unless the feature gets backported to 2+ versions, it feels not so compelling. But given how it modifies the interpreter to a reasonable degree, I wonder if even any backport is on the table.
I’m pretty sure there will be new keywords in Python in the future that only solve one thing.
lazily import package.foo
vs
defer import package.foo
Also the grammar is super weird for from-imports:

lazy from package import foo
vs.
from package defer import foo

What I want is for imports to not suck and be slow. I've had projects where it was faster to compile and run C++ than to launch a Python CLI. It's so bad.
Edit
> A somewhat common way to delay imports is to move the imports into functions (inline imports), but this practice requires more work to implement and maintain, and can be subverted by a single inadvertent top-level import. Additionally, it obfuscates the full set of dependencies for a module. Analysis of the Python standard library shows that approximately 17% of all imports outside tests (nearly 3500 total imports across 730 files) are already placed inside functions or methods specifically to defer their execution. This demonstrates that developers are already manually implementing lazy imports in performance-sensitive code, but doing so requires scattering imports throughout the codebase and makes the full dependency graph harder to understand at a glance.
I think this is a really weak foundation for this language feature. We don't need it.
The use of these sorts of Python import internals is highly non-obvious. The Stack Overflow Q&A I found about it (https://stackoverflow.com/questions/42703908/) doesn't result in an especially nice-looking UX.
So here's a proof of concept in existing Python for getting all imports to be lazy automatically, with no special syntax for the caller:
import sys
import threading  # needed for Python 3.13, at least at the REPL, because reasons
from importlib.util import LazyLoader  # this has to be eagerly imported!

class LazyPathFinder(sys.meta_path[-1]):  # <class '_frozen_importlib_external.PathFinder'>
    @classmethod
    def find_spec(cls, fullname, path=None, target=None):
        base = super().find_spec(fullname, path, target)
        if base is None:  # module not found; let the import system raise normally
            return None
        base.loader = LazyLoader(base.loader)
        return base

sys.meta_path[-1] = LazyPathFinder
We've replaced the "meta path finder" (which implements the logic "when the module isn't in sys.modules, look on sys.path for source code and/or bytecode, including bytecode in __pycache__ subfolders, and create a 'spec' for it") with our own wrapper. The "loader" attached to the resulting spec is replaced with an importlib.util.LazyLoader instance, which wraps the base PathFinder's provided loader. When an import statement actually imports the module, the name will actually get bound to a <class 'importlib.util._LazyModule'> instance, rather than an ordinary module. Attempting to access any attribute of this instance will trigger the normal module loading procedure — which even replaces the global name.

Now we can do:
import this # nothing shows up
print(type(this)) # <class 'importlib.util._LazyModule'>
rot13 = this.s # the module is loaded, printing the Zen
print(type(this)) # <class 'module'>
That said, I don't know what the PEP means by "mostly" here.

> The dominant convention in Python code is to place all imports at the beginning of the file. This avoids repetition, makes import dependencies clear and minimizes runtime overhead.
> A somewhat common way to delay imports is to move the imports into functions, but this practice requires more work [and] obfuscates the full set of dependencies for a module.
The first part is just saying the traditions exist because the traditions have always existed. Traditions are allowed to change!
The second part is basically saying if you do your own logic-based lazy imports (inline imports in functions) then you’re going against the traditions. Again, traditions are allowed to change!
The point about the import graph being obfuscated would ring more true if Python didn’t already provide lightning fast static analysis tools like ast. If you care about import graphs at the module level then you’re probably already automating everything with ast anyway, at which point you just walk the whole tree looking for imports rather than the top level.
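The walk described above takes only a few lines with the stdlib `ast` module; this sketch finds every import at any nesting depth without executing the code:

```python
import ast

# Example module source: one top-level import, two inside a function.
source = """
import os

def f():
    import json
    from collections import Counter
"""

found = []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Import):
        found.extend(alias.name for alias in node.names)
    elif isinstance(node, ast.ImportFrom):
        found.append(node.module)

print(sorted(found))  # ['collections', 'json', 'os']
```

So function-level imports stay fully visible to any tooling that bothers to walk the tree.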
So, really, the whole argument for a new lazy keyword (instead of inlining giant imports where they are needed) is because people like to see import pytorch at the top of the file, and baulk at seeing it — and will refuse to even look for it! — anywhere else? Hmmm.
What does seem like a pain in the ass is having to do this kind of repetitive crap (which they mention in their intro):
def f1():
    import pytorch
    …

def f2():
    import pytorch
    …

def f3():
    import pytorch
    …

But perhaps the solution is a pattern where you put all your stuff like that in your own module and it's that module which is lazy loaded instead?

def antislash(A, b):
    from numpy.linalg import solve
    return solve(A, b)

thus numpy.linalg is only imported the first time you call the antislash function. Much cleaner than a global import.

Ignore wrong traditions. Put all imports in the innermost scopes of your code!
Also if that doesn't strike your fancy all of the importlib machinery is at your disposal and it's really not very much work to write an import_path() function. It's one of the patterns plug-in systems use and so is stable and expected to be used by end users. No arcane magic required.
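A minimal sketch of such an `import_path()` function, using only documented importlib APIs (real plug-in systems would add caching and error handling around this):

```python
import importlib.util
import sys

def import_path(path, module_name):
    """Load a module from an explicit file path (a common plug-in pattern)."""
    spec = importlib.util.spec_from_file_location(module_name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"cannot load {module_name!r} from {path!r}")
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module  # register before exec, like import does
    spec.loader.exec_module(module)
    return module
```

Call it with any `.py` file path and a name of your choosing, and you get back a regular module object.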
If a tool has different capabilities that use different imports, why load all of them if only a subset is required?
As a simple example, a tool that can generate output in various formats (e.g., json, csv, xml, ...) should only import the modules needed for the chosen output format, after determining which one will be used in this invocation.
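A minimal sketch of that dispatch, where only the chosen format's module is ever imported:

```python
import io

def serialize(rows, fmt):
    # Each branch imports only what it needs, on first use.
    if fmt == "json":
        import json
        return json.dumps(rows)
    if fmt == "csv":
        import csv
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unknown format: {fmt!r}")

print(serialize([[1, 2], [3, 4]], "json"))  # [[1, 2], [3, 4]]
```

An invocation that only ever emits JSON never pays for the csv (or xml) machinery.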
I know/have heard there are "some" libraries (which I haven't seen, by the way) that depend on import side effects, but the advantage is much bigger.
First of all, the circular import problem will go away, especially for type hints. Although there was a PEP or recent addition to make annotations not cause such issues.
Second and most important of all, is the launch time of Python applications. A CLI that uses many different parts of the app has to wait for all the imports to be done.
The second point becomes a lot more painful when you have a large application, like a Django project where the auto reload takes several seconds. Not only does auto reload crawl, the testing cycle is slow as well. Every time you want to run the test command, it has to wait several seconds. Painful.
So far the solution has been to do the lazy import by importing inside the methods where it's required. That is something I never got to like, to be honest.
Maybe it will be fixed in Python 4, where the JIT uses the type hints as well /s
Only do imports when you know you need them -- or as an easy approximation, only if the easy command line options have been handled and there's still something to do.
Note that this is global to the entire process, so for example if you make an import of Numpy lazy this way, then so are the imports of all the sub-modules. Meaning that large parts of Numpy might not be imported at all if they aren't needed, but pauses for importing individual modules might be distributed unpredictably across the runtime.
Edit: from further experimentation, it appears that if the source does something like `import foo.bar.baz` then `foo` and `foo.bar` will still be eagerly loaded, and only `foo.bar.baz` itself is deferred. This might be part of what the PEP meant by "mostly". But it might also be possible to improve my implementation to fix that.
The flexibility of this system also entails that you can in effect define a completely new programming language, describe the process of creating Python bytecode from your custom source, and have clients transparently `import` source in the other language as a result. Or you can define an import process that grabs code from the Internet (not that it would be a good idea...).
If you mean "by explicitly specifying a relative path, and having it be interpreted according to the path of the current module's source code", well first you have to consider that the current module isn't required to have source code. But if it does, then generally it will have been loaded with the default mechanism, which means the module object will have a `__file__` attribute with an absolute path, and you just set your path relative to that.
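A hedged sketch of that last step: a module loaded from a source file can locate sibling resources via its own `__file__`. "config.json" is a hypothetical resource name; only the path is computed here, nothing is opened:

```python
from pathlib import Path

# __file__ is an absolute path when the module was loaded the normal way,
# so sibling resources can be resolved relative to it.
HERE = Path(__file__).resolve().parent
config_path = HERE / "config.json"  # hypothetical resource next to this module
print(config_path.name)  # config.json
```

This keeps working after packaging, because the path is computed at runtime from wherever the module actually landed.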
This is not fearmongering. There is a reason why the only flavor of Python with lazy imports comes from Meta, which is one of the most well-resourced companies in the world.
Too many people in this thread hold the view of "importing {pandas, numpy, my weird module that is more tangled than an eight-player game of Twister} takes too long and I will gladly support anything that makes them faster". I would be willing to bet a large sum of money that most people who hold this opinion are unable to describe how Python's import system works, let alone describe how to implement lazy imports.
PEP 690 describes a number of drawbacks. For example, lazy imports break code that uses decorators to add functions to a central registry. This behavior is crucial for Dash, a popular library for building frontends that has been around for more than a decade. At import-time, Dash uses decorators to bind a JavaScript-based interface to callbacks written in Python. If these imports were made lazy, Dash would break. Frontends used by thousands, if not millions of people, would immediately become unresponsive.
You may cry, "But lazy imports are opt-in! Developers can choose to opt-out of lazy imports if it doesn't work for them." What if these imports were transitive? What if our frontend needed to be completely initialized before starting a critical process, else it would cause a production outage? What if you were a maintainer of a library that was used by millions of people? How could you be sure that adding lazy imports wouldn't break any code downstream? Many people made this argument for type hints, which is sensible because type hints have no effect on runtime behavior*. This is not true for lazy imports; import statements exist in essentially every nontrivial Python program, and changing them to be lazy will fundamentally alter runtime behavior.
This is before we even get to the rest of the issues the PEP describes, which are even weirder and crazier than this. This is a far more difficult undertaking than many people realize.
---
* You can make a program change its behavior based on type annotations, but you'd need to explicitly call into typing APIs to do this. Discussion about this is beyond the scope of this post.
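A sketch of what "explicitly calling into typing APIs" looks like: annotations stay inert until runtime code inspects them, which is essentially what libraries like pydantic do under the hood:

```python
from typing import get_type_hints

def transfer(amount: int, memo: str) -> bool:
    return bool(memo) and amount > 0

# The annotations have no effect on calling transfer(); they only matter
# once something deliberately reads them back at runtime.
hints = get_type_hints(transfer)
print(hints["amount"], hints["return"])  # <class 'int'> <class 'bool'>
```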
It can just implement lazy loading itself today, by using module-level __getattr__ (https://docs.python.org/3/reference/datamodel.html#customizi...) to overwrite itself with a private implementation module at the appropriate time. Something like:
# foo.py
def __getattr__(name):
    # Clean up the lazy loader before loading.
    # This way it's cleaned up if the implementation doesn't replace it,
    # and not scrubbed if it does.
    global __getattr__
    del __getattr__
    import sys
    self = sys.modules[__name__]
    from . import _foo
    # "star-import" by adding names that show up in __dir__
    self.__dict__.update(**{k: getattr(_foo, k) for k in _foo.__dir__()})
    # On future attribute lookups, everything will be in place.
    # But this time, we need to delegate to the normal lookup explicitly.
    return getattr(self, name)

Genericizing this is left as an exercise.

I think it's on you to explain why that's a better approach for everyone's use cases instead of this language feature.
Unless you are writing scripts or very simple stuff running side effects when modules are loaded should be avoided at all cost anyway.
In fact, half of the community basically uses only a modernized set of python 2.4 features and that's one of the beauties of the language. You don't need a lot to be productive, and if you want more, you can optionally reach for it.
It has worked very well for the last 2 decades and it will likely work again.
try:
    import module
except ImportError:
    import slow_module as module

Conditional support testing would also break, like having tests which only run if module2 is available:

try:
    import module2
except ImportError:
    def if_has_module2(f):
        return unittest.skip("module2 not available")(f)
else:
    def if_has_module2(f):
        return f

@if_has_module2
class TestModule2Bindings(....
The proto-PEP also gives an example of using

with suppress_warnings():
    import module3

where some global configuration changes only during import.

In general, "import this" and "import antigravity" - anything with import side-effects - would stop working.
Oh, and as the proto-PEP points out, changes to sys.path and others can cause problems because of the delay between time of lazy import and time of resolution.
Then there's code where you do a long computation then make use of a package which might not be present.
import database # remember to install!!
import qcd_simulation
universe = qcd_simulation.run(seconds = 30*24*60*60)
database.save(universe)
All of these would be replaced with "import module; module.__name__" or something to force the import, or by an explicit use of __import__.

edit: ok well "xxx in sys.modules" would indeed be a problem
Bad performing third party plugins are user error.
From merely browsing through a few comments, people have mostly positive opinions regarding this proposal. Then why did it fail many times, but not this time? What drives the success behind this PEP?
https://discuss.python.org/t/pep-810-explicit-lazy-imports/1...
The specific part of the PEP:
https://github.com/python/peps/pull/4628/files#diff-ca011267...
It's really beautiful work, especially since touching the back bone (the import system) of a language as popular as Python with such a diverse community is super dangerous surgery.
I'm impressed.
- It reduces visibility into a module’s dependencies.
- It increases the risk of introducing circular dependencies later on.
For other libraries they can of course choose as they want, but generally I don't think it's so common for libraries to be as generous with the support length as cpython.
Just might as well add `defer` keyword like Golang.
I don't consider startup time "superficial" at all; I work in a Django monolith where this problem resulted in each and every management command, test invocation, and container reload incurring a 10-15sec penalty because of just a handful of heavy-to-import libraries used by certain tasks/views. Deferring these made a massive difference.
Doesn't work if version resolution decides to upgrade or downgrade your installed package, so you need to make sure the declared version is satisfactory, too.
As this comment mentions Dash apps would not support lazy loaded imports until the underlying Dash library changes how it loads in callbacks and component libraries (the two features which would be most impacted here), but that doesn't mean there's no path to success. We've been discussing some ways we could resolve this internally and if this PEP is accepted we'd certainly go further to see if we can fully support lazy loaded imports (of both the Dash library itself/Dash component libraries and for relative imports in Dash apps).
No, they are saying the tradition is there for a reason. Imports at the beginning of the file makes reasoning about the dependencies of a module much easier and faster. I've had to deal with both and I sure as hell know which I'd prefer. Lazy imports by functions is a sometimes necessary evil and it would be very nice if it became unnecessary.
Think you need to read again.
$ time python -c ''  # baseline

real 0m0.020s
user 0m0.015s
sys 0m0.005s

$ time python -c 'import sys; old = len(sys.modules); import asyncio; print(len(sys.modules) - old)'
104

real 0m0.076s
user 0m0.067s
sys 0m0.009s

For comparison, with the (seemingly optimized) Numpy included with my system:

$ time python -c 'import sys; old = len(sys.modules); import numpy; print(len(sys.modules) - old)'
185

real 0m0.124s
user 0m0.098s
sys 0m0.026s

`__init__.py` has nothing to do with making this work. It is neither necessary (as of 3.3, yes, you got it right: see https://peps.python.org/pep-0420/) nor sufficient (careless use of sys.path and absolute imports could make it so that the current folder hasn't been imported yet, so you can't even go "up into" it). The folder will already be represented by a module object.
What `__init__.py` does is:
1. Prevents relative imports from also potentially checking in other paths.
2. Provides a space for code that runs before sub-modules, for example to set useful package attributes such as `__all__` (which controls star-imports).
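A hedged demo of that second point: `__all__` in `__init__.py` controls what a star-import exposes. "demo_pkg" here is a throwaway package built on the fly in a temp dir:

```python
import os
import sys
import tempfile

# Build a tiny package whose __init__.py exports only one of two names.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "demo_pkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("__all__ = ['exported']\nexported = 1\nhidden = 2\n")

sys.path.insert(0, tmp)
ns = {}
exec("from demo_pkg import *", ns)  # star-import honors __all__
print("exported" in ns, "hidden" in ns)  # True False
```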
haha no
> all of the importlib machinery is at your disposal
This breaks all tooling. Awful option.
https://pep-previews--4622.org.readthedocs.build/pep-0810/#f...
Q: Why not use importlib.util.LazyLoader instead?
A: LazyLoader has significant limitations:
Requires verbose setup code for each lazy import.
Has ongoing performance overhead on every attribute access.
Doesn’t work well with from ... import statements.
Less clear and standard than dedicated syntax.
import torch==2.6.0+cu124
import numpy>=1.2.6

and support having multiple simultaneous versions of any Python library installed. End this conda/virtualenv/docker/bazel/[pick your poison] mess.

> All which authors of old code would have to do is run a provided fix script in the root directory of their code.
As I recall, `lib2to3` didn't do a lot to ease tensions. And `six` is still absurdly popular, mainly thanks to `python-dateutil` still attempting to support 2.7.
$ time pip install --disable-pip-version-check
ERROR: You must give at least one requirement to install (see "pip help install")
real 0m0.399s
user 0m0.360s
sys 0m0.041s
Almost all of this time is spent importing (and later unloading) ultimately useless vendored code. From my testing (hacking the wrapper script to output some diagnostics), literally about 500 modules get imported in total (on top of the baseline for a default Python process), including almost a hundred modules related to Requests and its dependencies, even though no web request was necessary for this command.

Right now, all the imports are resolved eagerly at runtime. For example, in code like this:
from file1 import function1
When you write this, the entire file1 module is executed right away, which may trigger side effects. If lazy imports suddenly defer execution, those side effects won't run until much later (or not at all, if the code path isn't hit). That shift in timing could easily break existing code that depends on import-time behavior.
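A hedged demonstration of that eager execution: a module body runs exactly once, at first import, so its side effects fire right at the import statement. "file1" is written to a temp dir here just to make the example self-contained:

```python
import os
import sys
import tempfile

# Create a module whose body has a visible side effect.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "file1.py"), "w") as f:
    f.write("print('side effect at import time')\n"
            "def function1():\n"
            "    return 42\n")

sys.path.insert(0, tmp)
from file1 import function1  # the print in file1's body fires right here
print(function1())           # 42
```

Under lazy imports, that print would instead fire at the first use of `function1`, which is exactly the timing shift being discussed.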
To avoid the lazy keyword, there is also a proposal of adding the modules you want to load lazily to a global `__lazy_modules__` variable.
Introducing new keyword has become a recent thing in Python.
Seems Python has a deep scar from the Python 2 to Python 3 transition and is scared to do anything that causes such drama again.
For me, the worst of all is "async". If 2to3 didn't cause much division, async definitely divided Python libraries in two: sync and async.
Maybe if they want backward compatible solution, this can be done by some compile or runtime flag like they did with free threading no-gil.
Using `importlib` is a horrible hack that breaks basically all tooling. You very very obviously are not supposed to do that.
def my_func():
    import my_mod
    my_mod.do_stuff()

could be written as

lazy import my_mod

def my_func():
    my_mod.do_stuff()

Ie, with lazy, the import happens at the site of usage. Since clearly this is code that could already be written, it only breaks things in the sense that someone could already write broken code. Since it is opt in, if using it breaks some code, then people will notice that and choose not to rewrite that code using it.

I don't see how. It adds a new, entirely optional syntax using a soft keyword. The semantics of existing code do not change. Yes, yes, you anticipated the objection:
> What if these imports were transitive? ... How could you be sure that adding lazy imports wouldn't break any code downstream?
I would need to see concrete examples of how this would be a realistic risk in principle. (My gut reaction is that top-level code in libraries shouldn't be doing the kinds of things that would be problematic here, in the first place. In my experience, the main thing they do at top level is just eagerly importing everything else for convenience, or to establish compatibility aliases.)
But if it were, clearly that's a breaking change, and the library bumps the major version and clients do their usual dependency version management. As you note, type hints work similarly. And "explicitly calling into typing APIs" is more common than you might think; https://pypistats.org/packages/pydantic exists pretty much to do exactly this. It didn't cause major problems.
> Import statements fundamentally have side effects, and when and how these side effects are applied will cause mysterious breakages that will keep people up for many nights.
They do have side effects that can be arbitrarily complex. But someone who opts in to changing import timing and encounters a difficult bug can just roll back the changes. It shouldn't cause extended debugging sessions unless someone really needs the benefits of the deferral. And people in that situation will have been hand-rolling their own workarounds anyway.
> Too many people in this thread hold the view of "importing {pandas, numpy, my weird module that is more tangled than an eight-player game of Twister} takes too long and I will gladly support anything that makes them faster".
I don't think they're under the impression that this necessarily makes things faster. Maybe I haven't seen the same comments you have.
Deferring imports absolutely would allow, for example, pip to do trivial tasks faster — because it could avoid importing unnecessary things at all. As things currently stand, a huge fraction of the vendored codebase will get imported pretty much no matter what. It's analogous to tree shaking, but implicitly, at runtime and without actually removing code.
Yes, this could be deferred to explicitly chosen times to get more or less the same benefit. It would also be more work.
oh, you want a "break my libraries" flag? :D
seriously, in theory lazy imports may be "transparent" for common use cases, but I've seen too many modules rely on the side effects of importing, so I understand why they needed to make this a "double opt-in" feature
Same is true for C++.
In this specific case, I think a lazy load directive isn’t a bad addition. But one does need to be careful about adding new language features just because you have an active community.
> Has ongoing performance overhead on every attribute access.
I would have expected so, but in my testing it seems like the lazy load does some kind of magic to replace the proxy with the real thing. I haven't properly dug into it, though. It appears this point is removed in the live version (https://peps.python.org/pep-0810).
> Doesn’t work well with from ... import statements.
Hmm. The PEP doesn't seem to explain how reification works in this case. Per the above it's a solved problem for modules; I guess for the from-imports it could be made to work essentially the same way. Presumably this involves the proxy holding a reference to the namespace where the import occurred. That probably has a lot to do with restricting the syntax to top level. (Which is the opposite of how we've seen soft keywords used before!)
> Requires verbose setup code for each lazy import.
> Less clear and standard than dedicated syntax.
If you want to use it in a fine-grained way, then sure.
# /// script
# dependencies = [
# "requests<3",
# "rich",
# ]
# ///
import requests
from rich.pretty import pprint
resp = requests.get("https://peps.python.org/api/peps.json")
data = resp.json()
pprint([(k, v["title"]) for k, v in data.items()][:10])

I don't want to lose multiple hours debugging why something went wrong because I am using three versions of numpy and seven of torch at the same time and there was a mixup.
https://pep-previews--4622.org.readthedocs.build/pep-0810/#f...
Q: How does this differ from the rejected PEP 690?
A: PEP 810 takes an explicit, opt-in approach instead of PEP 690’s implicit global approach. The key differences are:
Explicit syntax: lazy import foo clearly marks which imports are lazy.
Local scope: Laziness only affects the specific import statement, not cascading to dependencies.
Simpler implementation: Uses proxy objects instead of modifying core dictionary behavior.
If people do not run such upgrade scripts, it must be documented better.
Python is free. In order to also stay elegant, they should say to their users: "We expect you to run an upgrade script on your code once per Python upgrade."
So, they agreed on a common system of linting error codes? Is that documented somewhere?
gives me ten minutes to edit (rewrite if i’m honest) before other people see it.
They took a decade to solidify that. At some point, you have to balance evolution and stability. For a language as popular as Python, you cannot break the world every week, but you can't stay put for 5 years either.
But I don’t think I really agree, the extensible annotation syntaxes they mention always feel clunky and awkward to me. For a first-party language feature (especially used as often as this will be), I think dedicated syntax seems right.
Exactly. This gives you the flexibility to distribute a complex package across multiple locations.
> Often when you run Python you don't even have a package path.
Any time you successfully import foo.bar, you necessarily have imported foo (because bar is an attribute of that object!), and therefore bar can `from . import` its siblings.
> Using `importlib` is a horrible hack that breaks basically all tooling. You very very obviously are not supposed to do that.
It is exactly as obvious (and true) that you are not "supposed to", in the exact same sense, directly specify where on disk the source code file you want to import is. After all, this constrains the import process to use a source code file. (Similarly if you specify a .pyc directly.) Your relative path doesn't necessarily make any sense after you have packaged and distributed your code and someone else has installed it. It definitely doesn't make any sense if you pack all your modules into a zipapp.
Regarding risks in practice:
> Libraries such as PyTorch, Numba, NumPy, and SciPy, among others, did not seamlessly align with the deferred module loading approach. These libraries often rely on import side effects and other patterns that do not play well with Lazy Imports. The order in which Python imports could change or be postponed, often led to side effects failing to register classes, functions, and operations correctly. This required painstaking troubleshooting to identify and address import cycles and discrepancies.
This isn't precisely the scenario I described above, but it is a concrete example of how deferred imports can cause issues that are difficult to debug.
Regarding performance benefits:
> At Meta, the quest for faster model training has yielded an exciting milestone: the adoption of Lazy Imports and the Python Cinder runtime. ... we’ve been able to significantly improve our model training times, as well as our overall developer experience (DevX) by adopting Lazy Imports and the Python Cinder runtime.
Right now in Python, you can move an import statement inside a function. Lazy imports at the top level are not needed. All lazy imports do is make you think less about what you are writing. If you like that, then just vibe code all of your stuff and leave the language spec alone.
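For comparison, the existing function-level pattern this comment describes looks like the following (json stands in for a heavy dependency):

```python
def render_report(data):
    # The dependency is imported only when the function actually runs,
    # so importing the enclosing module stays cheap.
    import json
    return json.dumps(data)

out = render_report({"rows": 3})
```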
https://llm.datasette.io/en/stable/plugins/plugin-hooks.html...
You got three other responses before me all pointing at uv. They are all wrong, because uv did not introduce this functionality to the Python ecosystem. It is a standard defined by https://peps.python.org/pep-0723/, implemented by multiple other tools, notably pipx.
uv venv --seed --python=3.12 && source .venv/bin/activate && pip3 install requests && …
Probably because PEP 8 says
> Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants
from lazy import make_lazy
from package import module @make_lazy @local @nogil
Let's say this syntax gets introduced in Python 3.16. The @nogil feature can be introduced in 3.17. If such code is running in Python 3.16, the @nogil marker will be ignored.
The problem with new keywords is that you have to stick to the newest Python version every time a new keyword is added. Older Python versions will give a syntax error. It's a big problem for libraries. You need to wait 3-5 years before adding it to a library. There are a lot of people who still use Python 3.8 from 2019.
That's not to say this PEP should not be accepted. One could always apply a no-lazy-imports style rule or disable it via global lazy import control.
https://peps.python.org/pep-0810/#global-lazy-imports-contro...
This is an assertion that has absolutely no reasoning behind it. I'm not saying I disagree; I'm just saying there is a time and a place for importlib.
IME circular import errors aren't due to poor organization; they're due to an arbitrary restriction Python has.
But that's rare, and could be handled with existing workarounds.
Normally, a module needs to be eagerly imported if and only if it has side effects.
What if we have a program where one feature works only when lazy imports are enabled and one feature only when lazy imports are disabled?
This is not a contrived concern. Let’s say I’m a maintainer of an open-source library and I choose to use lazy imports in my library. Because I’m volunteering my time, I don’t test whether my code works with eager imports.
Now, let’s say someone comes and builds an application on top of this library. It doesn’t work with lazy imports for some unknown reason. If they reach for a “force all imports” flag, their application might break in another mysterious way because the code they depend on is not built to work with eager imports. And even if my dependency doesn’t break, what about all the other packages the application may depend on?
The only solution here would be for the maintainer to ensure that their code works with both lazy and eager imports. However, this imposes a high maintenance cost and is part of the reason why PEP 690 was rejected. (And if your proposed solution was “don’t use libraries made by random strangers on the Internet”, boy do I have news for you...)
My point is that many things _will_ break if migrated to lazy imports. Whether they should have been written in Python in the first place is a separate question that isn’t relevant to this discussion.
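To make the breakage concrete, here is a minimal sketch (the module is simulated with exec; all names are invented) of the registration-at-import pattern that silently stops working when a module body is deferred:

```python
import types

REGISTRY = {}

# Simulate a plugin module whose top-level code registers a handler:
# an import-time side effect.
plugin = types.ModuleType("myplugin")
plugin.REGISTRY = REGISTRY
exec("REGISTRY['csv'] = 'CsvPlugin'", plugin.__dict__)

# With an eager import, registration happens as soon as the module executes.
# Under a lazy import, the body would not run until some name from the
# module is *used*; code that only consults REGISTRY never touches the
# module, so the plugin would never register.
assert REGISTRY == {"csv": "CsvPlugin"}
```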
This can already happen with non-top-level imports, so it is not necessarily a new issue, but it could become more prevalent if there is an overall uptake in this feature for optional dependencies.
>>> import nonexistent_module
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
import nonexistent_module
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1322, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1262, in _find_spec
File "<python-input-0>", line 8, in find_spec
base.loader = LazyLoader(base.loader)
^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'loader'
The implementation should probably convert that exception back to ImportError for you, but the point is that the absence of an implementation can still be detected eagerly while the actual loading occurs lazily.
However, it isn't trivial. The first problem that comes to my mind:
Module a first imports somelib>=1.2.0 and then imports b, and b in turn requires somelib>1.2.1, with both versions available. Will they resolve to the same module, or will I have a mess from combining them?
I banished the worst/heaviest libraries to this list at my workplace and it's been really helpful at keeping startup times from regressing.
import numpy==2.1
And if numpy didn't expose a version number in a standard field (which could be agreed upon in a PEP), then it would just throw an import exception. It wouldn't break any old code. And only packages with that explicit field would support the pinned-version import.
And it wouldn't involve trying to extract and parse versions from older packages with some super spotty heuristics.
But it would make new code impossible to use with older versions of python, and older packages, but that's already the case.
Maybe the issue is with module name spacing?
You're making the common mistake of conflating how things currently work with how things could work if the responsible group agrees to change how things work. Something being the way it is right now is not the same as something else being "not possible".
In fact there's already many packages already defining __version__ at a package level.
https://packaging.python.org/en/latest/discussions/versionin...
Edit: What they are solving with uv is at the moment of standing up an environment, but you're more concerned about code-level protection, whereas they're more concerned about environment-setup protection for versioning.
Not possible? Come on.
Almost everyone already uses one of a small handful of conventional ways to specify it, e.g. the `__version__` attribute. It's long overdue that this be standardized so library versions can reliably be introspected at runtime.
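A best-effort introspection helper relying on that informal convention might look like this (a sketch; the helper name is invented):

```python
import importlib

def get_version(name):
    # Relies on the informal __version__ convention; returns None for
    # modules that don't follow it.
    mod = importlib.import_module(name)
    return getattr(mod, "__version__", None)
```

For installed distributions, importlib.metadata.version() is the standardized alternative, though it keys on distribution names rather than import names.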
Allowing multiple versions to be installed side-by-side and imported explicitly would be a massive improvement.
That sounds like it is absolutely fixable to me, but more of a matter of not having the will to fix it based on some kind of traditionalism. I've used python, a lot. But it is stuff like this that is just maddeningly broken for no good reason at all that has turned me away from it. So as long as I have any alternative I will avoid python because I've seen way too many accidents on account of stuff like this and many lost nights of debugging only to find out that an easily avoidable issue became - once again - the source of much headscratching.
I should be able to do "python foo.py" and everything should just work. foo.py should define what it wants and python should fetch it and provide it to foo. I should be able to do "pyc foo.py; ./foo" and everything should just work, dependencies balled up and statically included like Rust or Go. Even NodeJS can turn an entire project into one file to execute. That's what a modern language should look and work like.
The moment I see "--this --that" just to run the default version of something you've lost me. This is 2025.
My main complaint about Python async, though, is that because it is opt-in, I never know if I forgot a sync IO call somewhere that will block my worker. JS makes everything async by default, and there is effectively no chance of blocking.
I have zero concerns about this PEP and look forward to its implementation.
I have bad memories of using a network filesystem where my Python app's startup time was 5 or more seconds because of all the small file lookups for the import were really slow.
I fixed it by importing modules in functions, only when needed, so the time went down to less than a second. (It was even better using a zipimport, but for other reasons we didn't use that option.)
If I understand things correctly, your code would have the same several-second delay as it tries to resolve everything?
Although I guess that doesn't work in all cases, like defining foreign key relationships when using an orm (like sqlalchemy) for example. But in the orm case, the way to get around that is... lazy resolution :^)
This only helps for those that do, and it hasn't been any kind of standard the entire time. But more importantly, that helps only the tiniest possible bit with resolving the "import a specific version" syntax. All it solves is letting the file-based import system know whether it found the right folder for the requested (or worse: "a compatible") version of the importable package. It doesn't solve finding the right one if this one is wrong; it doesn't determine how the different versions of the same package are positioned relative to each other in the environment (so that "finding the right one" can work properly); it doesn't solve provisioning the right version. And most importantly, it doesn't solve what happens when there are multiple requests for different versions of the same module at runtime, which incidentally could happen arbitrarily far apart in time, and also the semantics of the code may depend on the same object being used to represent the module in both places.
Yes, this part actually is as simple as you imagine. But that means in practical terms that you can't use the feature until at best the next release of Numpy. If you want to say for example that you need at least version 2 (breaking changes, after all), well, there are already 18 published packages that meet that requirement but are unable to communicate that in the new syntax. This can to my understanding be fixed with post releases, but it's a serious burden for maintainers and most projects are just not going to do that sort of thing (it bloats PyPI, too).
And more importantly, that's only one of many problems that need to be solved. And by far the simplest of them.
Yes, you absolutely can create a language that has syntax otherwise identical to Python (or at least damn close) which implements a feature like this. No, you cannot just replace Python with it. If the Python ecosystem just accepted that clearly better things were clearly better, and started using them promptly, we wouldn't have https://pypi.org/project/six/ making it onto https://pypistats.org/top (see also https://sethmlarson.dev/winning-a-bet-about-six-the-python-2...).
Do you know what happens when Python does summon the will to fix obviously broken things? The Python 2->3 migration happens. (Perl 6 didn't manage any better, either.) Now "Python 3 is the brand" and the idea of version 4 can only ever be entertained as a joke.
#!/usr/bin/env -S uv run --script
#
# /// script
# requires-python = ">=3.12"
# dependencies = ["httpx"]
# ///
import httpx
print(httpx.get("https://example.com"))
https://docs.astral.sh/uv/guides/scripts/#improving-reproduc...
There are also projects, py2exe and PyInstaller IIRC, and others, that try to get the whole static-binary thing going.
You’re trying imo to make Python into Golang and if you’re wanting to do that just use Golang. That seems like a far better use of your time.
Curious how much reliable Python code was written before those tools existed.
For that matter, curious how much was written before the `types` and `typing` standard library modules appeared.
Some situations could be improved by allowing multiple library versions, but this would introduce new headaches elsewhere. I certainly do not want my program to have N copies of numpy, PyTorch, etc. because some intermediate library declares a just-so dependency tree.
In fact, all the code you see in the module is "side effects", in a sense. A `class` body, for example, has to actually run at import time, creating the class object and attaching it as an attribute of the module object. Similarly for functions. Even a simple assignment of a constant actually has to run at module import. And all of these things add up.
Further, if there isn't already cached bytecode available for the module, by default it will be written to disk as part of the import process. That's inarguably a side effect.
(Trying to do "fallback" logic with lazily-loaded modules is also susceptible to race conditions, of course. What if someone defines the module before you try to use it?)
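The point that a module body, including class bodies, really executes at import time can be demonstrated directly (the demo module source below is invented):

```python
import types

src = '''
class Widget:
    # This class body executes while the module is being imported...
    DEFAULT_SIZE = 2 * 21   # ...so even "constants" are computed then.

ANSWER = Widget.DEFAULT_SIZE
'''

# Execute the source as if it were a module being imported.
mod = types.ModuleType("demo")
exec(compile(src, "demo", "exec"), mod.__dict__)
assert mod.ANSWER == 42
```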
Nobody is claiming this is a trivial problem to solve, but it's also not an impossible problem. Other languages have managed to figure out how to achieve this while still maintaining backwards compatibility.
Modules being singletons is not a problem in itself I think? This could work like having two versions of the same library in two modules named like library_1_23 and library_1_24. In my program I could hypothetically have imports like `import library_1_23 as library` in one file, and `import library_1_24 as library` in another file. Both versions would be singletons. Then writing `import library==1.23` could be working like syntax sugar for `import library_1_23 as library`.
Of course, having two different versions of a library running in the same program could be a nightmare, so all of that may not be a good idea at all, but maybe not because of module singletons.
That may well be the case, and Python's development processes would definitely tend to assume the same. That has a lot to do with why PEP 690 was rejected, and why this proposal is opt-in even though many people are concerned that a lot of projects will "opt in everywhere" and create a lot of noise.
Believe it or not, the process is actually very conservative. People complaining about the "churn" caused by deprecations and removals seem not to have any concept of how few suggestions actually get implemented, and how many are rejected (including ones that perennially occur to many new users). A browse through "ideas" forum where new pre-PEP ideas are commonly pitched (https://discuss.python.org/c/ideas/6) gives one the impression of a leisurely stroll through a graveyard.
And many people (including myself) can tell you that you'll often be put through a run-around: if your idea is good and can be implemented, then surely it falls on you to make and publicize (!) a third-party package, to prove the demand and the community support for your specific implementation; but if you somehow get there, now it's trivial for people to install support, so it doesn't need to be in the standard library (cf. Requests).
The charitable interpretation of this proposed feature is that it would handle this case exactly as well as the current situation, if the situation isn't improved by the feature.
This feature says nothing about the automatic installation of libraries.
This feature is absolutely not about supporting multiple simultaneous versions of a library at runtime.
In the situation you describe, there would have to be a dependency resolution, just like there is when installing the deps for a program today. It would be good enough for me if "first import wins".
Sure, you can declare global variables and run anything in a module file's global scope (outside function and class bodies), but even that 'global' scope is just an illusion, and everything declared there, as you said yourself, is scoped to the module's namespace.
(And you can't leak those 'globals' when importing the module unless you explicitly do so with 'from foo import *'. Think of Python's import as eval, but safer, because it doesn't leak the results of the module execution.)
So for a module to have side-effect (for me) it would either:
- Change/Create attributes from other modules
- Call some other function that does side-effect (reflection builtins? IO stuff)
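The "import as safer eval" point can be shown with a throwaway module object (names invented):

```python
import types

# Run some module-like source against a fresh module namespace.
mod = types.ModuleType("m")
exec("x = 1\ndef f(): return x + 1", mod.__dict__)

# Everything the module body defined lives on the module object,
# not in the importer's namespace.
assert mod.x == 1 and mod.f() == 2
assert "x" not in globals()
```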
I'm happy with the solution I have now, which is to encourage plugin authors not to import PyTorch or other heavy dependencies at the root level of their plugin code.
Author:
Pablo Galindo Salgado, Germán Méndez Bravo <german.mb at gmail.com>, Thomas Wouters, Dino Viehland, Brittany Reynoso, Noah Kim, Tim Stumbaugh
Discussions-To:
Status:
Accepted
Type:
Standards Track
Created:
02-Oct-2025
Python-Version:
3.15
Post-History:
Resolution:
This PEP introduces syntax for lazy imports as an explicit language feature:
lazy import json
lazy from json import dumps
Lazy imports defer the loading and execution of a module until the first time the imported name is used, in contrast to ‘normal’ imports, which eagerly load and execute a module at the point of the import statement.
By allowing developers to mark individual imports as lazy with explicit syntax, Python programs can reduce startup time, memory usage, and unnecessary work. This is particularly beneficial for command-line tools, test suites, and applications with large dependency graphs.
This proposal preserves full backwards compatibility: normal import statements remain unchanged, and lazy imports are enabled only where explicitly requested.
The dominant convention in Python code is to place all imports at the module level, typically at the beginning of the file. This avoids repetition, makes import dependencies clear and minimizes runtime overhead by only evaluating an import statement once per module.
A major drawback with this approach is that importing the first module for an execution of Python (the “main” module) often triggers an immediate cascade of imports, and optimistically loads many dependencies that may never be used. The effect is especially costly for command-line tools with multiple subcommands, where even running the command with --help can load dozens of unnecessary modules and take several seconds. This basic example demonstrates what must be loaded just to get helpful feedback to the user on how to run the program at all. Inefficiently, the user incurs this overhead again when they figure out the command they want and invoke the program “for real.”
A somewhat common way to delay imports is to move the imports into functions (inline imports), but this practice requires more work to implement and maintain, and can be subverted by a single inadvertent top-level import. Additionally, it obfuscates the full set of dependencies for a module. Analysis of the Python standard library shows that approximately 17% of all imports outside tests (nearly 3500 total imports across 730 files) are already placed inside functions or methods specifically to defer their execution. This demonstrates that developers are already manually implementing lazy imports in performance-sensitive code, but doing so requires scattering imports throughout the codebase and makes the full dependency graph harder to understand at a glance.
The standard library provides the LazyLoader class to solve some of these inefficiency problems. It permits imports at the module level to work mostly like inline imports do. Many scientific Python libraries have adopted a similar pattern, formalized in SPEC 1. There’s also the third-party lazy_loader package, yet another implementation of lazy imports. Imports used solely for static type checking are another source of potentially unneeded imports, and there are similarly disparate approaches to minimizing the overhead. The various approaches used here to defer or remove eager imports do not cover all potential use-cases for a general lazy import mechanism. There is no clear standard, and there are several drawbacks including runtime overhead in unexpected places, or worse runtime introspection.
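A sketch of the current-day idiom such libraries build on, PEP 562 module-level __getattr__ (fakepkg and its jsonish attribute are invented for the demo; the real json module stands in for a heavy submodule):

```python
import importlib
import sys
import types

pkg = types.ModuleType("fakepkg")

def _module_getattr(name):
    # Called on attribute-access misses; the heavy import happens here,
    # on first use, not when fakepkg itself is imported.
    if name == "jsonish":
        return importlib.import_module("json")
    raise AttributeError(f"module 'fakepkg' has no attribute {name!r}")

pkg.__getattr__ = _module_getattr
sys.modules["fakepkg"] = pkg

import fakepkg
out = fakepkg.jsonish.dumps({"a": 1})  # triggers the deferred import
```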
This proposal introduces syntax for lazy imports with a design that is local, explicit, controlled, and granular. Each of these qualities is essential to making the feature predictable and safe to use in practice.
The behavior is local: laziness applies only to the specific import marked with the lazy keyword, and it does not cascade recursively into other imports. This ensures that developers can reason about the effect of laziness by looking only at the line of code in front of them, without worrying about whether imported modules will themselves behave differently. A lazy import is an isolated decision each time it is used, not a global shift in semantics.
The semantics are explicit. When a name is imported lazily, the binding is created in the importing module immediately, but the target module is not loaded until the first time the name is accessed. After this point, the binding is indistinguishable from one created by a normal import. This clarity reduces surprises and makes the feature accessible to developers who may not be deeply familiar with Python’s import machinery.
Lazy imports are controlled, in the sense that lazy loading is only triggered by the importing code itself. In the general case, a library will only experience lazy imports if its own authors choose to mark them as such. This avoids shifting responsibility onto downstream users and prevents accidental surprises in library behavior. Since library authors typically manage their own import subgraphs, they retain predictable control over when and how laziness is applied.
The mechanism is also granular. It is introduced through explicit syntax on individual imports, rather than a global flag or implicit setting. This allows developers to adopt it incrementally, starting with the most performance-sensitive areas of a codebase. As this feature is introduced to the community, we want to make the experience of onboarding optional, progressive, and adaptable to the needs of each project.
Lazy imports provide several concrete advantages:
Annotation-only imports no longer need if TYPE_CHECKING: blocks [1]. With lazy imports, annotation-only imports impose no runtime penalty, eliminating the need for such guards and making annotated codebases cleaner.
The design of this proposal is centered on clarity, predictability, and ease of adoption. Each decision was made to ensure that lazy imports provide tangible benefits without introducing unnecessary complexity into the language or its runtime.
It is also worth noting that while this PEP outlines one specific approach, we list alternate implementation strategies for some of the core aspects and semantics of the proposal. If the community expresses a strong preference for a different technical path that still preserves the same core semantics or there is fundamental disagreement over the specific option, we have included the brainstorming we have already completed in preparation for this proposal as reference.
The choice to introduce a new lazy keyword reflects the need for explicit syntax. Lazy imports have different semantics from normal imports: errors and side effects occur at first use rather than at the import statement. This semantic difference makes it critical that laziness is visible at the import site itself, not hidden in global configuration or distant module-level declarations. The lazy keyword provides local reasoning about import behavior, avoiding the need to search elsewhere in the code to understand whether an import is deferred. The rest of the import semantics remain unchanged: the same import machinery, module finding, and loading mechanisms are used.
Another important decision is to represent lazy imports with proxy objects in the module’s namespace, rather than by modifying dictionary lookup. Earlier approaches experimented with embedding laziness into dictionaries, but this blurred abstractions and risked affecting unrelated parts of the runtime. The dictionary is a fundamental data structure in Python – literally every object is built on top of dicts – and adding hooks to dictionaries would prevent critical optimizations and complicate the entire runtime. The proxy approach is simpler: it behaves like a placeholder until first use, at which point it resolves the import and rebinds the name. From then on, the binding is indistinguishable from a normal import. This makes the mechanism easy to explain and keeps the rest of the interpreter unchanged.
Compatibility for library authors was also a key concern. Many maintainers need a migration path that allows them to support both new and old versions of Python at once. For this reason, the proposal includes the __lazy_modules__ global as a transitional mechanism. A module can declare which imports should be treated as lazy (by listing the module names as strings), and on Python 3.15 or later those imports will become lazy automatically, as if they were imported with the lazy keyword. On earlier versions the declaration is ignored, leaving imports eager. This gives authors a practical bridge until they can rely on the keyword as the canonical syntax.
Finally, the feature is designed to be adopted incrementally. Nothing changes unless a developer explicitly opts in, and adoption can begin with just a few imports in performance-sensitive areas. This mirrors the experience of gradual typing in Python: a mechanism that can be introduced progressively, without forcing projects to commit globally from day one. Notably, the adoption can also be done from the “outside in”, permitting CLI authors to introduce lazy imports and speed up user-facing tools, without requiring changes to every library the tool might use.
While the primary mechanism is the explicit lazy import syntax, there are scenarios – such as large applications, testing environments, or frameworks – where enabling laziness consistently across many modules provides the most benefit. A global switch makes it easy to experiment with or enforce consistent behavior, while still working in combination with the filtering API to respect exclusions or tool-specific configuration. This ensures that global adoption can be practical without reducing flexibility or control.
A new soft keyword lazy is added. A soft keyword is a context-sensitive keyword that only has special meaning in specific grammatical contexts; elsewhere it can be used as a regular identifier (e.g., as a variable name). The lazy keyword only has special meaning when it appears before import statements:
import_name:
    | 'lazy'? 'import' dotted_as_names
import_from:
    | 'lazy'? 'from' ('.' | '...')* dotted_name 'import' import_from_targets
    | 'lazy'? 'from' ('.' | '...')+ 'import' import_from_targets
The soft keyword is only allowed at the global (module) level, not inside functions, class bodies, try blocks, or import *. Import statements that use the soft keyword are potentially lazy. Imports that can’t be lazy are unaffected by the global lazy imports flag, and instead are always eager. Additionally, from __future__ import statements cannot be lazy.
Examples of syntax errors:
# SyntaxError: lazy import not allowed inside functions
def foo():
    lazy import json

# SyntaxError: lazy import not allowed inside classes
class Bar:
    lazy import json

# SyntaxError: lazy import not allowed inside try/except blocks
try:
    lazy import json
except ImportError:
    pass

# SyntaxError: lazy from ... import * is not allowed
lazy from json import *

# SyntaxError: lazy from __future__ import is not allowed
lazy from __future__ import annotations
When the lazy keyword is used, the import becomes potentially lazy (see Lazy imports filter for advanced override mechanisms). The module is not loaded immediately at the import statement; instead, a lazy proxy object is created and bound to the name. The actual module is loaded on first use of that name.
When using lazy from ... import, each imported name is bound to a lazy proxy object. The first access to any of these names triggers loading of the entire module and reifies only that specific name to its actual value. Other names remain as lazy proxies until they are accessed. The interpreter’s adaptive specialization will optimize away the lazy checks after a few accesses.
Example with lazy import:
import sys
lazy import json
print('json' in sys.modules) # False - module not loaded yet
# First use triggers loading
result = json.dumps({"hello": "world"})
print('json' in sys.modules) # True - now loaded
Example with lazy from ... import:
import sys
lazy from json import dumps, loads
print('json' in sys.modules) # False - module not loaded yet
# First use of 'dumps' triggers loading json and reifies ONLY 'dumps'
result = dumps({"hello": "world"})
print('json' in sys.modules) # True - module now loaded
# Accessing 'loads' now reifies it (json already loaded, no re-import)
data = loads(result)
A module may define a __lazy_modules__ variable in its global scope, which specifies which module names should be made potentially lazy (as if the lazy keyword was used). This variable is checked on each import statement to determine whether the import should be made potentially lazy. The check is performed by calling __contains__ on the __lazy_modules__ object with a string containing the fully qualified module name being imported. Typically, __lazy_modules__ is a set of fully qualified module name strings. When a module is made lazy this way, from-imports using that module are also lazy, but not necessarily imports of sub-modules.
The normal (non-lazy) import statement will check the global lazy imports flag. If it is “all”, all imports are potentially lazy (except for imports that can’t be lazy, as mentioned above.)
Example:
__lazy_modules__ = ["json"]
import json
print('json' in sys.modules) # False
result = json.dumps({"hello": "world"})
print('json' in sys.modules) # True
If the global lazy imports flag is set to “none”, no potentially lazy import is ever imported lazily, and the behavior is equivalent to a regular import statement: the import is eager (as if the lazy keyword was not used).
Finally, the application may use a custom filter function on all potentially lazy imports to determine if they should be lazy or not (this is an advanced feature, see Lazy imports filter). If a filter function is set, it will be called with the name of the module doing the import, the name of the module being imported, and (if applicable) the fromlist. An import remains lazy only if the filter function returns True. If no lazy import filter is set, all potentially lazy imports are lazy.
The lazy import mechanism does not apply to .pth files processed by the site module. While .pth files have special handling for lines that begin with import followed by a space or tab, this special handling will not be adapted to support lazy imports. Imports specified in .pth files remain eager as they always have been.
Lazy modules, as well as names lazy imported from modules, are represented by types.LazyImportType instances, which are resolved to the real object (reified) before they can be used. This reification is usually done automatically (see below), but can also be done by calling the lazy object’s resolve method.
When an import is lazy, __lazy_import__ is called instead of __import__. __lazy_import__ has the same function signature as __import__. It adds the module name to sys.lazy_modules, a set of fully-qualified module names which have been lazily imported at some point (primarily for diagnostics and introspection), and returns a types.LazyImportType object for the module.
The implementation of from ... import (the IMPORT_FROM bytecode implementation) checks if the module it’s fetching from is a lazy module object, and if so, returns a types.LazyImportType for each name instead.
The end result of this process is that lazy imports (regardless of how they are enabled) result in lazy objects being assigned to global variables.
Lazy module objects do not appear in sys.modules, they’re just listed in the sys.lazy_modules set. Under normal operation lazy objects should only end up stored in global variables, and the common ways to access those variables (regular variable access, module attributes) will resolve lazy imports (reify) and replace them when they’re accessed.
It is still possible to expose lazy objects through other means, like debuggers. This is not considered a problem.
When a lazy object is used, it needs to be reified. This means resolving the import at that point in the program and replacing the lazy object with the concrete one. Reification imports the module at that point in the program. Notably, reification still calls __import__ to resolve the import, which uses the state of the import system (e.g. sys.path, sys.meta_path, sys.path_hooks and __import__) at reification time, not the state when the lazy import statement was evaluated.
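This reification-time behavior can be simulated with today's import machinery. In the sketch below (the late_module name and the deferred helper are illustrative, not part of this PEP), a path entry added after the "import" point is still honored when resolution finally happens:

```python
import importlib
import pathlib
import sys
import tempfile

# Create a module on disk that is not yet importable
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "late_module.py").write_text("VALUE = 42\n")

def deferred():
    # Stand-in for a lazy proxy's resolve step: the import runs here,
    # using the import system state at call time.
    return importlib.import_module("late_module")

# tmp is not on sys.path yet; resolving at this point would fail.
sys.path.append(tmp)   # import system state changes before first use
mod = deferred()       # resolution consults the *current* sys.path
print(mod.VALUE)       # 42
```

The same principle applies to sys.meta_path entries and a replaced __import__: whatever is installed at reification time wins.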
When the module is reified, it’s removed from sys.lazy_modules (even if there are still other unreified lazy references to it). When a package is reified and submodules in the package were also previously lazily imported, those submodules are not automatically reified but they are added to the reified package’s globals (unless the package already assigned something else to the name of the submodule).
If reification fails (e.g., due to an ImportError), the lazy object is not reified or replaced. Subsequent uses of the lazy object will re-try the reification. Exceptions that happen during reification are raised as normal, but the exception is enhanced with chaining to show both where the lazy import was defined and where it was accessed (even though it propagates from the code that triggered reification). This provides clear debugging information:
# app.py - has a typo in the import
lazy from json import dumsp  # Typo: should be 'dumps'
print("App started successfully")
print("Processing data...")
# Error occurs here on first use
result = dumsp({"key": "value"})
The traceback shows both locations:
App started successfully
Processing data...
Traceback (most recent call last):
  File "app.py", line 2, in <module>
    lazy from json import dumsp
ImportError: lazy import of 'json.dumsp' raised an exception during resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "app.py", line 8, in <module>
    result = dumsp({"key": "value"})
             ^^^^^
ImportError: cannot import name 'dumsp' from 'json'. Did you mean: 'dump'?
This exception chaining clearly shows both where the lazy import was declared (the import statement) and where the failing name was first used.
Reification does not automatically occur when a module that was previously lazily imported is subsequently eagerly imported. Reification does not immediately resolve all lazy objects (e.g. lazy from statements) that referenced the module. It only resolves the lazy object being accessed.
Accessing a lazy object (from a global variable or a module attribute) reifies the object.
However, calling globals() or accessing a module’s __dict__ does not trigger reification – they return the module’s dictionary, and accessing lazy objects through that dictionary still returns lazy proxy objects that need to be manually reified before use. A lazy object can be resolved explicitly by calling its resolve method. Calling dir() at the global scope will not reify the globals, nor will calling dir(mod) (through special-casing in mod.__dir__). Other, more indirect ways of accessing arbitrary globals (e.g. inspecting frame.f_globals) also do not reify the objects.
Example using globals() and __dict__:
# my_module.py
import sys
lazy import json
# Calling globals() does NOT trigger reification
g = globals()
print('json' in sys.modules)  # False - still lazy
print(type(g['json']))        # <class 'LazyImport'>
# Accessing the module's __dict__ also does NOT trigger reification
d = sys.modules[__name__].__dict__
print(type(d['json']))        # <class 'LazyImport'>
# Explicitly reify using the resolve() method
resolved = g['json'].resolve()
print(type(resolved))         # <class 'module'>
print('json' in sys.modules)  # True - now loaded
A reference implementation is available at: https://github.com/LazyImportsCabal/cpython/tree/lazy
A demo is available (not necessarily synced with the latest PEP) for evaluation purposes at: https://lazy-import-demo.pages.dev/
Lazy imports are implemented through modifications to four bytecode instructions: IMPORT_NAME, IMPORT_FROM, LOAD_GLOBAL, and LOAD_NAME.
The lazy syntax sets a flag in the IMPORT_NAME instruction’s oparg (oparg & 0x01). The interpreter checks this flag and calls _PyEval_LazyImportName() instead of _PyEval_ImportName(), creating a lazy import object rather than executing the import immediately. The IMPORT_FROM instruction checks whether its source is a lazy import (PyLazyImport_CheckExact()) and creates a lazy object for the attribute rather than accessing it immediately.
When a lazy object is accessed, it must be reified. The LOAD_GLOBAL instruction (used in function scopes) and LOAD_NAME instruction (used at module and class level) both check whether the object being loaded is a lazy import. If so, they call _PyImport_LoadLazyImportTstate() to perform the actual import and store the module in sys.modules.
This check incurs a very small cost on each access. However, Python’s adaptive interpreter can specialize LOAD_GLOBAL after observing that a lazy import has been reified. After several executions, LOAD_GLOBAL becomes LOAD_GLOBAL_MODULE, which accesses the module dictionary directly without checking for lazy imports.
Examples of the bytecode generated:
lazy import json # IMPORT_NAME with flag set
Generates:
IMPORT_NAME 1 (json + lazy)
lazy from json import dumps # IMPORT_NAME + IMPORT_FROM
Generates:
IMPORT_NAME 1 (json + lazy)
IMPORT_FROM 1 (dumps)
lazy import json
x = json  # Module-level access
Generates:
LOAD_NAME 0 (json)
lazy import json
def use_json():
    return json.dumps({})  # Function scope
Before any calls:
LOAD_GLOBAL 0 (json)
LOAD_ATTR 2 (dumps)
After several calls, LOAD_GLOBAL specializes to LOAD_GLOBAL_MODULE:
LOAD_GLOBAL_MODULE 0 (json)
LOAD_ATTR_MODULE 2 (dumps)
Note: This is an advanced feature. These are intended for specialized/advanced users who need fine-grained control over lazy import behavior when using the global flags. Library developers are discouraged from using these functions as they can affect the runtime execution of applications (similar to ``sys.setrecursionlimit()``, ``sys.setswitchinterval()``, or ``gc.set_threshold()``).
This PEP adds the following new functions to the sys module to manage the lazy imports filter:
- sys.set_lazy_imports_filter(func) - Sets the filter function. If func=None then the import filter is removed. The func parameter must have the signature: func(importer: str, name: str, fromlist: tuple[str, ...] | None) -> bool
- sys.get_lazy_imports_filter() - Returns the currently installed filter function, or None if no filter is set.
- sys.set_lazy_imports(mode, /) - Programmatic API for controlling lazy imports at runtime. The mode parameter can be "normal" (respect lazy keyword only), "all" (force all imports to be potentially lazy), or "none" (force all imports to be eager).
- sys.get_lazy_imports() - Returns the current lazy imports mode as a string: "normal", "all", or "none".

The filter function is called for every potentially lazy import, and must return True if the import should be lazy. This allows fine-grained control over which imports should be lazy, useful for excluding modules with known side-effect dependencies or registration patterns. The filter function is called at the point of execution of the lazy import or lazy from ... import statement, not at the point of reification. The filter function may be called concurrently.
The filter mechanism serves as a foundation that tools, debuggers, linters, and other ecosystem utilities can leverage to provide better lazy import experiences. For example, static analysis tools could detect modules with side effects and automatically configure appropriate filters. In the future (out of scope for this PEP), this foundation may enable better ways to declaratively specify which modules are safe for lazy importing, such as package metadata, type stubs with lazy-safety annotations, or configuration files. The current filter API is designed to be flexible enough to accommodate such future enhancements without requiring changes to the core language specification.
Example:
import sys
def exclude_side_effect_modules(importer, name, fromlist):
    """Filter function to exclude modules with import-time side effects.

    Args:
        importer: Name of the module doing the import
        name: Name of the module being imported
        fromlist: Tuple of names being imported (for 'from' imports), or None

    Returns:
        True to allow lazy import, False to force eager import
    """
    # Modules known to have important import-time side effects
    side_effect_modules = {'legacy_plugin_system', 'metrics_collector'}
    if name in side_effect_modules:
        return False  # Force eager import
    return True  # Allow lazy import
# Install the filter
sys.set_lazy_imports_filter(exclude_side_effect_modules)
# These imports are checked by the filter
lazy import data_processor        # Filter returns True -> stays lazy
lazy import legacy_plugin_system  # Filter returns False -> imported eagerly
print('data_processor' in sys.modules)        # False - still lazy
print('legacy_plugin_system' in sys.modules)  # True - loaded eagerly
# First use of data_processor triggers loading
result = data_processor.transform(data)
print('data_processor' in sys.modules)  # True - now loaded
Note: This is an advanced feature. This is intended for application developers and framework authors who need to control lazy imports across their entire application. Library developers are discouraged from using the global activation mechanism as it can affect the runtime execution of applications (similar to ``sys.setrecursionlimit()``, ``sys.setswitchinterval()``, or ``gc.set_threshold()``).
The global lazy imports flag can be controlled through:
- -X lazy_imports=<mode> command-line option
- PYTHON_LAZY_IMPORTS=<mode> environment variable
- sys.set_lazy_imports(mode) function (primarily for testing)

The precedence order for setting the lazy imports mode follows the standard Python pattern: sys.set_lazy_imports() takes highest precedence, followed by -X lazy_imports=<mode>, then PYTHON_LAZY_IMPORTS=<mode>. If none are specified, the mode defaults to "normal".
Where <mode> can be:
- "normal" (or unset): Only explicitly marked lazy imports are lazy
- "all": All module-level imports (except in try blocks and import *) become potentially lazy
- "none": No imports are lazy, even those explicitly marked with the lazy keyword

When the global flag is set to "all", all imports at the global level of all modules are potentially lazy except for those inside a try block or any wild card (from ... import *) import.
If the global lazy imports flag is set to "none", no potentially lazy import is ever imported lazily, the import filter is never called, and the behavior is equivalent to a regular import statement: the import is eager (as if the lazy keyword was not used).
Python code can run the sys.set_lazy_imports() function to override the state of the global lazy imports flag inherited from the environment or CLI. This is especially useful if an application needs to ensure that all imports are evaluated eagerly, via sys.set_lazy_imports("none").
Lazy imports are opt-in. Existing programs continue to run unchanged unless a project explicitly enables laziness (via lazy syntax, __lazy_modules__, or an interpreter-wide switch).
- import and from ... import ... statements remain eager unless explicitly made potentially lazy by the local or global mechanisms provided.
- Imports performed via __import__() and importlib.import_module() remain eager.

These changes are limited to bindings explicitly made lazy:
Error timing. Exceptions that would have occurred during an eager import (for example ImportError or AttributeError for a missing member) now occur at the use of the lazy name.
# With eager import - error at import statement
import broken_module  # ImportError raised here
# With lazy import - error deferred
lazy import broken_module
print("Import succeeded")
broken_module.foo()  # ImportError raised here on use
Side-effect timing. Import-time side effects in lazily imported modules occur at first use of the binding, not at module import time.
Import order. Because modules are imported on first use, the order in which modules are imported may differ from how they appear in code.
Presence in ``sys.modules``. A lazily imported module does not appear in sys.modules until first use. After reification, it must appear in sys.modules. If some other code eagerly imports the same module before first use, the lazy binding resolves to that existing module object when it is first used.
Proxy visibility. Before first use, the bound name refers to a lazy proxy. Indirect introspection that touches the value may observe a proxy lazy object representation. After first use (provided the module was imported successfully), the name is rebound to the real object and becomes indistinguishable from an eager import.
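The sys.modules interaction can be modeled in today's Python. In this sketch, ToyLazy is a hypothetical stand-in for a lazy binding, not the real proxy type; its resolve step goes through the regular import system, which consults sys.modules first, so an eager import done elsewhere "wins" and both bindings share one module object:

```python
import importlib
import sys

class ToyLazy:
    # Hypothetical stand-in for a lazy binding (illustration only).
    def __init__(self, name):
        self._name = name

    def resolve(self):
        # importlib.import_module returns the existing sys.modules entry
        # if the module was already imported eagerly by other code.
        return importlib.import_module(self._name)

proxy = ToyLazy("json")         # binding created, nothing loaded by it
import json                     # someone else imports json eagerly
print(proxy.resolve() is json)  # True - both bindings share one module
```

After resolve(), the name is indistinguishable from an eager import, matching the proxy-visibility rule above.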
Reification follows the existing import-lock discipline. Exactly one thread performs the import and atomically rebinds the importing module’s global to the resolved object. Concurrent readers thereafter observe the real object.
Lazy imports are thread-safe and have no special considerations for free-threading. A module that would normally be imported in the main thread may be imported in a different thread if that thread triggers the first access to the lazy import. This is not a problem: the import lock ensures thread safety regardless of which thread performs the import.
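The "exactly one thread imports" discipline can be sketched in plain Python with double-checked locking. This is an illustration of the guarantee, not CPython's actual implementation (which relies on the import lock and atomic rebinding):

```python
import importlib
import threading

class ToyLazyProxy:
    # Minimal sketch: double-checked locking so exactly one thread
    # performs the import on first use; later calls take the fast path.
    def __init__(self, name):
        self._name = name
        self._module = None
        self._lock = threading.Lock()

    def resolve(self):
        if self._module is None:
            with self._lock:
                if self._module is None:  # re-check under the lock
                    self._module = importlib.import_module(self._name)
        return self._module

proxy = ToyLazyProxy("json")
results = []
threads = [threading.Thread(target=lambda: results.append(proxy.resolve()))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(m is results[0] for m in results))  # True - one shared module
```

Every thread observes either the unresolved proxy or the final module object, never a half-initialized state.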
Subinterpreters are supported. Each subinterpreter maintains its own sys.lazy_modules and import state, so lazy imports in one subinterpreter do not affect others.
Lazy imports have no measurable performance overhead. The implementation is designed to be performance-neutral for both code that uses lazy imports and code that doesn’t.
After reification (provided the import was successful), lazy imports have zero overhead. The adaptive interpreter specializes the bytecode (typically after 2-3 accesses), eliminating any checks. For example, LOAD_GLOBAL becomes LOAD_GLOBAL_MODULE, which directly accesses the module identically to normal imports.
The pyperformance suite confirms the implementation is performance-neutral.
The filter function (set via sys.set_lazy_imports_filter()) is called for every potentially lazy import to determine whether it should actually be lazy. When no filter is set, this is simply a NULL check (testing whether a filter function has been registered), which is a highly predictable branch that adds essentially no overhead. When a filter is installed, it is called for each potentially lazy import, but this still has almost no measurable performance cost. To measure this, we benchmarked importing all 278 top-level importable modules from the Python standard library (which transitively loads 392 total modules including all submodules and dependencies), then forced reification of every loaded module to ensure everything was fully materialized.
Note that these measurements establish the baseline overhead of the filter mechanism itself. Of course, any user-defined filter function that performs additional work beyond a trivial check will add overhead proportional to the complexity of that work. However, we expect that in practice this overhead will be dwarfed by the performance benefits gained from avoiding unnecessary imports. The benchmarks below measure the minimal cost of the filter dispatch mechanism when the filter function does essentially nothing.
We compared four different configurations:
| Configuration | Mean ± Std Dev (ms) | Overhead vs Baseline |
|---|---|---|
| Eager imports (baseline) | 161.2 ± 4.3 | 0% |
| Lazy + filter forcing eager | 161.7 ± 4.2 | +0.3% ± 3.7% |
| Lazy + filter allowing lazy + reification | 162.0 ± 4.0 | +0.5% ± 3.7% |
| Lazy + no filter + reification | 161.4 ± 4.3 | +0.1% ± 3.8% |
The four configurations:

1. Eager imports (baseline): all imports execute eagerly; no lazy machinery is involved.
2. Lazy + filter forcing eager: the filter returns False for all imports, forcing eager execution, then all imports are reified at script end. Measures pure filter calling overhead since every import goes through the filter but executes eagerly.
3. Lazy + filter allowing lazy + reification: the filter returns True for all imports, allowing lazy execution. All imports are reified at script end. Measures filter overhead when imports are actually lazy.
4. Lazy + no filter + reification: no filter is installed; all imports are lazy and are reified at script end. Measures the lazy mechanism without filter dispatch.

The benchmarks used hyperfine, testing 278 standard library modules. Each ran in a fresh Python process. All configurations force the import of exactly the same set of modules (all modules loaded by the eager baseline) to ensure a fair comparison.
The benchmark environment used CPU isolation with 32 logical CPUs (0-15 at 3200 MHz, 16-31 at 2400 MHz), the performance scaling governor, Turbo Boost disabled, and full ASLR randomization. The overhead error bars are computed using standard error propagation for the formula (value - baseline) / baseline, accounting for uncertainties in both the measured value and the baseline.
The primary performance benefit of lazy imports is reduced startup time by loading only the modules actually used at runtime, rather than optimistically loading entire dependency trees at startup.
Real-world deployments at scale have demonstrated that the benefits can be massive, though of course this depends on the specific codebase and usage patterns. Organizations with large, interconnected codebases have reported substantial reductions in server reload times, ML training initialization, command-line tool startup, and Jupyter notebook loading. Memory usage improvements have also been observed as unused modules remain unloaded.
For detailed case studies and performance data from production deployments, see:
The benefits scale with codebase complexity: the larger and more interconnected the codebase, the more dramatic the improvements. The PySide implementation particularly highlights how frameworks with heavy initialization overhead can benefit significantly from opt-in lazy loading.
Type checkers and static analyzers may treat lazy imports as ordinary imports for name resolution. At runtime, annotation-only imports can be marked lazy to avoid startup overhead. IDEs and debuggers should be prepared to display lazy proxies before first use and the real objects thereafter.
Tools that install packages while also importing from the same environment should ensure all modules are imported eagerly, or reified, before the installation step, to prevent newly installed distributions from shadowing the modules being imported.
Such tools can use sys.set_lazy_imports() with "none" to force eager evaluation, or provide a sys.set_lazy_imports_filter() function for fine-grained control.
The new lazy keyword will be documented as part of the language standard.
As this feature is opt-in, new Python users should be able to continue using the language as they are used to. For experienced developers, we expect them to leverage lazy imports for the variety of benefits listed above (decreased latency, decreased memory usage, etc.) on a case-by-case basis. Developers interested in the startup performance of their Python programs will likely leverage profiling to understand the import time overhead in their codebase and mark the necessary imports as lazy. In addition, developers can mark imports that will only be used for type annotations as lazy.
Additional documentation will be added to the Python documentation, including guidance, a dedicated how-to guide, and updates to the import system documentation covering: identifying slow-loading modules with profiling tools (such as -X importtime), migration strategies for existing codebases, best practices for avoiding common pitfalls with import-time side effects, and patterns for using lazy imports effectively with type annotations and circular imports.
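The -X importtime flag mentioned above is available in current Python (3.7+) and is the natural starting point for choosing lazy candidates. A short sketch of its use (output format is a per-module report on stderr, with times in microseconds and callees indented):

```shell
# Print the import-time report for a program's imports; the slowest
# cumulative entries near the bottom are good candidates for `lazy`.
python3 -X importtime -c "import json" 2>&1 | tail -n 5
```

Each line has the form `import time: self [us] | cumulative | imported package`, so sorting on the cumulative column quickly surfaces expensive dependency trees.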
Below is guidance on how to best take advantage of lazy imports and how to avoid incompatibilities:
When adopting lazy imports, users should be aware that eliding an import until it is used will result in side effects not being executed. In turn, users should be wary of modules that rely on import time side effects. Perhaps the most common reliance on import side effects is the registry pattern, where population of some external registry happens implicitly during the importing of modules, often via decorators but sometimes implemented via metaclasses or __init_subclass__. Instead, registries of objects should be constructed via explicit discovery processes (e.g. a well-known function to call).
# Problematic: Plugin registers itself on import
# my_plugin.py
from plugin_registry import register_plugin
@register_plugin("MyPlugin")
class MyPlugin:
    pass

# In main code:
lazy import my_plugin  # Plugin NOT registered yet - module not loaded!
# Better: Explicit discovery
# plugin_registry.py
def discover_plugins():
    from my_plugin import MyPlugin
    register_plugin(MyPlugin)

# In main code:
plugin_registry.discover_plugins()  # Explicit loading
Always import needed submodules explicitly. It is not enough to rely on a different import to ensure a module has its submodules as attributes. Plainly, unless there is an explicit from . import bar in foo/__init__.py, always use import foo.bar; foo.bar.Baz, not import foo; foo.bar.Baz. The latter only works (unreliably) because the attribute foo.bar is added as a side effect of foo.bar being imported somewhere else.
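This pitfall exists in today's Python, independent of lazy imports; lazy imports merely make it easier to trip over. A runnable demonstration using the stdlib xml package (whose __init__.py does not import its submodules):

```python
import subprocess
import sys

# In a fresh interpreter, `import xml` does NOT make `xml.etree`
# available as an attribute; only `import xml.etree` does.
bad = subprocess.run(
    [sys.executable, "-c", "import xml; xml.etree"],
    capture_output=True, text=True)
good = subprocess.run(
    [sys.executable, "-c", "import xml.etree; xml.etree"],
    capture_output=True, text=True)

print(bad.returncode != 0)    # True - AttributeError in a fresh process
print(good.returncode == 0)   # True - explicit submodule import works
```

In a long-running program the first form may appear to work only because some other module happened to import xml.etree earlier, which is exactly the unreliable coupling the guidance above warns against.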
Users who are moving imports into functions to improve startup time should instead consider keeping them where they are and adding the lazy keyword. This keeps dependencies clear, avoids the overhead of repeatedly re-resolving the import, and still speeds up the program.
# Before: Inline import (repeated overhead)
def process_data(data):
    import json  # Re-resolved on every call
    return json.dumps(data)

# After: Lazy import at module level
lazy import json
def process_data(data):
    return json.dumps(data)  # Loaded once on first call
Avoid using wild card (star) imports, as those are always eager.
PEP 810 takes an explicit, opt-in approach instead of PEP 690’s implicit global approach. The key differences are:
- Explicit, per-import opt-in: lazy import foo clearly marks which imports are lazy.

What changes (the timing):
What stays the same (everything else):
- Same import machinery: same __import__, same hooks, same loaders
- The import system state (sys.path, sys.meta_path, etc.) is consulted at reification time (not at import statement time)

In other words: lazy imports only change when something happens, not what happens. After reification, a lazy-imported module is indistinguishable from an eagerly imported one.
Import errors (ImportError, ModuleNotFoundError, syntax errors) are deferred until first use of the lazy name. This is similar to moving an import into a function. The error will occur with a clear traceback pointing to the first access of the lazy object.
The implementation provides enhanced error reporting through exception chaining. When a lazy import fails during reification, the original exception is preserved and chained, showing both where the import was defined and where it was first used:
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    lazy import broken_module
ImportError: lazy import of 'broken_module' raised an exception during resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    broken_module.foo()
    ^^^^^^^^^^^^^
  File "broken_module.py", line 2, in <module>
    1/0
ZeroDivisionError: division by zero
Exceptions during reification prevent the replacement of the lazy object, and subsequent uses of the lazy object will retry the whole reification.
Side effects are deferred until first use. This is generally desirable for performance, but may require code changes for modules that rely on import-time registration patterns. We recommend replacing implicit import-time registration with explicit discovery functions, or excluding such modules from laziness via the lazy imports filter.
Does lazy work with from ... import ... statements?

Yes, as long as you don’t use from ... import *. Both lazy import foo and lazy from foo import bar are supported. The bar name will be bound to a lazy object that resolves to foo.bar on first use.
Does lazy from module import Class load the entire module or just the class?

It loads the entire module, not just the class. This is because Python’s import system always executes the complete module file – there’s no mechanism to execute only part of a .py file. When you first access Class, Python:
1. Imports and executes the complete module.py file
2. Retrieves the Class attribute from the resulting module object
3. Binds Class to the name in your namespace

This is identical to eager from module import Class behavior. The only difference with lazy imports is that steps 1-3 happen on first use instead of at the import statement.
# heavy_module.py
print("Loading heavy_module")  # This ALWAYS runs when module loads
class MyClass:
    pass
class UnusedClass:
    pass  # Also gets defined, even though we don't import it
# app.py
lazy from heavy_module import MyClass
print("Import statement done")  # heavy_module not loaded yet
obj = MyClass()  # NOW "Loading heavy_module" prints
                 # (and UnusedClass gets defined too)
Key point: Lazy imports defer when a module loads, not what gets loaded. You cannot selectively load only parts of a module – Python’s import system doesn’t support partial module execution.
What about TYPE_CHECKING imports?

Lazy imports eliminate the common need for TYPE_CHECKING guards. You can write:
lazy from collections.abc import Sequence, Mapping # No runtime cost
def process(items: Sequence[str]) -> Mapping[str, int]: ...
Instead of:
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from collections.abc import Sequence, Mapping
def process(items: Sequence[str]) -> Mapping[str, int]: ...
The overhead is minimal: benchmarking with the pyperformance suite shows the implementation is performance neutral when lazy imports are not used.
Can a module be imported both lazily and eagerly in the same program?

Yes. If module foo is imported both lazily and eagerly in the same program, the eager import takes precedence and both bindings resolve to the same module object.
Migration is incremental:
1. Add the lazy keyword to imports that aren’t needed immediately.
2. Use __lazy_modules__ for compatibility with older Python versions.

What about wild card imports (from module import *)?

Wild card (star) imports cannot be lazy - they remain eager. This is because the set of names being imported cannot be determined without loading the module. Using the lazy keyword with star imports will be a syntax error. If lazy imports are globally enabled, star imports will still be eager.
Import hooks and loaders work normally. When a lazy object is used, the standard import protocol runs, including any custom hooks or loaders that were in place at reification time.
Lazy import reification is thread-safe. Only one thread will perform the actual import, and the binding is atomically updated. Other threads will see either the lazy proxy or the final resolved object.
Can a lazy object be resolved explicitly?

Yes, individual lazy objects can be resolved by calling their resolve() method.
Why not use importlib.util.LazyLoader instead?

The standard library’s LazyLoader was designed for specific use cases but has fundamental limitations as a general-purpose lazy import mechanism.
Most critically, LazyLoader does not support from ... import statements. There is no straightforward mechanism to lazily import specific attributes from a module - users would need to manually wrap and proxy individual attributes, which is both error-prone and defeats the performance benefits.
Additionally, LazyLoader must resolve the module spec before creating the lazy loader, which introduces overhead that reduces the performance benefits of lazy loading. The spec resolution involves filesystem operations and path searching that this PEP’s approach defers until actual module use.
LazyLoader also operates at the import machinery level rather than providing language-level syntax, which means there’s no canonical way for tools like linters and type checkers to recognize lazy imports. A dedicated syntax enables ecosystem-wide standardization and allows compiler and runtime optimizations that would be impossible with a purely library-based approach.
Finally, LazyLoader requires significant boilerplate, involving manual manipulation of module specs, loaders, and sys.modules, making it impractical for common use cases where multiple modules need to be lazily imported.
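For comparison, the boilerplate in question looks roughly like the recipe from the importlib documentation, shown below. Note two differences from this PEP's design: the module is placed in sys.modules immediately (not only after reification), and the spec is resolved up front:

```python
import importlib.util
import sys

def lazy_import(name):
    # importlib.util.LazyLoader recipe: everything a single
    # `lazy import name` statement would replace.
    spec = importlib.util.find_spec(name)       # spec resolved eagerly
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module   # present in sys.modules immediately,
    loader.exec_module(module)   # but execution is deferred ...
    return module                # ... until first attribute access

json = lazy_import("json")
print(json.dumps({"a": 1}))  # first attribute access executes the module
```

Even this helper does not cover from ... import, submodules, or error chaining, which is part of the motivation for language-level syntax.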
Will tools like isort or black need changes?

Linters, formatters, and other tools will need updates to recognize the lazy keyword, but the changes should be minimal since the import structure remains the same. The keyword appears at the beginning, making it easy to parse.
Most libraries should work fine with lazy imports. Libraries that might have issues are those that depend on import-time side effects, such as plugin registration, monkey-patching, or populating global state during import.
When in doubt, test lazy imports with your specific use cases.
Note: This is an advanced feature. You can use the lazy imports filter to exclude specific modules that are known to have problematic side effects:
import sys
def my_filter(importer, name, fromlist):
    # Don't lazily import modules known to have side effects
    if name in {'problematic_module', 'another_module'}:
        return False  # Import eagerly
    return True  # Allow lazy import
sys.set_lazy_imports_filter(my_filter)
The filter function receives the importer module name, the module being imported, and the fromlist (if using from ... import). Returning False forces an eager import.
Alternatively, set the global mode to "none" via -X lazy_imports=none to turn off all lazy imports for debugging.
Can the lazy keyword be used inside functions?

No, the lazy keyword is only allowed at module level. For function-level lazy loading, use traditional inline imports or move the import to module level with lazy.
How can I adopt lazy imports while supporting older Python versions?

Use the __lazy_modules__ global for compatibility:
# Works on Python 3.15+ as lazy, eager on older versions
__lazy_modules__ = ['expensive_module', 'expensive_module_2']
import expensive_module
from expensive_module_2 import MyClass
The __lazy_modules__ attribute is a list of module name strings. When an import statement is executed, Python checks if the module name being imported appears in __lazy_modules__. If it does, the import is treated as if it had the lazy keyword (becoming potentially lazy). On Python versions before 3.15 that don’t support lazy imports, the __lazy_modules__ attribute is simply ignored and imports proceed eagerly as normal.
This provides a migration path until you can rely on the lazy keyword. For maximum predictability, it’s recommended to define __lazy_modules__ once, before any imports. But as it is checked on each import, it can be modified between import statements.
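As a concrete sanity check of the compatibility claim, the snippet below runs unmodified on current Python versions, where __lazy_modules__ is just an ordinary module global and the import is eager; under the proposed semantics on 3.15+, the same import would become potentially lazy:

```python
# On Python 3.15+ (under this proposal) the import below becomes lazy;
# on current versions __lazy_modules__ is ignored and the import is eager.
__lazy_modules__ = ["json"]

import json

print(json.dumps({"ok": True}))
```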
Python 3.14 implemented deferred evaluation of annotations, as specified by PEP 649 and PEP 749. If an annotation is not stringified, it is an expression that is evaluated at a later time. It will only be resolved if the annotation is accessed. In the example below, the fake_typing module is only loaded when the user inspects the __annotations__ dictionary. The fake_typing module would also be loaded if the user uses annotationlib.get_annotations() or getattr to access the annotations.
lazy from fake_typing import MyFakeType

def foo(x: MyFakeType):
    pass

print(foo.__annotations__)  # Triggers loading the fake_typing module
What about dir(), getattr(), and module introspection?
Accessing lazy imports through normal attribute access or getattr() will trigger reification of the accessed attribute. Calling dir() on a module will be special cased in mod.__dir__ to avoid reification.
lazy import json
# Before any access
# json not in sys.modules

# Any of these trigger reification:
dumps_func = json.dumps
dumps_func = getattr(json, 'dumps')
# Now json is in sys.modules
Lazy imports don’t automatically solve circular import problems. If two modules have a circular dependency, making the imports lazy might help only if the circular reference isn’t accessed during module initialization. However, if either module accesses the other during import time, you’ll still get an error.
Example that works (deferred access in functions):
# user_model.py
lazy import post_model

class User:
    def get_posts(self):
        # OK - post_model accessed inside function, not during import
        return post_model.Post.get_by_user(self.name)
# post_model.py
lazy import user_model

class Post:
    @staticmethod
    def get_by_user(username):
        return f"Posts by {username}"
This works because neither module accesses the other at module level – the access happens later when get_posts() is called.
Example that fails (access during import):
# module_a.py
lazy import module_b

result = module_b.get_value()  # Error! Accessing during import

def func():
    return "A"
# module_b.py
lazy import module_a

result = module_a.func()  # Circular dependency error here

def get_value():
    return "B"
This fails because module_a tries to access module_b at import time, which then tries to access module_a before it’s fully initialized.
The best practice is still to avoid circular imports in your code design.
After first use (provided the import succeeds), lazy imports have zero overhead thanks to the adaptive interpreter. The interpreter specializes the bytecode (e.g., LOAD_GLOBAL becomes LOAD_GLOBAL_MODULE), which eliminates the lazy check on subsequent accesses. This means once a lazy import is reified, accessing it is just as fast as a normal import.
lazy import json

def use_json():
    return json.dumps({"test": 1})

# First call triggers reification
use_json()

# After 2-3 calls, bytecode is specialized
use_json()
use_json()
You can observe the specialization using dis.dis(use_json, adaptive=True):
=== Before specialization ===
LOAD_GLOBAL          0 (json)
LOAD_ATTR            2 (dumps)

=== After 3 calls (specialized) ===
LOAD_GLOBAL_MODULE   0 (json)
LOAD_ATTR_MODULE     2 (dumps)
The specialized LOAD_GLOBAL_MODULE and LOAD_ATTR_MODULE instructions are optimized fast paths with no overhead for checking lazy imports.
What about sys.modules? When does a lazy import appear there?
A lazily imported module does not appear in sys.modules until it’s reified (first used). Once reified, it appears in sys.modules just like any eager import.
import sys
lazy import json
print('json' in sys.modules) # False
result = json.dumps({"key": "value"}) # First use
print('json' in sys.modules) # True
Does lazy from __future__ import feature work?
No, future imports can’t be lazy because they’re parser/compiler directives. It’s technically possible for the runtime behavior to be lazy, but there’s no real value in it.
Why lazy as the keyword name?
Not “why”… memorize! :)
The following ideas have been considered but are deliberately deferred to focus on delivering a stable, usable core feature first. These may be considered for future enhancements once we have real-world experience with lazy imports.
Several alternative syntax forms have been suggested to improve ergonomics:
Type-only imports: A specialized syntax for imports used exclusively in type annotations (similar to the type keyword in other contexts) could be added, such as type from collections.abc import Sequence. This would make the intent clearer than using lazy for type-only imports and would signal to readers that the import is never used at runtime. However, since lazy imports already solve the runtime cost problem for type annotations, we prefer to start with the simpler, more general mechanism and evaluate whether specialized syntax adds sufficient value after gathering usage data.
Block-based syntax: Grouping multiple lazy imports in a block, such as:
as lazy:
    import foo
    from bar import baz
This could reduce repetition when marking many imports as lazy. However, it would require introducing an entirely new statement form (as lazy: blocks) that doesn’t fit into Python’s existing grammar patterns. It’s unclear how this would interact with other language features or what the precedent would be for similar block-level modifiers. This approach also makes it less clear when scanning code whether a particular import is lazy, since you must look at the surrounding context rather than the import line itself.
While these alternatives could provide different ergonomics in certain contexts, they share similar drawbacks: they would require introducing new statement forms or overloading existing syntax in non-obvious ways, and they open the door to many other potential uses of similar syntax patterns that would significantly expand the language. We prefer to start with the explicit lazy import syntax and gather real-world feedback before considering additional syntax variations. Any future ergonomic improvements should be evaluated based on actual usage patterns rather than speculative benefits.
Automatic laziness for if TYPE_CHECKING blocks
A future enhancement could automatically treat all imports inside if TYPE_CHECKING: blocks as lazy:
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from foo import Bar  # Could be automatically lazy
However, this would require significant changes to make this work at compile time, since TYPE_CHECKING is currently just a runtime variable. The compiler would need special knowledge of this pattern, similar to how from __future__ import statements are handled. Additionally, making TYPE_CHECKING a built-in would be required for this to work reliably. Since lazy imports already solve the runtime cost problem for type-only imports, we prefer to start with the explicit syntax and evaluate whether this optimization adds sufficient value.
A module-level declaration to make all imports in that module lazy by default:
from __future__ import lazy_imports

import foo  # Automatically lazy
This was discussed but deferred because it raises several questions. Using from __future__ import implies this would become the default behavior in a future Python version, which is unclear and not currently planned. It also raises questions about how such a mode would interact with the global flag and what the transition path would look like. The current explicit syntax and __lazy_modules__ provide sufficient control for initial adoption.
Future enhancements could allow packages to declare in their metadata whether they are safe for lazy importing (e.g., no import-time side effects). This could be used by the filter mechanism or by static analysis tools. The current filter API is designed to accommodate such future additions without requiring changes to the core language specification.
No dedicated C API is planned for creating or resolving lazy imports. This feature is designed as a purely Python-facing mechanism, as C extensions typically need immediate access to modules and cannot benefit from deferred loading. Existing C API functions like PyImport_ImportModule() remain unchanged and continue to perform eager imports. If compelling use cases emerge, this could be revisited in future versions.
Here are some alternative design decisions that were considered during the development of this PEP. While the current proposal represents what we believe to be the best balance of simplicity, performance, and maintainability, these alternatives offer different trade-offs that may be valuable for implementers to consider or for future refinements.
Instead of updating the internal dict object to directly add the fields needed to support lazy imports, we could create a subclass of the dict object to be used specifically for Lazy Import enablement. This would still be a leaky abstraction though - methods can be called directly such as dict.__getitem__ and it would impact the performance of globals lookup in the interpreter.
For this PEP, we decided to propose lazy for the explicit keyword as it felt the most familiar to those already focused on optimizing import overhead. We also considered a variety of other options to support explicit lazy imports. The most compelling alternates were defer and delay.
Changing import to be lazy by default is outside of the scope of this PEP. From the discussion on PEP 690 it is clear that this is a fairly contentious idea, although perhaps once we have widespread use of lazy imports this can be reconsidered.
__lazy_modules__ = ["*"] as built-in syntax
The suggestion to support __lazy_modules__ = ["*"] as a convenient way to make all imports in a module lazy without explicit enumeration has been considered. This approach was rejected because __lazy_modules__ already represents implicit action-at-a-distance behavior that is tolerated solely as a backwards compatibility mechanism. Extending support to wildcard patterns would significantly increase implementation complexity and invite scope creep into pattern matching and globbing functionality. As __lazy_modules__ is a permanent language feature that cannot be removed in future versions, the design prioritizes minimalism and restricts its scope to serving as a transitional tool for backwards compatibility.
It is worth noting that the implementation performs membership checks by calling __contains__ on the __lazy_modules__ object. Consequently, users requiring wildcard behavior may provide a custom object implementing __contains__ to return True for all queries or other desired patterns. This design provides the necessary flexibility for advanced use cases while maintaining a simple, focused specification for the primary mechanism. If this PEP is accepted, adding such helper objects to the standard library can be discussed in a future issue. Presently, it is out of scope for this PEP.
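Because only __contains__ is consulted, such a custom object is easy to sketch today. The class names below (AllModules, PrefixModules) are our own illustrations, not proposed stdlib additions; only the membership behavior matters:

```python
class AllModules:
    """Membership test that matches every module name.

    Under this proposal's semantics, assigning an instance to
    __lazy_modules__ would make every import in the module potentially
    lazy, since the implementation only calls __contains__.
    """
    def __contains__(self, name):
        return True


class PrefixModules:
    """Match only module names under the given package prefixes."""
    def __init__(self, *prefixes):
        self.prefixes = prefixes

    def __contains__(self, name):
        # str.startswith accepts a tuple of prefixes
        return name.startswith(self.prefixes)


assert "numpy" in AllModules()
matcher = PrefixModules("myapp.", "plugins.")
assert "myapp.utils" in matcher
assert "os" not in matcher
```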
Disallowing lazy imports in with blocks
An earlier version of this PEP proposed disallowing lazy import statements inside with blocks, similar to the restriction on try blocks. The concern was that certain context managers (like contextlib.suppress(ImportError)) could suppress import errors in confusing ways when combined with lazy imports.
However, this restriction was rejected because with statements have much broader semantics than try/except blocks. While try/except is explicitly about catching exceptions, with blocks are commonly used for resource management, temporary state changes, or scoping – contexts where lazy imports work perfectly fine. The lazy import syntax is explicit enough that developers who write it inside a with block are making an intentional choice, aligning with Python’s “consenting adults” philosophy. For genuinely problematic cases like with suppress(ImportError): lazy import foo, static analysis tools and linters are better suited to catch these patterns than hard language restrictions.
Eager imports in with blocks under the global flag
Another rejected idea was to make imports inside with blocks remain eager even when the global lazy imports flag is set to "all". The rationale was to be conservative: since with statements can affect how imports behave (e.g., by modifying sys.path or suppressing exceptions), forcing imports to remain eager could prevent subtle bugs. However, this would create inconsistent behavior where lazy import is allowed explicitly in with blocks, but normal imports remain eager when the global flag is enabled. This inconsistency between explicit and implicit laziness is confusing and hard to explain.
The simpler, more consistent rule is that the global flag affects imports everywhere that explicit lazy import syntax is allowed. This avoids having three different sets of rules (explicit syntax, global flag behavior, and filter mechanism) and instead provides two: explicit syntax rules match what the global flag affects, and the filter mechanism provides escape hatches for edge cases. For users who need fine-grained control, the filter mechanism (sys.set_lazy_imports_filter()) already provides a way to exclude specific imports or patterns. Additionally, there’s no inverse operation: if the global flag forces imports eager in with blocks but a user wants them lazy, there’s no way to override it, creating an asymmetry.
In summary: imports in with blocks behave consistently whether marked explicitly with lazy import or implicitly via the global flag, creating a simple rule that’s easy to explain and reason about.
The initial PEP for lazy imports (PEP 690) relied heavily on the modification of the internal dict object to support lazy imports. We recognize that this data structure is highly tuned, heavily used across the codebase, and very performance sensitive. Because of the importance of this data structure and the desire to keep the implementation of lazy imports encapsulated from users who may have no interest in the feature, we’ve decided to invest in an alternate approach.
The dictionary is the foundational data structure in Python. Every object’s attributes are stored in a dict, and dicts are used throughout the runtime for namespaces, keyword arguments, and more. Adding any kind of hook or special behavior to dicts to support lazy imports would therefore have far-reaching consequences for the performance and correctness of the entire runtime.
Past decisions that violated this principle of keeping core abstractions clean have caused significant pain in the CPython ecosystem, making optimization difficult and introducing subtle bugs.
In-place transformation via __class__ mutation
An alternative implementation approach was proposed where lazy import objects would be transformed into their final form by mutating their internal state, rather than replacing the object entirely. Under this approach, a lazy object would be transformed in-place after the actual import completes.
This approach was rejected for several reasons:
* It does not work for from statements. When a user writes lazy from foo import bar, the object bar could be any Python object (a function, class, constant, etc.), not just a module. Any transformation approach would require that the lazy proxy object have a compatible memory layout and other considerations with the target object, which is impossible to know before loading the module. This creates a fundamental asymmetry where lazy import x and lazy from x import y would require completely different implementation strategies, with the latter still needing the proxy replacement mechanism.
* It breaks custom module types. Python allows modules in sys.modules that inherit from or replace the standard module type. These custom module classes can have different memory layouts and sizes than PyModuleObject. The transformation approach cannot work with such generic custom module implementations, creating fragility and maintenance burden across the ecosystem.
* It is unsafe in the presence of type checks such as PyObject_TypeCheck. The transformation also requires careful coordination between the lazy import machinery and the type system to ensure that the object remains valid throughout the transformation process. The current proxy-based design avoids these issues by maintaining clear boundaries between the lazy proxy and the actual imported object.

The current design, which uses object replacement through the LazyImportType proxy pattern, provides a consistent mechanism that works uniformly for both import and from ... import statements while maintaining cleaner separation between the lazy import machinery and Python’s core object model.
Having lazy imports find the module without loading it
The Python import machinery separates out finding a module and loading it, and the lazy import implementation could technically defer only the loading part. However, this approach was rejected for several critical reasons.
A significant part of the performance win comes from skipping the finding phase. The issue is particularly acute on NFS-backed filesystems and distributed storage, where each stat() call incurs network latency. In these kinds of environments, stat() calls can take tens to hundreds of milliseconds depending on network conditions. With dozens of imports each doing multiple filesystem checks traversing sys.path, the time spent finding modules before executing any Python code can become substantial. In some measurements, spec finding accounts for the majority of total import time. Skipping only the loading phase would leave most of the performance problem unsolved.
More critically, separating finding from loading creates the worst of both worlds for error handling. Some exceptions from the import machinery (e.g., ImportError from a missing module, path resolution failures, ModuleNotFoundError) would be raised at the lazy import statement, while others (e.g., SyntaxError, ImportError from circular imports, attribute errors from from module import name) would be raised later at first use. This split is both confusing and unpredictable: developers would need to understand the internal import machinery to know which errors happen when. The current design is simpler: with full lazy imports, all import-related errors occur at first use, making the behavior consistent and predictable.
Additionally, there are technical limitations: finding the module does not guarantee the import will succeed, nor even that it will not raise ImportError. Finding modules in packages requires that those packages are loaded, so it would only help with lazy loading one level of a package hierarchy. Since “finding” attributes in modules requires loading them, this would create a hard to explain difference between from package import module and from module import function.
lazy keyword in the middle of from imports
While we found from foo lazy import bar to be a really intuitive placement for the new explicit syntax, we quickly learned that placing the lazy keyword here is already syntactically allowed in Python. This is because from . lazy import bar is legal syntax (because whitespace does not matter).
lazy keyword at the end of import statements
We discussed appending lazy to the end of import statements, such as import foo lazy or from foo import bar, baz lazy, but ultimately decided that this approach provided less clarity. For example, if multiple modules are imported in a single statement, it is unclear whether the lazy binding applies to all of the imported objects or just a subset of the items.
eager keyword
Since we’re not changing the default behavior, and we don’t want to encourage use of the global flags, it’s too early to consider adding superfluous syntax for the common, default case. It would create too much confusion about what the default is, or when the eager keyword would be necessary, or whether it affects lazy imports in the explicitly eagerly imported module.
As lazy imports allow some forms of circular imports that would otherwise fail, as an intentional and desirable thing (especially for typing-related imports), there was a suggestion to add a way to override the global disable and force particular imports to be lazy, for instance by calling the lazy imports filter even if lazy imports are globally disabled.
This approach could introduce a complex hierarchy of the different “override” systems, making it much harder to analyze and reason about the code. Additionally, this may require additional complexity to introduce finer-grained systems to enable or disable particular imports as the use of lazy imports evolves. The global disable is not expected to see commonplace use, but be more of a debugging and selective testing tool for those who want to tightly control their dependency on lazy imports. We think it’s reasonable for package maintainers, as they update packages to adopt lazy imports, to decide to not support running with lazy imports globally disabled.
It may be that this means that in time, as more and more packages embrace both typing and lazy imports, the global disable becomes mostly unused and unusable. Similar things have happened in the past with other global flags, and given the low cost of the flag this seems acceptable. It’s also easier to add more specific re-enabling mechanisms later, when we have a clearer picture of real-world use and patterns, than it is to remove a hastily added mechanism that isn’t quite right.
The global activation and filter functions (sys.set_lazy_imports, sys.set_lazy_imports_filter, sys.get_lazy_imports_filter) could be marked as “private” or “advanced” by using underscore prefixes (e.g., sys._set_lazy_imports_filter). This was rejected because branding as advanced features through documentation is sufficient. These functions have legitimate use cases for advanced users, particularly operators of large deployments. Providing an official mechanism prevents divergence from upstream CPython. The global mode is intentionally documented as an advanced feature for operators running huge fleets, not for day-to-day users or libraries. Python has precedent for advanced features that remain public APIs without underscore prefixes - for example, gc.disable(), gc.get_objects(), and gc.set_threshold() are advanced features that can cause issues if misused, yet they are not underscore-prefixed.
A decorator-based syntax could mark imports as lazy:
@lazy
import json

@lazy
from foo import bar
This approach was rejected because it introduces too many open questions and complications. Decorators in Python are designed to wrap and transform callable objects (functions, classes, methods), not statements. Allowing decorators on import statements would open the door to many other potential statement decorators (@cached, @traced, @deprecated, etc.), significantly expanding the language’s syntax in ways we don’t want to explore. Furthermore, this raises the question of where such decorators would come from: they would need to be either imported or built-in, creating a bootstrapping problem for import-related decorators. This is far more speculative and generic than the focused lazy import syntax.
A backward compatible syntax, for example in the form of a context manager, has been proposed:
with lazy_imports(...):
    import json
This would replace the need for __lazy_modules__, and allow libraries to use one of the existing lazy imports implementations in older Python versions. However, adding magic with statements with that kind of effect would be a significant change to Python and with statements in general, and it would not be easy to combine with the implementation for lazy imports in this proposal. Adding standard library support for existing lazy importers without changes to the implementation amounts to the status quo, and does not solve the performance and usability issues with those existing solutions.
A reifying proxy for globals()
An alternative to reifying on globals() or exposing lazy objects would be to return a proxy dictionary that automatically reifies lazy objects when they’re accessed through the proxy. This would seemingly give the best of both worlds: globals() returns immediately without reification cost, but accessing items through the result would automatically resolve lazy imports.
However, this approach is fundamentally incompatible with how globals() is used in practice. Many standard library functions and built-ins expect globals() to return a real dict object, not a proxy:
* exec(code, globals()) requires a real dict.
* eval(expr, globals()) requires a real dict.
* Code that checks type(globals()) is dict would break.
* Mutating methods such as .update() would need special handling.

The proxy would need to be so transparent that it would be indistinguishable from a real dict in almost all cases, which is extremely difficult to achieve correctly. Any deviation from true dict behavior would be a source of subtle bugs.
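The "requires a real dict" constraint can be demonstrated today with any non-dict mapping; exec() rejects a mapping that is not a genuine dict:

```python
from collections import UserDict

# UserDict wraps a dict but is not a dict subclass, so exec() refuses it,
# just as it would refuse a hypothetical lazy-resolving globals() proxy.
namespace = UserDict({"x": 41})

try:
    exec("y = x + 1", namespace)
except TypeError as e:
    print("exec rejected the mapping:", e)
```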
Reification on __dict__ or globals() access
Three options were considered for how globals() and mod.__dict__ should behave with lazy imports:
1. Always reify: globals() or mod.__dict__ traverses and resolves all lazy objects before returning.
2. Never reify: globals() or mod.__dict__ returns the dictionary with lazy objects present (chosen).
3. Hybrid: globals() returns the dictionary with lazy objects, but mod.__dict__ reifies everything.

We chose option 2: both globals() and __dict__ return the raw namespace dictionary without triggering reification. This provides a clean, predictable model where low-level introspection APIs don’t trigger side effects.
Having globals() and __dict__ behave identically creates symmetry and a simple mental model: both expose the raw namespace view. Low-level introspection APIs should not automatically trigger imports, which would be surprising and potentially expensive. Real-world experience implementing lazy imports in the standard library (such as the traceback module) showed that automatic reification on __dict__ access was cumbersome and forced introspection code to load modules it was only examining.
Option 1 (always reifying) was rejected because it would make globals() and __dict__ access surprisingly expensive and prevent introspecting the lazy state of a module. Option 3 was initially considered to “protect” external code from seeing lazy objects, but real-world usage showed this created more problems than it solved, particularly for stdlib code that needs to introspect modules without triggering side effects.
We would like to thank Paul Ganssle, Yury Selivanov, Łukasz Langa, Lysandros Nikolaou, Pradyun Gedam, Mark Shannon, Hana Joo and the Python Google team, the Python team(s) @ Meta, the Python @ HRT team, the Bloomberg Python team, the Scientific Python community, everyone who participated in the initial discussion of PEP 690, and many others who provided valuable feedback and insights that helped shape this PEP.
This document is placed in the public domain or under the CC0-1.0-Universal license, whichever is more permissive.
It’s not:
* Importing other modules.
* Taking a long time to import.
* Writing .pyc files.
If any program can “import foo” and still execute exactly the same bytecode afterward as before, you can say that foo doesn’t have side effects.
That might be considered a design mistake -- one that should be easy to migrate away from.
You won't need to do anything, of course, if the lazy import becomes available on common Python installs some day in the future. That might take years, though.
Note that you will be expected to have familiarized yourself generally with previous failed proposals of this sort, and proactively considered all the reasonably obvious corner cases.
When an installer resolves dependency conflicts, the project code isn't running. The installer is free to discover new constraints on the fly, and to backtrack. It is in effect all being done "statically", in the sense of being ahead of the time that any other system cares about it being complete and correct.
Python `import` statements on the other hand execute during the program's runtime, at arbitrary separation, with other code intervening.
> This feature says nothing about the automatic installation of libraries.
It doesn't have to. The runtime problems still occur.
I guess I'll have to reproduce the basic problem description from memory again. If you have modules A and B in your project that require conflicting versions of C, you need a way to load both at runtime. But the standard import mechanism already hard-codes the assumptions that i) imports are cached in a key-value store; ii) the values are singleton and client code absolutely may rely on this for correctness; iii) "C" is enough information for lookup. And the ecosystem is further built around the assumption that iv) this system is documented and stable and can be interacted with in many clever ways for metaprogramming. Changing any of this would be incredibly disruptive.
> This feature is absolutely not about supporting multiple simultaneous versions of a library at runtime.
You say that, but you aren't the one who proposed it. And https://news.ycombinator.com/item?id=45467350 says explicitly:
> and support having multiple simultaneous versions of any Python library installed.
Which would really be the only reason for the feature. For the cases where a single version of the third-party code satisfies the entire codebase, the existing packaging mechanisms all work fine. (Plus they properly distinguish between import names and distribution names.)
Installed. Not loaded.
The reason is to do away with virtual environments.
I just want to say `import numpy@2.3.x as np` in my code. If 2.3.2 is installed, it gets loaded as the singleton runtime library. If it's not installed, load the closest numpy available and print a warning to stderr. If a transitive dependency in the runtime tree wants an incompatible numpy, tough luck, the best you get is a warning message on stderr.
You already have the A, B, C dependency resolution problem you describe today. And if it's not caught at the time of installing your dependencies, you see the failure at runtime.
But virtual environments are quite simply not a big deal. Installed libraries can be hard-linked and maybe even symlinked between environments and this can be set up very quickly. A virtual environment is defined by the pyvenv.cfg marker file; you don't need to use or even have activation scripts, and you especially don't (generally) need a separate copy of pip for each one, even if you do use pip.
On the flip side, allowing multiple versions of a library in a virtual environment has very little effect on package resolution; it just allows success in cases of conflict, but normally there aren't conflicts (because you're typically making a separate environment for a single "root" package, and it's supposed to be possible to use that package in Python as it actually exists, without hacks). The installer still has to scrounge up metadata (and discover it recursively) and check constraints.
There are many people here who think enabling lazy imports is as simple as flipping a light switch. They have no idea what they're talking about.
If you want examples then just look at one of the other languages that have implemented compiler / runtime dependency version checks.
Even Go has better dependency resolution than Python, and Go is often the HN poster child for how not to do things.
The crux of the matter is that this is a solvable problem. The real issue isn’t that it’s technically impossible; it’s that it’s not annoying enough of a day-to-day problem for the people who are in a position to influence this change. I’m not that person and don’t aspire to be that person (I have plenty of other projects on my plate as it is).
Saying something is possible isn’t the same as saying something is easy.
If you were to have said “it’s a different problem to solve because of…” then you wouldn’t have had any pushback. But you didn’t. You said it “this is not possible”. And that’s the part that people were disputing.
This thread was a tangent from lazy imports.
And actually people do appreciate the complexities of changes like this. We were responding to a specific comment that said “it’s impossible”. Saying something is “possible” isn’t the same as saying “it’s easy”.