ex-pytype dev here - we knew this was coming and it's definitely the right thing to do, but it's still a little sad to see the end of an era. in particular, pytype's ability to do flow-based analysis across function boundaries (type checking calls to unannotated functions by symbolically executing the function body with the types of the call arguments) has not been implemented by any of the other checkers (again for good reasons; it's a performance hit and the world is moving towards annotations over pure inference anyway, but I still think it's a nice feature to have and makes for more powerful checking).
as an aside, while I agree that bytecode-based analysis has its drawbacks, I think it's a tool worth having in the overall python toolbox. I spun off pycnite from pytype in the hope that anyone else who wanted to experiment with it would have an easier time getting started - https://github.com/google/pycnite
I have recently jumped onto the "write python tooling in rust" bandwagon and might look into a rust reimplementation of pycnite at some point, because I still feel that bytecode analysis lets you reuse a lot of work the compiler has already done for you.
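As a tiny illustration of the point about reusing the compiler's work (this is just the stdlib `dis` module, not pycnite's API):

```python
import dis

def greet(name):
    return "hello, " + name

# By the time bytecode exists, the compiler has already resolved
# scoping, desugared syntax, and folded constants; an analyzer can
# walk these instructions instead of re-deriving all that from source.
for instr in dis.get_instructions(greet):
    print(instr.opname, instr.argrepr)
```

The exact opcodes vary between CPython versions, which is precisely the churn a library like pycnite exists to paper over.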
> while I agree that bytecode-based analysis has its drawbacks
abstract interpretation of the bytecode like y'all were doing is the only way to robustly do type inference in python.
> https://github.com/google/pycnite
there's also https://github.com/MatthieuDartiailh/bytecode which is a good collection
MOPSA does abstract interpretation for both C and Python. It even works across language boundaries.
https://mopsa.lip6.fr/#features
It also has more abstraction domains than "just" the type of objects.
yeah, that's a really nice project too!
> I have recently jumped onto the "write python tooling in rust" bandwagon
I know Go and Rust are the belles du jour, but this kind of thing really hampers integrators' ability to support platforms other than x86-64 and armv8. In my particular case, it results in me being unable to build software that depends on pyca/cryptography on platforms like s390x, which makes me sad. It also makes development environment management, including CI/CD pipeline maintenance, that much more complicated. It was already bad enough when I was trying to compile binary distributions on Windows, and that was just with the Visual C++ toolchain mess that is the Microsoft development experience.
(pyca/cryptography dev here)
As Steve notes, Rust does support s390x. Even prior to shipping Rust code, we never tested or claimed to support s390x.
If there's genuine interest in more people supporting s390x in the open source world, folks will need to do the work to make it possible to build, test, and run CI on it. IBM recently contributed official PPC64le support to pyca/cryptography (by way of GitHub Actions runners so we could test and build in CI), and they've been responsive on weird platform issues we've hit (e.g., absl not supporting ppc64le on musl: https://github.com/pyca/infra/pull/710#issuecomment-31789057...). That level of commitment is what's really required to make a platform practical; treating "well, it's in C and every platform has a C compiler" as the sum total of support wasn't realistic.
If you don't mind, how did you get into cryptography development? I've heard many say not to do this unless you're experienced, but I wonder how one becomes experienced without doing it yourself.
Note that Python, in 2022, adopted a "target tier policy" based on the one I wrote for Rust in 2019. (See https://peps.python.org/pep-0011/ , history at https://peps.python.org/pep-0011/#discussions ; original Rust version at https://doc.rust-lang.org/nightly/rustc/target-tier-policy.h... .)
It's important for projects to distinguish between "this might or might not work but it's nobody's job to support it" versus "people depend on this and will keep it working", so that people don't build on the former and think it's the latter. The latter requires active support and widespread user/developer interest, as well as comparable support from the upstream projects you build on (something shouldn't be tier 1 for you if it's tier 3 for one of your critical dependencies).
See https://news.ycombinator.com/item?id=43673439 for a comment from last time this came up.
s390x is currently supported by Rust, at tier 2, including host tools: https://doc.rust-lang.org/nightly/rustc/platform-support/s39... .
Rust supports s390x. https://doc.rust-lang.org/stable/rustc/platform-support/s390...
On top of all this, pretty much everything depends on Python (even glibc's build system depends on it now), and Rust is relatively hard to bootstrap. So bootstrapping a usable Python on glibc could one day involve bootstrapping all the way up to Rust on musl (including e.g. LLVM) just to get a Python that can build the final system's libc.
That doesn't feel great to me.
please note that I am not talking about introducing rust into the python interpreter (where, I agree, bootstrapping concerns would make the gain in code maintainability not really worth it), but in writing developer tools that work with python in rust or a mix of rust and python rather than in pure python. these are tools that run on the developer's machine or on test/ci servers, not in the target environment.
I can sympathise with that, but to argue a bit for the other side, these tools are mainly intended to run on the developer's machine or in the CI pipeline. in both cases they overwhelmingly use architectures that rust supports, and in the case of CI surely it's easier to deploy a single rust binary than a python binary and all its library dependencies.
I used to be very much in the "write your language tools in the language and the community will contribute" camp, but astral has really shown what a big difference pure speed can make, and I now have more of a "write tools in rust and ideally make sure they also expose libraries that you can call from python" mindset.
Visual C++ toolchain is rather easy to install; the issue is bringing other OS expectations to Windows, and likewise the other way around.
However I fully agree with you, the tooling for a given language should be written in the language itself, and it is an ecosystem failure when it doesn't happen.
I also feel that this trend, as usual, is people building their portfolio in the current trendy languages.
I worked on https://github.com/Microsoft/PTVS (without being at MSFT) around 2019, so I know they did type check calls across function boundaries.
neat. did they also do it by symbolically executing the function body?
In a nutshell, it would first build a dependency graph using a simplified algorithm that only looked at imports. Then it would pick a function and evaluate types statement by statement (I think that's what you call symbolic execution); if the result expanded the set of possible types for any named value, all the functions that use that value (as a global var, passed as an argument, etc.) would be queued for re-evaluation. This continued until either the set of possible types for every value was stable or some re-evaluation limit was reached.
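The scheme described above is essentially a worklist fixed-point computation. A minimal sketch, with all names hypothetical (this is not PTVS's actual code):

```python
from collections import deque

def fixed_point_infer(functions, uses, max_rounds=100):
    """Worklist sketch of the fixed-point scheme described above.

    `functions` maps a function name to a callable that, given the
    current table of known types, returns the set of possible types
    it assigns to each named value; `uses` maps a value name to the
    functions that read it. All names here are hypothetical.
    """
    types = {}                    # value name -> set of possible types
    queue = deque(functions)      # evaluate every function at least once
    rounds = 0
    while queue and rounds < max_rounds:
        fn = queue.popleft()
        rounds += 1
        for value, new_types in functions[fn](types).items():
            old = types.setdefault(value, set())
            if not new_types <= old:               # set of possible types grew
                old |= new_types
                queue.extend(uses.get(value, ()))  # re-queue dependents
    return types

# Toy example: g's result depends on the type f assigns to x.
functions = {
    "f": lambda t: {"x": {"int"}},
    "g": lambda t: {"y": set(t.get("x", set()))},
}
print(fixed_point_infer(functions, {"x": ["g"]}))
# -> {'x': {'int'}, 'y': {'int'}}
```

The `max_rounds` cap plays the role of the re-evaluation limit mentioned above, guarding against types that keep widening without converging.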
why not just go with a language that has gradual typing built in - eg raku
because there is a ton of existing python code that people find a lot of value in, and that no one wants to abandon. the return on investment for making python better is insanely higher than that on porting hundreds of millions of lines of code to another language.
that’s why raku has modules like
and strong FFI (Foreign Function Interface) chops. Certainly I see the economic sense in continuing with Python, but for some folks there's a limit to how much lipstick you want on your pig.
I think this is for the best.
I used Pytype at Google years ago and while it's well written and the team was responsive, ultimately Python is not well suited for type checking Python. It's compute intensive.
I think the Ty people at Astral have the correct idea, and hope it'll work out.
https://docs.astral.sh/ty/
In theory, nothing prevents the pytype team at Google from developing a new backend in a different language.
In practice, there is no longer a pytype team at Google [https://news.ycombinator.com/item?id=40171125], which I suspect is the real reason for the discontinuation.
To be fair, even if there is/were a team, I don’t know that writing a new backend from scratch would be a good use of their time. pytype apparently started before mypy or any of the other Python type checkers existed. [1] But at this point there’s mypy, pyright, pyre/pyrefly, Ty, and probably more I’m not thinking of. It sounds more useful to collaborate with one of those existing projects than to write yet another new type checker.
Especially when, in my experience, each checker produces slightly different results on the same code, effectively creating its own slightly different language dialect with the associated fragmentation cost. In theory that cost could be avoided through more rigorous standardization efforts to ensure all the checkers work exactly the same way. But that would also reduce the benefit of writing a new type checker, since there would be less room to innovate or differentiate.
[1] https://news.ycombinator.com/item?id=19486938
there is a new python team, we met up with them at pycon and had some nice conversations. as a former pytype dev I will be the first to admit that maintaining it as a legacy project without the context of having developed it over the years would not have been a pleasant experience at all, but also pytype, while very powerful at what it did, definitely had some flaws that put it firmly in the last generation of type checkers.
the current generation (mostly ty and pyrefly right now, though major props to pyright for being ahead of the curve) is moving towards fast, incremental type checking with LSP integration, and pytype was never going to get there. it's fundamentally a slow, batch-based type checker, which will catch a lot of errors in your project, but which will never be usable as an incremental type checker within your ide. add that to the fact that it had a different philosophy of type checking from most of the other major checkers and you had users facing the issue that their code would be checked one way by pyright in the ide, and then a subtly different way by pytype in the CI pipeline.
I loved my time working on pytype, and I would like to see some of its features added to pyrefly, but it has definitely been superseded by now.
There is still a team within Google in charge of this space.
I've heard of `ty` too but recently I learned about Pyrefly, which is not in pre-production alpha, and is also Rust: https://pyrefly.org/
Is there a good reason to avoid using Pyrefly?
> Is there a good reason to avoid using Pyrefly?
Wouldn't the other way around be easier for finding good tools? Figure out what matters to you, inspect if the project fulfills those needs and then go with it after making sure it works well for you.
Regardless, a comparison between the two was posted to HN not too long time ago: https://news.ycombinator.com/item?id=44107655
> Wouldn't the other way around be easier for finding good tools?
I agree, and Pyrefly seemed good; I was just wondering why people don't mention it.
Thank you for the comparison thread and post, I've read it and found it useful! Thanks to that post I know ty has a "gradual typing" philosophy, which I disprefer.
(Pyrefly dev here) As another commenter mentioned, Pyrefly is still in alpha. Sorry we don't make that more clear!
While we are in alpha, and there are plenty of open issues we are still working through, I think Pyrefly is actually pretty usable already, especially for code navigation.
https://github.com/facebook/pyrefly/releases
Pyrefly v0.29.0
Status : ALPHA
Hah, I stand corrected. In my defense, ty makes it a lot more obvious and ominous on their GitHub (https://github.com/astral-sh/ty):
> /!\ Warning
> ty is in preview and is not ready for production use.
> We're working hard to make ty stable and feature-complete, but until then, expect to encounter bugs, missing features, and fatal errors.
I believe Pyrefly is stricter, so it may be a better choice for new projects but harder to introduce into existing codebases that weren't previously type-checked.
I have a medium-sized codebase that is all green when running mypy with the strictest configuration possible.
Pyrefly spits out around 200 errors for the same codebase.
Most errors are related to SQLAlchemy.
(Pyrefly dev here) Thanks for trying it out! If you have any feedback or bug reports, please don't hesitate to file issues on GitHub or find us on Discord. We have some open issues for SQLAlchemy (like [1]). I'm definitely curious to hear if there are any gaps from your perspective, having an already strictly-typed codebase.
1. https://github.com/facebook/pyrefly/issues/954
I mean, sqlalchemy until very recently needed a mypy plugin to type correctly (https://docs.sqlalchemy.org/en/20/orm/extensions/mypy.html), which was just deprecated in 2.0.0.
Perhaps you should do the upgrade (https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html...) and try again?
I'm personally just staying away from startups anywhere in my dependencies.
The cost of your dogmatic preference is a Python experience more miserable than it needs to be. Astral's ruff and uv are widely adopted for good reason, and there is no reason to think that ty will turn out any different.
There is a reason and a potential much bigger cost I'm avoiding, that you are conveniently ignoring.
in the related FAQ https://github.com/google/pytype/issues/1925 they point explicitly to the future:
> What alternatives can I consider? There are four Python static type checkers to consider: mypy and Pyright have been available to the community for a while and have well-established user bases. Pyrefly and ty were announced recently at PyCon US 2025 and are in active development as of August 2025, when this was written.
mypy - https://github.com/python/mypy
Pyright - https://github.com/microsoft/pyright
Pyrefly - https://github.com/facebook/pyrefly
ty - https://github.com/astral-sh/ty
fwiw the original pytype team was laid off as part of laying off the Python team last year.
Google lays off its Python team | Hacker News https://news.ycombinator.com/item?id=40171125
I'm surprised Google still maintained their own solution for this for so long. The standard for statically type checking Python nowadays is mypy.
Mypy is far too slow to type check a codebase like Google's. That's why Facebook, Google, and Microsoft have/had their own solutions.
pylance and others are great for IDE type checking as you go along, but when you ship your code off to the CI it's best to stick to mypy for the full automated run, since mypy is in some aspects a bit of the "reference implementation" for python typing (meaning, it's a good choice as the common denominator the code you ship will have with other code it interacts with).
Ideally yes, but there's conflicts between the type checkers.
pytype had two features that made it uniquely suited to google's needs:
1. it had powerful type inference over partially or even completely unannotated code, which meant no one had to go back and annotate the very large pre-type-checking codebase.
2. it had a file-at-a-time architecture which was specifically meant to handle the large monorepo without trying to load an entire dependency tree into memory at once, while still doing cross-module analysis
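To illustrate point 1, this is the kind of code such inference handles — the stub in the trailing comment is a rough sketch of what a pytype-style tool might derive, not actual pytype output:

```python
def scale(values, factor):
    # No annotations anywhere; a flow-based checker symbolically
    # executes this body with the argument types seen at call sites.
    return [v * factor for v in values]

# The call site supplies concrete types for the analysis:
result = scale([1.0, 2.5], 2)

# An inferred stub might then look roughly like:
#   def scale(values: list[float], factor: int) -> list[float]: ...
```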
there were a couple of attempts to get mypy running within google, but the impedance mismatch was just too great.
Google, Facebook, and Microsoft all maintain(ed) independent non-mypy typecheckers for internal and external uses that aren't served by mypy.
The various features mypy didn't support include speed, type inference/graduality, and partial checking in the presence of syntax errors (for linter/interactive use cases and code completion).
Maybe they could do typechecking using an LLM agent? I'm sure they'd fund a team for that.
This is the way
Just let it interpret the python code /S
It could also detect any bugs and fix them on the fly.
astral bags another one
Is ty more mature than pyright or mypy?
I'm currently using pyright, but I'm going to migrate once ty and its vscode extension are given the "production ready" greenlight.
at this stage I get very few false positives and it's so much easier to configure and use than pyright
Personal experience: if you use injector[1] with NewType so that you can give your primitive types a meaning and add them to your injection stack it completely fails. For example:
```python
ModelName = NewType("ModelName", str)

# You bind your string within a module:
binder.bind(ModelName, to=ModelName(parsed_args.model_name))

# When you need it:
model_name = injector.get(ModelName)  # Here it fails, saying that you need "concrete" types or something similar
```
So while it is great already, it definitely still has some rough edges. But that's to be expected from alpha releases.
[1] https://pypi.org/project/injector/
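For what it's worth, part of why dependency-injection containers struggle here is that `NewType` leaves nothing behind at runtime to key a binding on — a stdlib-only illustration (the injector behavior itself is as described above):

```python
from typing import NewType

ModelName = NewType("ModelName", str)

name = ModelName("my-model")

# At runtime NewType is just an identity function: the wrapped value
# is a plain str, and ModelName itself is not a class, which is
# plausibly what trips up a container that keys bindings on types.
assert type(name) is str
assert not isinstance(ModelName, type)
```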
ty still doesn't understand the match + typing.assert_never pattern, which is the last barrier to me switching.
tl;dr please use pyright instead
Another abandoned project from Google? Not surprised. Never trust on Google.
Pytype is used heavily inside Google, so they bear the penalty likely more than you. Besides, Python typing tooling is a rapidly changing landscape, so this isn't anything out of the norm. Not everything is an abandoned project, and if anything Google abandons projects well after the winners and losers are apparent, e.g. TensorFlow.
You have no idea how many of their projects I used to use that they dropped overnight.
Google sucks.
DAE Google killing another project?!!! (Apparently maintaining something completely free of charge for 13 years is not enough for online cannibals)
Scala 3: exists, https://docs.scala-lang.org/scala3/book/scala-features.html
Developers: mypy, pyright, pyrefly, ty, pypy, nogil, faster-python, sub-interpreters, free-threading, asyncio, ...
This would have been a fair point had Scala 3 supported Python's packages and been compatible with Python's tooling. At least until Mojo is mature and open-sourced, there are simply no alternatives to pouring time and effort into making Python a better language.
> This would have been a fair point had Scala 3 supported Python's packages
It supports the entirety of JDK-compatible packages and FFI/JNI bindings, which is a fair point of comparison. Not sure why you have to frame it around the idea of necessarily having to improve Python, since JDK infrastructure and tooling have been around for ages, available for devs to pick up.
Both the Python language and its ecosystem are such a mess. Imagine the number of human-hours spent on tooling development trying to fix fundamental flaws in the language's design.
Pytype was cool before Python type annotations became widespread. It seems to me like the industry is naturally moving toward native type annotations and linters and away from static analyzers like Pytype and mypy.
Pytype and mypy check native annotations.
Well, yes, but with native annotations the linter you're already using can do a lot of the type-checking work, so for many teams it's not worth it to add Pytype or mypy.