

AFAIK setuptools and hatch are for building. Publishing is a different process. You can try uv
for publishing, but idk if it supports publishing to alternatives to PyPI.
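Checked just now: uv publish does take a custom upload endpoint, so publishing to an alternative index should look roughly like this (the TestPyPI URL is just an example endpoint):

```sh
uv build                                                 # produce sdist + wheel in dist/
uv publish --publish-url https://test.pypi.org/legacy/   # upload to a non-PyPI index
```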
Have you tried hatch?
I don’t know why people are still bothering with setuptools for new projects.
Multi-root workspaces will let you choose the interpreter for each directory;
I think that’s the best way to make it work if you want to have more than one project in the same VS Code instance.
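For example, a workspace file plus a per-folder setting would look something like this (folder names and paths are placeholders):

```jsonc
// myprojects.code-workspace
{
  "folders": [
    { "path": "service-a" },
    { "path": "service-b" }
  ]
}
```

```jsonc
// service-a/.vscode/settings.json
{
  // each folder points at its own interpreter/venv
  "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python"
}
```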
nah, the main reason we have 15 standards is that there was no official one. This is good.
scanning and re-encoding is the way if you don’t care about the exact image or pattern
“Do one thing and one thing well”
This is why the Python landscape is such a mess in the first place. The “one thing” should have been project management. Instead, we end up with 20 different tools, each with a very limited scope, often overlapping or mutually exclusive in functionality, and it’s up to each project to adopt and configure them correctly.
The mass adoption of uv is a clear sign that we’re tired of this flawed approach. Leave the Unix philosophy to core utilities of an OS.
That’s pretty much the conclusion: you should try uv first, and there’s a small chance it doesn’t work for you and you’re not willing to fix it, or it’s out of your hands.
Examples include legacy projects and companies that don’t allow it (but I do question how they’d even enforce this, and how developers can even do their jobs if they can’t run binaries at the user level).
it’s turtles all the way down regardless; but it’s much easier to handle side effects if you have more numerous but smaller functions.
I prefer that because fully reading a module or component is not the most common scenario. The most common use case of reading code is (or should be) not caring about most of the implementation details until you need to; only then do you have to go down the rabbit hole.
Longer functions force the reader to understand most of their context every time they reach them, which is especially a problem when the function has a bunch of local vars.
single-use functions are fine; I often break 20+ line functions apart, and it makes them easier to test and reason about. It’s not just to avoid comments: block comments are just a sign that the function might be getting too complex.
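A contrived sketch of what I mean (the order rules are made up): the block comments become function names, and each piece can be tested on its own.

```python
def validate_order(order: dict) -> dict:
    # previously a "# validate input" block at the top of a 20+ line function
    if not order.get("items"):
        raise ValueError("order has no items")
    return order

def apply_discount(order: dict) -> dict:
    # previously an "# apply discounts" block; flat 10% off totals over 100
    total = sum(order["items"].values())
    order["total"] = total * 0.9 if total > 100 else total
    return order

def process_order(order: dict) -> dict:
    # the top-level function now reads like the old comments did
    return apply_discount(validate_order(order))

print(process_order({"items": {"widget": 120.0}})["total"])  # 108.0
```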
I do the same. The exception is test data, which is sometimes large enough to dominate the sdist size, so I choose not to include it.
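With hatchling, for example, that exclusion is a one-liner in pyproject.toml (the tests/data path is just an example):

```toml
[tool.hatch.build.targets.sdist]
exclude = ["tests/data"]
```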
It has records, a schema, and can be safely validated!
uh… a database implies use of a database management system. I don’t think saying that a YAML/TOML/JSON/whatever file is a database is very useful, as these files are usually created and modified without any guarantees.
It’s not even about being incorrect, it’s just not that useful.
It seems you’re describing a lock file. No one is proposing to use or currently using pyproject.toml as a lock file. And even lock files have well defined schemas, not just an arbitrary JSON-like object.
it’s a config file that should be readable and writeable by both humans and tools. So yeah, it makes sense.
And I don’t like YAML personally, so that’s a plus to me. My pet peeve is never knowing which names before a colon are part of the schema and which ones are user-defined. Even with StrictYAML, reading the nesting only through indentation is harder than in TOML.
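To illustrate the colon thing with a made-up config: in the YAML you can’t tell locally that jobs is schema-defined while my-job is a user-chosen key, whereas TOML’s table headers spell out the full path.

```
# YAML: is "my-job" part of the schema or user data? Nothing marks it locally.
jobs:
  my-job:
    steps: []

# TOML: the bracketed header makes both the nesting and the path explicit.
[jobs.my-job]
steps = []
```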
I didn’t know about StrictYAML, we’re really going in circles lol
TOML is already RW by Poetry, PDM, and uv.
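A tool-side edit looks roughly like this, e.g. with tomlkit (the style-preserving TOML library Poetry uses), assuming pyproject.toml has a [project] table with a dependencies array:

```python
import tomlkit

# parse() keeps comments and formatting intact, unlike a plain-dict round-trip
with open("pyproject.toml") as f:
    doc = tomlkit.parse(f.read())

doc["project"]["dependencies"].append("httpx")  # hypothetical edit

with open("pyproject.toml", "w") as f:
    f.write(tomlkit.dumps(doc))
```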
“additional layer to abstract away from pip”
requirements.txt files are not standardized, and pip can read dependencies from a pyproject.toml: that’s what pip install . does.
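Minimal example (the metadata is made up): with this pyproject.toml, plain pip resolves and installs the dependencies with no requirements.txt involved.

```toml
[project]
name = "my-app"
version = "0.1.0"
dependencies = ["requests>=2.31"]
```

Then pip install . pulls in requests and installs the project.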
there are still many unresolved matters with dependency resolution, but we need to leave requirements.txt
files behind.
Which you can still do. That said, the “correct” and less problematic way of installing packages should be easier than the alternative.
You still have the option to choose not to use a venv and risk breaking your user space.
The changes make it harder to do by accident by encouraging use of a venv. Part of the problem is that pip install --user
is not exactly user space and may in fact break system packages, and as you wrote, the user shouldn’t be able to inadvertently change the OS.
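In other words, the encouraged path is only two extra lines:

```sh
python -m venv .venv          # isolated environment in the project dir
source .venv/bin/activate     # on Windows: .venv\Scripts\activate
pip install somepackage       # now safely scoped to the venv
```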
The fastest way to learn best practices in Python is to use a linter like Ruff. It also features a formatter, so you don’t have to spend time beautifying the code.
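Getting started is basically two commands, run from the project root:

```sh
ruff check --fix .   # lint, auto-applying the safe fixes
ruff format .        # format the codebase in place
```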