
Honestly, I have mixed feelings about this. Pydantic is pretty cool but also kind of a strange bird.

It seems to have been originally conceived to do the job of forms in a web app (serialization/deserialization). But then it also has a double life as a runtime type validation framework. And I don't think it really does either one quite that well. It definitely offers a level of conciseness that is nice compared with traditional form or serializer frameworks. But then at times it seems underpowered or awkward to customize. I think that's because it doesn't seem to know exactly what it wants to be.

I've worked on projects where we used pydantic and it eventually became clear that we were either overusing or misusing it. It always seemed like the abuse of pydantic came out of developers on the team not really understanding what its purpose was. Maybe it's just my personal bias, but I think it's worth bringing a bit of skepticism to this debate and not just accepting the description of things in the issue ticket at face value. In other words, is it possible that pydantic is colliding with the steering council because it's misusing the annotations feature? I don't think I have a clear answer but it's worth asking the question anyway.



I agree, I don't love Pydantic. I much prefer the "stack" of Attrs + Marshmallow + Desert + Apispec. More tools, yes, but they compose nicely and they each do one thing well. Now where have I heard that one before...

I wish FastAPI had chosen that path instead of Pydantic, but here we are. FastAPI itself is a wonderful tool, with wonderful docs.

> is it possible that pydantic is colliding with the steering council because it's misusing the annotations feature?

Yes.

There are lots of good reasons why types aren't necessarily a great match with runtime validation.

It gets the whole thing backwards IMO; I want to be erasing types at runtime and verifying them ahead-of-time. If I consume a piece of data that might be the wrong type, I want to explicitly validate the data myself, and return the desired type if and only if my validation passes. This is the Attrs approach and it works much better, even if you have to put in a few extra keystrokes in the handful of situations when you want explicit validation upon every class instantiation.
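For illustration, here's a minimal sketch of that validate-then-construct pattern using only stdlib dataclasses (the names here are hypothetical, not from any library): validation happens explicitly at the boundary, and the typed object is created only once the raw data passes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    age: int

def parse_user(raw: dict) -> User:
    """Explicitly validate untrusted input, returning a User
    if and only if every check passes."""
    name = raw.get("name")
    age = raw.get("age")
    if not isinstance(name, str) or not name:
        raise ValueError("name must be a non-empty string")
    if not isinstance(age, int) or age < 0:
        raise ValueError("age must be a non-negative integer")
    # Inside the program, a User is trusted; nothing re-validates
    # fields on every instantiation.
    return User(name=name, age=age)
```

attrs offers a similar shape with less boilerplate via per-field `validator=` hooks, for the cases where you do want checks to run on every instantiation.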

Moreover, making annotations lazy is not mindless adherence to purity or some kind of intended "fuck you" to the Pydantic ecosystem. PEP 649 clearly states its motivations. Type annotations have always been a static analysis tool first and foremost, and I think it's absolutely wrong to even suggest that CPython should suffer a higher maintenance burden and suboptimal performance to support what is effectively an off-label usage.


To each their own. I dislike Marshmallow and I adore pydantic. Runtime validation that aligns with static types makes sense to my brain, and keeps me from losing my mind in a complex system.

I don't think making the type annotations lazy is a "fuck you" to pydantic, but it is a breaking change on a minor revision.


Well, it is a breaking change on a minor revision, but Python doesn't use semantic versioning so there's no reason that can't happen.

The relevant PEP, https://www.python.org/dev/peps/pep-0387/, states that breaking changes may be made with two minor versions of warning (or one major version). Minor versions are released on a calendar schedule (again, not semantic versioning), so this policy is 2-3 years of warning in practice.


Which again is a poor way to go about things.


I generally agree with you. Building automatic validation on top of dataclasses seems like a far-superior solution moving forward, given the language's trajectory.


That's easy to say, but pydantic exists, and automatic validation on top of dataclasses doesn't — or at least there's no well-known, well-tested library for it. (I'd also prefer something other than pydantic that doesn't coerce types by default, but alas...)

And it's a huge use-case that should've been obvious. I get that type annotations are meant for static analysis rather than runtime typing, but hand-rolling validation doesn't feel like "batteries included". More generally, there are already tools using type annotations, and there'd likely be more if it were robust/easy. Python is generally good at enabling metaprogramming, but type annotations are a bit of a minefield. See even dataclasses' own implementation, e.g. [0]. And building on top of dataclasses is also tricky.

I'm hopeful it can be resolved. Type annotations are a huge boon to the language, and it's already no small feat to have added them without breaking backwards compatibility.

[0] https://github.com/python/cpython/blob/8a232c7b17a2e41ae14d8...


A dataclass child can't add a required (non-default) field when it inherits from a parent that has a default value set. How is this not a dealbreaker for most? That's literally the only reason I keep using pydantic instead of dataclasses.


I agree it's a deal breaker; it's also the reason we have to use a custom solution :(

If you can add the dependency I would recommend using attrs, it's so much better than the stdlib dataclasses :(


Pydantic does in fact support dataclasses as well, with a few caveats.
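For reference, here's roughly what that looks like (a sketch assuming pydantic is installed; error details differ slightly between v1 and v2):

```python
import pydantic
from pydantic.dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

# Note the coercion the grandparent comment complained about:
# the string "1" is silently converted to the int 1.
p = Point(x="1", y=2)
assert p.x == 1

try:
    Point(x="not a number", y=2)
except pydantic.ValidationError:
    pass  # truly invalid data is still rejected
```

Among the caveats: the decorated class is wrapped by pydantic, so it doesn't behave identically to a plain stdlib dataclass in every respect.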



