> How did we get into a reusable library authoring?
That was always my premise. Maybe I didn't make it clear enough, because I tend to just take it for granted that that's how you write code, in a style that's suited for reuse.
> But the context were programs here, or not?
That's the "global" I was talking about: Code that's using multiprocessing needs to know the context that it's embedded in. Any moment I might grab that piece of code and transfer it to a library of reusable components, because that's how I work - code that starts out as part of a standalone program doesn't necessarily stay that way. Multiprocessing gets in the way of that.
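A minimal sketch of that context-dependence, assuming the standard-library `multiprocessing` module: under the "spawn" start method (the default on Windows and macOS), worker functions must be importable by the child process and the entry point must be guarded, so this code can't simply be lifted into an arbitrary module or interactive session unchanged.

```python
# Sketch of why multiprocessing code must know its context: under the
# "spawn" start method, child processes re-import this module, so the
# worker must live at module top level (to be picklable) and the entry
# point must be guarded, or the script forks itself endlessly.
from multiprocessing import Pool

def square(x):
    # must be defined at module top level to be picklable for workers
    return x * x

if __name__ == "__main__":  # required guard under "spawn"
    with Pool(processes=2) as pool:
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```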
>That was always my premise. Maybe I didn't make it clear enough, because I tend to just take it for granted that that's how you write code, in a style that's suited for reuse.
That's somewhat condescending. You can write code "in a style that's suited for reuse" without being a library author - well, without publishing public packages anyway. Reuse is not only about some totally generic package that can run under any arbitrary context willy-nilly.
And of course there are tons of programs where the parts don't make sense as libraries, because they're tied to the specific functionality and overall design (whether because of the domain logic required or due to optimization or other constraints). You write them to be modular and clean, but not with "arbitrary people running my code in whatever context" in mind.
Not to mention the mountains of purpose-specific throwaway scripts - especially in the scientific community, where Python is big - written with little regard for reuse (even less for library building), and it's not because multiprocess is stopping them :)
So, yeah, I'd say that even if multiprocess isn't 100% suitable for generic reusable library-style code, that doesn't mean it can't be applied in a huge number of specific problems and codebases.
>Code that's using multiprocessing needs to know the context that it's embedded in.
If you want to speed up your Python program and there's something that can run in parallel with no shared state, you can use multiprocess to run it.
If your concern is having it as a "reusable component" that hides away the fact that multiprocess is used, and that can be called in any arbitrary context, that's a valid one - but then perhaps a specific Python program and its performance isn't your main priority. Library writing is, instead :)
Otherwise, it's enough that the user calling multiprocessing knows the function that is to be passed and its dependencies (or lack thereof). Beyond that, they don't have to change their top-level program's architecture.
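A minimal sketch of that usage pattern, assuming `concurrent.futures` from the standard library (the function name and workload here are made up for illustration): the caller only needs to know that the function is pure, picklable, and shares no state - the rest of the program is left untouched.

```python
# Sketch: parallelizing one self-contained, CPU-bound function without
# restructuring the rest of the program. The caller only needs to know
# that count_primes has no shared state and no external dependencies.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    # deliberately self-contained: no globals, no shared state
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [10_000, 20_000, 30_000]
    with ProcessPoolExecutor() as pool:
        # same call shape as the built-in map(); each chunk runs in
        # its own worker process
        results = list(pool.map(count_primes, chunks))
    print(results)
```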
I didn't mean to say that no one should ever use multiprocessing. I was laying out the reasons why I don't.
I'm really looking forward to subinterpreters. I think they have great potential for supporting a style of multiprocessing that is both faster and better isolated.