Doing what it's told to do: that's pretty much the Linux world. Doing what the developer thinks is best for you: that's Microsoft's Windows.
The second sure looks more agreeable - if you're not a power user, your computer actually works 'by itself'. It's also perfectly fine, provided the automated procedures are flawless, can't be hijacked, and can't escape their context...
Unfortunately, Windows was never designed with security in mind, and was never designed to confine a program to a restricted context.
Vista's UAC tried to solve that. It should have worked well. However, so many programs try to do things by hitting a system feature they don't really need that UAC keeps popping up - software writers have gotten so used to Windows returning nothing on a bogus call that they make such calls all the time. On another OS (say, Linux), these developers would get an error code from the system, so they'd have to tighten up their interfaces, perform more consistency checks, and so on.
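As a minimal sketch of that difference (not taken from any real application, and the path is made up): on Linux a bogus request comes back with an explicit error code, so the failure has to be ignored deliberately rather than by default.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        /* A call the program has no real business making:
         * changing into a directory that does not exist. */
        if (chdir("/no/such/directory") == -1) {
            /* The kernel sets errno (ENOENT here); the failure is visible
             * unless the developer deliberately discards the return value. */
            fprintf(stderr, "chdir failed: %s\n", strerror(errno));
            return 1;
        }
        return 0;
    }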
Yes, they'd actually have to develop better code. My goodness!
This came up in recent Wine releases (1.1.38/39), where the developers decided to silence Wine's debug output in error cases where Windows itself wouldn't return an error.
In Windows 7, UAC is not as annoying - because it has become pretty much useless. It lets a recent piece of software (which was programmed more tightly) run well, and doesn't pop up as much on older pieces of software. Which doesn't help: some of the stuff 7's UAC silently allows could be used to hack into the system!
In Linux, because the software is developed, then packaged, by people who hold to the tenet that "a piece of software should do what it's supposed to do as well as it can, no more, no less", ensuring a piece of software can't hack the system is rather easy: if it doesn't need to touch the system, it won't; if it tries to anyway, it'll get an error (so it was badly programmed, or was compromised), or it will ask for superuser rights (so the access is intentional - or, if it was compromised, the user may notice that previous runs didn't require superuser rights).
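To make that concrete, here is a minimal sketch (the file name is invented, and it assumes an ordinary unprivileged user): a program that tries to touch the system without needing to is simply refused, with an error it cannot mistake for success.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Writing a file into /etc is not something this program needs to do. */
        int fd = open("/etc/hijacked.conf", O_WRONLY | O_CREAT, 0644);
        if (fd == -1) {
            /* Without superuser rights this fails (EACCES): either the program
             * is badly written or it has been compromised - and the user can
             * notice, because earlier runs never needed root. */
            fprintf(stderr, "cannot write to /etc: %s\n", strerror(errno));
            return 1;
        }
        close(fd);
        return 0;
    }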
The programming tenet "be permissive in what you accept, be strict in what you output" does not hold when dealing with security - there it should be "be restrictive in what you accept, be strict in what you output".
That's also where standards help: they tell you what to be strict about, by defining what you should accept and what you should output.
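As a rough illustration of that tenet (the parse_port function and the "port number" format are just examples I'm making up, not anything from the discussion above): accept only what the defined format allows, and reject everything else outright.

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    /* Hypothetical strict parser: returns the port on success, -1 on anything
     * that is not exactly a decimal number in the range a port may take. */
    static long parse_port(const char *s)
    {
        char *end;
        errno = 0;
        long v = strtol(s, &end, 10);
        if (errno != 0 || end == s || *end != '\0')   /* not a clean number */
            return -1;
        if (v < 1 || v > 65535)                       /* outside the valid range */
            return -1;
        return v;
    }

    int main(int argc, char **argv)
    {
        if (argc != 2 || parse_port(argv[1]) == -1) {
            fprintf(stderr, "usage: prog <port 1-65535>\n");
            return 1;
        }
        printf("port accepted: %s\n", argv[1]);
        return 0;
    }

An atoi()-style "take whatever digits you find" parser would be the permissive version - and exactly the kind of leniency that bites you later.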