There's no way to delete multiple files with a single syscall, and no way to purge a folder of all its contents in one call. At least not on Windows, using any public API.
The syscalls are quick enough when done locally. What I think is going on is that for a directory with N files on a share, I get N (or, more likely, some multiple of N) network round trips.
If the network round-trip time is 1 second, and deleting a directory with 1000 files takes 1 second on the local drive, then I want the delete on the network share to take 2 seconds, not 2000 or 4000 seconds.
For deleting - yes, you will get a round-trip per delete request. So if they are serialized (which is the case with standard tools), it will be excruciatingly slow indeed.
For scanning - you'll get the directory's contents in chunks. The size of a chunk depends on the size of the buffer you provide. For smaller folders the whole listing will normally fit in a single request.
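That buffer-per-chunk pattern isn't specific to network shares; you can see the same thing locally. A minimal Linux sketch using the raw getdents64(2) syscall - the buffer sizes here are arbitrary illustrations, not anything the protocol negotiates:

```c
#define _GNU_SOURCE
#include <dirent.h>
#include <fcntl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* List a directory in buffer-sized chunks via the raw getdents64 syscall.
   Returns the number of entries seen (including "." and ".."), or -1. */
static int list_in_chunks(const char *path, size_t bufsize)
{
    char buf[4096];
    if (bufsize > sizeof buf)
        bufsize = sizeof buf;

    int fd = open(path, O_RDONLY | O_DIRECTORY);
    if (fd == -1)
        return -1;

    int count = 0;
    for (;;) {
        /* Each call fills at most `bufsize` bytes: one "chunk".
           A smaller buffer just means more calls, not fewer entries. */
        long n = syscall(SYS_getdents64, fd, buf, bufsize);
        if (n == -1) {
            close(fd);
            return -1;
        }
        if (n == 0)             /* end of directory */
            break;

        /* Walk the variable-length records packed into this chunk. */
        for (long off = 0; off < n; ) {
            struct dirent64 *d = (struct dirent64 *)(buf + off);
            count++;
            off += d->d_reclen;
        }
    }
    close(fd);
    return count;
}
```

Calling it with a tiny buffer and a large one returns the same entry count; only the number of syscalls differs - which is exactly why small folders fit in one request.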
unlink(2) returns EISDIR (on Linux, at least) if you try to unlink a directory. Long ago you could unlink(2) a directory as root (I'm not sure that's still possible), but it was a bad idea: it did not recursively free the directory's contents back to the file system's raw storage.
To remove a directory on Linux you have to empty it out (recursively, using unlink(2) on files and rmdir(2) on empty subdirectories) and then rmdir(2) the top-level directory.