There are actually a couple of implementations of this. I think that I most recently saw it in Pulsar[0].
I'm kind of glad these have never gotten too popular because to maintain a healthy swarm using the BitTorrent protocol, peers should not be prioritizing chunks sequentially.
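The whole difference is just the piece-selection policy. A toy sketch of the two strategies (function names are mine, not from any real client):

```python
import random

def rarest_first(missing_pieces, availability):
    """Pick a piece that the fewest peers have (the classic BitTorrent
    strategy, which keeps rare pieces alive in the swarm).

    missing_pieces: set of piece indices we still need
    availability: dict mapping piece index -> number of peers that have it
    """
    rarest = min(availability[p] for p in missing_pieces)
    # Break ties randomly so all downloaders don't converge on one piece.
    return random.choice([p for p in missing_pieces if availability[p] == rarest])

def sequential(missing_pieces, availability):
    """Pick the lowest-numbered piece (what a streaming client wants)."""
    return min(missing_pieces)
```

Sequential selection means everyone requests the same early pieces and nobody holds the rare late ones, which is exactly why it stresses the swarm.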
"Isn't sequential downloading on BitTorrent bad?
Generally, yes. However, XBMCtorrent respects the same requirements "defined" by uTorrent 3. Also, XBMCtorrent tries to make it up to the swarm by seeding while you watch the movie."
I wonder if this prioritizes chunks in order? Or does one have to wait for an entire mp4 to be downloaded before knowing it will play all the way through?
Right, if streams aren't downloaded in order, that'd be very inefficient, and if it has to wait for the whole file to be downloaded, that'd defeat the purpose of streaming.
I've used that on OS X; it worked quite well. Ultimately I don't see much use for it, though. Maybe if you want to inspect the headers of files before you download them completely, or something like that.
Not exactly; the checksums in BitTorrent are fairly strong, but you could get that benefit with just a FUSE layer that knew how to associate checksum metadata with files.
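To be concrete, the verification such a layer would do is just per-piece SHA-1 against the hashes stored in the torrent metadata (the `pieces` field). A minimal sketch; the helper names are mine:

```python
import hashlib

PIECE_HASH_LEN = 20  # SHA-1 digests are 20 bytes each

def piece_hash(pieces_field: bytes, index: int) -> bytes:
    """Extract the expected hash for one piece from the concatenated
    'pieces' string in the torrent's info dictionary."""
    start = index * PIECE_HASH_LEN
    return pieces_field[start:start + PIECE_HASH_LEN]

def verify_piece(data: bytes, expected_sha1: bytes) -> bool:
    """Check a downloaded piece against its 20-byte SHA-1 hash."""
    return hashlib.sha1(data).digest() == expected_sha1
```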
I'm also not entirely sure what benefit this brings, currently, other than "amusing proof of concept", so take my note above with a grain of salt.
I agree that the standard torrent use case of torrents containing small numbers of huge files rather fails with this protocol. However, I think it could be interesting for systems with large numbers of small files.
For instance, think about a LaTeX install. The user could, in a matter of seconds, download a torrent file pointing to a multi-gigabyte repository of all of the fonts and modules that are available. They then run latex on whatever document they're trying to render. The filesystem would download the libraries as needed, so everything the user needs would be available, but she wouldn't have to wait while downloading a bunch of libraries that she didn't need. Nor would she need to decide beforehand which libraries to install or ignore, as everything would be on the filesystem. The user experience of waiting for the executables and base libraries to download when run for the first time wouldn't be great, but arguably better than waiting for the entire distro to install in the standard case.
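The read path of such a filesystem could be quite simple; a rough sketch, where `fetch_chunk` is a stand-in for an actual torrent client and the piece size is just a plausible default:

```python
class LazyFile:
    """Hypothetical on-demand file: pieces are fetched from the swarm
    only when some byte range inside them is actually read."""

    CHUNK = 256 * 1024  # 256 KiB, a common torrent piece size (assumption)

    def __init__(self, size, fetch_chunk):
        self.size = size
        self.fetch_chunk = fetch_chunk  # callable: piece index -> bytes
        self.cache = {}                 # piece index -> downloaded bytes

    def read(self, offset, length):
        out = bytearray()
        end = min(offset + length, self.size)
        while offset < end:
            idx = offset // self.CHUNK
            if idx not in self.cache:   # only hit the network on demand
                self.cache[idx] = self.fetch_chunk(idx)
            lo = offset % self.CHUNK
            take = min(self.CHUNK - lo, end - offset)
            out += self.cache[idx][lo:lo + take]
            offset += take
        return bytes(out)
```

A FUSE `read()` callback would map almost directly onto this: the kernel hands you an offset and a length, and only the pieces covering that range ever get fetched.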
Having access to a large repository of files without actually downloading the files to your computer already exists at large corporations, but you want a more traditional serving hierarchy in order to get acceptable read/write latencies.
BitTorrent is more useful when 1) local peer transfer speeds exceed the transfer speed from a master copy, and 2) everybody wants an identical copy of the data.
For example, if you want to deploy a new version of Facebook.com to a far-away data center, BitTorrent is great (you only have to upload maybe 2-3x the size of the binary to the datacenter, and the thousands of nodes inside the datacenter can share the pieces amongst themselves).
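Back-of-the-envelope version of that math (the 2-3 seed copies figure is a ballpark from the comment above, not a protocol constant):

```python
def origin_upload_gb(binary_gb, nodes, p2p=True, seed_copies=3):
    """Rough upload cost from the origin to `nodes` machines.

    Without peer sharing, the origin sends every node a full copy.
    With BitTorrent-style sharing, the origin only seeds a few copies
    and the nodes exchange pieces amongst themselves.
    """
    return binary_gb * (seed_copies if p2p else nodes)
```

So for a 1.5 GB binary and 1000 nodes, naive distribution costs the origin 1500 GB of upload, while seeding ~3 copies into the swarm costs about 4.5 GB.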
The basic problem is that very few apps are designed with the mindset that file I/O can take some time. So they'll freeze until you get all of the packets. Even if it only takes 2-3 seconds, that's sort of unacceptable.
In this case, a better experience could be had via a web-to-torrent proxy. Browsers are one of the few apps which don't freeze while waiting for remote I/O...