OpenBitTorrent – An Open Tracker Project (openbittorrent.com)
200 points by antonkozlov on July 28, 2019 | hide | past | favorite | 61 comments


I use this convenient github repo that contains a regularly updated list of open (and working) trackers:

https://github.com/ngosang/trackerslist

trackers_best is usually sufficient.


NewTrackon is fantastic, and it has an API: https://newtrackon.com/list

I've written a Chrome extension which periodically grabs the list of active trackers from NewTrackon and automatically adds them to magnet links as you're browsing. It's made torrenting so much faster for me.


Could you share the extension?


Ha, I also navigate there:

https://raw.githubusercontent.com/ngosang/trackerslist/maste...

And paste the list in my torrent app. The above list updates daily or so, so just by visiting it you get the latest and greatest.


What we really need is a way to mirror that GH repo full of tracker URLs all around the world, managed by Git clone-pushing bots, so that these things can never be shut down.

There - I said it, we need more GitHubs around the world, hosted in every country to curb censorship :)


What’s your workflow to query these and find what you’re looking for?


If you want to know which files are being shared, there are some self-hostable indexing apps that can scan the peers of a tracker (though DHT crawling is probably the better solution).


Uhh. Your torrent client will automatically handle querying them and finding what you're looking for (peers).


OP is very obviously asking how to find out which files are being shared, rather than how to find peers.


I’m talking about creating new torrents, not finding existing ones.


I suppose another use case is when you find a rare torrent that no longer works (no seeds, trackers down, etc.); you might be able to get it to work this way. My question is: has anyone got success stories with this use case?


Let me ask a slightly off-topic question.

BitTorrent is based on torrent files that describe a specific list of files. Is it really necessary to maintain them as a single pack/folder? For years I've had this idea: you could just tell the torrent client where your data storage is (with flexible granularity, of course), and it would index it and share it without any specific torrent file, simply by block hashes. I.e., if I have a file somewhere deep in the tree, and someone wants to download a block that corresponds to that file, or to part of it, it would be served without being part of some torrent on my side. Why hasn't this been a thing from the beginning?

For me the upside is that I could sort and categorize files (even install and forget them) without removing or duplicating them outside the "Torrents/" folder. I believe this could reduce the maintenance burden, so seeders wouldn't give up on seeding just because there's too much to manage. It would also provide vast deduplication and cross-torrent seeding. It's like a distributed filesystem where you auto-share inodes instead of prepared folders.


If I am understanding your suggestion properly, that's basically what IPFS does :)


If it's only about being able to move your files anywhere, all clients have the ability to do that and continue seeding from where you want it.

If what you want is to be able to send a very specific file to someone, then single-file torrents exist for this purpose already. There might be some manual work indeed, but that's only because no one really had the need before.


Not much, though. I use transmission-create from the command line to create a torrent file, then grab the hash out of it to email a magnet link to people I know in order to share arbitrary files. I can do it while on the phone with somebody, and they can forward the email if they want to pass it on.


There's one more obstacle in the way of adopting any such system: the private-torrent DRM garbage. Torrent creators stupidly mark their torrents private or not, and changing that flag changes the infohash, so you might not be able to access the files under the new hash. It's incredibly hypocritical.


> the private torrent DRM garbage. Torrent creators stupidly mark their torrents private or not, changing that flag changes the infohash and you might not be able to access the files with the new hash, it's incredibly hypocritical.

Private is not used for DRM, it's used for security on every private tracker.


The private flag also annoyingly exists on many public torrents with no good way to override. It is DRM.


You're not forced to make your torrents private, so I'm not sure how that's related.


If the original suggestion is to share files without restrictions between people or active torrents, that would require dropping the DRM that stops people from using things like DHT/PEX/LPD with their torrents.


I agree that it'd be a cool hack to be able to somehow reuse all the existing BitTorrent software and network to sync up arbitrary pieces of files that people just happen to have. I've been thinking about this for a while, but I haven't come up with any good solutions.

The main problems are that:

1. torrents have different “piece sizes” (i.e. the torrent creator chooses a power-of-two size for the partitions of the torrent’s data that get their hashes stored in the torrent file); and

2. multi-file torrents produce “pieces” that are an arbitrary split of the concatenation of their files, as if they were actually a torrent of an archive file—just a virtual one synthesized on the spot by the torrent client, as if it was using FUSE to translate the filesystem it sees into a packfile to share.
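To make #2 concrete, here's a minimal Python sketch of how a multi-file torrent derives its piece hashes from the concatenation of its files, per BEP 3 (the file contents and piece size are invented for illustration):

```python
import hashlib

def piece_hashes(files, piece_length):
    """Concatenate file contents and SHA-1 hash each fixed-size piece,
    the way a client hashes a multi-file torrent (BEP 3)."""
    stream = b"".join(files)  # virtual "packfile": files laid end to end
    return [
        hashlib.sha1(stream[i:i + piece_length]).hexdigest()
        for i in range(0, len(stream), piece_length)
    ]

# Two small "files"; with a 16-byte piece size, the second piece
# straddles the boundary between them.
files = [b"A" * 20, b"B" * 20]
hashes = piece_hashes(files, 16)
print(len(hashes))  # 3 pieces for 40 bytes of data
```

Note that the second piece hashes `b"A" * 4 + b"B" * 12`: a hash of neither file on its own, which is exactly why per-file lookups against existing piece hashes don't work.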

The “solution” to problem #1, if you can call it that, is to walk your files and generate (log N) different torrents per file, for each piece size. Then—at least for single-file torrents—you could in theory build a tracker (and client-API-hooked search engine) that exposes “virtual torrents” for every possible combination of pieces (when fed this sort of... torrent-data overlapped Merkle tree.)

I don’t think there’s really a more clever solution to problem #2 than the one torrents already use, though. It really does make sense to pack files together into single large pieces, when files are small and numerous, because the metadata of sharing a single file on the network (or, to a lesser extent, of storing a single file separately on disk) has overhead, and so people don’t tend to like seeding tons of tiny files, preferring to instead seed archives of them. (Or, even worse, when a file is seen by most as useless except in the context of being a “resource file” of a particular application, people will only ever bother to seed it in the form of the installation archive of said application.)

To find a particular tiny file on the network, then, you need an index mapping the metadata (and/or hash) of the file you want, to the metadata and hash of an archive/packfile that contains it, and which is tracked by the network. Which is... the thing torrent search engines—usually built on the backs of torrent trackers—give us: the ability to plug in a filename, and find torrents that list that name in their manifests.

(IPFS doesn’t solve problem #2 either, if you’re curious. It just keeps everything as individual file hashes, and then allows you to retrieve directory manifests by their hashes—but you can’t predict a directory manifest hash from a file hash, in order to discover the directories that “make use of” a given file, and might therefore have a more-well-pinned packfile equivalent representation that includes that file.)

I think we could solve #2 if we had some sort of system of hashing—not necessarily cryptographic hashing—that enabled us to ask, in O(1) time, “given that the hash of a packfile of bytes of size Xsz is Xh, does the packfile contain a file whose size is Ysz≤Xsz but otherwise arbitrary, and whose hash is Yh; and if so, at what range within the packfile can we find the file?” (The “but otherwise arbitrary” part is important; we can’t just pre-chunk up the packfile and hash all possible chunks of it, like we can with #1, since the files in it might be in any arbitrary positions with any arbitrary lengths, potentially even straddling a piece boundary.)

If we had such a hash algorithm, then any file hash of this type that a tracker received in a request could be used as, essentially, a zero-knowledge proof of what packfiles contain that file (since you can just iterate all the packfiles you have and check whether they contain it, and use that to build an inverted index from segment hashes to packfile hashes.) But I don’t think a hashing scheme like this currently exists. (At least, in this use-case... YouTube’s ContentID algorithm is kind of an implementation of this for audio fingerprinting. Or maybe not; maybe it’s just pre-generating comparator fingerprints for every possible slice of each claimed audio track!)


If I understand correctly, #2 is a problem of how existing torrents were created. But what if we could somehow convert all torrents to [{pathname, hashes:[...]}] of 64k zero-padded blocks? (Not zero-padding blocks on the wire, only hashing this way.) If such tech were seen as useful, it seems clients would only need to rehash their contents once, save a map to the original torrent, and let the data go: sort of a backwards-compatible upgrade step.
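A sketch of that per-file, zero-padded block hashing (the 64 KiB block size and the [{pathname, hashes}] layout come from the comment above, not from any existing spec; the file contents are made up):

```python
import hashlib

BLOCK = 64 * 1024  # 64 KiB blocks, zero-padded for hashing only

def block_hashes(data: bytes):
    """Hash a file's contents in fixed 64 KiB blocks, zero-padding the
    final short block for hashing (the padding never goes on the wire)."""
    hashes = []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK].ljust(BLOCK, b"\x00")
        hashes.append(hashlib.sha1(chunk).hexdigest())
    return hashes

# A 100 KiB file maps to two blocks: one full, one padded.
entry = {"pathname": "movies/clip.mkv",
         "hashes": block_hashes(b"\x01" * 100 * 1024)}
print(len(entry["hashes"]))  # 2
```

Because block boundaries are per-file rather than per-concatenation, two torrents containing the same file would produce identical hash lists for it, which is what would enable the cross-torrent seeding discussed above.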

Edit: seen your edit, will read it later; thanks for your thoughts on this.


Yeah, adds overhead, but works somewhat in the “easy” case where it’s the torrent client itself producing the packfile.

But most files we might want to find on the network aren’t actually left “unpacked” on disk by their seeders and only virtually packed at torrent creation time; but rather are sitting in existing archive files that the seeder wants to redistribute unchanged (usually because the archive is cryptographically signed, or is a functioning self-extracting installer, or has some other property it would lose if the extraction of the archive’s contents were given instead.)

In such cases, a naive index wouldn’t work; but you could just slap libarchive into torrent clients, allowing them to pull out the leaf-node hashes from files packed into an archive file at any depth (and also the branch-node hashes of any intermediate archives.)

In a sense, we’re talking about the same thing that virus scanners do, except the “signature database” is a produced and shared artifact of the network rather than a manually-curated set!

This requires every torrent file in existence to be re-created from the source data, though, which is probably unlikely at this point, unless someone is planning to write the equivalent of Archive.org’s Wayback Machine to spider all torrent trackers, download everything they’ve got, recreate the torrents, and seed them in the new format, which is hopefully backward-compatible with the old format in such a way that this seed shares peers with the existing seeds.


Yeah, I see that this raises more complex questions. On one hand, it could be solved by signing an entire torrent instead of original data, and/or builtin zipping instead of pre-packaging. On the other hand, existing torrents and packaging habits would be likely incompatible.

I also thought of something similar to libarchive: a format analyzer that could, e.g., separate a movie from its subtitles and audio tracks for specific formats, enabling world-wide availability for heavy streams. Or sharing JPEGs/MP3s with metadata changed to one's needs, but with the main parts intact and thus still seedable. It's not about really splitting a file into parts, but about detecting better block boundaries: sort of network-wide struct member alignment.


This is a very old project, and has been online for a long time. It started getting popular when TPB ran into trouble and they started to add OBT as a secondary tracker to uploaded torrents. They also got sued by Hollywood studios but it didn't go anywhere apparently.

- https://torrentfreak.com/publicbt-tracker-set-to-patch-bitto...

- https://torrentfreak.com/hollywood-appeals-decision-not-to-s...

- https://torrentfreak.com/court-refuses-to-order-shutdown-of-...


What makes it "open"? What makes it different than other trackers that don't require registration?


That you don't need to upload a torrent file somewhere in order for it to be made available by the tracker. That is, it's not running in the typical "whitelist mode", but instead, by simply announcing yourself and a torrent hash, that torrent is made available in the swarm for anyone to get in on.

This mode of operation is one of the fundamental modes of opentracker - http://erdgeist.org/arts/software/opentracker/ - which is the software pretty much all of the many open BitTorrent trackers run.
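That "announce yourself and a hash" step is just an HTTP GET in the original protocol. A sketch of building the announce URL per BEP 3 (the tracker URL and infohash here are placeholders; no network request is made):

```python
import os
from urllib.parse import urlencode

def announce_url(tracker, info_hash: bytes, port=6881):
    """Build a BEP 3 HTTP announce URL: the raw 20-byte infohash and a
    20-byte peer id are percent-encoded into the query string."""
    params = {
        "info_hash": info_hash,     # SHA-1 of the torrent's info dict
        "peer_id": os.urandom(20),  # client-chosen identifier
        "port": port,
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
    }
    return tracker + "?" + urlencode(params)

url = announce_url("https://example.org/announce", bytes(range(20)))
print(url.startswith("https://example.org/announce?info_hash="))  # True
```

An open tracker registers whatever infohash arrives this way and starts handing out peers for it; a whitelisting tracker would reject hashes it hasn't seen uploaded.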


What makes this "open" is, I presume, that other trackers used to be tied to a forum with registration, while this is supposedly not tied to a forum or a torrent search website.

However, from what I read some time ago, it's actually connected to The Pirate Bay, and whenever The Pirate Bay went down, OpenBitTorrent was also down.


I'm not sure what problem this is solving, and the info on the website is sparse to say the least. Anyone care to explain?


Most of the sites that index torrents no longer host their own trackers. They stay “once removed” that way I presume.

This tracker started in 2009 and has gone offline twice since, returning from the ashes most recently in 2018.

It runs on opentracker software that anyone can download and run, but it would seem popularity and widespread use make a tracker more useful to the public.

It also bears the brunt of legal issues since it remains high-profile and subject to numerous court cases.


> This tracker started in 2009 and has gone offline twice since

It has gone down a few orders of magnitude more times than that.


Well it's a free tracker for torrents. A tracker is a server that manages the swarm of people sharing a torrent file.

I'm not sure why this is really necessary given the existence of trackerless DHT torrents though. Maybe using a tracker makes it faster somehow?


Having a tracker is faster and also helps you bootstrap your DHT if you're having problems connecting to the bootstrap peers hardcoded in your client.


Far as I can tell it's just a regular tracker.


The irregular (open) bit is that it does not require uploading torrents for them to be accepted by the tracker. You can freely announce a hash, no pre-requisites, and the torrent will exist in the swarm.


And it has been around for over a decade.


I'm curious - what makes people operate a BT tracker?


Just some good old-fashioned community service; why would anyone need a better reason? P2P networks often rely on volunteers.


The same reason people run forums, Mastodon nodes, FTP servers. It's fun and a technical challenge.


Part of the fun of operating something like a forum or an IRC server is the interaction with your users. In BT there's none of that.

Also, FTP servers? Are there public FTP servers people operate? Why?


> is the interaction with your users. In BT there's none

This is by design, but I agree: it would be nice to have at least a way to point users at a place where a common standard and open protocol would allow communication related to the torrent. An IRC channel named after the file hash, maybe?

In the old days of Napster and later the OpenNap network (anyone here use the Lopster client?), I discovered a lot of unknown rock bands just by looking at files shared by other people and asking them for more information. Those were the early days of multiuser downloads, when downloading a not-so-famous movie could take two months (it happened to me), so pestering the poor user on the other side with continuous downloads wasn't an option. Being able to chat also gave users the opportunity to help each other solve networking problems or other technical issues.

Having some way of communicating without altering the protocol (that is, via an external server, with the torrent containing just a field identifying a channel) could be an interesting improvement.


It's not torrents, but Soulseek was still running last I checked. Browsing peers' collections, chatrooms, etc.


> Part of the fun of operating something like a forum or an IRC server is the interaction with your users.

And responsibility, a requirement to keep the software up to date, and an increased attack surface.

> In BT there's none of that.

Not anymore, I suppose; it used to be mandatory to stay on good terms with channel and server ops.

> Also, FTP servers? Are there public FTP server people operate? Why?

To save costs. You'd think an ISP in, for example, the Netherlands hosting a mirror of (legal) .iso or .deb files is doing it for fun, but in reality they're saving themselves money: customers don't have to download from farther away, where the ISP has less bandwidth and worse peering agreements than to its local server.


Also, don't know if this applies to others, but we (the uni I work for) maintain public FTP servers because we are obligated to by RHEL. I think being a uni we get a cheaper rate this way. We also host some of the FOSS software we use on these servers.


Data acquisition! You know who is downloading and what is being downloaded.


This is at least 10 years old. What is it doing here now?


I'm rather curious why the UDP tracker protocol still exists (and is default on this tracker). Back in the day when the only viable web server was a prefork Apache, the small TCP connections from torrent clients would easily overwhelm servers. Nowadays with event based multi threaded servers, it seems this is no longer an issue but I still see the UDP protocol used very frequently, despite its serious design flaws that allow UDP DoS amplification attacks.


I'm not aware of any way to cause a UDP amplification attack with a torrent tracker - announcing to it doesn't cause it to connect back to you.


> I'm not aware of any way to cause a UDP amplification attack with a torrent tracker

You get peers from the tracker in the response, and that's more data than in the announce, which means an amplification attack is possible.

> announcing to it doesn't cause it to connect back to you.

UDP is a connectionless protocol: nothing "connects" to anything, it just sends data.
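For scale, the BEP 15 packet sizes (98-byte IPv4 announce request, 20-byte response header, 6 bytes per returned peer) let you estimate the ratio; this back-of-the-envelope arithmetic is mine, not from the thread, and ignores the protocol's connection-ID handshake:

```python
# Rough amplification arithmetic for the UDP tracker protocol (BEP 15):
# an IPv4 announce request is 98 bytes; the response is a 20-byte header
# plus 6 bytes (4 for the IP, 2 for the port) per returned peer.
ANNOUNCE_REQUEST = 98
RESPONSE_HEADER = 20
BYTES_PER_PEER = 6

def amplification(num_peers: int) -> float:
    """Response size divided by request size for a given peer count."""
    return (RESPONSE_HEADER + BYTES_PER_PEER * num_peers) / ANNOUNCE_REQUEST

print(round(amplification(200), 1))  # a 200-peer reply is ~12.4x the request
```

Whether this is exploitable in practice depends on the handshake mentioned above, since a spoofed source must first obtain a valid connection ID.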


Countdown to the domain being seized. You'd think there would be some alternative to DNS by now for the pirate crowd.


It's been around for 10 years and doesn't distribute anything you can't find elsewhere; I don't think it's going anywhere.


BitTorrent does not require trackers. You can make trackerless torrents that use DHT for peers.


DHT does have a few DNS based bootstrap servers.


They can be operated by anyone; most torrent software lets you set the bootstrap servers, and rtorrent only requires connecting to a node that knows a bootstrap server.


And magnet links so you don't even need to download a .torrent anymore either.
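For the curious, a magnet link is just a URI carrying the infohash, plus optional name and tracker parameters. A sketch of building one (the infohash and tracker URL below are made up for illustration):

```python
from urllib.parse import quote

def magnet_link(infohash_hex: str, name: str, trackers=()):
    """Build a magnet URI: the hex infohash goes in xt=urn:btih:, the
    display name in dn=, and each tracker in its own tr= parameter."""
    uri = f"magnet:?xt=urn:btih:{infohash_hex}&dn={quote(name)}"
    for t in trackers:
        uri += "&tr=" + quote(t, safe="")
    return uri

link = magnet_link("c12fe1c06bba254a9dc9f519b335aa7c1367a88a",
                   "example.iso",
                   ["udp://tracker.openbittorrent.com:80/announce"])
print(link)
```

With only the infohash, a client can fetch the torrent metadata itself from the swarm via DHT, which is why no .torrent download is needed.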


There's OpenNIC but nobody really uses it.

https://www.opennic.org/


Another off-topic question: does this have anything to do with the TRON acquisition of BitTorrent?


Why are all these old posts being published here?


.com


What's your point?


I think he is implying that a .com domain would be easier for the US authorities to seize.



