Swarm vs. burst?

Technical discussion about the NMDC and ADC (http://dcpp.net/ADC.html) protocols. The NMDC protocol is documented in the Wiki (http://dcpp.net/wiki/), so feel free to refer to it.


galvanometer
Posts: 4
Joined: 2003-11-13 23:23

Swarm vs. burst?

Post by galvanometer » 2003-11-13 23:27

I think a lot of issues with p2p programs in general could be solved by overcompensating for a lack of data integrity by transmitting data in bursts, across several ports, for maximum transmission effectiveness. Of course, this is slower by far, since it's recursive: you have to keep on bursting until all data has been successfully transmitted. Swarming simply shoots the data into one opening, "swarming" it in, which in my opinion causes frequent corruption.
I've set up some examples of each, but need to upload 'em. I'll edit when /me is done.

distiller
Posts: 66
Joined: 2003-01-05 18:05
Location: Sweden

Post by distiller » 2003-11-15 02:57

That sounds like an insane idea. Why send the same data more than once? Bandwidth is wasted. Precious bandwidth!

TCP/IP has perfect control over packets; there's no need for additional error checking on the transfers.

The different sources should have hashed their files first so that only sources with the exact same file will be used in the swarming process. There are several threads about hashing in this forum.
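
To make that concrete, here is a rough Python sketch of grouping candidate sources by file hash so that only identical copies take part in swarming. The (nick, hash) source list format is an assumption for illustration, and real DC++ uses TTH rather than the SHA-1 used here.

    # Sketch: hash local files and group remote sources by their reported
    # hash, so only sources with the exact same file are swarmed from.
    import hashlib
    from collections import defaultdict

    def file_hash(path, chunk_size=64 * 1024):
        """Return the SHA-1 hex digest of a local file."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                h.update(chunk)
        return h.hexdigest()

    def group_sources_by_hash(sources):
        """sources: list of (nick, reported_hash) pairs.
        Returns the hash with the most sources and the nicks reporting it."""
        groups = defaultdict(list)
        for nick, reported in sources:
            groups[reported].append(nick)
        best_hash = max(groups, key=lambda k: len(groups[k]))
        return best_hash, groups[best_hash]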

GargoyleMT
DC++ Contributor
Posts: 3212
Joined: 2003-01-07 21:46
Location: .pa.us

Post by GargoyleMT » 2003-11-15 09:00

Galvanometer, mind sharing the background on why you're bringing this up now? I think that would help a lot.

Windrider
Posts: 4
Joined: 2003-01-03 10:55

Post by Windrider » 2003-11-18 17:16

The "burst" way to compensate for data integrity it's something designed to be used on very fast networks (multi GB), because on those networks it takes less time to send the data several times than to check for it's integrity using conventional methods ala CRC.
Btw swarming is not what you're reffering to, since a single point of data transmision isn exactly a swarm. A swarm is to use multiple data sources so you get the data from different places, a careful use of crc and hashing techniques avoids data corruption. Usually it's the lack of use of these methods what brings you corrupted files -like in dc-
I belive you've misunderstood some concepts.
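
As a rough illustration of that kind of swarm, here is a minimal Python sketch that pulls chunks from several sources and re-requests any chunk whose hash does not match. fetch_chunk() and the chunk_hashes list are made-up placeholders for this example, not a real client API.

    # Sketch: swarmed download with per-chunk hash verification.
    import hashlib
    import itertools

    def swarm_download(sources, chunk_hashes, fetch_chunk, chunk_size=256 * 1024):
        """Download len(chunk_hashes) chunks, rotating across sources and
        re-fetching any chunk whose SHA-1 does not match the expected value."""
        chunks = [None] * len(chunk_hashes)
        rotation = itertools.cycle(sources)
        for index, expected in enumerate(chunk_hashes):
            while chunks[index] is None:
                source = next(rotation)
                data = fetch_chunk(source, index, chunk_size)
                if hashlib.sha1(data).hexdigest() == expected:
                    chunks[index] = data  # verified, keep it
                # otherwise the corrupted chunk is simply fetched again
        return b"".join(chunks)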

blobby
Posts: 3
Joined: 2003-11-29 10:30

Post by blobby » 2003-11-29 10:36

What I'd like to see is some kind of UDP-based transfer.

TCP/IP does guarantee delivery of packets, but it's precisely this that creates a large drag factor, which gets worse the higher the latency between A and B.

What about banging off files in UDP packets, doing some custom hash checking and re-requesting any packets that got lost on the way?
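
Something like the toy Python sketch below: each datagram carries a sequence number, and the receiver reports which numbers never arrived so they can be re-sent. The 4-byte header and the re-request scheme are invented for this example, not any existing protocol.

    # Sketch: numbered UDP chunks, receiver tracks what is missing.
    import socket
    import struct

    CHUNK = 1024               # payload bytes per datagram
    HDR = struct.Struct("!I")  # 4-byte big-endian sequence number

    def send_file(data, sock, addr, seq_numbers=None):
        """Send the requested chunks (all of them on the first pass)."""
        total = (len(data) + CHUNK - 1) // CHUNK
        for seq in (seq_numbers if seq_numbers is not None else range(total)):
            payload = data[seq * CHUNK:(seq + 1) * CHUNK]
            sock.sendto(HDR.pack(seq) + payload, addr)

    def receive_file(sock, total_chunks):
        """Collect chunks until a timeout; return what arrived plus the
        missing sequence numbers so the caller can re-request them."""
        received = {}
        sock.settimeout(0.5)
        try:
            while len(received) < total_chunks:
                packet, _ = sock.recvfrom(HDR.size + CHUNK)
                (seq,) = HDR.unpack(packet[:HDR.size])
                received[seq] = packet[HDR.size:]
        except socket.timeout:
            pass  # whatever is still missing gets re-requested by the caller
        missing = [s for s in range(total_chunks) if s not in received]
        return received, missing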

distiller
Posts: 66
Joined: 2003-01-05 18:05
Location: Sweden

Post by distiller » 2003-12-08 18:30

There's not really much to gain from using UDP over TCP - a couple of percent, 10% tops if all packets arrive intact, at least when I've been testing my connection. It's just not worth it. It would be like writing your own TCP/IP stack, and that would be painful to say the least. Just look at how well M$ did their first TCP/IP stack... :roll:

Get a better connection instead. :D
