Multistreaming
Animated 0wner
- Posts: 36
- Joined: 2003-04-09 04:04
Multistreaming
I ran a search, and was surprised that there were no results on this subject. I was wondering if it would be possible to include multistreaming into the next version of the client.
GargoyleMT
- DC++ Contributor
- Posts: 3212
- Joined: 2003-01-07 21:46
- Location: .pa.us
we have explosive
Streaming, at least to me, is something that you do with audio or video from the internet - listening/watching as you go. To me it also implies (in the case of audio) that the data is tossed after you listen to it. For video, it implies (to me) that the movie/clip will go poof after you close the viewer.
In addition to Volkris' suggestion, try "segmented downloading." Heck, "multi downloading" will probably bring up discussion as well.
Re: Multistreaming
Animated 0wner wrote: I ran a search, and was surprised that there were no results on this subject. I was wondering if it would be possible to include multistreaming into the next version of the client.
Do you mean using multiple TCP streams with the same src,dst pair to increase throughput?
Animated 0wner
- Posts: 36
- Joined: 2003-04-09 04:04
I've been thinking about this a lot recently. Here is my proposal on how to do it.
First, if this is enabled, all download slot control is handed over to the code. You will only be able to control priorities and the number of requested slots per file.
Download slots will be opened as long as they don't affect (a) upload speed and (b) download speed. If your overall download speed stays the same after you have opened a slot, no more slots will be opened. If opening a download slot adversely affects an upload speed, no more slots will be opened.
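To make the rule concrete, here is a minimal C++ sketch of the throughput test just described (all names, like AutoSlotController, are hypothetical; this is not DC++ code):
Code:
// Sketch of the proposed auto-slot heuristic; hypothetical names,
// not DC++ code. The caller samples overall speeds periodically.
struct Speeds {
    double download; // overall download speed, bytes/s
    double upload;   // overall upload speed, bytes/s
};

class AutoSlotController {
public:
    // Called after a newly opened download slot has had time to ramp up.
    // Returns true if it is OK to try opening yet another slot.
    bool mayOpenAnotherSlot(const Speeds& before, const Speeds& after) const {
        // If overall download speed stayed roughly the same, stop opening slots.
        if (after.download <= before.download * (1.0 + minGain_))
            return false;
        // If the new slot adversely affected uploads, stop as well.
        if (after.upload < before.upload * (1.0 - maxUploadLoss_))
            return false;
        return true;
    }

private:
    double minGain_ = 0.05;       // require at least a 5% download gain
    double maxUploadLoss_ = 0.05; // tolerate at most a 5% upload loss
};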
Priorities & slots in this situation.
The user can request another slot for a file, but if the system has maxed out on slots, it will pull the slot from the lowest-priority running transfer or from a user-selected transfer.
Also, what will need to be written is a tutorial on how to use this feature, explaining why these rules have been imposed and recommending that it only be enabled for users with faster connections (T1, T3, etc., not cable or DSL).
My connection right now is a t13? and it would help to have multi-transfer downloads.
And about streams: streams are UDP most of the time, and if the data is lost, it's lost, so when you capture a stream some data is lost. But calling multi-transfers "multi-streams" is also correct under the programming definition that a stream can be anything that provides data input or takes output.
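For what it's worth, here is a tiny C++ illustration of that programming definition - the same function reads from an in-memory string and from a file, because both are just input streams (the file name is made up):
Code:
#include <fstream>
#include <iostream>
#include <iterator>
#include <sstream>
#include <string>

// "Stream" in the programming sense: any source of data input.
// The same code reads from memory, a file, or (via wrappers) a socket.
std::string readAll(std::istream& in) {
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}

int main() {
    std::istringstream memory("bytes from anywhere");
    std::cout << readAll(memory) << '\n';

    std::ifstream file("example.txt"); // hypothetical file name
    if (file)
        std::cout << readAll(file).size() << " bytes read from file\n";
}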
I feel like I have forgotten something I wanted to say about multi-transfer downloads.
Animated 0wner
- Posts: 36
- Joined: 2003-04-09 04:04
I don't see why slots or bandwidth would be an issue here. If you're downloading a file from multiple people, then you're only getting a small portion of the file from any one person, which means you will have their slot for a shorter period of time than if you were downloading the whole file from them. It seems to me that it would work out for the better, because (A) most downloads would take less time to complete and (B) a user's slot wouldn't be taken for hours on end by the same user waiting for an entire file to download instead of just a small portion of it. If there's more to this than what I've stated, please tell me what I'm missing here.
Quote: How about limiting the ability to download a file from multiple people to around two or three total. I think even just that would be a big help.
If this feature makes it in, no such limitation will stick - don't plan for it.
Quote: First, if this is enabled, all download slot control is handed over to the code.
Same comment: this attitude doesn't mesh very well with reality, where people don't want their program artificially hampering them.
Quote: If your overall download speed stays the same after you have opened a slot, no more slots will be opened. If opening a download slot adversely affects an upload speed, no more slots will be opened.
The first idea should be at the discretion of the client - it can open them if it wants, however unproductive it may be - and the second, preventing download slots from interfering with upload speed, relies on the client acting against the (downloading) interests of its user, which is always dangerous.
Any such restrictions you want to implement should be on the uploader's side, which has an interest in not giving out too much of its bandwidth, rather than the downloader's side, which wants to suck up all the bandwidth and slots of others it can.
I have no objection to multisource downloading, but I'd like for DC to develop in a direction such that it assumes clients act in their own (direct) self-interest, and not rely on them to be nice to others, for the latter system is vulnerable to exploitation.
Smirnof100
- Posts: 19
- Joined: 2003-05-06 22:00
Here's a good idea for multisource downloading... it was mentioned by someone else before, but I have an improvement to it.
1. Every, say, 5 minutes, your Direct Connect client would take the fastest connection from the list of people you're downloading from and send that user a priority flag like a 1. All the other people you were downloading from would get a priority flag 2. Now let's say that you're downloading a file from 3 people, persons A, B and C, at rates of 5k, 4k, and 3k respectively. Person A would have a priority flag of 1; persons B and C would have a priority flag of 2. If person B got a priority 1 flag download request, then they would drop you, and you would only be getting your file from persons A and C at 8k. Now if person A got a request, then you would keep the slot, guaranteeing that you would be able to maintain the download. If person C got a request, then you could be dropped from them, but person A would be a "focused" slot. Makes sense?
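A minimal C++ sketch of that flag assignment (all names are hypothetical; no such flags exist in the actual DC protocol):
Code:
#include <algorithm>
#include <string>
#include <vector>

// Sketch of the proposed priority-flag scheme; hypothetical names,
// not part of the actual DC protocol.
struct Source {
    std::string nick;
    double speedKBps; // current download speed from this source
    int flag;         // 1 = focused (keep me), 2 = droppable
};

// Run every ~5 minutes: flag the fastest source 1, all others 2.
void assignFlags(std::vector<Source>& sources) {
    if (sources.empty())
        return;
    for (auto& s : sources)
        s.flag = 2;
    auto fastest = std::max_element(
        sources.begin(), sources.end(),
        [](const Source& a, const Source& b) { return a.speedKBps < b.speedKBps; });
    fastest->flag = 1;
}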
The disadvantages I can see are that if person A was the only one with the file, then person D with the request would have to wait... but he would have to wait anyway, and he would be waiting less time since my file was getting completed faster. This way we can all share faster, in theory, without destroying the slot system.
Also, I have never leeched, but I hear it's supposed to be bad (using clones to connect to 1 person more than once, using 2 slots to get 2 files at once). You could turn it into a good thing, though - prioritized leeching, so to speak - with someone getting 2 files at once on 2 slots, with 1 flagged at... say, level 3, so that multisourcers could get that extra slot if they needed it. And put a switch in it for control by the hub owners.
But I think multisourcing could work if everyone prioritized the usage of the slots.
Build a Man a Fire, and he will be Warm for a day.
Set a Man on Fire, and he will be Warm for the Rest of his Life.
This is precisely the type of suggestion I had hoped to avert with my previous post (read the last couple of paragraphs of that to see what I mean). Its fundamental flaw is that it relies on users to voluntarily relinquish slots, when they have no incentive to do so, and plenty of incentive to keep those slots.
Quote: And put a switch in it for control by the hub owners.
Hub owners have no inherent means of controlling client-client connections, nor should they. They can see only that one client would like to connect to another, and not what files it wishes to download, or in what manner it will do so. The hub owners, then, have absolutely no say in this, and with regard to multisource downloading are completely irrelevant.
Quote: But I think multisourcing could work if everyone prioritized the usage of the slots.
Um, right. I don't know about you, but I'm not going to use a client that gives up slots on my behalf.
HaArD
- Posts: 147
- Joined: 2003-01-04 02:20
- Location: Canada http://hub-link.sf.net
- Contact:
If we accept the fact that multi-source downloads will do the following:
1. Reduce the number of available slots in the short term (as 1 downloader can now occupy slots on many available uploaders)
2. Reduce the length of time that a downloader requires a slot (as the downloader is getting the file from multiple sources, it will complete sooner)
3. Regardless of initial implementations, once someone makes it work reliably in an Open Source Client, there WILL BE a mod that allows you to d/l from 500,000 people simultaneously
Then why not...
1. When an item is added to the queue, before the download starts, determine the number of "parts" - say, with a default value that can be changed (a la FlashGet); see the sketch after this list.
2. Allow the downloader to control how many concurrent download threads can run (already there, but might need some changes).
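A quick C++ sketch of step 1 (hypothetical names; not DC++ code): when the item is queued, split it into byte ranges that can later be fetched from different sources.
Code:
#include <cstdint>
#include <vector>

// One downloadable byte range of a queued file; hypothetical, not DC++ code.
struct Part {
    std::int64_t offset;
    std::int64_t length;
};

// Split a file into `parts` ranges when it is added to the queue,
// FlashGet-style. The last part absorbs any remainder.
std::vector<Part> splitIntoParts(std::int64_t fileSize, int parts) {
    std::vector<Part> out;
    if (fileSize <= 0 || parts <= 0)
        return out;
    std::int64_t base = fileSize / parts;
    std::int64_t offset = 0;
    for (int i = 0; i < parts; ++i) {
        std::int64_t len = (i == parts - 1) ? fileSize - offset : base;
        out.push_back({offset, len});
        offset += len;
    }
    return out;
}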
Like cologic said, clients can only be relied upon to act in their own downloading interest, any other approach is doomed to exploitation/failure.
What is the drawback to this? Fewer slots now, but happier downloaders and, in general, more availability of slots, as they aren't held for as long by any specific user.
HaArD
Smirnof100
- Posts: 19
- Joined: 2003-05-06 22:00
That's my point. My proposal allows for people still being able to get a slot from someone with a file, because they are entitled to at least 1 slot as long as the person has a secondary slot. And let's be honest... there are a lot of times when someone has no slots... my proposal is going to take up MORE slots in theory, but people will still have the same odds of getting a file, because they are guaranteed 1 slot by priority if someone else's slots are taken up by multisourcers.
It could work... it really could...
And as for the switch that hub owners could turn on and off... that was for the "leeching" proposal... that would allow 1 person to get more than 1 file using 2 slots until someone else wanted that slot either for multisourcing or for a download.
Build a Man a Fire, and he will be Warm for a day.
Set a Man on Fire, and he will be Warm for the Rest of his Life.
Quote: they are guaranteed 1 slot by priority if someone else's slots are taken up by multisourcers.
What mechanism guarantees them this?
Quote: It could work... it really could...
Sure, if people suddenly ceased being selfish.
Quote: And as for the switch that hub owners could turn on and off... that was for the "leeching" proposal... that would allow 1 person to get more than 1 file using 2 slots until someone else wanted that slot either for multisourcing or for a download.
What do the hub owners have to do with this? They don't affect client-client transactions in the slightest once they're initiated with the $ConnectToMe.
Smirnof100
- Posts: 19
- Joined: 2003-05-06 22:00
OK, scratch my idea. I figured that if it was hard-coded into the program, that would force people to abide by the priority flag rules; it would only work if it were with some form of NMDC... with DC++ being open source, those savvy programmers would be able to fake priority flags on all their different sources, so never mind - new plan.
Now, if people could be selfish AND patient, and be willing to share what they obtain, then the system would work better... but you're right... selfish people only bog down the system.
And the point of having a switch to turn features on and off is that some people would not mind having someone get more than 1 file at a time, and other people would hate it... So, just like everything else, it's all about hub options: if you have 30 gigs, you would join a 30-gig hub, so there are more files than in a 12-gig hub; if you didn't want anyone with a 56k modem connecting to you, you would join a broadband-only hub. If you don't want 1 person connected to you more than once, you would join a hub with the feature turned off; however, if you wanted to connect to someone for more than 1 file at once, then you would join a hub with that feature turned on... it's all about getting people with common interests together. THAT is why you would put in a switch to turn my proposed "leech" priority thingy (explained earlier) on or off, and that is why you would put in a switch to turn multistreaming capability on or off in a hub.
Build a Man a Fire, and he will be Warm for a day.
Set a Man on Fire, and he will be Warm for the Rest of his Life.
Quote: I figured that if it was hard-coded into the program, that would force people to abide by the priority flag rules... with DC++ being open source, those savvy programmers would be able to fake priority flags on all their different sources, so never mind - new plan.
Well, there are slot lockers and fake sharers for NMDC, and a hack to set the priority flag to something more favorable to the downloader isn't particularly more complex.
Quote: THAT is why you would put in a switch to turn my proposed "leech" priority thingy (explained earlier) on or off, and that is why you would put in a switch to turn multistreaming capability on or off in a hub.
Sure, go to hubs with other users who share your interests. However, that's a separate question from whether multisource downloading should be allowed: (1) it may not be possible to reliably distinguish between multisource and single-source downloads, and (2) even if it is, why allow the hub to decide rather than individual users? After all, it's their computer and internet connection.
Just a little thing I noticed... Smirnof100 - you do realize that no one is proposing to allow more than one connection from one user to another user, right? That is, just because multi-source downloading, multistreaming, segmented downloading or whatnot is added, it will still not be possible to connect to a user more than once? (Yes, yes, I know about the "one connection per hub you share with the user" bug, but it is a bug, AFAIK.)
Just hope I've managed to make this clear... or provoke a flame-war. In any case, good luck with the discussion, fellas!
Sarf
---
A running program is the moment of truth. All else is prophecy or nostalgia.
Hrm, a few things occurred to me to point out about these ideas.
If, for example, a 56K user were downloading a file from multiple sources, they might have, say, 3KB/s, 2KB/s, and 1KB/s downloads. The 3KB/s source will get the priority 1 flag there, and that's just about their full bandwidth with many phone lines (like mine was when I was on 56K). Now, suppose a DSL user was getting downloads of 60KB/s, 50KB/s, and 40KB/s, and the 50KB/s download is from the same person that the 56K user is downloading from. The 50KB/s download gets a priority 2 flag by your system. The 56K user gets cut off, and someone else jumps in the moment the slot opens and starts downloading; they get a priority 1 because it's their fastest download. The 56K user tries to download again after reconnecting and sends a priority 1 request, and the DSL user getting 50KB/s is now removed to let a modem user get at 3KB/s. In other words, your system needs a lot more checks and balances for sure.

Also, don't forget that if you do stuff like that, the jerks out there will make a modified client that only sends the absolute highest priority flags, and possibly some jerk will also make a client that treats 56K and lower users as always having a priority 1 flag.

Here's an idea of how to improve your system already, though: just have it prioritize by the average download speed. Then the people getting 150KB/s get a higher priority over the people getting 1.5KB/s automatically. That doesn't do anything for hacked clients, though. Such a thing would be handled on the "server" client's side, since there's no reason to be sending the speed when both sides see it, but someone could still modify the server quite easily to see what they want it to see.
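A C++ sketch of that averaging idea (hypothetical names; not any client's real code) - rank sources by a running average so a momentary dip doesn't dethrone a fast peer:
Code:
#include <cstddef>
#include <deque>
#include <numeric>

// Sketch of prioritizing by average rather than instantaneous speed.
// Hypothetical; not taken from any actual client.
class SpeedTracker {
public:
    void addSample(double kbps) {
        samples_.push_back(kbps);
        if (samples_.size() > kWindow)
            samples_.pop_front();
    }

    // Average over the sampling window; compare these across sources
    // and give the highest average the priority 1 flag.
    double average() const {
        if (samples_.empty())
            return 0.0;
        return std::accumulate(samples_.begin(), samples_.end(), 0.0) /
               static_cast<double>(samples_.size());
    }

private:
    static constexpr std::size_t kWindow = 60; // e.g. one sample every 5 s
    std::deque<double> samples_;
};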
BTW, BitTorrent uses some prioritizing type of stuff based on how much you are uploading (I think as a ratio of some kind to what you are downloading), but that is handled server-side if I'm not mistaken, so it wouldn't do here, as there is no "neutral" server but the hub, and it can't do that sort of stuff (besides, you've seen how people complain about hubs stealing their bandwidth, and that's with just a few very basic things; BT takes a lot of power).
Probably there is just no safe way to handle priorities and things of that sort with DC. Anything you do that lets some users in, but not others, is basically screaming for someone to modify them. Perhaps it could be made so well that it takes them a long time to do it, but, eventually, someone WILL modify them.
Anyway, I definitely agree with the whole point of multisource downloading. If you download the file faster, you truly do end up off those slots faster. Of course, there are more people getting in those slots this way, but overall it works out well enough that things like Kazaa are immensely popular despite it being so hard to find anything besides MP3s and porn (and when I said popular, I'm referring to finding other things; of course it's popular for the things people can get easily, that doesn't count d-: ). Someone was talking in another thread about how multisource will really be a simple matter once they get some kind of hashing system in place that would verify the chunks of the file, or something like that. There is a DC client that does in fact do multisource, but everyone says that it is so terribly buggy that right now it's just not worth trying. Hopefully some day it will be nice and stable, and when it is, if DC++ hasn't caught up or surpassed it, DC++ could just learn from it.
Smirnof100
- Posts: 19
- Joined: 2003-05-06 22:00
Yeah, but if you noticed the first sentence in my last post... I SCRATCHED my idea. That is, I have acknowledged that it would not work.
The point of prioritized multisourcing is to not totally trash the current slot system... but with DC++, which is open source, there is a guarantee that it will get trashed by others who modify the code.
As for allowing 1 person to connect twice (leeching), I figured it could be done at the lowest priority, because leeching trashes the slot system too, which is why no one likes it. But if it could be prioritized without being so easy to modify, it could work.
But like I said... there is no way my idea could work, because people are selfish, arrogant and greedy... and would modify the program to fuck up everyone else's shit... Figures that every time I come up with a good idea there's always someone to trash it for me :(
Build a Man a Fire, and he will be Warm for a day.
Set a Man on Fire, and he will be Warm for the Rest of his Life.
The problem is worse. Every time anyone comes up with a good idea, someone trashes it - ESPECIALLY if you make the idea free. If you charge for it, then there are fewer people screwing around with it.
Ah well, despite that, free software is still so wonderful. There's some crap out there, but then there's stuff like DC++ that makes it all worthwhile.
GargoyleMT
- DC++ Contributor
- Posts: 3212
- Joined: 2003-01-07 21:46
- Location: .pa.us
Well, actually it's important to have criticism of features.
The problem is that there aren't enough coders on the board to work on all of them, nor are we coders always working on DC++.
Anyone who is discouraged from a good idea by some negative comments or complications or potential for abuse really ought to have a little more faith in themselves, or to fight back and defend their idea.
(This is in general, just a response to Nazo's post, that's all.)
"Bad users" have managed to slot-lock NM DC as well as fakesharing with it... I don't think a closed source DLL will prove a problem, especially as the DLL would have to rely on DC++ to give it data about connections et cetera (unless you want to make 40% of the logic in DC++ part of that DLL).
Sarf
---
If you don't care where you are, then you ain't lost.
Multi-source downloading is simple and already implemented in DC++, plus it's a lot better than in any other P2P program I've ever used. KEEP THE FILES AS RARs. This has so many advantages over unpacked files (avi, bin/cue, etc.).
Implementing multi-sourcing of unpacked files (avi, bin/cue, etc.) is just going to raise the number of corrupt transfers and the resources needed to run DC++ - not to mention being a huge task for the developers. Take a look at all those other P2P programs, hogging resources, where you always hope you don't get a corrupt transfer. Corrupt transfers happen, but I'd rather have a 15, 20, or 50MB part transfer badly than a 700MB file. Plus, keeping files as RARs lets you multi-source with great file verification (SFV).
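For reference, SFV verification is just a CRC32 per file. A minimal sketch using zlib's crc32() (the .sfv parsing itself is left out):
Code:
#include <cstddef>
#include <cstdio>
#include <zlib.h>

// Compute the CRC32 of a file, which is what an .sfv line records,
// e.g. "archive.r01 3A5B1C7D". Uses zlib's crc32().
unsigned long fileCrc32(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return 0; // sketch only: a real client should report the error
    unsigned long crc = crc32(0L, Z_NULL, 0);
    unsigned char buf[16384];
    std::size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
        crc = crc32(crc, buf, static_cast<unsigned>(n));
    std::fclose(f);
    return crc;
}

bool sfvMatches(const char* path, unsigned long expectedCrc) {
    return fileCrc32(path) == expectedCrc;
}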
There's a reason why these files are packed to begin with. Other P2P programs are the epitome of bad releases because they can't handle RARs like DC++ can. I'd rather see the developers work on something I think they've already started on (implementing auto-SFV checking) than copy one of those other buggy and slow P2P programs. The unique things about DC++ are what make it so great.
Actually the hashing system would essentially make it a piece by piece download regardless of file type, if I remember correctly.
By the way, I haven't downloaded a RAR'ed video file that wasn't corrupt, including the video in games. I mean, forcibly busting the file into 58 pieces isn't exactly good for the color set, especially when most people compress those files on top of the compression built into the existing file (for example, AVI); most compressions don't like other compressions.
"Tomorrow sees undone, what happens not today. Indecision brings delays. Days lost lamenting lost days"
"I’m getting some kind of sick pleasure out of watching her squirm!?!........must be a perk!"
"I’m getting some kind of sick pleasure out of watching her squirm!?!........must be a perk!"
I'm not saying using it for archival does anything to the file; I'm saying breaking it into 50 pieces and/or compressing it is what gets the video.
Most people are under the impression that compressing a file won't damage it in any way, but in reality even the slightest compression damages files; most files are not designed to be compressed and thus suffer damage when compressed.
"Tomorrow sees undone, what happens not today. Indecision brings delays. Days lost lamenting lost days"
"I’m getting some kind of sick pleasure out of watching her squirm!?!........must be a perk!"
"I’m getting some kind of sick pleasure out of watching her squirm!?!........must be a perk!"
I'm not sure I understand what you mean - lossy compression can only be used for audio, video and still images, and those each have their own defined format. For everything else, lossless compression must be used; otherwise we'd be in deep trouble (a single byte changed in an executable can render the whole program unusable). When you compress something with 7Z, RAR, ZIP, ACE, ARJ, etc., you'll get a byte-identical file back when you uncompress it. If the file is already compressed (e.g. video), you won't damage it by compressing it into a RAR; you just won't be able to compress it much.
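This is easy to demonstrate with zlib, for instance - a minimal sketch that compresses a buffer and gets byte-identical data back:
Code:
#include <cassert>
#include <cstring>
#include <vector>
#include <zlib.h>

// Lossless round trip: compress() then uncompress() returns
// byte-identical data, every time.
int main() {
    const char* text = "the same bytes come back, every time";
    uLong srcLen = static_cast<uLong>(std::strlen(text)) + 1;

    // Compress into a buffer sized by zlib's worst-case bound.
    std::vector<Bytef> packed(compressBound(srcLen));
    uLongf packedLen = static_cast<uLongf>(packed.size());
    int rc = compress(packed.data(), &packedLen,
                      reinterpret_cast<const Bytef*>(text), srcLen);
    assert(rc == Z_OK);

    // Decompress and compare byte for byte.
    std::vector<Bytef> unpacked(srcLen);
    uLongf unpackedLen = srcLen;
    rc = uncompress(unpacked.data(), &unpackedLen, packed.data(), packedLen);
    assert(rc == Z_OK);
    assert(unpackedLen == srcLen);
    assert(std::memcmp(unpacked.data(), text, srcLen) == 0); // identical
    return 0;
}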
The reason for packing video and bin/cue files is not compression - it's only a few MB gained, it takes a long time to compress/decompress, and it adds the chance of corruption if not tested correctly. The main reason is error checking and, in Direct Connect's case, multi-sourcing. But a proper SFV file MUST be in the same directory for it to work right. Virtually every file that comes from the 'scene' (0-day, DivX, etc.) is packed uncompressed, or else it doesn't get released, because of the chance of a corrupt transfer. Probably 95% or more of these larger files come from the scene already packed properly; users should just leave them as-is on Direct Connect. There are so many advantages to it, and no disadvantages I can think of, other than that other P2P programs can't handle packed files as well as Direct Connect.
Slicer, you might want to look into how you're unpacking or who you are getting these packed files from. I've never had a corrupt packed file in the past few years. I see a lot more corrupt transfers of avi and bin/cue files than I'd like, and the only way to fix them is to download the whole thing again - what a waste of time. All it takes is one byte out of 700,000,000+ to transfer badly and it's worthless; is it really worth the risk?
In my opinion, hashing is a joke when it comes to P2P programs; it adds a lot more of a chance for a corrupt transfer, not to mention the resources and bandwidth it takes. It's also a big task for the developers, and getting the hashing virtually bug-free is very complicated, if not impossible.
I suppose a discussion like this doesn't belong in a feature request section; it should be in a section like 'using the features that are already implemented to your advantage'.
Ryan O'Connell
- Posts: 6
- Joined: 2003-03-30 18:18
COLYPTiC wrote: The reason for packing video and bin/cue files is (...), it takes a long time to compress/decompress and adds the chance of corruption if not tested correctly.
(...)
In my opinion, hashing is a joke when it comes to P2P programs; it adds a lot more of a chance for a corrupt transfer, not to mention the resources and bandwidth it takes.
Why do you think that hashing and/or compression increases the chance of file corruption? And bandwidth... if you want your data correct, you have to pay for it.
COLYPTiC wrote: It's also a big task for the developers, and getting the hashing virtually bug-free is very complicated, if not impossible.
Every reasonable protocol should have some sort of transferred-data error correction. They teach me at the technical university that it is the second (of 7) layers when designing a protocol.
Or maybe I've misunderstood you?
OLDoMiNiON
- Posts: 202
- Joined: 2003-01-06 06:22
- Location: Salford, England.
- Contact:
Yeah, you're almost correct Ryan.
Isn't it the Transport layer that's responsible for error correction (layer 4)?
...and not all protocols have error correction.
COLYPTiC - obviously, you have failed to realise that any transfer protocol used here will have error correction... it would be insane not to!
I keep reading about "hashing" and how it seems to be a requirement for segmented downloading (or at least a major part), except I don't see the need for this. Can hashing please be explained more, or can I be pointed to where to read about it?
The main problem I have with it is that, as far as I understand, it would require both clients to have this feature, and if that is the case, it would only help a very small population.
I'm interested to see how other programs with segmented downloading, such as Download Accelerator or GetRight, verify the file. They work perfectly fine, at least in my uses of them. Has anybody ever had problems with them?
I think a simple approach is just verifying a rollback amount on each segment. And any kind of prioritizing would be a waste of programming effort; my reason for this is that other DC clients that do such segmented downloading don't add such complexities to it, and they seem to fit fine in the community without being an obvious problem to the slot system.
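A C++ sketch of that rollback idea (hypothetical; not lifted from any client): re-request the last chunk you already have from the source and compare before appending.
Code:
#include <cstring>
#include <vector>

// Rollback verification on resume: re-download the tail of what we
// already have and compare it with what the source sends now.
// A match is only probabilistic evidence the remote file is the same.
// Hypothetical sketch; not taken from any client's code.
bool rollbackMatches(const std::vector<unsigned char>& haveTail,
                     const std::vector<unsigned char>& sourceTail) {
    if (haveTail.size() != sourceTail.size())
        return false;
    return haveTail.empty() ||
           std::memcmp(haveTail.data(), sourceTail.data(),
                       haveTail.size()) == 0;
}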
GargoyleMT
- DC++ Contributor
- Posts: 3212
- Joined: 2003-01-07 21:46
- Location: .pa.us
Qbert wrote: I keep reading about "hashing" and how it seems to be a requirement for segmented downloading (or at least a major part), except I don't see the need for this.
Hashing would let you know for sure that two files are identical without having to get overlapping portions of them. It would also let you detect (for Tiger Tree Hashes [TTH]) when you got a bad block from a client. Currently, DC++ resumes a file by getting an overlapping portion of the file.
The trick without hashes is to figure out how to get multiple portions of the "same" file from different users, and be able to eliminate all the bytes from one of the sources, should it turn out to be incompatible. Sidenote: users probably wouldn't be happy about the wasted bandwidth.
Also this is complicated by the fact that there's no easy way to get just segments of a file (without disconnecting). So this really also depends on GetZBlock or GetBlock code.
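To illustrate the tree-hash idea: hash fixed-size blocks as leaves, then combine pairs upward. In this sketch std::hash stands in for the real Tiger algorithm, and real TTH also marks leaf and inner nodes differently, so this shows the shape only:
Code:
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Shape of a tree hash. std::hash is a stand-in for Tiger; real TTH
// also prefixes leaf and inner nodes differently. Illustration only.
using HashVal = std::uint64_t;

HashVal leafHash(const std::string& block) {
    return std::hash<std::string>{}(block);
}

HashVal combine(HashVal left, HashVal right) {
    return std::hash<std::string>{}(std::to_string(left) + ':' +
                                    std::to_string(right));
}

// Reduce per-block leaf hashes to a single root. Equal roots mean equal
// files; a mismatching leaf identifies exactly which block was bad.
HashVal rootHash(std::vector<HashVal> level) {
    while (level.size() > 1) {
        std::vector<HashVal> next;
        for (std::size_t i = 0; i < level.size(); i += 2) {
            if (i + 1 < level.size())
                next.push_back(combine(level[i], level[i + 1]));
            else
                next.push_back(level[i]); // odd node is promoted as-is
        }
        level = std::move(next);
    }
    return level.empty() ? 0 : level.front();
}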
Ryan O'Connell
- Posts: 6
- Joined: 2003-03-30 18:18
GargoyleMT: there is no such thing as wasted bandwidth when it is about file correctness. I'd rather waste 4 kB every reconnect than download an .ISO once again.
OLDoMiNiON: you may be right about the layer number. However, not every protocol has error correction, but every time you design something new, you should think about error correction.
OLDoMiNiON
- Posts: 202
- Joined: 2003-01-06 06:22
- Location: Salford, England.
- Contact:
Oh, yes, of course Ryan.
I was simply referring to the fact that it's not a requirement for a protocol. X.25 (routing/routed? I can't remember, heh), I think, is an example of a protocol that has little or no error correction, thus rendering it useless in the current day and age, in the shadow of protocols such as TCP/IP, which is redundant, i.e. it was designed to be error-proof, and has succeeded!
GargoyleMT
- DC++ Contributor
- Posts: 3212
- Joined: 2003-01-07 21:46
- Location: .pa.us
Ryan O'Connell wrote: GargoyleMT: there is no such thing as wasted bandwidth when it is about file correctness. I'd rather waste 4 kB every reconnect than download an .ISO once again.
I was thinking of wastes on a slightly larger scale than 4kbytes.
Specifically, if one segmented a file in half in the current system, the only time you'd know whether the second source was compatible with the first one would be halfway through the file, when the first segment overlaps the second one. So you'd potentially waste a whole lot of bandwidth (unless your second source was really slow).
Ryan O'Connell
- Posts: 6
- Joined: 2003-03-30 18:18
Slots are not a problem.
Lack of slots is quite a big excuse. There are other DC clients that support this; why not DC++?
I understood that DC++ is not supposed to protect hub keepers or anything; it's just supposed to make client users' day easier. Sure, I can switch to the Linux-side client, and that is not a problem. I've heard that the hashing (to find out if the other file is the same) is the only _real_ problem here - and yes, lazy coders too.