simple solution to a minor problem..

Archived discussion about features (predating the use of Bugzilla as a bug and feature tracker)

Moderator: Moderators

Locked
rusium
Posts: 3
Joined: 2003-06-08 20:53

simple solution to a minor problem..

Post by rusium » 2003-06-08 21:01

OK, I don't know why this hasn't been thought of yet, but I often download multiple instances of the same file, because connection speeds obviously vary from user to user, and I find it a pain to specify a unique file name for each instance of the file I'm downloading. I think it should at *least* be optional for DC++ to automatically add a suffix to duplicate files, similar to Kazaa. Say you want to download 10 copies of somebigmovie.avi: you obviously can't predict which one is going to be fastest, so you download all 10 copies to find out, and you have to specify a unique filename for each one. How hard would it be to add a suffix like (1), (2), (3), or even a random integer, if the file is already queued under that name? An option to add a suffix automatically would be a lot more practical, IMO.

-Dan
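
For illustration, the suffix scheme described above is easy to sketch. The snippet below is not DC++'s actual queue code; the function name and the set of already-queued targets are invented purely to show the idea of appending " (1)", " (2)", ... before the extension until the name is free.

Code:

    #include <set>
    #include <sstream>
    #include <string>

    // Return a target name not already in the queue by appending
    // " (1)", " (2)", ... before the extension, e.g. "somebigmovie (1).avi".
    std::string makeUniqueTarget(const std::string& name,
                                 const std::set<std::string>& queued)
    {
        if (queued.count(name) == 0)
            return name;

        std::string::size_type dot = name.rfind('.');
        std::string base = (dot == std::string::npos) ? name : name.substr(0, dot);
        std::string ext  = (dot == std::string::npos) ? ""   : name.substr(dot);

        for (int i = 1; ; ++i) {
            std::ostringstream oss;
            oss << base << " (" << i << ")" << ext;
            if (queued.count(oss.str()) == 0)
                return oss.str();
        }
    }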

Gratch06
Posts: 141
Joined: 2003-05-25 01:48
Location: USA

Post by Gratch06 » 2003-06-08 22:31

DC++ is set up to add that as an alternate source for the already enqueued file, and I think that's really the superior way of doing things. It makes it harder to mass leech... who wants someone grabbing a file from them at 10K/s just to not use it? That causes two things: 1) wasted bandwidth for everyone you don't end up taking the file from, and 2) you take up 10 slots throughout the hub that are now unavailable to other users. I'd prefer my bandwidth actually went to something good, like someone who wants my file. You can always disconnect and try another alternate if the DL is going slowly. Patience IS a virtue :)

- Gratch

rusium
Posts: 3
Joined: 2003-06-08 20:53

.

Post by rusium » 2003-06-09 11:22

It's really just a practical convenience; it doesn't let you leech any more than you already can.

Wisp
Posts: 218
Joined: 2003-04-01 10:58

Re: .

Post by Wisp » 2003-06-09 17:49

Just download the file list first; that gives you a good idea of the user's speed (and you don't end up with many unfinished files).

jjulmajuha
Posts: 4
Joined: 2003-06-10 06:44

Post by jjulmajuha » 2003-06-10 07:03

I download a lot of large movie files overnight, and I usually have from a dozen to ~30 alternate sources for a file in the download queue. Would it be possible, if a download resumes from an alternate source at a "speed" of 1 kB/s, to _automatically_ remove that source and resume from the next one, hoping that's faster (it probably is)? My max download is 25 kB/s, so I'd want to set the minimum to something like 5 kB/s.

I sleep at night :) In the daytime I do this manually.

jjjkkk
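
A rough sketch of what that check might look like, purely to illustrate the idea; none of the names below are real DC++ code, and the periodic-timer hookup is an assumption.

Code:

    // Drop the current source if, after a short grace period, its average
    // speed stays below a user-set minimum; the caller then moves on to
    // the next alternate source.  All names are invented for illustration.
    struct TransferStats {
        double bytesReceived;    // since this source connected
        double secondsElapsed;
    };

    double averageSpeed(const TransferStats& s)
    {
        return s.secondsElapsed > 0.0 ? s.bytesReceived / s.secondsElapsed : 0.0;
    }

    bool shouldDropSource(const TransferStats& s,
                          double minBytesPerSec,   // e.g. 5 * 1024 for 5 kB/s
                          double graceSeconds)     // e.g. 30.0
    {
        return s.secondsElapsed >= graceSeconds
            && averageSpeed(s) < minBytesPerSec;
    }

    // Called from a periodic timer (hypothetical hook):
    //   if (shouldDropSource(stats, 5 * 1024, 30.0)) {
    //       removeCurrentSource();    // hypothetical
    //       connectNextAlternate();   // hypothetical
    //   }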

Nazo
Posts: 68
Joined: 2003-04-03 14:35

Post by Nazo » 2003-06-10 23:35

The thing that has always bugged me about that automatic-removal idea is that, for one, the slow source could turn out to be the only one that ever actually works, or even the best one (I've seen far worse, usually from "T3" users). Another thing to consider is that once you remove a source, it can and probably will take a good while to find it again, so if you can't connect to another source, or the next one is no better, you've actually lost more time than you gained. Finally, the thing that bugs me most about the idea is that the moment you give up a slot, 99% of the time you can count on spending a _LONG_ time trying to get back in, even if you manually re-add the source and try to make it connect immediately.

If someone could figure out how to make multisource downloading work reliably (emphasis on reliably), then none of this would be an issue.

Anyway, my only guess at how one might build such a system is some kind of temporary buffering where you try to start the download over one, maybe even more, separate connections, and if the speed is better, drop the first and keep the faster one. That way you could at least somewhat guarantee that you can actually get into another slot before dropping the first, and that you actually want to. Of course, the speed might drop on the second one or pick up on the first; there's just no way to tell.
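
A rough sketch of that "start a second connection, keep the faster one" idea; again, nothing here is real DC++ API, and every name is invented for illustration.

Code:

    // After both connections have run for a sampling window, keep the faster
    // one and drop the other so its slot goes back to the other user.
    struct Connection {
        double bytesPerSec;   // measured over the sampling window
        bool   active;
    };

    Connection* keepFaster(Connection* current, Connection* trial)
    {
        Connection* keep = (trial->bytesPerSec > current->bytesPerSec)
                               ? trial : current;
        Connection* drop = (keep == current) ? trial : current;
        drop->active = false;   // i.e. disconnect, freeing the remote slot
        return keep;
    }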

Gratch06
Posts: 141
Joined: 2003-05-25 01:48
Location: USA

Post by Gratch06 » 2003-06-11 00:09

Nazo wrote: Anyway, my only guess at how one might build such a system is some kind of temporary buffering where you try to start the download over one, maybe even more, separate connections, and if the speed is better, drop the first and keep the faster one.
Oh boy! Can we, can we! I would LOVE to see all of my bandwidth go to users who won't even be finishing the file from me! :D

Sure, this would be a nice feature, but manual handling does fine for it. I really think you'd struggle to find a way to implement it that would be fair to the other users and still be effective. Show me the code.

- Gratch

GargoyleMT
DC++ Contributor
Posts: 3212
Joined: 2003-01-07 21:46
Location: .pa.us

Post by GargoyleMT » 2003-06-13 14:39

Gratch06 wrote: Sure, this would be a nice feature, but manual handling does fine for it. I really think you'd struggle to find a way to implement it that would be fair to the other users and still be effective. Show me the code.
Although Nazo's dig was at pDC++, I think your comment is right on. The idea he had is possible, but it might be really tricky to get working properly. It might also add a lot of complication to the DC++ source... However, the best place to test out the idea is in pDC++ or a similar modified client.

Locked