The search system should include some kind of timestamp (this needs to be
taken into account in both the binary protocol and the text protocol). I have
noticed odd behaviour on hubs/bots when a client has a heavily loaded
connection to the hub.
Let's say the client is configured to avoid sending search queries too close
together (a 15 s delay between each), and the hub is configured to take some
action (warn/kick or anything else) if the delay between two queries is
smaller than 10 s.
T(0)    => search query 1 emitted.
T(0..6) => network congestion (the window may even start before T(0)),
           or heavy packet loss causing a lot of retransmissions.
T(15)   => search query 2 emitted.

Technically the client has waited 15 seconds, but on the hub you will see:

T(x)  => search query 1 received (x >= 6 because of the network problem).
T(15) => search query 2 received.

So for the hub, the delay between the two queries is 15 - x (<= 9) seconds,
below its 10 s threshold, even though the client behaved correctly.
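To make the arithmetic concrete, here is a small sketch (plain Python, with
the constants taken from the example above; the variable names are mine)
showing how a compliant client still trips the hub's arrival-time check:

```python
# Timeline from the example: the client honours a 15 s minimum delay
# between searches, but congestion delays the first query by 6 s, so the
# hub measures a much smaller gap between arrivals.

CLIENT_MIN_DELAY = 15   # client: seconds it waits between its own searches
HUB_MIN_DELAY = 10      # hub: warn/kick if queries arrive closer than this
CONGESTION_DELAY = 6    # how long query 1 is stuck in the congested link

send_times = [0, CLIENT_MIN_DELAY]                  # T(0), T(15)
arrival_times = [send_times[0] + CONGESTION_DELAY,  # query 1 delayed to T(6)
                 send_times[1]]                     # query 2 arrives on time

client_gap = send_times[1] - send_times[0]          # what the client waited
hub_gap = arrival_times[1] - arrival_times[0]       # what the hub measures

print(f"client gap: {client_gap} s, hub gap: {hub_gap} s")
print("hub would punish the client:", hub_gap < HUB_MIN_DELAY)
```

Here the client gap is 15 s but the hub gap is only 9 s, so the hub punishes
a client that never violated its own limit.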
I have experienced such cases on very big hubs and also on heavily loaded
connections with traffic shaping or QoS. My absolute "record" is a hub with a
minimum delay of 15 seconds punishing a client whose minimum delay was set to
120 seconds.
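One way the timestamp idea could work on the hub side: rate-limit on the
difference between consecutive client-supplied timestamps rather than on
arrival times. The sketch below is only an illustration of that idea, not
actual NMDC/ADC syntax; the `ts` field, the `SearchLimiter` class, and the
sanity check against arrival time are all my own assumptions (without such a
check a client could forge arbitrarily large gaps):

```python
# Hub-side flood check that trusts a client-supplied send timestamp ('ts',
# hypothetical field) instead of the packet arrival time, so network delay
# on the first query no longer shrinks the measured gap.

HUB_MIN_DELAY = 10  # seconds between searches the hub requires

class SearchLimiter:
    def __init__(self):
        self.last_ts = None  # timestamp of the last accepted search

    def allow(self, ts, arrival):
        # Basic sanity check: reject timestamps claiming to be from the
        # future relative to the arrival time.
        if ts > arrival:
            return False
        ok = self.last_ts is None or (ts - self.last_ts) >= HUB_MIN_DELAY
        if ok:
            self.last_ts = ts
        return ok

limiter = SearchLimiter()
print(limiter.allow(ts=0, arrival=6))    # query 1: sent at T(0), delayed to T(6)
print(limiter.allow(ts=15, arrival=15))  # query 2: 15 s claimed gap, accepted
```

With this scheme the scenario above is judged on the 15 s send gap, not the
9 s arrival gap, so the compliant client is no longer punished.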