Package net.i2p.router.tunnel
Class BatchedPreprocessor
java.lang.Object
net.i2p.router.tunnel.TrivialPreprocessor
net.i2p.router.tunnel.BatchedPreprocessor
- All Implemented Interfaces:
TunnelGateway.QueuePreprocessor
- Direct Known Subclasses:
BatchedRouterPreprocessor
Batching preprocessor that briefly delays the sending of a message
if it does not fill a full tunnel message, queueing up an additional
flush task in that case. This is a very simple threshold algorithm:
as soon as there is enough data for a full tunnel message, it is sent. If
there still isn't enough data after the delay, whatever is available is
sent and padded.
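To make the threshold behavior concrete, here is a minimal, self-contained
sketch of the idea, not the actual BatchedPreprocessor code; the FULL_SIZE
and FLUSH_DELAY constants, the queue type, and the timer-based flush
scheduling are illustrative assumptions.

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.Timer;
    import java.util.TimerTask;

    // Illustrative threshold batcher, NOT the real BatchedPreprocessor:
    // send as soon as a full tunnel message worth of data is queued,
    // otherwise schedule a delayed flush that pads whatever is pending.
    class ThresholdBatcherSketch {
        private static final int FULL_SIZE = 1003;   // assumed usable payload per tunnel message
        private static final long FLUSH_DELAY = 100; // assumed flush delay, in milliseconds

        private final Queue<byte[]> _pending = new ArrayDeque<byte[]>();
        private int _pendingBytes;
        private final Timer _timer = new Timer(true);
        private TimerTask _flushTask;

        synchronized void offer(byte[] fragment) {
            _pending.add(fragment);
            _pendingBytes += fragment.length;
            if (_pendingBytes >= FULL_SIZE) {
                sendFullMessages();          // threshold reached: send immediately
            } else if (_flushTask == null) { // otherwise wait briefly for more data
                _flushTask = new TimerTask() {
                    public void run() { flushWithPadding(); }
                };
                _timer.schedule(_flushTask, FLUSH_DELAY);
            }
        }

        // Stubs: the real preprocessor builds tunnel messages and hands them
        // to a sender/receiver pair; padding fills out a partial message.
        private synchronized void sendFullMessages() { /* ... */ }
        private synchronized void flushWithPadding() { _flushTask = null; /* ... */ }
    }

The real class operates on PendingGatewayMessage fragments rather than raw
byte arrays, but the decision structure is the same: send at the threshold,
otherwise flush after a short delay and pad.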
As explained in the tunnel document, the preprocessor has a lot of
potential flexibility in delay, padding, or even reordering.
We keep things relatively simple for now.
However, much of the efficiency results from clients selecting
the correct MTU in the streaming lib, such that the maximum-size
streaming lib message fits in an integral number of tunnel messages.
See ConnectionOptions in the streaming lib for details.
Aside from the obvious goals of minimizing delay and padding, we also
want to minimize the number of tunnel messages a message occupies,
to minimize the impact of a router dropping a tunnel message.
So there is some benefit in starting a message in a new tunnel message,
especially if doing so makes it fit perfectly (a 964 or 1956 byte
message, for example).
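As a rough, back-of-the-envelope illustration of where sizes like 964 and
1956 could come from (my own accounting under assumed per-fragment
overheads, not figures stated by this class): a 1024-byte tunnel message
loses a 16-byte IV, a checksum, a padding terminator, and the delivery
instructions for each fragment, and follow-on fragments carry much smaller
instructions than first fragments.

    // Hedged arithmetic sketch; all overhead figures below are assumptions
    // for illustration, not constants taken from BatchedPreprocessor.
    public class TunnelMessageSizeSketch {
        static final int TUNNEL_MSG = 1024; // PREPROCESSED_SIZE
        static final int IV = 16;           // IV_SIZE
        static final int CHECKSUM = 4;      // assumed checksum bytes
        static final int TERMINATOR = 1;    // assumed zero byte ending the padding
        static final int PAYLOAD = TUNNEL_MSG - IV - CHECKSUM - TERMINATOR; // ~1003

        public static void main(String[] args) {
            int firstFragInstr = 43; // assumed: flag + tunnel id + to-hash + message id + size
            int followOnInstr = 7;   // assumed: flag + message id + size
            // Unfragmented delivery needs no message id, so it saves 4 bytes:
            int oneMessage = PAYLOAD - (firstFragInstr - 4);
            int twoMessages = (PAYLOAD - firstFragInstr) + (PAYLOAD - followOnInstr);
            System.out.println("fills one tunnel message exactly:  " + oneMessage);  // 964
            System.out.println("fills two tunnel messages exactly: " + twoMessages); // 1956
        }
    }

Under those assumptions the arithmetic lands on the 964 and 1956 byte
figures quoted above, which is also why the streaming MTU choice mentioned
earlier matters: a maximum-size streaming message near one of these
boundaries wastes little space on padding.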
An idea for the future...
If we are in the middle of a tunnel msg and starting a new i2np msg,
and this one won't fit,
let's look to see if we have something that would fit instead
by reordering:
if (allocated > 0 && msg.getFragment == 0) {
    for (j = i + 1; j < pending.size(); j++) {
        if it will fit and it is fragment 0 {
            msg = pending.remove(j)
            pending.add(0, msg)
        }
    }
}
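A slightly fuller, compilable sketch of that reordering idea in plain Java.
The PendingMsg interface and its isFirstFragment()/sizeInTunnelMessage()
methods are hypothetical placeholders rather than the real
PendingGatewayMessage API; only the scan-and-promote logic is the point.

    import java.util.List;

    // Hypothetical scan-and-promote reordering; the helper interface is a placeholder.
    class ReorderSketch {
        // Minimal stand-in for the pending-message API assumed by this sketch.
        interface PendingMsg {
            boolean isFirstFragment();  // nothing sent yet for this message
            int sizeInTunnelMessage();  // bytes it would occupy, instructions included
        }

        // If the message at index i will not fit in the space remaining in the
        // current tunnel message, look further down the queue for an unstarted
        // message that would fit, and move that one to the front instead.
        static <T extends PendingMsg> void promoteFittingMessage(List<T> pending, int i, int remaining) {
            T msg = pending.get(i);
            if (!msg.isFirstFragment() || msg.sizeInTunnelMessage() <= remaining)
                return; // already started, or it fits anyway: nothing to reorder
            for (int j = i + 1; j < pending.size(); j++) {
                T candidate = pending.get(j);
                if (candidate.isFirstFragment() && candidate.sizeInTunnelMessage() <= remaining) {
                    pending.add(0, pending.remove(j)); // promote the one that fits
                    return;
                }
            }
        }
    }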
Field Summary
Fields inherited from class net.i2p.router.tunnel.TrivialPreprocessor
_context, _dataCache, _log, IV_SIZE, PREPROCESSED_SIZE
Constructor Summary
Method Summary
long getDelayAmount()
    how long do we want to wait before flushing
protected long getSendDelay()
    Wait up to this long before sending (flushing) a small tunnel message
    Warning - overridden in BatchedRouterPreprocessor
boolean preprocessQueue(List<PendingGatewayMessage> pending, TunnelGateway.Sender sender, TunnelGateway.Receiver rec)
    Return true if there were messages remaining, and we should queue up a delayed flush to clear them
    NOTE: Unused here, see BatchedPreprocessor override, super is not called.
protected void send(List<PendingGatewayMessage> pending, int startAt, int sendThrough, TunnelGateway.Sender sender, TunnelGateway.Receiver rec)
    Preprocess the messages from the pending list, grouping items startAt through sendThrough (though only part of the last one may be fully sent), delivering them through the sender/receiver.
Methods inherited from class net.i2p.router.tunnel.TrivialPreprocessor
getInstructionAugmentationSize, getInstructionsSize, notePreprocessing, preprocess, writeFirstFragment, writeSubsequentFragment
Field Details
DEFAULT_DELAY
static long DEFAULT_DELAY
Constructor Details
BatchedPreprocessor
Method Details
getSendDelay
protected long getSendDelay()
    Wait up to this long before sending (flushing) a small tunnel message
    Warning - overridden in BatchedRouterPreprocessor
getDelayAmount
public long getDelayAmount()
    how long do we want to wait before flushing
- Specified by:
    getDelayAmount in interface TunnelGateway.QueuePreprocessor
- Overrides:
    getDelayAmount in class TrivialPreprocessor
preprocessQueue
public boolean preprocessQueue(List<PendingGatewayMessage> pending, TunnelGateway.Sender sender, TunnelGateway.Receiver rec)
    Description copied from class: TrivialPreprocessor
    Return true if there were messages remaining, and we should queue up a delayed flush to clear them.
    NOTE: Unused here, see BatchedPreprocessor override, super is not called.
- Specified by:
    preprocessQueue in interface TunnelGateway.QueuePreprocessor
- Overrides:
    preprocessQueue in class TrivialPreprocessor
- Parameters:
    pending - list of Pending objects for messages either unsent or partly sent. This list should be updated with any values removed (the preprocessor owns the lock). Messages are not removed from the list until actually sent. The status of unsent and partially-sent messages is stored in the Pending structure.
- Returns:
    true if we should delay before preprocessing again
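A minimal caller-side sketch of how that contract might be honored,
assuming a hypothetical scheduleFlushIn() helper; the real caller is the
I2P tunnel gateway code, and this is not it:

    import java.util.List;
    import net.i2p.router.tunnel.PendingGatewayMessage;
    import net.i2p.router.tunnel.TunnelGateway;

    // Illustrative caller-side sketch: shows how the boolean return value and
    // getDelayAmount() are meant to drive a delayed re-flush.
    abstract class GatewayFlushSketch {
        protected TunnelGateway.QueuePreprocessor _preprocessor;
        protected List<PendingGatewayMessage> _pending;
        protected TunnelGateway.Sender _sender;
        protected TunnelGateway.Receiver _receiver;

        void flushPending() {
            boolean remaining;
            synchronized (_pending) { // the preprocessor owns this lock while it runs
                remaining = _preprocessor.preprocessQueue(_pending, _sender, _receiver);
            }
            if (remaining) {
                // a partial tunnel message worth of data is still queued:
                // come back after the preprocessor's delay and flush again
                scheduleFlushIn(_preprocessor.getDelayAmount());
            }
        }

        // Hypothetical scheduling hook; the real gateway uses its own timer.
        abstract void scheduleFlushIn(long delayMs);
    }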
send
protected void send(List<PendingGatewayMessage> pending, int startAt, int sendThrough, TunnelGateway.Sender sender, TunnelGateway.Receiver rec)
    Preprocess the messages from the pending list, grouping items startAt through sendThrough (though only part of the last one may be fully sent), delivering them through the sender/receiver.
- Parameters:
    startAt - first index in pending to send (inclusive)
    sendThrough - last index in pending to send (inclusive)