Package net.i2p.router.tunnel
Class TrivialPreprocessor
java.lang.Object
net.i2p.router.tunnel.TrivialPreprocessor
- All Implemented Interfaces:
TunnelGateway.QueuePreprocessor
- Direct Known Subclasses:
BatchedPreprocessor, TrivialRouterPreprocessor
Do the simplest thing possible for preprocessing: for each message available,
turn it into the minimum number of fragmented preprocessed blocks, sending
each of those out. This does not coalesce message fragments or delay for more
optimal throughput.
See the FragmentHandler Javadoc for the tunnel message fragment format.
Not instantiated directly except in unit tests; see BatchedPreprocessor.
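The "minimum number of fragmented preprocessed blocks" behavior can be sketched outside the router: a message of n payload bytes needs ceil(n / capacity) blocks, where capacity is the per-block payload space. The class name, helper, and capacity value below are illustrative assumptions, not the actual TrivialPreprocessor code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a message into the minimum number of
// fragments, each carrying up to `capacity` payload bytes.
public class FragmentSketch {
    // Hypothetical per-fragment payload capacity; the real value is
    // PREPROCESSED_SIZE minus instruction/IV/checksum overhead.
    static final int CAPACITY = 1003;

    static List<byte[]> fragment(byte[] msg, int capacity) {
        List<byte[]> out = new ArrayList<>();
        for (int off = 0; off < msg.length; off += capacity) {
            int len = Math.min(capacity, msg.length - off);
            byte[] frag = new byte[len];
            System.arraycopy(msg, off, frag, 0, len);
            out.add(frag);
        }
        if (msg.length == 0)
            out.add(new byte[0]); // an empty message still yields one block
        return out;
    }

    public static void main(String[] args) {
        // 2500 bytes at 1003 bytes per fragment -> 3 fragments
        System.out.println(fragment(new byte[2500], CAPACITY).size());
    }
}
```

Each fragment except the last is filled to capacity, which is what makes the count minimal; coalescing fragments from different messages into one block is exactly what this class does not do (see BatchedPreprocessor).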
-
Field Summary
Modifier and Type / Field / Description
- protected final RouterContext _context
- protected static final ByteCache _dataCache
Here in tunnels, we take from the cache but never add to it.
- protected final Log _log
- protected static final int IV_SIZE
- static final int PREPROCESSED_SIZE
-
Constructor Summary
-
Method Summary
Modifier and Type / Method / Description
- long getDelayAmount()
how long do we want to wait before flushing
- protected static int getInstructionAugmentationSize(PendingGatewayMessage msg, int offset, int instructionsSize)
- protected static int getInstructionsSize(PendingGatewayMessage msg)
- protected void notePreprocessing(long messageId, int numFragments, int totalLength, List<Long> messageIds, String msg)
- protected void preprocess(byte[] fragments, int fragmentLength)
Wrap the preprocessed fragments with the necessary padding / checksums to act as a tunnel message.
- boolean preprocessQueue(List<PendingGatewayMessage> pending, TunnelGateway.Sender sender, TunnelGateway.Receiver rec)
Return true if there were messages remaining, and we should queue up a delayed flush to clear them. NOTE: Unused here; see the BatchedPreprocessor override. super is not called.
- protected int writeFirstFragment(PendingGatewayMessage msg, byte[] target, int offset)
- protected int writeSubsequentFragment(PendingGatewayMessage msg, byte[] target, int offset)
-
Field Details
-
_context
-
_log
-
PREPROCESSED_SIZE
public static final int PREPROCESSED_SIZE
- See Also:
Constant Field Values
-
IV_SIZE
protected static final int IV_SIZE
- See Also:
Constant Field Values
-
_dataCache
Here in tunnels, we take from the cache but never add to it. In other words, we take advantage of other places in the router also using 1024-byte ByteCaches (since ByteCache only maintains one instance for each size). Used in BatchedPreprocessor; see additional comments there.
-
-
Constructor Details
-
TrivialPreprocessor
-
-
Method Details
-
getDelayAmount
public long getDelayAmount()
how long do we want to wait before flushing
- Specified by:
getDelayAmount in interface TunnelGateway.QueuePreprocessor
-
preprocessQueue
public boolean preprocessQueue(List<PendingGatewayMessage> pending, TunnelGateway.Sender sender, TunnelGateway.Receiver rec)
Return true if there were messages remaining, and we should queue up a delayed flush to clear them.
NOTE: Unused here; see the BatchedPreprocessor override. super is not called.
- Specified by:
preprocessQueue in interface TunnelGateway.QueuePreprocessor
- Parameters:
pending - list of Pending objects for messages either unsent or partly sent. This list should be updated with any values removed (the preprocessor owns the lock). Messages are not removed from the list until actually sent. The status of unsent and partially-sent messages is stored in the Pending structure.
- Returns:
- true if we should delay before preprocessing again
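The contract above (the preprocessor owns the pending list, removes entries only once sent, and signals whether a delayed flush is needed) can be modeled with stand-in types. The interface and message type here are toy assumptions, not the real I2P classes.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of the preprocessQueue() contract: entries are removed
// from the shared list only after they are actually sent, and the
// return value tells the caller whether to delay and flush later.
public class QueueContractSketch {
    interface QueuePreprocessor {
        boolean preprocessQueue(List<String> pending);
    }

    // A "trivial" implementation in the spirit of this class: send
    // every message immediately and never request a delayed flush.
    static final QueuePreprocessor TRIVIAL = pending -> {
        for (Iterator<String> it = pending.iterator(); it.hasNext(); ) {
            it.next();   // "send" the message
            it.remove(); // removed only once actually sent
        }
        return false;    // nothing remains, no delay needed
    };

    public static void main(String[] args) {
        List<String> pending = new ArrayList<>(List.of("m1", "m2"));
        boolean delay = TRIVIAL.preprocessQueue(pending);
        System.out.println(delay + " remaining=" + pending.size());
    }
}
```

A batching implementation would instead leave partly-filled messages on the list and return true, which is why BatchedPreprocessor overrides this method without calling super.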
-
notePreprocessing
-
preprocess
protected void preprocess(byte[] fragments, int fragmentLength)
Wrap the preprocessed fragments with the necessary padding / checksums to act as a tunnel message.
- Parameters:
fragmentLength - fragments[0:fragmentLength] is used
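The wrapping arithmetic can be sketched from the fragment format described in the FragmentHandler Javadoc: a fixed-size block holds an IV, a 4-byte checksum, random nonzero padding, a zero delimiter byte, and the fragment data. The helper below only computes the layout sizes; it is an assumption-labeled sketch, not the router code.

```java
// Sketch of the padding computation when wrapping fragments into a
// fixed-size preprocessed block. Sizes follow the field constants on
// this page (PREPROCESSED_SIZE, IV_SIZE); the rest is illustrative.
public class WrapSketch {
    static final int BLOCK_SIZE = 1024; // assumed value of PREPROCESSED_SIZE
    static final int IV_SIZE = 16;      // assumed value of IV_SIZE
    static final int CHECKSUM_SIZE = 4;

    // Bytes of random nonzero padding needed before the zero delimiter.
    static int paddingLength(int fragmentLength) {
        int used = IV_SIZE + CHECKSUM_SIZE + 1 /* zero delimiter */ + fragmentLength;
        if (used > BLOCK_SIZE)
            throw new IllegalArgumentException("fragments too large for one block");
        return BLOCK_SIZE - used;
    }

    public static void main(String[] args) {
        // 1024 - 16 - 4 - 1 - 900 = 103 padding bytes
        System.out.println(paddingLength(900));
    }
}
```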
-
writeFirstFragment
-
writeSubsequentFragment
-
getInstructionsSize
- Returns:
- generally 3 or 35 or 39 for first fragment, 7 for subsequent fragments. Does NOT include 4 for the message ID if the message will be fragmented; call getInstructionAugmentationSize() for that.
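The 3 / 35 / 39 / 7 values follow from the tunnel message delivery instruction layout: a first fragment carries a flag byte plus a 2-byte size, with a 32-byte router hash and/or a 4-byte tunnel ID added depending on delivery type, while a subsequent fragment carries a flag byte, a 4-byte message ID, and a 2-byte size. The enum and method names below are illustrative, not the I2P API.

```java
// Sketch of where getInstructionsSize()'s return values come from.
public class InstructionSizeSketch {
    enum Delivery { LOCAL, ROUTER, TUNNEL } // illustrative names

    static int firstFragmentSize(Delivery type) {
        int size = 1 + 2;                        // flag byte + 2-byte length
        if (type != Delivery.LOCAL) size += 32;  // router identity hash
        if (type == Delivery.TUNNEL) size += 4;  // tunnel ID
        return size;                             // 3, 35, or 39
    }

    static int subsequentFragmentSize() {
        return 1 + 4 + 2;                        // flag + message ID + length = 7
    }

    public static void main(String[] args) {
        System.out.println(firstFragmentSize(Delivery.LOCAL));  // 3
        System.out.println(firstFragmentSize(Delivery.TUNNEL)); // 39
        System.out.println(subsequentFragmentSize());           // 7
    }
}
```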
-
getInstructionAugmentationSize
protected static int getInstructionAugmentationSize(PendingGatewayMessage msg, int offset, int instructionsSize)
- Returns:
- 0 or 4
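The "0 or 4" corresponds to the 4-byte message ID that is only added to the first fragment's instructions when the message will be fragmented, i.e. when the remaining payload does not fit in the space left in the block. The method name and parameters below are illustrative stand-ins for that check.

```java
// Sketch of the augmentation decision: add 4 bytes for the message ID
// only when the message must be fragmented across blocks.
public class AugmentationSketch {
    static int augmentationSize(int remainingPayload, int spaceAvailable) {
        return remainingPayload > spaceAvailable ? 4 : 0;
    }

    public static void main(String[] args) {
        System.out.println(augmentationSize(500, 900));  // fits in one block: 0
        System.out.println(augmentationSize(2000, 900)); // fragmented: 4
    }
}
```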
-