Appendix D: TCP Socket Importing (Socket Migration)
The purpose of this work is to allow an existing socket connection between a client and server (Fig. D.1) to be split into two new socket connections (Fig. D.2) without disrupting the session between the two remote stations. This is an advanced feature that allows an intermediary Treck node to transition from a packet forwarding role to a proxy role by seamlessly splicing into an active TCP stream between two remote endpoints.
The network must be such that the intermediary Treck node forms a critical link between the remote endpoints. Otherwise, some packets may find an alternate route around the Treck node, which would likely cause one or both remote endpoints to reset the connection. The user will be required to perform stateful packet inspection at the device driver level to determine the exact TCP state of each endpoint at the time of transition.
The term "TCP socket importing" reflects the action taken by the Treck intermediary to create an internal copy of an external TCP socket endpoint. The user imports the TCP state information of the server's socket into a virtual local server and imports the TCP state information of the client's socket into a virtual local client. The actual remote endpoints are unaware that they are now communicating with virtual counterparts, and the intermediary node has total control over what is sent from each imported socket.
|Fig. D.1 One TCP Connection||Fig. D.2 Two TCP Connections|
Normally, a socket connection starts with an exchange of information (the TCP handshake) before application data transfer can proceed. Treck provides a new function, tfUserCreateTcpSocket(), which loads the connection information into a new socket without having to communicate with the peer. The socket descriptor returned by tfUserCreateTcpSocket() allows your application to assume the role of the imported endpoint. By importing both TCP endpoints, your application can view and even change the data flow between the remote endpoints.
The local server and client must run within their own independent Treck contexts (see Running Multiple Instances of Treck), so that Treck can maintain separate IP routing tables for each side. Each context will need its own pseudo-device to buffer the packets between the imported socket and your real device driver. You may need a third context in which to operate the automatic Treck IP routing and forwarding for connections that have not been imported.
Your device driver will need to monitor and route the packet traffic flowing through your node. Prior to socket importing, you need to monitor the TCP information described in the structure ttUserTcpCon. After you have taken your TCP snapshot, you must not allow any packets to pass through that would alter the information you have collected. After successfully calling tfUserCreateTcpSocket() with your TCP snapshot data, your device driver must examine incoming packets and redirect those with destination ttUserTcpCon.uconAddrLocal to the appropriate pseudo-device.
The application is responsible for maintaining the data flow between the remote endpoints by manually forwarding data bidirectionally between the local client and local server. This requires that the application call tfSetCurrentContext() to put itself in the correct Treck context to read and write each local socket. As stated earlier, your pseudo-devices must also operate within the correct context.
|Warning:||Preemptive multitasking requires special management of Treck contexts, since the O/S can switch tasks without the necessary call to tfSetCurrentContext(). See Running Multiple Instances of Treck and TM_USE_KERNEL_CONTEXT.|
To enable support for this feature, uncomment the following configuration macros in trsystem.h:
Treck functions mentioned on this page: tfUserCreateTcpSocket(), tfSetCurrentContext()
Treck data type mentioned on this page: ttUserTcpCon
Other pertinent Treck information: Running Multiple Instances of Treck, TM_USE_KERNEL_CONTEXT