TY - GEN
T1 - Packet processing with blocking for bursty traffic on multi-thread network processor
AU - Chang, Yeim Kuan
AU - Kuo, Fang Chen
PY - 2009/12/1
Y1 - 2009/12/1
N2 - It is well known that network traffic exhibits bursty accesses: bursts of packets with the same meaningful headers are often received by routers at the same time. With such traffic, routers repeatedly perform the same computations and access the same memory locations. To exploit this characteristic of network traffic, many cache schemes have been proposed to handle bursty access patterns. However, in routers based on multi-threaded network processors, existing cache schemes are not well suited to bursty traffic. Since all threads may process packets with the same headers, if the earlier threads have not yet updated the cache entries, the subsequent threads still have to repeat the computations because of cache misses. In this paper, we propose a cache scheme called B-cache for multi-threaded network processors. B-cache blocks subsequent threads from performing computations that are already being processed by an earlier thread. By applying B-cache, any packet processing task with high locality, such as IP address lookup, packet classification, and intrusion detection, can avoid duplicate computations and hence achieve a better packet processing rate. We implement the proposed B-cache scheme on the Intel IXP2400 network processor, and the experimental results show that our B-cache scheme achieves the line speed of the Intel IXP2400.
AB - It is well known that network traffic exhibits bursty accesses: bursts of packets with the same meaningful headers are often received by routers at the same time. With such traffic, routers repeatedly perform the same computations and access the same memory locations. To exploit this characteristic of network traffic, many cache schemes have been proposed to handle bursty access patterns. However, in routers based on multi-threaded network processors, existing cache schemes are not well suited to bursty traffic. Since all threads may process packets with the same headers, if the earlier threads have not yet updated the cache entries, the subsequent threads still have to repeat the computations because of cache misses. In this paper, we propose a cache scheme called B-cache for multi-threaded network processors. B-cache blocks subsequent threads from performing computations that are already being processed by an earlier thread. By applying B-cache, any packet processing task with high locality, such as IP address lookup, packet classification, and intrusion detection, can avoid duplicate computations and hence achieve a better packet processing rate. We implement the proposed B-cache scheme on the Intel IXP2400 network processor, and the experimental results show that our B-cache scheme achieves the line speed of the Intel IXP2400.
UR - http://www.scopus.com/inward/record.url?scp=74949121357&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=74949121357&partnerID=8YFLogxK
U2 - 10.1109/HPSR.2009.5307419
DO - 10.1109/HPSR.2009.5307419
M3 - Conference contribution
AN - SCOPUS:74949121357
SN - 9781424451746
T3 - 2009 International Conference on High Performance Switching and Routing, HPSR 2009
BT - 2009 International Conference on High Performance Switching and Routing, HPSR 2009
T2 - 2009 International Conference on High Performance Switching and Routing, HPSR 2009
Y2 - 22 June 2009 through 24 June 2009
ER -