Packet Switching

Packet switching is a method of digital network communication that groups all transmitted data, regardless of content, type, or structure, into suitably sized blocks called packets. First proposed for military use in the early 1960s and implemented on small networks in 1968, packet switching became the fundamental networking technology behind the Internet and most local area networks.
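As a concrete illustration of grouping data into packets, the sketch below splits an arbitrary byte stream into fixed-size payloads and prepends a minimal header (sequence number and total packet count). The field layout, header size, and payload limit are illustrative assumptions, not taken from any real protocol.

    import struct

    MAX_PAYLOAD = 512  # assumed maximum payload size in bytes (illustrative)

    def packetize(data: bytes) -> list[bytes]:
        """Split a byte stream into packets: a 4-byte header (sequence number
        and total packet count as two unsigned shorts) plus the payload."""
        chunks = [data[i:i + MAX_PAYLOAD] for i in range(0, len(data), MAX_PAYLOAD)]
        total = len(chunks)
        return [struct.pack("!HH", seq, total) + chunk
                for seq, chunk in enumerate(chunks)]

    def reassemble(packets: list[bytes]) -> bytes:
        """Rebuild the original stream from packets received in any order."""
        ordered = sorted(packets, key=lambda p: struct.unpack("!HH", p[:4])[0])
        return b"".join(p[4:] for p in ordered)

    message = b"regardless of content, type, or structure" * 40
    assert reassemble(list(reversed(packetize(message)))) == message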

Packet switching delivers variable-bit-rate data streams (sequences of packets) over a shared network. As packets traverse network adapters, switches, routers, and other network nodes, they are buffered and queued, resulting in variable delay and throughput that depend on the traffic load in the network.
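A minimal queuing sketch, under assumed arrival times and a fixed link rate, shows why per-packet delay varies with load at a store-and-forward node; all numbers are illustrative.

    def forward_delays(arrivals, sizes_bits, link_bps):
        """FIFO store-and-forward: each packet waits for the packets queued
        ahead of it, then takes size/rate seconds to transmit. Returns the
        total delay (queuing + transmission) experienced by each packet."""
        delays = []
        link_free_at = 0.0
        for t, size in zip(arrivals, sizes_bits):
            start = max(t, link_free_at)          # wait if the link is busy
            finish = start + size / link_bps      # transmission time
            delays.append(finish - t)
            link_free_at = finish
        return delays

    # Three packets arriving close together on a 1 Mbit/s link:
    print(forward_delays([0.0, 0.001, 0.002], [8000, 8000, 8000], 1_000_000))
    # -> [0.008, 0.015, 0.022]  (later packets see more queuing delay)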

Packet switching contrasts with the other principal networking paradigm, circuit switching, which sets up a limited number of dedicated connections of constant bit rate and constant delay between nodes for exclusive use during the communication session. Where usage-based fees apply (as opposed to a flat rate), for example in cellular communication services, circuit switching is typically billed per unit of connection time, even when no data is transferred, while packet switching is typically billed per unit of information transferred.
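The hypothetical tariff below illustrates that billing difference: a per-minute circuit charge accrues even while the connection is idle, whereas a per-megabyte packet charge depends only on the data actually sent. All rates are made up for illustration.

    def circuit_cost(minutes_connected, rate_per_minute):
        # Billed for connection time, whether or not data flows.
        return minutes_connected * rate_per_minute

    def packet_cost(megabytes_transferred, rate_per_mb):
        # Billed only for information actually transferred.
        return megabytes_transferred * rate_per_mb

    # A 30-minute session that transfers only 2 MB of data:
    print(circuit_cost(30, 0.10))  # 3.00 - pays for idle time too
    print(packet_cost(2, 0.25))    # 0.50 - pays only for the 2 MB sent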

Two major packet switching modes exist: (1) connectionless packet switching, also known as datagram switching, and (2) connection-oriented packet switching, also known as virtual circuit switching. In the first case, each packet carries complete addressing or routing information, and packets are routed individually, sometimes over different paths and with out-of-order delivery. In the second case, a connection is defined and preallocated in each involved node during a setup phase before any packet is transferred; the packets then carry a connection identifier rather than full address information and are delivered in order.
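The sketch below contrasts the two modes at the packet level: a datagram carries full source and destination addresses and can be routed independently, while a virtual-circuit packet carries only a short connection identifier assigned during setup. The field names are illustrative assumptions, not drawn from any specific protocol.

    from dataclasses import dataclass

    @dataclass
    class Datagram:
        # Connectionless: every packet carries complete addressing information
        # and is routed independently, possibly arriving out of order.
        src_addr: str
        dst_addr: str
        seq: int
        payload: bytes

    @dataclass
    class VirtualCircuitPacket:
        # Connection-oriented: a setup phase reserves state in every node on
        # the path; subsequent packets carry only the connection (circuit)
        # identifier and are delivered in order along that path.
        circuit_id: int
        payload: bytes

    dg = Datagram("10.0.0.1", "10.0.0.2", seq=7, payload=b"hello")
    vc = VirtualCircuitPacket(circuit_id=42, payload=b"hello")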

Packet mode communication may be used with or without intermediate forwarding nodes (packet switches or routers). In all packet mode communication, network resources are managed by statistical multiplexing or dynamic bandwidth allocation, in which a communication channel is effectively divided into an arbitrary number of logical variable-bit-rate channels or data streams. Statistical multiplexing, packet switching, and other store-and-forward buffering introduce varying latency and throughput in the transmission. Each logical stream consists of a sequence of packets, which are normally forwarded by the multiplexers and intermediate network nodes asynchronously using first-in, first-out buffering. Alternatively, the packets may be forwarded according to a scheduling discipline for fair queuing, traffic shaping, or differentiated or guaranteed quality of service, such as weighted fair queuing or the leaky bucket algorithm. On a shared physical medium, the packets may be delivered according to a packet-mode multiple access scheme.
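As one example of a scheduling discipline mentioned above, the following is a minimal leaky-bucket traffic shaper: arriving packets are buffered (or dropped when the buffer is full) and released at a constant rate, smoothing bursts into a steady output stream. Bucket capacity and the one-packet-per-tick drain rate are assumed parameters for this sketch.

    from collections import deque

    class LeakyBucket:
        """Minimal leaky-bucket shaper: buffers arrivals up to a fixed
        capacity and emits at most one packet per output tick."""
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.queue = deque()

        def arrive(self, packet) -> bool:
            if len(self.queue) >= self.capacity:
                return False            # bucket full: packet is dropped
            self.queue.append(packet)
            return True

        def tick(self):
            """Called once per output interval; emits at most one packet."""
            return self.queue.popleft() if self.queue else None

    bucket = LeakyBucket(capacity=3)
    for p in ["p1", "p2", "p3", "p4", "p5"]:   # a burst of five arrivals
        bucket.arrive(p)                        # p4 and p5 are dropped
    print([bucket.tick() for _ in range(5)])    # ['p1', 'p2', 'p3', None, None]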


